When we do not have access control, it is practically impossible to guarantee that features are used only by their intended users. If a problem occurs, the person responsible for the system is unable to trace who caused it. The lack of permission management gives users access to services they do not need, making room for improper access and possible application failures. This may result in data breaches that cost millions of dollars, along with reputational damage.
Identity roles are responsible for cataloging users within a system so that everyone who has access to it can be properly authenticated, authentication being one of the three main pillars of information security. For better access control, it is important that identity roles are clearly defined and allow easy identification of the individual requesting access.
It is critical for information security to control what each user can access relative to what they actually need. The ideal is to follow the maxim of “minimum privileges,” where a person, through the management of permission groups, receives authorization for and sees on their screen only what has been allowed.
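The least-privilege idea can be sketched in a few lines of Python; the role names and permission strings below are purely hypothetical examples, not any particular IAM product's API:

```python
# Minimal sketch of least-privilege enforcement via permission groups.
# All role and permission names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "accounting": {"invoices:read", "invoices:write"},
    "support":    {"tickets:read", "tickets:write", "invoices:read"},
    "auditor":    {"invoices:read", "tickets:read"},
}

def allowed(user_roles, permission):
    """A user may act only if one of their groups grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(allowed(["auditor"], "invoices:read"))   # True
print(allowed(["auditor"], "invoices:write"))  # False
```

The point is that authorization is looked up from centrally managed groups rather than granted per user, so removing a role removes every permission it carried.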
One of the critical aspects of cybersecurity for businesses today is assessing organizational maturity against the fundamentals of IAM. Such an assessment provides an overview of your organization's current posture regarding the security of its digital assets and infrastructure. Here are some important factors to consider:
By implementing a reliable IAM program, a company can strike a balance between security and risk reduction, enabling its staff (including customers and employees) to use the services they need, whenever they need them, without taking on excessive digital risk. In light of the advantages and failure prevention that an access management system can provide to applications, it is highly recommended that it receives due attention. Doing so can prevent data breaches and the financial and reputational damage they cause.
In another lesson I explained the different EIGRP packets and their function. In this lesson we’ll take a close look at the EIGRP neighbor adjacency to see what exactly happens when EIGRP routers become neighbors. This is what happens when you enable EIGRP on two routers:
We have 2 routers called R1 and R2 and they are configured for EIGRP. As soon as we enable it for the interface they will start sending hello packets. In this example R1 is the first router to send a hello packet.
As soon as R2 receives the hello packet from R1 it will respond by sending update packets that contain all the routing information that it has in its routing table. The only routes that are not sent on this interface are the ones that R2 learned on this interface, because of split horizon. The update packet that R2 sends has the initialization bit set so we know this is the “initialization process”. At this moment there is still no neighbor adjacency until R2 has sent a hello packet to R1.
R1 is of course not the only one sending hello packets. As soon as R2 sends a hello packet to R1 we can continue to setup a neighbor adjacency.
After both routers have exchanged hello packets we will establish the neighbor adjacency. R1 will send an ACK to let R2 know it received the update packets. The routing information in the update packets will be saved in the EIGRP topology table.
R2 is anxious to receive routing information as well, so R1 will send update packets to R2, which will save this information in its EIGRP topology table.
After receiving the update packets R2 will send an ACK back to R1 to let it know everything is OK.
Want to see what this looks like on a real router? Let’s use the following topology and see what happens:
This is the topology I’m going to use to configure EIGRP. My goal is to have full connectivity and here are the configurations:
R1(config)#router eigrp 1
R1(config-router)#no auto-summary
R1(config-router)#network 18.104.22.168 0.0.0.255
R1(config-router)#network 192.168.12.0
R1(config-router)#exit
R2(config)#router eigrp 1
R2(config-router)#no auto-summary
R2(config-router)#network 22.214.171.124 0.0.0.255
R2(config-router)#network 192.168.12.0
R2(config-router)#exit
Let’s break this one down. Router eigrp 1 will start up EIGRP using AS (autonomous system) number 1. This number has to match on both routers or we won’t become EIGRP neighbors.
No auto-summary is needed because by default EIGRP behaves like a classful routing protocol, which means it won't advertise the subnet mask along with the routing information. In this case that means that 126.96.36.199/24 and 188.8.131.52/24 would be advertised as 184.108.40.206/8 and 220.127.116.11/8. Disabling auto-summary ensures EIGRP sends the subnet mask along with its advertisements.
Network 18.104.22.168 0.0.0.255 means that I'm advertising the 22.214.171.124 network with wildcard 0.0.0.255. If I don't specify the wildcard you'll find “network 126.96.36.199” in your configuration. Does it matter? Yes and no. The same thing applies to “network 188.8.131.52 /24”. It will work, but it also means that every interface that falls within the 184.108.40.206/8 or 220.127.116.11/8 range is going to run EIGRP. Network 192.168.12.0 without a wildcard mask is fine since I'm using a /24 on this interface, which is Class C.
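The relationship between a prefix length and the wildcard mask the network command expects is just a bitwise inversion of the subnet mask, which Python's standard ipaddress module can illustrate (using generic example prefixes, not the addresses above):

```python
# Sketch: deriving the wildcard mask EIGRP's network command expects
# from a prefix. The wildcard mask is the bitwise inverse of the
# subnet mask, which ipaddress exposes as .hostmask.
import ipaddress

for prefix in ("192.168.12.0/24", "10.0.0.0/8"):
    net = ipaddress.ip_network(prefix)
    print(net.network_address, "wildcard", net.hostmask)
# 192.168.12.0 wildcard 0.0.0.255
# 10.0.0.0 wildcard 0.255.255.255
```

So a /24 maps to wildcard 0.0.0.255, which is exactly what the configurations above use.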
If you are working on a lab and are lazy (like me) you can also type in network 0.0.0.0 which will activate EIGRP on all of your interfaces…if that’s what you want of course.
Let’s do a debug on R2 to see what is going on:
R2#debug eigrp packets ?
  SIAquery  EIGRP SIA-Query packets
  SIAreply  EIGRP SIA-Reply packets
  ack       EIGRP ack packets
  hello     EIGRP hello packets
  ipxsap    EIGRP ipxsap packets
  probe     EIGRP probe packets
  query     EIGRP query packets
  reply     EIGRP reply packets
  request   EIGRP request packets
  retry     EIGRP retransmissions
  stub      EIGRP stub packets
  terse     Display all EIGRP packets except Hellos
  update    EIGRP update packets
  verbose   Display all EIGRP packets
  <cr>
As you can see we have a LOT of debug options for EIGRP. I want to see the hello packets…
R2#debug eigrp packets hello
EIGRP Packets debugging is on (HELLO)
R2#
EIGRP: Received HELLO on FastEthernet0/0 nbr 192.168.12.1
  AS 1, Flags 0x0, Seq 0/0 idbQ 0/0 iidbQ un/rely 0/0 peerQ un/rely 0/0
Looking good, it seems we have received a hello packet from R1.
R2#
EIGRP: Sending HELLO on FastEthernet0/0
  AS 1, Flags 0x0, Seq 0/0 idbQ 0/0 iidbQ un/rely 0/0
And we are sending hello packets to R1 as well.
R1#
%DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.12.2 (FastEthernet0/0) is up: new adjacency
R2#
%DUAL-5-NBRCHANGE: IP-EIGRP(0) 1: Neighbor 192.168.12.1 (FastEthernet0/0) is up: new adjacency
You can see we have an EIGRP neighbor adjacency.
R2#
EIGRP: Sending HELLO on Loopback0
  AS 1, Flags 0x0, Seq 0/0 idbQ 0/0 iidbQ un/rely 0/0
EIGRP: Received HELLO on Loopback0 nbr 18.104.22.168
  AS 1, Flags 0x0, Seq 0/0 idbQ 0/0
Hmm, interesting: it seems R2 is schizophrenic, sending hello packets to its own loopback0 interface and also receiving them.
This behavior is normal because the network command does two things:
- Send EIGRP packets on the interface that falls within the network command range.
- Advertise the network that is configured on the interface in EIGRP.
So what do you have to do when you want to advertise a network without sending EIGRP packets on the interface and forming EIGRP neighbors? | <urn:uuid:2faca2c1-b19c-4660-83a2-8ee06bf2cf5b> | CC-MAIN-2022-40 | https://networklessons.com/cisco/ccnp-route/detailed-look-of-eigrp-neighbor-adjacency | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00770.warc.gz | en | 0.863815 | 1,597 | 3.03125 | 3 |
We’re all undoubtedly familiar with the issues that come with traffic congestion and although infrastructure changes are largely credited with relieving congestion, data on where and how severely traffic congestion occurs can be difficult to collect. Traffic congestion monitoring solutions use IoT-enabled sensors or drivers’ mobile phones to monitor, track, and predict traffic patterns, in order to provide visibility into traffic congestion and trends.
As the global population grows, roadway infrastructure struggles to keep up, resulting in increased traffic congestion, especially in metropolitan areas. Idling in traffic is not only frustrating for drivers, it’s costly and terrible for the environment. In 2018, American drivers lost 97 hours sitting in traffic, costing the country $87 billion in time and gas at an average of $1,348 per driver.
Traditional methods of congestion monitoring were costly and highly prone to human error, often requiring a paid city worker to stand near a problematic road and count the traffic passing within a set period of time. Even then, congestion was hard to judge visually: six cars passing in an eight-minute period could represent either low traffic density or gridlock.
Traffic congestion monitoring solutions give cities a high-level view of traffic congestion patterns and trends, without the cost of having someone stand outside and count cars. With this increased visibility, city planners are better able to make informed decisions on road changes and infrastructure improvements, creating long-term solutions that reduce traffic congestion even as a city grows.
Traffic congestion monitoring solutions can either use cameras to visually capture and record car volumes or crowdsource information from drivers’ smartphones.
Traffic congestion monitoring solutions place cameras at key locations in roadways to observe and track congestion. Using machine learning technologies to count car volume and measure capacity, these platforms are able to identify instances of congestion, track when they are most likely to occur, and identify environmental factors, like weather or accidents.
Another data source congestion monitors might use is users’ phone location data, identifying large concentrations of users and their speeds to detect traffic congestion.
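As a rough illustration (not any vendor's actual algorithm), congestion on a road segment could be classified from crowdsourced speed samples like this; the free-flow speed and thresholds below are hypothetical:

```python
# Illustrative sketch: classifying congestion on a road segment from
# crowdsourced speed samples. Thresholds and the free-flow speed are
# hypothetical, not taken from any real monitoring platform.
from statistics import mean

FREE_FLOW_KMH = 60  # assumed free-flow speed for this segment

def congestion_level(speed_samples_kmh):
    """Label a segment by the ratio of observed to free-flow speed."""
    if not speed_samples_kmh:
        return "no data"
    ratio = mean(speed_samples_kmh) / FREE_FLOW_KMH
    if ratio < 0.25:
        return "gridlock"
    if ratio < 0.6:
        return "heavy"
    return "free-flowing"

print(congestion_level([8, 12, 10]))   # gridlock
print(congestion_level([55, 62, 58]))  # free-flowing
```

This is also why speed data resolves the ambiguity noted above: six slow-moving cars and six free-flowing cars produce the same count but very different speed profiles.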
Monitor congestion patterns to identify changes and track improvements against infrastructure and policy changes.
Eliminate the need for human traffic-counters and prevent error in traffic congestion measurements.
Better inform decisions on road and infrastructure improvements. | <urn:uuid:d16c11bd-a723-45a2-a036-d1adebc48e47> | CC-MAIN-2022-40 | https://www.iotforall.com/use-case/traffic-congestion-monitoring | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00770.warc.gz | en | 0.936055 | 467 | 3.265625 | 3 |
How to Prevent Supply-Chain Attacks
Kaspersky researchers recently reported that they continued to observe supply-chain attacks in the third quarter of 2021.
“We continue to see supply-chain attacks, including those of SmudgeX, DarkHalo and Lazarus,” Kaspersky researchers said in their “APT trends report Q3 2021.”
What Is a Supply-Chain Attack?
A supply-chain attack is a type of cyberattack in which an attacker inserts malicious code into legitimate software.

In a supply-chain attack, the attacker turns the compromised software into a Trojan horse. A Trojan horse is a type of malicious software (malware) that’s introduced onto a victim’s computer disguised as legitimate software.

By compromising a single piece of software, attackers gain access to hundreds or even hundreds of thousands of customers of that software.
The three common supply-chain attack techniques include hijacking updates, undermining code signing, and compromising open-source code. Attackers may use these three common supply-chain attack techniques simultaneously.
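One basic defense against hijacked updates is verifying a downloaded installer against a checksum the vendor publishes out of band. A minimal sketch, with a hypothetical file name and the published digest left as an assumption:

```python
# Sketch of one mitigation against hijacked updates: comparing a
# downloaded installer's SHA-256 digest against a checksum published
# separately by the vendor. The file name below is hypothetical.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file incrementally so large installers fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# published = "..."  # digest taken from the vendor's site over HTTPS
# if sha256_of("installer.exe") != published:
#     raise RuntimeError("installer does not match published checksum")
```

A checksum only helps if it is obtained through a channel the attacker has not also compromised, which is why code signing with vendor-held keys is the stronger control.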
Supply-Chain Attacks Examples
DarkHalo is the name given by researchers to the group that launched the SolarWinds supply-chain attack. Other researchers call the group behind the SolarWinds supply-chain attack Nobelium.
SolarWinds supply-chain attack is one of the high-profile supply-chain attacks that was exposed in December 2020. According to SolarWinds, the "vulnerability" was inserted within the company's Orion products and existed in updates released between March and June 2020.
In a report to the U.S. Securities and Exchange Commission (SEC), SolarWinds said that nearly 33,000 of its more than 300,000 customers were Orion customers, and that fewer than 18,000 customers may have installed the Orion product that contained the malicious code. One of the notable victims of the SolarWinds supply-chain attack is Microsoft.
According to Kaspersky researchers, evidence suggests that DarkHalo had spent six months inside OrionIT’s networks to perfect their attack.
“In June, more than six months after DarkHalo had gone dark, we observed the DNS hijacking of multiple government zones of a CIS member state that allowed the attacker to redirect traffic from government mail servers to computers under their control – probably achieved by obtaining credentials to the control panel of the victims’ registrar,” Kaspersky researchers said. “When victims tried to access their corporate mail, they were redirected to a fake copy of the web interface. Following this, they were tricked into downloading previously unknown malware. The backdoor, dubbed Tomiris, bears a number of similarities to the second-stage malware, Sunshuttle (aka GoldMax), used by DarkHalo last year. ”
Kaspersky researchers called the supply-chain incident in which a threat actor modified a fingerprint scanner software installer package as SmudgeX. The fingerprint scanner software is used by government employees of a country in South Asia for attendance recording.
Kaspersky researchers said the threat actor changed a configuration file and added a DLL with a .NET version of a PlugX injector to the installer package. “On installation, even without network connectivity, the .NET injector decrypts and injects a PlugX backdoor payload into a new svchost system process and attempts to beacon to a C2 [command and control infrastructure],” Kaspersky researchers said.
The Trojanized installer version of the fingerprint scanner software appeared to have been staged on the distribution server from March to June, Kaspersky researchers said.
According to Kaspersky researchers, evidence showed that the threat group known as Lazarus is building supply-chain attack capabilities. The researchers said that one supply-chain attack from this threat group originated from a compromised legitimate South Korean security software.
Another supply-chain attack launched by this group, Kaspersky researchers said, stemmed from a hijacked asset monitoring solution software in Latvia.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA), meanwhile, reported that in 2017, Kaspersky Antivirus was being used by a foreign intelligence service for spying. The U.S. government directed government offices to remove the vendor’s products from networks.
Cybersecurity Best Practices Against Supply-Chain Attacks
Supply-chain attacks aren’t easy to protect against. Your organization’s software vendors, even the biggest IT software vendors, are just as vulnerable to supply-chain attacks.
Here are some of the cybersecurity best practices against supply-chain attacks:
Supply-chain attackers target not just software; they also target hardware, compromising hardware components with the end goal of compromising the users of that hardware. In 2016, attackers hijacked the design of a mobile phone. The phones sold to customers encrypted users’ text and call details and transmitted the data to a server every 72 hours.
Most of the cybersecurity best practices against software supply-chain attacks also apply to hardware supply-chain attacks.
How to Implement Best Cyber Defense Against BlackMatter Ransomware Attacks
Three U.S. government agencies, the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA), recently issued a cybersecurity alert and defense tips against BlackMatter ransomware attacks.
What Is BlackMatter Ransomware?
BlackMatter is relatively new ransomware, first observed in the wild in July 2021. It exhibits the typical features of modern-day ransomware, including the double-extortion modus operandi.
In double extortion, the ransomware group steals data from victims. After stealing data, the attackers then encrypt victims’ data, preventing victims from accessing their data. After data encryption, attackers demand from victims ransom payment in exchange for a decryption tool that purportedly would unlock the encrypted data.
In double extortion, failure on the part of the victims to pay for the decryption tool triggers the second ransom demand: victims are named on a leak site and threatened that their data will be published unless they pay.
Some ransomware actors still demand the second ransom payment – for the non-publication of the stolen data – despite the payment of the first ransom payment, that is, payment for the decryption tool.
Like other modern-day ransomware, BlackMatter is operated under the scheme called ransomware-as-a-service (RaaS). In RaaS, the ransomware developer (the one who creates the ransomware and its custom exploit code) works with affiliates, cyberattackers who have existing access to corporate networks.
In a public advertisement posted on the underground forum Exploit, BlackMatter said it wants to buy access to corporate networks in the U.S., Canada, Australia, and Great Britain.
The group further said that it’s willing to pay $3,000 to $100,000 per network, provided the network passed the following criteria:
To signify that it's serious about its offer, BlackMatter has deposited 4 bitcoins ($256,000) on the forum Exploit.
“The [BlackMatter] ransomware is provided for several different operating systems versions and architectures and is deliverable in a variety of formats, including a Windows variant with SafeMode support (EXE / Reflective DLL / PowerShell) and a Linux variant with NAS support: Synology, OpenMediaVault, FreeNAS (TrueNAS). According to BlackMatter, the Windows ransomware variant was successfully tested on Windows Server 2003+ x86/x64 and Windows 7+ x64 / x86,” Recorded Future reported. “The Linux ransomware variant was successfully tested on ESXI 5+, Ubuntu, Debian, and CentOs. Supported file systems for Linux include VMFS, VFFS, NFS, VSAN.”
On BlackMatter’s website, the group says it doesn’t attack hospitals, critical infrastructure, the oil and gas industry, the defense industry, non-profit companies, or the government sector.
According to the joint cybersecurity advisory by CISA, FBI, and NSA, since July 2021, BlackMatter ransomware has targeted multiple U.S. critical infrastructure entities, including two food and agriculture sector organizations in the U.S., and have demanded ransom payments ranging from $80,000 to $15,000,000 in cryptocurrencies Bitcoin and Monero.
In September 2021, BlackMatter attacked the U.S. farmers cooperative NEW Cooperative and demanded from the victim $5.9 million for the decryptor and for the non-publication of the stolen data.
"Your website says you do not attack critical infrastructure,” a NEW Cooperative representative told BlackMatter during a negotiation chat (screenshots of the said negotiation chat were shared online). “We are critical infrastructure... intertwined with the food supply chain in the US. If we are not able to recover very shortly, there is going to be very very public disruption to the grain, pork, and chicken supply chain."
BlackMatter Ransomware Tactics, Techniques, and Procedures
The CISA, FBI, and NSA advisory said that samples of BlackMatter ransomware analyzed in a sandbox environment, as well as trusted third-party reporting, showed that BlackMatter ransomware uses the following tactics, techniques, and procedures:
Cybersecurity Best Practices
The CISA, FBI, and NSA advisory recommends the following cybersecurity defense tips against BlackMatter ransomware attacks:
Microsoft recently revealed that one of its Azure customers was hit by a 2.4 Tbps distributed denial-of-service (DDoS) attack last August.
In the blog post “Business as usual for Azure customers despite 2.4 Tbps DDoS attack,” Amir Dahan Senior Program Manager at Microsoft’s Azure Networking said the 2.4 Tbps DDoS attack is 140 percent higher than 2020’s 1 Tbps attack and higher than any network volumetric event previously detected on Azure.
Dahan said the 2.4 Tbps DDoS attack on Azure infrastructure originated from approximately 70,000 sources and from multiple countries in the Asia-Pacific region, including Malaysia, Vietnam, Taiwan, Japan, and China, as well as from the United States.
“The attack vector was a UDP reflection spanning more than 10 minutes with very short-lived bursts, each ramping up in seconds to terabit volumes,” Dahan said. “In total, we monitored three main peaks, the first at 2.4 Tbps, the second at 0.55 Tbps, and the third at 1.7 Tbps.”
With the adoption of cloud services, Dahan said, “Bad actors, now more than ever, continuously look for ways to take applications offline.”
In the blog post "Azure DDoS Protection—2021 Q1 and Q2 DDoS attack trends," Alethea Toh Program Manager at Microsoft’s Azure Networking reported that the first half of 2021 saw a sharp increase in DDoS attacks on Azure resources per day. Toh said Microsoft’s Azure mitigated an average of 1,392 DDoS attacks per day in the first half of 2021, the maximum reaching 2,043 attacks on May 24, 2021.
“In total, we mitigated upwards of 251,944 unique [DDoS] attacks against our global infrastructure during the first half of 2021,” Toh said.
Toh added that in the first half of 2021, the average DDoS attack size was 325 Gbps, with 74 percent of the attacks being 30 minutes or less and 87 percent being one hour or less.
In 2020 Google, meanwhile, revealed a 2.5 Tbps DDoS attack on its infrastructure. In the blog post “Exponential growth in DDoS attack volumes,” Damian Menscher, Security Reliability Engineer at Google, said that Google’s infrastructure was hit by a 2.5 Tbps DDoS attack in September 2017. This 2.5 Tbps DDoS attack on Google infrastructure, Menscher said, was a culmination of a six-month campaign that utilized multiple methods of attack, simultaneously targeting Google’s thousands of IPs.
“The attacker used several networks to spoof 167 Mpps (millions of packets per second) to 180,000 exposed CLDAP, DNS, and SNMP servers, which would then send large responses to us,” Menscher said.
Top Attack Vectors
DDoS is a type of cyberattack that floods targets with gigantic traffic volumes with the aim of choking network capacity.
“While UDP attacks comprised the majority of attack vectors in Q1 of 2021, TCP overtook UDP as the top vector in Q2,” Toh of Microsoft's Azure said. “From Q1 to Q2, the proportion of UDP dropped from 44 percent to 33 percent, while the proportion of TCP increased from 48 percent to 60 percent.”
According to Toh, in Q1 of 2021, 33 percent of attack vectors came from UDP floods, 24 percent from other TCP floods, 21 percent from TCP ACK floods, 11 percent from UDP amplification, 7 percent from IP protocol floods, and 3 percent from TCP SYN floods.
For Q2 of 2021, Toh said, 23 percent of attack vectors came from UDP floods, 29 percent from other TCP floods, 28 percent from TCP ACK floods, 10 percent from UDP amplification, 6 percent from IP protocol floods, and 3 percent from TCP SYN floods.
In January, Toh said, Microsoft Windows servers with Remote Desktop Protocol (RDP) enabled on UDP/3389 were being abused to launch UDP amplification attacks, with an amplification ratio of 85.9:1 and a peak at approximately 750 Gbps.
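The arithmetic behind such amplification attacks is simple: the reflected volume is the attacker's spoofed request volume multiplied by the amplification ratio. A quick sketch using the 85.9:1 ratio cited above (the attacker-side figure is an illustrative assumption):

```python
# Back-of-the-envelope sketch of UDP amplification: a small spoofed
# request stream is reflected into a far larger flood at the victim.
AMPLIFICATION = 85.9  # ratio cited for abused RDP servers on UDP/3389

def reflected_gbps(attacker_gbps, ratio=AMPLIFICATION):
    """Victim-side bandwidth produced by a given spoofed request rate."""
    return attacker_gbps * ratio

# Under this ratio, under 9 Gbps of spoofed requests is enough to
# produce a flood on the order of the ~750 Gbps peak cited above.
print(round(reflected_gbps(8.73), 1))
```

This asymmetry is what makes reflection attacks attractive: the attacker never needs bandwidth anywhere near the victim-side peak.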
In February, Toh said, video streaming and gaming customers were getting hit by Datagram Transport Layer Security (D/TLS) attack vector which exploited UDP source port 443.
In June, Toh said, reflection attack iteration for the Simple Service Delivery Protocol (SSDP) emerged. SSDP normally uses source port 1900. The new mutation, Toh said, was either on source port 32414 or 32410, also known as Plex Media Simple Service Delivery Protocol (PMSSDP).
Cybersecurity Best Practices
Organizations with internet-exposed workloads are vulnerable to DDoS attacks. Some DDoS attacks focus on a specific target from application layer (web, DNS, and mail servers) to network layer (routers/switches and link capacity). Some DDoS attackers may not focus on a specific target, but rather, attack every IP in your organization’s network.
Microsoft and Google have their own DDoS mitigating measures that can absorb multi-terabit DDoS attacks. On the part of Google, the company said it reported thousands of vulnerable servers to their network providers, and also worked with network providers to trace the source of the spoofed packets so they could be filtered.
Small and medium-sized organizations can now adopt DDoS protection solutions that can absorb multi-terabit DDoS attacks. Today’s DDoS protection solutions operate autonomously, without human intervention. Failure to protect your organization’s resources from DDoS attacks can lead to outages and loss of customer trust.
We can also help prevent DDoS attacks by ensuring that our computers and IoT devices are patched and secured.
2 ‘Prolific’ Ransomware Operators Arrested in Ukraine
Europol has announced the arrest of two “prolific” ransomware operators known for extorting ransom payments of between $6 million and $81 million.
In a statement, Europol said that the arrest of the two ransomware operators last September 28th in Ukraine was a coordinated strike by the French National Gendarmerie, the Ukrainian National Police, and the United States Federal Bureau of Investigation (FBI), with the coordination of Europol and INTERPOL.
The arrest of the two ransomware operators, Europol said, led to the seizure of $375,000 in cash, seizure of two luxury vehicles worth $251,000, and asset freezing of $1.3 million in cryptocurrencies.
The arrested individuals, Europol said, are part of an organized ransomware group suspected of having committed a string of ransomware attacks targeting large organizations in Europe and North America from April 2020 onwards.
The group’s modus operandi, Europol said, includes deploying malicious software (malware) and stealing sensitive data from target companies before encrypting those sensitive files.
After stealing and encrypting the data, Europol further said, the group offers a decryption tool in exchange for a ransom payment. When the ransom demand isn’t met, Europol added, the group threatens to leak the stolen data on the dark web.
Authorities refused to give the names of the two arrested individuals. The name of the ransomware group wasn’t disclosed as well.
Disrupting Ransomware Operations
In June 2021, the Cyber Police Department of the National Police of Ukraine arrested six members of the Clop ransomware group. Computer equipment, cars, and about $185,000 in cash were confiscated by the authorities.
“Together, law enforcement has managed to shut down the infrastructure from which the virus spreads and block channels for legalizing criminally acquired cryptocurrencies,” the Cyber Police Department of the National Police of Ukraine said in a statement.
According to the Cyber Police Department of the National Police of Ukraine, the Clop ransomware group is responsible for $500 million worth of damages worldwide. The arrest of the six members of the Clop ransomware group was a joint operation from law enforcement agencies in Ukraine, South Korea, and the United States.
A few days after the arrest of the six members of the Clop ransomware group, the group claimed other victims, showing that the arrest of the members didn’t disrupt the operation of the Clop ransomware group.
In February 2021, French and Ukrainian law enforcement agencies arrested in Ukraine several members of the Egregor ransomware group. Trend Micro, in a statement, said that the arrest of several members of the Egregor ransomware group was made possible, in part, by its assistance.
“Since its first appearance in September 2020, Egregor ransomware has been involved in high-profile attacks against retailers, human resource service companies, and other organizations,” Trend Micro said. “It operated under the ransomware-as-a-service (RaaS) model where groups sell or lease ransomware variants to affiliates, making it relatively easier even for inexperienced cybercriminals to launch attacks. Like some prominent ransomware variants, Egregor employs a ‘double extortion’ technique where the operators threaten affected users with both the loss and public exposure of the encrypted data.”
Ransomware is a persistent and rapidly evolving cybersecurity problem. Ransomware, in general, is malware that’s traditionally meant to encrypt victims’ files, preventing victims from accessing them. After data encryption, attackers then demand from victims a ransom payment in exchange for the decryption tool that purportedly can unlock the encrypted files.
Early ransomware attackers demanded only one ransom payment from their victims, that is, for the decryption tool. Today’s ransomware attackers demand two ransom payments, also known as double extortion: one for the decryption tool and a second for the non-publication of the data exfiltrated prior to encryption.
Clop ransomware enters the victims’ networks through any of the following methods:
- Phishing emails sent to employees of the target organization
- Remote Desktop Protocol (RDP) compromise via brute-force attacks
- Exploitation of known software security vulnerabilities
Similar to Clop ransomware, Egregor ransomware enters the victims’ networks through phishing emails sent to employees of the target organization and RDP compromise. Egregor ransomware has also been known to access victims’ networks through VPN exploits.
Many of today’s notorious ransomware programs are operated under the ransomware-as-a-service (RaaS) model. In a RaaS model, the ransomware developer sells or leases the ransomware program to affiliates, who are responsible for spreading the ransomware and generating infections. The developer takes a percentage of each ransom payment, with the remaining share going to the affiliates.
Cybersecurity Best Practices
Here are some of the cybersecurity best practices in preventing or mitigating the effects of ransomware attacks:
- Avoid clicking on links and downloading attachments in emails from questionable sources
- Keep all software up to date
- Protect RDP servers with strong passwords, multi-factor authentication (MFA), virtual private networks (VPNs), and other security protections
- Implement the 3-2-1 backup rule: make three copies of sensitive data, keep two copies on different formats, and keep one copy offsite
DDoS Attackers Target VoIP Providers
Over the past few weeks, Voice over Internet Protocol (VoIP) providers have been targeted by distributed denial-of-service (DDoS) attackers.
DDoS is a form of cyberattack that often uses a botnet to attack one target. A botnet is a group of infected computers, including Internet of Things (IoT), and controlled by attackers for malicious activities such as DDoS attacks.
VoIP, meanwhile, refers to a technology that allows voice calls over an Internet connection instead of the traditional analog phone line. As VoIP uses the Internet and requires servers, portals, and gateways to be publicly accessible, this technology is a prime target of DDoS attackers.
In DDoS attacks against VoIP providers, attackers will flood VoIP servers, portals, and gateways with requests, making VoIP services unavailable to legitimate users.
Recent Attacks Against VoIP Providers
On August 31, 2021, London-based Voipfone disclosed that it was under DDoS attack.
"We have identified a further DDoS attack, we will post updates as the situation develops,” Voipfone said in a statement. “Our team is working extremely hard to address the ongoing issues that are currently affecting our network. We sincerely apologize for the disruption this must be causing you, and fully understand how frustrating this must be.”
A week after the intermittent DDoS attacks, Voipfone said it has fully resolved the DDoS attacks.
On September 16, 2021, Montreal-based VoIP.ms became the victim of a DDoS attack. On its website, VoIP.ms said it serves 80,000 customers in 125 countries.
“We have identified a large-scale Distributed Denial of Service (DDoS) attack which has been directed at our DNS and POPs,” VoIP.ms said in a statement posted on its website. “Our team is deploying continuous efforts to profile incoming attacks and mitigate them as best they can. We apologize for the inconvenience caused and thank you for your patience while we work on resolving the issue.”
The DDoS attack against VoIP.ms targeted the company’s DNS name servers. In the absence of DNS, VoIP.ms advised customers to configure their HOSTS file to point the domain at their IP address to bypass DNS resolution. In response, the attackers launched DDoS attacks directly at that IP address. To mitigate the DDoS attacks, VoIP.ms moved their website and DNS servers to Cloudflare.
As of September 28th, VoIP.ms said on its Twitter account that it’s advancing towards a more stable and secure network. The company, however, said that its main US carrier is still experiencing issues in their network which is impacting their clients all across North America.
On September 28, 2021, another VOIP provider admitted that it’s under DDoS attack. “Bandwidth and a number of critical communications service providers have been targeted by a rolling DDoS attack,” Bandwidth CEO David Morken, in a statement, said. “While we have mitigated much intended harm, we know some of you have been significantly impacted by this event. For that I am truly sorry.”
North Carolina-based Bandwidth said on its website that it provides local VoIP phone numbers together with outbound and inbound calling, powering popular platforms including Microsoft Teams/Skype for Business, Zoom Phone, and Google Voice. Bandwidth also serves as an upstream provider for VoIP vendors such as Accent.
“The upstream provider continues to acknowledge the DDoS attack is impacting their network and they are actively working to mitigate its effects,” Accent said in a statement. “Accent is seeing a limited impact to inbound calling for our services for certain phone numbers. We will continue to monitor the situation and update the status as appropriate.”
Ransom DDoS Attacks
A threat actor using the name “REvil” claimed responsibility in the VoIP.ms DDoS attack. The ransom note to VoIP.ms was posted on Pastebin. This ransom note has since been removed from Pastebin. REvil also posted updates about VoIP.ms DDoS attack on Twitter. These updates have since been removed from Twitter.
REvil demanded one bitcoin from VoIP.ms. After a failed negotiation, REvil raised the ransom demand to 100 bitcoins.
REvil originally refers to a threat group behind a number of high-profile ransomware attacks. On July 13, 2021, this group stopped its operation. In September 2021, the group resumed its ransomware operations. The original REvil group, however, hasn’t been known to launch DDoS attacks and publicly demanding ransom out of DDoS attacks.
To date, there’s no report of whether Voipfone and Bandwidth received a ransom demand similar to the one received by VoIP.ms.
Ransom DDoS (DDoS) attacks have been around for years. RDDoS attack occurs when a malicious actor extorts money from a target by threatening the target with a DDoS attack.
Threat actors may carry out a DDoS attack first and then followed by a ransom note. Another approach by threat actors is giving the ransom note first and then followed by a DDoS attack. In the last approach, the ransom note may be an empty threat with the threat actor not really capable of launching an actual DDoS attack. However, there’s a possibility that the DDoS threat is a real thing.
Paying the ransom gives ransom DDoS victims false hope that the attack will stop. Paying the ransom can only make your organization the subject of future DDoS attacks as the attackers know that your organization is willing to pay ransom.
What Is Phishing-As-A-Service and How to Protect Your Organization
Microsoft 365 Defender Threat Intelligence Team recently published their findings on a large-scale phishing-as-a-service operation called “BulletProofLink.”
What Is Phishing-as-a-Service?
Phishing-as-a-service follows the software-as-a-service model in which cybercriminals pay an operator to launch an email-based phishing campaign.
In an email-based phishing campaign, the target receives an email from a seemingly legitimate origin. The email, however, is a malicious one, masquerading as coming from a legitimate source. Clicking a link on this malicious email will lead to a compromised or fake website. The login details entered by the target who believes he or she is logging into a legitimate website will then be harvested for criminal activities.
BulletProofLink, also known as BulletProftLink and Anthrax, is an example of a phishing-as-a-service. This phishing-as-a-service was first reported by OSINT Fans in October 2020. According to OSINT Fans, the phishing campaign launched by BulletProofLink started with a phishing email impersonating a Sydney-based accounting firm. The email looked legitimate, with no sign of broken English or a spoofed email sender.
Inside this email is the Remittance Advice receipts.pdf link. Clinking this link, OSINT Fans said, leads to a pixel-perfect clone of the Microsoft 365 login page. “If a victim enters their password on this page, the login credentials are sent straight to the criminals rather than Microsoft,” OSINT Fans said.
In the blog post “Catching the big fish: Analyzing a large-scale phishing-as-a-service operation,” Microsoft 365 Defender Threat Intelligence Team said BulletProofLink offers phishing-as-a-service at a relatively low cost, offering a wide range of services, including email templates, site templates, email delivery, site hosting, credential theft, credential redistribution, and "fully undetected" links/logs.
Microsoft 365 Defender Threat Intelligence Team said BulletProofLink has over 100 available phishing templates that mimic known brands and services. The BulletProofLink operation, the Team said, is responsible for many of the phishing campaigns that impact enterprises today.
The Team also reported that BulletProofLink used a rather high volume of newly created and unique subdomains – over 300,000 in a single run. The Team added that BulletProofLink is used by multiple attacker groups in either one-off or monthly subscription-based business models, creating a steady revenue stream for BulletProofLink’s operators.
BulletProofLink’s monthly service costs as much as $800, while the one-time hosting link costs about $50 dollars. The common mode of payment is Bitcoin.
Infinite Subdomain Abuse
According to Microsoft 365 Defender Threat Intelligence Team, the operators behind BulletProofLink use the technique, which the Team calls “infinite subdomain abuse.” The Team said infinite subdomain abuse happens when attackers compromise a website’s DNS or when a compromised site is configured with a DNS that allows wildcard subdomains.
Microsoft 365 Defender Threat Intelligence Team said infinite subdomain abuse is gaining popularity among attackers for the following reasons:
“It serves as a departure from previous techniques that involved hackers obtaining large sets of single-use domains. To leverage infinite subdomains for use in email links that serve to redirect to a smaller set of final landing pages, the attackers then only need to compromise the DNS of the site, and not the site itself.
“It allows phishing operators to maximize the unique domains they are able to use by configuring dynamically generated subdomains as prefix to the base domain for each individual email.
“The creation of unique URLs poses a challenge to mitigation and detection methods that rely solely on exact matching for domains and URLs.”
Microsoft 365 Defender Threat Intelligence Team said that BulletProofLink's phishing-as-a-service is reminiscent of the ransomware-as-a-service model. Today’s ransomware attacks involve, not just data encryption, but exfiltrating or stealing data as well. In a ransomware-as-a-service scenario, the ransomware operator doesn’t necessarily delete the stolen data even if the ransom has already been paid.
In both ransomware and phishing, Microsoft 365 Defender Threat Intelligence Team said that operators supplying resources to facilitate attacks maximize monetization by assuring stolen data are put to use in as many ways as possible. Victims’ credentials, the Team said, are likely to end up in the underground economy. “For a relatively simple service, the return of investment offers a considerable motivation as far as the email threat landscape goes,” Microsoft 365 Defender Threat Intelligence Team said.
Cybersecurity Best Practices
To protect Microsoft 365 users from phishing-as-a-service operations, Microsoft 365 Defender Threat Intelligence Team recommends the following cybersecurity best practices:
What we Learned from the Biggest DDoS Attack to Date: 22 Million Requests Per Second
Russian internet giant Yandex recently announced that it was hit by a record-breaking distributed denial-of-service (DDoS) attack.
“Our experts did manage to repel a record attack of nearly 22 million requests per second,” Yandex said in a statement. “This is the biggest known attack in the history of the internet.”
In the blog post “Mēris botnet, climbing to the record,” DDoS mitigation service Qrator Lab reported that from August 7 to September 5 of this year, it recorded 5 DDoS attacks at Yandex from a botnet dubbed as "Mēris," which means "Plague" in the Latvian language. The five DDoS attacks at Yandex, Qrator Lab said, started from 5.2 million requests per second (RPS) and culminated at 21.8 million RPS.
In a DDoS attack, multiple internet-connected computers are operating as one to attack a particular target. In launching a DDoS attack, attackers often use a botnet – a group of hijacked internet-connected computers and controlled by attackers to conduct malicious activities such as DDoS attacks.
In a DDoS attack, the hijacked internet-connected computers are also attacked victims. The use of hijacked internet-connected computers results in exponentially increasing the attack power via voluminous requests sent to the target, and resulting in the initial hiding of the true source of the attack.
According to Qrator Lab, the number of infected internet-connected computers reached 250,000, and these infected internet-connected computers or devices come from only one manufacturer: Mikrotik, a Latvian network equipment manufacturer.
Qrator Lab added that the Mēris botnet used the HTTP pipelining technique in launching the DDoS attacks. “Requests pipelining (in HTTP 1.1) is the primary source of trouble for anyone who meets that particular botnet,” Qrator Lab said. “Because of the request pipelining technique, attackers could squeeze much more RPS than botnets usually do. It happened because traditional mitigation measures would, of course, block the source IP. However, some requests (about 10-20) left in the buffers are processed even after the IP is blocked.”
Based on the botnet’s attacking sources (IP addresses), Qrator Lab said that 10.9% came from Brazil, 10.9% from Indonesia, 5.9% from India, 5.2% from Bangladesh, 3.6 from Russia, and 3.3% from the United States.
In the last couple of weeks, Qrator Lab said that it has observed devastating DDoS attacks towards New Zealand, United States and Russia, which is attributed to the Mēris botnet species. “Now it can overwhelm almost any infrastructure, including some highly robust networks,” Qrator Lab said. “All this is due to the enormous RPS power that it brings along.”
Prior to the DDoS attack at Yandex, the record-breaking DDoS attack was launched by a powerful botnet, targeting a Cloudflare customer in the financial industry. The attack reached 17.2 million requests per second.
According to Cloudflare, the said DDoS attack came from more than 20,000 bots in 125 countries around the world. Based on the botnet’s attacking sources (IP addresses), almost 15% of the attack originated from Indonesia and another 17% from India and Brazil combined.
Cloudflare said the attack was launched via a Mirai botnet. The botnet Mirai, which means “future” in Japanese, was first discovered in 2016. The Mirai botnet infects Linux-operated devices such as security cameras and routers. This botnet infects Linux-operated devices such as security cameras and routers by brute forcing known credentials such as factory default usernames and passwords. Succeeding variants of the Mirai botnet took advantage of zero-day exploits.
According to Qrator Lab researchers, they haven’t seen the malicious code, and as such, they aren’t ready to tell yet if it’s somehow related to the Mirai botnet family or not.
Preventative measures against DDoS attacks
In order to prevent your organization’s internet-connected computers or devices from being hijacked as part of a botnet, it’s important to follow these cybersecurity best practices:
According to MikroTik, Mēris botnet compromised the same routers that were compromised in 2018 via a known security vulnerability that was quickly patched. The 2018 vulnerability that was referred to is CVE-2018-14847, a MikroTik RouterOS security vulnerability that allows unauthenticated remote attackers to read arbitrary files and remote authenticated attackers to write arbitrary files due to a directory traversal vulnerability in the WinBox interface.
“Unfortunately, closing the vulnerability does not immediately protect these routers,” MikroTik said. “If somebody got your password in 2018, just an upgrade will not help. You must also change password, re-check your firewall if it does not allow remote access to unknown parties, and look for scripts that you did not create.”
DDoS attacks, even volumetric attacks, can now be prevented autonomously, without human intervention.
Top 3 Worst Cybersecurity Practices
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) recently listed three cybersecurity practices as dangerous practices that can give rise to enhanced damages to technologies accessible from the internet.
Below are the three practices that CISA has deemed as “dangerous” practices. The presence of these bad practices in organizations, CISA said, “is exceptionally dangerous and increases risk to our critical infrastructure, on which we rely for national security, economic stability, and life, health, and safety of the public.”
1. Use of Unsupported (End-of-Life) Software
Security vulnerabilities in software are but normal. Software vendors, within a specified timeframe, are always on the lookout for these software security vulnerabilities. During this specified period, regular or unscheduled security updates, also known as patches, are released by security vendors to fix known security vulnerabilities.
After the specified timeframe, also known as the software’s end-of-life (EOL), software vendors will stop releasing patches. Attackers love to exploit software that have reached their end of life on the premise that many users still use software that have reached their EOL.
An example of software that has reached its end of life is Windows 7 operating system. On January 14, 2020, Microsoft ended its support for the Windows 7 operating system. Customers who purchased an Extended Security Update (ESU) plan can still receive support or security updates from Microsoft. In this case, the continued use of Windows 7 without ESU is a dangerous practice.
“In 2017, roughly 98 percent of systems infected with WannaCry employed Windows 7 based operating systems,” the Federal Bureau of Investigation (FBI) said in its Private Industry Notification (PDF File). “After Microsoft released a patch in March 2017 for the computer exploit used by the WannaCry ransomware, many Windows 7 systems remained unpatched when the WannaCry attacks began in May 2017. With fewer customers able to maintain a patched Windows 7 system after its end of life, cyber criminals will continue to view Windows 7 as a soft target.”
2. Use of Known/Fixed/Default Passwords and Credentials
The use of known/fixed/default passwords is another bad practice that’s disastrous in technologies accessible from the internet.
In July 2021, Microsoft Threat Intelligence Center reported that it observed new activity from the NOBELIUM threat actor using tactics such as password spray and brute-force attacks.
In the blog post "Protecting your organization against password spray attacks," Diana Kelley, Microsoft Cybersecurity Field CTO said that adversaries in password spray attacks “acquire a list of accounts and attempt to sign into all of them using a small subset of the most popular, or most likely, passwords.”
The Microsoft Cybersecurity Field CTO, meanwhile, said that brute-force attacks are targeted compared to password spray attacks, with attackers going after specific users and cycles through as many passwords as possible using dictionary words, common passwords, or conducting research to see if they can guess the user’s password, for instance, discovering family names through social media posts.
In July 2021 as well, UK’s National Cyber Security Centre reported that it observed an increase in activity as part of malicious email and password spraying campaigns against a limited number of UK organizations.
3. Use of Single-Factor Authentication
The use of single-factor authentication is another bad practice that’s disastrous in technologies accessible from the internet. Single-factor authentication is the simplest form of authentication. With single-factor authentication, a user matches one credential to verify oneself online. The most common credential is the password to a username.
“The use of single-factor authentication for remote or administrative access to systems supporting the operation of Critical Infrastructure and National Critical Functions (NCF) is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety,” CISA said. “This dangerous practice is especially egregious in technologies accessible from the Internet.”
Cybersecurity Best Practices
Below are the cybersecurity practices that best counter the above-mentioned bad practices:
"There are over 300 million fraudulent sign-in attempts to our cloud services every day,” Maynes said. “By providing an extra barrier and layer of security that makes it incredibly difficult for attackers to get past, MFA can block over 99.9 percent of account compromise attacks. With MFA, knowing or cracking the password won’t be enough to gain access.”
MFA, however, shouldn’t be your organization’s only defense against malicious actors as there are a handful known ways of bypassing MFA.
. Practice network segmentation. In network segmentation, your organization’s network is sub-divided into sub-networks so that in case of a disaster in one network, the other networks won’t be affected.
Modern Email Threat: Morse Code Used in Phishing Attacks
Microsoft has revealed that cybercriminals are changing tactics as fast as security and protection technologies do, with the latest tactic: The use of Morse code in phishing attacks.
In the blog post "Attackers use Morse code, other encryption methods in evasive phishing campaign," Microsoft 365 Defender Threat Intelligence Team said that a year-long investigation found a targeted, invoice-themed XLS.HTML phishing campaign in which the attackers changed obfuscation and encryption mechanisms every 37 days on average, showing high motivation and skill level in order to constantly evade detection and keep the malicious operation running.
The phishing campaign’s primary goal, Microsoft 365 Defender Threat Intelligence Team said, is to harvest sensitive data such as usernames, passwords, IP addresses, and location – information that attackers can use as an initial entry point for later infiltration attempts.
In a phishing attack, attackers masquerade as a trusted entity and trick a victim into opening an email with a malicious attachment. In the phishing campaign observed for a year by Microsoft 365 Defender Threat Intelligence Team, the attackers initially sent out emails to targeted victims about a bogus regular financial-related business transaction, specifically sending a vendor payment advice.
According to Microsoft 365 Defender Threat Intelligence Team, the malicious email contains HTML file attachment with “xls” file name variations. An attachment with xls file name ordinarily means it’s an Excel file. Opening this attachment, however, leads to a fake Microsoft Office 365 credentials dialog box, and lately to a legitimate Office 365 page.
Entering one’s username and password into the fake Microsoft Office 365 credentials dialog box or legitimate Office 365 page leads to the activation of the attackers’ phishing kit – harvesting the user’s username, password, and other information about the user.
Named after one of the inventors of the telegraph Samuel Morse, Morse Code is a code for translating letters to dots and dashes.
According to Microsoft 365 Defender Threat Intelligence Team, in place of the plaintext HTML code, the attackers used Morse code – dots and dashes – to hide the attack segments.
The use of Morse code in phishing attacks was first reported by u/speckz on Reddit last February. Lawrence Abrams of Bleeping Computer followed up the initial report of u/speckz. Abrams said Morse code was used by a threat actor to hide malicious URLs in their phishing campaign to bypass secure mail gateways and mail filters.
When viewing the HTML attachment in a text editor, Abrams said, instead of the plaintext HTML code, Morse code is placed instead with dots and dashes. For instance, the letter “a” is written in “.-” and the letter 'b' is written in “-…”.
Cybersecurity Best Practices
The changing tactics and speed that cybercriminals use to update their obfuscation and encoding techniques in launching their phishing campaigns via Office 365 environment call for the following cybersecurity best practices:
To better protect your organization against modern threats and mitigate cyber risks, schedule a consultation with one of our cybersecurity experts today.
What Is Kubernetes and How to Protect This Attack Surface
Kubernetes is fast becoming the target of attackers to steal data, steal computing power, or cause a denial of service.
What Is Kubernetes?
Kubernetes is an open-source system that’s often hosted in the cloud. It’s used to automate the deployment, scaling, and management of applications. Companies that use Kubernetes include Google and Tesla.
Google originally developed and released Kubernetes as open-source in 2014. Google Cloud is the known birthplace of Kubernetes. Kubernetes development drew inspiration from Google’s Borg.
“Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines,” Google said. “It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior.”
While Kubernetes offers users a way to automate the deployment, scaling, and management of applications, it presents complexities. "Kubernetes clusters can be complex to secure and are often abused in compromises that exploit their misconfigurations,” the U.S. Cybersecurity and Infrastructure Security Agency and U.S. National Security Agency said in the advisory “Kubernetes Hardening Guidance.”
In February 2018, researchers at RedLock discovered that attackers had infiltrated Tesla’s Kubernetes console which wasn’t password protected. “Within one Kubernetes pod, access credentials were exposed to Tesla’s AWS environment which contained an Amazon S3 (Amazon Simple Storage Service) bucket that had sensitive data such as telemetry,” RedLock researchers said.
According to RedLock researchers, attackers in the Tesla case stole the computing power for crypto mining from within one of Tesla’s Kubernetes pods. The researchers added that the attackers used the following evasion techniques to hide the illicit crypto mining:
. The attackers didn’t use a well-known public “mining pool” in this attack, making it difficult for standard IP/domain-based threat intelligence feeds to detect the malicious activity.
. The attackers hid the true IP address of the mining pool server behind a free content delivery network (CDN) service, making IP address-based detection of crypto mining activity difficult.
. The mining software was configured to listen on a non-standard port, making it difficult to detect malicious activity based on port traffic.
. The attackers configured the mining software to keep the usage low to evade detection.
Common Sources of Compromise in Kubernetes
According to the U.S. Cybersecurity and Infrastructure Security Agency and U.S. National Security Agency, the three common sources of compromise in Kubernetes are malicious threat actors, supply chain risks, and insider threats.
Malicious Threat Actors
According to the U.S. Cybersecurity and Infrastructure Security Agency and U.S. National Security Agency, malicious threat actors often target the following Kubernetes architecture for remote exploitation: control plane, worker nodes, and containerized applications.
The Kubernetes control plane is used to track and manage the cluster. The agencies said the Kubernetes control plane lacking appropriate access controls is often taken advantage by attackers.
The Kubernetes worker nodes host the kubelet and kube-proxy service. According to the said agencies, worker nodes are potentially exploitable by attackers.
The agencies added that the containerized applications running inside the Kubernetes cluster are common targets. "An actor can then pivot from an already compromised Pod or escalate privileges within the cluster using an exposed application’s internally accessible resources,” the agencies said.
Supply Chain Risks
In supply chain risks, attackers may compromise a third-party software and vendors used to create and manage the Kubernetes cluster.
A malicious third-party application running in Kubernetes could provide attackers with a foothold. The compromise of the underlying systems (software and hardware) hosting Kubernetes could provide attackers with a foothold as well.
Insiders threats refer to individuals from within the organization who use their special knowledge and privileges against Kubernetes clusters. These individuals can be administrators, users, and cloud service or infrastructure provider.
According to the U.S. Cybersecurity and Infrastructure Security Agency and U.S. National Security Agency, Kubernetes administrators have control over the Kubernetes environment, giving them the ability to compromise the Kubernetes environment.
Users who have knowledge and credentials to access containerized services in the Kubernetes cluster could compromise the Kubernetes environment as well. Cloud service or infrastructure provider, meanwhile, has access to physical systems or hypervisors managing Kubernetes nodes. This access could be used to compromise a Kubernetes environment.
Cybersecurity Best Practices
The U.S. Cybersecurity and Infrastructure Security Agency and U.S. National Security Agency recommend the following best practices in order to protect your organization’s Kubernetes environment:
Steve E. Driz, I.S.P., ITCP | <urn:uuid:6f3308e8-aeb4-4596-90ff-46cae2514145> | CC-MAIN-2022-40 | https://www.drizgroup.com/driz_group_blog/previous/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00170.warc.gz | en | 0.928351 | 11,061 | 2.65625 | 3 |
AES-NI virus Removal Guide
What is AES-NI ransomware virus?
AES-NI ransomware virus uses new distribution tricks in 2017
AES-NI virus is a ransomware-type program designed to encrypt data on a computer using the AES and RSA cryptography ciphers. Earlier variants of this virus used to append .aes_ni or file extensions to corrupted files, while the latest version (SPECIAL VERSION: NSA EXPLOIT EDITION) adds the .aes_ni_0day file extension. Following a successful data encryption, the virus creates a text file called !!! READ THIS – IMPORTANT !!!.txt and saves it on the desktop. This file holds a ransom-demanding message and instructions on how to decrypt the data locked by the virus. Examination of the virus' samples revealed that the cybercriminals demand a ransom worth 500-1600 US dollars. However, the sum must be paid in Bitcoin (virtual currency). Researchers at 2-Spyware strongly advise victims not to pay the ransom and to remove the ransomware instead.
AESNI ransomware is also believed to be an updated variant of the AES-256 virus. Just like the new one, the earlier malware version attacked computers with the AES-256 algorithm: it searches for .doc, .jpg, .mp4, and other important files, encodes them, and then appends the .aes256 file extension. It must be noted that the ransomware uses a multi-layer encryption scheme to lock the victim's files, so it is technically impossible to restore them without knowing the unique key, which the malicious program auto-generates and transmits to the criminals' servers. If you are not familiar with encryption types, it is enough to know that AES is a symmetric encoding technique and RSA is an asymmetric one. The former may use a 128- or 256-bit key to encode the files. Naturally, the 256-bit variant involves longer keys and more rounds of ciphering than the 128-bit one. As a result, AESNI virus generates a more elaborate encryption key which is, in practical terms, uncrackable. However, that does not mean it is time to fall into despair and grieve over the lost files. Instead, it is advisable to remove AES-NI and use data backups for data recovery.
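While the encryption itself cannot be reversed without the key, a defender can at least inventory which files the ransomware has touched by searching for the extensions it appends. The sketch below is a hypothetical helper (not part of any official removal tool), and the extension list is an assumption based on the variants described above:

```python
from pathlib import Path

# Extensions reported for AES-NI variants in this article; the list is
# illustrative and may need updating for newer variants.
RANSOM_EXTENSIONS = {".aes_ni", ".aes256", ".aes_ni_0day"}

def find_encrypted_files(root):
    """Walk `root` recursively and yield paths whose final suffix
    matches a known AES-NI marker extension."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in RANSOM_EXTENSIONS:
            yield path

def inventory(root):
    """Return a sorted list of affected file paths (as strings)."""
    return sorted(str(p) for p in find_encrypted_files(root))
```

Such an inventory is useful for confirming exactly which files need to be restored from backups once the malware itself has been removed.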
Cybercriminals know that victims might be tempted to contact the crooks. For that purpose, they provide the following email addresses: firstname.lastname@example.org and email@example.com. The latest variants of this ransomware provide different addresses – firstname.lastname@example.org, email@example.com, and firstname.lastname@example.org. Even if the amount of the ransom does not seem too high to pay, keep in mind that you are dealing with cybercriminals. They are not obliged to transfer the decryption key even after receiving the demanded amount of money. Such an assumption is all the more likely taking into account the story of CryptoWall. Therefore, it is unwise to foster hopes and rely on the fraudsters' sense of conscience. Instead, make AES-NI removal your current priority. For that, experts advise using programs like ReimageIntego.
AES-NI ransomware covers computer's desktop with the ransom note titled as "YOUR FILES ARE ENCRYPTED." To trick their victims into making payments, its developers also recommend not using any "decryption tools"
Distribution tendencies and prevention
AES NI ransomware employs traditional malware distribution channels, such as spam messages, malvertising, or infectious torrent files. Although malware researchers repeatedly remind users of the deception techniques that ransomware distributors use (specifically, the hackers‘ tendency to disguise in fake tax reports or invoices), a number of users still fall for the bait. Users should also stay clear of fake emails from the Office of Personnel Management (OPM). Though Locky ransomware prefers hiding in disguising under the name of this institution, due to last year’s data breach, other crooks might take up the habit as well. In general, emails carrying corrupted attachments contain the number of typing and grammar mistakes. They are especially visible in the forged emails of official institutions. The absence of special numeric and PIN codes might alert you as well . In addition, beware of trojans, the harbingers of crypto-malware, and remember that up-to-date security applications serve well as shields against them.
Update April 2016: developer of the virus claims to be distributing it using leaked NSA exploits
The developer of AES-NI now claims that recently leaked NSA exploits (by the Shadow Brokers group) helped him to infect Windows servers with the ransomware on a global scale. The criminal who claims to be the author of this ransomware has been quite active in online forums and Twitter lately. His posts suggest that he has successfully employed ETERNALBLUE exploit that targets SMBv2 protocol. However, this statement remains obscure since the only evidence criminal provided was a screenshot of an ongoing scan of a server for three NSA exploits. However, researchers discovered that the number of infected hosts suddenly increased over the weekend after the exploits were unveiled. However, researchers are not inclined to believe that criminal’s claims are true. Some researchers expressed their opinions saying that the ransomware is transmitted using RDP attacks, not NSA exploits. The alleged author of the ransomware strongly denies it and also blames malware researchers for blocking his email accounts, saying that victims can no longer get an answer from him via email. Researchers never do it because it’s the victim who chooses whether to contact the attacker or not.
Update May 2017: AES-NI developer gives up ransomware decryption master keys
May 2017 marks an important stage in AES-NI ransomware development. The security researcher by the screen name Thyrex has posted AES-NI decryption keys allowing virus victims to recover their data free of charge. The researcher has obtained the 369 unique decryption keys, decryption executable and the user’s manual from the ransomware developer himself via private message on Russian web forum. After a closer analysis of these three components Thyrex found that they really work but are meant to decrypt a specific version of the virus which uses email@example.com email to communicate with the victims. We insert the download link at the end of the article.
The motifs behind the disclosure of AES-NI keys are quite vague, but according to the software developer, this malware is already outdated, so there is no reason to keep the AES-NI project going. The unknown hacker also talked about the upcoming release of the rest of ransomware keys. If you are infected with the virus version other than firstname.lastname@example.org, make sure you check back with us later. We will post the recovery tools as soon as they come up.
AES-NI removal methods
The easiest way to remove AES-NI virus is to run a scan with anti-malware software. Programs like ReimageIntego or Malwarebytes can banish the virtual threat completely, so our team recommends installing one of those in case you do not have any security programs yet. Note that none of the malware elimination software is capable of decrypting the files. For that reason, you might need an alternative tool or a backup. The latest Windows OS versions have in-built features that helps to easily back up all your files, however, if the ransomware gets administrator access to the system, these backups can be deleted or encrypted as well. Therefore, the only 100% effective backup is the one you created and transferred to a portable data storage device(USB or a hard drive). In general, creating data backups should be an obligatory task taking into account the recent surge of ransomware threats . Even if you do not have additional copies, our further recommendations might be of use. Take care of AES-NI removal and then proceed to the following steps.
Getting rid of AES-NI virus. Follow these steps
Manual removal using Safe Mode
If you are willing to remove the ransomware from your computer, you can find out that it blocks your security software. To overcome this issue, try rebooting your computer to Safe Mode with Networking.
Manual removal guide might be too complicated for regular computer users. It requires advanced IT knowledge to be performed correctly (if vital system files are removed or damaged, it might result in full Windows compromise), and it also might take hours to complete. Therefore, we highly advise using the automatic method provided above instead.
Step 1. Access Safe Mode with Networking
Manual malware removal should be best performed in the Safe Mode environment.
Windows 7 / Vista / XP
- Click Start > Shutdown > Restart > OK.
- When your computer becomes active, start pressing F8 button (if that does not work, try F2, F12, Del, etc. – it all depends on your motherboard model) multiple times until you see the Advanced Boot Options window.
- Select Safe Mode with Networking from the list.
Windows 10 / Windows 8
- Right-click on Start button and select Settings.
- Scroll down to pick Update & Security.
- On the left side of the window, pick Recovery.
- Now scroll down to find Advanced Startup section.
- Click Restart now.
- Select Troubleshoot.
- Go to Advanced options.
- Select Startup Settings.
- Press Restart.
- Now press 5 or click 5) Enable Safe Mode with Networking.
Step 2. Shut down suspicious processes
Windows Task Manager is a useful tool that shows all the processes running in the background. If malware is running a process, you need to shut it down:
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Click on More details.
- Scroll down to Background processes section, and look for anything suspicious.
- Right-click and select Open file location.
- Go back to the process, right-click and pick End Task.
- Delete the contents of the malicious folder.
Step 3. Check program Startup
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Go to Startup tab.
- Right-click on the suspicious program and pick Disable.
Step 4. Delete virus files
Malware-related files can be found in various places within your computer. Here are instructions that could help you find them:
- Type in Disk Cleanup in Windows search and press Enter.
- Select the drive you want to clean (C: is your main drive by default and is likely to be the one that has malicious files in).
- Scroll through the Files to delete list and select the following:
Temporary Internet Files
- Pick Clean up system files.
- You can also look for other malicious files hidden in the following folders (type these entries in Windows Search and press Enter):
After you are finished, reboot the PC in normal mode.
Remove AES-NI using System Restore
If Safe Mode with networking does not help, you can try System Restore
Step 1: Reboot your computer to Safe Mode with Command Prompt
Windows 7 / Vista / XP
- Click Start → Shutdown → Restart → OK.
- When your computer becomes active, start pressing F8 multiple times until you see the Advanced Boot Options window.
- Select Command Prompt from the list
Windows 10 / Windows 8
- Press the Power button at the Windows login screen. Now press and hold Shift, which is on your keyboard, and click Restart..
- Now select Troubleshoot → Advanced options → Startup Settings and finally press Restart.
- Once your computer becomes active, select Enable Safe Mode with Command Prompt in Startup Settings window.
Step 2: Restore your system files and settings
- Once the Command Prompt window shows up, enter cd restore and click Enter.
- Now type rstrui.exe and press Enter again..
- When a new window shows up, click Next and select your restore point that is prior the infiltration of AES-NI. After doing that, click Next.
- Now click Yes to start system restore.
Bonus: Recover your dataGuide which is presented above is supposed to help you remove AES-NI from your computer. To recover your encrypted files, we recommend using a detailed guide prepared by 2-spyware.com security experts.
If your files are encrypted by AES-NI, you can use several methods to restore them:
Data Recovery Pro solution while trying to decrypt files encrypted by the ransomware
If you do not possess any backup copies of your encrypted files, try using this application for file recovery. It is a practical application when you cannot find missing files.
- Download Data Recovery Pro;
- Follow the steps of Data Recovery Setup and install the program on your computer;
- Launch it and scan your computer for files encrypted by AES-NI ransomware;
- Restore them.
Using Previous Windows Versions feature to recover files encrypted by the virus
If System Restore was enabled on your computer before infiltration of the defined ransomware, you can try using the following steps:
- Find an encrypted file you need to restore and right-click on it;
- Select “Properties” and go to “Previous versions” tab;
- Here, check each of available copies of the file in “Folder versions”. You should select the version you want to recover and click “Restore”.
The benefits ShadowExplorer
Rarely, file-encrypting threats access shadow volume copies. In this regard, there is little information whether the discussed ransomware deletes them. The program recreates the corrupted files according to the patterns of shadow volume copies.
- Download Shadow Explorer (http://shadowexplorer.com/);
- Follow a Shadow Explorer Setup Wizard and install this application on your computer;
- Launch the program and go through the drop down menu on the top left corner to select the disk of your encrypted data. Check what folders are there;
- Right-click on the folder you want to restore and select “Export”. You can also select where you want it to be stored.
A decrypter for AES-NI version that indicates email@example.com email as means of communication with the criminals have just been released and you can download it by clicking this link. Enter this password to unlock the Zip file: 6bvlWD9yz3yBtQyOhtAqFheg.
Finally, you should always think about the protection of crypto-ransomwares. In order to protect your computer from AES-NI and other ransomwares, use a reputable anti-spyware, such as ReimageIntego, SpyHunter 5Combo Cleaner or Malwarebytes
How to prevent from getting ransomware
Choose a proper web browser and improve your safety with a VPN tool
Online spying has got momentum in recent years and people are getting more and more interested in how to protect their privacy online. One of the basic means to add a layer of security – choose the most private and secure web browser. Although web browsers can't grant full privacy protection and security, some of them are much better at sandboxing, HTTPS upgrading, active content blocking, tracking blocking, phishing protection, and similar privacy-oriented features. However, if you want true anonymity, we suggest you employ a powerful Private Internet Access VPN – it can encrypt all the traffic that comes and goes out of your computer, preventing tracking completely.
Lost your files? Use data recovery software
While some files located on any computer are replaceable or useless, others can be extremely valuable. Family photos, work documents, school projects – these are types of files that we don't want to lose. Unfortunately, there are many ways how unexpected data loss can occur: power cuts, Blue Screen of Death errors, hardware failures, crypto-malware attack, or even accidental deletion.
To ensure that all the files remain intact, you should prepare regular data backups. You can choose cloud-based or physical copies you could restore from later in case of a disaster. If your backups were lost as well or you never bothered to prepare any, Data Recovery Pro can be your only hope to retrieve your invaluable files. | <urn:uuid:caac965b-843b-4242-9bf7-de87b5fc07e2> | CC-MAIN-2022-40 | https://www.2-spyware.com/remove-aes-ni-ransomware-virus.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00170.warc.gz | en | 0.894805 | 3,432 | 2.546875 | 3 |
By Zach Ugol
One of the challenges in having discussions around diversity, equity, and inclusion (DEI) is in how deeply personal the conversations can be. The words “systemic racism”, “white supremacy”, “white privilege”, etc. carry a lot of weight. It is easy to become defensive; I know I have at times. But in order to engage in discussions around race, I need to not take things personally. As a white male, I need to understand that society is not trying to punish or blame me.
One of the paradigm shifts for me has been beginning to understand the history of racism and learning about the differences between internalized, interpersonal, institutional, and systemic racism (refer below). The conversations around race are not only around what is happening today, but how the past is impacting what is happening now. Whether we acknowledge it or not, some of the racial challenges facing our society are the reverberations of past racially unjust acts.
A few weeks ago, !mpact Makers brought in Dialectix for our first training session for a small group of employees. The goal of the session was to provide background around DEI and to clarify definitions around certain terms. In order to facilitate meaningful discussions around race, we needed a common language with which to speak. One of the main focuses was the difference between internalized racism, interpersonal racism, institutional racism, and systemic racism.
- Internalized – within individuals
- Interpersonal – between individuals
- Institutional – within institutions
- Systemic or Structural – across multiple institutions
Since this training session, I have been reflecting on the ways in which the history of racism in the U.S. impacts the present. Though laws and policies may have changed, the decisions made decades ago have a continuing effect on our society at present.
One example (of which I am sure there are more) is with minority owned small businesses impacted by COVID-19. A friend of mine who works at the Federal Reserve of Richmond published research around the importance of small business lending during COVID-19. It seems logical that businesses with access to existing credit products are more likely to survive this economic downturn. The challenge is that minority-owned small businesses “may be at a disadvantage, as these firms are less likely to have an existing credit product – 41 percent of black-owned businesses do not have outstanding debt.” Minority businesses may not be able to obtain loans to keep their businesses afloat or may face higher interest rates because of their lack of credit history (which may discourage them from obtaining a loan). In fact, further research by the Federal Reserve noted that minority owned businesses were less likely to have access to funding from the Paycheck Protection Program (PPP) as minority owned businesses may not have had established banking relationships with creditors in order to submit their applications prior to funding being unavailable. These issues stem from a lack of a strong and healthy relationship between banks and communities of color.
The question is “Why?” Why don’t minority-owned small businesses have a relationship with a creditor? Perhaps, when they went to secure a loan in the past they were denied because of where their business was located, a lack of collateral, a lack of credit history or low credit score, above average interest rates, or a host of other reasons that disproportionately impacted people of color. And while the policies around lending may have changed years ago, the ramifications of those policies are still being felt. This is an example of systemic racism, where there is no one individual or even group of people responsible – rather, the way in which the system is set up causes a disadvantage for minorities. Which is precisely the challenge in addressing systemic racism; there is no one to blame.
Systemic racism is the hardest to address and correct because of its long-term effects on society. What I have begun to recognize is that in order to address these inequities, we (myself included) need to understand and acknowledge how decisions made decades ago have implications today. We need to take time to listen to history, to understand the root of the challenges facing our society so that we can begin the process of healing and dismantling unjust systems. Until we are honest about the nature and history of the problem(s), we cannot even begin to solve them. | <urn:uuid:c8ef44cf-8efd-4c18-9ac7-1d8949903cff> | CC-MAIN-2022-40 | https://www.impactmakers.com/blog/impact-makers-journal-entry-2-whats-past-is-present/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00170.warc.gz | en | 0.972903 | 889 | 2.546875 | 3 |
Bacteria generally have a bad reputation, as people first think of certain strains that can cause serious illnesses like pneumonia or meningitis.
University of Cincinnati researchers have now engineered a probiotic designed to target and break down cancer cell defenses, giving therapies an easier way inside to kill tumors. The findings were recently published in the journal Advanced Healthcare Materials.
Nalinikanth Kotagiri, Ph.D., the senior author of this study, an assistant professor in UC’s James L. Winkle College of Pharmacy and a UC Cancer Center member, studies “solid cancers,” defined as abnormal cellular growths in “solid” organs such as the breast or prostate, as opposed to leukemia, a cancer affecting the blood.
Kotagiri explains many solid cancers have an extracellular matrix made up of collagen and hyaluronic acid. The matrix forms a barrier around the cells and makes it harder for antibodies and immune cells to reach the tumors.
Shindu Thomas, the first author of this study and a graduate student in the Kotagiri lab, worked with E. coli Nissle, a bacteria that has been used as a probiotic for around 100 years and is different from E. coli strains that cause sickness. Through new technology, any protein or enzyme can be manufactured on the E. coli Nissle bacteria.
In this case, the bacteria was engineered to secrete an abundance of smaller structures called outer membrane vesicles on the outer edge of cells. The vesicles carry the same materials present on the bacteria itself, so researchers designed the bacteria to carry an enzyme that breaks down cancers’ extracellular matrix.
Kotagiri said bacteria tend to thrive in low-oxygen and immunodeficient environments, two characteristics found in solid cancers. Because of this, the specially designed bacteria are naturally drawn to these cancers.
“We took advantage of this unique feature of E.coli Nissle to home and localize into these tumors,” said Kotagiri. “And then once bacteria are lodged there, they start making nanoscale vesicles which carry the enzyme much deeper into the tumor matrix.”
After creating the new probiotic, researchers studied the bacteria’s effect on animal models of breast and colon cancer. The bacteria is delivered intravenously about four or five days prior to the cancer treatment, allowing the bacteria time to populate, break down the cancer’s defenses, and prime the tumor to respond to the treatment.
After administering the bacteria followed by doses of either immunotherapy or a pharmaceutical used in targeted therapy, Kotagiri said mice survived twice as long as those given the cancer therapy alone. Imaging showed the bacteria and enzyme were effective at breaking down the extracellular matrix and allowing the therapy to reach the cancer cells.
The study found the bacteria affected the tumors but was not attacking healthy cells in other organs like the heart, lungs, liver and brain. Kotagiri said this shows the bacteria can be safe and will not cause infection in other parts of the body, but more research needs to be done to examine its safety in large animal models and potentially humans, particularly in immunodeficient environments.
“This always comes with a word of caution as to how you can utilize this strategy without causing any sepsis or any overt infections in the body,” he said.
Kotagiri said his lab began to look more closely at how bacterial probiotics can address biomedical problems around 2018, as there are roughly one to two times as many bacterial cells as human cells in your body at any given time.
“There’s bacteria in the gut, on the skin, inside your lungs, inside your mouth, even inside tumors,” Kotagiri said. “So why not take advantage of that and find interesting ways to make them a bit more proactive?”
If the engineered bacteria continues to prove itself safe and effective, Kotagiri said there are a wide variety of ways to engineer the bacteria for different uses, including potentially using the bacteria to treat disease conditions in the gut, mouth and skin. There is also potential to engineer the bacteria armed with multiple proteins and molecules to make a monotherapy platform (or therapy that uses one type of treatment) rather than just facilitating combination therapy, he said.
“So the bacteria can essentially serve as a mothership that would carry the necessary therapeutic payload to unique niches in the body and from there it’s a self-sustaining entity,” Kotagiri said. “While the possibilities are endless there are also significant challenges. We have to be good stewards of making that kind of evidence possible for the community to understand what are the limits and what can be done.”
The microenvironment surrounding tumor tissues provides a favorable niche for bacteria to inhabit. Bacteria including Bifidobacterium [5], Clostridium [6], Salmonella [7], and Escherichia [8] have been shown to preferentially colonize tumors after being administered in mice. Following bloodstream clearance mediated by inflammation, bacteria are generally entrapped in the tumor vasculature.
Obligate anaerobes such as Bifidobacterium and Clostridium survive in the anoxic region. In addition, the presence of available nutrients in necrotic tumor tissues attracts facultative anaerobes like Salmonella and Escherichia to the cancerous site via chemotaxis.
Consequently, they thrive in the hypoxic/necrotic regions of tumors to evade clearance by the immune system. Bacterial therapy is not new [9], and its implementation for tumor treatment has recently gained momentum with the advent of synthetic biology. In general, the tumor-seeking bacteria are tailored to synthesize a variety of therapeutic agents [12].
By administration locally or systemically, the engineered bacteria target tumors where they reside, replicate, and continuously produce the payloads on site. It enables in situ delivery of the produced bioactive molecules to tumor site, which improves the therapeutic efficacy.
The tumor-targeting bacteria have been genetically instructed to deliver a variety of bioactive payloads, notably prodrug-converting enzymes [10], short hairpin RNA [11], cytokines [12], antigens [13], antibodies [14], and bacterial toxins [15]. These approaches generally show encouraging results. Nevertheless, they have intrinsic limitations: most of the produced payloads act only on proliferating cells and/or penetrate tumors poorly. Hemolysin appears to be a promising protein payload.
It is naturally produced in bacteria and displays a pore-forming activity that lyses mammalian erythrocytes [16]. As illustrated previously, Staphylococcus aureus α-hemolysin (SAH) was expressed in E. coli [17]. Recombinant SAH was shown to penetrate into tumor tissue and eradicate cancer cells.
As a result, in situ delivery of SAH by E. coli reduced the volume of MCF7 tumors by 41%. Like SAH, hemolysin E (HlyE) is a pore-forming protein that naturally occurs in E. coli [18], S. enterica [19], and Shigella flexneri [20]. HlyE is cytotoxic to cultured mammalian cells and macrophages [21]. It causes the formation of a transmembrane pore on the host cell. The damaged cell membrane in turn induces cell apoptosis.
The application of HlyE-mediated cell lysis to cancer treatment was investigated in later work. As first exemplified in the 4T1 tumor model, administration of HlyE-expressing S. typhimurium significantly decreased the tumor volume [15].
Colorectal cancer ranks as the third most common malignant tumor and is marked by a low 5-year survival rate. Newly diagnosed patients account for almost 10% of new cancer cases worldwide [22]. It appears necessary to further explore potential methods for medical intervention in colorectal cancer.
Probiotic bacteria have emerged as the most promising chassis for living therapeutics [23]. E. coli Nissle 1917 (EcN) is a probiotic strain free of enterotoxins and cytotoxins and is used for the conventional treatment of various gastrointestinal illnesses [24]. EcN producing therapeutic proteins has been demonstrated for cancer therapy in murine tumor models. Azurin is a cytotoxic protein which induces cancer cell apoptosis.
The administration of azurin-producing EcN suppressed the growth of B16 melanoma and 4T1 breast cancer and prolonged the survival of tumor-bearing mice [25]. EcN with azurin was also able to restrain pulmonary metastasis developed by 4T1 cancer cells. EcN has innate prodrug-converting enzymes. By intratumoural injection, EcN caused a significant reduction in tumor growth and an increase in survival of CT26 colon cell-bearing mice after prodrugs were administered [26].
In the human tumor model, tumor growth was suppressed by engineered EcN which produced cytotoxic compounds [27]. In this study, the issue was addressed by developing a bacterial cancer therapy (BCT) based on HlyE-producing EcN. To approach the goal, HlyE was expressed under the control of the araBAD promoter (PBAD).
The strategy of metabolic engineering was applied to EcN for temporal and spatial control of HlyE expression. As a result, the engineered EcN preferentially colonized tumor tissues and expressed HlyE, which effectively caused tumor regression in mice xenografted with human colorectal cancer cells.
reference link : https://www.nature.com/articles/s41598-021-85372-6
More information: Shindu C. Thomas et al, Engineered Bacteria Enhance Immunotherapy and Targeted Therapy through Stromal Remodeling of Tumors, Advanced Healthcare Materials (2021). DOI: 10.1002/adhm.202101487 | <urn:uuid:bff8acc3-b0a9-4ea5-88e0-359409c4be53> | CC-MAIN-2022-40 | https://debuglies.com/2021/11/19/researchers-have-now-engineered-a-probiotic-designed-to-target-and-break-down-cancer-cell-defenses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00170.warc.gz | en | 0.938192 | 2,086 | 3.59375 | 4 |
To be clear, a meaningful way to think about capacity is the amount of data that can flow through an AP in an interval of time. Let’s imagine that amount as a pie. The depth of the pie is the time interval, lets say one second. Then the diameter of the pie represents the number of bits that can be transmitted via that technology during that second. So the entire pie represents the AP’s throughput in terms of bits per second, or bps. Pie, oh glorious pie.
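To make the analogy concrete, here is a small sketch with hypothetical numbers: the whole pie is the AP's throughput in bits per second, and each device's average data rate is the size of one slice.

```python
# A concrete sketch of the pie arithmetic with hypothetical numbers: the
# whole pie is the AP's throughput in bits per second, and each device's
# average data rate is the size of one slice.

def devices_served(ap_throughput_bps: int, per_device_bps: int) -> int:
    """Upper bound on devices one AP can serve, ignoring protocol overhead."""
    return ap_throughput_bps // per_device_bps

# A 1 Mbps pie shared by sensors that each average 100 bps:
print(devices_served(1_000_000, 100))  # -> 10000
```

Real networks lose some of the pie to protocol overhead and contention, so this is a ceiling, not a promise.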
There are three scenarios to consider here.
To learn more about how RPMA is able to meet the needs of the majority of IoT devices read our white paper, How RPMA Works. | <urn:uuid:3c33a464-6579-46b4-bafc-228df871a06c> | CC-MAIN-2022-40 | https://www.ingenu.com/2015/11/how-many-devices-can-your-pie-serve/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00170.warc.gz | en | 0.948246 | 143 | 2.609375 | 3 |
Clearly, IoT security is more important than ever before – but unfortunately, IoT security is also more challenging than ever before.
Some background: the COVID-19 pandemic and lockdown of 2020 threw all of the analyst predictions into chaos, but as the economy starts to emerge from the crisis, IT spending is once again expected to resume, and that includes the growth of the Internet of Things (IoT).
IoT is not a monolithic category but breaks down into numerous industries and use cases. Forrester Research predicts that in 2021, the IoT market growth will be driven by healthcare, smart offices, location services, remote-asset monitoring, and new networking technologies.
Much of that stems from the fallout of COVID-19: the explosion in remote work accounts for some of it, as does remote medicine. New devices are coming out to diagnose patients who can’t or won’t go in to see their doctor.
One of the key concerns related to the successful adoption of the IoT is having sufficient security mechanisms in place. This is especially important for IoT medical devices, because health care is so heavily regulated for privacy reasons.
And it’s not just securing the specific IoT devices, it’s all devices. Your connected refrigerator might not appear to be much of a threat to the security of your home if it is compromised by a hacker, but it can act as a gateway to more important devices on your home network. Then it becomes as significant as a heart monitor.
The same applies to industrial IoT (IIoT). Last summer it was revealed that Russian hackers penetrated the control systems of U.S. nuclear power plants. The consequences of such a compromise for a global manufacturing operation are considerable.
So it’s no surprise IoT security has been top of mind for IT managers for some time.
Understanding IoT – and Its Complexity
The Internet of Things (IoT) is a collection of devices that are connected to the Internet; these IoT devices are not traditional computing devices. Think of electronic devices that haven’t historically been connected, like copy machines, refrigerators, heart and glucose meters, or even the coffee pot.
The IoT is a hot topic because of its potential to connect previously unconnected devices and bring connectivity to places and things normally isolated. Research suggests improved employee productivity, better remote monitoring, and streamlined processes are among the top benefits for companies that embrace IoT.
What Should You Know About IoT Security?
There is an unfortunate pattern in technology that has repeated over the years; we rush to embrace the new and secure it later. Such has been the case with IoT devices. They often make the news because of hacks, ranging from mild to severe in their threat.
IoT security is top of mind at the Department of Homeland Security, which produced a lengthy paper on securing IoT devices. While the document is five years old and much has changed in the IoT world, many of the principles and best practices outlined are still valid and worthy of consideration.
Research from 451 Research shows 55% of IT professionals list IoT security as their top priority – and the figure is likely growing. So what can you do to secure your IoT devices? A lot, and over many areas. Let’s dig in.
1) Assume Every IoT Device Needs Configuring
When the market sees the advent of smart cat litter boxes and smart salt shakers, you know we’re at or approaching peak adoption for IoT devices. But don’t just ignore such features or assume they are securely configured out of the box. Leaving them unconfigured and not locked down is an opening to a hacker, whatever the device.
2) Know Your Devices
It’s imperative that you know which types of devices are connected to your network and keep a detailed, up-to-date inventory of all connected IoT assets.
You should always keep your asset map current with each new IoT device connected to the network and know as much as possible about it. Facts to know include manufacturer and model ID, the serial number, software and firmware versions, and so forth.
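The asset map described above can be kept as structured records. A minimal sketch; the field names and the reporting helper are illustrative assumptions rather than a standard schema:

```python
from dataclasses import dataclass


@dataclass
class IoTAsset:
    """One entry in the connected-device inventory (illustrative fields)."""
    manufacturer: str
    model_id: str
    serial_number: str
    firmware_version: str
    ip_address: str


def inventory_report(assets):
    """Group the asset map by manufacturer so gaps are easy to spot."""
    report = {}
    for asset in assets:
        report.setdefault(asset.manufacturer, []).append(asset.model_id)
    return report


assets = [
    IoTAsset("Acme", "Cam-100", "SN123", "1.4.2", "10.0.20.5"),
    IoTAsset("Acme", "Cam-200", "SN124", "2.0.1", "10.0.20.6"),
    IoTAsset("FridgeCo", "Cool-9", "SN900", "0.9.0", "10.0.20.7"),
]
```

In practice the same records would be refreshed automatically whenever a new device joins the network, rather than maintained by hand.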
3) Require Strong Login Credentials
People have a tendency to use the same login and password for all of their devices, and often the passwords are simple.
Make sure every login is unique for every employee, and require strong passwords. Use two-factor authorization if it’s available, and always change the default password on new devices. In order to ensure trusted connections, use public key infrastructure (PKI) and digital certificates to provide a secure underpinning for device identity and trust.
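A first line of defence is rejecting weak, default, or reused passwords at enrollment time. A minimal sketch; the length threshold and the default-password list are illustrative assumptions, not a recommended policy:

```python
# Stand-in list of vendor defaults; a real deployment would use a much larger set.
KNOWN_DEFAULTS = {"admin", "password", "12345678", "root"}


def is_acceptable(password: str, used_before: set) -> bool:
    """Reject short, default, or reused passwords (thresholds are assumptions)."""
    return (
        len(password) >= 12
        and password.lower() not in KNOWN_DEFAULTS
        and password not in used_before
        and any(c.isdigit() for c in password)
        and any(c.isalpha() for c in password)
    )
```

A check like this would run when a device is first configured and whenever credentials are rotated.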
4) Use End-to-End Encryption
Connected devices talk to one another, and when they do, data is transferred from one point to another, all too often with no encryption. You need to encrypt data at every transmission to protect against packet sniffing, a common form of attack. Devices should have encrypted data transfer as an option. If they don’t, consider alternatives.
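On the client side, “encrypt every transmission” usually means refusing to talk to a device except over TLS. A minimal sketch using Python’s standard library; the port and the wrapping pattern shown in the comment are illustrative:

```python
import ssl


def strict_tls_context() -> ssl.SSLContext:
    """Build a client context that refuses legacy protocols and unverified certs."""
    ctx = ssl.create_default_context()            # verifies certs, checks hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1
    return ctx


# A device client would then wrap its socket before sending anything, e.g.:
#   with socket.create_connection((host, 8883)) as raw:          # 8883 = MQTT over TLS
#       with strict_tls_context().wrap_socket(raw, server_hostname=host) as tls:
#           tls.sendall(payload)
```

If a device only offers a plaintext protocol, tunnelling it through a VPN or a TLS-terminating gateway is the usual fallback.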
5) Make Sure to Update the Device
Make sure to update the device on first use, as the firmware and software may have been updated between when the device was made and when you bought it. If the device has an auto-update feature, enable it so you don’t have to do it manually. And check the device regularly to see if it needs updating.
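Checking whether a device needs updating usually comes down to comparing the installed firmware version against the vendor’s latest. A small sketch, assuming simple dotted numeric version strings:

```python
def parse_version(v: str):
    """'1.4.2' -> (1, 4, 2); assumes simple dotted numeric versions."""
    return tuple(int(part) for part in v.split("."))


def needs_update(installed: str, latest: str) -> bool:
    # Tuple comparison avoids the string trap where "1.10" sorts before "1.4".
    return parse_version(installed) < parse_version(latest)
```

A periodic job could run this check against each entry in the asset inventory and flag stale firmware.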
On the server side, change the name and password of the router. Routers are often named after the manufacturer by default. It’s also recommended that you avoid using your company name in the network name.
6) Disable Features You Don’t Need
A good step in protecting a device is to disable any feature or function you don’t need. That includes open TCP/UDP ports, open serial ports, open password prompts, unencrypted communications, unsecured radio connections or any place a code injection can be done, like a Web server or database.
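To find out which TCP ports a device leaves open, a quick connect scan is enough for a first pass. A minimal sketch for auditing your own devices; the throwaway local listener just makes the example self-contained:

```python
import socket


def open_tcp_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found


# Demo target: a throwaway listener on an ephemeral port (close it in real use).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 -> the OS picks a free port
listener.listen(5)
port = listener.getsockname()[1]
```

Anything the scan reports that the device doesn’t actually need should be disabled in its configuration.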
7) Avoid Using Public Wi-Fi
Using the Wi-Fi at Starbucks is rarely a good idea, but especially when connecting to your network.
All too often public Wi-Fi access points are old, outdated, not upgraded, and have easily broken security. If you must use public Wi-Fi, use a Virtual Private Network (VPN).
8) Build a Guest Network
A guest network is a good security solution for visitors who want to use your Wi-Fi either at home or in the office. A guest network gives them network access but walls them off from the main network so they can’t access your systems.
You can also use a guest network for your IoT devices, so if a device is compromised, the hacker will be stuck in the guest network.
9) Use Network Segmentation
Network segmentation is where you divide a network into two or more subsections to enable granular control over lateral movement of traffic between devices and workloads. In an unsegmented network, nothing is walled off. Every endpoint can communicate with another, so once a hacker breaches your firewall they have total access. In a segmented network, it becomes much harder for hackers to move around.
Enterprises should use virtual local area network (VLAN) configurations and next-generation firewall policies to implement network segments that keep IoT devices separate from IT assets. This way, both groups can be protected from the possibility of a lateral exploit.
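A segmentation plan of this kind can be sketched as subnet membership checks. The subnet choices and the deny-across-segments policy below are illustrative assumptions, not a recommended layout:

```python
import ipaddress

# Illustrative segment plan: one subnet per VLAN.
SEGMENTS = {
    "it_assets":   ipaddress.ip_network("10.0.10.0/24"),
    "iot_devices": ipaddress.ip_network("10.0.20.0/24"),
    "guests":      ipaddress.ip_network("10.0.30.0/24"),
}


def segment_of(host: str) -> str:
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "unknown"


def may_talk(src: str, dst: str) -> bool:
    """Firewall policy sketch: deny lateral traffic between segments."""
    return segment_of(src) == segment_of(dst)
```

A real deployment would enforce the same logic in VLAN tags and firewall rules rather than in application code; the sketch only shows the policy.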
Also consider deploying a Zero Trust network architecture. As its name implies, Zero Trust secures every digital asset and makes no assumption of trust from another digital asset, thus limiting the movement of someone with unauthorized access.
10) Actively Monitor IoT Devices
We can’t stress this enough: real-time monitoring, reporting and alerting are imperative for organizations to manage their IoT risks.
Traditional endpoint security solutions often don’t work with IoT devices so a new approach is required. That means real-time monitoring for unusual behavior. Just as with a Zero Trust network, don’t let IoT devices access your network without keeping a constant eye on them.
At this point, the consensus is clear. Passwords are a vulnerable and outdated security measure, and data will be safer once organizations and individuals progress to stronger forms of authentication. The problem is that it’s difficult to move people away from what they know. Tech insiders may be familiar with more effective technologies like biometrics and security keys, but many members of the general public cannot even conceive of a security framework predicated on something other than a secret string of letters and numbers.
That’s why a recent study from NIST was so concerning. The study looked at the password habits of young children, and revealed that children exhibit many of the same bad behaviors as their parents. People of any generation tend to reuse passwords and share those passwords with their friends. In that regard, the study suggests that passwords are inherently flawed, at least to the extent that they incline people towards poor security practices.
The real issue is that the study shows that those practices are being perpetuated. Despite all of the attempts to raise awareness about other security technologies, passwords are still the primary security measure for another generation. The longer that goes on, the more that behavior becomes entrenched, which further delays the rise of passwordless authentication.
So how do you combat that problem? And what implications does the NIST study have for those working to get rid of passwords?
According to FIDO Alliance Executive Director and CMO Andrew Shikiar, the NIST’s findings do not necessarily change the task currently facing privacy and security advocates. He thinks the next generation will be fine because it’s easy to teach kids new tricks.
“People can have behavioral change. I’m less worried about kids than I am about adults because kids are very malleable, for better or for worse,” Shikiar said.
“Kids do what they’re taught. They don’t always do what they’re told, but they do what they’re taught, and they’re being taught by teachers who have been using passwords for all their lives.”
The problem, then, isn’t that kids are learning bad habits (though that’s obviously less than ideal), but that teachers are still passing on the same assumptions that they learned when they were younger. After all, most of us were raised with passwords, and recognize them as the thing that stands between our secrets and potential cybercriminals.
The upshot is that if you want to reach kids, you first need to change the minds of the parents and teachers who have been entrusted with their care. Fix that problem, and the next generation will sort itself out in time.
“The education market isn’t always the most nimble, but there’s a good opportunity to not only practice better authentication habits today, but in doing so, to educate tomorrow’s users, the next generation, on practicing better login hygiene,” Shikiar said.
Of course, getting older people to unlearn their habits is easier said than done. Thankfully, Shikiar believes that it is possible to achieve that cultural shift with the proper messaging.
“At the end of the day, not entering a password is easier than entering a password, but people aren’t accustomed to that,” he said. You just need to find a way to convince people that the technology is safe in order to get them on board.
“How do you get people to choose to enroll a biometric?” Shikiar asked. “What’s the right terminology? What’s the right iconography? What does the user journey have to look like to get someone to enroll, and then utilize, a biometric authenticator versus a password?”
The fact that passwordless authenticators are so easy to use is ultimately what makes them easy to teach. In FIDO’s own research, people are initially reluctant to use biometrics. However, Shikiar indicated that the vast majority (97 percent) are eager to use the technology once they understand what is happening, and how it works. The market has also borne that out, most notably with the debut of Touch ID.
“Apple has proven that it’s possible to consumerize better security and better logins,” Shikiar explained. “When Touch ID first came out, people are like, why would I need to do that? I can just use my PIN code to unlock my phone. But all of a sudden people liked Touch ID. The mass consumerization of biometric technology on handsets, and the widespread acceptance of that as a preferred means to unlock, tells me that it’s not a huge leap to get people to go understand what I do to unlock is now what I do to log in. That’s a small leap.”
The challenge now is to build on that success. Passwordless technologies like security keys and biometric authentication are already sophisticated enough to deploy at scale. That means that public perception is the only thing slowing adoption rates. Companies like Samsung and Google followed in Apple’s footsteps with fingerprint sensors and facial authentication in modern smartphones, and a similar push could create a similar shift with other sectors and devices.
Ask for Permission
While changing adult minds can lay the groundwork for cultural change, it is not necessarily sufficient when it comes to protecting children. There are unique legal considerations when dealing with minors that aren’t there when dealing with consenting adults, especially when it comes to the collection of biometric data. After all, how can you use biometrics to verify a kid’s identity if that kid cannot give you permission to use that data in the first place?
To an extent, educators can sidestep the problem with technologies like security keys. Those kinds of device-based solutions may not be good for young children who are apt to lose them, but they can be effective for older students. For example, teachers could hand out security keys at the high school and university level, and the students could use those keys (instead of passwords) to log into shared computers, or to log into remote learning tools.
However, Shikiar believes that there is still a role for biometrics at any age. He drew a distinction between remote and local authentication systems, and argued that the latter can enable the safe use of biometric data for kids since it does not involve any data collection.
“I’d be fine with my kids using biometrics on a Chromebook as long as it was stored locally on that device,” said Shikiar, who has a 9-year-old and a 10-year-old of his own. “If they want to use biometrics, they should use the technology that’s built into devices that kids are using [to access educational materials]. Use local authenticators to let kids log in.”
As it relates to kids, local technologies minimize the legal exposure for tech developers because there is no database for hackers to break into, and the company cannot access or exploit the data for commercial purposes. The actual biometric data (whether it be a fingerprint, a faceprint, or some other modality) stays on the device, and remains in the possession of the individual who registered it. That also means that businesses do not need to ask for consent (even for minors), since they are not asking to see, store, or use any sensitive information.
For Shikiar, that makes it the only viable biometric authentication option for minor citizens. One of the main drawbacks of passwords is that they are stored in a centralized location, and server-side biometrics only recreates that problem.
“Even a strong password can be manipulated out of your hands. It can be stolen off a server,” Shikiar concluded. “Until we get rid of these server-side credentials, we won’t be able to break the cycle of credential theft, credential stuffing, and data breaches.”
Whatever the case, the simple fact of the matter is that the current generation of children is being raised online. They are gaining access to online services from a young age both at school and at home, and that means that their caretakers need to make sure that those outlets are as secure as they would be for an adult. The NIST survey demonstrates that parents and educators have not yet accepted that responsibility, and that needs to change if the tech industry wants to cultivate a truly passwordless society.
Cybersecurity researchers from Guardicore Labs have discovered a new multi-functional peer-to-peer (P2P) botnet written in the programming language Golang that has been actively targeting SSH servers since January 2020. Named “FritzFrog,” this modular, multi-threaded and file-less botnet has successfully breached over 500 servers so far including well-known universities in the US and Europe and a railway company, according to Guardicore. In addition to implementing a made from scratch P2P protocol, communications are done through an encrypted channel with the malware package creating a backdoor to the victims’ systems for continued access by the attackers. Although Golang-based botnets have been observed before, what makes FritzFrog unique is that it’s fileless, meaning that it assembles and executes payloads in memory, is more aggressive in carrying out brute-force attacks, while also being efficient by distributing the targets evenly within the botnet.
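Because FritzFrog spreads by brute-forcing SSH logins, one simple defensive measure is to watch authentication logs for repeated failures from the same source. A minimal sketch, assuming the common OpenSSH "Failed password" log format; the threshold is an arbitrary assumption:

```python
import re
from collections import Counter

# Matches lines like: "sshd[101]: Failed password for root from 203.0.113.9 port 22 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")


def brute_force_suspects(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed SSH logins."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip for ip, n in hits.items() if n >= threshold}
```

Flagged addresses could then be rate-limited or blocked at the firewall; key-only SSH authentication removes the attack surface entirely.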
The EU Commission has published draft regulations setting out harmonised rules on artificial intelligence (AI) that aim to regulate this rapidly developing area of technology. The regulations take a risk-based approach, identifying AI uses in one of the following categories: (i) an unacceptable risk, (ii) a high risk and (iii) a low or minimal risk.
Included in the first category are AI systems that: deploy subliminal techniques to distort a person’s behaviour; exploit the vulnerabilities of a specific group of people in a way that causes physical or psychological harm; classify the trustworthiness of people based on their social behaviour in a way which leads to detrimental or disproportionate treatment; and use real-time biometric identification in public spaces for law enforcement, unless required for a specific public safety objective.
The regulations define eight high-risk applications of AI including biometric identification, management of critical infrastructure, systems that determine access to employment, education and asylum. High-risk AI systems must be subject to risk management, human oversight, transparency, record-keeping and appropriate data governance practices as they aim to minimise the risk of algorithmic discrimination and infringement of fundamental rights, including privacy.
Fines for infringement of the rules on unacceptably risky and high-risk AI applications can reach €30 million or 6% of annual global turnover, whichever is higher. The regulations also provide for the creation of a European Artificial Intelligence Board, which will be comparable to the European Data Protection Board.
Dyann Heward-Mills, CEO of HewardMills and European Commission ethics adviser, said: “The draft regulations make it clear that companies developing and deploying AI must uphold the highest ethical standards. The European Commission has moved to put privacy and consumer protection front and centre of the coming AI revolution in line with its ambition to create an ecosystem of trust around AI. Of particular note is the need for human oversight and appropriate data governance for high-risk AI applications. This represents an opportunity for independent and qualified Data Protection Officers (DPOs) and privacy practitioners to play a critical role.”
Within the construction industry, artificial intelligence (AI) is beginning to change the way buildings are designed, constructed and utilised after construction. Artificial intelligence is where machines exhibit their own intelligence through using algorithms to solve problems using inputted data. By harnessing robotics, construction managers can utilise intelligent machines that can perform routine tasks that were once completed by humans, such as bricklaying. Alternatively, AI systems can collate and organise information for engineers to use within project planning and design implementation.
Here is how the construction industry is starting to use AI in order to complete projects that contain fewer errors, less omissions, safer working practices, improved workflows, and more on-time worksite completions.
The four categories of artificial intelligence in construction
Within the industry, the utilisation of AI and robotics is broken down into four categories, and these are as follows:
Planning with equipment: Artificial intelligence is used in the creation of construction plans. Autonomous equipment counts as AI because it is aware of its surroundings and is capable of navigating without human input. In the planning stages, AI machinery can survey a proposed construction site and gather enough information to create 3D maps, blueprints and construction plans.
Before this advancement, these processes would take weeks – now they can be done in one day. This helps to save firms both time and money in the form of labour.
Administrative roles: Once construction has begun, AI is being used to manage the project and control tasks. For example, workers can input sick days, vacancies and sudden departures into a data system and it will adapt the project accordingly. The AI will understand that the task must be moved to another employee and will do so on its own accord.
Construction methodology: AI database systems are now helping to inform engineers on how specific projects should be constructed. For example, if engineers were working on a proposed new bridge, AI systems would be able to advise and present a case for how the bridge should be constructed. This is based on past projects over the last 50 years, as well as verifying pre-existing blueprints for the design and implementation stages of the project. By having this information to hand, engineers can make crucial decisions based on evidence that they may not have previously had at their disposal.
There is also the development of autonomous site machinery, which allows the driver to be outside of the vehicle when it is operating at dangerous heights. Using sensors and GPS, the vehicle can calculate the safest route.
Post-construction: Once buildings have been constructed, whether they are used for commercial purposes or it is a development of new houses, AI systems can be used inside the structure. In the US alone, $1.5 billion was invested in 2016 by companies looking to capitalise on this growing market.
For example, the hotel chain Wynn announced in 2016 that every room in its Las Vegas hotel would have an Amazon Echo feature by the end of 2017. These devices can be used for aspects of the room such as lighting, temperature and any audio-visual equipment contained in the room. These systems can also be used within domestic settings, allowing homeowners to control aspects of their home through voice commands and systems that control all electronic components from one device.
BIM: Building information modelling and retrospective assessment
So that buildings hold informative, historical information regarding their construction, building information modelling (BIM) can be used to record a building’s history: from its construction, through the management decisions made alongside construction, up until demolition.
Virtual assistants (VAs) can then be used to add a conversational element alongside this information. By combining VAs alongside NFC (near-field communication), VAs can be given additional information to the building itself in real-time from various sensors in the building. For example, if there were structural problems with a building, then VAs could inform engineers specifically where the problem was and how it can be fixed.
Through working collaboratively with engineers, VAs and AIs can help the industry as a whole to save both time and money in the form of labour; AIs can also replace redundant labour, allowing the industry to make efficiency savings that weren’t possible before this type of technology existed. As the future of AI becomes more of a reality within construction, only time will tell how reliant upon intelligent machines we will have to be in order to construct innovative building designs.
To learn more about Artificial Intelligence and hear from industry leaders, attend the AI & Big Data Expo World Series. Upcoming events in Silicon Valley, London and Amsterdam.
Cloud Computing Adoption
Nowadays, many companies are changing their overall information technology strategies to embrace cloud computing in order to open up business opportunities. There are numerous definitions of cloud computing. Simply speaking, the term “cloud computing” comes from network diagrams in which cloud shapes are used to describe certain types of networks. Any computing carried out across more than one computer via a network, or any service obtained from a host computer via a network, can be considered cloud computing. Through different types of devices such as PCs and smartphones, users can access services and computing resources in clouds. According to the National Institute of Standards and Technology (NIST), “Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”. Cloud computing represents a convergence of two major trends in information technology: IT efficiency and business agility. IT efficiency refers to using computing resources more efficiently through highly scalable hardware and software resources. Business agility, in turn, is the ability of a business to deploy computational tools rapidly and to adapt quickly and cost-effectively to changes in the business environment. Cloud computing can remove traditional boundaries between businesses, make the whole organization more agile and responsive, help enterprises scale their services, enhance industrial competitiveness, reduce operational costs and the total cost of computing, and decrease energy consumption. It would seem that cloud computing can provide new opportunities for innovation by allowing companies to focus on business rather than be stifled by changes in technology.
Although this new technology can help organizations to achieve business efficiencies, evidence indicates that not all companies intend to adopt cloud-based solutions. Borgman and his colleagues conducted a study on the factors influencing cloud computing adoption and found that security and privacy, identity management standards, and the need for sharing and collaboration in today’s highly competitive world have a positive effect on the use and adoption of cloud computing. In fact, a data breach is a security incident in which a company or a government agency loses sensitive, protected or confidential data. Cloud computing involves storing data and computing in a shared multi-user environment, which increases security concerns. Privacy-enhancing techniques, monitoring mechanisms, authentication, encryption, and the security of data in the cloud environment are good ways to enhance cloud security and minimize risk.
Many scholars suggested several strategies to help decision makers improve cloud security which are as follows:
1. Ensure effective governance, risk and compliance processes exist
2. Audit operational and business processes
3. Manage people, roles and identities
4. Ensure proper protection of data and information
5. Enforce privacy policies
6. Assess the security provisions for cloud applications
7. Ensure cloud networks and connections are secure
In sum, cloud computing has become ubiquitous in recent years. Studies indicated that factors such as the relative advantage of cloud computing (such as improving the quality of business operations, performing tasks more quickly, increasing productivity, cost savings, and providing new business opportunities), ease of use and convenience in using the cloud infrastructure, privacy and security, and the reliability of cloud providers can affect the adoption of cloud computing. Hence, decision makers should systematically evaluate these factors before adoption of cloud computing environment.
By Mojgan Afshari
Mojgan Afshari is a senior lecturer in the Department of Educational Management, Planning and Policy at the University of Malaya. She earned a Bachelor of Science in Industrial Applied Chemistry from Tehran, Iran. Then, she completed her Master’s degree in Educational Administration. After living in Malaysia for a few years, she pursued her PhD in Educational Administration with a focus on ICT use in education from the University Putra Malaysia. She currently teaches courses in managing change and creativity and statistics in education at the graduate level.
The GDPR Explained
The General Data Protection Regulation (GDPR) is a data protection law created to protect the personal information of European Union (EU) residents whose data is collected (referred to as “data subjects” in the GDPR).
Since it took effect in 2018, the GDPR has been one of the most comprehensive data privacy laws in the world. It inspired various other jurisdictions to follow suit and implement their own data protection laws, such as California’s CCPA (California Consumer Privacy Act) and the UK’s own GDPR, implemented through its Data Protection Act (DPA) 2018.
Non-compliance to GDPR requirements can lead to big fines, namely up to €20 million or up to 4% of your company’s annual turnover from the preceding financial year, whichever is bigger.
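The “whichever is bigger” rule above is simply the maximum of a flat floor and a turnover percentage; a small illustrative sketch:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements:
    EUR 20 million or 4% of the preceding year's turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)
```

For a company with €300M turnover the €20M floor applies (4% is only €12M); at €1B turnover the 4% rule takes over (€40M). Actual fines are set case by case by the supervisory authority; this only shows the statutory ceiling.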
According to the GDPR, there are two different types of businesses that require GDPR compliance: data controllers and processors.
According to Article 4, a controller is the legal entity that determines what can be done with collected data: the means, the location, and the purposes behind the processing of personal data.
Controllers are more responsible for data protection compared to processors. Processors are only responsible for processing the data on behalf of the controller, without the authority to determine anything else.
When there’s a breach, a processor reports the breach to controllers, while the controller is then responsible for reporting the breach to the data protection authority and victims of the breach.
What is defined as personal data under the GDPR?
Simply put, personal data is any data related to the identity of a living person. This includes personally identifiable information (PII) in various forms. The most common forms of PII are your IP address, cookies, name, physical address, social security number, health records, and other identifying information.
This includes indirect information, such as a person’s online behavior, identification number, social, mental, economic, and cultural data linked to the person.
Does the GDPR apply to companies outside of Europe?
If the GDPR is an EU privacy law, does it apply to companies outside of Europe?
The main purpose of the GDPR is protecting the privacy and data of EU customers. That means if your company processes or stores data from EU citizens, you’ll need to watch out for the GDPR even if you’re not based in any EU countries.
If your business runs online, it would be hard to prove that there are no EU citizens residing in Europe in your user base. Since you can’t be sure of this or prove it, you’ll need to comply with the GDPR even if you aren’t specifically targeting European citizens as your users.
When the GDPR Applies Outside of Europe
Primarily, the GDPR applies to organizations based in the EU, whether your data is stored in or out of the EU. However, if your organization is based outside of the EU, you might be off the hook. Below you’ll find when a non-EU company needs to comply with the GDPR.
Offering goods or services
Generally, you’ll need to comply with GDPR if you regularly process data from EU citizens.
The GDPR also acknowledges that there are occasional instances where a company might serve an EU citizen.
In these cases, the GDPR determines whether the company needs to be compliant using cues that suggest it is targeting EU citizens. For example, accepting payment in euros or another EU currency could signal that a company is targeting EU citizens. If your website is localized into the language of one of the EU member states, this could also be taken as an indicator that you’re serving EU citizens.
Monitoring their behavior
This seems creepy, but it’s actually more normal than you may have thought. Monitoring in this case could mean keeping cookies for tracking web visits or online behavior. Tracking and analytics is something most companies with a healthy marketing department use. Examples include using ads to drive traffic to a landing page or just using web analytics platform, such as Google Analytics or Adobe Analytics.
Additionally, some businesses outside of the EU are required to comply if it’s clear that they’re targeting EU citizens, even if they’re not serving anyone in the EU yet. For example, you need to follow the GDPR if you price your products in EU currencies or localize your marketing material to the language of one of the member states.
Does the GDPR apply to an individual?
If you’re collecting personal data for personal reasons, there’s no need to follow all the GDPR guidelines. For example, if you’re organizing an office gathering, that would be considered a personal matter with no business purposes. In this case, there’s no need to ask for written consent or encrypt their contact info when storing the data.
However, the GDPR kicks in if you’re collecting the data for professional or commercial purposes. For example, if you want to promote your new business through an email newsletter, you can’t just add the contact details of your friends and family to start populating your list. They’d also have to go through the proper channels so you can record their consent.
Exceptions to the GDPR
If you process or store data of EU citizens, there are hardly any exemptions to the GDPR.
However, depending on the size of your company, as well as various other factors, you might have fewer responsibilities under the GDPR than others. For example, SMEs (Small to Medium Enterprises) with fewer than 250 employees still need to comply with the GDPR, but most are exempt from several of the more complicated obligations it specifies, such as appointing a data protection officer (DPO), and may be held to a lower standard for certain security and privacy measures.
You also don’t have to ensure compliance if you’re sure that you aren’t processing data of EU customers currently residing in one of the member states. For example, you won’t need to care about GDPR if you run a local restaurant in the US, even if you promote your business online and have a newsletter for promotions.
Train your team to be GDPR compliant with Inspired eLearning
The data protection law safeguards your EU-based customers against risks to their rights to privacy. As a side effect, the comprehensive rules in GDPR also help you improve your data security.
Although we often talk about fines caused by data breaches, the supervisory authority can also issue fines if you break any of the GDPR requirements. Make sure that you're GDPR compliant to avoid the fines you might otherwise incur.
If you regularly collect or process data of natural persons residing in the EU, chances are you need to comply with the GDPR.
Reaching and maintaining GDPR compliance requires cooperation from your entire organization. Your employees know better about the tools they use daily, which places them in a better position to identify gaps that might block your path to GDPR compliance.
Better yet, your employees also need to know how to identify sensitive data and the right protective measures to process sensitive data with minimal risks.
Inspired eLearning offers comprehensive training to help you comply with various international data privacy regulations, including GDPR, HIPAA, PII, and CCPA. Bundle up your privacy training courses with security awareness training and make your customer’s data even more secure. | <urn:uuid:35deaaf7-ddef-41d2-ab3d-b846e782b10e> | CC-MAIN-2022-40 | https://inspiredelearning.com/blog/who-does-gdpr-apply-to/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00370.warc.gz | en | 0.925041 | 1,565 | 3.203125 | 3 |
Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.
The history of artificial intelligence is filled with theories and attempts to study and replicate the workings and structure of the brain. Symbolic AI systems tried to copy the brain’s behavior through rule-based modules. Deep neural networks are designed after the neural activation patterns and wiring of the brain.
But one idea that hasn’t gotten enough attention from the AI community is how the brain creates itself, argues Peter Robin Hiesinger, Professor of Neurobiology at the Free University of Berlin (Freie Universität Berlin).
In his book The Self-Assembling Brain, Hiesinger suggests that instead of looking at the brain from an endpoint perspective, we should study how information encoded in the genome is transformed to become the brain as we grow. This line of study might help discover new ideas and directions of research for the AI community.
The Self-Assembling Brain is organized as a series of seminar presentations interspersed with discussions between a robotics engineer, a neuroscientist, a geneticist, and an AI researcher. The thought-provoking conversations help the reader understand the views—and the blind spots—of each field on topics related to the mind, the brain, intelligence, and AI.
Biological brain vs artificial neural networks
Many secrets of the mind remain unlocked. But what we know is that the genome, the program that builds the human body, does not contain detailed information of how the brain will be wired. The initial state does not provide information to directly compute the end result. That result can only be obtained by computing the function step by step and running the program from start to end.
As the brain's genetic program runs, it develops new states, and those new states form the basis of the next stages of development.
As Hiesinger describes the process in The Self-Assembling Brain: “At each step, bits of the genome are activated to produce gene products that themselves change what parts of the genome will be activated next—a continuous feedback process between the genome and its products. A specific step may not have been possible before and may not be possible ever again. As growth continues, step by step, new states of organization are reached.”
Therefore, our genome contains the information required to create our brain. That information, however, is not a blueprint that describes the brain, but an algorithm that develops it with time and energy. In the biological brain, growth, organization, and learning happen in tandem. At each new stage of development, our brain gains new learning capabilities (common sense, logic, language, problem-solving, planning, math). And as we grow older, our capacity to learn changes.
Self-assembly is one of the key differences between biological brains and artificial neural networks, the currently popular approach to AI.
“ANNs are closer to an artificial brain than any approach previously taken in AI. However, self-organization has not been a major topic for much of the history of ANN research,” Hiesinger writes.
Before learning anything, ANNs start with a fixed structure and a predefined number of layers and parameters. In the beginning, the parameters contain no information and are initialized to random values. During training, the neural network gradually tunes the values of its parameters as it reviews numerous examples. Training stops when the network reaches acceptable accuracy in mapping input data into its proper output.
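That train-from-random-initialization process can be sketched in a few lines. The following is a generic illustration (not code from the book): a single logistic neuron with a fixed structure, randomly initialized parameters, and a gradient-descent loop that tunes them on a toy dataset.

```python
import math
import random

def predict(w, b, x):
    """Sigmoid output of the neuron for input x."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

def train_fixed_network(data, epochs=2000, lr=0.5):
    """Train a single logistic neuron. The structure (2 inputs, 1 output)
    is fixed up front; only the parameter values change during training."""
    random.seed(0)
    # Parameters start with no information: random initial values.
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for x, target in data:
            delta = predict(w, b, x) - target  # error signal
            # Gradient step: nudge each parameter to reduce the error.
            w[0] -= lr * delta * x[0]
            w[1] -= lr * delta * x[1]
            b -= lr * delta
    return w, b

# Toy training set: the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_fixed_network(data)
print([round(predict(w, b, x)) for x, _ in data])  # [0, 0, 0, 1]
```

Note what is absent: the network never grows. Its shape is decided before training begins, which is exactly the contrast Hiesinger draws with biological development.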
In biological terms, the ANN development process is the equivalent of letting a brain grow to its full adult size and then switching it on and trying to teach it to do things.
“Biological brains do not start out in life as networks with random synapses and no information content. Biological brains grow,” Hiesinger writes. “A spider does not learn how to weave a web; the information is encoded in its neural network through development and prior to environmental input.”
In reality, while deep neural networks are often compared to their biological counterparts, their fundamental differences put them on two totally different levels.
“Today, I dare say, it appears as unclear as ever how comparable these two really are,” Hiesinger writes. “On the one side, a combination of genetically encoded growth and learning from new input as it develops; on the other, no growth, but learning through readjusting a previously random network.”
Why self-assembly is largely ignored in AI research
“As a neurobiologist who has spent his life in research trying to understand how the genes can encode a brain, the absence of the growth and self-organization ideas in mainstream ANNs was indeed my motivation to reach out to the AI and Alife communities,” Hiesinger told TechTalks.
Artificial life (Alife) scientists have been exploring genome-based developmental processes in recent years, though progress in the field has been largely eclipsed by the success of deep learning. In these architectures, the neural networks go through a process that iteratively creates their architecture and adjusts their weights. Since the process is more complex than the traditional deep learning approach, the computational requirements are also much higher.
“This kind of effort needs some justification—basically a demonstration of what true evolutionary programming of an ANN can produce that current deep learning cannot. Such a demonstration does not yet exist,” Hiesinger said. “It is shown in principle that evolutionary programming works and has interesting features (e.g., in adaptability), but the money and focus go to the approaches that make the headlines (think MuZero and AlphaFold).”
In a fashion, what Hiesinger says is reminiscent of the state of deep learning before the 2000s. At the time, deep neural networks were theoretically proven to work. But limits in the availability of computational power and data prevented them from reaching mainstream adoption until decades later.
“Maybe in a few years new computers (quantum computers?) will suddenly break a glass ceiling here. We do not know,” Hiesinger said.
Searching for shortcuts to AI
Another reason the AI community has not given enough attention to self-assembly is the varying views on which aspects of biology are relevant to replicating intelligence. Scientists always try to find the lowest level of detail that provides a fair explanation of their subject of study.
In the AI community, scientists and researchers are constantly trying to take shortcuts and avoid implementing unnecessary biological details when creating AI systems. We do not need to imitate nature in all its messiness, the thinking goes. Therefore, instead of trying to create an AI system that creates itself through genetic development, scientists try to build models that approximate the behavior of the final product of the brain.
“Some leading AI researchers go as far as saying that the 1GB of genome information is obviously way too little anyway, so it has to be all learning,” Hiesinger said. “This is not a good argument, since we of course know that 1GB of genomic information can produce much, much more information through a growth process.”
There are already several experiments showing that with a small body of data, an algorithm, and enough execution cycles, we can create extremely complex systems. A telling example is the Game of Life, a cellular automaton created by British mathematician John Conway. The Game of Life is a grid of cells whose states shift between “dead” and “alive” based on three very simple rules: any live cell with two or three live neighbors stays alive in the next step, any dead cell with exactly three live neighbors comes to life, and every other cell dies or stays dead.
The Game of Life and other cellular automata such as Rule 110 can give rise to Turing-complete systems, which means they are capable of universal computation.
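Those rules fit in a handful of lines of code. Here is a minimal sketch (one of many possible implementations) that represents the grid as a set of live-cell coordinates and advances it one generation:

```python
from itertools import product

def step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    counts = {}
    # Count how many live neighbors every candidate cell has.
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth: exactly 3 live neighbors. Survival: 2 or 3 neighbors and already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True
```

Despite the tiny "genome" of rules, patterns of arbitrary complexity can emerge from repeated application of this single step function—which is the point of the analogy.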
“All kinds of random stuff happening around us could—in theory—all be part of a deterministic program looked at from within, because we can’t look at the universe from the outside,” Hiesinger said. Although this is a very philosophical argument that cannot be proven one way or the other, Hiesinger says, experiments like Rule 110 show that a system based on a super-simple genome can, given enough time, produce infinite complexity and may look as complicated from the inside as the universe we see around us.
Likewise, the brain starts with a very basic structure and gradually develops into a complex entity that surpasses the information capacity of its initial state. Therefore, dismissing the study of genetic development as irrelevant to intelligence can be an erroneous conclusion, Hiesinger argues.
“There is a bit of an unfortunate lack of appreciation for both information theory and biology in the case of some AI researchers that are (understandably) dazzled by the successes of their pure learning-based approaches,” Hiesinger said. “And I would add: the biologists are not helping, since they also are largely ignoring the information theory question and instead are trying to find single genes and molecules that wire brains.”
New ways to think about artificial general intelligence
In The Self-Assembling Brain, Hiesinger argues that when it comes to replicating the human brain, you can’t take shortcuts and you must run the self-assembling algorithm in its finest detail.
But do we need to take on such an undertaking?
In their current form, artificial neural networks suffer from serious weaknesses, including their need for numerous training examples and their sensitivity to changes in their environment. They don’t have the biological brain’s capacity to generalize skills across many tasks and to unseen scenarios. But despite their shortcomings, artificial neural networks have proven to be extremely efficient at specific tasks where the training data is available in enough quantity and represents the distribution that the model will meet in the real world. In some applications, neural networks even surpass humans in speed and accuracy.
So, do we want to grow robot brains, or should we rather stick to shortcuts that give us narrow AI systems that can perform specific tasks at a super-human level?
Hiesinger believes that narrow AI applications will continue to thrive and become an integral part of our daily lives. “For narrow AIs, the success story is absolutely obvious and the sky is the limit, if that,” he said.
Artificial general intelligence, however, is a bit more complicated. “I do not know why we would want to replicate humans in silico. But this may be a little like asking why we want to fly to the moon (it is not a very interesting place, really),” Hiesinger said.
But while the AI community continues to chase the dream of replicating human brains, it needs to adjust its perspective on artificial general intelligence.
“There is no agreement on what ‘general’ is supposed to really mean. Behave like a human? How about butterfly intelligence (all genetically encoded!)?” Hiesinger said, pointing out that every lifeform, in its own right, has a general intelligence that is suited to its own survival.
“Here is where I see the problem: ‘human-level intelligence’ is actually a bit non-sensical. ‘Human intelligence’ is clear: that’s ours. Humans have a very human-specific type of intelligence,” he said.
And that type of intelligence cannot be measured by the level of performance at one or multiple tasks such as playing chess or classifying images. Instead, the breadth of areas in which humans can operate, decide, and solve problems makes them intelligent in their own unique way. As soon as you start to measure and compare levels of intelligence in tasks, you take away the human aspect of it, Hiesinger believes.
“In my view, artificial general intelligence is not a problem of ever-higher ‘levels’ of current narrow approaches to reach a human ‘level’. There really is no such thing. If you want to really make it human, then it is not about making current level-oriented task-specific AIs faster and better, but it is about getting the type of information into the network that make human brains human,” he said. “And that, as far as I can see, has currently only one known solution and path—the biological one we know, with no shortcuts.” | <urn:uuid:575d3710-573e-47fa-b65a-5a5876cba452> | CC-MAIN-2022-40 | https://bdtechtalks.com/2021/08/16/self-assembling-brain-book/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00570.warc.gz | en | 0.947117 | 2,567 | 3.484375 | 3 |
How often do you now casually converse with a black (or white) box of some sort? Natural language processing (NLP) has become an integral part of our daily lives: Whether we’re asking our smartphone for directions or engaging with Alexa or Google, NLP and its sub-categories are hard at work behind the scenes, translating our voice or text input and (hopefully) providing an appropriate voice or text output.
But what is NLP, and how does it relate to artificial intelligence (AI) in general? The distinctions are important, as NLP has just as much, if not more, value in the enterprise as it has in our personal lives.
Let’s start with AI, the broader category under which NLP and a number of other flavors of machine-based intelligence reside. “AI is the use of intricate logic or advanced analytical methods to perform simple tasks at greater scale in ways that mean we can do more at large scale with the workers we have, allowing them to focus on what humans are best at, like handling complex exceptions or demonstrating sympathy,” says Whit Andrews, vice president and distinguished analyst with Gartner.
AI is essentially some computerized simulation of human intelligence, says Zachary Jarvinen, head of technology strategy, AI and analytics at OpenText, that can be programmed to make decisions, carry out specific tasks, and learn from the results.
With AI, computers can learn to accomplish a task without ever being explicitly programmed to do so, says Timothy Havens, the William and Gloria Jackson Associate Professor of Computer Systems in the College of Computing at Michigan Technological University and director of the Institute of Computing and Cybersystems.
For those who prefer analogies, Havens likens the way AI works to learning to ride a bike: “You don’t tell a child to move their left foot in a circle on the left pedal in the forward direction while moving their right foot in a circle… You give them a push and tell them to keep the bike upright and pointed forward: the overall objective. They fall a few times, honing their skills each time they fail. That’s AI in a nutshell.”
AI vs. NLP, explained
When you take AI and focus it on human linguistics, you get NLP.
Like machine learning or deep learning, NLP is a subset of AI. But when exactly does AI become NLP? SAS offers a clear and basic explanation of the term: “Natural language processing makes it possible for humans to talk to machines.” It’s the branch of AI that enables computers to understand, interpret, and manipulate human language.
NLP itself has a number of subsets, including natural language understanding (NLU), which refers to machine reading comprehension, and natural language generation (NLG), which can transform data into human words. But, says Wayne Butterfield, director of cognitive automation and innovation at ISG, “the premise is the same: Understand language and sew something on the back of that understanding.”
Natural language processing makes it possible for computers to extract keywords and phrases, understand the intent of language, translate that to another language, or generate a response.
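As a toy illustration of the keyword-extraction part, the simplest possible approach is a frequency count over non-stopword tokens. Real NLP systems use far more sophisticated statistical models; the stopword list and sample text below are made up for the example:

```python
import re
from collections import Counter

# A tiny, made-up stopword list for demonstration purposes only.
STOPWORDS = {"the", "a", "an", "is", "it", "to", "for", "of", "and", "or", "that"}

def top_keywords(text, k=3):
    """Naive keyword extraction: rank non-stopword tokens by frequency."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

text = ("Natural language processing makes it possible for computers "
        "to extract keywords, understand the intent of language, "
        "and translate language into another language.")
print(top_keywords(text))  # 'language' ranks first
```

Understanding intent, translating, or generating a response all require models that go well beyond counting, which is why those subtasks fall under NLU and NLG.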
NLP has its roots in linguistics, where it emerged to enable computers to literally process natural language, explains Anil Vijayan, vice president at Everest Group. “Over the course of time, it evolved from rule-based to machine-learning infused approaches, thus overlapping with AI.”
In addition to techniques derived from the field of computational linguistics, NLP (which may also be referred to as speech recognition in some contexts) might employ both machine learning and deep learning methodologies in order to effectively ingest and process unstructured speech and text datasets, says JP Baritugo, director at business transformation and outsourcing consultancy Pace Harmon.
What does NLP look like in action? Think of the problems it can solve: extracting keywords and phrases, detecting the intent behind a query, translating between languages, and generating natural-sounding responses.
This CloudCodes blog post explains why the combination of cloud computing and Docker is considered a perfect match. But before that, let's learn what exactly Docker is in cloud computing. So, let's begin!
What is Docker in Cloud Computing?
Docker is an open-source platform built around application containers. These containers let applications keep working as they are shifted from one environment to another—for example, from the developer's laptop to staging to production. This technology enables enterprises to build, ship, and run any product from any location. Several problems are associated with hosting environments, and Docker tries to fix those issues by creating a standardized way to distribute and scale apps. In the current scenario, Docker has become popular across different cloud architectures. It permits an application to be bundled together with its dependencies and copied wherever it is needed. Cloud users find this concept useful when working with scalable infrastructure. When Docker is integrated with the cloud, the result is called Docker Cloud.
Docker Cloud is the official online service for delivering Docker products. Several cloud services—Azure, AWS, Google Cloud Platform, and so on—are available to enterprises today. Although these services provide flexibility, they require you to configure everything yourself. Docker Cloud, by contrast, is a managed cloud system that provides orchestration and deployment options for its clients. This prevents customers from wasting time on configuration and lets them focus on growing their business.
Reasons for Using Docker Cloud In A Company
The following points describe the reasons for combining Docker and cloud computing, and the benefits of doing so when it comes to distributing and scaling apps.
- Eliminate Unnecessary Costs – There was a time when virtual machines were treated as the building blocks of the cloud. They were entirely isolated from the external world and comprised their own set of directories, operating system, virtual network adapters, etc. These settings make a virtual machine portable and easy to duplicate, but they also leave a heavy footprint: virtual machines demand a lot of storage and memory, driving up overall cost. Here comes the role of Docker Cloud! A Docker deployment is built from containers, and compared to a virtual machine, a container is lightweight. The major difference between the two is that containers share their host's OS kernel and can easily share storage space. This sharing means that adding a new container to a host lets enterprises distribute apps on smaller host instances, which reduces overall spending for industries working on Docker Cloud, a cloud hosting service.
- Isolated Business Resources – Containers can share resources on a host, but that is only acceptable as long as something still creates boundaries between apps the way virtual machines do. Each container runs in its own memory space, restricting two different containers from interfering with each other's work. Suppose two containers need two different versions of the Java runtime; this could cause a resource-sharing problem because the host is the same. Docker Cloud addresses this through isolation. It makes valid assumptions about how software is deployed, takes away the need to manually initialize each host server in a Docker swarm, and enables Docker Cloud to manage resources without needing to understand the application context.
- Provide Ease in Orchestration – Many organizations need an orchestration framework once deployment complexity grows. Docker in cloud computing enables clients to use Docker Swarm to orchestrate their software infrastructure. They can write a definition of their desired infrastructure, much like in other well-known orchestration frameworks: a technician provides a description in YAML format listing all the services needed to run an app. This includes components such as web services, load balancers, databases, cache servers, etc. Docker Cloud then manages the swarm they deploy, keeping track of all the physical hosts that belong to their stack.
- Gain Ease in Deployment – Docker in cloud computing enables end users to package their applications as a Docker image. This image can be downloaded from a registry like Docker Hub and run within a container. No manual procedures like software installation or driver installation are needed. A Docker image lets the deployment system stay simple and leaves customers with a single concern: running applications whenever they are ready. Docker Cloud is capable of consuming the Docker image on its own. One can specify command-line arguments that take the recently created image and push it to production in just a few seconds.
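The YAML-based stack description mentioned above might look something like the following sketch. The service names, images, and port numbers are placeholders for illustration, not a ready-to-use configuration:

```yaml
version: "3.8"
services:
  web:                            # the application front end
    image: example/web-app:1.0    # hypothetical image name
    ports:
      - "80:8080"
    deploy:
      replicas: 3                 # the swarm schedules three copies across hosts
  cache:
    image: redis:6                # cache server for the stack
  db:
    image: postgres:13            # database service
    environment:
      POSTGRES_PASSWORD: example  # placeholder; use secrets in real deployments
```

A stack like this can then be handed to the platform in one step (for example, with `docker stack deploy`), letting it manage scheduling across the physical hosts.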
Docker Provides A Modern Way of Working
Docker in the cloud has undoubtedly taken an evolutionary step toward better management of deployment platforms. It is a distinctly new way of working that makes workload management easier for industries. No extra spending on storage servers or cloud infrastructure maintenance is required with Docker Cloud. Enterprises can adopt this modern way of working to grow their business and achieve new heights of success.
The vulnerability found in the Log4j logging system should be old news for you at this point. However, what your company does about it is still worthy of your attention.
What Is It?
Log4j is a Java language library for logging events within an application. These logs facilitate troubleshooting when something goes wrong and provides visibility into the performance of applications. This library has been around for years and is favored by many, many development teams.
One of Log4j's features is that it resolves variables within log messages. Bad actors force an application to log a message containing a variable that points to a malware web server outside your organization. Log4j's variable resolution mechanism (its JNDI lookup feature) then causes it to download and run the malware. You're not protected because outbound web connection requests are typically allowed from any system within the enterprise. Many applications require access to the global internet for a variety of purposes, making it difficult to rely on outbound firewall rules.
A lot of software has included Log4j. In some cases, software developers may not be aware that Log4j has been included by some other library they’ve used in their software. Candidates include healthcare, building control systems, any applications that provide a web interface, and network management, just to name a few. The problem is that applications that use Log4j typically run with elevated privileges, so malware that’s executed can do anything. Then, once in that system, it’s relatively easy to leave back door exploits that could be activated much later or to migrate to other systems.
It’s going to be tough to track down all the applications and verify that they don’t contain a vulnerability. You can’t manage things you don’t know exist. Ideally, you’ve been identifying all applications used by your organization so you can monitor and manage them. If not, this is a good time to start.
What Are Attackers Doing?
The original set of attackers were installing crypto-mining applications or botnet clients, which only disrupt operations because they steal CPU time. More recent attacks are expected to implement ransomware that encrypts data and holds it hostage. Organizations that have significant intellectual property or personally identifying information can expect data theft. This includes losing customer lists, product plans, chemical and pharmaceutical formulations, proposal documents, and pricing information.
What Should Your Organization Do?
CISA (see External References below) includes a list of actions that organizations should take. It’s very likely that you’ll have a mix of the above actions to perform, depending on each application, how it’s deployed, and how it’s used in your environment. The fastest step is to configure Log4j to not perform lookups. In some cases, you can remove the logging mechanism from existing software. Finally, install patches to software that your organization uses. Note that the 2.15.0 release on Dec 10th has already been superseded by 2.16.0. Have your team recheck CISA and vendor guidance frequently as the situation is far from over.
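As a concrete example of the "disable lookups" step, Log4j versions 2.10 through 2.14.x support turning off message lookups with a JVM system property or an environment variable. The jar name below is a placeholder, and guidance has continued to evolve, so confirm against current CISA and Apache advisories before relying on it:

```shell
# Disable message lookups at JVM startup (Log4j 2.10 - 2.14.x):
java -Dlog4j2.formatMsgNoLookups=true -jar your-app.jar

# Or set an environment variable before launching the application:
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true
```

Upgrading to a patched Log4j release remains the preferred fix; these flags are stop-gap mitigations.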
For some applications you can configure access lists that prevent external access or only allow access to specific vendor address ranges. However, you'll need to know a lot about an application's network usage patterns to make this work, so it isn't generally recommended.
Various vendors are also providing guidance on what steps your organization can take. Splunk and Microsoft have good descriptions of the vulnerability and their recommendations for handling it (see External References).
For more information or if NetCraftsmen can help, please contact our Chief Technology Officer, John Cavanaugh directly at email@example.com. | <urn:uuid:3b3f7644-195b-496a-9ae8-01563dae014a> | CC-MAIN-2022-40 | https://netcraftsmen.com/guiding-your-company-through-the-log4j-vulnerability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00570.warc.gz | en | 0.941009 | 779 | 2.84375 | 3 |
The world is becoming digital, and so is our daily life. With technology advancing day by day, everyone is adopting modern ways of living. The same is true for criminals: they also use modern technology to commit crimes.
You may have heard about cyberbullying and cyberstalking, two common cybercrimes. Most individuals think they are the same thing, but they are not. So, how do the two terms differ? In this post, we will look at the difference in detail; let's move ahead.
CyberBullying and CyberStalking: What is the Difference?
Cyberstalking and cyberbullying are closely related; you can think of cyberstalking as a first step or lower-level crime, and cyberbullying as the next step or higher-level crime.
First, let's discuss each term individually to clarify things.
Using email, the internet, or other electronic communications to stalk someone is known as cyberstalking. It is malicious behavior in which an offender threatens a victim—an individual, a group, or an organization. It is a hazardous type of cybercrime that may cause severe psychological problems for the victim.
In most cases, stalkers commit such crimes for revenge or to impose control over their victims. However, there are many instances where the stalker is a random stranger, such as a fanatical admirer stalking a celebrity. But in reality, cyberstalking is seldom carried out by a stranger; more commonly, the stalker targets someone they know intimately or professionally. Their purpose could be to dominate, control, threaten, terrify, or hurt their target. A cyberstalker can:
- Track the victim’s online and, in certain situations, offline actions
- Track the victim’s whereabouts and monitor them online or offline
- Irritate the victim;
- Humiliate, scare, manipulate, or blackmail the target
- May post fake information about them
- Obtain additional details about the victim to seize their identity
- Infect the victim’s machine with a virus
- Do other genuine crimes such as robbery or abuse.
The use of digital technology to bully someone is known as cyberbullying. It involves using Text, SMS, certain Apps, different social media platforms, and games through which individuals can chat, contact and exchange data with one another.
In this, the offender uploads, send and spreads disgusting, negative, fake, or insulting data about another individual.
They may reveal their private data causing shame and humiliation for that person. The person involved in such a crime likes the power gained by shaming and insulting another individual.
Some of the actions of cyberbullying are strictly illegal and outlawed in nature.
Cyberbullying is a significant crime of cyberharassment and involves many harsh criminal acts, such as:
- Recording any audio or video of someone in their private place without their consent
- Publishing and spreading that material without being in their knowledge
- Sextortion (threats to expose sexual images)
- Making inappropriate and abusive messages and phone calls
- Child Pornography
- Giving death threats
- Giving violent threats
This type of crime is most common among the young, especially the kids. According to a study, around 37% of young people aged 12 to 17 have been bullied online. 30% have experienced it more than once.
Difference Between CyberStalking and CyberBullying:
In Cyberstalking, an individual becomes obsessed with gathering as much information as possible involving minor to significant details, Such as their Birthdays, relationship status, age, education, and so on. They monitor the victim’s current activities, daily activities, etc. And the main objective is to stalk and harass, but no actual harm is made.
- Tracking down somebody’s private and personal data frightens them by messaging them dozens of times daily to let them realize you are monitoring them.
- “Crawling” on their social media profiles to discover their location so you can make an unwelcome appearance.
- Publishing about them constantly and without their approval are all examples of cyberstalking behaviors.
Cyberbullying, on the contrary, is a more direct kind of bullying. The criminal keeps in contact with the victim and says harsh and cruel things to them—spreading rumors about an individual that could be embarrassing and shameful for the victim. Blackmail the person about revealing their personal, private data and, in extreme cases, discloses all the data.
For example, if an individual does contact with another individual on any platform such as games, social media, or other apps, becomes friends, shares intimate images and information, and becomes personal. At some point, one individual starts blackmailing to expose all the data. In severe cases, an individual captures someone’s private moments and posts them on the internet.
The purpose is harm, and the victim could even commit suicide due to severe psychological damage. | <urn:uuid:818115f4-62eb-405f-af24-2e4214845e18> | CC-MAIN-2022-40 | https://nextdoorsec.com/difference-between-cyberstalking-and-cyberbullying/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00570.warc.gz | en | 0.950059 | 1,049 | 3.125 | 3 |
Digital forensics is the use of scientifically derived and proven methods for the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence. This evidence can be extracted from many digital sources such as CD/DVDs, hard drives, flash drives, memory sticks, and magnetic tapes, etc.
Digital forensics serves as a supporting proof or corroborating evidence often made by prosecutors and defendants to refute a claim that a certain activity was done by a specific person using a piece of digital equipment. The most common use is to recover erased digital evidence to support or disprove a claim in court of law or in civil proceedings such as the eDiscovery process in courts. Forensics is also used during internal corporate investigations or intrusion investigation which includes additional activities like network and log review.
Types of Digital Forensics
At NII, we have a full-fledged team and a well-equipped lab to carry out the following types of digital forensics:
- Computer forensics
- Reveal the current state of computer system
- Obtain evidence from various storage medium such as computers, embedded systems, USB pen drives
- Examine system logs and Internet history.
- Some of the artefacts we can get from such investigations include:
- Hidden, deleted, temporary and password-protected files
- Sensitive documents and spreadsheets
- File transfer logs
- Text communication logs
- Internet browsing history
- Pictures, graphics, videos and music
- Checking Event logs and System Logs
- Checking Illicit, pirated or legitimate software installations
- Mobile device forensics
- Recover digital evidence from a mobile device.
- Investigate call logs and text messages (SMS/Email)
- Providing location information via GPS or cell site logs
- Investigate communication stores such as BBM, WhatsApp, WeChat, etc.
- Artefacts that can be retrieved are:
- Phone number and service provider information
- Incoming and outgoing call logs
- SMS, Emails, IRC chat logs
- Contact details from address books and calendars
- GPS and location based data
- Network forensics
- Monitor and analyze LAN/WAN/internet traffic (even at the packet level)
- Retrieve and analyze logs from a wide variety of sources
- Determine the extent of intrusion and the amount of data retrieved
- Forensic data analysis
- Investigation for financial frauds
- Correlating with financial documents
- Working closely with Certified Fraud Examiners
- Database forensics
- Forensic study of databases and their metadata.
- Investigation on database contents, log files and in-RAM data
How NII can help you?
NII has done extensive projects in digital forensics and has a dedicated team for carrying out these various activities. We have co-operated with law enforcement authorities in helping them getting leads in the forensics investigations and also played a vital part in internal corporate investigations for many of our clients. Our work ethics and quality deliverables have won accolades from many of our clients and their testimonials are strongest testimony to our professional and quality work deliverables. A representative list of some of the projects we have done are:
- Analysis of dozens of hard drives and correlating them with financial documents to build a water-tight case of tax evasion, FEMA violations, disproportionate assets, etc. against the accused who was arrested on other grave charges. The evidence and reports provided by us enabled regulatory agencies to pursue multiple independent cases against the accused and law enforcement was able to file a 5000-page charge-sheet
- Analysis of server logs to determine a breach in one of the country’s main telecom firms done by Pakistani hackers prior to Independence day. Complete details of the steps taken by the hacker and the malware uploaded onto the servers was provided along with detailed recommendations on how to ensure such an event doesn’t occur in the future
- Disk-based analysis to retrieve deleted files, email correspondence and Internet browsing history of the suspect and determine the exact nature of the financial fraud as well as determine the list of accomplices.
- Analysis of smartphones and tablets to retrieve BB Messenger, WhatsApp, and SMS communication
- Empaneled by a multi-national bank for all forensic cases in the Asia-Pacific region
We Provide Digital Forensics Services in New York, Dubai, Singapore, India, Sri Lanka, Nepal, Bangladesh, Philippines, Indonesia, Thailand, Australia, New Zealand.
Contact us at [email protected] | <urn:uuid:3aa3cc94-c696-48b0-8829-a29cd192ebc9> | CC-MAIN-2022-40 | https://www.niiconsulting.com/services/breach-response/digital-forensics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00570.warc.gz | en | 0.921405 | 921 | 3.171875 | 3 |
A salt is a piece of random data added to a password before it is hashed and stored.
Adding a salt to stored passwords is a security process used alongside the hashing of passwords before they are stored. A salt is automatically and randomly generated for this purpose, and since a user is not involved in this process the salt can be complex, compounding the complexity of the hashing process. The increased difficulty in such a scenario further protects passwords from being useful should a system be breached. Salting passwords can help a system defend against a pre-computed hash attack, which is also known as a rainbow table or predictive method of unhashing a password store.
“Hash attacks on servers where encrypted passwords are stored are mitigated by salting the hashed passwords. The additional number streams affixed to the hash values create increased complexity and difficulty, making these attacks mathematically infeasible — or at least shifting the ROI.” | <urn:uuid:83e915bd-7a8c-4232-9f9c-6105ef73f599> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/salt | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00570.warc.gz | en | 0.948292 | 195 | 3.171875 | 3 |
Self-defense classes can help students build confidence in themselves. They can develop self-discipline, improve their physical condition and street awareness. Self-defense can aid students particularly when they are walking alone at night after classes. That’s the time when students are most vulnerable to attack.
What is self Defence and its importance?
Self-Defense is the method by which one can protect oneself with one’s own strength. It involves various techniques but the first step towards it is Fitness. Learning self-defense through fitness is of prime importance as there are tremendous power imbalance and unsafety where we live today.
Is learning self-defense worth it?
Self-defense classes are definitely worth it and beneficial as it teaches you to recognize and avoid dangerous situations and how to defend yourself in the event that you are attacked. … Some classes are just a crash course in the basics of self defense while others are quite in-depth.
What does self-defense teach you?
Self defense classes help you to set goals. Whether you want to nail a specific move, or work hard to feel like you can protect yourself, you are setting a goal. It gets you back in class each week, and will help you in your everyday life. It helps you develop a drive that you may not have had before.
Why is it important to learn self-defense like arnis?
What is important is to preserve life. Hence, arnis and other martial arts are known as forms of self-defense. They are not used to bully or intimidate innocent and weak people. With regular training, an arnisador refines his skills and techniques.
How does self-defense help you develop your self respect?
Increased self confidence: Training in self defense helps people, especially women, develop more confidence in themselves and their surroundings. Knowing that you have the ability to defend yourself gives you the confidence and freedom to fully explore the world, meet new people and find new ways to engage with others.
Why do you think that learning Arnis is really important and useful for you as a student?
Arnis is said to develop self-discipline and control because Arnis is a simple martial art and it is the defending of oneself by only using sticks. It give more extension to your arm and to give you a wide range. You will learn to discipline yourself and it will develop self control.
What are the benefits of learning Arnis?
What are the benefits?
- Learning to use what’s around you for self-defense.
- Improved strength and cardio.
- Quicker reflexes.
- Increased confidence in tense situations.
What is Arnis and its importance?
Self-defense is important as it could help you ensure your safety. Arnis can taught us discipline and self control. Like other martial arts, arnis can be used to practice and make us learn self control and disciple. It also gives us ability to think fast and make our body stronger as we will move a lot. | <urn:uuid:d8ba6d26-6cad-4642-b8f8-0730988079f2> | CC-MAIN-2022-40 | https://bestmalwareremovaltools.com/physical/why-is-it-important-to-learn-self-defense.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00570.warc.gz | en | 0.964784 | 616 | 2.65625 | 3 |
This is an extended, less-edited version of an article appearing in IEEE Security and Privacy in December 2012. This version specifically identifies all of the textbooks I reviewed while looking at information security design principles.
I have added an Afterword to note a ninth security principle added to the second edition of my textbook Elementary Information Security.
Here is the citation for the published article:
Smith, R.E.; , “A Contemporary Look at Saltzer and Schroeder’s 1975 Design Principles,” Security & Privacy, IEEE , vol.10, no.6, pp.20-25, Nov.-Dec. 2012
The information security community has a rich legacy of wisdom drawn from earlier work and from sharp observations. Not everyone is old enough or fortunate enough to have encountered this legacy first-hand by working on groundbreaking developments. Many of us receive it from colleagues or through readings and textbooks.
The Multics time-sharing system (Figure 1 – photo by Tom Van Vleck) was an early multi-user system that put significant effort into ensuring security. In 1974, Jerome Saltzer wrote an article outlining the security mechanisms in the Multics system (Saltzer, 1974). The article included a list of five “design principles” he saw reflected in his Multics experience. The following year, Saltzer and Michael Schroeder expanded the article into a tutorial titled “The Protection of Information in Computer Systems” (Saltzer and Schroeder, 1975). The first section of the paper introduced “basic principles” of information protection, including the triad of confidentiality, integrity, and availability, and a set of design principles.
Over the following decades, these principles have occasionally been put forth as guidelines for developing secure systems. Most of the principles found their way into the DOD's standard for computer security, the Trusted Computer System Evaluation Criteria (NCSC, 1985). The Saltzer and Schroeder design principles were also highlighted in security textbooks, like Pfleeger's Security in Computing (Pfleeger, 1989), the first edition of which appeared in 1989.
Different writers use the term principle differently. Some apply the term to a set of precisely worded statements, like Saltzer and Schroeder’s 1975 list. Others apply it in general to a collection of unidentified but fundamental concepts. This paper focuses on explicit statements of principles, like the 1975 list. The principles were concise and well stated on the whole. Many have stood the test of time and are reflected in modern security practice. Others are not.
In 2008, after teaching a few semesters of introductory information security, I started writing my own textbook for the course. The book was designed to cover all topics required by selected government and community curriculum standards.
Informed by an awareness of Saltzer and Schroeder’s design principles, but motivated primarily by the curriculum requirements, the textbook, titled Elementary Information Security, produced its own list of basic principles (Smith, 2012). This review of design principles arises from the mismatch between the classic list and this more recent list. The review also looks at other efforts to codify general principles, both by standards bodies and by other textbook authors, including a recent textbook co-authored by Saltzer himself (Saltzer and Kaashoek, 2009).
The Saltzer and Schroeder List
Saltzer and Schroeder’s 1975 paper listed eight design principles for computer security, and noted two additional principles that seemed relevant, though more general.
- Economy of mechanism – A simple design is easier to test and validate.
- Fail-safe defaults – Figure 2 shows a physical example: outsiders can’t enter a store via an emergency exit, and insiders may only use it in emergencies. In computing systems, the safe default is generally “no access” so that the system must specifically grant access to resources. Most file access permissions work this way, though Windows also provides a “deny” right. Windows access control list (ACL) settings may be inherited, and the “deny” right gives the user an easy way to revoke a right granted through inheritance. However, this also illustrates why “default deny” is easier to understand and implement, since it’s harder to interpret a mixture of “permit” and “deny” rights.
- Complete mediation – Access rights are completely validated every time an access occurs. Systems should rely as little as possible on access decisions retrieved from a cache. Again, file permissions tend to reflect this model: the operating system checks the user requesting access against the file’s ACL. The technique is less evident when applied to email, which must pass through separately applied packet filters, virus filters, and spam detectors.
- Open design – Baran (1964) argued persuasively in an unclassified RAND report that secure systems, including cryptographic systems, should have unclassified designs. This reflects recommendations by Kerckhoffs (1883) as well as Shannon’s maxim: “The enemy knows the system” (Shannon, 1948). Even the NSA, which resisted open crypto designs for decades, now uses the Advanced Encryption Standard to encrypt classified information.
- Separation of privilege – A protection mechanism is more flexible if it requires two separate keys to unlock it, allowing for two-person control and similar techniques to prevent unilateral action by a subverted individual. The classic examples include dual keys for safety deposit boxes and the two-person control applied to nuclear weapons and Top Secret crypto materials. Figure 3 (courtesy of the Titan Missile Museum) shows how two separate padlocks were used to secure the launch codes for a Titan nuclear missile.
- Least privilege – Every program and user should operate while invoking as few privileges as possible. This is the rationale behind Unix “sudo” and Windows User Account Control, both of which allow a user to apply administrative rights temporarily to perform a privileged task.
- Least common mechanism – Users should not share system mechanisms except when absolutely necessary, because shared mechanisms may provide unintended communication paths or means of interference.
- Psychological acceptability – This principle essentially requires the policy interface to reflect the user’s mental model of protection, and notes that users won’t specify protections correctly if the specification style doesn’t make sense to them.
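Two of the principles above, fail-safe defaults and complete mediation, are easy to see in code. The minimal Python sketch below (a hypothetical ACL checker, not drawn from Multics or any system discussed here) denies access unless an entry explicitly grants the right, and performs the check on every request rather than reusing a cached decision.

```python
# Hypothetical sketch of fail-safe defaults and complete mediation.

def check_access(acl, user, right):
    """Return True only if an ACL entry explicitly grants `right` to `user`."""
    for entry_user, entry_rights in acl:
        if entry_user == user and right in entry_rights:
            return True
    return False  # fail-safe default: no matching entry means no access

def read_file(acl, user, contents):
    # Complete mediation: the check runs on every access request,
    # not once per session.
    if not check_access(acl, user, "read"):
        raise PermissionError(f"{user} may not read this file")
    return contents

acl = [("alice", {"read", "write"}), ("bob", {"read"})]
print(read_file(acl, "bob", "hello"))   # granted by an explicit entry
# read_file(acl, "mallory", "hello")    # would raise PermissionError
```

Note that the default-deny structure keeps the policy easy to reason about: adding a user requires an explicit grant, while forgetting one fails closed rather than open.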
There were also two principles that Saltzer and Schroeder noted as being familiar in physical security but applying “imperfectly” to computer systems:
- Work factor – Stronger security measures pose more work for the attacker. The authors acknowledged that such a measure could estimate trial-and-error attacks on randomly chosen passwords. However, they questioned its relevance since there often existed “indirect strategies” to penetrate a computer by exploiting flaws. “Tiger teams” in the early 1970s had systematically found flaws in software systems that allowed successful penetration, and there was not yet enough experience to apply work factor estimates effectively.
- Compromise recording – The system should keep records of attacks even if the attacks aren’t necessarily blocked. The authors were skeptical about this, since the system ought to be able to prevent penetrations in the first place. If the system couldn’t prevent a penetration or other attack, then it was possible that the compromise recording itself may be modified or destroyed.
Today, of course, most analysts and developers embrace these final two design principles. The arguments underlying complex password selection reflect a work factor calculation, as do the recommendations on choosing cryptographic keys. Compromise recording has become an essential feature of every secure system in the form of event logging and auditing.
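The password case is the easiest work factor to estimate. The back-of-the-envelope Python sketch below shows how alphabet size and length dominate the attacker's expected effort; the guess rate is an illustrative assumption, not a measured figure.

```python
# Rough work factor estimate for a trial-and-error password attack.
# The guesses-per-second rate is an assumed figure for illustration only.

def brute_force_years(alphabet_size, length, guesses_per_second=1e10):
    keyspace = alphabet_size ** length
    expected_guesses = keyspace / 2      # on average, half the keyspace
    seconds = expected_guesses / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

# 8 lowercase letters fall in seconds; 12 characters drawn from the
# 94 printable ASCII symbols take geological time at the same rate.
print(brute_force_years(26, 8))
print(brute_force_years(94, 12))
```

The same arithmetic underlies key-length recommendations: each added character multiplies the attacker's work by the alphabet size, while the defender's cost grows only linearly.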
Security Principles Today
Today, security principles arise in several contexts. Numerous bloggers and other on-line information sources produce lists of principles. Many are variants of Saltzer and Schroeder, including the list provided in the Open Web Application Security Project’s wiki (OWASP, 2012). Principles also arise in information security textbooks, more often in the abstract sense than in the concrete. Following recommendations in the report Computers at Risk (NRC, 1991), several standards organizations also took up the challenge of identifying a standard set of security principles.
Most textbook authors avoid making lists of principles. This is clear from a review of twelve textbooks published over the past ten years. This is even true of textbooks that include the word “Principles” in the title. Almost every textbook recognizes the principle of least privilege and usually labels it with that phrase. Other design principles, like separation of privilege, may appear under a different label. For example, some sources characterize separation of privilege as a control, not a principle.
Pfleeger and Pfleeger (2003) presents its own set of four security principles. They are, briefly, easiest penetration, weakest link, adequate protection, and effectiveness. These principles apply to a broader level of security thinking than Saltzer and Schroeder design principles. However, the text also reviews Saltzer and Schroeder’s principles in detail in Section 5.4.
The remaining few textbooks that specifically discuss design principles generally focus on the 1975 list. The textbook by Smith and Marchesini (2008) discuss the design principles in Chapter 3. The two textbooks by Bishop (2003, 2005) also review the design principles in Chapters 13 and 12, respectively.
“Generally Accepted Principles”
Following Computers at Risk, standards organizations were motivated to publish lists of principles. The OECD published a list of nine guidelines in 1992 that established the tone for a set of higher-level security principles:

Accountability, Awareness, Ethics, Multidisciplinary, Proportionality, Integration, Timeliness, Reassessment, and Democracy.
In its 1995 handbook, “An Introduction to Computer Security,” NIST presented the OECD list and also introduced a list of “elements” of computer security (NIST, 1995). Following the OECD’s lead, this list presented very high level guidance, addressing the management level instead of the design or technical level. For example, the second and third elements are stated as follows:
“Computer Security is an Integral Element of Sound Management”
“Computer Security Should Be Cost-Effective”
The following year, NIST published its own list of “Generally Accepted Principles and Practices for Securing Information Technology Systems” (Swanson and Guttman, 1996). The overriding principles drew heavily from the elements listed in the 1995 document. The second and third elements listed above also appeared as the second and third “Generally Accepted Principles.”
The OECD list also prompted the creation of an international organization that published “Generally Accepted System Security Principles” (GASSP) in various revisions between 1996 and 1999 (I2SF, 1999). This was intended to provide high-level guidance for developing more specific lists of principles, similar to those used in the accounting industry. The effort failed to prosper.
Following the 1999 publication, the sponsoring organization apparently ran out of funding. In 2003, the Information System Security Association tried to restart the GASSP process and published the “Generally Accepted Information Security Principles” (ISSA, 2004), a cosmetic revision of the 1999 document. This effort also failed to prosper.
In 2001, a team at NIST tried to produce a more specific and technical list of security principles. This became “Engineering Principles for Information Technology Security” (Stoneburner, et al, 2004). The team developed a set of thirty-three separate principles. While several clearly reflect Saltzer and Schroeder, many are design rules that have arisen from subsequent developments, notably in networking. For example:
- Principle 16: Implement layered security (Ensure no single point of vulnerability).
- Principle 20: Isolate public access systems from mission critical resources.
- Principle 30: Implement security through a combination of measures distributed physically and logically.
- Principle 33: Use unique identities to ensure accountability.
While these new principles captured newer issues and concerns than the 1975 list, they also captured assumptions regarding system development and operation. For example, Principle 20 assumes that the public will never have access to “mission critical resources.” However, many companies rely heavily on Internet sales for revenue. They must clearly ignore this principle in order to conduct those sales.
Training and Curriculum Standards
When we examine curriculum standards, notably those used by the US government to certify academic programs in information security, we find more ambiguity. All six of the curriculum standards refer to principles in an abstract sense. None actually provide a specific list of principles, although a few refer to the now-abandoned GASSP. A few of Saltzer and Schroeder’s design principles appear piecemeal as concepts and mechanisms, notably least privilege, separation of privilege (called “segregation of duties” in NSTISSC, 1994), and compromise recording (auditing).
The Information Assurance and Security IT 2008 curriculum recommendations (ACM and IEEE, 2008) identify design principles as an important topic, and provide a single example: “defense in depth.” This is a restatement of NIST’s Principle 16.
Saltzer and Kaashoek
Co-authors Saltzer and Kaashoek published the textbook Principles of Computer System Design in 2009 (Saltzer and Kaashoek, 2009). The book lists sixteen general design principles and several specific principles, including six security-specific principles. Here is a list of principles that were essentially inherited from the 1975 paper:
- General principle: Open design
- Security principle: Complete mediation
- Security principle: Fail-safe defaults
- Security principle: Least privilege
- Security principle: Economy of mechanism
- Security principle: Minimize common mechanism
Here are new – or newly stated – principles compared to those described in 1975:
- Security principle: Minimize secrets – a thoughtful addition to the list that could be prone to misunderstanding. Secrets should be few and changeable, but they should also maximize entropy, and thus increase the attacker’s work factor. The simple principle is also true by itself, since each secret increases a system’s administrative burden: a late 1990s fighter jet project required dozens of separately-managed crypto keys to comply with data separation requirements that had been added piecemeal.
- General principle: Adopt sweeping simplifications – a restatement that acknowledges how hopelessly complex modern systems have become. In the 1970s, a Unix operating system could support a dozen separate users with a megabyte of RAM; a single user on a modern desktop easily consumes a gigabyte of RAM, much of it containing software programs.
- General principle: Principle of least astonishment – a concise and much clearer restatement of the “psychological acceptability” principle described in 1975.
- General principle: Design for iteration – an important first step towards incorporating continuous improvement as a design principle.
Neither of the uncertain principles listed in 1975 made it into this revised list. Despite this, event logging and auditing are a fundamental element of modern computer security practice. Likewise, work factor calculations continue to play a role in the design of information security systems. Pfleeger and Pfleeger highlighted “weakest link” and “easiest penetration” principles that reflect the work factor concept. However, there are subtle trade-offs in work factor calculations that may make it a poor candidate for stating as a concise and easy-to-apply principle.
Elementary Information Security
The textbook Elementary Information Security presents a set of eight basic information security principles. While many directly reflect principles from Saltzer and Schroeder, they also reflect more recent terminology and concepts. The notion of “basic principles” stated as brief phrases seems like a natural choice for introducing students to a new field of study.
The textbook’s contents were primarily influenced by two curriculum standards. The first was the “National Training Standard for Information System Security Professionals,” (NSTISSC, 1994). While this document clearly showed its age, it remains the ruling standard for general security training under the US government’s Information Assurance Courseware Evaluation (IACE) Program (NSA, 2012). In February, 2012, the IACE program certified the textbook as covering all topics required by the 1994 training standard. The second curriculum standard is the “Information Technology 2008 Curriculum Guidelines” (ACM and IEEE Computer Society, 2008). The textbook covers all topics and core learning outcomes recommended in the Information Assurance and Security section of the Guidelines.
To fulfill their instructional role, each principle needed to meet certain requirements. Each needed to form a memorable phrase related to its meaning, with preference given to existing, familiar phrases. Each had to reflect the current state of the practice, and not simply a “nice to have” property. Each had to be important enough to appear repeatedly as new materials were covered. Each principle was introduced when it played a significant role in a new topic, and no sooner. Students were not required to learn and remember a set of principles that they didn’t yet understand or need.
This yielded the following eight principles:
Continuous Improvement – continuously assess how well we achieve our objectives and make changes to improve our results. Modern standards for information security management systems, like ISO 27001, are based on continuous improvement cycles. Such a process also implicitly incorporates compromise recording from 1975 and “design for iteration” from 2009. Introduced in Chapter 1, along with a basic six-step security process to use for textbook examples and exercises.
Least Privilege – provide people or other entities with the minimum number of privileges necessary to allow them to perform their role in the system. This literally repeats one of the 1975 principles. Introduced in Chapter 1.
Defense in Depth – build a system with independent layers of security so that an attacker must defeat multiple independent security measures for the attack to succeed. This echoes “least common mechanism” but seeks to address a separate problem. Defense in depth is also a well-known alternative for stating NIST’s Principle 16. Introduced in Chapter 1.
Open Design – build a security mechanism whose design does not need to be secret. This also repeats a 1975 principle. Introduced in Chapter 2.
Chain of Control – ensure that either trustworthy software is being executed, or that the software’s behavior is restricted to enforce the intended security policy. This is an analogy to the “chain of custody” concept in which evidence must always be held by a trustworthy party or be physically secured. A malware infection succeeds if it can redirect the CPU to execute its code with enough privileges to embed itself in the computer and spread. Introduced in Chapter 2.
Deny by Default – grant no accesses except those specifically established in security rules. This is a more-specific variant of Saltzer and Schroeder’s “fail safe defaults” that focuses on access control. The original statement is less specific, so it also applies to safety and control problems. Introduced in Chapter 3.
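As a minimal sketch of the deny-by-default idea, the following Python snippet grants an action only when an explicit rule permits it; anything not covered by a rule is refused. The rule table and names are hypothetical, invented for illustration.

```python
# Deny-by-default access check (illustrative sketch; the rules are hypothetical).
# Only (user, resource) pairs with an explicit rule can ever be granted access.

ACCESS_RULES = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Return True only if a rule explicitly grants the action; deny otherwise."""
    return action in ACCESS_RULES.get((user, resource), set())

print(is_allowed("bob", "payroll.db", "write"))    # True: explicitly granted
print(is_allowed("alice", "payroll.db", "write"))  # False: no such rule
print(is_allowed("mallory", "payroll.db", "read")) # False: unknown user, denied by default
```

The key design point is that the absence of a rule produces a denial, so forgetting to write a rule fails safe rather than open.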
Transitive Trust – If A trusts B, and B trusts C, then A also trusts C. In a sense this is an inverted statement of “least common mechanism,” but it states the problem in a simpler way for introductory students. Moreover, this is already a widely-used term in computer security. Introduced in Chapter 4.
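The transitive-trust statement can be illustrated with a small graph search: starting from the direct trust relationships, A is found to trust anything reachable through a chain of trusted intermediaries. The graph below is a made-up example, not from the textbook.

```python
# Transitive trust as graph reachability (illustrative sketch; graph is hypothetical).
from collections import deque

direct_trust = {
    "A": {"B"},
    "B": {"C"},
    "C": set(),
}

def trusts(graph, a, b):
    """True if `a` trusts `b` directly or through a chain of intermediaries."""
    seen, queue = set(), deque([a])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, set()):
            if nxt == b:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(trusts(direct_trust, "A", "C"))  # True: A -> B -> C
print(trusts(direct_trust, "C", "A"))  # False: trust is not symmetric
```

The sketch also makes the introductory point concrete: trusting B silently extends trust to everything B trusts.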
Separation of Duty – decompose a critical task into separate elements performed by separate individuals or entities. This reflects the most common phrasing in the security community. Some writers phrase it as “segregation of duty” or “separation of privilege.” Introduced in Chapter 8.
The textbook’s list focused on memorable phrases that were widely accepted in the computer security community. Principles introduced in earlier chapters always resurface in examples in later chapters. In retrospect, the list is missing at least one pithy and well-known maxim: “Trust, but verify.” The book discusses the maxim in Chapter 13, but does not tag it as a basic principle.
For better or worse, three of the 1975 principles do not play a central role in modern information security practice. These are simplicity, complete mediation, and psychological acceptability. We examine each below.
There is no real market for simplicity in modern computing. Private companies release product improvements to entice new buyers. The sales bring in revenues to keep the company operating. The company remains financially successful as long as the cycle continues. Each improvement, however, increases the underlying system’s complexity. Much of the free software community is caught in a similar cycle of continuous enhancement and release. Saltzer and Kaashoek (2009) call for “sweeping simplifications” instead of overall simplicity, reflecting this change.
Complete mediation likewise reflects a sensible but obsolete view of security decision making. Network access control is spread across several platforms, no one of which makes the whole decision. A packet filter may grant or deny access to packets, but it can’t detect a virus-infected email at the packet level. Instead it forwards email to a series of servers that apply virus and spam checks before releasing the email to the destination mailbox. Even then, the end user might apply a digital signature check to perform a final verification of the email’s contents.
Psychological acceptability, or the “principle of least astonishment,” is an excellent goal, but it is honored more in the breach than in the observance. The current generation of “graphical” file access control interfaces provides no more than rudimentary control over low-level access flags. It takes a sophisticated understanding of the permissions already in place to predict how a change in access settings might really affect a particular user’s access.
Only a handful of Saltzer and Schroeder’s original 1975 design principles have stood the test of time. Nonetheless, this represents a memorable success. Kerckhoffs, a 19th century European cryptographic expert, published a list of principles for hand-operated cipher systems, some of which we still apply to cryptosystems today. But most experts only recognize a single principle as “Kerckhoffs’s Principle,” and that is his view on Open Systems: a cryptosystem should not rely on secrecy, since it may be stolen by the enemy. In addition to the Open System principle, both the principle of least privilege and of separation of privilege appeared on the 1975 list and are still widely recognized by security experts.
Perhaps lists of principles belong primarily in the classroom and not in the workplace. The short phrases are easy to remember, but they may promote a simplistic view of technical problems. Students need simplicity to help them build an understanding of a more complex reality.
The second edition of Elementary Information Security was published in 2016. Chapter 4 introduced a ninth security principle: Trust, but verify.
When Saltzer and Schroeder developed their design principles, computer engineers looked upon system design as a process to develop an artifact with specific properties. We assumed that we could implement a property and the system would retain it. A “secure” computer would remain secure as long as it is operated correctly. Today, we’re painfully aware that important properties are very hard to achieve and ensure, especially since modern systems evolve over time. We wanted to assume that the security systems would block attacks without requiring intervention or even awareness by the system’s administrators.
Modern systems implement numerous logging and assessment procedures to try to detect potential vulnerabilities or the effects of current or previous attacks. Trust, but verify seems like a pithy but effective statement of that design requirement.
A reader noted that I’d identified Kerckhoffs as a French crypto expert when he was in fact born in Holland. I called him French because he was working in Paris and published his famous article in a French military journal. As someone who was born in one place, brought up in another, and spent his adult life elsewhere, I don’t know how to conclusively identify a person’s homeland. Now I’m calling him European, which should solve the dilemma for now.
ACM and IEEE Computer Society, 2008. Information Technology 2008 Curriculum Guidelines, http://www.acm.org/education/curricula/IT2008%20Curriculum.pdf (retrieved March 1, 2012).
Bishop, 2003. Computer Security: Art and Science, Boston: Addison-Wesley.
Bishop, 2005. Introduction to Computer Security, Boston: Addison-Wesley.
I2SF, 1999. “Generally Accepted System Security Principles” International Information Security Foundation.
ISSA, 2004. “Generally Accepted Information Security Principles,” Information System Security Association.
Kerckhoffs, Auguste, 1883. “La cryptographie militaire,” Journal des sciences militaires IX.
NCSC, 1985. Trusted Computer System Evaluation Criteria, Ft. Meade, MD: National Computer Security Center.
NIST, 1995. “An Introduction to Computer Security,” NIST SP 800-12, Gaithersburg, MD: National Institute of Standards and Technology.
NSA, 2012. “IA Courseware Evaluation Program – NSA/CSS,” web page, National Security Agency. http://www.nsa.gov/ia/academic_outreach/iace_program/index.shtml (retrieved Feb 29, 2012).
NRC, 1991. Computers at Risk: Safe Computing in the Information Age, Washington: National Academy Press. http://www.nap.edu/openbook.php?record_id=1581 (retrieved Feb 29, 2012).
NSTISSC, 1994. “National training standard for information security (INFOSEC) professionals,” NSTISSI 4011, Ft. Meade, MD: National Security Telecommunications and Information Systems Security Committee.
OWASP, 2012. “Category: Principle – OWASP,” web page, Open Web Application Security Project, https://www.owasp.org/index.php/Category:Principle (retrieved Feb 29, 2012).
Pfleeger, Charles, 1997. Security in Computing 2nd ed., Wiley.
Pfleeger, Charles, and Shari Pfleeger, 2003. Security in Computing 3rd ed., Wiley.
Saltzer, Jerome, 1974. “Protection and the control of information sharing in Multics,” CACM 17(7), July, 1974.
Saltzer, Jerome, and Frans Kaashoek, 2009. Principles of Computer System Design: An Introduction, Morgan Kaufmann.
Saltzer, Jerome, and Schroeder, 1975. “The protection of information in computer systems,” Proc IEEE 63(9), September, 1975.
Shannon, 1949. “Communication Theory of Secrecy Systems,” Bell System Technical Journal 28(4).
Smith, Sean, and John Marchesini, 2008. The Craft of System Security, Addison-Wesley.
Smith, Richard, 2012. Elementary Information Security, Burlington, MA: Jones and Bartlett.
Stoneburner, Gary, Clark Hayden, and Alexis Feringa, 2004. “Engineering Principles for Information Technology Security,” SP 800-27 Rev. A, Gaithersburg, MD: National Institute of Standards and Technology.
Swanson, Marianne, and Barbara Guttman, 1996. “Generally Accepted Principles and Practices for Securing Information Technology Systems,” SP 800-14, Gaithersburg, MD: National Institute of Standards and Technology.
Textbooks Reviewed but not Cited
Forouzan, 2008. Cryptography and Network Security, McGraw-Hill.
Gollmann, 2006. Computer Security 2nd ed., Wiley.
Newman, 2010. Computer Security: Protecting Web Resources, Jones and Bartlett.
Stallings, 2003. Network Security Essentials, Prentice-Hall.
Stallings, 2006. Cryptography and Network Security, Prentice-Hall.
Stallings and Brown, 2008. Computer Security: Principles and Practice, Prentice-Hall.
Stamp, 2006. Information Security: Principles and Practice, Wiley.
Whitman and Mattord, 2005. Principles of Information Security 2nd ed., Thomson. | <urn:uuid:08140f77-f900-4b4f-b044-005cb8b3214b> | CC-MAIN-2022-40 | https://cryptosmith.com/2013/10/19/security-design-principles/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00570.warc.gz | en | 0.913115 | 5,945 | 2.5625 | 3 |
Phishing Awareness Poster
Due to recent events (RSA’s SecurID breach), we thought it would be prudent to create a “phishing” awareness poster.
Wikipedia sums up the meaning of “phishing” the best: Phishing is a way of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social websites, auction sites, online payment processors or IT administrators are commonly used to lure unsuspecting individuals. Phishing is typically carried out by e-mail or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Phishing is an example of social engineering techniques used to fool users, and exploits the poor usability of current web security technologies.
To help you with your user awareness program, and to offer you a periodic reminder option, I have created the attached phishing poster that reminds users about the following:
- Don’t open any attachments that you are not expecting.
- Verify the authenticity of an e-mail and its sender.
- Confirm your authorization to perform actions requested in an e-mail.
- Don’t enter sensitive information on unsolicited websites.
- Never use your network login username / password on websites.
- When in doubt, don’t! Call your Information Security Officer or Network Administrator.
Here’s a security awareness reminder poster that you may print and either hand out to your employees or post in conspicuous locations: Gone Phishing! | <urn:uuid:814bbddc-3a2e-4cdb-bfe6-dcf1a147013c> | CC-MAIN-2022-40 | https://my.infotex.com/gone-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00570.warc.gz | en | 0.885614 | 342 | 2.78125 | 3 |
How to Know if an Email is a Phishing Scam or Not
Phishing scams are a major threat to all email users, especially businesses. The scary part is that they’re becoming increasingly sophisticated. Phishing emails popped up sometime in the early 90s. However, back then, they weren’t too hard to detect. For instance, typos were commonplace in an old-school phishing mail, and that was a dead giveaway.
Of course, this was a long time ago, when email was still in its infancy. Times have changed and today’s cybercriminal has changed with the times. Their tactics have evolved and phishing emails are far more convincing than they used to be. They are well written and personalized. Hackers and cybercriminals already have a rough idea of who you are, and that means today’s phishing emails are targeted.
Today’s phishing emails also look authentic; they replicate legitimate emails in terms of design and aesthetic. In fact, at first glance, you wouldn’t know the difference between a real email from your bank and a fraudulent version. Needless to say, this makes fighting phishing scams a major challenge.
On the rise
According to data from RSA, phishing attacks are only growing, and this is despite an increase in user awareness. One major reason for this growth is the simplicity of executing such scams. Malware developers now offer automated toolkits that scammers can use to create and host phishing pages with the utmost ease.
It is estimated that each phishing attack manages to extract an average of $4500 in stolen funds.
So, the big question is – how does one protect their email, especially at a time when phishing scams are evolving? Well, here is what the experts have to say.
Never trust just a name
A common tactic used by scammers is spoofing the display name in an email. According to a study by Return Path, around 50% of 760,000 email threats targeting some of the world’s biggest businesses made use of this tactic.
This is how it works – let’s say a scammer spoofs a brand name such as “Nike.” The email address of the sender may look something like “Nike <name@customersupport.com>.” Even though Nike doesn’t actually own the domain “customersupport.com,” DMARC and other email authenticity and anti-fraud tools will not block the mail. This is because the email is legitimately from customersupport.com, even though this domain has nothing to do with Nike. There is no authentication for the “comment” that goes along with the email address (in this example, that is the word “Nike”).
Anyway, the actual problem begins when the user receives the mail. You see, most inboxes are designed to show only the display name (Nike, in this case), which creates an illusion of legitimacy.
So, what’s the solution?
Make sure you always check the actual address and research it on Google to determine its legitimacy. Also, you can configure your email viewers to show you the full email address of message senders. LuxSci supports this as a preference in its WebMail interface. By seeing the full email address, you can be more skeptical when that address looks “phishy”.
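To see why the display name proves nothing, here is a small sketch using Python’s standard `email.utils.parseaddr`, which splits a `From:` header into the free-text display name and the actual address. The header value is a hypothetical spoof in the style of the example above.

```python
# The display name in a From: header is unauthenticated free text; only the
# real address (and especially its domain) is worth scrutinizing.
from email.utils import parseaddr

header = "Nike <support@customersupport.com>"  # hypothetical spoofed sender
display_name, real_address = parseaddr(header)

print(display_name)   # 'Nike' -- anyone can put anything here
print(real_address)   # 'support@customersupport.com' -- the actual sender
domain = real_address.rsplit("@", 1)[-1]
print(domain)         # 'customersupport.com' -- not a nike.com address
```

An inbox that shows only `display_name` is showing exactly the part an attacker controls for free.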
Never click a link
If you see links in an email, do not click them without running an investigation first. Start your investigation by hovering over the link to see if it directs to the right domain. If it doesn’t, you can be sure that you’re being scammed.
Of course, you might wonder how one differentiates between a legitimate and a fraudulent domain. If it’s a business that you patronize, you’re probably already aware of the legitimate domain name. However, if it isn’t a vendor, brand, or name that you recognize, there is no need to bother with that mail anyway. Just put it in the trash.
You don’t have to respond to random marketing campaigns. The same applies to downloading attachments. If it’s from a business or person you don’t know, don’t download it. If it is, investigate first. As a best practice, don’t open or click on anything that you are not expecting without asking or investigating.
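The “hover before you click” advice can be automated in a rough way: extract every link target from an HTML email body and list the domains, so a mismatch with the claimed sender stands out. This sketch uses only the Python standard library; the email body and lookalike domain are invented for illustration.

```python
# List the domains behind every link in an HTML email body (illustrative sketch).
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect the domain of every <a href=...> target."""
    def __init__(self):
        super().__init__()
        self.domains = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.domains.append(urlparse(value).netloc)

body = '<p>Update your account <a href="http://nike.example-login.com/verify">here</a></p>'
parser = LinkExtractor()
parser.feed(body)
print(parser.domains)  # ['nike.example-login.com'] -- not nike.com
```

A lookalike domain in that list is exactly what hovering over the link would have revealed.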
Nothing is immediate when it’s via email
Email isn’t the first choice when it comes to urgent communication. If you find a mail that asks you to act immediately, you can often ignore it. Phishing scams often involve emails that induce panic and stir up your need to respond right away. The response here often involves giving away sensitive information. It’s a form of social engineering.
Legitimate mailers will not usually do this (unless you have ignored previous less time-sensitive warnings) and if you are still concerned, install anti-malware solutions that send out in-app or in-program notifications. These programs will also monitor your email, web browser or messaging apps for threats.
In other words, those panic-inducing emails can often go straight to the trash. If you are uncertain if the urgent warning is real … contact the sender. However, don’t use the information provided in the email, if you can help it. Call a phone number that you have or can look up on the web. If you are skeptical, do not trust phone numbers and email addresses provided to you in the questionable email itself.
Identifying phishing scams isn’t a cut-and-dried process, because they evolve at a rapid rate and can be very targeted. So, it’s best to be generally skeptical. When it comes to information security, a healthy dose of skepticism is necessary.
If you aren’t a 100% sure about the mail you’re receiving, it is best to put it in the trash or reach out directly to the sender via some other communications channel to check. It is always better to err on the side of safety and security.
Want to discuss how LuxSci’s HIPAA-Compliant Email Solutions can help your organization? Interested in more information about “smart hosting” your email from Microsoft to LuxSci for HIPAA compliance? Contact Us | <urn:uuid:66ed1cbc-5bf7-4657-bad9-5355fbcf3d02> | CC-MAIN-2022-40 | https://luxsci.com/blog/how-to-know-if-an-email-is-a-phishing-scam-or-not.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00570.warc.gz | en | 0.942408 | 1,341 | 2.53125 | 3 |
Incident Response – Everything You Need to Know
Learn Why It’s Crucial to Have a Proper Incident Response Plan in Place
You probably heard us say this before: a cybersecurity incident can happen anytime, anywhere, to anyone, with consequences that vary from data leaks to losing huge amounts of money or even regulatory fines. Companies shouldn’t neglect security incident management, and incident handling is not complete without a proper cyber incident response plan. Read on to find out more!
What Is Incident Response
Incident response refers to the steps that should be taken to prepare for, detect, contain, and recover from a cyber security incident. These steps are described in a document called an incident response plan, along with all the procedures and responsibilities of the incident response team.
Incident Response Plan
Cyber incidents are more than just technical issues; they’re also business complications. The sooner they’re dealt with, the less damage they’ll do. Fortunately, more and more companies understand this (the incident response market is expected to grow at a CAGR of 20.3% until 2023) and recognize that they need a cyber security incident response plan. The incident response steps that are essential to this type of plan are the following:
This is the most important part of an incident response plan, as it affects how well an organization will respond in the event of a cyberattack. To enable the organization to address an incident, several critical factors must be implemented:
- the incident response policy – a set of written principles, rules, or procedures that provide guidance;
- the response strategy, taking into account the organizational impact that a security incident may have;
- communication – an incident response team needs to know exactly whom they should contact, when and for what;
- access control – it’s important for the incident response team to have all the necessary permissions to perform their tasks.
In this incident response phase, incident response teams should determine whether an incident has occurred or not, based on information from various sources (firewalls, intrusion detection systems, etc.). You want to know when the event happened, how it was discovered, what areas have been compromised, whether and how your operations will be affected, and whether the incident’s point of entry has been identified.
The containment stage refers to limiting further damage by isolating the infected endpoints or shutting down production servers. To preserve evidence and understand how systems were infiltrated, it’s also important to use forensic software to take an image of the affected systems as they were at the time of the incident.
During this stage, incident response teams should check for any backdoors the attackers might have installed, and apply security patches.
As I mentioned in a previous article, during this phase of a cyber security incident response plan, the root cause of a cybersecurity attack and all the malware that got into a system are eliminated.
After the containment phase, eradication is the implementation of a more permanent repair. It’s critical because the response team’s goal should be to delete the access points that bad actors utilized to break into your network. All of the events that occur during this stage should be meticulously documented.
Restoration efforts and data recovery are included in the recovery phase of an incident response plan. The response team should continue to monitor the affected systems for malicious activity after certifying that they have been properly recovered. It’s important to perform tests to check that the systems involved in the incident are fully operational and clean.
In the last step of the incident response plan, the response team should submit a full report on the incident to gain insight into how each of the preceding phases could be improved. The report must also give a detailed account of what happened throughout the incident, so that it can be used as training material for new employees and as a reference for team exercises.
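The lifecycle described above can be sketched as a tiny state machine: each incident moves through the phases in order, and every transition is recorded so there is material for the final report. The phase names follow the plan outlined here; the incident identifier and notes are hypothetical.

```python
# Incident lifecycle as a simple state machine (illustrative sketch).
PHASES = ["preparation", "identification", "containment",
          "eradication", "recovery", "lessons_learned"]

class Incident:
    def __init__(self, name):
        self.name = name
        self.phase_index = 0
        self.log = []          # (phase, note) pairs, documenting the response

    @property
    def phase(self):
        return PHASES[self.phase_index]

    def advance(self, note):
        """Record what was done in the current phase, then move to the next one."""
        self.log.append((self.phase, note))
        if self.phase_index < len(PHASES) - 1:
            self.phase_index += 1

incident = Incident("suspicious-login-2024-001")  # hypothetical incident id
incident.advance("IDS alert correlated with firewall logs")
incident.advance("confirmed compromise of one workstation")
print(incident.phase)  # 'containment'
```

Forcing transitions through `advance` means no phase can be skipped undocumented, which mirrors the advice to document every stage meticulously.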
Security Incident Management
Security incident management is ensured by security incident response teams, who must prevent, manage and respond to cyber security incidents.
The key activities in a security incident response team are incident management, incident investigation, technical analysis, incident scoping, communication, regulatory concerns, decision making, remediation and reporting.
- Incident management – the person in this role should oversee all the operations and gather information for future reports.
- Incident investigation – this role involves the use of forensics across all endpoints to discover indicators of compromise.
- Technical analysis – this position requires technical know-how, and can be occupied by malware, forensics or network analysts.
- Incident scoping – the person with this task needs to discover the extent of a breach, and work closely with the technical analysts.
- Communication – internally and externally, crisis communications entails revealing the investigation’s findings, as well as the scope and potential repercussions.
- Regulatory concerns – if a breach involves regulatory or compliance issues (and often they do), having someone on the team who knows how to handle disclosure requirements or deal with law enforcement groups is critical.
- Decision making – during the course of incident response and investigation, key decisions will need to be made, and the team will need executive advice on how to continue.
- Remediation and reporting – as mentioned before, documenting the entire process is essential to security incident response management, because it allows elaborating detailed remediation recommendations.
Best Practices – and Tools that Might Help
When it comes to security incident response management best practices, here are a few things that you should keep in mind:
- never skip the recommended stages of incident response plans – it’s important to manage security incidents throughout their entire lifecycle.
- establish clear, detailed operational procedures, to enable security teams to stay calm during critical incidents and know exactly what needs to be done.
- consider investing in automated communication technologies that allow teams to concentrate on tackling high-priority problems without losing time during a crisis.
- if you lack the necessary expertise, outsource incident response management to a managed service provider, whose team of cybersecurity specialists can help you establish a high-level internal incident response strategy and provide emergency support in the event of a cyberattack.
In terms of security incident management tools, you need:
- Security monitoring tools – Log analysis and management, SIEM, Network and Host-Based IDS, Netflow / Traffic Analyzers, Web Proxies.
- Orientation / Evaluation tools – Asset Inventory, Threat Intelligence Security Research.
- Remediation and Recovery tools – Incident Response Forensic tools, System Backup & Recovery Tools, Patch Management, but also Security awareness training tools and programs.
Heimdal™ Security can help you with several of these aspects (Log analysis, SIEM, IDS, Traffic filtering, Asset inventory, Forensics, Patch Management), and the best option for you would be to try our EDR service – a unified endpoint management solution that provides you with all the information you need regarding your company’s cybersecurity in a single dashboard.
Our enhanced EDR tool is a powerful cybersecurity solution that delivers endpoint protection, advanced investigation, threat hunting capabilities, and quickly responds to complex malware, both known and yet undiscovered.
It gives you more visibility into your endpoints and allows you to respond more quickly to threats, thanks to its multiple modules: Threat Prevention, Vulnerability Management, Next-Gen Antivirus, Ransomware Encryption Protection, Privileged Access Management, Application Control.
Although no one wants to experience a data breach or other security incident, it’s necessary to prepare for one. Do it by creating an incident response plan, knowing what to do in the event of an incident, and learning everything you can afterwards.
Drop a line below if you have any comments, questions or suggestions – we are all ears and can’t wait to hear your opinion! | <urn:uuid:7fa2fc4b-b55f-403d-b514-224aae414745> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/incident-response/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00770.warc.gz | en | 0.942844 | 1,630 | 2.640625 | 3 |
Posts tagged with "BGP"
BGP – the protocol that runs the Internet
The ipTTL and ipTotalLength information elements and how they are used for network traffic analysis
Aug 15, 2022
Modern networks are getting increasingly complex. So many things can go wrong with them. Business can suffer, and that is something we definitely want...
SNMP evolution and version differences. SNMP security models/levels details.
Jun 7, 2022
Simple Network Management Protocol (SNMP) is a network application layer protocol, initially developed for network management but is now mainly used for network monitoring. SNMP...
BGP community-based traffic engineering
May 11, 2022
BGP community attribute is a transitive optional attribute that we use for tagging (marking) a set of prefixes sharing a common property. Every BGP...
IP Transit and the Tiers of Transit Providers
Apr 12, 2022
The Internet consists of Internet Service Providers (ISPs) of all shapes and sizes. ISPs have two options when it comes to connecting to the...
The tightening of Internet control in Russia and potential Internet isolation tech intricacies
Apr 6, 2022
The Internet is for everyone, yet not everyone accepts that. In light of the news about Cogent Communications and Lumen Technologies disconnecting clients in...
BGP Anycast Best Practices & Configurations
Mar 16, 2022
What is Anycast? Anycast is not a new version of IP or the latest networking technology. The RFC 1546 describing Internet anycasting for IP has...
BGP YANG Model and Configuration
Dec 20, 2021
YANG Data Modeling Language Yet Another Next Generation (YANG) RFC 6020 is standard-based data modeling language for the NETCONF management protocol. YANG is used to...
The Number of the Beast or the Practical usage of the Blackhole Community
Apr 13, 2021
BGP blackhole filtering is a routing technique used to drop unwanted traffic. Black holes are placed in the parts of a network where unwanted...
The dark side of BGP community attributes
Nov 30, 2020
A while ago, RIPE Labs published the two-part article BGP Communities - A Weapon for the Internet. That may have been a bit of...
BGP peer groups, dynamic update peer groups and BGP templates
May 7, 2020
As a router gains more and more BGP neighbors, two issues arise. The first one is that with many neighbors, whenever the router needs to...
BGP errors, BGP error codes, and BGP error handling
Apr 3, 2020
In this blog post we’ll be looking at BGP errors. For that, our first question should be: is there an error, or is everything...
BGP – the right tool for so many jobs
Mar 20, 2020
Like other very successful protocols such as HTTP and DNS, over the years BGP has been given more and more additional jobs to do....
BGP security: an overview of the RPKI framework
Mar 19, 2020
The Resource Public Key Infrastructure (RPKI) system is a way to couple an IP address range to an autonomous system number through cryptographic signatures,...
The 768k or Another Internet Doomsday? Here’s how to deal with the TCAM overflow at the 768k boundary.
May 2, 2019
On August 8th, 2014 some ISPs experienced a phenomenon called the “512k Day”. The global BGP routing table, which consists of the global Internet...
BGP Labeled Unicast (BGP-LU)
Oct 17, 2018
This blog post discusses BGP Labeled Unicast (BGP-LU) which is used in multi-regional networks to carry the label information. While the RFC3107 “Carrying Label...
Recursive Lookup in BGP
Oct 12, 2018
The aim of this article is to discuss the importance of Recursive Lookup in BGP. First of all we need to understand the purpose...
Accumulated IGP and BGP
Oct 4, 2018
RFC 7311 defines an optional non-transitive BGP attribute called the Accumulated IGP Metric Attribute (AIGP). As we know, IGP stands for Interior Gateway Protocol...
QoS Policy and its propagation via BGP (QPPB)
Sep 24, 2018
Quality of Service (QoS) refers to a collection of technologies that networking devices use to apply different treatment to packets as they pass through...
BGP Conditional Route Injection
Aug 7, 2018
We have recently posted the BGP Route Aggregation eBook that discusses the benefits of route aggregation and explains the various optional parameters associated with...
BGP Route Dampening: obsolete or still used in the industry?
Aug 1, 2018
The unstable route whose availability alters repeatedly is called a flap. When flaps occur, an excessive number of BGP UPDATE messages are sent to...
BGP Attribute Filtering and Error Handling
Jul 26, 2018
The BGP Attribute Filter feature enables BGP speakers to take a certain action based on the presence of a specified path attribute inside the...
BGP Optimal Route Reflection as an alternative to BGP Add Path
Jul 10, 2018
Full iBGP Mesh and Route Reflection To avoid loops in a Full iBGP Mesh, BGP routers are only allowed to learn prefixes over iBGP from...
Network Address Translation (NAT) and BGP Explained
Jul 5, 2018
Network Address Translation (NAT) was originally described in RFC 1631. Although initially presented as a short term solution for preventing the IPv4 addresses depletion,...
Advertising Multiple Paths in BGP (BGP-Addpath)
May 28, 2018
The BGP advertisement of a prefix with new attributes replaces the previous announcement of that prefix. This behavior is known as an Implicit Withdraw,...
BGP and Traffic Engineering Mechanisms for WISPs
May 10, 2018
In this blog post, we will take a look at Kevin Myers's presentation where he discusses deployment of both OSPF and BGP routing protocols...
Bit Indexed Explicit Replication and the BGP Extensions for BIER
Apr 11, 2018
In this article we’ll take a look at the architecture for multicast forwarding called the Bit Indexed Explicit Replication or simply BIER. However, we’ll...
BGP/MPLS Layer 3 VPNs Practical Configuration
Mar 21, 2018
In our previous blog article we’ve discussed the benefits and the fundamental principles of BGP/MPLS L3 VPNs. We have covered the definition of the...
BGP / MPLS Layer 3 VPNs
Mar 13, 2018
What is VPN? Virtual Private Networks (VPN) are designed to provide users with the private networks capabilities over a shared infrastructure. They connect remote sites...
Internet of Things and BGP
Nov 17, 2017
The Internet of Things (IoT) attracts huge public attention nowadays. It is a network of interconnected physical devices (things) which sense and interact with...
BGP and Cryptocurrencies
Oct 19, 2017
What is Cryptocurrency? Cryptocurrency is a digital asset that works as a medium of exchange. It uses cryptography to secure transactions and to control the...
BGP in Large-Scale Data Centers
Sep 19, 2017
Large-scale data centers (DCs) connect hundreds of thousands or even more servers, having millions of users. Generally, every DC runs two kinds of applications....
BGP over GRE Tunnel
May 5, 2017
In this blog post we are going to explain how Generic Routing Encapsulation (GRE) tunnel might be used in a situation when the Border...
What BGP Looking Glass servers are and how network administrators use them
Mar 16, 2017
Whether you are an experienced network administrator or you have just started to learn Border Gateway Protocol (BGP), Looking Glass (LG) is definitely a...
Noction bets on BGP FlowSpec
Sep 26, 2016
While networks are agnostic to the content of the serviced packets, end users have a very good understanding of what traffic they have and...
BGP and asymmetric routing
Jun 22, 2016
What is Asymmetric Routing? Asymmetric routing is the situation where packets from A to B follow a different path than packets from B to A....
Migrating to BGP
May 30, 2016
Here on the Noction blog we've covered many aspects of using BGP to connect your network to the internet. However, often implementing BGP isn't...
BGP and equal-cost multipath (ECMP)
Mar 25, 2016
One given in the network world is that bandwidth will increase. So that Gigabit Ethernet link that provided bandwidth to spare a year ago...
Lesser-known BGP path attributes
Nov 5, 2015
The past few weeks we've discussed the main BGP path attributes: the next hop, the AS path, the MED. We also covered the community...
Optimizing network performance for online gaming
Aug 25, 2015
In many online games, very good network performance is a prerequisite for satisfactory gameplay. However, there are many kinds of games, and their networking...
BGP security: announcing prefixes without authorization
Apr 14, 2015
In our last post, we looked at protecting the TCP session that carries BGP information between two routers, mainly against spoofed TCP resets. However,...
BGP Security: the MD5 password and GTSM
Apr 1, 2015
Back in the late 1980s and the early 1990s when BGP was developed, security was still an afterthought for protocols used on the internet....
Do route optimizers cause fake routes?
Mar 27, 2015
Yesterday, an incident occurred where an Autonomous System (AS) advertised more than 7,000 prefixes that belong to other networks. These are "more specific" prefixes—subsets...
What should your peering policy look like?
Feb 27, 2015
As discussed in earlier posts, as networks grow larger it starts making sense to exchange traffic with other networks directly (peering) rather than pay...
IPv4 BGP vs IPv6 BGP
Feb 6, 2015
BGP is older than IPv6. Even BGP-4, the version we still use today, predates IPv6: the first BGP-4 RFC (RFC 1654) was published in...
The top 4 causes of network downtime
Jan 12, 2015
Business people often believe that in order to have a reliable system, you need reliable components. Using reliable components to build your network certainly...
BGP and OSPF: how do they interact?
Dec 15, 2014
In some ways, BGP is nice and simple. For instance, there's only one BGP: BGP version 4. Many network professionals have been asking the...
Understanding BGP Communities
Nov 21, 2014
After reading the title of this article you may be thinking of small neighborhoods in the world's most-connected cities where BGP-minded people live together,...
Filtering your BGP updates
Oct 30, 2014
One thing we all learn quickly after getting started with BGP is that if left to its own devices, the protocol will happily propagate...
Why is BGP not enough?
Sep 11, 2014
It’s common knowledge that Border Gateway Protocol has no ability to make performance-based routing decisions and often routes traffic through paths that are congested...
Route Reflectors – an alternative to full-mesh iBGP configuration
Oct 25, 2013
The typical fully-meshed iBGP configuration of routers can become very difficult to manage in large networks. Because of the increasing numbers of iBGP sessions...
Routing anomalies – their origins and how they affect end users
Sep 20, 2013
Nowadays, the Internet is made up of more than 45,000 active Autonomous Systems (ASes), each with a different complexity level and specific configurations. To accomplish...
BGP-based inter-domain traffic engineering: important considerations
Aug 7, 2013
Inter-domain traffic engineering aims to optimize traffic performance, originating and terminating in different administrative domains. At the moment, Autonomous Systems exchange traffic via exterior...
Deploying BGP for redundant IP connectivity
Mar 13, 2013
All organizations that depend on the Internet for sales revenue or business continuity require Internet redundancy. Downtime lowers productivity, causes losses, and painfully affects the...
Is BGP multi-homing enough for WAN network performance?
Mar 16, 2012
BGP multihoming has become as necessary to networks connected to the Internet as the use of redundant power sources or multiple data centers. No...
2022 © Noction - All rights reserved | <urn:uuid:ec2fee82-cf98-4304-a863-4b92bce5c99e> | CC-MAIN-2022-40 | https://www.noction.com/tags/bgp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00770.warc.gz | en | 0.886977 | 3,055 | 2.609375 | 3 |
Google today announced that the company will be backing an offshore wind project called the Atlantic Wind Connection off the New Jersey coast — and parts south — to the tune of $1.8 billion. Instead of installing wind turbines, Google and its partners, Good Energies, Marubeni and Trans-Elect are laying the groundwork for them by placing transmission lines between Virginia and New Jersey that will eventually deliver up to 6 gigawatts of renewable energy when all is said and done. Those lines will link with big population centers (the big red dots in the image above) to “relieve grid congestion” and help avoid a repeat of the debilitating 2003 blackout. It also helps make a strong business case for renewable energy that services big metropolitan areas.
All told, the wind energy “backbone” will take 10 years to complete and cost $5 billion. The first phase will be complete by early 2016 and stretch 150 miles between New Jersey and Delaware.
Image credit: Google | <urn:uuid:7373fea3-e65e-4669-9c47-4276968c5a06> | CC-MAIN-2022-40 | https://www.ecoinsite.com/2010/10/google-backs-offshore-wind-transmission-project.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00770.warc.gz | en | 0.924718 | 199 | 2.578125 | 3 |
Newer, more sophisticated cyber threats are forcing businesses to rethink security practices. Unfortunately, outdated security measures no longer provide sufficient protection for sensitive data. Hackers are getting sneakier and their attacks more sophisticated. A recent IBM study found that 48% of cybersecurity professionals say the frequency and sophistication of insider threats have increased in the last year alone. This is why the “Zero Trust” security framework is becoming increasingly important among businesses.
Zero Trust security is a way of thinking about your security architecture and overall risk management strategy. The idea is that you shouldn't trust any user trying to access your services, even if they work within your organization.

This is different from how many businesses have traditionally configured their security settings. In a traditional trust environment, administrators give employees broad access to whatever they need. As a result, there's no way to determine who is connecting to the network or their intentions.

For example, let's say that you have three employees in your marketing department who need access to your company's marketing database. You might configure your security settings so that each of them can see everything in that database. In a Zero Trust environment, you would configure each marketing employee so they only have access to the exact data they need to do their job. The idea is that if one of these employees leaves the company or does something malicious with their access, they can't see everything in the database.
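The least-privilege configuration described above can be sketched as a deny-by-default allow-list keyed by user and resource. The user and resource names below are illustrative assumptions, not taken from any particular product:

```python
# Minimal sketch of least-privilege access control: each user is granted
# only the exact resources they need, and every request is checked.
PERMISSIONS = {
    "alice": {"marketing_db/campaigns"},                      # campaign analyst
    "bob":   {"marketing_db/campaigns", "marketing_db/leads"},
}

def is_allowed(user: str, resource: str) -> bool:
    """Deny by default; allow only explicitly granted resources."""
    return resource in PERMISSIONS.get(user, set())

# A compromised or departed account can reach nothing beyond its grants.
print(is_allowed("alice", "marketing_db/campaigns"))  # True
print(is_allowed("alice", "marketing_db/leads"))      # False
print(is_allowed("mallory", "marketing_db/leads"))    # False
```

Because access is denied by default, revoking a user is as simple as removing their entry; nothing else in the system implicitly trusts them.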
Microsoft describes the fundamental Zero Trust security principles as follows:
Verify explicitly: Always authenticate and authorize based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
Use least privileged access: Limit user access with just-in-time and just-enough-access (JIT/JEA), risk-based adaptive policies, and data protection to help secure data and productivity.
Assume breach: Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defences.
Zero Trust starts with identity, verifying that only the people, devices and processes granted access to your resources can access them.
Next comes assessing the security compliance of device endpoints – the hardware accessing your data – including the IoT systems on the edge.
This oversight applies to your applications, too, whether local or in the cloud, as the software-level entry points to your information.
Next, there are protections at the network layer for access to resources – especially those within your corporate perimeter.
Infrastructure includes data hosted on on-premises or in the cloud – physical or virtual, including containers and micro-services and the underlying operating systems and firmware.
And finally, data protection across your files and content, as well as structured and unstructured data wherever it resides.
Zero Trust is a comprehensive end-to-end strategy and requires integration across your entire digital estate, including identities, endpoints, networks, data, apps, and infrastructure.
The foundation of Zero Trust security is identities. Both human and non-human identities need strong authorization. It involves connecting from personal or corporate endpoints with a compliant device and requesting access based on strong policies rooted in the Zero Trust principles.
For example, Microsoft’s Zero Trust security model involves unified policy enforcement where the Zero Trust policy intercepts the request, explicitly verifies signals from all six foundational elements based on policy configuration, and enforces least privileged access. Signals include the user’s role, location, device compliance, data sensitivity, application sensitivity and much more. In addition to telemetry and state information, the risk assessment from threat protection feeds into the policy engine to automatically respond to threats in real-time. The policy is enforced during access and continuously evaluated throughout the session.
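A toy version of such a signal-driven policy engine might look like the following sketch. The signals, thresholds, and outcomes are illustrative assumptions for exposition, not Microsoft's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    device_compliant: bool
    location_trusted: bool
    data_sensitivity: str   # "low" or "high"
    risk_score: float       # 0.0 (safe) .. 1.0 (hostile), from threat protection

def evaluate(req: AccessRequest) -> str:
    """Explicitly verify every signal; assume breach; grant least privilege."""
    if req.risk_score > 0.8:
        return "deny"                     # real-time automated threat response
    if not req.device_compliant:
        return "deny"                     # endpoint must pass compliance checks
    if req.data_sensitivity == "high" and not req.location_trusted:
        return "require_mfa"              # adaptive, risk-based control
    return "allow_least_privilege"

print(evaluate(AccessRequest("analyst", True, True, "low", 0.1)))
# -> allow_least_privilege
```

In a real deployment the risk score would come from the threat-protection system, and the decision would be re-evaluated continuously throughout the session, as described above.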
Policy optimization further enhances this policy. Governance and compliance are critical to a robust Zero Trust implementation. Security posture assessment and productivity optimization are necessary to measure the telemetry throughout the services and systems.
The telemetry and analytics feed into the threat-protection system. Large amounts of telemetry and analytics enriched by threat intelligence generate high-quality risk assessments that can be manually investigated or automated. Attacks happen at cloud speed – your defence systems must act at cloud speed, and humans can’t react quickly enough or sift through all the risks. The risk assessment feeds into the policy engine for real-time automated threat protection and additional manual investigation if needed.
Traffic filtering and segmentation are applied to evaluate and enforce the Zero Trust policy before access is granted to any public or private network. Data classification, labeling, and encryption should be applied to emails, documents, and structured data. Access to apps should be adaptive, whether SaaS or on-premises. Runtime control is applied to infrastructure (serverless, containers, IaaS, PaaS, and internal sites), with just-in-time (JIT) access and version controls actively engaged.
Finally, telemetry, analytics, and assessment from the network, data, apps, and infrastructure are fed back into the policy optimization and threat protection systems.
There are many advantages to implementing a Zero Trust environment, including:
Improved Compliance: A secure environment can also help you ensure regulatory compliance. For example, Zero Trust shields all user and workload connections from the internet, so they can't be exposed or exploited. This invisibility makes it easier to demonstrate compliance with privacy standards and regulations (e.g., PCI DSS, NIST 800-207) and results in fewer findings during audits.
Reduced Cyber Attacks: A Zero Trust environment reduces your risk of cyber attacks by preventing unauthorized access and malicious usage of your systems. Following the principle of least privilege, every entity is assumed hostile. Therefore, every request is inspected, users and devices are authenticated, and permissions are assessed before trust is granted. In addition, trust is continually reassessed as context changes, such as the user’s location or accessed data.
Because trust is never granted implicitly, an attacker who gets inside your network or cloud instance through a compromised device or other vulnerability won't be able to access or steal your data.
Better Visibility: The Zero Trust security approach requires you to determine and classify all network resources. This gives you more visibility into who accesses what resources for which reasons and helps you understand what must be done to secure those resources.
Customer Trust: Today, organizations work in a diverse and distributed ecosystem, making it challenging to keep customers’ personal information private. A Zero Trust strategy makes it possible to ensure data privacy and, in turn, build customer trust.
Enhanced Flexibility: An organization’s technology needs are constantly shifting. As a result, applications, data and IT services may be moved around. Before zero-trust, moving applications and data from private data centers to the cloud, or vice versa, required new security policies to be created at each new location. This is not only a time-consuming process but can also result in mistakes that lead to security vulnerabilities. You can centrally manage app and data security policies with Zero Trust and use automation tools to migrate these policies where required.
The Zero Trust framework is a new way of thinking about your security architecture and risk management strategy. And while implementation requires time and expertise, the benefits are immediate and extend far beyond security. From making better use of your resources to enhancing compliance, a Zero Trust framework improves your security posture and can help you build strength and resilience throughout your organization. | <urn:uuid:f5545539-bced-400c-b6ce-5b6c138af51b> | CC-MAIN-2022-40 | https://gibraltarsolutions.com/blog/zero-trust-security-what-it-is-how-it-works-and-why-you-need-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00770.warc.gz | en | 0.9225 | 1,529 | 2.59375 | 3 |
Change behavior—how humans accept, embrace, and perform change—is the core of modern change management. ITSM frameworks incorporate various approaches to change management, but one started it all: Kurt Lewin’s 3 Stage Model of Change.
Initially a popular concept, current ITSM thinking criticizes Lewin’s model for being too simplistic and abstract to manage change in a real way. In today’s speedy, complex, and dynamic landscape of enterprise IT, the three-step model provides limited actionable guidance.
Still, understanding these steps provides an essential view into change management, so let’s have a look.
What is the 3 Stage Model of Change?
A leader in change management, Kurt Lewin was a German-American social psychologist in the early 20th century. Among the first to research group dynamics and organizational development, Lewin developed the 3 Stage Model of Change in order to evaluate two areas:
- The change process in organizational environments
- How the status-quo could be challenged to realize effective changes
Lewin proposed that the behavior of any individual in response to a proposed change is a function of group behavior. Any interaction or force affecting the group structure also affects the individual’s behavior and capacity to change. Therefore, the group environment, or ‘field’, must be considered in the change process.
The 3 Stage Model of Change describes status-quo as the present situation, but a change process—a proposed change—should then evolve into a future desired state. To understand group behavior, and hence the behavior of individual group members during the change process, we must evaluate the totality and complexity of the field. This is also known as Field Theory, which is widely used to develop change models including Lewin’s 3 Stage Model.
The 3 Stages of Change
Let’s look at how Lewin’s three-step model describes the nature of change, its implementation, and common challenges:
Step 1: Unfreeze
Lewin identifies human behavior, with respect to change, as a quasi-stationary equilibrium state. This state is a mindset: a mental and physical capacity that can be approached almost absolutely, but that is initially positioned so the mind can evolve without actually attaining it. For example, a contagious disease can spread rapidly in a population and resist initial measures to contain the escalation. Eventually, through medical advancement, the disease can be treated and virtually disappear from the population.
Lewin argues that change follows similar resistance, but group forces (the field) prevent individuals from embracing this change. Therefore, we must agitate the equilibrium state in order to instigate a behavior that is open to change. Lewin suggests that an emotional stir-up may disturb the group dynamics and forces associated with self-righteousness among the individual group members. Certainly, there are a variety of ways to shake up the present status-quo, and you’ll want to consider whether you need change in an individual or, as in a company, amongst a group of people.
Let’s consider the process of preparing a meal. The first change, before anything else can happen, is to “unfreeze” foods—preparing them for change, whether they’re frozen and require thawing, or raw and requiring washing. Lewin’s 3 Stage Model holds that human change follows a similar philosophy: you must first unfreeze the status quo before you may implement organizational change.
Though not formally part of Lewin’s model, actions within this Unfreeze stage may include:
- Determining what needs to change.
- Survey your company.
- Understand why change is necessary.
- Ensuring support from management and the C-suite.
- Talk with stakeholders to obtain support.
- Frame your issue as one that positively impacts the entire company.
- Creating the need for change.
- Market a compelling message about why change is best.
- Communicate the change using your long-term vision.
Step 2: Change
Once you’ve “unfrozen” the status quo, you may begin to implement your change. Organizational change in particular is notoriously complex, so executing a well-planned change process does not guarantee predictable results. Therefore, you must prepare a variety of change options, from the planned change process to trial-and-error. With each attempt at change, examine what worked, what didn’t, what parts were resistant, etc.
During this evaluation process, there are two important drivers of successful and long-term effectiveness of the change implementation process: information flow and leadership.
- Information flow refers to sharing information across multiple levels of the organizational hierarchy, making available a variety of skills and expertise, and coordinating problem solving across the company.
- Leadership is defined as the influence of certain individuals in the group to achieve common goals. A well-planned change process requires defining a vision and motivation.
The iterative approach is also necessary to sustain a change. According to Lewin, a change left without adequate reinforcement may be short-lived and therefore fail to meet the objectives of a change process.
During the Change phase, companies should:
- Communicate widely and clearly about the planned implementation, benefits, and who is affected. Answer questions, clarify misunderstandings, and dispel rumors.
- Promote and empower action. Encourage employees to get involved proactively with the change, and support managers in providing daily and weekly direction to staff.
- Involve others as much as possible. These easy wins can accumulate into larger wins, and working with more people can help you navigate various stakeholders.
Step 3: Refreeze
The purpose of the final step—refreezing—is to sustain the change you’ve enacted. The goal is for the people involved to consider this new state as the new status-quo, so they no longer resist forces that are trying to implement the change. The group norms, activities, strategies, and processes are transformed per the new state.
Without appropriate steps that sustain and reinforce the change, the previously dominant behavior tends to reassert itself. You’ll need to consider both formal and informal mechanisms to implement and freeze these new changes. Consider one or more steps or actions that can be strong enough to counter the cumulative effect of all resistive forces to the change—these stronger steps help ensure the new change will prevail and become “the new normal”.
In the Refreeze phase, companies should do the following:
- Tie the new changes into the culture by identifying change supports and change barriers.
- Develop and promote ways to sustain the change long-term. Consider:
- Ensuring leadership and management support and adapting organizational structure when necessary.
- Establishing feedback processes.
- Creating a rewards system.
- Offer training, support, and communication for both the short- and long-term. Promote both formal and informal methods, and remember the various ways that employees learn.
- Celebrate success!
Lewin’s 3 Stage Model of Change provides an intuitive and fundamental understanding of how changes occur, in context of the social behaviors observed at an individual and collective level within a group. Since the theory was first introduced in 1951, change management has taken both supportive and opposing directions. This is a vital reminder: when modern-day change management frameworks are not working for specific use cases and business needs, consider these fundamentals of understanding social behavior in light of change.
To attain the desired security, today’s institutions need specially trained personnel. However, human beings make mistakes that might affect the level of performance. Face Recognition Security System is one of the solutions that may minimize the error, i.e. detect intruders to restricted or high-security areas. Facial recognition is biometric technology used for authentication and identification of individuals, by comparing the facial features from an image with the stored facial database. Home facial recognition cameras are under development. Although there are problems regarding privacy and inaccuracy, the design of future systems is aiming to reduce them.
What is FRSS composed of and how does it work?
The system is composed of two parts: hardware, which consists of a camera, and software, which consists of face-detection and face-recognition algorithms. When a person enters a defined zone, the camera takes a series of snapshots and sends them to the software to be analyzed and compared with an existing database of trusted people. An alarm goes off if the user isn’t recognized. Many face recognition software packages have been implemented during the past decade, each using different methods and algorithms. Some facial recognition software extracts the facial features from the input image to identify the face. Other algorithms normalize a set of face images and compress the face data, so it can be used for facial recognition. A new method is three-dimensional facial recognition, where a 3-D sensor captures information about the shape of the face, so that only distinctive features of the face, such as the contour of the eye sockets, nose, and chin, are used. This new method offers some advantages over other algorithms: recognition isn’t affected by changes in lighting, and the face can be identified from a variety of angles, including a profile view.
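As a concrete illustration of the matching step: most modern systems first convert each snapshot into a numeric embedding vector with a trained model; recognition then reduces to a nearest-neighbor search with a distance threshold. The tiny 3-D vectors and the 0.6 threshold below are made-up values for demonstration, not the output of any real model:

```python
from math import sqrt

# Embeddings of the trusted people, produced by a face-recognition model
# during enrollment (values here are illustrative).
THRESHOLD = 0.6
trusted = {"alice": (0.1, 0.9, 0.3), "bob": (0.8, 0.2, 0.5)}

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(snapshot):
    """Return the closest trusted identity, or None if nobody is near enough."""
    name, best = min(((n, distance(e, snapshot)) for n, e in trusted.items()),
                     key=lambda t: t[1])
    return name if best < THRESHOLD else None

def check(snapshot):
    who = identify(snapshot)
    return f"door unlocked for {who}" if who else "ALARM: unrecognized person"

print(check((0.12, 0.88, 0.31)))   # near alice's stored embedding
print(check((0.9, 0.9, 0.9)))      # matches nobody -> alarm
```

If no stored embedding is close enough to the snapshot, the system treats the visitor as unknown and raises the alarm.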
What is real-time face recognition?
Real-time face recognition is part of the field of biometrics. Biometrics is the ability for a computer to recognize a human through a unique physical trait, and it’s one of the fastest-growing fields in advanced technology. Face recognition provides the capability for the computer to recognize a human by facial characteristics. Predictions indicate a biometrics explosion in the next century, to authenticate identities and avoid unauthorized access to networks, databases, and facilities. A facial recognition device takes an image or a video of a human face. Afterward, it compares it to other face images in a database, taking into account the face’s structure, shape, and proportions: the distances between the eyes, nose, mouth, and jaw; the upper outlines of the eye sockets; the sides of the mouth; the location of the nose and eyes; and the area surrounding the cheekbones.
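The geometric comparison described above can be sketched as a feature vector of landmark distances, normalized by the inter-eye distance so the result doesn't depend on image scale. The landmark coordinates below are made-up pixel positions:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Illustrative landmark positions detected in one face image.
landmarks = {"left_eye": (60, 80), "right_eye": (120, 80),
             "nose": (90, 120), "mouth": (90, 160), "chin": (90, 200)}

def feature_vector(lm):
    """Pairwise landmark distances, normalized by the inter-eye distance."""
    eye_span = dist(lm["left_eye"], lm["right_eye"])  # normalization unit
    pairs = [("left_eye", "nose"), ("right_eye", "nose"),
             ("nose", "mouth"), ("mouth", "chin")]
    return [round(dist(lm[a], lm[b]) / eye_span, 3) for a, b in pairs]

print(feature_vector(landmarks))  # [0.833, 0.833, 0.667, 0.667]
```

Two images of the same face should yield nearly identical vectors, while different faces diverge, which is what makes the proportions usable as an identifying signature.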
Advantages and use
Facial recognition is not intrusive: it can be done from a distance without the person being aware. It’s commonly used in banks and government offices, and these systems can also serve surveillance purposes such as searching for wanted criminals, suspected terrorists, or missing children. Face recognition devices are better suited to authentication than to identification, since it’s easy to alter someone’s appearance: a person can disguise themselves with a mask. The environment is also a consideration, as are subject motion and camera focus. When used in combination with another biometric method, facial recognition can improve verification and identification results dramatically.
Growing demand for facial recognition systems in China
Two prominent technologies fueling China’s AI growth are facial recognition and AI chips. The former advances the government’s ambitious country-wide surveillance plans, while the latter is a direct challenge to US-made chips. In 2017, around 55 cities in China were part of a plan called Xue Liang, or “sharp eyes,” which involves processing footage from surveillance cameras in public and private properties to monitor people and events. Media reports suggest that the system may eventually power China’s Social Credit System, a metric to gauge the “trustworthiness” of its citizens. Startup Megvii already has access to 1.3B face data records for Chinese citizens. It is backed by Chinese insurance companies, government entities, and corporate giants. In 2016, Alibaba Group (through Ant Financial) and Foxconn partnered with the city of Hangzhou for the “City Brain” project, which uses AI to analyze data from surveillance cameras and social feeds. Ant Financial separately uses facial recognition for payments at Alibaba-owned retail stores.
Is the facial recognition camera the right step in home security?
Facial recognition has become a common feature of the modern world, and the home-use market for it is growing. These systems are mainly used for security, but they can also automate processes and monitor family members. Smart home facial recognition provides users with improved security and convenience.
– Spotting strangers on your property – Home surveillance with facial recognition allows a security team to spot and respond to suspicious, threatening, or banned people. In the event a crime occurs, these security systems provide crucial evidence to police officers. This can lead to faster justice and a higher chance of recovering stolen goods. They require the user to set up a database of known people. Upon the detection of an unknown face, alerts are sent to the owner, and a video of the event is stored. Other uses could be to tell the owner when a family member brings a stranger home. It could even be set up to produce alerts showing when a delivery driver drops off a package.
– Family monitoring – Outside of security, monitoring the people living at a house is a key feature. This includes telling parents when their kids arrive home, or alerts when they enter restricted areas that could lead to injury – if you want to make sure your child doesn’t burn themselves while there are pots on the stove or if you have a home workshop that isn’t fit for children. Facial recognition cameras can monitor older relatives by e.g. sending alerts if they were to fall. Another use is tracking other visitors to the house such as nannies, caregivers, cleaners, and repairmen.
– Home automation via facial recognition – After you arrive home from a long day at work, a camera detects your face and the smart lock opens the door. The lights turn on and your favorite playlist starts playing. Whatever your routine is, it can be set to trigger for you and other family members based on facial recognition. With advances in AI technology and smart home control systems, this level of automation is becoming a reality.
The future of facial recognition systems
The facial recognition market is expected to garner $9.6 billion by 2022, registering a CAGR of 21.3% during the forecast period 2016-2022. It’s expected to witness robust growth during the forecast period owing to its increasing usage in both law enforcement and non-law enforcement applications. Moreover, facial recognition is widely preferred over other biometric technologies, such as voice recognition, skin texture recognition, iris recognition, and fingerprint scanning, due to its non-contact process and easy deployment. Currently, this technology is majorly used for security and marketing purposes. For instance, billboards have been designed with integrated software that is used to identify gender, age, and ethnicity to deliver targeted advertising.
Common troublesome issues to consider
While we’re eager to look at novelties through rose-tinted glasses, we must also consider some of the cons that include e.g. privacy concerns. Many people believe the storage of biometric data, and global databases of facial recognition images are an invasion of privacy. Mind that this data could be sold to third parties or become a target for cybercriminals. That’s why it’s better to store them than to send them to a popular cloud. There are potential hurdles for home use facial recognition systems. A customer setting up a facial recognition camera system is consenting for it to store images of themselves.
However, problems can occur with data obtained from other people. In America and many other countries, it’s legal for a homeowner to record anyone on their property – it may cause issues with any contractors staying at home. To remove some of these concerns, several companies use local storage and local processing for face recognition. Sharing personal data with third parties is impossible on this type of device. Today, some facial recognition algorithms have an accuracy of 99.97% (in ideal conditions!). The accuracy can be affected by certain factors: lighting, distance from the camera, and the angle of the face with relation to the camera. AI vision usually can’t work properly in the night. According to Digital Trends, a military-level facial recognition device can identify people from 1km away. For a consumer-level security camera, the best distance is around 5 meters. Facial features greatly change with the face angle, compared to the registered data. Computer scientists have addressed this issue with data augmentation.
Some extremely sophisticated cameras can recognize the owner’s side-view faces by registering only one front-view photo. Another still unresolved issue is facial recognition bias. Studies have shown that current facial recognition systems have greater accuracy when identifying Caucasian male faces. Systems have been found to falsely identify Black and Asian faces at higher rates. The National Institute of Standards and Technology (NIST) tested 189 facial recognition systems from 99 developers. It found false identifications were 10 to 100 times higher for darker-skinned faces. Worth to mention, AI has no bias at all, neither do software engineers. Even the best AI algorithm has difficulty in reading facial features on darker skin or a woman with heavy facial cosmetics. Another issue is would-be criminals fooling facial recognition security systems. Studies have shown high-quality photos of the owner can unlock many devices using facial recognition identification. Perhaps a greater problem is the use of 3D printed masks – by showing someone else’s face it can also fool facial recognition systems. Currently, there is no fix to this. | <urn:uuid:3f46d07f-afb1-42dc-85a7-ec8bdbaf20c5> | CC-MAIN-2022-40 | https://hummingbirds.ai/how-facial-recognition-cameras-benefit-home-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00770.warc.gz | en | 0.944782 | 2,035 | 3.359375 | 3 |
Ethernet cables used to be a lot more complicated than they are today.
Depending upon the devices you were connecting, you’d need to use different ethernet cables in order for them to communicate correctly.
For example, let’s say you had your computer connected to a router with an ethernet cable. Later in the day, if you wanted to connect that computer to another computer, you’d have to use a special type of ethernet cable.
The special type of cable I’m referring to here is a crossover ethernet cable.
In order for your network to work properly, you had to know when to use a regular cable and when to use a crossover cable. If you weren’t using the right cable, you’d be out of luck. Your devices wouldn’t be able to communicate with each other.
Now for the good news. Things have changed when it comes to crossover ethernet cables. It’s not as complicated as it used to be.
I know what you’re wondering.
What does that mean when it comes to crossover ethernet cables? When should crossover cables be used today?
If you have relatively modern equipment in your network, you won’t need to use crossover cables at all. Modern technology has enabled devices to automatically adjust the way they communicate so it doesn’t matter what type of ethernet cable is used to connect them.
Whether you use a crossover or regular ethernet cable (also called a straight through cable), the device will be able to communicate with the other device it’s connected to.
What a relief.
In this post, I’ll explain the details of how crossover cables work, as well as why you probably don’t need to use them anymore.
What is a crossover ethernet cable?
Before we dive into how they’re used (or not used) today, let’s take it from the top.
What are crossover ethernet cables in the first place? What makes them so special compared to a straight through cable?
This answer might be a buzzkill, but crossover ethernet cables provide the same benefit as straight through cables. They allow devices to communicate through a wire instead of wirelessly.
The difference is that crossover and straight through cables are designed to work with different devices.
Let’s break this down.
Straight through ethernet cables allow two different types of devices to communicate. For example, if you wanted to connect your computer to a router, you’d need to use a straight through cable.
You’d also need a straight through cable to connect:
- A computer to an ethernet switch
- An ethernet switch to a router
- A router to a modem
To put it simply, most of the wired connections you’ve made in your life require straight through cables.
Crossover cables differ in that they’re meant to connect two of the same type of device. This could be:
- A router to a router
- An ethernet switch to an ethernet switch
- A computer to a computer
Hopefully this makes sense.
Crossover cables were needed in the first place because of how the above devices communicate. Most of the time, computers, routers, and switches were wired to different device types. As a result, that’s what they were made for.
Whenever they had to communicate with the same type of device over a wire (e.g. two computers wired together), however, things didn’t go so smoothly.
It was like trying trying to call yourself on the telephone.
This is where the crossover cable came in to save the day.
How do crossover cables work?
Crossover cables were designed to allow for two of the same type of device to communicate with each other.
How exactly do crossover cables do that?
It’s all about the wiring within the cable.
How is a crossover cable wired?
Let’s start with the basics to make sure we’re all on the same page here.
Inside an ethernet cable, there are generally 8 copper wires that are twisted together in pairs.
The 8 copper wires are used to transfer data from one device to the other. At each end of the cable, the 8 copper wires are aligned next to each other, where they’re fed into a connector.
These connectors are called RJ45 connectors.
If you look closely at the end of an RJ45 connector, you can see the colors of the wires going into the connector.
Now, if I compare the colors of the wires inside a straight through ethernet cable, both ends will have the same pattern of colors.
If I compare the wires inside the ends of a crossover cable, things start to go off the rails.
What gives? Why are the wires not in the same pattern at both ends?
That’s because each wire location within an ethernet cable is meant for a specific purpose. Each wire in the cable will have a different role depending upon its position inside the connector.
For example, some wire positions are meant for sending data to the other device attached to the cable, while others are designed to receive data from the other device attached to it.
Let’s dive a bit further into this.
Ethernet cable pinout standards
When it comes to ethernet cables, there are two standard ways of laying out the wires inside them. These are called wire pinouts.
The two standard wire pinout options are called T568A and T568B. The purpose of these standards is to ensure that all ethernet cables of a given pinout are made with the same wire configurations. This makes it much easier for the user (you) to determine which cables are needed in different situations.
Could you imagine if every ethernet cable had a different layout of wires inside it?
Here’s how the wires are ordered in a T568A pinout:
And here’s what a T568B pinout looks like:
Can you notice the differences between the two?
In comparing the two pinouts, you can see that the T568B pinout differs from the T568A pinout in that:
- The two orange wires (striped and solid) are in positions 1 and 2 in the T568B layout instead of the two green wires (striped and solid) in the T568A pinout
- The green striped wire is in position 3 in the T568B pinout instead of position 1 in the T568A pinout
- The solid green wire is in position 6 in the T568B pinout as opposed to position 2 in the T568A standard
- Wire positions 4, 5, 7, and 8 are the same in both standards. Here’s a fun fact: wires 4, 5, 7, and 8 in an ethernet cable aren’t used
Crossover ethernet cable pinout
So why are these pinouts important to crossover and straight through cables?
As you may have guessed, these two different cable types have different combinations of these pinouts.
Straight through ethernet cables have a T568A cable pinout on each end of the ethernet cable.
Crossover cables are a little different. They have a T568A pinout on one end and a T568B pinout on the other.
The reasoning behind this has to do with the types of devices that these cables connect. Each device connected to an ethernet cable is classified based upon the type of device it is.
Medium-dependent interface classifications
This can get a little complicated, so I won’t dive too far into the weeds here.
Basically, a device can have one of two possible classifications. It can either be a medium-dependent interface (MDI) or medium-dependent interface crossover (MDIX) device.
All you need to know here is that MDI devices are endpoint devices like computers, while MDIX devices are usually networking equipment like switches and modern routers.
Let’s tie this all together.
Based upon a device’s classification as an MDI or MDIX device, it’ll expect to receive data on certain pins of the ethernet cable. It’ll also transmit data on certain pins of the ethernet cable based upon this classification.
For example, MDI devices expect to transmit data on pins 1 and 2 of an ethernet cable. MDIX devices expect to receive data on pins 1 and 2 of an ethernet cable.
MDI devices expect to receive data on pins 3 and 6, while MDIX devices transmit data on pins 3 and 6.
Do you see where this is going?
For a connection between an MDI and MDIX device, each device expects to receive data on a pin that the other device transmits data on. This is exactly what we want. It’s also why a straight through cable can be used to connect these devices.
What if the two devices you need to connect are the same type of device?
As you can see, this doesn’t work as well.
Both devices are expecting to send data on the same pins. The same goes for the receiving pins.
With both devices looking to send and receive data on the same pins, we need to change things up if we want them to be able to communicate.
That’s where the crossover cable comes in.
What is a crossover cable used for?
As its name suggests, a crossover cable will cross the wires inside the cable to allow two of the same type of device to communicate. It does this by connecting the transmit pins of one of the devices with the receiving pins on the other device.
Here’s what it looks like when you connect the devices with a crossover cable.
As you can see, whether you have two MDI devices or two MDIX devices, the pin changes required to pair each transmit pin with a receive pin are the same.
This is why you can use a crossover cable regardless of the classification of the device. As long as you’re connecting two devices that are the same type (either MDI or MDIX), you can use a crossover cable.
When should I use a crossover cable?
Crossover cables used to be an essential part of most networks, but thankfully times have changed.
Over the years, advancements in computing technology have been made to make it easier on us when we’re setting up our networks.
When it comes to ethernet cables, one major feature that’s been developed is called auto-MDI/MDIX. This feature allows your devices to automatically determine if they’re connected to an MDI or MDIX device.
From there, your device will decide if it needs to change the pins that it transmits and receives data on. In other words, it’ll automatically make adjustments depending upon the type of device it’s connected to.
This makes things much easier on you, because you don’t have to worry about what ethernet cable you’re using to connect the devices. Your device will do all the work for you.
So what does this mean, exactly?
It means you don’t need to buy crossover cables for your network anymore. You can use straight through ethernet cables to connect all your devices (assuming they’re not more than 20 years old).
I would recommend you buy straight through ethernet cables for your network instead of crossover cables. They’re generally cheaper than crossover cables, and they’re much more available. It’s not worth the time (or the money) to get crossover cables at this point.
There’s one last important thing to note.
If you have really old devices, they may not have the auto MDI/MDIX feature. If this is the case, you’ll still need to use crossover cables to connect two of the same type of device. With that said, chances are all your devices will have the auto MDI/MDIX feature.
Hopefully you have a good handle on why crossover ethernet cables were created, and how they were useful at the time. You should also understand why you don’t need to use them anymore in your network.
Do yourself a favor, buy straight through ethernet cables when you need to connect devices in your network.
If you have any questions about this material, please drop a comment below. In addition, check out some additional posts I wrote that are related to this topic: | <urn:uuid:2c93f07a-b133-4cf3-bbb8-53c96d19615e> | CC-MAIN-2022-40 | https://network-from-home.com/home-network/when-to-use-a-crossover-cable/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00770.warc.gz | en | 0.928826 | 2,637 | 3.5 | 4 |
Everyone has heard the expression, “You get what you pay for.” It suggests that the functionality of something is directly proportional to its price. But that rule of thumb is being turned on its head by open-source software.
Open-source software is free, but it differs from “freeware” in some major respects.
The code underlying the software is open for anyone to see and make changes in. That isn’t the case in freeware.
Open-source software also encourages collaboration and community in its development. Freeware is usually developed by a single author and while it may attract devotees, it rarely attracts collaborators.
Open Source Hall of Fame
Although the concepts underlying open-source software seem almost counterintuitive in an age of greed and jealous guarding of intellectual property, it has worked well in several areas, notably in operating systems (Linux), server software (Apache), Web browsers (Mozilla-Firefox) and database management (MySQL).
There’s also a very good Web authoring program that’s open source, Nvu, pronounced N-View.
As a dabbler in Web authoring, I can honestly say that Nvu is one of the easiest to use and satisfying programs that I’ve seen in this category in a long time.
All major platforms are covered by the software — Windows, OS X and several flavors of Linux — as well as more than a half-dozen foreign languages.
The interface is built around two window panes. One is a site manager, which provides functions you’d find in an FTP program. FTP is how files that make up a Web site are uploaded to the Internet. The other pane is where you build your Web pages.
As you build a page, you can see various views of your work by clicking on tabs at the bottom of the editing pane. View changes are lightening fast.
There’s a “normal” view. It displays your page as it will appear online, but with table borders and anchors visible. Tables are like spreadsheets for objects on a Web page. Anchors are links to specific locations on a page. That contrasts with “links,” which direct a browser to an entire Web page.
There’s also an “HTML tags” view. Here your page appears as it would online, but its objects are labeled with yellow boxes that indicate the underlying HTML code. “P,” for example, would indicate the code for paragraph. Changes in the code for an object can be made by clicking on a box.
When you click on a box, a form box pops up. Formatting choices can be made by clicking buttons and altering text fields in the pop-up.
There’s a source code view too, which shows you the raw HTML code for yourpage.
What makes the view setup even more convenient is that you can make editing changes in any view. That includes dragging and dropping text and images from other applications or the Internet onto a page.
The program also allows you to edit multiple pages during a session. Each page you have open appears as a tab at the top of the editing pane. You can swiftly move between pages by clicking the appropriate tab.
Elements on a page can be speedily created and formatted by using the toolbars at the top of the program’s interface.
With a click of an icon on the main toolbar, you can publish your site and insert elements like anchors, links, images, tables and forms.
With the formatting toolbar, you can tag blocks of text, tinker with their size and color, and set their style — bold, italic, underline, align them and create numbered or bulleted lists.
There’s even a built-in CSS editor that advanced Web authors will find very useful.
Nvu is a tremendous piece of work that outshines many of its commercial competitors. Not only is it a free lunch, but it’s a mighty tasty one, too.
John Mello is a freelance business and technology writer who can be reached at email@example.com.
Read More Reviews… | <urn:uuid:9c825201-5d58-4614-aac1-85aed8c989f9> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/open-source-web-editor-makes-a-tasty-free-lunch-43137.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00770.warc.gz | en | 0.933566 | 920 | 2.703125 | 3 |
In a bid to build the next generation of home financing infrastructure, Fannie Mae uses TBM to understand not only technology costs but also business operation cost and time efficiency. The company is shaping investment in technology and operations with models that allow business users to explore the total costs and connections between applications, business operations and business capabilities such as turning a pool of loans into a mortgage-backed security.
The Federal National Mortgage Association (FNMA), colloquially known as Fannie Mae, is a government-backed corporation that provides widespread access to affordable mortgage credit. It does that by buying loans from banks, packaging those loans in pools, securitizing the loan packages, and selling them to investors.
During the Great Depression, as borrowers defaulted on mortgages en masse and banks found themselves strapped for cash, President Franklin D. Roosevelt and Congress created Fannie Mae in 1938 in order to buy mortgages from lenders, freeing up capital that could go to other borrowers. In 1968, as the US Government’s budget was strained by the war in Vietnam, Fannie and its assets were sold to private investors and, in 1970, Fannie was listed on the New York Stock Exchange, followed by decades of robust growth alongside a rising housing market.
After the 2008 financial crisis, Fannie was placed under the conservatorship of the Federal Housing Finance Agency (FHFA) to guarantee solvency while working on a plan to restructure the secondary mortgage market. Now, Fannie is adopting a TBM approach to technology and business cost transparency in anticipation of helping to build the next generation of home financing infrastructure. Whatever future unfolds from the restructuring plan, Fannie aims to be prepared with facts and insight about the building blocks of technology and process that continue to supply vital liquidity into the American housing market.
The Operations and Technology (O&T) division’s first response to the new pressure for transparency was to create reports and dashboards to better respond to FHFA requests. One such report had 400 different data points that were input manually, every single day. To be prepared for on-the-fly questions about IT cost, the CIO carried around a thick binder of printed reports.
When another round of company budget cuts came around, O&T was always an easy target, not only because it was the largest budget at Fannie Mae, but also because no one really could tell the business which technology costs were relevant to them.
“At that time we were using technical jargon to communicate costs to the business,” remembers Sheenal Patel, client engagement manager for Service & Performance Management (SPM), the O&T group that runs Fannie’s TBM program. “The business was allocated a large lump sum of indirect cost. We couldn’t explain what the impact would be to the business if we were asked to reduce cost. IT was a big black box.”
Gunther Schultz, a vice president in the business operations side agrees. “I didn’t know what my applications cost. I just had one big cost number for all applications.”
“Our goal wasn’t necessarily cost reduction,” recalls Gboyega Adebayo, Fannie’s lead TBM analyst. “It was cost transparency. We really wanted to peel back the onion and understand the cost of what we do on a day-to-day basis, and we wanted to be able to communicate that to our business partner. We needed data that told a story.”
Telling that story was part of an overall transformation from technology provider to service provider.
The Service and Performance Management (SPM) team set out to create a prototype of service costs focused on infrastructure. In a proof of concept, they used data from a handful of financial and operational sources to show storage and server costs tied to a handful of applications. Although it took them several months to compile this incomplete view, the first glimpse of real data in a service context was enough to get buy-in to investigate a repeatable solution. Patel explained, “We really wanted a solution that didn’t require hiring professional services every time we needed help. We wanted to grow at our pace. That’s when we adopted the TBM methodology and configured our own TBM system.”
The SPM team worked with the business to define services they understood and cared about. They include Operational Services (business labor and supporting technology to automate the business operations services O&T provides to the company), Application & Integration Services (specific applications-as-aservices as well as infrastructure services), End User Services (e-mail, laptops) and Projects & Investments.
They loaded data from multiple sources into their TBM system and configured the model to flow costs into services they defined within O&T. After several iterations what they discovered is that business and O&T service owners often wanted very different views and each team had different priorities which weren’t consistent throughout the organization. TBM Program Manager Mina Han confessed, “Our impulse was to make the data perfect, but our idea of perfect wasn’t the same as the business.” Since then the team has adopted an approach of sharing works in progress with the business, as Han said “to partner together and figure out what provides the most value.”
“The TBM service cost model really opened doors and a lot of eyes across Fannie Mae,” said Adebayo. “It was the first time application owners could see the total cost of what they were providing and the business could see what they were getting beyond just the project dollars they knew. Application support, risk controls and management overhead, the full cost of infrastructure services … It was mindboggling when you realize how much it cost to keep a lot of these applications running.”
Schultz recounted his reaction from the business side: “The ability to actually isolate an application, double click on it, and understand that it’s made up of this much hosting, this much level one support, this much level two support … It gives me a better questioning path. I can ask why – ‘Why is that so high? Why is that app different from that app?’ We’re having further conversations of how do we allocate that cost? Is that the right methodology? We’re finally at the table together because we have a singular currency to discuss. The cost model has definitely brought us closer together.”
Part of sitting down together has meant working together on where to invest and where to pare back. “And so now with TBM, we have the ability to show, here are your levers. This is what you can turn off, and this is what you can turn on.”
Schultz agreed. “It allows me to help drive the roadmap for which applications we should invest in and which ones we would like to sunset because of that cost.”
The Service and Performance Management team at Fannie Mae offered advice for those just starting out with TBM or considering doing so.
Use TBM analysis to drive data improvement. “Everyone has gaps and inaccuracies in their data,” stated Patel. “Don’t let that be your crutch. TBM lets you tie dollars to those gaps so data owners know where to focus their improvement efforts. And they can use the reporting to see the progress.”
Roll out reports as soon as possible. “Don’t wait for things to be perfect before rolling anything out,” advised Han. “What you may think is perfect in your mind is most likely not what’s perfect in your customer’s mind, so the key is really to partner together to figure out what works and what provides the most value.”
Just get started. “Your data’s never going to be perfect, and you’re never going to have 100% buy-in before you start,” said Adebayo. “But just throw the information out there because when people see it, their eyes will be wide open. They’ll see what needs to be fixed, not just for TBM but for all the other reasons that system and data are there in the first place.” | <urn:uuid:93329ed0-517a-4905-ba73-7c92e012b085> | CC-MAIN-2022-40 | https://www.apptio.com/it/case-study/how-fannie-mae-uses-bill-of-it-to-communicate-service-costs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00770.warc.gz | en | 0.968591 | 1,723 | 2.578125 | 3 |
In today’s world, being constantly connected to people and systems through devices such as smartphones, tablets and computers is pretty much a normal state of affairs. And this ‘always on’ situation will only increase over time - everything will talk to everything else: person-to-person, machine-to-person and machine-to-machine. While this opens up a world of opportunities, the downside is that more and more connections also mean more and more opportunities for attack and compromise. The question remains: can you provide adequate security in a cloud age where everything is connected to everything else?
The good news is that many of these interactions rely on an API (Application Programming Interface) to communicate to an application or system somewhere in the world. APIs have quickly become the primary channel for business transactions in most modern enterprises. When you type in a website address, for example, the request goes out to a remote server belonging to that website that stores the information you require – the API is that part of the server that receives your request and then responds to it. Essentially, the APIs are the connecting technologies that allow the communications. Or put more simply, APIs are the glue that holds the digital world together.
Today’s world simply wouldn’t be possible without APIs providing a standards-based way for applications to talk to each other and share data.
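To make that request/response contract concrete, here is a minimal sketch of the routing logic an API server performs when a request arrives. This is not taken from the article; the route, data store and function names are all invented for illustration, and a real server would of course sit behind HTTP rather than a plain function call.

```python
import json

# Stand-in data store for a hypothetical "users" API.
USERS = {"42": {"id": "42", "name": "Ada Lovelace"}}

def handle_request(method, path):
    """Return an (HTTP status code, JSON body) pair for a request.

    This is the essence of what an API does: inspect the incoming
    request, act on the stored data, and answer in a standard format.
    """
    parts = path.strip("/").split("/")
    if method == "GET" and len(parts) == 2 and parts[0] == "users":
        user = USERS.get(parts[1])
        if user is None:
            return 404, {"error": "not found"}   # resource does not exist
        return 200, user                          # success: return the data
    return 405, {"error": "method or path not supported"}

status, body = handle_request("GET", "/users/42")
print(status, json.dumps(body))
```

Note that nothing in this handler considers *who* is asking - which is exactly the gap the rest of the article is about.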
But wherever there is innovation, there is also the dark side of threat and attack, and there is always someone who will aim to exploit weaknesses. APIs are designed to share data; they are not designed to thwart threats and attacks.
One of the core weaknesses of API strategy is the adoption of architectures based on frameworks, toolkits, agents and adapters. While these toolkits do give developers the capabilities to deploy an API architecture, they do not address the threats that such an architecture exposes.
In fact, too much reliance on the developer, rather than on API security products, is a major cause of API vulnerabilities. This is not necessarily the developer’s fault. There are all levels of competency when it comes to developers, but is it really fair to ask all of them to build fail-safe code every single time? This approach is neither realistic, nor repeatable. However, this is the foundation of many API architectures today.
Inconsistent coding, misunderstanding of the ecosystem, and underestimating threats are all weaknesses discovered only too late, usually in the news coverage of the breach that has just occurred. There have been several recent examples of major enterprises being caught off guard.
In April 2018 it was discovered that a major Identity Access Management company in the Cloud was hacked via an API that exposed the ability to gain access to all 40 million user accounts across 2000 independent customer enterprises that this company was serving. It is a stark example of how IAM and APIs are integrally tied together, but also how a single API threat can destabilize not only one environment, but thousands.
In 2017, Instagram reported that it had fixed an API weakness that led to personal information about some of its high-profile users, including Justin Bieber, being leaked. Google, which owns YouTube, was likewise notified of a flaw in YouTube's code that could have allowed anyone to erase any video on the platform, regardless of who had posted it and irrespective of passwords or encryption.
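Incidents like these frequently come down to broken object-level authorization - a handler that authenticates the caller but never checks whether that caller is allowed to act on the *specific object* named in the request. The following deliberately simplified sketch (all names and data invented, not drawn from any of the incidents above) shows the pattern and its fix:

```python
# Hypothetical video store: each video records its owner.
VIDEOS = {"v1": {"owner": "alice"}, "v2": {"owner": "bob"}}

def delete_video_vulnerable(caller, video_id):
    # BUG: any authenticated caller can delete any video --
    # the handler never checks ownership of the object it acts on.
    return VIDEOS.pop(video_id, None) is not None

def delete_video_fixed(caller, video_id):
    video = VIDEOS.get(video_id)
    if video is None or video["owner"] != caller:
        return False          # deny: caller does not own this object
    del VIDEOS[video_id]
    return True

print(delete_video_fixed("alice", "v2"))  # False: alice does not own v2
```

The fix is a one-line ownership check, which is precisely why relying on every developer to remember it every time is such a fragile strategy.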
API Security – Whose Job Is It Anyway?
The challenge with API vulnerabilities is that they are rarely easy to spot, and often require specialized technology to detect. But awareness is growing. According to the latest peer-reviewed list of the ten most critical web application security risks (as compiled by OWASP), nine of the top 10 vulnerabilities now include API components.
Research from Ovum (‘API Security: A Disjointed Affair’ 2016) has shown that nearly one third of APIs go through specification without being looked at by the IT security team at all. Perhaps the biggest vulnerability is having an API, but not realising it (although everything with a URL has an API).
As awareness grows, security specialists are beginning to shift their focus towards API security products, rather than API solutions. In turn this is causing many API vendors to reposition themselves as “API security specialists”, adding bolt-on security features to their existing API toolkits, frameworks, and adapter-based solutions in order to placate their customers concerned about API Security.
The problem with this bolt-on approach is that these API frameworks and toolkits, which offer a single entry point to a system, are not built with security in mind - and because APIs are the center point of modern communication, they also logically become the central target of attack. An API architecture centralizes API access control and security, so it must be designed with secure API technologies and architecture principles such as a locked down secure operating system and self-integrity health checks to detect and prevent compromise. API Security Gateway technology has emerged as a distinct and unique category of API technology where “Security” means the cyber-hardening of the API Gateway product itself so that API enablement can be done securely.
You need not look very far to understand how industry vulnerabilities can affect insecure technologies. For example, the recent discovery of Spectre and Meltdown vulnerabilities in the Intel chipset that affected any system running potentially vulnerable 3rd party applications is an example of the risks when the OS isn’t locked down. These vulnerabilities have left a large number of the world's computer processors exposed over the last twenty years to bugs that made them susceptible to hackers. However, technologies such as API Security Gateways with locked down operating systems that do not allow 3rd party code to run on the system were not affected by these types of vulnerabilities.
API toolkit and framework vendors are challenged with retrospectively adding security features to an already insecure baseline, which is akin to adding bars around the windows of a house but leaving the front door open. These insecure API solutions will be continually plagued with the chicken and egg of exploit and patch, which is the reverse of what you should expect in your API security solution strategy. Implementing security with a toolkit is a contradiction in terms. Would you trust your corporate firewall to be hand-built by a framework or toolkit?
An API Gateway that is genuinely secure and able to call itself an API Security Gateway will typically provide three layers of fundamental protection:
1. Secure, Locked-Down OS to detect and prevent compromise and ensure the inability to break the security model of the system by installing 3rd party applications or having root shell access to the operating system.
2. Cyber-Secure Policy Enforcement Points (PEPs) to allow secure enforcement of the authentication and authorisation of users within any Identity Management ecosystem.
3. Real-time Protection and Monitoring to proactively monitor and enforce compliant traffic to applications and services, and take protective measures if threats are detected.
We are in the middle of an API revolution and APIs are seeing explosive growth in every industry sector around the world. We are witnessing the beginning of a new Gold Rush as IT security experts rush to stake their solution’s claim as the most secure approach. At the moment, there is still a sense that we are living through a ‘Wild West’ period of history where IT security experts are competing to define the secure architecture that will prove to be the industry standard. Many of the voices are just adding noise to an already deafening din, while others are giving businesses a false sense of security due to overlooked vulnerabilities in their design. Toolkits, adapters, and frameworks are not the answer to security, they are the cause of insecurity.
In the rush to adopt an API strategy, remember, all that glitters is not gold. Bear in mind the security implications of adopting a framework, toolkit, or platform only approach to enabling connections and data across untrusted boundaries. Securing APIs requires API Security products designed for that purpose, not developers and coding to do it. Expenditure on cybersecurity is set to increase. Gartner has predicted (opens in new tab) that worldwide security spending will reach US$ 96 billion in 2018, up eight percent from 2017. Not surprising since the cost of insecurity will greatly outweigh that amount.
This means that the ‘gold rush’ is actually to prevent a breach rather than enable one. Be careful to choose wisely and avoid wasting your hard-earned cash on ‘fool’s gold’.
Jason Macy, Chief Technical Officer at Forum Systems (opens in new tab)
Image Credit: Wright Studio / Shutterstock | <urn:uuid:3765d7f1-6ce2-4ec1-9db5-cdf1eff8f238> | CC-MAIN-2022-40 | https://www.itproportal.com/features/api-security-gold-rush-or-wild-west/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00770.warc.gz | en | 0.948823 | 1,746 | 2.8125 | 3 |
Coding AI – Where the Future and Coding Intersect
When you imagine the process of creating a computer program, you probably imagine a room full of caffeine-fueled developers hastily coding on expensive computers in an open-spaced office with exposed brick walls and designer lighting. That scene is set to change in the future, however, as the world of coding advances further towards a future where the elementary parts of coding are left to the computers themselves.
This is the basic promise of machine learning, which is a fundamentally different discipline to what most people would consider coding.
Defining Machine Learning and Applying It to Coding AI
In today’s technology environment, software engineers develop programs that complete tasks by following a logical set of instructions. Those instructions (the software’s code) contain all of the information the program needs to fulfill its function.
Machine learning, however, is different. Instead of writing code that tells a computer exactly what to do, machine learning uses a layered system of rewards to train computers to come up with the most efficient method for achieving tasks on their own.
Look to HBO’s popular TV show Silicon Valley for an example of this technology in practice. At one point, the show’s characters develop an app that identifies hot dogs. This app exists in real life – you can download it from the iOS App Store.
To build this app without machine learning, a developer would have to define every aspect of what makes an object a hot dog and then instruct the program to check every one of those factors with every photograph you show it. This would be frustrating, time-consuming, and infeasible, if not downright impossible.
With machine learning, a developer needs only to develop a system through which the program is rewarded for correctly identifying hot dogs and then feed tens of thousands of photographs of hot dogs into the algorithm. Eventually, the program will learn for itself what attributes it must look for.
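The reward-and-adjust loop described above can be sketched with a toy perceptron. Everything here is invented purely for illustration: the two "features" (a shape ratio and a redness score) and the numbers stand in for what a real image model would learn from tens of thousands of photos. No rules about hot dogs are written down; the program only gets told when it is wrong and nudges its weights accordingly.

```python
# Toy "hot dog" examples: (length/width ratio, redness) -> label (1 = hot dog).
# Features and values are made up for illustration only.
data = [((3.5, 0.8), 1), ((3.0, 0.9), 1), ((1.0, 0.2), 0),
        ((0.9, 0.7), 0), ((4.0, 0.7), 1), ((1.2, 0.3), 0)]

w = [0.0, 0.0]  # learned weights, one per feature
b = 0.0         # learned bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# "Reward" loop: instead of hand-coding what a hot dog is, nudge the weights
# whenever the guess is wrong, so the program discovers the rule itself.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)   # 0 when correct, +/-1 when wrong
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

accuracy = sum(predict(x) == label for x, label in data) / len(data)
```

After a few passes the weights separate the two classes on this tiny dataset; real systems do the same thing at vastly larger scale with deep neural networks.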
Combining Machine Learning with Coding AI
So far, we know that the greatest minds in tech can do more than successfully identify a hot dog. AI experts expect intelligent machines will be able to perform any intellectual task a human can by 2050.
So can the same layered, reward-based algorithm create its own code to solve problems? Microsoft thinks so, and has a working prototype of such technology in the form of DeepCoder.
When combined with an AI interface – which allows for a conversational, natural language interface between developers and their machines – the future of coding may not involve writing a single line of code.
Instead, a future developer may simply be the person best-suited to explain, in everyday language, the problem that coding AI can solve. The coding program would then draw on millions of examples of successful code to find and compile the right combination of instructions to meet the developer’s needs.
That doesn’t mean that coding will be obsolete, however. Having computers write code doesn’t mean humans will stop doing so. It only means that humans will have a much broader spectrum for solving previously intractable problems by communicating naturally with the computers they used to simply input commands for.
Keep your employees ahead of the curve with continuous learning. Discover the IT training that’s right for you. | <urn:uuid:73d33899-3805-4a5b-9ce7-77d98f76ac89> | CC-MAIN-2022-40 | https://centriq.com/blog/coding-ai-where-the-future-and-coding-intersect/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00170.warc.gz | en | 0.932485 | 673 | 3.4375 | 3 |
Random access memory, RAM, provides short term storage space for data and program code that a computer processor is in the act of using, or which it expects to use imminently. RAM is found in both SSD and HDD systems.
One of the key characteristics of RAM is that it is much faster than a hard disk drive or other long term storage device, which means that the computer is not kept waiting for data to process.
The simple answer to the question “why do you need RAM?” is: speed.
What is RAM and How Does it Work?
The name random access memory derives from the fact that, in general, any random bit of data can be read from or written to random access memory as quickly as any other bit. This contrasts with storage media such as spinning disks or tape, where the access speed depends on the exact location of the data on the storage media, the speed of rotation of the media, and other factors.
In fact, random access memory is not the fastest storage area that a computer has access to. The very fastest areas are the hardware registers built in to the central processing unit, followed by on-die and external data caches. However these areas are very small (because they are expensive) – often measured in bytes, kilobytes, or a few megabytes.
By contrast, a modern computer system will have random access memory storage measured in Gigabytes, while slower but cheaper longer term storage provided by hard drives or solid state drives is often measured in Terabytes.
Random access memory also differs from long term storage in that it is volatile. This means that it can only store data (or program code) while the computer is powered on. As soon as the power is cut all the data contained in random access memory is lost.
For that reason, random access memory is sometimes known as working memory, which is used while the computer is operating. Before switching off, all data that is to be retained must be written to non-volatile long term memory so that it can be accessed in the future.
Data stored in random access memory can be accessed far faster than data stored in the computer’s hard drive.
RAM’s Other Functions
As illustrated in the graphic above, the function of RAM is to provide fast temporary storage and workspace for data and program code, which includes both applications and the system’s operating system along with hardware drivers for each hardware device, such as hard disk controllers, keyboards and printers.
But because random access memory works very quickly compared to longer term storage, it is also used in other ways which take advantage of this speed.
One example is the use of random access memory as a “RAM disk.” This reserves random access memory space and uses it as a virtual hard disk drive. This is assigned a drive letter, and appears exactly like a conventional disk drive except that it works much faster.
In some circumstances this can be very useful, but the drawbacks are that the size of a RAM disk is limited, and using a RAM disk means there is less random access memory left for regular usage.
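On Linux the RAM-disk idea is easy to try without reserving one yourself, because the kernel ships a RAM-backed tmpfs filesystem mounted at /dev/shm. The sketch below writes and reads a file that lives in memory rather than on disk (it falls back to an ordinary temp directory on systems without /dev/shm):

```python
import os
import tempfile

# /dev/shm is a built-in RAM-backed (tmpfs) filesystem on Linux; files
# written there live in memory and vanish on reboot, just like a RAM disk.
ramdisk = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

path = os.path.join(ramdisk, "ramdisk_demo.txt")
with open(path, "w") as f:
    f.write("stored in RAM, not on disk")

with open(path) as f:
    content = f.read()

os.remove(path)  # tidy up; the data would be lost at power-off anyway
```

The same volatility caveat from the article applies: anything kept only on a RAM disk is gone when the power goes.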
Another use for random access memory is “shadow RAM.” In some operating systems (but not Windows), some of the contents of the system’s BIOS – which is stored in the system’s read-only memory (ROM) – is copied into RAM. The system then uses this copy of the BIOS code instead of the original version stored in ROM.
The benefit of this is speed: reading BIOS code from RAM is about twice as fast as reading it from ROM.
Types of RAM Memory
Random access memory chips are generally packed into standard sized RAM modules such as Dual In-line Memory Modules (DIMMs) or more compact Small Outline Dual In-line Memory Modules (SODIMMs) which can be inserted into a computer motherboard’s RAM module sockets.
The two most common forms of random access memory today are:
- Dynamic random access memory (DRAM), which is slower but lower cost
- Static random access memory (SRAM), which is faster but more expensive
Many people wonder about the differences between DRAM vs. RAM, but in fact DRAM is just a type of RAM.
SRAM vs. DRAM
To understand why SRAM memory is more expensive than DRAM memory, it is necessary to look at the structure of the two types of random access memory.
Random access memory is made up of memory cells, and each memory cell can store a single bit of data: either a zero or a one. Put simply, the cells are connected in a grid made up of address lines and perpendicular bit lines. By specifying an address line and a bit line, each individual cell is uniquely addressable.
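The address-line/bit-line grid can be modeled in a few lines. This is only a conceptual sketch (real RAM addressing involves row/column decoders and sense amplifiers), but it shows why any random cell is reachable in one step:

```python
# Sketch of a RAM cell grid: each one-bit cell is uniquely addressed by an
# (address line, bit line) pair, so any cell can be read or written directly.
ROWS, COLS = 4, 8  # 32 one-bit cells

grid = [[0] * COLS for _ in range(ROWS)]

def write_bit(address_line, bit_line, value):
    grid[address_line][bit_line] = value

def read_bit(address_line, bit_line):
    return grid[address_line][bit_line]

write_bit(2, 5, 1)  # store a 1 in the cell at address line 2, bit line 5
```

Unlike tape or a spinning disk, no cell is "closer" than any other: the access cost is the same wherever the bit sits in the grid.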
The difference between SRAM memory and DRAM memory is the structure of the cells themselves.
DRAM is by far the simpler of the two cell types, since it consists of just two electronic components: a capacitor and a transistor. The capacitor can be filled with electrons to store a one, or emptied to store a zero, while the transistor is effectively the switch which enables the capacitor to be filled or emptied.
A problem with this type of cell is that capacitors are leaky, so as soon as they are filled with electrons they begin to drain. That means that full capacitors which are storing ones have to be repeatedly refilled thousands of times a second. For that reason these random access memory cells are called “dynamic,” and a side effect is that they consume more electricity than their static counterparts.
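A toy model makes the refresh requirement concrete. The leak rate and threshold below are made-up numbers chosen for readability, not real DRAM parameters:

```python
# Toy DRAM cell: a stored 1 "leaks" charge each tick and is lost unless the
# refresh circuitry periodically tops the capacitor back up.
LEAK_PER_TICK = 0.2     # fraction of charge lost per tick (illustrative)
READ_THRESHOLD = 0.5    # below this charge, the cell reads back as a 0

def run(ticks, refresh_every=None):
    charge = 1.0        # cell written with a 1 (capacitor filled)
    for t in range(1, ticks + 1):
        charge -= LEAK_PER_TICK
        if refresh_every and t % refresh_every == 0 and charge >= READ_THRESHOLD:
            charge = 1.0  # refresh re-fills the capacitor before data is lost
    return 1 if charge >= READ_THRESHOLD else 0

without_refresh = run(10)                 # charge drains away: the 1 is lost
with_refresh = run(10, refresh_every=2)   # periodic refresh keeps the 1 alive
```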
To learn more about the differences between these technologies, see SRAM vs DRAM.
Understanding SRAM Cells
Static random access memory cells are far more complicated because they are built using several (usually six) transistors or MOSFETS, and contain no capacitors. The cell is “bistable” and uses a “flip flop” design. Put simply, this means that a zero going in to one half results in a one coming out; this is fed into the other side, where the one going in results in a zero coming out.
This is fed back to the start, with the result that the cell holds a fixed value until it is altered. This static state results in the name static random access memory. But it is important to note that, like dynamic random access memory, static random access memory is volatile and loses all data when it loses power.
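The cross-coupled feedback described above can be sketched as two inverters feeding each other. The model is deliberately simplified (a real cell uses access transistors and complementary bit lines), but it shows why the stored value holds without any refresh:

```python
# Two cross-coupled inverters: each side's output drives the other's input,
# so the pair latches a bit indefinitely (while powered) with no refresh.
def inverter(bit):
    return 1 - bit

def settle(q):
    """One feedback pass: Q -> inverter -> Q_bar -> inverter -> Q."""
    q_bar = inverter(q)
    return inverter(q_bar)

q = 1                     # write a 1 into the cell
for _ in range(1000):     # the loop models time passing; the value just holds
    q = settle(q)
```

Cut the power, though, and the state is gone, which is why SRAM is still volatile.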
The relative complexity of static random access memory means that it offers much lower storage density, which in turn makes it much more expensive per byte stored. However, the complex design is actually much faster than dynamic random access memory, so SRAM is generally only used to provide short term storage in caches (both on-die and external to the processor). In contrast, DRAM is generally used for what is commonly called random access memory.
How Much Random Access Memory is Best?
Most operating systems specify a recommended amount (or a minimum) of random access memory that a system needs to run the operating system. For example, the minimum RAM requirement for Windows Server 2019 is 512 MB and for Windows 10 it is 1 GB (32-bit version) or 2 GB (64-bit version).
But in order to run multiple applications, more RAM will almost certainly be required. Having inadequate random access memory resources will slow the system down and in some cases cause programs to crash or be unable to operate as required. In practice most of these systems will have at least 8 GB of random access memory installed.
The general rule of thumb about RAM is that more is better: a system with more random access memory will usually be able to run more applications at the same time and operate faster than a similar system with less random access memory.
But there are a few constraints. The most obvious constraint is cost, and a system that is only lightly used and works satisfactorily does not strictly need additional random access memory, although it may benefit from it. So it may not be worth spending additional financial resources on it.
Another constraint is the hardware of the system itself. That’s because every motherboard has a limit to the amount of random access memory that can be installed on it.
But in general, increasing the amount of random access memory in a system is one of the most cost effective ways of increasing performance.
RAM History, Trends and Future Developments
Early RAM technology involved cathode ray tubes (similar to those found in older monitors and television sets) and magnetic cores. Today’s solid state memory technology was invented towards the end of the 1960s.
Recent developments include the introduction of double data rate (DDR) random access memory, and successive generations of this technology including DDR2, DDR3, and DDR4. Each version of DDR is faster and uses less energy than the previous one. The standard for DDR5 is being finalized and the first DDR5 products are expected towards the end of 2019.
Next Generation RAM: Optane
The biggest change that is on the horizon is a new technology from Intel called 3D XPoint, branded as Optane.
3D XPoint is less expensive than DRAM, but somewhat slower. It therefore offers a low cost alternative to DRAM in systems that require a huge amount of random access memory, such as those running in-memory databases. Equipping such systems with DRAM may be prohibitively expensive, but 3D XPoint could provide adequate performance at much lower cost.
An additional benefit of 3D XPoint is that it is non-volatile, meaning that in the event of a system crash or power cut the system can be restarted much more quickly. This is because data does not have to be read back into memory from slower long term storage, and data loss is more easily avoided. | <urn:uuid:df55745c-29f6-4e57-b931-70aef97722b0> | CC-MAIN-2022-40 | https://www.enterprisestorageforum.com/hardware/ram/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00170.warc.gz | en | 0.948643 | 2,022 | 4.34375 | 4 |
- 1 Network video surveillance systems
System components: cameras, encoders, switches, routers etc. Also Power over Ethernet (PoE).
What light is and how a camera sees.
How to achieve usable images: camera placement, resolution, pixel density etc.
Factors that affect bandwidth consumption, and actions for lowering it.
- 5 Edge storage, input/output ports, analytics and thermal cameras
SD cards, failover recording (being able to record video footage despite network failure). Typical input and output devices. How analytics can be used to pinpoint specific events taking place. Thermal camera technology.
The most common types of attacks and how to mitigate them.
- 7 Maintenance and support
Basic troubleshooting, and how to reach Axis technical support.
At the end of the course, you can test your knowledge in an online quiz.
New Training: Configure VRRP in SD-WAN
In this 6-video skill, CBT Nuggets trainer Keith Barker explains and demonstrates how to implement VRRP as part of SD-WAN. Watch this new networking training.
Watch the full course: Cisco CCNP Implementing Cisco SD-WAN Solutions
This training includes:
25 minutes of training
You’ll learn these topics in this skill:
VRRP Feature Templates
Apply VRRP via Device Template
What is Virtual Router Redundancy Protocol (VRRP)?
Virtual Router Redundancy Protocol (VRRP) is a protocol that provides for redundancy in a network by switching to a backup router when the master router fails. The protocol gets its name because the master router and its backup routers share a virtual IP address. Using VRRP can increase the availability and reliability of your routing paths.
Under VRRP, the master and backup routers are all members of a redundancy group that forward the same packets as if there were only one router. But if the master router fails to do this, a new master is selected, and it then takes over the virtual MAC address of the previous master and forwards packets on behalf of it.
You can also use VRRP for load sharing.
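On classic Cisco IOS, a minimal VRRP pair looks roughly like the following. The interface names, addresses, and group number are purely illustrative; in Cisco SD-WAN the equivalent settings are defined in a VRRP feature template and pushed through a device template rather than typed per device:

```
! Master candidate (higher priority wins the election)
interface GigabitEthernet0/0
 ip address 10.0.0.2 255.255.255.0
 vrrp 10 ip 10.0.0.1          ! shared virtual IP that hosts use as gateway
 vrrp 10 priority 110
 vrrp 10 preempt

! Backup router for the same group
interface GigabitEthernet0/0
 ip address 10.0.0.3 255.255.255.0
 vrrp 10 ip 10.0.0.1
 vrrp 10 priority 100
```

Hosts point their default gateway at 10.0.0.1; if the master fails, the backup takes over that virtual address and its virtual MAC, so clients never need to be reconfigured.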
VRRP is an open standard defined by RFC 5798. It is similar to Cisco's Hot Standby Router Protocol (HSRP), which is a proprietary protocol that Cisco has claimed has been infringed by VRRP. | <urn:uuid:3d12d393-b609-4733-9743-1c26af58951a> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/new-skills/new-training-configure-vrrp-in-sd-wan | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00170.warc.gz | en | 0.893269 | 322 | 2.828125 | 3 |
What is a Reflection Amplification Attack?
Let’s start by defining reflection and amplification attacks individually.
A reflection attack involves an attacker spoofing a target’s IP address and sending a request for information, primarily using the User Datagram Protocol (UDP) or, in some cases, the Transmission Control Protocol (TCP). The server then responds to the request, sending an answer to the target’s IP address. This “reflection”—using the same protocol in both directions—is why this is called a reflection attack. Any server operating UDP or TCP-based services can be targeted as a reflector.
Amplification attacks generate a high volume of packets that are used to overwhelm the target website without alerting the intermediary. This occurs when a vulnerable service responds with a large reply when the attacker sends his request, often called the “trigger packet”. Using readily available tools, the attacker is able to send many thousands of these requests to vulnerable services, thereby causing responses that are considerably larger than the original request and significantly amplifying the size and bandwidth issued to the target.
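The arithmetic behind amplification is simple. The byte counts below are ballpark figures of the kind often quoted for large DNS responses, not measurements, but they show how a trickle of attacker traffic becomes a flood at the victim:

```python
# Rough illustration of the amplification factor: a small spoofed request
# triggers a much larger response aimed at the victim.
request_bytes = 60      # small spoofed query (illustrative size)
response_bytes = 3000   # large answer returned to the spoofed address

amplification_factor = response_bytes / request_bytes  # 50x in this example

# Every 1 Mbit/s the attacker sends becomes ~50 Mbit/s hitting the target.
attacker_mbps = 1
victim_mbps = attacker_mbps * amplification_factor
```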
A reflection amplification attack is a technique that allows attackers to both magnify the amount of malicious traffic they can generate and obscure the sources of the attack traffic. This type of distributed denial-of-service (DDoS) attack overwhelms the target, causing disruption or outage of systems and services.
The most prevalent forms of these attacks rely on millions of exposed DNS, NTP, SNMP, SSDP, and other UDP/TCP-based services.
What Are the Signs of a Reflection Amplification Attack?
Reflection amplification attacks are relatively easy to identify because they usually involve a large volumetric attack. Such attacks are indicated by a substantial flood of packets with the same source port to a single target. It is important to note that incoming packets rarely share the same destination port number, which is why this is a good indication of an attack. Attackers will often use multiple vulnerable services at the same time, combining these into extremely large attacks.
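That source-port signature is straightforward to check for in traffic records. The sketch below uses synthetic packet tuples and an arbitrary 50% threshold, purely to illustrate the idea; real detection systems baseline normal traffic first:

```python
from collections import Counter

# Each record is (source_ip, source_port, destination_ip) for an inbound
# packet. A reflection flood shows up as a large share of packets from one
# well-known service port (e.g. 123/NTP) toward a single destination.
packets = (
    [("198.51.100.%d" % i, 123, "203.0.113.9") for i in range(90)]        # flood
    + [("192.0.2.%d" % i, 40000 + i, "203.0.113.9") for i in range(10)]   # normal
)

port_counts = Counter(src_port for _, src_port, _ in packets)
top_port, top_count = port_counts.most_common(1)[0]

suspicious = top_count / len(packets) > 0.5  # threshold is an arbitrary example
```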
Why Are Reflection Amplification Attacks Dangerous?
Reflection amplification attacks are dangerous because the servers used for these types of attacks can be ordinary servers with no clear sign of having been compromised, making it difficult to prevent them. Attackers are attracted to reflection amplification attacks because they don’t require sophisticated tools to launch. These attacks require minimal effort to create enormous volumetric attacks by using a modest source of bots or a single robust server.
How Can Organizations Mitigate and Prevent Reflection Amplification Attacks?
The primary defense against reflection amplification attacks is to block the spoofed source packets. Because attacks come from legitimate sources, using trusted services such as DNS and NTP, it becomes difficult to tell the difference between genuine user workloads and reflected traffic generated by attackers. Adding to the challenge, when a service comes under attack, legitimate user traffic may be forced to retry responses due to the slowdown in service, possibly causing these retries to be falsely identified as DDoS attacks in their own right.
Organizations can take the following steps to mitigate reflection amplification attacks:
- One general DDoS mitigation strategy is to employ rate limiting, which can be applied to destinations or to sources, to prevent systems from being overwhelmed. Destination rate limiting may inadvertently impact legitimate traffic, making this a less desirable approach. Rate limiting the source is considered more effective. This approach restricts sources based on a deviation from a previously established access policy.
- Blocking ports that are not needed can reduce vulnerability to attacks. This does not prevent attacks on ports that are used by both legitimate and attacker traffic, however.
- Traffic signature filters can be used to identify repetitive structures that are indicative of an attack. The downside to such filtering may be its impact on performance. Inspecting every packet may ultimately overwhelm defenses.
- Threat intelligence services can help organizations identify vulnerable servers, allowing them to block the IP addresses of these vulnerable servers. This proactive approach can provide more precise mitigation. Netscout/Arbor publishes a set of AIF filter lists on a regular basis which contain up-to-date information on vulnerable servers which are actively being used as DDoS Reflectors.
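The source-based rate limiting described in the first point above is commonly implemented as a token bucket per source address. This is a minimal sketch (a real implementation would evict stale entries and run in the data path, not Python):

```python
class SourceRateLimiter:
    """Token bucket per source IP: steady traffic passes, bursts get dropped."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens replenished per second
        self.burst = burst            # maximum tokens a source can bank
        self.buckets = {}             # source_ip -> (tokens, last_timestamp)

    def allow(self, src_ip, now):
        tokens, last = self.buckets.get(src_ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[src_ip] = (tokens - 1, now)
            return True               # within policy: forward the packet
        self.buckets[src_ip] = (tokens, now)
        return False                  # over policy: drop the packet

limiter = SourceRateLimiter(rate_per_sec=10, burst=5)
# A source blasting 100 packets in the same instant only gets its burst through.
passed = sum(limiter.allow("203.0.113.7", now=0.0) for _ in range(100))
```

Because the limit is keyed on the source, a misbehaving reflector is throttled without penalizing every other sender the way destination-based limiting can.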
Viruses are not the only thing you need to worry about as every day, hackers invent new ways to wreak havoc for personal gain. From front-page attacks like ransomware to less obvious “grayware,” there are several types of malicious software programs and each one requires a unique defensive strategy. According to the Verizon 2021 Data Breach Investigations Report, last year, small organizations accounted for less than half the number of breaches that large organizations showed. This year these two are less far apart with 307 breaches in large and 263 breaches in small organizations.
5 tell-tale signs of malware infections
For decades, you have been trained to suspect a virus when your computer performs more poorly than usual. But as new types of advanced malicious software are released, hackers have made it harder to notice when something is amiss. Here are some lesser-known signs your computer has been infected.
If you notice any of these signs, shut down your computer immediately and contact an IT professional about stopping the malware’s spread.
- Your security software is mysteriously disabled.
- Filenames have changed for no reason.
- Unknown apps or browser toolbars have appeared.
- An unrecognized webpage pops up when you open a new browser window.
- Your email contacts are receiving strange messages from you.
Now, if you subscribe to managed IT services, unlimited tech support is included in your service. But for businesses that still rely on the “call IT repairmen after something breaks” model, malware prevention is going to be especially important.
Tips for avoiding the most common malware attacks against small businesses.
Full disclosure, the majority of cyberattacks are made possible by users who circumvent security software and hardware. “Phishing” (sometimes called social engineering) is when hackers disguise themselves as a trustworthy source, such as a bank employee, and ask for private information, such as a credit card expiration date. So, the best way to avoid almost any type of malware is employee training. But beyond that, there are some more black-and-white solutions.
WHAT ARE THEY?
Trojans are programs that seem benign to unsuspecting users but hide their true purpose.
HOW TO AVOID TROJANS
Since Trojans are disguised as seemingly harmless apps, a cautious mindset is your best form of defense. In other words, be careful when installing free software, even if it comes from a trusted source like the Google Play store. Forbidding employees from installing software that isn’t approved by your IT department is a good place to start.
WHAT ARE THEY?
Viruses were some of the first malicious programs ever created. When a file is opened that is infected with a virus, that virus can spread itself to other files and computers.
HOW TO AVOID VIRUSES
Because viruses can’t hide behind the guise of a useful program, they are usually distributed as documents attached to emails. In addition to regularly reminding your employees to be wary of attachments, you should have a high-end spam filter and email-based antimalware software, ideally with monthly audits from an IT staff.
WHAT ARE THEY?
Worms are malware that spread themselves without the need for any human action. They are standalone programs that exploit network security holes and, unlike viruses, worms don’t need to be opened or installed to work.
HOW TO AVOID WORMS
Because they spread via deeply rooted hardware and software vulnerabilities, the most important thing to do is to install vendor-issued updates and patches for apps, operating systems, and firmware. In a horrific real-world example, Microsoft patched the vulnerability that made WannaCry possible before the ransomware attack was released. The malware was so immensely successful only because so many people failed to update Windows.
WHAT ARE THEY?
Ransomware is set apart by its use of extortion and encryption. When a computer or server is infected, all its files are rendered unreadable until victims pay hackers a fee to return everything to normal.
HOW TO AVOID RANSOMWARE
Because it is based on unbreakable encryption, there is usually no recovering from a ransomware attack unless you have robust and secure backups stored somewhere safe from the spread of infection. Many off-the-shelf antimalware programs contain so-called ransomware protections, but struggle to recognize never-before-seen threats. Cloud-based backup services are inexpensive and ensure your data is always accessible regardless of the latest advancements in ransomware infections.
WHAT ARE THEY?
Grayware programs do not actively alter, steal, or destroy information, but still manage to cause problems. This type of malware slows down your computer, reveals your private information, and floods your computer with ads.
HOW TO AVOID UNWANTED APPLICATIONS
These unwanted applications often come installed on new computers or bundled in free software packages. Take the time to periodically factory-reset company-issued devices. Windows 10 includes a user-friendly “Refresh” feature that wipes everything from a computer except its documents and critical applications. Anyone should be able to wipe a mobile device, but an IT provider can do it in a fraction of the time.
A formula for putting a dollar value on your security needs.
It is clear to most business owners that IT security services are non-negotiable. Budgeting how much to spend on those services is not always as clear. Cybersecurity is not something you want to skimp on, but we’ll be the first to tell you that you shouldn’t give an IT provider carte blanche. Thankfully, there is a simple formula to make sure the funds you set aside for prevention never exceed the costs of a breach.
Annual breach costs = Number of incidents per year * Potential Loss per incident
It’s a simple equation, but the variables vary greatly depending on the location and industry of your business. For example, Kaspersky Lab estimates that the average small-business data breach costs $117,000, but that number could be 10x higher if you are in the healthcare industry.
So even if you experience incidents only every other month — which we assure you is woefully optimistic — you could justify an annual cybersecurity budget of just over $700,000 (6 events x $117,000)!
Netrix provides managed IT services rather than break/fix contracts. We charge a flat monthly fee and take care of everything related to cybersecurity. Software vulnerabilities are patched before they cause a breach, your inboxes are kept free of malware, and your firewalls are top of the line — all for less money than the costs of potential data breaches. | <urn:uuid:c453cdd6-791c-4f0a-bc5f-ee4d596f8b5f> | CC-MAIN-2022-40 | https://netrixglobal.com/blog/the-smb-guide-the-abcs-of-malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00170.warc.gz | en | 0.93531 | 1,398 | 2.6875 | 3 |
The Internet of Things (IoT) is already revolutionizing the way key industries do business, and the benefits are only set to increase over coming decades as IoT technologies are further adopted. According to Australia’s IoT Opportunity: Driving Future Growth – An ACS Report, with regards to the construction, mining, healthcare, manufacturing and agriculture sectors, which represent 25% of Australia’s GDP, IoT technologies have the potential to achieve annual benefits of A$194-308 billion over a period of 8-18 years. That is an average productivity improvement of 2% per annum.
Let’s take a closer look at how IoT is set to revolutionize these key Australian industries.
The construction industry is set to benefit up to $96 billion over coming decades due to increases in productivity resulting from IoT. Technologies such as Building Information Modelling (BIM), sensors, automation, and 3D printing are all set to have an increased presence in construction sites of the future.
The predicted benefits for the Australian manufacturing industry over coming decades are up to $88 billion, despite the industry already being the most advanced regarding IoT adoption. Factories of the future may be remotely controlled and even connected allowing for real-time supply chain management. There will also be the increased adoption of sensor technology for monitoring and maintenance.
The healthcare industry could reap benefits of up to $68 billion in the coming decades as it takes advantage of IoT technology. ‘Smart Hospitals’ are the future, where service is more personalized and technologies such as 3D printing, robotics, nanotechnology and genetic coding are employed. Additionally, the use of wearable technologies by patients will reduce the number of visits to their GP and allow for remote access to real-time data.
Benefits of up to $34 billion could be achieved in coming decades by the mining industry as it adopts IoT technology. Sensors providing real-time visualizations of data and collaboration, and also the use of autonomous vehicles will increase the productivity of the sector, and are already employed by industry leaders.
Agriculture, Forestry, and Fishing
‘Smart farms’ are set to offer farmers increased yields at lower costs, with annual predicted benefits of up to $22 billion. Increased productivity will be the result of technologies including autonomous vehicles, sensors for crops, tracking on livestock, automation, and drones.
The five industries discussed are predicted to reap the significant benefits from IoT. However, they do not represent the limits of the reach of IoT technologies. While at its core IoT is a simple connected device, the broader impact of IoT technologies is an economic and social good, whereby there are not only improvements to capabilities and productivity, but more broadly improvements to everyday life and the planet.
Many of us have a wonderful image of summer as a carefree, happy time spent where “kids can be kids”; we take for granted the prospect of enriching experiences such as summer camps, time with family, and trips to museums, parks and libraries. Unfortunately, there are a number of young students who face summer months that are anything but idyllic. When the school doors close at the end of the school year, these children struggle to access educational opportunities as well as basic needs such as healthy meals and adequate adult supervision.
The result is often referred to as “seasonal brain drain” or “summer learning loss.” As most educators know, it’s a pretty common occurrence among children when they return to school after summer vacation. In fact, statistics show that over the summer break, most students lose the equivalent of one to nearly three months of learning. Having this amount of knowledge erased from children’s minds leads to an academic setback that can take weeks, and in some cases months, to remedy after the school bell rings in the fall.
The effects of summer learning loss are both well documented and devastating. In the case of at-risk students, this backsliding is more dramatic and is a significant contributor to the achievement gap. So, last summer, my team of 32 dedicated teachers and assistant teachers orchestrated a program with a focus on boosting the academic development of 500 at-risk students in grades 3–8 from 24 of our district’s lowest performing Title I schools.
We did this through our aptly named “5th Quarter Innovation Zone Summer Program.” The program ran for 80 hours over four weeks. Full breakfast and lunch was provided daily, and we held four classes: reading, writing, mathematics, and a special area rotation. The special areas included everything from art, physical education, environmental studies, readers’ theatre and drum playing.
While none of these students were designated as needing individualized education programs (IEPs), each student did require intensive and targeted instruction to address individual deficiencies in mathematics, reading, or writing to advance to the next grade level.
The 5th Quarter Innovation Zone Summer Program focused on closing the achievement gaps of our students, engaging them in successful reading experiences and demonstrating the impact of innovative summer academic programs. Everything we did was designed to build a sense of confidence in our students and a shared responsibility among the staff to provide the best resources and support to help those students. Our focus was a strong curriculum and good teaching coupled with plenty of literacy activities – purposeful reading as well as writing and discussion. We wanted to make sure all the students were properly challenged and engaged in rigorous instructional activities that matched the requirements of state assessments.
When it came to our elementary students, we selected a technology-based reading program for students of all abilities in pre-kindergarten through grade five. The program, Lexia Reading Core5, is designed to accelerate fundamental literacy skills by addressing all strands of reading with 18 levels of age-appropriate, skill-specific activities. All activities are aligned to the most rigorous state standards. Equally important to us was that this program simplifies differentiated instruction, enabling at-risk students to close their personal reading gap more quickly and enjoy success more consistently through reading experiences specifically designed to meet each student at just the right level.
While in the dedicated 55-minute reading portion of the summer school day, our students progressed through three rotations: small-group lessons, teacher-led instruction, and seat work using the reading program on iPads. My staff liked how the program prescribed the number of minutes each student needed to be on the program to close their skill gap and progress towards meeting grade level requirements. It ensured that their students made progress very quickly.
The effectiveness of the reading effort was demonstrated by the outcomes: 441 students used the reading program and completed more than 40,000 units. Most students (416) used it for at least an hour a week, with about half using the program for over 90 minutes a week. In all, 266 students earned a certificate for completing a level and an additional 19 students earned two certificates. For 100 of those students, earning their certificate meant progressing to the next grade level of material in the reading program; for example, advancing from first grade skills to second grade skills.
The growth in our writing and math programs was also strong, with 77 percent of our third graders and 76 percent of our fourth graders increasing their writing scores by at least one grade level. In math, our third graders averaged a nearly 15 percent increase from pre-test to post-test scores while fifth graders averaged a nearly 29 percent increase.
There was even more good news — the learning wasn’t just done by our students. We now have a better understanding of the fact that summer education needs to be a community-wide commitment. Schools and communities need to ensure that there are quality programs like the 5th Quarter Innovation Zone available to all students, no matter their socioeconomic status.
Public agencies, community organizations, and local schools and universities need to join forces in an effort to address summer learning loss. With a collaborative effort, the quality and accessibility of programs will increase and students will see the direct benefit when school is back in session in the fall. Everything we used, from tech tools to the wide variety of other special activities, was about extending student learning opportunities beyond the school classroom and closing skill deficiencies. And the results leave no doubt that the out-of-school-time strategies we employed have had positive effects on the achievement of low-achieving and at-risk students.
Exam A

QUESTION 1
Click the Exhibit button.
You have set up a timeline and menus with features shown in the three Properties panel views in the exhibit. Timeline #1 has only one chapter point. When a viewer clicks the Chapter Menu – Chapter 1 button, Timeline #1 will play starting at Chapter 1 and continue to the end of Timeline #1.
What will happen next?
A. Timeline #2: Chapter 1 will start to play.
B. Timeline #2: Chapter 2 will start to play.
C. The DVD will return viewers to the main menu.
D. The DVD will return viewers to the chapter menu.
Correct Answer: C

QUESTION 2
You have ten separate timelines that you want to play in different orders from different buttons.
You want button one to play the odd numbered timelines and button two to play the even numbered timelines. What should you do?
A. use two different play lists and assign one to each button
B. use a combination of end actions and overrides on the buttons
C. use a combination of end actions and overrides on the timelines
D. use two different chapter play lists and assign one to each button
Correct Answer: A

QUESTION 3
What happens when you set the Video property in the Motion tab of the menu Properties?
Having the user interface and the terminal window for access to the command line means that there are a couple of ways to navigate through Linux and do what needs to be done. In previous examples of using the command line, files were edited, users were added, and parameters were set up. In managing some of the Oracle files and directories, it is useful to know some of the basic commands or how to look up the option for the commands. Changing directories, copying and moving files, editing, and looking at the content of the file are all basic actions in Linux (and the commands are almost the same as what is used in Unix, with a couple of possible differences in the parameter options). The following are some useful Linux commands, with a brief definition:
- pwd: Prints the current working directory.
- more filename: Displays the contents of a file one screen at a time.
- ls: Lists the files in the current directory.
- echo $VAR: Shows the value of a variable, or echoes back the given text.
- mv filename newfilename: Renames (moves) a file.
- cp filename /newdirectory: Copies a file to another directory.
- rm filename: Removes (deletes) a file; wildcards can be used but are not recommended in a root directory.
Manual pages are available to provide details for commands as well as available options. The man pages also provide examples for how to use the commands in the details of the page. This information can be accessed by typing man and then the command. The following is an example of the command, and Figure 1 shows the results of this command and what can be found in the man pages.
Database management systems allow us to organize our data to easily find the information we need when we need it, even if the data doesn’t have any sort of order on its own. These systems are essential tools in today’s technological world, powering large and small businesses alike with their ability to store data and retrieve it when needed.
What is a DBMS?
A database management system (DBMS) is software that provides methods to create, manage, and access a large volume of data. These applications help automate processes such as adding new entries, modifying existing entries, and deleting entries when necessary.
Database management systems also serve to help users find information more quickly and efficiently. Users no longer need to spend time looking through hundreds of documents one at a time—they can now see what they’re looking for with just a few clicks.
How does a database management system work?
The primary function of a database management system is to provide users with access to stored data. Therefore, a DBMS must allow users to add new information, modify existing information, and delete old data.
In addition, a DBMS must ensure that only authorized users have access to any given piece of information. For a DBMS to perform these functions, it needs structure and organization that allows users to retrieve specific types of data based on certain criteria.
For example, if users wanted all accounts associated with customers who live in New York City, they could simply enter “New York” into a search field and return all matching records from the table. Once a database has been organized according to the organization’s specific needs, developers can begin creating applications around it.
What are the basic concepts and features of DBMS?
The fundamental concepts and features of a DBMS include data models, query languages, file organization and indexing, normalization, candidate keys, and key fields.
A data model is an abstract representation of a database system. It is used to design and implement a database or define its schema—the structure and organization of how data is physically stored.
Data models are designed using a methodology called conceptual modeling. However, most data models are based on at least one formal model, such as entity-relationship modeling.
Just as programming languages are used to create software applications, DBMSs have their own specific languages which database administrators use to create databases. They’re generally called query languages, and they allow users to search and manipulate data stored in databases. The most commonly used query language is structured query language (SQL).
SQL is the standard language for database management and there are five widely used SQL sub-languages, they include:
- Data definition language (DDL)
- Data manipulation language (DML)
- Data control language (DCL)
- Transaction control language (TCL)
- Data query language (DQL)
Other query languages include NoSQL and XQuery. Each query language has its own syntax and capabilities, but they all follow similar principles. Each allows you to retrieve data from a database table or view, modify it if necessary, add new records to an existing table or view, and remove unwanted records.
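As a minimal sketch of how these sub-languages divide the work, the snippet below runs one DDL statement, several DML statements, and a DQL query against an in-memory SQLite database. The table and data are invented for illustration, and SQLite has no user-level DCL statements such as GRANT, so DCL and TCL are omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the schema.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# DML: add and modify rows.
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Ada", "New York"))
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Grace", "Boston"))
cur.execute("UPDATE customers SET city = ? WHERE name = ?", ("New York", "Grace"))

# DQL: retrieve rows matching a condition, like the "New York" search
# described earlier in this article.
cur.execute("SELECT name FROM customers WHERE city = ?", ("New York",))
print([row[0] for row in cur.fetchall()])  # → ['Ada', 'Grace']
```

The same SELECT/INSERT/UPDATE syntax carries over, with minor dialect differences, to other SQL-based systems.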
File organization and indexing
Database file organization is required for better storage space utilization, reduction in access time, and faster retrieval.
The two levels of database files are index files and data files. Index files contain indexing structures that define data locations for faster retrieval when searching for specific records within a table structure. Indexing structures in DBMS include B-tree and other types of balanced trees, hash tables, bitmaps, etc. Data files store both fixed-length records and variable-length records.
Normalization is a process that eliminates redundant data and ensures that relationships between different records in a database make sense. Normalizing data in a database involves breaking down related tables into multiple tables based on business rules. Breaking up related tables into separate entities allows us to store data in more efficient ways while also helping to ensure consistency across multiple tables. This separation also makes updates easier and more reliable.
Normalization is an important part of designing database schemas because it helps to prevent data redundancy and avoidable issues like database corruption. Additionally, it helps make databases easier to update and improves query performance.
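To make the idea concrete, here is a small, hypothetical normalized schema: customer details live in one table and orders reference them by key, so updating an email address cannot leave stale, redundant copies behind. The tables and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Instead of repeating customer details on every order row (redundant),
# each entity gets its own table, linked by a foreign key.
cur.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        item        TEXT NOT NULL
    );
""")
cur.execute("INSERT INTO customers (name, email) VALUES ('Ada', 'ada@example.com')")
cur.execute("INSERT INTO orders (customer_id, item) VALUES (1, 'laptop')")
cur.execute("INSERT INTO orders (customer_id, item) VALUES (1, 'mouse')")

# The email is stored in exactly one row, so one update fixes every order.
cur.execute("UPDATE customers SET email = 'ada@new.example.com' WHERE customer_id = 1")
cur.execute("""SELECT o.item, c.email FROM orders o
               JOIN customers c ON c.customer_id = o.customer_id""")
print(cur.fetchall())
```

In the unnormalized alternative, the same update would have to touch every order row, and missing one would corrupt the data.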
In a relational database, a candidate key is a minimal set of one or more attributes that uniquely identifies each row in a table. A table may have several candidate keys, one of which is chosen as the primary key. Candidate keys are composed of unique attributes whose values should not change once the data is stored.
A key field, or primary key, is a unique identifier for each row in a database table. A table can have only one primary key, and its values must be unique across all rows of that table. A primary key can be a natural key drawn from the data itself, or an auto-incrementing field, where the value is automatically incremented for every new record added to the table.
Types of DBMS models
The most common types of database management systems include relational, distributed, hierarchical, client-server, and network models.
The relational database management system (RDBMS) is a database model that organizes data in tables. Each table consists of rows and columns with cells containing data items, also called fields.
An RDBMS provides facilities for defining, storing, retrieving, and modifying structured information. Structured data can be stored in multiple ways, such as lists, files, or documents; however, it is often stored within an RDBMS as a collection of interrelated tables.
A distributed database is a collection of logically linked databases that appear to users as a single, integrated database. The information within these individual databases may be physically stored in different locations across a network, but it seems part of one unified whole.
This allows for greater flexibility and scalability when dealing with large amounts of data. Distributed database management systems allow multiple computers or nodes to access shared data simultaneously, often over a common network. Each node can update its copy of data while other nodes have access to all copies at once.
A hierarchical database is a type of database in which each record has a set of fields and values organized into levels and sub-levels. Hierarchical databases store information by separating it into related groups called sets.
The highest level, also known as the root, contains all available information. As you move down to lower levels, more specific subsets appear. The number of records at each level reflects the amount of detail about each item.
A client-server database is a particular model wherein the database resides on a server, and users access it from their workstations. This configuration allows for multiple users to access data simultaneously and means that there are fewer servers for companies to maintain.
It’s important to note that a client-server database can be centralized or decentralized. Centralized systems have all information stored in one place, while decentralized systems allow different parts of an organization to keep their databases separate.
A network database model is based on a network data model that allows each record to have multiple parents and multiple child records. Network databases enable users to build a flexible model in which entities can be related in many different ways.
Business benefits of DBMS
A DBMS can help organizations achieve greater efficiency in their operations by more effectively managing data across multiple applications, including different departments within an organization. Database management systems also help companies avoid information redundancy by consolidating various sources of data. The better a company can manage its data, the more easily it can adapt to changing market conditions and make well-informed business decisions.
Data storage and management are critical aspects of any business operation. Therefore, businesses need effective tools for storing and managing the data they can use across all facets of their organization. With a robust DBMS, companies have more control over how they store, access, share, and secure data; as a result, they benefit from increased organizational productivity while reducing costs related to IT infrastructure maintenance.
The process of establishing an inventory of authorized software programs or executable files allowed on a computer system is known as application whitelisting.
Instead of deploying resources to mitigate a cyber-attack, using whitelisting, IT discovers the malicious program beforehand and blocks its access. IT builds a list of authorized applications that can be pushed to users’ computers or mobile devices. This ensures that whatever users have access to has been approved by the administrators.
Whitelisting vs blacklisting
Blacklisting is the simple opposite of whitelisting and unlike application whitelisting, blacklisting works by explicitly restricting access to the specific websites that are blocked by the IT. This is a simple way of blocking out known malware. On the other hand, with application whitelisting the IT admin authorizes apps that are deemed safe and the user gets access only to those specific applications.
Which is better?
Both blacklisting and whitelisting have their own set of pros and cons. The people in favor of blacklisting argue that it is a very tedious and complex task to outline all the websites or applications a user might need to perform his/her set of tasks. Maintaining this whitelist is tough because of the increasing complexity and interconnectivity of corporate processes and apps.
People in favor of whitelisting, on the other hand, argue that all this effort is worth the advantages gained by proactively protecting systems from malicious or inappropriate programs. Whitelisting is considered the stronger security protocol: unlike blacklisting, where a program is blocked only once the system recognizes it as malicious, whitelisting proactively blocks every program that is not registered in the system. This protects the system even from new malware that no blacklist has seen yet.
Risks of using application whitelisting
Attackers can easily replace whitelisted apps with harmful apps by generating a malware version that is the same size and has the same file name as an authorized program, then replacing the whitelisted app with the malicious one. Although you can combat this issue by using cryptographic hashing and other ways to verify the authenticity of the app, it still remains a sizeable risk. Application whitelisting should be used in addition to standard and emerging security technologies to ensure security from modern threats.
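As a concrete sketch of the hash-based verification mentioned above, the snippet below whitelists executables by SHA-256 digest rather than by file name or size. The digest list and file contents are invented for illustration; a real deployment would source the list from a managed, signed whitelist:

```python
import hashlib

# SHA-256 digests of approved executables (illustrative values only).
APPROVED_HASHES = {
    hashlib.sha256(b"trusted-app-binary-v1").hexdigest(),
}

def is_whitelisted(executable_bytes: bytes) -> bool:
    """Deny by default: allow a file to run only if its digest is on the list."""
    return hashlib.sha256(executable_bytes).hexdigest() in APPROVED_HASHES

# Renaming malware to match a whitelisted file's name, or padding it to the
# same size, does not help the attacker: the digest still differs.
print(is_whitelisted(b"trusted-app-binary-v1"))       # → True
print(is_whitelisted(b"malicious-binary-same-name"))  # → False
```

Matching on name or size alone would pass the disguised file; the digest check does not.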
Advantages of application whitelisting
Keeps malware and ransomware away
Many phishing and malware attacks rely on an attacker’s ability to download and run malicious programs on a victim’s computer. Organizations with strong data and security governance can use an application whitelist to limit applications to those that have been pre-approved by the company.
Users frequently attempt to install insecure or unlicensed software on their computers. Even if the intention isn't harmful, these applications can end up harming the end user and potentially serve as a gateway to the entire company's database. But if those applications aren't on the whitelist, users won't be able to install them, and IT departments will be notified right away.
The administrators are the ones who make the decisions in the application whitelisting procedure. As a result, they will decide which applications will be added to the whitelist and will be able to run on an endpoint, making the system safer. If any end-user was allowed to participate in the decision-making process, security breaches may occur because an ordinary end-user could unintentionally run any software, whether malicious or not.
Depending on the reporting capabilities of an application whitelisting solution, the company may be able to figure out which users are partaking in unsafe conduct. Some application whitelisting technologies can generate reports that show which users have attempted to install or execute unlicensed apps, as well as any malware found.
Since no bloatware is installed on the machines besides the corporate apps needed for users to do their work, the system only has to run a limited number of programs, improving system speed.
Limited IT assistance
Post whitelisting, there is only a limited number of apps that a user can install. This reduces the work of IT admins, as they needn't worry about new app installations and can attend to other important tasks.
Another advantage of adopting application whitelisting is that it boosts workplace productivity. Your workers will be restricted to client-approved apps, which means their sole emphasis will be on work, boosting productivity.
Challenges in application whitelisting
The impact of application whitelisting on the end user is one of the most significant concerns. Under a deny-by-default approach, a program must first be whitelisted before a user is allowed to run it. This method might be inconvenient in some firms, causing workflow delays that irritate employees.
The interweaving of the application whitelist management and patch management procedures is the major problem with application whitelisting. Program patches will lead whitelisting software to stop recognizing the patched application as genuine unless an organization has a plan in place to deal with the patch management process.
As stated above, whitelisting is a solid way of securing your corporate data, but choosing the right program to whitelist the application is a huge task in itself. Hexnode UEM is a unified endpoint manager which is adopted by businesses to manage endpoints from a centralized remote console. Hexnode also supports all the major OS. With Hexnode, not only do you gain access to application whitelisting but also to its fleet of features like device restrictions, password policies and app configuration among others. Hence, having Hexnode is the right step towards a cyber-safe future.
As more businesses adopt proactive security strategies, application whitelisting is gaining traction as a legitimate security technique. Application whitelisting is very beneficial for many businesses when used in conjunction with other traditional and sophisticated security procedures.
Today’s K-12 digital education environment opens the door wide to cyber-based threats such as ransomware and malware. Students are connecting to cloud-based learning platforms, video collaboration tools, and the Internet from their school-issued Chromebooks and personal devices to attend class and do their homework. All of this occurs outside of the school’s traditional security perimeter, placing students at greater risk.
Cyber criminals specifically target the education industry because it handles and stores large volumes of sensitive student data. According to the K-12 Cybersecurity Resource Center, school districts are responsible for safeguarding data for more than 50 million U.S. students and their families. A child’s personally identifiable information (PII), like their social security number, is particularly valuable because a child’s identity is “fresh” and nefarious activity is less likely to be detected.
The Lookout Security Platform delivers a unified, converged solution purpose-built for K-12 education. It provides the comprehensive visibility and protection that keeps students and their families safe, and helps teachers and administrators comply with the Family Educational Rights and Privacy Act (FERPA).
Weak Maritime Cybersecurity Creates Environmental Risks
by Anastasios Arampatzis
Modern ships carry onboard a plethora of IoT sensors to optimize performance and reduce fuel consumption. Besides those performance-driven sensors, ships are equipped with IoT sensors to monitor oil spills which communicate via satellite communications to send imagery and data back to the shore. Implementing weak cybersecurity measures to protect the integrity of these devices can create significant environmental risks and even severe ocean life damage.
Oil spills threaten oceanic ecosystems
Marine oil spill pollution poses a serious threat to the ecology of the world's oceans. Thousands of tons of oil are spilled into the oceans every year due to both human-centric causes, such as tanker accidents, ruptures of rigs and pipelines, or malfunctioning oil extraction platforms, and natural events, such as natural seepage from seabed oil structures.
Oil spills constitute a serious environmental and socio-economic problem. Oil spill surveillance is an important part of oil spill contingency planning. Accurate detection and forecasting of oil spills and their trajectories is beneficial to fisheries, wildlife, resolving disputes related to liability, and resource management for monitoring and conservation of the marine environment. Oil spill monitoring is one of the most important applications for operational oceanography. The different means of detection and monitoring oil spills are vessels, aircrafts, and satellites.
The disaster at BP’s Deepwater Horizon rig in 2010 was the largest marine oil spill in history, spewing an estimated 4.9 million barrels of oil into the Gulf of Mexico. The oil leak was discovered two days after the initial explosion on the afternoon of April 22nd and flowed for a total of 87 days. Tracking the movement of the oil and identifying its concentration were two challenges immediately faced by the cleanup operation. In those scenarios, situational awareness is key to damage mitigation.
Which is where the IoT comes in handy today.
IoT oil spill sensors
IoT sensors are deployed in numerous vessels and oil extraction platforms, combining a variation of technologies to detect oil spills and minimize the damage to the oceans’ life. Two of the most common technologies employed rely on infrared detection and laser fluorosensors.
Infrared sensors detect multiple indicators simultaneously, only triggering alarms when the “right” combination is identified. These indicators relate to the nature of oil and include wave height and behavior, surface characteristics and drift, the reflectance, absorbance and the resulting contrast between oil and water.
On the other hand, laser fluorosensors can operate as well during full day-light conditions as it does at night. Laser fluorosensors are useful instruments because of their unique capability to identify oil on backgrounds that include water, soil, weeds, ice and snow. They are the only sensors that can positively discriminate oil on most backgrounds.
The capabilities of these IoT multispectral sensors give first responders all the information they need to identify the category, concentration, and type of oil in real time. That information can then be used to make informed decisions on the best way to handle the leak and disperse the oil.
Securing IoT oil spill sensors
However, integrated IoT oil spill detection sensors are vulnerable to many potential cyberattacks. A typical cyberattack tends to jam sensor signals and reports wrong detection results, which can adversely impact the source detection of petroleum leaks. With inaccurate information, the solution performance of the IoT service is severely influenced.
IoT oil spill detection sensors fall under the category of Cyber Physical Systems (CPS). CPS and IoT play an increasingly important role in critical infrastructure, government and everyday life, which makes them an attractive target for security attacks for various purposes including economical, criminal, military, espionage, political and terrorism as well.
The consequences of unintentional faults or malicious attacks could have severe impact on human lives and the environment. Proactive and coordinated efforts are needed to strengthen security and reliance for CPS and IoT.
CPS security threats can be classified as cyber or physical threats, as explained below, and if combined, these can result in cyber-physical threats. The main cyber threats are wireless communications exploitation and jamming, unauthorized access to sensors to intercept, manipulate or disclose information, and GPS jamming and exploitation. On the other hand, physical threats are damage or loss, intentional or unintentional.
These threats are realized by exploiting known network and platform vulnerabilities, such as weaknesses in protecting and encrypting communications and data in transit, resulting in man-in-the-middle attacks, spoofing and eavesdropping, and vulnerabilities in software and databases.
Mitigating these risks and threats starts with risk identification and management to identify, analyze, rank, evaluate, plan and monitor any possible risk through risk assessment. However, the foundation of any risk assessment is having visibility into the assets deployed.
Selecting the appropriate security measures to protect these IoT devices should comply with the security requirements of confidentiality, integrity, availability and reliability. In addition, those systems need to be resilient to withstand both cyber and physical accidents and malicious attacks. The adoption of security measures has many benefits when it comes to protecting CPS ecosystems. However, despite these advantages, IoT and CPS systems can be impacted by the application of these security measures. The following concerns and challenges should also be considered when selecting security controls:
- Reduced performance
- Higher power consumption
- Transmission delays
- Compatibility issues
- Operational security and safety delays
Maintaining a secure CPS environment is not an easy task due to the constant increase of challenges, integration issues and limitation of the existing solutions including the lack of security, privacy and accuracy. Nonetheless, this can be mitigated through different means including cryptographic and non-cryptographic solutions (i.e IDS, honeypots and deception techniques, and firewalls).
CPS and the convergence of Digital & Physical Security
The new age of connectivity and automation creates tremendous opportunity in many business sectors. Without considering adequate digital security controls, smart infrastructures & connected devices/systems, can be vulnerable to potential cyber incidents. The potential of a combined physical and cyber-attack represented possible threat scenarios that could impact industries and enterprises.
The new age of connectivity demands a holistic approach to Security Risk Management. The convergence of Digital & Physical Security, in terms of processes, technology & roles, needs to become the new era in Security Risk Management. The convergence is about how the entire process of handling any type of security incidents that are considered to be major is taken care of.
Risk, regardless of whether it results from physical or logical security weakness, needs to be expressed in terms that are meaningful to business, and at the same time to be managed holistically.
How ADACOM can help
CPS systems are key components of Industry v4.0, and they are already transforming how humans interact with the physical environment by integrating it with the cyber world. More specifically, IoT based oil spill detection solutions can greatly advance the preservation of oceanic ecosystems and help minimize the impact of such events. Securing these systems in the maritime environment requires careful consideration and planning.
ADACOM can help shipping organizations be resilient against cyber incidents and data breaches through a comprehensive risk management and cyber security technology adoption program, which includes the following:
- Identify, evaluate and propose treatment for the cyber security related risks
- Define and develop the information security management system in compliance with the international requirements
- Maximize the effectiveness and the adoption of the required Information Security controls in both Company premises and Vessels.
- Adoption of the required cyber security technology such as, endpoint protection, threat protection, privileged access management, identity management.
You may learn more by contacting our experts.
If you wish to learn more about oil spill detection and how to secure cyber physical systems, the following resources can be useful:
Pilžis, Vaišis, Oil Spill Detection with Remote Sensors, 2016, available at https://www.researchgate.net/publication/310622651_OIL_SPILL_DETECTION_WITH_REMOTE_SENSORS
Fingas, Brown, A Review of Oil Spill Remote Sensing, 2018, available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5795530/
Yaacoub, Salman, Noura, Kaaniche, Chehab, Malli, Cyber-physical systems security: Limitations, issues and future trends. 2020, available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7340599/ | <urn:uuid:0ba498de-449e-4c46-86dd-2802417f0ec7> | CC-MAIN-2022-40 | https://www.adacom.com/news/press-releases/weak-maritime-cybersecurity-creates-environmental-risks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00371.warc.gz | en | 0.924822 | 1,763 | 3.15625 | 3 |
In this blog post
Is the end-user at the center of everything you do? Do you consider human emotions while conceptualizing a product or a solution? Well, let us open the doors of Design Thinking
What is Design Thinking?
- Design thinking is both an ideology and a process, concerned with solving in a highly user-centric way.
- With its human-centric approach, design thinking develops effective solutions based on people’s needs.
- It has evolved from a range of fields – including architecture, engineering, business – and is also based on processes used by designers.
- Design thinking is a holistic product design approach where every product touch point is an opportunity to delight and benefit our users.
Human Centred Design
With ‘thinking as a user’ as the methodology and ‘user satisfaction’ as the goal, design thinking practice supports innovation and successful product development in organizations. Ideally, this approach results in translating all the requirements into product features.
Part of the broader human centred design approach, design thinking is more than cross-functional; it is an interdisciplinary and empathetic understanding of our user’s needs. Design thinking sits right up there with Agile software development, business process management, and customer relationship management.
5 Stages of Design Thinking
- Empathize: This stage involves gathering insights about users and trying to understand their needs, desires, and objectives.
- Define: This phase is all about identifying the challenge. What difficulties do users face? What are the biggest challenges? What do users really need?
- Ideate: This step, as you may have already guessed, is dedicated to thinking about the way you can solve the problems you have identified with the help of your product. The product team, designers, and software engineers brainstorm and generate multiple ideas.
- Prototype: The fourth stage brings you to turn your ideas into reality. By creating prototypes, you test your ideas’ fitness.
- Test: You present the prototype to customers and find out if it solves their problem and provides users with what they need. Note that this is not the end of the journey; you need to get feedback from the users, adjust the product’s functionality, and test it again. This is a continuous process similar to the build-measure-learn approach in the lean start-up methodology.
Benefits of Design Thinking in Software Development
1. Feasibility check: Design thinking enables software development companies to test the feasibility of the future product and its functionality at the initial stage. It enables them to keep end-user needs in mind, clearly specify all requirements and translate all this into product features.
2. No alarms and no surprises: Once you’ve tested your MVP and gathered feedback from users, the team can confidently proceed to the product development. You can be quite sure that there will be little to no difference between the approved concept and final version.
3. Clarity and transparency: Design thinking approach allow product designers/developers to broaden their vision, understand and empathise with the end-users’ problems and have a detailed blueprint of the solution they should eventually deliver.
4. Continuous improvement: The product can be (and sometimes should be) modified after its release when user feedback is at hand. It becomes clear which features work and which can be done away with. The product can undergo some series enhancements on the basis of feedback. This leaves place for continuous improvement and software development process becomes flexible and smooth.
Real-world Success Stories
During Indra Nooyi’s term as PepsiCo’s CEO, the company’s sales grew 80%. It is believed that design thinking was at the core of her successful run. In her efforts to relook at the company’s innovation process and design experience, she asked her direct reportees to fill an album full of photos of what they considered represents good design. Uninspired by the result, she probed further to realize that it was imperative to hire a designer.
“It’s much more than packaging… We had to rethink the entire experience, from conception to what’s on the self to the post product experience.”, she told the Harvard Business Review.
While other companies were adding new flavours or buttons to their fountain machines, PepsiCo developed a touch screen fountain machine, a whole new interaction between humans and machines.
“Now, our teams are pushing design through the entire system, from product creation, to packaging and labelling, to how a product looks on the shelf, to how consumers interact with it,” she said.
Back in 2009, Airbnb’s revenue was limping. They realized that poor quality images of rental listings may have something to do with it. They flew some of their employees to a city and got them to take high quality photos and upload it on their website. This resulted in a 100% increase in their revenue.
Instead of focusing on scalability, the team turned inward and asked, ‘what does the customer need?’ This experiment taught them a few big lessons, empathy being just as important as code was one of them.
Mint.com is a web-based personal financial management website. Part of their success is attributed to the human-centric design of the website which tracks and visualizes how a person is spending their money. Bank accounts, investments, and credit cards can easily be synchronized on Mint, which then categorizes the expenses to help the user visualize their spending. They built a product that illustrates a core principle of design thinking: truly understanding the position and mindset of the user. They had 1.5 million customers within 2 years.
Design thinking is a human-centred approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.
Vasu heads Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning across Product Engineering, Portfolio Delivery, Large Program Management etc. Vasu has designed and delivered Open Systems, Core Banking, Web / Mobile Applications etc.
Outside of his professional role, Vasu enjoys playing badminton and focusses on fitness routines. | <urn:uuid:e2c9b67e-b38e-49fe-a353-63deb3767c33> | CC-MAIN-2022-40 | https://www.gavstech.com/design-thinking-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00371.warc.gz | en | 0.951251 | 1,386 | 3.03125 | 3 |
The growing trend of collecting and analyzing data in manufacturing has the potential to make positive impacts in manufacturing. Thanks to the growth of Internet of Things, today’s advanced machines are built with numerous sensors all producing an ever-increasing amount of sensor and log data.
Gaining valuable actionable insights from the data can be leveraged to improve efficiencies and reduce costs; however, this vast amount of data can be overwhelmingly complex and costly to process, which can outweigh the potential efficiencies and savings.
As more devices are connected across the factory floor, there will be an exponential increase in the amount of data produced. This explosive growth in data also means an increase in computing, storage and networking power and infrastructure.
But, the traditional approach to analytics isn’t necessarily fit for IoT. Pushing all this data to a central data center to be processed and analyzed can be costly and not easily scaled.
One solution is clear, in order to enable the effective and efficient handling of IoT analytics, running some of the analytics “on the edge” is essential. Exploring edge analytics (also known as fog computing) provides a much more effective and scalable way to use computation and bandwidth resources much more efficiently. And for factories and plants already embracing IoT, it will optimize operation efficiencies and scalability.
The following are representative examples:
- Data can enable new services for customers. For example, in the industrial sector, sensorial data has traditionally been collected and used in a very limited way, such as measuring some aspect of an industrial process or machinery, adjusting a controller and then discarding the measured data. So, not only can more sensors be utilized, but the data can be stored, mined and analyzed in new ways and then used for new services.
- IoT data coupled with advanced machine learning approaches can detect anomalies in an industrial process to drive response and action in a faster and more efficient manner.
- By inspecting and analyzing historical IoT data, using advanced data analytics, we can detect patterns indicating changes that require attention. For example, deterioration of machinery enables predictive maintenance to fix the problem when the impact is low in terms of money and production time lost.
But how would distributed IoT analytics work? The hierarchy begins with “simple” analytics on the smart device itself to more complex analytics on multiple devices on the IoT gateways and finally the heavy lifting, the big data analytics that are running in the Cloud. This distribution of analytics offloads the network and the data centers creating a model that scales.
Many business processes do not require “heavy duty” analytics, and therefore the data collected, processed and analyzed on or near the edge can drive automated decisions. For example, a local valve can be turned off when edge analytics detect a leak.
Harness the computational power at the sensor and device to run valuable analytics on the device itself. Additionally, these sensors and other smart connected devices typically are tied to a local gateway with potentially more computational power available.
This, in turn, enables more complex multi-device analytics close to the edge. Offloading data analysis from the network and the data centers creates a model that effectively scales.
Some actions need to be taken in real time because they cannot tolerate any delay between the sensor-registered event and the reaction to that event. This is extremely true of industrial control systems when sometimes there is no time to transmit the data to a remote Cloud. This is remedied with a distributed model.
Through considering edge analytics, we’re beginning to understand that there are some trade-offs that must be considered.
Edge analytics is all about processing and analyzing subsets of all the data collected and then only transmitting the results. We are essentially discarding some of the raw data and potentially missing some insights.
The question is whether we can live with this “loss,” and if so, how should we choose which pieces we are willing to “discard” and which need to be kept and analyzed?
The answer is not simple and determined by the application. Some organizations may never be willing to lose any data but the vast majority will accept that not everything can be analyzed. This is where we will have to learn by experience as organizations begin to get involved in this new field of IoT analytics and review the results.
It’s also important to learn the lessons of past distributed systems. For example, when many devices are analyzing and acting on the edge, it may be important to have somewhere a single “up-to-date view,” which in turn, may impose various constraints. The fact that many of the edge devices are also mobile complicates the situation even more.
If you believe that the IoT will expand and become as ubiquitous as predicted, then distributing the analytics and the intelligence is inevitable and desirable. It will help us in dealing with big data and releasing bottlenecks in the networks and in the data centers; however, it will require new tools when developing analytics-rich IoT applications.
Making sense of this flood of data will be the challenge that will drive better and more advanced analytics in manufacturing; however, many currently recognize that the gap between collected data and analyzed data is growing.
We need to be smart about how we tackle this challenge: Do we collect everything? Do we store everything? Do we analyze everything just because we can?
About the Author: Gadi Lenz is Chief Scientist at AGT International. He discusses topics such as IoT, big data, analytics and other insights over at his blog: The Analytics of Everything. | <urn:uuid:86939207-4a07-4efc-9ec2-53c6ec108f16> | CC-MAIN-2022-40 | https://www.mbtmag.com/global/article/13215288/living-on-the-edge-in-manufacturing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00371.warc.gz | en | 0.92698 | 1,132 | 2.578125 | 3 |
Step 1: Know where the data is stored and located (aka Data Discovery)
This is the process of discovering/detecting/locating all the structured and unstructured data that an organization possesses. This data may be stored on company hardware (endpoints, databases), employee BYOD, or the cloud.
There are many tools available to assist in the discovery of data (for both in transit and in storage) and these vary between on-prem and cloud-related data. This process is intended to assure that no data is left unknown and unprotected. This is the core of creating a data-centric approach to data protection as an organization creates an inventory of all of its data. This inventory is a critical input to a broader data governance strategy and practice.
Information assets are constantly changing and new assets are added that will make any static list out of date and ineffective almost immediately. When establishing the process for data discovery ensure to use automation. It is the only way you can keep an active view of your information assets and be able to effectively manage the risk.
Step 2: Know the sensitivity of the data (aka Data Classification)
Once the data is discovered, that data needs to be classified. Data Classification is the process of analyzing the contents of the data, searching for PII, PHI, and other sensitive data, and classifying it accordingly. A common approach is to have 3 or 4 levels of classification, typically:
3 level policy:
- Private / Internal
4 level policy:
- Private / Internal
- Highly Confidential / Restricted
Once a policy is created, the data itself needs to be tagged within the metadata (this is the implementation of the data classification policy). Traditionally, this has been a complex and often inaccurate process. Examples of traditional approaches have been:
- RegEx, Keyword Match, dictionaries
- Finger Printing and IP Protection
- Exact Data Match
- Optical Character Recognition
- Compliance coverage
- Exception management
Approaches to data classification have evolved and organizations must leverage new capabilities if they are to truly classify the large volume of data they create and own. Some examples are:
- Machine Learning (ML) based document classification & analysis, including the ability to train models and classifiers using own data sets using predefined ML classifiers (making this simple for organizations to create classifiers without the need to complex data science skills). (See this analysis from Netskope.)
- Natural Language Processing (NLP)
- Context Analysis
- Image Analysis and classification
- Redaction and privacy
These approaches must have the ability to support API-based, cloud-native services for automated classification and process integration. This allows the organization to build a foundational capability to use process and technology, including models, together to classify data which then becomes a data point on additional inspection if needed. The result is to provide a real-time, automated, classification capability.
Classification escalation and de-escalation is a method commonly used to classify all discovered data. For each data object that has not been classified, a default classification should be applied by injecting into the metadata the default level of classification (for example, if not classified, default to confidential or highly confidential). Based on several tests or criteria, the object’s classification can slowly be escalated or de-escalated to the appropriate level. This coincides with many principles of Zero Trust which is fast becoming and will be, a fundamental capability for any Data Protection Strategy.
(More information on Zero Trust can be found in the Netskope article What is Zero Trust Security?)
A note on determining “crown jewels” and prioritization
Data classification goes a long way in helping an organization identify its crown jewels. For the purpose of this conversation, “crown jewels” are defined as the assets that access, store, transfer or delete, the most important data relevant to the organization. Taking a data-centric approach, it’s imperative to understand the most important data, assessing both sensitivity and criticality. This determination is not driven by data classification alone.
A practical model to determine the importance of the data is to take into account three pillars of security—Classification, Integrity, and Availability—with each assigned a weighting (1-4) aligned to related policies or standards. A total score of 12 (4+4+4) for any data object would indicate the data is highly confidential, has high integrity requirements, and needs to be highly available.
Here is an example of typical systems in use by an enterprise and typical weightings.
Highly confidential = 4 Confidential = 3
Internal = 2
Public = 1
High integrity = 4
Medium integrity = 3
Low integrity = 2
No integrity requirement = 1
|Availability (being driven from the BCP and IT DR processes):|
Highly available = 4
RTO 0 - 4 hrs = 3
RTO 4 - 12 hrs = 2
RTO > 12 hrs = 1
An organization can set, based on risk appetite, a total score of 12 for any data object, which would indicate that the data is highly confidential, has high integrity requirements and needs to be highly available. An organization can set, based on risk appetite, what score determines the crown jewel rating. In addition, this enables the organization to prioritizes controls and where needed, remediation activity, in a very logical and granular way. The score can then be applied to the applications, systems, and third parties that use that data, creating a grouping of assets (applications, systems, and/or third parties) that would indicate crown jewel status (or not).
Keep an eye out for Part 2, where we will dig further into knowing the flow of your data, getting visibility into who can access your data, and knowing how well your data is protected. If you’d like to learn more about How to Design a Cloud Data Protection Strategy you can read a complimentary copy of the white paper here! | <urn:uuid:2be1ed09-23c8-468f-bbac-9afcac749bae> | CC-MAIN-2022-40 | https://www.netskope.com/jp/blog/a-practical-guide-to-cloud-data-protection-part-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00371.warc.gz | en | 0.898055 | 1,361 | 2.875 | 3 |
Researchers at Cyble have discovered at least 9,000 exposed Virtual Network Computing (VNC) endpoints that can be accessed without authentication, allowing threat actors easy access to internal networks. These platform-independent systems offer control of remote computers via Remote Frame Buffer protocol (RFB) over a network connection. If these endpoints aren’t fully secured with a password, they can be used as an entry point for unauthorized users. Cyble’s report stated, “Researchers were able to narrow down multiple Human Machine Interface (HMI) systems, Supervisory Control and Data Acquisition Systems (SCADA), Workstations, etc., connected via VNC and exposed over the internet.” Cyble began monitoring for attacks on the default port for VNC and found over six million requests over one month. Demand for accessing critical networks is high on hacker forums with users asking to buy VNC access and others providing instructions on how to find exposed VNCs.
By Anthony Zampino Introduction Leading up to the most recent Russian invasion of Ukraine in | <urn:uuid:9e6eedf4-fc27-4791-9926-5d54381ed3b2> | CC-MAIN-2022-40 | https://www.binarydefense.com/threat_watch/over-9000-vnc-servers-exposed-online-without-a-password/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00371.warc.gz | en | 0.897662 | 214 | 2.53125 | 3 |
Virtual Private Networks (VPNs) are becoming increasingly popular as a lower cost and more flexible way to deploy a network across a wide area. With advances in technology comes an increasing variety of options for implementing VPN solutions. This tech note explains some of these options and describes where they might best be used.
Refer to Cisco Technical Tips Conventions for more information on document conventions.
There are no specific prerequisites for this document.
This document is not restricted to specific software and hardware versions.
Note: Cisco also provides encryption support in non-IOS platforms including the Cisco Secure PIX Firewall, the Cisco VPN 3000 Concentrator, and the Cisco VPN 5000 Concentrator.
The Internet has experienced explosive growth in a short time, far more than the original designers could have foreseen. The limited number of addresses available in IP version 4 (IPv4) is evidence of this growth, and the result is that unallocated address space is running out. One solution to this problem is Network Address Translation (NAT).
Using NAT, a router is configured on inside/outside boundaries such that the outside (usually the Internet) sees one or a few registered addresses while the inside can have any number of hosts using a private addressing scheme. To maintain the integrity of the address translation scheme, NAT must be configured on every boundary router between the inside (private) network and the outside (public) network. One of the advantages of NAT from a security standpoint is that the systems on the private network cannot receive an incoming IP connection from the outside network unless the NAT gateway is specifically configured to allow the connection. Moreover, NAT is completely transparent to the source and destination devices. NAT's recommended operation involves RFC 1918 , which outlines proper private network addressing schemes. The standard for NAT is described in RFC1631 .
The following figure shows NAT router boundary definition with an internal translation network address pool.
NAT is generally used to conserve IP addresses routable on the Internet, which are expensive and limited in number. NAT also provides security by hiding the inside network from the Internet.
For information on how NAT works, see How NAT Works.
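As a sketch of how this inside/outside boundary is defined in Cisco IOS Software, the following minimal configuration translates an RFC 1918 inside network to a small pool of registered addresses. All interface names and addresses here are illustrative assumptions, not taken from this document:

```
! Inside hosts use the private 10.0.0.0/8 range (RFC 1918)
access-list 1 permit 10.0.0.0 0.255.255.255
! Pool of registered (outside) addresses to translate into
ip nat pool OUTSIDE-POOL 203.0.113.10 203.0.113.20 netmask 255.255.255.0
! Translate inside sources matching list 1; overload shares pool addresses
ip nat inside source list 1 pool OUTSIDE-POOL overload
!
interface Ethernet0
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface Serial0
 ip address 203.0.113.1 255.255.255.252
 ip nat outside
```

With this in place, outside hosts cannot initiate connections to inside systems unless static translations are explicitly added (for example, with `ip nat inside source static`).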
Generic Routing Encapsulation (GRE) tunnels provide a specific pathway across the shared WAN and encapsulate traffic with new packet headers to ensure delivery to specific destinations. The network is private because traffic can enter a tunnel only at an endpoint and can leave only at the other endpoint. Tunnels do not provide true confidentiality (like encryption does) but can carry encrypted traffic. Tunnel interfaces are the logical endpoints, configured on physical interfaces, through which traffic is carried.
As illustrated in the diagram, GRE tunneling can also be used to encapsulate non-IP traffic into IP and send it over the Internet or IP network. The Internet Packet Exchange (IPX) and AppleTalk protocols are examples of non-IP traffic. For information on configuring GRE, see "Configuring a GRE Tunnel Interface" in Configuring GRE.
GRE is the right VPN solution for you if you have a multiprotocol network like IPX or AppleTalk and have to send traffic over the Internet or an IP network. Also, GRE encapsulation is generally used in conjunction with other means of securing traffic, such as IPSec.
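A minimal GRE tunnel sketch for one endpoint might look like the following (the peer address and interface names are illustrative assumptions; the mirror-image configuration would be applied on the other endpoint):

```
interface Tunnel0
 ! Tunnel addressing is independent of the underlying transport network
 ip address 192.168.100.1 255.255.255.252
 tunnel source Serial0
 tunnel destination 203.0.113.2
 ! GRE over IP is the default tunnel mode on Cisco IOS routers
 tunnel mode gre ip
```

Because GRE itself does not encrypt, such a tunnel is typically combined with IPSec when confidentiality is required.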
For more technical detail on GRE, refer to RFC 1701 and RFC 2784 .
Encryption of data sent across a shared network is the VPN technology most often associated with VPNs. Cisco supports IP Security (IPSec) data encryption. IPSec is a framework of open standards that provides data confidentiality, data integrity, and data authentication between participating peers at the network layer.
IPSec encryption is an Internet Engineering Task Force (IETF) standard that supports Data Encryption Standard (DES) 56-bit and Triple DES (3DES) 168-bit symmetric key encryption algorithms in IPSec client software. GRE configuration is optional with IPSec. IPSec also supports certificate authorities and Internet Key Exchange (IKE) negotiation. IPSec encryption can be deployed in standalone environments between clients, routers, and firewalls, or used in conjunction with L2TP tunneling in access VPNs. IPSec is supported on various operating system platforms.
IPSec encryption is the right VPN solution for you if you want true data confidentiality for your networks. IPSec is also an open standard, so interoperability between different devices is easy to implement.
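IPSec's data-integrity and data-authentication services come from keyed message digests computed over each packet. The fragment below is a stand-alone illustration of that one idea using HMAC-SHA256; it is not IPSec itself, which also negotiates security associations, tracks sequence numbers, and encrypts the payload.

```python
import hmac
import hashlib

def protect(packet: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering can be detected."""
    tag = hmac.new(key, packet, hashlib.sha256).digest()
    return packet + tag

def verify(blob: bytes, key: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    packet, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, packet, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

Only a peer holding the shared key can produce a valid tag, which is the essence of the authentication and integrity guarantees IPSec provides between peers.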
Point-to-Point Tunneling Protocol (PPTP) was developed by Microsoft; it is described in RFC 2637. PPTP is widely deployed in Windows 9x/ME, Windows NT, Windows 2000, and Windows XP client software to enable voluntary VPNs.
Microsoft Point-to-Point Encryption (MPPE) is an informational IETF draft from Microsoft that uses RC4-based 40-bit or 128-bit encryption. MPPE is part of Microsoft's PPTP client software solution and is useful in voluntary-mode access VPN architectures. PPTP/MPPE is supported on most Cisco platforms.
PPTP support was added to Cisco IOS Software Release 12.0.5.XE5 on the Cisco 7100 and 7200 platforms. Support for more platforms was added in Cisco IOS 12.1.5.T. The Cisco Secure PIX Firewall and Cisco VPN 3000 Concentrator also include support for PPTP client connections.
Since PPTP supports non-IP networks, it is useful where remote users have to dial in to access heterogeneous corporate networks.
For information on configuring PPTP, see Configuring PPTP.
Virtual Private Dialup Network (VPDN) is a Cisco standard that allows a private network dial-in service to span across to remote access servers. In the context of VPDN, the access server (for example, an AS5300) that is dialed into is usually referred to as the Network Access Server (NAS). The dial-in user's destination is referred to as the home gateway (HGW).
The basic scenario is that a Point-to-Point Protocol (PPP) client dials in to a local NAS. The NAS determines that the PPP session should be forwarded to a home gateway router for that client. The HGW then authenticates the user and starts the PPP negotiation. After PPP setup is complete, all frames are sent via the NAS to the client and home gateways. This method integrates several protocols and concepts.
For information on configuring VPDN, see Configuring a Virtual Private Dial-Up Network in Configuring Security Features.
Layer 2 Tunneling Protocol (L2TP) is an IETF standard that incorporates the best attributes of PPTP and L2F. L2TP tunnels are used primarily in compulsory-mode (that is, dialup NAS to HGW) access VPNs for both IP and non-IP traffic. Windows 2000 and Windows XP have added native support for this protocol as a means of VPN client connection.
L2TP is used to tunnel PPP over a public network, such as the Internet, using IP. Since the tunnel occurs on Layer 2, the upper layer protocols are ignorant of the tunnel. Like GRE, L2TP can also encapsulate any Layer 3 protocol. UDP port 1701 is used to send L2TP traffic by the initiator of the tunnel.
Note: In 1996, Cisco created the Layer 2 Forwarding (L2F) protocol to allow VPDN connections to occur. L2F is still supported for other functions, but has been replaced by L2TP. Point-to-Point Tunneling Protocol (PPTP) was also created in 1996 as an Internet draft. PPTP provided a GRE-like tunneling function for PPP connections.
For more information on L2TP, see Layer 2 Tunnel Protocol.
PPP over Ethernet (PPPoE) is an informational RFC that is primarily deployed in digital subscriber line (DSL) environments. PPPoE leverages existing Ethernet infrastructures to allow users to initiate multiple PPP sessions within the same LAN. This technology enables Layer 3 service selection, an emerging application that lets users simultaneously connect to several destinations through a single remote access connection. PPPoE with Password Authentication Protocol (PAP) or Challenge Handshake Authentication Protocol (CHAP) is often used to inform the central site which remote routers are connected to it.
PPPoE is mostly used in service provider DSL deployments and bridged Ethernet topologies.
For more information on configuring PPPoE, see Configuring PPPoE over Ethernet and IEEE 802.1Q VLAN.
Multiprotocol Label Switching (MPLS) is a new IETF standard based on Cisco Tag Switching that enables automated provisioning, rapid rollout, and scalability features that providers need to cost-effectively provide access, intranet, and extranet VPN services. Cisco is working closely with service providers to ensure a smooth transition to MPLS-enabled VPN services. MPLS works on a label-based paradigm, tagging packets as they enter the provider network to expedite forwarding through a connectionless IP core. MPLS uses route distinguishers to identify VPN membership and contain traffic within a VPN community.
MPLS also adds the benefits of a connection-oriented approach to the IP routing paradigm, through the establishment of label-switched paths, which are created based on topology information rather than traffic flow. MPLS VPN is widely deployed in the service-provider environment.
For information on configuring MPLS VPN, see Configuring a Basic MPLS VPN. | <urn:uuid:b97f583a-4c88-494d-9db9-5412acad3f77> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/support/docs/security-vpn/ipsec-negotiation-ike-protocols/14147-which-vpn.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00371.warc.gz | en | 0.898215 | 1,968 | 3.234375 | 3 |
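The label-based forwarding MPLS performs can be sketched with a toy label forwarding table. The node names and label values below are invented for illustration: the ingress router pushes a label, each core hop swaps it, and the egress router pops it before delivering the packet.

```python
# Toy label-switched path: ingress pushes, core swaps, egress pops.
# Each hop maps an incoming label to (action, outgoing label, next hop).
LFIB = {
    "PE1": {None: ("push", 100, "P1")},   # ingress: classify and push
    "P1":  {100:  ("swap", 200, "P2")},
    "P2":  {200:  ("swap", 300, "PE2")},
    "PE2": {300:  ("pop",  None, "customer")},
}

def forward(start: str, label=None):
    """Follow the label-switched path, returning the hops visited."""
    hops, node = [start], start
    while node in LFIB and label in LFIB[node]:
        action, out_label, next_hop = LFIB[node][label]
        label = out_label
        node = next_hop
        hops.append(node)
    return hops
```

Because the core routers consult only the label, not the destination IP address, forwarding is fast and traffic stays contained within the path set up for its VPN community.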
After realizing success in establishing dominance over the threat, the first step is to stabilize the environment.
Stabilizing the network environment enables the complementary efforts of departments and parties in the network. The initial efforts of cybersecurity professionals aim to stabilize the network health situation within the operational network and immediate business ecosystem.
These efforts may include assessments of the system such as infrastructure, staff, training and education, logistics, and software vulnerabilities and programs. Achieving measurable progress requires early coordination and constant dialog with other members; ultimately, this also facilitates a successful transition from cyber security-led efforts to nontechnical departments.
Essential tasks may include:
• Assess public and private network hazards within the organization.
• Assess existing software and hardware infrastructure, including preventative and configuration management and gold-standard software deployment templates.
• Evaluate the need for additional security capabilities.
• Repair existing deficient systems.
• Operate or augment the operations of existing systems.
• Prevent malware epidemics through immediate device vaccination.
• Support improvements to local network capacity.
• Promote and enhance the network infrastructure.
HIE and RHIO Advocacy & Public Policy Information Technology
A health information exchange (HIE) is defined as the mobilization of healthcare information electronically across organizations within a region, community or hospital system. The goal of an HIE is to facilitate access to and retrieval of clinical data to provide safe, timely, efficient, effective and equitable patient-centered care. An HIE is also useful to Public Health authorities to assist in analyses of the health of the population.
Healthcare Information and Management Systems Society (HIMSS) maintains updated information on HIE news, advocacy and public policy information, technology and resources. Resources include audio recordings outlining the 2009 outlook for healthcare IT (HCIT) grants funding, a healthcare job mine and upcoming HCIT conferences. Formal organizations are now emerging to provide both form and function for HIE efforts. These organizations (often called Regional Health Information Organizations or RHIOs) are geographically-defined entities which develop and manage a set of contractual conventions and terms, arrange for the means of electronic exchange of information, and develop and maintain HIE standards.
Data extracted from devices during the mobile forensics process can provide investigators and attorneys with the information they need to crack a case wide open. Mobile devices go everywhere their users go, which means they can tell a story about who the user is communicating with, what they are communicating about, and where the user has been.
What digital forensics artifacts can you find on a mobile phone?
As of 2016, there were 395.9 million wireless subscriber connections of smartphones, feature phones, and tablets in the United States, roughly 120% of the population. Mobile forensics is the service through which examiners extract and evaluate the data stored within a mobile device. Modern smartphones contain a plethora of information that could potentially be of evidentiary value including:
- Incoming, outgoing, missed call history
- Phone book or contact lists
- SMS text, application-based, and multimedia messaging content
- Pictures, videos, audio files, and sometimes voicemail messages
- Internet browsing history, content, cookies, search history, analytics information
- To-do lists, notes, calendar entries, ringtones
- Documents, spreadsheets, presentation files and other user-created data
- Passwords, passcodes, swipe codes, user account credentials
- Historical geolocation data, cell phone location data, Wi-Fi connection information
- User dictionary content
- Data from various installed apps
- System files, usage logs, error messages
- Deleted data from all of the above | <urn:uuid:43b7073e-5e53-49c2-b607-7c55e5440819> | CC-MAIN-2022-40 | https://www.gillware.com/phone-data-recovery-services/mobile-forensics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00571.warc.gz | en | 0.892553 | 304 | 2.734375 | 3 |
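A common early step when evaluating artifacts like these is to merge them into a single timeline. The sketch below uses hypothetical records and field names (not any real tool's export format) to show the idea:

```python
from datetime import datetime

# Hypothetical extracted records; the field names are illustrative only.
artifacts = [
    {"type": "call",     "time": "2016-03-01T14:05:00", "detail": "missed call from +1-555-0100"},
    {"type": "sms",      "time": "2016-03-01T14:07:30", "detail": "text to +1-555-0100"},
    {"type": "location", "time": "2016-03-01T13:58:10", "detail": "Wi-Fi join: CoffeeShopGuest"},
]

def build_timeline(records):
    """Order heterogeneous artifacts by timestamp for review."""
    return sorted(records, key=lambda r: datetime.fromisoformat(r["time"]))

for event in build_timeline(artifacts):
    print(event["time"], event["type"], event["detail"])
```

Interleaving calls, messages, and location records this way is what lets an examiner reconstruct where a user was and who they were talking to at each moment.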
Despite having been around for just 50 years, microprocessors have become the backbone of modern society, and their influence is only expected to increase from here on out. Given its tremendous importance, it’s a good opportunity to reflect upon the 4004, the tiny chip that started it all.
The 4004 emerged from a partnership between Intel and Japanese calculator company Busicom. In 1969, Busicom contracted Intel to design a microprocessor for a new desktop calculator. Prior to this arrangement, Intel had mostly dealt in the memory business. The project was tasked to Ted Hoff and Stanley Mazor, who were later joined by Federico Faggin and Masatoshi Shima. Faggin had the critical semiconductor expertise Intel was looking for, and Shima, a Busicom engineer, contributed his own experience to expedite the project.
The result of their collaboration, the 4004 processor, was released two years later and addressed a number of challenges in processor design. Before the 4004, processors had their logic programming hard-wired into the physical architecture, which usually consisted of many smaller components connected to multiple printed circuit boards (PCBs). They were not only complex to develop but had a large footprint. Moreover, each system was highly specialized and only served a single purpose.
Intel deemed this development method untenable and sought to design a chip that was more flexible, easier to integrate, and could serve a wider range of purposes. The company designed the 4004 with this key philosophy in mind. As a result, the chip condensed many of the previously discrete circuits into a single tiny package, reducing cost and product development time, and making it usable for a wider range of applications. The concept was revolutionary, and Intel hailed the chip as a “new era in integrated electronics.”
Although primitive by today’s standards, the 4004 had ground-breaking performance for its time. Built with 2,300 transistors, it featured a clock rate of 740 kHz and processed more than 90,000 instructions per second. Intel also managed to cram the memory and data bus into its standard 16-pin package, greatly reducing complexity and opening the pathways to smaller devices with varying amounts of storage and memory. The microchip could address a whopping 640 bytes of random-access memory, as well as 4 KB of read-only memory.
The chip also made it easy for system manufacturers to integrate into their products. Along with the 4004, Intel also released the 4001 ROM chips, 4002 RAM chips, and the 4003 shift register, creating a four-chip set. The 4004 only needed a single 256-byte ROM chip to work; computer manufacturers could use the processor's internal registers to do double duty as RAM, which was very costly at the time.
The first batch of chips went into the Busicom 141-PF calculator. Under its contract, Busicom had exclusive rights to the 4004 processor, but Intel convinced it to release the exclusivity in exchange for a discount on the chips. The chip saw some success after its commercial debut, making its way into several other systems, including computers and a voting machine. But more importantly, it set the stage for the more powerful and ubiquitous Intel 8008 chip released just months after.
Busicom went out of business in 1974, but Intel had cemented a foothold in the microprocessors space. Today, the company is worth more than $200 billion. Its chips, as well as a plethora of other technologies, are found in all sectors of the technology industry. Although modern processors–built upon tens of billions of transistors and are infinitely faster–no longer bear any resemblance to the 4004, they all owe their roots to its stroke of ingenuity. | <urn:uuid:5b651952-72ca-4bc7-b0cc-f93467e5d146> | CC-MAIN-2022-40 | https://www.itworldcanada.com/article/the-little-chip-that-could-intels-4004-turns-50/465740 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00571.warc.gz | en | 0.973731 | 770 | 3.734375 | 4 |
Data Protection 101: What You Need to Know About Data Security and Integrity
How many times have you heard or read the following statement: Data is the lifeblood of every organization. If that’s true, why are so many companies lackadaisical regarding their data protection strategies? The reasons are many, from lack of support from the C-suite to simply not understanding what data protection is or its importance. Nonetheless, the need for a solid data protection strategy continues to intensify.
Every day, an estimated 2.5 quintillion bytes of data are generated — and there are no signs of things slowing down. Data comes from an increasingly vast and diverse array of sources: digital photos and videos; sensors that gather everything from climate information to the number of steps taken per day; posts to social media sites; cell phone GPS signals and much more.
Complicating the matter is the constant barrage of threats ranging from malware to human error — and the plethora of tactics to combat them. It’s hard to stay on top of data protection best practices, much less the most recent advances. Even the sheer number of relevant terms are overwhelming. Is a backup and recovery plan the same as a disaster recovery plan? Is disaster recovery the same as business continuity? What’s the difference between backup and replication? Where do things like encryption and deduplication fit in? | <urn:uuid:992dc15b-03f8-4c60-b152-9c97f96ffaaa> | CC-MAIN-2022-40 | https://www.flexential.com/resources/white-paper/data-protection-101-what-you-need-know-about-data-security-and-integrity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00571.warc.gz | en | 0.913647 | 283 | 2.59375 | 3 |
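One of those terms, deduplication, is simple enough to show in a few lines. This sketch uses fixed-size chunks for clarity; commercial backup products typically use variable-size, content-defined chunking:

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks and store each unique chunk once.

    Returns the chunk store and the recipe of hashes needed to rebuild.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks stored once
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original data from the recipe of chunk hashes."""
    return b"".join(store[d] for d in recipe)
```

Repeated chunks are stored only once, which is why deduplication can shrink backup storage dramatically while still rebuilding the original data byte for byte.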
This course begins by describing how the Parallel Sysplex evolved and why it is an integral component of today's enterprise IT environment. Its key features are discussed in terms of the benefits it provides to the organization – system availability, data integrity, workload and data sharing, and automated recovery to name a few. A break-down of the major Parallel Sysplex components is then presented, describing their importance and how they can be configured.
Any personnel wishing to become familiar with Parallel Sysplex hardware and software concepts and requirements.
A good understanding of mainframe operational concepts, or successful completion of Interskill’s z/OS – Concepts and Components course.
After completing this course, the student will be able to:
- Describe the Benefits Derived from Implementing a Sysplex
- Explain how Sysplex Technology has Evolved
- Describe the Features of a Parallel Sysplex
- Identify the Major Components of a Parallel Sysplex Configuration
- Describe how each Parallel Sysplex Component is Used
Introduction to the Parallel Sysplex
Driving Forces Behind Sysplex Creation
Data Sharing in a Parallel Sysplex
Parallel Processing Capabilities
Geographically Dispersed Parallel Sysplex
Interacting with the Parallel Sysplex
Parallel Sysplex Features
Sysplex Failure Management
Single System Image
Parallel Sysplex Components and Configuration
Couple Data Sets
Purpose of the Coupling Facility
Coupling Facility Configurations
Coupling Facility List, Lock, and Cache Structures | <urn:uuid:6e2cd6cb-9c5d-4c91-838f-f089c65da7b9> | CC-MAIN-2022-40 | https://interskill.com/?catalogue_item=parallel-sysplex-fundamentals-2-4&noredirect=en-US | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00771.warc.gz | en | 0.810262 | 356 | 2.796875 | 3 |
In IDOL, all the data in a document is split up into fields. Usually, there is a title field and a field containing the body of the document content. Depending on the repository and type of data, there are often also:
metadata fields, containing information about the file, such as the creation date and file type.
document detail fields, containing additional information about the content, such as the author, book title, and publication date.
When you decide what content to index, you must also decide how to store different document fields. At this stage, you consider the balance between the size of the index, the time taken to index data, and the speed of query responses.
This section contains the following topics:
Types of Field. The different groups of fields that generally occur in documents.
Field Content. The types of information that fields contain.
Field Properties. Configure properties for different types of fields.
Field Names and Field Identifiers. Use field names and field identifiers for queries and other operations.
Regenerate Data Indexes. Regenerate the field indexes of different types. | <urn:uuid:ec99ee51-ce49-4f37-ace0-baaf087dc7fd> | CC-MAIN-2022-40 | https://www.microfocus.com/documentation/idol/IDOL_11_6/IDOLServer/Guides/html/English/expert/Content/IDOLExpert/Fields/Fields.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00771.warc.gz | en | 0.87492 | 235 | 2.765625 | 3 |
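The trade-off mentioned above, where indexing more fields grows the index but lets queries target a field directly, can be seen in a toy per-field inverted index. This is an illustration only, not how IDOL is implemented internally; the documents and field names are invented:

```python
from collections import defaultdict

def index_documents(docs, indexed_fields):
    """Build a toy per-field inverted index: (field, term) -> doc ids.

    Indexing more fields makes the index bigger but lets queries
    restrict matches to a single field.
    """
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for field in indexed_fields:
            for term in doc.get(field, "").lower().split():
                index[(field, term)].add(doc_id)
    return index

docs = [
    {"title": "Annual Report", "author": "Smith"},
    {"title": "Budget Report", "author": "Jones"},
]
idx = index_documents(docs, ["title", "author"])
```

A query restricted to `author` now touches only the `("author", term)` entries, while a field left out of `indexed_fields` costs nothing at index time but cannot be searched this way.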
Blueprinting Business Knowledge: Concept Models
Extracted from Business Knowledge Blueprints: Enabling Your Data to Speak the Language of the Business, by Ronald G. Ross, 2020.
Businesses today face a host of challenges that demand a new approach, not just to data but to the business knowledge that lies behind data. Perhaps the driving force is poor data quality. Or IT's disconnect from the language the business uses. Or lack of a big picture or common vocabulary. Maybe all the above.
Recognition is growing rapidly that traditional data design techniques are also inadequate for the larger challenges of integrated digital business, including highly-customized products, machine intelligence, and software ontologies.
The common factor in meeting these challenges? It's what the company knows — business knowledge. That knowledge isn't limited to structured data — it applies equally to textual business communications ('unstructured data'). How did structured data and 'unstructured data' ever get disconnected? Business text matters!
Business knowledge is more complicated — I prefer to say far richer — than most realize. It requires a blueprint, which must be developed deliberately through clarification and synthesis. The goal is to create a shared, structured understanding of concepts.
What is a concept? The dictionary defines it simply as something conceived in the mind — a thought, idea, notion. For business what's critical is whether a concept is shared.
People usually already have concepts of things in their minds — you typically don't design concepts from scratch. The problem is that those concepts aren't shaped the same. And they're not harmonized. Think of the problem as essentially redesigning concepts to get everyone on the same page — that is, to create shared understanding.
By providing essential structure for the design, a concept model is the core component of the business knowledge blueprint you need. A concept model provides the basis for creating a robust business vocabulary with business-friendly definitions. We define concept model as follows:
concept model: a set of concepts structured according to the relations among them
A concept model should be primarily of, by, and for business people. To emphasize the point, perhaps we should say business concept model. Even when we drop business from the term and say just concept model, remember business is always implicit.
Merriam-Webster Unabridged.
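The definition can be made concrete with a small sketch. The concept names and relation labels below are invented for illustration; the point is only that a concept model pairs a set of concepts with the relations that connect them:

```python
# Toy concept model: a set of concepts plus the relations among them.
concepts = {"Customer", "Order", "Product"}
relations = [
    ("Customer", "places", "Order"),
    ("Order", "includes", "Product"),
]

def relations_for(concept):
    """List every relation a given concept participates in."""
    return [r for r in relations if concept in (r[0], r[2])]
```

Asking `relations_for("Order")` surfaces how one concept connects to the others, which is exactly the shared structure a business vocabulary is built on.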
# # #
Artificial Intelligence: Towards Dynamic Predictive Framework
In the present digital era, several corporate firms are concentrating on developing advanced artificial intelligence tools, and startups are investing in the same to compete with established organizations. In 1956, John McCarthy coined the term AI, defining it as the “science and engineering of making intelligent machines”.
The driving force behind the development and success of artificial intelligence is its ability to guide machines in performing cognitive activities that capture aspects of human intelligence, such as voice and text recognition, visual identification, and problem solving. Captured data is first pre-processed to extract its distinctive features, then passed to training and testing stages for verification.
Drawing on the knowledge gained from historical data and experience, several corporate firms perform predictive analysis to increase productivity by assessing their current strengths. Traditional predictive analytics techniques, however, tend to be less accurate because they rely on manual processes and limited analytical skills. Advances in artificial intelligence have led researchers to deploy AI-based algorithms for predictive analytics, and these have gained huge popularity.
In recent years, machines have been trained with dynamic predictive analysis to reason intelligently and act quickly in a given situation. Several well-known organizations such as Google, Apple, and Microsoft are investing heavily in the research and development of AI tools to achieve excellent customer service while minimizing drawbacks.
In 2017, Google CEO Sundar Pichai announced a new initiative, Google.ai, which applies the latest machine-learning algorithms to the search engine so content can be accessed and searched with better processing speed. Employing predictive analytics and artificial intelligence tools in Google.ai also helps determine the shortest path between stations or nearby places with less time consumption.
The site remains a work in progress for the Google development team. It employs AutoML, in which one neural network can design another neural network through reinforcement learning. This also underpins the AutoDraw application, where unskilled artists provide their ideas through Google and artificial intelligence tries to recognize the particular drawing.
In a nutshell, the combination of artificial intelligence and predictive analysis is leading people toward advanced technological capabilities through futuristic strategies and scalable processes. It provides a new way of handling huge amounts of data and of achieving optimal results through the interplay of humans and humanoid machines. In the future, safety parameters should be ascertained during design and development to achieve accurate results.
hand, assert that the real relationship lies between time and space. But,
as it turns out, they both are right.
As organizations find new ways to use Global Positioning Systems (GPS) to
cut distances traveled, they wind up saving both time and money. In fact,
they may even wind up saving lives.
“When you are dealing with a potential bioterrorism event, minutes can
make a difference in terms of whether you can save people’s lives,” says
Mark Smith, Ph.D., epidemiologist for the Guilford County, N.C. Health
Department.
Smith was the project coordinator for Rapid Response Project 516, funded
by a grant from the Centers for Disease Control (CDC) under the
Bioterrorism Act of 2002. The project developed field survey techniques
to quickly track down disease sources. These techniques incorporate
GPS-enabled handhelds and laptops, together with Geographic Information
System (GIS) applications and data.
Fortunately, it hasn’t been needed yet to counter a bioterrorism attack.
But it was utilized in 2004 during an outbreak of Legionnaire’s disease
at a manufacturing plant. Smith’s mobile GIS team conducted surveys in
the area to help out another team that used a paper-based system. As the
results came in, scientists uploaded electronic survey data directly into the system.
“The CDC epidemiologists pulled up the data immediately and followed up
on cases in real-time as the data came in,” says Smith. ”We heard three
weeks later that the paper survey was still sitting on someone’s desk.”
From NAVSTAR to OnStar
While currently best known as a driving and fishing aid, GPS was
originally a military navigation system called the NAVSTAR GPS.
President Ronald Reagan ordered it be made available for commercial use
after Soviet fighters shot down Korean Air Flight 007 (a Boeing 747
carrying 269 passengers and crew) when it strayed into Russian air space.
GPS consists of three elements — a fleet of satellites (at least 24 at a
time), ground-based control centers and user receivers.
The satellites orbit the earth twice a day at an elevation of 11,000
miles, emitting a continuous signal containing the satellite’s time and
position. The user devices then analyze the signals from three or more
satellites to determine the device’s precise location. Early systems
provided accuracy to within about 70 feet. Further enhancements can bring
that number down to within one centimeter, even while the receiver is in motion.
In addition to the U.S. system, the Russian military has its own version,
called GLONASS, and the European Union plans to have a 30-satellite
Galileo Positioning System operational by 2008.
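The position fix described above can be illustrated with a deliberately simplified, two-dimensional version of the math. A real receiver works in three dimensions and must also solve for its own clock error, which is why at least four satellites are used in practice; the beacon positions and ranges below are invented:

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for the point whose distances to three known beacons are
    r1, r2, r3 (a flat, 2-D simplification of a GPS position fix).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields a linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # beacons must not be collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# A receiver at (3, 4), with ranges measured from three beacons:
pos = trilaterate_2d((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
```

Each satellite's signal gives the receiver one range; intersecting those ranges pins down the position, just as the three circles intersect at a single point here.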
The science behind GPS systems is impressive but, as with other types of
technology, the real value lies in the applications — and they are
getting more sophisticated all the time. The Indian Institute of
Technology in Bombay, for example, is using GPS sensors to detect the
movement of a dam in a seismically active area. The Alaska Railroad
Corporation has incorporated GPS into its train collision avoidance
system. Utilities use GPS-enabled devices from St. Paul, Minn.-based 3M
Corp. to locate and map underground cables, wires and pipes. Biologists
use the devices to track migratory animals. John Deere and Company of
Moline, Ill. uses GPS guidance systems in self-steering tractors.
Then, of course, there are the ubiquitous automobile onboard navigation
systems, such as GM’s OnStar.
Space Equals Time Equals Money
While, perhaps not as exotic as those uses listed above, some of the more
sophisticated GPS deployments are designed to maximize the productivity
of mobile workers.
According to a October 2005 report from IDC, a major industry research
firm based in Framingham, Mass., there were 650 million mobile workers
worldwide in 2004. By 2009, that figure will rise to more than 850
million — greater than one-quarter of the total workforce. It costs
businesses money every minute those workers are moving rather than
working. And GPS is the tool of preference when it comes to minimizing
such ‘work breaks’.
Sears Holdings Corp. of Hoffman Estates, Ill., for example, recently
implemented a new system to optimize routes for its 11,000 field service
technicians. Called the Sears Smart Toolbox, Sears and Environmental
Systems Research Institute, Inc. (ESRI) of Redlands, Calif, co-developed
the applications. Each service vehicle contains a GPS, satellite and
cellular communications system, as well as a wireless LAN access point.
It also contains a ruggedized laptop from Itronix, a part of General
Dynamics Corp. of Falls Church, Va.
The system downloads the technician’s daily work schedule to the laptop.
Once the vehicle is in motion, the laptop switches to navigation mode,
giving the driver verbal turn-by-turn directions to the next job site.
“Technicians don’t have to look at maps, don’t have to call the
customers for directions and don’t get lost,” says Dave Lewis, ESRI’s
project manager for the Sears installation. “When you are talking about
11,000 technicians, even if you save them 10 minutes a day, it is a huge
savings.”
The system also reports in to the backend server when the technician
arrives at the work site. The tech then takes the laptop into the
building where he is working as it contains service documentation. And if
additional information is needed or the worker needs to order parts, the
laptop connects via wireless LAN to other equipment in the truck, which
uses a satellite or cellular connection to download the required data
from the servers at headquarters.
The system also tracks each technician’s progress throughout the day and
can adjust schedules and routes as needed when jobs take longer than expected.
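Route optimization of this kind is at heart a traveling-salesman problem. The greedy nearest-neighbor heuristic below is not the Sears/ESRI algorithm, which works on real road networks and live schedules; it only illustrates how ordering stops by proximity cuts the distance traveled. The coordinates are invented:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Order stops greedily by straight-line proximity to the last stop.

    Real dispatch systems use road networks, drive times, and service
    windows rather than Euclidean distance.
    """
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

jobs = [(8, 1), (1, 1), (5, 5), (2, 2)]
route = nearest_neighbor_route((0, 0), jobs)
```

Even this crude heuristic visits nearby jobs first instead of criss-crossing the service area, which is where the "ten minutes a day per technician" savings come from.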
In the case of Smith’s Rapid Response Team, the goal was to cut down the
time it took to conduct a Rapid Needs Assessment (RNA) — a methodology
developed by the CDC and the World Health Organization to gather data on
health needs during disasters. They also need to get the work done
without assistance from CDC headquarters or other external bodies.
As was clearly demonstrated in the days following Hurricane Katrina,
disaster responses must first be a local action. Communities can’t wait
for the federal government to arrive on the scene.
Smith contracted with Bradshaw Consulting Services, Inc. (BCS) of Aiken,
S.C., to install and configure appropriate hardware, develop customized
data collection forms and train users. This GPS combo consists of HP’s
iPaq or Dell Axim X50V handhelds running Windows and a laptop. The
handhelds also include a GPS card from GlobalSat. The applications and
forms utilize several ESRI products, including ArcPad and ArcGIS.
A typical scenario might involve 10 or more teams going out into the
field with handhelds, each connected to a laptop field computer at the
staging area. The survey teams would be guided by the GPS/GIS to the
appropriate locations to conduct the interviews. The surveys
automatically include the GPS coordinates for the location, so surveyors
don’t have to determine the address and fill it in. They then return to
the base station to upload the survey information from the laptop.
”We can do the analysis right there in the field if we need to,” says
Smith. ”But if we have a wireless phone card in the laptop, we can
access the server at the state capital so our state epidemiologists can
analyze the data themselves in real time.”
Joey Wilson, BCS’s mobile technologies manager, says GPS really proved
itself this fall when the CDC requested help conducting an RNA in Florida
following Hurricane Wilma. Within 24 hours of deployment, interviewers
were trained and on the ground.
”The interviewers came from North Carolina, but GPS helped them go
directly to locations around an unfamiliar city without lost time,” says
As a result they were able to conduct more than 300 interviews in less
than three days and digitally transfer the data to the CDC. This cut the
time the CDC needed to calculate the needs for 150,000 people in the area
from several weeks to a matter of days.
”The CDC had been developing its own in-house questionnaire application,
but it didn’t include a GPS component,” Wilson adds. ”Now they are
aware of how valuable locational awareness is and how much time it saves | <urn:uuid:b8238f6f-f791-4f47-ac75-751e35d3dbb2> | CC-MAIN-2022-40 | https://www.datamation.com/erp/it-finds-its-way-with-gps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00771.warc.gz | en | 0.924867 | 1,932 | 3.109375 | 3 |
For decades, the accepted wisdom in storage management has been the need for solid, persistent and (typically) hardware-based data storage systems. Over the past 20-plus years, this has meant a shared storage platform as the primary location for data.
As storage form factors become more fragmented, some suppliers even offer persistent storage based on temporary entities, such as containers. Does this make sense and how can persistence be achieved with such an ethereal construct?
Shared storage arose from a need to reduce costs, consolidate and eliminate the management overhead of storage deployed in hundreds or even thousands of servers in the datacentre.
A shared storage array was a good solution. Fibre Channel and Ethernet networks offered the ability to connect servers over distance, without cabling issues. And servicing one or two (rather than hundreds) of physical devices reduced maintenance and costs for the customer and supplier.
We now live in a different world. Today, applications are mostly virtualised and container technology has started to gain ground. Shared storage is seen as difficult to use, because it focuses on connecting physical servers to physical storage.
But modern applications work on logical volumes, file systems and object stores. Public cloud computing extends this paradigm, and obfuscates the physical view of hardware altogether.
Persistence of data is still important, however. So how can we achieve this while meeting the needs of new application deployment methods? It’s worth looking at what the requirements are.
Containers, storage array, I/O blender
Virtualisation brought us the problem of the I/O blender – an increasingly random workload created by many virtual machines that access the same LUN or file share. To overcome the issues of shared storage in virtualised environments, VMware (for example) developed its own file system with specific additional commands to reduce contention and fragmentation. We also saw the introduction of features such as VMware Virtual Volumes (VVOLs), which specifically aim to eliminate the physical LUN and treat virtual machines as objects.
Issues in storage access seen with server virtualisation are exacerbated further with containers. In the container world, a single physical host may run hundreds of containers, each vying for storage resources. Having each container access a long and complex storage stack introduces the risk of contention and goes against the benefits of the lightweight nature of a container.
But this is what many suppliers are doing. Volume plugins for Docker, for example, provide automation to map LUNs and volumes on physical arrays to physical hosts and then onto an individual container.
With the increased adoption of public and hybrid cloud architectures, the idea of a central fixed storage array becomes something of a problem. Applications have become more portable, with the ability to spin up containers in seconds and in many different datacentre locations. This paradigm is in distinct contrast to that of physical servers, which typically would be installed and not moved for years, before eventual decommissioning.
As we can see, delivering storage for container environments brings a new set of requirements, including:
- Data mobility – Containers move around, so the data has to be able to do that too. Ideally, that means not just between hosts in one datacentre, but across geographic locations.
- Data security – Containers need to be secured at a logical or application level, rather than at the LUN level, because containers expect to be recycled regularly.
- Performance – Containers introduce the idea of hundreds of unrelated applications working on the same physical host. I/O must be efficient and easy to prioritise.
Delivering storage with containers
One solution to the problem of persistent storage and containers is to use containers themselves as the storage platform.
At first glance, this seems like a bad idea. A container is designed to be temporary, so it can be discarded at any time. An individual container’s identity is not fixed against anything that traditional storage uses – there is no concept of host WWNs or iSCSI IQNs. So how can persistent storage with containers be achieved, and why is it worth doing?
Let’s address the “why” question first.
As we have discussed, containers can be short-lived and were designed for efficiency. Eliminating the I/O stack as much as possible contributes to the overall performance of a container environment. If storage is delivered through a container, the communication path between application and storage is very lightweight – simply between processes on the same server. As an application moves, a container on the host can provide access to the storage, including spinning up a dedicated storage container if one did not already exist.
Read more about containers and storage
- Containers often need persistent storage, but how do you achieve that? We look at the key options, including Docker volumes versus bind mounts, and Docker Volume Plugins.
- Red Hat launches storage delivered via containers and predicts a future in which costly and inflexible storage hardware and pricey hypervisors will be a thing of the past.
Clearly, there is a lot of back-end work to be done to keep data protected and available across multiple hosts, but this is less of a challenge than with traditional storage arrays because for many applications, only one container accesses a data volume at any one time.
Disaggregating access to storage in this way eliminates one of the issues we will see as NVMe becomes adopted more widely – the problem of having data pass through a shared controller. NVMe has much greater performance than traditional SAS/SATA, making a shared controller the new bottleneck for storage. Disaggregation helps mitigate this issue, in the same way as hyper-converged systems distribute capacity and performance for storage across a scale-out server architecture.
The question of “how” can be answered by looking at the location for persistence.
The media offers the answer here, with either spinning-disk HDDs or flash drives providing that capability. Configurations, access, and so on can be distributed across multiple nodes and media, with consensus algorithms used to ensure data is protected across multiple nodes and devices. That way, if any host or container delivering storage were to die, another can be spun up or the workload rebalanced across the remaining nodes, including the application itself. By design, the data would move with the application.
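The consensus idea behind this design can be illustrated with a minimal quorum-write sketch in Python. The node class, key names and simple majority rule below are invented for the example and do not reflect any particular product's design:

```python
# Minimal sketch of quorum-based replication: a write succeeds only when a
# majority of nodes acknowledge it, so the data survives the loss of a
# minority of hosts. All names here are illustrative, not a vendor API.

class Node:
    def __init__(self, name, alive=True):
        self.name = name
        self.alive = alive
        self.store = {}

    def write(self, key, value):
        if not self.alive:
            return False          # a dead node cannot acknowledge
        self.store[key] = value
        return True

def quorum_write(nodes, key, value):
    acks = sum(node.write(key, value) for node in nodes)
    return acks > len(nodes) // 2  # strict majority required

nodes = [Node("n1"), Node("n2"), Node("n3", alive=False)]
ok = quorum_write(nodes, "volume-42", b"block data")
print(ok)  # True: 2 of 3 nodes acknowledged, so the write stands
```

If a host dies, the surviving majority still holds the data, and a replacement node can be resynchronised from it, which is what allows the workload (and its volume) to follow the application around.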
Container storage suppliers
This is the kind of architecture being implemented by suppliers such as Portworx, OpenEBS, Red Hat and StorageOS. Each uses a distributed, node-based scale-out architecture, with storage and applications running on the same platform. Essentially, it is a hyper-converged model for containers.
Some suppliers, such as Scality (with RING) and Cisco HyperFlex (formerly Springpath), use containers within the architecture for scalability, even though the products are not just for container environments.
For all suppliers, integration with container orchestration platforms is essential. Kubernetes is leading this charge, with Kubernetes Volumes the most likely way for storage to be mapped in these environments.
There are some issues that still need to be considered as the technology matures.
First is the question of data services. Clearly, compression and deduplication have an effect on performance. The efficiency of these features will be key to gaining adoption, as we saw with the all-flash market. End-users will also expect data protection features such as snapshots, clones and replication.
Then there is the subject of integration with public cloud. Today’s solutions are mostly focused on single-site implementations, but true mobility means being able to move data around in a hybrid environment, which is much more of a challenge.
Finally, we should highlight issues of security.
The recent Meltdown vulnerability poses a specific risk for containers, with the ability to access data from one container in another on unpatched systems. This raises questions about data security and the use of techniques such as in-memory encryption that may be required to protect against the inadvertent leaking of data.
There is a particular challenge for the startups to solve here, which may have a direct impact on the uptake of container-based storage systems. It may also make some businesses think that the idea of physical isolation (shared storage) goes some way to mitigating against unforeseen risks as they are discovered and reported. | <urn:uuid:0a982abf-f2ce-4534-b202-9a640479dccb> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/Storage-in-containers-The-answer-to-persistent-storage-needs | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00171.warc.gz | en | 0.94477 | 1,701 | 3 | 3 |
In this edition of Coffee Break with AI, we want to take you on a journey of AI and games. There have been multiple milestones where nobody thought an AI model would be able to beat professionals in their respective fields, but every single time we have been proven wrong.
There will be two blogs about this topic: this blog will discuss AI models that do not involve a neural network and the next blog will talk about models that do use them.
Let’s get into it, and turn the page of our history book to 1997.
Deep Blue beats Kasparov at Chess
One of the earliest events where an AI was pitted against a human came in 1997. IBM built a chess-playing machine called Deep Blue. It beat the reigning world chess champion, Garry Kasparov, in a six-game match, with a score of 3½ to 2½ (half points are awarded when a game ends in a draw), under standard chess time constraints.
During Deep Blue’s development, from 1989 to 1997, chess was the generally accepted measure for the ability of machines to act intelligently. One can debate whether this is actually correct, but at that time, the idea was that if an AI could beat a human at chess, it would prove that the AI indeed had a form of human-like intelligence.
How did it work?
The AI methods used today weren’t available yet in the 90s. There was not enough processing power and not enough data to run big neural networks. What IBM used was a brute-force approach: for every position on the board, it analyzed which moves would be possible in future steps. The final version of the model was able to evaluate 200,000,000 positions per second, determine the best move based on that search, and decide on a play within the time limit of a chess round. This version of the model beat Kasparov on May 11th, 1997.
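The brute-force lookahead described above is, at heart, game-tree search with a static evaluation function at the leaves. A minimal Python sketch of the idea, using a toy tree of leaf scores rather than real chess positions, with the alpha-beta pruning that engines of this kind rely on to cut the search down:

```python
# Toy sketch of the game-tree search behind brute-force chess engines:
# explore moves to a fixed depth, score the leaf positions, and back the
# best score up the tree. Alpha-beta pruning skips branches that cannot
# change the result. The "game" here is just a nested list of leaf scores.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: static evaluation score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:               # opponent would never allow this
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

tree = [[3, 5], [6, 9], [1, 2]]   # depth-2 tree: our move, then opponent's
print(alphabeta(tree, True))       # 6: the best outcome we can guarantee
```

Deep Blue's contribution was doing this kind of search at enormous scale in custom hardware, not the search idea itself.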
What are the benefits?
The IBM team needed as many lookups as possible per second, but the computer chips at that time didn’t allow for such intense parallel processing. So they made a chip that could do that and highly optimized the parallel programming techniques. This is the basis for modern parallel processing in today’s computer chips and smart devices.
It’s now possible to have three different social media apps open on your phone while simultaneously listening to music. It also allows the processing of big data to train very complex models, like the protein-folding model we talked about in this blog. And all of this came about because IBM needed more optimized parallel programming to win a chess game!
Watson wins Jeopardy
The next milestone involves IBM again. The company was looking for a new PR event that could put the spotlight on itself. This turned into the creation of a big machine — it literally filled a room the size of a master bedroom — named Watson, built to play Jeopardy. Jeopardy is a game where the quiz-master provides an answer and a contestant must supply the associated question. Again, success: Watson beat the two top Jeopardy players, Brad Rutter and Ken Jennings, in 2011, with a huge difference in points, finishing more than $50,000 ahead of its opponents.
It started out as a PR stunt only, with the bonus that it would also challenge IBM’s team to tackle natural language processing. Watson needed to parse the question given by the quiz-master, understand the question, find an appropriate answer and respond, all the while having to adhere to the rules of the game like any human player would.
After a prompt was given, it had 6 seconds after pressing a buzzer to provide a response. Otherwise, Watson would get a wrong answer and lose its opportunity to one of the other contestants. If the answer was wrong, there was a monetary penalty. And on top of all of that, just like the human players, it could not access the internet during the game, so it needed to have a big knowledge base.
How did it work?
The IBM team opted to put multiple smaller systems in place that worked together. They created a system called DeepQA (Deep Question Answering), built on top of a system they had already worked on previously. This system parsed the prompt of the quiz-master and worked out what possible responses were available by using parallel computation (throwback to the chess accomplishment!).
In mere seconds, 100s of algorithms came back with possible responses and an evaluation score on how probable it was that each one of the options was correct. This was turned into a ranked list and the top candidate was chosen as the final response.
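That ranking step can be sketched in a few lines. The scorers and weights below are made up for the example, standing in for DeepQA's hundreds of real algorithms:

```python
# Illustrative sketch of DeepQA-style answer ranking: many independent
# scorers each rate every candidate answer, the scores are combined into a
# single confidence, and the top-ranked candidate is the final response.

def rank_candidates(candidates, scorers, weights):
    scored = []
    for answer in candidates:
        confidence = sum(w * s(answer) for s, w in zip(scorers, weights))
        scored.append((confidence, answer))
    scored.sort(reverse=True)                  # best confidence first
    return scored

scorers = [
    lambda a: 1.0 if "Chicago" in a else 0.2,  # e.g. a geography scorer
    lambda a: 1.0 / len(a),                    # e.g. prefer concise answers
]
weights = [0.8, 0.2]

ranked = rank_candidates(["Chicago", "Springfield"], scorers, weights)
print(ranked[0][1])  # Chicago: the top-ranked candidate wins the buzzer
```

The hard part in practice is not the ranking arithmetic but building scorers that extract meaningful evidence from natural language in seconds.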
What are the benefits?
Watson really pushed the field of natural language processing forward. Any field that has loads of unstructured data, such as text, could use a system that is similar to Watson. You can think of healthcare, where there are lots of patient records that could be parsed to provide help with a diagnosis. Nowadays, a version of Watson is also used in customer service, for example, in chatbots.
Libratus wins at poker
It is important to know that in 2012 there was a boom in neural network usage. The science community realized there was now enough data and computing power that neural networks were outperforming the state of the art. From that point on, almost all AI-related game playing models used neural networks and we will talk about those milestones in chapter two of this blog series.
However, in this section of this blog, we will talk about a model that didn’t use a neural network, but could still hold its own in this new era.
In 2017, Carnegie Mellon University created a machine learning model called Libratus. This model beat four top poker players in a 20-day tournament of no-limit Texas hold ’em. Libratus ended up winning by a margin of $1.7 million. Luckily for the contestants, they weren’t playing with real money!
In 2019, Libratus was used as a basis for an even better model, called Pluribus. This model beat the two best poker players in the world: Darren Elias, who holds the record for most World Poker Tour titles, and Chris “Jesus” Ferguson, winner of six World Series of Poker events. Where Libratus only played 1 on 1, Pluribus played six-player games and also won!
Poker is an imperfect information game, which means that not all the information is known at all times. In chess, both players know the configuration of the board at all times. However, in poker, you only know your own cards and the cards that are on the table, but not which cards will be drawn and you also don’t know the cards of your opponents. This is an imperfect information situation. On top of that, your opponents will also try to throw you off by bluffing.
How did it work?
Libratus has three different systems working together. The first system is a model that trained on the game of poker for millions of hours, learning from scratch how poker works and getting better after each iteration through reinforcement learning.
Think of reinforcement learning as a computer model trying out different things, getting a reward when it does well, getting a penalty when it doesn’t go well, and learning from this interaction. This first system gives the model a basic understanding of the game and a global strategy.
The second system uses the general information of the first system to plan a detailed strategy for the game that is currently being played. That strategy will differ per game, as each plays out differently. Since poker has so many different possibilities (on the order of 10¹⁶⁰), there are always game situations that the model hasn’t encountered yet, so it requires a third system. The model is retrained regularly, using the data of the games that were played most recently.
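Published accounts describe Libratus as being built on counterfactual regret minimization. The regret-matching loop at the core of that family of methods can be sketched in a few lines, with rock-paper-scissors standing in for poker purely for illustration:

```python
# Minimal regret-matching sketch, the building block of the counterfactual
# regret minimization family that poker bots such as Libratus are reported
# to build on. Through repeated play against a fixed opponent mix, the
# average strategy converges toward a best response.
import random

ACTIONS = 3                                     # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # row player vs column player

def strategy_from_regret(regret):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS        # no regret yet: play uniform
    return [p / total for p in positive]

def train(iterations, opponent=(0.5, 0.3, 0.2)):
    regret = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regret(regret)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        my = random.choices(range(ACTIONS), strategy)[0]
        opp = random.choices(range(ACTIONS), opponent)[0]
        for a in range(ACTIONS):    # regret: how much better action a was
            regret[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]    # average strategy

random.seed(0)
avg = train(20000)
print(max(range(ACTIONS), key=lambda a: avg[a]))  # 1: paper, the best reply
```

Against an opponent who over-plays rock, the average strategy converges on paper. Scaling this idea from three actions to poker's astronomical game tree is what the detailed-strategy and retraining systems are for.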
What are the benefits?
The idea of teaching poker to an AI is that, if a model can make strategic decisions in a situation with imperfect information and can also bluff correctly, this could be extrapolated and applied in other fields as well. These include automated negotiations, better fraud detection, self-driving cars, setting military or cybersecurity strategies, or planning medical treatment.
We have discussed three big milestones in the world of AI and games. Hopefully, we have started to give insight into why putting research into AI and games can be very beneficial for other areas. Of course, we are not finished yet. Our next blog will talk about three milestones where AI models used neural networks to beat games, including videogames!
Do you have questions or comments?
We’d love to hear them so please leave us a message below.
Follow us on social media for the latest blog posts, industry and IFS news!
Photo Credit: Nathan Dumlao | <urn:uuid:f28989a9-7cb8-416e-af56-5c352326f3e1> | CC-MAIN-2022-40 | https://blog.ifs.com/2020/07/coffee-break-with-ai-games-and-ai-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00171.warc.gz | en | 0.971789 | 1,855 | 2.953125 | 3 |
The Cat is Out of the Bag
If you provide IoT (Internet of Things) solutions, then you must provide requisite security. IoT security is fundamental for more reasons than the obvious. Certainly, general protections are always necessary when it comes to technology. With IoT, though, this is especially considerable.
For one thing, IoT tech is becoming ubiquitous. There are all kinds of applications out there, from remotely operated garage door openers, thermostats, LED lights, security cameras, and refrigerators, to data collection devices used in productive capacities. IoT enables more cohesive, comprehensive Big Data solutions. Big Data increases visibility, which can further optimize many areas of operations from supply chain management to production.
Botnets and Consequences
That said, there are many who don’t believe the issue is as bad as it is represented. Unfortunately for that perspective, real-world examples exist where IoT devices were used by cybercriminals to launch concerted, devastating attacks. In late 2016, there was a DDoS (Distributed Denial of Service) attack on the east coast. The attack exploited poor security measures in IoT devices like routers, webcams, smartphones, and more. A “botnet” is what facilitated the attack.
A “botnet” is a network of compromised devices running automated software designed to act in a way similar to real users, coordinated for one purpose or another. Twitter has many profiles which aren’t managed by actual people but are really “bots”. These are used for a variety of political purposes, but it’s important to note that bots of this kind aren’t restricted to social media. Especially in terms of IoT, botnets can wreak absolute havoc on operations. Having security measures in place to combat these things is absolutely fundamental.
One thing you can do to help expedite security involves education. If you’re selling IoT devices or making a business promoting them, then you want to ensure that your own digital “house” is clean of bad actors hailing from cybercriminal realms. You don’t want to download third-party apps, as these are often rife with Trojan software used as a means of hijacking systems for varying purposes. IoT security protocols should have strict limitations on applications; only those vetted by your tech team should be used on the network. You need to provide education for clients as well, so that they understand the dangers of IoT in addition to its many positives.
Staying Ahead of The Game
Something else to consider is remaining competitive. There is currently pressure for those designing and supplying IoT solutions to effect better security measures. If you can get ahead of the market on this, then the IoT solutions you provide may be desired over those competitors provide. This demand will increase as IoT becomes more ingrained professionally.
Keep reading: Ways IoT Is Changing Digital Marketing
Optimizing Your IoT Provisions
IoT security is tantamount to the success of businesses providing IoT solutions. If you want to see the greatest ROI for your new IoT provision, it makes sense to find established means of securing IoT systems, and those which are more obscure. It’s not quite the wild west, but there is a lot of room for innovation here and the right application of successful security is more likely to make your business an industry leader in IoT. | <urn:uuid:9ea2f8d3-692e-4a3b-b1a7-62f3cade498c> | CC-MAIN-2022-40 | https://iotmktg.com/iot-security-important-many-businesses-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00171.warc.gz | en | 0.952394 | 694 | 2.75 | 3 |
Have you ever wondered how technology has widened the generation gap? Do you find it challenging to raise kids in the digital era? Use parental control to keep an eye on your kid’s online activities and protect them.
The universal rules of child-raising are still the same, but today’s parents are dealing with an entirely new playing field when it comes to their kids. Things like technological advances and increasingly more expensive supplies have made parenting a very different experience.
Today’s kids get a good part of their exposure to the digital world from quite an early age. As a parent, we must provide the correct guidance and protection as well as ingrain better digital etiquette in them for their safety.
Let us find out how technology has impacted parenting and try to follow the best parenting tips given below.
Top 7 parenting tips for dealing with the internet generation
Monitor their smartphone usage
Monitoring your kid’s smartphone usage does not mean that you should watch over their shoulders all the time.
Instead, ask your children to share their passwords for email, social media, or any of the games they play online. Occasionally, inform your kid that you will be taking a look at their social media pages or email accounts. Overview the messages they’ve sent and received, but do not pry.
Carefully watch for abnormal activity, bullying, or names you do not recognize. Make sure that your child is with you while you are examining their accounts, so they can answer any questions you may have. It will help to build a foundation of trust, and they will know that you are not spying on them.
Ask them to use your phone
Even if you monitor your child’s smartphone usage, you can never know if they are deleting emails and conversations before you read or not.
So, when you allow them to use social media for the first time, ask them to add their email address or social media account to your phone. This way, you will be informed about the new messages that they receive.
But, make sure you should not read those messages. You should be aware of what is going on in your child’s digital world.
Discourage sharing personal information online
Children are very trusting by nature. Online predators are experts in making kids believe that they care for them more than their parents. Your child might share personal information online without being aware of the dangers associated with it.
Kids love sharing their experiences, day-to-day activities, information about their holidays on social media. Anyone with malicious intentions can use this information to harm your child.
Tell your children about the importance of keeping private things private.
Put a timer on the Internet
Kids use smartphones to research and complete school assignments too. And we know that they are up even after you go to bed.
In that case, you can apply a timer on the modem, so the internet turns off at a specified time. Kids’ safety apps also help you to schedule your child’s bedtime and curb their smartphone addiction.
Instruct them to avoid meeting strangers online
Pedophiles, cyberbullies, and online predators often use anonymous chat rooms or well-known social media sites to get in touch with unsuspecting kids.
Educate your child about Internet hazards, such as cyberbullying, online predation, identity theft. Make them aware of the characteristics of cybercriminals. Help them to identify unknown persons’ intentions and ask them to cut ties immediately if they are uncomfortable or unable to understand those intentions.
Build such a strong relationship with your kids that they would never hesitate to share their problems with you.
Talk to your kid
Without understanding what your kids are going through, you cannot guide them. Make yourselves available to your kids, so that they would never feel isolated in a problematic situation. Most importantly, never judge or demean them but instead sympathize with them and show them the correct way.
The Internet is a phenomenal yet threatening place. Make them understand the cons of technology entirely so that they can use their benefits wisely.
Use Bit Guardian Parental Control App
As we have mentioned earlier, parental control apps help you to inspect your child’s smartphone activities. You can block the addictive apps, unwanted calls, apply a screen-time control, prohibit their access to the Play Store.
Using child monitoring apps does not indicate that you do not trust your child. Instead, it suggests that you care too much about them, so you cannot leave them unsupervised.
The American author and psychologist James Dobson accurately stated, “Children are not casual guests in our home. They have been temporarily loaned to us to love. We are responsible for instilling a foundation of values on which they can successfully build their lives.”
Every parent ultimately aims to raise a kid who is confident, kind, and successful. Be a techno-savvy and loving parent; use parental control apps. And follow the best parenting tips that we have shared here to deal with the Internet generation. | <urn:uuid:49d4b6c3-674c-4808-a022-ad16f93cfff9> | CC-MAIN-2022-40 | https://blog.bit-guardian.com/useful-parenting-tips-to-deal-with-internet-generation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00171.warc.gz | en | 0.949633 | 1,038 | 2.5625 | 3 |
Nothing But Net
Mark McFadden Addressing the Need for More Addresses
- By Scott Bekker
Can you remember when people were worried that we would run out of addresses for Internet devices? Looking at the pace of growth of the Internet, some people projected that we would run out of IPv4 address space as early as 2006.
Since that time, two important measures were undertaken. First, a stern rationing program for Internet addresses was implemented. The combination of draconian IP address space management along with the transition from class oriented address allocation (remember Class A, Class B and Class C allocations?) to Classless Interdomain Routing (CIDR) slowed the growth in IP address utilization. The Internet registries have been successful in ensuring the conservation of a scarce resource -- addresses.
The second step was the development of a replacement for the twenty-year-old Internet Protocol (IP). The current version, IP version 4, has a built-in limitation on the number of addresses available. The Internet engineering community seized on the opportunity to develop a new IP protocol, called IPv6, with a substantially larger address space. For comparison, IPv4 has a 32-bit address space that provides about four billion addresses -- incredibly, IPv6 provides 340,282,366,920,938,463,463,374,607,431,768,211,456 usable IP addresses.
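The arithmetic is easy to check with Python's standard ipaddress module, which also illustrates the CIDR-style aggregation mentioned above (the example prefixes are arbitrary documentation addresses):

```python
# Checking the address-space arithmetic with Python's standard ipaddress
# module: IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
import ipaddress

v4_all = ipaddress.ip_network("0.0.0.0/0")   # the entire IPv4 space
v6_all = ipaddress.ip_network("::/0")        # the entire IPv6 space

print(v4_all.num_addresses)                  # 4294967296, i.e. 2**32
print(v6_all.num_addresses == 2**128)        # True

# CIDR lets registries hand out right-sized blocks instead of rigid
# class A/B/C chunks; e.g. four adjacent /24s aggregate into a single /22.
blocks = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
print(list(ipaddress.collapse_addresses(blocks)))  # [IPv4Network('198.51.100.0/22')]
```

That aggregation is exactly what slowed address consumption and kept backbone routing tables manageable while IPv6 was being developed.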
Address space is not the only motivation for IPv6. Other features of IPv6 address critical business requirements for more scalable network architectures, improved security and data integrity, integrated quality-of-service (QoS), autoconfiguration, mobile computing, data multicasting, and more efficient network route aggregation at the Internet’s backbone. IPv6 is a big package, and addressing is only the most visible component of the work.
What’s remarkable is that next to nothing is happening in North America with IPv6.
You would expect the nation that leads the world in Internet connectivity, bandwidth, and business applications to be on the leading edge of such a critical technology. But in fact, North America is far behind in the transition to IPv6 and its advantages.
The Regional Internet Registries (RIRs) are non-profit organizations that dole out resources such as IP address allocations to large service providers and telecommunications providers. Europe’s RIR, Réseaux IP Européens, has given out more than twenty-two allocations of IPv6 address space to European network operators. In the Asian Pacific region, the Asian Pacific Network Information Center has assigned more than fifteen. In North America the number is a paltry six.
Why the lack of interest in a key technology for the future of the Internet?
Part of the reason must be in Network Address Translation (NAT) boxes. I recently reviewed Microsoft’s beta of Internet Security and Acceleration Server for the print version of ENT. The ease with which NAT services can be implemented and configured is astonishing. No wonder use of address space has slowed -- many companies are simply "hiding" their networks using NATs and private addressing.
Also, demand for IPv6’s advanced services -- like QoS and autoconfiguration -- seems pretty limited.
Still, the lack of interest is surprising. Given the potential explosion of mobile computing -- and the addressing demands it will place on us -- it seems certain there will be a surge in addressing requirements. When that time comes, will your organization be in a position to take advantage of IPv6? Or, like so many organizations today, will you continue to patch and cope with a twenty-year-old protocol?

--Mark McFadden is a consultant and is communications director for the Commercial Internet eXchange (Washington). Contact him at [email protected].
Capacitors are made in hundreds of sizes and types. Several of these will be discussed in the following section.
Fixed paper capacitors are made of layers of tinfoil. The dielectric is made of waxed paper. Wires extending from the ends connect to the foil plates. The assembly is tightly rolled into a cylinder and sealed with special compounds. Some capacitors are enclosed in plastic for rigidity. These capacitors can withstand severe heat, moisture, and shock.
Rectangular oil filled capacitors are hermetically sealed in metal cans. They are oil filled and have very high insulation resistance. This type of capacitor is used in power supplies of radio transmitters and other electronic equipment.
Can Type Electrolytic Capacitors
Can type electrolytic capacitors use different methods of plate construction. Figure 1 shows in detail how three separate can type electrolytic capacitors are put together. Figure 2 shows several of the single-ended capacitors.
Figure 1. This chart shows a number of can type electrolytic capacitors, along with their voltage ratings and common uses.
A–Basic can type.
Figure 2. Selection of can type electrolytic capacitors
Some capacitors have aluminum plates and a wet or dry electrolyte of borax or carbonate. A dc voltage is applied during manufacturing. Electrolytic action creates a thin layer of aluminum oxide that deposits on the positive plate. This coating insulates the plate from the electrolyte.
The negative plate is connected to the electrolyte. The electrolyte and positive plates form the capacitor. These capacitors are useful when a large amount of capacity is needed in a small space.
The polarity of these capacitors is very important. A reverse connection can destroy them. The cans may contain from one to four different capacitors.
The metal can is usually the common negative terminal for all the capacitors. A special metal-and-fiber mounting plate is supplied for easy installation on a chassis.
Tubular Electrolytic Capacitor
The construction of tubular electrolytic capacitors, Figure 3, is similar to that of the can type, Figure 4. The main advantage of these tubular capacitors is their smaller size.
Figure 3. Tubular electrolytic capacitors.
Figure 4. Tubular electrolytic capacitor.
They have a metal case enclosed in an insulating tube. They are also made with two, three, or four units in one cylinder.
A very popular small capacitor used a great deal in radio and TV work is the ceramic capacitor, Figures 5 and 6. The ceramic capacitor is made of a special ceramic dielectric.
The silver plates of the capacitor are fixed on the dielectric. The entire component is treated with special insulation that can withstand heat and moisture.
Mica capacitors are small capacitors. They are made by stacking tinfoil plates together with thin sheets of mica as the dielectric. The assembly is then molded into a plastic case.
Figure 5. This chart shows several types of ceramic capacitors. Also listed are their voltage ratings and common uses.
B–Multilayer resin dipped.
C–Multilayer terminal base.
Figure 6. Typical ceramic capacitor.
Variable capacitors consist of metal plates that join together as the shaft turns, Figure 7. The stationary plate is called the stator. The rotating plate is called the rotor.
Figure 7. Typical variable capacitor consisting of a stator and rotor.
When we adjust or turn the dial on a radio, we are actually adjusting a variable capacitor inside the radio. By changing the amount of capacitance inside the tuning circuit, we change the frequency to which the radio is tuned. This capacitor is at maximum capacitance when the plates are fully meshed. The schematic symbol for a variable capacitor is shown in Figure 8.
Figure 8. Schematic symbol for a variable capacitor.
A trimmer capacitor, Figure 9, is a type of variable capacitor. The adjustable screw compresses the plates and increases capacitance. Mica is used as a dielectric.
Figure 9. Types of trimmer capacitors.
Trimmer capacitors are used where fine adjustments of capacitance are needed. They are used with larger capacitors and are connected in parallel with them.
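Because capacitances in parallel simply add, the small adjustable value of the trimmer nudges the combined total by only a few picofarads. A minimal sketch, with illustrative component values:

```python
def parallel_capacitance(*caps_pf):
    """Total capacitance of capacitors wired in parallel (values in pF)."""
    return sum(caps_pf)

fixed = 100.0    # fixed capacitor, pF (illustrative value)
trimmer = 7.5    # trimmer set partway through its range, pF

total = parallel_capacitance(fixed, trimmer)
print(f"Total: {total} pF")  # Total: 107.5 pF
```

Turning the trimmer screw changes only the small second term, giving the fine adjustment described above.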
To adjust trimmer capacitors, turn the screw with a special fiber or plastic screwdriver called an alignment tool. A regular screwdriver should not be used for this purpose because the capacitance added by the metal blade will cause an inaccurate adjustment.
Tantalum capacitors are similar to aluminum electrolytic capacitors, Figure 10. However, tantalum capacitors use tantalum, not aluminum, for the electrode. Tantalum capacitors have three distinct advantages that make them quite useful.
- Tantalum capacitors have a larger capacitance over a smaller area, which makes them ideal for smaller circuits.
- Tantalum capacitors have a long shelf life.
- Tantalum resists most acids; consequently, tantalum capacitors have less leakage current.
Figure 10. This chart shows three types of tantalum capacitors. Voltage ratings and common uses are also listed.
A–Epoxy dipped solid electrolyte.
B–Hermetically sealed solid electrolyte.
C–Hermetically sealed sintered-anode. | <urn:uuid:75f23ffd-eb12-4662-acb9-33842385119d> | CC-MAIN-2022-40 | https://electricala2z.com/electrical-circuits/capacitors-types-fixed-variable-capacitors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00171.warc.gz | en | 0.897418 | 1,150 | 3.765625 | 4 |
I just read the WordPress article about World IPv6 Day, and many of the comments in response showed that readers had only a basic understanding of what an IPv6 Internet address actually is. To help, we have put together a 10-point FAQ that explains, in simple terms and analogies, the ramifications of transitioning to IPv6.
To start, here’s an overview of some of the basics:
Why are we going to IPv6?
Every device connected to the Internet requires an IP address. The current system, put in place back in 1977, is called IPv4 and was designed for 4 billion addresses. At the time, the Internet was an experiment and there was no central planning for anything like the commercial Internet we are experiencing today. The official reason we need IPv6 is that we have run out of IPv4 addresses (more on this later).
Where does my IP address come from?
A consumer with an account through their provider gets their IP address from their ISP (such as Comcast). When your provider installed your Internet, they most likely put a little box in your house called a router. When powered up, this router sends a signal to your provider asking for an IP address. Your provider has large blocks of IP addresses that were most likely allocated to them by IANA (the Internet Assigned Numbers Authority).
If there are 4 billion IPv4 addresses, isn’t that enough for the world right now?
It should be considering the world population is about 6 billion. We can assume for now that private access to the Internet is a luxury of the economic middle class and above. Generally you need one Internet address per household and only one per business, so it would seem that perhaps 2 billion would be plenty of addresses at the moment to meet the current need.
So, if this is the case, why can’t we live with 4 billion IP addresses for now?
First of all, industrialized societies are putting (or planning to put) Internet addresses in all kinds of devices (mobile phones, refrigerators, etc.). So allocating one IP address per household or business is no longer valid. The demand has surpassed this considerably as many individuals require multiple IP addresses.
Second, the IP addresses were originally distributed by IANA like cheap wine. Blocks of IP addresses were handed out in chunks to organizations in much larger quantities than needed. In fairness, at the time, it was believed that every computer in a company would need its own IP address. However, since the advent of NAT/PAT in the early 1990s, most companies and many ISPs can easily stretch a single IP to 255 users (sharing it). That brings the actual number of users that IPv4 could potentially support to well over a trillion!
Yet, while this is true, the multiple addresses originally distributed to individual organizations haven't been reallocated for use elsewhere. Most of the attempted media scare surrounding IPv6 is based on the fact that IANA has given out all the centrally controlled IP addresses, and the IP addresses already given out are not easily reclaimed. So, despite there being plenty of supply overall, it's not distributed as efficiently as it could be.
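The "stretching" mentioned above works because PAT multiplexes many internal hosts onto one public address, telling flows apart by port number. A toy model of the translation table (all addresses and ports here are made up for illustration):

```python
# Toy PAT (port address translation) table: many private hosts share one
# public IPv4 address, distinguished by the translated source port.
public_ip = "203.0.113.5"   # example public address (RFC 5737 documentation range)
nat_table = {}              # (private_ip, private_port) -> public_port
next_port = 40000

def translate(private_ip, private_port):
    """Map an outbound private flow to a unique public (ip, port) pair."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return public_ip, nat_table[key]

print(translate("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(translate("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(translate("192.168.1.10", 51000))  # same flow, same mapping: 40000
```

Every internal host appears to the outside world as `203.0.113.5`, which is why one public address can serve a whole office.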
Can’t we just reclaim and reuse the surplus of IPv4 addresses?
Since we just very recently ran out, there is no big motivation in place for the owners to give/sell the unused IPs back. There is currently no mechanism or established commodity market for them (yet).
Also, once allocated by IANA, IP addresses are not necessarily accounted for by anyone. Yes, there is an official owner, but they are not under any obligation to make efficient use of their allocation. Think of it like a retired farmer with a large set of historical water rights. Suppose the farmer retires and retains his water rights because there is nobody to which he can sell them back. The difference here is that water rights are very valuable. Perhaps you see where I am going with this for IPv4? Demand and need are not necessarily the same thing.
How does an IPv4-enabled user talk to an IPv6 user?
In short, they don’t. At least not directly. For now it’s done with smoke and mirrors. The dirty secret with this transition strategy is that the customer must actually have both IPv6 and IPv4 addresses at the same time. They cannot completely switch to an IPv6 address without retaining their old IPv4 address. So it is in reality a duplicate isolated Internet where you are in one or the other.
Communication is possible, though, using a dual stack. The dual-stack method is what allows an IPv6 customer to talk to IPv4 users and IPv6 users at the same time. With the dual stack, the Internet provider will match up IPv6 users to talk over IPv6 if both ends are IPv6 enabled. However, IPv4 users CANNOT talk to IPv6 users, so the customer must maintain an IPv4 address; otherwise, they would cut themselves off from 99.99+ percent of Internet users. The dual-stack method is just maintaining two separate Internet interfaces. Without maintaining the IPv4 address at the same time, a customer would isolate themselves from huge swaths of the world until everybody had IPv6. To date, in limited tests, less than .0026 percent of the traffic on the Internet has been IPv6; the rest is IPv4.
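On a dual-stacked host, name resolution is where the two Internets meet: a single lookup can return both IPv4 (A) and IPv6 (AAAA) addresses, and the stack uses whichever family it can actually reach. A small sketch using Python's standard library:

```python
import socket

def addresses_for(host):
    """Return the IPv4 and IPv6 addresses DNS offers for a host name."""
    results = {"IPv4": set(), "IPv6": set()}
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            host, 80, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            results["IPv4"].add(sockaddr[0])
        elif family == socket.AF_INET6:
            results["IPv6"].add(sockaddr[0])
    return results

# A host reachable only over IPv4 will simply show an empty "IPv6" set.
print(addresses_for("localhost"))
```

Running this against a public site shows at a glance whether it participates in the IPv6 Internet at all.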
Why is it so hard to transition to IPv6? Why can’t we just switch tomorrow?
To recap previous points:
1) IPv4 users, all 4 billion of them, currently cannot talk to new IPv6 users.
2) IPv6 users cannot talk to IPv4 users unless they keep their old IPv4 address and a dual stack.
3) IPv4 still works quite well, and there are IPv4 addresses available. However, although the reclamation of IPv4 addresses currently lacks some organization, it may become more economically feasible as problems with the transition to IPv6 crop up. Only time will tell.
What would happen if we did not switch? Could we live with IPv4?
Yes, the Internet would continue to operate. However, as the pressure for new and easy to distribute IP addresses for mobile devices heats up, I think we would see IP addresses being sold like real estate.
Note: A bigger economic gating factor to the adoption of the expanding Internet is the limitation of wireless frequency space. You can’t create any more frequencies for wireless in areas that are already saturated. IP addresses are just now coming under some pressure, and as with any fixed commodity, we will see their value rise as the holders of large blocks of IP addresses sell them off and redistribute the existing 4 billion. I suspect the set we have can last another 100 years under this type of system.
Is it possible that a segment of the Internet will split off and exclusively use IPv6?
Yes, this is a possible scenario, and there is precedent for it. Vendors, given a chance, can eliminate competition simply by having a critical mass of users willing to adopt their services. Here is the scenario: (Keep in mind that some of the following contains opinions and conjecture on IPv6, the future, and the motivation of players involved in pushing IPv6.)
With a complete worldwide conversion to IPv6 not likely in the near future, a small number of larger ISPs and content providers turn on IPv6 and start serving IPv6 enabled customers with unique and original content not accessible to customers limited to IPv4. For example, Facebook starts a new service only available on their IPv6 network supported by AT&T. This would be similar to what was initially done with the iPad and iPhone.
It used to be that all applications on the Internet ran from a standard Web browser and were device independent. However, there is a growing subset of applications that only run on the Apple devices. Just a few years ago it was a forgone conclusion that vendors would make Web applications capable of running on any browser and any hardware device. I am not so sure this is the case anymore.
When will we lose our dependency on IPv4?
Good question. For now, most of the push for IPv6 seems to be coming from vendors using the standard fear tactic. However, as is always the case, with the development of new products and technologies, all of this could change very quickly. | <urn:uuid:038d4566-da49-4924-8824-3228b19ba9cb> | CC-MAIN-2022-40 | https://netequalizernews.com/2011/06/13/ten-things-you-should-know-about-ipv6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00171.warc.gz | en | 0.962882 | 1,723 | 3.15625 | 3 |
What is an indicator of compromise (IOC)?
An indicator of compromise is a piece of digital forensic data that indicates a potential network breach. This information helps security investigators identify malicious or suspicious activity including threats, data breaches, and malware. IOCs can be collected during routine cybersecurity scans or manually if suspicious activity is detected.
Since IOC identification is primarily reactive, the discovery of an IOC typically means that an organization has already been compromised. However, this detection helps organizations to stop in-process attacks sooner and reduce the attack’s impact. In addition, investigating IOCs can be used to repair existing security flaws and create more intelligent detection tools. Comparatively, indicators of attack (IOA) are used in identifying attacks in real time.
Cyberattacks are becoming more sophisticated, which makes IOCs harder to detect, so it's important to know what to look for.
Examples of IOCs
- Repeated log-in activity indicating a brute-force attack
- Abnormal or inhuman network traffic patterns or traffic from a location unrelated to the organization
- Strange activity from admin accounts including requests for permission changes or other settings
- Mobile device application changes
- Increased database activity
- Multiple requests for the same file
- Finding large amounts of data stored incorrectly
- Domain Name System (DNS) request oddities
- Unfamiliar applications in the network
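Several of these indicators can be surfaced with simple log analysis. A minimal sketch that flags the first IOC on the list, repeated log-in failures from one source; the log line format and the threshold are assumptions for illustration:

```python
from collections import Counter

def flag_brute_force(log_lines, threshold=5):
    """Flag source IPs with enough failed log-ins to suggest brute force."""
    failures = Counter()
    for line in log_lines:
        # Assumed log format: "<timestamp> LOGIN_FAILED user=<u> ip=<addr>"
        if "LOGIN_FAILED" in line:
            ip = line.split("ip=")[-1].strip()
            failures[ip] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

sample = [f"2024-01-01T00:00:{i:02d} LOGIN_FAILED user=admin ip=198.51.100.7"
          for i in range(6)]
print(flag_brute_force(sample))  # ['198.51.100.7']
```

A real deployment would feed this kind of rule with authentication logs continuously rather than after the fact.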
Accepted Risks of Not Investigating IOCs
If a company chooses not to investigate IOCs, it is putting itself in a vulnerable position. By leaving IOCs unexplored, companies are accepting the possibility of unknown risks in their systems that they might not be prepared for.
A major concern that comes with not investigating an IOC is the possibility of data unknowingly leaving the system. In this case, companies do not know definitively if data has been stolen, and if data is stolen, what that stolen data could be. Depending on the type of company, this data can be PII, PHI, credit card information, or customer and vendor data.
By accepting the risk of data loss, companies are potentially putting their customers at risk of fraud or identity theft. In addition, if data is leaked and discovered on the dark web, the company’s reputation is negatively impacted.
Private and public companies alike also need to consider the implications of ignoring an IOC when it comes to compliance requirements and requirements for reporting the loss of sensitive data.
By not investigating an IOC, companies can no longer confidently say their networks are secure. Ransomware is a threat to companies of all sizes and poses significant risk. If a company ignores the IOC, they are accepting the risk that their network could be compromised leading towards a possible data breach and ransomware event.
While the company is proceeding as usual, their data could be continuously exfiltrated and the network encrypted. Threat actors now have two ways to extort money from the company: the need for a decryption key and for keeping stolen data from being released. There is also the potential that the longer the attack goes on, the larger the ransom demand becomes. The sooner an IOC is properly investigated, the earlier a potential ransom event can be dealt with.
Why You Should Always Investigate IOCs
Ultimately, companies should not be ignoring IOCs. They have a duty to their customers and a legal responsibility to complete due diligence in these situations. While the company might get lucky that the IOC did not lead to a larger event, that is unlikely, and companies need to be prepared for the worst. If they accept the risks of leaving an IOC uninvestigated, they need to be prepared to be held accountable for their actions.
Being able to identify an IOC is a key component that every cybersecurity protocol should have.
At Blue Team Alpha, we can help. We offer a Compromise Assessment service that works to identify IOCs to determine if your system has been breached. We also offer Managed Security Operations Center (SOCaaS), which is a managed security service that provides continuous data analysis, threat intelligence, and security incident reporting. Contact us today to learn more.
- Independent Basic Service Set (IBSS)
- Basic Service Set (BSS)
- Extended Service Set (ESS)
Independent Basic Service Set (IBSS) allows two or more devices to communicate directly with each other without a need for a central device. This is known as Ad hoc mode where a peer to peer network between stations is formed without the need for an Access Point.
In a Basic Service Set (BSS), the Wireless LAN is established using a central device called an Access Point (AP) that centralizes access and control over a group of wireless devices. Wireless devices do not communicate directly with each other; instead, they communicate with the AP, and the AP forwards the frames to the destination stations. The Access Point manages the wireless network and advertises its existence by broadcasting the Service Set Identifier (SSID), and any device that wants to use the wireless network must first send an association request to the Access Point. The Access Point can require any of the following criteria before allowing a client to join.
- A matching Service Set Identifier (SSID)
- A compatible wireless data rate
- Authentication credentials
After a client has associated itself with the Access Point, all communications to and from the client will traverse the AP.
The wireless coverage area of an AP is called the Basic Service Area (BSA), sometimes also referred to as Wireless Cell. An AP can also be connected to a wired Ethernet Local Area Network through an uplink port connection, unlike the Independent Basic Service Set in which the wireless network cannot be connected to the wired network.
The BSS is uniquely identified by the Basic Service Set Identifier (BSSID), which is the Layer 2 MAC address of the BSS access point. The wireless network itself, however, is advertised using an SSID, which announces the availability of the wireless network to devices.
An Extended Service Set (ESS) is created by connecting multiple Basic Service Sets (BSSs) via a distribution system. Two or more Access Points are connected to the same Local Area Network to provide a larger coverage area, which allows a client to move from one AP to another and still be part of the LAN. This process is known as roaming: a client can physically change locations and still be connected to the LAN. When a client senses that radio signals from the current AP are getting weaker, it finds a new AP with stronger signals and starts using that AP. An ESS generally uses a common SSID to allow roaming from access point to access point without requiring client re-configuration.
The wireless coverage area created by joining two or more Access Points via a distribution system is called an Extended Service Area (ESA). Stations within the same ESA may communicate with each other, even though these stations may be in different basic service areas and may even be moving between basic service areas; this requires that the wireless medium and the backbone of the ESS form a single Layer 2 link.
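The relationships above can be sketched in code: each AP's BSS is identified by its unique BSSID, while the ESS is tied together by a shared SSID, so a roaming client changes BSSID but stays on the same network. A simplified model (MAC addresses and signal values are invented for illustration):

```python
# Simplified model of an ESS: two APs share one SSID but have unique BSSIDs.
access_points = [
    {"ssid": "CorpNet", "bssid": "00:1a:2b:3c:4d:01", "signal_dbm": -70},
    {"ssid": "CorpNet", "bssid": "00:1a:2b:3c:4d:02", "signal_dbm": -45},
]

def roam(aps, ssid):
    """Pick the strongest AP advertising the ESS's shared SSID."""
    candidates = [ap for ap in aps if ap["ssid"] == ssid]
    return max(candidates, key=lambda ap: ap["signal_dbm"])

best = roam(access_points, "CorpNet")
# The client associates with the second BSS but remains in the same ESS.
print(best["bssid"])  # 00:1a:2b:3c:4d:02
```

Real clients apply vendor-specific thresholds and hysteresis before switching, but the selection logic follows this shape.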
A Distribution System connects multiple Access Points to form an ESS, while doing so it provides the wireless stations with the option of mobility. It is the means by which an access point communicates with another access point to exchange frames for stations in their respective BSSs, forward frames to follow mobile stations as they move from one BSS to another, and exchange frames with a wired network. Network equipment outside of the extended service set views the entire ESS and all of its mobile stations as a single layer 2 network where all stations are physically stationary. Thus the ESS hides the mobility of the mobile stations from everything outside the ESS. This allows correct inter-operation with other network protocols that do not understand the concept of mobility.
The figure below shows a Basic Service Set on the left side and an Extended Service Set on the right side.
Although not shown in the figure an Access Point can act as a bridge between the wireless and wired LANs, allowing stations on either LAN to communicate with each other.
In this lesson, we learned the different types of Service Sets in a Wireless LAN. This topic is one of the foundation topics in the Wireless section of the Cisco CCNA certification, and anyone who is preparing for the CCNA must be able to define and differentiate the service sets of a wireless LAN.
Considering a move to the cloud? You’re in good company. More and more organizations are incorporating some type of cloud—public, private, or hybrid—into their IT strategy.
If you’ve been evaluating public cloud services as part of that mix, you’ve probably encountered the debate between a “serverless” approach and the use of containers. From the time Amazon Web Services (AWS) introduced its Lambda serverless offering in 2014, this application deployment model has gained interest among organizations migrating to the cloud. Meanwhile, containerization—offered by Microsoft Azure, Google Cloud, and also AWS—continues to be a popular approach.
But what exactly are the differences between these application deployment models? And what happens if you want to use both?
A serverless approach—sometimes called function-as-a-service (FaaS)—promises to reduce development work and minimize management burdens. Developers produce only enough code for a particular function or task. When a predefined event occurs, the code is executed and the function is performed. For example, function code might be used to automatically back up newly created files or store data from a video surveillance system only when motion is detected.
Even though this serverless model does technically use a server, it does not require developers or administrators to deal with operating systems or virtual machines. There is no need for configuration, capacity planning, or load balancing. Function code runs on demand, using only the amount of resources needed for that task.
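The file back-up example above maps naturally onto a function handler: the platform invokes the function with an event describing the trigger, and no server configuration appears anywhere in the code. A sketch in the style of an AWS Lambda function triggered by an S3 notification; the event layout follows AWS's documented S3 notification format, and the actual copy step is left as a stub:

```python
def handler(event, context=None):
    """Invoked by the platform when a new file lands in a bucket."""
    backed_up = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would copy the object to a backup bucket here.
        backed_up.append(f"{bucket}/{key}")
    return {"backed_up": backed_up}

# Simulated invocation with a minimal S3-style event:
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "report.pdf"}}}]}
print(handler(event))  # {'backed_up': ['uploads/report.pdf']}
```

The function runs only when an upload event fires, which is exactly why the pay-per-execution pricing model works.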
Containers offer a lightweight form of virtualization that can simplify development and enhance IT efficiency. While traditional server virtualization uses virtual machines (VMs) that include a guest operating system, containers include only the application along with any libraries or other dependent software that the application needs to run. Multiple containers use a single container runtime engine, which sits on the host operating system.
Containers are typically easier to build and deploy than VMs. Developers can use the open-source Docker platform to streamline the process of packing, distributing, and managing container-based applications. And because containers are portable, self-contained units, with all of the necessary software, they run consistently in a variety of environments.
Like the function code created for serverless environments, container-based applications can be used for single tasks or for task-based parts of larger applications. Or you can run more complex applications within containers, such as a chatbot to support customer service agents.
What If You Want Both?
There are pros and cons to each application deployment model. For example, containers give you full control of the application and how it runs, but they can require more initial development work and maintenance than going serverless. Of course, while serverless might entail less coding, there is still a learning curve. And because serverless is a newer approach, fewer tools are available to streamline processes. Nevertheless, going serverless can help you control costs since you pay only when your function is executing—you can avoid spending money on idle servers.
Many organizations could benefit from both approaches. But managing two types of application deployment models—potentially on two different public cloud platforms—can become complex, especially if that means adopting distinct management tools.
Here’s the good news: It doesn’t have to be one or the other. With the right managed cloud services provider, you can support both serverless and container approaches while avoiding the complexity of using multiple tools.
*The above article appeared recently as part of CenturyLink’s ThinkGig Blog. CenturyLink is just one of your many viable options to support your cloud strategy. Talk to an ATC expert to start considering what options are best for you. Get a FREE consultation here. In the interest of providing CenturyLink its due credit, here is their pitch:
Selecting the Best Execution Venues with CenturyLink
With CenturyLink, you can choose the best execution venue for each application instead of struggling to fit all of your applications into one model or another. CenturyLink is cloud-agnostic. The CenturyLink Cloud Application Manager platform can support serverless and container application deployment models across a full range of public and private cloud environments. We can help you manage function code through AWS Lambda, containers on Microsoft Azure, plus software-defined environments within your own data center walls.
We offer end-to-end services regardless of which models you choose. We can help with everything from cloud migration planning and execution to networking, security, governance, cost optimization, and more.
And importantly, CenturyLink gives you the flexibility to choose which services you want to outsource. You could decide to outsource everything or divide up responsibilities. Schedule a free session with a Hybrid IT expert to explore which option is right for you. | <urn:uuid:e258a295-7afe-4f60-a529-70485854e410> | CC-MAIN-2022-40 | https://4atc.com/serverless-or-containers-you-can-have-both-with-the-right-managed-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00371.warc.gz | en | 0.921849 | 985 | 2.828125 | 3 |
According to the (ISC)² 2021 Cybersecurity Workforce Study, the global cybersecurity workforce needs to grow by 65% to effectively defend the world's businesses from rapidly increasing cyber risks. The cybersecurity industry is therefore far from exempt from the global skills shortage that continues to dominate the HR landscape.
Artificial Intelligence (AI) is a tool that is becoming increasingly popular in supporting businesses to resolve such challenges. No longer are AI and machine learning (ML) only reserved for the most technologically savvy departments, but rather widely-used technologies across the enterprise. When introducing AI into a wider range of business functions, organisations can reduce the amount of resources used for basic administrative responsibilities and focus their energies on more innovative development opportunities.
Despite growing AI usage, misunderstandings around its potential remain common in departments which are traditionally less technologically-driven, such as HR. Although misconceptions might be common, in reality, AI is largely designed to automate certain manual processes, with the goal of reducing administrative burden and thus increasing efficiency. As such, there is a significant case to be made for introducing AI into a business’s HR strategy, especially for cybersecurity organisations, because as more businesses adopt new and emerging technologies, security threats change.
Driving rational decisions made by machines
When AI was beginning to mature in the late 20th century, experts described the technology in terms of machines that mimicked and displayed “human” skills, such as learning and problem-solving. They assumed that machines would be progressively humanised, learning to carry out tasks in the same way that a human would. This is certainly an aspect of AI, but doesn’t quite tell the full story. In fact, this understanding of AI is closer to how ML is defined today. ML is an application of AI that leverages data and algorithms to imitate how humans learn, with its accuracy progressively improving over time. ML is generally used for things like online chatbots and customer recommendation engines, which facilitate machines progressively “learning” how to give more human suggestions to users.
Leading researcher Stuart Russell makes the case that AI is not centred around humanising machines, but rather involves programming machines to act rationally. Machines aren’t becoming more human, but are being programmed in a way that leads to rational functions being carried out. The difference is significant.
In the vast majority of organisations today AI is about teaching machines to carry out processes that can be a drain on resources if carried out by humans. For example, a developer could program a computer to select a business coach for an employee based upon the data that it gathers about the individual’s job role, responsibilities, and personal needs, among other factors. There is no real benefit derived from a human executing this action instead of a computer, as the two would likely come to an equivalent conclusion - that's what we're seeing at CoachHub. And this really matters in coaching, where the working alliance between coach and coachee is one of the biggest indicators of coaching success. Much of what AI does, especially in the HR space, is a process of automation which saves time and costs, as well as increasing the scale at which an action can be undertaken.
A range of benefits to deliver success
There are a huge variety of additional benefits to integrating AI into existing business functions that go beyond efficiency gains. Whether it’s employee time, money spent on licences for manual software, or hours sunk into training, all of these things can be reinvested into the business. This is especially important in the HR department, where even the smallest resource recovery can be invested directly into improving the employee experience in some way, which is crucial in today’s tight candidate market. It’s also worth considering the quality gains when using AI to automate processes. As long as algorithms are trained correctly, AI does not make mistakes in the same way that a human naturally would from time to time. This not only contributes to the overall efficiency gains that derive from implementing AI, but can also help a business to realise the technology’s broader applications.
Over time, AI could learn about which types of people development materials, such as videos or interactive games, are most engaging for different employee profiles. The data gathered would then be used to recommend materials to employees with a similar profile, with the aim of driving greater engagement with people development materials. Taking this one step further, the AI could use this employee engagement data to carry out predictive analytics, suggesting which kinds of materials could be most useful for the organisation’s HR team to create in future. HR leaders can then get ahead of the curve, ensuring that they are always providing their employees with highly relevant learning experiences, particularly in cybersecurity, an industry that is constantly evolving over time and responding to changing needs.
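The recommendation idea described above can be sketched in a few lines. The data and names here are entirely hypothetical; a production system would learn from far richer engagement signals, but the core logic is just "pick the material type with the best average engagement for this employee profile":

```python
from collections import defaultdict

# Hypothetical engagement log: (employee profile, material type, engagement score).
engagement = [
    ("engineer", "video", 0.9),
    ("engineer", "game", 0.4),
    ("engineer", "video", 0.8),
    ("manager", "video", 0.3),
    ("manager", "article", 0.7),
]

def recommend_material(log, profile):
    """Return the material type with the highest mean engagement for a profile."""
    scores = defaultdict(list)
    for prof, material, score in log:
        if prof == profile:
            scores[material].append(score)
    return max(scores, key=lambda m: sum(scores[m]) / len(scores[m]))

print(recommend_material(engagement, "engineer"))  # video
print(recommend_material(engagement, "manager"))   # article
```

The same aggregated scores could feed the predictive step the text mentions, e.g. flagging which material types are worth producing next.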
Revolutionising HR processes at the touch of a button
Integrating AI into time-consuming, admin-driven processes is an important step for organisations looking to increase efficiency and use resources more effectively. Within the HR department, AI tends to drive a lot of routine processes, allowing resources to be allocated to innovative new programmes, such as digital coaching, which may not have been accessible before.
Such a reallocation of resources also brings benefits for workers, who can increasingly self-serve HR functions quickly and seamlessly, focusing their efforts on the job they were employed to do. The prevalence of AI is certainly not something to be feared within the HR arena, but an opportunity that is sure to bring more and more benefits in years to come.
It's no exaggeration to say that without access to electricity, almost every developed nation on earth would grind to a halt. A well-running, well-protected electrical grid is crucial for normal day-to-day life in schools, hospitals, homes and businesses. It is this ubiquitous need for electricity that makes a cyberattack targeting the national electrical grid (arguably the most crucial piece of national infrastructure in a modern economy) a very real possibility, and that drives the security industry to make sufficient progress to protect it.
This makes the electrical grid something of a holy grail for cybercriminals and nation-state actors, who could use a widespread blackout for financial and political gain in many ways. We can ascertain confidently that the threat is real, but what about the response from the utility industry, and from the security industry? Are we prepared for large-scale attacks on the national infrastructure, and if we aren’t, how can we be?
A global survey released earlier this month by Accenture revealed that 63% of utility executives believe their country faces at least a moderate risk of electricity supply interruption from a cyberattack on electrical distribution grids in the next five years. It is a sobering thought that such a high percentage of people within the utility industry believe this is feasible. Revealed in the same report, and even more sobering, is that only 27% of executives suggested that they design specific protection for key assets and 37% that they manage cyber incidence recovery programs.
This shows that the utility industry is yet to take the threat presented by malicious actors targeting their key assets seriously. The technology to protect utilities exists, and has done for some time. As updating, repairing, and modernizing the physical infrastructure of electrical grids is clearly a priority for these organizations, to neglect security entirely represents a catastrophic underestimation of how devastating an attack could be.
Possibly the most worrying facet of this report is that it is not new information. In 2014, the Ponemon Institute published results of a survey that indicated 67% of critical infrastructure companies had suffered some sort of cyberattack. Of course, without the details, we can’t definitively assess whether these attacks are applicable to production environments; however, recent events such as the attacks in Ukraine have shown us that transitions from enterprise networks to operational networks are a very real possibility.
According to the 2016 Verizon Data Breach Investigations Report (DBIR), 24 utility-related breaches were included in their analysis, with many more spanning industrial environments from mining to healthcare. So, we know that throughout the last few years, security professionals and industry surveys have indicated that critical infrastructure could be at risk. But when was the possibility first raised?
In the United States, the concept of protecting the electrical grid from cyberattack was put into motion seriously in 2006 and has been the topic of executive orders and government initiatives for the past decade. More work is still to be done, as many of these systems were not developed to be cyber-resilient. Operators are continuing to develop policies, procedures, and technical requirements to meet cybersecurity goals. Vendors of these systems are continuing to improve them to incorporate cybersecurity as a stakeholder, but as mentioned earlier, their core priorities are increased efficiency and affordability.
The effort to improve efficiency and lower cost is driving the technical requirements for converged networks and increased data transfer requirements, as much of the proposed technologies enable efficiency through advanced analytics and increased connectivity. In addition, operators need remote support for more advanced systems, including support for information security requirements, also increasing the need for connectivity to (sometimes) previously non-IP-connected systems.
With this increase in connectivity comes an increase in risk. Anytime new pathways to these systems are added, they must be properly secured based on their individual characteristics following a risk-based approach. Requirements and guidance for security can be found in several publications developed by professionals in both information security and operational environments.
With the increase in risk, the opportunity for directed cyberattack also increases. Potential attackers range from hobbyists to nation-states disrupting infrastructure. Now, the actual risk versus the perceived risk to a given organization is complete speculation and is about as difficult to predict as the next location lightning will strike. That said, the precedent for infrastructure disruption as a powerful means of attack has already been set globally.
Though the United States has been pursuing critical infrastructure protection (CIP) initiatives for several years through regulation, much of the world is not at the same level of maturity and is just now beginning to delve into the operational technology cybersecurity realm. This is particularly true in the Middle East, India and the Asia Pacific region, as a significant increase in awareness has led to security initiatives in these areas.
What we can take away as a positive is that utility executives are aware of the potential risks, and we can hope they are actively pursuing remediation programs to improve the security of their operations, keeping our core infrastructure safe.
Natasha Alam from True Blood has joined the growing list of celebrities speaking out publicly against bullying. Celebrities are raising awareness and bringing attention to this escalating challenge.
Alam recently filmed an anti-bullying public service announcement (PSA).
Canada recently targeted bullying with its National Bullying Awareness Week, and the UK recently promoted the Big March to bring attention to bullying, violence and harassment in schools.
Each of these efforts encourages people to speak out about bullying and victimization, and adults are being urged to listen. These campaigns also mention prevention, the need for awareness and how everyone (students, parents, teachers, staff, community members, etc.) can play a role and make a difference.
I agree that PREVENTION is critical, and I agree we need to help victims be heard and encourage Security Teams and Prevention Teams to listen. Unfortunately, traditional ‘safe school’ approaches are not delivering the results we need.
The statistics are real; the challenges victims face and the suicides are real, and it is clear that the time is now for new approaches.
The PSAs, marches and awareness weeks are all great first steps. However, bullying is a systemic problem that needs comprehensive tools and solutions to deliver multi-directional awareness, accountability, auditability and measurability. How is your school measuring your efforts? Are administrators measuring incident reports and tips provided by victims and bystanders? Are you measuring if school leaders and communities are listening? Are you measuring if prevention and intervention efforts are working or not working on an ongoing basis? Are you measuring if your efforts are meeting the OCR Dear Colleague letter’s guidelines?
Awareity wants to know… How is your school addressing bullying? Do you have a new innovative approach?
The Berkeley Lab website recently published a short “five-question” interview with David Brown, the director of the Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) since August 2011. Mathematics, according to Brown, is the foundation of modern computational science.
When Brown got a post-doc position at Los Alamos National Laboratory after earning his PhD in applied mathematics from the California Institute of Technology in 1982, he thought he would only stay a couple of years before accepting a teaching position elsewhere. Best laid plans, and all that: the two-year plan evolved into a 31-year tenure with US Department of Energy (DOE) national laboratories, including 14 years at Los Alamos National Laboratory and 13 years at Lawrence Livermore National Laboratory.
Brown explains that when he made the move to Lawrence Livermore National Lab, he was able to “apply his knowledge of math and science to the development and oversight of new research opportunities for scientists and mathematicians at that lab and throughout the DOE.” Brown’s passion for the field made him an ideal candidate to lead the extensive research program in applied mathematics at Berkeley Lab.
Brown refers to mathematics as the language of science, and says this language is what allows science to be put on computers. From there, it’s not a huge jump to see why the DOE invests in math research.
“New and better mathematical theories, models and algorithms…allow us to model and analyze physical and engineered systems that are important to DOE’s mission,” notes Brown. “Often math is used to make a very difficult problem tractable on computers.”
Brown cites a notable example from 30 years ago. Mathematician James Sethian’s work with asymptotic methods set the stage for breakthroughs in combustion simulation techniques. That discovery undergirds modern supercomputing codes used in everything from combustion to astrophysics to atmospheric flow.
Asked how math applies to supercomputers, Brown responds:
The scientific performance of big applications on supercomputers is as much a result of better mathematical models and algorithms as it is of increases in computer performance. In fact, the increases in performance of many scientific applications resulting from these better models and algorithms have often exceeded the performance increases due to Moore’s Law. And Moore’s Law, which predicts a doubling of performance every 18 months, offers a pretty impressive increase on its own. These improvements in performance help scientists make much more efficient use of supercomputers and study problems in greater detail.
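To put the quoted 18-month doubling in numbers, here is a back-of-the-envelope sketch (the function and figures are illustrative, not from the interview):

```python
# Growth implied by a doubling every 18 months (1.5 years).
def doubling_factor(years, doubling_period_years=1.5):
    """Cumulative performance multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Over a decade, hardware alone would account for roughly a 100x speedup,
# which is the baseline that algorithmic improvements have often exceeded.
print(round(doubling_factor(10), 1))  # 101.6
```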
An applied mathematician by training, Brown is especially interested in the development and analysis of algorithms for solving partial differential equations (PDEs). In 2001, the Overture project, which Brown led, was selected as one of the 100 “most important discoveries in the past 25 years” by the DOE Office of Science.
Social Engineering is, without a doubt, the oldest computer hack. A loose term defining a range of hacks and scams, social engineering has persisted through countless centuries of human history. Simple in premise, difficult to defend against, and constantly evolving, social engineering represents one of the single greatest threats to information security in the history of technology.
It’s easy to forget that even the most secure firewall combined with the latest and greatest security software is still operated by a human being behind the keyboard. That extra-tight security is only as secure as the person operating the machine. This often presents the easiest method of entry into a secure system, as human beings are much easier to trick than machines.
The ever-changing security landscape doesn’t hinder social engineering hacks. It can even enable them, allowing for more complex and effective methods to gain information from secure systems by using the humans that run them. We’re going to take a look at a few of the more creative social engineering scams seen this year, but first, we need to take a look back in time to get a frame of reference for how this method of manipulation has evolved.
Social engineering is a broad term used to encompass several different types of manipulation, often in the context of confidence tricks. This can be expanded to include a pretty wide range of techniques to influence everything from political or social change to information security.
As it pertains to information security, social engineering is used to obtain access to what would otherwise be a secure system. A tightly locked e-mail server with usernames and passwords, for example, could be cracked with a simple phone call that ends with a password reset in the hacker’s favor.
Gaining the credentials to use the company web portal with a similar technique would be another. These attacks are easy to defend against in theory, but in practice, it’s in our nature to fudge the rules a bit when we’re sympathetic to someone’s plight. Who hasn’t forgotten their login credentials once or twice in the middle of a crucial project that needed to be finished by a deadline?
This kind of manipulation of human empathy is what makes social engineering so successful. The best defense is strictly informed and enforced best security practices to counter this kind of manipulation. The use of emotional manipulation to gain access to otherwise secure locations goes back centuries, and it takes a coordinated defense to ensure things stay properly secured.
The year is 800 B.C. A decade-long war has raged between two ancient nations. The conflict comes to an end when, playing on pride, the general of one army offers a gift to the opposing nation-state’s city: A large wooden horse. The horse is loaded with elite infantry who overwhelm the city’s troops in the dead of night and allow the invading forces to crush the city’s drunken defenses.
While the story of the Trojan Horse probably isn’t real, it’s one of the earliest literary examples of a successful social engineering hack. It’s so ubiquitous in computer security that we even named a virus after it: the Trojan Horse is used as a backdoor method of entry into an otherwise secure system. It highlights that even over two-thousand years ago, the idea of misdirection and manipulation to breach security had already been established.
Moving forward a couple thousand years and a few leaps forward in technology, a more modern-day definition of social engineering began to take place. Brought into the public’s eye by rogue black hat-turned-white hat hacker Kevin Mitnick, social engineering was coined as an information security term around the mid-60’s, when a much younger Mitnick began exploiting the technique to run circles around the FBI for decades.
In several books on the subject, Mitnick outlines...
To read the full article, and others written by Columbia University Students, please refer to the source.
The artificial intelligence market is fueled by the potential to use data to change the world. While many organizations have already successfully adapted to this paradigm, applying machine learning to new problems is still challenging.
The single biggest technical hurdle that machine learning algorithms must overcome is their need for processed data in order to work — they can only make predictions from numeric data. This data is composed of relevant variables, known as "features." If the calculated features don’t clearly expose the predictive signals, no amount of tuning can take a model to the next level. The process for extracting these numeric features is called “feature engineering.”
Automating feature engineering optimizes the process of building and deploying accurate machine learning models by handling necessary but tedious tasks so data scientists can focus more on other important steps. Below are the basic concepts behind an automated feature engineering method called Deep Feature Synthesis (DFS), which generates many of the same features that a human data scientist would create.
Invented at MIT
DFS was conceived in MIT’s Computer Science and Artificial Intelligence Lab in 2014. Our founders, Kalyan Veeramachaneni and Max Kanter, used DFS to create the “Data Science Machine” to automatically build predictive models for complex, multi-table datasets. They used this system to compete in online data science competitions and beat 615 out of 906 human teams.
They first shared this work in a peer-reviewed paper at IEEE’s International Conference on Data Science and Advanced Analytics in 2015. Since then, it has matured to not only power Feature Labs’ products, but also motivate and enable researchers around the world, including those at Berkeley and IBM.
Understanding Deep Feature Synthesis
There are three key concepts in understanding Deep Feature Synthesis:
1. Features are derived from relationships between the data points in a dataset.
DFS performs feature engineering for multi-table and transactional datasets commonly found in databases or log files. We focus on this type of data because it is the most common type of enterprise data used today: a survey of 16,000 data scientists on Kaggle found that they spent 65% of their time using relational datasets.
2. Across datasets, many features are derived by using similar mathematical operations.
To understand this, let’s consider a dataset of customers and all of their purchases. For each customer, we may want to calculate a feature representing their most expensive purchase. To do this, we would collect all the transactions related to a customer and find the Maximum of the purchase amount field. However, imagine that we did this for a dataset comprised of airplane flights. If we applied Maximum to a numerical column in this scenario, it could calculate “the longest flight delay,” which could predict the potential for delays in the future.
Even though the natural language descriptions differ completely, the underlying math remains the same. In both of these cases, we applied the same operation to a list of numeric values to produce a new numeric feature that was specific to the dataset. These dataset-agnostic operations are called “primitives.”
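This dataset-agnostic property is easy to demonstrate by hand. The datasets below are made up, but the point is that one `maximum` primitive produces two very different domain features:

```python
# A single primitive: aggregate a list of numeric values to one number.
def maximum(values):
    return max(values)

# Two unrelated datasets (illustrative values).
purchases_by_customer = {"alice": [20.0, 350.0, 15.5], "bob": [42.0, 9.99]}
delays_by_flight = {"UA101": [5, 42, 0], "DL202": [120, 15]}

# Same operation, two dataset-specific features.
most_expensive_purchase = {c: maximum(v) for c, v in purchases_by_customer.items()}
longest_flight_delay = {f: maximum(v) for f, v in delays_by_flight.items()}

print(most_expensive_purchase)  # {'alice': 350.0, 'bob': 42.0}
print(longest_flight_delay)     # {'UA101': 42, 'DL202': 120}
```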
3. New features are often composed from utilizing previously derived features.
Primitives are the building blocks of DFS. Because primitives define their input and output types, we can stack them to construct complex features that mimic the ones that humans create today.
DFS can apply primitives across relationships between entities, so features can be created from datasets with many tables. We can control the complexity of the features we create by setting a maximum depth for our search.
Consider a feature which is often calculated by data scientists for transactional or event log data: “the average time between events.” This feature is valuable in predicting either fraudulent behavior or future customer engagement. DFS achieves the same feature by stacking two primitives, Time Since and Mean.
This example highlights a second advantage of primitives, which is that they can be used to quickly enumerate many interesting features in a parameterized fashion. So instead of Mean, we could use Maximum, Minimum, Standard Deviation, or Median to automatically generate several different ways of summarizing the time since the previous event. If we were to add a new primitive to DFS — like the distance between two locations — it would automatically combine with the existing primitives without any effort needed from the user.
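The stacking of Time Since and Mean can be written out by hand to see what DFS is composing (a simplified illustration, not the Featuretools primitive implementations themselves):

```python
from datetime import datetime

# Three events for one customer (illustrative timestamps).
events = [
    datetime(2018, 1, 1, 9, 0),
    datetime(2018, 1, 1, 9, 30),
    datetime(2018, 1, 1, 11, 0),
]

def time_since(timestamps):
    """Transform primitive: seconds between consecutive events."""
    return [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]

def mean(values):
    """Aggregation primitive."""
    return sum(values) / len(values)

# Stacked: Mean(Time Since(events)) = "average time between events".
print(mean(time_since(events)) / 60)  # 60.0 minutes on average
```

Swapping `mean` for `max` or `min` yields the parameterized variants described above ("longest gap between events", and so on) with no change to either primitive.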
Back in September, we announced that we were open-sourcing an implementation of DFS for both veteran and aspiring data scientists to try out. In the three months since then, Featuretools has become the most popular library for feature engineering on Github.
This means that a community of people can join together to contribute primitives from which everyone can benefit. Since primitives are defined independently of a specific dataset, any new primitive added to Featuretools can be incorporated into any other dataset that contains the same variable data types. In some cases, this might be a dataset in the same domain, but it could also be for a completely different use case. As an example, here is a contribution of 2 primitives to handle free text fields.
It’s easy to accidentally leak information about what you’re trying to predict into a model. One of our previous retail enterprise customer’s applications is a great example: production models didn’t match the company’s development results. They were trying to predict who would become a customer, and the most important feature in their model was the number of emails that their prospects had opened. It showed high accuracy during training but unfortunately didn’t work when it was deployed into production.
In retrospect, the reason for this is readily apparent — these prospects only started reading emails after becoming customers. The company’s manual feature engineering step wasn’t properly filtering out the data they had received after the outcome they were predicting had already come true.
DFS in Featuretools can automatically calculate the features for each training example at the specific time associated with the example by using “cutoff times.” It accomplishes this by simulating what the raw data looked like at a past point in time in order to perform feature engineering on the valid data. This leads to fewer label leakage problems, which helps data scientists become more confident about the results they are deploying into production.
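The idea behind cutoff times can be illustrated with a hand-rolled feature function (this is a sketch of the concept, not the Featuretools `cutoff_time` API; the data is hypothetical):

```python
from datetime import datetime

# A customer's transaction history (illustrative rows).
transactions = [
    {"customer": "alice", "time": datetime(2018, 1, 5), "amount": 20.0},
    {"customer": "alice", "time": datetime(2018, 2, 1), "amount": 300.0},
    {"customer": "alice", "time": datetime(2018, 3, 9), "amount": 15.0},
]

def mean_amount_before(rows, customer, cutoff):
    """Compute a feature using only data known before the cutoff time."""
    valid = [r["amount"] for r in rows
             if r["customer"] == customer and r["time"] < cutoff]
    return sum(valid) / len(valid) if valid else None

# A label observed on Feb 15 must not see the March transaction.
print(mean_amount_before(transactions, "alice", datetime(2018, 2, 15)))  # 160.0
```

Filtering every feature calculation this way is exactly the leakage protection the email-opens example needed.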
Augmenting the Human Touch with Automation
DFS can be used to develop baseline models with little human intervention. We have shown how this is possible in public demos using Featuretools. However, the automation of feature engineering should be thought of as a complement to critical human expertise — it enables data scientists to be more precise and productive.
For many problems, a baseline score is enough for a human to decide if an approach is valid. In one case, we ran an experiment against 1,257 human competitors on Kaggle. We produced feature matrices using DFS and then utilized a regressor in order to create a machine learning model.
We found that with almost no human input, DFS outperforms both baseline models in this prediction problem. In a real-world setting, this is valuable supporting evidence for leveraging machine learning in this use case. Next, we showed how adding custom primitives can be used to outperform more than 80% of our competitors and get close to the best overall score.
Applying Deep Feature Synthesis
We recently wrote about how automated feature engineering was used to increase revenue for a global bank’s fraud detection model. In that case, we were predicting if an individual transaction was fraudulent, but we created features based on historical behaviors of the customer who made the transaction. DFS created features such as “the time since the last transaction,” “the average time between transactions,” and “the last country in which this card was used.” All of these features depend on the relationships between the various data points and required using cutoff time to make sure only behavior from before the fraudulent event was used.
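Two of the features named above can be sketched directly from a card's history. The data and function below are illustrative, not the bank's actual pipeline:

```python
from datetime import datetime

# Prior transactions for one card: (timestamp, country) — hypothetical data.
history = [
    (datetime(2018, 3, 1, 10, 0), "FR"),
    (datetime(2018, 3, 2, 18, 30), "FR"),
    (datetime(2018, 3, 3, 2, 15), "RO"),
]

def features_for(new_time, past):
    """Behavioral features for a new transaction, from the card's history."""
    last_time, last_country = max(past)  # most recent prior transaction
    return {
        "seconds_since_last_txn": (new_time - last_time).total_seconds(),
        "last_country": last_country,
    }

print(features_for(datetime(2018, 3, 3, 2, 20), parsed := history) and
      features_for(datetime(2018, 3, 3, 2, 20), history))
```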
As a result, the number of false positives dropped by 54% compared to the bank’s existing software solution, thereby shrinking the number of customers affected by incorrectly blocked transactions. The financial impact of the new model was estimated to be €190,000 per 2 million transactions evaluated.
Deep Feature Synthesis vs. Deep Learning
Deep Learning automates feature engineering for images, text, and audio where a large training set is typically required, whereas DFS targets the structured transactional and relational datasets that companies work with.
The features that DFS generates are more explainable to humans because they are based on combinations of primitives that are easily described in natural language. The transformations in deep learning must be possible through matrix multiplication, while the primitives in DFS can be mapped to any function that a domain expert can describe. This increases the accessibility of the technology and offers more opportunities for those who are not experienced machine learning professionals to contribute their own expertise.
Additionally, while deep learning often requires many training examples to train the complex architectures it needs to work, DFS can start creating potential features based only on the schema of a dataset. For many enterprise use cases, enough training examples for deep learning are not available. DFS offers a way to begin creating interpretable features for smaller datasets that humans can manually validate.
A Better Future with Feature Engineering
Automating feature engineering offers the potential to accelerate the process of applying machine learning to the valuable datasets collected by data science teams today. It will help data scientists to quickly address new problems as they arise and, more importantly, make it easier for those new to data science to develop the skills necessary to apply their own domain expertise.
Renowned machine learning professor Pedro Domingos once said, “One of the holy grails of machine learning is to automate more and more of the feature engineering process.” We agree wholeheartedly, and we couldn’t be more excited to work at the forefront of this field!
We are often looking at ARM binaries in our favorite disassembler as we work on mobile applications and “Internet of Things” devices. As we worked on this binary we discovered a particular branch instruction that we wanted to modify. If you are familiar with the X86 jmp instruction, the ARM bx and bl instructions are similar.
Below is some ARM assembly. To the right side of the image is the raw encoding for the BLX instruction:
This particular instruction uses PC-relative addressing. The BL instruction permits a branch in the range of -16777216 to 16777214 bytes. It suited our purposes for modifying this binary to jump to a routine with the same signature, but resulting in a very different outcome in the execution of the binary.
It would be nice if the bits for the address were neatly packed in order, but they are not. It takes a little math and bit mangling to massage a new address into an ARM instruction that did what we wanted. So, of course, we googled for the details and found the ARMv7 reference manual: ARM v7 Reference. See section A6.7.18 for the details of the ARM BLX instruction. This is all that is really needed to decode and encode an instruction, as it gives the layout for the opcode and the logic required to encode the address.
However, the following URLs are useful examples of encoding and decoding:
We needed to modify a lot of instructions, so we opted to automate this using a Python program we created. This is just a quick and dirty script we hacked up in an hour, which pleased us greatly:
You find the PC relative BLX/BL instruction you want to replace, and then find the new instruction you want it to link to. These are the two inputs into the file. Next, open up your favorite hex editor and replace the raw machine code bytes with your new instruction. When the program executes it will jump to your desired sub-routine. | <urn:uuid:9910fb4d-12c5-427e-a88a-acb09c43ba7e> | CC-MAIN-2022-40 | https://carvesystems.com/news/patching-bl-blx-instructions-in-arm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00371.warc.gz | en | 0.930832 | 405 | 3.078125 | 3 |
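As a sketch of what such a script does, the following encodes the Thumb-2 `BL` form described above. This is a minimal reconstruction, not our actual tool: the function name and demo addresses are made up for illustration, and the bit-shuffling follows the encoding logic given in the reference manual.

```python
def encode_thumb2_bl(insn_addr, target_addr):
    """Encode a Thumb-2 BL instruction as two little-endian halfwords.

    BL is PC-relative: the pipeline PC is the instruction address + 4.
    The offset must be halfword-aligned and within the -16777216 to
    +16777214 byte range mentioned above.
    """
    offset = target_addr - (insn_addr + 4)
    if offset % 2 or not -16777216 <= offset <= 16777214:
        raise ValueError("target out of BL range or misaligned")

    imm = (offset >> 1) & 0xFFFFFF           # 24-bit halfword offset
    s = (imm >> 23) & 1
    i1 = (imm >> 22) & 1
    i2 = (imm >> 21) & 1
    imm10 = (imm >> 11) & 0x3FF
    imm11 = imm & 0x7FF
    j1 = (~(i1 ^ s)) & 1                     # manual: I1 = NOT(J1 EOR S)
    j2 = (~(i2 ^ s)) & 1

    hw1 = 0xF000 | (s << 10) | imm10         # 11110 S imm10
    hw2 = 0xD000 | (j1 << 13) | (j2 << 11) | imm11  # 11 J1 1 J2 imm11
    return hw1.to_bytes(2, "little") + hw2.to_bytes(2, "little")

# The canonical "bl #0" (target = next instruction) encodes as f000 f800,
# which appears in the file as the byte stream 00 f0 00 f8:
print(encode_thumb2_bl(0x1000, 0x1004).hex())  # 00f000f8
```

The returned bytes are what you paste over the original instruction in the hex editor.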
By this point, we’ve figured out how to leverage GeoDNS to accurately target our users and deliver location-specific responses. But none of what we’ve learned comes close to tackling the volatility of the Internet. For that, we need network monitoring.
Monitoring services use vast networks of nodes that constantly ping your resources to determine whether they are up or down. They can also detect how long it takes to reach a resource (response time) and the number of hops between a user and the resource.
You can inject this data into your DNS configurations for truly intelligent query routing that can react to changing network conditions.
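As an illustration of the idea, here is a minimal sketch of such a monitor. The hostnames and health-check URLs are hypothetical, and a real service would probe from many nodes and push the winner into the DNS configuration rather than just printing it.

```python
import time
import urllib.request

# Hypothetical health-check URLs for the same service hosted in two regions.
ENDPOINTS = {
    "us-east.example.com": "http://us-east.example.com/health",
    "eu-west.example.com": "http://eu-west.example.com/health",
}

def probe(url, timeout=2.0):
    """Return the response time in seconds, or None if the resource is down."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def fastest_healthy(results):
    """Pick the host to hand out in DNS answers: lowest response time wins."""
    healthy = {host: rtt for host, rtt in results.items() if rtt is not None}
    return min(healthy, key=healthy.get) if healthy else None

# A real monitor would call probe() every few minutes; here we use sample data.
sample = {"us-east.example.com": 0.031, "eu-west.example.com": 0.012}
print(fastest_healthy(sample))  # eu-west.example.com
```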
Think of DNS monitoring like radio traffic updates. Every few minutes, a radio announcer will cut in with an update on current traffic conditions. You can use this information to alter your route and avoid road closures and heavy traffic. With a traditional GPS device, you have no concept of traffic conditions, and if you live in a congested area like DC, then you’re no better off than you were with a paper map.
Existing multiple sclerosis therapies systematically modulate the immune system to dampen its erroneous attack on the protective myelin sheaths around nerve cells, which is the hallmark of the autoimmune disease. But this approach puts patients at a higher risk of infection.
Scientists at Thomas Jefferson University said they have found a way to train the immune system to tolerate self-antigens that trigger inflammatory responses in MS while leaving the rest of the immune system intact.
They isolated tiny sacs called extracellular vesicles from cells known as oligodendrocytes. The sacs contained myelin antigens, and when they injected those particles into mice, it suppressed MS, according to a new study published in Science Translational Medicine. | <urn:uuid:18633499-b4be-4672-9f81-9642e1b0734d> | CC-MAIN-2022-40 | https://biopharmacurated.com/treating-multiple-sclerosis-with-an-antigen-specific-cell-therapy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00571.warc.gz | en | 0.918033 | 149 | 3.03125 | 3 |
A distributed denial-of-service (DDoS) attack is a flood of illegitimate traffic that is sent to a network resource from an IP address or a group of IP addresses, rendering the network resource unavailable. A DDoS attack is a serious security threat that can affect all types of networks, from the simplest business network to the most complex corporate network. Fortunately, NetFlow Analyzer can help you detect DDoS attacks and mitigate the harm they can cause.
Understanding DDoS attacks
DDoS attacks take advantage of the Transmission Control Protocol (TCP) three-way handshake that is carried out for every connection established using TCP. Not surprisingly, hackers have found a number of ways to defeat the three-way handshake.
In a DDoS attack, the attackers disturb the sequence of the three-way handshake, either by not responding to the SYN-ACK from the server or by continuously sending SYN packets from non-existent (spoofed) IP addresses.

In the three-way handshake, the responding server queues a half-open connection for every SYN it answers with a SYN-ACK. During an attack, the client never responds to the server's SYN-ACK, so half-open entries accumulate for all the SYN packets received from spoofed IP addresses. The queue overflows and the server becomes unavailable.
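This buildup of half-open connections is visible in flow data. The sketch below uses simplified stand-ins for real NetFlow/IPFIX records (the tuple layout and flag strings are illustrative, not the actual export format) to flag sources sending many unanswered SYNs:

```python
from collections import Counter

def syn_flood_suspects(flows, threshold=100):
    """Flag source IPs with many SYN-only flows (half-open connections).

    Each flow record is (src_ip, dst_ip, tcp_flags), where the flags string
    contains "S" if a SYN was seen and "A" if an ACK was seen -- a simplified
    stand-in for real flow records.
    """
    half_open = Counter(
        src for src, _dst, flags in flows
        if "S" in flags and "A" not in flags
    )
    return {src: n for src, n in half_open.items() if n >= threshold}

# 150 half-open attempts from one spoofed source, plus normal traffic:
flows = [("203.0.113.9", "10.0.0.5", "S")] * 150
flows += [("198.51.100.7", "10.0.0.5", "SA")] * 20
print(syn_flood_suspects(flows))  # {'203.0.113.9': 150}
```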
There are various types of denial-of-service attacks, such as Network Time Protocol (NTP) DDoS attacks, ICMP floods, teardrop attacks, peer-to-peer attacks, slow read attacks, and reflected or spoofed attacks.
Last month, an attack on unsecured NTP servers was reported to be the largest DDoS attack ever, with an attack size of approximately 400Gbps. The attackers used a technique called NTP reflection: they sent request packets to NTP servers (which clients normally query periodically for time sync) with a spoofed source IP address. As a result, a large set of responses was sent by the NTP servers to the spoofed address, causing temporary congestion on the network and reducing resource availability.
Mitigating DDoS attacks with NetFlow Analyzer: One customer’s approach
James Braunegg from Micron21 is a NetFlow Analyzer customer who has done a lot of research on DDoS attacks and has written and published many blog posts on his website detailing his findings, including how to identify and mitigate a DDoS attack. James uses NetFlow Analyzer to identify and mitigate anomalies in his data center network and keep it running efficiently with high availability.
James holds a Monash University post graduate master’s degree (MBMS) and joined Micron21 in 2004 to establish the company’s technical operations. James’ background involves running a highly successful IT hardware company for the past 20 years, along with supporting corporate networks and end-user custom software solutions focusing on individual customer support. His main focus at Micron21 is the management of the data center and its supporting infrastructure.
How James identifies DDoS attacks
In one recent incident, James used NetFlow Analyzer to analyze abnormal spikes in his data center traffic. By using NetFlow Analyzer’s alerting mechanism, he was able to identify the abnormalities and mitigate the DDoS attack easily.
Being a data center management specialist, James’ job is to ensure high availability to Micron21 clients. Using NetFlow Analyzer, he was able to identify an NTP DDoS attack on another client. Learn how NetFlow Analyzer helped James identify this anomaly, and read his post about NTP DDoS attacks. James has also documented his experience about using NetFlow Analyzer, security analytics, and anomaly detection. You can download the case study here.
“Winding back the clock by say four or five years, I remember trying lots of software and evaluating lots of options with one goal in mind: find attack traffic and quickly identify the source and destination along with the protocol in near real time, enabling us to lower the time it took to deal with threats; relying on SNMP data for this purpose was useless,” said James. “In the end we chose ManageEngine NetFlow Analyzer, which provided a fantastic starting point for us in providing real-time visibility. While now we use NSFOCUS hardware mainly for DDoS detection and mitigation, we still to this day use ManageEngine Netflow Analyzer within our NOC.”
James closes with a recommendation and an invitation for you: “I still today highly recommend ManageEngine Netflow Analyzer. If you need any more information please contact me!” | <urn:uuid:83c181e7-00f7-4681-9728-c7b02ea7fe9e> | CC-MAIN-2022-40 | https://blogs.manageengine.com/network/netflowanalyzer/2014/04/02/ddos-attack-detection-using-netflow-analyzer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00571.warc.gz | en | 0.926223 | 1,006 | 2.96875 | 3 |
The capability to recognize patterns and predict outcomes based on past data through artificial intelligence (AI) — including deep learning (DL) within machine learning (ML) — has taken computing to the next level.
Machine learning (ML) is why an e-commerce company can highlight the products you are most likely to need based on your buying behavior or how a streaming provider comes up with the most compelling content suggestions based on your watch history.
Although systems supporting AI applications are considered smart, most don’t learn independently and often rely on human programming. For example, data scientists have to prepare inputs and select variables used in predictive analytics.
Deep learning can automatically do this without human intervention, as it is meant to learn and improve by itself through analyzing algorithms. Deep learning, with the help of artificial neural networks, tries to imitate how humans think and learn.
Deep learning has emerged as an influential piece of digital technology that enterprises leverage to optimize their business models and predict the best possible outcomes.
See below to learn all about deep learning technology and the top deep learning providers in the market:
Choosing the right deep learning providers
Top deep learning providers
PyTorch grew out of Torch, a Lua-based deep learning and scientific computing framework with broad support for machine learning algorithms. It is used widely among enterprise leaders, like Google, IBM, and Walmart.
Torch uses CUDA and C/C++ libraries to process and scale the building model production and flexibility. Contrary to Torch, PyTorch runs on Python, which means anyone who can work with Python can build their deep learning models.
Lately, PyTorch has been increasingly adopted and is gaining more recognition as a highly competent deep learning framework. PyTorch is a Python port of the Torch deep learning framework, employed to create deep neural networks and perform complex tensor computations. PyTorch’s architectural attributes make the deep modeling process more simplified and transparent than Torch’s.
- Expedites design process with rapid prototyping
- Supports multiple GPUs and implementation of parallel programs on multiple GPUs
- Can exchange data with external libraries
- Simplified user interface
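Assuming PyTorch is installed, a minimal sketch of the workflow — defining a small network and running a forward pass — takes only a few lines (the layer sizes here are arbitrary placeholders, not a recommended architecture):

```python
import torch
import torch.nn as nn

# A tiny feed-forward network -- a minimal sketch, not a production model.
model = nn.Sequential(
    nn.Linear(4, 8),   # 4 input features -> 8 hidden units
    nn.ReLU(),
    nn.Linear(8, 2),   # 8 hidden units -> 2 outputs
)

x = torch.randn(3, 4)    # a batch of 3 samples with 4 features each
y = model(x)             # forward pass
print(tuple(y.shape))    # (3, 2)
```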
Developed by Google, TensorFlow is one of the most popular end-to-end open-source deep learning frameworks that work with desktop and mobile. TensorFlow supports languages like Python, C++, and R to build deep learning models.
TensorFlow is supported by Google and has a Python-based framework, making it one of the most preferred deep learning frameworks. Plus, it comes with additional training resources and walk-throughs for learning.
TensorFlow can leverage natural language processing to power tools like Google Translate and help with speech, image, and handwriting recognition, summarization, text classification, and forecasting. TensorBoard is TensorFlow’s visualization toolkit that provides comprehensive data visualization and measurements during machine learning workflows. TensorFlow Serving is another TensorFlow tool used to quickly deploy new algorithms and experiments while maintaining the old server architecture and APIs. It also integrates different TensorFlow models and remains extendable to accommodate other models and data types.
- Supports computation on multiple GPUs
- Comprehensive graph visualization on TensorBoard
- Abundant reference and community support
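A comparable sketch in TensorFlow (again assuming the library is installed, with placeholder layer sizes) uses the Keras API; the model is built automatically on its first forward pass:

```python
import tensorflow as tf

# A minimal Keras model -- a sketch of the workflow, not a tuned model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

x = tf.random.normal((3, 4))   # a batch of 3 samples with 4 features each
y = model(x)                   # forward pass builds and runs the model
print(tuple(y.shape))          # (3, 2)
```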
Microsoft Cognitive Toolkit, previously known as CNTK, is an open-source deep learning framework that trains deep learning models. It is known for its training modules and various model types across servers. It conducts systematic convolution neural networks and streamlined training for data, including images, speech, and text.
With the Microsoft Cognitive Toolkit, you can implement reinforcement learning models or generative adversarial networks. Compared to other toolkits, it is known for delivering higher performance and scalability while operating across multiple machines.

Due to the fine granularity of its building blocks, users don’t have to work in a low-level language to create complex, new layer types. The toolkit supports both RNN and CNN neural models and is thus highly proficient at handling image, handwriting, and speech recognition problems. For now, its capability on mobile is fairly limited due to the lack of support for the ARM architecture.
- Provides support with interfaces like Python, C++, and Command Line
- High efficiency and scalability for multiple machines
- Works with complex image, handwriting, and speech recognition
- Supports RNN and CNN neural networks
Deeplearning4j is a deep learning library for the Java Virtual Machine (JVM). This deep learning ecosystem is developed in Java and effectively supports JVM languages, such as Scala, Clojure, and Kotlin. The Deeplearning4j framework supports parallel training and micro-service architecture adaptation, run by linking distributed CPUs and GPUs. Eclipse Deeplearning4j is widely adopted as a distributed, commercial, enterprise-focused deep learning platform. It enables deep network support with the help of RBMs, DBNs, convolutional neural networks, recursive neural tensor networks, recurrent neural networks, and long short-term memory.

This framework runs on Java, which is much more efficient than Python for certain applications. DL4J is as fast as the Caffe framework for image recognition when using multiple GPUs. It also shows extreme potential in text mining, natural language processing, fraud detection, and speech tagging.
As the core programming language of this deep learning framework, Java unlocks many features and functionalities for its users and serves as an effective way to deploy deep learning models to production.
- Executes deep learning processes by leveraging the entire Java ecosystem
- Capable of processing massive amounts of data in less time
- Involves multi-threaded as well as single-threaded deep learning
- Framework can be implemented on top of Hadoop and Spark
MXNet is a deep learning framework that supports programming languages like Python, R, Scala, C++, and Julia. It was designed specifically to meet high efficiency, productivity, and adaptability requirements. MXNet is Amazon’s deep learning framework of choice, also used in their reference library.
One of MXNet’s most notable features is its support for distributed training. It offers efficient, nearly linear scaling and uses hardware to its fullest extent. The MXNet ecosystem also enables the user to code in a range of programming languages. Developers can train their deep learning models in whichever language they are proficient in, without needing additional skills or expertise.
MXNet can scale and work with several GPUs as the back end is written in C++ and CUDA. It also supports RNN, CNN, and long short-term memory networks. MXNet deep learning framework use cases include imaging, speech recognition, forecasting, and natural language processing.
- Hybrid programming accommodates both imperative and symbolic programming
- Efficient distributed training
- Supports several different programming languages for added flexibility
- Excellent scalability with near linearity on GPU clusters
Developed by Microsoft and Facebook as an open-source deep learning ecosystem, Open Neural Network Exchange (ONNX) represents a common file format, so AI developers can use models with different frameworks, tools, runtimes, and compilers. It enables developers to switch between platforms.
ONNX comes with an emphasis on in-built operators, standard data types, and an expandable computation graph model. These models are natively supported on Caffe2, MXNet, Microsoft Cognitive Toolkit, and PyTorch. ONNX also offers converters for other machine learning frameworks, such as CoreML, TensorFlow, Sci-kit Learn, and Keras.
ONNX is a dependable tool that prevents framework lock-in by making hardware optimization easy and allowing model sharing. Users can convert their pre-trained model into a file and merge it with their applications. ONNX has gained recognition due to its adaptable nature and interoperability.
- Element of interoperability and flexibility
- Delivers compatible run times and libraries
- Freedom to use your preferred DL framework with the inference engine of your choice
- Optimizes hardware performance
Deep learning features
Supervised, semi-supervised, and unsupervised learning
Supervised learning is the simplest learning method, as every training example comes with a label, which makes the learning process for the network easier. Semi-supervised learning is used to train an initial model with a few labeled examples and then repeatedly apply it to a greater number of unlabeled data. Unsupervised learning uses algorithms to identify patterns in datasets whose data points are neither classified nor labeled.
Deep learning acts as a comprehensive neural network. Hence, it possesses a large number of interconnected neurons organized in layers. The input layer receives information. Several hidden layers process the information, and the output layer provides valuable results.
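This layered flow can be illustrated with plain Python, no framework needed: each layer multiplies its inputs by weights, adds a bias, and passes the result on. The weights below are random placeholders, so only the shapes (not the output values) are meaningful.

```python
import random

random.seed(0)  # reproducible placeholder weights

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i(inputs_i * weights_j_i) + biases_j."""
    return [
        sum(x * w for x, w in zip(inputs, row)) + b
        for row, b in zip(weights, biases)
    ]

def relu(values):
    """A common activation: pass positives through, clamp negatives to zero."""
    return [max(0.0, v) for v in values]

# 3 input features -> 4 hidden neurons -> 2 output values
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = relu(dense([0.5, -0.2, 0.1], w_hidden, [0.0] * 4))
output = dense(hidden, w_output, [0.0] * 2)
print(len(hidden), len(output))  # 4 2
```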
Deep learning algorithms depend more on high-end machines compared to traditional ML applications. They require advanced GPUs to process heavy workloads. A huge amount of data, structured or unstructured, can be processed with deep learning, and performance improves as more data is fed in.
Hyperparameters, like batch size, learning rate, momentum, and the number of epochs or layers, need to be tuned well for better model accuracy, since they govern how the network’s layer-by-layer predictions converge to the final output. Over-fitting and under-fitting can be managed in deep learning by adjusting hyperparameters.
Benefits of deep learning
- Deep learning algorithms can automatically generate new features in the training dataset without human intervention. It can perform intricate tasks without extensive feature engineering, allowing faster product rollouts with superior accuracy.
- Deep learning also works well with unstructured data. This feature is useful, since the majority of business data is unstructured. With traditional ML algorithms, the wealth of information in unstructured data often goes untapped.
- Learning complex features and conducting intensive computational tasks become more efficient with multiple layers of deep neural networks. Due to its ability to learn from errors, deep learning can carry out perceptive tasks: it verifies the accuracy of its predictions and makes the necessary adjustments.
- A deep learning model can take several days to learn its parameters. Coupled with parallel algorithms, training can be distributed across multiple systems and completed much faster, depending on the volume of training data and GPU capacity.
- While training can be cost-intensive, it helps businesses reduce expenditure by preventing inaccurate predictions or product defects. Manufacturing, retail, and health care industries can leverage deep learning algorithms to reduce error margins dramatically.
- Combining deep learning and data science, more effective processing models can be achieved as they ensure more reliable and concise analysis outcomes. Deep learning has many applications in analytics and forecasting, such as marketing and sales, HR, and finance.
- Deep learning is scalable as it can process large amounts of data and perform extensive computation processes cost-effectively. This quality directly impacts productivity, adaptability, and portability.
Deep learning use cases
Self-driving cars use deep learning to analyze data to function in different terrains, like mountains, bridges, and urban and rural roads. This data can come from sensors, public cameras, and satellite images that help test and implement self-driving cars. Through training, deep learning systems can prepare self-driving cars to handle all of these scenarios.
Deep learning and GPU processors can provide better image analysis and diagnosis for patients. Artificial intelligence can also help develop new, more effective medications and cures for life-threatening diseases and expedite treatment as well.
Deep learning makes it more convenient to find or predict the trend for a particular stock and whether it will be bullish or bearish. Analysts can consider multiple factors, including the number of transactions, buyers, and sellers, and the closing balance of the previous day, while training the deep learning algorithm. Qualitative equity analysts can train deep learning layers using the P/E ratio; return on equity, assets, or capital employed; and dividends.
Deep learning algorithms can help detect specific news and its origin and determine whether it is fake. For instance, if deep learning is allowed to mainstream social and local media during elections, it can also help predict election results.
As hacking and cybercrime have become more sophisticated, deep learning can serve as an adaptive model to counter cyberattacks. Deep learning can learn to detect different types of fraudulent transactions on the web and track their origin, frequency, and hotspots by taking into account factors like IP addresses and router and device information.
Image recognition through deep learning can ultimately help the AI system classify different variables and points of consideration based on their appearance. A practical example of image recognition can be seen in face recognition for surveillance.
What to look for in a deep learning provider
Deep learning unlocks several practical use cases of machine learning and artificial intelligence technologies. It has the power to break down tasks in the most efficient manner and give machine applications superior intelligence and adaptability.
To know which one of the deep learning frameworks above best meets your requirements depends on several factors, some of which are as follows.
- Architecture and functional attributes
- Speed and processing capacity
- Debugging considerations
- Level of APIs
- Integration with existing systems
Your choice also depends a lot on your level of expertise, so consider starting with a beginner-friendly DL framework in the initial stages. Python-based frameworks tend to be the most straightforward for beginners as well.

If you are more experienced, there is a whole set of further considerations, such as integration with applications and platforms, resource requirements, usability, and the availability and coherence of training models. A rigorous evaluation process, constant trial and error, and an open mind will help you arrive at your ideal deep learning framework.
As a small business owner, it is important to invest in the right technology to keep costs down and productivity up. By moving to the cloud, small businesses can take advantage of the latest technology without breaking the bank. They can also enjoy increased productivity and streamlined workflows that can help them become more efficient and competitive.
What is cloud computing?
Cloud computing is the ability to access information and applications over the Internet. This means that instead of having a program installed on your computer, you can access it, or store it, on a remote server. For example, if you wanted to use a word processing program, you could either download and install a program like Microsoft Word onto your computer, or you could access Word through Microsoft 365, which is stored on a remote server.
What does cloud computing do?
Cloud computing is a way to use technology to make your business more efficient. It involves using remote servers to store and manage your data, instead of using your own computer systems. This can save you money on hardware and software costs, and it can also increase your productivity and efficiency. Additionally, cloud computing offers a host of other features that can help you grow your business. For example, you can use cloud-based applications to manage your finances, communicate with customers, and track inventory.
What are the main cloud service models?
There are three main cloud service models: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).
Software as a Service (SaaS)
Software as a Service is the most common type of cloud service. With SaaS, you access applications that are stored on a remote server. This means that you don't have to download and install the applications onto your computer. Instead, you can access them through a web browser or mobile app. Some of the most popular SaaS applications include Microsoft Office 365, Salesforce, and Google Apps.
Platform as a Service (PaaS)
Platform as a Service is a less common type of cloud service, but it can be very useful for small businesses. With PaaS, you can use remote servers to host and run your own applications. This means that you don't need to buy or set up the hardware and software required to run your applications. PaaS providers include Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
Infrastructure as a Service (IaaS)
Infrastructure as a Service is the most basic type of cloud service. With IaaS, you can rent access to remote servers, storage, and networking capacity. This means that you can outsource the management of your IT infrastructure to a cloud provider. IaaS providers include Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
What are the benefits of moving to the cloud for small-to-medium-sized businesses?
Moving to the cloud is easier than ever, and there are countless benefits to be gained. Here are five reasons why small businesses should make the move:
1. Increased Efficiency and Productivity
Cloud-based solutions can help small businesses increase their efficiency and productivity by allowing employees to work from anywhere at any time. This can be especially beneficial for businesses with remote workers or employees who need to work outside of the office. In addition, cloud-based solutions often come with features that can help businesses grow, such as collaboration tools, which can make it easier for employees to work together.
2. Reduced Hardware and Software Costs
Migrating to the cloud can help reduce costs for small businesses in a few key ways. First, they can save on hardware costs, as they can often use the infrastructure of the cloud provider instead of purchasing and maintaining their own servers. Second, they can save on software costs, as many cloud providers offer software as a service (SaaS) with monthly subscription pricing. This can be significantly cheaper than purchasing and maintaining software licenses for each employee. Finally, small businesses may be able to avoid certain networking and infrastructure costs, such as the cost of wiring and cabling.
A cloud computing provider will store your data for you without the need to purchase and maintain your own hardware and networking equipment. This can save you money and hassle, and it also means that you won’t have to worry about maintaining a secure network.
3. Access to a Wide Range of Features
Cloud-based solutions offer a wide range of features that can help small businesses grow. These features include online storage, CRM, accounting, and email marketing tools, just to name a few.
Small businesses can access these features quickly and easily, without the need for expensive hardware or software. This makes cloud-based solutions a cost-effective option for small businesses that are looking to grow their business.
Additionally, cloud-based solutions are often updated with new features, making them a powerful tool for small businesses. With so many features available, small businesses can find the right solution to fit their specific needs.
4. Easy Scalability
Cloud-based solutions are perfect for small businesses that are in growth mode or experience periodic spikes in traffic or activity. Thanks to the scalability of cloud solutions, businesses can easily adjust their plans and resources to accommodate these changes. This means that your business can keep up with the competition without having to worry about scaling your infrastructure.
5. Greater Reliability and Security
Cloud-based solutions are typically more reliable and secure than on-premise solutions. This is because they are hosted in secure data centers with robust security measures in place. Cloud-based solutions are also updated more frequently, which means businesses can benefit from the latest features and enhancements without having to worry about installing and maintaining software updates themselves.
How do I migrate my business to the cloud?
Small businesses need to do a lot of planning before they migrate to the cloud. They need to make sure that they have a good understanding of what the cloud is, what it can do for them, and how it works. They also need to make sure that their systems are compatible with the cloud and that they have the resources to manage their migration.
Once they've done their planning, small businesses can start migrating to the cloud. They should start by moving their most critical systems and then gradually move other systems over. They should also create a plan for data backup and recovery in case something goes wrong during the migration.
Another key step is to ensure that the cloud provider has the right security measures in place to protect your data. It's also important to have a plan in place for how you will access your data and applications in the cloud, especially if you are not located near the provider's data center.
Finally, it's important to test the migration process before you go live. This will help ensure that everything goes smoothly when you make the switch. By following these steps, businesses can migrate to the cloud with minimal disruption to their day-to-day operations.
Small businesses should also be aware of the cost of migrating to the cloud. The initial investment may be higher than keeping things on-premises, but there are often long-term savings to be had. It's important to weigh the costs and benefits of migrating to the cloud before making a decision.
If you're a small business owner, moving to the cloud is a great way to take your operations to the next level. With cloud computing, you can enjoy all of the benefits of advanced technology without having to invest in expensive hardware or software. And best of all, you can access these services from any computer or mobile device with an internet connection. So if you're looking for a way to improve your business performance, consider moving to the cloud! | <urn:uuid:48069f63-d52a-4aa9-a0dd-7e498a32b39b> | CC-MAIN-2022-40 | https://www.digitalboardwalk.com/2022/09/5-reasons-why-small-businesses-should-move-to-the-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00571.warc.gz | en | 0.962711 | 1,581 | 2.765625 | 3 |
Apple CEO Tim Cook was guest speaker at the 40th edition of the International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels and talked about why the United States ought to have a data protection legislation that is just as rigid as the EU’s General Data Protection Regulation (GDPR) which was enacted in May 2018.
In his keynote address, Cook sought to distance Apple from Silicon Valley firms that trade in digital data, a practice he said has produced a "data industrial complex." Taken to the extreme, this creates an entire digital profile that enables companies to learn more about you than you know yourself. That profile feeds algorithms that serve steadily more extreme content, which can lead to harm. In his words, we must not sugarcoat the consequences: this is surveillance.
Cook argued that other countries should follow the EU in enacting GDPR-style laws. Under the GDPR, non-compliant firms that fail to protect customer data can be fined up to €20 million or 4% of annual global revenue, whichever is higher.
Cook tweeted four concerns that he thinks the US legislation should address:
1. Companies should de-identify client data or they shouldn’t gather data in the first place.
2. Users have to know when their information is being obtained from them and why it is collected. This allows users to understand which is a valid collection and which isn’t. In any other case, it is a fraud.
3. Companies need to recognize that the data belongs to users. Obtaining a copy of one's personal data should be simple, as should correcting and deleting it. Everyone has the right to data security.
4. Using Artificial Intelligence (AI) for collecting personal information is not a matter of efficiency, but laziness. Privacy and human values should be respected by companies. Disregard this and there will be big problems.
Although technology is helpful in getting to know about people, organizations are responsible for attaining excellent privacy standards and artificial intelligence. In the research of AI, the humanity, ingenuity and creativity of people must not be taken for granted.
Mr. Cook gave his keynote speech shortly after news broke of high-profile GDPR investigations into major companies, beginning with Facebook, where a breach potentially compromised the personal data of approximately 50 million users, and Google, which suffered an alleged breach affecting Google+ users. Perhaps it's time that U.S. companies take seriously the need to adequately secure the personal data of consumers.
Mobile phishing is on the rise. And while most of us know what phishing is, let’s quickly define it to make sure we’re on the same page. Phishing is the fraudulent attempt to obtain sensitive information or data, such as usernames, passwords, and credit card details, by disguising oneself as a trustworthy entity in an electronic communication. Mobile phishing is a particularly insidious attack, because while people are primed to receive phishing attacks on their desktop and laptop computers, they generally feel safer when using a mobile device — even though attacks on mobile apps and devices are increasing rapidly. A 2020 study from cloud security company Lookout found that mobile phishing attacks grew 37% from Q4, 2019 – Q1, 2020, with much of this growth attributed to new attacks related to COVID-19.
By Alan Bavosa, Vice President of Security Products at Appdome
Phishing attacks rely on social engineering, where hackers target unsuspecting mobile users by tricking them to click on a link that takes them to a malicious site.
Hackers have two main goals when they conduct mobile phishing:
- Data Harvesting: Using man-in-the-middle (MiTM) attacks (and other forms of network or session hijacking techniques), a malicious attacker can gain access to valuable data, such as usernames, passwords, secrets, API keys, and other valuable information. They either monetize this information or use it later in other attacks, such as to infiltrate the ‘backend’ systems or server.
- Malware delivery: Sometimes, attackers trick unsuspecting users into downloading malware. For instance, the attacker may pretend to be your bank, your IT department, or some other trusted entity and ask you to download or update a mobile application that looks like the real app but is a fake copy of the real app with malware embedded inside. Once you download the app, the malware activates, usually at some later time so that you don’t suspect it.
Mobile apps need to be secured against phishing if they have messaging capabilities, but there’s not one quick fix to the problem. Like most cybersecurity defenses, mobile phishing protections need to be multi-layered, using measures such as URL whitelisting, MiTM attack prevention, certificate pinning, certificate validation, anti-tampering, and anti-reversing.
So, let’s take a deeper look at the mobile phishing problem and how mobile developers can protect their apps against this kind of attack.
The Mobile Phishing Problem
As noted above, when people think of phishing attacks, they typically think of a spoofed email that pretends to be from their IT department asking them to download a patch that’s a virus or a bogus request from the CEO to send a wire transfer that ends up going to criminals. However, these days phishing attacks are conducted through mobile channels, including social media apps, gaming apps, banking apps, short messaging services (SMS), and multimedia messaging services (MMS). Many games, especially those that involve large numbers of other human players, include chat capabilities that can act as a vector for phishing.
In short, if the app enables people to communicate with one another, a hacker can abuse it. And why not? It’s effective.
Hackers use many different attack methods to convince their victims to take actions that will move an attack along, often disguising their presence. For instance, “URL padding” is an attack technique where the hacker uses a real and recognizably safe domain up front, but then adds hyphens or other characters to conceal the true, malicious domain. This means a domain like http://mobile.twitter.com—————-a23x.I-will-steal-your-money.com/login.html would show up as “mobile.twitter.com” in the small address bar of the mobile phone … possibly along with a few hyphens that the user won’t notice. Hackers also use tiny URLs from one of many services that create a smaller link that redirects to a much longer one, hiding the actual domain from the user.
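To see why the padding trick works on users but not on software, here is a small Python sketch that parses the padded URL described above. It uses a deliberate simplification, taking the last two hostname labels as the registered domain; production code should consult the Public Suffix List instead.

```python
from urllib.parse import urlparse

# The padded URL from above: a familiar name up front, a run of
# hyphens in the middle, and the attacker's real domain before the TLD.
padded = ("http://mobile.twitter.com----------------a23x."
          "i-will-steal-your-money.com/login.html")

host = urlparse(padded).hostname
# Simplification: treat the last two labels as the registered domain.
# Real code should use the Public Suffix List (e.g. .co.uk has three labels).
root = ".".join(host.split(".")[-2:])
print(root)  # i-will-steal-your-money.com
```

No matter how much padding the attacker adds, the registered domain the browser actually resolves is unambiguous once parsed.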
Mobile phishing is often blended with a man-in-the-middle (MitM) attack to increase effectiveness. For example, a user might receive a message in their banking app purportedly from their financial institution asking them to log in via their browser and verify account details. The malicious link leads them to a site that looks exactly like their banks’ website.
Other times, the attacker delivers the fake login screen inside the mobile app, placing the fake screen on top of the real screen in what’s known as an “overlay attack.”
Certainly, developers of financial mobile apps understand the stakes — people’s finances are at risk. But even apps focused on entertainment such as games need anti-phishing protections. After all, mobile gaming revenue surpassed $75 billion in 2020, a 19.5% increase over 2019, according to Sensor Tower. And 43% of that gaming revenue comes from in-app purchases, according to a 2020 study from Wappier. That’s at least $32 billion passing through mobile gaming apps in 2020 alone. Hackers are working hard to siphon some of that money off for themselves.
Combatting Mobile Phishing Requires a Multi-Layered Defense
As explained above, mobile phishing attacks are far from simple, so they require several different techniques to defend against them.
Blacklisting, which blocks known dangerous URLs, is a common tactic, but it’s not very effective. Most of the sites to which phishing attacks link only remain active for a few days, at most, and new sites pop up daily. Attempting to identify them all is a never-ending game of whack-a-mole. When possible, URL whitelisting is a far more effective measure because it allows access to a specific list of sites and blocks all others.
A financial app, for example, might whitelist just a few select URLs to its website, and a game might do the same. In this way, no matter what link a phishing attack sends, the user will be unable to connect to it through the app.
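As a concrete sketch of whitelisting, the check below allows only an exact set of hosts and refuses everything else. The host names are hypothetical placeholders, and Python stands in here for whatever language the mobile app is actually written in.

```python
from urllib.parse import urlparse

# Hypothetical allow-list for a banking app; not any real bank's domains.
ALLOWED_HOSTS = {"www.examplebank.com", "support.examplebank.com"}

def is_allowed(url: str) -> bool:
    # Compare the exact hostname. Substring checks are defeated by
    # look-alike domains such as "www.examplebank.com.evil.io".
    return urlparse(url).hostname in ALLOWED_HOSTS

print(is_allowed("https://www.examplebank.com/accounts"))       # True
print(is_allowed("https://www.examplebank.com.evil.io/login"))  # False
```

The exact-match set is the important design choice: a `startswith` or substring test would pass the second, malicious URL.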
Another method to combat phishing is securing the transport or communication channel to prevent MiTM attacks. For starters, apps should always enforce Transport Layer Security (TLS) versions and ensure that trusted, approved cipher suites are used. Cipher suites are sets of algorithms used to secure a TLS connection, and developers have hundreds of options to choose from, many of which may be outdated or insecure. Only approved, current and secure cipher suites should be allowed.
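For illustration, this is what enforcing a TLS floor and a restricted cipher set looks like with Python's `ssl` module; mobile platforms expose equivalent knobs in their own networking APIs, and the cipher string shown is one reasonable choice among several, not the only valid one.

```python
import ssl

# Client-side context that refuses anything older than TLS 1.2 and
# limits the handshake to modern ECDHE + AES-GCM suites.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")

# Handshakes offering SSLv3 or TLS 1.0/1.1 will now fail outright.
print(ctx.minimum_version)
```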
In addition, it’s important to validate the authenticity of the SSL/TLS certificates used in mobile connections. Certificates work on a chain of trust. “Higher” certificates validate the authenticity of “lower” certificates, all of which depend on a certificate issued by a trusted provider. When a server presents a certificate to an end-user, that’s called a “leaf” certificate, and while these certificates are not intended to be used as certificate authorities, they can be used to sign other certificates. This allows an attacker to insert a malicious, fake certificate into the mobile device without the user knowing. The attacker can then redirect the connection or even alter the content/payload.
To stop this kind of attack, developers should consider defenses such as certification validation, certificate pinning, and certificate role enforcement using “Basic-Constraints.”
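The core of certificate pinning can be sketched in a few lines: compare a hard-coded fingerprint against the certificate the server actually presented, and drop the connection on any mismatch. The DER bytes below are placeholders; a real client would obtain them via `ssl.SSLSocket.getpeercert(binary_form=True)`.

```python
import hashlib

def matches_pin(der_cert: bytes, pinned_fingerprint: str) -> bool:
    # Refuse any certificate whose SHA-256 fingerprint differs from the
    # pin, even if the certificate chain otherwise validates.
    return hashlib.sha256(der_cert).hexdigest() == pinned_fingerprint

genuine = b"placeholder-der-bytes-of-the-real-certificate"
PIN = hashlib.sha256(genuine).hexdigest()   # recorded at build time

print(matches_pin(genuine, PIN))                     # True
print(matches_pin(b"certificate-from-a-mitm", PIN))  # False: drop the connection
```

Pinning defeats the fake-certificate insertion described above because the attacker's certificate, however plausible its chain of trust, can never match the recorded fingerprint.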
Anti-Tampering and Code Obfuscation
Finally, it’s important to protect apps against tampering, reversing, and debugging. The more a hacker knows about your app, the better prepared they will be to launch effective attacks. It’s harder for a cybercriminal to create an app overlay without first taking the app apart to understand how it works.
Code obfuscation prevents reverse engineering techniques that rely on disassembling or decompiling an app’s code via static or dynamic analysis tools, but it needs to be implemented carefully because the app can break if the wrong process is obfuscated. Additionally, obfuscation must be updated line-by-line with every new release of the app. Finally, third-party components such as software development kits (SDKs) can’t be obfuscated unless developers have access to the source code, which is rare.
Anti-tampering and anti-reversing protections stop hackers from modifying apps or even creating fake versions of them. And in cases where hackers do manage to modify an app’s code, developers should ensure that the bogus app won’t function. Checksum verification does exactly this by hashing the binary to generate a unique value. Any change to the binary produces a checksum different from the one the genuine app generates, which should cause the app to close.
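The checksum idea above reduces to hashing the shipped binary and comparing against a value recorded at build time. A minimal sketch, with placeholder bytes standing in for a real binary:

```python
import hashlib
import io

def sha256_of(stream, chunk_size=8192):
    # Hash the binary in chunks so large files never sit fully in memory.
    digest = hashlib.sha256()
    for block in iter(lambda: stream.read(chunk_size), b""):
        digest.update(block)
    return digest.hexdigest()

shipped = b"\x7fELF...original app code..."       # placeholder binary
expected = sha256_of(io.BytesIO(shipped))          # recorded at build time

tampered = shipped + b"injected payload"
print(sha256_of(io.BytesIO(shipped)) == expected)   # True  -> run normally
print(sha256_of(io.BytesIO(tampered)) == expected)  # False -> refuse to start
```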
Mobile App Security Implementation Challenges
Implementing all of these protections presents a daunting challenge to mobile development and security teams. Not only are some of these measures such as obfuscation extremely difficult to manually code, but the mobile security skills required are also in short supply. And even if a development team has the skills in house, manually implementing security is expensive and time-consuming, ballooning budgets and delaying release dates. Thankfully, there are ways to implement these features without having to do so manually.
Software development kits (SDKs) can be incorporated into apps to provide security, though they do require some manual coding and bring significant limitations when it comes to obfuscation. Another option is a no-code platform that can embed security capabilities into an app binary without requiring changes to source code. By obfuscating at the binary level, even SDKs and third-party libraries may be obfuscated.
Mobile phishing is a growing threat, but app developers are not helpless to defend against it. By taking a multi-faceted approach, app developers can protect their customers and end-users from being compromised.
About the author
Alan Bavosa is VP of Security Products at Appdome. A long-time security product exec, Alan has previously served as chief of product for Palerra (acquired by Oracle) and Arcsight (acquired by HP).
Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same. | <urn:uuid:42bcdb01-136b-4815-a0b1-f89842a870a9> | CC-MAIN-2022-40 | https://cisomag.com/mobile-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00771.warc.gz | en | 0.930032 | 2,176 | 2.890625 | 3 |
The pandemic period has seen a significant increase in attacks on cyber-physical systems, largely due to the growth in connectivity for many devices in the modern world. A common approach to try and stymie these attacks is a simple one: disconnect devices from the internet.
The rationale behind the approach, which is known as “air gapping,” is simple. If a device isn’t connected to the web, it can’t be attacked by hackers. The approach has gained backing from the likes of the CIA, who recommend it as part of an organization’s ransomware defenses.
A time-honored tactic
The approach is not really a new one and has been a fundamental part of business continuity programs for many years, and certainly long before the latest wave of ransomware attacks made securing cyber-physical systems so important. For instance, organizations would commonly try to protect primary sources of software or data from destruction (whether malicious or otherwise) by creating a backup copy stored offline.
It’s a practice that has grown in recent years, not least due to the high-profile hacks on companies like Sony and Saudi Aramco, where highly sensitive data was deleted. However, despite the tremendous boom in ransomware attacks, air gapping remains a somewhat niche activity.
For instance, the recent digital threats report from Microsoft highlighted the security holes created by poorly patched systems, and keeping software up to date is much harder when the system is air-gapped. Indeed, it can be tempting to assume that taking systems offline is all the security one needs, which can leave IT teams too relaxed. Of course, the system will eventually need to be updated, and the longer the lag between updates, the more vulnerable the system will be when it “comes up for air.”
Air-gapping is also vulnerable to human errors. For instance, successful air-gapping requires precise replicas of live systems to be maintained at all times, which can be a labor-intensive process. If staff get lazy, it can be tempting to connect the air-gapped system to the net to expedite the process, thus breaking the air gap and giving attackers an easy way in.
Research from the Karlsruhe Institute of Technology (KIT) recently highlighted how even when systems are air-gapped, they should not be regarded as immune from attack. They demonstrated that data can still be transmitted to the LEDs in regular office devices using a laser. This then allows attackers to communicate with the devices over a distance of a few meters.
The idea that hackers might use lasers to attack a target might sound like something out of a James Bond movie, but the researchers show that it is a very real possibility. They demonstrated the so-called LaserShark attack at the 37th Annual Computer Security Applications Conference (ACSAC).
It was the culmination of an extensive project that focused on hidden communication via a range of optical channels. The project highlights various previous attempts to break into air-gapped systems using acoustic or electromagnetic channels. While these were found to be effective, they required the attacker to be extremely close to the target. They also cite previous work into the use of optical channels, but these were only found to work with small distances and with low data rates. These methods also typically only allow for the receipt of data rather than inserting data.
The researchers, who worked with colleagues from TU Braunschweig and TU Berlin, instead use a directed laser beam to introduce data into an air-gapped system as well as receive data. What's more, the approach doesn't require any additional hardware attached to the device being attacked.
"This hidden optical communication uses light-emitting diodes already built into office devices, for instance, to display status messages on printers or telephones," the researchers explain.
The approach works by aiming the laser at LEDs already installed on the device and then measuring the response. Through this method, the researchers were able to establish a secret communication channel that worked at distances of up to 25 meters. What's more, the channel worked with sending and receiving data at around 18 kilobits per second inwards and 100 kilobits per second outwards. All of this was achieved using commonly available office devices that are installed in offices everywhere.
"The LaserShark project demonstrates how important it is to additionally protect critical IT systems optically next to conventional information and communication technology security measures," the researchers explain. | <urn:uuid:e62ac88d-13ea-4448-b0bf-d79c41674b8e> | CC-MAIN-2022-40 | https://cybernews.com/security/even-air-gapped-it-systems-are-vulnerable-to-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00771.warc.gz | en | 0.955165 | 916 | 2.921875 | 3 |
DoS, DDoS, and Zero-Day DDoS: What is the Difference?
Today’s modern businesses are heavily reliant on their ability to stay connected, both to internal mission-critical systems as well as external consumer-facing platforms. However, the necessity of operating in a multi-operational environment has left many businesses vulnerable to a popular form of a cyberattack – Denial-of-Service (DoS).
Denial-of-Service attacks are designed to cripple an organization’s ability to keep its services running efficiently. And these web-based assaults have only increased in severity over the years. Now, there are various methods attackers use, including DDoS and Zero-Day DDoS attacks, both designed to keep victims guessing while wreaking havoc on websites, databases, and connected businesses systems.
To combat these dangers effectively, however, it’s important to understand the difference between DoS, DDoS, and Zero-Day DDoS, and how your business can stay protected.
Denial-of-Service (DoS)

A Denial-of-Service attack’s sole purpose is to disrupt or completely disable the service of its intended target. Company websites, digital sales platforms, and online customer databases are all examples of typical DoS targets. Unlike viruses or malware, DoS attacks are not dependent on specific software exploits or compromised access credentials to operate. Instead, they focus on tying up critical network resources, making it impossible for others to connect to the same service.
Examples of DoS Attacks
• Buffer Overflow Attacks – One of the most common forms of DoS attack, these send more data to a machine or application buffer than it was built to handle, consuming resources until the target can no longer function.
• SYN Floods – Also known as a “half-open attack”, SYN floods work by starting connections with a targeted server but failing to send all the packet data required. As this process is repeated, it leaves numerous open port connections on a server, slowing performance and inevitably crashing the server.
Distributed Denial-of-Service (DDoS)
Distributed Denial-of-Service attacks function in a similar way to their predecessors with one primary distinction – they use multiple, or “distributed,” sources to attack from. While standard DoS attacks may be easier to diagnose as individual IP addresses are recognized quicker, DDoS attacks are much more difficult to track and even more so to mitigate. By using multiple slave computers to bombard a single target, DDoS attacks are typically much more aggressive than DoS attacks and are capable of taking down larger systems.
Examples of DDoS Attacks
• Ping of Death – This is a form of Denial-of-Service attack that sends connection ping packets that are much larger than the server can process. Depending on the target system, multiple connection requests of this type will cause the server to reboot or crash altogether.
• Slowloris – As the name implies, Slowloris is designed to be a “low and slow” attack on a server. These types of attacks use less bandwidth than other DoS and DDoS attacks and are therefore harder to spot. Multiple connections are created with the host site over time and kept open indefinitely, not allowing new requests to be open and cutting off service to legitimate users.
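One common defense against Slowloris-style attacks is an idle timeout: track when each connection last sent a byte and reap any that stall past a header deadline. The sketch below models that bookkeeping in Python with illustrative values; a real server would apply it to live sockets.

```python
# Connections that trickle bytes and then go silent are closed once they
# exceed the header timeout. The 10-second limit is illustrative.
HEADER_TIMEOUT = 10.0

last_activity = {}  # connection id -> timestamp of the last byte received

def record_activity(conn_id, now):
    last_activity[conn_id] = now

def reap_stale(now):
    stale = [c for c, t in last_activity.items() if now - t > HEADER_TIMEOUT]
    for c in stale:
        del last_activity[c]  # a real server would also close the socket
    return stale

record_activity("conn-1", now=0.0)   # stalled Slowloris connection
record_activity("conn-2", now=9.0)   # legitimate, recently active
print(reap_stale(now=12.0))          # ['conn-1']
```

Because Slowloris relies on holding connections open indefinitely, even a generous timeout caps how many sockets a single attacker can tie up.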
Zero-Day DDoS

The term “Zero-Day” is a general reference to the use of new software and systems that have just launched or haven’t received their first batch of updates. Hackers will typically target these new systems in an effort to find vulnerabilities that developers haven’t noticed or have had a chance to repair. In network security configurations, Zero-Day attacks can be highly dangerous, as undetected exploits can create backdoors for hackers, allowing them to take control of their target’s systems unimpeded.
Examples of Zero-Day DDoS Attacks
• Teardrop – These attacks specifically target older operating systems that haven’t been updated and aren’t capable of reading fragmented data packets. When the system is unable to offset the fragmentation of these packets, it causes a denial-of-service condition.
• Botnets – Hackers use Zero-Day exploits to take control of unsuspecting victims computers. Once they gain access, these computers are then used as slaves to assist in DDoS attacks, most times without the owners even knowing it’s occurred.
What are the Business Risks of DoS Attacks?
Denial-of-Service attacks can present several risks to both the reputation and sustainability of businesses of any size. If a company’s website goes out of commission for any amount of time, it could cost hundreds of thousands of dollars each day it takes to recover. Increased downtime of services can also cause irreparable damage to a brand’s reputation.
In addition to the negative impact they have on revenue streams, DoS attacks can also lead to system data loss or file corruption. In business environments where regulatory compliance standards need to be maintained at all times, DoS attacks can lead to a variety of legal and financial issues that can impact a business’s long-term viability.
How Can Your Company Stay Protected?
Keeping your business protected from a DDoS attack begins by taking a proactive approach to server management and cybersecurity planning.
• Increase Bandwidth – Ensuring your server can support higher levels of bandwidth will give you the flexibility you need to address DDoS attacks while minimizing server downtime.
• Establish a Backup Server – Creating a backup or “failover” in the event your primary servers become compromised will enable you to keep mission-critical systems operational in the event of a Denial-of-Service event.
• Invest in Managed Security Services – To successfully combat a DDoS attack, early detection and response is essential. Managed security services can help your business mitigate the risks associated with Denial-of-Service attacks by providing 24/7 monitoring of your systems while immediately responding to suspicious network activity.
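One further proactive measure worth noting is per-client rate limiting at the network edge. The token-bucket sketch below is a simplified model of what a front-end proxy applies per source IP; the rate and burst numbers are illustrative.

```python
class TokenBucket:
    """Allow short bursts up to `capacity`, then throttle to `rate` per second."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, then spend one token.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
# A burst of 12 simultaneous requests: the first 10 pass, the rest drop.
results = [bucket.allow(now=0.0) for _ in range(12)]
print(results.count(True))  # 10
```

A flood of requests from one source exhausts its bucket almost immediately, while legitimate clients at normal request rates never notice the limiter.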
Today, DoS and DDoS attacks are some of the most dangerous cyberweapons that hackers deploy against companies. By understanding the risks they pose and taking proactive measures to keep your business protected, you’ll be able to lower the risk of being affected by Denial-of-Service attempts.
ArmorPoint is a security information and event management solution that provides a cost-effective and reliable way to continually protect your business from emerging threats. Through its customizable service pricing model, ArmorPoint’s cost-effective packages and dynamic levels of expert management support the security strategies of all companies, regardless of available budget, talent, or time. And since ArmorPoint offers 24/7 security support with a team of dedicated specialists, they can provide you with the manpower you need to expertly manage all of your cybersecurity initiatives. See how ArmorPoint can make a difference in your security posture with a risk-free 30 day free trial. | <urn:uuid:9cedfbb7-514a-4d93-adc4-defa7d1e1919> | CC-MAIN-2022-40 | https://armorpoint.com/2019/05/10/dos-ddos-and-zero-day-ddos-what-is-the-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00771.warc.gz | en | 0.937761 | 1,429 | 3.375 | 3 |
Knowing how to read a URL has become an essential skill for everybody, not just network admins and web developers. URLs are how many scammers and phishers attack their victims. These fake URLs can get to inboxes, Slack channels, text messages, or message queues in social media platforms.
In this post, we will walk you through how to read a URL. Reading a URL, which breaks it down into its constituent parts, makes it easy to decide if a URL is real or fake.
Reading URLs takes a little practice. There are just a few things to look for when you examine a URL.
It's easiest to read from right to left.
👉 The primary thing you need to locate is the root domain. The root domain is the "apple" in "apple.com" and the "slack" in "slack.com". Finding the root domain of a URL will help tell you if it's a real or fake domain. Here are the steps to follow:
Here's an example: https://support.apple.com/sakjdhi8?df8vdf/vv98df987
Don't let the long sequence of characters on the right fool you.
Once you find the root domain, you can find the subdomain. If there's a period to the left of the root domain and then more text to the left of the period, this extra text is a subdomain. The owner of the root domain can use whatever subdomains they want. For example, let's deconstruct the following domain: slack.reset-my-account.com
For the above domain, the root domain is "reset-my-account" and the subdomain is "slack". The root domain is not "slack".
Most companies use subdomains for various products, features, or functions. In the case of Slack, they use subdomains for customer workspace like workspace-name.slack.com
🤕 One trick attackers often use is to buy and use root domains that look like real domains.
👉 Here are some common tricks:
After you find the root domain, examine it to make sure it is the word you think it is.
🦹♀️ Attackers will try to make domain names seem overwhelmingly long and complex to make it so you don't look for the root domain. They can add 100s of characters to the right side of a domain.
It doesn't matter how long the sequence of characters on the right is, just follow the rules for finding the root domain. To recap:
👉 Look for the "/" (single slash) farthest from the right. If there is no "/" in the domain, then you are going to start at the far right character of the domain.
👉 Once you find the right "/" , the next section of the URL will be the type of domain - .com, .co, .me, .io, .ru, and so on. The left side of the domain type will be a "."
👉 To the left of the domain type is the root domain. This is sandwiched by a "'." on both sides.
👉 To the left of the root domain are subdomains or nothing. We'll learn more about subdomains in a future lesson.
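The recap above can be sketched as a small Python function. It assumes simple single-label domain types like .com or .io; handling cases such as .co.uk would require a public suffix list.

```python
def root_domain(url: str) -> str:
    # Drop the scheme, then cut at the first single slash (start of the path).
    host = url.split("://", 1)[-1].split("/", 1)[0]
    labels = host.split(".")
    # The root domain sits immediately left of the domain type.
    return labels[-2]

print(root_domain("https://support.apple.com/sakjdhi8?df8vdf/vv98df987"))  # apple
print(root_domain("https://slack.reset-my-account.com/login"))             # reset-my-account
```

However long the tail of characters on the right, the procedure always lands on the same root domain.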
Remember - always find the root domain before clicking on a URL.
Given how important it is to be able to read a domain to protect yourself from phishing attacks, we created a URL game. This game, taken 100% in Slack, asks users to decide if a URL is real or fake. Based on user responses, the game logic presents various trainings on how to read a domain.
If you want to see a demo or get a trial of Haekka security and privacy games in Slack, schedule it here. | <urn:uuid:7c42baa6-e671-4d41-85ed-cad67416db9f> | CC-MAIN-2022-40 | https://www.haekka.com/blog/reading-a-url-to-tell-if-its-real-or-fake | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00771.warc.gz | en | 0.896078 | 800 | 3 | 3 |
Like the thunderbolt-sporting superhero, flash memory is quick. With no moving parts, flash resembles primary memory more so than secondary disk storage when it comes to saving and reading data.
As the cost of flash declines, data center managers are seeing flash as an attractive alternative to traditional hard-disk methods. By some estimates, flash memory prices have fallen at an average of 35% per year over the last few years.
The Next Generation of Data Centers
The consumerization of IT, coupled with online services like social media, creates a need for flexibility and performance improvements within current data centers without the associated costs.
Disk storage was revolutionary in its ability to cost-effectively store larger quantities of data compared to the then-outgoing storage medium: tape. Despite the introduction of flash, spinning disks persisted as the de facto server architecture. Why? Regardless of flash’s significant performance benefits, it was simply too expensive to consider as a full-fledged alternative to spinning disks. Additionally, for the price, flash was relatively smaller in capacity and incapable of retaining comparable levels of data with respect to spinning disks.
However, recent developments in flash have chipped away at its shortcoming: flash is now quite reasonable in price. As the cost of flash has fallen, its primary benefits – speed in throughput and latency – have dramatically increased concurrently. Furthermore, flash outperforms hard disks and consumes a small fraction of the electricity, sometimes at the ratio of one to 16. And despite relative durability shortfalls, recent improvements have made flash an increasingly feasible – and desirable – option, even in high-volume environments such as the data center.
The All-New Data Center, Flash Included
The modern data center resembles an alpha city: skyscrapers of networking equipment line the narrow passageways between each block of appliances; ethernet cords are strung throughout the center much like the electric cabling powering city buses. Towers of storage capacity fill the room; surely they couldn’t all be running on flash storage?
Today, those data centers could very well be all flash. Tomorrow, it’s certain that they will. For major industries whose profit margins depend on the availability and speed they can offer their customers, flash storage’s undeniable performance is becoming less of a temptation and more of a business necessity. File hosting services, like Amazon’s Simple Storage Service (S3) or Dropbox, handle requests in the realm of 650,000 per second. Telcos and service providers – whose business models are extremely sensitive to latency and downtime – are particularly interested in an all-flash data center model.
Consider the drastic differences between hard-disk drives (HDDs) and solid-state drives (SSDs). A state-of-the-art HDD spinning at a maximum 15,000 revolutions per minute – beyond which the hardware has a high probability of failure – achieves anywhere between 180 and 200 input/output operations per second (IOPS). While the technology has proven sufficient in the past, today’s data storage requirements are now being challenged to go even faster. Performance achievable from flash storage supersedes even the best HDDs on the market. Solid-state drives offer between 3,000 and 3,500 IOPS per unit, netting a performance benefit of over 94 percent. The use-case for flash has never been more attractive both from the price standpoint as well as performance benefits.
Despite the appeal of an all-flash data center model, other industries continue to hold off on adoption until the cost of flash drops further. Today, the tradeoff is between price and capacity. But when the price of flash falls to that of spinning disks, there will suddenly be no reason for choosing the latter. Just as spinning disks replaced tape storage on the strength of a better value proposition, so too will the market shift towards adopting flash as the preferred storage medium.
Software-Defined Storage and the All-Flash Data Center
Software-defined storage is another storage trend gaining traction comparable to flash’s. While it might be too soon to call the two trends linked, it is indisputable that a software-defined approach to storage infrastructure gives organizations the flexibility required to seamlessly adopt an all-flash data center strategy.
Software-defined storage works by migrating features typically found in hardware to the software layer. The virtualization of hardware operations means IT managers can do away with built-in and inefficient redundancies found within the hardware layer. No matter the vendor, cost or technology, hardware inevitably fails. Flash storage, in particular, currently has a quicker time-to-failure rate than spinning disk options. In traditional storage setups, the failure of a disk typically prompts an error that impacts the end user’s experience; the usual remedy is to hide such errors behind RAID cards, which can be costly. With the right software-defined approach, these problems are concealed in software, allowing users to continue unabated. Furthermore, software-defined storage is hardware-agnostic and can operate on any hardware setup.
By adopting a software-defined approach to storage architecture, the organization could still utilize a single name space spanning all its storage nodes. Organizations could also run applications in the storage nodes, turning them into “compustorage” nodes. As a result, the storage hardware itself would not need to be large or expensive, but would still retain high levels of performance and speed. Therefore, rather than building a large, expensive and traditional installation, organizations can start with a small number of cheap servers, and if needed, scale linearly from there and be left with a cost-effective, high-performance data center.
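The article does not describe how Compuverde implements its single namespace, but one standard technique for spreading a flat namespace across many cheap nodes – and scaling linearly by simply adding more – is rendezvous hashing, sketched here as an illustration (node names and keys are invented):

```python
import hashlib

def node_for(key: str, nodes: list[str]) -> str:
    """Rendezvous (highest-random-weight) hashing: every node scores the
    key and the top score wins. Any client can compute this locally, so
    no central lookup table is needed."""
    return max(nodes, key=lambda n: hashlib.sha256(f"{n}:{key}".encode()).digest())

nodes = ["node-a", "node-b", "node-c"]
print(node_for("/backups/2021/db.tar", nodes))

# Adding a fourth node remaps only a fraction of keys, which is what
# makes linear scale-out practical:
keys = [f"/obj/{i}" for i in range(1000)]
before = {k: node_for(k, nodes) for k in keys}
after = {k: node_for(k, nodes + ["node-d"]) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of 1000 keys moved")   # roughly 250, i.e. ~1/4
```

Because only about 1/N of objects move when a node is added, a cluster built this way can start with a handful of cheap servers and grow incrementally, as the paragraph above describes.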
Four Immediate Benefits of an All Flash Data Center
Other benefits of an all-flash data center running software-defined storage technology include:
- Substantially improved performance, through the ability to use faster flash technology throughout the data center
- More applications running on the same hardware, thanks to those performance gains
- A smaller physical footprint, since SSDs are smaller than spinning disks and require less real estate to house them
- Lower operating costs, because SSDs generate less heat than spinning disks and require less energy for cooling
Our superhero Flash can run as fast as the speed of light and today’s data is no different. The technology to capture and store this information is always changing. It was not long ago when spinning disks were novel in design and capability. The introduction of HDDs phased out tape storage and presented new possibilities for innovation in computing as a whole. Now the time has come for flash to take the helm as the industry norm with its unmatched performance, low energy usage and rapidly declining cost. The data center of tomorrow is right around the corner: coupling an all-flash architecture with a software-defined strategy beckons the next wave of storage.
About the Author:
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Bernbo has designed and built numerous enterprise scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010 Stefan worked within this field for Storegate, the wide-reaching Internet based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Bernbo worked with system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators. | <urn:uuid:5d628993-74a3-4632-bd9f-911a8bd44d61> | CC-MAIN-2022-40 | https://www.dbta.com/Editorial/Think-About-It/All-Flash-Data-Centers-Quick-as-Lightning-98088.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00771.warc.gz | en | 0.932247 | 1,532 | 2.734375 | 3 |
5G enables a new kind of network. One designed to connect everyone and everything together including machines, objects, and devices. Could it be setting the stage for singularity?
5G is a new digital system for transferring bytes (data units) over the air. This new interface uses millimeter wave spectrum. It enables the use of more devices within the same geographic area. For example, 4G can support 4,000 devices per square kilometer (about 0.39 square miles). 5G will support around 1 million devices in the same area. And those 1 million devices will operate at ultra-fast speeds. We’re talking exponentially faster download and upload speeds with hardly any latency (the time it takes devices to communicate with wireless networks).
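Those density figures work out to a 250-fold jump per square kilometer. Real capacity depends on spectrum and deployment, so treat this as the rough comparison the figures imply:

```python
devices_4g = 4_000        # devices per square kilometer on 4G
devices_5g = 1_000_000    # devices per square kilometer on 5G

factor = devices_5g // devices_4g
print(f"5G supports {factor}x as many devices in the same area")
# -> 5G supports 250x as many devices in the same area
```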
The world is still in the throes of the pandemic. That is true. But the pandemic hasn’t slowed the adoption of 5G. Most companies continue to implement 5G networks. This technology has the potential to transform the lives of people around the world.
However, legislation and security will have to keep pace, to protect against numerous potential threats.
According to the Global Mobile Suppliers Association (GSA), by the end of March 2020, 70 mobile network operators had launched commercial 5G networks in 40 countries. Sixty-three of those launched mobile services (57 full launches, 6 limited-availability launches). Thirty-four operators had launched 5G fixed wireless access or home broadband services (27 full launches, 7 limited-availability launches).
Despite the impact of the coronavirus pandemic, the spread of 5G technology continues. Measures like social distancing have delayed the launch of 5G services in some countries, forcing operators to pause, while mobile companies in other countries keep launching 5G networks.
Joe Barrett, president of GSA, says, “We have all been surprised by how 5G has taken off. Deployments and commitments from across the globe have picked up pace. There are commercial launches in the world’s largest mobile market. The combination of these milestones will lead to an explosion in 5G users.”
5G has the potential to cover up to 65 percent of the world’s population, with 2.6 billion subscriptions by 2025, and to generate 45% of the world’s total mobile data traffic, according to an Ericsson Mobility Report.
5G Connection Technology & Advantages
5G is the 5th generation of wireless connectivity. It represents the most advanced leap forward in mobile technology. With speeds of up to 20 Gbps, which is 20 times faster than 4G, 5G promises a faster and more reliable technology. Users can transfer a much larger amount of data with a latency of one millisecond. Connected vehicles, remote medical procedures, smart homes, and smart cities become reality. We are entering the age of big data, and there are huge opportunities for digital service providers: huge amounts of data and metadata will be collected through their services. Phone companies are already releasing 5G mobile devices.
A study by Ericsson found that 5G adoption would come in three phases:
1. Premium smartphone downloads of content in seconds rather than minutes.
2. 5G home wireless broadband to challenge traditional cable TV (video and broadband delivered video).
3. 5G hot zones of ultra-high speed in demanding locations like airports, offices and shopping areas.
5G Risks & Threats
In a March 3, 2020 article in the digital magazine CIO, Jaikumar Vijayan reports on the security risks that will accompany the benefits of fifth-generation cellular networks. In the article, he quotes vice-president of Gartner Research Katell Thielemann, “5G is emerging as an accelerator to deployment. But it is also a source of concern from a security standpoint. Speed to market and cost considerations are taking precedence over security considerations.”
The resulting complex digital connectivity could prove to be a weakness as modern life becomes dependent on connected technologies. This hyper-connectivity amplifies existing dangers and creates new ones. With the extended adoption of 5G, the world will be more connected, data will be continuously exchanged between devices and applications, and the threat of cyberattack will increase. There will be a greater number of vulnerable entry points to a network and many opportunities to attack 5G infrastructure – including billions of IoT devices and new private networks that were not connected before.
Some of the weaknesses that have been discovered so far:
˃ High reliance on suppliers, some of which are state-backed, could expose some countries’ networks to cyberattacks from others.
˃ 5G networks will be introduced gradually, so old 3G/4G networks and the new 5G architecture will have to coexist for a while, which will increase security concerns. In its latest report on future 5G cybersecurity threats, the European Union Agency for Cybersecurity (ENISA) highlights the following protocols of concern: TCP/IP (DHCPv4), SS7/Diameter, SIP, TLS, ARP, BGP.
˃ The network has moved away from centralized, hardware-based switching to distributed, software-defined digital routing.
˃ 5G further complicates its cyber vulnerability by virtualizing higher-level network functions, in software, formerly performed by physical appliances.
˃ 5G creates more entry points for attackers.
˃ Even if it were possible to lock down the software vulnerabilities within the network, the network is also being managed by software, often early-generation artificial intelligence, that itself can be vulnerable. According to Steve Durbin, Managing Director of the Information Security Forum (ISF), “Nation states, terrorists, hacking groups, hacktivists, and even rogue competitors will turn their attention to manipulating machine learning systems that underpin products and services.”
˃ The increase in short-range communications will require many more cell towers, potentially opening more avenues of attack for cyber criminals.
Cybersecurity & Measures
Cyber accountability requires a combination of market-based incentives and appropriate regulatory oversight.
“With 5G networks there’s more computing functionality that you can deploy at the endpoint,” says Scott Crawford, analyst at 451 Research. That means organizations will need to pay more attention to tasks like identifying and validating endpoints. They will also need to ensure that their connected devices are in compliance with security policies before they interact with other devices or with sensitive data.
Techniques That Will Redefine Cybersecurity Approaches in the 5G Era
Reversing the under-investment in cyber risk reduction
The continuously changing environment requires organizations to make substantial investments in new technologies and processes. Companies will also have to invest in compliance as new regulations emerge.
Implementing machine learning and AI protection
AI-powered solutions will offer a major advantage: security products that continuously self-learn and update to fit a given environment.
Shifting from lagging indicators of cyber-preparedness (post-attack) to leading indicators
A 2018 White House report indicates a problem of under-reporting cybersecurity incidents. Failure to report such crimes inhibits the ability to respond immediately and effectively.
Continued cooperation between the public and private sectors is the key to effectively managing cybersecurity risks. Both the private sector and government agencies working together can better share information and raise cybersecurity standards. This kind of coordinated effort can develop trust and accelerate the closure of the 5G gap. Such a program could also limit the damage when cyber attackers successfully penetrate a network.
Cybersecurity starts with the 5G networks themselves
All the networks that deliver 5G must have proactive cyber protection programs.
Insert security into the development and operations cycle
It’s more important than ever to integrate security into the development and operations cycle; software, firmware, and hardware all have to be better protected.
The National Institute for Standards and Technology (NIST) Cybersecurity Framework has established five areas for best practices. The five areas are: identify, protect, detect, respond, and recover. These principles are the basis of industry best practices. The Consumer Technology Association (CTA) has helped produce an anti-botnet guide. It outlines best practices for device manufacturers.
“Cybersecurity will play a critical role, with organizations called to adopt a granular segmentation of their networks. The Zero Trust model will become a real standard,” explains Greg Day, VP and CSO EMEA of Palo Alto Networks. The Zero Trust approach is based on a simple principle: “never trust, always verify.” Any person or device requesting access is assumed to be a potential threat, so access to specific areas is restricted to those who are authorized.
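As a concrete illustration of “never trust, always verify” (a toy policy check, not Palo Alto Networks’ product logic): every request is evaluated on identity, device posture, and the specific resource, and being “inside the network” earns no trust by itself.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool          # identity verified this session
    device_compliant: bool    # patched, managed endpoint
    resource: str
    on_corporate_lan: bool    # deliberately ignored below

# Per-resource allow lists stand in for a real policy engine.
POLICY = {"billing-db": {"alice"}, "wiki": {"alice", "bob"}}

def authorize(req: Request) -> bool:
    # Zero Trust: verify identity, device, and entitlement on EVERY
    # request. Note that req.on_corporate_lan is never consulted.
    return (req.mfa_passed
            and req.device_compliant
            and req.user in POLICY.get(req.resource, set()))

inside = Request("bob", mfa_passed=False, device_compliant=True,
                 resource="billing-db", on_corporate_lan=True)
outside = Request("alice", mfa_passed=True, device_compliant=True,
                  resource="billing-db", on_corporate_lan=False)
print(authorize(inside), authorize(outside))   # -> False True
```

The unverified user on the corporate LAN is denied while the fully verified remote user is allowed, which is exactly the inversion of perimeter-based thinking.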
An effective 5G defense strategy is based on 3 fundamentals:
1. Reduce risk by implementing a “Zero Trust” strategy, countering the expansion of the perimeter that can be attacked.
2. Ensure the correlation of data flows in a roaming world and the visibility of the suppliers’ ecosystem.
3. Make sure cybersecurity strategy keeps pace with reducing latency and increasing data.
The Pace of Digital Innovation and Threats Requires a New Approach to the Business-Government Relationship
In a Brookings article, Tom Wheeler and David Simpson warn that “the toughest part of the real 5G race is to retool how we secure the most important network of the 21st Century. Never have the essential networks and services that define our lives, our economy, and our national security had so many participants, each reliant on the other – and none of which have the final responsibility for cybersecurity.” Here are some of the key take-away points:
˃ More effective regulatory cyber relationships with those regulated
˃ Recognition of marketplace shortcomings
˃ Consumer transparency
˃ Inspecting and certifying connected devices
˃ Stimulate closure of 5G supply chain gaps
˃ Re-engage with international bodies
Contracts aren’t enough: most small and medium 5G network providers are not bound by any government contracts.
5G and Privacy
People are always concerned about how tech companies treat their data. The GDPR in Europe and the CCPA in California are legislation designed to protect personal data. To comply with GDPR, any company that collects, stores, and processes personal data takes on a significant set of obligations. Many actors in the 5G ecosystem will interact with personal data, and only a privacy-by-design approach to 5G can ensure those obligations are met, according to the first white paper of the 5G PPP Security Working Group.
Any future 5G system should be able to answer Lawful Interception (LI) requests. LI should be performed in a secure way without compromising the privacy of network users. The information provided must be verifiably trustworthy and securely delivered. Given packetized and dominantly encrypted network traffic delivery, a must-have technical building block of LI is Deep Packet Inspection (DPI). Without DPI, no analytic insights can be derived from live or recorded user traffic, thus rendering LI powerless.
“There is a lack of clear-cut security regulations for mobile wireless communications based on 5G at this point. The current 3GPP (3rd Generation Partnership Project) standards mainly apply to earlier mobile telephony protocols. They don’t fully address the emerging challenges.” says David Balaban, computer security researcher.
5G, the future of connectivity, is now a reality. Initial impressions are mixed. 5G is coming whether businesses and the public at large are ready for it or not. Navigating the transitional challenges is going to be quite an undertaking. There are privacy and security concerns. 5G providers, government, and businesses will have to collaborate to come up with solutions.
We can expect road bumps, hacks, and misappropriation of private data along the way. But 5G opens up a world of possibilities for everyone. Everyone can benefit. From the big-city executive to the farmer in Iowa using agricultural AI to support a greater crop yield.
This new network holds the key to advancing the spread of a slew of exponential technologies. It will set in motion fundamental changes to industries and services. It’s just one more thing to make your world a little faster. Actually, it will make it a whole lot faster.
But here’s a question for you. Will 5G set the stage for what Jayshree Pandya, in a Forbes article, calls the troubling trajectory of technological singularity? | <urn:uuid:610784b6-38bf-4a3b-bcbc-eedffe6dbd14> | CC-MAIN-2022-40 | https://www.ironorbit.com/blog/2020/05/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00771.warc.gz | en | 0.927339 | 2,583 | 2.78125 | 3 |
In the current scenario, the internet has become a quintessential part of almost all businesses. Be it at home or at the office, the internet has eased our life in many ways. But at the same time, it has opened new attack methods for cybercriminals. They are using browser-based attacks to infect the systems and access your valuable data. So, if you are looking for a way to safeguard yourself from such attacks, then 'remote browser isolation' might just be the answer.
What is remote browser isolation?
In short, remote browser isolation is a method by which you can enjoy a seamless, malware-free version of the internet. It keeps web-based malware from ever reaching your computer, thus protecting the integrity of a network.
How does it work?
It is achieved by executing code from a web page in a secure, disposable virtual container on a server situated between the user’s device and the internet. Only the visual content of the web page, along with files, is sent to the user. Thus, if an attack occurs, the impact is confined to the container, preventing the adverse effects from spreading beyond it. At the end of each session, the disposable container is destroyed along with any nefarious content, and a fresh container is built for the next session.
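The lifecycle described above can be sketched in a few lines. This is purely conceptual: real products render pages in hardened remote containers and stream pixels, which this toy stand-in only simulates by stripping scripts.

```python
import re

class DisposableContainer:
    """Stands in for a sandboxed browser running on a remote server."""
    def __init__(self):
        self.alive = True

    def render(self, page_html: str) -> str:
        # The page executes remotely; only safe visual output is shipped
        # to the user. Stripping <script> here simulates "pixels only".
        return re.sub(r"<script.*?</script>", "", page_html, flags=re.S)

    def destroy(self):
        # End of session: the container, and any malware in it, is gone.
        self.alive = False

def browse(page_html: str) -> str:
    container = DisposableContainer()          # fresh container per session
    try:
        return container.render(page_html)
    finally:
        container.destroy()                    # always torn down afterwards

page = "<h1>Hello</h1><script>stealCookies()</script>"
print(browse(page))   # -> <h1>Hello</h1>
```

The key property is in `browse`: the container exists only for the duration of one session, so anything malicious the page did dies with it.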
How is remote browser isolation better?
Remote browser isolation gives users easy web access on any kind of computing device and operating system, which is otherwise a limitation of other isolation techniques. The ease with which it can be deployed across a variety of devices is another reason for its adoption. It is also cost-effective and can be scaled quickly and cheaply.
Lots of things are being called “smart” these days — everything from light bulbs to cars. Increasingly, the smarts come from some form of artificial intelligence or machine learning.
AI is no longer limited to big central data centers. By moving it to the edge, enterprises can reduce latency, improve performance, reduce bandwidth requirements, and enable devices to continue to operate even when there’s no network connectivity.
One of the main drivers for the use of AI at the edge is that the sheer amount of data produced in the field would cripple the internet if it all had to be processed by centralized cloud computing solutions and traditional data centers.
“The need to send all of that data to a centralized cloud for processing has pushed the limits of network bandwidth and latency,” says Ki Lee, vice president at Booz Allen Hamilton.
Enter the era of AI-enabled edge computing.
Few companies are experiencing this problem to the degree that Akamai does. Akamai runs the world’s largest content distribution network, with, at last count, about 325,000 servers in over 135 countries, delivering more than 100Tb of web traffic every second.
Edge computing is key to improving performance and security, says Ari Weil, Akamai’s global vice president for product and industry marketing.
Take bots, for example.
“Bots are a huge problem on the internet,” says Weil. They attack Akamai’s customers with automated credential stuffing and denial of service attacks. Plus, they clog up the pipes with useless traffic, costing Akamai money.
Cybercriminals are also using bots to try to penetrate the defenses of companies and research firms and healthcare organizations. Sometimes, their evil knows no bounds. For example, hackers have recently begun using bots as the COVID-19 equivalents of ticket scalping — snatching up vaccine appointment slots.
Akamai sees 485 million bot requests per hour, and 280 million bot login attempts per day. In the battle against them, Akamai began deploying AI at the edge in 2018 to determine whether a particular user is a real human being or a bot.
In 2019, Akamai also began using centralized deep learning to identify bot behaviors and develop better machine learning models. Those models are then distributed to the edge to actually do the work.
AI is also used to analyze threat intelligence at Akamai. “It’s a big data problem,” says Weil. “We take a huge amount of data, in a massive data lake, and try different models against the data to find malicious signatures. Once we identify the patterns, we can use this across the platform.”
Sometimes the messages are innocuous but come from a malicious source — command and control traffic, for example.
“We train the edge model to recognize traffic that’s coming out of this particular region, or this particular IP address, and apply the mitigation techniques right at the edge,” says Weil.
The end result is that Akamai saves money because it doesn’t have to carry the traffic from either the bots or the malware. Customers save money because they don’t have to pay for wasted bandwidth. And customers are more secure because they have fewer bots and malware samples to deal with.
In the fourth quarter of 2020, Akamai was able to stop 1.86 billion application-level attacks, says Weil, and thwart more than 70 billion credential abuse attacks.
Managing edge IoT
AI at the edge can also decrease the data and network load of internet of things strategies. IoT devices can generate a massive amount of information, but often that information is routine and repetitive.
“There’s a lot of ‘I’m OK, I’m OK’ messages being generated [by IoT devices],” says Weil. “So you sift through all that and look for the signal that says that the system might be failing. That needs to get back to the manufacturer.”
To do this, machine learning technology is deployed at the edge to learn what the critical signals are and to preprocess the data before it is sent on to the customer.
Take, for example, a connected car. It moves from one cell zone and tower to another, to different states, even to different elevations and climates. A reading that is appropriate for one location might not be appropriate for another, or the problem could be signaled by a rapid change in the data. Here, machine learning is becoming essential.
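A minimal edge-side filter along these lines might forward a reading only when it leaves an expected band or changes too fast, dropping the routine samples locally. The thresholds and values here are invented for illustration:

```python
def edge_filter(readings, low=0.0, high=100.0, max_jump=10.0):
    """Yield only the readings worth sending upstream: out-of-band values
    or sudden jumps. Routine "I'm OK" samples are dropped at the edge."""
    prev = None
    for value in readings:
        out_of_band = not (low <= value <= high)
        sudden_jump = prev is not None and abs(value - prev) > max_jump
        if out_of_band or sudden_jump:
            yield value
        prev = value

stream = [50.2, 50.5, 50.1, 49.8, 75.0, 120.0, 50.0]
print(list(edge_filter(stream)))   # -> [75.0, 120.0, 50.0]
```

Four of the seven readings never leave the device, which is exactly the bandwidth, storage, and energy saving the preceding paragraphs describe; a learned model would simply replace the hand-set band and jump thresholds.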
“Bringing the intelligence to the devices is one of the biggest growth areas of IoT right now,” says Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting.
The issue comes up in many industries, not just cars, though moving vehicles do have some of the biggest requirements for latency. “You don’t want to go back to the main data center to get a decision and bring the decision back,” she says. “There’s no time for that.”
But even slow-moving or stationary devices benefit from more processing at the edge.
“A common example is solar panels in the middle of nowhere,” she says. “They don’t have great cell service or WiFi. Being able to process data and make decisions locally is really important.”
Distributed intelligence also enables companies to reduce the volume of message traffic back from the devices, which reduces networking costs — and energy use.
“Data storage is expensive and not energy efficient,” she says. “If you can eliminate a lot of the data you would have otherwise transferred and stored, then it’s a great energy conservation piece.”
AI is also being increasingly used at the edge to provide devices with differentiating functionality.
“On my wrist, I have a smartwatch and a recovery device,” Fontana says. “The recovery device senses my metrics — my heart rate, breathing pattern. It makes calculations on how rested my body is and how hard I should push myself on my next workout.”
The advantages of decentralized AI
AI functionality at the edge can help create an intelligent distributed computing environment across network devices — a unique benefit for organizations that know how to leverage this.
The utility industry is especially keen on distributed intelligence, says Tim Driscoll, director of information management outcomes at energy and water resource management technology company Itron.
“Meters at the very edge of the utility distribution grid have an app platform similar to the common smartphone model,” he says. These meters use machine learning to respond to varying voltage and load conditions. “This allows the meters to provide proactive, real-time recommendations for grid control.”
But more intriguing is that the meters can work together, learning from their own communication network behavior, performance, and reliability — then use that to elect leaders among themselves that speak to the network on their behalf.
“This simplifies network management by removing the need for centralized analysis,” he says.
And as power systems evolve to include more distributed power generation in the distribution grid, edge computing becomes even more important. Traditionally, only local load was a variable for power networks — generation and power flow were all controlled centrally. Today, all three are variables.
“This is the main driver for autonomous, local, real-time response powered by edge processing and machine learning,” Driscoll says.
Beyond better latency and lower costs, bringing AI and machine learning to the edge can also help make AI faster, according to Booz Allen Hamilton’s Lee. That’s because decentralized, edge AI maximizes the frequency at which models are calibrated, “which not only reduces model development costs and schedules, but also increases model performance,” he says.
Risks and challenges
But AI at the edge also poses risks and challenges, Lee says. These include the current lack of standards.
“We see a wide variety of edge hardware devices, processor chipsets, sensors, data formats and protocols that are usually incompatible,” he says, adding that there needs to be more focus on developing common open architectures.
In addition, many players in this space are focusing on one-off solutions that aren’t scalable or interoperable — or are based on traditional software delivery models.
“We’re still seeing monolithic applications that are purpose-built for specific devices,” he says. “From a design perspective, we’ve also seen typical hub-and-spoke architectures,” which can fail when connectivity is limited.
Another challenge of distributed AI is cybersecurity. “With the number of deployed edge devices the attack surface significantly increases,” he says.
We’ve already seen attackers take advantage of insecure IoT devices, such as the Mirai botnet that infected hundreds of thousands of devices in 2016. As IoT devices proliferate — and get smarter — the risks they pose will also increase.
One approach is to apply machine learning to the problem, using it to detect threats. But edge hardware is typically smaller and more resource constrained, limiting how much data can be processed, Lee says.
Where AI-powered edge computing can make a big difference in cybersecurity is in micro-data centers, says Shamik Mishra, CTO for connectivity in the engineering and R&D business at Capgemini.
“Threat detection, vulnerability management, perimeter security, and application security can be addressed at the edge,” he says, and AI algorithms can be decentralized to detect threats through anomaly detection.
New technologies, such as secure access service edge (SASE), are also emerging, Mishra says. These combine wide area networking with security functionality.
“The more we distribute a functionality, the more the system becomes vulnerable as the surface area for attacks increases,” he says. “So, edge compute applications must keep security as a design priority.”
The U.S. job market is a paradox. More and more jobs are being lost to intelligent automation, but digital technologies are also creating new roles that are going unfilled because of a digital skills shortage. Since those jobs originate in technology, it’s easy to assume STEM talent must fill them. This rather myopic thinking is aggravating a shortage of STEM skills into what two in five Americans term a crisis.
It’s true that technology skills of the past were primarily programming-focused and required STEM education. Today, however, digital skills are more pervasive in the business world. We can prepare talent from a variety of disciplines – with capabilities sharpened in corporate “finishing schools” – for these jobs of the future.
Enterprises, then, must play a lead role in embracing workforce transformation to tide over the talent crisis.
Take advantage of the blurring lines between white- and blue-collar jobs
With tech roles growing exponentially within companies, it’s unrealistic to think graduates with four-year degrees can fill all of the open positions. These same jobs can be performed by non-degree holding workers with specialized skills nurtured through alternative education paths – coding camps, online certification classes, training-on-employment and other types of vocational skilling. Community colleges are another great source of raw talent. Our new-collar workers will be individuals with backgrounds in diverse disciplines, with perspectives useful for tackling business challenges from every conceivable angle.
Shift from the disciplinary-approach of conventional education to nurturing non-disciplinary skills
What’s required of talent these days is a mix of technical skills (coding, data science, etc.), soft skills (strong work ethic, high cognitive ability, etc.) and holistic skills (such as problem-finding and empathy for users). These skills aren’t dependent on a college major, but on an ability to learn and operate with a set of continually improving contemporary skills. In fact, at Infosys, we challenge full stack developers, liberal arts experts and design talent to work together to build out digital solutions for our clients.
The linear education-to-employment equation must give way to the continuum of lifelong learning
Another evolving idea is on-demand, modular learning delivered to the workforce throughout the period of their employment. These aren’t presented as “learning breaks” from business-as-usual but as a way to learn while working, on the job. By training our workers, we can not only remove barriers that would otherwise prevent them from fully participating in the modern economy, but also ensure our workforce remains relevant for the future.
Amplify and scale the workforce
American companies can mitigate their skills deficit by amplifying the capabilities of existing resources – helping people become better problem finders while machines evolve into efficient problem-solving partners. It’s time we moved from a solely full-time-employee-led talent pool to a heterogeneous workforce of regular employees, gig workers and software-led intelligence.
Recently we published quantitative and qualitative research that explains and demonstrates the value of such an approach.
The talent problem isn’t likely to disappear soon, but businesses that create pathways for workers across the talent spectrum to learn, train and succeed can certainly make it more manageable. | <urn:uuid:59f0e1b2-3ae6-4167-a2e4-8b8bfddb94bf> | CC-MAIN-2022-40 | https://www.cio.com/article/219960/rooting-out-the-digital-skills-crisis.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00171.warc.gz | en | 0.933127 | 673 | 2.640625 | 3 |
You have probably heard the term zero-day or zero-hour malware, but what exactly does it mean?
It's simple: it just means the malware is using a software vulnerability for which there is currently no available defense or fix. The vulnerability allows the malware to perform actions on your system that should not be permitted, such as running arbitrary code. Such malicious actions can impact the confidentiality, integrity, or availability of your system.
If a vulnerability is known already (i.e. not a zero-day), then chances are the software vendor has patched it, and/or security software vendors have added defenses against it. So you can protect yourself against known vulnerabilities simply by keeping your software, including your anti-malware defense, up to date. But these precautions will not protect you against zero-days.
You can think of the search for new vulnerabilities as a race. When security researchers and good guys find them, they warn the software vendor so the vulnerability can be patched. The best practice (what's called "responsible disclosure") is to initially do this privately, so the bad guys won't get a heads-up. Once some time has passed, allowing the vulnerability to be patched, the finding is made public. At this time, it might get a CVE number from the MITRE Corporation so that any interested party may refer to the vulnerability using a standard name.
Unfortunately, the bad guys are also in this race. They look for vulnerabilities in order to accomplish their ends, which generally involve ripping you off in some way. They try to find undisclosed vulnerabilities and create malware that takes advantage of them.
So are we defenseless against zero-day attacks? Happily, the answer is no. Anti-Exploit software like Malwarebytes Anti-Exploit can monitor your system for the sorts of actions associated with zero-day exploits and shut them down before they harm your system. If you'd like to learn more about the technical details, you may read about them in this blog post about how Malwarebytes Anti-Exploit works. | <urn:uuid:b7606d4f-133a-4650-9bcc-042584cb93d5> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2017/04/what-is-a-zero-day | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00171.warc.gz | en | 0.958566 | 418 | 2.828125 | 3 |
Artificially natural: revitalizing a waterway in Florida with human-made reefs
IBM volunteer and retiree Jean Cannon, along with her colleagues and neighbors in Siesta Key, Florida, are using artificial reefs to help restore and revitalize the aquatic ecosystem surrounding their community.
Plastic waste is invading our oceans and rivers, threatening sea life and aquatic habitats. But not all materials are created equal. A community in Florida is actually adding a specially treated, UV and marine-hardened plastic to its waterway—in the form of an artificial reef, which can be recycled—to attempt to revitalize the ecosystem.
IBM retiree Margaret Jean Cannon is a volunteer supporting the needs of residents in Siesta Key on the west coast of Florida, near Sarasota. One of those needs includes protecting the canal system surrounding many of their homes.
In 2020, serving as secretary of the Siesta Key Association (SKA), Jean—as she’s known, together with her board colleagues and members of the community, decided they needed to take action on the quality of the aquatic ecosystem so vital to their neighborhood and others nearby.
The solution: installing artificial mini reefs made out of polypropylene to mimic the natural ecosystem that attracts sea life and helps it thrive. The material, also called boat UV treated plastic, doesn’t flake, fall apart or break down, is more resistant to deterioration from water damage, can be recycled and is engineered to last 75 years or longer.
IBM Volunteers spoke with Jean to learn more about the Siesta Key Grand Canal Regeneration Project. The video below at the end of the interview provides additional information about the project.
Jean, why is this such an important project?
Water quality, and improving it, is critical to our way of life and our health, and this area’s commerce depends on sea life and the quality of the water. And I know that’s true in so many places around the world.
In our case, Siesta Key is an island 8 miles long with limited resources, surrounded by the Gulf of Mexico, Sarasota Bay and the Intracoastal Waterway. The nine-mile Grand Canal is at the center of the island and has many dead ends. It gets its water from Sarasota Bay and the Intracoastal, and there is one entrance and exit. Two studies done in the late 1990s pointed to the loss of sea-life habitat, sediment and low flow as problems with the canal, and the studies’ summaries stated that if these problems were not addressed, the Grand Canal would dry up and parts of it could die.
I moved here 24 years ago to live on the beach and enjoy fishing and water sports. In recent years, this area has redeveloped, with more seawalls; and more people are visiting, moving and living on the water. We understand that development will happen, but the pressures it puts on the environment can be managed more sustainably—that’s what we’re trying to do.
How are you involved in the Grand Canal Regeneration Project as a volunteer?
I am the lead for Siesta Key Association and co-leader on the overall project. There are five other team members:
- Phil Chiocchio is co-leader and brought the project to SKA’s attention. He works with David Wolff. Phil taught at Ringling College and is involved with Sarasota Bay Estuary and Sarasota County Environment organizations.
- David Wolff is the owner of the non-profit Ocean Habitats, Inc. He is an entrepreneur and the mini reef creator.
- Dave Vozzolo is learning the science of water from a Florida Sea Grant agent assigned to the University of Florida project. Dave will have about three other volunteers joining the team to test the canal water and record observations.
- Paul Westpheling is our lead on video and interviewing. He is a retired journalist and broadcaster and has helped us share our story with others.
- Joyce Kouba manages the SKA website and she supports the team installing the reefs.
We also collaborate with Dr. Ryan Schloesser of Mote Marine Lab to help with the water science and identification of species.
Phil Chiocchio, Jean Cannon and Dave Vozzolo
When did the project start and what was involved in the decision to proceed?
We started in November 2020, with a plan that Phil and I presented to the SKA members and Board of Directors. The Board approved the pilot, which had a simple goal: obtain homeowners’ agreement to purchase ten mini reefs to be installed in December. A mini reef costs USD 300, and we suggested that they could also be given as gifts since we were looking at the first installation around the holidays.
Well, we exceeded the initial goal when 23 reefs were ordered and had orders for more.
The Board agreed we should continue, and we had two other installs in February and March of 2021. We are now starting the project’s science aspect to understand what is needed beyond the mini reefs.
Tell us more about the mini reef and what it does
David created the mini reef to mimic a mangrove environment, and it replaces lost habitats for sea life. It attracts oysters, juvenile fish and crabs. It needs about two feet of water and is mounted under a dock, tied to the pilings with ropes and pulleys so it floats and moves with the tides. There are smaller sizes for low-tide areas. Mini reefs can be installed quickly and last hundreds of years.
David had been looking for a large project to showcase what this product can do. Siesta Key’s Grand Canal has over 875 homes, and the majority of the homes have docks. So we serve as a good model for similar canal environments in South Florida.
Mini reefs ready to be installed in the water
What’s the significance of tying a red ribbon on the dock piling?
It signals to others that the dock owner has installed mini reefs. We are encouraging neighbors to talk to other neighbors. And it also makes people smile. The first install was just before Christmas, and some dock owners received mini reefs as Christmas presents. We decided to keep doing it.
How are you coaching other communities to consider similar projects?
We are collaborating with The Center of Anna Maria Island, which has been doing installs for a few years. Anna Maria Island mostly sits on open waterways rather than dead-end canals. We are sharing resources and data-collection templates. We both will submit our water data findings to the State of Florida database.
Phil and I spoke to a community in Venice, the island south of us. We presented our installs and talked to them about planning theirs. Their Board also approved the project, and Phil intends to help with the installation. They are also on a canal and have about 70 docks.
On Siesta Key, we have an area called Riegle’s Landing, located on the Intracoastal, with about 31 docks. I understand that they are also planning to install mini reefs. I have contacted them and will need to follow up.
What do you hope will be the results of the project?
Solve the problem of lost habitats, improve our water quality, regenerate juvenile sea life in the canal’s water and be a model for other water communities’ revitalization.
What does cyber security mean to you? Anti-virus software? Annoying firewalls? Lots of silly rules from the IT department? If that rings true, you may lack cyber security awareness.
Before I try to define "cyber security awareness," I need to admit that it is a bit of a misnomer. Can you imagine talking about "disaster awareness"? ("Oh, the building might flood or burn down. The first comes from water and the second from fire.") Or maybe you are aware that you shouldn't eat that candy bar instead of having a healthy lunch, even though you do it anyway. The problem with "cyber security awareness" is that most people really mean something along the lines of "knowing what to do and doing it." Being aware of spilled food on the floor is not the same as wiping it up, but you cannot wipe it up if you are not aware of it.
So I like to say that "security awareness" (and I am using that broader term intentionally) involves making people aware enough to act - it requires creating a security mindset, not just a set of rules. At its core, it is not just being aware of threats, but understanding the threats and their impact on the organization and its people, including themselves. With that understanding must come the appropriate action.
There is a popular security awareness story about thumb drives, the small USB flash drives. A security researcher peppers a parking lot with them to see what will happen. Some are picked up by employees, and in one test, over half were inserted into ports on company computers. Instead of just reporting back to the researchers as these did, the drives could have contained serious malware that would have hurt the company. In fact, this has been used as a genuine attack vector.
The usual lesson is "never stick unknown devices into your computer." The real issue, though, was that those who plugged in the drives lacked the mindset that would tell them the drives could be a threat. The correct response to finding the thumb drives would have been to turn them in to the security folks, in case they were either genuinely lost or deliberately planted. The security pros could then have made the proper decision.
Most people will not develop a security mindset on their own; they need some kind of education. This is often called "Cyber Security Awareness Training" or "Security Awareness Training". I personally prefer the latter as it can then cover more aspects of physical security which are sometimes left out if the focus is on cyber security.
Most security awareness training I've seen is somewhat focused on threats - what the bad guys can do. That is important, but it is not enough. Here are four characteristics that are essential for good security awareness training:
- Focus on developing a security mindset. That includes ensuring people understand the impact to their organization, themselves, and other employees if a threat is realized. One simple example is that if IT has to spend time (and money) cleaning up a virus infection, that money may not be available for raises or bonuses. It is critical that each and every employee see the WIIFM (what's in it for me)!
- Make participants aware of the threats, but make it clear that there will be new ones that come along. That requires constant awareness and vigilance.
- Empower employees to act. "If you see something, say something" is an excellent start, but it is more. If a door that should be locked is not locked, in addition to saying something, employees must be empowered to lock the door.
- It must not be "one and done." Hearing about risks and what to do is not a one-time event. If it were, we would not have fire drills.
Learning Tree has courses in the cyber security area to help people gain the necessary skills.
Cyber security awareness is an attitude. It is, perhaps, a specialized part of situational awareness. It means being aware and it means acting. It's sort of like "street smarts;" it isn't an event, it is a lifestyle.
AUTHOR: John McDermott | <urn:uuid:65074cc9-3f42-4154-94ce-f68568c8635c> | CC-MAIN-2022-40 | https://www.learningtree.ca/blog/cyber-security-awareness-care-get/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00171.warc.gz | en | 0.975784 | 848 | 2.734375 | 3 |