Dataset schema (per record):
- text: string, 234 to 589k characters
- id: string, 47 characters
- dump: string, 62 classes
- url: string, 16 to 734 characters
- date: string, 20 characters
- file_path: string, 109 to 155 characters
- language: string, 1 class
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
Kar-Wing Lau and his partners had decided to tackle one of the toughest engineering challenges in the world: ending the computing industry’s reliance on air cooling for everything from computers to data centers. Air cooling is inherently inefficient. Bulky heat sinks, thousands of fans, and raised floors take up a lot of space. Computer room A/C units and compressors consume a lot of electricity. And worst of all, the cooling performance is poor: with an average Power Usage Effectiveness (PUE) of 2.0 in Hong Kong, roughly half of a facility’s total energy goes to cooling and other overhead rather than to the IT equipment itself.

As it turns out, there is a more efficient way to transfer heat out of a system: harnessing the physics of phase change. A phase change happens, for example, when water boils and produces steam. In this situation, the water stays at the same temperature (100°C) and all the excess heat drives the transition from liquid to gas. This insight led to a true innovation: 2-phase immersion cooling. When one places hardware in an open bath and surrounds it with a liquid that has a very low boiling point – Lau chose 3M’s Novec™ 7000, which boils at 34°C (93°F) – heat-generating components make the fluid boil and carry the heat away. That’s phase 1. The rising vapor then condenses (phase 2) and falls back into the tank passively, without the use of any pumps, and the cycle continues.

Using this technology, Lau built a 500kW data center in the hot and humid climate of Hong Kong. It achieved a PUE of 1.02. The data center, despite being housed in Hong Kong’s sticky climate, saved more than 95 percent of its cooling electricity. This represented a staggering $64,000 in savings per month. Additionally, the IT equipment in the data center occupies one-tenth the space of traditional data centers, requiring less than 160 square feet (15 square meters). The system is located in one of Hong Kong’s high-rise buildings and fits into the size of a standard shipping container. As Lau put it at the time, “This new data center project demonstrates the elegance of immersion cooling and showcases that it has what it takes to be the new gold standard in the industry.”
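For readers who want to check the arithmetic behind those figures, here is a minimal sketch using the standard PUE definition. The 500 kW IT load comes from the article; the electricity tariff below is a placeholder assumption, not a reported number.

```python
# Minimal sketch of the PUE arithmetic behind the article's figures.
# Assumptions (not from the article): continuous full load; the electricity
# price below is an illustrative placeholder, not the actual Hong Kong tariff.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT (cooling + facility) power implied by a given PUE."""
    total_kw = it_load_kw * pue            # PUE = total facility power / IT power
    return total_kw - it_load_kw

it_load_kw = 500.0                          # data center IT load from the article
air_cooled = overhead_kw(it_load_kw, 2.0)   # ~500 kW of overhead at PUE 2.0
immersion  = overhead_kw(it_load_kw, 1.02)  # ~10 kW of overhead at PUE 1.02

saving_pct = 100 * (air_cooled - immersion) / air_cooled
kwh_per_month = (air_cooled - immersion) * 24 * 30

price_per_kwh = 0.18                        # placeholder assumption (USD/kWh)
print(f"Cooling/overhead power reduced by {saving_pct:.0f}%")       # ~98%
print(f"Roughly {kwh_per_month:,.0f} kWh saved per month")
print(f"~${kwh_per_month * price_per_kwh:,.0f}/month at the assumed tariff")
```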
<urn:uuid:702915f8-ab2f-432c-9ea9-dec4947dd3f9>
CC-MAIN-2022-40
https://liquidstack.com/blog/the-beginnings-of-2-phase-immersion-cooling-in-hong-kong
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00173.warc.gz
en
0.903053
479
2.796875
3
By Art Reisman

Editor’s note: Art Reisman is the CTO of APconnections. APconnections designs and manufactures the popular NetEqualizer bandwidth shaper. APconnections removed all deep packet inspection technology from their NetEqualizer product over 2 years ago.

Article updated March 2012

As the debate over Deep Packet Inspection continues, network administrators are often faced with a difficult decision: ensure network quality or protect user privacy. However, the legality of the practice is now being called into question, adding a new twist to the mix. Yet, for many Internet users, deep packet inspection continues to be an ambiguous term in need of explanation. In the discussion that follows, deep packet inspection will be explored in the context of the ongoing debate.

Exactly what is deep packet inspection?

All traffic on the Internet travels around in what is called an IP packet. An IP packet is a string of characters moving from computer A to computer B. On the outside of this packet is the address where it is being sent. On the inside of the packet is the data that is being transmitted. The string of characters on the inside of the packet can be conceptually thought of as the “payload,” much like the freight inside of a railroad car. These two elements, the address and the payload, comprise the complete IP packet (a short code sketch at the end of this article illustrates this split). When you send an e-mail across the Internet, all your text is bundled into packets and sent on to its destination. A deep packet inspection device literally has the ability to look inside those packets and read your e-mail (or whatever the content might be). Products sold that use DPI are essentially specialized snooping devices that examine the content (the payload inside) of Internet packets. Other terms sometimes used to describe techniques that examine Internet data are packet shapers, layer-7 traffic shaping, etc.

How is deep packet inspection related to net neutrality?

Net neutrality is based on the belief that nobody has the right to filter content on the Internet. Deep packet inspection is a method used for filtering. Thus, there is a conflict between the two approaches. The net neutrality debate continues to rage in its own right.

Why do some Internet providers use deep packet inspection devices? There are several reasons:

1) Targeted advertising — If a provider knows what you are reading, they can display targeted advertising on the pages they control, such as your login screen or e-mail account.

2) Reducing “unwanted” traffic — Many providers are getting overwhelmed by types of traffic that they deem less desirable, such as BitTorrent and other forms of peer-to-peer. BitTorrent traffic can overwhelm a network with volume. By detecting and redirecting the BitTorrent traffic, or slowing it down, a provider can alleviate congestion.

3) Blocking offensive material — Many companies or institutions that perform content filtering are looking inside packets to find, and possibly block, offensive material or web sites.

When is it appropriate to use deep packet inspection?

1) Full disclosure — Private companies/institutions/ISPs that notify employees that their Internet use is not considered private have the right to snoop, although I would argue that creating an atmosphere of mistrust is not the mark of a healthy company.

2) Law enforcement — Law enforcement agencies with a warrant issued by a judge would be the other legitimate use.
3) Intrusion detection and prevention — It is one thing to be acting as an ISP and eavesdrop on a public conversation; it is entirely another paradigm if you are a private business examining the behavior of somebody coming in your front door. For example, in a private home it is within your right to look through your peephole and not let shady characters into your home. In a private business it is a good idea to use deep packet inspection in order to block unwanted intruders from your network. Blocking bad guys before they break into and damage your network is perfectly acceptable.

4) Spam filtering — Most consumers are very happy to have their ISP or email provider remove spam. I would categorize this type of DPI as implied disclosure. For example, in Gmail you do have the option to turn spam filtering off, and although most consumers may not realize that Google is reading their mail (humans don’t read it, but computer scanners do), its motives are understood. What consumers may not realize is that their email provider is also reading everything they do in order to serve targeted advertising.

Does content filtering use deep packet inspection?

For the most part, no. Content filtering is generally done at the URL level. URLs are generally considered public information, as routers need to look them up anyway. We have only encountered content filters at private institutions that are within their right.

What about spam filtering, does that use deep packet inspection?

Yes, many spam filters will look at content, and most people could not live without their spam filter. However, with spam filtering most people have opted in at one point or another, hence it is generally done with permission.

What is all the fuss about?

It seems that consumers are finally becoming aware of what is going on behind the scenes as they surf the Internet, and they don’t like it. What follows are several quotes and excerpts from articles written on the topic of deep packet inspection. They provide an overview not only of how DPI is currently being used, but also of the many issues that have been raised with the practice. For example, this is an excerpt from a recent PC World article:

Not that we condone other forms of online snooping, but deep packet inspection is the most egregious and aggressive invasion of privacy out there….It crosses the line in a way that is very frightening.

Recently, Comcast had their hand slapped for redirecting BitTorrent traffic:

Speaking at the Stanford Law School Center for Internet and Society, FCC Chairman Kevin Martin said he’s considering taking action against the cable operator for violating the agency’s network-neutrality principles. Seems Martin was troubled by Comcast’s dissembling around the BitTorrent issue, not to mention its efforts to pack an FCC hearing on Net neutrality with its own employees. — Digital Daily, March 10, 2008. Read the full article here.

Later in 2008, the FCC came down hard on Comcast.

In a landmark ruling, the Federal Communications Commission has ordered Comcast to stop its controversial practice of throttling file sharing traffic. By a 3-2 vote, the commission on Friday concluded that Comcast monitored the content of its customers’ internet connections and selectively blocked peer-to-peer connections. — Wired.com, August 1, 2008. Read the full article here.

To top everything off, some legal experts are warning companies practicing deep packet inspection that they may be committing a felony.
University of Colorado law professor Paul Ohm, a former federal computer crimes prosecutor, argues that ISPs such as Comcast, AT&T and Charter Communications that are or are contemplating ways to throttle bandwidth, police for copyright violations and serve targeted ads by examining their customers’ internet packets are putting themselves in criminal and civil jeopardy. — Wired.com, May 22, 2008. Read the full article here.

However, it looks like things are going the other way in the U.K., as Britain’s Virgin Media has announced they are dumping net neutrality in favor of targeting BitTorrent.

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register. — The Register, December 16, 2008. Read the full article here.

Canadian ISPs confessed en masse to deep packet inspection in January 2009.

With the amount of attention being paid to Comcast recently, a lot of people around the world have begun to look at their ISPs and wonder exactly what happens to their traffic once it leaves. This is certainly true for Canada, where several Canadian ISPs have come under the scrutiny of the CRTC, the regulatory agency responsible for Canada. After investigation, it was determined that all large ISPs in Canada filter P2P traffic in some fashion. — Tech Spot, January 21, 2009. Read the full article here.

In April 2009, U.S. lawmakers announced plans to introduce legislation that would limit how ISPs could track users. Online privacy advocates spoke out in support of such legislation.

In our view, deep packet inspection is really no different than postal employees opening envelopes and reading letters inside. … Consumers simply do not expect to be snooped on by their ISPs or other intermediaries in the middle of the network, so DPI really defies legitimate expectations of privacy that consumers have. — Leslie Harris, president and CEO of the Center for Democracy and Technology, as quoted on PCWorld.com on April 23, 2009. Read the full article here.

The controversy continues in the U.S., as AT&T is accused of traffic shaping, lying and blocking sections of the Internet.

7/26/2009 could mark a turning point in the life of AT&T, when the future looks back on history, as the day that the shady practices of an ethically challenged company finally caught up with them: traffic filtering, site banning, and lying about service packages can only continue for so long before the FCC, along with the bill-paying public, takes a stand. — Kyle Brady, July 27, 2009. Read the full article here.

[February 2011 Update] The Egyptian government uses DPI to filter elements of their Internet traffic, and this act in itself becomes the news story. In this news piece, Al Jazeera takes the opportunity to put out an unflattering piece on Narus, the company that makes the DPI technology and sold it to the Egyptians.

While the debate over deep packet inspection will likely rage on for years to come, APconnections made the decision to fully abandon the practice over two years ago, having since proved the viability of alternative approaches to network optimization. Network quality and user privacy are no longer mutually exclusive goals. Created by APconnections, the NetEqualizer is a plug-and-play bandwidth control and WAN/Internet optimization appliance that is flexible and scalable.
When the network is congested, NetEqualizer’s unique “behavior shaping” technology dynamically and automatically gives priority to latency-sensitive applications, such as VoIP and email. Click here for a full price list.
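To make the address-versus-payload split described at the top of this article concrete, here is a short illustrative sketch. It is an assumption-laden example only – it handles plain IPv4 with no options and is in no way a DPI engine – but it shows that routing needs only the outer header, while “deep” inspection means reading past it into the payload.

```python
import struct

def split_ipv4_packet(packet: bytes):
    """Separate the "envelope" (header with addresses) from the "freight" (payload).

    Simplified illustration: assumes a well-formed IPv4 packet and uses the IHL
    field only to find where the payload begins.
    """
    header_len = (packet[0] & 0x0F) * 4               # IHL field, in 32-bit words
    src, dst = struct.unpack("!4s4s", packet[12:20])  # source / destination addresses
    header = {
        "src": ".".join(str(b) for b in src),         # where the packet came from
        "dst": ".".join(str(b) for b in dst),         # where it is being sent
    }
    payload = packet[header_len:]                     # what a DPI device reads
    return header, payload

# Ordinary routing uses only `header`; deep packet inspection also parses `payload`.
```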
<urn:uuid:089cdde5-3444-47f0-9c37-4f0ab3c6f9f1>
CC-MAIN-2022-40
https://netequalizernews.com/tag/spy-deep-packet-inspection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00173.warc.gz
en
0.951885
2,172
2.65625
3
The Winternitz signature uses a checksum that’s very similar to the approach used in Merkle’s scheme. Recall that the threat is an attacker who increments any byte(s) of the message. The checksum must ensure that the attacker cannot increment any byte of the message proper without voiding the checksum, and that they cannot maul the checksum in a way that would help them. The solution in this case is to compute a checksum that consists of the sum of the differences between 255 (the maximum value of a message byte) and each actual message byte being signed. The resulting sum is encoded as a base-256 integer and appended to the message. Both the message and checksum are signed. For an n-byte message M, the exact checksum formula is C = \sum_{i=1}^{n} (255 - M_i), where M_i is the i-th byte of the message M. As an obvious example, the message (255, 255, 255, 255) would have a checksum of 0. The message (0, 0, 0, 0) would have an integer checksum of 1020, which would encode (base-256) as the bytes (3, 252).
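A direct translation of that formula into code (an illustrative sketch of byte-granular checksumming with w = 256, not any particular library’s implementation):

```python
def winternitz_checksum(message: bytes) -> bytes:
    """Sum of (255 - byte) over the message, encoded as a big-endian base-256 integer."""
    c = sum(255 - b for b in message)
    out = bytearray()
    while c:
        out.insert(0, c & 0xFF)   # emit base-256 digits, most significant first
        c >>= 8
    return bytes(out) or b"\x00"

# The two examples from the text:
assert winternitz_checksum(bytes([255, 255, 255, 255])) == b"\x00"
assert winternitz_checksum(bytes([0, 0, 0, 0])) == bytes([3, 252])   # 1020 = 3*256 + 252
```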
<urn:uuid:a25ee64f-82b4-49d5-b122-28f5d9303198>
CC-MAIN-2022-40
https://blog.cryptographyengineering.com/winternitz-checksum/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00374.warc.gz
en
0.876656
238
2.921875
3
What Is A Wireless Bridge?

A wireless bridge is a connection between two end points; those end points can be buildings, CCTV cameras, telecommunication masts or just about any IP device that you want to link. One of the endpoints becomes a ‘master’ and the other a ‘slave’. Often the master will determine the frequency and other attributes the link is established with, as it assumes the role of a controller. Unlike an access point, which serves multiple end points or devices in a 30 – 360 degree spread, a wireless bridge is purely directional between point A and point B. Bridges can be used over both short (just a few meters) and long (over many miles) distances, making them ideal for a variety of connectivity solutions. A wireless bridge is often directional using 10 degree beam widths, allowing all of the RF (radio frequency) energy to be focussed in one direction. The fact that it is directional minimises its potential to hear or pick up interference from other transmitters, and the fact that it is focussed in one direction is a positive in a noisy RF environment, because the receiver and transmitter are deaf to anything outside of that focussed 10 degree beam width zone.

Fresnel Zone & RF Line Of Sight

Over long distances the 10 degree beam width can get quite large; the region it occupies is called a “Fresnel zone”. A Fresnel zone is rugby ball shaped and its size will depend on the frequency used. When investigating whether you have line of sight between two end points, the mistake can be made of looking only at optical line of sight, and not RF line of sight. Because of the Fresnel zone, these are two completely different things. If anything encroaches on the Fresnel zone, you have a near or non line of sight link – this can often occur even when you have optical line of sight. Depending on the type of bridge used (frequency or hardware vendor), it will not be detectable using a WiFi device. While you may be able to detect the frequency in use with a WiFi spectrum analyser, you will not be able to join the network or obtain any of the data from it. Above all else, most wireless bridges will use encryption to keep the data secure. A wireless bridge is layer 2; this means it is using wired switches to communicate between either end of the link and NOT a router – a bridge is not a routed (layer 3) deployment. This is sometimes called a ‘flat’ network. All of this means you can in effect create what BT once called a LES circuit – a LAN (local area network) extension.

Wireless Bridge Speeds

The speeds available to anyone looking for a wireless bridge are now up to 20Gbps. To achieve this you would bond 2 x 10Gbps links together, achieving 20Gbps full duplex. Full duplex means the same speed in both directions (uplink and downlink). The speeds achieved will not always match what is stated on the box or in the marketing material, as many factors can impact the achieved results. We have discussed RF line of sight: for example, a 300Mbps link can achieve just 14Mbps (or no link at all) if the line of sight is infringed upon too much or there are too many interfering channels. Sometimes a building in the way can be your friend, as you can bounce (or, to use the correct term, ‘multipath’) the signal off the building. This means the receiver is going to receive multiple data streams and is going to combine them to offer one single usable link. Buildings in the link path do this better than trees, but it’s important to point out that high capacity requirements will always need a strict line of sight.
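To put a number on the Fresnel zone described above, the sketch below uses the standard radio-planning formula for the maximum (mid-path) radius of the first Fresnel zone. This is a general textbook formula, not something taken from any particular bridge vendor.

```python
import math

def first_fresnel_radius_m(distance_km: float, freq_ghz: float) -> float:
    """Maximum (mid-path) radius of the first Fresnel zone in metres.

    Standard formula: r = 8.66 * sqrt(D / f), with D in km and f in GHz.
    """
    return 8.66 * math.sqrt(distance_km / freq_ghz)

# Example: a 5 GHz link over 3 km needs roughly a 6.7 m clearance radius at
# mid-path; a common rule of thumb is to keep at least 60% of that zone clear.
print(round(first_fresnel_radius_m(3, 5), 1))
```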
Radio Frequency and Microwave Wireless Bridges

There are various frequencies wireless bridges operate over, and in this section we are going to discuss some of the most common ones. The following frequencies will be deployed in line of sight conditions, but can work in near or non line of sight:

3GHz: Requires licensing and offers good penetration of foliage for near or non line of sight links.
2.4GHz: License exempt and used for short links, but this band is also used for legacy WiFi, so it often isn’t a good idea to use.
5GHz: Can be license exempt (Band A or B) or light licensed (Band C) and used for links of up to 800Mbps (using an 80MHz channel).

The following frequencies are normally higher capacity and require RF line of sight to work correctly; this is often a condition of the license from Ofcom (that they are deployed in line of sight):

6-38GHz: Licensed and can provide over 2Gbps over long distances. These are licensed in the UK and the licenses can be expensive, especially when you want higher capacity over long distances.
60GHz: License exempt and usable for links of 1Gbps – 2.5Gbps over distances of around 800m – 1600m.
70GHz: Light licensed and usable for links of 1Gbps – 2.5Gbps. Slightly greater range than 60GHz owing to atmospherics.
80GHz: Light licensed and capable of 10Gbps over much longer distances than 60/70GHz.

Each of the frequencies above requires line of sight to work at its full capability.

Bridges Using Laser/FSO (Free-Space Optics)

Laser links offer a really good high speed and interference-free wireless bridge; the downsides, however, include fog and sun sometimes blocking the pencil-thin beam of light that connects both ends. A laser link will not modulate down (go from 1Gbps to 100Mbps) in the event of something entering the line of sight path – it will just stop working until the obstruction is cleared. This can be extremely problematic for businesses that require constant uptime to fully function. Theoretically, some wireless bridge hardware can be configured so it acts as an access point serving more than one end point; while it may still bridge traffic over layer 2 to multiple locations, it would no longer be a ‘wireless bridge’. In most cases you will need to use different antennas, and the solution is designed differently in the first instance. If you just have two end points then you need a bridge; if you have more than two end points you want to serve, then you need a fixed wireless access point – this is also called point to multipoint. Wireless bridges use point to point technology to wirelessly connect two end points together. By using high frequency radio waves, wireless bridges can now transfer data at up to 20Gbps (full duplex). Most commonly, wireless bridges are used to connect buildings, CCTV cameras, sensors, telecommunication masts, corporate WAN connections, internet access or anything else that requires a data connection. Yes, wireless bridges can work without line of sight, but don’t expect hundreds of Mbps through a non-line-of-sight connection; it all depends on distance and the amount of intrusion into the Fresnel zone.

Need a Wireless Bridge?

If you need a wireless bridge installed for your business, enter your details into our form above to find out how much it will cost, with no obligation. You can also call us on 03331 500 140 for any free help and advice you may need.
<urn:uuid:8de95634-92f7-4b8c-8b7f-b3d00cbd76f2>
CC-MAIN-2022-40
https://www.apcsolutionsuk.com/what-is-a-wireless-bridge/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00374.warc.gz
en
0.940127
1,534
3.5625
4
The Cyber Kill Chain was developed by Lockheed Martin as a framework to help organizations understand the process of cyber attacks. If you understand every point in the chain of events of a cyber-attack, you can focus your efforts on breaking that chain and mitigating the damages. Many organizations have taken their own approach to defining the correct Cyber Kill Chain, with varying degrees of success. For the purposes of this article, we will be focusing on the original 7-step Cyber Kill Chain developed by Lockheed Martin. We will go through what each step in the chain involves and how you can break the chain to better protect your data.

How the Cyber Kill Chain Works in 7 Steps

Each stage of the Cyber Kill Chain is related to a certain type of threat, both external and internal. For the most part, whatever threat you face (from malware, phishing, insider threats and more), it is likely to fall into one or more of the activities on the kill chain.

Step 1 – Reconnaissance
In this stage, attackers are selecting their victim and researching their security vulnerabilities. They may be locating what sensitive data you have, where it’s stored, who has access to it and what the best routes are into the network.

Step 2 – Weaponization
The attackers have finished their research into your organization’s vulnerabilities and have selected their targets. In this step, they are working out how best to get inside the network. This might be through a virus or malware tailored to exploit known vulnerabilities.

Step 3 – Delivery
The attack method is delivered into the target environment. The actual method used may vary, but it most commonly comes through malicious email attachments, websites, or USB devices.

Step 4 – Exploitation
In this step, the malicious code has been inserted or the vulnerability has been exploited, and the attackers are setting themselves up to execute on their mission.

Step 5 – Installation
The malware installs an access point that enables the attackers to get access to the target environment.

Step 6 – Command and Control
The attackers now have uninterrupted access to the target environment and can manipulate it at will.

Step 7 – Actions on Objective
The original goals of the attack can now be executed on command. The outcome of this could be anything from data theft to ransomware. Whatever the objective is, if this step is completed successfully, you have been the victim of a data breach and are likely going to face severe costs to reputation and the bottom line.

Criticisms of the Cyber Kill Chain

Whilst the original Cyber Kill Chain was revolutionary in helping organizations understand the nature of cyber-threats, it was created at a time when the belief was that most security threats originated from outside the organization. The Cyber Kill Chain, therefore, does not consider the insider threat, which research suggests is the most prevalent threat you are likely to face. For example, in the weaponization, delivery and installation stages of the kill chain, it is heavily implied that the attack will be delivered through some sort of malware or virus. In many cases, data breaches occur when privileged users abuse their access controls. In these sorts of attacks, steps 2, 3 and 4 are largely irrelevant. Other criticisms of the Cyber Kill Chain include the fact that the first few steps happen outside of the control of security teams, making it practically impossible to break the chain at these points.
An Updated Cyber Kill Chain for Today’s Security Threats

A better way to look at the Cyber Kill Chain would be to combine weaponization and delivery into a simpler “Intrusion” step. In this step the attackers (whether they are insiders or external attackers) will be able to exploit existing vulnerabilities in the network or permissions structure to gain a foothold. A new step could be added to explain how insiders move throughout your environment. More often than not, when an attacker has privileged access, they move laterally to other systems and user accounts to gain access to even more sensitive data. Another step missed by Lockheed Martin is where attackers cover their tracks to intentionally try to confuse forensics and investigations. This step should be considered if the attack is premeditated or malicious. Often, data breaches are accidental, so this step will not be seen. Often, with malicious attacks, the attackers will attempt to block normal users and systems from having access to data so they can do their work unimpeded. This is known as denial of service. So, the updated Cyber Kill Chain might look something like this:

- Privilege Escalation
- Lateral Movement
- Obfuscation / Anti-Forensics
- Denial of Service
- Actions on Objective

In order to fully visualize the Cyber Kill Chain you have to imagine it more as a circle. Just because an attacker has reached step 8 in the chain doesn’t mean that the attack is over. Data breaches are a persistent threat to your organization and must be dealt with accordingly.

Breaking the Cyber Kill Chain

Theoretically, the Cyber Kill Chain can be broken at any stage (excluding the reconnaissance phase). Mostly, the chain can be broken through proactive and continuous monitoring of interactions with data and systems. For example, if you detect that permissions are being escalated through real-time alerts, you can take immediate action to prevent the threat from gaining access to sensitive data. Similarly, obfuscation is less effective if you are tracking and monitoring an audit trail of logs. In some cases, you might even be able to detect threats in the reconnaissance stage. If a user accesses a file containing sensitive data for the first time, and they shouldn’t have access to this file, then you can immediately prevent them from having that access. This might prevent a threat from materializing altogether. To do this effectively, you cannot rely on normal event logs or a SIEM solution alone. There will be too much noise to sift through and you will not get the context you need to make real-world decisions. Data Security Platforms can help to add more value to your SIEM and provide more detailed reporting and alerting. If you would like to see how Lepide can help you break the Cyber Kill Chain, schedule a demo of the Lepide Data Security Platform today.
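As a concrete illustration of the “first-time access to sensitive data” detection described above, here is a minimal sketch of such a rule. It is a conceptual example only – not the Lepide platform’s actual logic or API, and the file names and users are hypothetical.

```python
# Conceptual "break the chain early" rule: alert when an account touches a
# sensitive file it has never accessed before and is outside the authorized
# group. Hypothetical data; not any vendor's real interface.

seen_access = {}                                     # (user, file) -> access count
authorized = {"finance/payroll.xlsx": {"alice", "bob"}}

def raise_alert(user, file):
    print(f"ALERT: {user} accessed sensitive file {file} for the first time")

def on_file_access(user, file):
    first_time = (user, file) not in seen_access
    seen_access[(user, file)] = seen_access.get((user, file), 0) + 1

    if file in authorized and user not in authorized[file] and first_time:
        raise_alert(user, file)                      # block / escalate in real time

on_file_access("mallory", "finance/payroll.xlsx")    # triggers the alert
```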
<urn:uuid:c855e592-f859-4307-8653-8e1e2f454a57>
CC-MAIN-2022-40
https://www.lepide.com/blog/what-is-the-cyber-kill-chain-and-how-it-works/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00374.warc.gz
en
0.939026
1,281
2.71875
3
Today almost every aspect of our life has moved online. And we can be quite happy about that: you no longer need to waste a whole hour commuting to work, you don’t need to go to the bank to make a deposit, you don’t need to go to a shopping center to buy something, and you don’t even need to go to school, since it is already at your desk at home. All of these things are now available at home, namely online. Many more parts of our daily (and not so daily) life have moved online as well. We have come to enjoy more flexible and fast communication with the whole world, and the perks we’ve gained are almost too many to count. Reality, though, has its own say: in many corners of this new cyber world, malicious entities are lying in wait, preying on users. That’s why we present the scariest online threats you can encounter there and should be aware of.

Overtaken IoT devices

Internet of Things devices are known to be particularly vulnerable to online cyber threats because of their weak or hard-coded passwords. Threat actors can sometimes easily look up their passwords on the internet; that’s why specialists urgently recommend that users keep their IoT devices’ software regularly updated and change any default password on any IoT device. IP cameras, routers, digital video recorders and many other IoT devices can easily become exposed to the internet and, as a result, put you in danger. If various kinds of malware can use the internet to spread and carry out their malicious actions, imagine the consequences of a data breach on a home network consisting not only of your IoT devices but also your computers, phones and laptops. Some malware – spyware, trojans, ransomware, etc. – is particularly keen on exploiting vulnerabilities in IoT devices to use those devices’ resources and conduct its designated malicious actions through them. One example of IoT device exploitation is the creation of a botnet, usually to target other users with DoS attacks.

Having some of your important passwords leaked

It might not be scary if your Instagram password from 2010 has been leaked, but a leaked bank account password should definitely make you worry. And if you happen to have one of your passwords leaked, it goes without saying that it needs to be changed immediately. Try to use a unique, strong and complex password for each important account – one that won’t be easily guessed by anyone encroaching on your privacy. We know it’s sometimes hard to remember each one of them, especially nowadays when you can have up to fifty different accounts. The solution could be a password manager that helps you secure all your important passwords.

Phishing has always been one of the most common types of scam. Threat actors try various techniques to lure you into clicking their malicious attachments. But there’s also one kind of phishing that doesn’t fall behind in popularity among other scam techniques: threat actors create fake websites to make potential victims enter their valuable information. The information stolen in this way may vary from banking account credentials to your login credentials for other sites. With the wide spread of today’s photo-editing technology, it isn’t that hard for threat actors to create convincing spots for scams. That’s why specialists recommend that before entering any data onto a website, you first check whether you are on a legitimate one. To do this, simply look at its URL and see whether it matches the official address.
Denial of Service

This type of online cyber attack occurs when threat actors flood a targeted computer or network with requests, using other computers they control. The volume of such malicious requests can rise so high that the victim’s infrastructure cannot respond to them all and simply goes down. A similar kind of attack is DDoS, in which threat actors use a whole network of compromised machines for the attack. DoS attacks are often meant to tear apart “handshake” connections and thus deny service. In other cases, once a network is disabled, threat actors will go on to launch other attacks. Sometimes the number of machines exploited in such attacks can rise to millions of bots. Such huge botnets can be spread across many locations and are hard to trace.

Man in the Middle

This is one of the common attacks people face when using an unsecured public WiFi connection. In this kind of attack, threat actors insert themselves between the victim and a website and redirect traffic so that they can see what data the victim is exchanging with it. In other cases, using the compromised connection, threat actors can even install malware onto your device. Typically, the correspondents won’t even know that there’s a third party in the channel. This kind of attack can happen to users of various e-commerce sites, financial applications, SaaS businesses and similar websites where logging in is required. The usual goal of such an attack is to steal sensitive information like credit card numbers, account details or login credentials.

The real king of today’s cybercriminal landscape is ransomware. Ransomware constantly makes headlines on the news with ever-increasing ransom demands, which can sometimes skyrocket unbelievably. Though these days most ransomware attacks hit big companies and enterprises, individuals sometimes suffer no less damage. The stealthy malware encrypts data and holds it hostage until payment is made. In some cases ransomware criminals employ a double extortion technique, and sometimes even a triple extortion technique. Double extortion means that, in addition to encrypting data and demanding a ransom, the criminals also steal the data and threaten victims with publishing it on the internet. (I advise you to read “Malware vs. ransomware: what is the difference?”) These additional actions are meant to add further pressure on the victim’s willingness to pay. The triple extortion technique also involves those whose data has been compromised in an attack: criminals threaten them with publishing the stolen data, creating a third source of pressure on the company or enterprise to pay the ransom.
<urn:uuid:bd8acd3e-86a8-4783-a4d2-24a8ca06e247>
CC-MAIN-2022-40
https://gridinsoft.com/blogs/scariest-online-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00374.warc.gz
en
0.937221
1,234
2.515625
3
The runcksumlist variable contains a list of checksum values. By default, runcksumlist is an empty list. Populate it by running the Privilege Management for Unix and Linux utility program pbsum, which generates application and file checksum values. Use checksum values to determine if the target files or applications have changed by establishing baseline checksum values and then comparing those baseline checksum values against a checksum that is generated during security policy file processing. If the checksum value that was generated during security policy file processing does not match any of the values in runcksumlist, then the file or application has changed since generation of the baseline checksum, and Privilege Management for Unix and Linux refuses to run it. Application checksum values can be used to determine if a virus has infected an application or if the file has been changed. There is no read-only version of this variable. This run variable does not apply to pbssh. If it is present in the policy, it does not have any effect on pbssh and is ignored.

runcksumlist = list of checksum values;

A list of strings that represents checksum values generated by pbsum. The default value is empty, which specifies no checksum checking.
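The baseline-comparison behavior described above can be illustrated conceptually. The sketch below is plain Python, not Privilege Management for Unix and Linux policy syntax, and the checksum values and algorithm are placeholders rather than real pbsum output; it only shows the decision the variable controls: an empty list means no checking, otherwise the computed checksum must match a baseline value.

```python
import hashlib

# Placeholder baseline values standing in for output of the pbsum utility.
runcksumlist = {"3f6d1a0c...", "9b2e44d7..."}

def checksum(path: str) -> str:
    """Stand-in for the checksum computed during security policy file processing."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def allow_run(path: str) -> bool:
    if not runcksumlist:                 # empty list: no checksum checking (the default)
        return True
    return checksum(path) in runcksumlist   # otherwise the target must match a baseline
```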
<urn:uuid:8f1c06a5-6ea1-4a50-a322-45bd5ddc1c27>
CC-MAIN-2022-40
https://www.beyondtrust.com/docs/privilege-management/unix-linux/policy-language/variables/task-information/runcksumlist.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00374.warc.gz
en
0.848906
311
2.875
3
Let’s face it, the terms that describe the different types of automation can be confusing. So, let’s break it down. BPA is the use of technology (such as RPA, IA, and workflow) to automate business processes. This includes the routing of information from step to step and the automatic processing of tasks. RPA and IA are types of Business Process Automation, though they perform separate functions on their own. RPA, also called simple automation, is the automation of rules-based processes with structured data to facilitate sharing of information between applications. The systems are accessed by a user account controlled by a “bot.” IA, on the other hand, is an emerging technology that incorporates workflow, machine learning, and artificial intelligence to automate complex processes that require consideration and decision making. Check out this helpful breakdown of the terms based on data from Deloitte. To learn more, please register for the Digitech Systems/Epson co-hosted webinar on October 9th at 1 p.m.
<urn:uuid:bfdca2db-4b84-4a67-bf6d-46cc7517b41d>
CC-MAIN-2022-40
https://www.e-channelnews.com/still-confused-about-the-difference-between-rpa-ia-and-bpa/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00374.warc.gz
en
0.943704
218
2.609375
3
Do I Need Calculus For Computer Science?

In today’s world, computer science is one of the most relevant courses that one needs to study. It’s a dynamic and fast-growing field that has become an essential part of the world we presently reside in. Do you see yourself becoming a software developer, data scientist, systems analyst, network architect, or product manager in the future? If yes, computer science is the right course of study. To gain admission and study this field, one of the things you need is a good knowledge of mathematics. According to the University of Oxford, mathematics is a fundamental intellectual tool in computing. Some of the aspects of maths that you need to take seriously to successfully study computer science are binary math, discrete math, and statistics. But here’s a burning question: do you also need calculus for computer science?

Yes, you need a basic understanding of calculus for computer science. Here are some of the areas where you’ll likely need a knowledge of calculus – computer graphics & visualization, simulations, scientific computing, computer security, design and analysis of algorithms, coding, etc. In most institutions across the world, you’ll study at least three different calculus courses before you’ll be eligible to earn a bachelor’s degree in computer science. Read on to find out more about these calculus courses and a few other kinds of maths that you’ll need.

Why Do You Need So Much Math for Computer Science?

Before going ahead to address the issue of calculus for computer science, let me start by addressing this question: why does computer science require so much mathematics? First, you need to understand that not all fields of computer science require mathematics. However, a large percentage of them will need you to have a solid knowledge of this subject. Calculus, statistics, algebra, discrete math, and binary math are the five most important aspects of maths that you need for computer science. They are advanced mathematics. Since that’s the case, you can’t just jump to these courses without going through the foundation – here’s where other math courses come into the scene.

Why Is Calculus So Important?

As you now know, calculus is one of the most important aspects of mathematics you’ll need to earn a bachelor’s degree in computer science. But here’s a quick question: how exactly is calculus relevant to computer science? To start with, I’ll say calculus is essential only in a few aspects of computer science. The fields, as earlier stated, include the following:

- Computer graphics & visualization
- Machine learning
- Data science
- Discrete Math and Combinatorics
- Design and analysis of algorithms

In computer graphics and visualization, you need a good knowledge of Analytic Geometry, Linear Algebra, and Differential Geometry. The last part, differential geometry, is an area of mathematics that has multivariate calculus as a prerequisite. Apart from being a prerequisite for differential geometry, you also need a basic understanding of calculus for the Fourier transform and wavelet transform. Optimization is related to computer science in the sense that you need to learn it to be able to optimize code. The purpose of optimizing your code is to make it smaller, so it consumes less memory and executes quickly. In the non-linear aspect of optimization, you need a strong knowledge of multivariate calculus to be able to develop anything. Of course, this doesn’t mean you don’t need calculus for linear optimization.
Even for the derivative of the objective function, an aspect of linear optimization, you need a basic understanding of calculus.

Machine Learning, Data Science, Robotics, Statistics

You need calculus in machine learning, data science, and robotics. However, you need it even more for statistics and probability. You need calculus in almost every aspect of statistics. For instance, at the basic level, there’ll be a time you’ll have to integrate over sections of a probability distribution – of course, that’s calculus.

Discrete Math and Combinatorics

Discrete math is an aspect of mathematics that you’ll be studying as an undergraduate looking to earn a bachelor’s degree in computer science. It’s popularly called the mathematical language of computer science. For this course, you need a good knowledge of calculus to integrate and derive some formulas.

Analysis of Algorithms

Lastly, for you to successfully mathematically analyze any algorithm, you need a good understanding of lambda calculus.

You’ll Study Calculus in School

It’s no longer news that you’ll need a strong knowledge of calculus for computer science. Many experts will even tell you that calculus is a fundamental course for computer science majors. In college, you’ll need to pass at least three calculus courses before you can earn a bachelor’s degree in computer science. As a computer science major, you’ll learn several different aspects of calculus, such as vector analysis, series, derivatives and integrals, calculus theorems, and integration.

Fundamental Theorem of Calculus

Even if you don’t have prior knowledge of calculus from high school, you’ll be introduced to it in your first semester. The introduction to single-variable calculus will cover everything you need to know about derivatives and integrals. At the end of the course, you should be able to understand everything that has to do with the Fundamental Theorem of Calculus.

Advanced Calculus 1

After your first calculus semester in college as a computer science major, you’ll need to pass the course to be able to move on to a higher one – advanced calculus. In advanced calculus 1, you’ll learn more about single-variable calculus. However, at this stage, your focus will be more on series, such as the Taylor series, convergent series, harmonic series, power series, and many more. That’s not all; you’ll also learn about linear algebra and how it relates to calculus.

Advanced Calculus 2

Before you’ll be able to study advanced calculus 2, you need to pass advanced calculus 1. In advanced calculus 2, you’ll learn about calculus of two and three dimensions, which is called “multivariate calculus.” Some of what you’ll get to learn in this course include Taylor’s theorem, Lagrange multipliers, vector analysis, etc.
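As a small illustration of one point above – integrating over a section of a probability distribution – the sketch below numerically integrates the standard normal density. It is only meant to show where calculus turns up in statistics work, not any particular course material.

```python
import math

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density of the normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, a: float, b: float, n: int = 10_000) -> float:
    """Approximate the definite integral of f over [a, b] with the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# P(-1 <= X <= 1) for a standard normal: the familiar ~68%.
print(round(integrate(normal_pdf, -1.0, 1.0), 4))   # ≈ 0.6827
```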
<urn:uuid:a76f181c-ee3f-438c-bd31-1073ba5e1099>
CC-MAIN-2022-40
https://gigmocha.com/do-i-need-calculus-for-computer-science/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00574.warc.gz
en
0.920077
1,375
2.921875
3
Published Sunday, Jul 24 2022, by Adnan Kayyali

A research team at the Ruhr-Universität Bochum has reportedly developed a novel sensor that allows early Alzheimer’s detection, the key to treating the neurodegenerative disease. The results of the study have been published in the journal of the Alzheimer’s Association. According to a statement by the researchers, early detection can be done as early as 17 years before symptoms occur with a simple blood test. The tool apparently recognizes the misfolding of the biomarker protein amyloid-beta, which results in distinctive brain deposits. “Our goal is to determine the risk of developing Alzheimer’s dementia at a later stage with a simple blood test even before the toxic plaques can form in the brain, in order to ensure that a therapy can be initiated in time,” Professor Klaus Gerwert, founder of the Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum, said in this statement. His team collaborated with a team at the German Cancer Research Center (DKFZ) in Heidelberg, led by Hermann Brenner. The researchers studied blood plasma collected from individuals between 2000 and 2002 and then frozen. The individuals weren’t yet known to have Alzheimer’s disease at that time. The study’s authors then chose 68 participants who had received an Alzheimer’s disease diagnosis over the course of the 17-year follow-up and compared them to 240 control participants who had not received a diagnosis. They wanted to see if there were any early indications of Alzheimer’s disease in the blood samples used in the study. “Surprisingly, we found that the concentration of glial fibrillary acidic protein (GFAP) can indicate the disease up to 17 years before the clinical phase, even though it does so much less precisely than the immuno-infrared sensor,” Gerwert stated. The accuracy of the early Alzheimer’s detection test in the symptom-free stage was subsequently further improved by combining the amyloid-beta misfolding and GFAP concentration. The group now has some really big ambitions for their new device. Gerwert added that the research team plans to use the misfolding test to create a better screening method for early Alzheimer’s detection and will pursue that vision in a newly founded startup, betaSENSE. They hope to be able to prevent the disease in the pre-symptomatic stages, before any permanent damage is done to the patient’s brain. The invention has already received international patent protection, and the researchers predict that as medical technology advances over time, its significance will only increase. “The exact timing of therapeutic intervention will become even more important in the future,” Léon Beyer, first author and Ph.D. student on Klaus Gerwert’s team, anticipated. “The success of future drug trials will depend on the study participants being correctly characterized and not yet showing irreversible damage at study entry.”
<urn:uuid:eb704724-1f32-46e5-b980-b73ca91f3938>
CC-MAIN-2022-40
https://insidetelecom.com/early-alzheimers-detection-now-possible-with-novel-sensor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00574.warc.gz
en
0.936914
806
3.03125
3
What Is Ransomware?

Ransomware is malicious software (malware) used in a cyberattack to encrypt the victim’s data with an encryption key that is known only to the attacker, thereby rendering the data unusable until a ransom payment is made by the victim. Ransom amounts are typically high, but usually not exorbitant, to get victims to simply pay the ransom as quickly as possible, instead of contacting law enforcement and potentially incurring far greater costs due to the loss of their data and negative publicity. Ransomware is commonly delivered through exploit kits, watering hole attacks (in which one or more websites that an organization frequently visits is infected with malware), malvertising (malicious advertising), or email phishing campaigns. Ransomware typically identifies user files and data through some sort of embedded file extension list. Files that match one of the listed file extensions are then encrypted. The ransomware then typically leaves instructions for the victim on how to pay the ransom.

Smart Network Security

With an increasingly distributed workforce, security is more important than ever. AlphaKOR is a trusted Cisco partner, offering organizations of any size advanced security solutions. Talk to our team about your organization’s needs to get started with Cisco security products.
<urn:uuid:cd11a969-255a-4130-9d7a-902bb0ee5449>
CC-MAIN-2022-40
https://staging.alphakor.com/ransomware-checklist/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00574.warc.gz
en
0.935706
419
2.53125
3
Zoom went from an obscure teleconferencing company to a household word (and a verb) when the pandemic hit. Zoom wasn’t the best videoconferencing app by any means. It lacked many key features like true end-to-end encryption and had some serious data oversharing problems. But Zoom was dead simple to use and kinda fun to say. For better or worse, it became the de facto tool for many of us to keep in touch. That was eight months ago. (Feels like eight years.) Over that time, Zoom has made many important improvements. This week it has finally rolled out what appears to be true end-to-end encryption (E2EE). So I thought it would be good to talk about what this means and how to set it up.

What Does End-To-End Encryption Mean?

Encryption is the process of scrambling something such that no one else can read, see or hear it (depending on what you’re encrypting). Decryption reverses this scrambling to get back the original file, video or audio. A single cryptographic algorithm (a fancy name for a mathematical process) is responsible for both encryption and decryption. The process itself isn’t a secret, which allows it to be vetted by lots of really smart people. The secrecy depends entirely on a key – basically, a glorified password. As long as this key is kept safe and secret, the scrambling is effectively irreversible. But saying something is “encrypted” isn’t sufficient for true privacy. Let’s take Alice and Bob as an example. They’ve set up a Zoom call and Zoom says that it’s encrypted. Prior to this new E2EE feature, what “encrypted” really meant was that the video was scrambled between Alice and Zoom, and between Zoom and Bob, but not while it traversed Zoom’s servers. Zoom could view it. So end-to-end encryption means that the video stream is encrypted the entire way, even as it passes through Zoom’s servers.

Who Holds the Key?

But that’s still not good enough, if you want full privacy. Because if Zoom has access to the encryption key, then they can still use that key to decrypt your meeting (possibly much later, if the video is saved). Zoom now offers both options: “Enhanced encryption”, where Zoom controls and holds the key, and “End-to-end encryption”, where only Alice and Bob (or rather their smartphones or computers) hold the key. In truth, you’re still trusting that Zoom isn’t somehow able to access the key (accidentally or maliciously). That’s why open source tools like Signal and Jitsi are still preferable, if you need to be really sure. But for most people, Zoom is fine.

Setting Up E2EE on Zoom

In order to enable E2EE on Zoom, you’ll need to first enable the feature in your account settings. (And if you don’t have an account, you’ll need to create one.) Look for “Allow use of end-to-end encryption” and enable it. You should then make sure you have the latest version of Zoom installed (this feature requires that you use the computer or smartphone app). If you have the latest version, you might want to restart it after changing the above setting to make sure that your client is in sync with your account settings. There are two main ways to launch a Zoom call, and each has a slightly different way to enable E2EE.

- New Meeting. This is an “immediate meeting” which uses your “personal meeting room”. Under the “New Meeting” menu, under your personal meeting ID (PMI), go to PMI Settings.
- Scheduled Meeting. Click the “Schedule” button.

In both cases, you should now see the meeting’s security options. Select “End-to-end encryption” under Security.
Note that all participants must have E2EE enabled on their accounts for this to work (the first step above). See this Zoom E2EE FAQ for details. When this is done correctly, click the green shield icon at the upper left of your meeting window: under “Encryption” you should see “End-to-end”.
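To illustrate the “who holds the key” point from earlier, here is a toy sketch using symmetric encryption from the Python cryptography package. It has nothing to do with Zoom’s actual protocol; it only shows why a relay that never holds the key cannot read what it forwards.

```python
# Toy illustration of "who holds the key", not Zoom's real protocol.
# Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

meeting_key = Fernet.generate_key()      # held only by Alice's and Bob's apps

# Alice encrypts a frame and hands the ciphertext to the relay server.
ciphertext = Fernet(meeting_key).encrypt(b"video frame from Alice")

# The server can store and forward the ciphertext, but without `meeting_key`
# it cannot decrypt it; with server-held ("enhanced") encryption it could.
received = Fernet(meeting_key).decrypt(ciphertext)   # only Bob, who has the key
print(received)
```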
<urn:uuid:f0b415c9-2058-4644-bb8a-22f22ca3b879>
CC-MAIN-2022-40
https://firewallsdontstopdragons.com/zoom-now-with-actual-privacy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00574.warc.gz
en
0.936208
983
2.609375
3
In space no one can hear you call out for pizza, but technology being developed in a NASA-funded project might let astronauts print one instead — or any number of potentially delectable meals. Systems and Materials Research Corporation received a US$125,000 grant from NASA to build a prototype device that prints food. The project, led by mechanical engineer Anjan Contractor, is still quite a ways from the replicator technology of Star Trek, but it could be the next step in providing sustenance for those planning to leave the Earth’s orbit. “There is serious work being done on this end,” said Chris Carberry, executive director of Explore Mars. “This is being developed very much to help get us to Mars. It certainly is one step that is necessary in getting there.”

3D Printing of Food

The technology will utilize progressive 3D printing and inkjet technologies; SMRC will design, build and test a complete nutritional system for long-duration missions beyond low Earth orbit. To illustrate the concept, Contractor has envisioned a test system that could print a pizza. For starters, the printer would create a layer of dough, which would be cooked while printed. Then tomato powder would be mixed with an oil and water solution to create the sauce. A topping could include a nondescript protein layer. “Printing food this way could be a pretty big deal for long-term stays in space and Mars,” said futurist Glen Hiemstra, “and also for us here on Earth.” The process of getting NASA funding for the project is almost as complex as printing 3D food. “We are in negotiations to sign a contract,” said NASA spokesperson David Steitz. “The first step is to apply for the Small Business Innovation Program, which program has three stages. The first is the feasibility study, where we look at the proposal. … We then award a contract to build a prototype — and finally, the third part is to partner with a third party to build a business that can become a NASA contractor.” SMRC has passed the first milestone and has received a $125,000 grant for six months to deliver that feasibility study based on the concept. The company could then apply for a $750,000 grant, which would give it two years to develop a prototype and would lead to actually testing the food with astronauts. One of the biggest reasons for developing a printed food solution is weight — not the type that’s carried around waistlines, but the type filling spacecraft cargo holds. “One of the challenges we face with deep space exploration is weight and mass in terms of the size of the craft and the fuel used,” Steitz told TechNewsWorld. “It is one thing to go up to low Earth orbit, such as the International Space Station, where basic materials such as food can be resupplied. But this isn’t the case with missions to Mars or the asteroids.” The important areas of research thus continue to be in human life support systems, including everything from air to water to food. Contractor is just one of many innovators addressing a big problem. “We have some of the smartest minds in house working on food storage, but with all innovation required to get us out there, you can’t just rely on your brightest minds alone,” said Steitz.

3D Parts Printing

While this latest research could change the way people eat in space, 3D printing could have applications well beyond the table. “In 2015, we’ll bring a 3D printer to the space station, which will be the first time we’ll even test added manufacturing in space,” Steitz noted.
“We’ve looked at this for spacecraft parts that in the past had been created by hand. So we have a lot of experience with additive manufacturing and 3D printing technology.” NASA’s goal is to eventually print a whole spacecraft — and to do it in space rather than on Earth. This could allow for significant design changes. Whether it’s the spacecraft itself, its parts, or the food that astronauts eat, additive manufacturing means money saved. “Right now, it costs about (US)$10,000 per pound to take something just into low orbit,” said Steitz. “Therefore, on long missions, we have to improve the sustainability with air, water and food — and 3D printing provides a great potential for that,” he added. “The potential benefits would be that the form and fit in the delivery system could be addressed, as well as the inclusion of critical nutrients such as protein,” Steitz added. “There are obviously issues to be worked out, such as what it would taste like and what it would look like, but we’ll get there.”
<urn:uuid:1d50246c-f977-4010-937b-cbae3983cc85>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/3d-food-printer-could-sustain-long-distance-space-explorers-78095.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00574.warc.gz
en
0.959942
1,041
3.109375
3
Because of the prevalence of English on the Internet, as well as language technology such as speech recognition and translation software, smaller languages may be falling by the wayside, according to a recent study. Languages such as Icelandic, Latvian and Lithuanian don’t have enough speakers to gain traction as popular languages on the Web, and even German, Italian, Spanish and French could be at risk because of a dearth of resources to power translation tools, speech-to-text technology and voice-controlled devices. The research was conducted by the National Centre for Text Mining at the University of Manchester in England. Among the researchers on the study was John McNaught, the center’s deputy director. He joins us for this podcast. Download the podcast (17:15 minutes). Here are some excerpts: Hello, and welcome to a TechNewsWorld podcast. My name is David Vranicar, I’m a reporter for TechNewsWorld, and today we are talking about language extinction on the Internet. With us is John McNaught, the deputy director of the National Centre for Text Mining. He joins us from Manchester and the University of Manchester in England. Professor McNaught was part of a study titled “Europe’s Languages in the Digital Age.” It was published by META Net, a group that promotes technological foundations for a multilingual European information society. The report has gotten some traction recently for its findings, which did not bode so well for a slew of European languages and their future prospects on the Web. So Professor McNaught is going to break down what they found and what it means for the Internet moving forward. TechNewsWorld: It seems like it might be kind of a self-reinforcing phenomenon, where the strong languages would just become more prevalent and the ones that don’t have as big of populations or that don’t have as much usage — that they would continue to slide further and further. Is that what you’re finding, that the strong ones are getting stronger and vice versa? John McNaught: Well, I think it’s fair to say that the strong ones are strong. I’m not sure to what extent we can say the strong ones are getting stronger for the simple reason that there is a very large proportion of people who do not wish to communicate in any language but their own — or cannot communicate in any language but their own. So I think there’s a finding that only about 10 percent of people would be prepared to use English for online services, so they wouldn’t want to start ordering things online or engage in a bank or whatever in English if that wasn’t their own language. We have to be very careful about just assuming that certain languages will just get stronger and stronger and stronger, and that other ones will just slide away. We also have to be careful about maintaining balance — maintaining cultural balance, balance in terms of heritage, balance in terms of societies getting along together. Because when you look at many of the trends over many, many centuries, often it’s language that is at the root of many of the world’s problems: People don’t understand each other. And they need help in order to bring people to understand, to celebrate our diversity, to celebrate our cultural heritage and so on. And we can do this through language technology if we have the means.
TNW: Yeah, I thought that was one of the more interesting things about the write-up that you all did about your findings, that this really transcends not being able to buy the newest pair of shoes on the Internet and goes beyond simple things. There really is a cultural significance to it. McNaught: Yes, I wholly agree with that view. TNW: Is this gap — the gap between the languages or the slow progression of these language tools — is it something that the “free market” could rectify? Can you apply normal terms of supply and demand to this issue? Or does that kind of fall by the wayside when you get into this language landscape? McNaught: Well, that’s an interesting question. I think it’s only natural for big players in the market to focus on big returns. And there may not be much traction in investing in, say, machine translation systems between two minor languages. That’s not going to bring big returns. So you tend to find the big players concentrate on the major languages and indeed on translation into English as opposed to translation into other languages. So there’s that phenomenon. On the other end of the scale, Europe is quite active in the small and medium-sized enterprises sector. However, it becomes very expensive for an SME to develop a language application on its own simply because it needs access to very large language resources, it needs the language expertise and so on to develop the tools. So it can become a significant struggle for SMEs to do this even if they wish to do so, especially in the minority language area. TNW: Is this problem too big to be solved by Google and Google Translate? Is it too deep for that? McNaught: I think it is. Google Translate is good as far as it goes. I use it myself regularly. However, it remains at a certain level of achievement and we know we can do better. The thing about machine translation — I think it is one of the hardest things that one can attempt to achieve on a computer. And machine translation, in common with many natural language processing tasks, relies in many cases on deeper understanding and on the content of a message. And many of the techniques that are around at the moment are based on statistics instead of on meaning. So if you’ve seen something so many times before, you might propose it as matching something you’re seeing now. This gets you so far, but these programs really have no knowledge of semantics, or meaning. And we’re trying to get to this, but it’s a hard task. We can do certain things, we can get so far. I think you can make a comparison with the efforts going on in particle physics trying to find the Higgs boson, that kind of thing. People got together on a large scale, there was massive funding. We are after something equally elusive, which is understanding, which is meaning and getting a computer to be able to handle this. And it’s that kind of large-scale effort that’s needed to bring people together, to provide them with the frameworks to meet the challenge.
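McNaught’s description of statistics-based translation (propose whatever target phrase has been seen most often alongside a given source phrase) can be illustrated with a deliberately tiny sketch. The phrase pairs below are made up, and real systems learn probabilities over millions of aligned sentences, but the example shows both the idea and its limitation: an unseen phrase leaves the program with nothing to fall back on, because there is no model of meaning behind the counts.

```python
from collections import Counter

# A toy "translation memory": source/target phrase pairs observed in previously
# translated text. Real statistical systems learn millions of weighted pairs.
observed_pairs = [
    ("the house", "la maison"), ("the house", "la maison"),
    ("the house", "le foyer"), ("small house", "petite maison"),
]

def translate_phrase(src):
    """Pick the target phrase seen most often with the source phrase."""
    counts = Counter(tgt for s, tgt in observed_pairs if s == src)
    if not counts:
        return None  # unseen phrase: no statistics, and no understanding to fall back on
    return counts.most_common(1)[0][0]

print(translate_phrase("the house"))   # -> 'la maison' (seen twice vs. once)
print(translate_phrase("the spirit"))  # -> None
```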
<urn:uuid:d1b6a85d-c3b9-44be-a091-4ba66b9a6f9d>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/language-on-the-net-what-weve-got-here-is-a-failure-to-communicate-76315.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00574.warc.gz
en
0.963087
1,377
2.65625
3
Part 1 of this series explores the practical aspects of locating greenhouses in urban environments. These are not your grandpa’s greenhouses. Anyone familiar with common plastic-enclosed passive solar structures designed simply to hold plants over cold seasons or grow flowers and less-hardy fruits might be surprised about how these humble buildings have transformed into today’s dynamic hydroponic “bio-structures.” The modern high-tech greenhouse represents the timely convergence of some of the most sophisticated scientific know-how available from a variety of industries — agriculture, horticulture (the scientific cultivation of fruits, vegetables, herbs, flowers and ornamental plants in nurseries and gardens), energy (solar, wind and fuel cells), greenhouse manufacturing, hydroponics (soil-less growing), environmental control software, lighting, heating and ventilation, polymers and many more. Properly built and operated, these state-of-the-art, computerized food-producing machines can enclose valuable unused urban space in glass to produce abundant crops of commercially grown food for local consumption.
Controlling the Environment
Controlled Environment Agriculture (CEA) is a high-tech industry for the production of food crops, flowers, houseplants and medicinals within controlled environment greenhouse structures. “Once food production is put on a rooftop — or any confined area — the need to develop intensive, high-productivity, year-round growing systems demands CEA technologies,” said Gene Giacomelli, director of the Controlled Environment Agriculture (CEA) Program at the University of Arizona. CEA combines engineering, plant science and computer-managed greenhouse control technologies used to optimize plant growing systems, plant quality and production efficiency. CEA systems allow stable control of the plant environment, including temperature, light and CO2 (which plants must absorb in combination with water, nutrients and sunlight to produce the sugars vital for their growth). CEA also provides separate control of plant root-zone environments. Computer-coordinated CEA activities include environment control (encompassing air temperature and movement, humidity, supplemental light and CO2 concentration) and mechanization and automation of operations that were formerly done by hand: mixing, fertilizing and placing root media, seeding and transplanting, nutrition management, hydroponic crop production, “fertigation” and material movement at harvest. “Systems from many companies have been developed during the past 30 years to monitor and control greenhouses for flower and vegetable production,” Giacomelli told TechNewsWorld. “They were developed to improve the capabilities of CEA, the quality of the products, the savings of labor allowed by automation, and the effectiveness of sensors that can at times more accurately determine the environment or climate of the crop, and then immediately make necessary changes.” A “gold standard” already exists for hydroponic greenhouses. The North American Greenhouse/Hothouse Vegetable Growers (NAGHVG) was founded by leading North American greenhouse growers: Village Farms, Eatontown, N.J.; Windset Farms, Delta, British Columbia; Eurofresh Farms, Willcox, Ariz.; Houweling’s Hot House Nurseries, Camarillo, Calif., and Delta, B.C.; and Gipaanda Greenhouses, Ladner, B.C. Based in Bellingham, Wash., the NAGHVG has developed a “Certified Greenhouse” program.
Standards include:
- Every aspect of the growing process is monitored and controlled, from irrigation to climate control and growing medium.
- Vegetables can be protected from pollution, wildlife and other potential contaminants, which creates conditions for the safest possible produce-growing methods.
- Eco-friendly integrated pest management (IPM) is used.
- The greenhouses must be able to provide reliable supplies of produce throughout the year.
The NAGHVG’s “Certified Greenhouse” program involves ongoing audits to ensure that certified producers continue to meet the standards set by the association. Certification is granted exclusively to greenhouse operations that comply with NAGHVG’s definition of a greenhouse:
- Facility includes a fully enclosed permanent aluminum or steel structure clad either in glass or impermeable plastic for the controlled-environment growing of certified greenhouse/hothouse vegetables.
- Facility must use computerized irrigation and climate control systems, including heating and ventilation capabilities.
- Facility must use hydroponic methods and must grow produce in a soilless medium that substitutes for soil.
- Facility must practice IPM.
Computers can operate hundreds of devices within a modern greenhouse (vents, heaters, fans, hot water mixing valves, irrigation valves, curtains, lights, etc.) by utilizing dozens of input parameters, such as outside and inside temperatures, humidity, outside wind direction and velocity, CO2 levels and even the time of day or night. A computer can keep track of all relevant information such as temperature, humidity, CO2 and light levels. It dates and time-tags the information and stores it for current or later use. Such a data acquisition system enables a grower to gain a comprehensive understanding of all factors affecting the quality and timeliness of the greenhouse product. Dozens of software developers have developed CEA-oriented applications. Major players include:
- Priva: automated climate and process control in the horticultural and building intelligence markets;
- Argus Control Systems: automated control systems;
- Hoogendoorn Growth Management: process automation systems;
- Micro Grow Greenhouse Systems: greenhouse environmental control systems;
- Link4 iGrow: intelligent environmental controllers.
Virtual Grower is a decision support tool for greenhouse growers to monitor plant growth and control energy management in greenhouses. It was developed by the U.S. Department of Agriculture (USDA) Agricultural Research Service (ARS) Application Technology Research Unit at the University of Toledo (Ohio). Users of the software can build a virtual greenhouse with a variety of materials for roofs and sidewalls, design the greenhouse style, schedule temperature set points throughout the year, and predict heating costs for over 230 sites within the continental U.S. Different heating and scheduling scenarios can be predicted with the input of a few variables, with accurate data based upon historical records collected by USDA monitoring stations across the country. “Specific software is not important,” said UA’s Giacomelli. “But including it within a monitoring and control system that is dependable and effective is one of the most important aspects of CEA with emphasis on urban agriculture.” Solar and wind power are the two primary renewable energy forms most commonly used by greenhouses for energy sources that are “off-grid” — not connected to an electricity distribution system. A third technology is fuel cells.
Active solar greenhouses use supplemental energy to move solar-heated air or water from storage or collection areas to other regions of the greenhouse. PVs are arrays of cells containing a solar photovoltaic material that converts solar radiation into direct current electricity. Materials presently used for photovoltaics include silicon, cadmium telluride and copper indium selenide/sulfide. Building-integrated photovoltaics (BIPVs) are increasingly incorporated into new domestic and industrial buildings as a principal or ancillary source of electrical power, and are among the fastest growing segments of the PV industry. Typically, an array is incorporated into a building’s roof or walls, and roof tiles with integrated PV cells can now be purchased. Arrays can also be retrofitted into existing buildings. Productivity is also improving. Since flat solar panels only get direct sunlight for three to four hours per day, Sunflower Solutions developed a patent-pending manually adjusted solar power tracking system that dramatically increases the amount of sunlight that solar arrays can capture. In addition to free-standing PV solar arrays, greenhouses can capture solar energy through the use of enclosures made of a special PV glass with integrated solar cells that convert sunlight into electricity. The solar cells are embedded between two glass panes and a special resin is filled between the panes, securely wrapping the solar cells on all sides. Each individual cell has two electrical connections, which are linked to other cells in the module to form a system which generates a direct electrical current. This means that the power for a greenhouse can be produced within the roof and facade areas. Manufacturers include SCHOTT and Pythagoras Solar. The growing demand for renewable energy sources has dramatically advanced the manufacture of solar cells and photovoltaic arrays in recent years. However, due to expenses related to implementation, use of solar electric (photovoltaic or PV) heating systems for greenhouses remains cost-prohibitive for most small businesses unless used to produce high-value crops. Wind turbines are mechanical rotary devices that extract wind energy and convert it to electricity. In addition to traditional turbines utilizing familiar paddle-shaped blades, a new class of vertical-axis helical turbines from companies like San Diego-based Helix Wind have proven to be superior in producing electricity in the variable winds of urban environments. Using the twisted-ribbon shape of a helix, these generators overcome problems like noise, impact and price. Helical turbines are nearly noiseless because they spin at the same speed as the wind blowing into them. Small “hybrid” electric systems that combine wind and solar technologies offer several advantages over either single system. Many hybrid systems are standalone units operated off-grid. For times when neither the wind nor the solar systems are producing, most hybrid systems provide power through batteries and/or an engine generator powered by conventional fuels, such as diesel. If the batteries run low, the engine generator can provide power and recharge the batteries. Adding an engine generator makes the system more complex, but modern electronic controllers can operate these systems automatically. An engine generator can also reduce the size of other components needed for the system. The storage capacity must be large enough to supply electrical needs during non-charging periods. 
Since traditional renewable energy technologies like solar and wind are often intermittent, greenhouse operators have turned to fuel cells as an energy alternative. Fuel cells convert air and nearly any fuel source like hydrogen, natural gas or a wide range of biogases into electricity via a clean electrochemical process, rather than dirty combustion. Even running on a fossil fuel, the systems are much cleaner than a typical coal-fired power plant. Fuel cells are devices that produce a continuous electric current directly from the oxidation of a fuel, e.g., that of hydrogen by oxygen. They were invented over a century ago and have been used in practically every NASA mission since the 1960s. But until now, they have not gained widespread adoption because of their inherently high costs. Legacy fuel cell technologies like proton exchange membranes (PEMs), phosphoric acid fuel cells (PAFCs), and molten carbonate fuel cells (MCFCs) have all required the use of expensive precious metals, corrosive acids or hard-to-contain molten materials. Combined with performance that has been only marginally better than alternatives, they have not been able to deliver a strong enough economic value proposition to overcome resistance. Among the newest technologies is the solid-oxide fuel-cell (SOFC), available from companies like Bloom Energy and Technology Management. The technology is gaining wide acceptance due to its use of low-cost ceramic materials and its extremely high electrical efficiencies. In addition to their use as auxiliary power units in vehicles, SOFCs can be used for stationary power generation in greenhouses, with outputs from 100 W to 2 MW. Developed originally by SOHIO/British Petroleum, the TMI system operates on a range of liquid and gas fuels and is designed for operation and maintenance by end-users without special tools, equipment or access to a trained service workforce. SOFC is an ideal alternative energy technology of choice for greenhouse applications, according to Tim Madden, president of Akron, Ohio-based hydroponic greenhouse development firm Biodynamicz, which is integrating TMI’s fuel cells into its designs. “We’re focused on SOFCs as auxiliary power sources because they’re able to convert a wide variety of fuels like hydrogen, methane, butane or even gasoline and diesel, and because they do it with such high efficiency,” Madden told TechNewsWorld. “TMI’s SOFCs are attractive as energy sources because they’re clean, reliable and almost entirely nonpolluting. And because there are no moving parts, the cells are vibration-free and quiet, which eliminates the noise pollution associated with power generation.” Fuel cell technology improvements are continuing to come from places like the automotive industry and so prices will continue to drop as production quantities increase. Let There Be Light As for the use of modern light-emitting diode (LED) semiconductor technology in plant grow lights, LEDs present many advantages over traditional lighting technologies like high pressure sodium (HPS) and High Intensity Discharge (HID). When electric current flows through an LED, electrons travel through an energy “bandgap” in the diode crystal, releasing energy in the form of light. This effect is called electroluminescence and the color of the light is determined by the materials used to make the LED. A single LED grow light directly replaces 600-watt HPS light while consuming 50 percent less energy and is rated for a lifecycle of 50,000 hours. 
Cool-running LED lights also eliminate the need for ballasts (current regulators in lamps), reflectors, noisy fans or expensive cooling systems. Even though LED technology is more expensive than traditional HID lighting, the units require less maintenance, bulb replacement costs are eliminated, savings on energy costs start from day one of ownership, and total ROI is generally 12 to 18 months. A recent IMS Research report stated that while Nichia, Osram Sylvania and Philips Lumileds remain the leading suppliers of packaged LEDs, they are being challenged by companies in Taiwan and Korea, notably Seoul Semiconductor. “Artificial lighting is too expensive to install and operate to be economical in vegetable production due to the high amount of natural sunlight needed, especially for crops like strawberries and tomatoes,” Verti-Gro’s Tim Carpenter told TechNewsWorld. “LED light does not offer a full spectrum of light for flowering vegetables and fruits. LED for commercial use is a ways off but is improving rapidly, especially for vertical growing. I believe there will still be a need for a substantial amount of real sunlight in order to produce vegetables and berries profitably.” BioDynamicz’ Madden likes LED lighting in greenhouses. “The light produced from an LED source is better absorbed by plants, and greenhouse growers have been trying for years to eliminate hot HID fixtures because heat can damage crops,” he explained. “LEDs solve that problem.”
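Underneath the different vendor platforms described above, greenhouse control software follows the same basic pattern: read the sensors, compare the readings against grower-defined set points, and switch actuators such as vents, heaters, and CO2 valves. The sketch below illustrates that loop in simplified form; the sensor values, thresholds, and function names are hypothetical placeholders, not the interface of Priva, Argus, Virtual Grower, or any other product mentioned in this article.

```python
import time

# Grower-defined set points (illustrative values only)
SETPOINTS = {
    "air_temp_c": (18.0, 27.0),    # acceptable temperature band
    "humidity_pct": (60.0, 80.0),  # acceptable relative humidity band
    "co2_ppm": 800.0,              # target CO2 enrichment level
}

def read_sensors():
    """Placeholder for hardware I/O; a real system would poll wired or wireless probes."""
    return {"air_temp_c": 29.3, "humidity_pct": 83.0, "co2_ppm": 640.0}

def set_actuator(name, on):
    """Placeholder for relay or fieldbus commands to vents, heaters, CO2 valves, etc."""
    print(f"{name} -> {'ON' if on else 'OFF'}")

def control_step():
    s = read_sensors()
    lo_t, hi_t = SETPOINTS["air_temp_c"]
    lo_h, hi_h = SETPOINTS["humidity_pct"]

    # Ventilate when the house is too hot or too humid
    set_actuator("roof_vent", s["air_temp_c"] > hi_t or s["humidity_pct"] > hi_h)
    # Heat only when the temperature drops below the lower bound
    set_actuator("heater", s["air_temp_c"] < lo_t)
    # Enrich CO2 only while the vents are closed, otherwise the gas is wasted
    vents_closed = s["air_temp_c"] <= hi_t and s["humidity_pct"] <= hi_h
    set_actuator("co2_valve", vents_closed and s["co2_ppm"] < SETPOINTS["co2_ppm"])

if __name__ == "__main__":
    while True:
        control_step()
        time.sleep(60)  # sample once a minute; commercial controllers also log every reading
```

Commercial CEA controllers layer data logging, alarming, remote access, and redundancy on top of this basic loop, but the sensing-and-actuation cycle itself is what ties the climate, energy, and lighting subsystems together.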
<urn:uuid:00cb48a0-594d-42f9-b32f-ad4cef8e79a7>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/urban-gardening-part-2-greenhouse-technology-70310.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00574.warc.gz
en
0.922543
3,129
3.65625
4
A virtual private network (VPN) is a security tool that is essential these days for the sake of privacy on the internet. Without it, the internet activity of a user can be easily viewed and intercepted by other people. This includes online banking, online shopping, business e-mails, downloading of files and browsing history, etc. The worst part is that all of this can be used to trace the device back to you. The Internet Protocol (IP) address can be used to track down someone in real life as it reveals the physical location. However, when the user is connected to a reliable VPN service, the internet activity and data are encrypted and the IP address is masked with its help. No one can find out what the user is doing on the internet, not even the government or hackers. There are many versions of VPNs out there. Some are even free but not recommended. The premium ones offer a lot of protection and features. Using a VPN is definitely a plus when it comes to interaction on the internet. However, can you run two VPNs at the same time? Does this offer double protection? Many users ask these questions. Anyway, a VPN can help with:
- Remaining anonymous on the internet.
- Helping in the prevention of cyberattacks and spying.
- Providing freedom to browse the internet.
- Providing online safety and privacy.
- Providing access to blocked and restricted websites and applications.
- Bypassing censorship and firewalls.
- Improving internet connection and speeds.
- Making the online gaming experience better.
Can You Run Two VPNs At The Same Time?
Many users want more privacy and protection and wonder whether you can run two VPNs at the same time. The answer is yes, there are ways to achieve this. However, experts claim that the benefit is limited, and the setup is hard to achieve and maintain. You cannot just simply install two VPNs and think that you will get double protection by turning both of them on. It is much more complicated than that. A routing error might occur if you try to use two VPNs at once. To make it work, the user will need to manually change the configuration of the OpenVPN files. The best way is to install one VPN on the operating system and the other on a virtual machine. For this, the user will have to download and install a virtual machine on the computer. The traffic will then pass through both the host machine’s VPN and the virtual machine’s VPN before reaching the internet. This will definitely slow down the speed, and the speed will keep dropping depending on the number of added tunnels. The simplest way possible to accomplish this is by a double VPN. Many VPN companies allow and provide double encryption, which saves hours of work. Because the traffic is streamlined via only one specific provider, there will not be any conflicting VPNs. The traffic will be sent from one server and will be redirected to the second one, seamlessly. In simple words, the traffic and the online data will be secured by not just one but by two VPN servers working simultaneously. As for the level of encryption, it is not doubled; instead, another layer of encryption is added before the online traffic reaches the web. Overall, before the data even reaches the internet and gets back to your device, it is encrypted and then decrypted. That is not all: the data and traffic are encrypted and decrypted a second time, simply because there are two VPN servers.
Moreover, instead of just once, the Internet Protocol (IP) address is masked two times behind the two VPN servers. Some of the benefits of using a double VPN connection are as follows:
- By alternating between TCP and UDP, the connections between the servers can offer a better level of online security to the user.
- Only the first VPN server’s address will be visible to the second VPN server, which will not be able to see the real Internet Protocol (IP) address.
- By setting up a double VPN, the data and online traffic of the user will be encrypted two times.
- By using a double VPN connection, there will be a safer environment for the user; this reduces the risk of a traffic correlation attack and does not leave the user exposed on the internet.
- By establishing a multihop VPN connection, the user will be able to hide his or her digital footprints, which are left across multiple geo-locations.
When it comes to the disadvantages of using two VPNs at the same time, there are not many. The main issue is that a multihop or double VPN connection decreases connection speeds. Beyond this, there is no major issue. Setting up a multihop or a double VPN connection can be quite expensive, as it involves many servers and connections for more privacy and security. Some ways of setting up a double VPN are:
- Establishing a VPN chain with multiple providers.
- Using the host machine plus virtual machine method.
- Using a browser extension alongside various VPN connections.
- Using a router and various VPN connections.
- Manually configuring a VPN’s settings.
In short, setting up a VPN chain, multihop connection, or a double VPN connection is more likely to give you benefits if you are concerned about your online privacy. Some VPN providers offer built-in features that let the user decide if he or she wants to set up a VPN chain or a double VPN connection. The only other alternative is to configure a connection on your own, either by using a virtual machine or by using servers from different and multiple VPN providers. Hopefully, this answers the question “can you run two VPNs at the same time?” that many people without a technical background keep asking.
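For readers who want to experiment with the self-configuration route mentioned above, the fragment below shows the bare outline in Python: it simply launches two OpenVPN clients, one after the other. The profile names are placeholders, both providers are assumed to supply ordinary .ovpn files that redirect the default gateway, and real chains usually need additional routing adjustments, so treat this as a sketch rather than a finished recipe.

```python
import subprocess
import time

# Placeholder profile names; substitute the .ovpn files from your two providers.
FIRST_HOP = "provider_a.ovpn"
SECOND_HOP = "provider_b.ovpn"

def start_vpn(profile):
    # Launches an OpenVPN client as a background process (requires administrative privileges).
    return subprocess.Popen(["openvpn", "--config", profile, "--daemon"])

if __name__ == "__main__":
    # 1) Bring up the first tunnel; its config points the default route at the new interface.
    start_vpn(FIRST_HOP)
    time.sleep(15)  # crude wait; a real script would watch the OpenVPN log instead

    # 2) Bring up the second tunnel. Because the default route now goes through the
    #    first tunnel, the connection to the second provider's server is itself carried
    #    inside the first tunnel, producing the VPN-over-VPN chain described above.
    start_vpn(SECOND_HOP)
```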
<urn:uuid:2f6b8c89-5cc0-49b1-a92f-519978e2c530>
CC-MAIN-2022-40
https://internet-access-guide.com/can-you-run-two-vpns-at-the-same-time/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00574.warc.gz
en
0.94126
1,220
2.796875
3
The global fight to minimise climate change is gaining momentum as more and more industries come aboard. With its commitment to achieving carbon neutrality by 2030 and to reaching 75 percent of this goal as early as 2025, the data center industry is at the forefront. With the first milestone barely four years away, the clock is ticking, and data center operators need to find ways to achieve carbon neutrality.
Bridging the gap between reliability and sustainability
Increasing the use of renewables in power systems is an obvious way of reducing data center climate impact. Renewables have come a long way in recent years and are now mature, efficient, and competitive. However, their inherently unstable nature runs counter to the need for uninterrupted power in a data center. To put it very simply, you can’t power a server rack from a PV panel on overcast days. Fortunately, you do not have to choose between sustainability and reliability. With hybrid eco-systems, you can reduce climate impact while ensuring constant power. This is achieved by adding and prioritising renewables but backing them up with other power sources, and by running those (typically fossil-fuelled) power sources more efficiently, thereby lowering fuel consumption and reducing climate impact.
What is a hybrid eco-system?
Hybrid power is a solution covering a given load demand using a combination of two or more different power sources. The operator can combine the connected sources as needed to exploit their benefits and reach different operational targets. When renewables are included in a hybrid solution, it is often called an eco-system. Examples of hybrid eco-systems include a wind or PV plant with a battery energy storage system (BESS), or a setup that combines many power sources: mains power, gensets, several renewables, and BESS. Flexible by nature, hybrid eco-systems are ideally suited for reducing climate impact because you can configure them to run in a climate-friendly way. For example, they can be set up to use renewable power sources whenever possible, relying on mains or diesel gensets only for backup.
How do you set up and control a hybrid eco-system?
With several different power sources and many possible operating scenarios, the control solution for a hybrid eco-system needs to be intelligent, quick-reacting, and flexible. In order to guarantee uninterrupted uptime, it must also be designed for resilience with full redundancy. At DEIF, we recommend controlling hybrid eco-systems using an intelligent power management system (PMS) with interconnected controllers capable of handling all the different power sources, and of retaining system control even if a controller fails. We offer a complete range of compatible controllers that can be combined and connected as desired: AGC-4 Mk II genset and mains controllers and ASC-4 sustainable controllers, capable of handling renewables such as PV panels and turbines plus BESSes. When combined in an intelligent PMS, the controllers exchange information about the current load demand and the available power from the various sources. They ensure that the load demand is met according to your requirements, for example by prioritising renewables. They also allow you to carry out other tasks such as storing low-tariff mains power in a BESS for later use or running gensets at their optimal duty point. Advanced DEIF controllers require no programming.
They come with factory logic that has been proven in many different critical power applications, and they only need to be configured for the specific application. Designed for compatibility, they can quickly be reconfigured on the fly without any downtime, and the system can easily be expanded if more power sources are added to your hybrid eco-system as your requirements change.
Not all or nothing
This controller flexibility is great news if you were wondering if going hybrid is all or nothing at all. With a DEIF PMS, you can make the transition at your own pace, expanding or reconfiguring your hybrid eco-systems as needed. This means that getting started on the road to carbon neutrality is relatively easy. If you already have a reliable genset or mains-based power setup, for example, you can expand it with one or more renewables and DEIF controllers. The transition to carbon-neutral data center power needs to happen, but it does not need to happen all at once. With a reconfigurable, flexible, and intelligent DEIF PMS controlling your hybrid eco-system, you are free to chart your own course towards green data center operations. For more details, download DEIF’s free whitepaper, ‘Achieving carbon neutrality with hybrid eco-systems’ at deif.com.
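To illustrate the prioritisation principle described above (renewables first, then stored energy, then gensets or mains), here is a simplified dispatch sketch. It is not DEIF’s factory logic: the function, the figures, and the source names are hypothetical and exist only to show the idea of covering a load demand from the greenest sources first while never compromising uptime.

```python
def dispatch(load_kw, pv_kw, wind_kw, battery_kw_available, genset_capacity_kw):
    """Allocate a data center load across power sources, greenest first (illustrative only)."""
    plan = {}
    remaining = load_kw

    # 1) Use whatever renewable output is available right now.
    renewable = min(remaining, pv_kw + wind_kw)
    plan["renewables"] = renewable
    remaining -= renewable

    # 2) Cover the shortfall from the battery energy storage system (BESS).
    from_battery = min(remaining, battery_kw_available)
    plan["battery"] = from_battery
    remaining -= from_battery

    # 3) Back-fill the rest with gensets or mains so uptime is never compromised.
    plan["genset_or_mains"] = min(remaining, genset_capacity_kw)
    remaining -= plan["genset_or_mains"]

    # 4) Any surplus renewable power could recharge the BESS instead of being curtailed.
    plan["surplus_to_battery"] = max(0.0, (pv_kw + wind_kw) - renewable)

    if remaining > 0:
        raise RuntimeError("Load exceeds available capacity - add sources or shed load")
    return plan

# Example: an 800 kW load on a partly sunny, breezy afternoon (made-up numbers)
print(dispatch(load_kw=800, pv_kw=300, wind_kw=150,
               battery_kw_available=200, genset_capacity_kw=1000))
```

A real PMS runs this kind of decision continuously, reacts to changing renewable output within seconds, and adds redundancy so a single controller failure never interrupts the load.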
<urn:uuid:5880a7c3-42e1-4473-a9e0-566be6562c54>
CC-MAIN-2022-40
https://direct.datacenterdynamics.com/en/opinions/achieve-sustainable-uptime-with-hybrid-eco-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00574.warc.gz
en
0.933079
962
2.953125
3
With growing cyberattacks on small businesses, it has become imperative for you to get prepared to tackle any unforeseen security incident. And cybersecurity training is the best way to do so. In this article, we will explore various options for training in cybersecurity. What Is Cybersecurity Training and Why Is It So Important? Cybersecurity training, in simple terms, is educating and training individuals on how to safely use the company’s systems, networks, servers, or other components of IT infrastructure to minimize security risks.
<urn:uuid:e2adf232-0641-46dc-8bef-1e8786b8252a>
CC-MAIN-2022-40
https://managementcurated.com/organization-management/10-best-cybersecurity-training-options/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00774.warc.gz
en
0.925265
104
2.6875
3
Recently Ericsson and Qualcomm have begun promoting their state-of-the-art technology, LTE-U. Is it any better than LTE-A, which is increasingly available all over the world, including the US, Europe, Russia, China and so on? And what are these combinations of letters supposed to mean anyway? Let’s start with the name. The first abbreviation, LTE-A, stands for “LTE-Advanced”, or “advanced 4th gen mobile network.” Such networks have been rolling out all around the world. As for LTE-U, it is by no means some “LTE-Ultimate” or “LTE-Unbeatable,” as some may think. “U” stands for “Unlicensed” here. So, this mobile network technology relies on the use of the so-called “unlicensed” frequency spectrum. Now, what does “unlicensed” mean here? That’s quite simple: the majority of radio frequencies, including those used by mobile operators or radio stations, are licensed. Those frequencies are controlled by a government authority and can be used only by those to whom the license to transmit on this frequency was issued. For low-capacity civil transmitters, unlicensed spectrum is used: anyone can use those frequencies to transmit the radio signal. This makes complete sense: imagine you had to obtain permission for each RC toy you purchased for your child! The main requirement for those operating in this “free” radio frequency spectrum is a certain maximum limit on transmission power in order not to disturb the operation of other people’s appliances. For instance, the 27 MHz range is used by RC toys, 433 MHz is for walkie-talkies, and the 2.4 GHz and 5 GHz ranges are allocated for Wi-Fi routers. These ranges vary from country to country, which provokes certain compatibility issues. The concept of LTE-Unlicensed is based on the deployment of LTE networks using “free” frequency ranges. Of course, we are talking about low-power base stations, i.e. femtocells and picocells designed for indoor use. The more free frequencies they are able to “aggregate,” which means to bring together several parallel transmission channels into one, the higher the data rates would be. One rogue thought would start to bug you immediately: why would we need Wi-Fi then? Well, no one is doing away with Wi-Fi; quite the opposite, the technology continues to serve its intended purpose, which is being the foundation of small local wireless networks. Initially, this technology was used to build broadband access networks just because there was no other option. After all, Wi-Fi as such lacks several features which are critical for reliable broadband wireless networks: it does not deploy state-of-the-art functions to manage network efficiency when there is a high number of connections, and does not enable secure authorization or other things like carrier aggregation, which is the process of uniting several frequency ranges into a single transmission channel. LTE already has it all by design. Since the unlicensed spectrum consists of various bits and pieces residing in various frequency ranges, the ability to aggregate them all would enable higher bandwidth networks. While 5G networks are nowhere near, Wi-Fi might serve as the environment of choice to connect PCs, TVs, and other home appliances. Traffic-consuming devices, primarily smartphones and tablets, will belong to the LTE-U world. Moreover, to integrate those into home networks, the Link Aggregation technology was designed. It brings together LTE and Wi-Fi frequencies, forming a cumulative frequency “pool”, available to devices and supporting both wireless technologies.
This approach will help to balance the traffic between different networks or utilize both networks at once to facilitate faster data rates. And, of course, it will enable a seamless network swap without interrupting the current session. In other words, it looks like free roaming within your home network. LTE-U would be deployed quite similarly to existing 3G femtocells: a subscriber would need to purchase a specific femtocell and register it with his/her mobile operator. From the provider’s point of view, an encrypted VPN channel would be deployed, creating two separate logical connectivity channels working through a single physical cable. On a side note, LTE-U would not increase the speed of your home provider’s Internet connection. At the same time, mobile devices would use wireless connections more efficiently. However, the idea is not ideal: LTE-U deployment presupposes the service provider’s involvement, and those folks are not always proactivity champions. Many SPs used another trick to address the same challenge, i.e. offloading indoor connections, calling it Wi-Fi Offload. In other words, they rolled out massive carrier Wi-Fi networks, enabled seamless offload and SIM card based authorization, launched voice-over-Wi-Fi services (Wi-Fi Calling), etc. They invested a great deal of money into that infrastructure and have to get a return on their investment. When these investments are fully returned, 5G will have already been around for some time. On the other hand, those operators who have yet to tackle the problem of massive traffic growth face quite a dilemma: they would need to either invest in Wi-Fi Offload or pioneer LTE-U development, acknowledging the lack of support and the somewhat murky future of the newborn technology. Time will tell.
<urn:uuid:28db806c-c1c2-4d82-8c23-f87e9d99e997>
CC-MAIN-2022-40
https://www.kaspersky.com/blog/lte-unlicensed-explanation/9402/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00774.warc.gz
en
0.946427
1,195
2.90625
3
Butter is a dairy product traditionally made from cow’s milk, though many different varieties are available. It’s used in cooking and baking and can be added to many different dishes. It is high in saturated fat but also contains several important nutrients, including vitamins A and E. Butter is high in calories, which may contribute to weight gain if eaten in high amounts. Butter also contains CLA, a type of fat that may have cancer-fighting properties, help reduce body fat, and improve immune function. Butter contains butyrate, a type of fat that may improve digestive health, decrease inflammation, and support weight control, according to human and animal studies. Though saturated fat may not be linked to a higher risk of heart disease, replacing it with polyunsaturated fat is associated with a lower risk of cardiovascular events. For more tips, follow our Today’s Health Tip listing.
<urn:uuid:6939f4d4-26de-4f07-ba94-b519c3c37e07>
CC-MAIN-2022-40
https://areflect.com/2019/08/05/todays-health-tip-benefits-of-butter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00774.warc.gz
en
0.940924
185
2.953125
3
What is a KVM extender? KVM stands for “keyboard, video, and mouse," and a KVM extender is basically a device that extends these interfaces and enables remote access to a computer over distances from a few feet up to several miles, or even over the Internet. A KVM extender unit consists of a transmitter device, sometimes called “local unit,” and a receiver device, also called “remote unit.” These devices can be connected over either CATx copper or fiber cable, and the newest technology can even extend signals over a standard IP network. You can connect your PC to a transmitter at work and plug in the receiver at your home office and work at your computer, just like you would with a direct connection. What are the interfaces being used? The video interface is usually either DVI or HDMI on most modern devices, while older computers might be equipped with VGA only. In the past, keyboard and mouse were always separate interfaces and were using a PS2 6-pin mini-DIN connector. That technology has been almost completely phased out, and now USB is the standard connector. It usually doesn’t matter where you plug in your mouse or keyboard, as long as it is a USB port. Other optional interfaces that can be supported are audio and RS-232. Why would anyone need a KVM extender? Computer fans are loud, and computer CPUs have fans for cooling because they generate a lot of heat. Plus, they take up a lot of space. None of these features is ideal in an office environment. By using a KVM extender, CPUs can be backracked in a server room in a temperature-controlled environment. All the user needs is a tiny receiver unit on the desk where the keyboard, video display, and mouse would be connected. An industrial environment has different challenges. The work environment might be dusty or dirty—areas where regular CPUs with fans will not last long. The fans will pull the dirt into the cabinet, clogging it up and causing the computer to overheat. By using a KVM extender, the PC can be relocated to a cleaner environment, and the keyboard, video, and mouse workstation can be connected to a remote KVM unit that is fanless. These are just two examples of how KVM extenders are being used, but the variety of applications for KVM extenders is extensive. Learn more at Blackbox.com/KVM-Extenders or check out our KVM Extenders Selector.
<urn:uuid:58688f79-2ca5-4482-8790-0f776f78bcd5>
CC-MAIN-2022-40
https://www.blackbox.com/en-be/insights/blogs/detail/technology/2022/01/11/use-kvm-extenders-for-a-better-work-environment
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00774.warc.gz
en
0.951752
519
3.046875
3
Everyone is familiar with the USB ports on laptops, printers, smartphones, and tablets and most of the functions they are capable of performing. However, the USB ports on KVM switches are not always the same as the ones you find on more common devices. Learning what each USB port on a desktop KVM switch can do is important if you are planning to purchase a KVM switch in the near future. In this article, I’ll detail the different USB ports on desktop KVM switches so you will know exactly what you need for your application. When looking at the common desktop KVM switch, you will notice there are usually four USB Type A ports for your user peripherals and one USB Type B port for every computer port. In most cases, two of the four USB Type A ports are for USB HID devices. These USB HID ports support 1.5 Mbps or 12 Mbps for human interface devices only. The other two USB Type A ports are USB 2.0/3.0 ports for your pass-through USB devices. These ports support 12 Mbps, 36 Mbps, 480 Mbps, 5 Gbps, and now even 10 Gbps on USB 3.1 Gen 2. What are the other key differences between USB HID and USB 2.0/3.0 ports? USB HID ports on a KVM switch typically have a Keyboard/Mouse icon next to them. These ports only understand basic USB input devices, like keyboards and mice. HID ports are special because the KVM switch monitors them in order to execute user hotkey commands (for example, switching ports by pressing Ctrl-Ctrl-#). The USB HID interface cannot process anything unique, and the chip that drives this circuit isn’t as smart as a laptop or computer USB host controller with drivers, so the input devices need to be simple. What do I mean by simple? I am referring to keyboards and mice that only have standard keys. USB HID ports do not support advanced devices like keyboards and mice with gaming or function keys, built-in USB hubs, smart card readers, LCD screens, devices that require drivers, or peripherals that consume more than 120 mA of power. Additionally, flash drives, external disk drives, USB headsets, and similar devices will not work in USB HID ports. These peripherals require faster speeds and a pass-through connection to the target computer to operate. The USB 2.0/3.0 ports on a KVM device only switch the USB peripherals from PC to PC. They feature faster speeds than USB HID ports and support peripherals that require drivers as long as the PC has the necessary drivers loaded. These USB 2.0/3.0 ports also support USB HID devices, like keyboards and mice, and external disk drives or flash drives. But when switching ports, make sure you are not copying or transferring data to these storage devices. If you do, the KVM switch will disconnect and reconnect, making your disk drive unusable until you format it again. The main difference between USB HID and USB 2.0/3.0 ports is the devices they can control. You should plug your keyboard and mouse into the USB HID ports and connect your other peripherals to the USB 2.0/3.0 ports. If you want to use a gaming keyboard and mouse, and they aren’t working on the USB HID ports, connect them to the USB 2.0/3.0 ports and see if they work. You will lose keyboard and mouse hotkeys, but you can still switch devices by using the pushbuttons. Higher-end KVM switches operate the same as desktop KVM switches. Their transmitters and receivers have options for USB HID and USB 2.0 ports.
But some of these commercial-grade systems do not highlight the fact that their ports are USB HID or USB 2.0, so you really need to look at the specification to ensure you’re purchasing a product that supports your application. Still not sure which KVM switch you need? Give our technical support team a call today. They are happy to answer any questions you have about KVM switches and can recommend products that will work perfectly in your environment. You can also view Black Box KVM solutions here.
About the Author
Garrett Swindell has 20+ years’ experience programming, implementing server-to-client communications, and designing intricate control systems. As a product engineer, his primary focus is developing connections between users and computers/servers through the use of hardware and software. Garrett assists local and international projects from start to finish with compliance regulations and performs product compliance testing with recognized test houses.
<urn:uuid:8b8c8cdd-7b4c-4725-82ca-c0ef0f4bc9f3>
CC-MAIN-2022-40
https://www.blackbox.com/en-se/insights/blogs/detail/technology/2021/02/24/understanding-kvm-usb-technology
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00774.warc.gz
en
0.934996
965
2.625
3
The Internet of Things (IoT) is the term used to collectively refer to networked devices that monitor and interact with the environment and each other. IoT usually refers to devices beyond traditional IT infrastructure such as servers, PCs, tablets, phones, and networking equipment. Strictly speaking, though, all of these are networked devices with sensors and software that monitor and interact with their environments, so they too can be classed as part of the IoT. Devices more typically referred to as IoT include those with sensors that detect environmental changes; they can have cameras, accelerometers, or other task-specific electronics. All IoT devices collect and share data. Most are networked to share data in real time, often via IoT Gateways, but some collect and store data for later retrieval if used in places where networking connectivity is poor. However, the vast majority of IoT devices don’t tend to need any human interaction, and this is a significant change from the pre-IoT Internet, where people using devices provided most of the data created. Use of IoT is accelerating across many sectors at a phenomenal rate, from consumer devices and appliances to the Industrial Internet, where IoT devices are being used to streamline work in manufacturing, engineering, health & safety, remote management, and other scenarios. It’s unlikely that any area of business will be untouched by the growth of IoT over the next decade. Gartner estimates that 6.3 billion IoT devices will be in use by the end of 2017, with 63% of that total in consumer devices. They report that, of the remaining 37% used for business, 17% will be generic devices and 20% will be vertical business-specific devices. By 2020 they predict that 20.4 billion IoT devices will be in use, with consumer devices still accounting for 63%, while business usage will shift to 21.5% generic IoT devices and 15.5% vertical business-specific devices. Gartner estimates this business use of IoT devices will be worth $1.4 trillion in 2020 for hardware alone, with associated services to deploy and manage the IoT infrastructure running into billions. The rise of IoT devices brings with it an increase in network complexity and in the amount of data that is collected and moved over the Internet. All the IoT devices collecting data need to send it to back-end systems to be stored, analyzed, and integrated. Increasingly this is done by Cloud-based infrastructure based on Microsoft Azure, Amazon AWS, or another public or private Cloud service. This influx of data introduces bandwidth problems that need to be solved, availability issues as servers are sent all the data, and also security issues. Currently, many IoT devices are not built with robust security protocols. Many recent DDoS attacks have used insecure IoT devices to flood websites and web applications (for example, the Mirai botnet attack of 21st October 2016). These issues with IoT devices can be addressed using tools that are available today. KEMP LoadMaster can sit between IoT devices out in the world and the Cloud- or private data center-based application infrastructure that will analyze and use the data they collect. Our white paper, LoadMaster - Powering the Internet of Things, explains how this is done and is available here.
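To make the device-to-back-end data flow described above concrete, the fragment below sketches how a small sensor node might push readings toward a cloud ingestion endpoint, either directly or via an IoT gateway. The URL, device ID, and payload fields are hypothetical, and a production device would add authentication, TLS certificate handling, and the local buffering needed for the poor-connectivity case mentioned earlier.

```python
import json
import time
import urllib.request

GATEWAY_URL = "https://iot-gateway.example.com/ingest"  # placeholder endpoint

def read_temperature_c():
    # Placeholder for real sensor I/O (for example, an I2C or 1-Wire probe).
    return 21.7

def publish(reading):
    """POST one JSON reading to the back-end ingestion endpoint."""
    body = json.dumps(reading).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        reading = {"device_id": "sensor-042",
                   "temp_c": read_temperature_c(),
                   "ts": int(time.time())}
        try:
            publish(reading)
        except OSError:
            pass  # a real device would queue the reading and retry when connectivity returns
        time.sleep(300)  # report every five minutes
```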
<urn:uuid:e4cfedbe-f231-4377-a05b-b5117f939dee>
CC-MAIN-2022-40
https://kemptechnologies.com/glossary/internet-things-iot
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00774.warc.gz
en
0.943568
661
3.03125
3
November 1, 2013 | Written by: IBM Research Editorial Staff
Editor’s note: This article is by IBM Research scientists Jeehwan Kim and Hongsik Park, who work in carbon electronics.
[Photo: Dr. Jeehwan Kim]
Graphene holds promise as the linchpin material for breakthroughs in everything from high-frequency transistors and photo-detectors to flexible electronics and biosensors because of its supreme electrical, optical and mechanical properties. And while some labs, IBM included, have been able to demonstrate its abilities in high-speed transistors, the performance is still limited by the quality and size of graphene. The extremely high fabrication cost for high-quality graphene is also another hurdle for many of these potential applications. Our team at IBM’s Thomas J. Watson Research Center is getting close to making this feasible, as we are the first to successfully develop a reproducible technique to fabricate single-oriented, single-layer graphene at wafer scale.
[Photo: Dr. Hongsik Park]
Two-dimensional semiconductor materials need to be a perfect single layer so their unique properties can be harnessed. In our paper “Layer-Resolved Graphene Transfer via Engineered Strain Layers” – published in Science this month – we used the idea that every element in the periodic table has a different adhesion (atomic binding energy) to graphene in order to produce single-oriented, single-layered graphene at wafer scale.
From sub-millimeter flakes to 4-inch wafers
The realization of large-scale oriented graphene has been a hot research topic for more than a decade. But since the first demonstrations of removing graphene from graphite, by using the so-called “scotch tape method” of peeling away the graphene, the size of single-oriented graphene has been limited to less than a square millimeter – too small for real world use. Our new techniques, though, show how it’s possible to break through this decade-old barrier and make 100 millimeter diameter wafer-scale sheets of graphene (4 inches). To do this, we exfoliate the graphene twice. The first exfoliation separates graphene from a silicon carbide (SiC) substrate by using a stressed nickel layer. Once this step is completed, we perform a second exfoliation that removes any graphene in excess of a single layer by using a thin gold layer – thus leaving only single-layer, single-oriented graphene. IBM Research’s earlier work on a graphene transistor did produce the world’s fastest graphene transistor (reaching 100 GHz) by using graphene formed on SiC. But the graphene was restricted to an expensive SiC substrate in the lab previously, unable to be cost-effectively mass produced. Our technique sets the graphene free. Just being able to reproduce the separation of graphene from one SiC wafer over and over again will immediately lower the cost, as the graphene won’t depend on the SiC substrate for use in transistors. Also, the active graphene device area, when on SiC, is restricted to just a few micrometers because multiple layers of graphene impede performance. Now, we can “grow” single-layer graphene to much larger, usable, sizes.
According to Nature’s 2012 “Graphene Roadmap,” current transistor performance will top off in the year 2021, but the progress of graphene in the next generation of transistors could not only pick up where silicon will leave off, it could deliver up to 1 THz of high-speed compute operation over the next decade.
<urn:uuid:437b6312-eb69-441a-a63e-72c328b4f613>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2013/11/exfoliating-wafer-scale-graphene-down-to-one-layer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00774.warc.gz
en
0.9215
853
3.390625
3
The term open source firewall is becoming more common in today’s digital world, as companies search for more flexible and convenient ways to implement high-level security in order to ensure business continuity. A firewall, as most companies know, is a software barrier used to block traffic and access that could harm a system. Open-source firewall software combines the benefits of firewall technology with the “open-source” movement. With an open-source solution, companies can download software and code for free, and make changes to the source programming, with the right developer knowledge. What is an Open Source Firewall? An open source firewall is a flexible security mechanism designed to act as a barrier between internal and external networks. In an open-source landscape, companies can download and access firewall technology maintained by a growing developer community. Access to the core code of the firewall ensures companies can adjust their security settings according to their needs. While open-source firewalls are more common among Linux and other open-source operating systems, they are available for Windows, Mac, and Android devices too. Some of the most common options include: Open Source Firewall For Windows Products like Perimeter 81 and Clear OS offer open-source firewall access for Windows users, with a range of add-on applications configurable through a web-based interface. Open Source Firewall For Mac Mac users have access to various open-source firewalls like “Lulu”, a shared-source firewall capable of blocking every unknown connection your business doesn’t approve. Other options include Vallum, Netmine, and Glassware. Open Source Firewall For Android NetGuard is one of the most popular open-source firewall options for Android. There are only a handful of open-source options available on the Android landscape today. Open Source Firewall For Linux Open-source firewalls are particularly common in the Linux environment. Most of the top options work with Linux operating systems, including IPFire, pfSense, OPNSense, and Endian firewall. Confused about which open source firewall will be the best for you? Check out our Top 10 list of Open Source Firewalls for 2022. Why and When is an Open Source Firewall Needed? A firewall is an important addition to any security strategy in today’s unpredictable digital world. The right firewall technology can monitor network traffic, prevent virus attacks, minimise the risk of hacking and stop spyware from infiltrating a company. However, many common firewall options are somewhat restrictive and inflexible for the needs of modern companies. Open-source software offers an economical and adjustable way to implement custom security strategies. The Biggest Benefits of an Open Source Firewall Solution Include: - Affordability: Open-source licensing is free to use, though you may need to pay for security hardening, support, and assistance with management. - Reliability: Because open-source code is constantly updated and managed by vast communities of developers, it tends to outlive its original authors. - Flexibility: Open-source software can be modified to address problems unique to your business. - Open collaboration: Because open-source communities are very active, you can often find assistance easily with any issues you might have.
- No vendor lock-in: You’re not limited to support from a specific vendor for your technology; you can take your code wherever you like. How to set up an Open Source Firewall? The exact process of setting up an open-source firewall will vary depending on the software or solution you choose. One of the most common options is pfSense, which provides a very simple configuration and set-up process. To implement an open source firewall like pfSense onto your device, you’ll need to download the installer from the website, making sure you get the version that matches your hardware and installation media. Boot your device after the installation and wait until the device displays the software license screen. You’ll need to accept the license terms, choose your keyboard layout, and click continue. Many tools like pfSense automatically partition the disk and move to installation, after which point you can configure the console. Follow these steps to complete the setup: - Connect your network: Start an “auto-detection” for the WAN interface and follow the instructions on-screen, connecting the cable when required. Make sure you physically label the interfaces too. Once you have the LAN and WAN interfaces identified, click “y” to continue. You’ll be able to adjust your IP address if necessary. - Run the configuration wizard: Using your web browser, go to the LAN IPv4 address you configured in the previous step, and log in with the username “admin” and the default password “pfsense”. The initial setup wizard will begin. Enter the chosen name for your firewall, and move through each stage of the wizard. You’ll be able to set up your WAN interface, choose your time zone, and set a new admin name and password. Click on “reload” to apply any changes to your device. - Implement IPv6 options: If your ISP also offers IPv6, you’ll need to set up the WAN interface to match the settings provided by the ISP. Select the pull-down menu from the top menu bar and click on the “WAN” interface. You’ll need to set up IPv6 on your LAN interface too, and many open-source solutions like pfSense offer a range of configurations to choose from. - Establish local network services: From the menu bar on your admin page, click on the “DHCP” server, and click “Enable” to switch it on for your LAN interface. You’ll then be able to configure the range of IPv4 addresses allocated to your devices. Leave the DNS and WINS server options unset, then scroll to the bottom of the page to hit “save”. - Additional configuration: If you need VPN links to a cloud provider, or to other locations, you’ll also be able to set these up as needed. Additional services are usually also available from open-source providers, like traffic prioritization, load balancing, web filtering, and multiple internet connections. Can an Open Source Firewall Prevent Malware? When setting up an open source firewall solution, it’s important to be aware of what these services can actually do. Firewalls often come in two different formats: Host-Based and Network firewalls. Network firewalls are often used by companies containing a huge range of computers, servers, and different users, and the firewall monitors the communications between them. Aside from tracking employee behaviour and restricting access to certain websites, this type of firewall can also safeguard the sensitive internal data of the business, such as employee information and customer databases. Host-based firewalls work in a similar way but are stored on a single computer.
Host-based firewalls are commonly recommended for business computers not protected by a network firewall. These tools are easy to install and can protect against various attacks, including email viruses, malware, and malicious cookies. While open-source firewalls can be a great tool to ensure high-level security, you must also focus on regression testing in order to ensure business continuity.
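To make the filtering concept above more concrete, here is a minimal, hypothetical Python sketch of the kind of rule evaluation a host-based firewall performs. It is purely illustrative: real firewalls such as pfSense filter live packets inside the operating system's network stack, and the rule format, field names, and example traffic below are assumptions made for this sketch rather than part of any actual product.

# Toy illustration of host-based firewall rule matching (not a real firewall).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str            # "allow" or "deny"
    protocol: str          # "tcp", "udp", or "any"
    port: Optional[int]    # destination port, or None for any port
    source: str            # source address prefix, or "any"

# Rules are evaluated top to bottom; the first match wins.
RULES = [
    Rule("allow", "tcp", 443, "any"),       # HTTPS from anywhere
    Rule("allow", "tcp", 22, "10.0.0."),    # SSH only from the LAN
    Rule("deny", "any", None, "any"),       # default deny
]

def decide(protocol: str, port: int, source_ip: str) -> str:
    """Return 'allow' or 'deny' for a single incoming connection attempt."""
    for rule in RULES:
        if rule.protocol not in ("any", protocol):
            continue
        if rule.port is not None and rule.port != port:
            continue
        if rule.source != "any" and not source_ip.startswith(rule.source):
            continue
        return rule.action
    return "deny"  # nothing matched: fail closed

if __name__ == "__main__":
    print(decide("tcp", 443, "203.0.113.7"))  # allow
    print(decide("tcp", 22, "203.0.113.7"))   # deny (SSH from outside the LAN)
    print(decide("udp", 53, "10.0.0.5"))      # deny (no matching allow rule)

Many rule-based firewalls evaluate their rule lists in a similar first-match-wins order with a default-deny fallback, which is why rule ordering matters when you configure them.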
<urn:uuid:b620dcaa-44d9-4398-a3db-a4a24a771493>
CC-MAIN-2022-40
https://em360tech.com/tech-article/open-source-firewall
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00174.warc.gz
en
0.899815
1,732
2.53125
3
What are microservices? Microservices (or microservices architecture) are a cloud-native architectural approach in which a single application is composed of many loosely coupled and independently deployable smaller components, or services. These services typically - have their own technology stack, inclusive of the database and data management model; - communicate with one another over a combination of REST APIs, event streaming, and message brokers; and - are organized by business capability, with the line separating services often referred to as a bounded context. While much of the discussion about microservices has revolved around architectural definitions and characteristics, their value can be more commonly understood through fairly simple business and organizational benefits: - Code can be updated more easily - new features or functionality can be added without touching the entire application. - Teams can use different stacks and different programming languages for different components. - Components can be scaled independently of one another, reducing the waste and cost associated with having to scale entire applications because a single feature might be facing too much load. Microservices might also be understood by what they are not. The two comparisons drawn most frequently with microservices architecture are monolithic architecture and service-oriented architecture (SOA). The difference between microservices and monolithic architecture is that microservices compose a single application from many smaller, loosely coupled services, as opposed to the monolithic approach of a large, tightly coupled application. For more on the differences between microservices and monolithic architecture, watch this video (6:37). The differences between microservices and SOA can be a bit less clear. While technical contrasts can be drawn between microservices and SOA, especially around the role of the enterprise service bus (ESB), it’s easier to consider the difference as one of scope. SOA was an enterprise-wide effort to standardize the way all web services in an organization talk to and integrate with each other, whereas microservices architecture is application-specific. The post "SOA vs. Microservices: What's the Difference?" goes into further detail. How microservices benefit the organization Microservices are likely to be at least as popular with executives and project leaders as with developers. This is one of the more unusual characteristics of microservices because architectural enthusiasm is typically reserved for software development teams. The reason for this is that microservices better reflect the way many business leaders want to structure and run their teams and development processes. Put another way, microservices are an architectural model that better facilitates a desired operational model. In a recent IBM survey of over 1,200 developers and IT executives, 87% of microservices users agreed that microservices adoption is worth the expense and effort. Here are just a few of the enterprise benefits of microservices. Perhaps the single most important characteristic of microservices is that because the services are smaller and independently deployable, it no longer requires an act of Congress in order to change a line of code or add a new feature to an application.
Microservices promise organizations an antidote to the visceral frustrations associated with small changes taking huge amounts of time. It doesn’t require a Ph.D. in computer science to see or understand the value of an approach that better facilitates speed and agility. But speed isn’t the only value of designing services this way. A common emerging organizational model is to bring together cross-functional teams around a business problem, service, or product. The microservices model fits neatly with this trend because it enables an organization to create small, cross-functional teams around one service or a collection of services and have them operate in an agile fashion. Microservices' loose coupling also builds a degree of fault isolation and better resilience into applications. And the small size of the services, combined with their clear boundaries and communication patterns, makes it easier for new team members to understand the code base and contribute to it quickly—a clear benefit in terms of both speed and employee morale. Right tool for the job In traditional n-tier architecture patterns, an application typically shares a common stack, with a large, relational database supporting the entire application. This approach has several obvious drawbacks—the most significant of which is that every component of an application must share a common stack, data model and database even if there is a clear, better tool for the job for certain elements. It makes for bad architecture, and it’s frustrating for developers who are constantly aware that a better, more efficient way to build these components is available. By contrast, in a microservices model, components are deployed independently and communicate over some combination of REST, event streaming and message brokers—so it’s possible for the stack of every individual service to be optimized for that service. Technology changes all the time, and an application composed of multiple, smaller services is much easier and less expensive to evolve with more desirable technology as it becomes available. With microservices, individual services can be individually deployed—but they can be individually scaled, as well. The resulting benefit is obvious: Done correctly, microservices require less infrastructure than monolithic applications because they enable precise scaling of only the components that require it, instead of the entire application in the case of monolithic applications. There are challenges, too Microservices' significant benefits come with significant challenges. Moving from monolith to microservices means a lot more management complexity - a lot more services, created by a lot more teams, deployed in a lot more places. Problems in one service can cause, or be caused by, problems in other services. Logging data (used for monitoring and problem resolution) is more voluminous, and can be inconsistent across services. New versions can cause backward compatibility issues. Applications involve more network connections, which means more opportunities for latency and connectivity issues. A DevOps approach (as you'll read below) can address many of these issues, but DevOps adoption has challenges of its own. Nevertheless, these challenges aren't stopping non-adopters from adopting microservices - or adopters from deepening their microservices commitments. 
New IBM survey data reveals that 56% of current non-users are likely or very likely to adopt microservices within the next two years, and 78% of current microservices users will likely increase the time, money and effort they've invested in microservices (see Figure 1). Figure 1: Microservices are here to stay. Within the next two years, 56% of non-users are likely to adopt microservices, 78% of users will increase their investment in microservices, and 59% of applications will be created with microservices. (Source: 'Microservices in the enterprise 2021: Real benefits, worth the challenges.') Microservices both enable, and require, DevOps Microservices architecture is often described as optimized for DevOps and continuous integration/continuous delivery (CI/CD), and in the context of small services that can be deployed frequently, it’s easy to understand why. But another way of looking at the relationship between microservices and DevOps is that microservices architectures actually require DevOps in order to be successful. While monolithic applications have a range of drawbacks that have been discussed earlier in this article, they have the benefit of not being a complex distributed system with multiple moving parts and independent tech stacks. In contrast, given the massive increase in complexity, moving parts and dependencies that come with microservices, it would be unwise to approach microservices without significant investments in deployment, monitoring and lifecycle automation. Andrea Crawford provides a deeper dive on DevOps in her video overview. Key enabling technologies and tools While just about any modern tool or language can be used in a microservices architecture, there are a handful of core tools that have become essential and borderline definitional to microservices: Containers, Docker, and Kubernetes One of the key elements of a microservice is that it’s generally pretty small. (There is no arbitrary amount of code that determines whether something is or isn’t a microservice, but “micro” is right there in the name.) When Docker ushered in the modern container era in 2013, it also introduced the compute model that would become most closely associated with microservices. Because individual containers don’t have the overhead of their own operating system, they are smaller and lighter weight than traditional virtual machines and can spin up and down more quickly, making them a perfect match for the smaller and lighter weight services found within microservices architectures. With the proliferation of services and containers, orchestrating and managing large groups of containers quickly became one of the critical challenges. Kubernetes, an open source container orchestration platform, has emerged as one of the most popular orchestration solutions because it does that job so well. In the video "Kubernetes Explained," Sai Vennam gives a comprehensive view of all things Kubernetes. API gateways Microservices often communicate via API, especially when first establishing state. While it’s true that clients and services can communicate with one another directly, API gateways are often a useful intermediary layer, especially as the number of services in an application grows over time. An API gateway acts as a reverse proxy for clients by routing requests, fanning out requests across multiple services, and providing additional security and authentication.
There are multiple technologies that can be used to implement API gateways, including API management platforms, but if the microservices architecture is being implemented using containers and Kubernetes, the gateway is typically implemented using Ingress or, more recently, Istio. Messaging and event streaming While best practice might be to design stateless services, state nonetheless exists and services need to be aware of it. And while an API call is often an effective way of initially establishing state for a given service, it’s not a particularly effective way of staying up to date. A constant polling, “are we there yet?” approach to keeping services current simply isn’t practical. Instead, it is necessary to couple state-establishing API calls with messaging or event streaming so that services can broadcast changes in state and other interested parties can listen for those changes and adjust accordingly. This job is likely best suited to a general-purpose message broker, but there are cases where an event streaming platform, such as Apache Kafka, might be a good fit. And by combining microservices with event driven architecture developers can build distributed, highly scalable, fault tolerant and extensible systems that can consume and process very large amounts of events or information in real-time. Serverless architectures take some of the core cloud and microservices patterns to their logical conclusion. In the case of serverless, the unit of execution is not just a small service, but a function, which can often be just a few lines of code. The line separating a serverless function from a microservice is a blurry one, but functions are commonly understood to be even smaller than a microservice. Where serverless architectures and Functions-as-a-Service (FaaS) platforms share affinity with microservices is that they are both interested in creating smaller units of deployment and scaling precisely with demand. Microservices and cloud services Microservices are not necessarily exclusively relevant to cloud computing but there are a few important reasons why they so frequently go together—reasons that go beyond microservices being a popular architectural style for new applications and the cloud being a popular hosting destination for new applications. Among the primary benefits of microservices architecture are the utilization and cost benefits associated with deploying and scaling components individually. While these benefits would still be present to some extent with on-premises infrastructure, the combination of small, independently scalable components coupled with on-demand, pay-per-use infrastructure is where real cost optimizations can be found. Secondly, and perhaps more importantly, another primary benefit of microservices is that each individual component can adopt the stack best suited to its specific job. Stack proliferation can lead to serious complexity and overhead when you manage it yourself but consuming the supporting stack as cloud services can dramatically minimize management challenges. Put another way, while it’s not impossible to roll your own microservices infrastructure, it’s not advisable, especially when just starting out. 
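To ground these ideas in something concrete, here is a minimal, hypothetical sketch of a single microservice written in Python with Flask. The service name ("inventory"), its endpoints, and its in-memory data store are illustrative assumptions rather than anything from the article or an IBM reference implementation, but the sketch shows the basic shape of a small, independently deployable service that owns its own data and exposes a narrow REST API that an API gateway or another service could call.

# Hypothetical "inventory" microservice: one bounded context, one narrow REST API.
# Requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real deployment each service would own its own database;
# a dict stands in for that store here.
stock = {"widget": 12, "gadget": 3}

@app.get("/health")
def health():
    # Liveness endpoint an orchestrator (for example, Kubernetes) could probe.
    return jsonify(status="ok")

@app.get("/items/<name>")
def get_item(name):
    if name not in stock:
        return jsonify(error="unknown item"), 404
    return jsonify(item=name, quantity=stock[name])

@app.post("/items/<name>/reserve")
def reserve(name):
    # Another service (say, a hypothetical "orders" service) would call this
    # endpoint over HTTP instead of sharing the inventory database directly.
    count = int(request.args.get("count", 1))
    if stock.get(name, 0) < count:
        return jsonify(error="insufficient stock"), 409
    stock[name] -= count
    return jsonify(item=name, remaining=stock[name])

if __name__ == "__main__":
    app.run(port=5001)

Scaling this one service independently, or rewriting it on a different stack, would not require touching any other service, which is the property described above.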
Within microservices architectures, there are many common and useful design, communication, and integration patterns that help address some of the more common challenges and opportunities, including the following: - Backend-for-frontend (BFF) pattern: This pattern inserts a layer between the user experience and the resources that experience calls on. For example, an app used on a desktop will have different screen size, display, and performance limits than a mobile device. The BFF pattern allows developers to create and support one backend type per user interface using the best options for that interface, rather than trying to support a generic backend that works with any interface but may negatively impact frontend performance. - Entity and aggregate patterns: An entity is an object distinguished by its identity. For example, on an e-commerce site, a Product object might be distinguished by product name, type, and price. An aggregate is a collection of related entities that should be treated as one unit. So, for the e-commerce site, an Order would be a collection (aggregate) of products (entities) ordered by a buyer. These patterns are used to classify data in meaningful ways. - Service discovery patterns: These help applications and services find each other. In a microservices architecture, service instances change dynamically due to scaling, upgrades, service failure, and even service termination. These patterns provide discovery mechanisms to cope with this transience. Load balancing may use service discovery patterns by using health checks and service failures as triggers to rebalance traffic. - Adapter microservices patterns: Think of adapter patterns in the way you think of plug adapters that you use when you travel to another country. The purpose of adapter patterns is to help translate relationships between classes or objects that are otherwise incompatible. An application that relies on third-party APIs might need to use an adapter pattern to ensure the application and the APIs can communicate. - Strangler application pattern: These patterns help manage refactoring a monolithic application into microservices applications. The colorful name refers to how a vine (microservices) slowly and over time overtakes and strangles a tree (a monolithic application). You can learn more about these patterns in "How to use development patterns with microservices (part 4)." While there are many patterns for doing microservices well, there are an equally significant number of patterns that can quickly get any development team into trouble. Some of these—rephrased as microservices “don’ts”—are as follows: - The first rule of microservices is, don’t build microservices: Stated more accurately, don’t start with microservices. Microservices are a way to manage complexity once applications have gotten too large and unwieldly to be updated and maintained easily. Only when you feel the pain and complexity of the monolith begin to creep in is it worth considering how you might refactor that application into smaller services. Until you feel that pain, you don’t even really have a monolith that needs refactoring. - Don’t do microservices without DevOps or cloud services: Building out microservices means building out distributed systems, and distributed systems are hard (and they are especially hard if you make choices that make it even harder). 
Attempting to do microservices without either a) proper deployment and monitoring automation or b) managed cloud services to support your now sprawling, heterogeneous infrastructure is asking for a lot of unnecessary trouble. Save yourself the trouble so you can spend your time worrying about state. - Don’t make too many microservices by making them too small: If you go too far with the “micro” in microservices, you could easily find yourself with overhead and complexity that outweighs the overall gains of a microservice architecture. It’s better to lean toward larger services and then only break them apart when they start to develop characteristics that microservices solve for—namely that it’s becoming hard and slow to deploy changes, a common data model is becoming overly complex, or that different parts of the service have different load/scale requirements. - Don’t turn microservices into SOA: Microservices and service-oriented architecture (SOA) are often conflated with one another, given that at their most basic level, they are both interested in building reusable individual components that can be consumed by other applications. The difference between microservices and SOA is that microservices projects typically involve refactoring an application so it’s easier to manage, whereas SOA is concerned with changing the way IT services work enterprise-wide. A microservices project that morphs into an SOA project will likely buckle under its own weight. - Don’t try to be Netflix: Netflix was one of the early pioneers of microservices architecture when building and managing an application that accounted for one-third of all Internet traffic—a kind of perfect storm that required them to build lots of custom code and services that are unnecessary for the average application. You’re much better off starting with a pace you can handle, avoiding complexity, and using as many off-the-shelf tools as possible. Tutorials: Build microservices skills If you're ready to learn more about how to use microservices, or if you need to build on your microservices skills, try one of these tutorials: - Introduction to microservices - Quick lab: Create highly scalable web application microservices with Node.js - Get started with Java microservices using Spring Boot and Cloudant - Create, run, and deploy Spring microservices in five minutes - Microservices, SOA, and APIs: Friends or enemies? Microservices and IBM Cloud Microservices enable innovative development at the speed of modern business. Learn how to leverage the scalability and flexibility of the cloud by deploying independent microservices into cloud environments. See what it would be like to modernize your applications with help from IBM. Take the next step: - Free your development teams by relying on automated iteration with help from IBM cloud native development tools. - Learn more about managed Kubernetes by getting started with Red Hat OpenShift on IBM Cloud or the IBM Cloud Kubernetes Service. Also, check out IBM Cloud Code Engine for more about serverless computing. - Microservices are just as much about team process and organization as technology. Strategically plan your DevOps approach with help from IBM DevOps. - Try IBM Instana® Observability (link resides outside ibm.com) for application performance monitoring. Get started with an IBM Cloud account today.
<urn:uuid:a796fc5d-ef93-481e-ae83-d9e1883179d0>
CC-MAIN-2022-40
https://www.ibm.com/cloud/learn/microservices
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00174.warc.gz
en
0.938713
3,903
3.28125
3
Coronavirus Email Scams The recent coronavirus outbreak has motivated cybercriminals to send virus-related malware attacks across the world. Phishing emails claiming to possess information on protecting against the virus have appeared, spreading misinformation and malicious software. These emails encourage victims to open attached documents containing malware that can freeze systems or steal valuable data. Scammers use fear and uncertainty to manipulate victims into infecting their computers with malware. However, incorporating tragic events, potential pandemics or natural disasters into their attacks is nothing new. Beware of Phishing After Any Big Event Attackers customize phishing emails to current or upcoming events like tax season, hurricane season, and holidays. Regardless of the occasion, the goal is the same: to access valuable information. The attacks prey on people’s desperation for answers and suggest that the attacker can provide them. Furthermore, there have been cases of scams emerging in places like Michigan and New York. Officials in these states are warning residents to be vigilant of emails asking for donations or personal payment card information. Coronavirus scam emails were popping up in early February, which prompted Michigan’s Department of Health and Human Services to warn citizens of their dangers. The Federal Trade Commission even sent out a memorandum advising people on how to spot email scams and stay safe online. Additionally, the FTC says cyber criminals could be setting up fraudulent websites that sell fake products, using illegitimate emails, social media posts and texts to trick people into sending them money or personal information. Common attributes of a fake email are spelling and/or grammar errors. If you receive a suspicious link, hover your cursor over it to view the destination url. Protecting Against Coronavirus Phishing Scams Here are some tips recommended by the FTC to keep safe against scammers: 1) Be suspicious of emails claiming to be from the Centers for Disease Control and Prevention (CDC) or anyone purporting to be an “expert” with information on the virus. 2) Avoid emails that allude to any “investment opportunities.” Social scams promoting products that claim to cure, detect, treat or prevent the disease are fake. 3) If you’re going to donate, do the proper research into the organization and payment method. Don’t be pressured to donate, especially if it’s through an email link. 4) Ignore offers for vaccinations. Ads that say they have the cure or treatment for coronavirus are probably scams. Any medical breakthrough will be announced on mainstream media networks. 5) For up-to-date information on the virus, visit the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). Don’t Be Misled These scams will continue to spread and they won’t go away any time in the near future. In fact, scammers will certainly take greater advantage of the misinformation and fear from media coverage. Moreover, cyber scammers in China were reported sending malicious emails containing malware. It’s difficult to protect yourself from these types of attacks. Threat actors also targeted users in Japan with a campaign that spread malicious documents with supposed information on the virus. Unsurprisingly, these social engineers even sent emails impersonating the CDC to lure unsuspecting users into malware traps.
The Coronavirus is a real threat but it’s important to keep a level head and not expose yourself to even greater harm online. Ultimately, even Facebook has begun planning to ward off misinformation on the virus. Other social media platforms have voiced concern about the spread of false claims on their platforms as well. The virus has attracted the attention of a global audience but that doesn’t mean you have to fall victim to those looking to profit off of that attention.
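As a small illustration of the "check where a link really points" advice above, here is a hypothetical Python sketch that scans the HTML body of an email and flags links whose visible text looks like a web address but does not match the actual destination, a common trait of phishing messages. It is a toy example built only on the standard library; the sample HTML and the matching rule are assumptions for illustration, not a substitute for a real email-security product.

# Flag links whose visible text advertises a different address than their href.
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> tag currently being read
        self._text = []     # visible text collected inside that tag
        self.findings = []  # (displayed text, real destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # Suspicious if the anchor text looks like a URL or domain
            # that the real destination does not contain.
            if text.startswith(("http", "www.")) and text not in self._href:
                self.findings.append((text, self._href))
            self._href = None

sample = '<p>Update your info at <a href="http://totally-not-cdc.example/login">www.cdc.gov</a></p>'
auditor = LinkAuditor()
auditor.feed(sample)
for shown, actual in auditor.findings:
    print(f"Displayed: {shown!r} -> actually points to: {actual!r}")

Running the sketch on the sample message prints the mismatch, which is exactly what hovering over the link would have revealed by hand.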
<urn:uuid:70109077-74a1-4393-a525-8adcc1177b29>
CC-MAIN-2022-40
https://nerdssupport.com/how-cyber-attackers-use-coronavirus-to-steal-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00174.warc.gz
en
0.932751
776
2.859375
3
University of Michigan researchers have built a memristor computer capable of identifying Greek letters, reconstructing compressed images, and detecting cancers. A memristor is a non-volatile type of resistive memory that retains its state, which could be on, off or an in-between value. When current flows through it, that state changes and persists. Memristors are analog, and HP researchers suggested in 2008 that they could be built into an analog computer. This would sum or subtract lots of in-between states of the memristor to arrive at a judgement about a problem such as face or speech recognition. The idea is that a memristor functions like synapses in the human brain, which assimilate the input from previous synapses and pass on their resulting state to other synapses. This makes memristors interesting to neuromorphic computing researchers, who try to mimic brain functions using electronic circuits. A conventional computer with CPU and DRAM has data loaded into DRAM from a backing store, then processed by the CPU, and fresh data loaded into DRAM. In a memristor array the data is already present, embodied in the cell states, and the data load time is cut. Researchers from HRL Labs and the University of Michigan, led by Wei Lu, fabricated a functioning memristor array in 2012, and called it an artificial synapse for neuromorphic computing. They said memristors combine the functions of memory and logic like the synapses of biological brains. Seven years later, University of Michigan researchers, again led by Wei Lu, have built a memristor computer. University of Michigan device The researchers built a crossbar memristor array chip with 5,800 memristors. It plugs into another chip containing a digital OpenRISC CPU, communications channels, and two types of converter: analog-to-digital and digital-to-analog. The resulting hybrid device forms the first programmable memristor computer. The researchers say that it could be faster than both digital processors and GPUs in working on neuromorphic problems, which require many cores to work in parallel. A CPU has up to 64 cores while a GPU can have thousands of more specialised cores, and works 10 to 100 times faster than a CPU. The memristor array could be reloaded with a set of values or data vectors that represent a particular state or thing: the Greek letter ‘alpha’, an image, or the attributes of particular types of cancer. An input dataset is compared against this matrix of data vectors to see if it matches. The researchers developed software to map machine learning algorithms onto the memristor array’s matrix-like structure and applied three types of machine learning problem to this hybrid chip. - Perceptron information classification with 100 per cent accurate identification of imperfect Greek letters (noisy 5×5 pixel arrays); ‘Ω’, ‘M’, ‘Π’, ‘Σ’, and ‘Φ’, after online learning runs. - Sparse coding; the hybrid chip was able to find the most efficient way to reconstruct simple images (4×4 bar patterns) in a set and identified compressed patterns with 100 per cent accuracy. - Two-layer neural network; after learning runs it found commonalities and differentiating factors in simple breast cancer screening data, classifying each case as malignant or benign with 94.6 per cent accuracy. Research findings are published in a paper, “A fully integrated reprogrammable memristor-CMOS system for efficient multiply-accumulate operations” [paywall].
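To see why a memristor crossbar suits this kind of workload, consider the multiply-accumulate operation named in the paper's title. In a crossbar, each cell's conductance acts as a weight: applying input voltages to the rows produces, on each column, a current that is the weighted sum of the inputs (Ohm's law plus Kirchhoff's current law), so an entire vector-matrix multiply happens in one analog step. The Python/NumPy sketch below only emulates that behaviour digitally; the conductance values and inputs are made-up illustrative numbers, not data from the Michigan chip.

# Digital emulation of the analog vector-matrix multiply a memristor crossbar performs.
import numpy as np

# Conductance of each memristor cell (rows = inputs, columns = outputs).
# In hardware these are programmed resistive states; here they are example values.
conductance = np.array([
    [0.9, 0.1, 0.4],
    [0.2, 0.8, 0.3],
    [0.5, 0.5, 0.7],
])  # arbitrary units

# Input voltages applied to the rows.
voltages = np.array([0.3, 1.0, 0.6])  # arbitrary units

# Each column current is the sum of (voltage * conductance) down that column,
# so the whole multiply-accumulate happens in a single analog step in hardware.
column_currents = voltages @ conductance

print(column_currents)  # one weighted sum per output column

A simple classifier, such as the perceptron described above, would then pick the column with the largest summed current as its answer.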
The researchers envisage a memristor array with millions of cells that would work 10 to 100 times faster again than GPUs. This research project is highly promising but it would be useful to know how the hybrid chip’s speed compares to a digital CPU or a GPU system doing the same work on the three problems. This would give a base point from which we can assess the practical utility of memristor technology.
<urn:uuid:7e2d2ad8-6ae7-4f44-854a-c195d1485d92>
CC-MAIN-2022-40
https://blocksandfiles.com/2019/07/23/memristor-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00174.warc.gz
en
0.904431
848
3.359375
3
If you have multiple computers on your network, you can shut them down remotely regardless of their operating system. If you are using Windows, you’ll need to configure the remote computer to allow remote shutdown. Once this is set up, you can perform the shutdown from any computer, including Linux. Enabling the Remote Registry Service (Windows) Open the Start menu on the computer you want to be able to shut down remotely. Before you can remotely shut down a Windows computer on your network, you’ll need to enable Remote Services on it. This requires administrator access to the computer. The Next Step Is to Type services.msc Type services.msc while the Start menu is open and press ↵ Enter. This starts the Microsoft Management Console with the “Services” section open. Then You Need to Find “Remote Registry” Find “Remote Registry” in the list of services. The list is sorted alphabetically by default. Select “Properties” of Remote Registry Right-click “Remote Registry” and select “Properties.” This will open the Properties window for the service. Select Option “Automatic” And Accept Your Choice Select “Automatic” from the “Startup type” menu. Click “OK” or “Apply” to save changes. After That Open Firewall Click the Start button again and type “firewall.” This will launch Windows Firewall. Allow Application or Feature Through Firewall Click “Allow app or feature through Windows Firewall.” You’ll find this on the left side of the window. Choose and Click Change Settings Button Click the “Change settings” button. This will allow you to make changes to the list below it. Check the “Windows Management Instrumentation” Check the “Windows Management Instrumentation” box. Check the box in the “Private” column. Note: Your user account must also have administrator permissions on the remote computer. If it doesn’t, the shutdown command will fail due to lack of permissions. Use Command Line to Remote Shutdown To shut down the computer, launch a Command Prompt window on another computer (click Start, type Command Prompt, and press Enter). Type shutdown /i into the Command Prompt window to open the graphical Remote Shutdown Dialog. Add One or More Required Computer Names From the remote shutdown dialog window, you can add one or more computer names and specify whether you want to shut down or restart the computer. You can optionally warn users and log a message to the system’s event log. Further Step Is Checking Computer Name Not sure what the name of the remote computer is? Click Start on the remote computer, right-click Computer in the Start menu, and select Properties. You’ll see the computer’s name. You Can Also Use the Command Line Instead of the Graphical Interface You can also use a command instead of a graphical interface. Here’s the equivalent command: shutdown /s /m \\chris-laptop /t 30 /c “Shutting down for maintenance.” /d P:1:1 Consider Using Action1 to Shut Down a Remote Computer if: - You need to perform an action on multiple computers simultaneously. - You have remote employees with computers not connected to your corporate network.
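If you need to shut down several machines at once from a script rather than from the dialog, here is a small, hypothetical Python sketch that simply wraps the same shutdown command shown above for a list of hostnames. The hostnames are placeholders, and as noted above the script must be run from an account with administrator rights on every target.

# Shut down a list of remote Windows PCs by wrapping the built-in shutdown command.
# Run from a Windows machine whose user has admin rights on every target.
import subprocess

HOSTS = ["chris-laptop", "office-pc-02"]  # placeholder computer names

for host in HOSTS:
    cmd = [
        "shutdown", "/s",           # /s = shut down (use /r to restart instead)
        "/m", rf"\\{host}",         # target computer
        "/t", "30",                 # 30-second warning for anyone logged on
        "/c", "Shutting down for maintenance.",
        "/d", "P:1:1",              # planned shutdown reason code
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
    print(f"{host}: {status}")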
<urn:uuid:9d0fca38-db22-43fb-8b3b-d3dfa2ad9c05>
CC-MAIN-2022-40
https://www.action1.com/how-to-shutdown-remote-computer-on-windows-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00174.warc.gz
en
0.813973
726
2.796875
3
State and local governments continue to find ways to enhance their communities by improving air quality and reducing carbon emissions, and technology is playing a large part in this. It’s no secret that carbon emissions have been a mushrooming problem for the past 75 years. And the issue is forcing governments and corporations to take a hard look at how their unique carbon footprint adds to this global problem. Just in the past month, several major technology companies have announced initiatives to reduce their impact on the environment. Simply put, if we continue the path we are on now, at the end of this century, we won’t have time to turn back. The risk of the earth’s temperature rising even one more degree could translate to catastrophic results. Which is why Microsoft has made a bold pledge: To become carbon-negative by the year 2030. Microsoft is committed to advancing the nation’s economic, environmental, and energy security for all, and this initiative is helping them achieve it. Let me explain what that means. First, here is a quick Carbon 101 definition: We all release carbon into the atmosphere. This was not a problem until industrialization, when we started emitting more carbon than the earth can absorb. Think of our planet as a bank account. We have been making cash withdrawals without making savings deposits and are WAY behind in making up for this. Now, if this were something that affected you personally, to get back on track, you would need to put together a plan. You would refinance or consolidate your debt and craft a strict budget to follow to get back on track. This is exactly what Microsoft is doing by announcing its commitment to become carbon negative, with the pledge to reduce their emissions by half of what they are now and to remove more emissions than they emit. Now, let us review the three different ways carbon is released: - Your daily emission – the car you drive - The indirect emissions from daily life – traditional energy sources like electricity - The extended emissions that must happen for supply chains to function – think about the simple process of ordering something online. The manufacturer gets the order, it ships to a distribution center, then makes its way to your home. For Microsoft to become carbon negative, they are focusing on two things: - Technology – investing in, adopting and developing sustainable technologies to help remove carbon from the atmosphere. - Nature – planning and implementing an aggressive reduction in carbon emissions. How you can get started As a leader/decision-maker in state and local government, think about how this initiative can help you achieve your environmental goals. If you position technology correctly, it will play a critical role in protecting our planet and protecting the people in our communities. Here are a few examples to get started. Reporting on and driving sustainability within your organization begins with the ability to quantify the carbon impact. This data is crucial for reporting existing emissions and will help drive additional decarbonization efforts. Agencies or departments that can achieve estimated carbon savings by running workloads in the cloud, rather than on-premises or in data centers, are already ahead of the curve. Considerations for measurement are related to power, connectivity and physical space. This is why housing servers on premises, storing data, and running applications, platforms and infrastructure yourself is physically and financially demanding.
This ultimately leads to a higher emissions score, so take inventory of your on-premises assets. Government agencies can improve outcomes for citizens by implementing a “digital strategy” or migrating assets from on-premises to the cloud. By eliminating physical assets and optimizing your technology environment, emissions are significantly reduced. In addition, migrating to the cloud will increase your organization’s security posture, improve remote collaboration among your teams, and potentially reduce costs. And, enabling a remote workforce reduces carbon emissions from employees driving in to work each day. Increase the flow of data among your unique lines of business by getting the right data to the right people the first time, and connect that data with environmental initiatives. In January 2020, Microsoft launched a sustainability calculator, designed to help companies measure their carbon emissions. Those responsible for reporting sustainability can now quantify the carbon impact of each Azure subscription over time and datacenter region. You can compare estimates from running your workloads in Azure versus on-premises data centers and get the crucial data for reporting decarbonization improvements. Create a business justification for cloud migration We’ve established that migrating to the cloud is a positive step toward reducing your carbon footprint. For your project to gain steam, perform the following: - Build the business justification with tangible, relevant costs and returns for cloud migrations (this will focus on the ROI and technical specifications). - Create a financial model – where will the funding come from? - Gain internal alignment with stakeholders by firmly aligning business value and business outcomes. - Provide a clear picture as to what changes will come with transformation. - Campaign for executive support. - Engage the right vendor partnership. This is where partnering with the right technology company matters. Besides reducing your carbon footprint, here are more benefits of cloud technology: Reduce Costs – Get rid of the capital expense of building, running and maintaining on-premises infrastructure. Don’t forget the added carbon emissions from servers and other network assets and the extra cost of electricity to run it all. You’ll also save staff hours by reducing the number of IT techs needed to monitor it onsite. Enhanced Security – Microsoft invests more than $1 billion USD annually in cybersecurity, which provides fast and reliable connectivity for the growing remote workforce. Keep your sensitive data secure from those you want to keep out, and accessible for those who need to use it. Business Continuity – A backup and disaster recovery plan is critical to keeping your data and applications protected from disruptions to your business. Built-in redundancy is also critical to ensure there is no single point of failure. Why partner with Microsoft, and why does it matter? State and Local Government (SLG) leaders make conscious choices every day that impact how they serve their constituents. By actively choosing cloud technology, they naturally reduce their carbon footprint.
When government leaders focus on establishing the right public and private partnerships with organizations willing to put effort behind a bold environmental initiative, they receive: - A partner willing to walk the walk at every layer of its infrastructure, starting with their own employees - A partner that can influence other technology contractors to be carbon responsible - A partner willing to pioneer sustainable technologies geared towards carbon reduction and strategies tied to the environment. Microsoft’s suite of products drive innovation among the government community. To learn more about how these applications are creating efficiencies, check out our eBook Security and Innovation for Government Agencies. At Arctic IT, our goal is to help you gain a deeper understanding of your current infrastructure to help drive meaningful sustainability conversations within your organization. If your team is ready to take technology on a path of environmental stewardship, please reach out to email@example.com to get started today.
<urn:uuid:9fb62a4f-b4dd-4bb6-9ef3-2c526f00ac8b>
CC-MAIN-2022-40
https://arcticit.com/reduce-your-carbon-emissions-with-microsoft-cloud-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00174.warc.gz
en
0.939135
1,458
3.40625
3
While blended learning may offer a temporary solution as the COVID-19 pandemic begins to ease and students start returning to campus to complete their courses, it also provides a number of benefits as a long-term solution. Blended learning can have both positive and negative effects for a learner within higher education, both of which should be considered when looking to implement a blended learning approach. Positive effects for students Increased student engagement A blended learning method can make it easier for some students to engage and study at their own pace. This learning style allows students who struggle to stay engaged during in-person teaching to engage with their studies in their own time, at their own pace, without the need for a teacher. Increased student achievement For those students who do experience increased engagement and motivation with blended learning, their achievements will likely increase too. When students are engaged with learning, they have higher chances of retaining information, understanding topics and working towards assignments and exams – this will all be reflected in their final grades for the course. Students learn at their own pace The beauty of blended learning environments is that they allow students to move through tasks and learning modules at their own pace. This is incredibly helpful for students who have a slightly different learning style or may work slightly slower than others and feel rushed during in-person teaching sessions. Classroom time can be focused on more meaningful activities In a blended learning environment, students are able to complete tasks online or watch lectures in their own time, which means that classroom time can be freed up for more interactive, instruction-focused lessons. This will make much better use of valuable face-to-face teaching sessions. Enhanced student experience With the help of technology, blended learning opens doors for students to be able to access professional resources, research archives and even connect with professionals in their field of study, which will help to enhance their learning experience. Improved accessibility and inclusion One of the biggest dilemmas faced by higher education organizations is creating an environment that all students can fully access, no matter their abilities, lifestyle, location or course. Online resources included in a blended learning approach can be accessed from anywhere in the world with a device and wi-fi connection, so no student is limited to what they can access throughout their course. Ownership of learning To successfully commit to blended learning, students must organize their studies and interact with online materials on their own terms. This allows students in higher education to take ownership of their learning without constant instruction from an educator.
<urn:uuid:ba88aabd-1594-42f1-8ce9-cc330ee2faf9>
CC-MAIN-2022-40
https://www.appsanywhere.com/resource-centre/online-or-distance-learning/blended-learning-effects
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00374.warc.gz
en
0.96929
510
3.109375
3
What are cloud services? Cloud technology has been around for some time now, providing businesses with on-demand computing services that support business growth, while optimising performance and enhancing security. But what exactly are cloud services and how can they benefit your organisation? What are cloud services? Simply put, cloud services are services that are delivered to users over the internet (“the cloud”). There are a wide range of cloud services available, each designed to provide access to applications and resources without the need for internal infrastructure or traditional hardware. Services can include the delivery of servers, storage, networks, platforms, databases, software, analytics and intelligence. You might already be using cloud services in some form without even realising. What do cloud services do? Cloud services enable businesses to store information and data on physical or virtual servers, helping businesses to lower operating costs, run their infrastructure more efficiently and scale as needs change. There are many more benefits to cloud computing services, which we’ll come onto later. How do cloud services work? As above, cloud services store data on remote servers accessed via an internet connection. This information and data is then maintained and controlled by cloud computing providers and their cloud platforms. Typically, businesses only pay for the cloud services they use, making them economically viable for many. Customers can access their stored information and other cloud services via the provider’s server, ensuring they don’t need to host the applications or data on their own on-premise servers. In contrast to traditional hardware and software solutions, users simply need a computer, network connection and operating system to access cloud services. Types of cloud computing Cloud computing is bespoke by nature. Different businesses have different needs, so their cloud computing services should be different too. Before you decide which cloud services you need in your organisation, you’ll need to determine the type of cloud environment you need. There are three main types of cloud platforms that a business can implement their cloud services on: Public clouds are managed solely by the cloud provider, delivering computing resources over the internet for customers to access via their web browser. All hardware, software and supporting infrastructure is owned by the cloud provider. Examples of public cloud include Microsoft Azure and Amazon Web Services. Private clouds are cloud computing resources that are used exclusively by a single business, often those that wish to store more sensitive data. They are often physically located on a company’s on-site data centre, but third-party service providers can also be engaged to host their private cloud. All services and infrastructure are maintained on a private network and access can be via browsers or virtual machines. As the name suggests, hybrid clouds combine both public and private cloud environments, using technology that enables the sharing of data and applications. Moving data and applications between public and private cloud means that businesses can benefit from greater flexibility and optimised security, compliance and infrastructure. Types of cloud services Once a cloud deployment type has been decided, you can start to think about the types of cloud services you need for your business. There are three basic types: Infrastructure as a Service, Software as a Service, and Platform as a Service. 
Infrastructure as a Service (IaaS) Infrastructure as a Service is where companies rent IT infrastructure from a cloud provider, usually on a pay-as-you-go basis. Infrastructure could include storage, networks, operating systems, servers and virtual machines. IaaS offers the complete data centre framework that organisations need to run cloud services, without the personal maintenance or upkeep. Businesses that use IaaS eliminate the need for resource-intensive, on-site installations which can be costly and time-consuming. Software as a Service (SaaS) Software as a Service is a distribution model that allows software applications to be instantly delivered across the internet, usually on a subscription basis. Cloud providers host and manage the applications and data in their own servers, databases, network and computing resources. However, Independent Software Vendors (ISVs) might also contract a third-party cloud provider to host the application in the provider’s data centre. End users can then access the applications via their web browser on any device. SaaS products include email, calendar and office tool applications such as Microsoft Office 365. Platform as a Service (PaaS) Platform as a Service enables developers to build cloud applications without setting up or managing the underlying infrastructure of servers, storage, networks and databases. PaaS cloud computing services deliver an instant environment in which software applications can be developed, tested, delivered and managed. The solution means organisations can benefit from quick turnarounds and practically no maintenance. PaaS providers will supply a database, operating system and programming language to make it easy for developers to create bespoke web or mobile applications. Benefits of cloud services As we’ve already mentioned, there are several benefits to cloud services for all types of businesses. More organisations are moving away from traditional IT resources and solutions, and into cloud-based solutions to reap these benefits and ensure their business is prepared for the future of digital and the modern workplace. Some of the key advantages to cloud services include: Improved performance & collaboration By using cloud services, businesses can increase their opportunities for collaboration. Especially in a remote workforce. Cloud services enable employees to easily access, share and edit files in real time, from any location. With increased collaboration and the ability for teams to get things done quicker, general business performance is also improved. Greater cost efficiencies With no capital expenditure on expensive hardware and software, as well as setting up and maintaining on-site data centres, cloud services can really save you money. By engaging with a cloud provider, you negate the need for any of this stuff. And that’s before you factor in paying for round-the-clock electricity and IT specialists to manage your infrastructure. Additionally, many cloud services are offered on a subscription or pay-as-you go basis, ensuring you never over-pay for the services you receive. One of the key drivers to cloud services is the ability to scale as your business needs change. You can simply flex your IT resources up and down as you need to, ensuring you only use and pay for what you require at any given time. And with remote working becoming the future of the workplace, flexibility is key. With professional cloud services from a reputable provider, you can also benefit from strengthened security measures. 
Many cloud providers offer technologies and controls that help protect your data, applications, and infrastructure from ever-growing cyber threats. In addition, most cloud computing services are run on large secure data centres that are regularly upgraded to the highest security standards. If you keep your IT resources in-house, your teams are often spending lots of time setting up hardware, patching software and completing other mundane IT tasks. However, cloud services can completely remove the need for many of these maintenance activities, freeing up your IT teams to focus on their core business objectives. Reduced carbon footprint With fewer physical servers and potentially no need for an on-site data centre at all, businesses can significantly reduce their carbon footprint. In addition, with more opportunities for file sharing and collaboration, there is less need for your employees to use energy and spend money printing documents. So, there you have it. Cloud services explained. If you’re looking for further information about exactly how cloud services can support and enhance your business, get in touch with us today for a free consultation.
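To make the IaaS model described above a little more concrete, here is a minimal, illustrative sketch of provisioning a virtual server programmatically with a public cloud SDK. The region, image ID, key pair and tag values are placeholders invented for the example, and other providers expose similar APIs.

```python
# Illustrative sketch: provisioning an IaaS virtual server with the AWS SDK (boto3).
# The AMI ID, key pair and tag values below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Launch a single small instance; billing follows the pay-as-you-go model
# described above -- you pay only while the instance exists.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-key-pair",        # placeholder key pair name
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "example-web-server"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# Scaling back down is just as simple: terminate the instance when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The point of the sketch is simply that, with IaaS, "renting a server" is an API call rather than a hardware purchase.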
<urn:uuid:c447eb9a-c6a0-4fc4-86dd-2a27e9896937>
CC-MAIN-2022-40
https://www.nasstar.com/hub/blog/what-are-cloud-services
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00374.warc.gz
en
0.943891
1,565
2.9375
3
What is Enhanced 911 calling? Enhanced 9-1-1 (also called E 9-1-1) is a federally-mandated program that seeks to improve the effectiveness and reliability of local emergency services by providing emergency dispatchers with valuable information when a 9-1-1 call is received by the Public Service Answering Point (PSAP). Why is Enhanced 911 important? Wireless phones are important to members of the public when they want to call 9-1-1. Wireless calls to 9-1-1 account for more than 80% of all emergency 9-1-1 calls made in the United States. There are more than 600,000 wireless 9-1-1 calls made every day in the U.S. That number is constantly growing as more people use their wireless phones to call for emergency help, to save lives and to help fight crime. T-Mobile’s fact sheet about Personal & Public Safety provides more details about how we partner with local jurisdictions to make sure that Enhanced 9-1-1 calling meets all United States government requirements. Who is responsible for making sure that mobile 911 calls get connected and stay connected? Enhanced 9-1-1 is regulated by the FCC and initiated by jurisdictional request. There is close coordination between the jurisdiction and T-Mobile to make sure that the most up-to-date features are provided to the local Public Safety Answering Point (PSAP). The deployment of E9-1-1 requires upgrades to local 9-1-1 PSAPs, plus coordination among public safety agencies, wireless carriers, technology vendors, equipment manufacturers, and local wireline carriers. Is the general public being exposed to radio waves that exceed the government’s allowable limits? Antennas on cell sites are designed so that the vast majority of the RF energy emitted from the antennas is directed outward from the source. Most macro site antennas are mounted on monopoles or on building facades & rooftops – usually at 50’ to 150’ above ground level. This, and the fact that radio waves generally dissipate over distance, means the general public is exposed to RF levels significantly below the limits allowed by the Federal Communications Commission. Street-level measurements of RF emissions are typically 150 times — and more — below the FCC limit. As the FCC explains, “Measurements made near typical cellular and PCS cell sites have shown that ground-level power densities are well below the exposure limits recommended by RF/microwave safety standards used by the FCC.” FCC RF Exposure Guidelines According to the FCC: “While it is theoretically possible for cell sites to radiate at very high power levels, the maximum power radiated in any direction usually does not exceed 500 watts.” There are internet studies that say there are health risks associated with wireless antennas—how do you explain that? There is certainly a lot of information – and misinformation – available today on this topic and it can be difficult to review and interpret it. Although there are isolated studies that suggest facilities may pose a health risk, the safety limits adopted by the FCC are “based on the recommendations of expert organizations and endorsed by agencies of the Federal Government responsible for health and safety. Therefore, there is no reason to believe that such towers could constitute a potential health hazard to nearby residents or students.” The studies that we rely on are considered statistically sound and are accepted by the scientific community worldwide.
T-Mobile can provide reputable third-party and independent sources who have conducted studies. Some of them are: - The American National Standards Institute (ANSI) - The Institute of Electrical and Electronics Engineers (IEEE) - The Federal Communications Commission (FCC) - The American Cancer Society - The World Health Organization FCC Questions & Answers Are electromagnetic fields (EMF) harmful or dangerous? According to the World Health Organization, “Electromagnetic fields are present everywhere in our environment but are invisible to the human eye. Electric fields are produced by the local build-up of electric charges in the atmosphere associated with thunderstorms. The earth’s magnetic field causes a compass needle to orient in a North-South direction and is used by birds and fish for navigation.” What WHO says about EMF Property values and site aesthetics Will a wireless antenna facility near my residence affect the value of my property? Experts in the field of real estate report that the availability of cell service is an important feature to current home buyers. Michael Estrin of Bankrate.com noted, “Today, (home) buyers want to know about the home’s technology. They want to hear about cell service and Internet, not cable and telephone.” The City of Portland, Oregon added, “The argument that property values are affected by wireless sites (positively or negatively) has very little documentation other than anecdotal sources. City staff is aware of arguments on both sides of this issue.” Can T-Mobile camouflage cell sites to blend in with surrounding topography? T-Mobile works hard to build the least obtrusive, most technically feasible sites to provide reliable service. When we modernize or upgrade sites, most often we do not change the structure, but instead replace antennas and radio equipment with updated technologies. When we build a new wireless facility, we look to existing structures (e.g. industrial, commercial, and municipal buildings and structures) before resorting to a new freestanding facility. We also place a priority on sharing space on an existing carrier’s facility before considering building a new one. Better wireless coverage often requires antennas be mounted on structures that are taller than most of the surrounding structures in an area. Stronger, more reliable communications require an unobstructed line-of-sight between a customer’s phone and the cell site. We always work with each community to make cell sites as unobtrusive as possible while still meeting service goals. Why does T-Mobile need to build a cell site in my neighborhood? Our decision to locate a new cell site is based on scientific criteria. In making our decision on where to locate a new site, T-Mobile undertakes a rigorous engineering analysis of available RF signal coverage and future expansion needs. To choose a potential site, terrain data within the service area is entered into a computer, along with a series of variables, such as proposed antenna height, foliage and building data, population density, available radio frequencies and wireless equipment characteristics. From this information, engineers determine an area for the optimum location and height of the antenna to maximize coverage within the cell. T-Mobile also looks at different usage patterns of our customers, including the ability to make and hold calls inside buildings and vehicles.
Many times a user can make a call on the street, but not be able to make or hold a call as they enter a building. Network data is scientifically measured to determine the amount of traffic at individual cell sites, including the number of dropped and blocked calls. Plus, field technicians, engineers and third-party researchers conduct “drive tests” to collect real-time statistics. These tests simulate the customer experience and provide critical data on signal strength and call clarity. The Federal Communications Commission has regulations that spell out local authority parameters related to cell site deployment and upgrades. In a nutshell, what are those rules? In 2012, Congress enacted the Spectrum Act to help streamline the process for minor modifications to existing wireless facilities. Since then, the FCC has continued to clarify those rules. Basically, if a project qualifies as a non-substantial modification a state or local government must approve the request within 60 days. To read the latest Declaratory ruling, read In the Matter of Implementation of State and Local Governments' Obligation to Approve Certain Wireless Facility Modification Requests Under Section 6409(a) of the Spectrum Act of 2012 When T-Mobile conducts alternative site analyses, why are some potential locations rejected, while others remain in the mix? T-Mobile performs rigorous research and considers a number of viable sites before selecting the specific location for a new wireless facility. Our site development process includes the following: - Radio design engineers develop a “search ring” — a map outlining the geographic boundaries where the new site will ideally be located. - Trained personnel conduct an analysis of potential sites for a wireless facility within the search ring and eliminate parcels unsuitable or not zoned for telecommunication facilities. - Property owners whose parcels are suitable for a wireless facility and can be zoned to permit this use are contacted to see if they are interested in leasing their property to T-Mobile. - A list of potential parcels and landlords are submitted to the radio design engineers, who select the best site option.
<urn:uuid:fd0dc230-ad30-4ae0-91aa-c16702c5de91>
CC-MAIN-2022-40
https://howmobileworks.com/faqs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00374.warc.gz
en
0.929444
1,891
2.578125
3
Published 2 Years Ago on Wednesday, Oct 14 2020 By Adnan Kayyali Governments, people and enterprises are constantly seeking new ways to mitigate the spread of the COVID-19 virus. In recent months, investments in Coronavirus-fighting technologies have skyrocketed, and so has their profitability as both public and private sectors show keen interest in using tech solutions. Of course, technological advancement doesn’t just come out of nowhere – though it may seem that way to most of us. A Boston Consulting Group analysis has identified the 3 areas in which preexisting innovations are being repurposed as Coronavirus-fighting technologies. These include virus detection and containment, healthcare provision and enablement, and economic resilience and plasticity. Wearable tech is nothing new. From smart watches that count your steps to insulin indicators worn by diabetics. Recently however, wearables have been used in combination with other technologies for early COVID-19 detection, monitoring, and predictive diagnostics. As an example, CarePredict wristbands helped track people’s close contacts in nursing homes, allowing for quick action, bypassing the need for contact tracing technology, or at least localizing it for better privacy. In another more scalable example, Fitbit used its wearable’s data points about a patient’s current health to aid in COVID-19 research. By plugging this data into an algorithm, the device can help researchers find commonalities or disturbances in every patient’s health. Temperature, sleep patterns and heart rate are all examples of data being constantly updated and can help in early detection and prevention. Telcos and operators have been using their massive reach for something other than advertisements and reminders. Well, they are sending reminders, among other things like public health broadcasts, useful information and facts as well as delivering healthcare contact info through SMS. Some operators were asked to play 30-second COVID-19 awareness messages upon making some calls, to ensure illiterate customers would receive essential information. Among all Coronavirus-fighting technologies, AI is perhaps the most widely utilized. From detecting and categorizing anomalies in a person’s bodily function as discussed above, to early warning systems used by governments to prevent the worst before it happens. By quickly analyzing movement paths and location data, an entire country can identify active COVID-19 cases and track their location, pinpointing areas that need lockdown or warning. Many international COVID-19 research centers can consolidate their individual findings and information for all other members to use, speeding up research and improving the testing and detection process. One thing that has been growing steadily for some time is virtual reality. While most may think of video games or entertainment in general when thinking of VR, new circumstances as well as the inherent utility of virtual interactive 3D spaces, have led other industries to adopt this tech. Doctors and researchers have been using virtual and augmented reality to teach and demonstrate treatments and safety procedures against COVID-19. Lately, VR has been used for demonstrating resuscitation techniques for COVID-19 patients suffering from acute or severe symptoms.
The ability for students and professionals to see the procedure or treatment occur in front of them helps them assimilate more information and experience the different scenarios while coping with distractions. Source: World Economic Forum
<urn:uuid:dbaa18b1-67d3-4080-bba0-e30937493d5d>
CC-MAIN-2022-40
https://insidetelecom.com/4-coronavirus-fighting-technologies-being-pushed-today/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00374.warc.gz
en
0.9362
791
2.625
3
A cyber attack is unauthorized access to a computer information system or infrastructure with malicious intent to steal sensitive documents, compromise the network, vandalize the resources or use the resources for further malicious actions by individuals, organizations or nations. For this discussion, we define a high-performance compute (HPC) environment as a compute resource running the Linux operating system (OS) with at least around 500 to 1000 compute nodes having approximately 4000 to 12000 compute cores. They all invariably have a high speed, low latency, high bandwidth interconnect network fabric such as Infiniband. They also have attached storage arrays on the order of hundreds of terabytes to petabytes. These kinds of resources are used simultaneously by an average of 200 to 300 users, mostly from academic research environments in universities, at any time, although they may have over 1000 overall users. The complexity of securing the system increases with the number of users, as there will be more cases of lost or compromised passwords. Usually in HPC clusters only a few login nodes are open to the public network, as the compute nodes are in a private network. Security breaches involving computer viruses such as Trojan viruses or computer worms, usually associated with Windows OS, will not be discussed here. We will only discuss proactive mitigating steps to minimize interruptions in the operation of the resource. The expectation in an HPC environment is that the research done is mostly open, the resources should be easily accessible, and the policy should accommodate the needs of researchers who are collaborating around the globe. There is a need for balance between security and convenience. Because of the convenience factor, intrusion prevention is a little bit harder on HPC systems and they are more vulnerable. However, there are a lot of positive benefits in operating an HPC environment in universities compared to the compute environments in financial institutions. The expectation is that typical researchers are not storing any personal information such as social security numbers or private medical data on these systems. The biggest worry is that hackers may vandalize the system when they couldn’t find any useful data or use this resource to stage criminal activities such as executing a distributed denial of service (DDoS). If the users are involved in any research with private medical data then they are required to do their research on a HIPAA-compliant compute environment. We will not address how to set up a HIPAA-compliant system in this write-up as it brings the additional complexity of encrypting all the research data. HPC sites typically do not have to worry about attacks such as denial of service, as these kinds of attacks are usually against high volume web portals such as news organizations or government web sites. Protecting passwords and disabling unencrypted network protocols: In the 1990s, research compute environments used protocols such as telnet and ftp, where the data between remote computers was communicated in clear text format. So, it was easy for anybody with reasonable expertise to intercept the communication and read the contents. It was easy to listen to an open port and record the keystrokes of users. None of the HPC sites that we know are running these kinds of protocols anymore. The traffic among HPC systems connected through public or private networks now flows exclusively through encrypted protocols using OpenSSL, such as ssh, sftp, https, etc.
Since almost all HPC resources run some version of the Linux operating system, they invariably run an iptables-based firewall at the host level, which is the primary tool to restrict access to service ports from the outside network. Many of them open only a few ports such as port 22 for ssh. Iptables also helps in operating the system when there are known zero-day vulnerabilities by isolating the resources from the outside network. The typical way HPC systems are compromised is through users either not protecting their passwords or using passwords that are easy to exploit such as ‘test123’. By virtue of the design of the Linux OS, an exploit at the user level is often contained locally to a particular user because regular users do not have elevated privileges and they do not have access to files of other users or users from a different group. Even though a security breach through a compromised password is usually contained in a user environment, it becomes escalated in a situation where there is a flaw in the Linux kernel itself, which will allow the hacker to trigger local root exploitation and elevate privileges. In such a situation the OS needs to be reinstalled with updated kernels. Linux kernels in the 2000s had frequent kernel flaws and were susceptible to memory corruption (buffer overflow), which is becoming very rare these days. Another kind of problem is if the security package itself has flaws, such as the Heartbleed bug (heartbleed.com) in OpenSSL, which was detected in 2014 even though the bug existed for many years prior to that. In a scenario where users are hacked, oftentimes the owners of the accounts are unaware of the fact that they have been compromised. From our experience of running an HPC cluster for the past 12 years, it is often the activity of the hackers that exposes the intrusion or alerts the system administrators of the system about possible compromise, meaning if somebody just logs in to the system and does nothing their actions are often overlooked. But as soon as the imposter or hacker starts using the resources, the monitoring tools that are often embedded in Linux OS can record the strange behavior of the system and an alert system administrator can execute remedial actions. Almost always the behaviors of hackers are completely different from those of the owner of the account. Activities such as a sudden burst of network activity, increased network latency, overloading the system with CPU usage, unauthorized jobs bypassing the job scheduler, etc. are good indicators of possible compromise. Typically the hackers are exposed in 8 to 10 hours in such scenarios. Oftentimes the affected systems are quarantined from the outside network for forensic activities and all the logs are examined to trace the origin of the attack, such as time, frequency of attack, source host, source port, destination host, destination port and the protocol or application that is used in attacking the system. The system will be put back into service after remedial actions are taken, such as notifying the appropriate authorities if necessary, upgrading or removing the faulty application or kernel as well as any other upgrades.
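As a rough illustration of the host-level firewalling described above, here is a minimal sketch that applies a default-deny policy and opens only SSH on a login node. It is illustrative only: it assumes a Linux host with iptables installed, root privileges, and a placeholder management subnet rather than a real one.

```python
# Illustrative sketch: a default-deny iptables policy for an HPC login node.
# Requires root on a Linux host; 203.0.113.0/24 is a placeholder management subnet.
import subprocess

def iptables(*args):
    """Run a single iptables command and fail loudly if it is rejected."""
    subprocess.run(["iptables", *args], check=True)

# Allow loopback traffic and replies to connections the node itself initiated.
iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")
iptables("-A", "INPUT", "-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT")

# Expose only SSH (port 22), and only from the management subnet.
iptables("-A", "INPUT", "-p", "tcp", "--dport", "22",
         "-s", "203.0.113.0/24", "-j", "ACCEPT")

# Everything else inbound is dropped -- compute nodes stay reachable only
# through the private interconnect, not from the public network.
iptables("-P", "INPUT", "DROP")
```

In practice a site would manage such rules with its configuration-management tooling rather than an ad hoc script, but the shape of the policy is the same.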
<urn:uuid:4a986fb5-4874-4bae-852e-3aea08a5ea00>
CC-MAIN-2022-40
https://www.hpctoday.com/viewpoints/cyber-security-in-high-performance-computing-environment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00374.warc.gz
en
0.955194
1,255
2.96875
3
March 21, 2019 by Siobhan Climer Virtualization is a fascinating technology to trace historically. While the roots of virtualization date back as early as the 1960s, with initial development by IBM, Bell Labs, and MIT’s Project MAC, most businesses today consider VMware to be a pioneer in bringing modern virtualization to the market. Through server virtualization, organizations achieve improved efficiencies and controls — while also saving money. Applications and operating systems that were once chained to a single bare metal server or endpoint can today be transformed into software for greater flexibility. This has revolutionized the modern data center. Once IT directors discovered they could support more applications on fewer servers, the amount of hardware companies needed to support was cut down dramatically. Since then, virtualization has moved from servers to desktops and more. What Is Server Virtualization? Server virtualization is the process of creating multiple virtual instances on a single physical server by using a layer of software known as a hypervisor. Instead of running one operating system instance and one application on one server, server virtualization allows IT administrators to partition the server into multiple operating system instances and associated workloads, all from that same single server. When discussing server virtualization, the physical server is called a host and it runs a host operating system. The virtual machines (VMs) are guests and run guest operating systems. Server virtualization is the basis for what is now cloud computing and hybrid IT, the idea that workloads and computing can occur beyond the physical servers in virtual instances. Find out how server virtualization can benefit your unique environment by joining us for our free weekly demo days. In this hands-on lab, our engineers will test out how virtualization would impact your environment. Why Is Server Virtualization Useful? With server virtualization, organizations achieve: Reduced Hardware and Operating Costs With less equipment to purchase and maintain, hardware and operating costs could be reduced by as much as 50%, and energy costs by 80%. Virtualized data centers gain the ability to move workloads dynamically within a resource pool for greater flexibility. Scale existing apps or deploy new ones by spinning up more virtual machines. Reduced Provisioning Time Provision new servers up to 70% faster through virtualization. There’s no new hardware to buy. There’s no installation required. Everything is done digitally from the hypervisor manager. Greater Compute Power Per Server The physical servers that remain in the data center will make full use of their compute power. Run a collection of virtual machines on one server instead of a single application. Leveled-Up Disaster Recovery Additional disaster recovery options become available through virtualization. Virtualization is a no-brainer for the modern data center. Instead of hosting each application on its own designated server, server virtualization transforms applications into digital software. One physical server can now host dozens of applications as virtual machines with no decrease in performance or availability. Condense your data center, power less hardware, and save as much as 50% on recurring technology expenses. Getting Started With Server Virtualization Expert certified engineers conduct an analysis of your data center and develop a detailed roadmap for how to improve performance.
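If you simply want to see what talking to a hypervisor looks like before engaging anyone, most Linux hosts running KVM expose one through the libvirt API. The sketch below is illustrative only; it assumes the libvirt-python bindings are installed and that a local QEMU/KVM hypervisor is reachable at the default qemu:///system URI.

```python
# Illustrative sketch: inspecting a host's hypervisor and its guest VMs via libvirt.
# Assumes the libvirt-python package and a local KVM/QEMU hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor

# The "host" is the physical server; each "guest" is a virtual machine it runs.
nodeinfo = conn.getInfo()               # [model, memory MB, cpus, ...]
print(f"Host {conn.getHostname()}: {nodeinfo[2]} CPUs, {nodeinfo[1]} MB RAM")

for dom in conn.listAllDomains(0):
    state, maxmem_kib, mem_kib, vcpus, cputime = dom.info()
    status = "running" if dom.isActive() else "stopped"
    print(f"  guest {dom.name()}: {status}, {vcpus} vCPUs, {mem_kib // 1024} MB")

conn.close()
```

The same API can define, start, migrate and snapshot guests, which is what the "hypervisor manager" tooling mentioned above is driving under the hood.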
The Infrastructure Optimization Roadmap is a great first step for those exploring virtualization. Even if you are already running VMs, virtualization also includes other aspects of your infrastructure, such as desktops and applications. Find out how Mindsight works with VMware and Citrix to support increased performance in all aspects of the data center for our clients today. Like what you read? Mindsight, a Chicago IT services provider, is an extension of your team. Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our highly-certified engineers and process-oriented excellence have certainly been key to our success. But what really sets us apart is our straightforward and honest approach to every conversation, whether it is for an emerging business or global enterprise. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges. About The Author Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She previously taught STEM programs in elementary classrooms and museums, and writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s writing fantasy, gardening, and exploring the world with her twin two-year old daughters. Find her on twitter @techtalksio.
<urn:uuid:06db1001-1f5c-41ba-ab05-fadf0cfec818>
CC-MAIN-2022-40
https://gomindsight.com/insights/blog/server-virtualization-modernizing-data-center/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00574.warc.gz
en
0.924948
1,002
3.4375
3
The most basic, yet important part of mastering Unix is to fully understand the nuances of file permissions. Tools exist to manage permissions easily, but true enlightenment and quick troubleshooting skills come to those who wholly master the concept. Remember, 80% of Unix problems are permissions issues. At the most basic level, there are three types of access: - Read – the ability to open a file and read it - Write – the ability to write the file - Execute – the ability to execute (run) the file Directories, though similar, are subject to special rules. Write permissions on a directory imply that you can create new files and directories within. Execute permissions are required to ‘cd’ into the directory, and read permissions are required to list (‘ls’) the contents. You will generally see permissions represented as r, w, or x; for read, write, and execute. Running ‘ls -al’ on the command line will show three sets of these strung together. For example: -rwxr-xr-x The dash means that the permission is not set. The first place is always reserved for special identifiers, like ‘d’ for directories or ‘c’ for character devices. The next place begins the actual permissions, for the user, group, and other categories. Every access control in Unix is based on “who you are.” The user is identified by the uid (user ID), as defined by a person’s user account. The third field in the /etc/passwd file, for example, specifies what a user’s uid is. Similarly, every user belongs to a default group, as identified by the fourth field in the passwd file. Users can belong to many groups, but they’re always a member of their default group. The above example of -rwxr-xr-x means that the owner of the file may read, write and execute it, the group members may read or execute it, and everyone else on the system may also read or execute the file. A full example, from the output of ‘ls -l’ is: -rw-r--r-- 1 charlie root 164 2006-12-10 23:51 test.js The file named test.js is owned by me with read and write permissions, is set to the root group who can only read it, and also allows everyone else to read it. How it Really Works That’s basically enough to get by, but being able to understand the more advanced modes of file permissions, your umask, and the numeric representation demands a full understanding. In reality, each set of permissions is stored as three bits, giving eight possible values. Take a look at Figure 1 and note that wherever you see a 1 in the binary column, a corresponding permission will exist. Figure 1: The Unix 8-bit Permissions Model As you can see, if a “bit” in a certain position of the binary representation is set, the permissions in that space are activated. The number column is the octal representation, and the “Binary” column is how it really works, from the operating system’s perspective. Example time. Let’s say we wish to give ourselves read/write/execute permissions, the group read/execute, and everyone else read/execute permissions. The following commands both do the same thing: chmod u+rwx .; chmod go+rx . chmod 755 . Since we know that setting ‘5’ means rx, we can simply say ‘5’ instead of ‘rx.’ The real advantage to knowing the octal representation is that we can set any arbitrary permissions with a single command. Running the chmod command using the mnemonics requires that we run it each time for each set of permissions. Likewise, to set our umask, we must know how the permissions are numerically represented. The umask is the default mode with which files and directories will get created.
It’s a mask, so if we want to create all files with permissions like 755, we must take the mask. Simply subtract each digit from 7, and 022 reveals itself as the magic setting. See the umask man page for further details. There are, in fact, three other modes you can set on a file or directory. All Unixes support the following: - set user id (suid) on execution - set group id on execution - the sticky bit If suid is enabled, the permissions look like this: -rwsr-xr-x This means that when the file is executed, it will run with the permissions of the owner of the file. It’s dangerous, but sometimes necessary and quite useful. For example, a file suid and owned by root will always run as root. When sgid is enabled, the permissions look like this: -rwxr-sr-x When set on a directory, sgid means that all files created within the directory will have the gid set to the current directory’s gid. This is handy when sharing files with other people, who will often forget to give other members read or write permissions. The sticky bit looks like this: drwxrwxrwt When the sticky bit is set on a directory, only a file’s owner (or the directory’s owner, or root) can delete or rename the files within it. Without the sticky bit, anyone with write permission on the directory can delete or rename any file it contains. This one is also handy when sharing files with a group of people. There are other tidbits of information too, once you get into the nuts and bolts of Unix file permissions. For example, you can also set ACL attributes, which get horribly complex. Yes, you can give individual users access to your files, but it’s better not to. Creating a new group and sticking to general permissions can accomplish most things. Often the extended attributes aren’t necessary, and ACLs likely won’t work over NFS if you’re using Linux. Spend some time with the chmod manual page to master tricky parts, if they still aren’t clear. It will also mention some implementation-specific limitations you may need to be aware of.
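The octal model above maps directly onto most programming languages as well. Here is a small, illustrative Python sketch (the file name is just a placeholder created for the demonstration) that sets and inspects modes using the same numbers discussed above:

```python
# Illustrative sketch: working with Unix permission bits from Python.
# "example.txt" is a placeholder file created just for the demonstration.
import os
import stat

os.umask(0o022)                      # new files default to 644, new directories to 755

path = "example.txt"
with open(path, "w") as fh:
    fh.write("hello\n")

os.chmod(path, 0o755)                # rwxr-xr-x, same as `chmod 755 example.txt`

mode = os.stat(path).st_mode
print(stat.filemode(mode))           # -> "-rwxr-xr-x"
print(oct(stat.S_IMODE(mode)))       # -> "0o755"

# The special bits are just three more flags on top of the nine rwx bits.
os.chmod(path, 0o755 | stat.S_ISUID) # setuid: shows up as "-rwsr-xr-x"
print(stat.filemode(os.stat(path).st_mode))
```

Seeing `0o755 | stat.S_ISUID` spelled out this way makes it obvious that suid, sgid and the sticky bit are simply extra bits alongside the familiar rwx triplets.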
<urn:uuid:1cd15a0a-a5a8-4d3e-9745-2ae5d14e5b89>
CC-MAIN-2022-40
https://www.enterprisenetworkingplanet.com/security/back-to-basics-with-unix-permissions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00574.warc.gz
en
0.91547
1,425
3.75
4
An Alternative Approach to Cryptocurrency SecurityGideon Samid of BitMint Explains 'Quantum Randomness' Today's cryptocurrencies are based on cryptographic standards that eventually could be broken via quantum computing, says Gideon Samid of BitMint, which has developed a virtual currency based instead on the concept of "quantum randomness." Samid says relying on algorithmic complexity for cryptography is dangerous because advances in quantum computing could render that approach ineffective. "Rather than build more and more complexity and hope that the complexity will withstand an attack, we say don't hinge your security on algorithmic complexity; hinge it on lavish use of randomness," he says. In a video interview with Information Security Media Group, Samid discusses: - Why he believes today’s cryptocurrencies and cryptography will be vulnerable to quantum computing; - BitMint initiatives with the People's Bank of China; - Why identity and money will become interlinked. Samid is chief technology officer for BitMint and the architect for the BitMint digital currency, basing security on quantum randomness rather than algorithmic complexity. He also has developed AI-assisted innovation tools, earning 23 patents in computer and material science. He is a lecturer at the University of Maryland and Case Western Reserve University.
<urn:uuid:d6daaaa6-7d50-43e9-907f-c76266711cdf>
CC-MAIN-2022-40
https://www.govinfosecurity.com/alternative-approach-to-cryptocurrency-security-a-16357
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00574.warc.gz
en
0.932966
264
2.578125
3
With large-scale incidents of identity theft, more insecure connected devices such as mobile phones and smart watches, and increasing storage of patient data in the cloud, cybersecurity is becoming a major concern in the healthcare industry. The amount of sensitive patient data in hospitals and healthcare organizations creates security vulnerabilities, and medical equipment like insulin pumps and pacemakers have multiple points of entry for hackers. In 2018, healthcare data breaches affected 11.5 million patients. Each record costs a hospital or healthcare organization around $380, 2.5 times the global average. Grand View Research predicts that the global healthcare cybersecurity market will reach $10.85 billion by 2022. Today, innovations in Artificial Intelligence (AI) and Machine Learning (ML) can help hospitals improve their efficiency and cut security costs through behavioral modeling and automated data analysis at scale. What is Artificial Intelligence and Machine Learning? AI is a process by which a machine can learn how to think like a human. Machine learning is an application of artificial intelligence where a system can learn from past experience and make new decisions without additional programming. AI and machine learning can avoid human errors, automate routine tasks, mine big data sets for actionable insights, and predict future trends. How are AI and Machine Learning Impacting Healthcare Cybersecurity? When applied to the healthcare industry, here are some ways that trending technologies like AI and machine learning can improve the quality of patient care through increased security: Machine learning systems can identify and pre-empt suspicious hacker behavior much more efficiently than traditional reactive methods of fixing vulnerabilities post-attack. Security Information and Event Management (SIEM) software products indicate when multiple devices or unknown users are requesting Electronic Health Record (EHR) access. Over 40% of healthcare breaches result from criminal activity like ransomware and phishing. Phishing and other sophisticated attacks can circumvent anti-virus software and systems with rule or signature-based access, so healthcare organizations need to rely on deep learning systems that can absorb new information and intuitively recognize patterns, thereby outsmarting their opponents. Deep learning is a subfield of machine learning where supervised, semi-supervised or unsupervised learning occurs through artificial neural networks. Almost 20% of cybersecurity attacks in healthcare involve Internet-of-things (IoT) devices. In the past, hospitals and device manufacturers have not upheld stringent guidelines on security standards. It is also difficult to implement security controls into critical devices because many have outdated operating systems and proprietary code. A report by TripX revealed three devices - a blood gas analyzer, a picture archive and communications system, and an X-ray system - where malware infections leaked into other parts of the healthcare network. AI can help quickly identify malware threats before they turn into cyberattacks. Traditionally, healthcare organizations have protected systems and devices by strengthening passwords, patching (fixing security vulnerabilities), and segmenting networks. Today, many time-consuming tasks can be simplified with AI automation. Securing Medical Records For organizations that need to comply with HIPAA, patient data privacy is top of mind. 
AI and machine learning solutions can work with large algorithms and datasets like EHRs to protect health data. They compare new data access requests with those from client companies and flag suspicious behavior through big data analysis. Facilitating Blockchain Technology Artificial intelligence can complement blockchain technology and create secure digital transactions housed in the cloud. The Department of Health and Human Services recently piloted a new blockchain-machine learning initiative, where machine learning algorithms cleanse contract-writing system data for tracking through blockchain technology. The project then analyzes historical pricing data to negotiate better contract prices. This is anticipated to lead to $720 million in cost savings per year. The Future of AI and Machine Learning in Healthcare Improving security in the healthcare system has life-changing repercussions. Each second a system is down, patient lives are at risk and there is little access to drug histories, care logs, and instructions for surgery. The imperative for intelligently identifying threats and thwarting attacks is leading to more and more organizations turning to artificial intelligence. Some of the barriers to large-scale adoption of improved cybersecurity are that machine learning apps may need to train on large datasets protected by HIPAA regulations and that hospitals are not used to investing in expensive cybersecurity tools (they invest less than 6% of budget in cybersecurity, much lower than the federal budget for cybersecurity). Additionally, employees at all levels of the healthcare organization need to constantly be reminded to enforce security protocols and take precautionary measures. This may require the introduction of gamification, game mechanics or competitive elements housed within a system or application, to change workplace norms. By conducting research into different AI and machine learning systems and how automation can work with existing data and processes, hospitals can be more confident in their ability to focus on providing high-quality patient care and patient safety for people in need, without worrying that malicious users are taking advantage of their work for easy financial gains.
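To ground the behavioral-modeling idea described above, here is a small, illustrative sketch of flagging unusual EHR access patterns with an off-the-shelf anomaly detector. The feature names and numbers are synthetic stand-ins invented for the example; a real deployment would train on audited, HIPAA-compliant access logs and far richer features.

```python
# Illustrative sketch: flagging anomalous EHR access behavior with scikit-learn.
# The feature values below are synthetic stand-ins for real access-log features.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per user-hour: [records accessed, distinct patients, off-hours logins]
normal_activity = np.array([
    [12, 10, 0], [9, 8, 0], [15, 14, 1], [11, 9, 0], [8, 7, 0],
    [14, 12, 0], [10, 10, 1], [13, 11, 0], [9, 9, 0], [12, 11, 0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_activity)

# A sudden bulk export of hundreds of patients' records stands out sharply.
new_events = np.array([
    [11, 10, 0],      # looks like routine clinical use
    [450, 400, 1],    # looks like a possible breach or compromised account
])

for event, label in zip(new_events, detector.predict(new_events)):
    verdict = "flag for review" if label == -1 else "normal"
    print(event, "->", verdict)
```

The value of this kind of model is not the specific algorithm but the automation: it can score every access request at scale instead of relying on after-the-fact manual review.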
<urn:uuid:e5fb837c-a2b4-4e5c-bf6d-e8ea6d823064>
CC-MAIN-2022-40
https://blog.24by7security.com/how-ai-and-machine-learning-help-healthcare-organizations-improve-cybersecurity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00574.warc.gz
en
0.930676
1,016
3.15625
3
Recently on the blog, we’ve talked a lot about DNS encryption and how DoH impacts end users. But there is more to DNS security than just encrypting DNS requests and responses. It’s probably safe to say that as a DNS filtering company, we have a lot of thoughts about the umbrella term “DNS security.” There are many different types of DNS attacks on the internet, but in this article we will cover the most common ones that victims fall for. But first, let's go over what DNS security is and why having secure DNS is so important. It’s actually pretty simple, though fairly broad. When we talk about secure DNS, we’re talking about adding security at the DNS layer to protect end users from malicious site content, malware, phishing attacks, and other DNS-level attacks. For a brief overview of DNS, you can check out our blog on DNS filtering. The end goal of DNS security is to mitigate possible threats at the DNS level—and this includes insider threats! It’s safe to assume that everyone at your company logs into a computer at some point in their working day. And a large majority of those people are accessing the internet. Since internet usage across all industries is so ubiquitous, protecting employees at the DNS level is imperative. The moment an employee encounters a malicious URL without proper DNS security in place, it puts your business at tremendous risk. That employee may have highly confidential information that the hacker wants to access, or the attack can release malware onto that computer that could then spread to the entire network. Just navigating to the wrong website could result in all of your systems being taken offline for an unknown amount of time. So how do these attacks even occur to begin with? Technically, almost any online attack could be considered a DNS attack since it needs to use DNS to spread. What follows is by no means a complete list of all DNS attacks that can occur, but these are the attacks that people fall victim to most often. Phishing attacks are a favorite among hackers. This is because they’re relatively easy to implement compared to other attacks. These attacks can be implemented via a website or an email in an attempt to lure victims to take the bait. Attacks that target a certain company or group of people are known as “spear phishing” attacks. It’s easy to be conned by these attacks when the hacker is a skilled manipulator and does their research on you and your company. But even the attacks that aren’t that well crafted have a high likelihood of working if the victim isn’t paying close attention. The unprecedented takeover of multiple celebrity Twitter accounts in July was a result of a spear phishing campaign that targeted Twitter employees with account control access. I’ll keep this short since malware is a very broad term and we cover a type of malware attack below. The term malware is actually an abbreviated form of “malicious software.” It can be spread through forced downloads, phishing schemes, or malicious ad content. You’ve probably noticed that phishing attacks and malware attacks are sometimes interconnected. Phishing refers to the way an attack is deployed and malware refers to the actual malicious software that winds up on a victim’s computer. So, a phishing attack is not always a malware attack, though it can be. And vice versa. Ransomware is the most common form of malware attack. The malware a user downloads (or is forced to download) allows hackers to encrypt user files (or entire computers, networks, etc.) and then ask that a ransom be paid.
In July, the GPS navigation company Garmin had a multi-day outage as the result of a ransomware attack. The hackers encrypted parts of their network which blocked users from being able to use Garmin devices. A DDoS attack occurs when an attacker targets a network or server in an attempt to overwhelm the system with a large amount of internet traffic. A DDoS attack is an interesting hybrid of malware and botnet attacks. A computer or device is infected by malware which turns those devices into “bots,” with the hacker gaining control over said bots. These bots then send requests to the targeted server aiming to overflow systems and create a “denial of service” error. That’s a very high-level look at DDoS attacks. Under the umbrella of DDoS attacks, there are many types of attacks. In February 2020, AWS mitigated a DDoS attack that was 2.3 Tbps in size (the largest DDoS attack ever, nearly doubling the previous record). This is when a malicious actor intercepts a communication between two parties. Most commonly we see this when a user is temporarily redirected to a fake login page that will collect personal information or login credentials. Think of it as an advanced form of a phishing attack. It’s incredibly technical and the hacker needs to have strong coding abilities, so it doesn’t rely on their ability to manipulate. Instead, it completely hinges on their ability to camouflage themselves. These types of attacks are where DNS encryption is essential. Sometimes also called “domain theft”, hijacking is when a domain name is stolen from the holder of the registered domain. The true owner of the domain is completely locked out. One method of hijacking involves attackers taking control of the domain owner’s DNS records. And note that we’re not just talking about a WordPress-hosted website here. We’re talking about complete ownership of a domain. This means they gain control over directing website visitors and can direct all incoming and outgoing emails. The hijacker might continue to run the website as-is in order to gain information about the website’s users (and in turn steal from them), turn the hacked website into a way to deploy malware, or simply sell the hijacked domain through secondary markets. In one famous case, former basketball player Mark Madsen purchased a domain from eBay for more than $100,000. The domain purchased was actually a hijacked domain. Note that this is the one attack listed here where DNS filtering can’t help you completely (except in cases where domain hijacking is attempted via phishing schemes). I recommend you lock down the logins for your registered domain (such as your GoDaddy account), add 2FA, and use strong passwords. To avoid placing your company at risk of DNS level attacks, you need to implement DNS filtering with DNS encryption enabled. When looking for a DNS security solution you should also prioritize network redundancy and the ability to log DNS activity and report on it. But it doesn’t start and stop with your DNS filtering provider. Use a role-based access approach, meaning only the people who need access to any given system get access to it. Change your passwords frequently and make 2FA mandatory when applications have the option for it. Finally, work toward comprehensive cybersecurity awareness training within your organization. When everyone is more familiar with types of attacks they might encounter and how they can protect themselves, your company is safer.
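As a rough illustration of what "filtering at the DNS layer" means in practice, here is a small sketch of a resolver check that consults a blocklist before allowing a lookup. The blocklist entries are placeholder domains, not real threat intelligence, and a production DNS filtering service does this at far larger scale against continuously updated feeds.

```python
# Illustrative sketch: a blocklist check at the DNS layer using only the standard library.
# The blocklist entries are placeholder domains, not real threat intelligence.
import socket

BLOCKLIST = {"malicious-example.test", "phishing-example.test"}

def is_blocked(domain: str) -> bool:
    """Return True if the domain or any parent domain is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def filtered_lookup(domain: str):
    if is_blocked(domain):
        print(f"{domain}: blocked at the DNS layer")
        return None
    addresses = {info[4][0] for info in socket.getaddrinfo(domain, None)}
    print(f"{domain}: resolved to {sorted(addresses)}")
    return addresses

filtered_lookup("example.com")                    # allowed, resolves normally
filtered_lookup("login.phishing-example.test")    # blocked before resolution ever happens
```

Because the check happens before any connection is made, a blocked phishing or malware domain never even resolves to an IP address for the end user.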
<urn:uuid:ea6dedd8-443b-492f-8e3e-a97955d0d09a>
CC-MAIN-2022-40
https://www.dnsfilter.com/blog/secure-dns
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00574.warc.gz
en
0.947632
1,466
2.734375
3
When you hear the term "reputation risk management," you might think of a buzzword used in the business sector. Reputation risk management is a term used to describe how companies identify potential risks that may harm their reputation and mitigate them before they blow off. As companies grow, so grows their public reputation. Heading potential PR disasters or credible crises off at the pass can keep organizations from losing revenue, confidence, and trust from their clients. Suffice it to say, putting your best foot forward and keeping it there is crucial. Now, here's a thought: If businesses know they have much to lose if their reputation is threatened, shouldn't parents and guardians also consider that their children can lose out if their digital footprint is at risk? To cap off Internet Safety Month, we're going to ditch the buzzword in favor of a phrase that parents, teens, and young kids can easily grasp: You must manage your online presence. Before we delve into how parents and guardians can take charge, it is crucial that we first understand one thing when it comes to having a digital life: Your online presence is your online reputationOur digital footprint starts the moment we or someone we know shares something about us online. This could be a solo or group photo, a Facebook status update, or a name mention in a Tweet. Even those who claim to be inactive on the Internet can still have an online presence, thanks to other people in their lives. Our footprints don't stop at our first "Hello, World!" though. The more we use the Internet, and the more we're included in other people's social media feeds, the more of our footprints are left for anyone online to see. These marks we leave behind can be collectively referred to as our online presence. How we present ourselves to and conduct ourselves in the digital world affects how people perceive us online—now and in the future. Having an online presence, whether it's a positive on negative one, affects our reputation—online and in the real world. If "Jane Doe" is known to exhibit behavior tantamount to bullying in a forum she frequents, she already has a bad reputation in that community. Who she is and how she behaves in that community can also spill over to other online forum communities as well. There are consequences for bad behavior online. She may be blocked from those communities. Or worse, someone may Google her name and become aware of her bullying behavior online. She could feel the impact of her negative actions in the workplace or beyond when coworkers or friends become aware that Doe is engaged in bullying in forums, they can assume that she has the tendency to bully people in real life as well. Leaving only negative digital footprints online, then, has no longer become an option. What you can tell your kids to manage their online presence"Google yourself." Maybe it has been a while since your kid started using the Internet, or you and your child are just curious of what might come up. (Hint: type your name in quotes) Either way, it's advisable to look up where your name, public posts, and/or photos end up every now and then. If your child has a common name, you can further add modifiers (like the school they go to or city/state/town you live in). Just run many searches with varying modifier combinations and see what comes up. As for photos, you can use Google's image and reverse image searches. To do the latter, go to the Google Image Search page and click the camera icon in the search bar. 
You can then paste the URL of an image you have of your child (in the first tab) or drag-and-drop to upload their picture (in the second tab), so Google can crawl the web in search for other copies of the one you just provided. Google Image Search page processing the image you uploaded for reverse lock-upOther things you can use to search for are email addresses, social media usernames, and phone numbers. You can also set up Google to alert you if other information about your child (like their name) pops up on the Internet at some point in the future. "Watch out for information you don't want made public." It's possible that you may have already stumbled upon a few pieces of information or pictures you or your child may not want online, or at least visible to the public. This information may have been put up years ago or yesterday. Posts can be easily removed on sites you or your child can control, such as Facebook and Twitter. But for third-party sites, it may need a bit of legwork. For copyrighted material such as photos, you can contact the site owners and reference the Digital Millennium Copyright Act (DMCA) [PDF]. As the parent or responsible adult, you may also need to contact each website that has information about your child that you don't want there. It's also time to review those security and privacy settings of your child's accounts to see if there has been a policy update or if you need to modify additional settings. "Start cleaning up your online act." A good starting point will be teaching them good computing and Internet practices, if you haven't already. We have various references of how one can do this here on the Malwarebytes Labs blog. So to avoid reinventing the wheel, below are the links you may want to visit and read up on: - Simple steps for online safety - 10 ways to prevent malware infection - How to avoid potentially unwanted programs - 3, 2, 1, GO! Make backups of your data - How do I secure my social media profile - 10 tips to maintain an online presence in a privacy-hostile world Lastly, impress in them the idea of thinking first before posting anything. Online, it's easy enough for anyone to misconstrue what one is trying to say because cues like facial expressions and body language are non-existent. A flippant joke or a sarcastic remark could start a flame war. Even an innocent post can sometimes get someone else in trouble. "Deactivate/Delete accounts you're no longer using." This may seem obvious, but at times, accounts that are no longer used are left active for an indefinite and extended period because your child may have decided to use another account, or wholly avoided people in a particular online community. The latter is one of the best reasons why your child's account should be deactivated. This is especially helpful if, for example, your child was caught in a crossfire between warring parties and one group started targeting him or her via that account. Save everyone the headache (and the insanity) and deactivate the account. In a perfect world......every Internet user would be sharing all of their achievements, and everyone would be applauding. Every Internet user would be encouraging everyone who needs encouraging. Every Internet user would be honest, civil, and tactful. Every Internet user would be sharing photos of only their best, wholesome selfies, their cats, and funny GIFs. But this isn't a perfect world. Someone will always say something that another may find offensive. 
Someone will put someone else down, talk in Caps Lock, and share photos of their wild partying or of a drunk friend who passed out on a sidewalk. In the end, realize that there is data online about someone that puts them in a bad light. Your child may not be exempted. So help them take control and guide them on how to be more responsible with what they share now and in the future.
<urn:uuid:450755e9-7dff-46f3-b7cd-fd95703bbdfb>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2018/06/internet-safety-month-manage-childs-online-presence
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00574.warc.gz
en
0.953682
1,542
2.546875
3
The Evolution of the Matrix IoT, or Internet of Things, devices can generate up to trillions of data points per second depending on the ecosphere they inhabit. These devices are not necessarily stationary either. Some may move within a designated geography, while others roam as freely in and out of an area as one could imagine, simply by being attached to a person rather than a thing. Each of these devices is created by a host of manufacturers over potentially several generations with varying degrees of interoperability, security, efficiency, and effectiveness. With over 31 billion IoT devices deployed, and growing, they permeate almost every facet of our lives. It is anticipated that by 2025 over 75 billion devices will be generating petabytes of data per second. Impressive as this sounds, it still is less than .006% of “things” that can be connected and the race to fill that void is staggering. First coined in 1999 by Kevin Ashton, IoT did not really start its meteoric rise until just over a decade later when a Gartner report (2011) listed it as a new and rising technology. Since then, the AEC (Architecture/Engineering/Construction) industry embraced the possibilities of smart building technology in creating more efficient buildings that worked within parameters set by their occupants to accomplish many desired outcomes – sustainability being a leading one. As networks, storage, and IoT devices have improved, we have moved from an Internet where people were the primary client/customer and the data they generated was analyzed after the fact. We have now moved to one where things reign supreme and the data is generated first and we need to sift through it to analyze it and decide what is useful and relevant. Out of this massive data cache, we have seen a division of IoT into five basic branches that each form their own unique universes, but as some suggest with our own Universe, cross over each other and can help or hinder. These include Consumer, Commercial, Industrial, Infrastructure, and Military. Within each of these applications, the technology deployed and the data collected have generated several dilemmas. The largest of these is who owns the data and who determines what is collected and how it is to be used. The best example of this is within our tech companies such as Google, Facebook, Apple, and others who target trillions of points of data of their users through the use of their services – passive or otherwise – that can then be parsed, analyzed, and monetized and has often been the target of privacy rights groups, yet we benefit from that data. We also have seen how that data can be used for nefarious intentions as well with the recent political unrest surrounding the Presidential Election in the US where data was used to monetize and organize, and not always for altruistic purposes. A building owner may think they own their data but that may not be the case. Another significant issue is the security of these devices.
While we may be in an age of IoT where there are many mature players who have developed, often through trial and error, robust IoT devices that can be effectively placed in a built environment, there are still many newcomers rushing to market, lacking in experience, or with not-so-great intentions producing IoT products that open vulnerabilities to the bad actors or self-induced catastrophes. These can include weak, bad, or hardcoded passwords; insecure or unneeded network services; insecure edge devices; outdated firmware, hardware, or software, and many other security holes. Smart solutions with an intelligent building or complex being designed and built must take into account not just these vulnerabilities within the systems designed into the buildings, but also must look at the transient devices that are IoT connected to protect from failure points. This is often one of the biggest tasks overlooked by owners, occupants, and the design and construction teams where blinders are worn to anything that isn’t attached to the building. This opens up the project for failure points and security vulnerabilities. Current generations of devices have limited intelligence at the edge mainly due to lack of computing power and storage, however, that continues to change at an exponential pace. This is mainly a factor that on-demand information requirements are changing and latency of processing in the cloud slows things down. So as these edge devices improve, vulnerabilities could increase more than there are today and transient devices that may connect to a building's edge topography will cause headaches down the line. Blockchain technology, as is used in Bitcoin transactions, has been continuing to make inroads in the IoT world and has the promise to advance security but this needs to be coupled with a thorough threat assessment beyond the boundaries of the project in addition to the actual architecture of a projects IoT solution. Other security features continue to be developed and integrated but until the IoT industry truly aligns and overseeing authorities place limits on what is acceptable IoT will continue to be a wild west.
<urn:uuid:d80c9955-9c21-49d8-8207-5ca3155f5f00>
CC-MAIN-2022-40
https://construction.cioreview.com/cxoinsight/the-evolution-of-the-matrix-nid-32819-cid-25.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00774.warc.gz
en
0.969619
1,027
2.875
3
ACL (Access Control List) – A method of keeping in check the Internet traffic that attempts to flow through a given hub, router, firewall, or similar device. Access control is often accomplished by creating a list specifying the IP addresses and/or ports from which permitted traffic can come. The device stops any traffic coming from IP addresses or ports not on the ACL. ARP – Address Resolution Protocol (ARP). A network protocol that is used to convert IP addresses to physical network addresses by sending an ARP broadcast to request the address AH – Authentication Header – Part of the Internet Protocol Security (IPsec) protocol suite, which authenticates the origin of IP datagrams and guarantees the integrity of the data. BYOD – Bring your own device – The authorised use of personally owned mobile devices such as smartphones or tablets in the workplace. DMZ – Demilitarized Zone – A DMZ is commonly provisioned between a corporate network and the internet where data and services can be shared/accessed from users in either the internet or corporate networks. A DMZ is established with network firewalls to manage and secure the traffic from either zone. The name is derived from the term “demilitarised zone”. Firewall – Hardware or software Security system designed to prevent unauthorised access to a network from another computer or network. ISP – Internet service provider – Service Provider that provides access to the internet and related services. IDS – Intrusion detection system – Program or device used to detect that an attacker is or has attempted unauthorised access to computer resources. IPS – Intrusion prevention system – System that also blocks unauthorised access when detected. IKE – Internet Key Exchange – IKE establishes a shared security policy and authenticates keys for services like IPSec that require security keys. Before any secured (over IPSec) traffic can be passed, each VPN Gateway must verify the identity of its peer. This can be done by manually entering pre-shared keys into both hosts . IPSec – IP Security – A framework that provides data confidentiality, integrity and authentication between IPSec peers. IPSec uses IKE to address the negotiation of protocols and algorithms based on local policy and to generate the encryption and authentication keys to be used by IPSec. NAC – Network Access Control – NAC is a security approach that strengthens the security of a secured network by restricting the availability of network resources to endpoint devices that comply with a defined security policy of the organization or group. RADIUS – Remote Authentication Dial In User Service – RADIUS is a networking protocol that provides centralized access, authorization and accounting management for users to connect and use a network service. When a person or device connects to a network ,”RADIUS” authentication is required as part of security control. Phishing – Method used by criminals to try to obtain financial or other confidential information (including user names and passwords) from internet users, usually by sending an email that looks as though it has been sent by a legitimate organization (often a bank). The email usually contains a link to a fake website that looks authentic. Proxy server – Proxy is terms used for device which sits between an end system and remote server and acts as a mediator. The client requesting the resource connects to the proxy server and once validated proxy connects to remote server and provides the requested content to the client. 
VPN – Virtual Private Network – A VPN is a computer network that uses public telecommunication infrastructure such as the Internet to provide remote offices or individual users with secure access to their organization’s network. Backdoor – A design fault, planned or accidental, that allows the apparent strength of the design to be easily avoided by traffic manipulation. Certificate – An electronic document attached to someone’s public key by a trusted third party, which verifies that the public key belongs to a legitimate owner and has not been compromised. Certificates are intended to help you verify that a file or message actually comes from the entity it claims to come from. CA – Certificate Authority – A trusted third party who verifies the identity of a person or entity, then issues digital certificates vouching that various attributes (e.g., name, a given public key) have a valid association with that entity. Encryption – The process of disguising a message so that it cannot be read by anyone who does not hold the decryption key. The resulting data is called ciphertext. Event logs – A log of user actions or system occurrences which helps in auditing and in detecting security breaches. Hacker – A user who breaks into sites for malicious purposes. MD5 – MD5 is one of a series of message digest algorithms. It appends a length field to a message and pads it to a multiple of 512-bit blocks; each 512-bit block is fed through a four-round process to produce a 128-bit message digest. NAT – Network Address Translation – NAT hides internal IP addresses from the external network. When a firewall/router is configured to provide NAT, all internal addresses are translated to public IP addresses when connecting to an external source. Public-key encryption – A cryptographic system that uses two keys: a public key known to everyone and a private (secret) key known only to the recipient of the message. RSA – A standard for public-key cryptosystems named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman. Its security is based on the difficulty of factoring very large numbers (the product of two large primes); there is no known efficient way to factor such numbers. The size of the key used in RSA is variable; 512-bit keys were once common but are now considered insecure, and keys of 2048 bits or more are typical today. SSL – Secure Sockets Layer – A technology embedded in web servers and browsers that encrypts traffic. It is an encryption technology for the Web used to provide secure transactions, such as the transmission of credit card numbers for e-commerce. Stateful Inspection – A term first introduced by Check Point, describing a firewall's ability to analyze packets and view them in context. (Also called stateful multi-layer inspection.) BOTNET – A botnet is a collection of internet-connected devices, which may include PCs, servers, mobile devices and Internet of Things devices, that are infected and controlled by a common type of malware. Users are often unaware of a botnet infecting their system.
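To make the ACL entry above more concrete, here is a minimal sketch in Python of how a packet-filtering device might evaluate traffic against an access control list with an implicit deny at the end. The rule format, field names, and sample addresses are assumptions made for illustration, not any vendor's actual ACL syntax.

import ipaddress

# Illustrative ACL: permit/deny rules evaluated top-down, first match wins.
# The rule structure and the addresses below are invented for this sketch.
ACL = [
    {"src": "10.0.0.0/8",   "dst_port": 443, "action": "permit"},  # internal clients to HTTPS
    {"src": "192.0.2.0/24", "dst_port": 25,  "action": "deny"},    # block SMTP from this subnet
]

def evaluate(src_ip, dst_port):
    """Return 'permit' or 'deny' for a packet; unmatched traffic is denied."""
    for rule in ACL:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"]) and dst_port == rule["dst_port"]:
            return rule["action"]
    return "deny"  # implicit deny at the end of the list

print(evaluate("10.1.2.3", 443))     # permit
print(evaluate("198.51.100.7", 25))  # deny (no matching permit rule)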
<urn:uuid:4756aa4a-c141-4cff-996c-98c9f69548bf>
CC-MAIN-2022-40
https://ipwithease.com/commonly-used-network-security-terms-and-concepts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00774.warc.gz
en
0.921133
1,316
3.296875
3
Windows and Linux are two of the most famous and widely used operating systems globally. With new updates and features being introduced, both of these operating systems have stayed in the limelight across the world. While the first version of Windows (1.0) was released in 1985, Linux, built by Linus Torvalds, came to the fore in 1991. Windows is a GUI-based operating system with commercial licensing costs, whereas Linux is open source and free to use.
Related – Top Linux Interview Questions
In fact, it is notable that Windows has captured roughly a 90% share of the global OS market, whereas Linux sits at a meagre 1.6%. Another essential contrast between the two is that the former is considered much less secure than the latter, and that Windows is far less protective of end-user privacy, whereas Linux gives its users full privacy.
Related – LINUX vs UNIX
Proponents of Windows point to the OS being very user friendly and easy to use compared to Linux, which is more complex and requires more skill to operate and work on. On the other hand, Linux is considered very customizable, with a plethora of features to support bespoke needs that are limited in Windows. The table below summarizes these points in a simple, structured format –
<urn:uuid:66b1b552-769f-47b2-abae-4e92199f71bc>
CC-MAIN-2022-40
https://ipwithease.com/windows-vs-linux-operating-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00774.warc.gz
en
0.96085
269
2.953125
3
|Visual Basic Scripting Edition| |Property Get Statement| Declares, in a Class block, the name, arguments, and code that form the body of a Property procedure that gets (returns) the value of a property. Used only with the Public keyword to indicate that the property defined in the Property Get procedure is the default property for the class. Indicates that the Property Get procedure is accessible only to other procedures in the Class block where it's declared. Name of the Property Get procedure; follows standard variable naming conventions, except that the name can be the same as a Property Let or Property Set procedure in the same Class block. List of variables representing arguments that are passed to the Property Get procedure when it is called. Commas separate multiple arguments. The name of each argument in a Property Get procedure must be the same as the corresponding argument in a Property Let procedure (if one exists). Any group of statements to be executed within the body of the Property Get procedure. Keyword used when assigning an object as the return value of a Property Get procedure. Return value of the Property Get procedure. If not explicitly specified using either Public or Private, Property Get procedures are public by default, that is, they are visible to all other procedures in your script. The value of local variables in a Property Get procedure is not preserved between calls to the procedure. You can't define a Property Get procedure inside any other procedure (e.g. Function or Property Let). The Exit Property statement causes an immediate exit from a Property Get procedure. Program execution continues with the statement that follows the statement that called the Property Get procedure. Any number of Exit Property statements can appear anywhere in a Property Get procedure. Like a Sub and Property Let procedure, a Property Get procedure is a separate procedure that can take arguments, perform a series of statements, and change the value of its arguments. However, unlike a Sub and Property Let, you can use a Property Get procedure on the right side of an expression in the same way you use a Function or property name when you want to return the value of a property.
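The sketch below is not VBScript; it shows the analogous idea in Python, where a read-only property getter plays the role of a Property Get procedure, purely for comparison. The class and member names are invented for illustration.

# Analogy only: Python's @property decorator defines a getter that runs
# whenever the attribute is read, much as a Property Get procedure runs
# when the property appears on the right side of an expression.
class Circle:
    def __init__(self, radius):
        self._radius = radius  # backing field, "private" by convention

    @property
    def area(self):
        return 3.141592653589793 * self._radius ** 2

c = Circle(2.0)
print(c.area)  # reads like a field, but the getter code executes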
<urn:uuid:862c1ec2-8e43-4b2d-9ebe-9ac4e60483f4>
CC-MAIN-2022-40
https://admhelp.microfocus.com/uft/en/all/VBScript/Content/html/05167024-a817-405f-a0ce-2057d01f804a.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00774.warc.gz
en
0.801998
475
3.15625
3
Packet Tracer file (PT Version 7.1): https://bit.ly/2vFzGUU Get the Packet Tracer course for only $10 by clicking here: https://goo.gl/vikgKN Get my ICND1 and ICND2 courses for $10 here: https://goo.gl/XR1xm9 (you will get ICND2 as a free bonus when you buy the ICND1 course). For lots more content, visit http://www.davidbombal.com – learn about GNS3, CCNA, Packet Tracer, Python, Ansible and much, much more. The MAC address table contains address information that the switch uses to forward traffic between ports. All MAC addresses in the address table are associated with one or more ports. The address table includes these types of addresses: •Dynamic address: a source MAC address that the switch learns and then ages when it is not in use. •Static address: a manually entered unicast address that does not age and that is not lost when the switch resets. The address table lists the destination MAC address, the associated VLAN ID, and port number associated with the address and the type (static or dynamic). By default, MAC address learning is enabled on all interfaces and VLANs on the router. You can control MAC address learning on an interface or VLAN to manage the available MAC address table space by controlling which interfaces or VLANs can learn MAC addresses. Before you disable MAC address learning, be sure that you are familiar with the network topology and the router system configuration. Disabling MAC address learning on an interface or VLAN could cause flooding in the network.
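As a rough illustration of the learning and flooding behaviour described above, here is a small Python sketch of a switch's MAC address table. It models only dynamic learning and unknown-unicast flooding; VLANs, aging timers, and static entries are left out, and the MAC addresses and port numbers are made up.

class SimpleSwitch:
    """Toy model of dynamic MAC learning and unknown-unicast flooding."""

    def __init__(self, num_ports):
        self.ports = list(range(1, num_ports + 1))
        self.mac_table = {}  # learned mapping: MAC address -> ingress port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: associate the source MAC with the port the frame arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown one is
        # flooded out every port except the port it arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = SimpleSwitch(num_ports=4)
print(sw.receive("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=1))  # unknown -> flood [2, 3, 4]
print(sw.receive("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2))  # learned -> [1]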
<urn:uuid:3bd383bd-cf6d-436c-b177-35d70a059f4d>
CC-MAIN-2022-40
https://davidbombal.com/cisco-ccna-packet-tracer-ultimate-labs-mac-address-learning-flooding-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00774.warc.gz
en
0.896826
356
2.984375
3
A query is broken up into terms and operators. There are two types of terms: single terms and phrases. A single term is a single word such as “test” or “hello”. A phrase is a group of words surrounded by double quotes such as “hello world”. Multiple terms can be combined together with Boolean operators to form a more complex query (see below). The analyzer used to create the index will be used on the terms and phrases in the query string. So, it is important to choose an analyzer that will not interfere with the terms used in the query string.
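As a rough sketch of the distinction between single terms and phrases, the Python snippet below splits a query string into quoted phrases and individual terms. It only illustrates the syntax described above; it is not how Lucene's actual query parser works.

import re

def split_query(query):
    """Separate a Lucene-style query string into single terms and quoted phrases."""
    phrases = re.findall(r'"([^"]+)"', query)   # text inside double quotes
    remainder = re.sub(r'"[^"]+"', " ", query)  # strip the phrases out
    terms = [t for t in remainder.split() if t not in ("AND", "OR", "NOT")]
    return terms, phrases

terms, phrases = split_query('hello AND "hello world" OR test')
print(terms)    # ['hello', 'test']
print(phrases)  # ['hello world']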
<urn:uuid:bf5601ab-1f45-448c-a236-397a2a6006d7>
CC-MAIN-2022-40
https://community.denodo.com/docs/html/browse/latest/en/vdp/data_catalog_up_to_update_20210209/appendix/apache_lucene_search_syntax/terms
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00774.warc.gz
en
0.938076
136
2.53125
3
OT (operational technology) is responsible for critical processes that, if breached, could have catastrophic consequences, including loss of life. OT encompasses supervisory control and data acquisition (SCADA), industrial control systems (ICS), and distributed control systems (DCS). Emergency services, water treatment plants, traffic management, and other critical infrastructure rely on operational technology solutions to properly function. Cyber-attacks on critical OT infrastructure have been on a stratospheric trajectory, increasing 2000% in recent years! Audacious attacks have been launched on everything from nuclear plants to water treatment facilities. In fact, a poisoning attack at a Florida water treatment facility was particularly scary because of how easily an attacker gained sensitive access via inadequate password management and how they were able to leverage unsophisticated actions via a consumer-grade remote access tool within the environment to order the system to increase the amount of lye in the water. Why Cyber Risk to OT Systems is Increasing For many years, industrial systems relied upon proprietary protocols and software, were manually managed, and monitored by humans, and were not directly connected to the public Internet. In those days, the only way to infiltrate OT systems was to obtain physical access to a terminal—and this was no easy task. OT and IT (Information Technology) integrated little and did not deal with the same kinds of vulnerabilities. Today, it's a starkly different story as we see more industrial systems brought online to deliver big data and smart analytics as well as adopt new capabilities and efficiencies through technological integrations. This transition from closed to open systems has generated a slew of new security risks that are being actively targeted—often with success—by threat actors, and that need to be addressed. As industrial systems become more connected, they also become more exposed to vulnerabilities. Add legacy equipment, safety regulations that may prohibit any modifications being made to equipment, and compliance regulations that require sensitive data to be made available to third parties, and you have quite a challenge on your hands. Additionally, remote vendors, employees (operators), suppliers, and other contractors often remotely access OT systems to perform legitimate maintenance and other actions. Remote vendors and employees have further complicated the situation by using personal devices (BYOD) as well as working from home networks that are not properly hardened. These remote connections have further blurred the IT-OT segmentation and expanded the attack surface, providing new entry points for hackers to exploit. Often, VPNs are used for privileged remote worker or vendor access, but this is an inappropriate and insecure VPN use case as VPNs lack granular access controls and cannot perform session monitoring or management. While VPNs can provide a secure tunnel from one location to another, the access permitted by a VPN is unrestricted—which is completely unjustifiable for any sensitive environment, let alone OT systems. Of all users, privileged users—whether employee or vendor—pose the most risk as the attacker can ride on whatever privileges that worker has to move laterally from the IT network to the OT and ICS system on the production floor. 
Once in the ICS network, hackers can potentially monitor and manipulate operational components, including reading commands or changing parameters, which can cause dangerous conditions to the environment, jeopardize the safety of plant personnel or the community, and potentially cause monetary loss due to shut down or a disruption in production 4 OT Cybersecurity Best Practices How can organizations securely address a large volume of operators, contractors, and vendors connecting remotely into their network, without the use of a VPN and without compromising processes, operations continuity, or inhibiting business agility or productivity? At minimum, you need to know at all times who (identity) is doing what on your network, from what device, and when. And, critically, you need to be able to exercise complete, granular control over that access at all times—whether it is for an employee or vendor, and whether they are on site, or connecting remotely. Here are four best practices for protecting Operation Technology environments from cyberthreats: #1 Implement a Zero Trust Framework While the zero trust security philosophy is commanding more attention and seeing increased adoption, most organizations remain stuck operating with the traditional network perimeter security model and using VPNs and other tools to grant access for remote access. Securing any network begins with understanding every connected user and device and every bit of data they are trying to access. This is a basic premise of any security framework—including zero trust. To truly embrace zero trust across your OT network, consider implementing the following: - Apply network segmentation: Provide application access independent of network access. This entails enabling contractors and vendors to access only the applications and systems they need--without requiring complex firewall configurations or VPNs. - Provide application-level micro-segmentation, which prevents users from discovering applications that they are not authorized to access. In addition to protecting against malicious insiders or external threat actors, this step also helps protect the environment against human errors, which is the one of the leading cause of breaches and system downtime. - Establish a centralized point of visibility and accessibility for different systems that require various connectivity methodologies. As more OT systems are integrated with IT systems to drive automation, efficiency, and lower costs, keeping these systems known and available on the internet only for authorized users eliminates the biggest attack surface. - Monitor and record all activities performed over remote access via on-screen video recording, keystroke logging, etc. Session monitoring is essential both for security and for compliance. - Exert granular control over the sessions by enforcing least privilege and restricting commands that can be executed by identity/user. - Implement API Security - Protecting APIs is essential to safeguarding the integrity of data communicated between IoT devices and back-end systems. Only authorized devices, developers, and apps should be permitted to communicate with specific APIs. #2 Align the Right Remote Access Tools with the Right Use Cases Over the last year, with the largescale shift to remote work, VPN usage has spiked to an all-time high. Unfortunately, VPNs and other remote access technologies, like RDP, are being stretched beyond their legitimate use cases in ways that are clearly reckless. 
No where is the risk more potentially dangerous than within OT networks. VPNs and RDPs need to be eliminated from usage in those instances involving privileged access and third-party access. While adequate for providing basic remote employee access to non-sensitive systems (i.e., email etc.), VPNs lack the granular access controls, visibility, scalability, and cost-effectiveness demanded of third party and remote worker access to OT/IoT devices. VPNs cannot enforce the granular least privilege access or monitoring/management over sessions that is imperative for security and oversight of privileged user access. #3 Understanding IT Security Versus OT Security In most organizations, the policies and service agreements to manage IT systems do not extend to the operational technology environment, creating a security and management gap. Managing security and risk in OT environments isn’t as simple as porting over IT security best practices into the OT system. Relying on consumer-grade remote access / support and other such IT solutions is certainly not adequate when it comes to protecting the most sensitive environments. OT technology obsolescence periods are much longer than IT. Legacy systems that have sometimes been in place for 20-25 years proliferate in OT environments. Compare that to the IT world where equipment rarely lasts more than five years. This results in outdated, diverse endpoints where patches aren’t available, or updates can’t be made due to low compute power. IT has had decades to mature security practices and minimize exposure. But the need to manage risk is universal, and organizations must adopt solutions and strategies to secure their OT environments based on their specific needs. #4 Apply Robust Privileged Credential Management Practices – And No Password Sharing! Password malpractice abounds in OT environments, and it continues to be a leading cause of breaches. Credentials are often shared internally and externally, and access is not limited to specific network devices or segments. Reduce the risks associated with privileged credential compromise in your OT environment by safeguarding access to privileged account passwords and SSH Keys. Implement an enterprise-grade privileged credential management solution that provides full control over system and application access through live session management, allowing administrators to record, lock, and document suspicious behavior with the ability to lock or terminate sessions. Such a solution should also eliminate embedded and default passwords, and bring them under active, centralized management. OT and Military-Grade Cybersecurity from BeyondTrust BeyondTrust PAM solutions give OT security managers the tools they need to manage privileged access in a challenging OT environment. To learn how BeyondTrust can help you secure privileged remote access for employees and vendors, enforce least privilege and application control across your OT environment, and ensure all privileged credentials and secrets are consistently security and managed, contact us today. Julissa Caraballo, Product Marketing Manager Julissa Caraballo is a Product Marketing Manager at BeyondTrust. She has over 10 years of experience in software product marketing and lead generation. Previously, Julissa worked as a Marketing Director for a medical management software company. She holds a BA in Business Administration/Marketing and a MBA in Healthcare Management. 
Her certifications include, Certified Digital Marketing Manager, Pragmatic Marketing Certified and Certified Medical Practice Executive. She can be found on LinkedIn and all social media platforms.
<urn:uuid:82a0761f-98c5-4850-9343-b5ea5717815e>
CC-MAIN-2022-40
https://www.beyondtrust.com/blog/entry/operational-technology-ot-cybersecurity-4-best-practices
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00174.warc.gz
en
0.939014
1,924
2.75
3
There are two types of DNS record of particular interest to the delivery of mail: MX and A records. Failure to set your MX records correctly will result in no (or sporadic) delivery of email to your mail server, so it is essential that you set them up correctly. Note: Setting up and configuring DNS servers is outside the scope of support provided by Gordano for GMS.
MX stands for Mail eXchange and is a particular type of DNS record that determines where any email destined for your domain should be delivered. An MX record would typically point to the fully qualified name of your mail server, which in turn must have a corresponding A record in DNS that defines the IP address of the mail server. A records define the mapping between a fully qualified hostname and its IP address. It is this mapping that allows users to type in sensible names for your servers, such as www.yourdomain.com, rather than having to remember the more complex IP address assigned to that server.
A typical DNS entry for yourdomain.com may look something like:
IN MX 10 mail.yourdomain.com
mail.yourdomain.com IN A 220.127.116.11
To provide redundancy, many domains are set up to have multiple MX records, as in the following example:
IN MX 10 mail.yourdomain.com
IN MX 20 mail.yourisp.com
mail.yourdomain.com IN A 18.104.22.168
The numbers after MX indicate the priority of that entry; the lower the number, the higher the priority. So in the above example, anyone attempting to send mail to yourdomain.com would perform an MX lookup in DNS and obtain the two results. They would first attempt to contact mail.yourdomain.com to deliver the message, and only if they fail to connect to this server would they go on and try mail.yourisp.com. Note that in the example above there is no A record for mail.yourisp.com; this is because the A record for that server would appear in the DNS record for the domain yourisp.com.
You will also find a tutorial on DNS (along with other subjects) available under the Support > Online section of our web site.
Keywords: MX Record DNS resolve MXRecord mx lookup failed
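For a quick way to see which MX records a domain actually publishes, the sketch below uses the third-party dnspython package. The API shown is from dnspython 2.x; treat the exact call names as an assumption and check the library's documentation, or simply run dig yourdomain.com MX for the same information.

# Requires the third-party "dnspython" package (pip install dnspython).
# Older dnspython releases used dns.resolver.query() instead of resolve().
import dns.resolver

def lookup_mx(domain):
    """Print each MX record's priority and mail server, lowest priority first."""
    answers = dns.resolver.resolve(domain, "MX")
    for record in sorted(answers, key=lambda r: r.preference):
        print(record.preference, record.exchange)

lookup_mx("yourdomain.com")
# For the example zone above you would expect output along the lines of:
# 10 mail.yourdomain.com.
# 20 mail.yourisp.com.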
<urn:uuid:8ca6b253-5dd6-4735-be8e-6aac5a8a5bdf>
CC-MAIN-2022-40
https://www.gordano.com/knowledge-base/what-is-an-mx-record/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00174.warc.gz
en
0.917923
489
2.953125
3
MIMO and Beamforming: Looking at the Impact on Throughput and Coverage What happens when using MIMO and Beamforming? A quick brown fox jumps over the lazy dog. This was a sentence that I heard upward of 500 times when I first started in this business as student intern for a regional mobile wireless carrier. Back then, carriers were transitioning from analog to digital mobile wireless. Aside from signal coverage, the main concern was voice quality of wireless connection. Analog wireless was notorious for garbled and noisy connection, and potential customers were hesitant to buy expensive phones and to pay high “per minute” usage fees to use a product with voice quality vastly inferior to fixed wireless. To prove that the voice quality is acceptable, we played pre-recorded sentences while on mobile call, and record them on the receiving end while driving around Boulder, CO. Later, the recordings would be played to an impartial audience, which would grade the voice quality. There were about a dozen sentences that we played in a loop; a quick brown fox jumps over the lazy dog is one of those. These days, a chicken leg is a rare dish is another, and rice is often served in round bowls is yet another I still remember. Our test drives would end at the entrance to Boulder Canyon, where the signal would drop. Let’s see how we can use this sentence to explain what MIMO and Beamforming do to wireless signal throughput and coverage. The setup we used back in the day was basic: one transmit and one receive antenna. Let’s also assume that a modern digital technology would convert the sentence to data, and that it would take one second to transmit that sentence in the basic SISO setup. The coverage edge is the entrance to Boulder Canyon. Now let’s see what happens when use MIMO and Beamforming. 1. MIMO Multiplexing In this MIMO mode, we split this sentence into 4 parts, and transmit each part on a separate transmit antenna. Those 4 parts are received by 4 receiving antennas and parsed back to its original form. Since each antenna transmits ¼ of the original content, the transmission is completed in 0.25 seconds instead of 1 second. Thus, the transmission rate increased 4 times, while the signal coverage is the same. Technically, due to multipath, each receive antenna may get slightly different signal, and the best of the 4 is selected as receive signal. This is only a slight improvement over the basic SISO setup, which lets us drive about 100 meters into Boulder Canyon before the signal is lost. Summary: 4 TX, 4Rx antennas, a quick brown fox is 4 times faster, runs 100 meters into Boulder Canyon. 2. MIMO Diversity In this MIMO mode, each antenna is transmitting the whole sentence. There is no improvement in transmission rate. At the other end, the 4 receive antennas sum the signal up. The resulting signal is higher than what was received at each antenna. As a result, we can hear the sentence even as we drive deeper into Boulder Canyon. The actual net coverage increase is 10*log(4) = 6 dB. This allows us to drive deep into Boulder Canyon before the signal is dropped. Note that we used the number 4 in the equation because there are 4 receive antennas. We would use the same number even if we had only one transmit antenna. Summary: 4TX, 4RX antennas, a quick brown fox runs one kilometer into Boulder Canyon. Let’s assume that we have 4 transmit antennas and just one receive antenna. This time, we can beamform the transmit antennas. 
What that means is that we use 4 omnidirectional antenna and make them form a directional antenna pattern if we send signal with slightly different amplitude and phase to each transmit antenna. The directional pattern gain is 10 log(4) = 6 dB higher than individual omnidirectional gain. While it is the same net coverage gain as in the previous case, this time we achieve this coverage improvement with only one receive antenna. While we have slightly more complex network at the base station (signal and phase distribution is different for each antenna), we have fewer antennas at the receiver. Note that the transmission rate is still the same. Summary: 4TX, 1RX antenna, a quick brown fox runs one kilometer into Boulder Canyon. 4. Beamforming and MIMO Let’s assume that we have 8 transmit antennas and 2 receive antennas. Half of the transmit and receive antennas have +45° linear polarization, and the other half have -45° polarization. At transmit end, we have 4 collocated +/-45° pairs, and on receive end we have one +/- 45° pair. The four +45° transmit antennas form one directional transmit beam that has 10log(4) = 6dB gain higher than omnidirectional antenna. This beam sends the +45° polarized signal that is recovered by +45° polarized antenna. The same happens with the -45° polarized signal. The original sentence is broken into two halves; one half is sent over +45° polarized signal, and the other half over -45° polarized signal. The whole transmission is completed over 0.5 seconds. Thus, the transmission rate increased 4 times, while the signal increased by 6 dB. Summary: 8 TX, 2RX antennas, a quick brown fox is 2 times faster, runs one kilometer into Boulder Canyon. In the next section, we will explain how we configure MIMO streams in iBwave Design. MIMO Configuration and Beamforming Modeling in iBwave The first step in considering MIMO configuration in iBwave is to configure the System as MIMO system and for now, there are 4 supported options: - 2X2 MIMO - 3X3 MIMO - 4X4 MIMO The number of outputs that can be connected to external antenna will depend on the MIMO configuration: With MIMO configuration, more than 2 output powers per Path (also called Stream or Branch) and each Path would have its own power. There are three options to connect the output of the systems to the antennas: - Use of internal antennas - Use of external MIMO antennas (Connect to DAS) - Use of external SISO antennas (Connect to DAS) In iBwave, the output powers (EIRP) of each Path are displayed separately. In the case of an external MIMO antenna or internal antennas, the Output powers of all paths will be displayed in one frame. For the case of external antennas, the Output powers of each antenna will be displayed separately as each antenna can be placed in different location. This concept is shown in the figure below: The Signal Strength output map is predicted from each path and the strongest value is displayed as a result of prediction. In case of collocated antennas, the predicted values will be almost the same from all paths as they have the same output EIRP power. As for the MIMO MADR output map, it is predicted for the multiplexing mode and a MIMO Multiplexing Gain is applied from a generic MIMO curve to predict the MADR Throughput map. The Diversity Mode is not supported for now. The MIMO MADR will be the SISO MADR multiplied by the MIMO Gain from the MIMO curve and it is a function of SNR. It is worth noting that the MIMO curve can be configured with specific OEM values. 
An example MIMO gain curve plots the MIMO gain as a function of SNR.
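To tie the numbers in the fox examples together, here is a small Python check of the 10·log10(N) array/diversity gain and the throughput multiplier quoted for each antenna configuration. It simply reproduces the arithmetic used above; it is not a propagation or link-budget model.

import math

def db_gain(num_antennas):
    """Combining/beamforming gain of N antennas relative to one antenna, in dB."""
    return 10 * math.log10(num_antennas)

scenarios = {
    # name: (antenna count that sets the gain, throughput multiplier)
    "SISO baseline":            (1, 1),
    "MIMO multiplexing (4x4)":  (1, 4),  # four times faster, no extra link gain
    "MIMO diversity (4x4)":     (4, 1),  # ~6 dB gain, same rate
    "Beamforming (4x1)":        (4, 1),  # ~6 dB gain from the transmit array
    "Beamforming + MIMO (8x2)": (4, 2),  # ~6 dB gain and twice the rate
}

for name, (n, speedup) in scenarios.items():
    print(f"{name}: {db_gain(n):.1f} dB extra gain, {speedup}x throughput")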
<urn:uuid:24a33a00-e4e6-4980-a53a-57378f76df9f>
CC-MAIN-2022-40
https://blog.ibwave.com/a-quick-brown-fox-jumps-over-the-lazy-dog/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00174.warc.gz
en
0.932742
1,658
2.53125
3
Infographic: Timeline of a Zero-Day Exploit
The increase in volume of these cyberattacks has a lot of people asking exactly what a zero-day exploit is. A zero-day exploit is, very simply, a software exploit that has been found by cybercriminals but not yet by the developers. So long as the exploit goes unnoticed, hackers will take advantage of it until it is discovered and patched: this is the cyberattacking equivalent of striking while the iron is hot. The frequency with which these attacks target businesses should be a concern to all decision makers. According to a Ponemon Institute report, 76% of successful attacks on organization endpoints were zero-day exploit attacks. When you factor in how endpoints in particular are more often than not on the receiving end of these attacks, and further factor in the proliferation of endpoints within businesses, you have a recipe for disaster. Multi-layered security in business is crucial in 2020: take a look at our infographic below to get an understanding of what the timeline for a zero-day attack looks like and why you might want to look into your protections right away.
In light of recent events, many organizations have found themselves playing catch-up with their cybersecurity, trying to implement makeshift solutions to make up lost ground while their workforces are working remotely for the immediate future. To find out more about how you can ensure your business’ cybersecurity is in good shape for now and for the future, download our eBook, “What Makes a Good Cybersecurity Defense for a Modern SMB?”.
<urn:uuid:ba06da49-76d9-48a6-869a-0c58a365d3e5>
CC-MAIN-2022-40
https://www.impactmybiz.com/blog/blog-timeline-zero-day-exploit/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00174.warc.gz
en
0.966467
325
2.734375
3
Share this post: In 2012, neuroscientist, TEDTalker, inventor James Kozloski’s wife Sumali was in nursing school in Connecticut – just when the state was struggling to deal with some of the highest rates of hospital-acquired infections in the country. The outbreak, for example, contributed 914 cases of “super bug” Methicillin-resistant Staphylococcus aureus to the some 80,000 annual cases recorded by the CDC (for comparison, California, with 35 million more residents than CT, had 728 cases in 2012). For Sumali and her colleagues, it meant spending much more time on every process, from putting on gowns, to cleaning surfaces. The best prevention for MRSA is to keep things clean, from hands to clothing. This gave James an idea: What if something that MRSA and its ilk couldn’t infect cleaned the hospital doorknobs countertops, and bed railings? Maybe something like a small drone. After working out the creative and technical details with fellow scientist and master inventor Cliff Pickover, as well as colleagues from IBM’s lab in Melbourne, Tim Lynar, a master inventor in workplace safety, and John Wagner, a mathematician, they filed and were issued patent 9,447,448: Drone-based microbial analysis system. Protecting us from what we can’t see More than 1 million drones were sold in the US last year. They’ve gone from military tool, to public use (and acceptance) almost as fast as an infection can spread in a hospital. James envisions small drones swooping into hospital rooms to scan, analyze, and if necessary clean, surfaces before a patient arrives. “Drones are well suited to accessing places that cleaning personnel would have difficulty reaching and even remembering to clean, like the top of cabinets, or within confined spaces,” said Cliff. “And with cognitive capabilities, our drone system could learn what surfaces and surface varieties are more prone to build up of particular microbes, so as to suggest and carry out sterilization procedures and to make predictions.” A drone scanning a hospital room for microbes. Two of every 100 people carry MRSA. And spread of infection is easy, even for those trained in using proper equipment and procedures. An autonomous drone with cognitive technology could recognize and confirm MRSA, down to the microbe, and analyze when and where it may have spread – as well as alert hospital staff. By collecting microbe samples through a drone, or fleet of drones, hospitals could ensure safe conditions for their patients. The decontamination may include, but is not limited to, dispatching a cleaning crew to wipe down and disinfect the positively contaminated area and/or dispatching one or more decontamination drones to the positively contaminated area. The decontamination drones may seek out and identify areas that correspond to the specimen and can disinfect the contaminated areas using various methods including, but not limited to, spraying disinfectant and/or exposing the contaminated area to ultraviolet (UV) light or other types of energy known to kill microbes. “This patent is about protecting hospitals, manufacturing plants, even farms from what we can’t see, by using new methods for collecting and analyzing data. An integrated system could use a number of ‘hive minded’ autonomous drones to ‘see’ microbes in an environment to analyze, treat, and predict their spread,” said James. Hospital patients and workers wouldn’t be the only ones to benefit from a device that can’t get sick. Drones could monitor and map bacterial infections just about anywhere. 
For example, in the US one in six people are affected by food-borne diseases each year. That results in 128,000 hospitalizations, 3,000 deaths, and $9 billion in medical costs. What if a smart drone could spot offending bacteria sooner – at the grocery store, the processing plant, the farm? James, Cliff, Tim, and John’s patent 9,447,448 establishes the blueprint.
Drones analyzing the outdoors for microbes.
IBM Patent Leadership
“Drone-based microbial analysis system” was just one of 8,088 total patents IBM received in 2016. The company’s patent output covers a diverse range of inventions in artificial intelligence and cognitive computing, cognitive health, cloud, cybersecurity and other strategic growth areas. Read more here.
<urn:uuid:df7db26a-088c-42c7-b0ae-c509bb6b3e32>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2017/01/drones-to-reduce-outbreaks/?winzoom=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00174.warc.gz
en
0.941071
922
2.640625
3
Laurent - stock.adobe.com From Elon Musk claiming that the unchecked growth of artificial intelligence (AI) could spawn “an immortal dictator”, to the news that a major bank has started using the technology to detect money laundering, AI is seldom out of the headlines. Whether or not AI – the science and engineering of machines that act intelligently – will herald a golden age of leisure, or spell the end of humanity, remains to be seen. What is clear, however, is that businesses globally are already embracing the technology to improve how they work. The possibilities AI presents are vast and its current applications are similarly impressive: it can steer driverless cars; it can read and extract key information from thousands of legal contracts in minutes; and it can review MRI and PET scans and identify malignant tumours with greater accuracy than human doctors. But AI is not without its risks, be it poor performance, misuse or reliance on bad data. Ultimately, computers can make mistakes, and decisions based on flawed, non-human advice can be costly for a business. Even F1 driver Lewis Hamilton discovered this when, during the Australian Grand Prix, a software error and incorrect computer data prompted him not to open up a greater lead over rival Sebastian Vettel, who consequently won the race. Lewis’ experience is instructive: mistrust in AI and concerns over a reliance on bad data are two key worries for businesses, but, when planning to use AI, how can businesses avoid a pile-up in the boardroom? AI is beset by the “black box” problem: many of its processes lack transparency and can’t be easily understood by humans. For example, the developers of AlphaGo, Google DeepMind’s system, could not explain why their system made certain complicated moves in beating the human world champion of board game Go. If we can’t easily comprehend AI’s conclusions, how can we be sure that automated processes are playing fair with their decision-making? Of course, this makes it difficult to accurately assess risk, but businesses still must consider the environment in which the technology is being used: systems running critical infrastructure such as nuclear power stations must set the highest bar for what is considered safe. Before incorporating AI, businesses may need to convince a regulator, perhaps by using software to monitor the technology – algorithmic auditors that will hunt for undue bias and discrimination. This will likely impact performance, since the system will divert processing power to self-analysis, but it could mean the difference between the system getting rejected or approved for commission. Outcomes achieved by an AI system will only ever be as good as the quality of data on which they’re based. There are many variables to determine the quality of inputted data: are the data sets “big” enough, is “real-world” data being used, is the data corrupt, biased or discriminatory? With so much potential uncertainty, businesses should take every effort to minimise the risks. Where data is sourced from a third party, contracts should require transparency around lineage, acquisition methods, and model assumptions, (both initially and on an ongoing basis where the data set is dynamic), and there should be mandated security procedures around the data, to prevent loss, tampering and the introduction of malware – all reinforced by comprehensive rights to audit, seek injunctive relief and terminate. 
Finally, common sense should apply – businesses should not rely too heavily on a limited number of data points and support big data analysis with other decision-making tools. The corollary of these concerns is to give tremendous power to those who own large repositories of accurate personal data and we therefore expect the issue to become a significant focus for regulatory and contractual protection in the coming years. Isaac Asimov, the famous science fiction writer, once laid down a series of rules to protect humanity from AI. Perhaps it is time businesses did the same. After all, we can’t know the future, but we can prepare for it. And with AI, the future is now. Tim Wright, is a partner at Pillsbury Winthrop Shaw Pittman. Antony Bott, global sourcing consultant at Pillsbury Winthrop Shaw Pittman contributed to this article.
<urn:uuid:a6233274-77c4-44d4-a32f-614b294cad97>
CC-MAIN-2022-40
https://www.computerweekly.com/opinion/AI-black-boxes-and-the-boardroom
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00174.warc.gz
en
0.944177
880
2.640625
3
Nemrut Dağı, or Mount Nemrut, is a 2134 meter tall mountain in eastern Turkey, near Malatya, Adıyaman, and Kahta. Nemrut Dağı is topped by a strange collection of huge statues put there around 62 BC by Antiochos I Theos of Commagene, a megalomanical king. The collection of statues is a hierothesion, representing the king and his relatives, the gods of all the surrounding civilizations. Antiochus created a royal cult that could worship him after his death. It was based on the Greek form of the Persian religion Zoroastrianism. Several Greek inscriptions survive explaining Antiochus's religion and why he created it. As for the mountain's name, "Nemrut" is much more recent and non-historical. It comes from an Armenian legend of the middle ages, in which the Armenian hero Hayk the Great defeated the Biblical king Nimrod. The Armenians equate Nimrod with Baal or Bel, a title given to several gods in Mesopotamian religions. The Kingdom of Commagene The Kingdom of Commagene was an Armenian state that existed as an independent kingdom from 163 BC until 17 AD. It then alternated between being an independent kingdom and a Roman province until 72 AD, when Emperor Vaspasian made it part of the Roman Empire. It was a buffer state, between Armenia, Syria, Parthia, and Roman territory, with a culture that was a mix of the surrounding nations. Its kings claimed to be descendants of Darius the Great, Darius I of Persia. It doesn't appear at all on the first map below, showing Roman power in Asia Minor in 90 BC. It should be where the tip of Cappadocia extends into Greater Armenia. It's often described as lying between the Euphrates River and the Taurus Mountains. Its name appears on the second map, indicating an unbordered region where Cappadocia, Syria, Mesopotamia, and Armenia come together. Commagene was first mentioned as small Syro-Hittite kingdom. The Assyrian texts called it Kummuhu and described it as an ally of Assyria. Sargon II annexed it as a province of Assyria in 708 BC. Alexander the Great conquered the area in the 300s BC. Commagene became part of the Hellenistic Seleucid Empire when Alexander's short-lived empire broke up. Commagene became a state and province in the Seleucid Empire in 163 BC, just as that empire was coming apart. A year later its satrap, Ptolemy, declared himself the king of the independent Kingdom of Commagene. His descendant, Mithridates I Callinicus, embraced Greek culture and married a Syrian princess, thus claiming connections to both Alexander the Great and the kings of the Persian Empire. King Antiochus I Mithridates's son and successor was Antiochus I Commagene. He ruled 70–38 BC. Antiochus was related to the Diadochi. That term was the lowest of official rank titles in Hellenistic times, but in the 19th century historians started using it to refer to the successors of Alexander the Great. Because of all the intermarriage in the families of both his father and mother, Antiochus was the direct descendant of five Diadochi — Seleucus I Nicator of the Seleucid Empire, Ptolemy I Soter of Egypt, Antigonus I Monophthalmus of Macedonia and Asia, Antipater the Macedonian regent, and Lysimachus of Thrace. He then married Princess Isias Philostorgos, daughter of King Ariobarzanes I of Cappadocia. Antiochus was just getting started. He took on the name of Άντίοχος ὀ Θεὸς Δίκαιος Έπιφανὴς Φιλορωμαῖος Φιλέλλην, or Antiochos I Theos Dikaios Epiphanes Philorhomaios Philhellen, or "Antiochus, a Just, Eminent God, Friend of Romans and Friend of Greeks". 
An eminent god, but a just and friendly one. Sort of the golden retriever of west Asian deities. He set up a royal cult so he could be worshipped after his death. It was based on the Greek form of Zoroastrianism, itself a Persian religion. He was to be entombed in a high and holy place. His tomb should be far from the people and close to the gods, of whom he was one. The gods of Commagene were based on the deities from the surrounding Greek, Armenian, and Iranian civilizations. Some of them blended aspects of similar deities from different religions. - Vahagn / Artagnes / Heracles / Ares - Oromasdes / Aramzd / Zeus / Ahura Mazda - Bakht / Tyche - Mihr / Mithras / Apollo / Helios / Hermes Mount Nemrut Summit Complex Cults of holy mountains were common among the Late Bronze Age Hittites and their Iron Age descendents. Local inscriptions in Luwian speak of kings named Suppiluliuma and Hattusili, lords of the land of Kummaha around 800 BC, who worshiped a sacred mountain named Hurtula. Kummaha was an earlier name for Commagene, Antiochus's kingdom, and Hurtula may have been Mount Nemrut, the tallest peak in Kummaha/Commagene. The complex at the peak of Nemrut Dağı was created by cutting away the peak, leaving a large horizontal platform. Then a tumulus was built, some 49 meters tall and 152 meters in diameter. Presumably there's a tomb under the tumulus. Status and carved slabs were placed on the remaining horizontal platform, which forms terraces around the tumulus. The collection of statues is a hierothesion. It represents the king and his relatives, who are (of course) the deities of all the surrounding civilizations. Despite the impressive level of megalomania, Antiochus intended for his religion to provide happiness and salvation to his followers. There were two feast days for Antiochus every year. His birthday, celebrated on the 16th day of the month of Audnayos, and the anniversary of his coronation, celebrated on the 10th day of the month of Loios. Priests were appointed, they and their descendants were to conduct the celebrations in perpetuity. The priests dressed in traditional Persian robes and placed gold crowns on the statues of Antiochus and his "relatives". They then offered incense, herbs, and sacrifices on altars in front of every image. All the people were invited to banquets celebrating the deceased king. Antiochus had decreed that "grudging attitudes" were forbidden, and everyone should enjoy themselves, eat, and drink wine, all while listening to sacred music performed by temple musicians. The construction and staff were funded from state properties. All this was not sustainable. The kingdom fell soon after and the site was completely forgetten. It was only rediscovered in the late 1800's when the Germans were surveying for a railroad they were building for Turkey. Mount Nemrut is now a UNESCO World Heritage Site. Visiting Mount Nemrut Today You can visit Nemrut Dağı on trips out of Malatya and Kahta. You drive up the mountain in the day, visit the site in late afternoon and watch sunset, then retire to a hotel near the summit. In the morning you get up early to see sunrise from the summit. Then it's breakfast at the hotel and back down the mountain in the late morning. These pictures show a trip up from the north side, from Malatya. I had arrived on a bus out of Göreme, in Cappadocia, after exploring the fantastic landscape in that area. Make way for the local traffic! We're passing a local shepherd as we make our way up the mountain. 
Below, we have stopped for a break along the mountain road. That's our large van and our group on the bridge. Beyond the van you see a local home. And beyond that, above the thin poplar trees, the lower slopes of Nemrut Dağı climb toward the summit. We are continuing along the road up the mountain. It dropped to a single lane soon after we turned off the highway from Malatya. We're climbing beyond a small mountain village. This was the last settlement on our way up. We have arrived near the summit! The hotel is in sight, and the summit is beyond and to its left. Notice the conical shape of the summit — that's the burial mound or tumulus of Antiochos I Theos Dikaios Epiphanes Philorhomaios Philhellen of Commagene. The German railway engineers discovered the site when they were sighting mountaintops through a surveying transit. And yes, that's snow, and these pictures were taken in early June, right after the roads had opened for the brief summer season. Most people visit Nemrut Dağı in June, July, and August, when it's practical to do so. I'm in front of the hotel with the summit visible beyond it. Yes, it's a little bleak up there, but that's part of the appeal. The hotel itself could use a coat of paint. Mountain winters are rough in eastern Turkey, and it's hard to build and maintain a structure up here. We're walking around the hotel to get loosened up after the long ride. Soon it will be time to hike the rest of the way to the summit. On to the Summit The statues have lost their heads over the years. Eastern Turkey is geologically active and there are many earthquakes in the region. Originally the statues were seated in a row, flanked by a lion and an eagle on each side. The conical summit tumulus is 49 meters tall and 152 meters in diameter. Art historians point out that the statues have Greek facial features, but Persian clothing and hairstyles. The kingdom of Commagene was in the mountains between ancient Greek, Persian, and Armenian civilizations, and so their art borrowed from all of them. The first picture below shows Heracles / Vahagn / Artagnes in the foreground; then either Antiochos himself or else Apollo / Mithra / Helios / Hermes in the Phrygian cap to the left, and a blend of the Greek goddess Tyche and the all-providing goddess Commagene to the right. There are spectacular views over wide areas of eastern Turkey, including the areas where both the Tigris and Euphrates rivers begin. There are bas-relief carvings, thought to have formed a large frieze. They show ancestors of Antiochus, both real and imaginary. One of the bas-relief carvings shows an alignment of stars and the planets Jupiter, Mercury and Mars on 7 July 62 BC. This might indicate the time when construction began on this monument. See the paper here for an alternative analysis suggesting that the monument "represents the sky at special moments of the year 49 B.C." Those authors found that the eastern terrace was not aligned with the direction of the sun's rise on the summer solstice, as had been assumed, but was almost 6° away. This would be the direction of the rising of Regulus during the time of Antiochus' reign, which led them to some unusual conjunctions around Regulus during this general period and a possible connection to the design of the hierothesion. 
Antiochus followed a very esoteric form of astrology, and directed a calendrical reform that would link the Commagene calendar, a lunisolar one based on the Babylonian calendar, to the Sothic cycle based on the appearance of Sirius and used by the Egyptians.

A nomos or inscription here reads:

Also new festivals for the worship of the gods and our honors will be celebrated by all the inhabitants of my kingdom. For my body's birthday, Audnayos the 16th, and for my coronation, Loios the 10th, these days have I dedicated to the great daimones' manifestations who guided me during my fortunate reign. [...] I have additionally consecrated two days annually for each festival.

The daimones were the divinities represented in the statues.

The low light at sunset can make it easier to see some of the details on parts of the frieze. Some of these sandstone friezes contain the oldest known images of dexiosis, two figures shaking hands, pushing the known origin of that social gesture further back in time.

J.M. DeBord has written Something Coming, a novel set at Nemrut Dağı and featuring King Antiochos I Theos.

We hang out in the hotel's dining room after dinner. We will get up early the next morning to hike back to the summit in the dark and witness sunrise from the summit of Nemrut Dağı. Then it's back down to the hotel for breakfast, and into the van for the ride back to Malatya.

I stayed at the Otel Sihan in Malatya. It's at Atatürk Caddesi #16, +90-(0)422-321-29-07. At the time rooms were US$ 4/6 for single/double with shower and toilet. The Otel Tehran was nearby and similarly priced, but the Otel Sihan seemed significantly nicer. Arfentour runs the trip up Nemrut Dağı, and it was a great trip! They were at Atatürk Caddesi #40/B, +90-(0)422-325-55-88.
<urn:uuid:d27ef4f2-a2bb-4af3-b26b-530c7beb108f>
CC-MAIN-2022-40
https://cromwell-intl.com/travel/turkey/nemrut-dagi/Index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00174.warc.gz
en
0.961109
2,949
3.296875
3
In IoT Security – Why We Need to be Securing the Internet of Things, we saw that security is an absolutely critical component of any IoT system. Without proper security, vulnerable devices can threaten the privacy and safety of consumers, businesses, and governments alike. So why are Internet of Things security issues so prevalent? And how do consumers, businesses, and governments address these problems?

Internet of Things Security Issues

The Internet of Things is powerful because it leverages diverse networks of sensors to generate huge amounts of data, using that data to perform actions and create extremely useful insights. Since IoT requires mass production of sensors and devices, one issue is that a vulnerability discovered in one of those sensors/devices can mean thousands or millions of affected sensors/devices.

So you may wonder, why are these sensors/devices vulnerable in the first place? As we saw in the previous post with the Mirai Botnet, a large part of the problem is that default passwords and login credentials aren't changed on devices. To build a sensor or device for an IoT application, multiple companies are involved in the production chain. If you're building a router, for example, you might start with a chip manufacturer like Broadcom, Qualcomm, or Marvell. That specialized chip is purchased by an original device manufacturer (ODM) who then builds the rest of the router around the chip. Finally, that device is purchased by a brand-name company who adds a user interface and some other features, boxes up the device, then sells to consumers.

Original vendors use default passwords and login credentials so that the next company in the chain can get up and running with ease. The issue is that the next company in the chain often doesn't change the default passwords or login credentials, leaving them in place when the devices ship to consumers.

Though this seems an obvious problem, another issue is that many companies building these sensors or devices are new to the Internet of Things. Since they're new to IoT and the potential pitfalls of connected devices, these companies are unfamiliar with security and often don't make security a priority. After all, integrating security is more expensive and can slow time to market. As a result, data being sent by these sensors/devices might be unencrypted during communication, meaning it can be intercepted and understood by third parties. Also, these new IoT companies leave sensors/devices in a network together (e.g. in the home) without isolating them from each other. So if one device is compromised, it can mean access to the entire network and all the other devices on that network.

Barriers to Change

All of us are familiar with software updates. We get them on our laptops, our tablets, our phones, etc. But why do we get them? Software is never perfect; as new bugs, issues, or vulnerabilities are discovered, changes need to be made, and we make these changes with over-the-air (meaning via an internet connection) software updates.

So why don't we send software updates to vulnerable IoT devices? Well, first there needs to be someone creating and sending these software updates, but there's a distinct misalignment of incentives for companies to do so. Returning to the above example of the router, the chip manufacturers have small profit margins on their chips, so they're incentivized to do as little engineering as possible and don't have much reason to provide ongoing support.
Instead, the chip manufacturers are busy developing and shipping the next version of their chip. The ODMs won’t have their name on the final product so they’re also incentivized to do as little engineering as possible and don’t have much reason to provide ongoing support. Instead, ODMs are busy upgrading to make sure they can work with the next version of the chip. The last step in the chain (the consumer-facing company) has greater incentive to provide ongoing support since it’s their name on the final product, but they may not be able to address newly discovered vulnerabilities because those vulnerabilities are from an earlier step in the chain. The barrier here is a lack of incentives for every step in the chain to take Internet of Things security issues seriously and to provide ongoing support and updates for old products. However, not all of the blame can be placed on the companies. There are also physical constraints on providing necessary updates. The Internet of Things often relies on sensors and devices that have low processing power and small memory. Processing power and memory are expensive and consume more energy, so IoT applications utilize sensors and devices that have just enough to perform their tasks. But with low processing power and memory, these sensors and devices aren’t sophisticated enough to perform over-the-air updates. By nature, over-the-air updates also require connectivity. Many IoT applications have intermittent or unreliable connectivity which poses a further physical constraint on software updates. And even when updates are possible, there may be other reasons not to push those updates. It’s one thing to have your computer shut off for 15 minutes to install an update, it’s quite another for the safety systems of a nuclear reactor to shut off for 15 minutes. For IoT applications that are life-critical, just a few minutes or seconds might be too long to perform an update. For IoT applications that aren’t life-critical, you might not want to push updates because they’re so energy-intensive. Many IoT applications make use of battery-powered sensors and devices, so frequent updates would significantly decrease their expected lifespan. Speaking of lifespan, another aspect of IoT applications that acts as a barrier is the long life-cycle of products. Whereas you might expect a laptop or phone to last 3–5 years, IoT sensors and devices might need to last 15–20 years. Creating a product that remains secure and invulnerable for that long is basically impossible, so ongoing support and frequent updates are needed. But providing ongoing support and frequent updates for that long is quite expensive and faces the barriers described above. So how should consumers, businesses, and governments address Internet of Things security issues? I’ll answer these questions next week in Security in IoT – How to Address IoT Security Issues! So please follow IoT For All and check back! And in the meantime…
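In the meantime, here is a hedged, minimal sketch of the unencrypted-communication point raised above: a sensor publishing readings over TLS with per-device credentials instead of plain text and factory defaults. It uses the Eclipse Paho MQTT client for Python; the broker address, topic, and credential values are placeholders, not taken from any real deployment.

```python
# Minimal sketch: publish a sensor reading over MQTT with TLS and
# per-device credentials instead of plain text and factory defaults.
import json
import ssl
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # placeholder broker address
BROKER_PORT = 8883                   # MQTT over TLS, not the plain-text 1883
TOPIC = "sensors/livingroom/temperature"

# paho-mqtt 1.x style constructor; 2.x additionally takes a CallbackAPIVersion argument.
client = mqtt.Client(client_id="sensor-001")

# Per-device credentials provisioned at manufacture or first boot,
# never a shared factory default.
client.username_pw_set(username="sensor-001", password="unique-device-secret")

# Require TLS so readings cannot be read or altered in transit.
client.tls_set(cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS_CLIENT)

client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

reading = {"celsius": 21.4, "battery": 0.87}
client.publish(TOPIC, json.dumps(reading), qos=1)

client.loop_stop()
client.disconnect()
```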
<urn:uuid:5aeaba68-113f-4183-b29e-819f0157ecb0>
CC-MAIN-2022-40
https://www.iotforall.com/internet-of-things-security-issues-and-barriers-to-change
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00174.warc.gz
en
0.939082
1,296
2.5625
3
The Web browser is the principal tool by which we all connect to the Internet to consume content. It's also one of the principal tools that attackers use to get at your personal data and compromise your computer. This helps explain a key trend in Web browser development and security features: virtual browsers.

For example, HP just announced its own virtual browser offering, developed with Mozilla in conjunction with its brainchild, Firefox. Google recently lit up the market with its own Chrome beta that offers sandbox features for securing a user's system from malicious code. Check Point's Force Field software is offering secure browsing features to customers that deploy virtualization.

Virtual browsers have been around for a while, but they're getting a closer look as security progresses on the Web. One core argument for using virtual browsers is that a virtual browser is inherently safer than a browser running natively.

Take the HP Firefox browser. It works by way of a virtualization layer from Symantec's Altiris division, which it calls Software Virtualization Services (SVS).

"SVS is software virtualization and it's similar to Virtual Machine technology, but instead of sandboxing an entire operating system what you're doing is sandboxing the application from the rest of the operating system," explained Kev Needham of Mozilla, which develops the Firefox browser that HP is offering with its dc7900 desktops.

"So anything that changes gets put in a box and doesn't affect the underlying system. If anything goes wrong with it, it doesn't have direct access to the operating system," he said.

The idea of separating the browser from the underlying system is an approach used by a number of vendors to mitigate potential risk.

"A browser compromised by malware will not be able to manipulate the file system or registry if it is virtualized because it is not integrated with the rest of the system," Kurt Roemer, chief security strategist at Citrix Systems, told InternetNews.com.

Citrix has been providing remote desktop solutions for a decade, and in recent years has moved squarely into the virtualization space with the acquisition of XenSource. Roemer noted that Citrix XenApp and XenDesktop solutions currently enable users to virtualize their browser.

"By virtualizing applications, we can virtualize any style application, even browsers, and have been doing it for over 10 years," Roemer said.

Roemer also noted that a virtualized browser offers greater consistency by enabling better version control and control over the patch process. This helps corporate IT departments be aware of the state of the browser running on all systems company-wide.

While Citrix offers one approach to virtualization, another is the one utilized by Check Point's Force Field software. With Force Field, there is a virtualization layer between the browser and the operating system that shields one from the other. Check Point has also layered in anti-phishing, anti-spyware, and key logger jamming as part of the solution.

VMware also has a VirtualBrowser appliance based on its VMware Player technology, but it does not include integrated layered security. VMware was not available for comment by press time.

From Check Point's perspective, the move toward browser virtualization by HP, and Google's sandboxing, is a sign of the times.
"This announcement from HP, as well as Google's beta of Chrome, all emphasize the need for Web browser security and the value of using virtualization for Web browsers," John Gable, director of product marketing for Check Point's ZoneAlarm consumer division, told InternetNews.com.

Gable argued that ZoneAlarm ForceField also provides a level of security not included in Chrome or described in the HP announcement.

"Any browser, running natively or otherwise, is vulnerable to browser exploits that by-pass its defenses to attack the PC and access user information," Gable said. "In addition to creating a virtualized browser session, ZoneAlarm ForceField also combines active security layers to provide users protection against phishing attacks, spyware and other forms of malware. This is an especially important component when it comes to securing users from social engineering attacks that trick them into downloading malicious programs, which is also a problem for any browser, native or otherwise."

Google's Chrome browser isn't creating a purely virtual Web browser, but it is creating a sandbox approach that is intended to protect users. The sandbox isolates browser processes and limits privileges in order to limit the scope of any potential risk spreading outside of a particular browser tab. Google admits, however, that the sandbox approach alone isn't enough to protect against all threats.

"Running a browser in an isolated, virtualized environment helps protect the system and local data from some security threats, but there are many security risks — such as phishing or cross-site scripting — that aren't addressed," a Google spokesperson said in an e-mail to InternetNews.com. "Google Chrome's sandbox helps protect you from exploits that may arise from processing untrusted [...]"

This article was first published on InternetNews.com.
<urn:uuid:c01d416a-d4bb-4f34-9c13-5e5d02fcaa20>
CC-MAIN-2022-40
https://www.datamation.com/applications/navigating-virtual-browsers-at-work/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00174.warc.gz
en
0.923643
1,139
2.546875
3
Modern browsers use the Same-Origin Policy (SOP) by default, which means that fetching resources from other origins is not allowed. However, in some situations, such operations are necessary. Cross-Origin Resource Sharing (CORS) was designed to address such situations using HTTP response headers, which include Access-Control-Allow-Origin.

What Is Same-Origin Policy

The origin is a set of common characteristics of a web resource. It usually includes three elements: the scheme (protocol), the hostname (domain/subdomain), and the port. All resources identified by the same scheme, hostname, and port (regardless of path) have the same origin. However, if even one of the three elements is different, not only if the resources have a different domain, modern browsers such as Google Chrome consider the resources as having different origins.

SOP is applied by the browser every time that elements from different origins interact; for example, a page cannot fetch the content of its iframe unless they are of the same origin.

The above is just a very short introduction to this topic. To learn more about Same-Origin Policy (SOP), read our article on this subject.

How Does CORS Relax the SOP

Cross-origin Resource Sharing works as follows. A page loaded from https://example.com (resource 1) requests a resource from css.example.com (resource 2), and the browser adds an Origin header to the request:

GET /formdata/content.json HTTP/1.1
Host: css.example.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36
Origin: https://example.com

- The browser receives the response from the web server and sees that the origins are different.
- Normally, due to SOP, the browser would deny resource 1 access to resource 2 and, for example, return an error such as "Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource".
- If CORS is set up for resource 2, the response from resource 2 includes special HTTP headers, primarily Access-Control-Allow-Origin.

HTTP/1.1 200 OK
Age: 277354
Cache-Control: max-age=604800
Content-Encoding: gzip
Content-Length: 1701
Content-Type: text/html; charset=UTF-8
Date: Mon, 03 Aug 2020 11:43:26 GMT
Vary: Accept-Encoding
X-Cache: HIT
Access-Control-Allow-Origin: https://example.com

- The Access-Control-Allow-Origin header states that resource 1 is allowed to access resource 2.
- The browser processes the request.

Note that the Access-Control-Allow-Origin header may only specify one source origin or it may specify a wildcard. A wildcard makes resource 2 accessible from all origins. This may, for example, make sense for web fonts, which should be accessible cross-domain.

To enable CORS on your web server, consult the enable-cors website, which contains instructions for nginx, Apache, IIS, and many other web servers. We also recommend that you watch an excellent video that shows in practice what the origin is, how SOP works, and how CORS makes cross-origin HTTP requests possible.

The above method is used for simple requests that the web browser considers safe, for example, typical GET requests. In other cases, for example when there are custom headers in the request (any header other than Accept, Accept-Language, Content-Language, Content-Type, DPR, Downlink, Save-Data, Viewport-Width, Width) or when the HTTP method is PUT, DELETE, CONNECT, OPTIONS, TRACE, or PATCH, the browser must first send a preflight request.

The preflight CORS request is an OPTIONS request with CORS headers:

OPTIONS / HTTP/1.1
Host: www.example.com
(...)
Origin: http://example2.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-CUSTOM, Content-Type

The server response to this OPTIONS request includes the methods that are allowed, whether it accepts the headers, how to handle credentials, and how long the preflight result is valid:

HTTP/1.1 204 No Content
(...)
Access-Control-Allow-Origin: http://example2.com
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-CUSTOM, Content-Type
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 86400

After the preflight is complete, regular requests with CORS headers may be sent.

Is It Worth It?

Now that you know how CORS works, you may ask yourself: is it worth implementing this on my server? It would mean altering server configuration to add CORS headers for specific resources or altering server-side code to send such headers in the response. Will this make your web application safer?

Unfortunately, SOP and CORS do not prevent any web attacks. They may be treated as an additional mechanism to prevent cross-site request forgery (CSRF) attacks, but they must not be used in place of anti-CSRF tokens. Also, SOP and CORS are completely useless in preventing any cross-site scripting (XSS) attacks.

However, because all modern browsers support SOP, you may have to implement CORS headers anyway, just to make it possible for your resources to be used from other origins. This, however, is not a security requirement but a functional requirement – without CORS headers some web applications will simply not work.
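The article shows the raw headers; as a hedged illustration of what "altering server-side code" might look like, here is a minimal sketch using Python and Flask. The allowed origin, route, and custom header names are placeholders taken from the examples above; a real deployment would restrict origins, methods, and headers to exactly what the application needs.

```python
# Minimal sketch: adding CORS response headers in a small Flask app.
# The allowed origin, route, and header names are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)
ALLOWED_ORIGIN = "https://example.com"  # placeholder trusted origin

@app.after_request
def add_cors_headers(response):
    # Attach the CORS policy to every response from this app,
    # including responses to OPTIONS preflight requests.
    response.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type, X-CUSTOM"
    response.headers["Access-Control-Max-Age"] = "86400"
    return response

@app.route("/formdata/content.json", methods=["GET", "OPTIONS"])
def content():
    if request.method == "OPTIONS":
        # Preflight request: the headers added above are enough; no body needed.
        return ("", 204)
    return jsonify({"message": "hello from a CORS-enabled endpoint"})

if __name__ == "__main__":
    app.run(port=8000)
```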
<urn:uuid:65bb2302-a0a7-47fd-b1e5-6ff0ec963f66>
CC-MAIN-2022-40
https://www.acunetix.com/blog/web-security-zone/cross-origin-resource-sharing-cors-access-control-allow-origin-header/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00174.warc.gz
en
0.818999
1,394
3.46875
3
Coordinated incident response

Expedited coordinated responses and resolution of incidents with automation

Effective government action when it's needed most

Incident response is one of the most important and most challenging actions of government. From fixing a pothole to providing emergency disaster relief, and everything in between, incident response can mean the difference between safety and catastrophe.

- Respond faster - When incidents can be reported from anywhere, response happens faster.
- Coordinate between responders - Instead of linear notification, all appropriate people are notified in unison, avoiding missed connections.
- Increase visibility - Everyone from dispatch to response is connected and has the visibility they need – all the way to resolution.
<urn:uuid:55b649b1-9dc0-460d-b54f-f5f7fbc8530b>
CC-MAIN-2022-40
https://www.nintex.com/use-case/coordinated-incident-response-automation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00174.warc.gz
en
0.934871
145
3.15625
3
For many years, the dominant protocols to access shared storage have been block and file. Block-based access provides the ability, as the name suggests, to update individual blocks of data on a logical unit number (LUN) or volume, with granularity as low as 512 bytes. File-based protocols access data at the file level; an entire file is locked for access, although the protocol – server message block (SMB) or network file system (NFS) – may allow sub-file updates.

File and block are great for certain classes of data. Block-based access works well with applications such as databases, while file protocols work well with files stored in a hierarchical structure. But other data storage access requirements have arisen, at scale and in the cloud. Here, new de facto protocols are emerging. A key one is Amazon's Simple Storage Service (S3) offering, a highly scalable public cloud storage service that uses objects rather than blocks or files.

An object is simply a piece of data in no specific format: it could be a file, an image, a piece of seismic data or some other kind of unstructured content. Object storage typically stores data, along with metadata that identifies and describes the content. S3 is no exception to this.

S3 was first made available by Amazon in 2006. Today, the system stores tens of trillions of objects. A single object can range from a few kilobytes up to 5TB in size, and objects are arranged into collections called "buckets". Outside of the bucket structure, which is there to provide admin and security multitenancy, the operation of S3 is a flat structure with no equivalent of the file structure hierarchy seen with NFS-based storage, common internet file system (CIFS)-based storage or SMB-based storage.

Objects are stored in and retrieved from S3 using a simple set of commands: PUT to store new objects, GET to retrieve them and DELETE to delete them from the store. Updates are simply PUT requests that either overwrite an existing object or provide a new version of the object, if versioning is enabled.

In S3, objects are referenced by a unique name, chosen by the user. This could be, for example, the name of a file or simply a random series of characters. Other object platforms do not give the user the ability to specify the object name, instead returning an object reference. S3 is more flexible in this way, which makes it easier to use.

How exactly are these commands executed? S3 is accessed using web-based protocols that use standard HTTP(S) and a REST-based application programming interface (API). Representational state transfer (REST) is an architectural style that provides a simple, scalable and reliable way of talking to web-based applications. REST is also stateless, so each request is unique and doesn't require tracking using cookies or other methods employed by complex web-based applications.

With S3, PUT, GET, COPY, DELETE and LIST commands can be coded natively as HTTP requests in which the header of the HTTP call indicates the details of the request and the body of the call is used for the object content itself. More practically, though, S3 can be accessed using a number of software development kits for languages, including Java, .Net, Hypertext Preprocessor (PHP) and Ruby.

Storage tiers in Amazon S3

There are three levels of storage tier available from Amazon, each of which attracts a different price. These are:
- Standard: General S3 capacity, used as the usual end point for data added to S3.
- Standard (Infrequent Access): A version of S3 capacity with lower levels of availability than Standard for data that doesn’t need to be highly available. - Glacier: Long-term archive storage. Each storage tier is priced differently. For example, in the European Union (EU), entry-level capacity for Standard costs $0.03/GB, Standard Infrequent Access costs $0.0125/GB and Glacier costs $0.007/GB. There is also a charge per number of requests made and for the volume of data read from S3. There is no charge for data written into the S3 service. S3 storage behind the scenes Amazon does not provide any technical details on how S3 is implemented, but we do have knowledge of some technical points that help us understand the way S3 operates. Amazon Web Services (AWS) – of which S3 is only one service – operates from 12 geographic regions around the world, with new locations announced every year. These regions are divided into availability zones that consist of one or more datacentres – currently 33 in total. Availability zones provide data resiliency, and with S3 data this is redundantly distributed across multiple zones, with multiple copies of data in each zone. In terms of availability and resiliency, Amazon quotes two figures. Data availability is guaranteed to be 99.99% available for the Standard tier and 99.9% for Standard Infrequent Access. Availability does not apply to Glacier, as the retrieval of data from the system is asynchronous and can take up to four hours. The second figure is for durability. This gives an indication of the risk of losing data within S3. All three storage tiers offer durability of 99.999999999%. Using S3 for your application S3 provides essentially unlimited storage capacity without the need to deploy lots of on-premise infrastructure to manage it. However, there are a few considerations and challenges when using S3 rather than in-house object storage: - Eventual consistency: S3 uses a data consistency model of eventual consistency for updates or deletes to existing objects. This means that if an existing object is overwritten, there is a chance a re-read of that object may return a previous version, as replication of the object has not completed between availability zones in the same region. Additional programming is needed to check for this scenario. - Compliance: Data in S3 is stored in a limited number of countries, which may cause an issue for compliance and regulatory restrictions in some verticals. Currently, there is no UK region, for example, although one is planned. - Security: The security of data in any public cloud service is always a concern. S3 offers multiple levels of security that include use of S3 keys, S3 managed keys or customer-supplied encryption keys. Obviously, using keys from the customer means the customer has to put in place their own key management regime because loss of those keys would effectively render all data stored useless. - Locking: S3 provides no capability to serialise access to data. The user application is responsible for ensuring that multiple PUT requests for the same object do not clash with each other. This requires additional programming in environments that have frequent object updates (for example, a “read, modify, write” process). - Cost: The cost of using S3 can be significant when data access requirements are taken into consideration. Any data read out of S3 attracts a charge, although this is not the case if S3 is accessed by other web services from Amazon, such as Elastic Compute Cloud (EC2). 
Also, customers may have to invest in additional network capacity to reduce the risk of bottlenecks between their own datacentre and those of AWS, depending on how applications access S3 storage.

Amazon is in a strong position with S3, and most object storage software suppliers have chosen to adopt the S3 API as an unofficial de facto standard. This allows applications to be easily amended, with little or no modification, to use on-premise or cloud-based storage. Many of these suppliers hope they can add value on top of S3 and stay competitive in the market.

Object storage is a rapidly growing part of the IT industry and, as users get to grips with a slightly different programming paradigm, we are likely to see significant growth in this part of the storage ecosystem.

Read more about object and cloud storage
- Object storage is a rising star in data storage, especially for cloud and web use. But what are the pros and cons of cloud object storage or building in-house?
- All but one of the big six storage suppliers have object storage products that target public and private cloud environments and/or archiving use cases.
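To make the PUT/GET/DELETE object model described earlier concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The bucket and key names are placeholders, and the snippet assumes AWS credentials are already configured in the environment.

```python
# Minimal sketch of the S3 object model: PUT, GET, LIST and DELETE a single object.
# Bucket name and object key are placeholders for illustration.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"          # must already exist and be accessible
KEY = "formdata/content.json"      # object names are chosen by the user

# PUT: store a new object (or overwrite / create a new version of it).
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b'{"hello": "world"}')

# GET: retrieve the object and read its payload.
response = s3.get_object(Bucket=BUCKET, Key=KEY)
payload = response["Body"].read()
print(payload.decode("utf-8"))

# LIST: enumerate objects under a prefix within the bucket.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="formdata/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])

# DELETE: remove the object from the store.
s3.delete_object(Bucket=BUCKET, Key=KEY)
```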
<urn:uuid:e5982a4f-16c7-4033-ab6c-8325abdfe4a1>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/Amazon-S3-storage-101-Object-storage-in-the-cloud
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00174.warc.gz
en
0.921362
1,752
3.4375
3
What is Data Security Management?

The global pandemic has pushed more people to move to the Internet (digital transformation) to work, communicate, have fun, watch movies and TV, socialize, shop, exchange information and more. The digital movement of millions of people has enabled industries to transform how they market or deliver services. Unfortunately, this migration has also opened the doors to hackers and increased the instances of data being compromised. Ponemon Institute estimates that the average cost of a data breach is a record-high $4.96 million per incident in terms of impact and resolution. Data security now ranks as the most important aspect of Data Management.

Digital transformation encouraged capturing, in tremendous detail, everything that an individual or application does while connected to the Internet. People working from home, using various devices that fall outside of corporate oversight or monitoring tools, further complicate sound data security management. More startling is how easily a personal device can infect an entire organization with a computer virus or open the door to a hacker. Government regulations for data control, such as the European General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA), provide guidance and fines to force organizations to improve data security management. Still, there is little regulation of personal security management.

What is data security management?

There are many definitions of data security management, and data security solutions abound. Every organization must clearly define and communicate the data security program and data security services it offers, as these will differ slightly from place to place. In general, data security management is:
- The practice of ensuring that data, no matter its form, is protected while in your possession and use from unauthorized access or corruption.
- The blending of both digital (cyber) and physical processes to protect data.
- The monitoring of data acquisition, use, storage, retrieval, and deletion such that data is not corrupted at any point in its lineage.
- The implementation of technology defenses that prevent data loss from internal malicious actions or hacking.
- Encouraging applications and services developers to test against data security standards to improve data leak prevention.
- The policies that train and govern individuals on the importance of data security and how best to protect themselves and the business.
- The security of data exchanged with external applications or services.
- Taking advantage of the use of encrypted cloud storage or encrypted cloud networks to secure data transfers and sharing.
- The management of data center security, even if you benefit from cloud services, to ensure that your most precious non-people resource is safe.

Data security management practices are not just about sensitive or business-critical information. Data security management practices protect you and your organization from unintentional mistakes or hackers corrupting or stealing your precious resources.
As organizations gather and store more information from digital networks, big data security challenges and cloud data security challenges increase. To avoid big data security threats and concerns, everyone in your business must be aware of big data security and privacy problems; such as: - Security practices are deemed to conflict with agile development methods. - Inadequate knowledge of how data flows into and through an organization. - Organizations do not know why data has been gathered or created or how to maintain the data to be compliant. - Misuse of email or other forms of social media or digital communication to exchange information (Dropbox or Zoom data sharing, for example). - Lack of understanding of who has access to what and why and keeping that knowledge current as roles change or people leave. - Allowing data to become forgotten (old or stale), representing an opportunity for a data breach or fines. - Immature security and network protection (lack of proper encryption, firewalls, and vulnerability scans). - Postponing updates to applications that include security patches. - Lack of ability to monitor how staff working from home interact or share data. - Allowing for the creation of unmanaged applications instead of improving applications to ensure ease of data use and exchange for employee tasks. - Home networks or mobile device sharing or accessing corporate networks from public hotspots without the use of a secure VPN. Data Security Best Practices The challenges are daunting, but below are recommended data security best practices that organizations can adopt and adapt to prevent data breaches, data loss, data leakage or avoid cybersecurity threats: - A complete audit of data to map the lifecycle and lineage of data in your organization from acquisition to deletion. Once audited, unsuitable data must be cleansed or deleted to avoid security or compliance incidents. - Data classification best practices are to maintain a catalog of data using Master Data Management and metadata. Metadata, which acts like the cards in a library, helps applications or services know which data to use and how to secure it properly during or after usage. This underpins database security best practices. - Restricting access to data according to its use and sensitivity - Accessing data only via approved APIs or applications - A zero-trust mentality should be used to assess all profiles that grant authorization to data by asking the question, does this role or service still require access and if so, why? - All data maintained in physical devices residing in your data center or a cloud must have the same stringent security practices that apply to software and cloud services, including monitoring, alerting and reporting any access attempt, regardless of the reason. - Data encryption best practices, while not foolproof, are one of the safest methods of ensuring data security, especially when combined with encrypting the data transfer. - Cyber security threats are constantly occurring, and the methods of acquiring your data are evolving to overcome your security defenses. Performing continuous vulnerability scans and testing will help keep you safe. - Database security best practices entail managing the schemas to meet the needs of the applications while also having a strong link to access management best practices and controls. - Digital data will make use of cloud analytics for research and decision making. 
Power BI security best practices will scan and validate data prior to use to maintain integrity and classification of data. - Training and enforcement of data security best practices with staff or using guides from security vendors can help show how easy it is to succumb to a hacker’s efforts. - Data Masking (hiding original data with modified content)) for the development of services is a DevSecOps best practice, especially for personally identifiable information (PII). - Benefit from external experts who test your networks, applications, or cloud services for data security concerns. Some firms have even hired hackers to perform these same activities. - Have an incident management plan of what is precisely supposed to occur and how the breach is to be communicated internally and externally (especially to customers and regulatory agencies). - Have a tested data recovery plan for those instances when data is inadvertently deleted or corrupted. - Ensure applications or services can use any data backup, even ones created years before, to meet regulatory mandates, as software and hardware changes may negate their use. Accessibility of data in a secure fashion and reintroducing archived data is part of sound practices to prevent a data breach. - Monitor to ensure that data is deleted when no longer required or becomes outdated. - Train customers in data security management practices to build their trust that you consider data security to be important. - Have a robust password management policy: minimum characters, how they can be derived, discourage previously used passwords, or benefit from a password management tool. - Introduce multi-factor authentication, fingerprint or facial recognition to protect services and applications better. - Validate your processes against data center security best practices and data management security best practice standards such as ISO/IEC27001 or NIST Data security. - Log all tests and keep them for audit and compliance teams. - For mobile or digital devices: - Regularly update applications and security. - Install spyware or cookie blockers as appropriate. - Remove old applications. - Install mobile device blocking tools in case the device is lost or stolen. Types of Data Security Management The above represents a portion of the data security management best practices that an organization can introduce. Understanding the types of data security practices and threats can help avoid the types of data breaches that you see in the news press, such as: - Security knowledge and language: create a data security language guide to facilitate training and understanding - Malware: Malicious software designed to gain access and cause harm. - Computer Virus: software that changes the way computers run such that they can be controlled externally or made to perform activities, sometimes without your knowledge. Computer viruses, like any virus, can spread from one computer to another and are difficult to remove once introduced without considerable effort. - Types of encryption keys and best practices: the transformation of data into unreadable formats via an algorithm will aid in designing services that resist security attacks. Introducing the various types of data encryption relies on skilled data security measures via trained staff or trusted supplier partners. Remember that keys can be lost, and always have a deputy for those occasions when the key owner is not reachable. 
- Organizational data security management: assigning roles for data security management to data stewards, data administrators, product owners, developers, external data or software providers, cloud and managed service suppliers are just the beginning of this aspect. - Data deletion, erasure, and destruction: the use of software to completely eradicate data from a storage device (digital or physical) under the auspices of the data owner, data steward or governance team. - There are several types of data breaches, but best practices such as Data Resiliency will implement the capability to restore and recover from physical or software disruptive events in a safe and timely manner. Difference Between Data Security and Data Privacy? Data security helps to maintain the safety of data while your organization is using or storing it. Data Privacy is the core practice of ensuring that personal or organizational information is maintained and used as stipulated by regulatory agencies or internal policy. Data security and privacy are similar in terms of data management overall concepts, but bear in mind that the main difference between data security and data privacy is that data privacy ensures information confidentiality, while data security keeps your data and organization safe. Personal or organizational information is a valuable and tradable commodity, so your data privacy solutions must be able to: - Prove that you obtained the information legally and with the consent of the other party. - Show how your practices blend cyber security and privacy practices such as those from NIST. - Explain how roles for data access are derived, monitored and alerted when a breach occurs. - Illustrate how big data security and privacy (manual data, cyber data, application data) is maintained and secured against loss, leakage and incidents. - Fulfil requests as required by governmental regulations for the deletion of privacy data if requested by the owner (GDPR). - Alert and respond to all those impacted by a data incident promptly. - Backup and restore information to mitigate threats to data. - That data will be backed up or stored safely and only for as long as needed. Regular monitoring of compliance, updating privacy policies as governmental or best practices change, and ensuring that data security software, training and processes underpin data privacy is an organizational commitment owned by senior leadership. 5 Ways A Hybrid Integration Platform Can Make Your Data More Secure How to Manage Data Security Threats To manage data security appropriately, consider these three concepts: - Confidentiality – to comply with privacy rules, data is classified as public or confidential with appropriate safeguards for application and individual accessibility. - Integrity: created or acquired information must be validated as being required for business use before being used by a business task. - Availability: data must be easily accessible and ready for use as required, including recoverability. Security threats try and undermine these Confidentiality, Integrity and Availability (CIA) characteristics. Examples are: - Attacking with password guessing software. - Vishing (voice), Smishing (text) or Phishing (email) attacks: hackers represent themselves as a member of your organization, a customer, a regulatory agent or some valid individual in an attempt to steal information that would compromise data security or data privacy. - As seen in the Verizon data breaches report and following its advice. 
- The purchasing of data that hackers have mined from any of the above methods to defraud your organization or customers. - Unauthorized downloads of data by staff, especially to personal devices. - Data corrupted or sold by staff that feel they were dismissed or made redundant without cause, often before employee role management can remove them from access lists. - Applications or external partners can inadvertently leave open entry to your organization by their poorly designed services. Constantly monitoring all points of entry will alert you when a mitigating action needs to occur. - Mistakes happen: copying files to non-protected devices, sharing passwords in an emergency, sending data to the wrong person, not following rules are all types of errors that software and training can help limit their impact. - Lack of data backups that can recover services on time. - Data comes in all sorts of formats, and each design presents an opportunity to bypass security defenses. Data monitoring and alerting on your networks or within your applications will help shore up any deficiencies. - Malware or computer viruses: software that compromises or attempts to control your applications or infrastructure. Security scanning and blocking software will mitigate the success of these attacks. - Encryption key management to safeguard access and usage of information regardless of where it is stored or transferred. CIA links to every aspect of your data security and business continuity management policies, practices, software and hardware. Senior management must help maintain the maturity of data security management at all levels of your business and contractually obligate business partners to do the same. Assessing your practices against those of best practices, such as those in the Verizon Data Breach report, will ensure trust and compliance in big data security management. Data Security Management Tools Data security management: - Planning and governance tools to monitor data through its lifecycle. - Knowing what data you have and why you have it. - Validation of data integrity. - Understanding where information is stored and under what circumstances. - Ensuring that data is deleted when and as required. Due to the complexity and abundance of data used or housed by any organization, these tasks require the assistance of software to catalog, track, monitor, control and alert data access and use. The characteristics of software tools to consider are: - All applications and services will have their own data needs, and each must be cataloged and linked to a Data Management System for monitoring and governance. - Backup and recovery software underpin data loss prevention tools used by developers or SaaS partners. - Data Masking of sensitive information, especially for product development. - Data deletion and erasure software with logging and confirmation. - Security network monitoring via firewalls. - Data security systems for physical devices such as alarms or locked doors. - Encryption, tokens and security keys software for digital services or physical access. - Multi-factor (two-factor) authentication, facial recognition, fingerprint access software. - One-time password software for emergency access whereby the password is immediately disabled upon use. - Compliance monitoring, alerting and reporting. - Cloud Access Security Broker: an extra level of software monitoring and control for data stored in a cloud. 
- Using Big Data security to manage Hadoop open-source platforms, especially for Data Management Platforms containing marketing information. Introducing data security loss prevention software for specialized services such as payments, mobile apps, web browsers or data analytics.
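The practices above repeatedly mention encryption of data at rest and in transit. As a hedged, minimal illustration of the idea (not a description of how any particular product does it), here is a sketch of symmetrically encrypting a record before storage, using the Fernet recipe from the Python cryptography package. Key management (generating, storing, and rotating the key) is the hard part and is deliberately out of scope here.

```python
# Minimal sketch: encrypt a record before it is written to storage and
# decrypt it after retrieval, using the Fernet recipe from the
# "cryptography" package. The key shown is generated on the fly;
# a real system would fetch it from a key management service instead.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice: load from a KMS/HSM, never hard-code
cipher = Fernet(key)

record = b'{"customer_id": 42, "email": "user@example.com"}'

token = cipher.encrypt(record)  # ciphertext, safe to store or transmit
print("stored ciphertext:", token[:40], b"...")

restored = cipher.decrypt(token)  # raises InvalidToken if the data was tampered with
assert restored == record
```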
<urn:uuid:42944f22-484a-4468-a85b-1d27100ceada>
CC-MAIN-2022-40
https://www.actian.com/what-is-data-security-management/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00375.warc.gz
en
0.907173
3,288
2.890625
3
Metaverse security: Unprecedented challenges Metaverse is a world where mixed reality, augmented reality, and virtual reality meet. The world could unprecedentedly change how people interact, play, and conduct business with each other. While everyone thinks about the ultimate potential, the metaverse security does not deserve any less attention. The unprecedented integration of systems Metaverse is based on a completely new integration of different systems and components. It combines a lot of emerging technologies such as virtual reality (VR), augmented reality (AR), artificial intelligence (AI) etc. These technologies realize the connection of two worlds: virtual and real. Such a level of integration also opens new attack surfaces. Metaverse subsequently inherits the security and privacy challenges of these new technologies. Further, vulnerable devices that act as interfaces to connect reality to virtual worlds, such as augmented reality devices or virtual reality devices, could become gateways for malicious intentions or data leakages. Besides, a variety of metaverse security breaches and privacy leakages could happen given the massive amount of collected data, user profiling activities, and unfair decisions of machine learning algorithms. For example, without proper comprehensive security mechanisms such as authentication and authorization methods, attackers could sneak into a virtual environment and act as “the man in the middle” without being seen or noticed. They could alter the environment, eavesdrop on the conversation, and perform certain actions that threaten the end-users, the business, or the games. In the metaverse, users must be able to easily identify each other when conducting business or sharing an environment. They could then trust a virtual person as much as they do in the real world. Existing concepts of digital identity often stay within a closed (eco) system (i.e., each system has its own set of identities), while new technologies such as non-fungible tokens (NFT) suggest a new shift in (mixed) digital identity. Unfortunately, non-fungible token is still in its early day, and different attacks have already been reported. Malicious actors could steal one’s digital identity by exploiting the weakest link in the chain, i.e., humans. Particularly, they could perform phishing, social engineering attacks, and scams. Without appropriate metaverse identity management solutions, it is impossible for the users to identify and trust each other. Trust and verification are integral parts of the success of the digital identity in the metaverse. Given that metaverse will imitate (to a great extent) the reality, the immense amount of data collected from different sensors (e.g., wearable devices, microphones, heart, user interactions) will be a huge concern for users and lawmakers. Such a massive data collection could easily lead to privacy violations. Hence, data privacy in the metaverse will be the top priority for legal considerations. Transparent data usage must be mandated for related companies. Besides, standards and regulations should be considered to create incentives as well as to actively enforce data protection in products used in the metaverse. The metaverse is new yet approaching fast. Its wide adoption is attracting a variety of security and privacy challenges. It is therefore crucial to implement a proper security and privacy mechanisms together with well-defined policies, standards, and regulations to foster security and privacy right in the early days. 
If we could do it right, we would not have to consider security and privacy as added features as we did with traditional software in the old days.
<urn:uuid:3e330bd5-e974-43b9-bce2-3e7adcecf546>
CC-MAIN-2022-40
https://www.hcltech.com/blogs/metaverse-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00375.warc.gz
en
0.937572
706
2.640625
3
Natural Language Generation, New Software, and AI Natural Language Generation or NLG for short is defined as a subset of Natural Language Processing or NLP that focuses on producing or programming spoken and written language narratives from a data set. In combination with Natural Language Understanding or NLU, NLG represents one of the two primary components of NLP. With this being said, while NLU is predicated on computer reading comprehension, NLG focuses on the ability of computers to write text in response to some form of input data. Moreover, as many NLP software programs struggle to understand the concept of context as it concerns human language and communication, NLG also focuses on deriving meaning from data sets. How does Natural Language Generation work? Natural language Generation functions in accordance with a six-stage process, with each stage focusing on further refining the data that will ultimately be used to create the most fluid, comprehensive, and natural-sounding language or text possible. To this point, the first stage of the NLG process is content analysis. During this stage, a software engineer will begin filtering the data within their data set in order to determine which data should be included in the content that will ultimately be generated. A major aspect of the content analysis stage is identifying the main topics or issues within a source document, as well as the relationship between these topics or issues. The next stage in the NLG development process is data understanding. During the data understanding stage, the data being used will then be further interpreted, with the goal of identifying patterns that can then be used to provide context. Moreover, this stage of the process typically includes the implementation of machine learning algorithms. After the data understanding stage has been completed, the next stage of development is document structuring. During this stage, a specific document plan will be formulated, as well as a narrative structure, in conjunction with the type of data that is being interpreted. Next, a software engineer will move on to the sentence aggregation stage, during which the relevant words, parts of sentences, and sentences will be combined in ways that summarize the topics or issues at hand. Following the sentence aggregation stage, the next stage in the development process will be grammatical structuring. As the name implies, this stage involves implementing grammatical rules that can be used to govern natural sounding text. During the grammatical structure stage, the software program will deduce the syntactical structures for all sentences involved in the process. This information will then be used to ensure that all of these sentences are written in a manner that is grammatically correct. Finally, the last stage of the process, language presentation, involves the final output of the text that has been created, in correspondence with the particular format or template that the software developer has selected. What algorithms are used to create NLG software? Many NLG software programs are created through the implementation of machine learning algorithms, particularly recurrent neural networks. As artificial neural networks are systems of hardware and software that are modeled after the structure and function of the human brain, recurrent neural networks are used to recognize sequential patterns and characteristics within a data set, with the goal of predicting the next likely sequence or scenario. 
As such, a software developer looking to create an NLG software program would use recurrent neural networks to identify the various parts of speech that make up written and verbal language.

Conversely, another technique or methodology that can be used to create NLG software programs is the Markov chain or model. The Markov model is a mathematical model that is used in machine learning and statistics to create and analyze systems that model random choices, such as gambling and the ranking of websites in online web searches. Markov chains begin with an initial sequence, and then randomly generate subsequent sequences on the basis of the prior sequence. The software model will then learn about both the current and previous sequence, and then calculate the probability of the next sequence based on the previous two. In the context of NLG, words, phrases, and sentences will be created by selecting words that are likely to appear together from a statistical standpoint.

Within the umbrella of Natural Language Processing, Natural Language Generation functions on the basis of creating texts that sound as natural and accurate as possible. As anyone who has ever used a popular voice assistant such as Siri or Alexa can attest to, many NLP products and software programs struggle to understand the nuances and complexities of human language. As such, NLG is the branch of the process that allows such programs to interact with human beings in a manner that is not unnatural or off-putting. Through the application of NLG within NLP, software developers continue to make advancements in the larger fields of artificial intelligence and machine learning.
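As a hedged illustration of the Markov-chain idea described above (a toy, not a production NLG system), here is a tiny word-level text generator in Python. It learns which words follow which in a sample corpus and then samples new text from those observed transitions; the sample text is invented purely for demonstration.

```python
# Tiny word-level Markov chain text generator, for illustration only.
# It builds a table of "word -> possible next words" from a sample text,
# then generates new text by repeatedly sampling a likely next word.
import random
from collections import defaultdict

sample_text = (
    "natural language generation produces text from data "
    "natural language understanding reads text and extracts meaning "
    "language generation selects words that are likely to appear together"
)

# Build the transition table: each word maps to the words observed after it.
words = sample_text.split()
transitions = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word: str, length: int = 12) -> str:
    word = start_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:                    # dead end: no observed continuation
            break
        word = random.choice(followers)      # frequency-weighted by repetition
        output.append(word)
    return " ".join(output)

print(generate("language"))
```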
<urn:uuid:8bc508b5-56cb-4099-b885-144057301387>
CC-MAIN-2022-40
https://caseguard.com/articles/natural-language-generation-new-software-and-ai/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00375.warc.gz
en
0.939237
948
3.515625
4
GCHQ has released two research papers authored by Alan Turing, which are believed to have been written while he was at Bletchley Park, engaging in code cracking during World War II. The papers had previously been regarded as too sensitive to make public, but you'll now be able to view them at the National Archives at Kew, although you'll need to make arrangements and bring ID to do so.

The handwritten maths research papers are entitled "Paper on Statistics of Repetitions" and "The Applications of Probability to Cryptography". The first paper is an informal piece where Turing details a process to evaluate the best statistical means of determining whether two cipher messages use the same key. The second piece is a longer affair, and consists of a detailed probability analysis of code cracking problems. It all sounds like pretty heavy stuff. The second paper has been dated at either 1941 or 1942, given that Turing mentions Hitler in it, declaring that the German dictator was 52 years old.

A spokesperson for GCHQ commented: "We are delighted to release these papers showing more of Alan Turing's pioneering research during his time at Bletchley Park. It was this type of research that helped turn the tide of war and it is particularly pleasing that we are able to share these papers during this centenary year."

Alan Turing's centenary celebration will be happening in June, as he was born on the 23rd of the month in 1912. We expect a Google doodle will be along to mark the occasion.
<urn:uuid:598b919c-9a95-4a50-a04f-5407fcb04371>
CC-MAIN-2022-40
https://www.itproportal.com/2012/04/20/gchq-releases-alan-turing-research-papers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00375.warc.gz
en
0.976892
310
2.609375
3
Series: Version 7.1

z/VM – Concepts, System Initialization, and Shutdown
The z/VM Concepts, System Initialization and Shutdown course describes how virtualization, and in particular z/VM, has become more popular in Data Centers and examines the processes used for z/VM start-up and shutdown.

z/VM – Monitoring and Controlling z/VM Operations
The Monitoring and Controlling z/VM Operations course describes the tasks associated with displaying z/VM system status and activity, and management of z/VM resources.

z/VM – Managing Guest Operating Systems
The Managing Guest Operating Systems course describes the types of guests that can be installed under z/VM and the methods used to create, display and manipulate CMS files.

z/VM – Identifying and Resolving z/VM Problems
The Identifying and Resolving z/VM Problems course looks at the tools and methods used to gather information that assists with problem resolution, and discusses how performance issues and general problems are resolved. The processes and utilities used for backup and recovery are also described.

Linux on z Systems Fundamentals
The Linux on z Systems Fundamentals course discusses common Linux distributions for the z systems environment, how Linux is accessed, its operational implementation, and the general monitoring and management of Linux. The Administrator module provides an overview of the tuning, monitoring, and analyzing tasks performed by the Linux Administrator and contains tips for best practice in these areas.
<urn:uuid:b8b34594-7032-462d-b8da-342ca259658a>
CC-MAIN-2022-40
https://interskill.com/series/version-7-1/?noredirect=en-US
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00375.warc.gz
en
0.885283
296
2.59375
3
In the digital era, freedom is inextricably linked to privacy. After a good start, the Internet-enabled, technological revolution we are living through has hit some bumps in the road. We have already lost a lot of control over who and what has access to our data, and there are further threats to our freedom on the horizon. It doesn't have to be that way though, and it is not inevitable that the trend will continue. To celebrate Independence Day we want to draw your attention to five technologies that could improve life, liberty and the pursuit of happiness on the Internet. The technologies are listed in rough order, from the simplest, soonest, and most likely to happen to the most complex, furthest out, and least likely to happen.

DNS encryption plugs a gap that makes it easy to track the websites you visit. The domain name system (DNS) is a distributed address book that lists domain names and their corresponding IP addresses. When you visit a website, your browser sends a request to a DNS resolver, which responds with the IP address of the domain you're visiting. The request is sent in plain text, which is the computer networking equivalent of yelling the names of all the websites you're visiting out loud.

Anyone, or anything, on the same local network as you can see your DNS lookups, as can your ISP, which will happily sell your browsing history to the highest bidder. And any machine-in-the-middle (MitM) attackers between you and the DNS resolver—such as rogue Wi-Fi access points—can also silently change your plain text DNS requests and use them to direct you to malicious websites.

DNS encryption restores your privacy by making it impossible for anything other than the DNS resolver to read and respond to your queries. You still have to trust the resolver you send your requests to, but the eavesdroppers are out in the cold. DNS encryption is new, and still relatively rare, but it is supported natively by modern versions of Windows, macOS, Android, and iOS, as well as a number of different DNS clients, proxies and applications, including the DNS Filtering module for the Malwarebytes Nebula platform. Its ascendancy seems assured.

Passwordless authentication could usher in a world where we no longer rely on passwords, and that could be an enormous, unabashed win for security and peace of mind. The trouble is, that has been true for a very long time indeed, and it hasn't happened yet. There is reason to hope that things are finally about to change though.

Passwords are a great idea in theory that fail horribly in practice. Humans are poorly equipped to create and remember them, and demonstrably poor at building systems that handle them securely. And yet almost every Internet account requires one. The inevitable result is an epidemic of poor passwords and an entire criminal industry preying on them with relentless automated attacks.

For a long time, the successor to the password was widely presumed to be some form of biometric authentication—such as face or fingerprint recognition—but nobody could agree which one. With multiple novel, competing, costly, and incompatible alternatives, passwords remained the clear winner. The solution to that gridlock was FIDO2.

FIDO2 is a specification that uses public-key cryptography for authentication. This allows users to log in to websites without sharing a secret that needs to be secured like a password. There is nothing for a programmer to secure, nothing for an attacker to guess, and nothing that can be stolen in a data breach.
The sensitive encryption work all happens on a device owned by the user, which can be a specialist hardware key, a phone, a laptop, or any other compatible device. FIDO2 doesn't specify what the device is, or how it should be secured, only that a user must make a "gesture" to approve the authentication. This leaves device manufacturers free to use whatever "gesture" works best for them: PIN numbers, swipe patterns, and any and all forms of biometrics.

The end result is a technology that allows you to log in to a website securely using Windows Hello, Apple's Touch ID, and any number of other methods that exist now or could be created in the future. Passwordless authentication is possible today but still extremely rare. However, it took a big step forward in May this year when Google, Microsoft, and Apple made simultaneous, coordinated pledges to increase their adoption of the FIDO2 standard.

Onion networking, the technology behind Tor and the "dark web", has been around for twenty years, so it might seem an odd candidate for an emerging technology that could change everything—but what if that's just because we've been thinking about it the wrong way?

Tor is a network of servers that allows software clients (like web browsers) and services (like websites) to communicate securely and anonymously. Although the software is extremely good at what it does, today it services a narrow niche of users who put privacy and security above all, and it has become strongly associated with ransomware, illegal drug markets, and other forms of unsavoury criminal activity. According to security evangelist Alec Muffett, we are overlooking a very important aspect of this technology though.

Muffett was previously a security engineer at Facebook, where he was responsible for putting the social network on Tor. Speaking to David Ruiz on a recent Malwarebytes Lock and Code podcast, he explained how he sees Tor as "a brand new networking stack for the Internet" that can "guarantee integrity, and privacy, and unblockability of communication." Every Tor address is also the cryptographic public key of the service you want to talk to. For example, the Facebook address is:

Having the public key act as the address provides cryptographic assurance that you are talking to the service you want to talk to, bypassing several layers of the OSI model, and cutting out fundamental Internet vulnerabilities, such as BGP hijacking. We should stop thinking about Tor as just an anonymity tool, says Muffett. It should be attractive to anyone who cares about the integrity of their brand and what it has to say:

"If you are in the position of providing a forum, a messenger service, or news to a mass public ... where your brand name is a really important part of your value proposition, then onion networking is for you, because you can make sure that no one can mess with your traffic."
Alec Muffett, speaking to Lock and Code.

Although mainstream organizations like The New York Times, ProPublica, Facebook, and Twitter have already embraced Tor, having a .onion site is still very much the exception. In all likelihood, it will take something quite dramatic to change that, but that doesn't mean it can't happen. In 2013, Edward Snowden's revelations about pervasive Internet surveillance triggered a huge global effort to make encrypted web traffic the norm, rather than the exception. A similar stimulus today could tip onion networking from its niche into the mainstream.

People may be surprised to see cryptocurrencies appearing in our list.
If cryptotrading sites are naming stadia and buying Super Bowl ads then cryptocurrencies are already mainstream and hardly a technology for the future, surely. Their presence near the bottom of our list tells you that isn't how we see it.

Cryptocurrencies face a number of cyclone-force headwinds, starting with the current, across-the-board price crash. The market cap of the biggest currencies, Bitcoin and Ether, is shrinking fast, and some cryptocurrencies have already disappeared completely; the free flow of venture capital money is likely to dry up; there are issues with scalability, scams, rug pulls, thefts from exchanges, and environmental damage; and the pseudo-anonymity blockchains provide is challenged by our ever-improving capacity to identify patterns in payments.

More importantly, from the perspective of life, liberty, and the pursuit of happiness, almost nobody is using these currencies as actual currencies—nobody is paid in Bitcoin, and nobody is using Ether to buy groceries. Remember, Bitcoin was supposed to be a peer-to-peer electronic cash system, not a vehicle for speculative trading.

So why is it on our list at all? For all the reasons to dislike them or write them off, cryptocurrencies are hard to ignore. At its core, the original cryptocurrency, Bitcoin, was supposed to be a trustless, borderless payment system that was built on top of the Internet.

"What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party."
"Satoshi Nakamoto", from Bitcoin: A Peer-to-Peer Electronic Cash System

It was a vision of what freedom might look like in the digital age. That desire for freedom propelled Bitcoin in its early days, and the attractiveness of a private, peer-to-peer currency is undimmed, even if nobody has managed to actually build one that works yet. The current crash will pass and the strongest ideas and technology will survive. We suspect that Satoshi's original vision will be one of them, even if Bitcoin isn't.
If you had access to homomorphic encryption you wouldn't have to trust anyone you share your data with, whether they are the vendors in your organization's supply chain, or your favorite, data-hungry social network. Almost unbelievably, homomorphic encryption algorithms already exist. The reason you don't have access to their almost magical properties though is that they are prohibitively slow. It currently takes days for them to perform actions that we expect to take seconds. Although slow, these algorithms are already millions of times quicker than they were just a few years ago. And while that rate of improvement will surely decelerate, the processing power of computers is still doubling every few years. At some point in the not-too-distant future, when these two trends meet, it could change how we think about trust and freedom in the digital age completely.
<urn:uuid:1672ebde-487c-4277-83a3-992423305f73>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2022/07/5-pro-freedom-technologies-that-could-change-the-internet
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00375.warc.gz
en
0.955906
2,426
2.609375
3
- 11 February, 2022

Symbolic AI: The key to the thinking machine

Even as many enterprises are just starting to dip their toes into the AI pool with rudimentary machine learning (ML) and deep learning (DL) models, a new form of the technology known as symbolic AI is emerging from the lab that has the potential to upend both the way AI functions and how it relates to its human overseers.

Symbolic AI's adherents say it more closely follows the logic of biological intelligence because it analyzes symbols, not just data, to arrive at more intuitive, knowledge-based conclusions. It's most commonly used in linguistic models such as natural language processing (NLP) and natural language understanding (NLU), but it is quickly finding its way into ML and other types of AI where it can bring much-needed visibility into algorithmic processes.

Because they are bound by rules, however, symbolic algorithms cannot improve themselves over time, even though that ability is, after all, one of the key value propositions that AI brings to the table, says Jans Aasman, CEO of knowledge graph solutions provider Franz Inc. This is why symbolic AI is being integrated into ML, DL, and other forms of rules-free AI to create hybrid environments that provide the best of both worlds: full machine intelligence with logic-based brains that improve with each application.

Read the full article at VentureBeat.
<urn:uuid:516b5df5-ad97-4021-a229-507f4c2df3bc>
CC-MAIN-2022-40
https://allegrograph.com/articles/symbolic-ai-the-key-to-the-thinking-machine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00575.warc.gz
en
0.95293
289
2.546875
3
Researchers have discovered critical security flaws in connected smart plugs which can give attackers access to a full home network — as well as your email account. Craig Young, Security Researcher at Tripwire, commented below.

Craig Young, Security Researcher at Tripwire:

"This is entirely unsurprising to anyone who's been paying attention to the IoT market. Often times these devices do not use authentication at all and when they do it is commonly hardcoded or generated with an insecure algorithm. Product vendors in this space may have expertise when it comes to making hardware but it seems that they lack experience with respect to designing software. IoT security is in its infancy with vendors repeating the many mistakes made by software developers in the 90s. This is quite a serious problem however as more and more devices represent not just an infosec risk but can also present personal safety risks. What would happen to our power grid if millions of Internet connected outlets were compromised and then all triggered to turn on and off devices at the same time?"
<urn:uuid:b4152ccc-b711-418e-91db-2e23c6f5e3d0>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/expert-comments/vulnerable-smart-home-iot-sockets-act-bridge-take-full-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00575.warc.gz
en
0.967954
207
2.546875
3
Object storage is an approach to data storage meant to overcome the limitations that other storage systems have in the face of the voluminous data generated by the growing number of users and devices in the world today. This storage system differs from others by organizing data using metadata rather than a file hierarchy.

Traditional file systems store data in a hierarchy and follow a file path to locate requested data. This file storage process is usually the domain of the application using the data. Object storage, however, maintains a table of file metadata, cataloging many aspects and characteristics of these files. By using metadata as a file organization method, applications are unburdened from locating files, and instead request data from the object storage, which uses the metadata table to locate the appropriate information.

This object storage approach shows its advantage by storing structured and unstructured data as well as the contextual information describing these objects. This shows up in the general case of data lakes, which store massive amounts of data. Object storage can also remove data silos, making diverse datasets more accessible and easier to analyze.

How object storage works

Unlike file storage systems that organize data into a hierarchy, object storage stores data objects in a flat system. These objects are then referenced and retrieved using a Universally Unique Identifier (UUID) or Globally Unique Identifier (GUID), a 128-bit integer large enough to allow for a vast range of unique IDs, which matters because of the large number of objects stored. Also unlike file storage systems, object storage uses metadata linked to data objects via UUIDs to find what users request. Files, on the other hand, retain very little metadata, relying on the hierarchy to locate requested data. The main advantage is that object metadata allows for extensive (effectively unlimited) description of the object; for example, video files can include lists of cast and crew, so a user can locate video files by actor name.

Hierarchical file storage systems run into difficulties when scaled to capacities ranging into the petabytes. At this size, file storage performance degrades considerably, and the common solution is to split data into logical unit numbers (LUNs), or collections of physical or virtual storage devices. While this improves storage performance, it also adds more complexity, until the complexity begins to cause difficulties as well. Object storage instead uses metadata search capabilities to overcome the scaling of data and the ensuing technical challenges that crop up in file storage.

Object storage reliability is further enhanced by the technique of erasure coding. In essence, erasure coding divides data into fragments, or shards, each packed with error correction data, and places each on a different disk. By diversifying the data and packing it with redundant information, it ensures that files can be restored even after multiple disk failures. In short, erasure coding functions as a modern alternative to RAID.

Benefits of object storage

The main benefit of object storage is the near unlimited scalability and storage of massive volumes of unstructured data. Metadata allows these systems to organize unstructured data in a highly searchable format.
- Scalable, modernized infrastructure emancipates cloud consumers from storage limitations
- Efficient storage helps decrease costs and improve ROI
- Future-proof data infrastructure against capital costs
- Secure data and ensure compliance through automated features
- Improve data visibility and accurate reporting

Object storage use cases

Object storage use cases tend towards two general kinds: workloads traditionally fulfilled by file, block, or tape storage devices, and new innovative solutions that programmatically access object storage to derive insights. Traditional applications include backups, archiving, content storage, enterprise file servers, and collaboration. New innovations include mobile, social, IoT, cloud native apps, and AI/ML or cognitive apps.

Active Archiving — Active archiving is a space between deep storage archiving and real-time transparent data access, two goals that tend to conflict. Active archiving is typically configured as a tier architecture using tape storage behind disk storage, and replication. Object storage is used to flatten the tier architecture while adding resilience and simplifying administration.

Backup Repositories — Backup data is a practical concern for many organizations, requiring capacity management, space reclamation, and data replication. These data backup practices tended to use costly storage arrays and tape silos. With object storage, resilience features allow organizations to move away from replication and use low cost storage, with the benefit of a seemingly "endless" pool of storage.

Cognitive and analytic systems — With the advent of AI and machine learning came the rise of cognitive systems that learn and reason based on human, experiential, and environmental interactions. To do this, cognitive systems have been designed to interpret and understand unstructured data, found in images, speech, social media, etc. However, current data storage systems suffer performance issues with the type and scale of data that cognitive systems consume to determine their predictive responses. Object storage provides the metadata architecture that helps to facilitate the performance requirements of these complex and heavy workloads.

Enterprise File Services — Enterprise file services have been a necessary tool for organizations for many decades; however, the dependence on RAID schemes using block storage has steadily introduced complexity in large-scale data protection efforts. Object storage removes this older, siloed approach in favor of a unified back-end, with metadata that allows easy access without locking into one solution, effectively achieving economies of scale.

Internet of Things Data Repository — The Internet of Things refers to the connection of a wide variety of devices that penetrate many aspects of life, each with a unique identifier, to eliminate the human element in gathering data. The amount of data generated by IoT systems tends towards Big Data size, and storing this data requires a more robust system than what block or file storage can support. Object storage supplies the robust storage back-end where data of all varieties can be stored, retrieved, and archived.

When to use object storage

By comparing the key differences between block storage, object storage, and file storage, IT teams can narrow down the types of storage that will help them achieve their business goals.
However, underneath each category there are many options, such as solutions that range from consumer grade block storage to enterprise SAN block storage.

Cost: Block storage: More expensive when volume goes up. Object storage: Less expensive the more volume goes up. File storage: More expensive when volume goes up.

Manageability: Block storage: Moderate manageability, more when configurations extend to SAN types. Object storage: Metadata makes high volume searchability easier. File storage: Hierarchical storage makes smaller volumes highly manageable.

Scalability: Block storage: Suitable for scaling. Object storage: Highly suitable for high volumes. File storage: Not suitable for scaling.

Accessibility: Block storage: Highly accessible for large volumes. Object storage: Highly searchable metadata.

Typical workloads: Block storage: Highly suitable to real-time data transactions and performance use cases. Object storage: Highly expansive data repositories, with less than real-time modifications. File storage: Workstations and smaller database applications, without plans to scale.

Object storage vs. block storage vs. file storage

Object storage can be compared to two other common storage formats, block and file storage. These formats aim to store, organize, and allow access to data in specific ways that benefit certain data applications. For instance, file storage, commonly seen on desktop computers as a file and folder hierarchy, presents information intuitively to users. This intuitive format, though, can hamper operations when data becomes voluminous. Block storage and object storage both help to overcome the scaling of data in their own ways. Block storage does this by "chunking" data into arbitrarily sized data blocks that can be easily managed by software, but provides little data about file contents, leaving that to the application to determine. Object storage decouples the data from the application, using metadata as a file organization method which then allows object stores to span multiple systems, but still be easily located and accessed.
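To make the flat, metadata-driven model described above concrete, here is a minimal in-memory sketch in Python. The class and method names are hypothetical and do not correspond to any vendor's API; a real object store would persist data durably, apply erasure coding, and index metadata at far larger scale.

```python
import uuid

class ObjectStore:
    """Minimal in-memory sketch of a flat object store:
    every object gets a UUID, and a metadata table drives lookup."""

    def __init__(self):
        self._objects = {}    # uuid -> payload bytes
        self._metadata = {}   # uuid -> dict of descriptive metadata

    def put(self, payload, **metadata):
        object_id = str(uuid.uuid4())
        self._objects[object_id] = payload
        self._metadata[object_id] = metadata
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

    def search(self, **criteria):
        """Return the IDs of objects whose metadata matches every criterion."""
        return [
            object_id
            for object_id, meta in self._metadata.items()
            if all(meta.get(key) == value for key, value in criteria.items())
        ]

store = ObjectStore()
store.put(b"<video bytes>", kind="video", actor="Example Actor", year=2021)
store.put(b"<log bytes>", kind="log", source="sensor-17")

# Locate content by what it is, not by where it sits in a folder tree.
matches = store.search(kind="video", actor="Example Actor")
print(matches and store.get(matches[0]))
```

The point of the sketch is that retrieval is driven entirely by descriptive metadata rather than by a path through a directory hierarchy.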
<urn:uuid:1c48dc83-90fb-459a-904a-03bcf20e7fc9>
CC-MAIN-2022-40
https://www.hitachivantara.com/en-anz/insights/faq/what-is-object-storage.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00575.warc.gz
en
0.896237
2,021
3.578125
4
Split IPv6 blocks and networks (either from the Unique Local Address Space or the Global Unicast Address Space) into multiple blocks and networks. Divide IPv6 networks into multiple sub-networks to help you better manage your IP space. One IPv6 block or network can be divided into a power-of-two number of blocks or networks, from 2 up to a maximum of 1024. Before finalizing the split, Address Manager will show you an overview of all the new blocks or networks that will be created. You will also be able to assign names to each of the new objects.

- A size-1 block or network can't be split.
- The maximum block size that can be split is /126.
- A block or network can't be split if the split point falls on a reserved IP address.
- A block or network can't be split if the split point falls in between a DHCP range.

To split an IPv6 block or network:

- Select the IP Space tab. Tabs remember the page you last worked on, so select the tab again to ensure you're on the Configuration information page.
- From the configuration drop-down menu, select a configuration.
- Click the IPv6 tab. In the IPv6 Blocks section, click the FC00::/6 or the 2003::/3 address space.
- Under Address Space, select the block or network that you want to split.
- Click the menu beside the block/network name and select Split.
- Under Options, select the number of blocks or networks you want to create from the Number of Blocks/Networks drop-down menu. You can divide a block into 2, 4, 8, 16, 32, 64, 128, 256, 512, or 1024 blocks or networks.
- Click Continue. The Block/Networks List section opens displaying the number of blocks or networks and their respective IPv6 address ranges.
- Under Block/Networks List, enter a unique name for any of the newly created blocks or networks in the Block/Network Name text field.
  Note: If you entered a name for the IPv6 block or network when you created or edited it, that name will appear in the Block/Network Name text fields with appended sequential numbers to identify each new block. For example, block/networkname-1, block/networkname-2, block/networkname-3, and so forth. If you did not enter a name for the IPv6 block or network when you created or edited it, the Block/Network Name text fields will be blank for all new blocks or networks.
- Click Confirm.
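For readers who want to see the arithmetic behind such a split outside of Address Manager, the sketch below uses Python's standard ipaddress module to divide one block into eight equal child networks. The fd00:1234::/48 prefix and the naming scheme are placeholders, not values taken from the procedure above.

```python
import ipaddress

# Split one IPv6 block into 8 equal child networks (2**3), similar in
# spirit to choosing "8" from the Number of Blocks/Networks menu.
parent = ipaddress.ip_network("fd00:1234::/48")   # example ULA block
children = list(parent.subnets(prefixlen_diff=3))

for index, child in enumerate(children, start=1):
    print(f"block-{index}: {child}")
```

Each child network here is a /51; choosing a different prefixlen_diff value produces 2, 4, 16, and so on, matching the power-of-two splits described above.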
<urn:uuid:ed039adf-23fa-4190-839c-5bb6c15b14c5>
CC-MAIN-2022-40
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Splitting-IPv6-blocks-and-networks/9.2.0
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00575.warc.gz
en
0.777898
538
2.59375
3
Security and compliance are high priorities for most organizations, and attacks on an organization's computer systems take many different forms, such as spoofing, smurfing, and other types of Denial-of-Service (DoS) attacks. These attacks are designed to harm or interrupt the use of operational systems.

System hacking is one of the most important steps performed after acquiring information through techniques such as footprinting, scanning, enumeration, and vulnerability analysis. This information can be used to hack the target system using various hacking techniques and strategies. System hacking helps to identify vulnerabilities and security flaws in the target system and predict the effectiveness of additional security measures in strengthening and protecting information resources and systems from attack.

Overview of System Hacking

In preparation for hacking a system, you must follow a certain methodology. You need to first obtain information during the footprinting, scanning, enumeration, and vulnerability analysis phases, which can be used to exploit the target system. There are four steps in system hacking:

- Gaining Access: Use techniques such as cracking passwords and exploiting vulnerabilities to gain access to the target system
- Escalating Privileges: Exploit known vulnerabilities existing in OSes and software applications to escalate privileges
- Maintaining Access: Maintain high levels of access to perform malicious activities such as executing malicious applications and stealing, hiding, or tampering with sensitive system files
- Clearing Logs: Remain undetected by legitimate system users by wiping out the entries corresponding to malicious activities in the system logs

Step 1: Gain Access to the System

For a professional ethical hacker or pen tester, the first step in system hacking is to gain access to a target system using information obtained and loopholes found in the system's access control mechanism. In this step, you will use various techniques such as password cracking, vulnerability exploitation, and social engineering to gain access to the target system.

Password cracking is the process of recovering passwords from the data transmitted by a computer system or stored in it. It may help a user recover a forgotten or lost password or act as a preventive measure by system administrators to check for easily breakable passwords; however, an attacker can use this process to gain unauthorized system access. Password cracking is one of the crucial stages of system hacking, and hacking often begins with password cracking attempts. A password is a key piece of information necessary to access a system. Consequently, most attackers use password-cracking techniques to gain unauthorized access. An attacker may either crack a password manually by guessing it or use automated tools and techniques such as a dictionary or brute-force method. Most password cracking techniques are successful because of weak or easily guessable passwords.

Vulnerability exploitation involves the execution of multiple complex, interrelated steps to gain access to a remote system. Attackers use discovered vulnerabilities to develop exploits, then deliver and execute the exploits on the remote system.

Step 2: Perform Privilege Escalation to Gain Higher Privileges

As a professional ethical hacker or pen tester, the second step in system hacking is to escalate privileges by using user account passwords obtained in the first step of system hacking.
In privilege escalation, you will attempt to gain system access to the target system, and then try to attain higher-level privileges within that system. In this step, you will use various privilege escalation techniques such as named pipe impersonation, misconfigured service exploitation, pivoting, and relaying to gain higher privileges on the target system.

Privilege escalation is the process of gaining more privileges than were initially acquired. Here, you can take advantage of design flaws, programming errors, bugs, and configuration oversights in the OS and software applications to gain administrative access to the network and its associated applications.

Backdoors are malicious files that contain trojans or other infectious applications that can either halt the current working state of a target machine or even gain partial or complete control over it. Here, you need to build such backdoors to gain remote access to the target system. You can send these backdoors through email, file-sharing web applications, and shared network drives, among other methods, and entice the users to execute them. Once a user executes such an application, you can gain access to their affected machine and perform activities such as keylogging and sensitive data extraction.

Overview of Privilege Escalation

Privileges are a security role assigned to users for specific programs, features, OSes, functions, files, or code. They limit access by type of user. Privilege escalation is required when you want to access system resources that you are not authorized to access. It takes place in two forms: vertical privilege escalation and horizontal privilege escalation.

- Horizontal Privilege Escalation: An unauthorized user tries to access the resources, functions, and other privileges that belong to an authorized user who has similar access permissions
- Vertical Privilege Escalation: An unauthorized user tries to gain access to the resources and functions of a user with higher privileges, such as an application or site administrator

Step 3: Maintain Remote Access and Hide Malicious Activities

As a professional ethical hacker or pen tester, the next step after gaining access and escalating privileges on the target system is to maintain access for further exploitation of the target system. Now, you can remotely execute malicious applications such as keyloggers, spyware, backdoors, and other malicious programs to maintain access to the target system. You can hide malicious programs or files using methods such as rootkits, steganography, and NTFS data streams to maintain access to the target system. Maintaining access will help you identify security flaws in the target system and monitor employees' computer activities to check for any violation of company security policy. This will also help predict the effectiveness of additional security measures in strengthening and protecting information resources and systems from attack.

Overview of Remote Access and Hiding Malicious Activities

Remote Access: Remote code execution techniques are often performed after initially compromising a system, and are used to further expand access to remote systems present on the target network.
Discussed below are some of the remote code execution techniques:

- Exploitation for client execution
- Scheduled task
- Service execution
- Windows Management Instrumentation (WMI)
- Windows Remote Management (WinRM)

Hiding Files: Hiding files is the process of concealing malicious programs using methods such as rootkits, NTFS streams, and steganography techniques to prevent the malicious programs from being detected by protective applications such as antivirus, anti-malware, and anti-spyware applications that may be installed on the target system. This helps in maintaining future access to the target system, as a hidden malicious file provides direct access to the target system without the victim's consent.

Step 4: Clear Logs to Hide the Evidence of Compromise

A professional ethical hacker and penetration tester's last step in system hacking is to remove any resultant tracks or traces of intrusion on the target system. One of the primary techniques to achieve this goal is to manipulate, disable, or erase the system logs. Once you have access to the target system, you can use inbuilt system utilities to disable or tamper with the logging and auditing mechanisms in the target system.

Overview of Clearing Logs

To remain undetected, intruders need to erase all evidence of security compromise from the system. To achieve this, they might modify or delete logs in the system using certain log-wiping utilities, thus removing all evidence of their presence. Various techniques used to clear the evidence of security compromise are as follows:

- Disable Auditing: Disable the auditing features of the target system
- Clearing Logs: Clear and delete the system log entries corresponding to security compromise activities
- Manipulating Logs: Manipulate logs in such a way that an intruder will not be caught in illegal actions
- Covering Tracks on the Network: Use techniques such as reverse HTTP shells, reverse ICMP tunnels, DNS tunneling, and TCP parameters to cover tracks on the network
- Covering Tracks on the OS: Use NTFS streams to hide and cover malicious files in the target system
- Deleting Files: Use command-line tools such as Cipher.exe to delete the data and prevent its future recovery
- Disabling Windows Functionality: Disable Windows functionality such as last access timestamps, Hibernation, virtual memory, and system restore points to cover tracks
<urn:uuid:08b0c86b-28a8-4dd5-ae36-5f2f616c1403>
CC-MAIN-2022-40
https://cybercoastal.com/system-hacking-introduction-for-beginners/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00575.warc.gz
en
0.900565
1,724
3.53125
4
Updated: Dec 1, 2021

Your password is the gateway to your account. Once inside, a hacker can access a wealth of personal or company information, and use it for their own ends — like divulging trade secrets, committing fraud or accessing data. Crafting a secure password is essential in reducing risks to your account and information and keeping your data safe — whether at home, work, or on the go. From common mistakes to best practices and leading advice from industry experts, understanding how to create a strong, safe password ensures only you keep the keys to your account. We'll cover the following:

Why do you need a strong password?

Passwords protect sensitive data, ensure privacy for you, your employees and your business, and prevent unauthorised access. It's for this reason that most leading cybersecurity bodies recommend that only you know your password, even in the scope of a company. Of course, a strong password doesn't guarantee protection, as it can still be guessed or hacked, but the stronger your password, the better your defence.

How are passwords discovered?

Most of the time, they're simply guessed. People often use passwords that are too weak, simple or common to be truly secure, making it an easier job for hackers to compromise accounts. Research by the UK's leading security body, the National Cyber Security Centre (NCSC), showed that when it comes to setting a password:

15% use their pet's name
14% use family members' names
13% use special dates
6% use a sports team
6% use 'password'

The NCSC also shared a list of the top 100,000 breached passwords from haveibeenpwned.com, a website created by Microsoft Regional Director Troy Hunt. The data showed that the password '123456' has been found 23 million times, 'qwerty' 3.8m and 'password' 3.6m. Following data breaches, hackers use a practice called 'credential stuffing' to attempt to crack applications. This involves using lists of known usernames, email addresses and passwords to access accounts.

But there's actually more to it than easy or weak passwords, and a lot of it comes down to human behaviour — namely, that we're creatures of habit, and so fairly predictable. This makes us vulnerable to creating weak passwords from the outset. Here are some common problems:

We reuse the same passwords across multiple websites or accounts.
We use variations of the same password, whether we're resetting an old one, or again, using one similar password across the internet.
When making a password more complex, we fall into common patterns like starting with a capital, ending with an exclamation mark, and swapping numbers for letters in the middle.
We might write down our passwords, or share them with others.
We tend to repeat words in longer passwords.

While these habits might make passwords easier for us to remember, they also make them easier to crack by others — whether it's a hacker using software or algorithms, or someone simply typing their best guess.

What are the current best practices?

Industry leaders don't always agree on the best approach. The NCSC suggests using 'three random words' for each password you set. Not only does this mean your password will be longer — which can mean extra security — but by randomising the words you choose, there's no discernible link between them that can be guessed, and the combinations are endless.
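As a small illustration of the 'three random words' approach described above, the sketch below uses Python's secrets module to pick the words with a cryptographically strong source of randomness. The short word list is a stand-in assumption; a real generator would draw from a dictionary of thousands of words.

```python
import secrets

# Tiny stand-in word list; a real generator would use a large dictionary file.
WORDS = [
    "copper", "lantern", "orbit", "meadow", "quartz", "harbor",
    "velvet", "thunder", "pickle", "granite", "willow", "comet",
]

def three_random_words(separator="-"):
    """Pick three words using a cryptographically strong random source."""
    return separator.join(secrets.choice(WORDS) for _ in range(3))

print(three_random_words())   # e.g. "quartz-willow-comet"
```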
However, other bodies, such as the National Institute of Standards and Technology (NIST) and The Open Web Application Security Project (OWASP), have previously recommended an 8-10 character minimum, with a mixture of casing, letters, numerals and special characters.

Other things to think about include:

Using a novel username: For example, instead of the person's name, it could be something more abstract or arbitrary, like a fruit, animal or place. However, experts dispute whether this is any more effective than simply using a strong password in the first place, and workplaces may fall into a pattern of using a particular category (e.g. fruit) across the board.
Whether administrators know or create employee passwords: While this could help ensure passwords are created in line with company policies, it does risk exposing passwords to others in the business.
Using expiring or rotating passwords: Frequently resetting passwords may not add any extra security, as users might rehash old ones or only change a character or two when setting a new one. It might be more prudent to reset passwords only if an account is compromised.
Implementing a corporate password policy: These can guide employees on creating a strong password, but recommendations may differ between businesses and there could be unexpected outcomes: too complex, and employees might store or write down passwords; too simple, and accounts might be easier to hack into.
Password diversity: While you may choose a single approach to setting a password when it comes to businesses, there's a lot to be said for how diverse employee passwords ultimately are. That's because hackers may need to use more than one type of software or algorithm to crack into your accounts.

How can you make passwords more complex for hackers?

There are a number of things we suggest. As a user, we'd recommend you never share your password, even with a person in your company, and that you don't risk reusing or rehashing the previous one. You should aim for a password that's at least 12 characters long for a standard user account, and around 16 characters for an admin or higher access account. Finally, frequently check breached password lists, or enter your email address into a site like haveibeenpwned.com, to see if you've been hacked.

As a company, roll out multi-factor authentication (MFA), like using a phone to log in as well as a password. The more obstacles in a hacker's way, the more secure the account. It's also a good idea to ban common passwords (such as 'qwerty') which could be easily cracked. Even better, consider ways you could reduce your reliance on passwords altogether. Essentially, a network of accounts is only as secure as its users. Everyone in your company needs to use a strong password and follow best practices to keep information safe.

What's a good example of a strong password?

If you're coming up with your own passwords, we recommend the NCSC model of 'three random words', as we consider this the most secure way to devise a password. You could pick three things from your living room, for example, so you have an easy memory clue, or you could try and invent a story around your three words to help with recall. You could also look into machine-generated passwords, such as those in a password manager, or try NIST's approach with a blend of characters (though take care not to fall into common patterns, like swapping 'O' for '0').
Whatever you choose, ensure your password is difficult to guess (even by those who know you) by avoiding common phrases (like 'I love you'), connected words, important dates and favourite things. Be sure to check against lists of common or compromised passwords, too. Ultimately, though, a password manager may be your best option for account security.

What about password managers?

Remembering passwords can be tricky, whatever approach you use to create them. Password managers can suggest complex, randomised, automatically generated passwords for you, store them, and even synchronise them across your devices — so you don't ever need to remember them. Many people find they make life easier, as logging on can be quicker, you're less likely to reuse a password, and you won't need to keep resetting them if you forget. But, there are some key things to consider:

Your password manager must be linked to a secure email account and it's recommended you enable MFA for extra protection.
Always use the latest version of a password manager or browser (for instance, if you surf and remember passwords through Chrome).
Consider which password manager is right for you (or your company). For example, some store your passwords on a disc.

How can CovertSwarm help with my passwords or a corporate password policy?

Our approach centres around education; it's often the case that people are the biggest problem in security matters, as if you don't know what's secure (and what isn't), you can't protect your account and, by extension, important information. Nobody knows cybersecurity like we do, and we practice what we preach by only following the latest industry guidance.

Passwords are just one way hackers might access confidential information — and it's hard to know whether you might be vulnerable. When you work with CovertSwarm, we'll cyber attack your business from all angles to find weaknesses, including weak passwords, using penetration testing and ethical hacking methods. We can then deliver training, workshops, demos and more to show how to secure your applications — just ask.

Are we heading towards a passwordless future?

While it looks like technology is heading that way, a totally passwordless future is still a few years from now. Most data breaches involve weak, default or stolen passwords, and we know that passwords can be hard to create and remember, so going passwordless could make accounts, devices and applications more secure.

We're already seeing passwordless access supported by biometrics like fingerprint and facial recognition, now used on the majority of smartphones and computers, and some authenticator apps are also in common use (for instance, many banks require you to authenticate through your phone if you log on via a desktop). PINs, security 'keys' that plug into a USB port, and wearables like NFC (Near Field Communication, or short-range wireless) smart rings are also in circulation. As well as potentially being more secure, these methods generally allow for faster log-in, too — one where a glance, tap or even your proximity can instantly unlock access — and all of those things are much harder to steal or copy than a string of characters.

How do I roll out secure passwords for myself or my organisation?

It's so important to protect your organisation, your employees, and of course yourself, from would-be attackers. Passwords are a great way to safeguard information but can be vulnerable to hacking if best practices aren't followed.
And, you may not realise if your passwords are weak, risky or leaving you open to attack. Strategic planning is always better than firefighting, which is why knowledge is really your greatest ally in creating a strong, secure password. Through relentless penetration testing and ethical hacking, we can find vulnerabilities like weak passwords, so that you can protect your valuable data and prevent unauthorised access to your account(s).
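For checking whether a password has already appeared in known breaches, the sketch below queries the public Pwned Passwords range API that sits behind haveibeenpwned.com. It assumes the service's current response format of SUFFIX:COUNT lines and requires the third-party requests package; only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
import hashlib
import requests   # third-party package: pip install requests

def pwned_count(password):
    """Return how many times a password appears in known breaches (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # Only the 5-character prefix is sent; matching happens locally (k-anonymity).
    response = requests.get(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    )
    response.raise_for_status()

    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))   # a large number: this password is badly burned
```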
<urn:uuid:5864d62f-069c-4663-820e-2d480665fbec>
CC-MAIN-2022-40
https://www.covertswarm.com/post/password-policy-best-practices
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00575.warc.gz
en
0.927757
2,305
3.125
3
Both virtual reality and augmented reality attempt to immerse the users in ways that detach them from the real world, but the subtle differences between the two concepts may ultimately decide which, if either, becomes a mainstream success. While virtual reality or VR devices create a completely fabricated world for the user to inhabit, augmented reality (AR) creates a blend of real and virtual, with the user clearly able to distinguish between the two.

Despite both technologies being able to trace their lineage back more than 40 years to the early VR headset the Sword of Damocles, neither has found widespread traction with consumers. While there has been no shortage of failed attempts in the past, ranging from Nintendo's Virtual Boy console to the Sensorama multimedia device, successes have been short-lived or non-existent. However, with technology giants backing virtual reality through Facebook's Oculus and Sony's Project Morpheus and augmented reality gaining more and more apps every day, now could be the time for these technologies to experience the kind of success that has so far eluded them.

In terms of similarities and differences between AR and VR, the latter often requires a substantially larger piece of hardware, as it must block out the user's real-world surroundings to create a sense of complete immersion. Early headsets were often cumbersome and left wearers feeling nauseated and unwell. AR, meanwhile, has seen more recent growth through smartphone apps like Wikitude World Browser and Google Ingress. VR is currently trailing augmented reality, as the latter simply requires a display, camera and motion sensor, all of which are carried by millions in their smartphones.

Many people also give augmented reality the edge over its virtual competitor, because they see VR as simply a video game accessory – a 21st century Virtual Boy. However, Oculus founder Palmer Luckey believes that his device has a huge variety of applications.

"Gaming is just about the only industry that has the set of tools and skills needed for VR. At its core, VR is an extension of gaming," he said during a talk at the CES show in Las Vegas. "But it's the idea of digital parallel worlds, allowing people to communicate and do things in a virtual world. Most don't spend the majority of their time playing games now, and I can't see that changing with VR – gaming is not the end game."

Perhaps stemming from the somewhat anti-social reputation of gaming, the predicted rise of virtual reality has also been criticised as encouraging isolation and cutting off individuals from the real world. If either VR or AR is to overcome this perception, organisations will instead have to show the potential these devices have to connect individuals in ways that are currently impossible. The possibility of using a VR headset to experience an event, such as a friend's wedding for example, which you are unable to attend in person could help change negative opinions of these growing technologies.

The future of both virtual and augmented reality hangs in the balance. While smartphones have provided an affordable entry point for AR applications, surely the next step is to get consumers interested in bespoke AR devices like SixthSense. Virtual reality meanwhile has often failed to live up to expectations, but recent developments suggest that this is about to change. VR is no longer the dream of sci-fi obsessed computer geeks, but a technology receiving some serious financial backing.
Facebook were willing to spend $2 billion on acquiring Oculus, while Sony, Samsung and perhaps even Apple are also preparing to enter the market. Moreover, real-world applications for VR technology are already in development, such as the startup using VR to help patients suffering from a “lazy eye” and scientists using headsets to further their academic research. The question of whether virtual or augmented reality is likely to become a commercial success is a somewhat disingenuous one. Just as AR blends real and manufactured, ultimately the technology behind both VR and AR is also likely to become one. Consumers of the future are unlikely to have to choose whether they want a virtual reality or augmented reality device, but instead whether a VR or AR experience is what they want from a particular headset or wearable gadget.
<urn:uuid:79328676-97b0-4390-b830-6a7245ec05a0>
CC-MAIN-2022-40
https://www.itproportal.com/2015/02/03/virtual-reality-vs-augmented-reality-will-take-off/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00575.warc.gz
en
0.96266
852
2.578125
3
In the world of computing and networking, there is an ongoing battle to have the fastest speeds. In this competition for speed, there is a desire to have latency as low as possible to ensure those lightning-fast speeds. Edge computing and edge applications are a continuation of this effort to lower latency.

If you are unfamiliar with latency, it is the delay between the initiation of a command or input on one side and its reception on the other. It is usually measured in milliseconds (ms). High latency means there is a larger delay between sending and receiving; low latency means there is a small delay.

Latency is typically affected by distance, as longer distances from output to reception mean the data being sent has a longer time to travel, while shorter distances will have lower latency as the data is received more quickly. Latency can also be affected by software and hardware elements in the network path, along with network congestion at traffic exchange points — a situation analogous to the problems people have when driving on freeways in traffic.

Online gamers experience high latency in perhaps its most infuriating form as 'lag.' There is also the impact of latency on financial trading, satellites, fiber optics, and networks. All of those industries consider high latency highly inconvenient and a detriment to their effective functioning. A few milliseconds of latency could mean millions of dollars lost or potentially fatal disconnects with navigation equipment.

One critical purpose of edge computing and edge devices is to minimize the effect of latency on online functions. Compared with relying on a centralized location as the main source of data processing and telecommunications, edge computing decentralizes this process and widens the modalities and locations where these actions can be performed, moving them closer to where the activity is happening. A centralized location typically struggles to accommodate the flood of data and information it receives, threatening to overload it, impact its performance, and cause high latency and disruptions.

Because it is more widely distributed, edge computing can lower latency, as it performs processing and telecommunications closer to their point of use, such as an Internet of Things (IoT) device. Additional benefits include improved consistency. Rather than data travelling hundreds or thousands of kilometers, an edge application may shorten that to merely tens of kilometers or keep it on-site, thus reducing latency.

Consider the range of edge applications today, like IoT devices or smart cameras. High latency would inhibit their ability to operate in real time, as processing would be delayed. With low latency from edge deployments such as local servers or a nearby cloud, delays could be cut down considerably for effective use of edge applications.

A strong majority of business leaders in one survey said they sought low latency of 10ms or less to ensure the success of their applications, while 75 percent said they require 5ms or less for edge initiatives. But it is possible that enterprises may be too obsessed with lowering latency to consider whether they truly need such speeds. A market analysis from Spirent Communications and STL Partners found a disconnect between the demands of edge customers for 5G multi-access edge computing (MEC) and the capabilities of vendors. STL Partners also found telecoms often struggle to deliver consistent latency.

edge computing | gaming | interconnection | latency | network
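As a simple way to put numbers on the latency discussion above, the sketch below measures TCP connection setup time in milliseconds. The hostnames are placeholders chosen for illustration; in practice you would compare a distant, centralized endpoint against a nearby edge endpoint.

```python
import socket
import time

def connect_latency_ms(host, port=443, samples=5):
    """Average TCP connection setup time to host:port, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Placeholder hostnames: compare a far-away origin with a nearby edge node.
for host in ("example.com", "nearby-edge.example.net"):
    try:
        print(f"{host}: {connect_latency_ms(host):.1f} ms")
    except OSError as error:
        print(f"{host}: unreachable ({error})")
```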
<urn:uuid:d676265f-374f-4789-9897-5a8539623957>
CC-MAIN-2022-40
https://www.edgeir.com/latency-what-is-it-and-how-does-edge-computing-impact-latency-20220621
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00775.warc.gz
en
0.952805
648
3.53125
4
Data center virtualization typically uses virtualization software along with cloud computing technology to replace traditional physical servers and other equipment housed in a physical data center. A data center that uses virtualization, sometimes referred to as a Software-Defined Data Center (SDDC), allows organizations to control their entire IT framework as a singular entity—and often from a central interface. The approach can trim capital and operational costs; improve agility, flexibility and efficiency; save IT staff time and resources; and allow businesses to focus on core business and IT issues. Research firm MarketersMedia reports that the global data center virtualization market is projected to grow by 8 percent annually from 2017 through 2023. That would make data center virtualization a U.S. $10 billion market. Virtualization of a data center and all its hardware components—including servers, storage, and appliances—isn't a new concept (it dates back to the 1960s). But now, advances in cloud computing, software and other components have made the concept viable and even desirable. Understanding Virtualization of the Data Center Understanding what virtualization of a data center means is critical to proper management of that facility. A number of related terms are used with the concept of virtualization—sometimes interchangeably. They may refer to the same thing, they sometimes overlap, and they can also mean something different. These include: - Server virtualization. This approach abstracts the physical hardware by creating a virtual server, typically running in a cloud infrastructure. This masks server resources, processors and operating systems. Server virtualization uses a hypervisor to coordinate processes and instructions with the central processing unit (CPU). A virtual server can operate on-premises or offsite in a virtual data center. - Big Data virtualization. This technique produces a virtual framework for big data systems. It transforms logical data assets into virtual assets. This abstraction layer makes data easier to access, typically speeds data access, and simplifies data management. - Virtualization in the data center. This framework, as the name implies, abstracts all physical elements of a data center and creates virtual elements. It can eliminate the need for a physical space to house hardware. - Virtual data center. This term refers to a pool of cloud-based servers and systems that operate as a single virtualized data center rather than a collection of physical assets. In addition, there is sometimes confusion over other related terms, including the main term we're exploring: - Virtualization. This refers to all services that are separated from the physical hardware and delivery environment through the use of a hypervisor. Virtualization allows a physical server to run multiple computing environments. - Private cloud. This describes physical servers and devices that operate together within a single environment through the use of virtualization. Essentially, pooled virtualization resources create clouds. - Hybrid cloud. Hybrid clouds, which may comprise both public and private clouds, may incorporate virtualization in different ways. As a result, changes in usage and resources may lead to performance and manageability issues. Benefits of Virtualization Virtualization enables virtual machines, which are self-contained instances of an operating system or software stack.
They introduce a number of benefits. - Speed and flexibility. In many cases, virtual machines speed the delivery of services and allow organizations to allocate computing resources more effectively. - Reduced capital costs. Organizations utilize servers and computing resources more effectively. This can push utilization rates from around 60 percent to upwards of 90 percent, reducing the need for physical hardware and devices, along with software licenses. - Reduced operating costs. Fewer physical servers and devices often translate into reduced energy costs and lower heat buildup. In some cases, virtualization can help build a more efficient data center. - Reduced infrastructure and real estate requirements. Businesses that run virtualization at scale—and within large cloud frameworks—typically reduce the need for data center space or eliminate data centers altogether. In some cases, this can slash real estate and infrastructure costs to the tune of millions of dollars. Challenges of Virtualization Virtualization also presents a number of challenges. These include: Resource management. Managing virtualized machines and the resulting IT environment can prove difficult. Although virtualization software from VMware and Microsoft is designed to simplify the task, it also introduces new complexities, including managing operating systems, microservices, containers and other elements. The result can be multiple dashboards and management consoles. Infrastructure. Network connections, network storage devices and storage capacity must all be sufficient—and dynamic enough—to support a virtualized environment, particularly a virtualized data center. Provisioning. Organizations can encounter challenges related to setting up hypervisors and provisioning virtual servers effectively and efficiently. Managing software and other resources. Ensuring that updates and patches are applied effectively can prove difficult in a virtualized environment. There's also a need to oversee libraries of code, scripts and containers. How to Manage Data Center Virtualization Although server virtualization is in some ways no different than overseeing physical servers, there are also some important differences. Organizations looking to maximize virtualization of a data center typically benefit by focusing on these key areas: - Embrace standardization. It's vitally important to ensure that servers run the correct software and systems and that they are updated and patched correctly. In a virtualized environment, the challenges are multiplied, and a subpar physical infrastructure will undermine the virtual framework. Consequently, it's critical to ensure that the right configurations, templates, containers and libraries are installed and that they reach across the entire environment. - Address sprawl. In many cases, the reason to adopt virtualization is to combat server sprawl. The irony is that virtualization, particularly virtualization of a data center, can create its own form of sprawl. People spin up virtual machines that aren't actually needed and fail to spin them down, consuming resources they don't require. The answer is well-designed templates, auditing tools and educating teams and employees about how to use resources effectively (a simple audit sketch appears at the end of this article). - Deploy the right administration and management tools. It's important to use the right software and tools to manage virtualization—especially when running more than a single hypervisor.
Although vendors such as Microsoft and VMware offer built-in tools with their virtualization software, these products may not be robust enough to tackle the intricacies of server virtualization and virtualization of a data center. Many smaller vendors address gaps and missing features by providing deeper visibility into the virtual stack and arming IT with more powerful tools for identifying problems and managing virtual servers and systems. - Ensure that there is adequate network storage and optimized backups. Data storage and backups are both crucial tasks for any organization, but virtualization of a data center can present different challenges. Storage Area Networks (SANs) are a frequent choice for many organizations looking to tackle virtualization of a data center. But network-attached storage (NAS) can work well too, and these devices are generally less expensive. Regardless of the exact approach, it's important to understand what works best and how to size storage capacity to meet the requirements of a virtualized environment. This requires visibility into where virtual machines store disk images in a SAN or other network storage framework. Data Center Disaster Recovery Disaster recovery and business continuity are challenging issues for any business. Server virtualization and virtualization of a data center can help an enterprise navigate the task more effectively. Among the benefits: - Faster backups and recovery. In many cases, it's possible to accomplish the task in hours rather than days when data is virtualized. - Better visibility into assets. A well-designed virtualized environment with the right tools can aid in identifying and managing documents, files and other data. - Failover is simplified. If it's necessary to switch to a redundant system or go back to a known working state, virtualization can help by speeding recovery time. It can also provide a platform for testing systems before moving software back into a production state. - The need for a smaller footprint. Fewer servers, storage and other devices translate directly into lower costs for disaster recovery. Data Center Virtualization Products Numerous companies compete in the virtualization space. So how should a buyer select a data center virtualization product? Here's the core guiding principle: by focusing on which virtualization tools make sense and where they deliver the greatest value, an organization can improve performance, trim costs and create a more efficient computing framework. Vendors range from large companies like Microsoft and VMware that sell virtualization platforms and tools to best-of-breed providers that sell software and tools to manage environments and address specific tasks. These include creating and managing templates, handling disaster recovery and addressing technical tasks such as provisioning and partitioning. There are also open source tools available to address various tasks related to virtualization in a data center. In addition to the vendors named above, here are additional choices: - Red Hat JBoss Enterprise Data Services: The open source leader is a well-respected name in the enterprise data center. - IBM InfoSphere Information Server: Arguably the leading legacy IT name, IBM certainly knows data center virtualization. - NEC Nblock: NEC sells a solution that provides the building blocks to construct a virtual data center.
- CDW Software Defined Data Center: The CDW solution aims to offer flexibility and scalability that covers the complete data center.
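To illustrate the sprawl audit mentioned earlier, here is a minimal, hypothetical sketch that lists every virtual machine a hypervisor knows about so stopped or forgotten VMs can be reviewed. It assumes a KVM host managed through libvirt and the libvirt-python bindings; vSphere, Hyper-V and public clouds expose analogous inventory APIs.

```python
# pip install libvirt-python  (requires access to a running libvirt daemon)
import libvirt

def audit_virtual_machines(uri: str = "qemu:///system") -> None:
    """Print every defined VM with its state and memory ceiling.

    Stopped VMs that nobody remembers creating are typical candidates
    for reclamation in a sprawl review.
    """
    conn = libvirt.open(uri)
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            max_mem_mib = dom.info()[1] // 1024  # dom.info()[1] is max memory in KiB
            print(f"{dom.name():<30} {state:<8} max mem: {max_mem_mib} MiB")
    finally:
        conn.close()

if __name__ == "__main__":
    audit_virtual_machines()
```

A report like this is only a starting point; pairing it with ownership tags and last-used timestamps turns it into the kind of auditing tool the sprawl discussion calls for.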
<urn:uuid:4f1c9659-8988-476b-ad0a-6d5190d08fb6>
CC-MAIN-2022-40
https://www.datamation.com/data-center/data-center-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00775.warc.gz
en
0.899931
1,962
3.15625
3
These Campaigns Explain Why AV Detection for New Malware Remains Low Here's why antivirus can't keep up with advanced malware during the first stages of an attack This year we saw massive spam campaigns like NotPetya or Locky fly below the radar of antivirus software and go undetected during the first hours or even days. Some of them actually went undetected for months. Second-generation malware usually has the ability to evade detection and bypass the antivirus programs users have installed on their computers to keep their data safe. Maybe you've never asked yourself this, but do you know how long it takes for antivirus programs to detect advanced types of malware? In this article we'll show you several examples of spam campaigns which went undetected by antivirus software and explain why this happens in the first place. We'll also provide details on how you can protect yourself efficiently against such threats and close the various security gaps in your system. Did you know that a new malware strain was discovered every 4.2 seconds in the first quarter of 2017? Security experts came to this conclusion in their "Malware trends 2017" report. How antivirus software detects malware Modern malware threats are sophisticated and can be devastating for your PC without proper online protection. Antivirus programs might not offer 100% protection for your devices, so it's best not to depend solely on them. As users, we need to strengthen our defences and consider adding other security software to keep cybercriminals away. Antivirus is sometimes called an anti-malware program, and people tend to use both terms interchangeably, believing that antivirus programs can defend against all types of malware, which is not the case. Here's what antivirus software can do for you: - Virus scanning, which runs in the background; files open only after the scanning process is over. Most antivirus programs include a real-time scanning feature which quickly detects malicious files on your PC. - Blocking malicious files and preventing them from running, because they put your computer at risk of being infected with malware. - Automatic updates, which are must-have features to easily track and detect new threats that didn't exist when your antivirus software was installed; - Malware removal, which is used to detect and remove specific malware from your devices. AV software might be limited to detecting only old strains of malware and miss new forms of online threats. - Antivirus might also include phishing protection, vulnerability scanning, browser protection and system optimization features. In broad strokes, here is how AV finds and detects new malware, which is usually delivered via email through a malicious attachment: if your AV program is up to date and the malware is listed in the AV database, it will be detected and isolated; if the AV is not up to date, the malware will go undetected. It's worth mentioning that not all antivirus products offer the same level of computer protection, because each one may include different specific and/or advanced features. In terms of virus detection techniques, AV includes: - Signature-based detection, which uses examined files to create a static fingerprint of known malware (a simplified sketch appears below). It has its limitations: it can't detect new malware for which signatures have not yet been developed; - Heuristic analysis, a method designed to identify new malware by statically examining files for suspicious behaviour without an exact signature match.
One of its downsides is that it can accidentally detect legitimate files as malicious; - Behavioral detection, a method based on observation rather than execution, which tries to identify malware by looking for suspicious behaviors in files; - Cloud-based detection, which can identify malware by collecting data from protected computers and analyzing it on the provider's infrastructure, instead of performing a local analysis. False positive detections in the antivirus industry We know we have a false positive when an antivirus program informs users there's a specific vulnerability or a potential infection on their devices, and in fact there isn't. What's really happening is that your antivirus program might block a piece of software that proves to be safe. These mistaken detections are known as false positives and can represent a serious problem for users who are also facing real malicious attacks. In fact, no antivirus software is perfect, and any of them might report a false positive at some point. The State of Malware Detection and Prevention report conducted by the Ponemon Institute analyzed how IT security experts manage the prevention and detection of malware and advanced threats. Investigations of malware threats are often false positives, and a high volume of false positives will prevent security teams from seeing the real problems faced by an organization. Those responsible for security operations spend a significant amount of time investigating and chasing false positives. However, the study found that only 32% of respondents say their security teams spend time prioritizing alerts that need to be investigated. The graphic below gives details: on average, 40% of malware alerts investigated by security teams are considered to be false positives. Source: Ponemon Institute Report This means that no antivirus software (or other software product) is 100% accurate and may not offer the best accuracy in terms of detection rate and false positives. The antivirus software market is dynamic and complex, and it gets harder to track all the threats targeting users from everywhere. New malware continues to rise at an alarming rate, and cybercriminals have become more successful at combining technical and psychological skills to launch new cyber attacks. Antivirus software has a hard time detecting new malware threats Antivirus programs aren't as effective as they used to be because of the rise of new malware attacks which they can't fend off. Online threats have evolved greatly in recent years and users need to find alternative security solutions to enhance protection against these new threats. Malwarebytes analyzed information from scans of approximately 10 million endpoints, the majority of which had one or more traditional AV programs installed. The results of these scans showed that traditional AV solutions are weak and fail to protect against even the most common forms of malware spotted in the wild. Almost 40% of all malware attacks in the wild cleaned by Malwarebytes among endpoints with AV installed occurred on endpoints that had two or more traditional AV solutions installed. You can read the full Malwarebytes report here to see its findings. When it comes to zero-day attacks, the malware is brand new and antivirus software might have problems detecting it. Antivirus programs do a better job at protecting against known types of viruses and online threats, such as Trojans, rootkits, backdoors, phishing attacks or botnets.
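To make the signature-based approach described above concrete, here is a minimal, hypothetical sketch of how a hash-based signature scan works. Real AV engines use far richer signatures (byte patterns, emulation, heuristics), but the core matching principle, and its blind spot for anything not already in the database, is the same.

```python
import hashlib
from pathlib import Path

# Hypothetical "signature database": SHA-256 digests of known-bad files.
# Any brand-new sample, or a repacked/polymorphic variant, simply won't be in here,
# which is exactly the weakness discussed in this article.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: Path) -> None:
    """Flag files whose digest matches a known-bad signature."""
    for file in directory.rglob("*"):
        if file.is_file():
            verdict = "MALICIOUS" if sha256_of(file) in KNOWN_BAD_SHA256 else "clean"
            print(f"{verdict:>9}  {file}")

# Example invocation (path is hypothetical):
# scan(Path.home() / "Downloads")
```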
From exploiting a vulnerability in Windows to changing the type of malware delivered (NotPetya ransomware, for example), from switching to new infection vectors that delay strain detection, to using auto-updating elements that automate new payload delivery or social engineering attacks, cybercriminals are getting better and better at spreading malware and infecting a large number of computers. Antivirus software might fail to detect malware for various reasons, but one major cause is its reactive scanning procedure. Online criminals often find innovative ways to evade the threat analysis system and infect users' computers. Here's how: - From our experience over the last years, we noticed that antivirus products still have issues detecting second-generation malware in a timely manner. It can take up to one or two days to spot it, especially with AV products that aren't rated among the top 10. In this time, cybercriminals can do a lot of things, from stealing your financial information to encrypting your data via a ransomware infection; - Malware creators constantly look for different ways to avoid AV detection. For instance, they equip malware strains with the ability to detect sandboxing mechanisms by checking registry entries, the computer's video and mouse drivers, specific communication ports and more. When malware detects that it's running in a virtual environment (sandbox) it will stop its activity, so antivirus products may conclude that it's a safe file and just let it pass. Plus, AV engines don't spend much time on any single file, which makes it harder to identify code as malware. - Online criminals often use an evasion technique called Fast Flux to make it more difficult for AV to detect malware. It is usually used by botnets to hide phishing campaigns, malware-loaded websites and other infection sources targeting a large group of users. - Malicious actors can also use encrypted payloads to infect victims' PCs, which delays detection by antivirus products and gives cybercriminals more time to deploy the malware and spread the infection. - Recent attacks targeting MS Office can run malware without the need to trick users into enabling macros, making it difficult for AV to detect. Malware authors take advantage of a legitimate Office feature called Microsoft Dynamic Data Exchange (DDE), which allows an Office application to load data from other Office apps. - Cybercriminals use domain shadowing to hide exploits and communication between the payload and the servers they control. For that they need a vast number of URLs they can use and discard. - The bad guys are agile and move faster than security vendors, so they also rely on polymorphic behaviour, which involves changing file names and file compression. How second generation anti-malware programs detect new threats Most of the tactics used by malware creators to spread infections are hard for antivirus programs to detect. This is especially true of second-generation malware, which might go undetected for months because malicious actors constantly update and improve their tactics to avoid detection. Detection continues to be a major issue for both organizations and home users, because new malware threats often remain invisible to an antivirus product. This happens because most people rely on a single product, usually an AV product, and don't update it regularly. Without constant updates, users expose themselves to more attacks and are more vulnerable to being hit by them.
Antivirus is not a "set-it-and-forget-it" solution and neither is any other security setup you may install on your device. AV may be your go-to solution to fight malicious software, but there's always room for improvement. This is why it is important to automate everything you can: software updates, traffic filtering, and other tools that help you improve online security. To enhance protection, you should be using a modern anti-malware solution that is mainly focused on detecting new malware antivirus products can't easily block. Anti-malware programs try to scan and detect new cyber attacks based on data collected from cybercriminals' operations. The graphic below shows the process second-generation protection solutions use to detect new threats. Malware is usually delivered via spam email with a malicious attachment or links that potential victims might click and get infected. Hackers can also use infected servers and domains to deliver the malware. Think of cyber attacks like an iceberg that hits and causes an unprecedented amount of damage for organizations and home users alike. There's a lot going on behind the scenes, and you need a number of settings and legitimate software products to fight back and protect yourself against these advanced cyber attacks. What can you do? You need to filter and sanitize your Internet traffic by eliminating various online threats that you can't block without getting help. We recommend checking this list of 4 tools that can help you with web traffic filtering: - Use a Virtual Private Network (VPN), which can anonymize Internet traffic and secure it through encrypted connections and communication. With the help of a VPN, you can reduce your exposure to cyber attacks. - Enable your firewall, which is a network system designed to prevent unauthorized access to public or private networks. If you have Windows installed on your PC, you can use its built-in firewall, which is free and always on. Otherwise, we recommend using a firewall from a trusted source. - Install a reliable antivirus program. Despite the reactive protection it provides, it's still useful to have good AV software to help you with web traffic filtering. It might also block blacklisted websites and infected domains, keeping a malware infection away. - Use a second generation anti-malware program to filter your web traffic and block advanced malware threats. A proactive cyber security solution can detect and stop online threats at different stages and provide multiple layers of protection. It does that based on the type of infrastructure or infection detected in the first place, which is relatively easy to track. When it comes to the threat landscape, malware authors have switched to more sophisticated attack vectors. When they launch a new attack, cybercriminals rely on innovative tactics and easily target businesses with outdated infrastructure or software. It's worth remembering how essential it is for businesses to keep their infrastructure up to date and upgrade it. We all know the consequences: security data breaches, business disruption, sensitive data loss, financial damage and many more. Cyber attacks have been happening for many years now, because online criminals keep trying to gain access to a massive volume of users' data, steal money or financial information, or simply disrupt business operations. What's changed is the way they operate. Being more skilled and using new methods means that they know which tactics will work.
There's been a lot of talk about botnets used by online criminals for malicious purposes, and for good reason. Botnets are a bigger problem than we can imagine. A botnet is a large network of compromised computers that are controlled by online criminals to serve their malicious interests and avoid detection. Hackers use botnets to do a lot of things: launch massive attacks, deliver ransomware, send spam or phishing emails, and carry out other malicious acts. For example, last year the Mirai botnet launched one of the biggest DDoS attacks by targeting a major DNS provider, which managed to take down big sites like Reddit, Airbnb, Twitter and Spotify. Last month, security researchers warned about the Reaper botnet, a new IoT botnet which seems to be larger than the infamous Mirai. In terms of infrastructure used during cyber attacks, there have been infamous families of financial-stealing malware like Gameover Zeus, used by cybercriminals to collect users' financial information. It has been estimated that this financial malware has infected 1 million users around the world. Another example is the Angler exploit attack, in which cybercriminals used two core techniques (domain hijacking and domain shadowing) to bring in new servers and IP addresses to grow their distribution network. In this type of attack, AV detection was generally low, generating serious problems for businesses and public institutions in Europe and other places. These kinds of cyber attacks keep happening, and in the following behind-the-scenes look at spam campaigns we'll explain why antivirus software has a low detection rate when it comes to new malware. 12 spam campaigns behind the scenes Detection is still an issue, and antivirus software has fallen behind in terms of effectiveness because it lacks the ability to detect and remove second-generation malware in a timely manner. We analyzed 12 different spam campaigns happening on specific days, and the data found on VirusTotal indicated a low detection rate for AV engines during the first stages of an attack. For each campaign, the VirusTotal results show how many antivirus engines initially detected the malicious sample (the original article includes a screenshot of the VirusTotal detection rate for every campaign). As the VirusTotal data shows, it took AV engines at least two days to detect the malware. For some of the malicious campaigns analyzed (such as Locky and Hancitor), it took antivirus products even more time (8, 12, or even 29 days) to spot the malware. During these days, plenty of things can happen: cybercriminals can harvest your financial data (online banking credentials, usernames, passwords, etc.), or they can encrypt your data via a ransomware infection.
Whether it’s 2 days, 8 or 12 days, time to detection is too much and reactive protection offered by AV is not enough. Why does that happen? Traditional antivirus products are mainly effective against known online threats, and the malware scanning is rather reactive than proactive. The older a malware strain is, the easier for AV is to detect it. This scenario doesn’t apply for new and advanced malware. AV might block websites and domains that are known to be infected, and keep malware infection at bay. However, cybercriminals move extremely fast and could change methods, making difficult for antivirus software to detect sophisticated malware strains and strengthen security. All these evasion tactics trigger low detection rates in antivirus products, which means that a proactive-based model is needed. To enhance online protection you should use a proactive cyber security solution. An antimalware program isn’t meant to replace the antivirus, but complement it, so users can benefit of multiple layers of protection to defend and stop the growing number of malware attacks. What these examples teach us? Both organizations and home users shouldn’t rely on a limited number of software products to keep their data safe and secure. Protection guide against second generation malware Keeping malware attacks under control is a challenge for many businesses. Same goes for home users who are vulnerable and easy targets to different cyber attacks. Malware threats are widespread and difficult to spot by anyone, so prevention is the best strategy to stay safe online. We recommend using these security measures to keep your sensitive data away from online criminals trying to steal it: - Always keep your software up to date, including operating system and every application you may use on your device; - Have a backup of your important data in at least two separate locations: an external drive and in the cloud; - Do not open an email from the spam folder or click on suspicious links you might receive in your inbox’ - Remember to set strong and unique passwords with the help of a password manager system. This guide will help you everything you need to know about password security; - Use a reliable antivirus program as a basic protection for your device, but also consider including a proactive cyber security solution as a second layer of defense to secure your data and block most attacks; - Always secure your browsing while navigating the Internet and click on websites that include only HTTPS certificate; - Teach yourself (and master basic cyber security) to detect potential online threats delivered via emails, social engineering attacks or other methods; The low detection rates for AV found in these spam campaigns have shown us that AV can’t sometimes keep up with the high number of malware threats rising every day. An antivirus solution alone is not enough to keep users safe. For better online safety, you need to add security layers to fend off the rise of advanced cyber attacks. Let’s keep in mind that the impact of these low detection rates for AV along with the prevalence of software monocultures system have their risks. If everyone is using the same OS, apps, or network protocol, and security vulnerabilities are being discovered, it will impact everyone and could potentially lead to massive cyber attacks. Let’s not forget that the increasing amount of security holes in some of the most used apps is a great opportunity for cyber criminals to exploit. 
They take advantage of everything and easily target everyone. How do you keep your computer protected? Are you using any proactive security solution besides antivirus to secure your data from cyber attacks? Let us know your thoughts in a comment below.
<urn:uuid:b7f4f546-d272-44a9-93d3-9f9d04895518>
CC-MAIN-2022-40
https://heimdalsecurity.com/blog/campaigns-av-detection-new-malware-low/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00775.warc.gz
en
0.935106
4,173
2.515625
3
Cyber Security in the Shipping Industry With more than 90% of the world's trade carried by shipping, according to the United Nations' International Maritime Organization, the maritime industry is an attractive target for cyber attackers. The European Union has recognized the importance of the maritime sector to the European and global economy and has included shipping in the Network and Information Systems (NIS) Directive, which deals with protecting national critical infrastructure from cyber threats. Ships increasingly use systems that rely on digitalization, integration, and automation. While the IT world includes systems in offices, ports, and oil rigs, the OT world is used for a multitude of purposes, such as controlling engines and associated systems, cargo management, navigational systems, administration, etc. The OT systems used aboard include: - Vessel Integrated Navigation System (VINS) - Global Positioning System (GPS) - Satellite Communications - Automatic Identification System (AIS) - Radar systems and electronic charts
<urn:uuid:7e2c4aa2-5ed5-47dd-b9b5-57bb2c912602>
CC-MAIN-2022-40
https://www.adacom.com/cyber-security-in-the-shipping-industry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00175.warc.gz
en
0.908129
204
2.953125
3
In order to manage the world around us, we cast a net of words around it. Like the branding of a steer on the open range, names identify objects, and concepts, as being in our possession. To give a thing a name is to own it. The advantage is obvious. Labels facilitate communication. The disadvantages, on the other hand, are subtle but powerful. Names and labels, by their very nature, constrict our view of the world. Labels shape the world for us, and can also reduce, rather than expand, our potential. When teaching a class on creativity I'll often pass out decks of playing cards and ask the participants to build the tallest tower they can in 10 minutes. Towers two cards high are common, four high are rare. Yet towers of 10 or more are easily doable in the time allowed. Most people insist on seeing the cards as cards. They use the cards according to all the rules they know about the proper care and handling of cards. No bending, no folding, no mutilating. Only when you see past the name can you see it for what it is. Ignoring the label, what are "cards"? They're nothing but stiff pieces of paper perfectly suited for folding, bending and mutilating. Fold a stiff piece of paper into a "V" and you have the perfect building material. A child knows that, because a child doesn't yet know the "rules" for playing with cards. As the world becomes more complex we're tempted to label everything and deal with the world purely on the basis of those labels. To a large degree this works, but to be different – to distinguish ourselves from the pack – requires that we go back to first principles. When we think of the Internet, what do we think of? "Internet" is a name we've given to something… what is that something? In other words, what are the attributes that make up the Internet? What makes it different from everything we had before? What does it offer that we didn't have before? And how do we use it? First, let's get rid of some of the nonsense. It's not a "New Economy," it's not a "New way of doing business," it's not the "Solution to all societies' problems." None of this is to suggest, even for an instant, that the Internet isn't an impressive achievement, but it is just a tool, albeit a powerful one. The Internet concept, combined with the ubiquity of the computer, offers us the following: incredibly easy access to the rapid transmission of information. You can use this perspective of the Internet as a lens through which to examine a process. The objective is to identify opportunity. Consider the following example: When an engineer wants to understand the importance of a component in a bridge, one tactic is to pretend the component doesn't exist. Then ask if the bridge will fall down or stay up. Here's an exercise for you, the reader. Imagine building a company based on the Napster model without the Internet as the means of data transfer. Instead? Use the Canadian Postal service. Okay… okay… clean up the coffee you've spewed over this fine article and just pretend okay? Sheesh. There are four primary transactions: 1) Mailing your listing of sharable MP3s to a central database. 2) Mailing a query to that database and receiving a response. 3) Mailing a request to a holder of an MP3 you want to steal copy (I meant "copy"… honest). 4) Processing all the requests you receive and mailing the MP3s to the requesters. Oh yes, you can charge no fees to the users, but somehow make money doing this. This company would never get off the ground.
It’s an obvious failure from the start, until you take the “Internet Lens” and pass it over the business process… thereby reducing all times for delivery of information to about “zero”… then the magic happens. Your next exercise? Take that same lens and pass it over ALL of your existing business processes. Does the magic happen? de Jager is a well known, creative curmudgeon, keynote speaker/consultant on issues relating to change and the management of technology. You can contact him at firstname.lastname@example.org.
<urn:uuid:828e2700-90f5-4dd8-933f-b61019f49dc0>
CC-MAIN-2022-40
https://www.itworldcanada.com/article/break-down-the-rules/31777
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00175.warc.gz
en
0.94094
956
2.90625
3
Public schools nationwide are taking a cue from business, harnessing big data to improve student outcomes, help school districts make better hiring decisions and help governments use their education dollars more effectively. The results may be more successful students, better teacher retention and more finely tuned administration policies. Some of these big data applications are in the classroom, such as the creation of adaptive learning courses that guide individual students through lessons in the way most likely to promote individual success, but other applications are aimed squarely at administrators and the schools they work for. Those data apps help principals and superintendents hire the teachers most likely to improve educational outcomes, track trends in educational giving and, at the college level, correlate majors and classes taken to long-term career success. The message: Big data can be transformative for education. McGraw-Hill Education is developing digital curriculum and course materials, designed for use from the upper elementary grades into college, that take data gathered from 2 million students and use artificial intelligence to create adaptive learning experiences tailored to individual student needs. As a student reads and answers questions, the system tracks what the student doesn't know and then presents that material. The system knows what questions to ask, and in what order, to maximize long-term retention.
<urn:uuid:ff0e45ee-3b57-4ded-bcba-6f3472c59d2a>
CC-MAIN-2022-40
https://www.crayondata.com/big-data-goes-to-school/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00175.warc.gz
en
0.940498
252
3.265625
3
The terms ‘data sovereignty’ and ‘data residency’ can be a source of confusion for organizations that manage data across borders, particularly with the rapid adoption of cloud computing. It's important for businesses to understand the differences in these terms, and how they impact their data – and ultimately, their business operations, especially with the global adoption of the hybrid work model. In effect, data sovereignty and data residency are part of the same basic concept, which is the way data privacy impacts cross-border data flows. Businesses that handle international data must ensure that data privacy isn't compromised when data is shared in other countries. Additionally, understanding the legal requirements of storing data in a specific country is fundamental to legally satisfying data privacy and security standards. What is data sovereignty? Data sovereignty refers to the laws and governmental policies applicable to data stored in the country where it originated and is geographically located. For example, in Canada, the Canadian Consumer Privacy Protection Act (CCPPA) gives consumers control over their data and promotes greater transparency about how organizations use data containing personal identifiers. And the Australian Privacy Principles (APPs) legislate that personal data kept in Australia must meet thirteen standards on how data is collected and used. What is data residency? Data residency refers to the decision of businesses to store data (away from its origin) in another location and jurisdiction. It means that once data is moved, stored and processed within a particular region it is subject to the laws, customs and expectations of that specific region. For example, what may be deemed the acceptable use of personal information in Europe could be controversial in California. To avoid data residency compliance issues, users need to conduct data mapping – that is, understanding what data you have, where it's located, and the data residency policies for each respective location. Additionally, cloud users need to carefully review their Service Level Agreements (SLAs) with cloud providers to establish exactly where their data can and cannot be moved, stored or processed. Image source: TechTarget In summary, data residency refers to where the data is physically and geographically stored, while data sovereignty is not just about where the data is stored but also about the laws and regulations that govern the data storage at its physical location. The ‘three states of data’ is a term used to categorize structured and unstructured digital data. The three states are: - Data at rest - Data in transit - Data in use Understanding their characteristics can help organizations manage and secure sensitive information. Data at rest (stored data) Data at rest is data that is not actively moving between devices or networks, such as archived or stored data. One of the primary things for businesses to consider is how and where this data is stored, for example on-premises or in the cloud. As some cloud providers may not provide the option for customers to select the regions where they're storing or backing up data, organizations need to clarify where exactly their data will be stored, and the regulations relating to that location. Data in transit (Data in motion) Data in transit is actively moving from one location to another as it passes through the internet or a private network.
As data in transit is considered less secure while in motion, in any industry it's crucial that this data is protected wherever it's moving. Data in use Data in use actively moves through parts of an IT infrastructure as it's being updated, accessed, read, processed or erased by a system. Because data in use can be directly accessed by multiple users, it is the most vulnerable of the three states. Image source: Security Boulevard Unprotected data, whether at rest, in transit or in use, can leave organizations open to attack, so it's vital to have robust data protection measures in place across the board. One of the most effective data protection measures is data encryption. Organizations can use data encryption tools to protect data from unwanted access while ensuring data residency compliance. The growth of data centres Globally, there are over 7.2 million data centres, and the industry is growing rapidly. Data centres handle the backup of data, networking, website hosting, security and email management. The country with the most data centres by far is the United States, with over 2,670. This is followed by the U.K. with 452, Germany with 443, China with 416 and the Netherlands with 275. Organizations may choose to store their data in a cross-border data centre for reasons such as tax benefits, but this data is then subject to the privacy laws of that country. This may cause conflict if those laws differ from the laws of the country where the organization is based. Organizations that reside in places like Canada, Australia and Europe are increasingly demanding that their data remain outside of the United States, and preferably within their own country of residence. Choosing where your data resides When deciding where and how to store their data in the cloud, organizations should balance their need for efficiency and competitiveness with data residency and privacy implications. Too many restrictions on keeping data in one location could impact innovation. However, a free approach to moving data across jurisdictions is also risky. Thoroughly investigating best practices could help your organization boost its digital transformation efforts while mitigating data residency risks.
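To illustrate the encryption point above, here is a minimal, hypothetical sketch of encrypting data at rest with symmetric encryption, using Python's third-party cryptography library. The file names are placeholders, and in a real deployment the key would be held in a key management service in the appropriate jurisdiction rather than generated alongside the data.

```python
# pip install cryptography
from pathlib import Path

from cryptography.fernet import Fernet

# In practice the key lives in a KMS/HSM in the permitted jurisdiction,
# not next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a (hypothetical) file before it is stored or replicated across borders.
plaintext = Path("customer_records.csv").read_bytes()
Path("customer_records.csv.enc").write_bytes(fernet.encrypt(plaintext))

# Later, an authorised service holding the key can restore the original data.
restored = fernet.decrypt(Path("customer_records.csv.enc").read_bytes())
assert restored == plaintext
```

Keeping the key under the organization's control (and inside the required jurisdiction) means that even if the encrypted copy is moved or exposed elsewhere, the underlying personal data remains protected.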
<urn:uuid:032873ce-0652-48be-831e-4dd5be47df68>
CC-MAIN-2022-40
https://www.ir.com/blog/payments/data-sovereignty-vs-data-residency
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00175.warc.gz
en
0.928599
1,103
2.84375
3
FOIA 2000 Vs. GDPR | An Analysis, Privacy Legislation Under the General Data Protection Regulation 2018 and the Freedom of Information Act 2000, people have the right to view or access information held about them or by public bodies. These regulations illustrate the necessity for proper management of data throughout organizations and enable the disclosure of information within the statutory time limit. Failure to comply with the requirements of these Acts could lead to negative press, prosecution, and/or a severe penalty. General Data Protection Regulation (GDPR) The GDPR places legal liability on institutions that keep your private data to guarantee that: - it is processed legally and equitably; - it is obtained for specific, overt, and lawful reasons; - it is precise and updated; - it is secured from unsanctioned retrieval or unintended loss. The GDPR enables suspects and their legal representatives, doctors, relatives of patients, or employees to see personal information kept about them by the institutions concerned. This encompasses all forms of documents, whether in hard copy or digital, preserved on networks or in databases: x-rays, photos, hospital records, and emails. Requests by staff may include access to data about employees or information on their discipline. The time frame for responding to a records request is one month (30 days). The Freedom of Information Act 2000 The Freedom of Information Act seeks to improve public sector transparency and accountability, and covers institutions of government, boards, clinics, schools, and law enforcement. The Act accords citizens an opportunity to access reliable information held by an institution that could potentially offer a better sense of how it conducts its duties, its decision-making procedures, and how public funds are used. The Environmental Information Regulations, in conjunction with the Freedom of Information Act, empower the public to ask for information regarding the environment held by any organization. Examples of freedom of information requests organizations have received include: - Access to information on claims for expenses; - Contract forms and charges; - The amount and justification for discontinuing activities; - Access to data on conformance with the European Working Time Regulations; - How cooperative an organization is concerning Freedom of Information requests. Just like the GDPR, Freedom of Information requests have a statutory time frame. Once the application is received by the institution involved, the clock starts, and it has 20 business days to provide the data. It is imperative that once a request is received, it is forwarded immediately to the Freedom of Information department. Differences between FOIA and GDPR FOIA encompasses information stored by public institutions, not demands for private information about the individual making the request. FOIA is restricted to enabling access to information in the public domain. The legislation under GDPR safeguards private information. It offers everyone the legal right to access data collected on them (through a Subject Access Request) and, in certain situations, to prevent other individuals from seeing, using, or storing their personal information. To ensure that your data is secure, FOIA does not allow the public to retrieve information that is exempted from access by GDPR.
When you request personal data about another person, the request will be addressed under the FOIA rules, but the GDPR guidelines will be used to establish whether the data can be disclosed. If releasing the information would violate the terms of the GDPR, then the request is denied. How GDPR affects FOIA The General Data Protection Regulation (GDPR) can impact the Freedom of Information Act 2000 (FOIA). Section 40, which ties FOIA to the Data Protection Act 1998 ('DPA') — the legislation that the GDPR replaces — carries the most significant influence on FOIA. There is also a collateral impact: under GDPR, organizations such as public institutions are required to document their compliance, which implies that there will be no room whatsoever for public institutions to hide, since they are obligated to be open to the people. Taking a closer look at section 40 of the FOIA, there are two grounds for the exemptions on personal data: - When a person submits an FOI application for his or her own personal information, this should be treated as a Subject Access Request as covered by the DPA (section 40(1) FOIA); - Responding to the FOIA request would disclose personal information belonging to third parties, and disclosure of such information would be contrary to the core principles of the DPA (section 40(2) FOIA) — thus needing analysis as to whether misuse of personal data would occur upon disclosure. GDPR does not have any effect on the first form of FOI filing, although government authorities are required to become acquainted with and refer to the current GDPR rules concerning Subject Access Requests. On the other hand, the introduction of GDPR has increased uncertainty in terms of handling the second type of FOI request — in which other individuals' data is concerned. How GDPR promotes accountability and documentation and its impact GDPR focuses more heavily on transparency and accountability than its predecessor, the Data Protection Act (DPA). This is essential in itself for government bodies to pay close attention to, since it lies within the scope of their duties to be open and accountable to the public. The fundamental goal of FOIA is to permit individual citizens to obtain access to recorded data kept by a public body. Unless an exemption applies, any record held by a public body may be released under FOIA: this includes all records retained by an agency concerning its compliance with data protection. If a requester asks for information regarding a public organization's data security methods, it may be difficult to answer such FOIA requests if the disclosure of these records would reveal non-compliance with GDPR or, worse, suggest that no data protection measures have been taken, especially when there are no records to disclose. There is no room for concealment: public agencies' GDPR compliance efforts are on display in the public sphere, which can result in significant criticism and a negative impact on reputation if compliance attempts are in any way inadequate. One instance of the updated stipulations for transparency is Article 30 of the GDPR, which requires institutions to keep records of processing activities.
Each organization is expected to record the following data clearly: - The identity and contact details of the controller (the party in charge of record-keeping and processing), the controller's representative, and the data protection officer; - A summary of the reasons for the collection of private information; - A classification of the types of data subjects and groups of personal data; - The groups of recipients to whom the personal information will be released, including third-party recipients or international organizations; - The time limits proposed for the deletion of the various data categories; - A summary of the organization's technological and internal privacy measures put in place to secure private data. Data requesters can obtain a copy of such a record. Public authorities are expected to publish the requested information unless a compelling reason not to disclose it exists. It's not only necessary to keep a log of information processing activity; it is also vital that the specifics of the processing are appropriately documented and that they comply with data protection regulations. For instance, the explanation of the reasons for collecting personal data has to be written down in the privacy statement of the public body. If not, the public authority could be justly blamed for failing to be forthright and open about how it handles personal data. Additionally, public institutions are obligated to record any personal data breaches (Article 33(5) GDPR), covering all the details regarding the breach, its consequences, and any corrective actions undertaken. This report becomes 'recorded information' for public bodies and, therefore, can be disclosed under the FOIA. If the record shows repeated violations, exposing a recorded list of data breaches may contribute to broader exposure of data protection deficiencies. In conclusion, since the implementation of GDPR in May 2018, it appears that public agencies have experienced some initial difficulties with respect to applying the exemption under section 40(2) when the requested data includes personal information belonging to a third party. The basis of 'legitimate interests' that public authorities used to depend on to justify the release of personal information is no longer available to them, and consequently the release of personal information via FOIA has become more difficult for public bodies. Moreover, bearing in mind the primary goal behind FOIA — increasing public bodies' transparency, rendering their operational procedures and decision-making processes transparent to the general populace — there is no way for public organizations to hide any half-hearted attempts to comply with GDPR. This transparency now also applies to compliance with GDPR, which expressly mandates that certain documents be kept in order; these count as 'recorded information.' There is currently no excuse for public organizations subject to FOIA not to comply with data protection.
<urn:uuid:b219779f-d0a3-4a73-91ec-31d631c7004c>
CC-MAIN-2022-40
https://caseguard.com/articles/foia-2000-vs-gdpr-an-analysis/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00175.warc.gz
en
0.933041
1,855
3.125
3
LGPD: Brazil’s Data Protection Law Explained What is Brazil’s LGPD? Brazil’s Lei Geral de Proteção de Dados Pessoais do Brasil, also known as LGPD, is a data protection law implemented on August 14, 2018, after years of debate and consultation. The data protection law was inspired by and is relatively similar to the European Union’s General Data Protection Regulation, also known as GDPR. Brazil’s new data protection law, LGPD, will go into effect in May 2021 and require companies to comply with strict requirements related to the processing of personal data and sensitive personal data. What is Personal Information? The LGPD also defines what personal data or personal information is, similar to the GDPR’s own definition of personal data. The LGPD states that personal information or personal data can refer to any data that, either by itself or combined with other data, can identify a natural person or subject them to a particular treatment. Does LGPD Apply to My Business? Follow these guidelines regarding who LGPD applies to and who is exempt: Who It Applies To The LGPD applies to any public or private individual or company with personal data processing activities carried out in Brazil, including the collection of personal data, regardless of where the company is geographically located. Companies that offer or supply goods or services in Brazil must also comply with LGPD. It is also important to note that LGPD does not just apply to data collected from Brazilian citizens. Any individual who has personal data collected while inside Brazil is also protected under LGPD. Who is Exempt LGPD does not apply to data processing by a person who is processing data for personal purposes, for journalistic, artistic, literary, or academic purposes, or for national security, national defense, public safety, or a criminal investigation. 
LGPD Compliance: The Nine Rights Article 18 of LGPD explains the nine fundamental rights that data subjects have under LGPD, including: - The right to access the data - The right to confirmation of the existence of the processing - The right to correct incomplete, inaccurate, or out-of-date data - The right to anonymize, block or delete unnecessary or excessive data or data not being processed in compliance with the LGPD - The right to delete personal data processed with the consent of the data subject - The right to the portability of data to another service or product provider, through an express request - The right to information about public and private entities with which the controller has shared data - The right to information about the possibility of denying consent and the consequences of such denial - The right to revoke consent LGPD for Business Here is everything you need to know about your responsibilities as a business, as it pertains to the LGPD: Obligations from Businesses LGPD imposes the following obligations on businesses: - Inform, correct, anonymize, delete, or provide a copy of the data if requested by the data subject - Delete customer data after the relevant relationship terminates - Appoint a DPO officer responsible for receiving complaints and communications - Adopt technical and administrative data security measures to protect personal data from unauthorized access, accidents, destruction, and loss - Provide a data breach notification to both the data subjects and local authorities in case of a breach Outgoing President Michel Temer signed an executive order on December 28, 2018, that officially created the ANPD, which stands for Brazilian National Data Protection Authority ( Autoridade Nacional de Proteção de Dados in Portuguese). The authority fully enforces all aspects of the LGPD. It is technically independent of the Brazilian government, although it is tied directly to the office of the president. Section 55(j) of Executive Order no. 869/18 establishes that the ANPD has the authority to, among other things: - Issue rules and regulations regarding data protection and privacy; - Within the administrative sphere, exclusively interpret the LGPD, including cases in which the law is silent; - Request information regarding the processing of personal data from data processors and controllers; - Exclusively oversee and impose administrative sanctions for violations of the LGPD; - Promote data protection and privacy within the Brazilian society; and - Develop studies regarding domestic and international data protection and privacy practices and establish partnerships with authorities from other counties to increase international cooperation. Under the Brazil LGPD (also known as LGPD Brasil), fines and penalties are not as punitive as the GDPR. The maximum administrative sanctions under the LGPD are 2% of the company’s Brazilian revenue of up to $8.9 million per infraction, compared to 4% of global revenue or up to $23.8 million under GDPR compliance. How to Become LGPD Compliant In order to be LGPD compliant, your business needs to create the position of Chief of Data Treatment, which is the data protection officer or DPO in charge of the data processing operation. Your DPO is responsible for accepting complaints and communications from data subjects and the national data protection authority as well as orienting employees about good practices and performing other duties determined by the controller or outlined in complementary rules. 
If a data breach occurs, the controller needs to provide a data breach notification to the National Data Protection Authority (ANPD) and to the affected data subjects within a reasonable time period if the breach is likely to cause risk or harm to the data subjects. Your breach notification should contain information about the data subjects involved, a description of the nature of the affected personal data, an indication of the security measures used, the risks generated by the incident, the reasons for any delay in communication, and the privacy protection measures that were or will be adopted. The LGPD defines what is not personal data in Article 12: "Anonymized data shall not be considered personal data, for purposes of this Law, except when the process of anonymization to which the data were submitted has been reversed, using exclusively its own means, or when it can be reversed applying reasonable efforts." By irreversibly masking personal information and sensitive data, organizations are protected if this anonymized data is exposed during an accidental or malicious breach. If you want to learn more about compliance best practices, learn how Delphix provides an API-first data platform enabling teams to find and mask sensitive data for compliance with privacy regulations.
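To make the idea of irreversible masking concrete, here is a minimal sketch in Python. It is illustrative only: the field names are invented, it is not Delphix functionality or an LGPD-mandated procedure, and whether the result counts as anonymization rather than pseudonymization under Article 12 still depends on whether reversal remains possible with reasonable efforts. The key point is that the salt used for hashing is generated per run and never stored, so the organization cannot undo the masking by its own means.

```python
import hashlib
import secrets

def anonymize_record(record, fields_to_mask):
    """Return a copy of `record` with the named fields irreversibly masked.

    The salt is generated per run and discarded, so the original values
    cannot be recovered "using exclusively its own means".
    """
    salt = secrets.token_bytes(16)  # never persisted anywhere
    masked = dict(record)
    for field in fields_to_mask:
        value = str(masked.get(field, "")).encode("utf-8")
        masked[field] = hashlib.sha256(salt + value).hexdigest()[:12]
    return masked

# Hypothetical customer row, not real data
customer = {"name": "Ana Souza", "cpf": "123.456.789-09", "plan": "premium"}
print(anonymize_record(customer, ["name", "cpf"]))
```

Whether any given technique satisfies Article 12 is ultimately a legal as well as a technical question, so review by counsel is still required.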
<urn:uuid:234c050d-6db9-48de-9cca-40c517b24390>
CC-MAIN-2022-40
https://www.delphix.com/glossary/lgpd
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00175.warc.gz
en
0.923386
1,345
2.65625
3
30 Years of Engineering the Internet This week marks the 30th anniversary of the meeting that became the first in the series of IETF meetings now held three times per year. On 16-17 January 1986 in San Diego, California, 21 people attended what is now known as IETF 1. Some names and topics found in the proceedings from that meeting are still familiar to current IETF participants! Although IETF meetings now regularly bring together more than 1000 participants, the goal of addressing challenges to improve the network, and the principle that meetings are a means to the end of getting work done, remain the same. Of course, even the ~50x growth in IETF meeting participation is dwarfed by the growth of the Internet itself. In 1986, there were a few thousand hosts on the Internet; today there are over three billion users, many of them using mobile devices [2,3,4]. This incredible growth is a testament to the flexibility of the Internet architecture, but also to the engineering effort that has been put in over the years by IETF participants and many others. Today's Internet users rely on many technologies that have become IETF standards during our 30 years of existence, such as HTTP and TLS, which form the basis of web communications, and RTP and the other protocols that enable real-time multimedia conversations, among many others. Skipping ahead a bit from 1986, the initial specification for IPv6 was published 20 years ago. A motivation for IPv6 was that IETF participants understood the long-term need for more Internet addresses to make sure the Internet continued to "work better". While IPv6 deployment initially languished, the situation has recently changed: observed use has been doubling in recent years, and it recently passed 10% globally. Some leading economies are even further along; the US is at 25%, for instance. Of course there is still much work to do, but the fact that an "upgrade in place" of a fundamental component of the global network connecting billions of people seems possible is quite encouraging!
<urn:uuid:e33af93b-3623-4c90-ba1f-84f86b92c4f2>
CC-MAIN-2022-40
https://domainnewsafrica.com/ietf-turns-30/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00375.warc.gz
en
0.961742
427
2.640625
3
BIOS stands for Basic Input/Output System, the first software the PC loads; it detects the hardware components of the machine and prepares it for loading the operating system. Updating the BIOS is critical work and requires careful execution of each step. Before installing a BIOS upgrade, understand that the computer must stay powered on for the entire update. If the machine loses power part-way through, because of a voltage fluctuation, a power cut, or a drained battery, it may be left unable to boot at all. So while the update is being installed, make sure a laptop is running on mains power rather than on battery alone, so the battery can act as a backup if the mains fails; for a desktop, connect a UPS so that a blackout will not shut the machine down mid-update. The first step is to check the current BIOS version. The simplest way is to type msinfo32 into the Windows search bar or the Run box; under System Summary, the BIOS version is listed just below the processor entry, and the user can note it down. The second step is to check for BIOS updates on the manufacturer's website. Manufacturers usually publish BIOS updates per motherboard model, so search the site for the exact model; if you cannot recall the model, a utility such as CPU-Z will report it. Once an update is found, read the accompanying readme file, because some updates require specific patches or newer drivers to be installed first, and the readme normally lists the features and fixes the update delivers for the hardware. After that, it is time to apply the update. On most modern PCs the procedure is simple: download the file from the PC manufacturer's website, quit all programs, and run the .exe; the installer handles the patch itself and restarts the PC. Some older PCs instead require the user to create a bootable CD or USB drive containing the BIOS update, using an application for creating bootable images, such as PowerISO: install the app, choose the option to create a bootable USB image, select the USB drive, and confirm. Some systems may also require copying a few files to the PC first and opening them during the installation. The main hardware components the BIOS detects and works with are as follows. RAM: RAM is not the same as hard disk storage or any other persistent storage device; it is the computer's working memory, and it is vital because the processor can only perform calculations on data held in memory. When discussing CPUs and processors, the memory in question is mostly static RAM.
It is not the memory-stick RAM that we normally install in a computer, but a very specialized type of memory that is extremely fast, fast enough that designers would like to place the same kind of storage inside the processor itself. The problem is that this very fast memory is expensive, so main memory in computers is normally SDRAM. The newer generations of SDRAM are DDR2 and DDR3: DDR transfers data roughly twice as fast as original SDRAM, and DDR3 is roughly four times faster than basic SDRAM. Hard drive: the hard drive is physically heavy and rigid. Its back end has ports for the cables that connect it to the motherboard, and the cable is normally included when a hard disk is purchased. Optical drive: examples of optical drives are CD and DVD drives. Most optical drives can play or record data in various formats, including CD-R, DVD-R, DVD-ROM and CD-ROM. Capacities vary by format: a standard CD holds about 700 MB, a single-layer DVD about 4.7 GB, and Blu-ray discs range from 25 GB up to roughly 128 GB depending on the number of layers. CPU: all CPUs are rated according to their speed. Modern processors contain multiple cores inside one chip. Older processors had only a single core, so they were not very fast; current processors such as the Core i3, i5 and i7 have several cores, and more cores generally means more work can be done in parallel. The BIOS settings can be considered under the following headings. Boot sequence: every computer ships with a default boot sequence set by the manufacturer, but the order in which devices are tried during boot can be changed in the BIOS menu according to the user's priorities. By default, a computer typically looks first at removable devices such as a CD or DVD; if an operating system (even just a setup program) is found there it will be loaded, and otherwise the computer falls back to the hard disk and loads the operating system from there. The user can rearrange this priority list at any time. Enabling and disabling devices: for plug-and-play devices this can be done from within Windows. Click Start, right-click My Computer, choose Manage, and open Device Manager in the Computer Management window; from there any enabled device can be right-clicked and disabled, and any disabled device can be re-enabled the same way. Date/time: the date and time can be set in two ways. During Windows installation you are prompted to select your time zone, and afterwards you can click the clock at the right-hand end of the taskbar and choose to adjust the date and time. Clock speeds: the BIOS can be used for overclocking, but a few things are needed first. The first is an unlocked processor; Intel's K-series processors are designed for this. The second is an overclocking-friendly motherboard.
The third is software for monitoring the clock speed; CPU-Z will do the trick. The fourth is stress-testing software; programs like AIDA64 and LinX serve well for this purpose. With all of these in place, overclocking is then done from the BIOS by raising the CPU multiplier in small increments and running a stress test, while watching temperatures, after each change. Virtualization support: to enable virtualization, enter the BIOS, locate the Advanced tab and, under CPU Configuration, look for Virtualization Technology. This option is usually disabled by default and can be enabled there. BIOS security (passwords, drive encryption, LoJack, TPM): the BIOS security menu typically offers supervisor and user passwords, support for drive encryption, LoJack anti-theft features and the TPM, each of which can be switched on from that menu. There are also several diagnostics built into the computer to help pin down problems. The first is the troubleshooting system: for example, whenever a device is connected and the computer cannot detect it, a message offers to troubleshoot the problem; the user can accept it, the computer runs its checks, and the possible remedies are presented for the user to choose from. Some computer manufacturers also offer their own diagnostic tests; Dell, for example, lets owners of its systems run a test from the manufacturer's website to understand what is wrong with the computer and make corrections. The easiest and recommended route, though, is the diagnostic built into Windows. If a memory problem is suspected, run the Windows Memory Diagnostic in a short time: type mdsched.exe into the Run or search box, press Enter, and choose either to restart the computer immediately or to schedule the tool to run at the next restart. The tool then runs automatically after the restart and performs a standard memory test; additional tests can be added or removed by pressing F1 and using the up and down keys, and F10 applies the chosen settings and resumes testing. Helpfully, if the test is cancelled or interrupted for any reason, the computer reschedules it for the next start-up. Monitoring a system's temperature can be done in several ways. Software such as SpeedFan controls fan speed, monitors temperatures and checks whether the system is overheating. The BIOS can also show temperatures: enter the BIOS and look under power management or hardware monitoring. The safe limit varies by CPU, but broadly the temperature should stay below about 75 C, and the BIOS can also show the temperature of other components such as the graphics card and the motherboard. If the computer is overheating, steps such as cleaning dust out of the fans and heatsinks, improving case airflow and making sure the CPU cooler is seated properly usually help. Fan speed can be controlled in several ways. The easiest is to install software that manages it: a tool like SpeedFan not only monitors the fans but can put them on autopilot and adjust their speed itself. Alternatively, go into the BIOS and look under performance, overclocking or maintenance for the "fan control" option; there the speed can be adjusted to the user's needs, and if the option shows "disabled", enabling it opens a further list of individual settings.
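For readers who would rather script the temperature check described above than reboot into the BIOS, here is a small sketch. It assumes a Linux system with the psutil package installed and sensors exposed to the kernel; the 75 C figure is the rough guideline mentioned above, not any particular CPU's specification.

```python
import psutil

CPU_TEMP_LIMIT_C = 75.0  # rough guideline only; check your CPU's actual spec

def check_temperatures(limit=CPU_TEMP_LIMIT_C):
    """Print every temperature sensor psutil can see and flag hot ones."""
    sensors = psutil.sensors_temperatures()  # may be empty if no sensors are exposed
    if not sensors:
        print("No temperature sensors exposed on this system.")
        return
    for chip, readings in sensors.items():
        for reading in readings:
            label = reading.label or chip
            status = "OK" if reading.current < limit else "HOT"
            print(f"{label:20s} {reading.current:5.1f} C  [{status}]")

if __name__ == "__main__":
    check_temperatures()
```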
Intrusion detection systems are of two types: network intrusion detection (NIDS) and host intrusion detection (HIDS). A NIDS is normally placed at points where network traffic passes most frequently and can be observed, for instance near the firewall, so that traffic attempting to breach it can be detected and a notification sent to the administrators. A HIDS runs on the individual devices and hosts on a network, and whenever suspicious activity is seen, a report is sent to the administrators so that appropriate action can be taken. It works on simple rule matching: it takes a snapshot of the device's state and compares it with the latest one to identify suspicious changes. Monitoring voltage is important too, since the purpose of a power supply is to convert alternating current (AC) into the direct current (DC) that keeps the computer running. The voltage each computer requires can differ, and every laptop has a specific power adapter; note that an adapter with too high an output can damage the system through excess current. A typical PC power supply provides three main rails, at 3.3, 5 and 12 volts. The clock is driven by a small crystal on the motherboard that vibrates at a specific frequency whenever it is powered, and the clock speed can be increased by overclocking, either through software or by going into the BIOS and adjusting it manually. There is also a system clock that keeps the operating system's time. It can be set while Windows is being installed, when a prompt asks the user to select a time zone, and it can be adjusted later from the taskbar: click the time shown at the lower right, choose to adjust the date and time, and correct it in the window that opens. Data travels over buses, so a faster computer needs buses with higher data-transfer rates. Bus speed is measured in megahertz (MHz), the rate at which data can move across the bus. Several buses connect to the CPU, including the front-side bus, back-side bus, AGP bus, memory bus and PCI bus. Although it is generally true that a faster bus means a faster computer, it should be kept in mind that no matter how fast a bus is, a slow processor or chipset will still hold the system back.
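Because the passage above quotes bus speed only in MHz, a quick back-of-the-envelope calculation of theoretical bandwidth may help make comparisons concrete. The sketch below is illustrative; the figures are the commonly quoted peak numbers for those standards, and real-world throughput is always lower.

```python
def peak_bandwidth_gbs(transfers_per_second, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfer rate times bytes per transfer."""
    return transfers_per_second * (bus_width_bits / 8) / 1e9

# Commonly quoted peak figures (approximate, per channel):
examples = {
    "PCI (32-bit, 33 MHz)":       peak_bandwidth_gbs(33e6, 32),    # ~0.13 GB/s
    "DDR3-1600 (64-bit channel)": peak_bandwidth_gbs(1600e6, 64),  # ~12.8 GB/s
    "DDR4-3200 (64-bit channel)": peak_bandwidth_gbs(3200e6, 64),  # ~25.6 GB/s
}

for name, gbs in examples.items():
    print(f"{name:30s} {gbs:6.2f} GB/s")
```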
<urn:uuid:f5e1011a-cae5-4acf-8677-ed32a7990515>
CC-MAIN-2022-40
https://www.examcollection.com/certification-training/a-plus-configure-apply-bios-settings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00375.warc.gz
en
0.945985
2,918
3.21875
3
A report from the NHS suggests the impending technological ‘revolution’ in healthcare will increase the amount of time doctors can spend with patients. NHS doctors are overburdened; a problem only getting worse from a growing and ageing population, and not enough funding. The report was led by US academic Eric Topol and calls for a reskilling of NHS staff to harness new digital skills. AI and robotics can reduce the burden on healthcare professionals, but only if they’re utilised effectively. Doctors will not be replaced by robots but instead will have their abilities “enhanced” to improve care. Around 90 percent of all NHS jobs are predicted to require digital skills within the next 20 years. The use of virtual assistants such as those offered by Apple, Google, and Amazon are expected to be among the closest innovations to being ready. Assistants can help with checking whether symptoms require urgent care, a GP appointment, or whether a doctor needs to be seen at all. This would help prevent the misuse of A&E by people with trivial ailments or the booking of GP appointments for otherwise healthy adults with things such as a common cold. Virtual assistants could also be used to book and remind of appointments. This would help to reduce the number of unattended appointments that someone else could have needed. Yet another concept is the use of a ‘mental health triage bot’ that engages in conversations while analysing text and voice for suicidal ideas and emotion. This could help reduce the ~6000 suicides per year. The main concern preventing uptake is the potential for errors, which in healthcare could be fatal. AI News previously reported on the findings of NHS consultant ‘Dr Murphy’ who reached out to us after using ‘GP at Hand’ from Babylon Health, an AI-powered service promoted by health secretary Matt Hancock. Dr Murphy has since posted many flawed experiences with the service, but one example of a “48yr old obese 30/day male smoker develop[ing] sudden onset central chest pain & sweating” suggested booking a GP appointment. Anyone with common sense would say call 999 urgently. That example could have meant life or death and shows, while such a system could one day provide huge benefits, it must undergo rigorous testing. Commenting on the report, Hancock said: Our health service is on the cusp of a technology revolution and our brilliant staff will be in the driving seat when it happens. Technology must be there to enhance and support clinicians. It has the potential to make working lives easier for dedicated NHS staff and free them up to use their medical expertise and do what they do best: care for patients.” In the NHS report, it’s claimed the use of virtual assistants could save 5.7 million hours of GP’s time across England per year. Further AI use cases include speeding up the interpretation of scans; improving accuracy while enabling treatments to begin sooner. We’ve created a dedicated ‘healthcare’ category on AI News highlighting the incredible advances in this area. When it comes to robotics, their assistance in surgery could be expanded in addition to being used for simple tasks which are important but time-consuming such as dispensing medicines. Other emerging technologies such as VR also present exciting opportunities. Virtual reality could help with pain reduction and treating mental conditions such as post-traumatic stress, anxiety, and phobias. 
The report’s authors conclude: “Our review of the evidence leads us to suggest that these technologies will not replace healthcare professionals, but will enhance them … giving them more time to care for patients.” Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
<urn:uuid:3d803454-3eb0-41f0-be12-43dc3d14c19e>
CC-MAIN-2022-40
https://www.artificialintelligence-news.com/2019/02/11/nhs-report-ai-docs-patient-time/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00375.warc.gz
en
0.95164
810
2.5625
3
OS command injection (also known as shell injection) is a web security vulnerability that allows an attacker to execute arbitrary operating system (OS) commands on the server that is running a web application and typically fully compromise the application and all its data. Why do web applications need to execute system commands? Web applications sometimes need to execute operating system commands (OS commands) to communicate with the underlying host operating system and the file system. This can be to run system commands, launch applications written in another programming language, or run shell, python, perl, or PHP scripts. For any operating system, including Windows, Unix and Linux, functions are available that can execute a command that is passed to other scripts as a shell command. While extremely useful, this functionality can be dangerous when used incorrectly and lead to web application security problems, as explained in this article. Why you should be careful with system commands in web applications By exploiting a command injection vulnerability in a vulnerable application, attackers can add extra commands or inject their own operating system commands. This means that during a command injection attack, an attacker can easily take complete control of the host operating system of the web server. Therefore, developers should be extremely careful when passing user input to one of those functions when developing web applications. Command injection vulnerability example In this example of the command injection vulnerability, we are using the ping functionality, which is notoriously insecure on many routers. Imagine a vulnerable application that has a common function that passes an IP address from a user input to the system’s ping command. Therefore, if the user input is 127.0.0.1, the following command will be executed on the host operating system: ping -c 5 127.0.0.1 Since we are dealing with a vulnerable web application, it is possible to break out of the ping command or provoke an error that returns useful information to the attacker. The attacker can then use this functionality to execute his own arbitrary commands. An example of adding additional system commands could look like this: ping -c 5 127.0.0.1; id In the above example, first the ping command is executed and directly after that the id command execution takes place. Therefore the command output on the page will look like this: PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data. 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.074 ms 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.074 ms 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.072 ms 64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.037 ms --- 127.0.0.1 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 3999ms rtt min/avg/max/mdev = 0.023/0.056/0.074/0.021 ms uid=0(root) gid=0(root) groups=0(root) During an OS command injection attack, the attacker can also set up an error-based attack. For example, a code injection in this case will typically look like this: ping -c 5 "$(id)" The above code injection returns a response like: ping: unknown host uid=0(root) gid=0(root) groups=0(root) How to prevent system command injection vulnerabilities In order to prevent an attacker from exploiting a vulnerable web application and inserting special characters into an operating system command, you should try to generally avoid system calls where possible. 
In all cases, avoid user input of any kind unless it is absolutely necessary. And if it is necessary, make sure there is proper input validation in place: input validation combined with context-dependent encoding is always a must to ensure your web application code is not vulnerable to other high-impact vulnerabilities, including cross-site scripting (XSS) and SQL injection. If you don't need system command functionality, deactivate it in your configuration to eliminate the risk of attackers getting access to the system shell on the host operating system through vulnerable web applications. Depending on the technology, you may be able to separate the execution of the process from its input parameters. You can also build a whitelist of allowed inputs and check their format, for example accepting only integers for a numeric identifier or only well-formed IP addresses for a ping target, as the sketch below shows.
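The sketch is illustrative only and not taken from any particular application: the function names are invented, and the safe variant simply combines whitelist-style validation (Python's ipaddress module) with separating process execution from its parameters (subprocess with an argument list, so no shell ever parses the input).

```python
import ipaddress
import subprocess

def ping_vulnerable(user_input):
    # DANGEROUS: the string is handed to a shell, so input such as
    # "127.0.0.1; id" appends an extra command, exactly as described above.
    return subprocess.run(f"ping -c 5 {user_input}", shell=True)

def ping_safe(user_input):
    # 1. Whitelist-style validation: only a syntactically valid IP address passes.
    try:
        address = str(ipaddress.ip_address(user_input))
    except ValueError:
        raise ValueError("not a valid IP address")
    # 2. No shell: the argument list keeps the input as a single argv entry,
    #    so shell metacharacters such as ';' or '$( )' are never interpreted.
    return subprocess.run(["ping", "-c", "5", address],
                          capture_output=True, text=True)

print(ping_safe("127.0.0.1").stdout)
```

Keeping the input out of the shell entirely is what "separating the execution of the process from its input parameters" means in practice.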
<urn:uuid:8eedb9f1-c291-4ef8-b02b-cce821314fff>
CC-MAIN-2022-40
https://www.invicti.com/blog/web-security/command-injection-vulnerability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00375.warc.gz
en
0.85277
1,146
3.265625
3
Big Data is rocking the world of technology as it grows at an exponential rate, but what is it exactly? Big Data Defined: Big Data is not just a buzzword; the term covers a humongous and complex volume of structured and unstructured data that is tough to process using traditional database processing applications. But Big Data is not just large volumes of data but it is actually an innovative way of managing the data. Big Data Rides to the Rescue While the sheer size of the data is overwhelming and it is growing as fast as a tsunami, big data rides to the rescue. What does Big Data do? Big Data is used to search, store, analyze, connect, visualize, curate and extract information. It aids organizations in maximizing data operations and making quicker smarter decisions. The more accurate the data and analysis, the better and more innovative the decisions. Enhanced decision making translates into greater profits and cost reduction. Three Main V’s of Big Data Industry analyst Doug Laney defined big data as the three Vs of big data: volume, velocity and variety. Extensive data volume used to be a storage issue in the past. Now we have data based on transactions as well as unstructured data from social media. Big Data’s challenge is how to mine relevant information within large data volumes and to use analytics to create value. Torrents of data are racing in and have to be dealt with as quickly as possible. Responding in a timely manner to data velocity is a challenge that organizations are facing with the aid of big data Data flows in from all over the place. Structured data is accessed through traditional databases while unstructured data comes from data that is not organized. Tweets on Twitter, email, text documents and metadata can be taken as examples of unstructured data. Managing all these varieties of data in an efficient manner can be a challenge for companies. Big Data See Saws Data flows can be inconsistent with peaks and troughs. As Andy Warhol commented, everyone will be famous for fifteen minutes, especially in today’s social media. Such ups and downs in unstructured data loads can be difficult but not impossible for big data to manage. Embrace Big Data Banking institutions have a pressing need to embrace Big Data with open arms, because it provides business insights and value for the money. And with Hexanika by your side providing you the smart products and solutions to address your data challenges and pain points, you can be assured you won’t put a foot wrong.
<urn:uuid:cb14cd8a-1e35-4857-93ac-4bc356ce29d3>
CC-MAIN-2022-40
https://hexanika.com/the-big-question-answered-what-is-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00375.warc.gz
en
0.919621
543
3.03125
3
If a CPU (central processing unit) is like a computer's brain and a motherboard is its nervous system, then a GPU (graphics processing unit) is the muscle. This specialized processor takes a load off your computer's CPU and RAM by accelerating graphics rendering. It comes in two forms: an integrated GPU is built into your motherboard, while a discrete GPU is part of a graphics card. So, what is a graphics card? Well, a graphics card is a sophisticated printed circuit board that helps render graphics. It may have the following components that benefit its functionality, performance, and connectivity. - GPU: Performs geometric and mathematical calculations to render graphics. A GPU's clock speed, core count, and more determine how fast it processes data. - Video memory: A graphics card's video memory, also known as Video RAM (VRAM), helps render images at higher resolutions with greater visual fidelity by storing data and functioning as a framebuffer. Top graphics cards have large VRAM with high memory clock rates and bandwidth. - Heatsink: A graphics card can run very warm under load. The heatsink controls its temperature by dissipating heat. - Motherboard interface: A graphics card fits into a motherboard interface slot such as PCI Express. More powerful graphics cards can take up the space of more than one slot. - Display interface: Most graphics cards feature multiple display outputs like HDMI, VGA, and DisplayPort for connectivity with monitors and TVs. What devices have GPUs? It's not just desktop computers and laptops that use GPUs. Video game consoles like PlayStations, Nintendo Switches, Xboxes, and mobile devices like phones and tablets also carry GPUs. Video game console GPUs are usually not as powerful as the latest graphics cards for PCs, though. Remember, console GPUs are just a snapshot of graphics technology, while new PC graphics cards release regularly. What does a graphics card do? A graphics card takes complex instructions from the CPU and software applications like video games and determines how to reproduce this data as a 2D or 3D image for the pixels on a computer screen. Depending on the job, a graphics card may have to complete this task at anywhere from 30 to 60 frames per second (fps). In addition to rendering graphics for video games, a graphics card can help with video and image editing, mining cryptocurrency, and streaming videos. Which graphics card is best for gaming? In general, the more powerful a graphics card is, the better and smoother the image quality. Some factors in image quality include: - Frame rates: A graphics card's processing power, the demands of an image, and a display's refresh rate help determine the frequency with which images in a video sequence appear. Modern gamers prefer at least 60 fps. - Resolution: The higher a video game's resolution, the more pixels you see on the screen. Some gamers are happy with 1080p, though enthusiasts prefer 1440p or 2160p. - Anti-aliasing: This technique helps smooth out jaggies and shimmering in objects in games. - Anisotropic filtering: In 3D graphics, anisotropic filtering boosts the texture image quality of surfaces. - Ray tracing: This sophisticated light and shadow rendering technology helps boost realism in 3D graphics. Rivals Nvidia and AMD produce the most popular gaming GPUs, called GeForce and Radeon, respectively. Third parties like Asus, EVGA, MSI, and Gigabyte license the GPU technologies to sell graphics cards under their brand.
Usually, the same GPU from different brands produce nearly identical performance. Both Nvidia and AMD seek to outdo each other with each flagship model release, so the answer to “what company makes the best graphics card” can change. But unless you have money to burn or desire to play at extremely high resolutions, you may not want to pay thousands of dollars for the most powerful cards. For the average gamer, the most expensive graphics cards offer diminishing returns. You may find that a mid-range card will offer only slightly lower frame rates for significantly less money than the most powerful GPU on the market. Tip: Check graphics cards benchmarks for the games you like to play at your monitor’s resolution to determine what product to purchase. Why are graphics cards so expensive right now? Many gamers are struggling to buy graphics cards at retail prices due to supply chain issues, escalating demand during the pandemic, chip shortages across industries, and the skyrocketing popularity of cryptomining. Yes, cryptominers are buying your favorite graphics cards to mine for cryptocurrency. Reportedly, cryptomining teams are also using bots to purchase cards in bulk from online retailers upon listing, resulting in higher prices. Some cryptominers are also cryptojacking your computer for cryptocurrency. But what is cryptojacking, and what does it have to do with escalating bitcoin prices? Also known as malicious cryptomining, cryptojacking malware uses a computer's resources to mine different types of cryptocurrency like ethereum, bitcoin, and litecoin. Malicious cryptomining is on the rise as cybercriminals try to cash in on the popularity of digital currencies like bitcoin. It’s important to protect your computer from malicious cryptomining hacks to stop your electricity bills from going up and hackers from hijacking hardware like your graphics card. You can also read up on how to troubleshoot hardware problems and maintain your computer for optimal gaming performance.
<urn:uuid:50399b88-3e0a-44b6-833a-4081428b1ea9>
CC-MAIN-2022-40
https://www.malwarebytes.com/computer/what-is-a-graphics-card
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00375.warc.gz
en
0.915448
1,122
3.765625
4
Collectives are aggregations of people who are outside the control of the enterprise, bound by a common action or opinion, and who affect the enterprise’s success. Mobs, formal communities, and informal networks of friends and groups linked by their liking for a particular product or location are examples of collectives. Collectives refers to all informal human forces outside the control of the enterprise that can impact the enterprise’s success or failure. These forces permeate enterprise and social boundaries, existing within and outside the formal enterprise. There are many related (but not identical) concepts, including collective intelligence, “wikinomics,” collaborative decision making, network-centric strategy, wisdom of crowds, social media, open innovation and crowdsourcing. Collectives influences innovation, sourcing, branding, positioning, fads, trends, customer support, hiring and other key aspects of every enterprise’s essential processes. As an adjective, collective is one of the core principles that distinguishes social solutions from nonsocial solutions.
<urn:uuid:d3563050-619c-4a24-955b-f49769b885b0>
CC-MAIN-2022-40
https://www.gartner.com/en/information-technology/glossary/collective
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00575.warc.gz
en
0.903508
204
2.796875
3
The last year has turned the world upside down. Jobs that were once thought to be secure evaporated completely as the pandemic change work habits and lifestyles. Equally the digital advancement needed to keep organizations functional have created demand for new skills, especially in data and AI. This article, originally published on Today’s Modern Educator, takes a closer look at how workplace training and education programs can help workers reskill and become vital parts of the new economy and the benefits of conducting this training remotely. The mass shift to remote work brought on by the COVID-19 pandemic seemed like it would only bring challenges to government agencies. Actually, the shift to remote work has brought opportunity for many government agencies to innovate their workplace training and education programs. In a webinar hosted by GovLoop, panelists from government agencies and the private sector discussed how they have leveraged remote learning tools effectively. The webinar included a presentation from the Internal Revenue Service (IRS), which has been developing virtual training programs since before the pandemic, due to budget cuts and a dispersed workforce that made traditional classroom learning less feasible. The IRS discovered that a shift from in-person learning was necessary in order to train employees more effectively. Kelly Barrett, eLearning Specialist for the IRS, said “Research has shown that only about 10% of learning really comes from formal training events.” The shift to remote work has not hindered the development of workplace training and education programs; instead, it’s been a bit of a revelation for federal agencies. Agencies have recognized that many in-person programs were both costly and not living up to their full potential. In fact, the majority of what is taught in formal learning environments ends up lost. The shift to remote work compelled agencies to look at technology in new ways, especially how it can enhance learning opportunities and innovate training programs. When shifting to virtual learning programs, traditionally, the focus for establishing “success” might be on whether the organization is equipped with clear A/V signals and the latest software. While this may provide a solid foundation, current innovations are focusing on meeting a learner’s specific needs. A successful learning program means that the student is able to complete the course, absorb the information, and put it to work. As people have become more adept at using online tools in their personal lives, training programs need to adapt to these cultural and technological shifts. For example, when people want to learn something new, their first instinct is often to check sites like YouTube or WikiHow for quick, on-the-fly, tutorials. A primary drawback of in-person trainings was that they often pushed people to digest a day’s worth of information all at once. An entire curriculum had to be capsulized to a specific amount of time within a specific setting, increasing the likelihood that new skills could be lost before they were put to use. With the help of online tools, people are now able to learn at their own pace. Focusing on the individual learner also means tracking employees’ experiences – making sure they are set up with everything they need, including proper surroundings and equipment. While this may seem like a lot to consider, cloud technology solutions have made it easy to understand user-based problems. 
These tools can help segment learners to identify and address any factors that may inhibit learning experiences. In the GovLoop webinar, Kevin Mills, Head of Coursera for Governments, also explained how AI in online learning tools can ensure that the learning path is tailored to the unique learning styles and experiences of the individual. Integrating AI into their learning platform has made it possible to track engagement and build a tailored pathway for each learner. As Kevin put it, “Let the automation and algorithms informed by all the data of our 77 million learners tell you what is the most efficient learning route and credential route to that outcome.” Whether a student needs split-screen project guidance, custom assessments, or coursework that reaches specific certifications or degrees, AI technology can help better guide learners across the best curriculum paths. As government agencies adapt to using technology to meet learners’ needs, they will become more successful in developing scalable workplace training programs and increasing skillsets and cross-collaboration, which ultimately leads to cost-savings. To learn more about how the IRS and other government agencies are effectively training their employees remotely, sign up for GovLoop’s webinar, “Fireside Chat: Virtual Meetings, Virtual Trainings, Virtual Everything – How to Actually Collaborate”.
<urn:uuid:032699c3-3ca9-4501-a034-e376b61b1265>
CC-MAIN-2022-40
http://governmenttechnologyinsider.com/how-remote-work-has-brought-new-ideas-to-workplace-training-education-and-reskilling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00575.warc.gz
en
0.964343
935
2.515625
3
Anyone who has run security awareness programs for a while knows that changing human behavior is not an easy task, and that the problem with awareness is often that awareness alone does not automatically result in secure behavior. Let's look at the challenge of building a security culture through the lens of behavior design. BJ Fogg's much-quoted behavior design model neatly outlines that behavior happens when three things come together at the same time: Motivation, Ability, and a Prompt, which could be a reminder or a nudge to do the behavior. Fogg's Behavior Model highlights three core motivators: Sensation, Anticipation, and Belonging. Each of these has two sides: pleasure/pain, hope/fear, acceptance/rejection. These core motivators apply to everyone; they are central to the human experience. Let's try to apply them to cybersecurity: - Tapping into people's emotions by using visually appealing content, engaging with humour and story-based techniques, and activating positive sensations. - Using fear carefully; it can be a powerful motivator, and showing what could happen when things go wrong grabs attention, but too much of it results in apathy, so it needs to be underpinned by the message that defending yourself is simple. - Using the power of leadership or celebrity to tell stories and invoke a sense of belonging. - Making it personally relevant by providing information on how to protect kids or family members. Caveats: humour is a great technique to grab people's attention, evoke positive emotions and help with memory retention. However, it has to be applied carefully and with sensitivity to the audience's cultures, or it can backfire. It also shouldn't be overused, or the audience may not take the core message seriously enough. BJ Fogg says that training people is hard work, and most people resist learning new things; that's just how we are as humans: lazy. So give people a tool or a resource that makes the behavior easier to do. A great example is a password manager, a tool that takes care of the desired behavior and removes the complexity of having to remember many different passwords. The concept of a prompt goes by different names: cue, trigger, nudge, call to action, request, and so on; they all serve to remind people and tell them to "do it now". A good example is the password strength meter that nudges people toward a stronger password at the moment they create it (a small sketch of such a meter appears below). When designing an awareness campaign, it's important to consider where prompts may be used, for example in-the-moment nudges when users read email on the go or are about to send a large file to someone outside the organization. When it is possible to combine the three elements of motivation, ability and prompts, changing behavior is a much more likely outcome than just spreading awareness content and hoping for a result. Stay up to date on the rest of this evangelist series to help keep you and your users safe during Cybersecurity Awareness Month and beyond!
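Here is what such a meter might look like as a toy sketch. The scoring rules and thresholds are invented for illustration and this is not KnowBe4 guidance; a real deployment would rely on a vetted strength estimator and would pair the score with the kind of in-the-moment message described above.

```python
import re

def strength_prompt(password: str) -> str:
    """Return an in-the-moment nudge for the password the user just typed."""
    score = 0
    if len(password) >= 12:
        score += 2
    elif len(password) >= 8:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password):
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):
        score += 1
    if score <= 2:
        return "Weak: try a longer passphrase of several unrelated words."
    if score <= 4:
        return "Okay: adding length helps more than extra symbols."
    return "Strong: consider storing it in your password manager."

print(strength_prompt("correct horse battery staple"))
```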
<urn:uuid:60da6562-4456-4929-adde-9088cd492c42>
CC-MAIN-2022-40
https://blog.knowbe4.com/security-culture-behavior-design
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00575.warc.gz
en
0.937907
647
2.71875
3
Power of Consequences Unwanted behavior can very well be explained and, once understood, brought under control. The key is that behavior always has behavioral consequences. People learn from consequences. Behavior that results in positive consequences for the performer (the person showing the behavior) is likely to be repeated. To put it technically—behavior is a function of its consequences. So, when an employee decides not to use the newly installed software application to log incidents, he believes he benefits from this. And, from his perspective, he does! It saves him time, or it hides the fact that he hardly understands how to use the new tool. The only effective way to deal with this “won’t-do” behavior is to change the consequences for the performer. Managing by consequences has a few pitfalls: - The first pitfall is to keep repeating ourselves with respect to the behavior we would like to see, and that which we would not like to see. This pitfall can be avoided by deliberately matching consequences with behavior. Preferably, positive consequences for the right behavior. - The second pitfall is the use of (hierarchic) force and punishment, while neglecting to change the behavior. Of course, when inappropriate or immoral behavior is exhibited, some form of corrective action must follow. But even though force and punishment seem effective in the short run, reinforcing desired behavior is much more effective. Proper reinforcement makes people want to perform and show discretionary effort. It takes a little more effort to get results, but these results are sustainable. Please note: for people to learn a new behavior, continuous positive reinforcement is required. - The last pitfall I’d like to mention is that “positive consequences” are usually associated with money. However, personal attention and appreciation for the desired behavior is more effective than extra money. By the way, any well-meant and well-timed compliment is free. Dealing With Unwanted Behavior For dealing effectively with unwanted behavior, “getting away with it” has to stop. The fastest way to deal with this issue is to pair unwanted behavior with undesirable consequences for the performer. However, this only suppresses the unwanted behavior. Desired behavior must be carefully pinpointed. Then, when desired behavior is shown, it must be (consistently) reinforced. Discussing these interventions so that all concerned understand the required behavior and the positive consequences is highly recommended. What’s in It for Me? Anyone confronted with a new way of working searches for the positive consequences of this change. “What’s in it for me?” The intended positive consequences may not be immediately clear or noticeable. Compare it to someone who quits smoking. The (ex-) smoker immediately loses the joy of smoking, while the benefits of improved health and physical condition take a while to establish. Experiencing immediate positive reinforcement improves the chances that new behavior is adopted. To learn new behavior, the performer must receive continuous reinforcement. A simple, well-timed compliment or any other sign of appreciation will suffice. Finally, it is imperative that consequences (both positive and negative) consistently follow the behavior. It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change, said Charles Darwin. 
The main success factor in generating sustainable change when introducing frameworks and best practices, is the introduction of lasting, desired behavior. New behavior does not just occur. It requires attention and leadership. The most powerful and effective way to “implement” new behavior is to reinforce it positively. You get what you reinforce. Reinforcement improves chances that the new and desired behavior is repeated and accepted. Build behavior and the results will come, says Aubrey Daniels, a renowned expert on behavior. Thanks to Annerieke Ruijter, Paul Wilkinson, Marius Rietdijk and Joost Kerkhofs for their feedback and contributions! More information on the Project and Program Management courses is available in the ITpreneurs Course Catalog. Need more information? Please contact ITpreneurs. About the author Change & agility is all about people and behavior. I help leaders to actually realize behavioral change, so they enable increased performance, customer value and agility.
<urn:uuid:a472a62e-8297-41f9-8453-eef1dc76d673>
CC-MAIN-2022-40
https://www.itpreneurs.com/blog/service-management-initiatives-get-stuck-part-2-of-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00575.warc.gz
en
0.940463
890
3.09375
3
Data rules the tech world. Data, data, data. There is data that needs to be seen by a user, data to be reviewed by a data scientist. Data for investors. Data for management. Data. For any organization, how you structure and store your data informs your potential success. The company storing its data in hand-written Microsoft Word documents is a step behind the company storing its data in easy-to-read Excel charts on a cloud server. Innovation continues in the database world, and one such innovation is the graph database. (This article is part of our Data Visualization Guide.) How traditional databases work: Data has traditionally been stored in tables. For the coffee shop industry, you might see a set of tables starting with one for customers, and then separate tables to store all the orders, employees, inventory, and suppliers. In all, we have six data tables to store the data of the coffee shop industry. In a traditional relational database, each new relationship you want to capture requires a new table to store information. While inventory items might work well in tables, tables are limited when it comes to expressing relationships between entities. In traditional databases, answering relationship questions requires IDs to be duplicated across tables, then a number of join statements to say something like, "Return all values where ID-1 matches other ID-1's in all these sets of tables. Then, look at other common IDs that appear in the tables with ID-1, and also find those IDs across all sets of tables." The Cornell Movie Dialog Corpus is a traditional database. To know who said what in which movie, you have to cross-reference the IDs with the movie line to see that the line "Here's looking at you, kid" was spoken by character ID 546 (Humphrey Bogart) in movie ID 123 (Casablanca). You can answer relational questions with traditional SQL and NoSQL databases, but only through a carefully thought-out query that states explicitly which sets of tables to explore and join (a tremendous amount of work). Then you wait a while for the query to return. Queries about relationships in non-graph databases don't scale well: relationship questions in relational databases can take minutes, where the same query in a graph database can take seconds. What is a graph database? As people move into an increasingly interconnected society, connections need to be expressed. From social media feeds that show who is friends with whom, to recommendation lists showing that people who like these songs or videos also like those, to identifying the rogue member of a group who is most likely a threat, graph databases help store this information and query it with ease. Graph databases are used to understand highly interconnected data. They are great at exploring the relationships between data. By design, graph databases can easily answer these types of questions: - Do the user's friends like the same music as the user? - Could this fraudulent user be detected because it has fewer relationships to other members of the group? - If this person knows these types of things, what are the chances they will connect the dots to learn these other types of things? Graph databases are schema-less and mutable.
Graph databases are particularly good to use when you need to: - Explore the relationships between data - Easily scale queries to increasing amounts of relationships Example use cases for graph databases include: - Fraud detection - Real-time recommendations - Data management - Identity management - Network and IT operations The three graph components The graph is a mathematical concept, originating in graph theory. The graphs of graph theory can be thought of as trees, networks, webs, or mind-maps. Pick your demon. They all consist of the same parts: The basic element of the graph is the node. Nodes are the individual points in the graph: - One node contains data like the customer name and the customer coffee choice. - Another node on the same graph might have the name of a coffee shop, its address, and its hours of operation. Edges define the relationships between the entities. For our two customers, above, we can say they both “attend” the coffee shop. The nomenclature will vary based on your organization and needs. For example, “attends” can be replaced with customer, shopper, patron, user, etc. Finally, we have our graph of nodes and edges. From it we can express relationships such as, “Customer 1 and Customer 2 both attend Coffee Shop 1” and “Only Customer 1 attends Coffee Shop 2”. We could add many more relationships to this graph. Within this chart we could: - Show the suppliers to each coffee shop. - Add family and friends of these two customers, creating an edge “family”, “brother”, or “friend” in the network, show what their coffee preferences are, and then also use the “attends” edge to express which shops they attend. These graphs can get incredibly complex. A short code sketch at the end of this article shows one way to represent these nodes and edges. Popular graph tools from cloud providers Neo4j is one of the biggest players in the graph world, and they are creating a Kubernetes infrastructure that can implement the graph database on any cloud service. This means you can implement graph databases on AWS, Azure, and GCP. While Neo4j takes some extra work to get up and running, AWS offers Amazon Neptune, a ready-to-go graph infrastructure on AWS. Another good graph alternative is Cayley, which is open source and usable on GCP. What other good graph options have you come across? Please reach out and let us know. For related reading, explore these resources: - BMC Big Data & Machine Learning Blog - Data Visualization Guide, a tutorial series - Tableau Online Guide, a tutorial series - Data Lake vs Data Warehouse vs Database Explained - Who Cares How Big the Data Is? It Doesn’t Really Matter - DBMS: An Intro to Database Management Systems
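To make the node-and-edge idea concrete, here is a minimal sketch in plain Python, with no graph engine involved. The coffee-shop entities and the “attends” relationship mirror the example above; the dictionary-based store and the helper functions are illustrative assumptions, not any particular product’s API.

```python
# Minimal property-graph sketch: nodes carry attributes, edges carry a label.
nodes = {
    "customer_1": {"type": "Customer", "name": "Ann", "coffee_choice": "latte"},
    "customer_2": {"type": "Customer", "name": "Ben", "coffee_choice": "espresso"},
    "shop_1": {"type": "CoffeeShop", "name": "Bean There", "hours": "7-19"},
    "shop_2": {"type": "CoffeeShop", "name": "Brew City", "hours": "8-18"},
}

# Each edge is (source node, relationship label, target node).
edges = [
    ("customer_1", "ATTENDS", "shop_1"),
    ("customer_2", "ATTENDS", "shop_1"),
    ("customer_1", "ATTENDS", "shop_2"),
]

def neighbors(node_id, label):
    """Follow all edges with the given label starting from node_id."""
    return [dst for src, rel, dst in edges if src == node_id and rel == label]

def attendees(shop_id):
    """Reverse traversal: who attends a given shop?"""
    return [src for src, rel, dst in edges if dst == shop_id and rel == "ATTENDS"]

# "Customer 1 and Customer 2 both attend Coffee Shop 1; only Customer 1 attends Coffee Shop 2."
print([nodes[c]["name"] for c in attendees("shop_1")])  # ['Ann', 'Ben']
print([nodes[c]["name"] for c in attendees("shop_2")])  # ['Ann']

# Relationship questions become traversals rather than multi-table joins,
# e.g. "which shops do Coffee Shop 1's attendees also visit?"
fellow_shops = {s for c in attendees("shop_1") for s in neighbors(c, "ATTENDS")}
print(fellow_shops)  # {'shop_1', 'shop_2'}
```

In a real graph database the same question is typically a one-line traversal query (in Neo4j’s Cypher, something along the lines of MATCH (c:Customer)-[:ATTENDS]->(s:CoffeeShop) RETURN c, s), with the engine handling indexing and scale.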
<urn:uuid:3da9a92b-af36-498a-8c15-02053c3645cd>
CC-MAIN-2022-40
https://www.bmc.com/blogs/graph-databases/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00575.warc.gz
en
0.90964
1,459
3.125
3
Around the globe, scientists are using blindingly fast, incredibly powerful supercomputers to model and predict important environmental events, such as global warming, the paths of hurricanes and the motion of earthquakes, and to envision more fuel-efficient cars and planes, ensure nuclear stockpile safety and develop new sources of energy. All of these simulations require the processing, sharing and analyzing of terabytes or petabytes of data. But what about storing and managing all that data? To develop large-scale, high-performance storage solutions that address the challenges posed by the huge amounts of data that supercomputer simulations use and produce, the U.S. Department of Energy (DOE) recently awarded a five-year, $11 million grant to researchers at three universities and five national laboratories under the banner of the newly created Petascale Data Storage Institute (PDSI). Part of the DOE’s larger Scientific Discovery Through Advanced Computing (SciDAC) project, PDSI’s five-year mission is to explore the strange new world of large-scale, high-performance storage; to seek out data on why computers fail and new ways of safely and reliably storing petabytes of data; to boldly go where no scientist or scientific institution has gone before — and to share its findings with the larger scientific (and enterprise) community. “The first and primary goal of creating the Petascale Data Storage Institute was to bring together a body of experts covering a range of solutions and experiences and approaches to large-scale scientific computing and storage and make their findings available to the larger scientific community,” states Garth Gibson. The project’s second goal is to standardize best practices, says Gibson. Performance Comes at a Cost PDSI’s third goal: to collect data on computer failure rates and application behaviors in order to create more reliable, scalable storage solutions. “As computers get a thousand times faster, the ability to read and write memory — storage — has to get a thousand times faster,” explains Gibson. But there’s a downside to greater performance. “As we build bigger and bigger computer systems based on clustering, we have an increased rate of failures. And there is not enough publicly known about the way computers fail,” he says. “They all fail, but it’s very difficult to find out how any given computer failed, what is the root cause.” While today’s supercomputers fail once or twice a day, once computers are built to scale out to multiple petaflops (a petaflop is a quadrillion calculations per second), the failure rate could go from once or twice a day to once every few minutes, creating a serious problem. As PDSI scientist Gary Grider said in a recent interview: “Imagine failures every minute or two in your PC and you’ll have an idea of how a high-performance computer might be crippled.” To learn more, PDSI scientists are busy analyzing the logs of thousands of computers to determine why computers fail, so they can come up with new fault-tolerance strategies and petascale data storage system designs that can tolerate many failures while still operating reliably. As it makes new discoveries, PDSI will release its findings through educational and training materials and tutorials it plans to develop. PDSI will also hold an annual workshop (maybe more), including one next month at SC06. While the Institute’s findings will initially benefit the scientific supercomputing community, Gibson sees a trickle-down effect that will eventually reach enterprises.
“There is a whole commercial ecosystem around this,” says Gibson. “The same technology that is being driven first and foremost in the DOE labs [and now PDSI] shows up in energy research, the oil and gas [industries], in seismic analysis. … It shows up in Monte Carlo financial simulations for portfolio health. It shows up in the design of vehicles and planes. … It’s the same technology that’s used in bioinformatics for searching for proteins in genes. It’s almost the same technology that’s used for rendering computer graphics.” So by the same token, the best practices, standards and solutions pioneered by PDSI in large-scale storage should eventually make their way into applications for the commercial sector. Already, IBM, HP, Sun and Cray (and no doubt other vendors) are busy working on solutions that address the challenges of large-scale storage. And as the scientists at PDSI uncover the reasons why computers fail and come up with new fault-tolerance strategies, vendors will be able to use that information to design storage solutions that can scale out even more while still providing the reliability that enterprises and institutions need and expect.
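As a rough illustration of why failure handling dominates at this scale, here is a back-of-the-envelope sketch that assumes independent node failures and a hypothetical per-node mean time between failures; the numbers are made up for illustration and are not PDSI measurements.

```python
# If each node fails independently, the system-level mean time between
# failures (MTBF) shrinks roughly in proportion to the number of nodes.
def system_mtbf_hours(node_mtbf_hours: float, node_count: int) -> float:
    return node_mtbf_hours / node_count

NODE_MTBF_HOURS = 50_000  # hypothetical: one failure per node every ~5.7 years

for node_count in (1_000, 10_000, 100_000):
    mtbf = system_mtbf_hours(NODE_MTBF_HOURS, node_count)
    print(f"{node_count:>7} nodes -> a failure roughly every "
          f"{mtbf:.1f} hours ({mtbf * 60:.0f} minutes)")
```

At petascale component counts, this arithmetic forces designers to treat failure as the steady state rather than the exception, which is exactly the problem PDSI's fault-tolerance work targets.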
<urn:uuid:e305271c-393d-4982-a7b7-efbbe05a06ac>
CC-MAIN-2022-40
https://www.enterprisestorageforum.com/networking/preparing-for-failure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00575.warc.gz
en
0.919179
1,013
3.453125
3
On October 24, 1994, our customers traveling through New York City’s Lincoln Tunnel were able to do something no one had ever done before – make a wireless phone call. This October marked the 20th Anniversary of AT&T becoming the first communications company to bring wireless connectivity to the Lincoln Tunnel, making it the first tunnel in the NYC metropolitan area to be wired for mobile devices. We brought connectivity nearly 100 feet below sea level inside a then 57-year-old concrete tunnel, effectively making it The World’s Longest Phone Booth. To bring cellular coverage to the Lincoln Tunnel back then, we installed more than 24,000 feet of “leaky coax” cabling, which is essentially a long antenna that runs 8,000 feet inside each of the three tubes. Since placing traditional cellular antennas inside the tunnel was impossible due to a large power-washing device that scrubs the walls daily, we needed to deploy a creative solution. The leaky coax cables work much like an irrigation hose: instead of water leaking out of holes in the hose, wireless radio frequencies are emitted, providing wireless connectivity. This allows the cables to provide consistent, reliable coverage throughout the entire length of the tunnels without disrupting the maintenance of the tunnel. The leaky coax cables then connect to cellular base stations, which are located inside large ventilation buildings – one on the Manhattan side and another on the New Jersey side of the tunnel. Since 1994, we have adapted the technology inside the tunnels to keep pace with mobile growth. As customers began using mobile devices more often and for more than just texting and talking, our engineers upgraded the equipment to handle mobile web browsing, social media app use and more. That meant upgrading the system from 1G/2G technology to 3G/4G technology. Now fast forward to 2014, and our engineers are currently in the process of upgrading to 4G LTE. We are always working to keep customers connected wherever they may be, including another underground frontier – the subway platform. Similar to the Lincoln Tunnel, NYC’s subway platforms are below-ground labyrinths, reaching as far as 180 feet below street level (even deeper than the Lincoln Tunnel). That’s why our engineering teams that worked to bring mobile connectivity to the Lincoln Tunnel 20 years ago are now working to bring mobile connectivity to the subway stations. It’s all part of our long-standing commitment to help keep customers connected to their mobile devices everywhere they go – below or above ground.
<urn:uuid:816fd0d5-b751-436b-ad7e-0da081a4f6fd>
CC-MAIN-2022-40
https://about.att.com/innovationblog/120414belowthehudson
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00575.warc.gz
en
0.94349
507
2.8125
3
A man who made his name by founding PayPal has now made history by launching a small, low-cost rocket into space. Elon Musk, who heads up space exploration company SpaceX, successfully sent a Falcon 1 rocket into orbit over the weekend. It marked the company’s fourth attempt and first success in launching the rocket. SpaceX blames design issues for its past problems, the most recent of which resulted in the loss of three government satellites and the ashes of “Star Trek” actor James Doohan and Mercury astronaut Gordon Cooper. (The rocket vanished on its way into space, an apparent consequence of timing trouble.) Now, SpaceX says it’s ready to move forward with trials for its larger rocket — which could, it hopes, end up being used by NASA for future missions. Fleet of the Future Right now, NASA is planning to retire its current shuttle fleet by 2010. That’s why the space agency is working with private companies to find replacements. “The idea is that the space shuttle is used to finish building the space station,” NASA spokesperson Allard Beutel told TechNewsWorld. “Once that’s done, then we retire the space shuttle and make way for the next generation of spacecraft that will allow us to be able to go to the moon and other places,” he said. The shuttle, Beutel explained, can only orbit the Earth. Updated vehicles are needed to take us farther. “If we’re going to branch out — go back to the moon or go onto Mars or an asteroid or anywhere else — the space shuttle can’t do it. We have to have a different type of spacecraft,” he said. Rocket and capsule systems, then — like the ones being developed by SpaceX — will be used to transport equipment and crew to and from the space station. That, in turn, will free up money for future progress. “The idea is that the money used right now to operate the space shuttle program gets pushed more into [research and development] for the next-generation stuff — so you keep constantly building the next stuff out,” Beutel remarked. For SpaceX, the whole thing means opportunity. And with the recent Falcon 1 success, the company is as optimistic as ever about its larger Falcon 9 rocket and its possible implications for NASA. While the Falcon 1 will eventually become a working launch vehicle, its broader goal was to clear the way for its big brother, so to speak. “Our attention, now that we’ve [launched Falcon 1] successfully, is to go into the Falcon 9 rocket and the Dragon” pressurized capsule, SpaceX spokesperson Diane Murphy told TechNewsWorld. “We’ve developed these in parallel, so all the risk on Falcon 1 that’s been retired means we don’t have to revisit that on the Falcon 9,” she said. SpaceX has to prove that its rockets can deliver cargo to the space station. It’s not the only company trying to show what it can do, but its representatives are confident it’s the strongest candidate and will stay at the top of the pack. Both the Falcon 9 and its Dragon capsule are designed to come back from missions, Murphy said, but that isn’t the only factor that sets SpaceX apart. “Unlike any other offering, [the Dragon capsule] has been designed from day one to take crew. We can take up to seven astronauts,” Murphy noted. Next up for SpaceX are more trials and demonstrations. Engineers will spend about 18 months building an escape tower that would let astronauts get out of the vehicle in case something were to go wrong. Then, NASA will decide how to proceed in selecting its future fleet. 
The space agency’s goal is simple: finding the smartest solution to expanding exploration and increasing our understanding of the world around us. “We haven’t done any next-generation spacecraft in 30 years,” NASA’s Beutel told TechNewsWorld. “The idea is that you’re constantly branching out into the solar system — bit by bit, piece by piece,” he said.
<urn:uuid:b537c84b-9bbb-4aec-881b-5cb7e9adf27e>
CC-MAIN-2022-40
https://www.crmbuyer.com/story/spacex-private-rocket-blazes-post-shuttle-trail-64662.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00575.warc.gz
en
0.946294
884
2.625
3
Black-box testing is a software testing method that focuses on the functionality and behavior of an application without knowing how it works internally. With this testing approach, the goal is not to find bugs but rather to understand how the application works and its capabilities. This article discusses how black-box testing identifies security gaps and prevents potential vulnerabilities for startup and enterprise-level organizations. We also delve into its types of pentest techniques and address commonly asked questions. What is black-box testing, and why is it important for your business? Black-box security testing is a software testing method used to examine the functionality of an application with limited knowledge of its architecture or internal processes. This approach, also known as closed-box testing, relies on supplying selected inputs under specific execution conditions and observing the outputs to assess the application’s functionality. For example, a black-box penetration test helps organizations reduce their attack surface by uncovering common vulnerabilities undetected by the development team during code testing. QA teams can also build test cases for specific usage scenarios, which provide application performance information from a user’s point of view. When to Implement Black Box Scanning? Black box testing is used to examine various aspects of an application’s functionality. As a result, black-box testing techniques are applicable across multiple phases of the SDLC, including: - Integration testing for new features during deployments and rollouts - In production, to understand application usability and the user experience - During acceptance testing, to understand the effects of third-party solutions on application security - During quality assurance, to ensure the application works as expected before deployment Types of Tests in Black Box Testing A black-box penetration test can be categorized primarily into three types of testing. These are: 1. Functional testing A form of closed-box testing that examines how the software performs against specific functional requirements. The tests check the software’s mainline functions (user interface, databases, security, etc.) by supplying a particular input and comparing its actual output with the expected behavior. In this testing approach, an ethical hacker develops functional test cases based on business scenarios and other applicable requirements to determine the application’s conformance to SLAs. These cases can either be automated or executed through manual penetration testing, and they typically focus on the system’s accessibility and error conditions. 2. Non-functional Testing Non-functional testing examines the non-functional requirements of the application, including performance, reliability, and usability, among others. These parameters are commonly used to measure the application’s ability to run effectively within its deployment environment. Therefore, a non-functional test provides detailed information on the application and tech stack’s behavior, helping avoid runtime security vulnerabilities. Aspects covered by this type of testing include: - Software performance testing - Load testing - Stress testing - Volume testing - Compliance testing - Portability testing - Configuration testing - Recovery testing 3.
Regression Testing Regression testing is a black-box security testing method used to validate whether a recent update has affected the application’s existing functionality. The approach involves a partial dynamic analysis test that re-executes test cases on existing features to ensure new changes do not incur unwanted side effects. Regression tests are mainly used when a new version of source code has been deployed or when segments of code fixes have been released. Acting as a foundation for the acceptance testing approach, regression testing can be automated using tools that build test libraries out of varying combinations of previous test cases, thereby reducing the manual efforts of QA teams. Black Box Pentest Techniques Testers use several black-box pentest techniques to design test cases for varying requirement specifications. These black-box testing techniques include: Equivalence Partitioning A black-box security testing technique that derives test cases from data classes within the input domain. A typical test case divides the input domain into equivalent classes/partitions and then evaluates each class for specified input conditions. The software requirements and specifications additionally define the equivalence partitions. Test cases also validate whether the values of a partition behave the same as those of other equal partitions, thus determining valid or invalid inputs. Boundary Value Analysis (BVA) This black-box pentesting technique uses boundary values (values at the lower and upper limits of variables) to identify the source of input errors. Using this technique, testers design test cases to examine application functionality at the beginning and end of each partition using test data. Boundary value analysis is commonly used to identify system errors at the extreme ends of the input domain and only applies to test cases where the partitions are ordered sequentially or numerically. State Transition Testing A black-box penetration testing technique used to observe how the application behaves under a sequence of different input conditions. Testers provide both negative and positive input values to the application server to record the valid and invalid state transitions. These tests are most appropriate when determining an operation’s dependencies on past values. This technique also tests real-time systems with multiple states, transition events, and associated conditions. Decision Table Testing A systematic closed-box testing approach that tabulates input combinations and their respective system behavior to help evaluate the combinations of input data the application can handle. The decision table correlates inputs versus the rules, test cases, or conditions to map cause-and-effect relationships, forming the foundation of structural testing. Error Guessing With an error-guessing mechanism, black-box penetration testers rely on their expertise to infer sources of issues within the application. This level of software testing is unstructured since it does not follow any specific rules/conventions. With this approach, a testing team develops test cases using experience with similar performance requirements and software vulnerabilities. Frequently Asked Questions What are some popular black-box testing tools? Some of the most effective application security tools include: Crashtest Security Suite integrates with most modern development stacks without requiring testing teams to worry about the underlying programming language and application logic.
The platform establishes a continuous testing process that includes automated vulnerability scanning to help spot potential security issues before attack vectors exploit them. Crashtest Security’s vulnerability scanner benchmarks the security scans against the OWASP Top 10 to ensure robust security control with low false positives and negatives. Applitools is an AI-powered test automation platform that simplifies black-box testing using visual snapshots. The Applitools Eyes platform performs validation by taking a snapshot of the UI as the baseline. Applitools Eyes then compares subsequent snapshots against that baseline across various user workflows and input parameters. The platform, however, does not test for security issues on processes running in the background or those invisible to the user. SmartBear’s TestComplete uses keyword-driven tests to automate black-box testing. The platform allows testing teams to record scriptless test sequences and then play them back in applications, using data-driven visual recognition to examine dynamic UI elements. TestComplete supports security analysis on a wide variety of devices, including support for automated tests across different browsers, applications, and devices. What is the difference between black box and white box testing? One of the fundamental differences between the two testing mechanisms is that security and QA teams mainly perform black-box testing. In contrast, developers usually perform white-box penetration testing with access to source code, internal knowledge of implementation logic, design, and the application’s internal structure. Black-box testing techniques describe the application’s behavior and perform functional product tests. On the other hand, white-box testing can be used for logic and algorithm testing to uncover the software’s structural performance and assess internal and external vulnerabilities.
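As a minimal sketch of how equivalence partitioning and boundary value analysis translate into concrete test cases, consider a hypothetical validate_age input check; the function and its 18-65 valid range are assumptions made purely for illustration and are not part of any tool mentioned above.

```python
# System under test (illustrative): ages 18-65 inclusive are considered valid.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per class is assumed
# to behave like every other value in that class.
partitions = {
    "below valid range": (10, False),
    "inside valid range": (40, True),
    "above valid range": (70, False),
}

# Boundary value analysis: probe each edge of the valid partition.
boundaries = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for name, (value, expected) in partitions.items():
    assert validate_age(value) == expected, f"partition failed: {name}"

for value, expected in boundaries.items():
    assert validate_age(value) == expected, f"boundary failed at {value}"

print("all black-box test cases passed")
```

Note that every test case above is derived purely from the specification (valid ages are 18-65), not from the code's internals, which is what makes the design black-box.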
<urn:uuid:7f5e33a1-046d-49ea-96bb-44d5b9d35832>
CC-MAIN-2022-40
https://crashtest-security.com/black-box-testing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00775.warc.gz
en
0.888904
1,585
3.09375
3
The Man-in-the-Middle attack is a prominent cyberattack that has become infamous in recent years. However, it has been around since the 1980s, and it is one of the oldest types of cyber threats. In a nutshell, this attack constitutes an interception of a data transfer or other digital communication. By doing this, the attacker gains access to exchanges that are supposed to be secure. The MitM attack usually entails eavesdropping and can also include distributing malicious data to the existing parties in a conversation — with the attacker staying under the cover of a legitimate participant. It can also involve impersonating one of the legitimate parties to obtain other sensitive data. The end goals of MitM threats can be different — to steal someone’s identity, make fund transfers, change a user’s login credentials, gain access to financial institutions’ or eCommerce platforms’ data, and many more. In the guide below, you can learn the nitty-gritty details about MitM attacks — and how to tackle them for your cyber protection. What Is a Man-in-the-Middle Attack? A Man-in-the-Middle attack can be executed on a data transfer between a web server and a client and on a private communication exchange between individual users over a messaging platform. It can also target credentials during authentication with payment platforms, and many more cases. MitM Attack Definition In essence, a MitM attack consists of an attacker gaining unauthorized access to data transfers that should be secure and private. They succeed by inserting themselves as a relay or a proxy in a standard exchange — getting “in the middle” between two other parties. Man-in-the-Middle attacks can be considered a type of session hijacking. They often are not caught, even though they can cause severe data loss and damage. How Are Man-in-the-Middle Attacks Performed? Different online security loopholes may allow a MitM attack to be executed. The steps usually follow a particular path, or attack progression, with the attacker intercepting traffic at first and then decrypting it in an unnoticed way to get to the valuable data. The interception can occur in several ways. The easiest is setting up an open network that users can easily log in to, and then stealing their data exchanges. More elaborate interception methods include IP, ARP, or DNS spoofing. In the case of IP spoofing, the malicious user pretends to be a trusted application or host by altering packet headers so that traffic appears to come from a legitimate IP address. ARP spoofing involves using fake ARP messages to link the attacker’s MAC address to the victim’s IP. As for DNS spoofing, also known as DNS cache poisoning, the attacker gains access to a DNS server and changes the address record of a website. Once the attacker has inserted themselves in the middle of a data exchange, they have to find a covert way to decrypt the information contained in the Secure Sockets Layer (SSL) traffic. This can be done through methods like HTTPS spoofing, the SSL BEAST attack, SSL hijacking, and SSL stripping, among others. Types of Man-in-the-Middle Attacks As Man-in-the-Middle attacks can exploit different vulnerabilities, there are a couple of main types of such attacks.
Here are the most popular MitM attacks: - Wi-Fi eavesdropping - SSL hijacking - SSL stripping - Email hijacking - Browser cookies theft - Rogue access point (Wi-Fi pineapple) - Internet Control Message Protocol (ICMP) redirection - Dynamic Host Configuration Protocol (DHCP) spoofing - HTTPS spoofing - IP spoofing - DNS spoofing The last three in the list correspond to the methods of interception described in the previous section. MitM attacks can also be categorized as either active or passive. This relates to the exact activities of the attacker — whether they are simply eavesdropping on a communication channel or actively tampering with the ongoing exchange. What Types of Attacks Are Similar to a MITM Vulnerability? As already mentioned, Man-in-the-Middle attacks are a type of session hijacking. Other kinds of this threat include sniffing, sidejacking, and Evil Twin. Sniffing entails the interception of data a device sends and receives. Sidejacking, on the other hand, includes gaining unauthorized access to session cookies which may contain login data that can be used for hijacking user sessions. Last but not least, Evil Twin is a type of attack based on session hijacking in which a legitimate Wi-Fi network is duplicated so that the attacker gains access to users who think they are logging in to the legitimate network. Real-Life Examples of Man-in-the-Middle Attacks There are numerous real-world examples of Man-in-the-Middle attacks that have affected different types of businesses, large international organizations, and even national authorities. For example, the Organization for the Prohibition of Chemical Weapons (OPCW) was targeted by a Man-in-the-Middle attack from Russian spies in 2018. Back in 2011, the Dutch certificate authority DigiNotar suffered a breach in which fake certificates were issued, which were then used for MitM attacks. How to Prevent MitM Vulnerabilities For end-users, there are practical ways to stay away from MitM attacks. They include: - Not logging in from your mobile devices to Wi-Fi networks that are not password-protected - Logging out of applications when you’re not using them - Avoiding public Wi-Fi networks - Reading browser notifications about the security of visited websites As for applications and websites, it’s essential to use the latest updates for secure connection protocols like TLS and HTTPS and stick to strong encryption and verification methods. How to Detect and Remove a Man-in-the-Middle Attack MitM threats can be tough to catch, as they are often very discreet attacks. That stealth is probably the most characteristic thing about this type of interception. Your best bet is to use dedicated software to monitor and identify if anyone is trying to or already has gained access to your data exchanges. Do you know if your systems are protected against man-in-the-middle attacks? Crashtest Security’s Vulnerability Testing Software can help you uncover vulnerabilities so you can fix them and prevent all kinds of cyber threats.
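Beyond dedicated monitoring tools, one concrete detection technique that applications can apply themselves is certificate pinning: comparing the certificate a server presents against a known-good fingerprint recorded over a trusted channel. The sketch below is a simplified illustration using only Python's standard library; the pinned value is a placeholder you would replace with your server's real fingerprint, and a production setup would also need to handle legitimate certificate rotation.

```python
import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint you recorded over a trusted channel.
PINNED_SHA256 = "replace-with-your-servers-known-fingerprint"

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's leaf certificate and return its SHA-256 fingerprint."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

fingerprint = server_cert_fingerprint("example.com")
if fingerprint != PINNED_SHA256:
    print("WARNING: certificate does not match the pinned fingerprint "
          "- possible man-in-the-middle interception.")
else:
    print("Certificate matches the pinned fingerprint.")
```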
<urn:uuid:6e344573-b654-4f38-a369-62c07de96219>
CC-MAIN-2022-40
https://crashtest-security.com/mitm-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00775.warc.gz
en
0.929403
1,337
3.578125
4
The Internet is a huge and ever-growing part of the vast U.S. and international economy. Experts continue trying to quantify the billions of dollars generated by the Internet and Internet-based activities. A recent report released by the U.S. International Trade Commission sheds light on a part of that impact, more specifically the influence that the Internet has on trade. The report, titled Digital Trade in the U.S. and Global Economies, was formulated using metrics from three different vectors, including firm case studies, a survey of U.S. businesses, and economy-wide application. According to the findings, not only does the Internet generate economic growth, but it also aids communications and increases productivity for businesses of varying sizes. The report suggests that the benefits generated by the Internet have increased the U.S. gross domestic product by a total of 3.4-4.8 percent, or $517.1-$710.7 billion. In addition, “digitally intensive industries” were responsible for more than $935 billion in commerce in 2012. Removing foreign trade barriers would add an additional 0.1-0.3 percent, or $16.7-$41.4 billion. The Internet also largely contributes to reducing the costs of imports and exports in the U.S., which the report estimates at about 26 percent. The lowered costs increase the GDP by up to nearly $40 billion and increase employment wages by up to $2.4 million. Throughout the more than 300-page report, which was commissioned by the Senate Finance Committee, there are numerous examples of how Internet-based industries are positively impacting trade economically on a global scale to the tune of billions of dollars. The Internet infrastructure industry can stand proud of its role in the digital economy. Our industry provides the platform on which all this is possible. From web hosting and data-center providers to cloud computing services and registries and registrars, the infrastructure industry is the backbone of the Internet. Just as a long bridge that you drive over requires a steel framework, the Internet needs the nuts and bolts to make it connect. All the devices making up the infrastructure act as the rivets that hold the framework together, allowing it to reach far and wide. This report underscores the ongoing importance of the Internet infrastructure industry and is a worthwhile read. Please take a look and email me your thoughts at [email protected] Are you interested in helping the i2Coalition support the Internet infrastructure industry? Learn more about becoming a member today.
<urn:uuid:cf00f81b-5fec-420c-9c59-5c9502ca92b8>
CC-MAIN-2022-40
https://i2coalition.com/report-internet-infrastructure-industry-paves-the-way-for-strong-economic-growth/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00775.warc.gz
en
0.941556
514
2.578125
3
If you’ve ever wondered what SSL is, well here you go: SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser, or a mail server and a mail client (e.g. Outlook). SSL allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. Normally, data sent between browsers and web servers is sent in plain text—leaving you vulnerable to eavesdropping. If an attacker is able to intercept the data being sent between a browser and a web server, they can see and use that information, whether for good or for dastardly purposes. Several events within the last year have brought SSL to the forefront of many people’s minds. Some of those things include: Heartbleed (the OpenSSL vulnerability), AOSSL (SEO benefits), POODLE (SSL v3.0), and the SHA-1 to SHA-2 upgrade. If you have a business that does any type of online commerce, your customers want to know that you value their security and are serious about protecting their information. More and more customers are becoming savvy online shoppers and reward the brands that they trust with increased business. The time to install SSL on your website or upgrade your SSL certificate to an EV SSL certificate (more about this below) is now rather than later. Seriously. Google also recently announced that sites secured by SSL will now get a boost in search rank results, so that’s pretty cool. Nothing bad ever came from a bit of free Google-certified SEO advice, eh? Want more security? If you want to increase customer confidence and conversion rates, EV SSL could be for you. Extended validation (EV) certificates use the highest level of authentication and were specifically created to boost and maintain customer confidence in ecommerce through a rigorous verification process and specific, EV certificate-only browser cues like the green address bar. EV certificates incorporate some of the highest standards for identity assurance to establish the legitimacy of online entities. We have a trusted SSL partner, DigiCert. DigiCert is one of the world’s leading SSL Certificate Authorities (CAs) and their product value, we think, is unparalleled. For more information contact Albert Ahdoot
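As a quick, hands-on postscript: if you are curious what certificate your own site is currently serving (who issued it and when it expires), a short check like the sketch below, which uses only Python's standard library, can tell you; the hostname is a placeholder for your own domain.

```python
import socket
import ssl

def certificate_summary(host: str, port: int = 443) -> dict:
    """Connect over TLS and return the key details of the peer certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {
        "subject": dict(item[0] for item in cert["subject"]),
        "issuer": dict(item[0] for item in cert["issuer"]),
        "valid_from": cert["notBefore"],
        "valid_until": cert["notAfter"],
    }

# Replace example.com with your own domain.
for key, value in certificate_summary("example.com").items():
    print(f"{key}: {value}")
```

The issuer field is also where you would see your certificate authority (for example, DigiCert) once your certificate is installed.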
<urn:uuid:bcb2da2d-cc13-4d5e-a853-2ce2ac36595c>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/your-website-needs-ssl
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00775.warc.gz
en
0.937548
503
2.578125
3
Most people older than 30 probably remember doing research with good old-fashioned encyclopedias. You'd pull a heavy volume from the shelf, check the index for your topic of interest, then flip to the appropriate page and start reading. It wasn't as easy as typing a few words into the Google search bar, but on the plus side, you knew that the information you found in the pages of the Britannica or the World Book was accurate and true. Not so with internet research today. The overwhelming multitude of sources was confusing enough, but add the proliferation of misinformation and it's a wonder any of us believe a word we read online. Wikipedia is a case in point. As of early 2020, the site's English version was averaging about 255 million page views per day, making it the eighth-most-visited website on the internet. As of last month, it had moved up to spot number seven, and the English version currently has over 6.5 million articles. But as high-traffic as this go-to information source may be, its accuracy leaves something to be desired; the page about the site's own reliability states, "The online encyclopedia does not consider itself to be reliable as a source and discourages readers from using it in academic or research settings." Meta, formerly Facebook, wants to change this. In a blog post published last month, the company's staff describe how AI could help make Wikipedia more accurate. Though tens of thousands of people participate in editing the site, the facts they add aren't necessarily correct; even when citations are present, they're not always accurate or even relevant. Meta is developing a machine learning model that scans these citations and cross-references their content with Wikipedia articles to verify that not only do the topics line up, but that the specific figures cited are accurate. This isn't just a matter of picking out numbers and making sure they match; Meta's AI will need to "understand" the content of cited sources (though "understand" is a misnomer, as complexity researcher Melanie Mitchell would tell you, because AI is still in the "narrow" phase, meaning it's a tool for highly sophisticated pattern recognition, while "understanding" is a word used for human cognition, which is still a very different thing). Meta's model will "understand" content not by comparing text strings and making sure they contain the same words, but by comparing mathematical representations of blocks of text, which it arrives at using natural language understanding (NLU) techniques. "What we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage," Fabio Petroni, Meta's Fundamental AI Research tech lead manager, told Digital Trends. "That's not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored." The AI is being trained on a set of 4 million Wikipedia citations, and besides picking out faulty citations on the site, its creators would like it to eventually be able to suggest accurate sources to take their place, pulling from a massive index of data that is continuously updating. One big issue left to work out is a grading system for sources' reliability. A paper from a scientific journal, for example, would receive a higher grade than a blog post. The amount of content online is so vast and varied that you can find "sources" to support practically any claim, but parsing the misinformation from the disinformation (the former meaning incorrect, the latter meaning deliberately deceptive), and the peer-reviewed from the non-peer-reviewed, the fact-checked from the hastily slapped together, is no small task, but it is a crucial one when it comes to trust. Meta has open-sourced its model, and those who are curious can see a demo of the verification tool. Meta's blog post noted that the company isn't partnering with Wikimedia on this project, and that the model is still in the research phase and not currently being used to update content on Wikipedia. If you imagine a not-too-distant future where everything you read on Wikipedia is accurate and reliable, wouldn't that make doing any sort of research a bit too easy? There's something valuable about checking and comparing various sources ourselves, is there not? It was a big leap to go from paging through heavy books to typing a few words into a search engine and hitting "Enter"; do we really want Wikipedia to move from a research jumping-off point to a source that gets the last word? In any case, Meta's AI research team will continue working toward a tool to improve the online encyclopedia. "I think we were driven by curiosity at the end of the day," Petroni said. "We wanted to see what was the limit of this technology. We were totally unsure if [this AI] could do anything meaningful in this context. Nobody had ever tried to do something similar."
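To illustrate the idea Petroni describes, here is a toy sketch of comparing passage "meaning representations" with cosine similarity. The embed function is a stand-in for whatever trained NLU encoder produces the passage vectors, and the vectors themselves are made-up numbers for illustration only.

```python
import numpy as np

def embed(passage: str) -> np.ndarray:
    """Stand-in for an NLU model that maps a passage to an n-dimensional vector.
    A real system would use a trained encoder; here the vectors are hard-coded."""
    fake_vectors = {
        "The Eiffel Tower is 330 metres tall.": np.array([0.90, 0.10, 0.30]),
        "At 330 m, the Eiffel Tower tops out at about 1,083 feet.": np.array([0.88, 0.15, 0.28]),
        "Paris is the capital of France.": np.array([0.20, 0.90, 0.10]),
    }
    return fake_vectors[passage]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

claim = "The Eiffel Tower is 330 metres tall."
candidates = [
    "At 330 m, the Eiffel Tower tops out at about 1,083 feet.",
    "Paris is the capital of France.",
]

# Passages whose vectors sit close together are treated as supporting the claim,
# even when they share few exact words.
for passage in candidates:
    print(f"{cosine_similarity(embed(claim), embed(passage)):.2f}  {passage}")
```

A citation-checking pipeline along these lines would index millions of passage vectors and retrieve the nearest ones to each Wikipedia claim, flagging citations whose best match is still far away.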
<urn:uuid:26c4d58f-12ab-4d6c-8ab6-0cc60cedec16>
CC-MAIN-2022-40
https://blingeach.com/meta-is-constructing-an-ai-to-truth-verify-wikipedia-all-6-5-million-articles/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00775.warc.gz
en
0.938577
1,186
2.765625
3