It’s an unfortunate fact that data centers collect significant amounts of dirt. Dirt particles are carried in on the shoes and clothing of personnel or through ventilation systems, and contamination may accelerate due to nearby construction sites or natural disasters. Over time, dirt particles can collect on hardware, walls, floors, subfloors and the ceiling.
A dirty data center poses a particular danger to both the equipment and staff members. Here are the top three risks associated with dust in data centers, and the cleaning procedures that companies use to address them.
1. Equipment Downtime
Dust particles that collect in a data center may carry an electrostatic charge. Even small voltages can lead to problems with sensitive data center hardware. Electrically charged dust particles can cause signal disturbances, data losses, short circuits and power failures, compromising not only the hardware but important data.
If a data center is exceptionally dirty due to infrequent cleaning, dust may prevent fans from functioning efficiently and lead to overheating and extended downtime.
Downtime is one of the biggest dangers of a dirty data center. Downtime interrupts business and costs North American companies $700 billion every year. Though dirt isn’t the only cause of downtime, it’s certainly one factor. Using data center cleaning best practices could help you reduce downtime and cut losses.
2. Data Center Fires
When dust particles end up on or near heat-producing hardware in a data center, the equipment can ignite or even explode. The result is a data center fire, which could pose a danger to your hardware as well as your staff.
The good news is, proper cleaning can greatly reduce the risk of fire. Managing fire risk through the cleaning and installation of fire suppression systems is an important part of managing a data center today.
3. Employee Health Complications
In addition to danger from potential data center fires, employees face other work-related health complications. As in any working environment, airborne contaminants in a dirty data center can negatively impact employee health, leading to extended illness, low productivity and reduced job satisfaction.
Proper data center cleaning can help eliminate health concerns, filtering hazardous contaminants such as fungi and bacteria out of the air. The result is better overall health outcomes for data center employees.
Reducing the Dangers of a Dirty Data Center
It’s clear that a dirty data center comes with many dangers for business operations and employees. So why do many data centers still struggle to keep their centers clean? Eliminating contaminants in a data center environment is highly complicated. If a contractor attempts to clean your data center using improper cleaning procedures, they could make the situation worse, spreading around dust and increasing static instead of eliminating it.
At DataSpan, our experienced and trained technicians use anti-static data center cleaning best practices to remove dirt while protecting your storage media. Because we complete every job to ISO standard, many Fortune 500 companies trust us to handle their regular data center cleanings. | <urn:uuid:6970fb2b-4003-40fd-9acf-0d667ef8d746> | CC-MAIN-2022-40 | https://dataspan.com/blog/the-dangers-of-a-dirty-data-center/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00198.warc.gz | en | 0.908689 | 602 | 2.765625 | 3 |
SCADA stands for Supervisory Control and Data Acquisition, and although it’s not likely to be the first thing to come to mind when discussing cyber security, it certainly should be. As its name implies, it is a type of software designed to supervise – controlling and monitoring – as well as collect and analyze data for industrial processes. You’ll find SCADA in just about every industrial processing plant around the world, including manufacturing and utilities production plants. Through the use of SCADA, industrial processes and procedures are made more efficient through a relatively simple command and control architecture composed of several major components: supervisory computers, remote terminal units, programmable logic controllers, communications infrastructure, a historian, alarms, and a human-machine interface.
Supervisory Computers form the backbone of the SCADA system and can vary widely in their complexity, from a single PC to geographically separated systems spanning multiple servers, distributed software applications, and even multiple disaster recovery sites. Remote Terminal Units (RTUs) act as the liaison between the supervisory computing systems and the sensors and actuators. While SCADA operators manipulate the human-machine interface, RTUs exist to convert electrical signals and inputs into mechanical outputs, such as the flipping of a switch or the opening of a valve, or to collect measurements and provide feedback to the users. Programmable Logic Controllers (PLCs) are very similar to RTUs, as they also interface between the supervisory system and the sensors and actuators; however, PLCs are more sophisticated than their counterparts, offering more embedded control capabilities, often written in numerous programming languages. They also offer a cheaper, more versatile, and scalable alternative to standard RTUs. The Communications Infrastructure is the glue that ties the whole system together, transmitting commands from the supervisory system to the RTUs and PLCs. This is not to say that the RTUs and PLCs are entirely dependent on the supervisory system, however. Operating independently of it, PLCs and RTUs are often undeterred by temporary losses of service, able to continue their appointed duties without fail; upon the resumption of service, monitoring and control can continue. Many of the more robust SCADA systems even install redundant communications pathways in the event of damage or disaster. The Historian is a software service, located within the human-machine interface, that enables the collection and storage of historical data, operating metrics, and the like. Alarm handling is another critical component of SCADA systems.
Just as the system is there to monitor and analyze the information within it, keeping the SCADA operators informed of the goings-on of the system is vital to its continued efficient and safe operation. Finally, the Human-Machine Interface (HMI) can be compared to your standard Graphical User Interface (GUI). Through the HMI, the plant worker or industrial professional can monitor and operate the supervisory system itself.
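To make the division of labor concrete, here is a deliberately simplified, hypothetical sketch of one supervisory polling cycle. The tag names, limit values, and the rtu_read callback are all invented for illustration; no real SCADA product works in ten lines of Python.

```python
# Hypothetical supervisory loop: poll each field device, archive the
# sample for the historian, and raise an alarm on any limit violation.
SENSOR_LIMITS = {"tank_level": (10.0, 90.0), "line_pressure": (5.0, 60.0)}

def poll_cycle(rtu_read, history, alarms):
    """rtu_read(tag) stands in for real RTU/PLC field I/O."""
    for tag, (low, high) in SENSOR_LIMITS.items():
        value = rtu_read(tag)
        history.append((tag, value))        # historian: store every sample
        if not (low <= value <= high):
            alarms.append((tag, value))     # alarm handling: flag for the HMI

history, alarms = [], []
readings = {"tank_level": 95.0, "line_pressure": 30.0}
poll_cycle(readings.get, history, alarms)
# tank_level is above its high limit, so exactly one alarm is raised
```

A real deployment would speak an industrial protocol such as Modbus or DNP3 to the RTUs and PLCs; the point here is only the supervise/collect/alarm split described above.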
Given its nearly universal use in utility systems, the ramifications of an insecure network are nearly endless. A well-placed exploit against an undefended system can wreak absolute havoc on national infrastructure. Despite this, SCADA systems are routinely engineered with vulnerabilities. Security and authentication in the deployment of these systems is often viewed as an afterthought, relying instead on the ill-conceived notion that SCADA systems are inherently secured through obscurity – a belief that directly fed into the success of the Stuxnet virus from the not-too-distant past. Another misconception that feeds into this false sense of security is the physical construction of the network itself: the belief that if the system is physically secure and not logically connected to the Internet, then it must be secure. Again, breaches of the same caliber as Stuxnet have proven that “air-gapped” networks contribute only to a false sense of security – not any actual sort of security. In defending SCADA, it is critical that organizations actively work to overcome their cyber security inertia – just because your system has yet to be exploited does not mean that it will never be exploited. Almost unanimously, the question of cyber security breaches is not a question of if, but of when.
The first step in securing SCADA starts with understanding the competing priorities of standard IT security versus SCADA-centric priorities. Much of the debate between the two communities rests on what each values in its systems. A standard IT system often orders its priorities as confidentiality, integrity, and availability. This model is flipped on its head in the SCADA community, with availability being the most important of the three, followed by integrity, trailed by confidentiality. In analyzing current trends and historical data, three measures prove to resolve many of the more routine security issues that commonly plague SCADA systems:
In countering the number of attack vectors often present in SCADA systems, placing firewalls and a DMZ between the office network and the different layers of the control network just makes common sense.
Fashioning a defense in-depth takes the DMZ concept a step further. By planning for and developing defensive controls designed to avert threats before they even have a chance of exploiting your system, you effectively create a defense in depth. This could be something as simple as not allowing removable media onto the network without prior scanning and approval. This policy effectively eliminates a common and dangerous attack vector into your system.
Given the busy nature of today’s workforce and occupational demands, workers will want to be able to remote into their SCADA system and continue to accomplish their assigned duties. Rather than choose to introduce a new attack vector into your network, or ban remote working altogether, the use of Virtual Private Networks (VPNs) enables secure web browsing, email exchange, and continued work performance in times of need.
In light of recent attention paid to the importance of cyber security and recent advances in IT networking and cyber security technologies, SCADA networks are evolving. One of the most exciting, but also highly nuanced, advancements of late is the Internet of Things (IOT). Advances in cloud computing technologies has yielded exponential advances in the areas of systems security, system supervision and data collection and analysis. How these advances are going to impact the industrial sector and national infrastructures has yet to be seen but will certainly be exciting. | <urn:uuid:554411a0-2ed0-427f-b48a-eaadf6378ef6> | CC-MAIN-2022-40 | https://www.logsign.com/blog/what-is-scada/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00198.warc.gz | en | 0.94523 | 1,279 | 3.4375 | 3 |
We’re all familiar with the term data breach. A hacker enters a system and successfully extracts sensitive information, typically for identity theft, and the targets are most often businesses. This can lead to damaged reputation, catastrophic data loss, downtime and even full corruption. Although a data breach is an old concept, there are a few things that many people aren’t aware of. By gaining a deeper insight into data breaches, you’re aiding in the overall security and protection of your business. Here are four things to know about them.
1. Data Breaches Don’t Happen Overnight
Typically, when a business gets hit by a cyberattack, there is a false assumption that it came out of nowhere. Although it might appear that way on the surface, data breaches can sometimes take months for a company to detect, according to Forbes. Oftentimes, cybercriminals are lurking around in networks, conducting research on specific areas of a system, gradually stealing information and installing malicious software before the company is even aware. Even more often, companies have the mentality that a cyberattack or a breach could never happen to them, when in fact, it could be occurring as we speak, and they just don’t know it yet.
We’ve mentioned previously that a hacker attempts an attack every 39 seconds and that a ransomware attack can occur as often as every 11 seconds. Although this is true, keep in mind that attempts are not data breaches quite yet, and it can still take a while for a cyberattack to fully take effect. For these reasons, we encourage any business to be consistently proactive and up to date with its cybersecurity practices.
2. Data Breaches are very Versatile
Wherever there is a network with files, passwords and other private information, there are cybercriminals trying to break down the door to enter. Some types of cyberattacks include ransomware, phishing, malvertising, pharming, spear-phishing, and much more. The most common platform for hackers to nest is email. Hackers accomplish this through phishing emails that encourage users to click on dangerous links or malicious attachments within the emails to exploit their information or install malware on their devices. Cybercriminals will also use this same tactic to perform a ransomware attack, which not only exposes confidential information but can cost a company catastrophic amounts of money. Businesses that operate remotely need to be mindful of unsecured networks that are commonly found in public places like hotels and restaurants. Breaches can also occur through fake networks that are created by cybercriminals.
Additionally, social media has become a popular data breach medium. There are 50 million to 100 million active users on Facebook, and 14 million of those are malicious profiles. Many of us are also familiar with “bots” that are commonly found on Instagram and Twitter, and although cybercriminals continue to use fake accounts, attacks are becoming more targeted and personalized through spear-phishing. Hackers will do things like research a victim’s interests on Facebook and go through their Twitter feeds to learn more information about the person or business. They will also go as far as sending malicious links to a victim while impersonating the person’s family member or friend, so that the message looks legitimate.
Depending on your business, recovering from a data breach is possible. However, many do not recover due to the amount of data loss, the potential for another attack and the high recovery expenses. For example, the average cost of a data breach is $3.92 million, and a ransomware attack can cost a single company up to $133,000, with no guarantee of data retrieval.
3. Data Breaches Happen to Small and Medium-Sized Businesses
Believe it or not, 81% of cyberattacks happen to small and medium-sized businesses. The culprit? Lack of cybersecurity solutions and protocols. Typically, businesses of these sizes believe that a hacker will bypass their organization because it’s smaller; therefore, they don’t invest in the necessary and critical security tools. In short, this makes a hacker’s job easier to execute, resulting in larger damage. Take the first step in preventing a data breach and lose the mentality that it will never happen to you, because it can, and it will.
4. Data Breaches are Completely Preventable
Did you know that cloud security guarantees 99.99% business protection and that 97% of attacks could have been prevented with a cloud investment? Cloud security can oftentimes be overlooked or perceived as a spare tire: you don’t want it until you have no other choice. What happens if you have a flat tire and you don’t have a spare? Ask yourself the same thing about your business and a data breach: a hacker successfully attacks your network, but you don’t have cloud solutions to save you from data loss, downtime and financial impact.
First and foremost, invest in services like cloud backup and Disaster Recovery as a Service (DRaaS). These solutions ensure that you have multiple copies of your most critical data and that they’re easily retrievable in the event of a data breach. Additionally, Security as a Service (SECaaS) solutions are your best bet in preventing a data breach. SECaaS comes equipped with several services that you can choose from for your business model. They include: Security Information and Event Management (SIEM), Intrusion Detection and Prevention, Email Security, Identity Management and Vulnerability Protection.
Services like SIEM and Intrusion Detection and Prevention are analytical tools that look deep into your network, find vulnerabilities and trends, quickly mitigate threats and help you understand your business. Email is a cyberattack haven. Cybercriminals are constantly sending harmful links and attachments to victimize users. In addition to Email Security, ensure that your business is protected with strong cybersecurity training and a solid cybersecurity culture. This will help reduce human error by educating your employees on how to recognize cyberattacks and what to do in the event of one.
What would your business do in the event of a cyberattack? Would you be prepared? If you don’t have a disaster recovery plan and haven’t invested in cloud solutions, more than likely, your business would not survive. Now that you know a bit more about data breaches, get in touch with a Cyber Sainik expert today. | <urn:uuid:2405a1e4-df6e-4d76-985e-fca376f73da2> | CC-MAIN-2022-40 | https://cybersainik.com/4-things-to-know-about-data-breaches/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00398.warc.gz | en | 0.953221 | 1,307 | 2.671875 | 3 |
Okay, so I do not have a strong background per se in computing, nor networking, and I’m really just starting to learn this stuff now, for a relatively small family business of ours. It is rather fascinating and I’ve got a lot to learn. However, I have recently been playing with OpenVPN server configurations and had some success with bridging. I am working on a routing configuration now. One thing I have spent hours and hours playing with is being able to have a client which is assigned an IP address as such:
(10.8.0.6) with the OpenVPN server:
(10.8.0.1), having access to JUST ONE INTERNAL LAN IP address for a web application.
Allow me to explain further. On the LAN there is a server at 192.168.50.100, which is an Apache server hosting a web application for internal usage. For individuals who work locally, it is all accessible, as one would expect. Port 80 HTTP. Open up a browser and type http://192.168.50.100 and bingo, it is there.
However, as we are expanding and having individuals work remotely, we wish to give them access to this web application too (at http://192.168.50.100). In bridged mode, this works fine; however, they also get access to all the other networked resources/computers et cetera. Sure, a good firewall helps (we have recently installed a WatchGuard firewall, although only for a few other servers; the web application server is not behind it. Yet.) It seems this should be possible to do. But, as I’m fumbling my way through this, I could do with a bit of advice, as currently over a routed connection, it is not accessible.
BTW – my question is very similar to the one posed here:
The only difference being that both computers (including the web application server) are Windows. And yes, I’ve disabled all firewalls for all adapters (the OpenVPN one as well) (only during testing/experimenting), including Windows Firewall, on the gateway computer and the Apache server. In the question linked above, the individual found his solution in IPTABLES; however, I’m quite certain that’s just a Linux thing. (I’ve never heard of it before.)
Nevertheless, these are the steps I’ve taken thus far.
Enabled IP Forwarding on the gateway (OpenVPN server) by the registry edit documented in previous link.
Added within server.conf (push “route 192.168.50.0 255.255.255.0”)
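[Editor’s note: if the goal is to expose only the one host, a narrower variant would push a /32 host route instead of the whole /24. This is only a sketch, assuming the standard 10.8.0.0/24 tunnel subnet from the post:]

```
# server.conf fragment (sketch)
server 10.8.0.0 255.255.255.0
# push a host route for just the Apache box instead of the whole LAN:
push "route 192.168.50.100 255.255.255.255"
```

Note that a pushed route only controls what the client tries to send through the tunnel; an actual access restriction still needs a firewall rule on the server side.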
I’ve experimented on the gateway computer which is part of the private subnet (192.168.50.*) with various command lines “route ADD 10.8.0.0 MASK … ” et cetera. And I’ve done them non-persistently and cleared them out afterwards, after testing that they did not work.
So, any further guidance would be greatly appreciated. Sorry for essay form, just wanted to give as much information up front.
EDITED AND APPENDED TO ANSWER TESSELATINGHECKLERS ANSWER/QUESTIONS FROM BELOW. (BTW – Thanks for the time TH).
(VPN Client <-> VPN Server): Leaving the Apache server aside for a moment, how far have you got with the OpenVPN tunnel itself? Does it connect? Can you get any traffic at all over it, and know it’s working? Use ping on the VPN client and try to ping the server 10.8.0.1 and prove you’re getting a connection.
Yes, the client can successfully connect to the server using a combination of client specific certificates and authorization (username/password). At this point, yes, traffic is going over it and it is working. Even when all traffic for the client is forced over (I only played around with this setting for testing), so, the web browser traffic is passed through it – it works very well. And as for pinging from VPN client to server 10.8.0.1, it works perfectly.. all four packets return as desired.
(VPN <-> LAN on the VPN server): You have enabled routing in the registry, but is it working? The OpenVPN server needs an IP address on the 192.168.50.x network.
Yes, I have enabled routing in the registry, but is it working? I do not know. I do not know how to check whether the OpenVPN server has an IP address on the 192.168.50.? Prior to turning the OpenVPN server on.. if I enter “ipconfig /all” at the command line, I see for the Local Area Network adapter that it has an IP address of 192.168.50.10 (static LAN IP). Once the OpenVPN server/service is started, the new adapter now takes an address of 10.8.0.1… and naturally if I type “ipconfig /all”, then I see for the two different adapters – two different IP addresses respectively listed moments ago. I can ping from the server to 192.168.50.100 and 192.168.50.100 (and all other servers/computers on the 192.168.50.x network can ping 192.168.50.10)… however, the other servers/computers on the 192.168.50.x network cannot ping the newly connected server at 10.8.0.1 nor the connected VPN client (first client) with address of 10.8.0.6 (as an example).
So, is it a matter of configuring the adapter which has an IP address of 10.8.0.1 to be able to communicate with 192.168.50.x addresses?
What do I need to do to get “The OpenVPN server needs an IP address on the 192.168.50.x network.” working? Is this different than the one the actual computer already has on it’s default LAN adapter? That is one part that is getting confusing.
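[Editor’s note: one classic cause of the symptom described above (LAN machines able to ping the gateway’s LAN address but not 10.8.0.x) is a missing return route: the Apache server replies to 10.8.0.6, but nothing tells it to send that traffic back via 192.168.50.10. A hedged sketch of the fix, run in an elevated command prompt on the Apache server, with the addresses taken from the post; verify them for your own setup:]

```
route -p ADD 10.8.0.0 MASK 255.255.255.0 192.168.50.10
```

The -p flag makes the route persistent across reboots; alternatively, the same route can be added once on the LAN’s default router instead of on each host.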
If you look in the client’s routing table (“route print” at the command line), and/or in the OpenVPN connection logs, is it picking up the routes to the 192.168.50.x network that you put in server conf? If so, is the route gateway set to the 10.0.8.1 address of the OpenVPN server? From the client, can you ping the OpenVPN server on the 192.168.50.x address?
This part I will need to do again. Since posting my question here yesterday, I have spent a bit of time messing around and there are no default routes pushed over. I will do that shortly and revise this post at that point. However, what I can say is that when I did push the route 192.168.50.x.. I definitely could NOT ping the 192.168.50.10 address or any other 192.168.50.x address from the client. | <urn:uuid:eba20bc8-d582-411e-8100-6cf93af34280> | CC-MAIN-2022-40 | https://intelligentsystemsmonitoring.com/community/openvpn-server-on-windows-7-how-to-route-specific-ip-addresses-to-clients/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00398.warc.gz | en | 0.94211 | 1,524 | 2.53125 | 3 |
Insomni’hack is an annual security conference in Geneva, Switzerland. Each year, they host a teaser competition prior to the conference. Here’s a writeup of one of the problems, which was to reverse engineer a cryptographic algorithm from a binary executable, worth 400 points (a hard problem). Our team was the only one out of 300+ teams to solve this challenge.
Opening the program in IDA, we see that it’s almost entirely AVX2 instructions, which IDA doesn’t decompile. We’ll have to attack the assembly directly. Below, a “yword” is a 256-bit (32-byte) quantity, named after the ymmN 256-bit AVX2 registers.

The main function has a fairly simple outer structure:
    load 19 constant ywords to [rbp-0x930..rbp-0x6d0] ("constants")
    load code bytes from the text section to [rbp-0xb50..rbp-0xad0] ("passkey")
    vzeroupper
    call func1 to generate keys in [rbp-0x6d0..rbp-0x50] ("keys")
    f = open("/tmp/plaintext")
    buf = f.read()
    while 1:
        block = buf[pos:pos+0x80]
        if len(block) < 0x80: break
        inblock = rearrange block with AVX2 instructions
        outblock = func2(inblock, keys, constants)
        buf[pos:pos+0x80] = rearrange outblock
        pos += 0x80
    open("/tmp/ciphertext", "wb").write(buf)
We can run the program in gdb and dump out the keys just before it reads the input file. Also, with a little bit of GDB sleuthing, we figure out that the block rearrangements just amount to reversing the 16-bit words in the block, i.e.
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F
1E 1F 1C 1D 1A 1B 18 19 16 17 14 15 12 13 10 11 0E 0F 0C 0D 0A 0B 08 09 06 07 04 05 02 03 00 01
which is a self-inverting function. So, now all we have to do is reverse engineer func2 (note that we didn’t even look at func1, because it’s only called to generate the keys).
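The rearrangement can be modeled in a couple of lines of Python (reverse_words is my own name for it):

```python
def reverse_words(block):
    # Split the block into 2-byte words, reverse their order, and rejoin;
    # bytes within each word keep their original order.
    words = [block[i:i+2] for i in range(0, len(block), 2)]
    return b"".join(reversed(words))

inp = bytes(range(0x20))
out = reverse_words(inp)          # starts 1e 1f 1c 1d ... as in the dump above
assert reverse_words(out) == inp  # self-inverting, as noted
```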
func2 appears to take a single block of 0x80 = 128 bytes and encrypt it; no block chaining is used (so two consecutive identical blocks encrypt to the same identical output blocks).
func2 is just a huge function with a normal function prologue and epilogue, with 625 AVX2 instructions in the middle – no jumps, conditionals, or even any non-AVX instructions in between. This seems like a super-daunting task to reverse – each instruction operates on 32 bytes of data, so plugging the whole mess into Z3 will probably fail (I didn’t even try).
So, what I did instead was just try to visualize how data flowed through these instructions. I wrote analyze.py to create a data flow graph, where nodes are the instructions (and movs are elided for simplicity). The first few analyze.py attempts produced really bad graphs because I was leaving constants as separate inputs, even though they were used by many, many instructions (see flow_bad.pdf). After moving constants into the node labels, I got a much nicer graph (flow.pdf) which immediately revealed some high-level structure.
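The graph construction itself is straightforward. A minimal reconstruction of the idea (my own sketch, not the actual analyze.py) links each instruction to the last instruction that wrote each of its source registers:

```python
def build_flow(instrs):
    """instrs: list of (mnemonic, dest_reg, [src_regs]) tuples."""
    last_writer = {}   # register -> index of the instruction that last wrote it
    edges = []
    for i, (mnem, dest, srcs) in enumerate(instrs):
        for src in srcs:
            if src in last_writer:          # unwritten sources = external inputs
                edges.append((last_writer[src], i))
        last_writer[dest] = i
    return edges

edges = build_flow([
    ("vmovdqa", "ymm0", ["mem0"]),
    ("vpaddw",  "ymm1", ["ymm0", "mem1"]),
    ("vpxor",   "ymm2", ["ymm1", "ymm0"]),
])
# edges is [(0, 1), (1, 2), (0, 2)]: instruction 2 depends on both 0 and 1
```

Feeding such edge lists to Graphviz is what produces pictures like flow.pdf.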
First, it’s obvious from the flow graph (at a high level, zoomed all the way out) that there are eight repeats of the same basic structure, suggesting an 8-round cipher (albeit one with 1024-bit block sizes). Second, one particular pattern stands out frequently: the use of a vpmullw/vpmulhuw/vpor - vpsubusw/vpsubw - vpcmpeqw - vpsrlw/vpand/vpaddw/vpsubw network to turn two inputs into a single output. Because every instruction in this pattern operates either word-wise or bitwise, we can analyze the network’s effect on a single pair of 16-bit inputs. As it turns out (through some lucky guessing and checking), this network computes (x * y) % 65537, but where 0 is swapped for 65536. It’s a very clever function, and it’s invertible if you know one of the inputs.
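In Python, a model of this multiply and its inverse might look as follows (the function names are mine; the 0-for-65536 convention is the same trick the IDEA cipher uses for its multiply step):

```python
def mul65537(x, y):
    # 16-bit words are treated as elements of Z/65537, with the word 0
    # standing in for the residue 65536.
    a = x if x != 0 else 65536
    b = y if y != 0 else 65536
    r = (a * b) % 65537
    return r if r != 65536 else 0

def inv_mul65537(z, y):
    # 65537 is prime, so y always has a modular inverse (Fermat's little
    # theorem: b^(65537-2) mod 65537), which is what makes the network
    # invertible when one input is a known key word.
    b = y if y != 0 else 65536
    return mul65537(z, pow(b, 65535, 65537))
```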
At this point, I started to code the cipher in the forward direction to check my understanding – see the first half of
decrypt.py. By simply tracing the flow of data through one round and reimplementing the AVX instructions in Python, it was fairly easy to build the forward encryption algorithm (liberal amounts of debugging and tracing with
gdb were used to check that each step was implemented correctly). After a couple hours, I got it working (producing identical output to the original program), so it was time to run the algorithm in reverse.
The round function breaks down as
    r0 = vmul65537(invecs, getvec(keys, keyoff + 0))
    r1 = vpaddw(invecs, getvec(keys, keyoff + 0x20))
    r2 = vpaddw(invecs, getvec(keys, keyoff + 0x40))
    r3 = vmul65537(invecs, getvec(keys, keyoff + 0x60))
    x0 = vpxor(r0, r2)
    x1 = vpxor(r1, r3)
    y0 = vmul65537(x0, getvec(keys, keyoff + 0x80))
    y1 = vmul65537(vshufnet(vpaddw(y0, x1)), getvec(keys, keyoff + 0xa0))
    y2 = vpaddw(y1, y0)
    o0 = vpxor(y1, r0)
    o1 = vpshufb(vpxor(y1, r2), getvec(constants, 0x200))
    o2 = vpshufb(vpxor(y2, r1), getvec(constants, 0x220))
    o3 = vpshufb(vpxor(y2, r3), getvec(constants, 0x240))
vshufnet is a complicated function mapping a single input to a single output involving a bunch of shuffles and XORs. The vpshufbs, which permute the first input’s bytes according to the second input, are all invertible thanks to the particular constants chosen.

From o0 ^ invshuf(o1) we can recover r0^r2 = x0, which lets us get y0; invshuf(o2) ^ invshuf(o3) gives r1^r3 = x1, which yields y1 and then y2 (just by running the forward calculations for y0, y1 and y2). From there we can calculate r0, r1, r2 and r3 and thereby invert the round function.
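The key observation here, that XORing two outputs cancels the unknown intermediate value, can be checked on single words. A toy demonstration, with invshuf taken as the identity for clarity (the real one permutes bytes):

```python
import random

# Pick arbitrary 16-bit intermediates and form the four outputs exactly
# as the round function listing above does (byte shuffles omitted).
y1, y2, r0, r1, r2, r3 = (random.randrange(1 << 16) for _ in range(6))
o0, o1 = y1 ^ r0, y1 ^ r2
o2, o3 = y2 ^ r1, y2 ^ r3

assert o0 ^ o1 == r0 ^ r2   # x0 falls out without knowing y1
assert o2 ^ o3 == r1 ^ r3   # x1 falls out without knowing y2
```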
Finally, we just invert all eight round functions to produce a decryptor – note that at no point do we have to invert vshufnet. When we’re done, we get a nice plaintext JPG, plaintext.jpg, with our flag.
The amount of computing done in data centers more than quintupled between 2010 and 2018. However, the amount of energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency.
That’s according to research published today in the academic journal Science. Such a slow rate of growth in energy consumption in relation to growth in overall computing power reflects the ongoing shift of computing from old, inefficient data centers operated by traditional enterprises, such as banks, insurance companies, or retailers, to newer facilities built by providers of cloud computing services, such as Amazon Web Services, Microsoft Azure, and Google Cloud.
According to the study, in 2018, the world’s data centers consumed 205 terawatt-hours of electricity, or about 1 percent of all electricity consumed that year worldwide. They consumed 1 percent in 2010 as well.
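A quick back-of-the-envelope check on those figures (illustrative only; “quintupled” and “six percent” are taken at face value) shows how sharply energy intensity fell:

```python
compute_growth = 5.0    # computing "more than quintupled", 2010-2018
energy_growth = 1.06    # energy consumption grew about 6%

# Energy used per unit of computing in 2018, relative to 2010:
intensity = energy_growth / compute_growth
print(f"2018 energy per unit of computing: {intensity:.0%} of the 2010 level")
# roughly 21%, i.e. an almost fivefold drop in energy intensity
```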
Titled Recalibrating global data center energy-use estimates, the report seeks to dispel the often fantastically exaggerated estimates of data center energy consumption and its growth curve, as society increasingly relies on digital services.
In general, commercially operated cloud data centers are much better optimized for efficiency than the legacy enterprise data centers, since their operators have strong business incentives to waste less energy. The less energy a cloud data center uses, the higher the provider’s profit margin.
Corporate data centers, on the other hand, often don’t have such incentives, their managers rewarded only for maintaining uptime, not for doing it efficiently. They are notorious for not only being designed and operated inefficiently but also for not having their energy consumption closely tracked – or tracked at all – by their managers.
The latest data center energy use findings come as European Union officials entertain imposing energy efficiency regulations on the block’s data center operators. As they do so, commercial data center providers have been lobbying them to create rules that would incentivize traditional enterprises to move out of their old data centers and into commercially operated facilities faster.
Writing about the new study on a company blog, Urs Hölzle, Google’s senior VP technical infrastructure, said that on average, “a Google data center is twice as energy efficient as a typical enterprise data center. And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power.”
The new study, authored by a group of researchers from Northwestern University, UC Santa Barbara, and the US Department of Energy, shows that the energy efficiency gains the data center industry has made over the past 10 years persist, although the latest gains aren’t as dramatic as they once were.
One of the authors is Jonathan Koomey, a former Lawrence Berkeley National Laboratory scientist and one of the leading authorities on data center energy use and its impact on the environment. He's been studying the subject for more than two decades.
A previous Koomey-led study of data center energy use in the US, which was paid for and published by the US Department of Energy in 2016, found that collectively, all data centers in America consumed 2 percent of all electricity consumed nationwide.
The 2016 study found that in the US, data center energy consumption grew by 4 percent between 2010 and 2014. That’s after it grew 24 percent in the preceding 5 years, and nearly 90 percent between 2000 and 2005.
The Shift to the Cloud
Speaking with DCK Thursday, Koomey said he and the other researchers found that there had been a dramatic decrease in recent years in the amount of “traditional” enterprise data center capacity online and a subsequent decrease in overall energy consumption by this class of computing facilities.
“Traditional data centers go down a lot,” he said. “The shift is both to hyperscale and to the cloud non-hyperscale.”
The authors considered any cloud-provider data center larger than 40,000 square feet hyperscale, Eric Masanet, associate professor at Northwestern University and the study’s lead author, explained in an email.
Smaller traditional data centers housed 79 percent of the world’s compute instances in 2010, the researchers wrote. By 2018, 89 percent of compute instances were hosted by cloud data centers, both hyperscale and smaller cloud computing facilities.
Better Gas Mileage
Servers, storage, and network hardware on its own consumed more energy in 2018 (130TWh) than it did in 2010 (92TWh). But these devices use energy much more efficiently now than they did a decade ago, meaning a lot more computing for every 1Wh used.
Additionally, the authors said, new data suggested that modern data center infrastructure systems (cooling and power) were so much more efficient than earlier ones that the decrease in their energy use was “enough to mostly offset the growth in total IT device energy use.”
Fighting 'Simple-Minded Extrapolations'
The findings contradict reports that come out periodically painting the world’s growing appetite for digital services as creating a new, quickly growing environmental threat in the form of cloud computing infrastructure.
One report the authors point to was published by the Independent in 2016. It said data centers had gone from consuming “virtually nothing 10 years ago to consuming about 3 per cent of the global electricity supply and accounting for about 2 per cent of total greenhouse gas emissions,” or the same carbon footprint as the airline industry. The report went on to predict that the total amount of data center energy use would triple over the next 10 years.
Koomey said such reports were based on too simplistic an approach to calculating energy consumption. The typical mistakes, he explained, include using projections of future data growth and extrapolating data center energy consumption growth to support it without taking into account energy efficiency gains; or taking a data growth rate from a particularly high-growth period and extrapolating it for many years into the future.
Combine those two, and “you end up with the possibility of some [projection] mistakes being very large,” he said. “Simple-minded extrapolations can get you in real trouble.”
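To see how large those extrapolation mistakes can get, here is a small illustration in Python. The growth and efficiency figures below are invented for the sake of the example; they are not numbers from the study:

```python
# Illustrative only: the rates below are made up to show the mechanics
# of the mistake Koomey describes, not figures from the study.

def naive_projection(energy_now, demand_growth, years):
    # Mistake: assume energy use grows in lockstep with computing demand.
    return energy_now * (1 + demand_growth) ** years

def adjusted_projection(energy_now, demand_growth, efficiency_gain, years):
    # Each year, efficiency gains offset part of the demand growth.
    net = (1 + demand_growth) * (1 - efficiency_gain)
    return energy_now * net ** years

base = 205.0  # TWh, the study's 2018 estimate
print(naive_projection(base, 0.20, 10))          # demand up 20%/yr -> ~1269 TWh
print(adjusted_projection(base, 0.20, 0.15, 10)) # with 15%/yr efficiency gains -> ~250 TWh
```

With the same assumed demand growth, ignoring efficiency gains inflates the ten-year projection roughly fivefold, which is the flavor of error the researchers say produces headline-grabbing forecasts.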
Koomey and his colleagues used a “bottom-up” approach to make their estimates. “We like to count gadgets,” he said. Instead of drawing conclusions based on data growth projections alone, “we’re focused much more on making sure that the physical characteristics of the systems are well represented.” | <urn:uuid:7855ac30-aff7-4446-aae6-4be6c40991c9> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/energy/study-data-centers-responsible-1-percent-all-electricity-consumed-worldwide | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00398.warc.gz | en | 0.95104 | 1,339 | 3.203125 | 3 |
Opioid epidemic: Weighing alternatives for pain management
Providers, payers and consumers will require data to make informed care decisions
There are two sides to the ongoing opioid epidemic. As described by the National Academies of Sciences, Engineering and Medicine, the crisis “lies at the intersection of two public health challenges – reducing burden of suffering from pain and containing the increasing toll of the harms that can arise from the use of opioid medications.”1
This definition encapsulates the dilemma physicians face when determining how to help patients manage pain.
Multifactorial reasons behind the opioid epidemic
When pain became known as “the fifth vital sign” in the early 2000s, physicians increasingly sought to alleviate pain with opioids. At the time, there was a low perceived risk of dependence on long-acting prescription opioids, and pharmaceutical companies downplayed the risk of dependence or addiction in sales and marketing activities2. As physicians increased prescribing, especially of synthetic opioids, opioid use disorder (OUD) developed in many patients. In others, leftover prescriptions in the house contributed to OUD in family members. When they could no longer get prescription opioids, some people with an OUD turned to inexpensive heroin and fentanyl on the streets.
As a result, North American communities are widely affected by the opioid epidemic. According to the Centers for Disease Control and Prevention (CDC)3, the number of overdose deaths involving opioids was six times higher in 2017 than it was in 1999 and, on average, 130 Americans die every day from an opioid overdose.
Alternatives to opioid prescriptions
There are, of course, alternatives to prescribing opioids to treat pain, especially chronic pain, which is not an FDA-approved indication for opioids. Examples include non-opioid analgesics, which have been shown to be as effective as opioids4, physical therapy, massage, relaxation, biofeedback, chiropractors and a host of other options.
However, many of these alternatives are more expensive – or perceived as more expensive. There is wide variation among coverage of alternative therapies for pain management, which may be due to the lack of clear best practices, difficulty creating coverage policies for these non-regulated therapies, or economic incentives.5
But in weighing the risks and costs of pain management treatment plans, it is important to understand the burden of opioid misuse and abuse. Opioids may incur lower costs up front, but the long-term costs of treating addiction across a population are high. One study estimates the economic burden of prescription opioid misuse to be $78.5 billion each year, including healthcare, lost productivity, treatment and criminal justice costs.6
In an ideal situation, prescribers would have guidance as to which pain therapies, for which people, would lead to the best outcome. (For some patients, a stable dose of opioids could be the right course of action, and providers should not be penalized for using opioids appropriately.) Payers would understand which providers are achieving better outcomes and could help direct consumers to the right physicians. Consumers would have more access to information about effective alternatives, where appropriate.
This ideal state is not yet reality. But healthcare is making early progress toward helping stakeholders use data to make more informed decisions about pain management.
- Zee AV. The promotion and marketing of OxyContin: commercial triumph, public health tragedy. Am J Public Health 2009;99:221–227. www.ncbi.nlm.nih.gov
- Heyward, J, Jones CM, Compton WM, et al. Coverage of Nonpharmacologic Treatments for Low Back Pain among US Public and Private Insurers. JAMA Network Open. 2018;1(6):e183044. doi:10.1001/jamanetworkopen.2018.3044 jamanetwork.com
- Florence CS, Zhou C, Luo F, Xu L. The Economic Burden of Prescription Opioid Overdose, Abuse, and Dependence in the United States, 2013. Med Care. 2016;54(10):901-906. doi:10.1097/MLR.0000000000000625. ncbi.nlm.nih.gov | <urn:uuid:1e68b844-3942-45fc-8b16-1647d91bda56> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/watson-health/opioid-epidemic-weighing-alternatives-for-pain-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00398.warc.gz | en | 0.920307 | 874 | 2.953125 | 3 |
Referential Integrity Options (Cascade, Set Null and Set Default)
Ever wonder why there are only two options under the INSERT and UPDATE Specification of a foreign key? Why is there no Insert Rule? And to which table in the relationship do these rules apply?
Let’s start at the beginning. Relationships in a database are implemented with foreign keys and primary keys. (For a primer on relationships, download this article.) Referential Integrity is a constraint in the database that enforces the relationship between two tables. The Referential Integrity constraint requires that values in a foreign key column must either be present in the primary key that is referenced by the foreign key or they must be null.
Let’s take the example shown in the database design article referenced in the paragraph above. The Course and Section tables are related to each other in a one-to-many relationship (for each course in the Course table, there can be many sections in the Section table; and for each section in the Section table, there can be only one course in the Course table). This relationship is implemented by creating a foreign key (CourseID) in the table on the many-side of the relationship (the Section table) that references the primary key (CourseID) in the table on the one-side of the relationship (the Course table).
(Notice that, though both tables have primary keys, the only primary key that plays a role in the relationship is the primary key on the one-side of the relationship. We often refer to the tables in a relationship as the primary-key table and the foreign-key table. We will do so here so that this discussion can also be applied to a one-to-one relationship.)
There is potential for violating referential integrity between these two tables if we modify (insert, update or delete) data; in some cases, data modifications can result in referential integrity violations, and in others no violations will occur. For example, deleting rows from the primary-key table can cause referential integrity violations. Let’s say we delete course 2 (SQL Level 2). That will cause sections 1 and 5 to reference non-existing courses, which violates referential integrity. On the other hand, deleting rows from the foreign-key table will cause no referential integrity violations. Let’s say we delete section 7 that references course 4. The result will simply be that course 4 will no longer have that section; the worst that can happen is that it will end up with no sections. No problem.
When a data modification would cause a referential integrity violation, what can the database do to prevent the violation? Disallowing the data modification is always an option. But in some cases, it has other options. In the case we just saw of the deletion of a course, the database can also prevent a referential integrity violation if it also deletes the sections that reference the deleted course (sections 1 and 5 in the Section table). This is called a cascade, because the deletion in the primary-key table is cascaded to the foreign-key table. Other options the database has are to set the foreign key to null or to its default value (as long as the default value references an existing value in the primary-key table).
The table below summarizes all the data modifications that can take place, their impacts on referential integrity, and the choices the database has in preventing the violations in each case.
Notice that the database has choices in the event of a referential integrity violation (disallow, cascade, set the foreign key to null or its default value) but only when an update or a delete is performed in the primary-key table; in all other cases where there would be a referential integrity violation, the only action the database can take is to disallow the operation.
That explains why only the delete and update operations are represented in the dialog you saw above. It also shows that the table in which the deletions and updates in question occur is the primary-key table.
Let’s set up a simple database that will allow you to see how to tell the database what to do when an update or delete is performed in the primary-key table. Run the script below in SQL Server 2008 or higher to create a database called SchoolEnrollment with the Course and Section tables related as shown above.
CREATE DATABASE SchoolEnrollment;
GO

USE SchoolEnrollment;
GO

CREATE TABLE dbo.Course
(
CourseID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Name varchar(50) NOT NULL,
Credits tinyint NULL
);

CREATE TABLE dbo.Section
(
SectionID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Days varchar(3) NULL,
Location varchar(5) NULL,
Time time(0) NULL,
CourseID int NULL CONSTRAINT FK_Section_Course FOREIGN KEY REFERENCES dbo.Course(CourseID)
);

INSERT INTO dbo.Course(Name, Credits)
VALUES('SQL Level 1', 3)
,('SQL Level 2', 3)
,('Database Design', 4)
,('Data Warehouses', 4);

INSERT INTO dbo.Section(Days, Location, Time, CourseID)
VALUES('MW', 'B24', '6:00 PM', 2)
,('TH', 'B24', '6:00 PM', 1)
,('MWF','A18', '3:00 PM', 3)
,('MWF','C12', '3:00 PM', 3)
,('TH', 'A18', '4:00 PM', 2)
,('MW', 'A18', '4:00 PM', 1)
,('MWF','B15', '2:00 PM', 4);
After you run the script above, refresh the database folder so you can see the new database (right-click on the database folder in Object Explorer and select Refresh).
Next, expand the Databases folder, then the SchoolEnrollment folder, the Tables folder, the Section table folder, and finally the Keys folder. Double-click on the referential integrity constraint, FK_Section_Course, in the Keys folder of the Section table.
This will put the table in design mode and will display the Foreign Key Relationships window.
Next, expand the INSERT and UPDATE Specification section (by now, you are probably aware that this is a misnomer; it should be called DELETE and UPDATE Specification), and select the dropdown for either the Delete Rule or the Update Rule.
By default, no action is specified for either operation. If no action is specified, the database will not allow a deletion or update in the primary-key table if it would result in a referential integrity violation. You can also select Cascade, in which case the database will allow the deletion or update in the primary-key table but will also cascade it to the foreign-key table, as discussed above. The other choices are Set Null, which puts Null into the CourseID foreign key on the corresponding Section row(s) whenever the delete or update would otherwise cause a violation, and Set Default, which instead uses the default value of the foreign key column, if one exists.
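If you prefer to script these rules rather than use the designer, the same options can be set in T-SQL. The statements below are a sketch that assumes the constraint name (FK_Section_Course) from the script above; one common approach is to drop and re-create the constraint with the desired actions:

```sql
-- Drop the existing constraint, then re-create it with a cascade rule.
ALTER TABLE dbo.Section
DROP CONSTRAINT FK_Section_Course;

ALTER TABLE dbo.Section
ADD CONSTRAINT FK_Section_Course FOREIGN KEY (CourseID)
    REFERENCES dbo.Course(CourseID)
    ON DELETE CASCADE      -- deletes in Course cascade to Section
    ON UPDATE NO ACTION;   -- updates that would orphan sections are disallowed

-- With ON DELETE CASCADE in place, deleting course 2 also deletes
-- sections 1 and 5, which reference it:
DELETE FROM dbo.Course WHERE CourseID = 2;
```

The dropdown choices map directly to NO ACTION, CASCADE, SET NULL and SET DEFAULT in the ON DELETE and ON UPDATE clauses.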
The Space Apps Challenge took place on April 20 and 21, with more than 9,000 participants from around the world. NASA and its 150 partners -- including the European Space Agency, TechShop and the National Science Foundation -- created 50 challenges for teams that came together during the competition to develop relevant apps for space exploration missions.
The big winner of the "best mission concept" was a team from Athens, Greece, which created Popeye on Mars, a model for a deployable, reusable spinach greenhouse for Mars. The air garden would be able to operate without human intervention for 45 days, or the lifecycle of spinach. The system includes all the needed resources, sensors and electronics for spinach to thrive in the extreme conditions of the Red Planet's surface. The team also proposed systems for harvesting both the plants and the oxygen they produce.
[ Want to know more about the competition? Read NASA Launches Next Space Apps Challenge. ]
The "most inspiring" award went to T-10, a prototype mobile app for use on the International Space Station (ISS), created by a team from London. The app lets astronauts identify points of interest they wish to photograph, and alerts them 10 minutes before the ISS is set to fly over that location. Prior to sending alerts, T-10 checks real-time weather data and doesn't disturb users if the visibility is bad. The app also allows astronauts to upload their photos to Twitter, and can notify T-10 users on Earth when the ISS is about to fly overhead.
Sol, an interplanetary weather app, won for the "best use of data." Developed by a U.S.-based team from Kansas City, Mo., the app for tablets and smartphones integrates weather data from the Curiosity rover on Mars with weather data on Earth. The team wanted to get mobile users interested in science by appealing to them with a sleek design. It also developed a second companion app to augment the Sol experience, allowing users to control a 3-D version of Curiosity or spin a 3-D version of Mars to get facts about the planet.
ISS Base Station, headed by a group from Philadelphia, is a project that consists of hardware and software. The software component is a Web app that tracks the position of the ISS on a world map and connects to an augmented-reality iOS app, which lets users find the ISS in the sky. The hardware is a mechanical arm that receives data from the app and points to the location of the ISS in the sky when it comes overhead. ISS Base Station was the winner in the "best use of hardware" category.
In an effort to make "galactic impact," a team from Gothenburg, Sweden, collaborated on a project called Greener Cities. The team's design is meant to complement NASA's satellite climate data with crowdsourced micro-climate data in order to monitor the environment. Users can "plant" a Greener Cities sensor into their box gardens -- a common gardening method for apartment dwellers -- and receive information about the state of their garden. The data reported from individual gardens can be aggregated and used by city officials to monitor air quality.
The Space Apps Challenge first launched in 2012. During that event, 2,000 developers, designers and scientists from 17 countries participated. | <urn:uuid:dabdffc4-000e-4e50-909b-b7a692cab6ff> | CC-MAIN-2022-40 | https://www.informationweek.com/government/nasa-crowns-space-apps-challenge-winners | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00598.warc.gz | en | 0.941176 | 661 | 2.734375 | 3 |
Is there a difference?
Two of the biggest trends in technology right now are machine learning and artificial intelligence. In fact, the two terms are used almost interchangeably. However, there are subtle but important differences between the two. In many ways, machine learning is a subset of artificial intelligence. Also, the term AI is older than machine learning.
What’s the difference?
At its heart, artificial intelligence involves the attempt to make machines think in the way humans do. The famous Turing test says that a system can be said to be intelligent if a human judge cannot distinguish the system’s behaviour from that of a human. However, current technology is far from achieving this, so artificial intelligence at the moment simply means creating systems that are good at doing what humans are good at. It is a catch-all term. Machine learning also harks back to the middle of the twentieth century. Arthur Samuel defined machine learning as “the ability to learn without being explicitly programmed”.
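Samuel’s definition is easy to see in miniature. The Python sketch below trains a single perceptron on the AND truth table; nowhere is the rule for AND written down — the program adjusts its weights from labeled examples until its predictions match:

```python
# A minimal illustration of "learning without being explicitly
# programmed": the AND rule is inferred from examples, never coded.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, start at zero
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - pred           # learn from each mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train_perceptron(data)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Real machine learning systems are vastly larger, but the loop is the same: make a prediction, measure the error, nudge the parameters.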
Uses and applications
The discipline of machine learning fell out of favour for decades (much like artificial intelligence) but with data mining taking off just before the end of the last century, there was a need for algorithms to look for patterns in each dataset. Machine learning does this but goes one step further and learns from the process, improving performance as it goes along.
Another thing machine learning has been used for is image recognition. These applications are initially trained by humans to look at images and then describe what they are. After thousands or millions of images are used in training, the machine learning system can then look at the pixels and work out if a picture is that of a dog, a house, flowers or a person […] | <urn:uuid:722b3756-e061-4967-a114-b1f7736fb9c9> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/02/13/machine-learning-vs-ai/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00598.warc.gz | en | 0.958015 | 348 | 3.078125 | 3 |
What are the Differences between a Modem, a Router, a Gateway and a Modem Router?
A modem is a small box that connects your household to the Internet using cables. It acts as a digital translator, taking an information signal (Internet data) from your cable or phone lines and making it accessible to your computer.
A router works with your modem to join networks together and allow multiple devices like phones, tablets and computers to use a single network through a wired or wireless connection. The router’s main function is to create and send out the Internet Wi-Fi signal in your home.
Gateway and Modem Router
Modems and routers can be combined into a single box, and gateway is another term for a combined modem router. The primary benefit to using a modem router or gateway is the simplicity of only having a single device to set up.
How They Differ
In the traditional home network chain of command, the router talks to the modem which talks to the internet service provider, resulting in an internet connection and Wi-Fi. A modem will give local Internet access to a single device, but requires a router to connect multiple devices via Wi-Fi.
When your home network uses a modem-router combination device or a gateway, it talks directly to the Internet Service Provider to connect you to the internet and enable a Wi-Fi connection. Since gateways and modem routers combine the functionality of a modem and a router, only one device is required to achieve the same result. | <urn:uuid:b90578ab-5b25-472e-83f6-469e11903aa7> | CC-MAIN-2022-40 | https://www.actiontec.com/what-are-the-differences-between-a-modem-a-router-a-gateway-and-a-modem-router/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00598.warc.gz | en | 0.895607 | 313 | 3.359375 | 3 |
Published on June 6, 2018
Ethereum cryptocurrency is a programmable blockchain that opens the door to vast new realms of applications.
Bitcoin may get most of the buzz, but the somewhat less well-known Ethereum cryptocurrency is the one that records and information managers should watch more closely.
Like bitcoin, Ethereum is based on the blockchain protocol, a peer-to-peer messaging and storage architecture that enables parties to conduct transactions without the use of intermediaries like banks and clearing houses. Unlike bitcoin, however, Ethereum is programmable. It can process small chunks of code called “smart contracts,” which opens up a vastly greater number of use cases. That’s why more than 300 large corporations have banded together to support it.
Although Ethereum cryptocurrency can be bought and sold like any other cryptocurrency, it was created for a different purpose. “While Bitcoin allows you take part in a global financial network, using Ethereum you can participate in a global computational network,” notes Coinbase. Parties in Ethereum transactions can create rules that fire off automatically without the need for approvals or go-betweens. There’s no risk of downtime, fraud or hacking, because every party in an Ethereum blockchain has a full record of every transaction. If the records don’t match, the blockchain automatically falls back to the most recent valid state.
Ethereum has intriguing applications in areas as diverse as voting, supply chain, self-governing markets and real estate. A “coin” in the Ethereum blockchain can be a membership, vote, record of ownership or share. The programmable features make it possible for members of an Ethereum network to create rules around those digital tokens.
Take a crowdfunding platform, for example. An Ethereum-based approach could use a contract to hold contributions in escrow until specific milestones in the development process are reached. Funds can then either be released to the developer or returned to the donors, depending upon performance against goals.
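The escrow rule such a contract would encode can be sketched in a few lines. This is plain Python standing in for on-chain contract code — the class and method names are hypothetical, chosen only to illustrate the decision logic:

```python
# Hypothetical sketch of a crowdfunding escrow rule. On Ethereum this
# logic would live in a smart contract; here plain Python stands in.

class CrowdfundEscrow:
    def __init__(self, goal, deadline):
        self.goal = goal              # funding milestone to hit
        self.deadline = deadline     # time after which the campaign settles
        self.contributions = {}      # donor -> amount held in escrow

    def contribute(self, donor, amount, now):
        if now >= self.deadline:
            raise ValueError("campaign closed")
        self.contributions[donor] = self.contributions.get(donor, 0) + amount

    def settle(self, now):
        """After the deadline, release funds to the developer if the
        goal was reached; otherwise refund every donor automatically."""
        if now < self.deadline:
            raise ValueError("campaign still open")
        total = sum(self.contributions.values())
        if total >= self.goal:
            return ("release", total)
        return ("refund", dict(self.contributions))

escrow = CrowdfundEscrow(goal=100, deadline=10)
escrow.contribute("alice", 60, now=1)
escrow.contribute("bob", 50, now=2)
print(escrow.settle(now=11))  # ('release', 110)
```

The point of putting such rules on a blockchain is that no party can change them after the fact: settlement fires automatically, with every participant holding the same record.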
The same technology could be used to create a peer-to-peer energy sharing network among neighbors. Surplus power from generators of electricity could be sold to power consumers in other homes in the network. Payments would be handled automatically by the blockchain, with a full ledger of accounts kept by every participant.
The Ethereum Project has a collection of more than 1,000 applications that are now being built on the platform. One of them is Augur, a decentralized prediction platform that uses the wisdom of crowds to forecast the future. Studies have long shown that aggregated crowdsourced predictions are more accurate than those of even the most informed individual. In Augur, members buy and sell shares in predictions, rather than securities. The market price reaches an equilibrium based on the collective wisdom of all participants.
Ethereum cryptocurrency could potentially remove much of the delay and cost from complex transactions. That’s why the Enterprise Ethereum Alliance has attracted more than 300 members, including some of the world’s largest financial institutions, pharmaceutical companies and technology firms. They’re working to evolve Ethereum into an enterprise-grade technology that can streamline supply chains, improve field trial efficiency, verify quality controls and generate self-regulating contracts.
The success of technologies like Ethereum could change the nature of records and information management, consolidating vast amounts of paper and electronic forms into a few blockchain ledgers. It’s a trend worth watching. | <urn:uuid:80323e1e-0456-44c2-afe1-f5c23e2db2d5> | CC-MAIN-2022-40 | https://www.ironmountain.com/blogs/2018/ethereum-cryptocurrency-is-blockchain-for-contracts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00598.warc.gz | en | 0.92405 | 687 | 2.8125 | 3 |
Fragmented data is the new oil, and Big Data companies can explain why data fragmentation matters. How many times have you read about the significance of data in modern business? Still, enterprise data is often strewn across a disconnected matrix of data islands, greatly reducing its ability to provide insights.
Common data storage problems for business owners include large data volume, fragmented data, extraneous duplicate copies and dark data, which offers little visibility into what it is or where it lives.
It’s important to keep your data safe, but it’s even more important to analyze it. This is one reason Big Data companies are becoming so popular.
Data fragmentation and the “3 V’s of Big Data”
Data fragmentation happens when a collection of stored data is split into many pieces. This type of data separation makes it nearly impossible for businesses to protect, locate and manage their most important digital asset. Common causes of data fragmentation include data being siloed to unique systems, data being spread across multiple locations, and multiple data copies being stored and replicated.
One framework for understanding the nature of data fragmentation is the “3 V’s of Big Data”:
- Volume: represents the sheer amount of the data in storage – often in the terabytes or petabytes – that becomes too large to process within a traditional data system.
- Velocity: represents the increasing speed at which new data arrives into a data system, which becomes a challenge due to the rise of real-time feeds and analytics.
- Variety: represents the array of structured and unstructured data types that become harder to process, particularly with an abundance of text, image and video-based content.
Data fragmentation is expensive. Data anomalies, support costs for various systems and software platforms, and duplicated data storage costs are all avoidable expenses caused by data fragmentation. A study commissioned by Cohesity found that 63% of businesses have 4-15 copies of the same exact data, and 35% of businesses use six or more unique vendors to manage data workflow.
Common data consolidation structures
The chaos and uncontrollable costs of data fragmentation can be reined in with careful data system structure and design. Three common structures for consolidating enterprise data are the data warehouse, data mart and data lake.
The data warehouse is the foundation of enterprise data management. It’s an integrated and permanent structured database solution. The warehouse consolidates fragmented data with diverse formats across multiple sources and is generally optimized for data analysis and query processing. Its contents can be extracted by front-end tools to support decision making. The data stored in the warehouse helps managers better understand the business’s activities across all areas.
An important subset of the data warehouse is a data mart, a unit of the warehouse used by a specific business function or team. Unlike fragmented data silos, data marts inherit similar structure and properties of the data warehouse. This prevents individual departments from being bogged down by the sheer volume of data.
An alternative or complementary solution for managing and consolidating large volumes of data is a data lake. Unlike a data warehouse, which is a carefully managed system for structured data, a data lake primarily consists of raw, unstructured data.
Turn your Fragmented data into your most valuable asset
Adaptive businesses turn their fragmented data into their most valuable asset. The first step to do this is to consolidate and structure all data sources. From there, businesses can use data mining and analytics to gain insights.
AI, supported by machine learning, can navigate through a data warehouse or data lake to analyze data patterns and relationships. Data mining technology uses these structured systems to identify problems, opportunities and relationships, and even create models to project future events. | <urn:uuid:7139c716-320a-4154-ae14-c956cd495f21> | CC-MAIN-2022-40 | https://www.getsecuretech.com/how-to-structure-fragmented-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00598.warc.gz | en | 0.918669 | 796 | 2.578125 | 3 |
SAN FRANCISCO (Reuters) – Flying commuters could be whizzing to work through the sky less than ten years from now, according to ride-services provider Uber, which believes the future of transportation is literally looking up.
Uber Technologies released a white paper on Thursday envisioning a future in which commuters hop onto a small aircraft, take off vertically and within minutes arrive at their destinations. The flyers would eventually be unmanned, according to the company.
It sounds like the opening sequence to “The Jetsons”, the 1962 US cartoon about a future filled with moving sidewalks, robot housekeepers and spaceflight, but Uber sees flying rides as feasible and eventually affordable.
Uber already offers helicopter rides to commuters in Brazil. The company plans to convene a global summit early next year to explore on-demand aviation, in which small electric aircraft could take off and land vertically to reduce congestion and save time for long-distance commuters, and eventually city dwellers.
Others have also envisioned such aircraft, akin to a helicopter but without the noise and emissions. Vertical takeoff and landing (VTOL) aircraft have been studied and developed for decades, including by aircraft makers, the military, NASA and the Federal Aviation Administration.
Uber is already exploring self-driving technology, hoping to slash costs by eliminating the need for drivers in its core business of on-demand rides. On-demand air transport marks a new frontier, set squarely in the future.
Uber’s vision, detailed in a 97-page document, argues that on-demand aviation will be affordable and achievable in the next decade assuming effective collaboration between regulators, communities and manufacturers.
Ultimately, using VTOLs for transport could be less expensive than owning a car, Uber predicted.
Such on-demand VTOL aircraft would be “optionally piloted,” Uber said, where autonomous technology takes over the main workload and the pilot is relied on for situational awareness. Eventually, the aircraft will likely be fully automated, Uber said.
Hurdles include battery technology. Batteries must come down in cost and charge faster, become more powerful and have longer lifecycles.
Regulatory hurdles must also be solved such as certification by aviation regulators as well as infrastructure needs, such as more takeoff and landing sites.
Uber plans to reach out to stakeholders within the next six months to explore the implications of urban air transport and share ideas, before hosting a summit in early 2017 to work through the issues and solutions and help accelerate urban air transportation.
(Reporting By Alexandria Sage; Editing by David Gregorio) | <urn:uuid:29872927-3bbc-4483-a365-e940a4498d05> | CC-MAIN-2022-40 | https://disruptive.asia/uber-plans-flying-cars/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00798.warc.gz | en | 0.948963 | 525 | 2.671875 | 3 |
New computing technologies have facilitated a flurry of new stimuli in our daily lives. We literally interact with machine learning applications without realizing it. Everything from self-driving cars, to Netflix movie recommendations and fraud detection to shopping recommendations, embody the essence of machine learning.
Machine Learning (ML), one of the mainstays of Information Technology (IT), can be defined as a subset of Artificial Intelligence. It is a set of powerful algorithms and models that gives computers the ability to learn without being explicitly programmed. Machine learning is used extensively across diverse industries to gain business-critical insights and solve business problems. Machine learning is not a totally new concept; it has been around for some 20 years.
The availability of massive amounts and varieties of data, affordable data storage, and computational processing that is cheaper and more powerful have led to a resurgence of interest in machine learning.
Machine learning techniques such as regression, clustering, ensemble methods, transfer learning, classification, natural language processing, neural nets and deep learning, word embedding, and others are applied to massive amounts of data to retrieve valuable information. They are used in several security-related procedures such as face detection, speech recognition, image classification, signal diagnosing, etc.
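As a minimal illustration of the first technique in that list, here is a from-scratch least-squares linear regression. The toy data and function name are invented for the example; real projects would use a library such as scikit-learn.

```python
# A minimal sketch of simple linear regression, implemented from scratch to
# show the core idea: the model "learns" its parameters from data rather
# than having them programmed in.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy training data following y = 2x + 1 exactly.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(round(slope, 6), round(intercept, 6))  # 2.0 1.0
```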
Machine learning is gaining acceptance at a blistering pace across multiple industries, including healthcare, government, automotive, BFSI, and others. Most industries produce massive amounts of data and have been able to recognize the potential of machine learning technology.
By studying data in real time, organizations can work more efficiently. Recent years have witnessed exciting advances in machine learning, which have enhanced its capabilities across wide-ranging applications. Algorithmic and other advances in machine learning hold the promise of supporting potentially transformative progress in a range of areas.
In the healthcare sector, machine learning has led to the development of systems that aid doctors in diagnosing more effectively and tailoring treatments to patients.
Technology plays a crucial role in the financial services sector by identifying important insights in data. Machine learning helps prevent fraud and identify opportunities through extensive data mining. In financial services, it can identify clients with high-risk profiles or pinpoint warning signs of fraud using cyber surveillance.
The retail industry has been a frontrunner in the adoption of machine learning technology. Websites that recommend items to consumers are made possible by machine learning, which analyzes purchasing patterns to generate the suggestions. In retail, machine learning is used to implement marketing campaigns, tailor the customer shopping experience, optimize prices, plan merchandise supply, and more.
The applications of machine learning in oil and gas are vast and still expanding. Current applications include streamlining oil distribution, predicting sensor failure, analyzing minerals in the ground, and identifying new energy sources.
The transportation industry is leveraging machine learning to identify patterns and trends that make routes more efficient. Machine learning serves as an important tool for logistics and other transportation companies.
Machine learning is one of the fastest-growing areas of computer science. It challenges our understanding of key concepts such as privacy and consent as it enhances our analytical capabilities. | <urn:uuid:75663887-2ce3-4cfd-8f27-38c03f5e0654> | CC-MAIN-2022-40 | https://www.iotforall.com/does-machine-learning-hold-the-key-to-successful-automation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00798.warc.gz | en | 0.943353 | 630 | 3.109375 | 3 |
Scientists say that Araucaria araucana, commonly known as the monkey puzzle tree, is now endangered and faces the risk of extinction. Known to be one of the most resilient species, the plant outlived the Jurassic period, which ended roughly 145 million years ago.
According to the IUCN Red List, the monkey puzzle tree is currently endangered both in the wild and on farms. The plant is cultivated in farms and gardens worldwide because of its beauty. Moreover, the fact that it is evergreen makes it an attractive choice for garden decoration.
Today, the tree only grows on the slopes of Patagonia's volcanoes in Chile and Argentina. However, its existence has faced plenty of challenges in the recent past. Wildfires, overgrazing and land clearances led to a decrease in its numbers. Moreover, its seeds are a delicacy for endemic species of birds, particularly the austral parakeet. The parakeets raid trees in flocks of about 15 birds in winter, but their numbers on one tree can easily swell to over 100.
While the parakeets have been quite hard on the trees, they also aid in their survival. The birds feed on the nuts, but in return they spread the seeds to new ground. A recent study in Patagonia found that the birds act as a buffer between the trees and the threats posed by humans. Among the threats noted in the study is overharvesting of the nuts.
The parrots only eat the nuts from the treetops, partially removing the coat on the seeds. This helps the seeds germinate faster. Humans, by contrast, harvest both the nuts and seeds, breaking the plant's growth cycle.
"They play an important role in the regeneration of the araucaria forests since the partially eaten seeds they leave on the ground aren't chosen by seed collectors, and so they retain their germination potential," explained Gabriela Gleiser and Karina Speziale, researchers at Argentina's Biodiversity and Environment Research Institute at the National University of Comahue.
The trees are a source of food not only for animals but also for the Indigenous Mapuche people in Chile and Argentina. The Mapuche way of life depends on the monkey puzzle tree just as much as the parakeets do. Now, these trees are protected by law across Patagonia in hopes that they will continue to survive.
Via CNN
Lead image via Pixabay
Thermal imaging has been incorporated into video surveillance applications for many years, at first mostly in military and critical infrastructure environments. As the benefits of the technology became more well-known, other vertical markets such as manufacturing and agriculture began adopting thermal as well, and not solely for security. Thermal security cameras can be configured to send alerts when changes in temperature are detected, an invaluable tool for a business that depends on machinery not overheating or refrigeration not failing. Thermal is also an advantage when the video image itself may not be clear, due to atmospheric conditions or extremely low light.
How Does Thermal Work?
Anything with a temperature above absolute zero (-273.15 °C / −459.67 °F) emits infrared radiation, an electromagnetic wave with wavelengths ranging from 0.7 μm to 1,000 μm. Although we can’t see it, we experience it every day, such as the infrared energy that comes from the sun. Thermal imaging cameras have special detectors that detect infrared radiation emitted by target objects or people. They calculate the corresponding relationship between the radiation energy and its temperature, and show the surface temperature of the target through different values on the display monitor.
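The relationship between temperature and emitted radiation can be illustrated with the Stefan–Boltzmann law, under which the power radiated by an ideal black body grows with the fourth power of absolute temperature. Real thermal cameras rely on calibrated detector models rather than this simple formula, so the sketch below is conceptual only.

```python
# Conceptual illustration of why small temperature differences are
# detectable: by the Stefan-Boltzmann law, the power radiated per unit area
# of an ideal black body grows with the fourth power of absolute temperature.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(temp_celsius, emissivity=1.0):
    """Radiant emittance in W/m^2 for a surface at the given temperature."""
    kelvin = temp_celsius + 273.15
    return emissivity * SIGMA * kelvin ** 4

normal = radiated_power(37.0)   # typical skin temperature
fever = radiated_power(39.0)    # elevated skin temperature
print(f"{normal:.1f} W/m^2 vs {fever:.1f} W/m^2 "
      f"({100 * (fever - normal) / normal:.1f}% more)")
```

Even a two-degree skin-temperature difference changes the emitted power by a few percent, which a sensitive detector can resolve.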
Why Use Thermal to Monitor Human Skin Temperature?
Today, thermal is garnering attention for its ability to monitor human temperature from a distance.
Thermal imaging is an efficient way to monitor skin temperature within groups of people because it can be done at a distance, is contact-free, and has a fast detection speed. This saves manpower and material resources and allows for a seamless flow of people passing through.
Dahua’s Thermal Temperature Monitoring Solution
Dahua’s thermal temperature monitoring solution has been deployed across the globe. The solution combines a hybrid thermal network camera, a blackbody temperature calibration device, and a feature-rich, 4TB NVR. Read on to learn how it works.
The Dahua Thermal Temperature Monitoring Solution is not FDA-cleared or approved. The Solution should not be solely or primarily used to diagnose or exclude a diagnosis of COVID-19 or any other disease. Elevated skin temperature in the context of use should be confirmed with secondary evaluation methods (e.g., an NCIT or clinical grade contact thermometer). Public health officials, through their experience with the Solution in the particular environment of use, should determine the significance of any fever or elevated temperature based on the skin telethermographic temperature measurement. The Solution should be used to measure only one subject’s temperature at a time. Visible thermal patterns are only intended for locating the points from which to extract the thermal measurement. | <urn:uuid:b70197d4-9186-432c-910a-8847584dcf09> | CC-MAIN-2022-40 | https://us.dahuasecurity.com/thermal-imaging-and-video-surveillance-new-applications-for-operations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00798.warc.gz | en | 0.913174 | 532 | 2.796875 | 3 |
The HTTP protocol is, for the most part, running the web: it is the communications protocol that drives most internet traffic. New ways of optimizing internet communications are emerging, whether encrypted, encoded, or clear text, over UDP or TCP. Among them is the ability to batch multiple HTTP requests together into a single request. Batching multiple HTTP requests helps limit the payload overhead and the round-trip time (RTT) of a new request, meaning it saves time and cost.
The batching of multiple HTTP requests is primarily used to group representational state transfer (REST) API calls and is supported by multiple protocols and vendors; among them is the Open Data Protocol (OData). An open standard that enables a simple method of creating and consuming REST APIs that interoperate and can be queried, OData was developed by Microsoft in 2007 and is used in applications from Microsoft, SAP, and other vendors.
As a part of the OData specification, multiple REST API calls over HTTP can be batched into a single HTTP Request, saving precious and expensive network time and allowing the application better utilization of the allocated bandwidth.
When trying to secure applications that are batching HTTP requests, a challenge arises with the application of attack signatures.
There are three different types of attack signatures, depending on the part of the request to which they apply: the URL, the headers, and the payload.
Here is an example of such a request:
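The original example is not reproduced here, but an illustrative OData-style batch body is easy to construct. In the sketch below, the boundary string and service paths are hypothetical; the point is that one outer request carries several inner requests, each with its own method, URL, and headers.

```python
# Build an illustrative OData-style $batch body: one outer HTTP request
# whose payload contains two independent GET requests. The boundary name
# and service paths are invented for the example.

def build_batch(requests, boundary="batch_example"):
    parts = []
    for method, url in requests:
        parts.append(
            f"--{boundary}\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n"
            "\r\n"
            f"{method} {url} HTTP/1.1\r\n"
            "Accept: application/json\r\n"
            "\r\n"
        )
    parts.append(f"--{boundary}--\r\n")
    return "".join(parts)

body = build_batch([
    ("GET", "/sap/odata/Customers('ALFKI')"),
    ("GET", "/sap/odata/Orders?$top=2"),
])
print(body)
```

The outer request would be sent as a single POST to the service's `$batch` endpoint with `Content-Type: multipart/mixed; boundary=batch_example`.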
The outer request contains other HTTP requests, each with its own headers and URL.
But, when a web application firewall (WAF) processes an HTTP request with multiple batched requests as a part of the payload, it will look at all the batched requests as a single payload. Therefore, it will only use payload-related signatures, which can lead to false positives and undetected attacks.
In F5 Advanced WAF v16.1, F5 has added native parsing and support for HTTP batched requests. This allows Advanced WAF to distinguish between each HTTP request individually – and not collectively in a batch – and therefore run the proper signatures on the right parts of each request.
F5 Advanced WAF protects all OData or other traffic with HTTP Batched Requests without risk of missing attacks or producing many false positives.
SAP leverages the OData protocol to communicate and interoperate with applications, software, and devices that are not SAP offerings. Because OData runs over HTTPS, any programming language, and for that matter any developer, can work with an OData message. This allows non-SAP products to connect with SAP using HTTPS, as the interface to OData is based on XML or JSON.
SAP Fiori delivers tools that empower designers and developers to create and optimize native mobile and web apps with a consistent, innovative user experience across platforms. It provides a modern user experience on any device and for every user, delivering a simple, productive work-from-anywhere experience. OData enables non-SAP apps to integrate and interoperate in an SAP Fiori environment.
While interoperability and easy communications are essential, so is security, especially for SAP Fiori deployments that are internet-facing and consume analytical applications or that use search over the Internet.
In a blog published by SAP, “Considerations and Recommendations for Internet-facing Fiori apps,” SAP states that a WAF “should be placed in front of the SAP Web Dispatcher, monitoring and controlling all incoming HTTP requests,” and that a WAF should be deployed “between a trusted internal network and the untrusted Internet.” The blog goes on to point out that, among the security capabilities available in a WAF, it should stop Distributed Denial of Service (DDoS) attacks, particularly “so they cannot reach your SAP S/4HANA system.”
Support of the OData protocol by F5 Advanced WAF enables customers to protect SAP applications with higher efficacy and fewer false positives.
For more information on F5 solutions for SAP Fiori and S/4 HANA, please review the following:
Quick and Secure: SAP Migration to the Cloud (F5.com)
Mitigating Active Cyberattacks on Mission-Critical SAP Applications | DevCentral (f5.com)
For more information on SAP Fiori and the application of a WAF to ensure security and reduce false positives associated with SAP Fiori and its use of OData and HTTP batch requests, you can review the following: Considerations and Recommendations for Internet-facing Fiori apps | SAP Blogs
Deployment in the Intranet or on the Internet | SAP Help Portal | <urn:uuid:93d69b9d-e3f8-4bc9-bfdb-9ff59357bc41> | CC-MAIN-2022-40 | https://www.f5.com/es_es/company/blog/securing-sap-fiori-http-batched-requests-odata-with-f5-advance | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00798.warc.gz | en | 0.903395 | 962 | 2.734375 | 3 |
Integrating technology into your district’s student suicide prevention program can make a huge difference in a child’s life
Students are spending more time online, and it’s a trend that will undoubtedly continue even in a post-pandemic world. Likewise, they are also doing more learning online, both in and outside of the classroom. For example, completing homework often requires online research and collaboration with other students in a cloud app like Google Docs or Word.
For this reason, K-12 cyber safety programs are critical for keeping students safe online. For example, administrators should be planning for ways to monitor school-provided technology for student suicide digital signals as part of their broader student suicide prevention program.
What is a digital signal? It’s anything that someone posts on a digital platform that can be identified and categorized. For example, if a student sends an email to another student describing the fact that they are depressed and want to kill themselves, an automated system can identify the intent of the email and categorize it as a student suicide digital signal.
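A deliberately simplified sketch of that categorization step might look like the following. Real monitoring systems use trained models and human review; the phrase lists and function here are illustrative only.

```python
# Simplified sketch of categorizing a message as a potential risk signal.
# Real systems use trained machine-learning models plus human review; the
# phrase lists below are illustrative only.

RISK_PHRASES = {
    "self_harm": ["hurt myself", "cut myself"],
    "suicide": ["kill myself", "end my life", "want to die"],
    "cyberbullying": ["nobody likes you", "everyone hates you"],
}

def categorize(message):
    """Return the set of signal categories a message matches."""
    text = message.lower()
    return {label for label, phrases in RISK_PHRASES.items()
            if any(p in text for p in phrases)}

print(categorize("I'm so depressed, I want to die"))   # {'suicide'}
print(categorize("See you at practice tomorrow"))      # set()
```

Any match would be forwarded to a trained staff member rather than acted on automatically.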
Digital signals can exist in text, video, and/or image files. And, it’s important for district admins to understand that these signals aren’t only being shared via instant messaging or social media.
Unfortunately, students aren’t only using school-provided technology in the ways they’re supposed to. School admins we work with are finding a variety of inappropriate and harmful digital signals in district Google and Microsoft domains.
In these cases, the digital signals reveal toxic online behavior in which students are planning to inflict, or are inflicting, harm on themselves and/or others. Student digital signals can point to discussions of self-harm, thoughts of suicide, cyberbullying, discrimination, substance abuse, threats of violence, sexual exploitation and/or inappropriateness, and more. School IT teams are finding student safety risk signals in school-provided emails, shared drives, files, and chat apps.
Research shows that adolescents are likely to talk about suicidal thoughts in person, but they also express them through digital media such as social networking sites, blog posts, instant messages, text messages, and emails. Further, researchers found an increase in the number of suicide digital signals over the four years of their study. They concluded that using digital outlets to convey distress may become even more common.
To be clear, technology isn’t a silver-bullet for preventing student suicides. And K-12 IT admins are not mental health counselors. The first people to know about a student’s suicidal issues are usually going to be their peers, family members, teachers, and/or other trusted individuals in their lives. It is important for schools to train faculty, parents, and students on how to identify potential suicide risk signals and respond appropriately as part of their suicide prevention program.
However, integrating technology into your district’s student suicide prevention program makes a huge difference for students in crisis and for your community as a whole.
5 Student Suicide Digital Signals that School Technology Can Monitor
Your IT team is in a unique position to monitor for potential student suicide digital signals. You can then establish a process for reporting risk signals to building professionals who are equipped for counseling students in crisis. Here are five digital signals that schools we work with find that could prevent a student suicide.
1. Cyberbullying Signals
Cyberbullying is a big problem around the globe. 60% of parents who have children in the 14 to 18-year-old age range report that their children are being bullied, and 82.2% of that cyberbullying is happening at school.
School leaders are well aware of the need for cyber safety in schools, but don't always think beyond content filtering and blocking students from harmful websites. New technology that can handle cyberbullying monitoring is available, and schools need to monitor internal locations, like Google Docs, Slides, chat apps, emails, etc., for harmful content and behavior.
When school leaders talk about cyberbullying, they relate stories of how students use school systems to harass one another. Stories we’ve heard from their own cyberbullying detection experiences include:
Students using Google Slides to message each other. They used white lettering to make the files look like blank pages.
Most risk alerts come from Gmail conversations between students—meaning they’re using school-provided email to have personal discussions
Students are deleting text, renaming files, and moving files around shared drives to create different versions of a file and make detection difficult without an advanced alerting system
It’s important to note that bullying doesn’t directly correlate to suicide. But, research shows that children and young adults under the age of 25 who are being cyberbullied are twice as likely to harm themselves and exhibit suicidal behavior. Interestingly, the people who cyberbully others are also at higher risk.
Therefore, detecting and addressing bullying behavior early on is an effective way to reduce potential suicidal outcomes in the future. Not to mention the many other benefits it will have for all involved students’ wellbeing.
2. Self-Harm Signals
Student self-harm and suicide are two different things, but they are related.
Self-harm refers to students who hurt themselves in a number of ways, including cutting and/or burning themselves, misusing alcohol and drugs, or hitting themselves against walls or with weapons. Digital self-harm is another form of self-harm that is relatively new and not well understood by child psychologists. The intent of self-harm is to release difficult feelings, not to end a life.
However, research shows that about 65% of students who self-injure will also become suicidal at some point. It’s important to recognize that when a student self-harms, it makes it easier for them to think about suicide. They have “practiced” harming themselves, which reduces the inhibition they would typically feel about taking their own life.
3. Depression and Anxiety Signals
According to the CDC, anxiety and depression in children is a big problem. In children aged 3-17 years old, 4.4 million have diagnosed anxiety and 1.9 million have diagnosed depression.
The American Academy of Child and Adolescent Psychiatry (AACAP) states that most children and adolescents who attempt suicide have a mental health issue, usually depression. Therefore, spotting depression and anxiety signals is critically important to preventing suicide.
Students with anxiety or depression may send out the following types of signals:
A preoccupation with death and dying
Fear of being away from their parents
Worrying about the future and bad things happening
Panic reactions such as heart pounding, trouble breathing, feeling dizzy
Feeling extremely sad or hopeless
4. Excessive Online Browsing
Many parents think their children are guilty of excessive online browsing, but there comes a point where it can be a real problem. There is an illness called Internet Addiction Disorder, and children are as prone to it as adults. Many students do a lot of online browsing without becoming addicts, but excessive online browsing can signal the beginning of a real problem.
A student who has anxiety or depression may spend an unreasonable amount of time browsing to find bad things happening in the world, or may focus on finding violent or destructive content online. That type of signal could mean the student is moving toward more destructive behavior.
5. Survey Responses
Social and Emotional Learning (SEL) is a hot topic this year, as many students are returning to classrooms after an extended period of social distancing. There are indications that SEL can positively impact school violence, bullying, depression, anxiety, and other student safety concerns.
We recently hosted a podcast discussing Social and Emotional Learning (SEL) in K-12 schools with Eileen Belastock, the Director of Technology and Information at Nauset Public Schools. During our discussion, Eileen shared how schools can incorporate something as simple as using a Google Form to do a daily check-in on how students are feeling.
If your school does the same, or are looking to incorporate it into your SEL program, the form can be set up to collect responses in a Google Sheet. That way, admins can monitor responses to spot cries for help and potential suicide risk signals. This could be done manually and/or by using cyber safety artificial intelligence technology to flag potential risks.
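A minimal sketch of that monitoring step, assuming the form responses have been exported as CSV rows. In practice the rows would come from the Google Sheets API, and any flag would be routed to trained counselors; the data and keywords below are invented.

```python
# Sketch of flagging daily check-in responses collected in a spreadsheet.
# Data and keywords are illustrative; real deployments would pull rows via
# the Google Sheets API and route flags to trained counselors.

import csv, io

# Simulated export of a daily "How are you feeling?" check-in form.
sheet_csv = """student,response
A. Rivera,Feeling good about the math test
J. Chen,hopeless and tired of everything
M. Okafor,Excited for the game tonight
"""

FLAG_WORDS = {"hopeless", "worthless", "tired of everything"}

def flag_responses(csv_text):
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["student"] for r in rows
            if any(w in r["response"].lower() for w in FLAG_WORDS)]

print(flag_responses(sheet_csv))  # ['J. Chen']
```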
For IT Admins: Understanding Student Self-Harm and Suicide Signals
As mentioned previously, IT admins are not trained counselors. You can’t be expected to directly help a student in crisis. However, IT teams do effectively partner with student resources in the schools to help provide a window into what is going on in digital platforms. As a member of the IT team, here are a few things you should know about student self-harm and suicide signals.
Student self-harm and suicide are different but related. As mentioned earlier, while these are two different problems, self-harming behavior is linked to an increased risk of future suicidal ideation, and sometimes action. That is why self-harm detection in school-provided technology is critical.
Monitoring must cover images and text. While students will write about harming themselves or taking their own lives, they may also post images that illustrate one activity or the other. The images are just as much a signal as the text.
IT should coordinate with school resources. You’re in a unique position to spot problems in a space that others are mostly blind to. It’s critical that you partner with those people who are trained to counsel students to develop a process defining how you’ll work together. You need to know who to alert when a problem is spotted, and that person needs to know how to manage the issue before you have an irreversible situation on your hands.
Student suicide digital signals are typically a call for help. Schools need to develop a way to detect those cries for help online so they can intervene in situations, and help the student improve their mental health. With the right people, training, tools, and processes, you can help students live long enough to find their way in the world. | <urn:uuid:3f4f36b3-8d58-4311-95a0-78754bf58f6b> | CC-MAIN-2022-40 | https://managedmethods.com/blog/student-suicide-digital-signals/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00798.warc.gz | en | 0.953056 | 2,095 | 3.09375 | 3 |
Every data center has an invisible ceiling that limits the amount of IT equipment, servers, storage, and network devices that it can handle. This is known as the capacity of the data center. To make the most of data center floor space, there has been a shift toward high-density computing. Data centers must guarantee that localized cooling capacity is enough to match non-uniformly distributed heat loads; non-uniform because a data center is a "living, breathing animal" in which some racks may be working at higher loads than others at different times of the day. The original Mechanical, Electrical, Plumbing (MEP) architecture and the choice of data center location have already substantially determined a data center's power and cooling capability. The combination of power, cooling, and floor space creates an unseen barrier that leads to the concept of stranded capacity: capacity that cannot be used by IT loads because of the system's design or configuration. It is incredibly expensive for data centers to fail to meet the operating and capacity needs of their initial design, and making a data center green is a challenge.
Stranded Cooling Capacity
Stranded cooling capacity is the most expensive type of stranded capacity, since it refers to any part of the mechanical system that is working but not contributing to cooling IT equipment. Wasted cooling energy, and consequently wasted money, is one consequence of stranded capacity. Just because you can't see the issue doesn't mean it doesn't exist. With that in mind, here are the effects of stranded capacity a data center manager should be concerned about:
Effects of Stranded Cooling Capacity
- There Is More Stranded Capacity In Most Data Centers Than You Might Think
The data clearly revealed that there was 3.9 times more cooling capacity than IT load at the last 45 sites analyzed by Upsite. This means a lot of fan horsepower is being used unnecessarily, and money is being spent on cooling equipment that is not needed.
- Low Cooling Unit Temperature Set Points Can Strand Capacity
Manufacturers rate their cooling units based on conventional return-air conditions, typically 75°F with a relative humidity of 45%. However, because most sites run their cooling units at lower setpoints than recommended, the rated capacity cannot be met. Because a cooling unit's capacity falls at lower return-air temperatures, the result is a very costly situation in which more cooling units must run.
A typical 20-ton (70 kW) cooling unit, for example, delivers its full 20 tons (70 kW) at a return-air temperature of 75°F and 45% RH. The same unit, however, has a sensible cooling capacity of only 17 tons (59.7 kW) at a 70°F return-air temperature and 48% RH.
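Using the figures from this example, the stranded portion of the unit's capacity can be computed directly. The kW-per-ton conversion is the standard refrigeration approximation.

```python
# Quick sketch of the derating described above: the same nominal 20-ton
# unit delivers less sensible cooling at a lower return-air temperature,
# stranding the difference. Figures are taken from the example in the text.

TON_KW = 3.517  # one ton of refrigeration in kW (approx.)

rated_tons = 20.0      # capacity at 75 F return air / 45% RH
actual_tons = 17.0     # sensible capacity at 70 F return air / 48% RH

stranded_tons = rated_tons - actual_tons
stranded_pct = 100 * stranded_tons / rated_tons
print(f"Stranded: {stranded_tons:.1f} tons "
      f"({stranded_tons * TON_KW:.1f} kW, {stranded_pct:.0f}% of rating)")
```

In other words, roughly 15% of the unit's rated capacity is stranded simply by running it at the lower setpoint.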
- High Relative Humidity Can Strand Capacity
Condensation can build on cooling unit coils in some IT settings due to high relative humidity (RH%). Moisture condensing on cooling unit coils produces heat, which uses some of the cooling capacity of the unit, stranding capacity that could otherwise be utilized to lower the supply air temperature to IT equipment.
- Misplaced Perforated Tiles Can Strand Capacity
Misplacing perforated tiles in your computer room reduces cooling capability. For example, perforated tiles or grates placed in an open area or a hot aisle allow valuable conditioned air to escape the raised floor plenum. The air lost through these tiles does nothing to keep IT equipment cool. This unused conditioned air is stranded capacity.
- Unsealed Cable Openings Can Strand Capacity
Unsealed cable apertures, like misplaced perforated tiles, release bypass airflow, stranding cooling capacity since conditioned air escapes via these cable gaps and cannot be utilized for IT equipment.
Stranded capacity can be recovered without spending loads of money. All that is needed are basic, effective management and control adjustments. These simple steps will recover stranded cooling capacity and lead to energy savings.
How To Avoid Stranded Capacity?
To avoid stranded capacity, you must first identify the limiting factor, then modify the capacity of the remaining two elements to rebalance the three defining parameters.
- Cooling as limiting factor
If cooling capacity is not able to efficiently cool the power load, the data center’s PUE will suffer; systems will become overheated and some of the available power will be wasted as heat output.
Underfloor cabling with limited space for cooling can contribute to stranded power capacity.
- Space as limiting factor
When physical space within a data center facility has been exhausted, modular e-houses are an efficient solution to facilitate continuous upscaling of power capacity. Additionally, the introduction of low footprint infrastructure within the data center can help to optimize the white space and eliminate wasted square footage.
- Power as limiting factor
Overbuilding of data center capacity is a major issue and is largely due to the notion that data centers need to be armed with enough capacity to meet unforeseen demand. In reality, this can lead to excessive stranded capacity which can be very costly. It is important to right-size your data center to support optimal operating efficiency where cooling, power, and space are in balance.
Preventing Stranded Capacity With AKCP Monitoring
As you can see, stranded capacity in a data center can come from a variety of places. Any one of these sources could account for a significant portion of the site's cooling capacity on its own. Together they can add up to a significant loss of resources and money, so they must all be handled to effectively maximize a room's cooling efficiency and capacity. Reducing stranded capacity will increase the amount of load that can be efficiently cooled, as well as the amount of money that can be saved by making the necessary airflow management (AFM) changes.
The Cabinet Analysis Sensor (CAS) features a cabinet thermal map for detecting hot spots and a differential pressure sensor for analysis of airflow. Monitor up to 16 cabinets from a single IP address with the sensorProbeX+ base units. The Wireless Cabinet Analysis Sensor is also available using our Wireless Tunnel™ Technology.
Differential Temperature (△T)
Cabinet thermal maps consist of 2 strings of 3x temperature and 1x humidity sensors. Monitor the temperature at the front and rear of the cabinet: top, middle, and bottom. The △T value, the front-to-rear temperature differential, is calculated and displayed with animated arrows in AKCPro Server cabinet rack map views.
Differential Pressure (△P)
There should always be a positive pressure at the front of the cabinet, to ensure that air from hot and cold aisles is not mixing. Air travels from areas of high pressure to areas of low pressure, so for efficient cooling it is imperative to check that there is higher pressure at the front of the cabinet and lower pressure at the rear.
Rack Maps and Containment Views
With an L-DCIM or PC with AKCPro Server installed, dedicated rack maps displaying Cabinet Analysis Sensor data can be configured to give a visual representation of each rack in your data center. If you are running a hot/cold aisle containment, then containment views can also be configured to give a sectional view of your racks and containment aisles.
Cabinet Analysis Sensor connects to AKCP sensorProbe+ base units. Extendable up to a maximum of 15 meters cable length, you can monitor multiple cabinets from a single IP address. Up to 16 sensors can be connected to a single SPX+.
The latest generation of sensorProbe devices, in a form factor that allows for 1U, 0U, and DIN rail mounting. A low-profile design that is economical on cabinet space. sensorProbeX+ comes in several standard configurations or can be customized by choosing from a variety of modules such as dry contact inputs, IO’s, internal modem, analog to digital converters, internal UPS, and additional sensor ports.
- Every sensorProbeX+ is equipped with Ethernet, Modbus RS485, EXP, and BEB communications.
- Compatible with sensorProbeX+ EXP and BEB units, expand the capabilities of your device.
- 1U rackmount brackets, Tool-less 0U mounting, or DIN rail mounting options.
- Notification by SNMP, Email, SMS (requires optional cellular modem), built-in buzzer.
- Compatible with a wide range of AKCP Intelligent sensors.
- Start with base configuration and build up your device with the modules you need.
- Up to 80 virtual sensors. | <urn:uuid:20858cdb-9e91-433e-a25f-7f9d6ba313bb> | CC-MAIN-2022-40 | https://www.akcp.com/blog/effects-of-stranded-capacity-in-data-centers-and-how-to-avoid-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00798.warc.gz | en | 0.90967 | 1,877 | 2.65625 | 3 |
Service-oriented Architecture (SOA) was declared dead nearly ten years ago. A contributing - but rarely discussed - factor in its demise was the network. Latency between services prevented architects from fully decomposing applications into services with the granularity needed to encourage reuse and, ultimately, composable applications.
Enter Microservices Architecture (MSA). Its principles demand even greater decomposition, with a focus on function (verbs) over object (nouns) as the primary criterion for divvying up an application. From this seemingly subtle change in focus comes greater granularity of services; there are many more functions than there are objects in any given system.
The network is ready. Speeds and feeds of the physical network have increased dramatically. Compute, too, has advanced in accordance with Moore's Law and rendered networking latency almost a non-issue.
Unfortunately, communication latency will take its place.
Inside the container environments used to deploy microservices, we have replicated the Internet's complexity. While a microservice may not need DNS, it still relies on the same kind of name-based resolution that runs the Internet. Application tags (metadata) must be translated to an IP address. Service registries and complex IP tables entries act as a miniature DNS, translating tags to addresses and enabling communication between services.
Exacerbating the latency associated with this process is the ephemeral nature of microservices and their associated containers. With lifetimes measured in seconds or minutes instead of hours or months, name resolution must occur with every call. Time to live (TTL) inside the container world is, effectively, zero.
Even if we ignore this reproduction of one of the biggest sources of communication latency, we are left with the latency associated with TCP. It is not - nor has it ever been - free to initiate or tear down a TCP connection. This source of latency is certainly small but absolutely additive. Each connection - each microservice - required to execute a single transaction adds latency that eventually breaches the tolerance for delay.
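That additive cost is easy to see with a back-of-the-envelope model. All latency figures below are illustrative assumptions, not measurements:

```python
def transaction_latency_ms(num_services: int,
                           lookup_ms: float = 0.5,
                           handshake_ms: float = 1.0,
                           service_ms: float = 2.0) -> float:
    """Latency of a transaction that calls each service sequentially, paying
    a fresh name lookup and TCP handshake on every call (effective TTL of 0)."""
    return num_services * (lookup_ms + handshake_ms + service_ms)

print(transaction_latency_ms(1))   # 3.5  (one monolithic call)
print(transaction_latency_ms(10))  # 35.0 (the same work split across 10 services)
```

Even with generous per-call numbers, decomposing one transaction across ten services multiplies the fixed connection overhead tenfold.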
HTTP/2, despite its dramatic changes in behavior, does not address this problem. HTTP/2 is designed to facilitate the transfer of multiple objects over the same connection, thereby reducing latency for multi-object content such as web pages and web-based applications. Microservices are ideally designed such that each service returns a single response. While multiple requests over an established connection will certainly reduce communications overhead, that cannot happen in a system where multiple requests are distributed across multiple discrete services.
The problem is, then, not network latency but communication latency. Connections still count, and improvements in protocols designed to enhance performance of web-based, multi-transactional communications will not help multi-service transactions.
The result is SOMA: Service-Oriented Micro Architectures. A strange hybrid of service-oriented and microservices architectures that leaves one wondering where one ends and the other begins. Decomposition of applications into their composite function-based services is constrained by communication latency and, ultimately, the sustainability of the code base. While network advances have certainly increased the granularity with which decomposition can reasonably be accomplished, they have not eliminated the constraint. Also a factor is the fact that there are orders of magnitude more functions in an application than objects. That makes the task of managing a pure microservices-architected application something of a logistical nightmare for network operations, let alone app developers. Combined with the inherent issue raised by communications latency, organizations are increasingly developing object-oriented microservices instead of truly function-oriented microservices.
This is ultimately why we see applications decomposing beyond the traditional three-tier architecture, but not so far as to be a faithful representation of function-based decomposition.
Until we address the latency inherent in connection-based (TCP) communications - either with something new or by zeroing in on the system-level implementations - we will continue to be constrained to microservices architectures that are less micro and more services.
Although the internet is nothing new at this point, its influence continues to grow. That said, everyone - including those who originally despised the technology - should learn how to effectively utilize devices that can access it, like computers and tablets. Teaching older individuals how to navigate the technology can be difficult because you have to start from scratch. But just as older generations found a way to teach you how to walk and talk, you too can help your older counterparts understand newer technologies.
Don't forget about the basics
A big hurdle that a lot of people don't properly overcome when giving an internet lesson to older people is that these students often don't understand the very basics of modern computing. Even though you may want to show your student how to create a Facebook account, he or she may not even know how to turn on a computer.
A good way to plan your technology curriculum is to pretend your student has not been exposed to any technology newer than a television set. The good stuff you're accustomed to using on a regular basis - like Google and Netflix - may be important down the line, but it's crucial to teach students how to use a mouse and open an internet browser, for starters.
Increase the font size
Even if your mind stays sharp as a tack as you age, your vision may not. And though your older students might still think like a spring chicken in their golden years, they may come to find that reading normal text becomes incredibly hard, even with glasses. For computers and tablets, your best option is to increase the font size of displayed words. If your student is having trouble reading to begin with, they probably won't respond positively to a smartphone.
Luckily, you're not alone when it comes to training. Our certified IT consultants offer specialized Apple training, Mac repairs, iOS management and more to help you work smarter and faster. Contact MC Services today to learn more. | <urn:uuid:44cbed59-875b-4066-8b8d-d6e3899ec97c> | CC-MAIN-2022-40 | https://www.mcservices.com/post/tech-training-for-older-employees | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00798.warc.gz | en | 0.966139 | 387 | 3.28125 | 3 |
A way of arranging tables in a relational database such that the entity relationship diagram resembles a snowflake in shape. At the center of the schema are fact tables which are connected to multiple dimension tables. Thus a snowflake simplifies to a star schema when relatively few dimensions are used. The star and snowflake schemas are most commonly found in data warehouses where the speed of data retrieval is more important than the speed of insertion. As such, these schemas are not normalized much, and are frequently left in third normal form or second normal form. | <urn:uuid:0d5e6654-0526-4854-8e5e-cb0c59f136d8> | CC-MAIN-2022-40 | https://help.hitachivantara.com/Documentation/Pentaho/5.4/0N0/010/070/060 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00198.warc.gz | en | 0.910073 | 109 | 2.53125 | 3 |
Use Business Continuity Strategy to Reduce Impact of Disasters
A natural disaster or cyberattack can cripple your business. Learn how to reduce damage and keep your company operational with business continuity planning.
Business continuity planning is an essential component of your company strategy. It's a process that ensures that, should a catastrophic event occur, your business is prepared to respond rapidly, minimizing the impact and keeping operations online.
What Is Business Continuity Planning?
Business continuity is a planning process that ensures critical business functions can remain operational in the event of severe disruption. Disruptions may include a natural disaster such as a flood, earthquake, tornado or fire; a cyberattack; a structural or utility failure; or a supply chain disruption.
Business continuity includes the policies, procedures, internal and external communications protocols, dependencies, and roles employees fill during and after a declared disaster.
In the 2019 Ponemon Institute’s Fourth Annual Study on the Cyber Resilient Organization, those organizations with a high degree of cyber resilience report the following:
- Improved ability to mitigate attacks, risks, and vulnerabilities
- Higher confidence to contain, prevent and detect cyberattacks
- More communication about cyber resilience among senior leadership
- Report streamlined IT infrastructure and less complexity
You can think of business continuity as the contingency planning that ensures data, systems, and technologies remain operational, allowing employees and customers to carry on as normally as possible.
How Does a Company Start Business Continuity Planning?
To begin business continuity planning, your company needs to conduct a comprehensive assessment that identifies and prioritizes data and systems.
This assessment should include a deep understanding of the threats and risks to your technologies, identifying the vulnerabilities inherent with each likely scenario. For example, following a flood or earthquake, your physical work locations may be inaccessible to employees. In a cyberattack, data and networks may be unavailable or compromised.
The threat and risk assessments help you identify the business processes that are essential for your company and help prioritize in the event of a declared incident.
What Staff Roles Are Necessary for Business Continuity Plans?
Your business continuity plan needs to create clarity in what could potentially be a chaotic situation. Defining and communicating staff roles is essential. You need to consider skill sets and leadership abilities and consider moving staff into different positions during a disaster.
What Is Disaster Recovery?
Disaster recovery is a crucial component of business continuity planning. It is the actionable work done after a disaster is declared. It usually involves work to recover critical data and systems, restoring operations and mitigating impact on customers.
Disaster recovery focuses on hitting two key markers. The first is the Recovery Time Objective (RTO), the maximum amount of time systems can be disrupted without causing undue harm to the business. The second is the Recovery Point Objective (RPO), the maximum targeted period in which data transactions might be lost during an IT disruption. This figure often helps companies determine data backup schedules.
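That last point is simple arithmetic: the backup interval bounds the worst-case data loss. A minimal sketch with illustrative numbers:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """In the worst case, the most recent backup is one full interval old,
    so up to backup_interval_hours of data is lost. The schedule meets the
    Recovery Point Objective only if that worst case fits within it."""
    return backup_interval_hours <= rpo_hours

print(meets_rpo(24, 4))  # False: nightly backups can lose up to 24 hours of data
print(meets_rpo(1, 4))   # True: hourly backups keep worst-case loss to 1 hour
```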
In disaster recovery, plans and procedures go into effect, as teams and third-party vendors work to restore prioritized data and systems.
What Is Included in Business Continuity Plans?
Along with threat assessments, disaster recovery plans, and personnel roles, business continuity plans often include the following elements:
- Emergency contact information
- Identified essential equipment and services
- Offsite data backups
- Backup power generators or other sources
- An alternate site for business operations
- Communications plans to keep employees, stakeholders, customers, shareholders, and other key players informed and updated
Who Can Help With Business Continuity Planning?
Many companies turn to a third party for managed IT services related to business continuity. Your managed services provider can complete risk and threat assessments, assist in policy development, and manage data storage, backups, and recovery. Your managed services provider can recommend solutions for all aspects of your IT infrastructure, including:
- Managed IT, including hardware and software management, help-desk services and system monitoring
- Managed security services, including automatically updated anti-virus, anti-spam and anti-phishing tools
- System and network monitoring using advanced firewalls that identify suspicious activity, issue alerts, and quarantine threats
The ARCIS Technology Group is a trusted managed services provider delivering business continuity, disaster recovery, cybersecurity, and managed IT solutions throughout northeast Ohio. To learn more, contact us today. | <urn:uuid:62e79467-5106-4e7b-bcf6-886cd93891fc> | CC-MAIN-2022-40 | https://www.arcistg.com/business-continuity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00198.warc.gz | en | 0.934313 | 894 | 2.859375 | 3 |
Get the inside scoop with LoginTC and learn about relevant security news and insights.
January 19, 2022
Can’t get enough of the Wordle craze? Neither can we!
Wordle, the online word game that challenges players to guess a five-letter word in six tries, has captured attention worldwide, and some of us at Cyphercor have not been immune.
The game has users stretching their deductive reasoning skills and learning a few new words in the process. In the spirit of the game, we decided to make a glossary of every cybersecurity-related five-letter word we could think of. Who knows, some of these might end up in your Wordle guesses soon!
Alert: A notification that something, often an attack or a vulnerability, has been detected on an organization’s systems. To use it in a sentence: “I just got an alert on our monitoring system that our firewall is down.”
Allow: In a cybersecurity context, to “allow” access into something is to run the proper checks in the system to ensure that the person trying to access that thing, be it a database, asset, or otherwise, has the proper authorization to do so.
Asset: An asset more generally could be anything that an organization possesses, but here at Cyphercor we often refer to assets as something you can add two-factor authentication onto. “A customer wants to add MFA to their RD Web Access asset.”
Cloud: In the cyber world, clouds aren’t just in the sky — they’re all around us. Cloud computing allows users and organizations to access servers all around the world to save documents to, use services on, access resources from, and more: anytime, anywhere. One of the great inventions of the 21st century, but also very susceptible to cyber attacks. You should always protect access to your cloud services with proper authentication controls.
Cyber: Anything related to computer, information technology, or this virtual, online world we all exist and operate in. Cybersecurity is the industry that seeks to protect that cyber world.
(En)crypt: We know, we're cheating with this one. To "encrypt" something is to encode a message or information by converting it into a secret cipher, which can only be decrypted with the right key or code. "Encrypt" is too long for Wordle, but "crypt" might show up!
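For a playful illustration of the idea, here is a classic toy cipher, the Caesar shift, in Python. It's strictly for fun; nothing like this should ever protect real data:

```python
def crypt(text: str, key: int) -> str:
    """Shift each letter by `key` places; decrypt by shifting back with -key."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)  # leave punctuation and spaces alone
    return ''.join(out)

secret = crypt("crypt", 3)
print(secret)             # "fubsw"
print(crypt(secret, -3))  # "crypt" -- only the right key recovers the word
```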
Event: We mostly use “events” in the cybersecurity world to talk about incidents where an organization’s systems are being attacked. “This organization experienced a cyber event when they were hit with the WannaCry ransomware attack.”
Guard: A guard is something that mediates between two systems of differing security levels, ensuring that the one with higher security protocols doesn’t get exposed via the lower level one.
Logic: The word logic comes up in a couple places within the cybersecurity world. “Logical cybersecurity” is the process of identifying what secure processes should be in place for your organization to properly protect your systems, whereas a “logic bomb” is a type of malware that waits patiently on your system until a specific set of conditions are met before detonating.
Macro: A macro is a program that’s used to perform bulk functions easily, and are more often than not good things. However, if left unprotected, cyber criminals can also utilize macros to infect your computer with malware.
Patch: When the recent Log4j vulnerability was discovered, we all rushed to check if our systems were affected, and if so, “patch” those holes in the system. A patch is a new piece of code introduced into software that fixes, updates, or changes that system or application. You should always stay up to date with any new patches that have been released for software that you use, and always test updates before implementing them.
Phish: The fact that “phish” is in the Wordle word list really is a sign of the times. Phishing is when an attacker sends you a malicious link, most of the time through email, that when opened can allow that hacker to gain unauthorized access to your network and systems. You should always be on the lookout for phishing.
Risky: Risk is the foundation cybersecurity is built on, and knowing your risk level and how to reduce it is key to protecting your network, applications, and systems from malicious actors. To use it in a sentence: “Using Remote Access without implementing two-factor authentication is pretty risky.”
Proxy: Some may recognize this five-letter word from Wordle puzzle #213. In cybersecurity terminology, when we talk about a “proxy” we’re most likely referring to a “proxy server”, which is a server that sits between your computer and the web pages you visit to hide your IP address and protect your identity and information from possible malicious actors.
Spoof: “Spoofing” is when a user pretends to be something they’re not, often faking their IP address, email address, or some other identifier in order to gain unauthorized access somewhere, or fool you into clicking on links in a phishing email.
Theft: Theft in the cybersecurity world is usually about data theft. Sometimes hackers attack networks just to cause a disturbance, but sometimes they want to exfiltrate and sell your data and information — that’s when you can become a victim of data theft.
Virus: Just like physical viruses, a computer virus is something that infects a computer without permission and replicates itself, causing damage and destruction in its wake. Viruses spread from one computer to another often without knowledge of the users.
Interested in learning more about cybersecurity and how you can protect your company from the rise in cyber attacks? Sign up for our monthly industry newsletter below: | <urn:uuid:227f0436-c46e-4c4d-b815-d21b9e981b86> | CC-MAIN-2022-40 | https://www.logintc.com/blog/how-many-five-letter-cybersecurity-words-can-you-name/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00198.warc.gz | en | 0.94532 | 1,230 | 2.890625 | 3 |
Click on the timecodes to jump to that part of the video (on YouTube)
2:26 Introduction, background history covering LaBrea Tar Pits and ARP Cache Poisoning and how they relate to this webcast and how “eavesarp” basically works.
14:15 Demo of “eavesarp” against a Stale Network Address Configuration (SNAC) attack.
Justin wrote an extensive blog post on this topic: Analyzing ARP to Discover & Exploit Stale Network Address Configurations
eavesarp – GitHub: https://github.com/arch4ngel/eavesarp
When you are on a pentest (or an internal assessment) there are a large number of different techniques that you can use from an unprivileged workstation to move laterally, get hashes and/or attack services. Attacks techniques taking advantage of protocols and misconfigurations like LLMNR, GPP, mDNS and WPAD are now commonplace in any attack toolbox.
But what if those don’t work? Is there anything else in this category of attacks that can help you to easily gain access to other systems? Justin Angel has just written a tool we would like to share with the community that will answer these questions — eavesarp.
In this webcast, we talk about an oldish defensive technique that attackers can use to further their access on the inside of a network. We know, we are being very coy about telling you exactly what the issue is. But it is really cool. Trust us. We released a new tool that builds on some existing research, adding another option to the LLMNR, WPAD and mDNS attack toolbox: eavesarp.
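To make the idea concrete, here is a toy sketch (with invented addresses) of one kind of signal this style of ARP analysis can surface: an address that is repeatedly asked for but never answers, hinting at a stale network address configuration. This illustrates the concept only; it is not eavesarp's actual implementation.

```python
from collections import Counter

def find_snac_candidates(arp_events, min_requests=3):
    """arp_events: iterable of (op, sender_ip, target_ip) tuples, where op is
    'request' or 'reply'. Returns target IPs requested repeatedly but never
    seen replying -- possible stale network address configurations."""
    requested = Counter()
    responded = set()
    for op, sender_ip, target_ip in arp_events:
        if op == 'request':
            requested[target_ip] += 1
        elif op == 'reply':
            responded.add(sender_ip)  # a reply proves the sender is alive
    return {ip for ip, n in requested.items()
            if n >= min_requests and ip not in responded}

events = [
    ('request', '10.0.0.5', '10.0.0.9'),
    ('request', '10.0.0.5', '10.0.0.9'),
    ('request', '10.0.0.5', '10.0.0.9'),  # 10.0.0.9 never answers...
    ('request', '10.0.0.5', '10.0.0.1'),
    ('reply',   '10.0.0.1', '10.0.0.5'),  # ...but 10.0.0.1 does
]
print(find_snac_candidates(events))  # {'10.0.0.9'}
```

A host that keeps ARPing for a dead address is worth a closer look; eavesarp automates exactly this kind of traffic analysis at scale.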
And yes, we will be offering some tips on defending against these attacks as well. | <urn:uuid:5dfcbbb1-31aa-4900-a328-fcbbdd719524> | CC-MAIN-2022-40 | https://www.blackhillsinfosec.com/webcast-how-to-attack-when-llmnr-mdns-and-wpad-attacks-fail-eavesarp-tool-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00198.warc.gz | en | 0.937615 | 380 | 2.515625 | 3 |
Since the 1930s, police investigators have studied bullets to find key evidence. At a crime scene, investigators would retrieve any bullets and cartridge cases, and take them to police labs. There, firearms examiners would place the bullets and cartridge cases under a special microscope to compare the marks left by the firing gun on each. They would take snapshots and distribute them to local police departments. When police recovered a gun, they could discharge it to compare the markings left on bullets and casings against those in that collection of snapshots.
Technology changed all that about a decade ago. Now, in communities across the country, the ballistics imaging and matching process is computerized. Investigators still collect evidence at crime scenes, but software programs analyze ballistics images and store the results in databases. And databases from local and regional jurisdictions are linked together in the National Integrated Ballistic Information Network (NIBIN). But the entries in this network of databases cover only guns seized by police.
This ability to analyze ballistics became a topic of interest recently, when a string of sniper shootings terrorized citizens around Washington, D.C., Maryland and Virginia before authorities caught and charged two suspects in the case. (It turns out that Maryland, along with New York, has created a statewide database for handguns owned there but not for rifles, which was the type of gun used in the sniper attacks.)
Now, some are calling for the creation of a national ballistic fingerprint system that would enable police to trace bullets recovered from shootings in all states. Such a proposal is part of the gun control policy debate. But, politics aside, we wondered how technically difficult such a project would be, given potentially millions of gun records.
Not very difficult, says Joe Vince, a former chief of the crime guns analysis branch of the Bureau of Alcohol, Tobacco and Firearms (ATF). “Human fingerprints were once kept on file cards, and now there is a national electronic system,” says Vince, who is now president of Crime Gun Solutions, a consulting company in Frederick, Md. “It should be no different for ballistic fingerprints.”
Vince says such a national system would be based on a computer imaging product known as IBIS (Integrated Ballistic Identification System), which the ATF has used since 1993. IBIS, developed by Forensic Technology of Cote St.-Luc, Quebec, digitally captures images of bullets and cartridge cases, stores them in a database, performs automatic computer-based comparisons of the images and ranks them according to the likelihood of a match. Firearms examiners can then perform microscopic comparisons of the likely candidates.
The IBIS system uses Oracle as its relational database management system. The system could expand to store all guns sold in the nation, says Serge Labrecque, one of the developers for IBIS. “Right now IBIS is a reactive product, taking an image once a crime has occurred” and evidence is recovered, Labrecque says. He adds that IBIS could support more information and images, with records received directly from gunmakers, before they sell or distribute the weapons.
Straightforward? Not so fast. Hurdles include getting local, state and federal police agencies to cooperate, says Wayne Eckerson, research director for The Data Warehousing Institute. “Modeling the database is the easy part,” he says. “The challenge will be bringing together data owned by multiple agencies.” | <urn:uuid:9f41d9f2-1661-46cb-8d8e-9df7d13cb01f> | CC-MAIN-2022-40 | https://www.cio.com/article/270141/enterprise-software-software-changes-ballistics-investigations.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00198.warc.gz | en | 0.950648 | 694 | 2.78125 | 3 |
Welcome to part 3 in my award-winning (I gave myself an award) series of blog posts introducing you to IS-IS, my favourite interior gateway protocol of them all.
In Part 1 (which you’re probably going to want to read before you start this one) you got the overview of it, including how it’s similar to OSPF, how IS-IS levels are the equivalent of OSPF areas, and how it uses the shortest-path first algorithm. You also saw a really basic configuration.
Then in Part 2 we had a look at how IS-IS forms adjacencies, the four kinds of message it can send, and what a Link State PDU (LSP) looks like – the building blocks of the topology.
You’re a good and diligent student, and your reward is some super cool technology in this post. Today I’m going to show you the difference between point-to-point and broadcast interfaces, and I’m going to show you the IS-IS equivalent of the OSPF designated router.
I’m also going to show you how metrics work. IS-IS and OSPF were both built in the dark ages, and so both of them do metrics in weird ways that we need to change. But that’s fine – you just configure it once, and forget about it. I’ll show you all the “deets” (cool way of saying details I made up) so you’ll be in no doubt at all about it.
Ready? Then let’s do it!
There are just two kinds of interface in IS-IS: point-to-point and broadcast.
Just like OSPF, an Ethernet interface will default to being a broadcast interface. If your Ethernet interface is actually point-to-point, you can configure it as such to make things a bit more efficient, because there will no longer be a need for a DIS on the link. I will explain what a DIS is very, very shortly.
The configuration is easy:
set protocols isis interface ge-0/0/0.0 point-to-point
Simple as that!
A REMINDER: OSPF DESIGNATED ROUTERS
On a broadcast network, OSPF elects a Designated Router. This achieves two things.
The first is to make the topology more simple to understand. If you’re not sure why this is, I’ll explain it in a moment when I introduce you to the concept of a pseudonode.
The second is to handle advertising LSAs. In a network of ten routers on a shared LAN segment, all OSPF devices will only become “fully adjacent” with the DR, not with each other. They then send their LSAs to the DR via a special DR-only multicast address, at which point the DR then re-advertises them to all the routers in the network.
This is actually kind of inefficient, because everything has to be sent twice.
This is also part of the reason that non-DR routers on the LAN don’t become fully adjacent with each other. If they did, they would end up exchanging LSAs with each other. By staying in the “2way” state, they never actually exchange LSAs with each other.
This is also why a backup DR is elected. Non-DR routers send their LSAs to a multicast DR-only address – and if there’s only one device receiving them, then it would cause a bit of chaos when a new DR is elected, because everything would need to be re-advertised to the new DR, and then re-re-advertised back to all the non-DR routers.
As such, a backup DR is used. Both the DR and the BDR will become fully adjacent with all non-DR routers, and listen in on all the LSAs. If the DR disappears, the backup DR can immediately take over, because it already has all of the LSAs, and is already fully adjacent with all the other routers.
Pff. Blimey. Okay, let’s see how IS-IS does things more betterer. (Woah, my spell check seems to think that betterer is a word. That’s my kind of spell checker!)
DESIGNATED INTERMEDIATE SYSTEM (DIS)
In IS-IS the DR is called the Designated Intermediate System – which makes sense, because Intermediate System is its name for a router.
The mechanics are a bit different to OSPF.
In IS-IS, a router multicasts its LSPs directly to all the routers on the LAN. They don’t need to be re-advertised by the DIS. Way more efficient! Instead of sending them twice, they are just sent once.
To make sure that every router’s database is correct, the DIS sends out a CSNP, a Complete Sequence Number PDU, every 10 seconds so that all routers can be sure that they do indeed have the latest and greatest information.
For that reason, there’s no need for a backup DIS. All routers take care of sending their own information to the LAN. If the DIS disappears, someone else just takes charge of sending the CSNP.
Just like in OSPF, each router has a priority which is used to elect the DIS. You can see this priority number in the IS-IS Hello packet capture above. The default in Junos is 64 – and just like OSPF, the numerically highest priority wins.
Unlike OSPF, setting a priority to 0 doesn’t stop a router from becoming the DIS – it just means it’s not very likely.
Also unlike OSPF, if a router comes online with a higher priority, it actually does take over the responsibility of being the DIS.
If two or more routers have the same priority, the highest MAC address is used as a tie-breaker. To be more specific, the SNPA (Sub-Network Point of Attachment) is the tie-breaker, which covers both MAC addresses and DLCIs on frame-relay circuits, in case you're from the past.
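As a toy model of that election logic (illustrative Python only, not anything from Junos – the router names and MAC addresses are invented): highest priority wins, and the numerically highest SNPA breaks a tie.

```python
# Toy model of DIS election: highest priority wins, highest SNPA
# (here, the MAC address) breaks ties. All names and MACs are made up.
routers = [
    {"name": "R1", "priority": 64,  "mac": "00:50:56:00:00:0a"},
    {"name": "R2", "priority": 64,  "mac": "00:50:56:00:00:1f"},
    {"name": "R3", "priority": 100, "mac": "00:50:56:00:00:03"},
]

def elect_dis(candidates):
    # Note: unlike OSPF, a priority of 0 doesn't exclude a router here.
    return max(candidates, key=lambda r: (r["priority"], r["mac"]))

print(elect_dis(routers)["name"])  # R3 – highest priority, so MAC is never consulted
```

If R3 left the LAN, R2 would win the re-election on the MAC tie-break – and remember that, unlike OSPF, a newly arrived higher-priority router really does take over as DIS.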
Changing an interface’s priority is nice and easy. Notice below that it’s done on a per-level basis.
set protocols isis interface ge-0/0/0.0 level 1 priority 100
Both the DR and the DIS have another important function, which is to make the topology simpler for SPF to deal with.
Check out this pic, taken from the old JNCIA guide, which you can and should buy and read!
Notice on the left that there’s four routers on a shared segment, each with an adjacency to each other.
The adjacency itself doesn’t matter, that’s easy to deal with.
Where things become a bit more tricky is when a router needs to run SPF, because the more topological connections there are between routers on a shared segment, the more complicated it becomes for SPF to do its thing.
In fairness, in a network of just four routers it's not such a big deal – but every router you add makes this dramatically more complicated for SPF to deal with, because the number of potential adjacencies it has to process grows quadratically.
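To put numbers on it: with n routers fully meshed on a segment, SPF has to consider n(n-1)/2 pairwise adjacencies; with a pseudonode, only n links. A quick sketch:

```python
def full_mesh_links(n):
    # every router on the LAN pairs with every other router
    return n * (n - 1) // 2

def pseudonode_links(n):
    # every router connects only to the single pseudonode
    return n

for n in (4, 10, 50):
    print(f"{n} routers: {full_mesh_links(n)} mesh links vs {pseudonode_links(n)} via pseudonode")
# 4 routers: 6 vs 4, 10 routers: 45 vs 10, 50 routers: 1225 vs 50
```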
To make this more simple, instead of each router seeming to connect directly to every other router on the shared segment, the DIS and the DR do something clever.
Notice on the right that there’s some kind of invisible ghost router. All four real routers are showing as being connected only to this new pretend router that we made up. There’s a name for this haunted ghost router: it’s called the “pseudonode“, and it’s the role of the DR and DIS to generate it.
It’s weird that OSPF does exactly the same thing, and yet it has no official name for this concept. As such, you’ll often hear IS-IS folks using the term “pseudonode” to describe it in OSPF too.
To show you what this looks like, I’m going to change the connection between R1 and R2 from point-to-point, to the default of broadcast.
Let’s take a look at what R1’s database looks like now, focusing in on the neighbor relationships:
root@R1> show isis database R1.00 detail | match neighbor
  IS neighbor: R1.02                 Metric: 100
  IS neighbor: R6.00                 Metric: 100
Woah, look at that. As far as the topology is concerned, R1 is no longer connected to R2. Instead, it looks like it’s connected to itself!
What does R2 show? I’m going to type this command on R1 too, because every router has exactly the same view of the topology:
root@R1> show isis database R2.00 detail | match neighbor
  IS neighbor: R1.02                 Metric: 100
  IS neighbor: R3.00                 Metric: 100
  IS neighbor: R7.00                 Metric: 100
Aah, okay! So it seems that “R1.02” is the pseudonode, and you can guess that R1 took the responsibility of generating it.
So instead of R1 connecting directly to R2, R1 connects to the pseudonode, and the pseudonode connects to R2.
(You can see here a subtle distinction between a “neighbor” and an “adjacency”.)
To quote from the JNCIA guide, “The pseudonode will advertise the neighbor relationships of all routers in its database update; the actual routers advertise a relationship with only the pseudonode”. This takes the strain out of the SPF calculation, because there are fewer adjacencies to compute.
One final thing: this pseudonode doesn’t add anything to the metric (the “cost”) of a path, because all links “coming out” of the pseudonode have a cost of zero. I’ll tell you about metrics in just a second. To prove this, let’s look at the pseudonode in the IS-IS database:
root@R1> show isis database R1.02 detail | match neighbor
  IS neighbor: R1.00                 Metric: 0
  IS neighbor: R2.00                 Metric: 0
There we are. Metric of zero!
METRICS & REFERENCE BANDWIDTH
Like OSPF, IS-IS uses the Shortest Path First algorithm to work out the best route to a prefix.
OSPF’s “cost” is based on the bandwidth of the link. You’ll remember that OSPF was made a million years ago, which is why anything that is 100 Mbps or more needs to have a “reference-bandwidth” set to make it more accurate.
A similar thing is true for IS-IS, but in a different way.
Weirdly, by default IS-IS just gives every link a cost of 10, regardless of the speed! (The one exception is loopback interfaces, which get a metric of 0.) Remember that all of these protocols were made a long time ago, when network requirements were very different.
Don’t worry though: it’s very easy to change this.
First, you can increase or decrease the cost on a per-interface basis. You can even have different metrics for each level:
set protocols isis interface ge-0/0/0 level 1 metric 50
set protocols isis interface ge-0/0/0 level 2 metric 40
A much better idea though is to base the cost on the bandwidth of the link. Just set your “reference-bandwidth” of choice, and you’re “good” to “go”. Nowadays you have to do this in OSPF too, because all links of 100Mb and above default to a cost of 1, so in practice there’s really no difference in setting up either protocol in the year 2021. Here’s how you do it:
set protocols isis reference-bandwidth 100g
At the time of writing, 100g is the biggest number you can choose in Junos, though I’m sure this will change in the future, when 100000000g links are the norm.
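Under the hood the arithmetic is simple: metric = reference bandwidth ÷ link bandwidth, clamped to whatever the metric field can hold. Here’s a hedged sketch (real implementations differ in rounding details):

```python
def isis_metric(link_bw_bps, reference_bw_bps=100e9, wide=True):
    # metric = reference bandwidth / link bandwidth, clamped to the field size:
    # 63 for the original "narrow" 6-bit metrics, 2**24 - 1 for wide metrics
    cap = (1 << 24) - 1 if wide else 63
    return max(1, min(int(reference_bw_bps // link_bw_bps), cap))

print(isis_metric(100e9))             # 100G link: metric 1
print(isis_metric(10e9))              # 10G link:  metric 10
print(isis_metric(1e9))               # 1G link:   metric 100
print(isis_metric(10e6, wide=False))  # 10M link with narrow metrics: clamped to 63
```

That 63 cap is exactly why the “narrow” metrics show their age, as we’re about to see.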
Hey, here’s a fun fact that you’ll never need to know in the real world, but is interesting trivia: there’s actually four costs in an IS-IS TLV. As well as the “Default Metric”, IS-IS also allows you to calculate a path based on the Delay Metric, the Error Metric, and the Expense Metric, which is literally how much money the link costs!
Having said that, even though these numbers are included in the advertisement (as you can see in this picture), these metrics are always set to 0, and no vendor that I know of actually uses them. Still, it tickles me that IS-IS has four costs, including one actual monetary “cost”!
One final thing. See in that screenshot, the first three lines of the IPv4 prefix are to do with the “Default Metric”.
- The first line is 10, which is the metric itself.
- The second line is whether this is internal or external. If this had been redistributed into IS-IS, this bit would be set, so you can tell if a prefix is internal or external – though you’ll see in a moment that this isn’t always true.
- Finally there’s something called the “Down” bit. Hold fire on that for a moment, because we’ll talk about that in Part 4.
Here’s the other way that IS-IS shows its age.
By default, the maximum cost a link can have is 63, because the three original prefix and topology TLVs (IS reachability, IP Internal Reachability, and IP External Reachability) only gave 6 bits to the metric value.
Again, remember that all these protocols were invented a long time ago, when there were pretty much only four routers in the entire world. The idea that networks could have grown to the size and speed of today was the stuff of dreams.
Luckily, the gods of the internet made some extensions that support a 24-bit metric field. These two TLVs are called the Extended Reachability TLV, and the IP Extended Reachability TLV. See, I told you we’d come back to the “extended” bit!
When you use these, it’s called using the Wide Metric, and is configured like this:
set protocols isis level 2 wide-metrics-only
Interestingly, Junos advertises both by default, for backwards compatibility purposes. But even though it advertises both out of the box, it only uses the small metrics by default. So remember to configure the wide metrics if you want to use them, which you definitely do!
SOME RANDOM FUN FACTS ABOUT WORKING OUT THE BEST PATH
Level 1 paths are preferred over Level 2 paths. In other words, if a router has learned a prefix via an L1 router, and it is also learning it via the backbone, it is definitely going to prefer the route within the non-backbone area, because this will (or at least, should!) be where the prefix originally came from.
Internal paths are preferred over external paths. In other words, if an IP is being learned via IS-IS directly, and if that same IP has been redistributed into IS-IS from another protocol, IS-IS prefers its own route.
Finally, when you turn on Wide Metrics, the extended TLVs don’t contain the “external” flag – in other words, IS-IS doesn’t distinguish between internal and external prefixes when you’re using wide metrics.
THAT’S IT FOR NOW!
You are now officially three fifths of the way through this “course” that I wrote for you for free. Wow, that’s pretty nice right? If you want to click here, you might have a good time. Entirely up to you of course.
In the mean time, I really hope you’re enjoying yourself, and having your eyes opened to how sweet IS-IS is.
When you’re ready, I’ve got two more parts for you. Click here for Part 4, where you finally get to learn what an IS-IS area is. It’s different to OSPF, so clear your mind of preconceptions, and then click there to see how they work, and how they can generate default routes automatically.
And if you fancy some more learning, take a look through my other posts. I’ve got plenty of cool new networking knowledge for you on this website, especially covering Juniper tech and service provider goodness.
It’s all free for you, although I’ll never say no to a donation. This website is 100% a non-profit endeavour, in fact it costs me money to run. I don’t mind that one bit, but it would be cool if I could break even on the web hosting, and the licenses I buy to bring you this sweet sweet content. | <urn:uuid:a46048a6-ded5-4d33-a1f8-5161f91d7acf> | CC-MAIN-2022-40 | https://www.networkfuntimes.com/junos-is-is-study-notes-part-3-for-junipers-jncis-sp-and-jncis-ent-exams/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00198.warc.gz | en | 0.935988 | 3,762 | 2.515625 | 3 |
Hunger stones are a typical hydrological marker in the region, dating back to the pre-instrumental era
Photo: @histories_arch / Twitter
Europe is suffering its worst drought in half a millennium, according to the European Commission. Rivers have dried up so much that ‘hunger stones’ have been exposed, and images of them have gone viral on social media.

Incidentally, this is not the first time that hunger stones have been revealed. They appeared four years ago, in 2018, when river levels had similarly dropped.

But this time, things are much grimmer than in 2018. Officials themselves agree on this.

“We haven’t fully analysed this year’s event because it is still ongoing,” Andrea Toreti of the European Commission’s Joint Research Centre said recently.

“There were no other events in the past 500 [years] similar to the drought of 2018. But this year, I think, is worse,” he added.

Talk show host Trevor Noah discussed the drought on the continent and the appearance of hunger stones on August 17, 2022.
Rivers are running so low in Europe that “hunger stones” are being revealed? Is no one else freaking out?! pic.twitter.com/b20YaLZQ3T

— The Daily Show (@TheDailyShow) August 17, 2022
What are hunger stones

Hunger stones, or Hungersteine in German, are a typical hydrological marker in central Europe. They date back to the pre-instrumental era.

Christian Pfister, a climate historian and professor at the University of Bern, Switzerland, noted in an academic paper published in 2010:

…On the other hand, a certain number of pre-instrumental low-water indicators are known for that region. Pointer rocks are known for the Rhine and Lake Constance… Local pointer rocks, called hunger-stones (“Hungersteine”), are known for many rivers…

“It is hoped that future research in historical hydrology will explore the potential of this kind of in situ evidence for the reconstruction of extreme low-flow events,” the paper added.
Hydrological winter droughts over the last 450 years in the Upper Rhine basin: a methodological approach was published in the Hydrological Sciences Journal.

Scientists such as Rudolf Brazdil from the Institute of Geography, Masaryk University, Brno, Czech Republic, along with others, also alluded to hunger stones in a 2013 paper:

Hydrological droughts may also be commemorated by what are known as “hunger stones”. One of these is to be found on the left bank of the River Elbe (Decín-Podmokly), chiselled with the years of hardship and the initials of authors lost to history.

The inscription on this hunger stone reads Wenn du mich siehst, dann weine (“If you see me, weep.”).

“It expressed that drought had brought a bad harvest, lack of food, high prices and hunger for poor people,” Droughts in the Czech Lands, 1090–2012 AD noted.

The following droughts are commemorated on the stone: 1417, 1616, 1707, 1746, 1790, 1800, 1811, 1830, 1842, 1868, 1892, and 1893. “Similarly, Pfister (2006) mentions low-water marks of the River Rhine on the stone known as ‘Laufenstein’,” the paper added.

The Elbe, which flows from the Czech Republic into the North Sea near Hamburg, is one waterway with 22 known hunger stones. They are also found in other rivers such as the Rhine, the Danube and the Weser.

Major rivers across the continent have dried up due to the current drought. These include the Rhine in Germany, the Po in Italy, the Thames in the UK and the Loire in France.
HTTP Parameter Pollution (HPP) is a Web attack evasion technique that allows an attacker to craft a HTTP request in order to manipulate or retrieve hidden information. This evasion technique is based on splitting an attack vector between multiple instances of a parameter with the same name. Since none of the relevant HTTP RFCs define the semantics of HTTP parameter manipulation, each web application delivery platform may deal with it differently. In particular, some environments process such requests by concatenating the values taken from all instances of a parameter name within the request. This behavior is abused by the attacker in order to bypass pattern-based security mechanisms.
Information transfer using the HTTP protocol can be done in various ways, such as:
- Within the URI – using the GET parameters
- Within the request body – using the POST parameters
- In the HTTP headers – using the COOKIE header
The adopted technique depends on the application and on the type and amount of data that has to be transferred. Examples are shown in Figure 1.
Example 1 – GET parameters:

GET /somePage.jsp?param1=value1&param2=value2 HTTP/1.1
Host: www.someHost.co.il
User-Agent: Safari/535.1
Accept: text/html,application/xhtml+xml

Example 2 – POST parameters:

POST /somePage.asp HTTP/1.1
Host: www.someHost.co.il
User-Agent: Safari/535.1
Accept: text/html,application/xhtml+xml
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

param1=value1&param2=value2
Figure 1 – Parameter transfer examples
In HPP, the attacker introduces multiple parameters with the same name into a single HTTP request, whereas the attack vector is split across all instances. Since RFC3986 does not specify a standard behavior in this situation, the exact processing semantics are dependent upon the specific application delivery environment.
Table 1 shows a few examples of how different technologies and web servers manage multiple occurrences of the same parameter.
|TECHNOLOGY/HTTP BACK-END|OVERALL PARSING RESULT|EXAMPLE|
|---|---|---|
|ASP.NET/IIS|All occurrences of the specific parameter|par1=val1,val2|
|ASP/IIS|All occurrences of the specific parameter|par1=val1,val2|
|JSP, Servlet/Apache Tomcat|First occurrence|par1=val1|

Table 1 – Different processing methods used by technologies and web servers to manage multiple occurrences of the same parameter
When the Web application delivery environment concatenates multiple occurrences, the complete attack vector is reconstructed and processed by the application. At the same time, security mechanisms that inspect each parameter instance individually, or process the entire request data as a single string, will not be able to detect the attack. For example, as Table 1 above shows, ASP with IIS concatenates the values of duplicate parameters.

In Figure 2 we see two SQL injection vectors: “Regular attack” and “Attack using HPP”. The regular attack demonstrates a standard SQL injection in the prodID parameter. This attack can be easily identified by a security detection mechanism, such as a Web Application Firewall (WAF). The second attack uses HPP on the prodID parameter, distributing the attack vector across multiple occurrences of it. With the right combination of technology environment and web server, the attack succeeds. For a WAF to identify and block the complete attack vector, it must also inspect the concatenated parameter values.
Regular attack: http://webApplication/showproducts.asp?prodID=9 UNION SELECT 1,2,3 FROM Users WHERE id=3 —
Attack using HPP: http://webApplication/showproducts.asp?prodID=9 /*&prodID=*/UNION /*&prodID=*/SELECT 1 &prodID=2 &prodID=3 FROM /*&prodID=*/Users /*&prodID=*/ WHERE id=3 — | <urn:uuid:fb7745a3-66f8-445a-8174-1c4118281ef8> | CC-MAIN-2022-40 | https://www.imperva.com/learn/application-security/http-parameter-pollution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00198.warc.gz | en | 0.730654 | 910 | 2.71875 | 3 |
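You can simulate the two parsing behaviours from Table 1 with a few lines of Python (an illustration of the semantics, not the actual ASP or Tomcat code):

```python
import re
from urllib.parse import parse_qs

# The HPP query string from the example above
query = ("prodID=9 /*&prodID=*/UNION /*&prodID=*/SELECT 1"
         "&prodID=2 &prodID=3 FROM /*&prodID=*/Users /*&prodID=*/ WHERE id=3 --")

params = parse_qs(query)                 # every occurrence is preserved in a list
tomcat_view = params["prodID"][0]        # first-occurrence semantics: harmless fragment
asp_view = ",".join(params["prodID"])    # ASP/IIS-style comma concatenation

# The comma separators land inside /*...*/ comments, so the effective SQL
# the database engine sees is the full injection:
effective_sql = re.sub(r"/\*.*?\*/", "", asp_view)
print(tomcat_view)     # just "9 /*"
print(effective_sql)   # the reconstructed UNION SELECT attack
```

A WAF inspecting each `prodID` value individually sees only harmless fragments; only the concatenated view reveals the attack.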
November 17, 2016
Capturing Scientific Data from the Arctic and Antarctic
With the growing need to secure new data from remote and inhospitable areas of scientific interest balanced against commercial challenges and tough competition for academic budgets as a background, Ground Control reported an upsurge in its Iridium satellite communication products being used for environmental science applications in both the Arctic and Antarctic.
Several successful deployments have proven the robustness of the company’s RockBLOCK and RockFLEET systems, highlighting their suitability for reducing costs not only in the research sector, but also in commercial industries such as oil & gas and mining.
Ground Control is the manufacturer of the innovative RockBLOCK, a tiny device that can be integrated with most computing platforms to provide global data transmission capabilities even at the Poles. The system is currently being used by a team from the National Institute of Water and Atmospheric Research – New Zealand (NIWA) to measure the effects of storm waves on sea ice. RockBLOCK has been integrated on specially developed wave buoys deployed on to sea ice floes in the Arctic and Antarctic by NIWA. The system transmits GPS position and signal strength data from the buoys every hour, allowing the teams to plot the movement of the ice against wave data.
Project contributor Scott Penrose, software architect at Digital Dimensions, said: “The research is vital as it supports investigation into current environmental changes at the Poles while informing the development of future models. RockBLOCK helps us collect data from our wave buoys using Iridium short burst data, which is the easiest and most cost-effective way, especially considering the low cost of the device itself. Despite this, the system is more than capable of operating in such extreme environments while providing reliable data according to our set schedule.”
Ground Control’s Iridium technology is also being used in the Arctic by the Laboratory for Cryospheric Research, which is dedicated to the monitoring and understanding of the frozen earth including glaciers, ice caps, ice shelves, snow, and sea ice.
Laboratory members are undertaking research across northern Canada, including monitoring glacier changes in Kluane National Park, examining ice shelf and sea ice interactions along northern Ellesmere Island, and measuring glacier and ice cap dynamics across the Canadian Arctic Archipelago. A team from the laboratory is using Ground Control’s RockFLEET product, combined with a solar panel and extra battery pack, to provide long term position monitoring of sea ice in the region.
Nick Farrell, director of Rock Seven (now trading as Ground Control), said: “Operating in such extreme environments can be costly, so research teams are looking at ways to reduce their spend. RockBLOCK and RockFLEET fulfil this need, whilst still providing the reliability of much more expensive systems, in terms of hardware and airtime costs. There’s real potential for technology transfer from research to commercial industries based on these developments. We’re seeing more interest from the oil and gas industry for instance, where data originating at facilities in remote or hazardous locations can inform if an engineer needs to visit or not.”
Designed to work with any platform with a serial or USB port, including Arduino™, Raspberry PI™, and Intel Edison, as well as Windows, Mac, and Linux computers, RockBLOCK is a simple and reliable way to integrate two-way communication into sensor and measurement based research projects. It can send messages of 340 bytes and receive messages of 270 bytes using Iridium short burst data, which offers global, pole-to-pole coverage. At just 76.0 x 51.5 x 19.0mm, the system can be integrated easily into almost any sensor station. The RockFLEET system offers the same communication capabilities as RockBLOCK but comes in a sealed form factor for permanent installation. | <urn:uuid:075cc463-1e76-4f66-9f2f-df4a40337c28> | CC-MAIN-2022-40 | https://www.groundcontrol.com/us/blog/capturing-scientific-data-from-the-arctic-and-antarctic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00198.warc.gz | en | 0.942693 | 782 | 2.828125 | 3 |
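To illustrate how simple the integration can be, a host typically drives the modem over its serial port with Iridium short burst data AT commands. The command names below follow the Iridium 9602 SBD convention – treat them as assumptions and check your modem’s documentation before relying on them:

```python
def sbd_send_sequence(message: str):
    """Build the AT command sequence to queue and transmit one text message.
    (Sketch only: command names assumed from the Iridium 9602 SBD convention.)"""
    if len(message.encode()) > 340:
        raise ValueError("RockBLOCK SBD messages are limited to 340 bytes")
    return [
        "AT+SBDWT=" + message,  # write the text into the outbound message buffer
        "AT+SBDIX",             # initiate an SBD session: the actual transmission
    ]

# e.g. an hourly position report like the ones the NIWA wave buoys transmit
print(sbd_send_sequence("lat=-77.51,lon=166.40,rssi=4"))
```

In a real deployment these strings would be written to the RockBLOCK’s serial port (for example with pyserial) and the modem’s responses checked before proceeding.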
Four Technologies To Watch For In The Agricultural Space
Ian Bailey, Director of Rural Research, Savills
From cutting emissions, to increasing productivity and changing diets, there are many reasons why agriculture will change in the coming years but it’s the emerging technology that will decide how it changes. Forget the traditional view of farming as an industry wedded to production measures of the past. Today’s food producers are keen to adapt and leverage technology driven by the world’s most futuristic businesses and developers.
From crypto-currencies and fast-track plant breeding, to floating farms and vegetables that grow in thin air, we look at five areas of technology that could become commonplace in the not-so-distant future.
Indoor farming units, growing a year-round supply of fresh produce are an increasingly popular set-up option.
Indoor, or vertical, farms grow rows of crops stacked in tiers. The plants are grown without using soil, in a water based solution, infused with nutrients—a technique known as hydroponics— while high quality light is provided by light emitting diodes.
The system has strong green credentials, for example, it overcomes concerns over soil degradation and high water-use associated with some traditional agricultural systems. Water is constantly recycled and the units themselves can be sited in under-utilised space.
Indoor farming also offers farmers a high degree of control over the growing environment. They can manipulate day length, temperature and precise nutrient levels and maintain the same conditions for 365 days of the year.
Larger scale operations of thousands of square metres are now being set up on the outskirts of urban areas and for landowners there are options to set up a unit or let land to existing companies.
However, there is a drawback. Indoor farms depend heavily on heat, light, and additional carbon dioxide to boost plant growth.
Power sourced from the grid would be prohibitively expensive, so units often invest in anaerobic digesters or biomass burners. These are allied to combined heat and power generators that also yield carbon dioxide, so the system can be very efficient but the initial capital cost may be substantial.
The technology system that keeps crypto-currencies tamperproof is already being adapted to help tighten traceability and boost customer confidence.
Blockchain software, which underpins currencies such as Bitcoin, creates a chain of digitised data blocks. Each block has a unique identity, rather like a fingerprint, so changing a block changes the chain’s identity and breaks the chain. Any new information is only added to the end of the chain, which does not alter the sequence of the preceding blocks. This series of linked blocks is a more secure way of holding data because, unlike conventional systems where data is held centrally behind firewalls, there is no centralised version of the chain. The result is a tamper-proof, interlinked data log.
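The “fingerprint” linkage described here is easy to sketch with an ordinary hash function (a toy model, not how any production blockchain is implemented):

```python
import hashlib
import json

def block_hash(block):
    # the block's "fingerprint" covers both its data and its backwards link
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def chain_is_intact(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for event in ("batch 17 packed", "shipped to depot", "received by store"):
    append_block(chain, event)

print(chain_is_intact(chain))         # True
chain[0]["data"] = "batch 99 packed"  # tamper with an early record...
print(chain_is_intact(chain))         # False: the fingerprints no longer match
```

Altering any early block changes its hash, so the next block’s stored fingerprint no longer matches and the tampering is immediately evident.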
In agriculture, suppliers across the world are already starting to use blockchain systems. Companies such as Cargill in the USA use the system to trace thousands of turkey movements, while here in the UK, Marks & Spencer is using a DNA sampling system to trace the provenance of its beef. Firms are also promoting the extra security levels as a confidence-boosting selling point.
As the technology is rolled out it will undoubtedly add cost as hardware will need to be upgraded to record and link the data. But in the longer term, the benefits of secure data could outweigh these set-up costs.
SATELLITES AND DATA
As well as being able to steer a combine harvester from 300 miles above the ground, today’s satellites can also photograph fields on a daily basis. The photographs are clear enough to enable identification of individual trees, and can be used to collect all manner of information that can be especially useful when viewed over several years.
Some of the ways satellite photography is used include assessing in-field productivity and looking at crop health. This is done through looking at the colour of vegetation in the field, which can give information about the drying pattern of a field and the stability of the soil, as well as providing health indicators such as vitality and biomass. Through data collection it is possible to create indices that individual growers can use to appraise a farm’s metrics against the pooled data.
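The article doesn’t name a specific index, but the standard example of turning the “colour of vegetation” into a crop-health number is NDVI, the normalised difference between near-infrared and red reflectance:

```python
def ndvi(nir, red):
    # Healthy vegetation reflects strongly in near-infrared and absorbs red,
    # so NDVI approaches 1; bare soil or stressed crops sit much lower.
    return (nir - red) / (nir + red)

print(round(ndvi(nir=0.50, red=0.08), 2))  # dense, healthy crop: 0.72
print(round(ndvi(nir=0.30, red=0.25), 2))  # sparse or stressed cover: 0.09
```

Computed per pixel across a season of imagery, values like these are what feed the in-field productivity maps and pooled benchmarking indices the article describes.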
However, satellite technology is not cheap. A less costly route for precision agriculture is using GPS-equipped devices. These enable the mapping of pest infestations, soil conditions and nutrient levels, among other metrics. GPS also enables mechanised field operations that are more efficient as they reduce overlap and omissions.
Another alternative to satellites is drones. These are more versatile and tend to be easier and cheaper to use for smaller-scale farm businesses.
The efficiencies provided by precision agriculture will more than pay for the investment over time. However, as the gains are only marginal, the payback takes longer the less land that is covered.
PRECISION PLANT BREEDING
A newly developed precision plant breeding technique could fast-track crop improvements to lower costs and increase yields. The technique is known as gene editing. Scientists refer to the technique as CRISPR – an acronym derived from “clusters of regularly interspaced short palindromic repeats”. These are stretches of a genome containing spaces between the building blocks of the DNA. Researchers found it was possible to snip the gene using an enzyme as a pair of molecular scissors. By altering the DNA sequence it is possible to select desirable gene functions such as disease resistance.
Because gene editing only alters the DNA sequence within a plant, it is less controversial than genetic modification that introduces DNA from other organisms. It is also a simpler technique than genetic modification and, therefore, a more cost-effective solution to fast-tracking crop improvements. However, while the US courts have already approved gene editing for development, a ruling by the European Court in 2018 means the technique will fall under the same regulatory framework as genetic modification. | <urn:uuid:f3764286-2757-4676-ab13-6d58ef89abc5> | CC-MAIN-2022-40 | https://agtech.enterprisetechnologyreview.com/cxoinsight/four-technologies-to-watch-for-in-the-agricultural-space-nwid-1042.html?utm_source=google&utm_campaign=enterprisetechnologyreview | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00198.warc.gz | en | 0.942502 | 1,289 | 3.046875 | 3 |
We all have a lot of sensitive data stored in our online accounts that we want to keep secure. However, much of it is protected by fairly weak passwords. Creating a really strong password should do the job. But, as we all know, it’s a fine line between choosing a password that no one else will be able to guess and one that’s easy for you to remember.
Luckily, there are loads of ways and ideas to create strong passwords, such as using a unique password generator tool. Here, we’ll walk you through our tips and tricks for choosing and setting up secure passwords for your online accounts. And we’ll share some of our favorite methods for keeping your passwords safe and ways of making sure you don’t have to hit the “Forgotten password” link ever again.
How to generate a strong password?
- Use long password combinations
- Combine numbers, lowercase, and uppercase letters
- Avoid popular passwords
- Get NordPass for super strong passwords
What is a strong password?
A strong password is one you can’t guess or crack using a brute force attack. Hackers use computers to try various combinations of letters, numbers, and symbols in search of the right password. Modern computers can crack short passwords consisting of only letters and numbers in mere moments.
As such, strong passwords consist of a combination of uppercase and lowercase letters, numbers and special symbols, such as punctuation. They should be at least 12 characters long, although we’d recommend going for one that’s even longer.
Overall, here are the main characteristics of a good, secure password:
- Is at least 12 characters long. The longer your password is, the better.
- Uses uppercase and lowercase letters, numbers and special symbols. Passwords that consist of mixed characters are harder to crack.
- Doesn't contain memorable keyboard paths.
- Is not based on your personal information.
- Is unique to each account you have.
When you’re setting up an online account, there’ll often be prompts reminding you to include numbers or a certain number of characters. Some may even prevent you from setting a “weak password”, which is usually one word or number combination that’s easy to guess.
But even if you don’t get reminded to set a strong password, it’s really important to do so whenever you’re setting up a new online account or changing passwords for any existing account.
A long password is a good password
When it comes to password security, length really does matter. We recommend opting for a password that’s at least 12 characters long, even longer if you can.
Each additional symbol in a password exponentially increases the number of possible combinations. This makes passwords over a certain length essentially uncrackable, assuming you’re not using common phrases.
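The back-of-the-envelope arithmetic behind that claim can be sketched directly. The guess rate of 10¹² attempts per second below is an illustrative assumption, not a figure from the text:

```python
def search_space(alphabet_size: int, length: int) -> int:
    """Total number of candidate passwords of a given length."""
    return alphabet_size ** length

def crack_time_years(alphabet_size: int, length: int,
                     guesses_per_second: float = 1e12) -> float:
    """Worst-case brute-force time in years at an assumed guess rate."""
    seconds = search_space(alphabet_size, length) / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# 8 lowercase letters vs. 12 mixed characters (~94 printable ASCII symbols):
print(f"{crack_time_years(26, 8):.9f} years")   # cracked almost instantly
print(f"{crack_time_years(94, 12):,.0f} years") # many thousands of years
```

Each extra character multiplies the search space by the alphabet size, which is why length dominates every other factor in password strength.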
A strong password isn’t obvious
A good password needs to be something that’s really difficult for someone else to guess or crack, so don’t go for anything generic, like “password” or “12345”. Both are still among the most popular passwords in the world, and they’re also among the least secure.
Good passwords can’t contain memorable keyboard paths
Don’t use sequential keyboard paths, like “qwerty”, as hackers are likely to crack these. If you spent no effort in thinking of a good password, the chances are the hackers won’t need much effort to crack it.
Password strength isn’t personal
It’s really important that you don’t use anything personal to you, like a nickname, your date of birth or your pet’s name. This is information that’s really easy for a hacker to find out simply by looking at your social media, finding your online work profile or even just by listening in on a conversation you’re having with someone else.
A good password should be unique
Once you’ve created a strong password, you might well be tempted to use that password for all your online accounts. But, if you do that, it leaves you more vulnerable to multiple attacks.
After all, if a hacker manages to discover your password, they’ll then be able to login to every account you use that password for, which might include your emails, your social media and your work accounts.
A lot of people use the same password for everything because it’s easier to remember. But don’t worry because we’ve got loads of tips and tricks to help you manage multiple passwords a bit further down.
Avoid past passwords
It’s also really important to make sure you don’t recycle your passwords, particularly if they’ve been hacked before. Once you’ve used a password, you shouldn’t reuse it. Even if you haven’t used it for years, it’s best to come up with a new one.
Special characters in passwords
Although using special characters in your passwords is a really good way of making them extra secure, not all online accounts allow you to use any symbol you like. But most will allow you to use the following:
Good password examples
Here are some good examples of strong passwords:
They all consist of a seemingly random and long (more than 15 characters) collection of uppercase and lowercase letters, numbers and special characters. These passwords are not generic, and don't contain any memorable keyboard paths or personal information which hackers could use.
Ideas for creating a good password
Luckily, there are loads of things you can do to create unique and strong passwords for each of your online accounts. We have a ready-made password generator tool that generates unique passwords that are almost impossible to crack. Alternatively, you can follow our top tips and ideas on how to set up a good password:
Use a password generator
If you don’t have time to come up with your own strong passwords, a password generator is a really quick and easy way to get a unique and strong password. Our own secure password generator will create a sequence of random characters. Copy and use it as a password for your device, email, social media account, or anything else that requires private access.
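A generator of the kind described can be sketched in a few lines using Python’s `secrets` module, which is designed for cryptographic randomness. This is an illustrative sketch, not the implementation of any particular tool:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase letter,
    one uppercase letter, one digit, and one punctuation character."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Note the use of `secrets.choice` rather than `random.choice`: the `random` module is predictable and unsuitable for anything security-related.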
Top-notch password managers also include secure password generators. For example, NordPass can help you create unique and unbreakable passwords as well as passphrases.
Choose a passphrase rather than a password
Passphrases are much more secure than passwords because they’re typically longer, making them more difficult to guess or brute force. So instead of choosing a word, pick a phrase and take the first letters, numbers and punctuation from that phrase to generate a seemingly random combination of characters. You can even substitute the first letter of a word with a number or symbol to make it even more secure. Or try swapping out words for punctuation like we used to back in the days of text slang, if you can remember back that far.
Here are some examples of how you can use the passphrase method to create strong passwords:
| Phrase | Password |
| --- | --- |
| I first went to Disneyland when I was 4 years old and it made me happy | I1stw2DLwIw8yrs&immJ |
| My friend Matt ate six doughnuts at the bakery café and it cost him £10 | [email protected]&ich£10 |
| For the first time ever, Manchester United lost 5:0 to Manchester City | 4da1sttymevaMU5:02MC |
Note: don’t use common phrases, because these are vulnerable to dictionary attacks – random combinations are what you want.
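The first-letters-with-substitutions method can be automated. The substitution table below is illustrative; any consistent mapping you can remember works just as well:

```python
def phrase_to_password(phrase: str) -> str:
    """Take the first character of each word in a phrase, swapping a few
    common words for symbols and digits (illustrative mapping)."""
    subs = {"and": "&", "at": "@", "for": "4", "to": "2"}
    parts = []
    for word in phrase.split():
        lower = word.lower().strip(",.")
        # Whole-word swaps first; otherwise keep the word's first character.
        parts.append(subs.get(lower, word[0]))
    return "".join(parts)

print(phrase_to_password("I first went to Disneyland when I was 4 years old"))
```

The output is meaningless to anyone else, but you can regenerate it at will from a phrase only you know.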
Opt for a more secure version of dictionary method
A popular method for choosing a password is to open a dictionary or book and choose a random word. But, as random as it may seem to you, a single word is actually quite easy for a hacker to guess.
So rather than opting for just one word from the dictionary, choose a few and string them together along with numbers and symbols to make it much trickier for someone to figure out.
Here are some examples of good password ideas created with this method:
| Words from the dictionary | Secure password |
| --- | --- |
| Jigsaw, quest, trait, fork | Jigsaw%Quest7trait/fork48 |
| Glimpse, stuff, prize, koala | G1impse$tuff74Prize8Koala! |
| Trombone, fish, quick, upside | Tr0mb0ne&Fish?Qu1ck^side |
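A sketch of this multi-word method, joining randomly chosen dictionary words with random digits and symbols. The word list and separator set here are illustrative placeholders:

```python
import secrets
import string

SEPARATORS = string.digits + "!%&?/^$"  # illustrative separator set

def dictionary_password(words: list, n_words: int = 4) -> str:
    """Join randomly chosen words, capitalised, with a random digit or
    symbol between each pair."""
    chosen = [secrets.choice(words).capitalize() for _ in range(n_words)]
    out = chosen[0]
    for word in chosen[1:]:
        out += secrets.choice(SEPARATORS) + word
    return out

wordlist = ["jigsaw", "quest", "trait", "fork", "glimpse", "koala"]
print(dictionary_password(wordlist))
```

A larger word list makes this dramatically stronger: four words drawn from a few thousand candidates is far beyond dictionary-attack range, while staying memorable.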
Play around with phrases and quotes
If you want a password that’s difficult for others to guess, but easy for you to remember, it can be a good idea to use a variation on a meaningful phrase or quote. Simply take a phrase you’ll remember and swap out some of the letters for numbers and symbols.
Here are some examples of strong password ideas generated with this method:
| Quote or phrase | Secure password |
| --- | --- |
| “One for all and all for one”: The Three Musketeers | 14A&A413Mu$keteers! |
| “For the first time in forever”: Disney’s Frozen | 4da1stTymein4eva-Frozen |
| “Twinkle twinkle little star, how I wonder what you are”: nursery rhyme | TW1nkle7ittle*how1??UR |
If you want to add symbols to your passwords without making them harder to remember, you can always use emoticons.
Although you won’t be able to add in emoji, you can use emoticons, which are the coded versions, usually made up of punctuation, letters and/or numbers.
Here are some emoticons that you can use in your passwords:
Customise your passwords for specific accounts
Once you’ve come up with a strong password that you can remember, you’ll still have to create different passwords for each of your online accounts. But, rather than starting the whole process again, you could simply add a different code into your password for each online account.
So, for example, if your password was cHb1%pXAuFP8 and you wanted to make it unique for your eBay account, you could add £bay on the end so you know it’s different to your original password but still memorable.
Here’s how that could work:
| Online account | Password with added code |
| --- | --- |
| eBay | cHb1%pXAuFP8£bay |
Commit your password to muscle memory
If you want to remember your password, it can be a good idea to practice typing it several times over. Eventually, if you type it correctly enough times, you’ll develop a muscle memory that’ll mean it’s much easier for you to remember.
However, it's quite a challenge to remember a dozen or more long, unique passwords for all your accounts. So, this technique is really only practical for the short 4- or 6-digit codes that you use to unlock your device or your password manager.
How to keep your passwords safe
Now that you’ve set up a strong password for each of your online accounts, the next step is to keep them safe and secure from hackers.
Here are some of our top tips on how to do that:
Choose a good password manager
Whether you’ve generated your own strong passwords or you’re looking for an online service to do it for you, we strongly recommend using a good password manager. A secure password manager generates, stores and manages all your passwords in one safe online account. This is really useful because it allows you to use as many unique passwords as you like without ever having to worry about memorising them.
All you need to do is save all your passwords for every online account you have on your password manager and then protect them with one “master password”. This means you only have to remember one strong password as opposed to every single one.
Once you’ve got your password manager set up, whenever you go to log in to one of your online accounts, you simply type your master password into your password manager and it’ll auto-fill your login details for that account. You don’t even need to remember which email address or username you used. A secure password manager will fill all this in for you. Here are some of the best password managers in 2022.
It may seem insecure to keep all your passwords in one place. However, a reliable password manager, like NordPass, is the most secure place to store your credentials. Providers never keep your vault's master password, so hackers cannot steal it even if they breach the database.
Use two-factor authentication
Even if someone does manage to steal your password, you can still prevent them from accessing your account by adding in an additional layer of security with two-factor authentication (2FA). This means that anyone trying to login to your account will have to enter a second piece of information after the correct password. This is usually a one-time code that’ll be sent directly to you.
Sometimes this will be sent to you via text message, although this isn’t necessarily the most secure way of receiving that code. After all, a hacker could steal your mobile number through SIM swap fraud and access your verification code.
We’ve found it’s much safer to use a two-factor authentication app instead, as they’re much trickier to intercept. Our favourites include:
- Google Authenticator
- Microsoft Authenticator
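The one-time codes these apps produce follow the TOTP standard (RFC 6238): an HMAC over the current 30-second interval, truncated to six digits. A minimal, stdlib-only sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 TOTP code from a base32 shared secret (SHA-1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # current 6-digit code for a demo secret
```

Both the app and the server derive the same code independently from the shared secret, so the code itself never travels over the network in advance, which is exactly what makes it hard to intercept.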
Don’t save your passwords on your phone, tablet or PC
This may sound obvious but you must avoid saving any of your passwords in a document, email, online note or anything else that could be hacked.
Check if your email has been leaked
Of course, it’s really important to keep on top of any data breaches that may have occurred, particularly with your email account.
But how do you know if your email has been leaked? Well, we have an online personal data leak checker, which will let you know if anything like this has happened to your email account. All you need to do is enter your email address and we’ll be able to tell you if anything has happened to it.
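Leak-checking services can be queried without revealing the secret itself. For example, the Have I Been Pwned password range API uses k-anonymity: only the first five characters of a SHA-1 hash ever leave your machine, and matching suffixes are checked locally. A sketch of the client-side half (no network call shown):

```python
import hashlib

def hibp_prefix(password: str) -> tuple:
    """Split a password's SHA-1 hex digest into the 5-character prefix sent
    to the range API and the suffix that is compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix("password")
print(prefix)  # only this prefix would be sent to the service
```

The service responds with every known-breached suffix for that prefix; if yours is in the list, the password has leaked, yet the service never learns which password you asked about.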
Don’t give out your password
Last but not least, it’s really important to keep your passwords private. Even if you completely trust the person you’re giving your password to, it’s risky to send a password via text message or email in case anyone intercepts it. Even if all you’re doing is reading it out over the phone or spelling it out to the person sat next to you, there could be someone listening in and making notes.
Conclusion: so how do I make all my passwords hacker-proof?
Passwords are like the lock on your apartment door – they're the one thing criminals have to go through if you're not home. Having a weak password is like a weak lock. It greatly increases the number of people who have the means to access your accounts.
Using all the tricks in this article to create strong, memorable passwords is a good place to start increasing your security. Alternatively, get a strong password manager like NordPass and generate all your passwords automatically. That way, you won't have to remember any of them.
Whichever course you decide to take, don't put it off! Data leaks and breaches happen every day, and the next one could have your password in it. | <urn:uuid:32115d2c-0748-4360-9015-8f3594ef462f> | CC-MAIN-2022-40 | https://cybernews.com/best-password-managers/how-to-create-a-strong-password/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00198.warc.gz | en | 0.91863 | 3,365 | 2.640625 | 3 |
QR codes went mainstream during the pandemic, as businesses sought ways to offer customers ‘touch-free’ services. Criminals have taken note, and have been swapping tips on exploiting QR codes to steal funds and break into systems. Organisations should bolster their mobile security, experts advise, and make sure their employees and customers are aware of the risks.
How QR codes went mainstream
Quick response (QR) codes were invented in 1994 by Japanese car parts maker Denso Wave to track vehicles through the manufacturing process. A QR code is essentially a two-dimensional bar code, with around 100-times the data storage capacity, according to PayPal. Combined with widespread smartphone adoption, they offer an affordable way to transmit data that can be attached to any surface.
Initially dismissed by some in the West as a low-tech fudge, QR codes became an essential part of the digital payments infrastructure in China. The country’s two biggest payment apps – WeChat Pay and AliPay – introduced QR codes as a way to initiate payments in 2011. By 2016, an estimated $1.25trn in transactions were initiated by QR code in China.
QR codes became a global phenomenon during the pandemic, as customers sought to avoid physical contact with surfaces. ‘Touch-free service’, where customers can scan a QR code for a menu or to pay, is now commonplace. QR codes were central to the UK government’s contact tracing app, which asked citizens to ‘check in’ to venues by scanning a code on their phones.
As a result, QR codes are now mainstream. According to a report by Juniper Research, 1.5 billion people globally used a QR code to facilitate a payment in 2020. A survey of UK and US citizens in September 2020 by endpoint security provider MobileIron found that 8% had scanned a QR code in the previous 24 hours.
Digital payment providers PayPal and Apple Pay both launched QR code features last year, while banks including Natwest, Royal Bank of Scotland (RBS) and Deutsche Bank now allow users to log into the online banking services using a QR code. Others have introduced QR codes to facilitate ATM withdrawals. As a result, adoption is poised for rapid growth, especially in the US, where Juniper predicts a 240% rise in user numbers by 2025.
Are QR codes secure?
This growing use of QR codes has not escaped the attention of criminals. "We know cybercriminals are abusing this behaviour,” says Anna Chung, principal researcher at Unit 42, the threat research arm of cybersecurity company Palo Alto Networks. "During the pandemic, Unit 42 has observed cybercriminals in underground online forums discussing ways to abuse QR codes and target mobile devices. We also found open-source tools and video tutorials offering training on how to conduct attacks by using QR codes."
We know cybercriminals are abusing this behaviour.
Anna Chung, Unit 42
Many QR code-related threats work by tricking users into scanning a code that directs them to a malicious site or initiates a criminal payment – a technique known as QRLjacking.
Last year, Belgian police issued a warning about a scam in which hackers, posing as customers, would send QR codes to small businesses supposedly to confirm payments. Scanning the code would grant the hackers access to the sellers' bank accounts. "The code does not, in fact, refer to a payment confirmation, but to a login portal that the fraudster, in combination with the bank account number provided, will have direct access ... to your current and savings accounts," said commissioner Olivier Bogaert of the country's Federal Computer Crime Unit.
Another emerging threat is the phenomenon of QR code phishing, or 'quishing', whereby criminals trick users into scanning a malicious QR code via email, directing them to a fake site that prompts them to enter their login details. This technique bypasses many anti-phishing systems, which work by scanning the text of emails, explains Mark Harris, senior director at Gartner. "Because you can't see the URL or it's not visible in the email, [quishing] gets past those traditional techniques."
Chung says that Unit 42 has observed 'quishing' scams that spoof corporate share drives. “We have come across attackers sending out QR codes to phish employees... to trick them onto a web page that looks like a corporate share drive.”
The technique may have an added impact as employees may not have been trained to view QR codes as potential phishing threats, adds Peter Gooch, partner in cybersecurity and privacy at Deloitte. "If it's seemingly from a known company to you, you might not think twice about it,” he says.
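Once a scanner app decodes a QR code into a URL, a few cheap heuristic checks can be applied before the link is ever opened. The allowlist and suspicious-TLD set below are hypothetical placeholders, not a real policy:

```python
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"tk", "top", "zip"}           # illustrative, not exhaustive
ALLOWED_HOSTS = {"sharepoint.example.com"}       # hypothetical corporate allowlist

def assess_qr_url(url: str) -> list:
    """Return a list of red flags for a URL decoded from a QR code."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    host = parsed.hostname or ""
    if host not in ALLOWED_HOSTS:
        flags.append("host not on allowlist")
    if host.split(".")[-1] in SUSPICIOUS_TLDS:
        flags.append("suspicious TLD")
    if "@" in parsed.netloc:
        flags.append("userinfo trick in URL")
    return flags

print(assess_qr_url("http://login.corp-share.tk/drive"))
```

Checks like these are the programmatic equivalent of previewing the URL before tapping it, which is exactly the step a QR code otherwise skips.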
Managing the cybersecurity risk from QR codes
How can organisations reduce the cybersecurity risk posed by malicious QR codes? One essential approach is to ensure that employee smartphones are secured, something that can be overlooked. "The majority of [companies] have fairly strict security protections over the laptop," explains Chung. "But not so much for the corporate phone ... because that's an extra layer of investment and protections that you need to continuously control. So that is another layer of effort that I know [many] companies overlook."
Another crucial measure is to raise awareness of the risks, both among customers and employees, Chung says. “QR code stands for a quick response, so [being] quick is its advantage," she explains. "But at the same time, it could be a disadvantage for people who are not fully familiar with this technology and the potential risks that come with it." | <urn:uuid:512e5ab8-2bf6-4a42-8de9-c24586c71f24> | CC-MAIN-2022-40 | https://techmonitor.ai/technology/cybersecurity/qr-codes | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00198.warc.gz | en | 0.961882 | 1,147 | 2.640625 | 3 |
According to the new market research report "District Heating Market by Heat Source (Coal, Natural Gas, Renewable, Oil & Petroleum Products), Plant Type (Boiler Plant, CHP), Application (Residential, Commercial, Industrial), and Geography - Global Forecast to 2023", The district heating market is expected to grow from USD 170.7 billion in 2018 to USD 203.0 billion by 2023, at a compound annual growth rate (CAGR) of 3.5% during the forecast period. The major factors driving the district heating market include increasing demand for energy-efficient and cost-effective heating systems and growing urbanization and industrialization.
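The forecast figures are consistent with the standard compound-annual-growth-rate formula, which can be checked directly:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures from the report: USD 170.7 billion (2018) to USD 203.0 billion (2023)
print(f"{cagr(170.7, 203.0, 5):.1%}")  # matches the reported ~3.5% CAGR
```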
Browse 69 market data Tables and 29 Figures spread through 113 Pages and in-depth TOC on "District Heating Market - Global Forecast to 2023"
View detailed Table of Content here - https://www.marketsandmarkets.com/Market-Reports/district-heating-market-107420661.html
Renewable heat source to grow at highest CAGR in global district heating market during forecast period
Renewable heat sources help in meeting the rising energy needs, improving efficiency, reducing greenhouse gas emission, and improving climate conditions. Geothermal heat source uses one or more production fields as heat sources to supply district heating to residential and commercial buildings. Solar heating converts energy from the sun into heat; it uses solar panels that are often arranged on a building or concentrated in solar farms to facilitate a clean heat source. As a CO2-free power source, the environmental impact of the solar heat source is significantly smaller than other power generation methods.
CHP plant type to hold major share of global district heating market during forecast period
CHP helps reduce the capital investment, provides economies of scale, reduces heat losses to the environment, and substitutes the use of fossil fuels for district heating, which, in turn, lead to the reduction of greenhouse gas emissions. Moreover, CHP makes a district heating system an efficient energy solution for residential and commercial entities that show significant demand for district heating.
Residential application to dominate global district heating market during forecast period
Favorable government incentives such as more focus on energy-efficient products have led to the increased adoption of district heating in residential application. Moreover, continuous urban development is boosting the demand for district heating. Growing urbanization leads to organized infrastructure developments suitable for district heating solution. Growing urban cities create a demand for sustainable, efficient, and reliable utility services including district heating and electricity production.
Europe to hold major share of district heating market from 2018 to 2023
Europe is the largest market for district heating as a large number of leading players are based in this region. Increasing technological advancements, in terms of connectivity, digitalization, and IoT integration; rising demand for energy-efficient solution; and growing initiatives to reduce greenhouse gas emissions are the crucial factors driving the growth of this market in Europe. Due to the legislative framework of the European countries including Germany, the UK, and France, the penetration of district heating in new buildings is expected to increase in the upcoming years, especially gated societies. As a result, the market for district heating is expected to hold a major share in this region.
Fortum (Finland), Vattenfall (Sweden), Engie (France), Danfoss (Denmark), NRG Energy (US), Statkraft (Norway), Shinryo Corporation (Japan), LOGSTOR (Denmark), Vital Energi (UK), Kelag (Austria), Goteborg Energi (Sweden), FVB Energy (Canada), Alfa Laval (Sweden), Ramboll (Denmark), Savon Voima (Finland), Enwave Energy (Canada), Orsted (Denmark), Helen (Finland), Keppel DHCS (Singapore), and STEAG New Energies (Germany) are among the major players in the district heating market.
MarketsandMarkets™ provides quantified B2B research on 30,000 high-growth niche opportunities/threats which will impact 70% to 80% of worldwide companies’ revenues. It currently serves 7,500 customers worldwide, including 80% of global Fortune 1000 companies as clients. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets™ for their pain points around revenue decisions.
Our 850 full-time analysts and SMEs at MarketsandMarkets™ are tracking global high-growth markets following the "Growth Engagement Model – GEM". The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "attack, avoid and defend" strategies, and identify sources of incremental revenue for both the company and its competitors. MarketsandMarkets™ is now publishing 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators and strategic players) annually in high-growth emerging segments. MarketsandMarkets™ is determined to benefit more than 10,000 companies this year for their revenue planning and to help them take their innovations/disruptions early to the market by providing them research ahead of the curve.
MarketsandMarkets’s flagship competitive intelligence and market research platform, "Knowledgestore" connects over 200,000 markets and entire value chains for deeper understanding of the unmet insights along with market sizing and forecasts of niche markets.
Mr. Shelly Singh
630 Dundee Road
Northbrook, IL 60062
USA : 1-888-600-6441
Health Data Management: What is it and why does it matter?
Health informatics is an emerging practice that continues to grow, and numerous colleges today offer degrees in health information. At the heart of health informatics is Health Data Management (HDM): the management of healthcare data, information, and knowledge for decision support by care providers, teaching hospitals, research centers, and pharmaceutical and biotech companies. Healthcare data management is evolving and improving the delivery and support of medical treatments.
Making sense of healthcare data and managing patient outcomes are driving the practice of healthcare today. We can see this in the management of the world's response to COVID: the response has been very data-driven, with data used to determine the appropriate response to the virus worldwide.
What is Health Data Management?
Healthcare data management is the process of managing the lifecycle of health data. Data is created, stored, organized, processed, archived, and destroyed. In addition, data is kept secure to maintain strict confidentiality and integrity, and is made available only to those who need access. Healthcare database management systems can do all of this and more, such as analyzing disparate and diverse datasets from multiple internal and external sources to deliver operational and decision support to applications, devices, and people.
Healthcare data management is increasingly about digital data, on-premises, in the cloud, and out at the edge of the network for mobile and telehealth and medical devices and instrumentation. There are structured and unstructured data that has to be managed. Some organizations are starting to utilize a data warehouse for the volumes of data that have to be managed and analyzed in a healthcare data management system (HDMS). These systems also include a clinical decision support system (CDSS) that takes advantage of all the data stored to automate interpretation, care plans, and treatments for patients.
Healthcare data management, also sometimes referred to as Digital Healthcare data management, is not limited to electronic health or medical records (EHRs or EMRs) but includes population health records, other clinical records for drug efficacy, or even medical instrumentation logs and RF-ID tags on various physical assets from beds to bedpans necessary for the supply chain management. Management of the data also includes all the operational and financial records spanning healthcare providers and payers – public such as national and state healthcare programs like Medicare and Medicaid and private insurers.
Challenges of Health Data Management
Challenges in Health Data Management evolve around the enablement of people, both the provider of health care and the patient. The processes and technologies have to be aligned to the needs of all the stakeholders in the overall value chain for providing and receiving Healthcare. Some of the biggest challenges of Health Data Management are:
- Security of the data – Data needs to be stored securely: kept confidential, with its integrity maintained, and available only to those who should have access. Mandating that data be shared securely is the first step in improving outcomes and shifting to a value-based care delivery and payment-integrity model and away from the current inefficient fee-for-service model. This also helps protect patient data from unauthorized parties that may use the data for other purposes, such as ransomware. Data must be protected in compliance with the US Health Insurance Portability and Accountability Act (HIPAA). The system has to comply with government regulations for role-based access and encryption at rest and in transit. The system also has to be resilient and protected from cyber-attacks.
- Data integration – Health data must be integrated from various stakeholders, including the patient, providers, creditors, payers, and government. Integrating and analyzing various kinds of healthcare data – clinical, operational, and financial – and combining this data with external population health data and other social determinants of health becomes valuable for public health data management.
- Catalog of datasets – All the various datasets from asset IDs and EMRs, Claims, EHRs, Pop Health Data, Accounts Receivables, and other sources are creating challenges with tagging and managing a rich set of metadata with proper ontologies and taxonomies for various elements of each dataset relative to the rest. Further, ingestion, replication, and combining data can result in duplication, errors, and other anomalies that must be identified and eliminated to avoid a wide range of problems, ranging from adverse drug reactions to payments fraud.
- Managing various data formats – Healthcare data management covers everything from healthcare records in large legacy hospital EHR applications like Cerner or Epic, to medical imaging formats like DICOM, which encapsulates an image or video in JPEG or MPEG form, to claims-submission EDI formats such as X12 837. A healthcare data management system must be able to convert between these formats.
- Maintaining data quality – Medical records have to be accurate. The patient record management system must provide oversight when transforming medical records into accurate data, because many errors and omissions can occur that can cause harm to the patient.
Other challenges relate to the technologies used for the data. The database has to scale to all the data that is collected, and data must be consolidated from various technical platforms and sources. Healthcare provider data management and hospital data management systems have to meet all of these requirements. Cloud enterprise data warehouses and data marts can be viable solutions to these issues.
Benefits of Health Data Management
The benefits of healthcare data management start from an elementary premise: the better the data you have, the better the decisions that can be made and the better the outcomes that can be achieved for patients, beyond the fundamental goal of providing care itself.
Some of the other benefits of Health Data Management are:
- Health data analytics – Analytics can be used to make predictions about patients’ health, enabling better treatment and a more proactive approach to providing care, and ultimately improving health outcomes for the patient and sometimes for the general public.
- Better alignment and communication – Communication improves among patients, providers, and other stakeholders, especially with access to digital records. A comprehensive view of the patient enables better collaboration between doctors, including across geographic boundaries and countries.
- Improved patient engagement with healthcare – This includes improved visibility into their own records, so patients can understand treatments, trends, and proactive care. Patients can easily access their health records anytime and anyplace.
- Data-driven decisions – Historical data, real-time data, and other data can help improve provider and patient decision-making. Data can improve the diagnostic ability of both provider and patient instead of inaccurate guessing based on hunches.
- Integration with patient personal health-related activities – Physical activity, especially individual patient monitored activity with sensors, can be fed into the Health Record Management system for improved treatments. Today, many mobile applications allow integrations or sharing of data from sensors or other applications with healthcare data management systems.
- Integration with emerging technologies – Improved integration with artificial intelligence can help diagnose illness without the need for a physical doctor visit, and medical chatbots that draw on medical knowledge management systems integrated with health data can enable self-service.
Addressing the challenges above makes a high-quality, well-organized healthcare data management solution achievable. Health data management solutions support a wide range of use cases, including improved chronic disease management, accelerated clinical trials with more accurate recommendations, optimized use of provider resources, improved wellness programs, and better alignment between payers and providers, reducing the time and expense of back-and-forth struggles over down-coding versus upcoding.
HDM Decision Support Systems
Healthcare data enables decision support for all stakeholders, from provider to patient. Data is everywhere and needs to be available in real time for timely decision support. With integrated, secure, collaborative systems, data can be used to measure anything healthcare-related, manage decisions, and monetize actions. Healthcare service catalogs of information can be created by various providers and used with factual, real-time data to determine the availability of services and products.
Many decisions can be made using a healthcare data warehouse, by stakeholders across the healthcare provider’s organization or by patients themselves. Supply chains for physical healthcare space, medicine, specialty care, and more can all be managed with a healthcare database system, especially an integrated, shared, secure system between providers.
Emerging technologies such as artificial intelligence and machine learning can take advantage of social, mobile, and cloud platforms, using health data to support numerous use cases. Clinical decision support systems can analyze evidence-based data collected in a healthcare management system at any point of care, whether routine or emergency.
When healthcare providers can do their jobs better and patients have better knowledge about their care, everyone wins. Healthcare data management systems combined with the expert opinion of care providers will increase the efficiency and effectiveness of health care.
Healthcare fraud affects everyone, from patients to providers, and healthcare data management solutions can help reduce it. Data quality management in healthcare helps protect against billing fraud, identity theft, forgery, medicine abuse, and many other problems. The integrity of healthcare data management solutions can produce financial savings for the healthcare community and for individual patients. As healthcare systems and databases become more securely integrated and shared among payers and providers, transparency improves and rules and regulations become easier to enforce.
Recently, under the Cures Act, healthcare payers and providers have been instructed to share more of their data, with recommended updated formats for data exchange in a new version of HL7 called FHIR (Fast Healthcare Interoperability Resources). The key is that data be protected according to HIPAA compliance and with need-to-use guidance under Meaningful Use guidelines. Central to HIPAA compliance and Meaningful Use is maintaining AES-256 encryption of all data at rest, using TLS (formerly SSL) encryption for data in transit, and combining granular masking and authorization of data with role-based access and multi-factor authentication. Further, as data moves to the cloud or becomes accessible from outside the organization, the intrinsic security mechanisms of the three public cloud platforms (AWS, Azure, and GCP) should also be leveraged.
In addition to the above, medical errors are a leading cause of death in the United States. Some of these errors are caused by communication problems between providers and patients, lack of information for prescribing decisions, and poor data documentation. Reducing these types of medical errors can be done with improved Healthcare data management.
As mentioned earlier, we can all see the effect of healthcare data management on how the world exchanges data in responding to COVID. Data is coordinated and shared quickly, and based on it, experts worldwide can make informed decisions on how to respond to the virus given factors such as their economies and other unique constraints. The general public, like never before, pays attention to health data for individual decisions about their options for care.
Healthcare data management has many challenges and benefits. Healthcare data management companies are rapidly improving to meet today’s and tomorrow’s challenges. The benefits clearly outweigh the challenges. Tomorrow looks bright with the enablement of new innovative technologies to support Healthcare for both providers and patients. | <urn:uuid:e706f4b7-d17c-4dd1-b568-5fcc58704d88> | CC-MAIN-2022-40 | https://www.actian.com/what-is-health-data-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00399.warc.gz | en | 0.929053 | 2,305 | 2.9375 | 3 |
Schneider Electric chips in for a sustainable data centre future
Have you ever heard of Earth Overshoot Day? It's the date when humanity will have consumed more from the planet than its ecosystems can renew in the entire year.
This includes food, fibres, timber, and absorption capacity for carbon dioxide from fossil fuel burning.
It all sounds quite grim, and it is, especially given the day falls on August 1, 2018.
To drive awareness of the imminent date, Schneider Electric has partnered with Global Footprint Network to support its ambition to ‘move the date'. The company believes that adoption of energy efficient and renewable technologies could shift the date 21 days by simply retrofitting existing building, industry and data center infrastructure and upgrading electricity production.
“Operating on a planet with finite resources requires creativity and innovation”, says Schneider Electric global environment senior vice president Xavier Houot.
“We team up with our customers and partners to unlock the potential to retrofit existing infrastructure, adopting circular business models, and we measure how much this helps save resources and CO2. We work to see our growth path through the lens of the growing need of living within the means of our one planet.”
Houot says the challenge is key to Schneider Electric's strategy that is focused on its EcoStruxure solution, with a few examples that could deliver up to 50 percent improved energy efficiency while reducing energy costs by 30 percent:
- Installed connected sensors and meters that improve the efficiency of networked lighting, heating, and air conditioning to optimise the use of space in the building.
- Edge control to allow users to manage the data from IoT connected products on-site with day-to-day optimisation of energy consumption through remote access and advanced automation.
- Visualised reporting on energy consumption through interactive dashboards, detection and diagnosis of faults, performance analysis, and asset monitoring to detect additional energy efficiency opportunities.
“Schneider Electric's business case is aligned with moving humanity out of ecological overshoot”, says Global Footprint Network CEO Mathis Wackernagel.
“Leading companies like Schneider Electric are rising to the challenge of managing natural resources differently, measuring them more accurately, and developing products and processes that use them not only more efficiently, but also reduce their overall use.”
With the day's imminent arrival, it's clear that businesses need to take new directions to ensure sustained growth that also benefits the planet and its people. | <urn:uuid:ab1ca9a0-e5e6-437a-9056-8ed4a40e7759> | CC-MAIN-2022-40 | https://datacenternews.asia/story/schneider-electric-chips-sustainable-data-centre-future | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00399.warc.gz | en | 0.930965 | 507 | 2.703125 | 3 |
Remember when the only threats you had to worry about were on your computer?
Those days are over. If an employee falls victim to a mobile phishing attack, it can have serious implications. Compromised data can be used for credential stuffing and identity theft, resulting in fraudulent access to your business systems.
Don’t let BYOD stand for Bring Your Own Disaster
Mobile phishing attacks that target individuals can quickly compromise an organisation. That’s because more people are using their personal phones for work. The pandemic accelerated that as most people began working from home. Suddenly the phone you use to access your work emails is the same one you use to surf Facebook or play games on.
“75% of phones in the enterprise will be BYOD” – Gartner
Mobile Device Management (MDM) is often used by businesses with BYOD and COPE mobility strategies, but it doesn’t actually provide threat detection and remediation. Add Mobile Threat Defence (MTD) to detect any potential phishing risk, however, and you have a powerful combination, allowing IT teams to block access to particular business content or systems if there is a threat.
A mobile phishing attack can devastate your company
The ripple effect of a single successful data breach can quickly escalate. Here are just a few of the risks:
- Credential theft: Entering access credentials for an online account can give attackers access to it, and potentially others if victims reuse passwords.
- Device compromise: Phishing attackers can infect mobile devices with malware that steals data from the phone and uses the device to target contacts, spreading the attack.
- Business Email Compromise (BEC): Phishing attackers have been spotted impersonating executives and using SMS messages to target administrators, asking them to wire money to fraudulent accounts. One successful attack can cost millions.
- Ransomware infections: All it takes is a compromised company account to spread a ransomware attack internally among employees.
How does mobile phishing work?
Phishing used to be an email-only activity until smartphones came along. Now, as smartphones are used for both work and personal activity, your employees need to be aware of the different ways that cybercriminals target victims.
- An email opened on a mobile can make phishing harder to spot. Smaller screens make it more difficult to see who really sent the email. This, coupled with the fact that we often use our mobiles when multi-tasking, makes it easier to fall prey.
- An SMS, WhatsApp or social media message is a common method of attack for mobile phishers, especially if the message appears – at first glance – to be from a known brand or someone the victim expected to get a message from. Read more about SMS phishing and a recent WhatsApp phishing (whishing) scam.
- Malicious mobile apps downloaded from official app stores have impersonated software from well-known online services, displaying login screens that collect your personal account details. If this happens, you might never even realise you’ve been phished.
Mobile phishing is more dangerous
The majority of employees have their company email account on their mobiles. Although the general population are becoming more mindful of email phishing, when it happens on a mobile phone, it can be a lot harder to spot. Why?
- It catches you off-guard when you’re distracted.
- It arrives on tiny screens that people don’t read properly.
- It targets devices that aren’t protected by your company network.
Coaching staff on how to spot suspect emails is very useful when applied to desktops. Unfortunately, the same strategies don’t apply to mobiles, as the advice to ‘hover over a link to see where it goes before clicking on it’, is a redundant exercise on a mobile device, so it’s important to update your security training.
How we can help
All the images above are examples of real phishing scams we’ve seen on mobiles.
Any business that uses mobile devices has a duty to its customers to make sure it is doing everything in its power to protect against threats such as mobile phishing.
A simple and effective way to do so is with Trustd MTD. Set-up is quick and easy: you can enrol your organisation’s devices within five minutes to protect BYOD, COBO or COPE devices from phishing and other malware, credential theft via compromised WiFi, device vulnerabilities, and malicious web and app content.
Trustd works directly on smartphones to spot known phishing links and warn users before they give up their details. We use powerful AI techniques to look for suspicious patterns in web addresses, backed up by lists of known phishing URLs.
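As a rough illustration of how a URL check might combine a list of known phishing URLs with pattern heuristics: Trustd's actual detection models are proprietary, so the blocklist entries, brand names, and thresholds below are invented purely for the sketch.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// Hypothetical blocklist; real products sync much larger threat feeds.
var knownPhishing = map[string]bool{
	"secure-login-update.example.net": true,
}

// looksSuspicious checks the blocklist first, then applies two crude
// heuristics: brand names appearing in a host that is not the brand's own
// domain, and unusually hyphen-heavy hostnames.
func looksSuspicious(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return true // unparseable links are treated as suspect
	}
	host := strings.ToLower(u.Hostname())
	if knownPhishing[host] {
		return true
	}
	for _, brand := range []string{"paypal", "apple", "microsoft"} {
		if strings.Contains(host, brand) && !strings.HasSuffix(host, brand+".com") {
			return true
		}
	}
	// Many hyphens are a common lookalike trick ("my-bank-secure-login...").
	return strings.Count(host, "-") >= 3
}

func main() {
	fmt.Println(looksSuspicious("https://paypal.account-check.example.org/login")) // true
	fmt.Println(looksSuspicious("https://www.paypal.com/signin"))                  // false
}
```

Real mobile threat defence products layer machine-learned features on top of rules like these, but the basic shape, a fast blocklist lookup backed by heuristics for unseen URLs, is the same.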
Try it for yourself
Trustd is a new mobile threat defence solution that is quick and simple to set up – see for yourself with our 14-day free trial. Get set up in 5 minutes. If you’re looking for our free Trustd app, head over here. | <urn:uuid:969142ea-db9e-4da7-942c-21b1897b6f4b> | CC-MAIN-2022-40 | https://traced.app/what-is-mobile-phishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00399.warc.gz | en | 0.92169 | 1,033 | 2.546875 | 3 |
Conventional software solutions, both on-premises and in the cloud, are large applications that handle several different business functions and technical operations from one platform. This monolithic design can offer developers and operational teams a variety of efficiencies, not to mention cost savings in the number of tools they need to manage.
However, this infrastructure can become bulky and unmanageable as a developer team upgrades the application and the rest of the organization expands its use cases and user base over time. To make applications that work better with agile project management and DevOps team needs, many enterprises are opting to transform their most important enterprise applications into microservices.
What are microservices?
Microservices architecture is an application model in which individual components and functions are separated into different containers, or clusters, so they can operate and scale independently. Many enterprise software vendors are incorporating microservices into the newer apps that they create, but a growing segment of enterprise tech teams are taking these applications into their own hands and creating microservices that work for their specific business needs.
Application programming interfaces (APIs) are used to maintain basic connections amongst these services, but each microservice hosts its own business data and primarily relies on its own business logic and operational rules. This independent design makes it possible for a company’s developer teams to focus on each individual app component rather than the application as a whole.
Microservices vs APIs
Microservices and APIs are sometimes confused because of the cross-functional application management features they both offer. However, microservices and APIs serve distinct purposes in enterprise networking infrastructure and frequently work together.
APIs help separate applications “interface,” or communicate, with each other so they can co-manage operations and business workflows. APIs are also used to help different microservice components of the same application communicate with each other, but they are not the primary feature of microservices architecture.
Read more: How Do APIs Work?
Microservices, on the other hand, are the result of dividing a single application or platform into several different segments. While each microservice can handle different business needs and function independently, they’re still part of the same bigger application and still share some resources.
Key features of microservices
An application that has been segmented into a microservices design typically includes the following features:
- Autonomous services
- Lightweight APIs for cross-service communication
- Agnostic design for different programming languages and applications
- Containers and serverless computing
- Databases and data storage
- Load balancing
- Performance monitoring
Also read: Are Your Containers Secure?
Examples and use cases for microservices
A growing number of global enterprises—both service providers and internal teams—are turning their applications into microservices. This transformation not only creates new efficiencies for internal operations and teams but also helps companies to improve customer experiences.
Netflix uses a microservices architecture for a variety of its internal operations. One example is the Netflix Cosmos Platform, which is the company’s microservices system that processes media files from outside partners and studios to make them accessible to user devices. This solution combines microservices with load- and queue-based autoscalers and asynchronous video workflow rules.
In combination with the Internet of Things (IoT) and distributed computing solutions, Tesla relies on microservices architecture to support grid resilience and quick recovery times. Tesla uses both Kubernetes containers and Akka, an open-source application development toolkit, to create its microservices architecture.
Read more on TechRepublic: How Tesla Uses Open Source to Generate Resilience in Modern Electric Grids
Trade Republic, a fintech startup in Germany, uses microservices to separate industry- and organization-specific knowledge and requirements. This structure is particularly helpful for addressing a wide variety of banking and finance needs across enterprise customers from different industries, as well as the needs of individual consumers.
Learn how one tech company is helping businesses modernize their applications with microservices: Developing a Cloud Modernization Strategy: Interview with Moti Rafalin of vFunction
Pros of using microservices
Microservices help teams make their application visibility and functionality more granular, which can lead to a variety of security, collaboration, and growth-based benefits.
A microservices strategy creates an application infrastructure where each application component functions independently. Because these components only rely on and communicate with each other for a few resource-sharing workloads, microservices architecture makes it possible for teams to scale up or scale down individual services without hurting the functionality of another service.
A microservices architecture is designed to let developer teams choose the resources that best fit each service. This developer-agnostic infrastructure means that developers can use the platforms, programming languages, and third-party support applications that make the most sense for the microservice or project they’re running.
Improved fault and resource isolation
Microservices give developers and security professionals precise visibility into and control over what happens in each application component. This application composition offers improved fault isolation; security and performance problems in one service do not have to affect the others.
Resource isolation is another benefit of this design because memory problems and limitations will likely only affect one service’s uptime rather than the entire business application.
DevOps teams stand to benefit the most from microservices because of the development and deployment agility this strategy provides. A company’s developers can update or work on each service as needed without changing ones that are doing well or aren’t ready for updates. This focused development cycle typically leads to more efficient CI/CD pipelines and quicker time to market, or quicker time to internal users, depending on the specific use case.
Learn more about DevOps resources: Best DevOps Tools
Cons of using microservices
Microservices won’t work for every team and application use case. Enterprises should watch out for these potential issues that come with a microservices architecture.
Potential for resource sprawl and shadow IT
Independent platform, language, and other resource selections across individual microservices will likely lead to more overhead resource types that a company needs to manage and pay for. Resource sprawl can lead to unnecessary costs and inefficiencies, while also causing potential problems with shadow IT and limited global security visibility.
Difficulties with global testing and security management
Deployment, debugging, and application testing are not as easy to conduct globally in a microservices model. Teams need to make sure all interconnected services are functioning one by one before they can take big testing and deployment steps.
Per-service testing and management are easier with a microservices architecture, but global testing and security management become more difficult without the right supportive tools in place.
Not suited for smaller teams and their needs
Microservices are complex to set up and often require additional resources and third-party partners for strategic deployment in an enterprise network.
The amount of work that goes into setting up microservices might not be well-suited to small businesses and smaller developer and tech teams, due to a combination of lacking technical expertise and financial resources. Your team will need expert-level knowledge of DevOps best practices and complex deployment and testing schedules for microservices to be successful.
It’s also important to invest in the right security monitoring, API, and application infrastructure resources, which can quickly become cost-prohibitive for some teams.
Read next: Best API Management Software & Tools | <urn:uuid:1db1ad96-0baa-42e4-a90a-f4d70536a604> | CC-MAIN-2022-40 | https://www.cioinsight.com/it-strategy/what-are-microservices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00399.warc.gz | en | 0.926894 | 1,536 | 2.84375 | 3 |
- February 11, 2019
- Posted by: admin
- Categories: Big Data Analytics, IoT
Two of the most popular buzzwords today are Big Data and the Internet of Things (IoT). If you keep up with technology and its latest developments, you must have heard business experts say that companies with strong Big Data analytics are going to benefit enormously from IoT.
But before going into the Role of IoT in big data analytics, let’s first understand basic IoT.
So, what exactly is IoT?
Abbreviated as IoT, the Internet of Things basically refers to a group of physical devices such as home appliances and vehicles integrated with sensors, software and network connectivity which allows them to exchange data. Put differently, IoT is a vast system of interrelated digital machines and objects, even people, which have the ability to transfer data over a stable network.
Perhaps the most interesting thing about IoT is that this exchange of data requires no human intervention. The potential that lies in applications of IoT is truly incredible, and much of it is yet to be unveiled.
What is the scope of IoT in Big Data?
That day is not far away when IoT will take precedence in our daily lives, thereby resulting in the generation of massive volumes of user data every single day.
So Big Data automatically comes into the picture. Therefore, the scope of specialized IoT big data applications is beyond measurable limits.
Let us take a look at some of the most significant points about Big data powering IoT.
> Storage issue checked
According to sources, data experts have projected that in the current year, the data accumulated from various sources would amount to about forty-four trillion gigabytes. Some reports say that by the year 2020, there will be around 5,200 GB of data for each person in this world. In order to store and process this mind-boggling amount of data, enterprises are turning to cloud servers. With unlimited space, cloud storage is the perfect answer to Big Data storage issues.
> Security system
Until 2017 there were umpteen ways to leverage the positives of Big Data IoT architecture, but as complexities surfaced with every passing day, a ton of security concerns were raised as well. Such concerns are rapidly being addressed by hiring professionals well-versed in maintaining the security of Big Data IoT applications for enterprises. Today, companies that utilize IoT services are on the constant lookout for keeping up with the latest security standards through countless verification and authentication methods.
> Deeper insight into user data
The biggest impact of IoT is certainly on the sphere of Big Data analytics.
Already, the number of metrics available to a data scientist for analyzing user data is huge, and IoT is the cherry on top, as it enriches the data pool with deeper and more valuable details. The more detailed the data pool is, the better and more accurate the knowledge it can provide about users. And even if the number of users starts increasing exponentially, data scientists can still work on IoT data analytics projects to process massive data sets and offer precise customer insights to businesses. And that is all that enterprises will need to do business better.
If the relationship between Big Data and IoT can be accurately mapped, it will open up new horizons with more advanced technological developments in the future. As data experts carry on with their research, this wave of IoT data management and analytics will eventually give rise to an evolving landscape in modern technology. And the gradual paradigm shift in technology will benefit both enterprises and the common man in the near future.
School security has become a hot topic for debate. In light of the Marjory Stoneman Douglas High School shooting, March for Our Lives is a large movement that has sparked a national debate on gun ownership and usage. Whatever side of the conversation you may fall on, there's a call for more security measures to be implemented immediately. Thankfully, there are three technologies that schools can implement right now to help improve security and reassure educators, parents, and students alike.
1. Modern CCTV Systems
Many schools already have CCTV systems in place, but a current, modern surveillance system is critical. Being able to monitor all access areas to the building at one time allows us to have a better understanding of the security of a school. Additionally, cloud storage allows for accurately determining the details of an incident, a vital process in preventing future incidents from happening and in investigating what may have happened for legal purposes. IP based surveillance also allows for the “network access” by law enforcement if properly configured. If you still have the older coaxial cameras, you probably suffer from poor images and an expensive infrastructure to maintain.
2. Access Control
Utilizing access control systems allows educators to grant access to certain areas of the building to certain people. In order to gain access, you must first present some form of credentials such as a key card. Multifactor authentication, which requires more than one kind of credential like a physical card as well as an access code or biometric reading, can make access control even safer. These systems are effective means of cutting off access to doors that may not be monitored but are nonetheless essential to security. Video intercom technology can marry access to the school campus with main offices to ensure only authorized teachers and students can gain ready access.
3. Contemporary Communication Technology
Installing two-way communications systems allows those in the classroom to quickly initiate emergency protocols or reach administration and security. Compact personal panic buttons are capable of sending school-wide alerts within seconds, allowing security staff members to consult the CCTV system and determine where and to what extent a threat exists. They can also “lock down” classrooms, allowing students and teachers to shelter in place safely.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (inside and outside plant)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Distributed Antenna Systems
- Public Safety DAS
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at firstname.lastname@example.org, or visit our contact page. Our offices are located in the Washington, DC metro area, Richmond, VA, and Columbus, OH. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999. | <urn:uuid:0f50e09e-9599-4252-abfd-c9980200e72b> | CC-MAIN-2022-40 | https://www.fiberplusinc.com/security/3-practical-school-security-measures-can-put-place-today/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00399.warc.gz | en | 0.943842 | 696 | 2.640625 | 3 |
Editor’s note: Data Privacy Day is an international event that occurs every year on Jan. 28. The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. It is currently observed in the United States, Canada, Israel and 47 European countries. The following story is relevant to this topic.
Small businesses certainly aren’t immune to cybercrime. The cyberthreat landscape has evolved; attacks don’t stem from only rogue hackers hoping to get access to corporate secrets from large businesses. Instead, small businesses are just as likely to be the victim of cyber-attacks as large corporations, with organized crime groups targeting points of weakness in the hopes of making quick money.
Today’s attacks are simple enough to be deployed at a large scale, and hackers are using them to target small businesses that typically have a moderate amount of data with minimal security.
A Better Business Bureau study found that even the smallest of businesses are at risk. Of respondents representing businesses with 0 to 5 employees, 16% have faced a cyber-attack, and 9% don’t know if they’ve been targeted. Similarly, about 12% of survey respondents from organizations with 6 to 10 employees have been attacked, and 14% are unaware if they’ve ever fallen victim to a cybercrime.
No Small Threats Anywhere
Cyber-attacks don’t represent small threats, either. A Kaspersky study indicated that among small businesses, the average direct cost of recovering from a data breach is $38,000. The direct costs commonly associated with data breaches are far less significant than the “hidden” costs.
Companies must also consider the operational implications of a cyber-security incident. Businesses rely on data. In fact, the Better Business Bureau survey found that only 35% of businesses could maintain profitability for more than three months if they were to permanently lose access to critical data.
It doesn’t take much to run into a data loss incident, either. Ransomware is more likely to create sizable data loss than a hard disk failure, and it is emerging as one of the most common types of attacks.
Beyond data loss, organizations must also contend with reputation-related damages, legal costs, customer defection and similar issues when impacted by a data breach.
The threat for small businesses is real and growing. The Identity Theft Resource Center found that the number of tracked U.S. data breaches reached a new high in 2017, as the figure climbed 44.7% year over year.
Taking cyber-security seriously isn’t just important in preventing damages. It can also create a positive starting point with customers by showing you care about the security of their private information.
With risk rising at an astronomical pace, small businesses must prepare themselves to not only keep attackers at bay, but to also respond effectively in the event of a disaster. This process begins by understanding the entire threat climate.
Data Point Question No. 1: Which industries are most at-risk for cyber-attacks?
Any type of organization may be threatened. However, a few industries stand out as being highly targeted based on data from the Identity Theft Resource Center. These industries include:
General businesses: The average business is the biggest target for attacks. The Identity Theft Resource Center found there were 1,579 tracked data breaches in the U.S. in 2017, with 870 of those breaches impacting enterprises. If that number seems low, remember that it covers only reported and tracked data breaches—not the many attacks that go unnoticed or are kept quiet.
Health care: The study indicated that approximately 24% of all data breaches in 2017 happened at health care industry businesses. These statistics aren’t limited to hospitals and care networks; 83% of physicians polled by the American Medical Association said they’ve faced a cyber-attack.
Banking and finance: Banks and financial institutions are heavily targeted by cyber-criminals seeking to hack into the accounts of customers. Organizations in this sector were struck by 8.5% of all breaches.
Retail: While not mentioned in the study, the rise of e-commerce is leading to a rapid increase in the number of attacks targeting merchants online and through attacks at the point of sale.
Data Point Question No. 2: What data are hackers targeting?
Beyond knowing what industries are most at risk, it’s important to identify what data is targeted most often. For example, the information stored on mobile devices. Many smartphones and tablets lack the same security protections offered by traditional computers.
What’s more, many users rely on passwords as the sole form of protection for their devices and applications. But passwords are faulty and often poorly created. The Better Business Bureau study mentioned earlier found that 33% of data breaches impacting respondents lead to the theft of passwords or similar data.
For small business owners, losing control of a customer’s account information can lead to an immediate loss of trust. Not only are you failing customers, you’re also leaving their private information exposed, potentially leading to further problems. This can damage your brand, force you to spend on credit monitoring or lead to legal problems.
The costs and long-term damages can be substantial, and even a small incident can escalate quickly because of the types of attacks cyber-criminals employ. In simplest terms, hackers are attacking data that allows them to take control of your identity. If they’re able to retrieve password data, they can use it to force their way into email accounts. Once there, they can reset passwords to accounts that use email for a login.
If they steal payment card data, they can claim a person’s identity and set up accounts or make purchases. For small businesses, these attacks can put customers at considerable risk. If an employee email account is compromised, for example, then hackers can gain access to your back-end systems where customer information is stored. From there, they can use the data to target your clients.
The result of these tactics is an increase in other types of identity fraud. The Identity Theft Resource Center found that credit card attacks increased 88% from 2016 to 2017. According to FICO, attacks on debit cards rose 10% year over year in 2017. Payment credentials aren’t alone in being attacked. Social Security numbers, for example, were attacked eight times more often in 2017 than they were in 2016. As a business owner, you are responsible for the safekeeping of your customers’ credit card and debit card information, so the fact that these types of attacks are increasing is even more reason to stay vigilant.
Data Point Question No. 3: What methods do hackers use?
There are several types of cyber-attacks. However, a few stand out as particular threats for small businesses.
Malware: According to the Kaspersky study mentioned previously, approximately 24% of businesses have been hit by malware. Malware is malicious software that accesses a system and resides in the background sending data to attackers. For example, keyloggers—applications that record all keystrokes a user makes—are a common malware system. They are used to steal passwords that users type repeatedly.
Phishing attacks: Ten percent of those polled in the Kaspersky study said they were hit by phishing scams. Phishing tactics use fake emails to get users to click a link or open an attachment, often to get malware or ransomware onto a system. For example, an email may look like it has come from an equipment supplier and ask one of your workers to reset a password. When the worker does so, it gives the hacker access to your system.
Ransomware: This is a relatively new type of malicious software designed to block access to a computer system. When ransomware gets onto a machine, it turns the data in the system into a coded format. From there, the attacker demands a ransom from the victim in order to get the data decoded.
Software vulnerabilities: Sometimes software will have a glitch that moves data around in an unsafe way. These vulnerabilities let hackers get into systems they otherwise wouldn’t be able to access. It’s important to keep up with patches and software updates to avoid these problems.
These attack types are particularly problematic for small businesses because they don’t take much skill to use. Because they’re easy for criminals to employ, hackers have no problem using them at large scale to attack many organizations, regardless of size. Being a small business won’t keep you off attackers’ radars. It’s time to adapt and employ modern security strategies.
Data Point Question No. 4: What’s the solution?
There isn’t a single strategy to deal with cyber-security. However, you can get help to mitigate these threats as fully as possible.
QuickBridge, for one, can provide businesses with the supplementary capital needed to invest in cyber-security measures. The funds can be used to hire additional IT staff, train employees, update your software or purchase cyber-security insurance to safeguard against the after-effects of a breach.
Approximately 60% of small businesses shut down within six months in the aftermath of a data breach, QuickBridge said.
Cyber-attacks tarnish both your customer’s trust and your business’s reputation. Allocating resources toward data protection can give you a competitive advantage by demonstrating how strongly you value your customers’ data and limit your business’s liability.
Other small business loan providers with online platforms include Lendio, LoanBuilder, Headway Capital, Kabbage and a list of others.
Editor’s note: This is an edited version of an industry whitepaper researched and offered to eWEEK for publication by QuickBridge, which provides a small business loans platform called Smarter Funding to provide business owners with fast access to working capital. | <urn:uuid:0662eb0e-9e4c-4374-9b63-bb984ada6645> | CC-MAIN-2022-40 | https://www.eweek.com/small-business/are-small-businesses-protecting-customer-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00399.warc.gz | en | 0.948422 | 2,036 | 2.84375 | 3 |
Today, data drives growth. It is the most valuable asset organizations use to make comprehensive, insight-driven decisions, and it is the foundation on which business truths take shape. People generate enormous volumes of data every day by interacting through different electronic channels.
For example, data might come in as purchase records from stores and retail outlets, phone calls, self-administered surveys, field observations, interviews, and experiments.
Big Data is a resource for both tech and non-tech organizations
To be useful, data must be organized into an understandable structure. A data analyst's job is to extract relevant, useful information from the huge pool that is available and to standardize it. With well-organized data, businesses can structure their operations and gather insights properly.
Businesses are exploring Big Data through predictive analytics to gauge future opportunities and risks. Telecom firms, for instance, use this technique to identify subscribers who are likely to leave their network.
Insurance companies rely heavily on regression analysis to assess the credit standing of policyholders and the number of claims to expect in each period. In the banking industry, regression analysis helps segment customers according to their likelihood of repaying loans.
Big Data helps to discover hidden insights and patterns
Information extraction is now faster and less cumbersome thanks to the seamless integration of the Internet of Things (IoT) and Big Data. The value of data keeps increasing, with some organizations existing specifically to collect and sell it. Research shows that accurate interpretation of Big Data can improve retail operating margins by as much as 60%.
What’s Causing The Data Explosion?
The fundamental reason for such growth is that more people have more devices to create and share data than ever before. From a new Word document on your PC to a photo or video snapped on your phone, we're filling hard drives with more data than at any time in recent memory.
In response to this enormous growth in data, tech companies are developing solutions that help people make sense of the patterns within it. This is part of what has made artificial intelligence (AI) possible, and so important.
Why Should We Care About Data Growth?
The big data explosion matters because it will make a big difference in how you run your business and in your relationships with your customers. If you haven't yet begun asking questions about big data and data analytics, there's a good chance you will start soon enough.
By learning and understanding the latest trends in everything from business and data analytics to AI and machine learning, you can stand out among competitors who may not have developed services that address customers' needs.
As the data deluge continues, here are a few things you should seriously consider offering your customers.
The most straightforward way to take advantage of data growth is to help your clients store and manage all the data they're creating. With solutions that offer object-based, limitless scale-out storage, you have a simple way of giving clients more storage space the moment they need it.
Securing data is a significant challenge today. Activities such as shopping, social media conversations, and digital content consumption are monitored and recorded continuously, often by parties we may not know.
Some organizations are set up with the sole aim of gathering data and trading it with partners who use it for business purposes. We have seen numerous breaches over the years.
We all know how important backups are, but as data grows, you have the opportunity to separate useful data from less valuable data. Thoughtful storage choices make it simple to prioritize the most critical data, so backup storage costs don't get out of proportion.
Analytics as a Service
Many service providers are diving into the data game by giving their clients data analytics applications and services. At the moment, only a handful of IT providers offer such services, so packaging some of these services for your clients can give you a real opportunity to add value.
The Big Data explosion is real, and it is here to stay. No country or organization can afford to close its eyes and ignore this phenomenon. Joining the Big Data bandwagon requires a huge pool of skills and well-positioned industries.
The explosion has driven enormous growth in data and has stimulated remarkable innovations that will hold the world's attention for a long time to come.
A database is a collection of data that is organized so that its contents can easily be accessed, managed, and updated.
How database hosting works in Security Center
By default, a role's database is hosted on the same server that hosts the role. This is shown in the role's Resources tab by the value (local)\SQLEXPRESS in the Database server field, where "(local)" is the server where the role is running.
If you plan to change the server hosting the role or add secondary servers for failover, the database must be hosted on a different computer.
In addition, the computer hosting the database server does not have to be a Security Center server (meaning a computer where Genetec Server service is installed), unless you are configuring Directory database failover using the backup and restore method.
How SQL Server uses memory
If you are using a licensed edition of SQL Server (such as SQL Server Standard, SQL Server Business Intelligence, or SQL Server Enterprise), keep in mind that all databases in Security Center are managed by Microsoft SQL Server. By default, SQL Server is configured to use as much memory as is available on the system. This can lead to memory issues if you are hosting SQL Server and many roles on the same server, especially on a virtual machine with limited memory resources. If you are running out of memory on one of your servers, you can fix the problem by setting a maximum limit on the amount of memory SQL Server is allowed to use.
As you may know already, Wired Equivalent Privacy (WEP) security is not secure. This first wireless LAN security standard, developed by the IEEE, has been vulnerable to cracking by Wi-Fi hackers for nearly a decade now.
In 2003, the Wi-Fi Alliance released a security standard called Wi-Fi Protected Access. Although the first version (WPA), which uses TKIP/RC4 encryption, has been beaten up a bit, it is not totally cracked and can still be quite secure.
The second version (WPA2), released in mid-2004, does provide complete security, however, because it fully implements the IEEE 802.11i security standard with CCMP/AES encryption.
In this article, we'll discover the two very different modes of Wi-Fi Protected Access. We'll see how and why you'd want to move from the easy-to-use Personal mode to the Enterprise mode.
Now let's get started!
Two Modes of WPA/WPA2: Personal (PSK) versus Enterprise
Both versions of Wi-Fi Protected Access (WPA/WPA2) can be implemented in either of two modes:
- Personal or Pre-Shared Key (PSK) Mode: This mode is appropriate for most home networks, but not business networks. You define an encryption passphrase on the wireless router and any other access points (APs). Then the passphrase must be entered by users when connecting to the Wi-Fi network.
- Enterprise (EAP/RADIUS) Mode: This mode provides the security needed for wireless networks in business environments. Though more complicated to set up, it offers individualized and centralized control over access to your Wi-Fi network. Users are assigned login credentials they must present when connecting to the network, which can be modified or revoked by administrators at anytime.
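In the Personal (PSK) mode, the passphrase you type is not used directly as the encryption key. The 802.11i standard derives a 256-bit pre-shared key from the passphrase and the network's SSID using PBKDF2 with 4,096 iterations. A minimal sketch of that derivation, using only the Python standard library:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # IEEE 802.11i: PSK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Test vector from the 802.11i specification:
print(wpa_psk("password", "IEEE").hex())
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

Because the SSID salts the derivation, attackers cannot reuse one precomputed dictionary against every network, but a weak passphrase is still the whole defense in this mode.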
Though the Personal mode seems very easy to implement, it actually makes properly securing a business network nearly impossible. Unlike with the Enterprise mode, wireless access can't be individually or centrally managed. One passphrase applies to all users, and if the global passphrase ever needs to be changed, it must be changed manually on all the APs and computers. That becomes a big headache whenever a change is needed, for instance when an employee leaves the company or when any computers are stolen or compromised.
Also unlike the Enterprise mode, the Personal mode stores the encryption passphrase on the computers themselves. Therefore, anyone with access to a computer, whether an employee or a thief, can connect to the network and also recover the encryption passphrase.
In the Enterprise mode, by contrast, users never deal with the actual encryption keys. Keys are securely created and assigned per user session in the background after a user presents their login credentials. This prevents people from recovering the network key from computers.
NASA has partnered with its International Space Station associates to draft a set of standards on seven priority areas concerning global interoperability.
These areas include avionics, communications, environmental control and life support systems, power systems, rendezvous operations, and robotics and thermal systems, the space agency said Tuesday.
The collaboration seeks to improve space technology compatibility without the need for additional design changes.
"Having compatible hardware will allow differing designs to operate with each other," said William Gerstenmaier, associate administrator at the agency's Human Exploration and Operations Mission Directorate.
"This could allow for crew rescue missions and support from any spacecraft built to these standards," he added.
NASA intends to have the draft’s baseline finalized in summer 2018, and have the standardization first applied to the Lunar Orbital Platform-Gateway outpost. | <urn:uuid:7adc3a4f-15a3-412b-83fb-d4dd98925abf> | CC-MAIN-2022-40 | https://executivegov.com/2018/03/nasa-iss-partners-formulate-standards-to-promote-interoperability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00599.warc.gz | en | 0.888498 | 176 | 2.640625 | 3 |
All that stands between hackers and your accounts’ data, be it personal information or sensitive business info, is a measly string of characters that may (or may not) be complex enough to thwart their attacks. We’re talking about your passwords, and for many businesses, they are the only thing protecting important data. We’ll walk you through how to make sure your passwords are as complex as possible, as well as instruct you on how to implement additional security features to keep your data locked down.
How to Create a Secure Password
The ideal password is generally easy to remember, but difficult to guess, all while utilizing a plethora of letters, numbers, and symbols. Unfortunately, all of this combines to create a situation that makes remembering a password practically impossible without some sort of aid or program. We recommend putting together a password that is an alphanumeric representation of a phrase that you will remember.
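One common trick for building such an "alphanumeric representation of a phrase" is to collapse a memorable sentence into its initials while keeping any digits and symbols. The helper below is a hypothetical sketch of that technique; the phrase is just an example, not a password anyone should actually use:

```python
def phrase_to_password(phrase: str) -> str:
    """Collapse a memorable phrase into a compact password:
    keep each word's first character, plus every digit or symbol."""
    out = []
    for word in phrase.split():
        for i, ch in enumerate(word):
            # keep initials, and keep non-letters for extra entropy
            if i == 0 or not ch.isalpha():
                out.append(ch)
    return "".join(out)

print(phrase_to_password("My dog Rex eats 2 bowls of kibble every day!"))
# MdRe2boked!
```

The result is easy to reconstruct from the sentence you remember, but looks like random characters to anyone trying to guess it.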
Of course, if you make so many of these, you might forget which ones apply to that particular account. This is where password management comes in. You can create a “master password” that acts as the gatekeeping password for your many accounts. Password managers store your passwords in a secure database where they are only called on as needed, keeping you from having to remember them all.
Passwords are best utilized alongside a secondary method of authentication. This could be in the form of a passcode sent to your mobile device via text message or phone call, or it could be a biometric code of some sort like a thumbprint. Regardless, a secondary method of authentication means that your account is less likely to be infiltrated, as it effectively means twice the work for any hacker attempting to break in.
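The one-time passcodes behind many two-factor systems need nothing more than a shared secret and the current time; this is the TOTP scheme standardized in RFC 6238 (built on RFC 4226's HOTP). A minimal standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the counter, then "dynamic truncation" to N digits.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time: float, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: the counter is the number of 30-second steps since the epoch.
    return hotp(key, int(at_time // step), digits)

secret = b"12345678901234567890"           # the RFC test secret
print(totp(secret, at_time=59, digits=8))  # RFC 6238 test vector → 94287082
print(totp(secret, time.time()))           # the code an authenticator app shows now
```

Because the code changes every 30 seconds and is derived from a secret that never travels over the network, a stolen password alone is not enough to get in.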
COMPANYNAME can equip your business with password managers and two-factor authentication tools that will optimize data security and ensure your accounts will have a minimal chance of being compromised. To learn more, reach out to us at PHONENUMBER. | <urn:uuid:a680c05f-265e-4e8b-922c-7b0dcef68c66> | CC-MAIN-2022-40 | https://www.activeco.com/how-to-secure-data-using-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00799.warc.gz | en | 0.949304 | 405 | 2.8125 | 3 |
311 million records containing 77 million URLs were analysed to develop the document.
Malware is a massive issue for internet users across the world, and one of the main ways that we’re hoodwinked is by clicking on something we’re not meant to. Of course, the way that happens is by directing us to URLs. “URLs are central to a myriad of cyber-security threats, from phishing to the distribution of malware,” say the authors of a new paper that tries to uncover and characterise what makes a maliciously-used URL.
Researchers at Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) analysed 311 million records containing 77 million URLs that were submitted to Hispasec Sistemas' online antivirus checking website, VirusTotal, between December 2019 and January 2020. The findings were astounding for the scale of the malware problem they revealed.
From the dataset the researchers analysed, a staggering 2.6 million suspicious campaigns were identified based on their attached metadata, 77,810 of which were confirmed to be malicious through a secondary check. In total, 38.1 million records and 9.9 million URLs were found within those 2.6 million suspicious campaigns.
Digging into the detail
Perhaps most concerning for those trying to spot these malicious campaigns is the volume of worrying ones that slipped through the net. “Some surprising findings were observed, such as detection rates falling to just 13.27% for campaigns that employ more than 100 unique URLs,” the researchers say.
A quarter of all submissions came from the United States, with 17 million unique pieces of content analysed.
Submitted URLs were checked by a median of 72 security vendors to determine whether they're benign or malicious. But what was worrying was that almost all (98%) of the submissions were flagged as malicious by 10 or fewer vendors at a time. "This indicates that vendor detection performance is highly skewed and only a few of them are effective," the researchers write.
That may seem a major concern – and it is, given the ineffectiveness of much of the market – but the wisdom of crowds does help spot those malicious sources. The researchers’ findings are that if a URL is flagged by at least four vendors, it is reasonable to conclude that it is malicious. That means that while any individual security vendor solution may not be that reliable, the market as a whole is able to provide a safety net through strength in numbers.
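The "at least four vendors" rule of thumb is easy to express in code. The sketch below assumes each scan result is simply a mapping from vendor name to a boolean verdict; the vendor names and data structure are invented for illustration:

```python
def is_malicious(scan: dict[str, bool], threshold: int = 4) -> bool:
    """Apply the researchers' rule of thumb: treat a URL as malicious
    once at least `threshold` vendors independently flag it."""
    return sum(scan.values()) >= threshold

scan_result = {
    "VendorA": True, "VendorB": True, "VendorC": False,
    "VendorD": True, "VendorE": True, "VendorF": False,
}
print(is_malicious(scan_result))  # 4 of 6 vendors flagged it → True
```

Pooling verdicts this way is exactly the "wisdom of crowds" effect the study describes: no single vendor is reliable, but a small consensus is.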
A barrage of malware
Still, malware detection is a cat and mouse game, and the hackers and cybercriminals behind these campaigns know how to try and force their way through defences. And they literally do: the vast majority of malicious URLs come from campaigns that employ multiple unique URLs, according to the researchers. The goal too is to bamboozle users into thinking they’re visiting a legitimate URL when in fact it’s a fraudulent one.
The average URL length across campaigns (i.e., the mean of means) stands at 64.29 characters, say those who parsed the data.
Those campaigns often try to mimic big brands to capitalise on the trust those brands have built up over time. Take for instance one campaign launched by hackers. One campaign of 4,081 unique URLs tried to pass off as an Apple brand. It used a combination of 9 sub-domains, 12 domains, and 7 suffixes, ranging from www.apple.com as a subdomain, icloud-com as a domain, and .us, .live, and .support as a gTLD.
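A crude way to catch the Apple-style impersonation described above is to check whether a trusted brand name appears somewhere in the hostname even though the registered domain is not the brand's own. The sketch below is a naive heuristic: it takes the last two labels as the registered domain, which mishandles suffixes like .co.uk, so a real implementation would consult the Public Suffix List. The brand allowlist is an invented example:

```python
from urllib.parse import urlparse

BRAND_DOMAINS = {"apple": "apple.com", "icloud": "icloud.com"}  # example allowlist

def looks_like_impersonation(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    registered = ".".join(host.split(".")[-2:])  # naive; real code needs the PSL
    for brand, official in BRAND_DOMAINS.items():
        # brand name present in the host, but the actual domain isn't the brand's
        if brand in host and registered != official:
            return True
    return False

print(looks_like_impersonation("https://www.apple.com.icloud-com.us/login"))  # True
print(looks_like_impersonation("https://www.apple.com/support"))              # False
```

The key insight is that only the registered domain determines who controls a URL; everything to its left is attacker-chosen decoration.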
“We see the efforts [cybercriminals] go to in order to evade defences, with our findings on their use of widely variant URL lengths and propensity for longer URLs,” say the researchers. “It is hoped that such insights would be of use to the wider cyber-security community.” | <urn:uuid:2a7b5a82-8f91-47da-af28-0ba94bfccc01> | CC-MAIN-2022-40 | https://cybernews.com/security/massive-analysis-of-311-million-malware-warnings-heres-how-hackers-fool-us/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00799.warc.gz | en | 0.95848 | 810 | 2.515625 | 3 |
Computers are supposed to be completely predictable. When you tell one to do something, it should do exactly that, over and over again if necessary, in the same way, with the same result. This is the nature of computer programming. But this predictability can allow computer criminals to interrupt a computer's processing and divert it to do nefarious things. If you know exactly where to poke the system, predicting where and how it does its processing, you can effectively rewire it to do your bidding. This is the basic attack methodology that lets bad guys insert their malware into our systems. But what if we were able to randomly perturb a computer's processing on a periodic basis, making it effectively unpredictable? This is the essence of a new computer architecture called Morpheus that may one day make all of our computers and computerized devices much, much harder to hack. Today, Todd Austin will explain how this brilliant defense mechanism works and how it was inspired by the human body's immune system.
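To make the "periodic random perturbation" idea concrete, here is a toy software analogy, emphatically not how Morpheus actually works in hardware: values are stored XOR-masked under a key that is periodically re-randomized ("churned"), so any leaked masked value goes stale before an attacker can exploit it. All names here are invented for the sketch.

```python
import secrets

class ChurningStore:
    """Toy analogy of churn-based defense: stored values are XOR-masked
    with a key that is re-randomized periodically, while lookups through
    the legitimate interface keep working unchanged."""

    def __init__(self) -> None:
        self._key = secrets.randbits(64)
        self._store: dict[str, int] = {}

    def put(self, name: str, value: int) -> None:
        self._store[name] = value ^ self._key  # store only the masked form

    def get(self, name: str) -> int:
        return self._store[name] ^ self._key   # unmask on legitimate access

    def churn(self) -> None:
        # Re-randomize the key and re-mask everything: old leaked
        # ciphertexts are now useless, but get() still returns the truth.
        new_key = secrets.randbits(64)
        self._store = {k: (v ^ self._key) ^ new_key for k, v in self._store.items()}
        self._key = new_key

store = ChurningStore()
store.put("return_address", 0xDEADBEEF)
store.churn()  # attacker's snapshot of the masked value is now stale
print(hex(store.get("return_address")))  # 0xdeadbeef
```

The real Morpheus churns encrypted pointers and other "undefined semantics" in silicon on a millisecond timescale; the point of the analogy is only that legitimate code never notices the churn, while an attacker's stolen knowledge constantly expires.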
Todd Austin is a Professor of Electrical Engineering and Computer Science at the University of Michigan in Ann Arbor. His research interests include computer architecture, robust and secure system design, hardware and software verification, and performance analysis tools and techniques. Todd is also co-founder of Agita Labs, a startup developing privacy-enhanced computation technologies that help ease the tension between data discovery and personal privacy.
- Morpheus article: https://spectrum.ieee.org/morpheus-turns-a-cpu-into-a-rubiks-cube-to-defeat-hackers
- Morpheus video: https://www.youtube.com/watch?v=v2mLm2QqsVo
- DARPA SSITH program: https://www.darpa.mil/program/ssith
- Become a Patron! https://www.patreon.com/FirewallsDontStopDragons
- Would you like me to speak to your group about security and/privacy? http://bit.ly/Firewalls-Speaker
- Generate secure passphrases! https://d20key.com/#/ | <urn:uuid:f02e8eb8-c66f-4fd6-9699-cb844fff72cd> | CC-MAIN-2022-40 | https://podcast.firewallsdontstopdragons.com/2021/08/30/morpheus-securing-cpus-with-entropy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00799.warc.gz | en | 0.901357 | 452 | 3.296875 | 3 |
What Is a Container?
David Egts, chief technologist for the North American public sector at Red Hat, notes that a container is “an application with all the dependencies and libraries that it needs wrapped into a unit that we call a container image, which people can pull down, and then you could run it on a container host and the host could be a Linux system.”
The GSA’s container guide notes that containers are “packages of software services that exist separately and independently from an existing host infrastructure.”
Container environments include the application, all required dependencies, software libraries and configuration files, the GSA notes. “Because container images hold everything needed for an application, developers do not need to code applications for new environments and deployment is greatly streamlined,” the guide reads. “Generally, applications have multiple containers functioning like isolated, secure building blocks for the application’s software.”
Unlike a virtual machine, which replicates an entire operating system and is a very large disk file, containers include just the application and its necessary dependencies.
Container Orchestration in Government
On their own, containers are easy to deploy and maintain for federal agencies. However, as agencies deploy more containers and associated services, they can become more complex to manage.
“The need to automate the deployment, networking and availability of containers becomes critical at scale,” the GSA notes in its guide. “Container orchestration is a critical component of overall container management. In addition to orchestration, a successful container management system also contains load balancing, networking, schedulers, monitoring and testing.”
Container operators can automate the packaging, deployment and management of containerized applications, according to the GSA. Orchestration platforms such as Docker and Red Hat’s OpenShift can help agencies manage containers.
How Can Containerization of Software Benefit Federal Agencies?
Containers provide agencies with numerous benefits. “Containers offer federal agencies a unique opportunity to modernize their current legacy applications and develop new applications to take advantage of cloud services,” the GSA notes. “They allow agencies to develop applications quickly, scale rapidly, and efficiently use their valuable resources.”
Containers are much more efficient to run than virtual machines, which require hypervisors and are essentially running their own operating systems. “With the container host, you’re going to have an operating system that can run containers, but all it’s doing is just running those particular applications,” Egts says.
Additionally, containers are immutable infrastructure. “A container image contains the code to run an application and provides a ‘static’ element for IT operations teams to work with,” the GSA notes. “The immutable aspect of the container provides a higher level of confidence for both testing and production.”
Containers also make it easier for agencies to deploy applications more quickly. “Using containers frees developers from the tedious task of managing multiple configuration environments, supporting libraries, and configurations from testing to production environments,” according to the GSA. “Containers can be created once and used multiple times without additional effort. Through containers, developers can focus on application deployments rather than maintaining supporting configurations.”
Agencies can also do cloud-native development more easily via containers, Egts says, enabling them to build apps that can scale up and down to meet demand.
Orchestration tools can enable agencies to schedule multiple containers to handle increased demands, he says. The orchestrators can detect that demand and detect when it has waned, “and can shut off those containers automatically and free up those compute resources for something else, which you couldn’t do with virtual machines.”
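The scale-out/scale-in behaviour described here can be sketched as a simple proportional rule, similar in spirit to the algorithm Kubernetes documents for its horizontal autoscaler; the function name, target load and replica limits below are illustrative assumptions, not any real orchestrator's API:

```python
import math

def desired_replicas(current: int, load_per_replica: float,
                     target_load: float = 0.7,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Return how many container replicas an orchestrator should run.

    Replicas grow when per-replica load exceeds the target and shrink
    (freeing compute resources) when demand wanes.
    """
    want = math.ceil(current * load_per_replica / target_load)
    return max(min_replicas, min(max_replicas, want))

# Demand spikes: 4 replicas each at 90% load -> scale out to 6.
print(desired_replicas(4, 0.9))  # -> 6
# Demand wanes: 4 replicas each at 10% load -> scale in to 1.
print(desired_replicas(4, 0.1))  # -> 1
```

An orchestrator evaluates a rule like this periodically against observed metrics and reconciles the running container count toward the result.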
Further, containers can aid agencies’ cybersecurity by presenting a smaller attack surface, Egts says. The GSA says containers are typically easier to inspect than virtual machines, enable the resolution of vulnerabilities without affecting the entire application, provide a more consistent environment and enable quick updates.
Agencies have been benefiting from containers for years. For example, the Navy teamed with Red Hat to speed up its software development efforts using OpenShift to orchestrate containers. The Naval Information Warfare Center Pacific “created a secure application development pipeline, and then successfully demonstrated automated application deployments,” a case study notes.
At the National Institutes of Health, containers are helping support high-level scientific research. In a heterogeneous IT environment, containers help researchers overcome legacy IT hurdles that might hinder their efforts.
Kubernetes vs. Docker: What’s the Difference?
Containers, Docker and Kubernetes are often discussed at the same time, but there are important differences.
Kubernetes is essentially an open-source orchestrator for containers, Egts says. Docker provides a set of Platform as a Service products that use virtualization to deliver containers. “A fundamental difference between Kubernetes and Docker is that Kubernetes is meant to run across a cluster while Docker runs on a single node,” Microsoft notes in a blog post.
Egts says that users can write apps and then deploy them “on a container platform that would use Kubernetes in the background to schedule and scale out these containers” as demand for the containerized applications increases.
“You need to be able to spill over to other container hosts and have them spin up and run those containerized workloads as well,” Egts says. “That’s what Kubernetes does. Think of it as the puppet master of all of the containers.” | <urn:uuid:30c8ee8a-e518-4e1c-955f-dafd3c2c82f9> | CC-MAIN-2022-40 | https://fedtechmagazine.com/article/2022/01/container-technology-how-containerization-helps-federal-agencies-modernize-software-perfcon | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00799.warc.gz | en | 0.924721 | 1,165 | 2.96875 | 3 |
What Is the Public Cloud?
The public cloud is a computing model in which third-party vendors deliver various computing resources over the internet. The resources are available to the general public. Any organisation can simply sign up with a cloud provider and begin provisioning services.
The public cloud services typically fall into these three categories:
- Software as a service (SaaS)
- Platform as a service (PaaS)
- Infrastructure as a service (IaaS)
SaaS offerings are ready-to-use software solutions, while PaaS is typically used for tasks like app development. IaaS gives companies the cloud infrastructure to build their overall environment; computing resources might include cloud storage, servers and networking. Prominent public cloud providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.
What Is the Private Cloud?
The private cloud is a cloud resource owned and used by one organisation. The IT resources that make up the private cloud environment can be hosted on-premises or in third-party data centres. The key indicator of a private cloud resource is that there is no multi-tenancy: the resources are used solely by one single group of users.
The private cloud is an attractive cloud model for companies that want a bespoke cloud connect service, as it allows them to have more control over their resources. They might store sensitive data, have an in-house team, or wish to host behind their own company firewall. As such, private cloud deployments offer organisations more management and customisation options.
What is the difference between a public and private cloud?
The difference between public and private cloud computing typically comes down to resource access. In a public cloud environment, the physical resources are used by many different organisations simultaneously in a multi-tenancy arrangement. They are managed, owned and maintained by third-party public cloud providers and rented to users on a pay-as-you-go basis.
The private cloud, meanwhile, is used by only one organisation with single tenant access. Some private cloud resources are owned by organisations and hosted in an on-site data centre.
More commonly, though, companies choose to purchase or rent private computing resources hosted in third-party data centres, otherwise known as hosted private clouds. This gives them the access controls of private cloud – without having to host on-premises. The two types of cloud can also be combined in multi-cloud or hybrid cloud setups.
The Hybrid Cloud: Merging Private and Public Clouds
When an organisation combines public and private cloud resources, it’s known as a hybrid cloud environment. The individual cloud resources can be connected by virtual private networks, APIs and orchestration tools. The crucial and defining feature of a hybrid cloud setup is that it contains at least one private cloud resource and one public cloud element. Learn more about Hybrid Cloud Security from our downloadable whitepaper.
Multi-Cloud: Two or More of the Same Cloud Platform
A multi-cloud environment contains two or more of the same cloud type. It’s most commonly found as two or more public cloud resources which work independently of each other. A company can design multi-cloud strategies from the beginning, but they can also occur due to unforeseen scaling, changes over time, or shadow IT.
What Are the Benefits of the Public Cloud?
Public cloud services give users many options for scaling. New resources can be provisioned on-demand, while existing environments can be scaled up and down if needed. As the resources are hosted and managed by a third-party vendor, there is little time overhead in installing and deploying them.
The public cloud offers excellent options for backups, redundancy and disaster recovery. Clients can choose from different cloud storage types, located in various locations if needed.
Cost Savings and Flexible Pricing
One of the most notable advantages of public cloud strategies is the pricing. Costs can be more predictable and kept low through usage management, while the pay-as-you-go model avoids the need for initial outlays on IT infrastructure.
Less Maintenance of On-Premises Infrastructure
With the public cloud, the user has no responsibility for hosting and management, effectively outsourcing those duties to the vendor. As a result, there are no ongoing maintenance costs – whether financial or in staff time.
What Are the Benefits of the Private Cloud?
Safeguard Sensitive Data
One of the key reasons that an organisation might choose the private cloud is data ownership. This might be for legislative reasons, or simply company preference. Many industries must also host sensitive data on private resources with stringent access controls.
More Control Over Security Measures
Private cloud storage, therefore, also gives the owner more control over security. They can choose to safeguard data or certain workloads on private cloud resources, while using the public cloud for less security-conscious tasks.
No Shared Resources
With the private cloud, there is also peace of mind over resource usage. The owner controls who can access the resource and which controls are in place, with no shared use of any resources - unlike the multi-tenant structure of the public cloud.
Customisation Options
Private cloud users can customise their resources to match requirements. For example, they might optimise their private cloud architecture for low latency, more storage space, or enhanced security. This can be done for entire systems, or specific workloads.
What Are the Benefits of Hybrid Cloud Computing?
The Best of Both Worlds
With a hybrid cloud strategy, you can pick and choose features from both types of cloud technologies. You can have a private network portion to keep sensitive data, for example, combining it with the ability to scale up other workloads hosted on public cloud computing services.
Employing a hybrid cloud strategy also gives you flexibility. You can choose to optimise specific workloads for high performance – especially those on private cloud resources hosted in off-site high-speed data centres. Combined with public cloud resources, this makes it straightforward to tailor a system to meet workload performance requirements.
Business Streamlining and Automation
Using orchestration tools and proper data management, hybrid cloud deployments can help you automate and streamline repetitive tasks, freeing staff to concentrate on more important tasks.
Public Vs Private Vs Hybrid Cloud: Which Should I Use?
So, which cloud solution should you use? There are many potential use cases of the public, private and hybrid cloud – each depending on your unique business needs. Below are some situations where each cloud type might stand out.
Public cloud use cases include companies that:
- Want to save money on purchasing IT infrastructure outright.
- Do not have a dedicated IT professional or team to manage their system.
- Want to deploy straightforward workloads quickly.
- React to fluctuating demands in traffic.
- Want to avoid vendor lock-in and use different suppliers for different tasks.
- Do not store highly sensitive data.
Private cloud services might suit organisations that:
- Require strict management and control over sensitive data storage.
- Want sole use of their computing resources, avoiding multi-tenancy situations.
- Have predictable levels of traffic and demand.
- Require high levels of customisation within their environment.
The hybrid cloud, meanwhile, can suit businesses that:
- Want the best of both worlds – combining the security and ownership of private cloud with the flexibility of the public cloud infrastructure.
- Want to use the public cloud for some workloads without sacrificing control over others.
- Have the expertise to manage the connections between resources.
- Require a tailored solution that meets complex or unique business needs.
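As a rough illustration only, the use cases above can be condensed into a toy decision helper. A real selection weighs cost, compliance and skills in far more detail than three boolean flags; the logic below is an assumption made for the sketch, not guidance from this article:

```python
def suggest_model(sensitive_data: bool, elastic_demand: bool,
                  in_house_ops: bool) -> str:
    """Toy mapping from the criteria above to a deployment model."""
    if sensitive_data and elastic_demand:
        # Keep regulated data private, burst other workloads to public.
        return "hybrid"
    if sensitive_data or in_house_ops:
        return "private"
    return "public"

print(suggest_model(sensitive_data=True, elastic_demand=True, in_house_ops=True))    # -> hybrid
print(suggest_model(sensitive_data=False, elastic_demand=True, in_house_ops=False))  # -> public
```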
How Interxion Can Help Deliver Your Cloud Solution
Different types of cloud offer different benefits. The public cloud can give you flexibility, scalability and easy deployment for a straightforward monthly cost. The private cloud gives you control and higher levels of security. When combined, the hybrid cloud lets you enjoy the perks of both.
At Interxion, our data centres offer the ideal infrastructure to host any type of colocation cloud service. Whether you’ve chosen the public or private cloud, or both, we can help bring your cloud services to life. Contact us today for a quote or learn more with our whitepapers. | <urn:uuid:4361129e-1dd2-4051-a38f-20e0dc347d41> | CC-MAIN-2022-40 | https://www.interxion.com/uk/blogs/public-vs-private-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00799.warc.gz | en | 0.912528 | 1,694 | 2.9375 | 3 |
While this blog series was originally intended for business analysts, it applies to anyone who is involved in eliciting, modeling, analyzing, or consuming requirements for Business Intelligence (BI) projects. It does not matter what job title you have – if you are involved in a BI project, knowledge of these techniques will be useful to you.
What is Business Rules Analysis?
The technical definition of Business Rules Analysis is:
“Business Rules analysis is used to identify, express, validate, refine, and organize the rules that shape day-to-day business behavior and guide operational business decision making.”
– BABOK® v3.0
Business Rules are directives that serve as criteria to:
- Guide behavior
- Shape judgments
- Make decisions
General Business Rules Principles
There are some general principles regarding Business Rules that should be considered when using this technique:
- Base them on standard business vocabulary
- Express them separately from how they will be enforced
- State them atomically and declaratively
- Map them to decisions the rule supports (or constrains)
- Maintain them so they can be monitored and adapted
When to Use It?
When eliciting requirements, it is not uncommon for analysts to [accidentally] discover Business Rules. While they might resemble requirements, they are distinctly different in that they apply to the whole organization and are self-imposed constraints that the business needs to operate within. If you do identify Business Rules during the course of your project, that is when you might start to consider using this technique.
Where Do You Find Business Rules?
Business Rules may be discovered during requirements elicitation, but they may also be found in other places. Business Rules may be either explicitly or tacitly found:
| Explicit | Tacit |
| --- | --- |
| Documented policies | Undocumented stakeholder “know-how” |
| Regulations | Generally-accepted business practices |
| Contracts | Norms of the corporate culture |
The explicitly found Business Rules are obviously much simpler to identify and manage. Tacitly found Business Rules will take some disentangling to identify, document, and consistently apply.
What Are the Attributes of Business Rules?
When you have identified Business Rules, there are certain attributes that it is important to take note of. They must be:

| Attribute | Description |
| --- | --- |
| Specific | They can’t be vague or broadly stated |
| Testable | They need to be able to be verified |
| Explicit | They must be completely stated |
| Clear | They are distinctly stated and well-understood |
| Accessible | They are published to a place where people can view (and manage) them |
| Single-sourced | They should exist in only one place, without duplications or conflicts in other locations |
| Practicable | They need no further interpretation |
How to Use This Technique?
Once you have identified your Business Rules and their attributes, it’s time to document those rules. If you’re lucky, your organization may already have a Business Rules “engine” that exists in a location that enables it to apply those rules across your business systems. If you’re not that lucky, you will have a bit more difficulty, but it is still worthwhile to separate the Business Rules from a project’s requirements. A sample of a set of Business Rules might look something like this:
Example from: “Business Analysis for Practitioners, a Practice Guide”, published by PMI®
If you have a lot of Business Rules, using this technique will aid you in ensuring that you are applying all the Business Rules to not just your project, but across the organization. It will also help to reduce the possibility of duplicate or conflicting Business Rules from being applied.
It’s sometimes difficult to distinguish between requirements and Business Rules, and you may inadvertently include a rule as acceptance criteria. As a result, Business Rules will be sprinkled throughout the organization and will be extremely difficult to apply and manage. By having a consolidated set of Business Rules, you will alleviate these problems.
By breaking what might seem like a complex set of rules down into atomic, individual rules that can be combined to achieve the same result, it will be much simpler to manage and change any of the individual components of a rule.
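To make the idea of atomic, declarative, testable rules concrete, here is a minimal sketch of rules kept as data in a single source and evaluated together. The rule IDs, statements and order threshold are invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str                   # single-sourced identifier
    statement: str                 # atomic, declarative wording
    test: Callable[[dict], bool]   # testable predicate

RULES = [
    BusinessRule("BR-01", "An order total must not be negative",
                 lambda o: o["total"] >= 0),
    BusinessRule("BR-02", "Orders over 10,000 require manager approval",
                 lambda o: o["total"] <= 10_000 or o.get("approved", False)),
]

def violations(order: dict) -> list[str]:
    """Evaluate every atomic rule; return the ids of the rules that fail."""
    return [r.rule_id for r in RULES if not r.test(order)]

print(violations({"total": 15_000}))                    # -> ['BR-02']
print(violations({"total": 15_000, "approved": True}))  # -> []
```

Because each rule is atomic and lives in one list, changing a threshold or retiring a rule is a one-line edit that applies everywhere the rules are evaluated.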
Business Rules are often tricky to find and nail down. It’s also difficult to make sure that they have all of the attributes described above. If you end up embedding them in requirements and acceptance criteria, rather than in a single source, your rules will end up getting lost.
If you have a large number of Business Rules, but don’t have a way to systematically apply them across systems, it will still be difficult to apply those rules – even if you have a single source.
Many Business Rules will likely end up needing a management system in order to make the best use of those rules, and to apply them across the organization. This can be a costly proposition.
Business Rules are important constraints that an organization imposes on itself, and separating them from project requirements can be difficult. But if you do the analysis and split them into their own separate list, you will be doing the right thing for your organization and alleviating the problems that occur when this technique is not used. This is especially true for Business Intelligence projects, where you are likely to uncover large quantities of business rules masquerading as requirements. | <urn:uuid:795c1be7-af74-4645-9c1c-a1bdb6d8bc95> | CC-MAIN-2022-40 | https://corebts.com/blog/how-to-use-business-rules-analysis-bi-requirements/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00799.warc.gz | en | 0.933772 | 1,124 | 2.578125 | 3 |
Session Initiation Protocol (SIP) is a critical building block for service delivery in the Next Generation Networks. It has a prominent role in IMS architecture as the new framework for service delivery and management over fixed and mobile networks.
The adoption of SIP by the 3rd Generation Partnership Project (3GPP) as a call control protocol was a major milestone for the standardization world as the IMS architecture became the first SIP-based standard commercial system, setting the guideline for following developments. Driven by the demand for multimedia communication services, the industry had to search for an efficient way to deploy those services in the existing and new networks. Wireless telecommunication operators and vendors had to adapt quickly, enabling a rapid introduction of IP and SIP services into their network through the IMS. The standardization effort has been driven by the 3GPP in close work with the Internet Engineering Task Force (IETF) to ensure the harmonization of standards.
SIP standards set the baseline for the creation of the CSCF and other functional elements in the IMS network, ensuring the interoperability of various network elements (UE, CSCF, etc.), defining basic communication rules between the elements, and setting the guidelines for interconnecting with other network architectures. The body of standards covers the data plane and user plane, as well as session control, in the SIP network.
Why is Session Initiation Protocol (SIP) so important?
Ensuring standards compliance helps to keep the number of implementation- and vendor-specific workarounds in the core network to a bare minimum, resulting in a more stable and better-performing network. Below is a list of some of the SIP standards which are essential for the stable operation of all network elements.
Standards define service functions that enable the creation of new services, content and media distribution, and various formats of multimedia services. They are wide-ranging, covering everything from network interoperability functions and legacy functions such as SMS, to particular aspects of IMS operation such as QoS reservation or voice call continuity (where 3GPP extensions rely on particular SIP messages and headers for the call handover). Standards also define aspects of IMS operation that form the end-user experience and fulfill mission-critical roles, such as enabling emergency calling.
SIP standards as common enablers of new services on IMS networks
The availability of service enablers ensures service compliance and lowers the cost of implementing the more complex end-user services in terms of both capital expenses (CAPEX) and operational expenses (OPEX). To ensure simple and rapid integration of services, the network must support open standard-based interfaces and apply service-oriented architecture principles. Even though many IMS equipment vendors are capable of creating new services, that is just the first of many steps required to get a given service operational and ensure the best customer experience. Successful integration requires taking service applications from diverse sources and integrating them in an innovative way. SIP provides session establishment capabilities for applications, using the SDP offer/answer mechanism (RFC 3264).
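As a concrete illustration of the offer side of the RFC 3264 offer/answer exchange, the sketch below builds a minimal SDP audio offer of the kind a SIP INVITE carries. The address, port and payload types are example values only:

```python
def sdp_offer(session_id: int, ip: str, audio_port: int,
              codecs: dict[int, str]) -> str:
    """Build a minimal SDP audio offer (RFC 4566 syntax), of the kind a
    SIP INVITE body carries in the RFC 3264 offer/answer exchange."""
    fmt = " ".join(str(pt) for pt in codecs)
    lines = [
        "v=0",
        f"o=- {session_id} {session_id} IN IP4 {ip}",
        "s=-",
        f"c=IN IP4 {ip}",
        "t=0 0",
        f"m=audio {audio_port} RTP/AVP {fmt}",
    ]
    lines += [f"a=rtpmap:{pt} {name}" for pt, name in codecs.items()]
    return "\r\n".join(lines) + "\r\n"

offer = sdp_offer(1234, "192.0.2.10", 49170, {0: "PCMU/8000", 8: "PCMA/8000"})
print(offer)
```

The answerer replies with a matching SDP body selecting the codecs and ports it accepts, completing the negotiation before media flows.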
ng-voice’s fully containerized and cloud-native IMS core is 100% standard-compliant and SIP-based, having been successfully integrated both with legacy structure and innovative open-source applications. To know more about our solution, contact us at email@example.com .
Share this article via:
Software engineer and team lead
After years of experience in VoIP in Ukraine and Austria, Andrii joined ng-voice as a Software Engineer and is using his vast knowledge of telco-specific protocols and Kamailio programming to further develop our fully cloud-native and standard-based IMS core. | <urn:uuid:1bf985e5-4107-4fc0-a502-99475e9767a4> | CC-MAIN-2022-40 | https://www.ng-voice.com/session-initiation-protocol-sip-in-the-ims/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00799.warc.gz | en | 0.925578 | 772 | 2.59375 | 3 |
RSRP 3GPP Definition
RSRP or Reference Signal Received Power, is defined as the linear average over the power contributions (in [W]) of the resource elements (REs) that carry cell-specific reference signals within the considered measurement frequency bandwidth. For RSRP determination, the cell-specific reference signals RE0 according to TS 36.211 shall be used. If the UE can reliably detect that RE1 is available, it may use RE1 in addition to RE0 to determine RSRP.
RSRP = (1/K) · Σ Prs,k (averaged over the K measured REs), where Prs,k is the estimated received power (in watts) of the kth Reference Signal resource element (RE) transmitted from the first BTS antenna port.
In the figure beside, these REs are denoted with RE0. To improve the accuracy of the RSRP estimate, the UE may optionally also measure the RS transmitted from the second antenna port (RE1), if present.
In case of four BTS transmit antennas, Reference Signals of the third and fourth BTS antenna ports are not used in the RSRP measurement.
Since all LTE UEs have at least two receive antennas, RSRP must be equal to or higher than the stronger of the two receive antennas’ individual measured RSRP.
The maximum number of PRBs over which RSRP should be measured is sent to the UE over RRC signalling, and is denoted in this paper with Nprb.
Applicable for: RRC_IDLE intra-frequency, RRC_IDLE inter-frequency, RRC_CONNECTED intra-frequency, RRC_CONNECTED inter-frequency
- Note 1: The number of resource elements within the considered measurement frequency bandwidth and within the measurement period that are used by the UE to determine RSRP is left up to the UE implementation with the limitation that corresponding measurement accuracy requirements have to be fulfilled
- Note 2: The power per resource element is determined from the energy received during the useful part of the symbol, excluding the CP
In other words, RSRP is the average power of Resource Elements (RE) that carry cell specific Reference Signals (RS) over the entire bandwidth, so RSRP is only measured in the symbols carrying RS.
In other words:
- RSRP is the average received power of a single RS resource element
- UE measures the power of multiple resource elements used to transfer the reference signal but then takes an average of them rather than summing them
- the reporting range of RSRP is defined from -140 dBm to -44 dBm with 1 dB resolution. The mapping of the measured quantity is defined in the table below: RSRP mapping, 3GPP TS 36.133 V8.9.0 (2010-03)
- RSRP does a better job of measuring signal power from a specific sector while potentially excluding noise and interference from other sectors
- RSRP levels for usable signal typically range from about -75 dBm close in to an LTE cell site to -120 dBm at the edge of LTE coverage
- the lowest RSRP value is reported as ’RSRP_00’
- under normal operating conditions, absolute measurement accuracy is allowed to have up to 6 dB error for intra-frequency RSRP measurement
- measured RSRP difference between serving and a neighbour cell on the same carrier frequency should have at most 3 dB error.
- for inter-frequency RSRP measurement, the absolute and relative error under normal conditions should both be less than 6 dB. It can be seen that inter-frequency RSRP measurement is considerably less accurate for “power budget” type handover triggering than intra-frequency RSRP measurement.
The above cited measurement accuracy requirements hold for SNR = −6dB and without any higher layer measurement filtering. Higher SNR and application of L3 filtering results in improved measurement accuracy. Therefore, in typical network conditions, measurement error can be expected to be smaller. | <urn:uuid:fa402383-7186-49a4-b64a-d163019df4af> | CC-MAIN-2022-40 | https://arimas.com/2016/10/20/rsrp-mapping/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00199.warc.gz | en | 0.72015 | 2,294 | 2.703125 | 3 |
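The linear-average definition and the TS 36.133 reporting-range mapping described above can be expressed in a few lines. The clamping to RSRP_00 and RSRP_97 below follows the stated 1 dB grid; exact boundary conventions should be checked against the specification table itself:

```python
import math

def rsrp_dbm(re_powers_watts: list[float]) -> float:
    """RSRP is the LINEAR average of the per-RE received powers (in W),
    here converted to dBm: 10*log10(P_avg / 1 mW)."""
    p_avg = sum(re_powers_watts) / len(re_powers_watts)
    return 10 * math.log10(p_avg / 1e-3)

def rsrp_report(rsrp: float) -> str:
    """Map a measured RSRP (dBm) onto the TS 36.133 reporting range
    RSRP_00..RSRP_97 (-140 dBm to -44 dBm in 1 dB steps)."""
    index = min(97, max(0, math.floor(rsrp) + 141))
    return f"RSRP_{index:02d}"

# 200 reference-signal REs, each received at 1e-13 W (= -100 dBm):
print(round(rsrp_dbm([1e-13] * 200), 6))  # -> -100.0
print(rsrp_report(-100.0))                # -> RSRP_41
```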
A shift towards modern computing infrastructures and architectures, particularly those deployed on the cloud, and which use microservices, has shaped and continues to shape the modern understanding of cybersecurity. In this context, cyber threats/risks and mitigation thereof mean adopting layered approaches to ensuring the safety of increasingly complex and multifaceted structures.
But what are the key segments within cybersecurity?
Cloud security refers to policies, controls, and solutions deployed to ensure safety of the entirety of, and mitigate weaknesses in, distributed virtual infrastructure, applications, and data. This includes SaaS products, such as Microsoft 365 and Google Drive, PaaS (Platform-as-a-Service) products, such as Windows Azure, and IaaS (Infrastructure-as-a-Service) products such as AWS (Amazon Web Services).
The rise of cloud computing and shared responsibility models between users and cloud providers has led cybersecurity vendors to develop cloud-orientated or cloud-first services and products. Migration of critical enterprise applications and data to the cloud, coupled with remote/hybrid working, brought cloud security to the fore, with solutions not only aimed at prevention and mitigation of threats and cyberattacks but also remediation and recovery of system components and data.
DDoS (Distributed Denial of Service) Security
DDoS attacks are a type of persistent cyberattack on applications, servers, services, or networks, intended to distract or overwhelm the target by sending rapid and continuous online requests via multiple infected devices (bots) and/or networks (botnets), flooding the bandwidth with fake traffic. In this way, attackers deny legitimate users access to services.
These attacks often serve as front or first-stage attacks to detect and exploit the weaknesses in servers, with attackers aiming to obtain sensitive customer data and/or access critical infrastructures. DDoS security, thereby, refers to dynamic solutions and measures deployed to detect and mitigate these attacks, protect servers and networks, and minimise business downtime.
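A crude volumetric check conveys the basic detection idea behind DDoS security. Real mitigation is far more dynamic, since botnets rotate source addresses and blend into legitimate traffic; the threshold and addresses below are purely illustrative:

```python
from collections import Counter

def flag_flooders(request_sources: list[str], limit: int = 100) -> set[str]:
    """Naive volumetric check: flag source addresses whose request count
    within one observation window exceeds a fixed threshold."""
    counts = Counter(request_sources)
    return {src for src, n in counts.items() if n > limit}

# One observation window: one source floods, one behaves normally.
window = ["203.0.113.5"] * 150 + ["198.51.100.7"] * 3
print(flag_flooders(window))  # -> {'203.0.113.5'}
```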
With the advent in remote working, email has arguably become one of the main vectors for cyberattacks, with phishing, BEC (Business Email Compromise) and other forms of attacks, such as those including malware and ransomware, which can lead to large-scale data breaches.
Email security, therefore, refers to various solutions and broader policies to protect email accounts and content against compromises, unauthorised access, data loss or theft. Email security solutions are increasingly incorporated into cloud security solutions, as email is an essential asset to be secured.
Endpoint security is the practice of securing endpoint devices such as laptops, desktops, and mobile devices, from cyber threats and attacks.
Many security solutions have evolved to secure endpoints remotely accessing networks and/or servers, integrating advanced threat intelligence, investigation, and response mechanisms within security platforms collectively known as XDR (Extended Detection and Response), as well as incorporating identity and access management elements for ensuring secure access.
Identity and Access Management
Identity and access management, or IAM in short, refers to a set of rules, policies, and associated technologies deployed to ensure the access of appropriate users to critical enterprise information digitally. It involves assignment of user identities and rules of access linked to those identities, as well as storage of identity and profile data, data governance rules and automated monitoring of data assets.
For many organisations, IAM constitutes the baseline of establishing a secure IT architecture, applicable on both cloud and on-premises systems. Arguably, IAM is also the most important component for organisations to remain compliant to regulations and avoid data breaches.
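The identity-to-role-to-permission linkage at the heart of IAM can be sketched in miniature as follows; the users, roles and permission strings are invented for illustration:

```python
# Permissions attach to roles; roles attach to identities.
ROLES = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}
USERS = {"alice": "admin", "bob": "analyst"}

def can(user: str, permission: str) -> bool:
    """Access decision: unknown identities get no permissions at all."""
    return permission in ROLES.get(USERS.get(user, ""), set())

print(can("alice", "manage:users"))  # -> True
print(can("bob", "write:reports"))   # -> False
```

Keeping the identity and rule data in one governed store, rather than scattered per application, is what makes monitoring and compliance auditing tractable.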
The IoT is a complex system of not only interconnected devices but also networks, middleware, all endpoints including sensors and appliances, and infrastructure components, as well as data transmitted and stored therein. As such, IoT security refers to ensuring the safety and integrity of IoT devices and networks.
Threat Intelligence
Threat intelligence is threat information that has been analysed and interpreted to provide the context necessary for decision-making. This information-based definition provides a foundation: current threat intelligence solutions pair the specific threat information processed with action-oriented advice.
Unified Threat Management
Unified threat management is a single security solution that combines multiple security functions or services into one device to simplify protection, typically encompassing antivirus, web and content filtering, email filtering, and anti-spam. In some enterprise contexts, unified threat management is also referred to as an NGFW (Next-generation Firewall).
Our latest research found:
- The total value of enterprise cybersecurity spend will exceed $226 billion in 2027, up from $179 billion in 2022, representing total growth of 26% over the next five years.
- Juniper Research’s Competitor Leaderboard for the cybersecurity market identifies the five leading market vendors.
- Cybersecurity vendors must form strategic partnerships with smaller, specialised cybersecurity vendors to acquire new data sources and point solutions, and offer services, such as unified threat management, in order to maintain relevance in this highly competitive market. | <urn:uuid:117d8fba-d171-489a-bf81-ccece0f767a4> | CC-MAIN-2022-40 | https://www.juniperresearch.com/blog/june-2022/cybersecurity-the-new-threat-landscape | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00199.warc.gz | en | 0.930776 | 1,091 | 3.109375 | 3 |
Apr. 2, 2021
The massive growth of IoT devices and their new applications is driving edge AI, with revenues expected to climb from $2.8 billion in 2019 to $9 billion by 2024. The rise of edge computing has significantly transformed how organizations collect data, process it, and gain insights for more data-driven business decisions. But what is an edge device?
Edge computing is a distributed topology where data storage and processing are done close to the edge devices where it’s being collected, rather than relying on a central location that can be thousands of miles away.
Edge devices are hardware components that control data flow at the boundary between two networks where they serve as network entry (or exit) points.
Enterprises and service providers use edge devices for transmitting, routing, processing, monitoring, filtering, translating, and storing data passing between networks.
Examples of edge devices include routers, routing switches, integrated access devices, multiplexers, and a variety of metropolitan area network and wide area network access devices.
Analyzing data, especially real-time data, on edge devices eliminates latency issues that can affect performance. The less time it takes to analyze data, the more value comes from it. For example, when it comes to autonomous vehicles, time is of the essence, and most of the data a vehicle gathers and processes is useless after a couple of seconds.
The distributed architecture that comes with edge computing enables organizations to distribute security risks as well, which diminishes the impact of attacks on the organization as a whole. Edge computing also helps organizations meet local compliance and privacy regulations.
Edge computing helps companies to reduce costs associated with transporting, managing, and securing data. By keeping data within your edge locations, you optimize bandwidth usage to connect all of your locations.
Business operations continuity may require local processing of data to avoid possible network outages. Storing and processing data in edge devices improves reliability, and temporary disruptions in network connectivity won’t impact the devices’ operations.
An edge device has a very simple working principle: it serves as a network entry or exit point, connecting two different networks by translating one protocol into another. It can also create a secure connection with the cloud.
An edge device is plug-and-play; its setup is quick and straightforward. It is configured via local access and has a port to connect it to the internet and the cloud.
Many artificial intelligence use cases are better served by edge devices, which offer maximum availability, data security, reduced latency, and optimized costs.
Training machine learning models in cloud-based environments can be computationally expensive, while inference needs relatively little computing power. When AI models run in the cloud, data must be transferred from end devices to predict outputs. This requires a stable connection, and since the volume of data is large, the transfer can be slow or, in some cases, impossible.
Edge AI moves algorithms closer to the data source, where they are processed locally without requiring a connection, offering real-time analytics in just a few milliseconds.
The volume of data is increasing significantly, and so is the need to process it autonomously. Enabling deep learning algorithms to perform training locally at the edge is a must-have feature for many applications, such as autonomous vehicles.
IoT edge devices are now able to run machine learning models locally using TensorFlow, PyTorch, or other machine learning tools, which allows these capabilities to be handled directly on the device. Keeping the data local reduces the latency of sending it to the cloud and enables more immediate insights generated by devices.
Edge AI is initiating massive changes, raising demand for smart IoT devices and driving the emergence of more advanced technologies. As organizations increasingly adopt edge AI to improve their operations and enable real-time performance, the market will grow significantly to keep pace with the computing requirements of these smart devices.
Chooch Edge AI helps organizations take their video analytics and IoT applications to the next level. With over 90% accuracy delivered in less than 0.2 seconds, Chooch provides strong results for solutions in AIoT, geospatial, security, media, healthcare, hospitality, banking, retail, and more.
Chooch AI creates complete solutions from AI training in the cloud through to deployment. Edge deployments are managed from the cloud, with models that include object recognition, facial authentication, action logging, complex counting, and more.
Currently, Chooch Edge AI is able to deploy up to 8 models and 8,000 classes for robust Visual AI on a single edge device. Chooch AI inference engines are very fast, generating responses in under 0.5 seconds and processing ten simultaneous calls per second.
Check out the Edge Device Setup Guide. | <urn:uuid:aaceffff-5990-44f5-93d2-d27b5b3b22a3> | CC-MAIN-2022-40 | https://chooch.ai/computer-vision/what-is-an-edge-device/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00199.warc.gz | en | 0.923262 | 994 | 3.203125 | 3 |
An insidious army of darkness is rampaging across the Internet and taking control of unsuspecting business and personal computers.
They’re ’bots — zombie foot soldiers that march relentlessly to the order of “bot-herding” generals. These ’bot masters use the Internet to build massive platoons of bot-networks (botnets), operating from a central command station to direct this malicious software into hundreds, thousands or even millions of notebooks, PCs and servers.
Have you been wondering how those incessant junk e-mails about penny stocks and “male enlargement” wonder drugs keep pouring into your home and business accounts from all sorts of different and anonymous sources? Blame ’bots and botnets. They provide the covert means for mass distribution of junk e-mail and all sorts of other unwanted spam.
But that’s the least of the damage they do. At their evil worst, botnets can be used to extract personal and business information from computing systems — things like user names and passwords, e-mail addresses and log-in information, or even dial-up network settings.
Cyber criminals use botnets to extort and destroy. Personal information can be remotely encrypted and unlocked by a bot-herder — for a price, of course. A business may suffer a denial of service attack or an entire computing and communication system might be brought down and held to ransom by botnet-controlling evil-doers. A ’bot’s life begins as a software module that gets silently planted into an application on your computer system. Getting inside a computer is easy enough and happens through any number of innocent activities, such as instant messaging chats, opening e-mails or simply through surfing activities. Allysa Myers, a virus research engineer for security software company McAfee Inc., says you’re not likely to know when your system has been infected.
“These ’bots try to stay quiet and inconspicuous, if they can,” she says, explaining that many of today’s ’bots give no warnings or obvious signals as they install themselves on a system. Infestation can happen as a “drive-by download” simply by visiting a Web site, Ms. Myers adds.
Once in place, other ’bots gather and a botnet quickly spawns, instantly hatching a cancerous menace. Botnets are “modular,” meaning they tighten their grip of control by calling in other botnets that build upon one another with new functions and continually seek out and exploit vulnerabilities in applications or operating systems.
Botnets get entrenched by downloading more modules that further strengthen and conceal the infestation. Gradually the bot-herder’s ability to gain greater function and ultimately complete system control is achieved. That’s when the real dirty work begins.
Now omnipotent, these software zombies relentlessly hunt for even more system weaknesses. The deeper they weave their way into the fabric of your computer, the tougher they are to detect and destroy.
A recently published book by Jim Binkley and Craig A. Schiller, entitled Botnets, describes them as “the killer Web app,” implying that the chaos they cause is destroying the world’s most important communication landscape.
Binkley and Schiller suggest botnets are an out-of-control threat and that the counter-offensive community of security professionals is being tasked beyond their capabilities to defend against the onslaught and becoming demoralized.
To put the threat into greater context, the book cites research from Symantec Corp. from 2006 that says the security company observed “more than 4.5 million distinct, active ’bot-networked computers.”
Oliver Friedrichs, a director of emerging technologies for Symantec’s Security Response group, says his company observes 57,000 ’bot-infected computers each day.
He cites Internet founding father Vint Cerf’s estimate that 25 per cent of all computers connected to the Internet are infected by ’bots.
“I think that number is pretty high…but it shows the numbers are really across the board,” says Mr. Friedrichs. “There’s no way of knowing how many systems are infected at any given time.”
Should a business be concerned about ‘bots and botnets?
How great a risk do they pose, particularly to a business that relies on the Internet to drive its processes?
Mr. Friedrichs says systems infected with malicious code are unpredictable — and ultimately unreliable. So even though it may appear ’bots are non-malicious, the potential for them to cause damage is definitely there, he says.
It’s not so much the damage done to your systems, but rather the damage your systems may be used to do on others.
“You must ask: do you want your business to be responsible for being the source of a generated attack on other businesses?” he says. “Are you doing what you need to, to protect other Internet systems? Do you want to be seen as a company that takes precautions to protect your systems?”
The experts agree that ’bots and botnets are smarter than ever and tougher to detect. Thanks to better security technology and more secure operating systems, the spread of ’bots as a result of drive-by downloads has been greatly diminished. Most of today’s ’bot infestations happen as a result of people doing things they shouldn’t.
It’s best to practice safe computing through the use of good anti-virus and firewall products and diligently installing the latest software updates and patches when these are available.
“Making sure you have updated OS and application software security patches — that significantly minimizes the risk,” Ms. Myers says.
You might also diligently apply the best computing practices, many of which are detailed in the Botnets book.
These include: deleting spam and never responding to it, never executing unknown email attachments, using what the experts do to surf the Internet — browsers other than the frequently infected Internet Explorer — and being wary of downloading or executing any application from the Web.
And make sure your system’s auto-updates feature is active, to ensure you stay properly “patched.” | <urn:uuid:d0a7c2cf-89f7-4376-b09d-9a48013a825c> | CC-MAIN-2022-40 | https://www.itworldcanada.com/article/battling-the-legions-of-bots-and-botnets/8094 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00199.warc.gz | en | 0.938716 | 1,321 | 2.71875 | 3 |
Security, Data Protection for Medical Devices: Healthcare Orgs Can’t Afford to Wait
Networked medical devices have great potential to improve patient outcomes. After all, devices that can signal healthcare professionals instantly when a risk or crisis occurs can save valuable time on the way to diagnosis and treatment.
Yet, whenever a medical device uses software that generates and/or displays sensitive data, the risk of cyberattack exists. Many medical devices use the same types of technologies used in other IT environments, which means they can be just as vulnerable to hacking as computers and mobile devices. This is more than conjecture. Researchers have proven it is possible to hack medical devices such as pacemakers and insulin pumps.
A cyberattack to any organization means downtime and the possibility of additional costs to restore damaged systems. But, healthcare organizations have added concerns of breaches that involve protected healthcare information (PHI) and system outages that can put lives at risk. Medical devices containing vulnerabilities may provide a hacker with a way into the healthcare organization’s network as a whole, enabling them to steal PHI or other data — or hold it for ransom.
The risks are real, but traditionally the government has categorized, and subsequently regulated, medical devices differently than other IT devices. In an important step toward extending IT security to include medical devices, the U.S. Food and Drug Administration (FDA) issued “Postmarket Management of Cybersecurity in Medical Devices” in December 2016, a set of nonbinding guidelines related to securing medical devices.
The FDA recommends that medical device manufacturers take the following security measures:
- Include a way to monitor and detect cybersecurity vulnerabilities in medical devices
- Assess the level of risk a vulnerability poses to patient safety
- Deploy software patches and other security risk mitigation measures as early as possible, before they can be exploited.
Additionally, the FDA recommends that medical device manufacturers work with cybersecurity professionals to learn about potential vulnerabilities. Suzanne Schwartz, the FDA’s associate director for science and strategic partnerships at the Center for Devices and Radiological Health, explains that this is known as a “coordinated vulnerability disclosure policy,” in which people who find vulnerabilities in devices disclose that information to the manufacturer or vendor.
The FDA also encourages medical device manufacturers to follow the National Institute of Standards and Technology’s (NIST) principles for improving critical infrastructure cybersecurity, including developing a core policy based on four tenets of cybersecurity: identify, protect, detect, and respond.
What Healthcare Organizations Can Do Now
It’s early in the timeline for standardizing the security of medical devices, but networked devices are in use in hospitals and other healthcare facilities today — and many devices have been developed without security measures that can protect them from cyberattacks.
Healthcare facilities must put security solutions in place, such as firewalls, antivirus, intrusion detection systems/intrusion prevention systems (IDS/IPS), and identity and access management (IAM) solutions. Segmentation should be used as an extra layer of protection for vital applications and sensitive data.
Although the responsibility for securing these devices falls to device manufacturers and regulations must come from the government, the responsibility for patient welfare falls to healthcare organizations. While many healthcare organizations may be aware of industry regulations and security in general — there is much less awareness when it comes to the security vulnerabilities that exist with medical devices. This is where IT solution providers can add significant value in educating healthcare customers and helping them protect these mission-critical assets — and their business reputations. | <urn:uuid:b7af9afe-603c-4fd2-8b52-5c3874e7dd9e> | CC-MAIN-2022-40 | https://www.channele2e.com/influencers/security-data-protection-medical-devices-healthcare-orgs-cant-afford-wait/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00199.warc.gz | en | 0.943574 | 718 | 2.71875 | 3 |
mysqld is widely known as one of the main executable files related to MySQL and its functionality. mysqld is the MySQL daemon: the main functions related to the database management system are accomplished through it. In this blog post, we will tell you all about it.
The MySQL D(a)emon
mysqld, as previously noted, translates to MySQL daemon (not the demon). The daemon allows database administrators and other developers to complete all kinds of operations relevant to MySQL, including the ability to start, stop, and pause the beast.
However, starting, stopping, and pausing MySQL-related operations is not everything the MySQL daemon is used for: this program (mysqld is an executable file) has many options that can be used. To figure those out, start it with the --help option (--verbose might also help with formatting): mysqld --help [--verbose] will do. Run such a command and you will instantly see that MySQL prints a lot of useful information: it tells you which option files it reads and in what order, and what options can be specified when mysqld is invoked. Scroll down a little and you will also be able to observe the available server variables.
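The order in which those option files are read matters, because a value found in a later file overrides the same value from an earlier one. Here is a rough sketch of that precedence using Python's configparser on hypothetical option-file contents (MySQL's own option parser differs in details, so treat this purely as an illustration):

```python
# Two hypothetical option files, read in the same order mysqld would read
# a global file and then a user-specific one.
import configparser

global_cnf = "[mysqld]\nport = 3306\nslow-query-log = 0\n"
user_cnf = "[mysqld]\nslow-query-log = 1\n"  # read later, so its value wins

merged = configparser.ConfigParser()
merged.read_string(global_cnf)  # earlier file sets the baseline
merged.read_string(user_cnf)    # later file overrides matching keys

print(merged["mysqld"]["port"])            # 3306 (only set in the global file)
print(merged["mysqld"]["slow-query-log"])  # 1 (user file overrides the global one)
```

The same "last value read wins" idea is why a setting in a user-level file can quietly override the system-wide configuration.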
Understanding the basic options and variables that mysqld provides is an essential task for every developer and database administrator working with MySQL or any of its flavors: since the usage of mysqld is inevitable, knowing at least some of these options can be very helpful.
Some of the basic options that mysqld provides include:
- The ability to specify the defaults file from which MySQL reads its configuration (the --defaults-file parameter), or to name an extra file to be read after all of the default option files.
- Options related to specific storage engines (InnoDB being the main one): for example, developers can change the directory in which InnoDB stores its data by specifying the --innodb-data-home-dir parameter, or point individual data files at another location.
- The ability to set when certain operations (think opening ports, connections, etc.) would time out.
- The ability to log all changes relevant to a specific storage engine into a file of your choosing (the option takes a file name and is only relevant to the MyISAM storage engine inside of MySQL).
- The ability to display a default list of options and exit.
- The daemon also comes with operating system-specific options that are displayed at the top of the help output; on Windows, a separate set of Windows-only options is shown.
Of course, there are a lot of other options that can be specified and used, but you get the idea by now. The majority of developers and engineers working with MySQL or any of its flavors aren’t too fussed about the option list provided by the daemon, because they wouldn’t be able to remember it all anyway; rather, people just pick the option that solves their specific problem and use it. Here’s the problem, though: with so many available options, how do you know which one is the most suitable for your use case?
Choosing Suitable Options
We just said that the majority of developers working with MySQL and its flavors don’t worry too much about mysqld – it’s because they know what options they need and roll with them! Here’s what they will keep in mind when working with the daemon and scrolling through the available options:
The use case, and the factors the storage engine is used together with, will determine what options mysqld is invoked with. If our storage engine is used for testing purposes and we want to “lock down” our entire workstation for one reason or another (bear in mind it should be running InnoDB in this case), we might enable read-only mode by setting the --innodb-read-only parameter to 1. If we want to change the location of the slow query log, we could point the slow-query-log-file parameter at a different file path. Should we want to dive deeper, we can even toggle deadlock detection (the innodb-deadlock-detect parameter), change the format rows are stored in (the innodb-default-row-format parameter), and so on. The MySQL daemon will even let us change how tables are stored (e.g. whether they are stored in a file-per-table format or not) and perform a wide variety of other tasks, but as always, what gets done will generally depend on our specific use case; there are options relevant to almost every widely-used use case across the MySQL world.
We won’t bore you with the entire list – you already see that certain use cases have specific options of interest, and you can probably sense that the daemon lets you set the same options that are available in my.cnf: and you’re not wrong! The reason people set options using the daemon rather than my.cnf, though, has to do with practicality: as soon as the daemon is restarted, those options are nullified (in other words, MySQL starts over with the options available in my.cnf rather than the ones previously set on the command line). Such behavior can be incredibly useful if you need to solve a specific problem on the go.
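That split between command-line options and my.cnf can be sketched in a few lines: flags passed at startup take precedence for that run, while only what is written in the option file survives a restart. The following toy model (plain Python, with hypothetical values) illustrates the idea:

```python
import configparser

def effective_options(option_file_text, cli_overrides):
    """Merge a my.cnf-style file with command-line overrides for one run."""
    cfg = configparser.ConfigParser()
    cfg.read_string(option_file_text)
    options = dict(cfg["mysqld"])
    options.update(cli_overrides)  # command-line flags win for this run only
    return options

my_cnf = "[mysqld]\ninnodb-read-only = 0\n"

# A one-off run started with --innodb-read-only=1 on the command line:
run_with_flag = effective_options(my_cnf, {"innodb-read-only": "1"})
# A "restart" without the flag falls back to what my.cnf says:
run_after_restart = effective_options(my_cnf, {})

print(run_with_flag["innodb-read-only"])      # 1
print(run_after_restart["innodb-read-only"])  # 0
```

Nothing the command line sets is written back to the file, which is exactly why such options are handy for temporary, one-off changes.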
MySQL and Data Breaches
If you have been working with the daemon for quite some time, you will know that the performance, availability, and capacity features provided by mysqld are not the only things that can be optimized. MySQL is also a frequent target of data breaches – and MySQL developers know that very well. Thankfully, MySQL can be secured by following a couple of basic security practices:
- All developers having MySQL as their database of choice should follow basic input sanitization procedures.
- Developers should familiarize themselves with the “defense in depth” principle: the more security layers protect their web applications, the harder it gets for a hacker to penetrate them.
- Those who want to take security up a notch should consider information security services: web application firewalls protect web applications from attacks like SQL injection and cross-site scripting, while data breach API services help protect employees from identity theft and credential stuffing attacks. One doesn’t really work without the other – protecting your web applications does you little favor if you don’t protect your online wellbeing at the same time.
- Developers familiar with security measures should also familiarize themselves with the OWASP Top 10 list – the OWASP Top 10 list outlines all of the most popular flaws targeting web applications, and you can bet the attackers are well versed in all of them. Familiarize yourself with those principles, then protect your web applications accordingly.
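On the first point above, input sanitization, the single most effective habit is to use parameterized queries, so that the driver treats user input strictly as data rather than as SQL. The sketch below uses Python's built-in sqlite3 module purely for illustration; MySQL connectors expose the same placeholder mechanism (typically %s instead of ?):

```python
# Demonstration of why parameterized queries stop SQL injection.
# sqlite3 (stdlib) stands in for a MySQL connection; the principle is the same.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "' OR '1'='1"  # a classic injection payload

# Unsafe: string concatenation lets the payload rewrite the WHERE clause.
unsafe_rows = conn.execute(
    "SELECT secret FROM users WHERE username = '" + malicious + "'"
).fetchall()

# Safe: the ? placeholder binds the payload as a plain string value.
safe_rows = conn.execute(
    "SELECT secret FROM users WHERE username = ?", (malicious,)
).fetchall()

print(unsafe_rows)  # [('hunter2',)] -- the payload leaked every row
print(safe_rows)    # [] -- no user is literally named "' OR '1'='1"
```

The concatenated query collapses to WHERE username = '' OR '1'='1', which is always true; the bound parameter never gets that chance.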
mysqld stands for the MySQL daemon, and it’s one of the most popular tools in the toolset of a modern developer or DBA. Most developers and database administrators know that to improve the performance, availability, or capacity of their database instances they should look into what the daemon can offer, alongside my.cnf on Linux or my.ini on Windows – however, performance, availability, and capacity are not the only concerns. Combine everything mentioned in this article with a properly built web application firewall and the information security services provided by BreachDirectory to protect yourself and your team from identity theft attacks both now and in the future, and you will be golden. If you’ve read this article to the end, we have something to offer you – ping us over email, and both you and the entire company you represent will receive an unlimited number of API keys to use for 6 months, at absolutely no cost. Sounds good? After you’ve protected your applications, shoot us an email and within 24 hours you can start protecting the identities of your team members. It doesn’t get better than that!
Be safe, and we will see you in the next blog. | <urn:uuid:6595687d-6fbc-4d4d-a548-f0fffcfca9cb> | CC-MAIN-2022-40 | https://breachdirectory.com/blog/the-mysql-server-mysqld/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00199.warc.gz | en | 0.913656 | 1,853 | 3.046875 | 3 |
In this week’s roundup, learn about how artificial intelligence (AI) is being used to predict future cancer and how it is used to generate high-value artwork. Explore how “liquid” networks work and what they can be used for. Consider the areas that AI might advance this year. Finally, understand the need for location in advanced analytics and how it can be used to better business.
By Daniel Ackerman, contributing writer for News.MIT.edu
Researchers have developed a type of neural network – called a “liquid” network – that learns on the job, not just during the training phase. These networks change their underlying equations to continuously adapt to new data inputs, which could aid decision-making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.
Artificial Intelligence Generated Artwork Sells for $432,500 – Is AI a Simple Tool or Creative Genius?
By Max Planck Institute for Human Development, contributing writer for SciTechDaily.com
A portrait was developed by an artist collective that fed real paintings by human painters into an AI algorithm, training it to create images autonomously. One of those images was selected and sold at auction for $432,500. AI tends to be humanized, especially in the media – yet the payment went to the collective, not the machine or the programmers. This raises the question: who gets credit for the art? Read about a study that aims to answer it.
by Bob Wiener, Daniel Hannah, Allan Ogwong, and Christopher Thissen; contributing writers for VentureBeat.com
Despite 2020 being jam-packed with compelling and important news updates, AI advances still commanded mainstream attention. Stories highlighted the new and surprising ways that we may start to see AI showing up in daily life. This article dives into what we might expect to see from AI this year, particularly in the realms of Transformers, graphic neural networks, and applications.
By Rachel Gordon, contributing writer for News.MIT.edu
In order to catch cancer earlier, we need to predict who is likely to be diagnosed in the future. However, the adoption of AI in medicine has been slow due to poor performance and neglect of racial minorities. Researchers developed a new deep learning system based on a patient’s mammogram, which showed significant promise and racial inclusivity. The “Mirai” algorithm was tailored to the unique requirements of risk modeling and shows consistent performance across datasets from the US, Europe, and Asia.
By Helen Thompson, contributing writer for Forbes.com
A new type of business analyst is in growing demand due to the need to understand and apply the power of location data. Corporations have discovered that almost every dataset can be explained using geographic insight and relationships. These up-and-coming specialists help analyze and adjust supply chains, determine optimal routes, decide where to expand operations, and more.
Did you see an interesting article in the last week? Share it with us! Send it to astuttle [at] lityx.com. | <urn:uuid:406fab0d-5dc0-4679-b62b-87e72775a5a6> | CC-MAIN-2022-40 | https://lityx.com/predicting-cancer-creating-artwork-liquid-networks-ai-advancements-location-analytics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00199.warc.gz | en | 0.952285 | 634 | 2.671875 | 3 |
November 21, 2016
Cyber risk is like your aunt making awkward comments on your Facebook page—it’s one of those unfortunate parts of the modern world that we all have to deal with. But unlike our aunts on Facebook, we can’t just block out cyber threats with one simple click. Cyber threats are constantly evolving, so companies need to be vigilant about their online security.
Cyber risk is the chance of anything going wrong with your IT systems that can cause a disruption to business or financial losses. It’s basically the probability of management needing to curse at the tech department. Cyber risk includes things like outages that grind things to a halt, intrusions that leave your systems vulnerable and data breaches where your information is stolen. Even the poor souls who manage to live their lives without Wi-Fi can’t escape cyber risk. If their bank or government is hacked, their private data could be leaked and used for all kinds of nefarious purposes.
Modern companies need to be alert to their cyber risks because so much data is kept online. Although this provides the world with great convenience and accessibility, it also means that there are huge vulnerabilities unless adequate and correct security precautions are taken. With many communication systems also running over the internet, a business can be severely impacted if the risk is not managed properly.
Managing and keeping track of a company’s networks and systems.
Where internet security and business practices converge. With strong cyber resilience, policies and procedures are clearly outlined in advance so that a business can still operate when under attack.
A way that an attacker might try to access your system or network. This is normally a malicious act aimed at stealing data or distributing malware.
The theft or public exposure of a company’s data. This could include credit card numbers, email addresses, records and more.
DevOps: The collaboration between software developers and other IT workers to produce faster and better services.
Exploit: A tool that is designed to abuse a vulnerability within a system. This is usually done with malicious intentions.
Insider threat: A threat from within the company, such as a dissatisfied or angry employee.
Outsider threat: An attack from outside of the company.
Patch management: The process for updating a company’s systems and software to cover up any vulnerabilities.
Social engineering: An attacker may attempt to trick company employees into divulging confidential information, often on the phone.
Zero-day vulnerability: A previously unknown vulnerability in a system or network. Because the developers are unaware of the flaw, zero-day vulnerabilities are a serious threat.
Advanced persistent threat (APT): This is an attack in which an unauthorized party gains entry to the network but the breach isn’t discovered for a long period of time. APTs are generally used to steal information, not to cause immediate damage to the network. These attacks are a big concern to organizations with extremely valuable information, such as financial institutions and governments.
Denial-of-Service (DoS): These are malicious attempts to take down a company’s network for a period of time. Distributed Denial-of-Service (DDoS) attacks are similar, but they use multiple computers in coordination. DoS attacks can be costly to a brand’s reputation, but they don’t normally involve any theft.
Drive-by downloads: These are unintended downloads that leave malware on a user’s computer. This can happen simply by visiting a website.
Malware is short for malicious software, programs that are designed to disrupt or harm a user’s computer. Malware is commonly distributed through software downloads or email attachments. Trojans, bots, viruses and worms are all different kinds of malware.
Malicious advertising uses online ads to spread malware. Malware is downloaded to the user’s system when infected ads are clicked on.
Man-in-the-middle (MitM) attacks involve someone intercepting data passing between two parties. These attacks can be used for eavesdropping or for intercepting passwords between users and their banks.
This type of fraud involves the attacker tricking users into giving up valuable information. It’s commonly done through phishing emails, where someone will pose as the user’s bank or other service and attempt to get them to give up their personal details.
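Many phishing lures can be caught with simple URL heuristics. The sketch below is purely illustrative; the brand list, TLD watchlist and scoring weights are invented for this example, not taken from any real filter:

```python
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "top", "xyz"}             # assumption: example watchlist
TRUSTED_BRANDS = {"paypal", "microsoft", "google"}  # assumption: example brand names

def phishing_score(url: str) -> int:
    """Return a rough suspicion score for a URL found in an email."""
    score = 0
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    # Raw IP addresses instead of domain names are a classic phishing tell.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2
    # 'paypal.example.com'-style lookalikes: a brand name in a subdomain label.
    if any(brand in label for label in labels[:-2] for brand in TRUSTED_BRANDS):
        score += 2
    # An '@' before the host makes browsers ignore everything to its left.
    if "@" in url.split("//", 1)[-1].split("/", 1)[0]:
        score += 2
    if labels and labels[-1] in SUSPICIOUS_TLDS:
        score += 1
    return score

print(phishing_score("http://192.168.0.9/login"))        # raw IP host -> 2
print(phishing_score("https://paypal.secure-pay.xyz/"))  # lookalike + odd TLD -> 3
```

A real email filter combines many more signals (sender reputation, link/text mismatch, attachment analysis), but the scoring idea is the same.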
Scareware: These programs present themselves as virus removal tools that ask a user for money in exchange for removing a virus. They often introduce malware to the system instead.
Hackers can be anyone, from that greasy computer nerd in their mother’s basement to the modern-day James Bond, a suave government spy uncovering state secrets from the safety of their desk. They can operate by themselves or as part of a network. There are three main types that businesses need to worry about, depending on their industry.
The first type of hacker is the cyber criminal. Cyber criminals care only about making money, whether it comes from individuals, small businesses or huge corporations. One of their main objectives is to steal private information or intellectual property that they can sell or use fraudulently.
The second type is hacktivists: politically motivated hackers, such as the group Anonymous. Rather than seeking money, they launch attacks aimed at causing as much damage as possible to organizations that go against their political ideology.
State-sponsored hackers are one of the most fearsome security threats a company can face. Warfare has made its way onto the internet, and with the backing of a country’s budget, these hackers have the resources to do serious damage. They are known for stealing valuable intellectual property and seeking out state secrets.
Unless you run the Pentagon or a controversial, high-profile company, it’s unlikely that your business will be specifically targeted by any of these types of hackers. The majority of attacks that the average firm will face are purely opportunistic. They are usually performed by bored people who are just looking for something easy to exploit. If it looks like it will be a lot of work to hack into your company, they will simply move on to an easier victim.
Picture hackers as burglars scouting a street for somewhere to rob. If they see a house with a 10-foot wall and rabid guard dogs, they will probably keep walking down the street until they find someone with a Shih Tzu and an unlocked door. In the same way, if you want to protect your business, you don’t need to make it impenetrable, just more difficult than other targets.
There is no single thing that you can do to protect your business. Adequate security is multifaceted and requires you to:
To protect your company from cyber threats, you need to understand the risks it faces. The most common risks that firms will have to deal with include malware, attacks from outside the company and simple user errors. Other key points of cyber risk include the misuse of operating systems, insider threats, service provider failures, and the theft of physical equipment.
Only when you are aware of the risks your business faces can you begin to take protective measures. Not sure about your risk factor or security readiness? Try out our assessment tool; if anything, it will serve as a great starting point for determining your current security posture.
Protecting your company from threats requires a solid and actionable plan. Once you have identified the key risks that your business faces, it is important to develop the right strategies, processes, and management systems to deal with them. And if your workforce is remote or uses its own devices (increasingly common these days), make sure to take that into account when crafting your policies. Good security policies bring control back to your organization and allow you to be proactive about keeping your organization safe. Not sure what makes a good policy? We’re here to work with you throughout the process.
Preventing attacks is important, but it is also necessary to have a plan on how to recover in the event that something does happen. A good recovery plan will reduce your downtime and minimize any damages that occur during an attack. This can be the difference between a minor breach and a devastating event for your company. Don’t have a recovery plan? We can help with that too.
Unfortunately, people aren’t as perfect as some of us like to think. As humans, we are prone to making mistakes, whether they are simple errors or catastrophic blunders. This is readily apparent in cyber security, where small mistakes can lead to vulnerabilities and attacks that have severe impacts on business. The good thing is that, with the right training, these problems can be significantly reduced.
Cyber risks are constantly evolving. As soon as new security measures are put in place, attackers are already devising ways around them. Keeping on top of these risks is a full-time job, and probably not one that is core to your business. Our team is here and ready to help, so you can focus on your core business while we handle the work of keeping it running smoothly.
Greenfield: The trade name for flexible metallic conduit.
GF: Ground Fault / Ground-Fault Overcurrent.
Ground fault: (1) An abnormal connection in an electrical circuit, where the normal load path is either partially or completely bypassed through this non-circuit return path, which is the equipment-grounding system of an electrical power distribution system. The low impedance causes high-magnitude values of overcurrent to flow at the point of fault in the circuit. (2) A flaw in an electrical circuit in which some or all of the circuit current is escaping and flowing to ground or along the equipment-grounding system back to the power source ― bypassing the connected load. These faults pose an electrocution hazard due to the level of current involved. (3) An unintentional, electrically conducting connection between an ungrounded conductor of an electrical circuit and the normally non-current-carrying conductors, metallic enclosures, metallic raceways, metallic equipment, or earth.
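The "high-magnitude overcurrent" in sense (1) follows directly from Ohm's law: once the load is bypassed, only the small impedance of the fault path limits the current. A worked example with assumed, illustrative values (not taken from any code or standard):

```python
# Why a low-impedance ground fault produces huge currents (illustrative numbers).
V = 120.0        # source voltage, volts
Z_load = 12.0    # ohms -- a normal load path
Z_fault = 0.05   # ohms -- a bolted fault through the equipment-grounding path

I_normal = V / Z_load   # 10 A: what the circuit was designed to carry
I_fault = V / Z_fault   # 2400 A: enough to operate an overcurrent device quickly

print(f"normal load current: {I_normal:.0f} A")   # 10 A
print(f"fault current:       {I_fault:.0f} A")    # 2400 A
```

That several-hundredfold jump in current is what lets fuses and circuit breakers detect and clear the fault.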
Grounding: The connection or act of connecting a conductive body to the ground or to another conductive body that extends the ground connection.
A grounding electrode is a conducting object through which a direct connection to earth is established. As used with the grounding of the electrical service in a building or other structure, this electrode is the conductor or other material that physically connects the electrical system to earth ground. Several types of these electrodes are recognized by the NE Code: underground metallic water piping systems, metal rod and pipe electrodes, ground rings, and rebar or bare-copper conductor concrete-encased electrodes.
Grounded Neutral or Grounded-Return (Grounded Circuit) Conductor.
All three terms describe the grounded conductor in an electrical circuit. The NEC does not address the single-phase AC grounded-circuit or grounded-return conductor as a grounded neutral conductor because it carries the full-load current of the single-phase AC supply. By NE Code requirements, two or more phases (single-phase AC sources) must share this common grounded conductor for it to be addressed as the neutral conductor in the multi-wire circuit.
What Is Heuristic Analysis?
Heuristic Analysis Definition
What is heuristic analysis? Heuristic evaluation or analysis works by looking for commands and instructions not normally present in a benevolent application. For example, it may detect commands to deliver payloads often disguised within a Trojan horse virus or those used to distribute a worm virus throughout your network.
Heuristic analysis can pinpoint a virus through the way it replicates as it spreads. It is also at the heart of user and entity behavior analytics (UEBA), which uses algorithms to study the behavior of users, routers, endpoints, and servers.
Heuristic Analysis Meaning: How Does Heuristic Analysis Work?
Heuristic analysis is done using a couple of different techniques:
Static Heuristic Analysis
Static heuristic analysis involves examining the source code of a program and comparing it to the source code of known viruses that have already been logged in a database. If enough of it matches what is in the database, the code gets flagged as a potential threat.
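As a toy illustration of the idea (the fragment database and 50% threshold below are invented for this sketch, not taken from any real scanner), static matching can be reduced to counting how many known-bad code fragments appear in a sample:

```python
# Toy static heuristic: score a file against fragments of known-malicious code
# and flag it when enough fragments match.
KNOWN_BAD_FRAGMENTS = [
    b"CreateRemoteThread",         # process-injection API call
    b"cmd.exe /c del",             # self-deletion command
    b"vssadmin delete shadows",    # shadow-copy wiping, common in ransomware
    b"SetWindowsHookEx",           # keylogging hook
]

def static_heuristic(sample: bytes, threshold: float = 0.5) -> bool:
    """Return True if the sample matches enough known-bad fragments."""
    hits = sum(1 for frag in KNOWN_BAD_FRAGMENTS if frag in sample)
    return hits / len(KNOWN_BAD_FRAGMENTS) >= threshold

sample = b"...vssadmin delete shadows...CreateRemoteThread..."
print(static_heuristic(sample))  # True: 2 of 4 fragments matched
```

Real engines compare against millions of signatures and weight fragments differently, but the flag-above-a-threshold logic is the same.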
Dynamic Heuristic Analysis
Dynamic heuristic analysis uses a virtual machine, which acts as a sandbox. A sandbox is a safe, isolated environment in which a program can execute without affecting the rest of your system or network. With dynamic heuristic analysis, the sandbox environment allows the file to run, so you can see what it would do if it runs in a sensitive environment.
For example, during a dynamic heuristic analysis, the program under observation may self-replicate, try to stay within resident memory after executing, overwrite files, or do other things that viruses are often programmed to do.
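Those observed behaviors are typically turned into a score. A minimal sketch follows, with invented event names and weights (real sandboxes track hundreds of behaviors):

```python
# Toy behavior scorer for a sandbox run.
SUSPICIOUS_BEHAVIORS = {
    "self_replicate": 4,    # copies itself elsewhere on disk
    "stay_resident": 2,     # remains in memory after its process exits
    "overwrite_files": 3,   # rewrites files it did not create
    "registry_autorun": 3,  # installs itself to run at startup
}

def behavior_score(observed_events: list) -> int:
    """Sum the weights of every suspicious event seen during execution."""
    return sum(SUSPICIOUS_BEHAVIORS.get(e, 0) for e in observed_events)

events = ["open_window", "self_replicate", "overwrite_files"]
score = behavior_score(events)
print(score, "-> flag as malware" if score >= 5 else "-> looks benign")  # 7 -> flag
```

Benign events (like `open_window`) contribute nothing, so a clean program accumulates a low score even over a long run.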
Heuristic Analysis Definition: Advantages and Disadvantages of Heuristic Analysis
There are a few advantages and disadvantages to heuristic analysis, but despite the drawbacks, it is still a very powerful tool.
Heuristic analysis can detect more than just modified forms of current malicious programs. It can also detect previously unknown malicious programs. This is because it analyzes the behavior of a potential threat instead of its file name.
This method of analysis also reduces the number of false positives because some behaviors are very specific to malware, and heuristic analysis can identify them, pinpointing the threat. For example, if a program tries to delete files that are needed by the operating system, it is most likely malicious. Heuristic analysis can detect this kind of behavior and flag the threat so it can be removed.
On the other hand, by merely examining the signature of a program and comparing it to those of known threats, the threat may slip away unnoticed, simply because it does not match a known threat. This is often the case when dealing with a zero-day or previously unknown threat. Heuristic analysis can flag the threat based on what it does, regardless of whether it has already been logged in a threat management system.
Heuristic analysis is designed to detect known threat behavior. If the threat does not perform any action the threat detection technology has been programmed to recognize, it can slip under the radar.
To illustrate, suppose your antivirus software has been engineered to flag a program that tries to delete files your operating system needs but not files that decrypt themselves. In this case, if it comes across a self-decrypting file, it may not notice that it is a threat—even though this action is typical of threats.
There is also a chance that the antivirus/anti-malware software uses heuristic scanning based on a range of behavior that is too broad. In this heuristic analysis example, the process can result in mislabeling innocent files as threats. However, this is more common in older heuristic analysis programs, so if you have a newer one and it has been recently updated, chances are it uses modern techniques, which limit the number of false positives.
Heuristic Analysis vs. Heuristic Virus
It is easy to confuse the terms “heuristic analysis” and “heuristic virus.” However, in some ways, they could not be more different. A heuristic virus can be detected using heuristic analysis. For example, the malware known as Heur.Invader is designed to make changes to your system’s settings. Therefore, it can be detected using heuristic analysis.
Heuristic analysis, on the other hand, identifies programs or applications that behave suspiciously. In other words, heuristic analysis is a methodology used to identify a heuristic virus.
How Does Heuristic Analysis Help to Detect and Remove a Heuristic Virus?
Heuristic analysis detects and removes a heuristic virus by first checking files in your computer, as well as code that may be behaving in a suspicious manner. Once a potential threat has been identified, it gets flagged.
At this point, the threat can be removed from your system. The antivirus system can also quarantine the threat, which can give IT teams the opportunity to study it and gain a better understanding of what it is and how it works.
How To Run an Effective Heuristic Analysis
Thanks to modern antivirus software, it is relatively easy to run a heuristic analysis:
- Start up your device in safe mode.
- Once startup is complete, run an antivirus scan using your heuristically enabled anti-malware software.
- After the program has identified suspicious files, carefully check them to see if they are definitely ones you want to delete. It can sometimes help to Google the file name to see if others have come across it and how they have dealt with it.
How Can Fortinet Help?
The FortiGate Next-Generation Firewall (NGFW) uses heuristic analysis to identify suspicious behavior and then remove the threat from your system. FortiGate does this using machine learning algorithms that can detect anomalous behavior indicative of a threat. This gives it the ability to pinpoint zero-day attacks heuristically, homing in on behavior that typifies malware.
FortiGate is powered by an onboard processor that enables it to perform deep scans of data packets as they attempt to enter or exit your network. In this way, not only does it perform heuristic analysis, but it also does so in a way that does not negatively impact your throughput.
What does heuristic analysis mean?
Heuristic analysis is a method of threat detection that works by looking for commands and instructions that would not normally be present in a benevolent application.
How does the heuristic technique work?
Heuristic analysis evaluates the actions of programs using a couple of different techniques. For example, you can use static heuristic analysis, which involves examining the source code and comparing it to the source code of known viruses. You can also use dynamic heuristic analysis, which uses a virtual machine that acts as a sandbox. With dynamic heuristic analysis, the sandbox environment allows the file to run, so you can see what it would do in a sensitive environment.
How does the heuristic method detect viruses?
Heuristic analysis detects and removes a heuristic virus by first checking files in your computer, as well as code that behaves in a suspicious manner. Once a potential threat has been identified, it gets flagged.
Oats are a whole grain that is commonly eaten for breakfast as oatmeal (porridge).
They are rich in carbs and fiber, but also higher in protein and fat than most other grains. They are very high in many vitamins and minerals.
They contain many powerful antioxidants, including avenanthramides. These compounds may help reduce blood pressure and provide other benefits.
Oats are high in the soluble fiber beta-glucan, which has numerous benefits. It helps reduce cholesterol and blood sugar levels, promotes healthy gut bacteria and increases feelings of fullness.
They may lower the risk of heart disease by reducing both total and LDL cholesterol and protecting LDL cholesterol from oxidation.
Due to the soluble fiber beta-glucan, oats may improve insulin sensitivity and help lower blood sugar levels.
Oatmeal may help you lose weight by making you feel fuller. It does this by slowing down the emptying of the stomach and increasing production of the satiety hormone PYY.
Colloidal oatmeal (finely ground oats) has long been used to help treat dry and itchy skin. It may help relieve symptoms of various skin conditions, including eczema.
Oats may also reduce constipation in elderly individuals, significantly reducing the need for laxatives.
For more tips, follow our Today’s Health Tip listing.
An antioxidant appears to help level the playing field between males and females in levels of the cofactor BH4 deep inside the kidneys—where the fine-tuning of our blood pressure happens—and restore similar production levels of protective nitric oxide.
Higher nitric oxide levels help reduce blood pressure both by enabling dilation of blood vessels and increasing the kidneys’ excretion of sodium, which decreases the volume in those blood vessels.
“BH4 has to be there,” says Dr. Jennifer C. Sullivan, pharmacologist and physiologist in the Department of Physiology at the Medical College of Georgia at Augusta University, who is exploring gender differences in hypertension.
“We found that oxidative stress makes a big difference in BH4 levels.”
The study in the journal Bioscience Reports is the first to look at sex differences of BH4 in a rodent model of hypertension.
Male humans generally have higher blood pressures and oxidative stress levels than females, at least until menopause.
The findings provide more evidence that the cofactor might be a novel treatment target for both sexes, says Sullivan, the study’s corresponding author.
BH4, or tetrahydrobiopterin, is required by the enzyme nitric oxide synthase to make nitric oxide.
Oxidative stress, which results from high levels of natural byproducts of oxygen use, is known to reduce BH4 levels, is implicated in high blood pressure and, at least before menopause, females tend to be less sensitive to it, possibly because of the protective effects of estrogen.
In an attempt to figure out why females, even in the face of hypertension, have more nitric oxide, the scientists measured BH4 levels in the innermost part of the kidney in male and female spontaneously hypertensive rats.
“We found BH4 levels were higher in the hypertensive females than the hypertensive males,” Sullivan says.
Females also had more nitric oxide and lower—but still high—blood pressures, and the males had more oxidative stress.
They had previously shown that young spontaneously hypertensive female rats have significantly more nitric oxide and nitric oxide synthase activity in the inner portion of their kidney than their male hypertensive counterparts, and that difference holds as the rats mature.
The new work helps explain why.
“If we don’t understand why females have more nitric oxide, we can’t do things to potentiate our ability to make it,” Sullivan says.
The scientists theorized—and found—that the elevated levels of oxidative stress in the males meant less BH4, and ultimately less nitric oxide compared to females.
They found that reducing oxidative stress improved BH4, levels and nitric oxide production and “normalized the playing fields between the two sexes,” Sullivan says.
Pouring more BH4 on the situation on the other hand, didn’t work without reducing oxidative stress.
“If you have a ton of oxidative stress, you can give as much BH4 as you want, and all you are going to get is more BH2,” Sullivan says of BH4’s destructive counterpart and the unhealthy, vicious cycle it helps create.
Without BH4, nitric oxide synthase becomes “uncoupled” and instead produces superoxide, which decreases nitric oxide production but also interacts with the nitric oxide that is available to form the oxidant peroxynitrite. Destructive peroxynitrite, in turn, targets the BH4 that is present so it becomes BH2, which further interferes with BH4’s normal job of helping nitric oxide synthase make nitric oxide.
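The vicious cycle described above can be summarized in a simplified reaction scheme (standard biochemistry of NOS uncoupling, condensed from the paragraph):

```latex
\begin{align*}
\text{NOS} + \mathrm{BH_4}:&\quad \text{L-arginine} \longrightarrow \text{NO}
  &&\text{(coupled: nitric oxide is made)}\\
\text{NOS without } \mathrm{BH_4}:&\quad \mathrm{O_2} \longrightarrow \mathrm{O_2^{\bullet-}}
  &&\text{(uncoupled: superoxide instead)}\\
&\quad \mathrm{O_2^{\bullet-}} + \mathrm{NO} \longrightarrow \mathrm{ONOO^-}
  &&\text{(peroxynitrite forms; NO is lost)}\\
&\quad \mathrm{ONOO^-} + \mathrm{BH_4} \longrightarrow \mathrm{BH_2}
  &&\text{(remaining cofactor is oxidized)}
\end{align*}
```

Each turn of the cycle leaves less BH4 and less nitric oxide, which is why supplementing BH4 without first reducing oxidative stress fails.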
“You don’t make the product you want nitric oxide synthase to make, which is nitric oxide,” Sullivan says.
While it’s not clear that females are any better at making BH4, it is clear that the cofactor is easily altered by oxidative stress to become its unhealthy counterpart, BH2, Sullivan says.
Giving both males and females the synthetic antioxidant treatment Tempol for two weeks is what leveled the gender field.
Bottom line: the antioxidant treatment essentially eliminated the sex differences in BH4 and nitric oxide synthase activity in that key region of the kidneys.
Males had a higher blood pressure at baseline and the antioxidant treatment had no effect on the blood pressure of either sex.
More work is needed to explore BH4’s treatment potential in both sexes, Sullivan says.
BH4 is widely available without a prescription, and its impact has been evaluated in a number of clinical trials, including a current study at the University of Nebraska, Omaha, looking at its effect on blood flow and exercise capacity in patients with peripheral artery disease.
More information: Ellen E. Gillis et al. Oxidative stress induces BH4 deficiency in male, but not female, SHR, Bioscience Reports (2018). DOI: 10.1042/BSR20180111
Provided by: Medical College of Georgia at Augusta University
Also known as: Crypto-Virus, Crypto-Trojan, Ransom Virus, Ransom Malware
Ransomware is a type of malware that infects your system, then locks or encrypts your most important data, allowing attackers to ask for a ransom. The attackers will offer to provide the decryption key only if you pay a certain amount of money within a short time.
Ransomware usually finds its way into a system through a malicious email attachment or through a malicious website that will begin downloading infected software onto the system. Phishing or spear-phishing scams are commonly used to trick the victim into opening attachments by masquerading as another person or organization that the victim already trusts. Sometimes, more aggressive forms of ransomware are used that don’t require tricking users in any way and instead exploit weak points in system security.
Once the malware is on the system, it may lock down the system, encrypt the user’s files, or restrict the user from accessing any of the computer’s main features. While the system is locked down, the ransomware will pop up messages asking for a certain amount of money to lift the lock. On top of that, some ransomware will pose as an official government agency and claim that the lockdown is necessary for legal or security reasons. In every situation, paying the ransom is not a guarantee that you will completely unlock the system or remove the malicious ransomware.
Ransomware is often the largest security challenge faced by businesses in the modern world, especially for small and medium sized businesses who lack the resources to effectively combat the malware.
Today, ransom malware is becoming ever more widespread. It has become a preferred tool of hackers for several important reasons:
- Ransomware is now created like a fully-developed piece of software. It is frequently updated and patched to mirror any updates that users are making to system security.
- Ransomware development is so advanced that it is now even offered as “Ransomware as a Service” with dedicated customer support. This means that executing a ransomware attack requires no technical knowledge.
- Effectively combating ransomware requires a big budget and a team of knowledgeable people making frequent updates to cybersecurity, something most businesses don’t have the resources to do.
- Attackers don’t need to use technology to find their way into systems, instead they are adept at exploiting users and employees and tricking them into downloading email attachments or navigating to malicious websites.
The first record of extortion-based malware dates back as early as 1989, but widespread use of ransomware didn’t begin occurring until the mid-2000s. In the beginning, ransomware would usually encrypt file types that users would be willing to open, such as files with extensions like .DOC, .XLS, .ZIP and most image formats. Since then, ransomware technology has developed to target other important file types such as SQL and database files.
Over the years, cyber-thieves have added more features to their ransomware, such as countdown timers, incrementally increasing ransom amounts, and alternative payment platforms for ransom payments. More recently, ransomware attackers have expanded their targets to include larger operational systems like hospital networks and transportation service providers. In the future, as more devices connect to the internet, we will likely see more ransomware targeted beyond computers and servers.
Ransomware has quickly become one of the largest threats for business IT environments and currently accounts for around 40% of all spam messages. Paying the ransom might release the locked files, but it also invites further attack. Moreover, the damage ransomware can cause goes beyond the cost of the ransom. The disruption caused by a ransomware attack can hurt a business’ revenue, productivity, and reputation.
There isn’t a one-off solution for preventing ransomware. Instead, a multi-layered security program should be put in place to detect potential ransomware attacks, prevent the intrusion of malware, and allow for quick recovery in case an attack is not stopped.
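As one illustration of a detection layer: encrypted files have near-maximal byte entropy, so a monitor can flag processes that suddenly rewrite many files as high-entropy data. The sketch below shows the core measurement; the 7.5-bits/byte threshold and sample data are invented for illustration:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Plain text clusters well below 8 bits/byte; ciphertext sits near it.
text = b"quarterly report: revenue was flat, expenses rose modestly" * 10
random_like = bytes(range(256)) * 4   # stand-in for encrypted output

print(f"text:       {shannon_entropy(text):.2f} bits/byte")
print(f"ciphertext: {shannon_entropy(random_like):.2f} bits/byte")  # 8.00
# A monitor might alert when many files cross, say, 7.5 bits/byte at once.
```

High entropy alone is not proof of ransomware (legitimate compression triggers it too), which is exactly why this check belongs in a multi-layered program rather than standing alone.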
Some general tips include:
- Train users to be defensive. Never click on email links or open attachments from any email that is from an unknown sender or which looks suspicious. Never click links or download files from untrusted websites.
- Keep computer operating systems and software up to date.
- Don’t install any software that you don’t completely trust, and don’t give software more permissions than they need.
- Install security software that covers all threat vectors. This includes email security filters, web filters, web application firewalls for your website, network firewalls with advanced threat protection, and endpoint anti-virus software.
- Back up all company files and documents frequently and in multiple locations. Be sure that all backed up data is replicated to a secure cloud storage.
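The last tip can be sketched in a few lines: write each backup into a fresh, timestamped directory so that older snapshots survive even if the live files are later encrypted. This is only an illustrative outline (paths and naming are placeholders), not a substitute for a real backup product:

```python
import shutil
import time
from pathlib import Path

def snapshot(source: str, backup_root: str) -> Path:
    """Copy `source` into a new timestamped folder so old versions survive.

    Because each run writes a fresh directory, ransomware that encrypts the
    live files cannot retroactively corrupt earlier snapshots (provided the
    backup volume itself is not writable by ordinary user processes).
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / stamp
    shutil.copytree(source, dest)  # fails if dest already exists, by design
    return dest

# Example (placeholder paths): snapshot("/srv/documents", "/mnt/backup/documents")
```

Pair this with off-site or cloud replication, as the tip above notes, so a single compromised machine cannot destroy every copy.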
Barracuda provides a complete family of solutions to help you detect, prevent, and recover from ransomware attacks. See Don’t Be a Ransomware Victim to learn more or attend a free webinar.
Step 1. Detect Ransomware
A good first step is to identify any latent ransomware threats that may already exist in your organization. In fact, 47% of all businesses in the U.S. have been affected by ransomware, and 59% of ransomware infections have been delivered by email. Barracuda offers two free services to check your existing email and website for possible ransomware attacks as well as a variety of other advanced threats.
The Barracuda Email Threat Scanner is a free service that checks for latent threats that are already in your Office 365 or Microsoft Exchange Inboxes.
The Barracuda Vulnerability Manager will scan your website and any web applications for possible vulnerabilities including ransomware. As with the email scanner, the Vulnerability Manager is a free service that takes just two minutes to set up.
Step 2. Prevent Ransomware
Preventing ransomware requires a comprehensive defense that covers every possible method by which ransomware can enter your network and reach users and data.
The foundation of an effective ransomware defense is a network firewall with advanced threat protection. Barracuda CloudGen Firewalls scan all network traffic for potential ransomware, malware, and many other cyber threats. They secure today’s dispersed network infrastructures, including on-premises, cloud-hosted, SaaS-based, and mobile elements, as well as third-party applications. They enable secure network connections for your remote workers, improve site-to-site connectivity, and ensure secure, uninterrupted access to cloud-hosted applications.
Barracuda’s email security products extend ransomware defense to your mail server, the most common source of ransomware attacks. Barracuda Email Protection is a cloud-based service that protects email from cyber-attacks and data theft. To protect against the most sophisticated types of email phishing and impersonation attacks, it uses artificial intelligence to scan all emails for potential threats. The Barracuda Email Security Gateway provides this same level of protection in an appliance.
Web Browsing Security
Another common source of ransomware is malicious websites that users may visit by accident or by clicking on a link within an email. The Barracuda Web Security Gateway and Barracuda Web Security Service safeguard web browsing to ensure that users do not inadvertently download malware or enter sensitive data to untrusted websites. It detects and blocks internal spyware that may be trying to access the Internet, and it provides detailed reporting of unusual or suspicious web browsing activity.
Web Site and Web Applications
Your organization’s website is a high-profile target for attackers. Despite the recent news about larger corporations and government agencies that were attacked, the majority of attacks target small and medium-sized businesses. The Barracuda Web Application Firewall continuously monitors your outward-facing websites and applications to identify, log, and remediate thousands of potential attacks that can steal data, deny service, and infect your organization with malware such as ransomware.
Step 3. Backup and Recovery
Even the best ransomware defense can occasionally be breached, which makes a robust backup and recovery system critical. If ransomware does reach your network, an offsite backup system can help you quickly recover your data and minimize business disruption.
Barracuda Backup automatically creates updated backups as files are revised, and duplicates them to the secure Barracuda cloud or to a private off-site location. If criminals encrypt your files with ransomware, first eliminate the malware, then simply delete the encrypted files, and restore them from a recent backup file. The whole process can take as little as one hour, letting you get right back to business, and leaving the criminals empty-handed.
See the following articles for additional information about ransomware protection.
Contact Barracuda to learn more about ransomware defense, set up a free ransomware consultation, or to get a free trial of any Barracuda product. | <urn:uuid:723023e4-7fbc-41cb-b38b-f6ba10b1cc9d> | CC-MAIN-2022-40 | https://www.barracuda.com/glossary/ransomware?utm_source=blog&utm_medium=39506&switch_lang_code=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00399.warc.gz | en | 0.917181 | 1,832 | 2.921875 | 3 |
The Blockchain – Hype or Reality
What could be the single most transformative technology concept that could change everything we know about risk and internal controls.
If you have not heard about the Blockchain, you certainly have heard about bitcoin. Bitcoin, and other digital currencies like it, are built on the concept of the blockchain – also known as a distributed ledger. The capabilities and benefits of utilizing a Blockchain for other use cases holds significant promises for transforming how we do business.
This blog is to help those who are just beginning to hear about blockchain. I will not make you an expert. My objective is to get this technology on your radar stimulate intellectual curiosity to learn more. ERM is working on a risk framework for blockchain. Drop us a line if you want to stay in touch regarding our developments.
What is Blockchain and what are the benefits?
In simple terms, blockchain is a distributed database where a community has agreed to use it as a transparent authority for their activities. The benefit of using the distributed database include:
- The community validates the authority and validity of all transactions on the distributed database.
- Backup and recovery of data is distributed and replicated making it virtually impossible to lose data on the blockchain in the event of a disaster.
- The transparency and authentication of activity on the blockchain provides for nonrepudiation of transactions at scale unlike any other platform in the past.
Why do I want to use a blockchain?
ERM recently attended the 2nd annual Blockchain conference in Washington, D.C. One of the panelists made the clearest business case for when you want to use blockchain. The two conditions noted:
- The trading partners do not trust each other
- The trading partners do not trust a central authority
The blockchain inherently manages these two conditions by being transparent, authenticating all transactions and recording them indefinitely. Creating alternative to the infrastructure needed for institutional trust; changing the way our society functions.
What changes with the blockchain?
The following table illustrates how financial statement assertions and control assertions are addressed inherently it how it works:
|Assertion||Blockchain Value Proposition|
|Occurrence||If it’s on the chain, it took place|
|Completeness||Explicit agreement to use the chain for all transactions by trading partners|
|Accuracy||Transaction accuracy is reinforced by the trading partners and available for review.|
|Cutoff||The transactions are recorded when they occur – real-time accounting.|
|Classification||Could be forced to be acknowledged on the chain.|
|Existence||If it’s not on the chain, it did not happen.|
|Rights and Obligations||Documented as part of the transaction and available for review.|
|Completeness||The entire transaction is available for review.|
|Valuation and Allocation||The transaction is available for review and establishing its value is available to all trading partners.|
How is it currently being used?
Everledger, a diamond registry founded by Leanne Kemp, uses blockchain to track and protect the diamonds throughout their life. The genesis of a diamond isn’t always clear. Knowing the origin of a jewel can stop insurance frauds, sort out synthetic diamonds or those sourced in war zones. But even then, the documents can be forged.
“Blockchain is immutable; it cannot be changed, so records are permanently stored,” says Kemp. “Information on the blockchain is cryptographically proven by a federated consensus, instead of being written by just one person.”
Starting at mines they originated, each diamond is given an ID based on several dozen different features and then is put into the chain; becoming the record of the jewel’s ownership throughout its life.
How will it impact business and society?
Blockchain technology will be used to improve inefficient processes. Think of the processes that are used to buy and sell things, identify ownership or even ourselves. They are typically slow, error prone and dependent on people. However, the amount of uses a transparent, verifiable record of transaction data on a decentralized platform, which requires no central supervision while maintaining fraud resistant, is seemingly endless.
Giving blockchain the potential to make enormous changes to our economic and social climates that will revolutionize a wide range of industries including: Financial Services, Healthcare, Music, Manufacturing, Identity, Automotive and Government.
What does the world look like from a Risk Management perspective?
Audit – Historical accounting and auditing continues to be the best we have today to provide reasonable assurance to stakeholders on the accuracy and completeness of the financial statements. In a blockchain enabled world, audit can exist real-time. If the auditor is a participant in the blockchain, transactions can be audited real time before being recorded. Real-time and transparent audit. In addition, the endless list of document requests can go away – everything the auditor needs is already there for their review.
Regulatory Compliance – In highly regulated transactions, the regulator could be part of the blockchain reviewing and certifying each transaction for compliance.
Cybersecurity – The use of tokens and cryptography reinforce the confidentiality and integrity of the transactions and the participants on the blockchain
Business Continuity and Disaster Recovery – The distributed nature of the database means data is not vulnerable to a single point of failure. Business can continue regardless of the loss of part of the blockchain.
Let’s Slow Down Here – The Security Challenges
With cryptography as its foundational block, the blockchain starts off on firm footing. However, blockchain poses some key security challenges –
People are still the weakest link even with all the fancy crypto in place. People are the main actors performing the transactions. All the typical cybersecurity problems from authentication to data compromise directly apply. And a lot of what we do today happens on smartphones – an area that’s yet to see the full impact of what havoc poor cybersecurity can wreak.
If there’s something that the huge data breaches at Bitcoin exchanges like Mt. Gox and Bitfinex are telling us, it is to take a step back and look at where this is heading. Blockchain-based Bitcoins were simply stolen with direct attacks at the exchanges. Think about this at an individual level – individual wallets could be compromised with social engineering attacks and malware-based attacks. The end-result is that the blockchain and its proponents will need to come up with a way to address fraudulent losses – much like how a bank would cover the losses a customer faces when hacked.
Inherent Key Weaknesses
Blockchain and implementations based on it heavily depend on the keys that they use to operate. Software-generated keys have been known to have flaws, including the generation of weak keys that could be compromised by a determined hacker.
The distributed ledger, as the name suggests, is distributed among several individuals and they will have the ability to view the transaction histories including those transactions where they weren’t even a part of the transaction. This inherently violated privacy and when you think of the “right to be forgotten” and how you’d go about implementing it with blockchain, you have a pretty difficult task at hand. For instance, how would you prove that all transactional data has been deleted (even if eventually) from all parties and counterparties?
While technology has no boundaries, boundaries and nations do exist in the world we live in. The legalities and jurisdictional implications surrounding issues related to blockchain will be truly challenging. Throw in regulations and compliance and you have the perfect storm.
So there’s a blockchain project or proof of concept underway, what are the risks?
With the promise of blockchain, there are several risks that should be considered and monitored as it is implemented and operated:
The initial implementation and use of the blockchain represents the most vulnerable time since embedding vulnerabilities at inception have the greatest chance of success and going undetected in the future. Here are some pre-implementation considerations:
- Terms and conditions – What exactly is the legal arrangement for using and participating on the blockchain?
- Hosting Risks – Hosting the technology on shared infrastructure may increase the risk of unauthorized access to blockchain data. Insist on isolation and separation to protect against external attacks.
- Encryption – Do not assume all data is encrypted. Evaluate that all data in encrypted form and that encryption keys are not stored with the data.
- Administrative rights – Gain a detailed understanding of the precise administrative access that will be managed and monitored.
- Data Governance – No ambiguity should exist in any of the data elements recorded on the chain. They should be documented and mutually agreed to by all participants.
- Forked Chains – What happens if there’s a disagreement, how is this going to be addressed, how do you reduce the risk of multiple chains for the same transaction activity.
- APIs - Validate that all APIs writing or reading from the chain have no inherent security vulnerabilities.
- Personally Identifiable Information – How are you preventing and/or detecting PII from being stored on the blockchain?
- Get help – None of us are complete experts in any of these new technologies. Consider hiring outside experts regarding cryptography and other relative security experts to review your implementation.
So the blockchain is in place, what do you do know? Here are some practical considerations:
- Incident response – What is the plan and approach if participants are raising concerns regarding transactions or data leakage? What is your organization prepared to do? Can you continue to transact if the whole blockchain was disrupted?
- Governance – How are internal policies and procedures being adapted and maintained in light of the use of the blockchain?
- Information Security – Are you following the best security practices ranging from general security considerations to technical security considerations such as encryption?
Turn your employees into a human firewall with our innovative Security Awareness Training.
Our e-learning modules take the boring out of security training.
Get a curated briefing of the week's biggest cyber news every Friday.
Intelligence and Insights | <urn:uuid:da384195-91d6-438f-adea-04207900a40f> | CC-MAIN-2022-40 | https://ermprotect.com/blog/the-blockchain-hype-or-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00399.warc.gz | en | 0.929964 | 2,083 | 2.546875 | 3 |
It’s alive! In a moment straight out of science fiction, scientists have created a lab-grown brain that can move muscles. This miniature brain, only the size of a bean, was able to join with cells from a donor’s spinal cord, forming a miniature nervous system in a petri dish. Are these mini-brains conscious? No, but their development may hold the key to understanding how neurological diseases like Alzheimer’s develop over time.
A group of Canadian and international researchers and graduate students working on the University of Alberta’s climate research project is using IBM’s InfoSphere Streams software to crunch, correlate and analyze data in order to detect, visualize and predict subtle changes in the environment.
The software provides real-time analysis for more than 10,000 data points per second from sensors measuring carbon levels and other environmental indicators such as relative humidity, temperature, soil moisture, atmospheric pressure and ambient noise from forests in Canada, Australia, Brazil, Costa Rica and Mexico. The data is gathered by more than 500 sensors located in some of the world’s most remote and vulnerable ecosystems.
“When I started this project four years ago I had no idea how much data I would be generating, and we could not look at our data in a reasonable amount of time. It was taking something like six months to two years before we had usable insights,” said Dr. Arturo Sanchez-Azofeifa of the university’s Department of Earth and Atmospheric Sciences and leader of the Enviro-Net project. “Now, we can basically ‘see’ the forests breathing in real-time.”
He said the depth of insights now being produced has not previously been available in real-time.
Students from the university will work with IBM to develop a simplified ‘dashboard’ view of the data to make it easier to share and convey insights to decision-makers.
“The ability to quickly analyze that data and make informed decisions will have implications for us here in Alberta as researchers study the impact of oil sands extraction efforts,” said Bernie Kollman, IBM’s vice-president for public sector in Alberta. “It will also help other policy makers around the world support environmental stewardship.”
The University of Alberta collaborated with researchers from IBM’s T.J. Watson Laboratory to integrate InfoSphere Streams into their research, reducing the time required to analyze data from months to minutes. The technology provides researchers — and eventually policy makers — with an unprecedented ability to predict environmental events, such as forest fires and drought, and to apply insights to more accurately forecast how boreal and tropical forests are recovering after deforestation and disturbances.
IBM awarded Dr. Sanchez-Azofeifa use of the software through IBM’s Alberta Centre for Advanced Studies (CAS). IBM Alberta CAS was formed with the Government of Alberta and Alberta Universities to enable strategic, multidisciplinary collaborations of mutual interest and benefit between the province’s research community and IBM’s worldwide research. | <urn:uuid:0c2cd672-cee4-4f4a-8583-ec18c6273ac8> | CC-MAIN-2022-40 | https://channeldailynews.com/news/university-of-alberta-uses-ibm-infosphere-stream-for-research/37989 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00399.warc.gz | en | 0.938226 | 535 | 2.90625 | 3 |
U of Basel Researchers Provide New Look at 2D Magnets using Diamond Quantum Sensors
(RDMag) Physicists at the University of Basel have succeeded in measuring the magnetic properties of atomically thin van der Waals materials on the nanoscale for the first time. They used diamond quantum sensors to determine the strength of the magnetization of individual atomic layers of the material chromium triiodide. The work opens interesting perspectives on how their innovative quantum sensors can be used in the future to study two-dimensional magnets in order to contribute to the development of novel electronic components.
“Our method, which uses the individual spins in diamond color centers as sensors, opens up a whole new field. The magnetic properties of two-dimensional materials can now be studied on the nanoscale and even in a quantitative manner. Our innovative quantum sensors are perfectly suited to this complex task,” says Georg-H.-Endress Professor Patrick Maletinsky from the Department of Physics and the Swiss Nanoscience Institute at the University of Basel. | <urn:uuid:d364bd27-b189-4321-b18a-3db6c46f907b> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/u-basel-researchers-provide-new-look-2d-magnets-using-diamond-quantum-sensors/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00399.warc.gz | en | 0.866365 | 213 | 2.640625 | 3 |
In a recent survey, 70% of tablet owners and 53% of smartphone / mobile phone owners stated that they use public Wi-Fi hotspots. However, because data sent through public Wi-Fi can easily be intercepted, many mobile device and laptop users are risking the security of their personal information, digital identity and money. Furthermore, if their device or computer is not protected by an effective security and anti-malware product, the risks are even greater.
Wireless Security tips — to help keep you safe on public Wi-Fi
With coffee shops, hotels, shopping malls, airports and many other locations offering their customers free access to public Wi-Fi, it’s a convenient way to check your emails, catch up on social networking or surf the web when you’re out and about. However, cybercriminals will often spy on public Wi-Fi networks and intercept data that is transferred across the link. In this way, the criminal can access users’ banking credentials, account passwords and other valuable information.
Here are some useful tips from Kaspersky Lab’s team of Internet security experts:
- Be aware
Public Wi-Fi is inherently insecure — so be cautious.
- Remember — any device could be at risk
Laptops, smartphones and tablets are all susceptible to wireless security risks.
- Treat all Wi-Fi links with suspicion
Don’t just assume that the Wi-Fi link is legitimate. It could be a bogus link that has been set up by a cybercriminal that’s trying to capture valuable, personal information from unsuspecting users. Question everything — and don’t connect to an unknown or unrecognised wireless access point.
- Try to verify it’s a legitimate wireless connection
Some bogus links — that have been set up by malicious users — will have a connection name that’s deliberately similar to the coffee shop, hotel or venue that’s offering free Wi-Fi. If you can speak with an employee at the location that’s providing the public Wi-Fi connection, ask for information about their legitimate Wi-Fi access point — such as the connection’s name and IP address.
- Use a VPN (virtual private network)
By using a VPN when you connect to a public Wi-Fi network, you’ll effectively be using a ‘private tunnel’ that encrypts all of your data that passes through the network. This can help to prevent cybercriminals — that are lurking on the network — from intercepting your data.
- Avoid using specific types of website
It’s a good idea to avoid logging into websites where there’s a chance that cybercriminals could capture your identity, passwords or personal information — such as social networking sites, online banking services or any websites that store your credit card information.
- Consider using your mobile phone
If you need to access any websites that store or require the input of any sensitive information — including social networking, online shopping and online banking sites — it may be worthwhile accessing them via your mobile phone network, instead of the public Wi-Fi connection.
- Protect your device against cyberattacks
Make sure all of your devices are protected by a rigorous anti-malware and security solution — and ensure that it’s updated as regularly as possible. | <urn:uuid:eac7de44-789d-476d-b318-77340ff7f0a7> | CC-MAIN-2022-40 | https://usa.kaspersky.com/resource-center/preemptive-safety/public-wifi | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00399.warc.gz | en | 0.912897 | 685 | 2.828125 | 3 |
To ensure proper protection of their critical data, organizations pay attention to the processes they use for managing identities, privileges, and secrets. Collectively referred to as secrets management, these processes provide organizations with capabilities for managing passwords, encryption keys, API keys, and other types of secrets in a centralized and secure way.
In this article, we tell you about common types of secrets and the role of secrets management in an organization’s cybersecurity. We also cover common challenges of secrets management and provide helpful recommendations for improving your secrets and password management routines.
Secrets management is vital for data security
Poor secrets management leads to cybersecurity incidents.
According to Gartner, by 2021, more than half of organizations using DevOps will be using PAM-based secrets management services and solutions. That’s a promising prediction, considering that today only about 10% of organizations use secrets management solutions. At the same time, secrets management is crucial for all organizations, whether they use DevOps or not, because all organizations use digital secrets to some extent.
Let’s start by clarifying what a secret is from the cybersecurity perspective. Secrets are digital credentials: passwords, API keys, encryption keys, SSH keys, tokens, and so on. They’re used for managing access permissions at both human-to-application and application-to-application levels of interaction.
Secrets provide users and applications with access to sensitive data, systems, and services. This is why it’s so important to keep secrets secure both in transit and at rest — and, therefore, to manage them properly.
So what is secrets management?
Secrets management is the process of securely and efficiently managing the creation, storage, rotation, and revocation of digital authorization credentials. In a way, secrets management can be seen as an enhanced version of password management. While the scope of managed credentials is larger, the goal is the same — to protect critical assets from unauthorized access.
With a well-thought-through secrets management policy, organizations can prevent various cybersecurity issues, including unauthorized access to critical data and systems, data losses, and data breaches.
In general, secrets management helps to ensure security at three levels:
- Infrastructure security – Protect user and application accounts, devices, and other network elements from intrusions.
- Cloud service security – Limit and manage access to cloud accounts and important cloud-based services.
- Data security – Protect critical systems, storages, databases, and other resources from data compromise.
Plus, managing and auditing secrets — particularly passwords, as one of the most common types of secrets — is among the key cyber security best practices and requirements of standards and acts like NIST, FIPS, and HIPAA.
Secrets management challenges and anti-patterns
Sometimes it’s hard to keep a secret.
Let’s take a look at the key challenges of secrets management. The biggest difficulty of managing digital credentials is the need to ensure full protection at every phase of a secret’s lifecycle, from creation to deletion.
There are four main phases a password or other secret can go through:
- Creation – Secrets can either be created manually by a user (a password to a personal account) or generated automatically (an encryption key for deciphering a protected database).
- Storage – Secrets can be stored centrally or separately, using designated solutions (a PAM-based secrets management tool or password manager) or common approaches (in a text file, on a shared disk, etc.).
- Rotation – Secrets can be changed or reset on a schedule, thus improving the overall protection of an organization’s infrastructure. Secrets rotation is one of the key requirements of many regulations and standards, including NIST and PCI DSS.
- Revocation – Secrets can be revoked in the case of a cybersecurity incident. Thanks to this measure, organizations can prevent or limit the negative consequences of an incident and make sure that attackers can’t use compromised credentials for accessing your organization’s critical resources, systems, endpoints, or applications.
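These four phases can be made concrete as explicit state transitions. The sketch below is our own illustration, not the model used by any particular secrets management product; the class and field names are invented for the example:

```python
import secrets
from datetime import datetime, timedelta


class ManagedSecret:
    """Tracks a secret through creation, storage, rotation, and revocation."""

    def __init__(self, name, rotation_days=90):
        self.name = name
        self.value = secrets.token_urlsafe(32)   # creation: CSPRNG-generated
        self.created_at = datetime.utcnow()
        self.rotation_days = rotation_days
        self.revoked = False

    def rotation_due(self, now=None):
        """Rotation: flag secrets older than the mandated interval."""
        now = now or datetime.utcnow()
        return now - self.created_at >= timedelta(days=self.rotation_days)

    def rotate(self):
        """Replace the value and reset the clock; the old value is discarded."""
        if self.revoked:
            raise RuntimeError("cannot rotate a revoked secret")
        self.value = secrets.token_urlsafe(32)
        self.created_at = datetime.utcnow()

    def revoke(self):
        """Revocation: invalidate the secret, e.g. after an incident."""
        self.revoked = True
        self.value = None
```

A real secrets management system adds secure storage and auditing on top of this, but the lifecycle bookkeeping is essentially what is shown here.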
At each of these phases, secrets should be protected from unauthorized access, intervention, and manipulation. However, many organizations struggle to build an efficient password management system.
Here are some of the most common secrets management problems:
Lack of visibility. With a constantly changing number of systems, resources, accounts, and applications, the number and locations of secrets change as well. Without clear visibility, an organization won’t be able to manage secrets effectively and securely. Plus, visibility gaps can create additional challenges for audits.
No secrets management policy. Setting clear rules in security policies makes it easier to control different stages of a secret’s lifecycle and helps organizations meet the requirements of security regulations. However, many organizations don’t have such a policy or don’t follow it properly.
Manual management. According to a survey conducted by Centrify, 52% of organizations don’t use password vaults or any other dedicated secrets management tools or systems to manage their digital credentials. This slows down the management process and can make both storage and transmission of keys less secure.
In addition, many organizations still have one or more anti-patterns in their password management routine. Here are the six most common:
Let’s look closer at each of these anti-patterns.
1. Weak passwords. To make things easier for themselves, people tend to use default account passwords, embedded and hard-coded application secrets, and weak, easy-to-remember passwords. These three bad practices constitute one of the biggest password management sins.
Usually, the easier it is to remember a password, the easier it is to crack it. Nevertheless, people still use passwords like “password,” “admin,” and “123123.”
As for embedded and hard-coded passwords, hackers can easily get them with the help of scanning tools, guessing, or performing a brute-force attack.
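One simple countermeasure is to generate credentials with a cryptographically secure source instead of letting users (or developers) pick them. As a sketch, Python’s standard-library `secrets` module can do this; the lengths below are illustrative, not a policy recommendation:

```python
import secrets
import string


def generate_password(length=16):
    """Build a random password from letters, digits, and punctuation
    using a CSPRNG, so it resists guessing and brute-force attacks."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def generate_api_key():
    """URL-safe random token suitable for application-to-application secrets."""
    return secrets.token_urlsafe(32)
```

Generated secrets like these should then live in a vault or secrets manager rather than in source code, which also eliminates the hard-coding problem.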
2. Storing secrets in plain text. Have you ever seen a team or a department using a shared text file containing all the passwords to critical resources? Or sending each other emails and messages with the secrets needed for accessing specific applications, resources, or services? While it’s a pretty common practice, storing and transmitting secrets in plain text creates multiple security risks.
Storing secrets in plain text makes it so much easier for an attacker to get into the target system. The only thing they need to do is obtain a file, email, or text message.
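For passwords specifically, the fix is to never persist the plain value at all: store a salted, slow hash and compare against it at login. A minimal standard-library sketch follows; the scrypt cost parameters are illustrative and production values should follow current guidance:

```python
import hashlib
import hmac
import os


def hash_password(password):
    """Return (salt, digest); only these, never the password, are stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest


def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # hmac.compare_digest avoids timing side channels
    return hmac.compare_digest(candidate, digest)
```

For secrets that must be recovered in plain form (API keys, SSH keys), hashing is not an option; those belong in an encrypted vault rather than in files, emails, or chat messages.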
3. Sharing passwords. There are two sides to the problem of sharing passwords. On the one side, many organizations use shared accounts for managing their systems or working with cloud services and web applications. The key weak point here is that to access a specific resource or application, several people use the same credentials. So in the case of a security incident, you never know who did what under such an account.
On the other side, some employees may share credentials to their personal accounts with colleagues or even outsiders without giving it a second thought. These credentials can be used by malicious insiders and intruders to surreptitiously get hold of an organization’s sensitive data.
4. No secrets revocation. Being able to revoke user credentials is one of the key NIST requirements. Revoking secrets should be a standard response to an employee’s resignation, the expiration of an agreement with a third-party vendor, failed authorization attempts, etc. But unfortunately, not all organizations follow this procedure in their secrets management routines.
5. No secrets rotation. Many security standards require changing passwords on a schedule. PCI compliance, for example, requires changing user passwords at least once every 90 days. The same recommendation applies to application keys and other types of secrets.
However, not all organizations rotate secrets regularly, increasing the risk of their compromise.
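A rotation policy is straightforward to automate as a scheduled audit. The sketch below flags credentials whose age exceeds the mandated interval; the 90-day figure mirrors the PCI guidance mentioned above, and the inventory layout is invented for illustration:

```python
from datetime import datetime, timedelta


def overdue_secrets(inventory, max_age_days=90, now=None):
    """Return names of secrets last rotated more than max_age_days ago.

    `inventory` maps secret name -> datetime of last rotation.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, rotated in inventory.items()
                  if rotated < cutoff)
```

Run on a schedule, a check like this turns "we should rotate our secrets" into a concrete, reviewable report.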
6. Reusing secrets. Using the same secret for different accounts, services, or applications is a common bad practice. Employees often do it to save some time and avoid bothering with the need to remember multiple secrets. However, if a reused secret gets compromised, all the accounts and resources it was used for will be at risk.
As you can see, ensuring the security and effective management of secrets within an organization isn’t that easy. However, we have several recommendations that can help you tackle this task. In the next section, we discuss three best practices for secrets management.
3 best practices for secrets management
Say no to threatening anti-patterns.
In order to minimize the risk that sensitive data will be compromised, organizations need to pay more attention to the way they manage secrets. Below, we list three recommendations that can help you build an efficient secrets management system within your organization.
1. Build a secrets management policy
Using the list of bad practices from the previous section, determine your secrets management strategy and build a basic secrets management policy for your organization. Here are some basic elements to include in this policy:
- Restrict the use of hard-coded secrets and default passwords
- Set strict requirements for the format of passwords
- Specify mandatory secrets revocation in certain cases
- Set a fixed period for mandatory secrets rotation
You can expand this list based on the particularities of your organization. Look to the password management requirements of the regulations you must comply with for additional guidance.
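The "strict format" requirement, for example, can be enforced mechanically at the point where a secret is created. A minimal checker is sketched below; the specific rules and the denylist are illustrative, not a recommended policy:

```python
import re

# Illustrative denylist of defaults that should never be accepted.
DENYLIST = {"password", "admin", "123123", "changeme"}


def meets_policy(password, min_length=12):
    """Return a list of violated rules; an empty list means compliant."""
    violations = []
    if len(password) < min_length:
        violations.append("too short")
    if password.lower() in DENYLIST:
        violations.append("default or denylisted password")
    if not re.search(r"[a-z]", password) or not re.search(r"[A-Z]", password):
        violations.append("needs both upper and lower case")
    if not re.search(r"\d", password):
        violations.append("needs a digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        violations.append("needs a symbol")
    return violations
```

Returning the full list of violations, rather than a bare yes/no, makes the policy self-explaining to users and easier to audit.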
2. Automate secrets management processes
When things aren’t automated, there’s always room for human errors. That’s why you need to deploy dedicated secrets management software for managing secrets in a secure and centralized manner.
Look for a tool that allows you to discover and account for all secrets in use. Pay special attention to options for key generation, rotation, revocation, storage, and transmission.
3. Manage privileges
Users and applications with elevated privileges have access to the most critical data, services, and resources. Not to mention that privileged accounts are one of the key targets for cybercriminals.
Ideally, no user or application should have more privileges than they need for accomplishing regular tasks. Any privilege elevation should be granted for a valid reason and be strictly limited in time. This is why such privileges should also be closely monitored and managed.
Some of today’s PAM solutions go far beyond managing privileges assigned to specific accounts or roles and provide full coverage of different types of secrets.
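Time-boxed elevation of the kind described above can be sketched as a grant with a documented reason and an expiry that every privileged call checks. The names and structure below are hypothetical illustrations, not Ekran System's API:

```python
from datetime import datetime, timedelta


class PrivilegeGrant:
    """A temporary elevation: who, what, why, and until when."""

    def __init__(self, user, privilege, reason,
                 duration_minutes=60, now=None):
        if not reason:
            raise ValueError("elevation requires a documented reason")
        self.user = user
        self.privilege = privilege
        self.reason = reason
        start = now or datetime.utcnow()
        self.expires_at = start + timedelta(minutes=duration_minutes)

    def is_active(self, now=None):
        """Privileged operations should call this before proceeding."""
        return (now or datetime.utcnow()) < self.expires_at
```

Requiring a reason at grant time also gives auditors a trail tying every elevation back to a business justification.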
Manage secrets effectively with Ekran System
Ekran System is an insider protection platform that comes with a rich set of PAM and secrets management functionalities. With the help of this platform, you can effectively manage:
- Local Windows admin passwords
- Windows Active Directory (AD) secrets
- SSH/Telnet keys (for UNIX environments)
- And more
Ekran System’s PAM features cover all vital secrets management and privileged access management tasks, including:
- Configuration, encryption, and management of privileged user credentials
- Secure delivery of secrets
- Delivery of temporary credentials to specified users or groups
- Password rotation with manual or automatic generation of new secrets
With the help of these features, you can ensure secure and efficient management of the most important secrets used by your organization. And to secure your critical assets from insider threats, you can use Ekran System’s user activity monitoring and identity management platform.
Secrets management is important for ensuring an organization’s cybersecurity. It covers all processes and tools related to the creation, storage, transmission, and management of digital credentials such as encryption keys, API keys, and passwords.
To manage secrets both securely and effectively, organizations should build a core secrets management policy that establishes standard rules and procedures for all phases of a secret’s lifecycle. To avoid human errors, it’s best to deploy a centralized secrets management solution.
PAM solutions enriched with secrets management capabilities allow organizations to tackle two main cybersecurity tasks:
- Securely and effectively manage different types of secrets
- Control, monitor, and audit privileged accounts
Ekran System is an all-in-one insider threat protection platform. It offers a large selection of features and tools to secure databases and effectively manage and protect privileged accounts and secrets. Start exploring the capabilities of Ekran with a 30-day trial. | <urn:uuid:eef1b7b2-5f32-4a3d-9222-7a8cb387a497> | CC-MAIN-2022-40 | https://www.ekransystem.com/en/blog/secrets-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00399.warc.gz | en | 0.89994 | 2,521 | 2.703125 | 3 |
We live in an age where, unfortunately, it is essential for us all to be adequately protected from the various dangers we, and our devices and networks, may encounter online. In order to combat these cyber threats, the cyber security industry is working tirelessly to try and outpace those that may wish to do harm to us or our technologies. In this series of articles, we’ll be looking at what cyber security is overall, what the threats we face are, and how we could overcome the challenges we face using various different cutting-edge technologies currently in development.
In the last two articles of this three-part series, we’ve looked at what cyber security is and why it has become such an essential element of everyday life. We’ve also detailed several of the biggest threats facing both companies and individuals, how they infect systems and what kind of damage they are capable of doing.
Armed with this knowledge, let’s now start off the final part of the series by taking a look at how best to protect ourselves from these threats.
There are several ways in which businesses and individuals can protect themselves from the variety of cyber threats out there and, what’s more, entire industries have now arisen that are dedicated to providing increasingly secure and reliable ways to keep businesses and individuals safe and their personal and sensitive information secure.
Where Do I Need Protecting?
So, let’s start with the basics. Firstly, in order to keep themselves secure, a large number of businesses and enterprises will take the time to find out where exactly they could be vulnerable to cyber-attacks or sabotage from within.
Risk assessments are becoming more and more commonplace within the field of cyber security and are most regularly used by larger organisations and institutions such as governments or transnational corporations.
In order to discover where best to use any available cyber security resources, businesses and enterprises will need to know where their systems are vulnerable and how.
This is usually done by a third-party analyst and can prove incredibly useful when attempting to find out where best to allocate cyber security resources.
Another, less conventional way that some larger organisations attempt to test their own systems is by the hiring of “ethical hackers” who attempt to breach their systems in order to work out their weak points without the consequences of a real hack.
In a similar way to penetration testing, ethical hackers test the limits of a company’s security systems in an ethical and legal manner so as to ensure that they are patched up before any malicious hackers find them.
How Do I Protect Myself?
Once the results of any cyber security-based risk assessment are in and there is a greater insight into where a business or individual may be vulnerable, it’s time to start building your system.
Firewalls, encryption, automated alerts and notifications, security-as-a-service: there is an enormous number of options available for those looking to beef up their cyber security systems.
However, just purchasing whatever sounds futuristic then installing it and hoping for the best is never a good way to secure anything.
When looking for the best systems for their specific purposes, companies with the most adequate cyber security systems select the equipment that covers both their requirements and any future additions that could prove useful in the long-term.
The most important time after having discovered a cyber security incident has taken place are the few moments taken to decide what course of action to take.
When entrusted to human beings, these moments can prove too high-pressure or too fleeting for them to act in the most appropriate manner for the situation. This is now changing thanks to automated systems and artificial intelligence.
Predictive behavioral systems and automated alarms to alert both security experts and data officers of an incident or breach are becoming increasingly common and it seems reasonable to expect that AI systems will begin to take a much larger role in securing cyberspace as their technologies develop and mature.
Having acquired the technologies required to protect themselves, businesses and individuals should now turn to ensuring they remain protected into the future.
How Do I Stay Protected?
Whilst most products have a fairly decent shelf-life, eventually all will be replaced with something new and improved. Relying on several cyber security technologies acquired ten years ago to keep you protected from the threats of today is not only optimistic but, in some severe cases, outright dangerous.
In many cases, once cyber security measures are in place, their maintenance and operation depend on the further implementation of intelligent use policies of network systems and services so as to ensure security systems aren’t compromised in the future. This can also be helped by the acquisition of technologies ready for future additions, as mentioned earlier.
Hardware and software are constantly changing, so smooth future upgrade paths are a smart way of ensuring long-term protection through security updates and hardware add-ons. In some cases, the overhaul of legacy equipment may seem initially expensive, but if a major cyber-attack were to happen, those initial overhaul costs could pale in comparison to the damage caused.
As our reliance on networked and connected technologies increases, so too does the risk of our data and our identities being stolen or our technologies and infrastructure being vandalized and damaged. This is a risk we all live with today, so investing in adequate protection for tomorrow seems like a reasonable expectation in return for staying secure.
One thing I did during my Master Thesis a while ago was to test how different webservers react to all kinds of characters. One of the first things I tested was all characters represented by one byte (00 to FF) and their percent encoded equivalents (%00 to %FF). Of course the results may vary with other server versions, server configurations, server side code, client libraries or the sent HTTP headers. For example python’s urllib2 is not able to send 0A (line feed) in a URI (which makes sense). I tried to use standard components as best as I could. The webservers I used were:
- An Apache 2.2.12 server (port 80), Ubuntu 9.10 machine with PHP 5.2.10
- On the same machine a Tomcat 6.0.26 server (port 8080) with JSP (Java Server Pages)
- On a Microsoft-IIS/6.0, Windows 2003 Server R2/SP2 with ASP.NET 2.0.50727 a script in C# on Virtualbox 3.1.8
So here are the main results in one picture:
The ‘Name’ column means that the character was injected into the parameter name, e.g. na%00me=value&a=b. The fields with ‘S’ are explained in another section of my Master Thesis, but some of the time you can guess the behavior. E.g. I think you know what & stands for in GET parameters, right? 😉
This kind of information is useful when you are trying to write a fuzzer that is more focused on doing tests that make sense. It would be interesting to hear if this table is useful for someone else.
When sending the ASCII control character null (hexadecimal 00) in the query string of a URI, IIS returns a 400 (Bad Request). Tomcat passes the null to the web application. Apache, however, returns an HTTP entity (the HTML code) but no HTTP headers. Additionally, the URI is truncated (the null and everything after it is missing).
If you have a local Apache running, try this python script (you need to have an index.html or index.php in your root directory):
print 'Valid request:'
print 'Invalid request:'
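A Python 3 reconstruction of what such a script can look like (the original used urllib2 on Python 2; raw sockets are used here instead, since building the request by hand is the most reliable way to get a literal null byte onto the wire):

```python
import socket

def raw_request(path_bytes, host="localhost"):
    # Build the request line by hand so arbitrary bytes (here 0x00)
    # can be embedded in the URI -- most client libraries refuse this.
    return (b"GET " + path_bytes + b" HTTP/1.1\r\n"
            b"Host: " + host.encode("ascii") + b"\r\n"
            b"Connection: close\r\n\r\n")

valid = raw_request(b"/?abc=123&def=456_VALID")
invalid = raw_request(b"/?abc=123\x00&def=456_INVALID")

def send(request, host="localhost", port=80):
    # Requires a local Apache with an index page, as noted above.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request)
        return sock.recv(65536)

# print(send(valid))    # normal response, with headers
# print(send(invalid))  # entity only, no headers (HTTP/0.9-style)
```

Watching the exchange in Wireshark should show the second response arriving without any headers.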
If you watch it with wireshark you will see that the answer to the second request has no HTTP headers. The apache access.log will look like this:
::1 - - [09/Jun/2010:16:44:41 +0200] "GET /?abc=123&def=456_VALID HTTP/1.1" 200 321 "-" "Python-urllib/2.6"
::1 - - [09/Jun/2010:16:44:41 +0200] "GET /?abc=123" 200 94 "-" "-"
Eric Covener of the apache project:
The null in the invalid URL causes the request line to be terminated before the rest of the URL or the protocol. The response (no headers) is “HTTP 0.9” described here:
You can find my (invalid) bug report here. I think this can only be used for web server fingerprinting. Or if there is a client (e.g. a browser) that sends the null character as well, there might be some chance of header injection.
No. But at least for Firefox: Yes. You can change your character encoding under “View – Character Encoding – Western (ISO-8859-1)“. But hexadecimal 80 won’t be the control sequence PADDING CHARACTER (PAD). It will be the euro symbol €. Control characters have no meaning in HTML.
I have no clue why they don’t indicate Windows-1252. 😉
As everything starts once, today it’s my blog. This blog is simply about IT Security stuff.
Today I was wondering how a web server reacts on an URI with a pound sign (#) in it. It took me about 3 hours to realise that it is not possible to send a pound sign with Firefox and WebScarab, even my first try with the perl library did not work. They’re just all too URI RFC 3986 compliant. But python’s urllib2 worked (not urllib)!
Findings: Apache and IIS simply ignore it and everything after it. Apache Tomcat interprets the pound sign as part of the last GET value.
If you want to try it yourself, use Wireshark to watch if the pound sign is really sent! I’m still thinking about an exploit… | <urn:uuid:02406816-4eff-42e3-a8eb-cd4dd4114270> | CC-MAIN-2022-40 | https://www.floyd.ch/?cat=4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00599.warc.gz | en | 0.837848 | 1,161 | 3.109375 | 3 |
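The behavior of RFC-compliant clients is easy to see with Python 3’s urllib.parse (a modern stand-in for the urllib2 behavior described above): everything after the pound sign is parsed as a fragment, which the client keeps to itself rather than sending.

```python
from urllib.parse import urlsplit

parts = urlsplit("http://localhost/?last=value#rest")
# parts.query    -> "last=value"
# parts.fragment -> "rest"
# A compliant client never puts the fragment on the wire, which is
# why Firefox, WebScarab, and the Perl library refused to send the
# raw pound sign.
```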
DOE Deploying First Exascale Computer in 2021; Expects Exascale Computers to Complement Quantum Computers
(Phys.org) The Department of Energy (DOE) is preparing for the first exascale computer to be deployed in 2021. Two more will follow soon after. Yet quantum computers may be able to complete more complex calculations even faster than these up-and-coming exascale computers. But these technologies complement each other much more than they compete.
Exascale computers will be ready next year. When they launch, they’ll already be five times faster than our fastest computer—Summit, at Oak Ridge National Laboratory’s Leadership Computing Facility, a DOE Office of Science user facility. Right away, they’ll be able to tackle major challenges in modeling Earth systems, analyzing genes, tracking barriers to fusion, and more. These powerful machines will allow scientists to include more variables in their equations and improve models’ accuracy. As long as we can find new ways to improve conventional computers, we’ll do it.
DOE is designing its exascale computers to be exceptionally good at running scientific simulations as well as machine learning and artificial intelligence programs. Quantum computers, on the other hand, will be perfect for modeling the interactions of electrons and nuclei that are the constituents of atoms. As these interactions are the foundation for chemistry and materials science, these computers could be incredibly useful. | <urn:uuid:acec3259-42a0-4dec-9280-03583c5f4380> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/doe-deploying-first-exascale-computer-in-2021-expects-exascale-computers-to-complement-quantum-computers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00599.warc.gz | en | 0.915075 | 293 | 3 | 3 |
Obviously, database administrators are critical to the success of any disaster
recovery scenario. Many other roles, however, are also critical to the database
administrator's success. A server administrator will have to install and set up the
server. A system administrator will be needed to install and set up the operating system.
A storage administrator will be necessary to duplicate the disks accordingly. Application
developers will need to assist with troubleshooting errors detected by the user
community. These are some of the people that a database administrator will rely on.
Many, if not all, of these steps can be accomplished prior to any disaster and tested.
There can also be problems at the time of failover where some of these areas may need to
be revisited. The database administrator may know who to call and work with during normal
times, but what happens when a disaster strikes and some primary support personnel are
not available? They could be taking care of injured family members or be injured themselves.
What if your database administrator is not available? Contingencies for these scenarios
should be put in place.
It is imperative for employees to know who to call when they have an issue.
One of the best ways to avoid an availability problem is cross-training
employees. An employee that knows more than one job function can become essential and can
play a key role during a disruption by knowing more than one area or job function.
Some people may not be able to make it to the recovery site, leaving some areas not
covered (Maiwald & Sieglein, 2002, p. 193). The cross-training should not be a
complete shift from their normal profession, unless requested by the employee. What is
usually better is to have an employee learn a skill that is new, but in the same
profession in which they are currently engaged.
For instance, Oracle database administrators can cross-train as SQL Server database
administrators. They are already familiar with the concepts, SQL, structures, etc. of
database administration. It should mostly be a matter of learning the different toolsets
for the new database software. This can be a win-win for the employee and the organization.
The employee learns a valuable new skill that can enhance their career. The
organization gains an employee that has multiple skill sets that can be called upon in
times of normalcy and times of crisis.
Requirements for a database will drive the type of backups you make for it. If a
database can tolerate several hours of downtime and the previous night's backup will work
sufficiently, then a full backup will be fine. If little to no downtime and/or little to
no data loss is acceptable, then full backups will not do the job.
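That reasoning can be sketched as a coarse decision rule; the thresholds and wording below are illustrative only, not prescriptive:

```python
def backup_strategy(max_data_loss_min, max_downtime_min):
    # Illustrative rule of thumb: map how much data loss (recovery
    # point) and downtime (recovery time) a database can tolerate,
    # in minutes, to a coarse backup approach.
    if max_data_loss_min >= 24 * 60 and max_downtime_min >= 4 * 60:
        return "nightly full backup"
    if max_data_loss_min >= 60:
        return "full backups plus scheduled incrementals or log shipping"
    return "remote mirroring or continuous replication"
```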
Technologies such as remote mirroring will have to be investigated. In remote
mirroring, all changes made to the production system are copied to the disaster recovery
site. This is normally considered in an asynchronous context, since most disaster
recovery sites are at some distance away from the primary site. “Asynchronous remote
mirroring is most often utilized when the remote site is a long distance from the local
site.” (Staimer, 2005) When a fail over is called for, databases can be recovered with
the mirrored data for business continuance.
Data replication is another technology that can keep disaster recovery databases
updated. The native settings of the software replicate changes as they occur from
production databases to databases at the disaster recovery site. This can be altered so
that changes are applied on a schedule, e.g. every four hours. This would be for a data
recovery scenario in case a user made an error. The database administrator could use the
data from the disaster recovery database to correct the error in production because the
changes had been delayed.
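A minimal sketch of that delayed-apply idea (generic Python, not any specific replication product): changes queue up at the standby and are only applied once they are older than the configured lag, leaving a window in which a user error can be intercepted.

```python
import time
from collections import deque

class DelayedApplier:
    """Sketch of a delayed-apply standby: replicated changes are held
    for delay_s seconds before being applied, leaving a window in
    which a DBA can intercept a user error."""

    def __init__(self, delay_s):
        self.delay_s = delay_s
        self.pending = deque()  # (received_at, change) pairs, oldest first

    def receive(self, change, now=None):
        self.pending.append((time.time() if now is None else now, change))

    def apply_ready(self, apply_fn, now=None):
        # Apply every queued change that has aged past the delay.
        now = time.time() if now is None else now
        applied = 0
        while self.pending and now - self.pending[0][0] >= self.delay_s:
            _, change = self.pending.popleft()
            apply_fn(change)
            applied += 1
        return applied
```

With a four-hour delay, a change received at noon would not reach the disaster recovery database until 4 p.m., giving the DBA that window to pull clean data back into production.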
Installation of database software should be a fairly routine task for a database
administrator. It should also be the same across servers with the same database versions.
Installation and setup should be well documented. There is always the possibility that a
database administrator will not be available when a fail over is called for. Clear and
concise, step by step directions will allow technical professionals from another area the
ability to stand in for a missing database administrator and set up the database software.
This being said, each production server is different. Certain things may need to be
done to prepare the database. Special scripts will sometimes need to run, or jobs to load
or unload data. These steps for individual databases and the order in which they should
execute also need to be well documented.
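One way to keep those per-server steps both documented and enforced is a small driver that runs them in their required order and stops at the first failure; this is an illustrative sketch, with step names invented for the example:

```python
def run_failover_steps(steps, run_step):
    """Illustrative runbook driver: execute documented failover steps
    in their required order, stopping at the first failure so whoever
    is standing in knows exactly where to resume. run_step returns
    True on success."""
    completed = []
    for step in steps:
        if not run_step(step):
            return completed, step  # completed steps, failing step
        completed.append(step)
    return completed, None

# Hypothetical step names for one production database's runbook:
FAILOVER_STEPS = [
    "verify_storage_mounted",
    "start_database_instance",
    "run_preparation_scripts",
    "load_or_unload_data",
    "open_database_to_users",
]
```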
The best way to set up disaster recovery is by having a dedicated site with servers
available and application software running so that an immediate fail over can be done
when called for. This approach is also very expensive and not always popular. There are
ways to implement disaster recovery sites, save money and be practical, all at the same time.
An excellent approach for the dual use of just such a facility is testing of upgrades.
All operating systems, applications, and databases require regular maintenance patches,
fixes, and upgrades. With environments available as exact duplicates of production
systems, these are prime locations to test the maintenance releases.
Patches and fixes can be applied to a disaster recovery system on a regular schedule.
An approved test plan can be administered against the environment to check for issues
with the maintenance release. If no issues are found, the patches can be left in place
and migrated to the test environment on a regular schedule as well. If no problems are
found, the patches can then be migrated into production on a regular schedule.
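The promotion schedule described above amounts to a simple rule, sketched here with hypothetical stage names: a maintenance release advances one environment at a time, and only after the current environment's test plan passes.

```python
STAGES = ("disaster recovery", "test", "production")

def next_stage(current, tests_passed):
    # A release moves forward only when the approved test plan for the
    # current environment has passed; a failure stops the rollout.
    if not tests_passed:
        return None  # roll back, or open a vendor ticket for minor issues
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else current
```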
If any issues are found at the disaster recovery site or in the test system, then the
patch can be rolled back or tickets can be opened with the vendors if problems are minor.
This eliminates the need for a separate laboratory environment, which can also be very
costly. No additional hardware, software, licenses, maintenance, administration, or space
would be needed for a lab to test maintenance releases.
If you do not currently have a lab for testing patches and fixes for software, then
this can be of a substantial benefit in three areas. Firstly, the money has already been
spent on the disaster recovery site, which was a necessity in itself. Secondly, a
duplicate environment of your production systems now exists to test software patching,
negating the need for a laboratory. Thirdly, less time is spent on administrative
maintenance on systems once
they are patched. Keeping software patched and fixed to current levels reduces downtime
and the amount of time administrators spend on system repairs.
This approach can be especially helpful for database administrators. Many times a
server may be available for database installations, patching and upgrades, but rarely are
there complete environments for these tasks. Application developers and users need to
test the application against the database after the patches have been
installed. The database administrator can perform some limited testing, but the true
tests come when users put the system through the motions.
Stocking the disaster recovery site with test servers is another great way to get the
disaster recovery site up and running quickly and maximize the value of those servers. In
most, if not every case, these servers are purchased for every new project that will be
migrated into production. Test servers should be purchased with the same specifications,
or better, than production. Most test servers will need higher capacity because more
databases, application servers, web servers, etc. will be running on them than the
production hardware. With test servers in the disaster recovery facility, much of the
work of software installation is already done. Disaster recovery instances can be created
on test servers and left idle. Application servers, web servers, and databases just wait
for the day that a fail over will be alerted.
Using virtualized servers can help lower costs for a disaster recovery site.
Server virtualization has become less expensive and at the same time, less complex, “…
the cost of these technologies continues to fall, allowing small firms to implement
solutions once reserved for large companies.” (McCarthy, 2007).
It is now much easier to implement virtual servers than it has been in the past.
Today, many applications, operating systems, and databases support server virtualization
software. This has changed since many of the virtualization vendors have tried to work
closely and cooperate fully with the other software vendors.
Pressures from customers have also driven software companies to work with
virtualization companies to certify and support their products. Through virtualization, a
physical server can be imaged and reproduced in a virtual environment. A production
system consisting of a web server, application server, and a database server can all be
imaged and virtualized on a single physical server. This effectively consolidates three
physical servers down to one without losing any functionality. Capacity may not be equal,
but it may suffice perfectly in a disaster recovery scenario. This does not mean that all
applications will work together on virtual servers. “For example, one would not configure
a SQL Server, an Oracle server, and a Lotus server to fail over to a common target. As a
basic rule of thumb, if the applications would not peacefully coexist on a production
server, then they will not peacefully coexist on the target.” (Buffington, 2005)
A step beyond cross training is mentoring. A mentoring program allows subject matter
experts to work directly with management-identified employees who are interested in
becoming experts in a different field than the one they are currently in. This can become
a large financial gain for employers while increasing employee morale as well.
“On average, companies with mentoring programs have a 19 percent lower turnover rate
than those without such a program. That retention boost can translate into a substantial
cost benefit. A mentoring program could save a 1,000-person company nearly $9.5 million a
year, based on a $50,000 average turnover cost, according to Interim’s 1999 Emerging
Workforce Study.” (Southgate, 2002) Mentoring can also work well for employees who wish
to cross train to qualify for positions on other technology teams that have unfilled positions.
By identifying and opening career opportunities across teams, individuals feel a sense
of empowerment and are not stuck in their current roles. For instance, a database
administrator position may be difficult to fill externally. A current developer with
talent, ability, and desire to become a database administrator could miss an opportunity
to make a lateral move due to lack of experience. Through mentoring, the developer could
continue in her current role while cross training in a potentially new career path. In
this way, mentoring programs can help manage expected retirements and workflow
fluctuations while providing alternative career paths for qualified candidates.
When an employee and mentor begin the process, they should meet with a manager. During
this initial interview, they will identify the goals and objectives of the process and
develop work plans. The primary focus of the mentor and employee should be to capture
institutional knowledge. The employee should document the mentor’s position and job in
the form of process diagrams and standardized procedures.
As part of the mentoring process, learning employees will identify, learn, and record
undocumented processes and procedures. This assists in preventing the loss of
institutional knowledge that occurs when a subject matter expert leaves a position that
has not been well documented. It also ensures that the employee understands the mentor's role.
A review of the documentation by the mentor will give an excellent indication of the
understanding and progress of the employee. This provides opportunities for
standardization and improvements through process engineering. The employee and mentor
should also look for training opportunities to supplement the learning process. Future
mentoring times, communication, and work product delivery can be managed by the employee
and mentor in alignment with approved work plans. The work plans can become subject to
review in the annual review of the participants.
By establishing a mentoring program, senior technical staff is recognized for their
accomplishments and junior staff is given the opportunity to learn from them and develop
into the next generation of subject matter experts. Senior technical staff is the primary
source of institutional knowledge. By spreading this knowledge within and across teams,
the ability to provide support when subject matter experts are inaccessible or
incapacitated is greatly improved.
This is a critical consideration with respect to disaster recovery. By documenting
processes and procedures through a mentoring program, the ability to respond quickly to
outages or disasters is dramatically enhanced.
Buffington, Jason (2005). Leveraging virtual machines for business
continuity. Continuity Central. http://www.continuitycentral.com/feature0272.htm.
Maiwald, Eric & Sieglein, William (2002). Security Planning & Disaster
Recovery. California. The McGraw-Hill Companies, Inc.
McCarthy, Ed (2007). Tech Tools for Disaster Recovery. Journal of Financial
Planning. Vol. 20 Issue 2.
Southgate, David (2002). Streamline mentoring program administration with these
new online tools. http://articles.techrepublic.com.com/5100-10878-1051412.html.
Staimer, Marc (2005). Pros and Cons of Remote Mirroring for DR. http://searchstorage.techtarget.com/tip/1,289483,sid5_gci1069175,00.html.
Kevin Medlin has been administering, supporting, and developing in a variety of
industries including energy, retail, insurance and government since 1997. He is currently
a DBA supporting Oracle and SQL Server, and is Oracle certified in versions 8 through
10g. He received his graduate certificate in Storage Area Networks from Regis University
and he will be completing his MS in Technology Systems from East Carolina University in
2008. When he’s not trying to make the world a better place through IT, he enjoys
spending time with his family, traveling, hanging out by the pool, riding horses, hiking,
Article courtesy of Enterprise IT Planet | <urn:uuid:1dddac04-6840-44fe-a568-d5173fadf7d1> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/management/databases-prep-for-disaster-recovery-and-continuity-planning-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00599.warc.gz | en | 0.921012 | 2,992 | 2.640625 | 3 |
soft artificial heart
Design of an artificial heart tailored to individual patients took a big step forward today with the unveiling of the first 3D printed, entirely soft artificial heart. The silicone heart was developed by researchers at ETH Zurich, Switzerland, in an effort to make an artificial heart that mimics the real deal.
Currently used blood pumps have many disadvantages. Their mechanical parts are prone to complications, and the patient lacks a physiological pulse, which is assumed to have some consequences for the patient.
Artificial blood pumps
A well-functioning artificial heart is a real necessity: about 26 million people worldwide suffer from heart failure, while there is a shortage of donor hearts. Artificial blood pumps help to bridge the waiting time until a patient receives a donor heart or their own heart recovers.
The soft artificial heart was created from silicone using a 3D-printing, lost-wax casting technique. It weighs 390 grams and has a volume of 679 cm3. It is a silicone monoblock with a complex inner structure.
The artificial heart has right and left ventricles, though not exactly like a real heart. Between the ventricles, a chamber inflates and deflates to pump fluid out of the blood chambers, thus replacing the muscle contraction of the human heart.
Researchers proved that the soft artificial heart fundamentally works and moves in a similar way to a human heart. However, it currently lasts for only about 3,000 beats, which corresponds to a lifetime of 30 to 45 minutes.
Nicholas Cohrs, a doctoral student in the group, said this was simply a feasibility test. “Our goal was not to present a heart ready for implantation, but to think about a new direction for the development of artificial hearts.”
“Currently, our system is probably one of the best in the world,” says Anastasios Petrou, a graduate student who led the testing at ETH Zurich.
More information: [Wiley Online Library]
The motivations for releasing malicious code onto the public Internet fall into four broad categories:
- Political ambitions of a nation/state: Good examples are NotPetya and Shamoon, both of which are designed to destroy data and computers.
- Financial gain by criminals: A good example is Ryuk, a form of ransomware. Another is Zeus, a banking Trojan. Also, Mirai is a piece of malware that was used to conduct online advertising click-fraud in addition to causing the biggest DDoS attacks in history.
- Political activist agenda: Otherwise known as vigilantes or those practicing civil disobedience, they often deface websites or launch distributed denial of service (DDoS) attacks. A classic example is the Morris worm, which was designed to call attention to insecurity of networked computers.
- Self-centered goals, like bragging rights or curiosity: A good example was the Melissa virus, a mass-mailing macro virus from 1999. | <urn:uuid:bb3599ba-8428-44b9-a5e2-87e6aa847a01> | CC-MAIN-2022-40 | https://www.cyberriskopportunities.com/who-creates-computer-virus-trojan-malware-and-ransomware-and-why/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00599.warc.gz | en | 0.944535 | 193 | 3.03125 | 3 |
When purchasing a new laptop, I failed to consider things like: was this laptop made from non-toxic materials? How much energy does it use? When and how was I going to dispose of my new device once it became obsolete?
When most people think about making a positive environmental choice they think about driving less or composting. I did not realize that considerations regarding new green technology purchases would impact the environment in three distinct areas:
- Technology acquisition: Buy technology with a longer lifespan to reduce unnecessary waste. Hardware vendors have made significant strides in their production practices. Many industry leading products are now recognized by ENERGY STAR or EPEAT to ensure consumers can make a responsible (green) choice.
- Technology use: All electronics use energy, but you can dramatically reduce the amount of energy wasted by inactive devices by powering them down when they are not in use. Laptops and desktops are a perfect example of technology often left on 24/7. You can take advantage of dedicated PC power management software that will allow you to deploy and manage power saving settings across an entire network. Implementing an intelligent green technology solution can deliver savings of $50 per computer per year.
- Technology disposal: Technology advances so quickly we often look to replace equipment with the latest and greatest. With a little research, the equipment that might otherwise be ‘junked’ can be reused by schools or charity organizations. If this is not an option, look to dispose of it through a certified green technology disposal service.
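The per-computer savings figure is easy to sanity-check with a back-of-the-envelope calculation. The numbers below (idle power draw, electricity price, hours of use) are illustrative assumptions, not vendor measurements:

```python
# Rough estimate of annual savings from powering a PC down outside work
# hours. All inputs are illustrative assumptions.

IDLE_WATTS = 60          # assumed draw of an idle desktop + monitor
RATE_PER_KWH = 0.12      # assumed electricity price in $/kWh
HOURS_PER_WEEK = 168
WORK_HOURS_PER_WEEK = 40  # hours the machine is actually in use

def annual_savings(idle_watts=IDLE_WATTS, rate=RATE_PER_KWH):
    """Dollars saved per machine per year by powering down when idle."""
    idle_hours_per_year = (HOURS_PER_WEEK - WORK_HOURS_PER_WEEK) * 52
    kwh_saved = idle_watts * idle_hours_per_year / 1000
    return kwh_saved * rate

print(round(annual_savings(), 2))  # → 47.92
```

With these assumptions the result lands close to the $50-per-year figure quoted above; a heavier workstation or pricier electricity pushes it higher.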
You Don’t Have To Be a Quantum Scientist To Design a Quantum Computer Chip Using IBM’s New Tool Called Qiskit Metal
(Forbes.com) Until recently, even experienced researchers needed weeks or even months to design a simple quantum chip from start to finish. IBM officially released its new open-source design automation software Qiskit Metal in March 2021. Qiskit Metal is the first software to automate the design of superconducting devices. IBM is looking at eventually expanding its use from superconducting to other quantum technologies as well.
1. Considering all its advantages, Qiskit Metal should be a clear long-term winner for IBM and the quantum community.
2. By reducing the complexity of chip design, IBM has eliminated a significant barrier that may make quantum attractive to more people.
3. Qiskit Metal makes it possible for young K-12 students to have an understandable hands-on learning experience with quantum computing. Qiskit Metal can turn a seemingly impossible task into a fun learning experience. A positive early learning experience with Metal could help establish enough academic interest that might lead to thousands of future quantum researchers.
Qiskit Metal is part of IBM’s general quantum SDK Qiskit library. It is unique because other Qiskit resources create quantum computing circuits and applications rather than producing quantum chip designs.
The current Qiskit Metal code is an alpha version that’s still under development. IBM believes Qiskit Metal will eventually provide the entire quantum ecosystem with an innovative tool that simplifies designing superconducting devices. Metal might subsequently be used for other technologies as well.
Long-range, IBM expects Metal will enable users with minimal programming skills to effectively use available libraries of quantum components and renderers for chip building. It also foresees a time when there will be a critical mass of shared resources developed by the open-source community of Metal users.
According to IBM’s Qiskit Metal website, future additions include the full integration of the Energy Participation Ratio (EPR) method, the impedance analysis, and the lumped-oscillator model.
While speaking to a group participating in career transition in 2009, I was struck by the lack of understanding of the original meaning of the word “job.” Everyone was talking about needing a job, changing jobs or leaving a job, but what was a job? I decided to do some research before my next speaking date.
Sources are vague, but most point to the 1550s for the phrase jobbe of worke translated “piece of work.” Indications are that work was paid on a piece rate. In other words, you give me three bushels of wheat and I will pay you “x.” At that time, there was no basis for hourly rate or day rates. Everything was based on product output, pieces or “jobbes,” hence our word “job.”
This got me thinking more about the topic. The industrial revolution, circa 1840, served to more formally move the work place into specific positions such as lathe operator, cotton gin operator, mechanic or tool handler. People were paid day rates for their performance. As management theory and practice evolved, the clock was used to segment the day and establish hourly rates for work, thus allowing wages to be paid hourly rather than for the day.
Now, tie all of this in with the notion that people perform better if there is a common or known purpose. Many leaders choose to motivate their teams by providing a clear vision or purpose for the organization. Breaking the collective corporate vision into individual understanding of one’s purpose can contribute to more sustainable performance, which is the key difference between managing process and leading people.
Think of this as a continuum for leadership. When we are placed in roles of responsibility, we have three potential stages of engagement with our people.
- Piece – We can accept the basics and focus on piece work to simply get the job done.
- Position – We can manage by position, bestowing authority on subordinates, delegating, and managing by position on the team.
- Purpose – We can center on the purpose, keeping all discussions keenly focused on achieving the purpose for which we have been tasked.
“Leading on purpose” carries many implied requirements.
- We have to know our people.
- We have to relate to the team.
- We have to continually enforce the message of the vision or purpose to which we have committed our effort.
Doing this well elevates a manager toward true leadership. Where are you in your thoughts and beliefs about managing your teams?
- Is every day centered more on merely checking things off the to-do list or project plan?
- Have you invested the time to communicate a purpose to each member of your organization?
- Have you made a commitment to enforce that purpose in the way you live each day, by connecting your programs for monitoring, managing, evaluating and compensating people to that primary purpose?
by Andrew Erickson
You'll find SNMP (Simple Network Management Protocol) in virtually every network management environment. If you don't understand SNMP, you're at a serious disadvantage. Fortunately, the popularity of SNMP means that many people have had decades to develop useful tools for you. Even better, many of these tools are free.
Net-SNMP is one such free tool that you can use in your SNMP projects. With it, you can build SNMP agents, send and receive TRAPs, and issue GET and SET requests from the command line.
This works well in both corporate/industrial and IT/hobbyist scenarios. In these situations, you're very likely using Unix, Linux, or Windows.
Net-SNMP is a toolkit. It's probably not your final goal. It's something you'll use on the way.
If you're making SNMP software of some kind, you might integrate it into your project. It's open-source, after all.
You might use Net-SNMP to make your device an active SNMP agent supporting the SNMP protocol. If you're making a high-volume printer, for example, you're probably focused on other areas. Integrating Net-SNMP gives your users more options without requiring you to write an SNMP implementation from scratch. You'll supply the payload message, and Net-SNMP will format the output into an SNMP trap for you.
You can also call Net-SNMP from the command line to test your SNMP setup. This is a vital troubleshooting step as you move toward a complete monitoring system. You can receive TRAPs, send SET requests, and do just about any other SNMP management function.
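For example, typical test invocations look like the following. The agent address, manager address, and community strings are placeholders for your own devices — substitute your real values:

```shell
# Placeholder agent at 192.0.2.10 with read community "public" and
# write community "private".
AGENT=192.0.2.10

# GET: read the agent's system name (MIB-II sysName.0).
snmpget -v2c -c public "$AGENT" 1.3.6.1.2.1.1.5.0

# SET: write a new system name (requires a write-enabled community).
snmpset -v2c -c private "$AGENT" 1.3.6.1.2.1.1.5.0 s "lab-router-1"

# TRAP: send a test linkDown trap to a manager at 192.0.2.50
# ('' uses the current uptime).
snmptrap -v2c -c public 192.0.2.50 '' 1.3.6.1.6.3.1.1.5.3

# Receive TRAPs in the foreground, logging to stdout (run on the manager).
snmptrapd -f -Lo
```

If a GET times out, check community strings and any firewall rules on UDP port 161 (traps use UDP port 162).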
When you've set up your SNMP network, it's time to stop testing and start your actual use.
If you built Net-SNMP into your own home brew system, you'll continue using it as your SNMP manager.
If you were just using it to test, you'll transition now to your actual SNMP manager.
Even though Net-SNMP was a great tool to set up your SNMP system, you now have to set up your SNMP manager (remote host).
One of your early steps is to compile your MIB files into your manager. Think of these like "driver files" that tell your manager how to talk to each SNMP agent. If you don't load ("compile") them into your manager, your system just won't work.
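Conceptually, a compiled MIB gives your manager a lookup table from numeric OIDs to human-readable names and types. The sketch below is a toy illustration of that idea (real compiled MIBs also carry descriptions, access rules, and table structure); the OIDs shown are standard MIB-II identifiers, while the lookup logic is simplified:

```python
# Toy "compiled MIB": map numeric OIDs to (name, syntax) pairs so raw
# TRAP/GET data can be rendered in human-readable form.

MIB = {
    "1.3.6.1.2.1.1.1.0": ("sysDescr.0", "DisplayString"),
    "1.3.6.1.2.1.1.3.0": ("sysUpTime.0", "TimeTicks"),
    "1.3.6.1.2.1.1.5.0": ("sysName.0", "DisplayString"),
}

def describe(oid):
    """Translate a numeric OID from a TRAP or GET response into a name."""
    name, syntax = MIB.get(oid, (oid, "unknown"))
    return f"{name} ({syntax})"

print(describe("1.3.6.1.2.1.1.5.0"))  # → sysName.0 (DisplayString)
```

Without the MIB entry ("driver file"), the manager can only show you the raw numeric OID — exactly the "system just won't work" situation described above.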
Which SNMP manager should you choose? That's a big question. There are many factors you need to consider.
You should mainly focus on:
Your SNMP manager is serious business. It's the core of your remote network management. You need it to work all the time.
Look for proven designs from companies who can give you a long client list of big-name companies.
Ask whether you get dual redundant hard disks or SSDs. Ask whether you can run on two servers simultaneously (primary/secondary configuration).
All of these factors improve the chance that your SNMP manager will stay online.
There's nothing worse than maximizing your investment and minimizing your reward. Why pour effort into designing and testing your system if your team can't understand it?
You did your homework. You tested with Net-SNMP (or built it into your own manager). Don't drop the ball now.
Grid displays (similar to an email inbox) are fine if they're clear, but they must also correlate "alarm" and "clear" events and show you only the active alarms.
Even better are map displays. These overlay any alarm conditions on geographic maps, rack photos, custom sorted menus, or any other image file you choose.
Don't forget that user-based systems offer an advantage here, too. If you can limit each user to just the information they need to know, that's important.
You'll reduce the time your team spends sifting through data they don't need. This shifts their focus to the important things.
Sure, it's a very important protocol, and quite likely your most important. But what else do you use besides SNMP?
Do you have DNP3? MODBUS? CANBUS? TL1? A proprietary protocol? A legacy protocol?
None of those can be piped directly into your SNMP manager. You need some way to handle that data to avoid managing multiple incompatible systems.
You could use after-the-fact conversion devices to change other protocols to SNMP, but that hardly makes sense in this context. You're in the process of choosing your SNMP manager anyway, so why not choose the right one from the start?
Look for an SNMP manager that supports multiple protocols. They do exist, even though they're not exceedingly common. The SNMP world is simple enough and massive enough that some programmers can get away with focusing on nothing else.
That hardly helps you, though, when you've got MODBUS from your generators and TL1 from legacy SONET gear. You need an alarm master that acts as both an SNMP manager and handles your other important protocols.
Virtually all networks have a variety of data sources. Traditional contact closures, physical relays latched by your revenue-generating equipment, are a major one. They aren't protocol-based at all.
To bring these under your new SNMP umbrella, you need an SNMP RTU. These boxes convert relay closures to SNMP traps.
As you add them, your SNMP manager might make it easy. If it doesn't, you can always use Net-SNMP again to do some quick troubleshooting.
You might need a way to convert SNMP back into legacy contact closures. Yes, this absolutely sounds unusual. Hear me out.
Believe it or not, many people end up needing to convert SNMP to traditional relay outputs. How can this possibly be true?
Well, there are legal requirements for things like fire panels that can put you in this situation. If you're required by law to tie into a "too basic to fail" device like that, you have to do it.
But much of the world, and certainly the SNMP world, isn't built with that application in mind anymore.
So, what's an engineer to do when backed into this particular corner?
Well, fortunately for you, that engineer actually did bring this problem to DPS a few years ago. That led us to develop our "TrapRelay" line of devices. These are effectively "backwards RTUs", which take incoming SNMP traps and convert them back into contact closures.
You use this device by defining TRAPs and (optionally) variable bindings that trigger a corresponding relay. The big choice you need to make is capacity, with the TrapRelay currently being built with 8, 32, or 64 relays. This capacity range addresses different fire panel designs - and also other use cases.
Even though SNMP is a standard, it now exists in several versions to deal with an evolving world. Net-SNMP has evolved to match these protocol evolutions.
Security was (and remains) a big concern for the first two versions of SNMP (v1 & v2c). There was no encryption of any kind built into those earlier versions. It was entirely up to the network administrators to protect network traffic in other layers.
SNMPv3 arrived to handle that problem, but what about the massive stacks of gear that were already installed?
Servers and other high-powered systems were able to evolve naturally because they had the hardware horsepower to handle encryption once it was developed. Small embedded devices, on the other hand, often had no hope of gaining SNMPv3 through a firmware update.
So, what's the solution here?
The solution to legacy devices that don't support SNMPv3 is an SNMP proxy box.
This is simple in concept, but there's a lot going on under the hood:
An SNMP proxy has two independent network interfaces. The first one connects to your unsecured SNMPv1 or SNMPv2c gear. This can be just one piece of equipment or an entire self-contained network at one physical location.
The other side of your SNMP proxy box connects to your main network. It only communicates via encrypted SNMPv3 back to your SNMPv3 manager.
In this way, all unsecured SNMP traffic that would have been transmitted between your agents and master is eliminated. It's replaced by encrypted v3 traffic.
A good proxy will be bidirectional, supporting not just TRAPs but also SETs and GETs in the opposite direction.
Of course, diagnostic tools like Net-SNMP are incredibly useful in this type of application, since you can test out both sides of your proxy device. Since your application will be increasing in complexity (agents AND the proxy box AND your SNMP manager), troubleshooting tools are supremely important.
If something isn't working on your first attempt, use Net-SNMP to follow the data thread. If you can't get TRAPs to go through, start at the agent. Use Net-SNMP to see if the TRAP is being sent via v1 or v2c.
If that's working, move Net-SNMP above your SNMP proxy and look for a v3 TRAP.
Of course, in the other direction, you can send SETs and GETs from Net-SNMP to your proxy in v3. If that works, you can send v1/v2c SETs/GETs to your older SNMP gear from a network position below the proxy box. The result of each test is an important clue to which part of your SNMP implementation is failing.
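Concretely, the two sides of the proxy can be exercised like this. The addresses and SNMPv3 credentials below are placeholders for your own deployment:

```shell
# Placeholder addresses: legacy v2c agent at 10.0.0.5 (below the proxy),
# proxy's secure interface at 192.0.2.20. Credentials are examples only.

# Below the proxy: confirm the legacy agent still answers plain SNMPv2c.
snmpget -v2c -c public 10.0.0.5 1.3.6.1.2.1.1.5.0

# Above the proxy: the same object should be reachable only over SNMPv3
# with authentication and encryption (authPriv).
snmpget -v3 -l authPriv -u monitor -a SHA -A 'auth-passphrase' \
        -x AES -X 'priv-passphrase' 192.0.2.20 1.3.6.1.2.1.1.5.0
```

If the v2c query succeeds but the v3 query fails, the fault is isolated to the proxy or its credentials; if both fail, start at the agent.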
There's nothing like talking to an expert, especially when you're just getting started.
I work at DPS alongside many other engineers. SNMP is the most common protocol we use. We have experience working with Net-SNMP.
If you need any help, even if you don't immediately need a NetGuardian RTU or T/Mon SNMP manager, I'm happy to help you. I can help you troubleshoot with Net-SNMP and other recommended software tools.
Just give me a call at 559-454-1600 or send me a quick online message.
SNMP may not actually be "simple", but I'll do my best to make it easy for you.
A United Launch Alliance-built Atlas V rocket carrying NASA’s Perseverance rover lifted off Thursday from a launch complex at Cape Canaveral Air Force Station in Florida as part of the Mars 2020 mission.
NASA said Thursday the spacecraft transmitted its first signal to ground controllers through the agency’s Deep Space Network an hour after launch and entered safe mode in the next few hours as indicated by telemetry data.
The rover will travel for seven months and is expected to reach the planet’s Jezero Crater to study the landing site’s geology, demonstrate technologies in support of future human and robotic space exploration and collect and return rock samples to Earth.
Aboard the rover are the Sample Caching System and the Ingenuity Mars Helicopter, a technology demonstrator that will perform up to five controlled flights. Perseverance also features seven instruments, including the Mars Oxygen In-Situ Resource Utilization Experiment or MOXIE, which seeks to demonstrate the ability to turn carbon dioxide into oxygen.
“With the launch of Perseverance, we begin another historic mission of exploration,” said NASA Administrator and previous Wash100 Award winner Jim Bridenstine. “Now we can look forward to its incredible science and to bringing samples of Mars home even as we advance human missions to the Red Planet.”
The Mars 2020 mission is part of NASA’s Moon to Mars exploration program and the agency’s Jet Propulsion Laboratory will oversee operations of the rover.
The launch vehicle’s Centaur upper stage was powered by an Aerojet Rocketdyne RL10 engine, four Aerojet Rocketdyne-built solid rocket boosters augmented the first stage, and the Atlas booster core was powered by the RD AMROSS RD-180 engine.
End of availability (EOA) of NetApp HCI starts February 14th, 2022
Multicloud is the use of multiple cloud computing and storage services in a single distributed architecture. Multicloud also refers to the distribution of cloud assets, software, and applications across several cloud environments, using multiple cloud computing platforms to support a single application or ecosystem of applications that work together in a common architecture. Multicloud can include multiple public cloud providers, on-premises environments (NetApp® HCI), private cloud infrastructure with a public cloud provider (hybrid cloud), or a combination of both approaches.
There are various architectural approaches to multicloud. You can build different portions of an application stack in different clouds, with each portion accessing different systems and services that are required to work together. The intelligence in such scenarios is often built in to the application itself rather than the infrastructure side of the stack.
In other scenarios, the same application services might be required to run in more than one cloud, and few (if any) code changes would be required for the different physical locations. Although this approach used to be challenging to accomplish, modern Linux container orchestration, especially Kubernetes, has made application portability across different clouds, both public and on premises, far more feasible.
There are many reasons to implement a multicloud architecture for an application:
Many environments involve an on-premises architecture component. This approach is typically for economic, regulatory, or technical reasons related to accessibility of ancillary systems that were previously built to run in the data center.
Multicloud situations are sometimes inherited within an organization. For example, separate teams might have made different architectural decisions and then come together after an acquisition or a decision to integrate two autonomous applications. In these situations, there is often a lack of cohesiveness that makes integration challenging. It is important to partner with an open, agnostic vendor who can help solve this problem and create a forward-looking hybrid multicloud strategy.
Organizations whose cloud environments incorporate a full breadth of enterprise capabilities will gain market advantage. Advantages come with delivering a consistent hybrid multicloud experience based on frictionless consumption, self-service, automation, programmable APIs, and infrastructure independence. This advantage ensures that customers can thrive by unleashing agility and latent abilities in their own organizations.
A well-executed multicloud strategy can create many business benefits, given an architecture that includes:
Flexible access to best-in-class cloud services from multiple providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform
Records are paper or digital documents or forms that provide evidence of the activities of organisations and individuals. They are created during normal business activities, and are maintained to comply with regulations and to support operations. Capturing, maintaining and keeping records mean that essential information is available to safeguard the organisation’s interests.
What is Records Management?
Organisations, public and private, create and capture records in the course of their routine activities. Legislation, industry standards or contractual terms require them to keep those records for defined periods. Additionally, organisations may choose to keep records because the information they contain is useful to the operations of the enterprise.
Regardless of the reason they are kept, quick and accurate access to the information in records is critical to the organisation’s success. Workers must be able to find the right record at the right time. If your staff cannot find the records they need, it will cost you time and money, and you may be in breach of legal or contractual obligations.
While there is a cost in storing records, whether on paper or on computer media, the greater cost is almost always the cost of finding the records you need. The more records you have stored, the more you have to search through to find what you need. Records Management is the skill to:
- Store the records you need or are required to have
- Destroy the documents and records which you do not need to store
- Know where to file your documents and records
- Know how to find your documents and records when you need them
- Manage the access controls so sensitive information is only accessible to authorised people
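A retention schedule is the backbone of these skills, and it can be made machine-readable. The record classes, retention periods, and filing locations below are invented examples for illustration, not legal or regulatory advice:

```python
# Minimal sketch of a machine-readable retention schedule: decide, for a
# record of a given type and age, whether to retain or destroy it, and
# where it should be filed. All values are invented examples.
from datetime import date, timedelta

RETENTION_YEARS = {"invoice": 7, "contract": 10, "meeting-notes": 1}
FILING_LOCATION = {"invoice": "finance/", "contract": "legal/",
                   "meeting-notes": "operations/"}

def disposition(record_type, created, today=None):
    """Return (status, filing location) for a record of a given age."""
    today = today or date.today()
    expires = created + timedelta(days=365 * RETENTION_YEARS[record_type])
    status = "retain" if today < expires else "destroy"
    return status, FILING_LOCATION[record_type]

print(disposition("invoice", date(2020, 3, 1), today=date(2024, 1, 1)))
# → ('retain', 'finance/')
```

Encoding the schedule this way means every worker (and every automated process) applies the same rules about what to keep, what to destroy, and where to file.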
What are the Risks?
The main risks from poor (or no) records management are:
- Staff time wasted searching for lost or mislaid documents
- Frustration and ill feeling (blame for losing the documents)
- Failure to comply with laws, regulations or contractual requirements
- Failure to comply with standards
All of these risks lead to increased cost, delays, and anxiety among your staff. They may lead to loss of customer good-will, litigation, loss of product or process quality, and loss of business.
What are the benefits?
Good records management is the foundation of any storage strategy. Until you know what you should be storing, and what you do not need to store, you cannot develop a cost-effective storage strategy. Good records management is also the foundation of storage deployment. Every worker must know where to put documents and records and how to find them again.
These simple (but rarely achieved) principles are required for normal business, and for Sarbanes-Oxley, Basel 2, GxP, and most other quality regulations.
When historically tasked with configuring and managing a computer network, engineers have been forced to do almost everything manually: generate device configurations (and changes to them), commit them to the network, and check that the network behaves as expected afterward. These tasks are not only laborious but also anxiety-inducing, since a single mistake can bring down the network or open a gaping security hole.
But now networking is on an exciting journey of developing technologies that aid engineers with these tasks and help them to run complex networks with high reliability. These technologies provide two capabilities: automation and validation.
- Automation augments the hands and eyes of network engineers and helps them log into devices, extract information, copy data, and so on.
- Validation augments their brains and helps them predict the impact of different actions, reason about correctness, and diagnose unexpected network behavior.
These capabilities are conceptually distinct, though some tools provide both. Network validation is the focus of this post.
How well do you speak validation?
There is a wide maturity gap today between automation and validation.
Because it builds on server automation, network automation is more mature and has a well-developed lingua franca. Engineers can precisely describe its different modalities using terms like “idempotent,” “task-based,” “state-based,” “agentless,” etc.
Network validation, however, does not have a nuanced vocabulary. The general term “network validation” gets used to refer to a number of disparate activities, and specific terms get used by different engineers to mean different things.
This lack of nuance hinders the communication and collaboration required to advance network validation technology. That, in turn, harms the adoption of network automation. It is too risky to use automation without effective validation; a single typo can bring down the entire network within seconds.
The faster the car, the better the collision-prevention system needs to be.
In this post, we outline different dimensions of network validation and hope to start a conversation about developing a precise vocabulary. When talking about network validation, there are three important dimensions to consider:
- i) What is the scope of validation?
- ii) When is validation done?
- iii) How is validation done?
The “what” involves three different scopes; the “when,” two simple possibilities; and the “how,” four separate approaches. The choices along these dimensions can effectively describe the functioning of existing validation tools.
What is the scope of validation?
The most critical dimension to consider is the scope of validation, which determines the level of protection. As with hardware and software validation, there are three possibilities.
Unit testing checks the correctness of individual aspects of device configuration such as correct DNS configuration, interfaces running OSPF, accurate BGP autonomous system numbers (ASN), and compatible parameters for IPSec tunnel endpoints.
Unit testing is simple and direct—the root cause of the fault is immediately clear when a test fails— but it says little about the end-to-end behavior of the network. It is not hard to imagine situations where all unit tests pass, but the network does not deliver a single packet.
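A unit test of this kind can be as small as a few line checks over a rendered configuration. The config fragment and expected values below are invented for illustration; real tools parse vendor syntax rather than matching raw lines:

```python
# Invented IOS-style config fragment for a single device.
CONFIG = """\
hostname leaf1
ip name-server 10.0.0.53
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
"""

def dns_servers(config):
    """Collect the DNS servers set by 'ip name-server' lines."""
    return [line.split()[-1] for line in config.splitlines()
            if line.startswith("ip name-server")]

def ospf_enabled(config):
    """Check that at least one OSPF process is configured."""
    return any(line.startswith("router ospf") for line in config.splitlines())

assert dns_servers(CONFIG) == ["10.0.0.53"]
assert ospf_enabled(CONFIG)
print("unit checks passed")
```

Note what these tests do and don't tell you: a failure pinpoints its root cause immediately, but passing says nothing about whether DNS queries actually reach the server.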
Functional testing checks end-to-end behavior for specific scenarios such as whether DNS packets from host1 can reach the server, data center leaf routers have a default route pointing to the spine, a specific border router is preferred for traffic to Google.com, and traffic utilizes the backup path when a link fails.
Unlike unit testing, functional testing can provide assurance on network behaviors. However, as with software testing, its Achilles heel is completeness. It provides correctness guarantees only for tested scenarios, while the space of possible packets, failures, and external routes is astronomically large. For packets alone, there are more than a trillion possible TCP packets (40-byte header).
Because it is impossible to test all scenarios, the correctness guarantees of functional testing are inherently incomplete. Just because a few (or even a hundred) test packets cannot cross the isolation boundary does not mean that no packet can. Completeness of guarantees is where verification comes in.
Verification ensures correctness for all possible scenarios within a well-defined context. It is a formal (mathematical) approach, though the term “verification” sometimes gets incorrectly used for other types of checks. Example guarantees that verification can provide are: all DNS packets, irrespective of the source host or port, can reach the DNS server; the path via a specific border router is preferred for all external destinations; and the services stay available despite any link failure. Such strong guarantees offer network engineers the confidence to rapidly evolve their networks.
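The gap between sampling and exhaustive checking is easy to demonstrate in miniature. In the sketch below, a "packet" is reduced to just a destination port, the intended policy is "permit only DNS (port 53)", and the buggy ACL is an invented example:

```python
# Functional testing vs. verification on a toy packet space.
import random

def buggy_acl(port):
    # Intended: permit only port 53. Bug: also permits ports 5300-5399.
    return port == 53 or 5300 <= port < 5400

# Functional testing: a handful of sampled scenarios misses the bug,
# because none of the samples happen to fall in the buggy range.
random.seed(1)
samples = [53] + random.sample(range(1024), 10)
functional_ok = all(buggy_acl(p) == (p == 53) for p in samples)

# Verification: exhaustively check every port in the (small) space.
violations = [p for p in range(65536) if buggy_acl(p) != (p == 53)]

print(functional_ok, len(violations))  # → True 100
```

Here brute-force enumeration works because the space is tiny; for real networks the space is astronomically large, which is why verification tools rely on symbolic or model-based methods rather than enumeration.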
When is validation done?
The timing of validation is the second dimension to consider, for which there are two possibilities.
Validation is done after deploying changes to the network, to check if they had the intended impact. With post-deployment validation, errors can make it to the production network, but the duration of their impact is lowered.
Validation is done proactively, before deploying changes to the network. Proactive validation can ensure that erroneous changes never reach the production network, providing a higher degree of protection than post-deployment validation.
How is validation done?
The final dimension of network validation to consider is the validation approach itself. There are four main approaches, each of which is capable if catching different types of errors.
Text analysis scans network configuration and other data without deeply understanding semantics. It can check, for instance, that lines with “name-server” (for DNS configuration) exist and contain specific IP addresses.
Text analysis cannot check network behavior and tends to be brittle. Another line, for instance, could counteract the checked line, but it is commonly used when other options are not available.
Emulation uses a testbed of physical or virtual devices (VMs or containers), where engineers can deploy configurations and check the resulting network behavior.
When the emulation and production software is similar, emulation can help predict what will happen in production. However, it is difficult to build a full-scale replica of the production network using emulation either because of limited resources or because software images of some devices are not available. Using smaller versions dilutes correctness guarantees.
Operational state analysis
In operational state analysis, engineers push changes to the production network and check if they produced the intended impact (e.g., did the newly configured BGP session come up?).
The key advantage of this approach is that it checks the behavior of the actual network. However, it can only support post-hoc validation and will, therefore, leak any errors to the network. Further, it cannot be used to test behavior for scenarios such as large-scale failures because that may disrupt running applications.
Model-based analysis builds a model of network behavior based on its configuration and other data such as routing announcements from outside. It then analyzes it to check behaviors in a range of scenarios. It is a broad category that includes simulation as well as abstract mathematical methods. Its two formal variants have been previously covered here.
Model-based analysis is the only approach that can perform verification because evaluating all possible scenarios needs a model (though not all model-based tools support verification). A key concern with it is model accuracy. But as we know from other domains, such models get better than human experts over time, and they need not be completely accurate to find errors.
Comparing validation approaches
Different validation approaches are capable of finding different types of errors. The class of errors found by text analysis are a subset of those found by other approaches.
While model-based analysis can consider the widest range of scenarios, operational state analysis can find bugs in device software that model-based analysis cannot.
Errors that emulation can find overlap heavily with operational state analysis, though it can help find device software bugs triggered by failures often difficult to study in the production network. Similarly, operational state analysis can find errors that emulation misses because emulation is rarely able to faithfully mimic the size and traffic conditions of production networks.
Classifying existing tools
The table below classifies some open source tools along these dimensions.
|GNS3, vrnetlab||Functional testing||Pre-deployment||Emulation|
|ns-3||Functional testing||Pre-deployment||Model-based analysis|
|Functional testing||Post-deployment||Operational state analysis|
|rcc, most homebrew scripts||Unit testing||Pre-deployment||Text analysis|
|Unit testing||Post-deployment||Operational state analysis|
In this post, we outlined the rich space for network validation in terms of its three key dimensions: scope (what), timing (when), and analysis approach (how).
We did not address the important question of which option(s) to pick for a given network. The answer is not as straightforward as picking the “best” one within each dimension. For instance, while model-based analysis provides the strong guarantees for configuration correctness, coupling it with emulation or operational-state analysis may be needed if bugs in device software are a concern. Further complicating matters, the choices are not completely independent because some combinations such as emulation and verification are incompatible.
In a future post, we will outline how to make these choices and implement network automation plus validation pipeline to effectively augment engineers’ hands, eyes, and brains. Such a pipeline would enable engineers to evolve even the most complex networks with confidence, without fear of outages and security holes. Stay tuned! | <urn:uuid:8767955b-6e43-4800-b025-109c30b702f5> | CC-MAIN-2022-40 | https://www.intentionet.com/blog/the-what-when-and-how-of-network-validation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00799.warc.gz | en | 0.912108 | 2,065 | 2.78125 | 3 |
In this blog post
Adopting a Layered Cyber Security Strategy
There has been a staggering rise in the amount of data acquired, stored, and used by businesses around the world. Consequently, the frequency & complexity of cyberattacks and the volume of data being compromised are also growing at an alarming rate. This has forced organizations to turn their immediate attention towards implementing robust cyber security and data protection methods and practices. Securing digital assets and infrastructure has now become part of corporate strategy.
Statistics show that most of these attacks are done on medium sized organizations that may not have sufficient resources to strengthen their cyber security posture. No matter the size of the organization, a layered security approach needs to be adopted to ensure protection from any type of attack on the company’s systems, infrastructure, and other digital assets. The impact of a cyberattack can be destabilizing and cause the organization to incur heavy financial and reputational losses. So, investing in cyber threat management is a business imperative.
Why is a Layered Security Approach Crucial for Businesses?
Cyber threats take place at several different levels. To counter them, it is important for businesses to tackle them at their respective levels – which is why implementing a layered security approach is advised by cyber security experts. Some parts of an organization’s IT systems, infrastructure, and data are more vulnerable to attacks than others. That makes understanding the risks associated with each IT asset critical to using the right security methods.
Types of Cyber Attacks
Broadly speaking there are two types of security risks that businesses encounter: passive attacks and active attacks. Passive attacks are where an organization’s network traffic is monitored through unauthorized means to gain back door access to confidential information. These attacks are either system-based or network-based. Detecting passive attacks is a challenge. They lie low in the network, establishing themselves well before they strike.
Active attacks enter network systems by breaking through protection layers. These attacks can be further classified into different types. The first of those are system access attempts, in which loopholes in security are exploited to find ways to server or client systems and to seize control over them. Then there are spoofing attacks which perpetrators use to gain access to systems by appearing and behaving like a trusted system. Spoofing attacks also include cases where system users are persuaded to share confidential information.
Another type of active attack involves the perpetrator flooding systems with junk or using other means to interrupt or close down operations. One of the most common attacks that we see today is a cryptographic attack, in which through guesswork or tools, an attacker tries to decode passwords or decrypt encrypted data.
The Layers of Cyber Security
Some of the things that can be done to enforce network security include patching, vulnerability scanning, content filtering, Wi-Fi security, and SOC/SIEM amongst others.
Application and Data Security
Organizations need to measure and control how different people interact with applications. Configuring security for internet-based applications is key as these are more vulnerable than others and can be targeted by those trying to gain access to the network and systems. Security measures used at this level should take into account exposures from the client and server-sides. Some of the measures that organizations can take to enhance the security of their data include data backup, data encryption, and Data Loss Prevention (DLP).
Deploying security measures at this level help guard communication on the internet against attacks within the organization’s own network and other trusted/untrusted networks. This level of security ensures complete protection for your data as it moves beyond the physical boundaries of the organization.
Human Level Security
Humans are amongst the biggest factors responsible for security breaches at organizations. They continue to be the easiest prey for hackers to exploit to get an entry into a company’s IT systems. It could happen due to any number of reasons – distraction, carelessness, or simply the inability to understand a technology and what they can do to maintain security etc. Continuous reinforcement through periodic training and education are the only ways for organizations to minimize the occurrence of such security threats. People should be taught how they can identify security threats, what they can do and who they should contact if they do identify a threat, and what good cyber security practices they should adopt to keep attackers at bay.
The perimeter is where a company’s network connects with the outside world – and so is a critical area through which attackers can gain unlawful access to the internal network through devices, access points. Protecting the perimeter wasn’t as difficult a job not so long ago when all a network had was servers and desktop computers. But now, with so many varied devices connected to the same network – desktops, virtual desktops, laptops, printers, mobile phones, BYOD devices, IoTs, and more – the task of securing the perimeter isn’t so easy anymore. Protecting this layer involves getting complete visibility into what devices comprise the layer, what data passes through it, and then securing it with anti-virus software, firewalls, device security management, data encryption, and more.
Endpoint Level Security
Endpoint level includes all the devices that are connected to a company’s network – and their numbers are generally overwhelming. Shadow IT is a common problem where IT is unaware of the devices, applications, services in their network. It could include employee personal devices, downloads of software from online resources, old discarded systems that are still linked to the network, etc. So, gaining 360o visibility is the most critical first step here. Encryption needs to be implemented not just to secure a company’s data but to ensure that the environment that these devices are operating in is completely secure.
To protect business systems, network, and data from cyber threats, it is important for organizations to understand that the costs of employing cyber security measures are far more affordable than the costs of a data breach or any other form of cyberattack. Partnering with a reputed managed security services provider is one route to take. You can find information on how GAVS can help secure your enterprise, here. | <urn:uuid:a7eece79-afd0-4ba7-ac97-98920014559a> | CC-MAIN-2022-40 | https://www.gavstech.com/adopting-a-layered-cyber-security-strategy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00799.warc.gz | en | 0.950078 | 1,255 | 2.65625 | 3 |
The datasets used in big data analytics and AI model training can be hundreds of terabytes, involving millions of files and file accesses. Conventional X86 processors are poorly suited for this task and so, GPUs are typically used to crunch the data. Their instruction sets can process millions of repetitive operations many times faster than CPUs.
However, there is a performance bottleneck to overcome when transferring data to GPUs via server-mediated storage.
Typically, data transfers are controlled by the server’s CPU. Data flows from storage that is attached to a host server into the server’s DRAM and then out via the PCIe bus to the GPU. Nvidia says this process becomes IO bound as data transfers increase in number and size. GPU utilisation falls as it waits for data it can crunch.
For example, an IO-bound GPU system used in fraud detection might not respond in realtime to a suspect transaction, resulting in lost money, whereas one not getting access to data faster could detect and prevent the suspect transaction, and alert the account-holder.
Normally data is bounced into host server’s memory and bounced out of it on its way to the GPU. This bounce buffer is required – because that’s the way server CPUs run IO processes. However, it is a performance bottleneck.
If the IO process can be accelerated, with higher speed and lower latency, application run times are shortened and GPU utilisation is increased.
Nvidia, the dominant GPU supplier, has worked away at this problem in stages. In 2017, it introduced GPUDirect RDMA (remote direct memory access), which enabled network interface cards to bypass CPU host memory and directly access GPU memory,.
The company’s GPUDirect Storage (GDS) software, currently in beta, goes beyond the NICs to get drives talking direct to the GPUs. API hooks will enable the storage array vendors to feed more data faster to Nvidia’s GPUs, such as its DGX-2.
GDS enables DMA (direct memory access) between GPU memory and NVMe storage drives. The drives may be direct-attached or external and accessed by NVMe-over-Fabrics. With this architecture, the host server CPU and DRAM are no longer involved, and the IO path between storage and the GPU is shorter and faster.
GDS extends the Linux virtual file system to accomplish this – according to Nvidia, Linux cannot currently enable DMA into GPU memory.
The GDS control path uses the file system on the CPU but the data path no longer needs the host CPU and memory. GDS is accessed via new CUDA cuFile APIs on the CPU.
Bandwidth from CPU and system memory to GPUs in an DGX-2 is limited to 50GB/s, Nvidia says, and this can rise to 100GB/sec or more with GDS. The software combines various data sources such as internal and external NVMe drives, adding their bandwidth together.
Nvidia cites a TPC-H decision support benchmark with scale factors (database sizes) of 1K and 10K. Using a DGX-2 with eight drives, the 1K scale factor test latency was 20 per cent of the non-GDS run. This is a fivefold speedup. At the 10K scale factor, latency was 3.33 to five per cent when GDS was used compared to the non-GDS case. This is a 20x-30x speed up.
Four flying GDS partners
DDN, Excelero, VAST Data and WekaIO are working with Nvidia to ensure their storage supports GPUDirect Storage.
DDN is supplying full GDS integration with its A3i systems: A1200, A1400X and A17990X.
Excelero will have a generally available GDS support in the fourth quarter for disaggregated, converged and hybrid environments. It has a roadmap to develop a GDS-optimised stack for shared file systems.
A Nvidia GDS slide deck provides detailed charts showing VAST Data and WekaIO performance supporting GDS storage access.
VAST Data achieved 92.6GB/sec from its Universal Storage array with GDS while Weka recorded 82GB/sec read bandwidth between its file system and a single DGX-2 across 8 EDR links using two Supermicro BigTwin servers.
Blocks & Files envisages other mainstream storage array suppliers will soon announce support for GDS. Also, a GPUDirect Storage compatibility mode allows the same APIs to be used when non-GPUDirect software components are not in place. At time of writing, Nvidia has not declared Amazon S3 support for GDS.
Nvidia GDS is due for release in the fourth quarter. You can watch an GDS webinar for more detailed information. | <urn:uuid:bd9313a9-64bb-45de-851e-d96883b6eb92> | CC-MAIN-2022-40 | https://blocksandfiles.com/2020/07/23/nvidia-gpudirect-storage-software/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00799.warc.gz | en | 0.916815 | 990 | 2.8125 | 3 |
Telecommunications companies are in constant research trying to make their networks more efficient and improve their processes.
That is why new technologies and the digital transformation play an essential role. Currently, telecommunication operators are working on offering and improving their networks with the benefits of 5G helping this transformation and making their equipment and infrastructures more automated. These 5G networks will be key to the expansion of RAN.
But what is the Radio Access Network?
A radio access network (RAN) is ”part of a mobile telecommunications system that implements a radio access technology which resides between a device such as a mobile phone, a computer or any remotely controlled machine and provides a connection to its central network”. Other technologies such as Open RAN and Virtual RAN appeared with the emergence of RAN.
Open RAN, or open radio access networks, refers to ”a new paradigm in which cellular radio networks, consisting of hardware and software equipment from multiple vendors, operating over network interfaces that are truly open and interoperable”. This allows units from one supplier to relate and work with units from other suppliers, as RAN Sharing is known.
Virtual RAN (vRAN) is ”the virtualization of the baseband unit so that it runs as software on generic hardware platforms”. The whole concept of the RAN revolves around the idea of saving on network elements, in this case, the radiating equipment, by sharing them between several operators. They all use the same hardware, but then, using the software, they can separate the traffic of each one of them to give service to their clients.
Currently, MNO’s are working on progressing the design, development, optimisation, testing and industrialisation of Open RAN technologies in which, thanks to this sharing of networks through software, companies manage to save costs on equipment through sharing, making them more profitable and flexible. The cost of implementing 5G networks is expected to be reduced by 50% thanks to Open RAN networks.
RAN will lead to the automation of telecommunication tower networks.
According to the latest research of Analysis Mason “By 2025 almost 80% expect to have automated 40% or more of their network operations.” The Open RAN architecture will make easier this incorporation of intelligence, needed by maximizing the automation and optimization of networks, key for the latest network generation.
At Atrebo, we are aware of the importance of the sustainability and efficiency of our customers’ processes. We have developed a specific module, in TREE, the automation and infrastructure management platform, to help our customers manage their RAN sharing, called TREE.RANSharing.This module is designed to handle spectrum sharing requests and processes. It integrally controls all the processes related to the sharing of capacity of radiant systems shared between several operators, making possible:
- Monitor them, as well as adding new processes
- Carry out rapid audits of current and ongoing processes relating to sharing of radiant resources.
All these functions are integrated into the module of our TREE platform which, together with the Sharing module, facilitates the integration of both space and network equipment sharing. | <urn:uuid:9e18f6de-7e26-4e7f-962e-1f0af967e94c> | CC-MAIN-2022-40 | https://www.atrebo.com/en/the-role-of-5g-a-key-factor-in-the-automation-of-rans/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00799.warc.gz | en | 0.942904 | 646 | 2.75 | 3 |
February 20, 2017 | Written by: Rahil Garnavi
Share this post:
Glaucoma is the second leading cause of blindness worldwide. The disease progresses very slowly and destroys vision gradually, starting with the side vision and narrowing over time. It often remains undetected until irreversible eye sight is lost at later stages. It’s no surprise then, that glaucoma has earned a reputation as the silent thief of sight, with an estimated 50 percent of cases going undetected, leaving people unaware that they’re slowly going blind.
It can be treated but early detection is critical in ensuring effective treatment. The first challenge we face with eye diseases like glaucoma as well as diabetic retinopathy and age-related macular degeneration is that in many cases, blindness is preventable (or at least slowed). If detected early enough in the majority of patients, it could have a profound impact on not only their quality of life but also the economic strain on health care systems.
Four images of the back of the eye (fundus photography) showing various ratios of optic cup to disc as measured by Watson. An increased optic cup to disc ratio could be a sign of Glaucoma. Watson is also capable of determining between left and right eye image
Today, IBM Research is using the cognitive computing power of Watson, to progress the science of medical imaging analysis of eye images, which could in the future make the early detection process significantly faster and more accessible for all patients. Another challenge we face is that there may be a limited supply of specialised clinicians experienced in identifying subtle changes in retinal images. Often this means it is costly to visit or difficult to access for those patients in remote areas. Convenient and affordable access to regular screening is critical in the identification of not only glaucoma, but also all preventable eye diseases. We need to innovate to improve access to regular eye disease screening, for everyone.
Since 2015, scientists from the IBM Research Lab in Australia have been applying deep learning and image analytics capabilities to 88,000 retina images accessed from EyePACS, a global web-based platform that enables the exchange of eye-related images and clinical information. By understanding what constitutes the regular anatomy of an eye, the technology is being trained to identify possible abnormalities which may indicate the early onset of eye diseases like glaucoma.
The research results that we’ve announced today have indicated a statistical performance of 95 percent in the technology’s ability to measure the ratio between two parts of the eye, the optic cup and the disc. Identifying an increased optic cup to disc ratio could be a sign of Glaucoma and inform a need for further tests. Another key factor for eye disease analysis is the ability to automatically identify left from right in a retina image. With 94 percent confidence in determining left from right images, this technology could help streamline some of the manual processes that support optometrists and ophthalmologists today.
A clear vision for eye health
IBM Research’s early successes could help make eye examinations accessible to far more people worldwide than ever before. Our researchers will continue to progress the science of medical imaging analysis for retinal images, including the ability to understand a broader range of eye diseases such as diabetic retinopathy, cataracts and age-related macular degeneration. Giving Watson eyes is an exciting area of research. It could one day be key to helping free up our expert clinicians to focus more of their efforts on targeted treatment and management of these diseases.
P. Roy, et al. “Automatic Eye Type Detection in Retinal Fundus Image Using Fusion of Transfer Learning and Anatomical Features.” International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016.. http://ieeexplore.ieee.org/abstract/document/7797012/
D. Mahapatra et al. “Retinal Image Quality Classification Using Saliency Maps and CNNs.“ Machine Learning in Medical Imaging, Volume 10019 of the series Lecture Notes in Computer Science, pp 172-179. http://link.springer.com/chapter/10.1007/978-3-319-47157-0_21.
S. Sedai, et al. “Segmentation of Optic Disc and Optic Cup in Retinal Fundus Images Using Coupled Shape Regression.” , Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop (OMIA 2016) Held in Conjunction with MICCAI 2016, pp 1-8. http://ir.uiowa.edu/omia/2016_Proceedings/2016/1/ | <urn:uuid:334b3ff4-f880-47f3-9daf-ea7da766e500> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/research/2017/02/watsons-detective-work-could-help-stop-the-silent-thief-of-sight/?glaucoma | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00799.warc.gz | en | 0.91495 | 975 | 3.234375 | 3 |
I recall in early 2000’s having a debate with a security expert about firewalls, at the time they were advocating the firewall model was fundamentally broken! Their argument was that if any traffic could get through, in any direction, for any purpose, bad guys could figure out how to use it to exploit the system. I disagreed, believing the ‘new’ filtering technology would be able to stop them; I was wrong.
The myth of the perimeter still persists today – we see it again and again in security technology such as sandboxes, containers, virtual machines and of course firewalls. All of them seek to secure systems by putting a boundary around them and stopping bad things happening. This is an very attractive idea, and goes back to real world defensive strategies that have worked for centuries – castle & town walls have been very effective at protecting their inhabitants.
In IT systems there is one big difference: our walls are full of holes because we open up holes to let through ‘good’ traffic. In the castle analogy this is like guards (firewalls, sandboxes etc) on the gates & walls to make sure that only what we want can go past. But unlike in the situation at the castle, in the complex world of IT it’s incredibly difficult to tell what is bad traffic. In ancient times you just had to look for people with swords (and the occasional Trojan horse) – but with data traffic, telling the difference between bad & good traffic is impossible (especially if it’s encrypted).
This goes to the root of system design – far too many IT systems rely on a perimeter and a ‘safe’ zone where you ‘trust’ the network and data on it. This does not work. The reasons are many; and here are some of them:
- Programming mistakes happen – there is no way, practically, to ensure they don’t – so even ‘harmless’ data can trigger harmful consequences. ‘Input Validation’ has been been on the top 10 list of attacks from OWASP since it began and it isn’t going anywhere!
- People can’t always be trusted – your staff need access to systems, but sometimes they will, due to malice, incompetence or simply corner cutting, expose the system behind the perimeter.
It’s really complex to actually secure any computer – if you’ve ever done any of the following, your computer _may_ be compromised:
- Installed software from the internet
- Opened a document with macro’s and let them run
- Failed to patch it the same day patches have come out
- Run any piece of software with a vulnerability
- Plugged in any USB/Displayport device
Once a bad guy gets behind the perimeter in most IT systems their job becomes really easy – they generally can start installing things, running malware and extracting data. Many organisations have ‘smart’ logging systems to help monitor for such activity. However much of the time they are ignored; they tend to generate a lot of false positives, and a smart intruder could defeat them.
This is especially true for IoT – in IoT we’re seeing a huge number of devices running on the ‘home’ network behind the ‘firewall’ that assume perimeters can protect them. Let’s take some examples:
Example 1: If your device has ‘outbound’ access only, this does not protect you; attackers can use the App to reach through your firewall via 2 main methods:
- MITM (Man-in-the-Middle) the connection and modify the data
- Tampering with your app and making it send malicious data. Most API’s forwarding from apps to devices don’t have enough input sanitation to prevent this.
Example 2: If your device listens on the internet (often via UPNP) it’s directly exposed; many devices don’t have:
- Strong passwords
- Rotatable passwords
- Strong authentication protocols with anti-replay
- Repeat failure blocking
- Enough input validation on login to prevent exploits
- DOS protection
Example 3: If your device talks to other devices on the local network it’s indirectly exposed, but is just as vulnerable as one on the internet, unless:
- Your device has all of the above for internal network
- AND every single device on the network is secure
So what’s the solution? On one level it’s simple: shrink the perimeter to the smallest possible size. In practical terms this means you should ensure security at Application / Container/ MicroService level:
- Each App/Device should have direct authentication/encryption
- This is a challenge for TLS offload – but if used you need to make sure the un-encrypted segment is as small as possible
- Each App/Device should have local network access control blocking all traffic it does not need (both inbound and outbound!)
- Each App/Device should be read-only except for what actually needs to be writable
- Each App/Device should be signed and verified ideally at boot and runtime
- App/Devices should all authenticate to each other using unique, rotatable and regularly rotated credentials
- Logs should be centralised and tuned to provide useful and actionable data
This is all stuff that is possible today; it’s rarely properly implemented, but it’s possible. In a subsequent posts I’ll discuss how we can achieve all of this. | <urn:uuid:1b1fd017-542d-4793-80cd-a6b3d3bec1f2> | CC-MAIN-2022-40 | https://blog.irdeto.com/software-protection/the-perimeter-is-a-lie/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00799.warc.gz | en | 0.937076 | 1,156 | 2.78125 | 3 |
Internet-enabled devices are so common, and so vulnerable, that hackers recently broke into a casino through its fish tank. The tank had internet-connected sensors measuring its temperature and cleanliness. The hackers got into the fish tank’s sensors and then to the computer used to control them, and from there to other parts of the casino’s network. The intruders were able to copy 10 gigabytes of data to somewhere in Finland.
By gazing into this fish tank, we can see the problem with “internet of things” devices: We don’t really control them. And it’s not always clear who does – though often software designers and advertisers are involved.
In my recent book, “Owned: Property, Privacy and the New Digital Serfdom,” I discuss what it means that our environment is seeded with more sensors than ever before. Our fish tanks, smart televisions, internet-enabled home thermostats, Fitbits and smartphones constantly gather information about us and our environment. That information is valuable not just for us but for people who want to sell us things. Those sellers ensure that internet-enabled devices are programmed to be quite eager to share information.
Take, for example, Roomba, the adorable robotic vacuum cleaner. Since 2015, the high-end models have created maps of their users’ homes, to more efficiently navigate through them while cleaning. But as Reuters and Gizmodo reported recently, Roomba’s manufacturer, iRobot, may plan to share those maps of the layouts of people’s private homes with its commercial partners.
Like the Roomba, other smart devices can be programmed to share our private information with advertisers over back-channels of which we are not aware. In a case even more intimate than the Roomba business plan, a smartphone-controllable erotic massage device, called WeVibe, gathered information about how often, with what settings and at what times of day it was used. The WeVibe app sent that data back to its manufacturer – which agreed to pay a multi-million-dollar legal settlement when customers found out and objected to the invasion of privacy.
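To make that back-channel concrete, here is a minimal sketch – entirely hypothetical, not WeVibe’s actual protocol – of the kind of usage report a “smart” device app can quietly assemble and phone home. The field names and endpoint are invented for illustration.

```python
import json
import time

def build_telemetry(device_id, setting, session_seconds):
    """Assemble a usage report of the sort many 'smart' device apps
    send back to their manufacturers -- often without the owner's knowledge."""
    return {
        "device_id": device_id,          # uniquely identifies the owner's unit
        "setting": setting,              # which mode was used
        "duration_s": session_seconds,   # how long it ran
        "timestamp": int(time.time()),   # when it was used
    }

# In a real app this JSON would be POSTed to the vendor's server, e.g.
#   urllib.request.urlopen("https://telemetry.example.com/v1/usage", data=payload)
payload = json.dumps(build_telemetry("unit-4812", "pulse", 540)).encode()
print(json.loads(payload)["setting"])  # -> pulse
```

Note that nothing in the payload is needed to make the device work; every field exists purely to tell the manufacturer how its customers behave.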
Those back-channels are also a serious security weakness. The computer manufacturer Lenovo, for instance, used to sell its computers with a program called “Superfish” preinstalled. The program was intended to allow Lenovo – or companies that paid it – to secretly insert targeted advertisements into the results of users’ web searches. The way it did so was downright dangerous: It hijacked web browsers’ traffic without the user’s knowledge – including web communications users thought were securely encrypted, like connections to banks and online stores for financial transactions.
One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.
This sort of arrangement is destroying the concept of basic property ownership. John Deere has already told farmers that they don’t really own their tractors but just license the software – so they can’t fix their own farm equipment or even take it to an independent repair shop. The farmers are objecting, but maybe some people are willing to let things slide when it comes to smartphones, which are often bought on a payment installment plan and traded in as soon as possible.
How long will it be before we realize they’re trying to apply the same rules to our smart homes, smart televisions in our living rooms and bedrooms, smart toilets and internet-enabled cars?
Return to Feudalism?
The issue of who gets to control property has a long history. In the feudal system of medieval Europe, the king owned almost everything, and everyone else’s property rights depended on their relationship with the king. Peasants lived on land granted by the king to a local lord, and workers didn’t always even own the tools they used for farming or other trades like carpentry and blacksmithing.
Over the centuries, Western economies and legal systems evolved into our modern commercial arrangement: People and private companies often buy and sell items themselves and own land, tools and other objects outright. Apart from a few basic government rules like environmental protection and public health, ownership comes with no trailing strings attached.
This system means that a car company can’t stop me from painting my car a shocking shade of pink or from getting the oil changed at whatever repair shop I choose. I can even try to modify or fix my car myself. The same is true for my television, my farm equipment and my refrigerator.
Yet the expansion of the internet of things seems to be bringing us back to something like that old feudal model, where people didn’t own the items they used every day. In this 21st-century version, companies are using intellectual property law – intended to protect ideas – to control physical objects consumers think they own.
My phone is a Samsung Galaxy. Google controls the operating system and the Google Apps that make an Android smartphone work well. Google licenses them to Samsung, which makes its own modification to the Android interface, and sublicenses the right to use my own phone to me – or at least that is the argument that Google and Samsung make. Samsung cuts deals with lots of software providers which want to take my data for their own use.
But this model is flawed, in my view. We need the right to fix our own property. We need the right to kick invasive advertisers out of our devices. We need the ability to shut down the information back-channels to advertisers, not merely because we don’t love being spied on, but because those back doors are security risks, as the stories of Superfish and the hacked fish tank show. If we don’t have the right to control our own property, we don’t really own it. We are just digital peasants, using the things that we have bought and paid for at the whim of our digital lord.
Even though things look grim right now, there is hope. These problems quickly become public relations nightmares for the companies involved. And there is serious bipartisan support for right-to-repair bills that restore some powers of ownership to consumers.
Recent years have seen progress in reclaiming ownership from would-be digital barons. What is important is that we recognize and reject what these companies are trying to do, buy accordingly, vigorously exercise our rights to use, repair and modify our smart property, and support efforts to strengthen those rights. The idea of property is still powerful in our cultural imagination, and it won’t die easily. That gives us a window of opportunity. I hope we will take it. | <urn:uuid:4ac456f6-a66c-40bd-8d4c-15fb397d5ff9> | CC-MAIN-2022-40 | https://www.mbtmag.com/home/blog/21101865/iot-is-sending-us-back-to-the-middle-ages | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00799.warc.gz | en | 0.956453 | 1,479 | 2.796875 | 3 |
Data centers are collections of servers used to host online information so that it’s accessible from anywhere. Often, a site we visit on the internet is served by a number of servers working together, an arrangement that also protects the data stored on them if any one machine runs into trouble.
Major data corporations have massive data centers all over the world: Google has 15 data centers across four continents, plus a sprawling map of cloud servers hosted elsewhere, and Facebook has nine centers, six of them in the US.
While standard data centers most often use Lead Acid battery cells, major corporations like these have already moved to the more modern Lithium-ion batteries, the same chemistry you’ll find in your smartphone. The central metric data center engineers care about is Power Usage Effectiveness (PUE), and the older Lead Acid batteries struggle to compete with Li-ion technology due to their huge size, poorer efficiency and need to be replaced much more regularly.
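As a rough illustration of the PUE metric mentioned above (the monthly figures below are invented), it is simply the total energy entering the facility divided by the energy that actually reaches the IT equipment:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. A perfect data center would score 1.0;
    everything above that is overhead (cooling, UPS/battery losses,
    lighting), so lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,500 MWh enters the building, 1,000 MWh
# reaches the servers; the rest is cooling and conversion losses.
print(round(pue(1_500_000, 1_000_000), 2))  # → 1.5
```

Battery choice feeds into this number through charging losses and the air conditioning needed to keep cells cool.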
The most common and recognizable Lead Acid batteries used in data centers are the Valve-Regulated Lead Acid (VRLA) cell. These often come as part of a huge cabinet of stacked batteries able to support Uninterruptible Power Supply (UPS) systems.
Other common Lead Acid batteries used in data centers include the Flooded Lead Acid cell and the Modular Battery Cartridge (MBC). The former is a very old design with a long life span whose cells are often too heavy for one person to lift; the latter is a newer type designed to be easily replaceable.
So why change now? Lead Acid cells have been the industry standard for so long, what’s wrong with them? Lithium-ion technology continues to make strides that far outstrip the Lead Acid model, so it may be time to bring data center energy storage into the future. But is it right for you?
Pros of Lead Acid:
The Lead Acid battery is often used because of its large power-to-weight ratio and high surge currents, especially useful in handling the huge energy demands generated by swells of internet traffic. This kind of battery has worked well for data centers historically, as the priority was consistent energy output rather than fast charging, and backup energy units could be kept on hand to cover the difference.
The main benefit of the Lead Acid battery, however, is its cost. Many data center operators are unwilling to switch to the newer Li-ion tech because the overall cost is much too high. Lead Acid batteries are far easier to come by and much simpler to replace.
The MBC Lead Acid battery is designed to be a contained unit, easily slotted into a cabinet and connected without the need for an engineer. This makes it an attractive option for businesses without a dedicated team of engineers.
Cons of Lead Acid Cells:
As with most electronics, the design life specified by the manufacturer is tested in controlled laboratory conditions and is therefore often very different from the reality of everyday use. For the Lead Acid battery, the standard predicted life is around 10 years, while actual service life is closer to three years.
Due to this, Lead Acid batteries need constant replacement, racking up costs in new cells and admin time over the years. Additionally, Lead Acid cells need to be kept at around 20°C for maximum efficiency, requiring companies to run expensive air conditioning systems around the clock to hold them at optimal temperature.
The footprint of Lead Acid systems is massive and they weigh at least three times more than their Lithium cousins, meaning systems above ground level need reinforced floors and storage space can take up several rooms, adding to rent costs.
The VRLA Lead Acid cell has been the data center staple since the 1980s, and batteries have come a long way since then. Modern, state-of-the-art Lithium cells reach a round-trip efficiency of nearly 100%, while the best estimates for the Lead Acid battery come in at around 85%. That gap shows just how much energy companies waste with a Lead Acid system, on top of the constant replacement and cooling bills.
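To put that efficiency gap in concrete terms, here is a back-of-the-envelope comparison using rough figures of 85% versus 99% round-trip efficiency (the annual throughput is invented):

```python
def annual_waste_kwh(throughput_kwh: float, efficiency: float) -> float:
    """Energy drawn from the grid minus energy actually delivered to
    the load, for a battery system with the given round-trip efficiency."""
    return throughput_kwh / efficiency - throughput_kwh

# Hypothetical UPS pushing 500 MWh a year through its batteries.
lead_acid = annual_waste_kwh(500_000, 0.85)  # ~85% efficient
lithium = annual_waste_kwh(500_000, 0.99)    # ~99% efficient
print(round(lead_acid))  # → 88235
print(round(lithium))    # → 5051
```

On these assumptions the Lead Acid system loses roughly seventeen times as much energy per year, before cooling and replacement costs are even counted.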
With most of the major tech giants having already made the switch to Li-ion, it’s safe to assume they’ve done the calculations and found the benefits over a Lead Acid system. However, companies like Google need server systems that can withstand enormous traffic, so their needs are likely to be far different from those of smaller server hosts.
These companies are also consistently trying to make their energy output greener. They’re in a financial position to make these major changes with the intention of reducing their carbon footprint that other small server hosts may not see the worth of just yet.
Cons of Lithium Batteries:
The real major con of switching to Lithium-ion storage is the cost. Currently, Lithium production is very expensive, so the production cost of Lithium batteries remains high. For buyers focused on upfront price, even after factoring in the inevitable replacement costs, a Lead Acid system can still make sense from a monetary perspective.
In addition, most smaller companies are hesitant to make the switch too early. While it may be proving great for the big players, the evidence that it really works for small businesses is still thin on the ground. Considering Lead Acid has been the industry standard for decades, it’s not surprising that many companies aren’t interested in overhauling everything they know without hard evidence of the benefits first.
Pros of Li-Ion:
While the sky-high cost is hard to argue with, experts in the industry believe the cost of Lithium is about to collapse and could drop by up to 60% in the next year. If this is the case, fully replacing your data center system with Lithium batteries will become much more cost effective and could remove one of the only clear barriers to using Lithium technology.
Besides this, Li-ion batteries give a huge number of savings compared with Lead Acid cells that make them great for UPS data centers.
Li-ion batteries are much lighter and smaller than Lead Acid cells, and their footprint is far smaller, saving you space whatever your set-up. That could cut costs for companies renting space and remove the need to reinforce floors in battery rooms. Lithium-ion batteries also have a much longer service life of around seven years, yielding savings on replacements over time.
Lithium batteries can perform at 30°C with the same efficiency that Lead Acid batteries manage at 20°C. Data centers adopting Lithium batteries have therefore been able to switch from bank-bursting air conditioning to systems that make use of outdoor air, meaning further savings on overall maintenance. With the need for super-cool areas reduced, some data centers have now done away with separate battery rooms entirely, shrinking the space a data center takes up, which could shave rent prices right down.
Lead Acid batteries have been suited to the role of back-up batteries due to their ability to provide high power over a longer period of time. For UPS systems, the need for constant power is imperative. Lead Acid batteries take much longer to charge so back-up batteries are often used while the initial batteries recharge.
In new Lithium-ion systems, the batteries are able to quickly discharge large amounts of energy and charge up much faster, essentially eliminating the need for a longer-lasting power supply. As newer battery systems become more efficient, data centers will be able to provide the levels of energy they need quickly without worrying about running out at any point.
All batteries leak energy when sitting unused. While Lead Acid batteries have an efficiency of around 85%, Lithium-ion batteries have hit an efficiency level of 99%, losing about 1% of stored energy after 24 hours. This makes them perfect for back-up storage as they will virtually be fully ready to go at any given moment.
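A minimal compounding model of that idle self-discharge, assuming the figure of roughly 1% loss per 24 hours quoted above:

```python
def charge_remaining(days: int, daily_loss: float = 0.01) -> float:
    """Fraction of stored energy left after sitting idle for `days`,
    assuming a constant fractional loss per 24-hour period."""
    return (1.0 - daily_loss) ** days

# After a month on the shelf, a ~1%/day cell still holds ~74%.
print(round(charge_remaining(30), 3))  # → 0.74
```

A cell with a higher daily loss rate decays much faster under the same model, which is why self-discharge matters for back-up storage that may sit unused for weeks.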
As battery technology continues to develop at high speeds, and our lives become ever more digital, data centers will need to revolutionise to meet consumer demands and, once the cost is out of the way, there really isn’t much bringing the Lithium battery down. | <urn:uuid:66b96c4d-100a-41cb-8dfd-eabf777722ff> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/data-center-batteries | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00799.warc.gz | en | 0.954734 | 1,669 | 3.28125 | 3 |
Last updated on August 31st, 2022
Any healthcare institution, whether a hospital or a clinic, deals with confidential patient information. If this data falls into the wrong hands, it can be used to carry out malicious activities like scams, identity theft, and others.
HIPAA is an act that aims to prevent this by establishing guidelines for healthcare institutions to follow to ensure the security of that data. The healthcare data protected under HIPAA is called Protected Health Information (PHI). The institutions that fall under the HIPAA act are called Covered Entities and Business Associates.
HIPAA, PHI, and the Role of Institutions
Covered entities are the institutions that provide healthcare services and handle PHI. Business associates are organizations that assist covered entities and gain access to PHI in doing so.
Every covered entity and business associate tries its best to adhere to the HIPAA rules. However, it is an uphill task, as the HIPAA requirements are quite specific, and certain constraints make it difficult for these organizations to secure PHI on their local premises.
Since the US Department of Health and Human Services (HHS) does not give any leeway in complying with the HIPAA laws, there are no excuses for you if Protected Health Information is compromised. Hence, there is a constant search for a more secure environment in which to store PHI.
The cloud environment offers a platform that stores data under multiple layers of security to ensure that it is inaccessible to unauthorized users. Moreover, under the HIPAA rules, any cloud service provider that is responsible for transmitting, receiving, and storing ePHI is considered a business associate.
All HIPAA regulations that apply to a business associate must therefore be followed by the cloud service provider. The cloud provider must sign a Business Associate Agreement (BAA) with the covered entity that avails its services to become a HIPAA compliant hosting provider, making the cloud service provider legally bound to secure those services.
Here are some of the security-specific aspects that make HIPAA compliant cloud hosting a preferred choice to store PHI.
1. Added Layer of Security
In the cloud environment, the PHI is protected with the help of multiple security methods and protocols from the server storing the data right up to the end-point device from which it is accessed. The data is monitored 24/7, and any irregularity in the traffic pattern is instantly identified and mitigated.
Here are some of the security safeguards deployed by the HIPAA compliant hosting provider.
a) Data Encryption – Data encryption transmits data in a coded form that can be comprehended only by authorized users. When a user accesses PHI on an end-point device such as a smartphone, the data travels from the remote cloud server to the device, and cybercriminals may try to intercept it at any point in transit. With encryption, however, the data is coded, so only a user holding the right key can decode it.
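The principle can be sketched with a toy cipher. This is strictly an illustration of "coded form": HIPAA-grade systems use vetted algorithms such as AES-GCM from an audited library, never a hand-rolled scheme like the one below.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with a SHA-256-derived keystream. Shows that the
    ciphertext is unreadable without the key, but this is NOT a vetted
    cipher -- it exists only to illustrate the concept."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

record = b"patient 4711: blood type O-"
ciphertext = toy_encrypt(b"session-key", record)
assert ciphertext != record                       # unreadable in transit
assert toy_decrypt(b"session-key", ciphertext) == record
print("round trip ok")
```

Only a party holding the same key can recover the record; an interceptor sees bytes that carry no meaning.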
b) Multi-factor Authentication – It ensures that no unauthorized user is able to log in to your cloud server, even if they know your credentials. When you log in to the server, the cloud provider sends an OTP to your phone or a security code to your inbox that you must type along with the username and password. Hence, even if a hacker obtains your credentials, they will still lack the one-time code required to log in.
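The code sent to your phone is typically a one-time password. Below is a minimal sketch of the time-based variant (TOTP, in the style of RFC 6238) using only the Python standard library; real deployments rely on a maintained authentication library rather than hand-written code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """Time-based one-time password: HMAC the current 30-second
    counter with a shared secret, then truncate to 6 digits.
    Phone and server derive the same code independently."""
    now = time.time() if at is None else at
    counter = struct.pack(">Q", int(now // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at t=59s both sides land in the same window.
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough to log in.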
c) Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) – The IDS and IPS systems are both deployed at the network level to identify anomalies in the data traffic and anticipate a cyberattack. The IDS identifies malicious traffic entering the network, and the IPS prevents it from reaching the servers.
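At its simplest, the detection half boils down to flagging traffic that deviates from a baseline. The sketch below is a toy stand-in (the addresses, counts and threshold are invented); real IDS/IPS products combine signatures with far richer statistical models:

```python
def flag_anomalies(requests_per_source: dict, baseline_mean: float,
                   tolerance: float = 3.0) -> list:
    """Return sources whose request volume exceeds a simple
    multiple-of-baseline threshold."""
    limit = baseline_mean * tolerance
    return sorted(src for src, n in requests_per_source.items() if n > limit)

traffic = {"10.0.0.5": 42, "10.0.0.9": 57, "203.0.113.7": 950}
print(flag_anomalies(traffic, baseline_mean=50))  # → ['203.0.113.7']
```

An IPS goes one step further and drops or rate-limits the flagged source instead of merely reporting it.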
2. Centralized Data
In the conventional scenario, PHI is stored on local systems or servers on the premises of a healthcare institution. However, a local setup is exposed to various physical and natural risks. If the operating system of a local machine crashes or its hard drive malfunctions, there may be no way to recover the data, and HIPAA compliance leaves no room for losing critical PHI.
However, the HIPAA compliant hosting providers store all the PHI on the cloud servers which offer a centralized platform for each authorized user to access the data. Since the data is not stored in the local systems, the malfunction of hardware does not lead to loss of PHI.
3. Business Continuity & Disaster Recovery
There are some advanced security safeguards and protocols that help keep the data secure from any unauthorized access and harmful data packets. However, no security setup can guarantee complete data protection without the implementation of Business Continuity and Disaster Recovery (BCDR).
It is essential for every HIPAA compliant cloud service provider to deploy a robust BCDR plan that ensures proper crisis as well as risk management strategies and procedures. Business Continuity plan ensures that the risks are identified and prioritized based on severity.
Moreover, the implementation of Disaster Recovery means that PHI is stored in multiple geographic locations, so your data remains accessible and safe even in the event of a disaster like an earthquake.
4. Data Center Infrastructure
Most of the cloud service providers host your data as well as applications on the cloud servers that are situated in third-party data centers. Hence, along with the cloud providers implementing advanced security methods, it is also necessary that the data centers deploy a robust infrastructure.
The data centers must be equipped with state-of-the-art power, cooling, and network equipment. The infrastructure must be redundant so that the failure of any one component does not affect data center operations.
Also, the data centers must have airtight security with multiple entrance levels, each with an authentication system such as biometrics, optical scans, or ID cards. The data center must also hold security-specific certifications.
For the PHI to be secure, all the aspects of security, i.e., physical, network, as well as administrative security must be ensured.
HIPAA Compliance Is Necessary!
The Privacy and Security rule under HIPAA mentions strict guidelines that the covered entities and business associates should follow to ensure that the Protected Health Information (PHI) is secure under all circumstances.
As the cloud provider involved in transmitting, receiving, or storing the PHI is considered a business associate under HIPAA laws, it is responsible for complying with the standards mentioned by it.
The HIPAA compliant cloud provider deploys advanced security methods in its cloud architecture. As the data is stored in the cloud rather than local machines, any disruption caused in the local premises does not compromise the integrity of PHI.
Moreover, the Business Continuity and Disaster Recovery plan ensures that the data is secure from disastrous events like earthquakes or cyberattacks. The state-of-the-art data centers in which the cloud infrastructure is set up is also an essential parameter responsible for keeping the PHI secure.
The smart grid has been on my mind a lot lately – possibly because I’ve been seeing articles about it everywhere the past month, particularly regarding the Obama administration’s support for it. And last week, I talked to a student working on his master’s degree in IT whose research focuses on the smart grid and keeping it secure.
That grad student isn’t alone in his thinking. The Future of Privacy Forum recently discussed the need for smart privacy.
And as an article in the Homeland Security Newswire points out:
“The smart grid is a theoretically closed network, but one with an access point at every home, business, and other electrical power user where a smart-grid device is installed; those devices, which essentially put the smarts into the grid, are computers with access to the network; in the same way attackers have found vulnerabilities in every other computer and software system, they will find vulnerabilities in smart-grid devices.”
An attack on the smart grid could cause a national – or international – disaster.
The federal government recognizes that making the smart grid secure needs to be a priority. The White House has released a document that discusses the general thought process behind its efforts:
“The probability of hacking into Smart Grid must be assumed to be 100%, and limitations of the damage possible by such entry must be a core element of the design. These designs should assume penetration of the network will take place at various layers and address responses on the network and in legal enforcements that are effective.” | <urn:uuid:a6237b55-6449-43bd-ba6f-90bb6a22f1b5> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/unlimited-vulnerabilities-in-the-smart-grid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00199.warc.gz | en | 0.956636 | 316 | 2.8125 | 3 |
The National Institute of Standards and Technology is inching closer to developing two new encryption standards designed to protect the federal government from new and emerging cybersecurity threats.
Many experts believe the advanced computing capabilities of quantum computers will render most traditional encryption protocols used today obsolete. While true quantum computing is still decades away, the federal government is already preparing contingencies for how to defend its current IT assets and equipment from the threat.
In a March 20 briefing to the Information Security and Privacy Advisory Board, Matthew Scholl, Chief of the Computer Security Division at NIST, said the agency spent much of the past year evaluating 69 algorithms for its Post Quantum Cryptography Standardization project, a 2016 project designed to protect the machines used by federal agencies today from the encryption-breaking tools of tomorrow.
The submitted algorithms are all designed to work with current technology and equipment, each offering different ways to protect computers and data from attack vectors – known and unknown – posed by developments in quantum computing. NIST chose 26 of the most promising proposals in January 2019, and the agency will be conducting a second evaluation this year to whittle that list down even further.
Scholl told the board that the agency isn't shooting for a specific number of algorithms at the end of the process and wants to leave room for agencies to deploy multiple options to protect their assets.
"This is to ensure that we have some resilience so that when a quantum machine actually comes around -- not being able to fully understand the capability or the effect of those machines -- having more than one algorithm with some different genetic mathematical foundations will ensure that we have a little more resiliency in that kit going forward," Scholl said.
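The resilience Scholl describes is often called crypto-agility: callers reach algorithms through a registry, so a broken primitive can be retired without rewriting the applications that use it. A toy sketch of the pattern, with hash functions standing in for actual post-quantum algorithms:

```python
import hashlib

# Toy registry: in a real crypto-agile design each entry would be a
# full algorithm implementation, and outputs are tagged with the
# algorithm name so old data can be migrated if one is broken.
REGISTRY = {
    "fp-sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "fp-sha3": lambda data: hashlib.sha3_256(data).hexdigest(),
}

def protect(data: bytes, algorithm: str):
    """Apply the named algorithm and tag the result with its name."""
    return algorithm, REGISTRY[algorithm](data)

alg, fingerprint = protect(b"message", "fp-sha3")
print(alg, fingerprint[:16])
```

Standardizing several algorithms with different mathematical foundations means a deployment built this way can switch its active entry if a quantum attack breaks one of them.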
Switching encryption protocols is disruptive. NIST turned to the history books to study previous cryptographic transitions in the federal government and found they were plagued by poor communication, unrealistic timelines and overall confusion regarding expectations. Scholl said the agency is planning to do more proactive outreach to agencies and industry during second round evaluations.
NIST is also working on another revamp of encryption standards for small "lightweight" computing devices, focusing on components such as RFID tags, industrial controllers, sensor nodes and smart cards that are inherent in many Internet of Things devices.
The agency received 57 proposals for the project at the end of February, extending the submission timeline by a month due to the partial government shutdown, and plans to consider candidate algorithms at a public workshop in November.
The government's current encryption standards are largely designed for personal computers, laptops and other general purpose computing platforms. NIST officials believe new standards are needed to tackle a range of problems, from increasing reliance on connected devices to dissatisfaction with current identity and access management tools.
NIST will be able to rely on a rich catalogue of prior cryptographic research, Scholl said.
"The nice thing about the program is that many implementations and algorithms have a long history…unlike quantum where attack models are very new and different, lightweight is a more mature space," he said.
There's a problem with instructor-led education, and it's a problem at all levels, from first grade up through graduate school, whether it's public education or corporate training.
- People have different backgrounds
- People learn at different speeds
- People learn in different ways
This isn't a new problem, but it's a sticky one. Walk into any first grade class, and you'll find students who are lost because they weren't ready for the class or don't have the support they need, students who are at grade level and doing well, and students who are bored because the class is moving too slowly for them or they already know most of the material. And the same is true of large classes at any level.
The best teachers manage to engage students, teach to different learning styles, and differentiate as much as possible. But it's always a delicate balancing act between covering the material and giving students individual attention.
Computer-mediated education, particularly adaptive learning, solves this problem.
- Enables students to go at their own pace
- Provides help where students need it the most
- Allows extra time if a student needs it
- Allows students to go faster and learn more if they can
- Doesn't slow the rest of the class down or leave the rest of the class in the dust
In this way, the student's experience is individualized in a way that benefits everyone.
Adaptive learning works by tying assessment questions and teaching assets to discrete, granular objectives. Students who can breeze through the assessment questions get through the material quickly; those who struggle are shown more of the teaching assets so they can learn the material. Teachers are given data on which of their students are struggling and can use that to individualize education even further.
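The loop described above can be sketched in a few lines (the objective name, mastery threshold and window size are all invented for illustration):

```python
def next_activity(objective: str, answer_history: list,
                  mastery: float = 0.8, window: int = 5) -> str:
    """Pick a learner's next step for one objective: advance once the
    recent success rate shows mastery, remediate with a teaching asset
    after a miss, otherwise keep assessing."""
    recent = answer_history[-window:]
    if len(recent) == window and sum(recent) / window >= mastery:
        return f"advance past {objective}"
    if answer_history and answer_history[-1] == 0:  # last answer wrong
        return f"show teaching asset for {objective}"
    return f"ask assessment question on {objective}"

print(next_activity("fractions", [1, 0]))           # just missed one
print(next_activity("fractions", [1, 1, 1, 1, 1]))  # mastered
```

Each learner moves through this loop at their own pace, which is exactly what lets the fast students race ahead while struggling students get extra teaching assets.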
But there's also a problem with any computer-mediated module. In fact, it's a problem with all recorded media, even videos, textbooks, and scrolls.
Using recorded media, it is impossible to raise your hand and ask a question that the designer of the piece of media hasn't already thought of. Even Google won't show you the answer to a question no one has ever thought to include in their blog or post on YouTube. There are no TedTalks on subjects that no one has yet made a TedTalk about. But is that a problem?
Yes, it really is. And here's why: Learning is not just about adding discrete, known skills to a person's repertoire. It's partly about that, but it's also about making new connections between disparate skills, about asking questions no one has thought to ask before, about putting together new skills in a creative way to demonstrate competence. We don't yet have any computers powerful enough to help people do that. We certainly don't have standardized tests that assess that, and we aren't likely to for the foreseeable future.
Teachers can solve that problem, because teachers are able to respond to original questions and help students apply the content of the classroom to their own situation.
The future, at least for the next 30 years, belongs to schools, universities, and training companies that embrace both computer-mediated and human-mediated education, that combine live events with a human teacher, mentor, or coach with adaptive learning modules that allow students to go at their own pace.
The weaknesses of each approach are the strengths of the other approach. Which is why the organizations that blend both approaches are going to be the most successful over the next 20 years.
The biggest change is going to be in what teachers and instructors do. They are going to have to be more like mentors and less like lecturers.
Instructors won't be able to frog-march students through a programmed curriculum, reading PowerPoints and skipping through the interesting bits so they'll be able to finish Chapter 5 by the end of November. They won't be able to do that because AI will do that so much better.
Instead, instructors are going to have to fill in the gaps where adaptive learning and other technology-based approaches fall short, in engaging the curiosity of students, helping them answer original questions, and apply knowledge to real-life situations.
At Learning Tree International, we're partnering with Area9 to bring our expert instructor-led training together with Rhapsode, a state-of-the-art adaptive learning program. This blended approach is going to provide more opportunities for attendees to work at their own pace and get a more individualized experience in the classroom. That's what effective training is going to look like, at least for the next 20 years. | <urn:uuid:3fc6a063-23b2-4bc9-98f9-a76149dc3e3f> | CC-MAIN-2022-40 | https://www.learningtree.ca/blog/classroom-online-training-are-broken/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00199.warc.gz | en | 0.9619 | 1,090 | 3.1875 | 3 |
Public keys and private keys are the working parts of public-key cryptography. Together, they encrypt and decrypt data that resides in or moves across a network. The public key is truly public and can be shared widely, while the private key should be known only to its owner. For a client to establish a secure connection with a server, it first checks the server's digital certificate. The client then generates a session key and encrypts it with the server's public key. The server decrypts the session key with its private key (known only to the server), and the client and server use that session key to encrypt and decrypt messages for the rest of the session. In the case of email, the sender's private key signs the message, and the recipient uses the sender's public key to verify the signature. This is why the private key must be kept secret: exposing it paves the way for attackers to intercept and decrypt data and messages.
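The key-exchange and signing flow described above can be sketched with toy textbook RSA. Everything here is for illustration only: the primes are tiny (real keys are 2048+ bits), the XOR step stands in for a real symmetric cipher such as AES, and production code should use a vetted cryptography library rather than hand-rolled math.

```python
import secrets

# Toy RSA key pair -- tiny primes for illustration only; real keys are 2048+ bits.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent: the public key is (n, e)
d = pow(e, -1, phi)          # private exponent: the private key is (n, d)

# Client: generate a random session key and encrypt it with the server's public key.
session_key = secrets.randbelow(n - 2) + 2
encrypted_key = pow(session_key, e, n)

# Server: recover the session key with its private key.
recovered = pow(encrypted_key, d, n)
assert recovered == session_key

# Both sides now share the session key; XOR stands in for a symmetric cipher.
message = 42
ciphertext = message ^ session_key
assert ciphertext ^ session_key == message

# Email-style signing: the sender signs with the private key, and anyone
# holding the sender's public key can verify the signature.
digest = 99                  # stand-in for a hash of the message
signature = pow(digest, d, n)
assert pow(signature, e, n) == digest
```

The assertions capture the two properties the paragraph relies on: only the private-key holder can recover the session key, and anyone with the matching public key can verify a signature.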
Because they safeguard critical data, public-private key pairs, and the PKI more broadly, have to be managed with the utmost diligence.
Facebook has built its entire business model on knowing as much about you as possible. So it shouldn’t come as a surprise that, according to a team of medical researchers and computer scientists, your Facebook posts could be used to determine whether or not you will develop mood disorders in the future.
According to the study, users with schizophrenia or mood disorders are more likely to use curse words and emphatic punctuation in their posts. They are also more likely to use words related to pain and negative emotions. The researchers say these signals could be used to detect such illnesses more than a year in advance.
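The study's actual models and lexicons aren't reproduced here, but the kind of signal extraction it describes can be sketched roughly as follows. The word lists, feature names, and function below are invented placeholders for illustration, not taken from the paper.

```python
import re

# Hypothetical placeholder lexicons -- a real study would use validated word lists.
SWEAR_WORDS = {"damn", "hell"}
NEGATIVE_WORDS = {"pain", "hurt", "sad"}

def extract_features(post: str) -> dict:
    """Turn one post into crude rate-based features a classifier could consume."""
    words = re.findall(r"[a-z']+", post.lower())
    total = max(len(words), 1)  # avoid division by zero on empty posts
    return {
        "swear_rate": sum(w in SWEAR_WORDS for w in words) / total,
        "negative_rate": sum(w in NEGATIVE_WORDS for w in words) / total,
        "emphatic_punct": post.count("!"),
    }

feats = extract_features("That hurt so damn much!!")
```

In a real pipeline, features like these (aggregated over many posts) would be fed to a trained classifier rather than interpreted directly.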
“There is great promise in the current research regarding the relationship between social media activity and behavioral health, and our results… demonstrate that machine learning algorithms are capable of identifying signals associated with mental illness, well over a year in advance of the first psychiatric hospitalization,” said one of the study’s co-authors, Michael Birnbaum. “We have the potential to thoughtfully bring psychiatry into the modern, digital age by integrating these data into the field.”
Of course, there are also massive privacy concerns with a venture like this. Would users really be willing to have their mental health evaluated constantly just because they’re on Facebook? And would Facebook then be in charge of our private health data? There’s no doubt this technology could prove immensely useful, but there are many important questions to answer first.
Digital Contact Tracing Systems
Contact tracing is an essential method for mitigating disease. It has been used for many years in fighting pandemics.
Pandemics have been around forever. They have caused many deaths, damaged national economies, and changed human history.
Today we are in the middle of the most recent pandemic, and it too has changed our lives. We struggle to maintain mitigation methods such as wearing masks, identifying people with the disease, and contact tracing to control the spread.
This article reviews how contact tracing and other mitigation efforts have been used to fight the pandemic. It examines the historical, social, political, and scientific aspects of mitigating illness.
Contact Tracing in Pandemic History
One of the earliest known plague outbreaks occurred in the Late Neolithic–Early Bronze Age (around 3000 BCE) and resulted in mass population migrations. An article in Nature, "Bronze Age Skeletons Were Earliest Plague Victims," described how the study of Bronze Age skeletons verified the presence of plague in their DNA. The disease caused a massive exodus of people from the steppe of what is now Russia and Ukraine; they scattered west into Europe and east into Central Asia, fleeing contact with the sickness sweeping the area. Even then, people knew that contact with the sick was dangerous.
The deadliest plague in recorded history killed 100–200 million people worldwide in the 14th century. The article "The Black Death: The Plague, 1331–1770" notes that the best current estimates are that at least 25 million people died in Europe between 1347 and 1352. It took 150 years for Europe's population to recover.
Alongside medieval cures that included such things as bathing in one's own urine, physicians of the era used masks, contact tracing, and isolation to reduce deaths. The plague doctor, with his bird-like costume, was probably the first contact tracer.
Other major pandemics include measles, smallpox, syphilis, typhus, cholera, HIV/AIDS, the 1918 flu pandemic, SARS, and Ebola.
Throughout these pandemics, there has been conflict among politics, science, and the economy. During the Middle Ages, the Church attributed the plague to "supernatural forces and, primarily, the will or wrath of God." The response to the 1918 flu was shaped by World War I and the presidential election. Political leaders even tried to hide the existence of the pandemic for partisan or diplomatic reasons, perhaps to avoid disruptions to trade and travel. Inconsistent messaging to the public slowed the response to the epidemic and resulted in increased deaths.
Contact Tracing Provides Containment and Mitigation of Disease
The control of pandemics consists of containment and mitigation. Stanford University described the Public Health Response to the 1918 flu. During that outbreak, masks, contact tracing, and isolation helped return the country to normal. Other public health interventions, such as vaccines and therapeutic countermeasures, were used to mitigate and control the spread.
By learning from past experience and following the advice of scientists, we can control the outbreak of Covid-19. The strategy is to identify infected individuals through testing and body-temperature scanning, provide contact tracing, use masks, and isolate infected individuals to stop the spread of illness.
Contact Tracing and Wearing Your Team Colors
Many of us recognize that these mitigation methods are essential to keeping us all safe. Unfortunately, political influences have modified behavior. Like fans cheering for a favorite team and wearing its colors, some people refuse to wear a mask or object to contact tracing because of political affiliation. The conflict between tribalism and science continues to haunt us, even in the worst of times.
Digital Contact Tracing and Privacy
Some people are concerned that contact tracing, especially automated contact tracing, can intrude on their privacy. The newer digital contact tracing methods address this concern. The methodology is to register only the time one person spends close to another; the process records neither personal data nor the person's location. The system produces a list of possible contacts only when someone is reported sick.
Automated Contact Tracing System
The automation of contact tracing was inevitable. As with other manual operations, computers have made it far easier to trace all the people you have been near.
One method uses smartphones to provide contact tracing; Apple and Google have teamed up to provide this service. Because of privacy concerns, however, contact tracing through smartphone apps has not been widely adopted.
Another approach for digital contact tracing uses separate electronic tags rather than a smartphone. People carry these tags while they are inside an organization. The contact tracing system doesn’t use smartphones and provides better privacy. Conceptually the process is easy. The system uses contact tracing devices (tags) that detect when they are close to another similar device. Each tag keeps a record of contacts and the duration of time they were together. Information from each tag is then automatically sent to a database that holds the data from all these devices. To learn more, see the Contact Tracing System product description.
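As a rough illustration of that flow (the class and method names below are invented for this sketch, not any vendor's API), the central database might store only anonymous tag IDs and contact durations, which preserves privacy while still answering the one question that matters: who was near a sick person, and for how long?

```python
from collections import defaultdict

class ContactDatabase:
    """Holds anonymous contact logs uploaded by proximity tags."""

    def __init__(self):
        # tag_id -> list of (other_tag_id, seconds_together) records
        self.records = defaultdict(list)

    def upload(self, tag_id, contacts):
        """Each tag periodically uploads its contact log: no names, no locations."""
        self.records[tag_id].extend(contacts)

    def exposed_tags(self, sick_tag_id, min_seconds=900):
        """Return tag IDs that spent at least min_seconds near the sick tag."""
        exposed = set()
        # Contacts logged by the sick person's own tag.
        for other, seconds in self.records[sick_tag_id]:
            if seconds >= min_seconds:
                exposed.add(other)
        # Contacts logged by the other side of each encounter.
        for tag, contacts in self.records.items():
            for other, seconds in contacts:
                if other == sick_tag_id and seconds >= min_seconds:
                    exposed.add(tag)
        return exposed

db = ContactDatabase()
db.upload("tag-A", [("tag-B", 1200), ("tag-C", 60)])
db.upload("tag-B", [("tag-D", 2000)])
exposed = db.exposed_tags("tag-B")  # brief tag-C contact falls below the threshold
```

Note that the query scans encounters logged by both sides, so an exposure is found even if only one of the two tags managed to upload its log.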
Automated Contact Tracing Summary
Contact tracing is an essential method of controlling the spread of disease and has been used in many historic pandemics. History teaches us that it is crucial to follow the guidance of the medical community, and that political and religious conflicts can hinder the successful implementation of mitigation methods.
Contact tracing is only one of the methods used to reduce transmission of disease. There are other protocols that should be followed, such as wearing masks, social isolation, testing, and temperature scanning.
If you would like help selecting the right systems for controlling disease, please contact us at 800-431-1658 in the USA, 914-944-3425, everywhere else, or use our contact form. | <urn:uuid:50441461-bb64-46b9-b4eb-82a52f1f4a09> | CC-MAIN-2022-40 | https://kintronics.com/automated-contact-tracing-in-a-pandemic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00199.warc.gz | en | 0.949296 | 1,231 | 3.46875 | 3 |