- January 14, 2016
- Posted by: Kerry Tomlinson, Archer News
- Categories: Cyberattack, Industrial Control System Security, Posts with image, Vulnerabilities
A cybersecurity expert says he has found an all-too-easy way that hackers can get into plants and factories and cause trouble.
You need steam in a gas plant—no steam equals no energy. And you can’t get the water in to make that steam without water pumps.
Now a cybersecurity researcher says he has discovered that hackers can get in to mess with pumps like those, as well as other motors that keep our world moving.
The attack path, says Reid Wightman with Digital Bond, is simpler than you might think.
“Pretty darned easy,” he told Archer News.
The attacker “doesn’t need a big brain, just a medium-sized brain,” he explained to the audience at S4x16, a cybersecurity conference in Miami.
Wightman said some believe it is hard to damage equipment through cyber means, in part, because you would need knowledge about the equipment and the laws of physics to be able to make an effective attack.
“I think it’s the kind of perception that the industrial world has lulled itself into,” he said.
Wightman’s research focuses on the “critical speed” of a motor, the speed at which it starts to vibrate and could do damage. It is often not the highest speed of the motor, he said.
He found that he could break into the equipment that controls the motor and change the speed. On top of that, the same equipment showed him the motor’s critical speed, without any sort of cyber protection on the data. That means a cyber intruder could get inside each machine, quickly learn the critical speed, and set the attack in motion.
“The controller gives you the recipe for damaging the equipment,” he explained. “That definitely popped out at me. That’s the sort of setting that should be protected.”
Do not push this button
Other cybersecurity experts say this unprotected “recipe” is a security problem.
“It’s a big red button saying, ‘Do not push this button. Here it is,’” said Monta Elkins with FoxGuard Solutions.
“He was answering the charge that you can’t attack these systems because they are too complicated and poorly documented,” said Elkins. But, he added, “The knowledge you need is included in the thing.”
“It demonstrated how ubiquitous it is, how easy it is,” said Daniel Lance with Archer Security Group. “He was just showing the rampant, widespread ability to do this.”
What can this kind of attack do?
The attacker can set the motor at a speed that will cause “bad vibrations,” Wightman said. But he does not think the result would be catastrophe.
“No explosion. It’s not like the Second Coming or something like that,” he said. “I doubt that anybody’s going to get hurt by it. I don’t think it’s going to be a life safety issue.”
It could, however, slow down operations and cost money.
“It can cause damage to motor and surrounding equipment. It’s going to prematurely wear out the motor and it may cause vibration damage to the motor. It may cause vibration damage to nearby equipment,” he said. “If there are pipes nearby, it could begin making the pipes spring leaks.”
This kind of attack might be slow, Wightman said, but it could also fly under the radar.
“It takes a bit of time to do damage,” he said. “But it’s also pretty hard to diagnose.”
Under the radar
If the motor is vibrating in a destructive way, you might think someone at the gas plant, or water plant, or mining operation or manufacturing company would be able to spot the problem and stop it.
“You’re changing the motor speed. Isn’t somebody going to notice?” asked Wightman.
But he said diagnosing motor vibration issues can be difficult.
The motor may adjust to account for the vibrations, he explained.
“You can kind of trick operators into thinking their motors are running at the right speed,” he said. “You’re causing the motor to slow down, but the operator thinks it’s operating at the same speed.”
“There currently aren’t a lot of solid, reliable, high-integrity means to be able to detect this attack,” said Lance. “The existing ability to prevent or detect is currently quite low.”
The key, according to experts, is keeping track of the data from your motors, which many companies do not do.
“Is anybody logging these settings? Are you monitoring? Are you logging it somewhere?” asked Wightman.
If you are not keeping track of the data, you may not know what is “normal,” and what is “not normal” for running the motors, experts say.
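To make this logging advice concrete, here is a minimal sketch of a baseline monitor. Everything in it is hypothetical: `read_speed()` stands in for whatever protocol the drive controller actually speaks (Modbus, a vendor API, etc.), and the baseline and tolerance figures are purely illustrative.

```python
import time

BASELINE_RPM = 1750.0   # illustrative: the speed operators expect
TOLERANCE_RPM = 25.0    # illustrative: normal drift allowance

def read_speed() -> float:
    """Hypothetical stand-in for polling the drive controller
    over whatever protocol it actually speaks."""
    raise NotImplementedError

def monitor(poll_seconds: float = 5.0) -> None:
    while True:
        rpm = read_speed()
        # Log every reading somewhere other than the controller
        # itself, so "normal" is recorded where an attacker who
        # owns the controller can't rewrite it.
        print(f"{time.time():.0f} motor_rpm={rpm:.1f}")
        if abs(rpm - BASELINE_RPM) > TOLERANCE_RPM:
            print(f"ALERT: {rpm:.1f} RPM outside baseline "
                  f"{BASELINE_RPM} +/- {TOLERANCE_RPM}")
        time.sleep(poll_seconds)
```

The point of keeping the log off the controller follows directly from Wightman's warning: if the controller itself can be made to lie about speed, only an independent record lets operators see the discrepancy.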
“You need to understand what is the proper operational data,” said David Foose with Emerson Process Management.
He said plant workers often rely on each other to know what is “normal,” as opposed to tracking data that can show them a clearer picture.
“A lot of times, there are only a few guys who understand the plant as a whole. Plant operators rely on the guy on the shift before them to tell them what’s going on,” he said. “They’ve been trained that way.”
“Operators rely on the system graphics [control system information] along with the previous shift for situational awareness. They must trust the data they are given due to the speed and fragile nature of the tuning/process [of equipment],” Foose added.
Defeating the medium-sized brain attacker
Some companies with large electric motors connected to the internet need to make this security vulnerability a high-priority issue, Wightman said.
Vibration monitoring tools may help, experts say, along with the basic security defenses that plants should be using to stay safe from cyber attacks.
“Don’t trust your [automatic motor] controllers. You might think about instrumenting them with vibration sensors, and try to collect more data for your equipment,” said Wightman.
“It’s raising the bar. Our whole job is making it harder on people [attackers],” said Foose.
Malware is everywhere, infecting nearly one third of all computers in the world today.
It’s ready to do damage to you, your computer or your data in ways that seem to be limited only by the dark ingenuity of hackers.
Ransomware, a form of malware, can lock your files, or let hackers steal your data and threaten to release it unless you pay. Cryptojacking attacks can install software on your device that co-opts its computing power to mine cryptocurrency for hackers without your knowledge.
Viruses and worms can damage and corrupt your files, and Trojans can wreak havoc by sneaking into your system disguised as legitimate pieces of software. The possibilities are as endless as they are dangerous.
Perhaps the single most devious form of malware, however, is spyware. Spyware is any kind of malicious software that allows a hacker to listen in on, observe or otherwise gather data on an intended victim through an infected device.
Due to the increasing ubiquity of the Internet of Things (IoT) in seemingly all aspects of both business and daily life, the dangers posed by spyware, like the dangers posed by all forms of malware, have multiplied of late.
The IoT allows businesses and households to integrate all manner of devices — like computers and laptops, but also smart TVs, security cameras, thermostats, refrigerators, coffee machines and even pacemakers — together into one network.
Though this can offer impressive benefits on the side of ease of use and convenience, it also presents hackers with greater opportunities to potentially spy on you or otherwise do you harm, making effective network security all the more important.
Here’s some essential information about how hackers can use spyware to infect systems and what sorts of dangers spyware poses. In particular, you need to know about some of the new perils spyware can present in the age of the IoT. Most importantly, we’ll give you some crucial tips on how to protect yourself, your home or your business from this ominous menace.
Spyware and the IoT
Malicious hackers have a great number of ways in which they might try to infect your system with spyware. To give one common example, email phishing scams usually try to get you to part with or divulge some kind of important information — like a bank account or credit card number — under some kind of false pretext.
Perhaps someone may email you claiming to be in desperate need of money, or may tell you that you’ve won a cash prize and need to deposit a certain amount of money into a specified bank account before you may claim it.
Be on the lookout for all such scams and do not fall for them, as simply clicking on a link that you see in a suspicious-looking email may trigger the automatic installation of spyware on your device. In modern forms of spyware, this installation will be automatic and you will not be alerted to it. If you’ve opened any suspicious emails, therefore, someone may already be spying on you without your knowledge.
Since spyware typically needs to be installed directly on the target system or device before it can begin working, potential attackers will need to somehow gain access to your system or network before they can begin spying on you.
Since security precautions like encryption and the use of strong passwords (more on that below) are enough to keep out most potential attackers, sneakier moves that attempt to exploit the human element in online security — referred to as “social engineering” — have increasingly been the go-to tactic for those wishing to break into a person’s or company’s network.
Given this, connecting ever more devices to the IoT presents a few additional risks not encountered before. The more devices there are connected to a network, the more vectors there are for a hacker to try to break into that network.
Since all of the relevant devices on the network communicate with one another in at least some way, if a hacker finds a way to break into one of them, he will at least potentially have access to all of them.
If the particular device — say, the server with all of your company’s private customer files on it, or the smartphone whose camera can be used to watch you — is not within the hacker’s immediate reach, he can try to break into another device that forms part of your IoT network — like your office printer or the Amazon Alexa virtual assistant that you have at home — and try to pivot from there to get to the device he truly wants to compromise.
More interconnected devices also means more places for hackers to install spyware and more ways for them to surveil you and take your private data. Anything from your laptop and smart TV cameras, to the microphones in your computer or smartphone, to the cameras in your home security system, could potentially be used to spy on you.
In short, the IoT gives hackers more ways to watch you and requires you to be aware of more security vulnerabilities through which spyware can get an unwelcome foot into the door.
Who might try to attack you with spyware?
The list of those who might want to target you or your organisation with spyware is similarly long and varied, but you should keep all of these possibilities in mind and take steps to guard against them. Here are just some:
Jealous ex-lovers or spouses: This is distressingly common. A jealous and possessive ex-lover or spouse who just can’t let go may decide to hack into your devices to watch, stalk or harass you. He might take advantage of the IoT to watch you through your webcam or play with your smart thermostat or your lights to cause you distress. Luckily, there are steps you can take to fight against the commercial spyware that such people use, and you can even gather evidence of their stalking with which you’ll be able to go to the police.
Thieves: If you are wealthy, or work for a company or business that is, hackers will certainly have an interest in using spyware to extract and leverage important information. This information can be personal information with which you might be blackmailed, or company information and customers’ credit card numbers that can be sold by cyberthieves on the dark web.
Other companies (competitors): Corporate espionage, though highly illegal, does happen. Rival companies may attempt to install spyware on your private networks in order to watch your employees and try to learn crucial information, like trade secrets or anything else that may give them a competitive advantage against you.
This is one reason why it is crucial to implement the best cybersecurity protocols that you can, as the IoT only increases the opportunities for corporate espionage.
Governments: Even governments have been known to install spyware on the devices of their own citizens. Documents released by WikiLeaks show that the CIA and other parts of the US government have created hacking tools that allow them to surveil their own citizens through the smart devices connected to the Internet of Things.
Foreign governments could potentially do the same. To advocates of civil liberties, this has been a grave cause for concern and illustrates yet further that people must take active steps to keep their private data safe.
How Can You Protect Your IoT Devices From Malware?
With all of these dangers to your personal data or that data of your customers, what can you or your business organisation do to mitigate the dangers posed by spyware and the ways in which the IoT has enabled those dangers to multiply? Here are a few suggestions:
- Use a VPN
A VPN, or virtual private network, puts a layer of encryption between the devices on your personal home or business network and the broader internet. Not all VPNs are created equal; like any software, some providers do a better job than others at keeping you secure. The best VPNs today are valuable cybersecurity and data privacy tools, and using them is essential for proper protection against not just spyware, but all forms of malware in general.
- Use strong and unique passwords:
Make the passwords to all of your devices as long, complicated and difficult to guess as you can make them. Also, try to use different passwords for each device. The more work that hackers have to do to find your passwords, the less likely they will be to find them all.
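One simple way to act on this advice is to generate, rather than invent, a distinct password per device. Here is a minimal sketch using Python's standard `secrets` module; the length and character set are just reasonable defaults, not requirements.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def device_password(length: int = 20) -> str:
    # secrets draws from a cryptographically secure random
    # source, unlike the general-purpose random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per device, so a single compromise
# doesn't unlock everything else on the network.
for device in ("router", "smart-tv", "thermostat"):
    print(device, device_password())
```

In practice a password manager does the same job and also remembers the results for you.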
- Install the latest software updates for all of your devices
Software updates often patch security vulnerabilities that would otherwise leave devices open to attack and infiltration by hackers. Having the latest software updates for all of your devices will close off more avenues by which you could be attacked. In 2017, failure to apply a standard Windows security update left millions of computers open to a ransomware attack known as “WannaCry”.
- Implement a zero trust security strategy
If you run a business, zero trust security strategies — in which network access at all levels is highly restricted and segmented, and in which authentication is required for all forms of access — are indispensable to effective cybersecurity.
It’s Time To Defend Your IoT Devices From Spyware
As we’ve shown, spyware presents all sorts of dangers of its own — dangers of harassment, of your personal data being stolen and your privacy being breached, and many other things.
The IoT, for all of its advantages, sadly supplements these dangers and amplifies them even further. Fortunately, there are things that you can do to protect yourself and your business from spyware. You simply need to be aware enough and conscientious enough to do them.
Cyber attacks are nothing new, but the latest round of attacks has bamboozled the digital world. Hundreds of thousands of devices have been affected by the recent malware attacks. These types of threats used to target mainly corporate devices, but that’s not the case anymore. Now, pretty much everyone is at risk. Many digital users have neglected cyber security for so long, and they’re only slowly realizing its importance. If you’re still among those who don’t take digital security seriously, you should definitely know about the latest attacks that have targeted devices all around the world.
In order to protect yourself from the danger, you must first know the extremity of the danger itself. Continue reading to find out the latest and dangerous malicious threats:
By now, you may have gotten a deep insight into some of the highly dangerous malware threats that have attacked and affected thousands of devices in recent times. If you want to safeguard yourself from such attacks, you must tighten up your security defenses by using a good anti-virus program that offers an effective combat system against malware threats.
New Training: Implementing Controls to Mitigate Attacks and Software Vulnerabilities
In this 4-video skill, CBT Nuggets trainer John Munjoma discusses vulnerability controls. Watch this new CompTIA training.
Watch the full course: CompTIA Cybersecurity Analyst
This training includes:
20 minutes of training
You’ll learn these topics in this skill:
Implementing Vulnerability Controls
Man in the Middle Attack
Mitigating Attacks and Software Vulnerabilities
What is a Man in the Middle Attack?
Unlike highly visible and publicized ransomware and phishing attacks, MITM attacks often get little fanfare in the public space, yet they remain a very serious threat. By definition, a MITM attack occurs when the attacker positions themselves between a user and their desired destination (a site, SaaS application or other network resource) to either silently intercept data or impersonate a trusted resource to gain further access to the network. MITM perpetrators have countless ways to achieve their goals:
- Fake websites to trick users into providing legitimate credentials.
- Fake Wi-Fi spots in public locations to snoop on network traffic.
- Snooping on email, chat or other forms of web communication.
There is no one solution to preventing MITM attacks; prevention should be part of a cohesive security strategy rather than just a suite of tools. Organizations can help prevent MITM attacks by implementing strong password policies, enabling MFA, preventing the use of open Wi-Fi or unsecured networks, and enforcing secure browsing by verifying SSL/TLS on the websites being visited.
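As one concrete reading of "verifying SSL/TLS", here is a minimal sketch using Python's standard `ssl` module; the host name is just an example. With a default context, a connection to an impersonating or self-signed endpoint fails the handshake rather than silently proceeding — which is exactly the property that frustrates a man in the middle.

```python
import socket
import ssl

def fetch_cert(host: str, port: int = 443) -> dict:
    # create_default_context() turns on certificate validation
    # and host-name checking by default.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Raises ssl.SSLCertVerificationError if the chain
            # doesn't validate -- the handshake is the check.
            return tls.getpeercert()

print(fetch_cert("example.com")["subject"])
```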
New Training: IP Routing and Forwarding
In this 12-video skill, CBT Nuggets trainer Keith Barker walks you through the concepts of routing, such as static and dynamic routes, network address translation (NAT), and access control lists (ACLs). Watch this new networking training.
Watch the full course: CompTIA Network+
This training includes:
56 minutes of training
You’ll learn these topics in this skill:
IP Routing and Forwarding: Introduction to IP Routing
IP Routing and Forwarding: How to Train a Router
IP Routing and Forwarding: Options for Static Routes
IP Routing and Forwarding: Configuring Static Routes
IP Routing and Forwarding: Dynamic Routing Protocol Overview
IP Routing and Forwarding: Dynamic Routing Protocol Demonstration
IP Routing and Forwarding: Address Translation with PAT
IP Routing and Forwarding: One-to-one Translations with NAT
IP Routing and Forwarding: Using Wireshark to Verify IP Translations
IP Routing and Forwarding: Access Control Lists
IP Routing and Forwarding: ACL Demonstration
IP Routing and Forwarding: Enterprise Network Forwarding
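To give a flavor of the forwarding decision these videos cover, here is a small sketch of longest-prefix-match route selection using Python's standard `ipaddress` module; the routing table entries are invented for illustration.

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop).
ROUTES = [
    ("0.0.0.0/0",   "192.0.2.1"),    # default route
    ("10.0.0.0/8",  "10.255.0.1"),
    ("10.1.2.0/24", "10.1.2.254"),   # more specific, so it wins
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    candidates = [(ipaddress.ip_network(prefix), hop)
                  for prefix, hop in ROUTES
                  if addr in ipaddress.ip_network(prefix)]
    # The longest prefix (largest prefixlen) is the most specific match.
    _, hop = max(candidates, key=lambda item: item[0].prefixlen)
    return hop

print(next_hop("10.1.2.7"))   # -> 10.1.2.254
print(next_hop("10.9.9.9"))   # -> 10.255.0.1
print(next_hop("8.8.8.8"))    # -> 192.0.2.1 (default)
```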
What is NAT?
The internet works by connecting a lot of different networks together. Networks use IP addresses to send data back and forth. Those IP addresses tell networks where data is being sent to and from. There is only a finite number of IP addresses available, though, so not every device connected to the internet can have its own unique public IP address. This is where NAT comes in.
NAT stands for network address translation. It is a technique implemented at the edge of a network, typically on a router. Internal network-connected devices connect to routers for internet access. That router passes traffic from the internal network out into the public internet. NAT translates the internal network IP address to a unique, public IP address. Likewise, when data comes back into a network, the NAT process forwards that information back to the specific device that requested it. In this way, NAT keeps a running list of which devices are communicating with specific internet resources so data can be transferred from an internal device to the public internet and back again properly.
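Here is a toy sketch of that running list — a PAT-style translation table mapping internal (address, port) pairs to ports on a single public address. All addresses are illustrative documentation ranges.

```python
PUBLIC_IP = "203.0.113.5"   # illustrative public address

class PatTable:
    def __init__(self) -> None:
        self.out = {}        # (inside_ip, inside_port) -> public_port
        self.back = {}       # public_port -> (inside_ip, inside_port)
        self.next_port = 40000

    def outbound(self, inside_ip: str, inside_port: int):
        key = (inside_ip, inside_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out[key]

    def inbound(self, public_port: int):
        # Reply traffic is forwarded back to whichever inside
        # host originally opened this translation.
        return self.back[public_port]

nat = PatTable()
print(nat.outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)
print(nat.inbound(40000))                   # ('192.168.1.10', 51515)
```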
"If only we knew then what we know now" is probably a lament many epidemiologists contemplate when they try to find a way to combat the spread of a serious disease and contain an outbreak. Thankfully, doctors have a wide range of tools and instruments at their disposal to find how an illness spreads, who the culprits are and what the population can do to stay safe.
Not only does medical science have a hand in fighting epidemics, but data science does as well, especially big data and predictive modeling tools.
Learning lessons from the past
Epidemiologists already put big data and predictive analytics to use to try to stay one step ahead of an outbreak, Forbes noted. Doctors and scientists employed these data systems to fight the spread of both the Ebola and West Nile virus.
"The ability to predict where a fast-spreading disease could appear next is crucial."
Damian Mingle, a data scientist writing for HIT Consultant, said the World Health Organization and other institutions and doctors could use predictive analytics to combat Zika virus, which is spreading across South and Latin America.
Mingle worked with Chicago’s city officials to predict where West Nile – another disease spread by mosquitos – could strike next by collecting information on weather patterns and demographics across the city. Once they analyzed the data with an algorithm, they found the areas where the virus was most likely to spread. Predictive analytics gave city officials actionable data and made the process of controlling the virus faster, cheaper and more effective. City workers knew which sections likely had pools of standing water – a breeding ground for mosquitos – and developed a focused insecticide spraying campaign.
The ability to predict where a fast-spreading disease could appear next was crucial, especially since Chicago has nearly 3 million people spread out across 234 square miles.
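The article doesn't publish the model Mingle's team actually used, so the following is only a hedged sketch of the general approach: fit a classifier on weather and demographic features, then rank locations by predicted risk. The feature names and the synthetic data are entirely made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per city block:
# [avg_temperature, rainfall_mm, population_density]
X = rng.normal(size=(500, 3))
# Synthetic labels standing in for "virus detected here".
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Score new blocks and rank them, so spraying crews can be
# sent to the highest-risk areas first.
new_blocks = rng.normal(size=(5, 3))
risk = model.predict_proba(new_blocks)[:, 1]
for i in np.argsort(risk)[::-1]:
    print(f"block {i}: predicted risk {risk[i]:.2f}")
```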
Fighting illness with analytics
Big data and predictive analytics tools could help the many varied organizations fighting Zika and other rapidly-spreading diseases by getting everyone on the same page. Not only can these solutions pinpoint where the next threat could arise, but they can also speed up the pace of relief efforts, Forbes noted. If multiple organizations such as the Centers for Disease Control and Prevention, the World Health Organization, vaccine manufacturers and local and state governments share their data and use analytics, then it's possible illnesses such as Zika can be better contained.
Shintō Practice Today
A Shintō shrine is a jinja, a dwelling for the kami, the spirit or spirits. Shrine names end with -jinja.
You approach a shrine through a torii, a gate made from two upright columns and two crossbeams. A torii marks a boundary where you pass into sacred space. Or, multiple divisions of increasing sacredness, as some shrines are reached through a series of several gates. Here's an example at Kiyotaka Inari shrine in Kōyasan.
The path leads up the hillside through dozens of torii, taking you to an open area in front of the shrine. Then there is one last torii leading into the building. Shrines often have a fountain with ladles to clean your hands and mouth before entering. However, there isn't one at this small shrine.
As you approach the interior, there's a rope that you shake to rattle a bell at its top. Then clap your hands twice. As for the bell or rattle and the kashiwade, the double clap, some say that it's to summon and show gratitude to the kami. Others say that it's to ward off evil spirits. Again, there's no single consistent explanation. It means what people think it means.
The clap is quite old, a holdover from Japanese prehistory. A Chinese text from the 3rd century CE reported that the Wa, the Japanese pronunciation of the Chinese term for Japan and its people, clapped their hands during worship. This was five centuries before the Kojiki (or Records of Ancient Matters) and Nihon Shoki (or Chronicles of Japan) were written down, codifying Shintō practice.
There's a small offering box, where a coin with a hole is considered more auspicious. So, ¥5 or ¥50. There is often a table where you can leave food or drink for the kami. When the kami doesn't consume the food or drink, it goes to the keeper of the shrine.
There are often two guardian statues; here they are kitsune, foxes wearing red votive bibs. Foxes are believed to have magical powers as they are messengers of Inari, the deity of good harvest, and fox-messengers are common shrine guardian figures.
Often the guardians at Shintō shrines are actually Buddhist guardian deities, the Niō. The one at your left usually has an open mouth, while the one at your right has a closed mouth.
That's another Buddhist feature. They're pronouncing the Sanskrit letters अ or A, then म or MA, forming the sacred syllable ॐ or Aum.
Aum is the sacred sound of Hinduism. But it is used as a mantra in Buddhist, Jain, and Sikh practice.
The actual dwelling for a kami here in the inner chamber is a miniature shrine. It stands just a little taller than me. It has a miniature rope and bell above its entrance.
The zigzag streamer is a shide. They are made from paper, hemp fibre, or other material. A pair of shide streamers hang on a wooden wand called a gohei or onbe or heisoku. A popular ritual involves a Shintō priest waving this "lightning wand" to bless a person or object. It might be used to bless or ritually clean a newly purchased automobile or property.
The shide can represent the enshrined and invisible kami, and so it may be the focus of devotion. A twisted straw rope or shimenawa may hold several shide.
Spirited Away is filled with Shintō imagery and concepts.
Here is a bundle of many shide at Tōshō-gū, a shrine at the tomb of Tokugawa Ieyasu at Nikkō.
Palanquins or portable shrines called mikoshi carry the shintai, the "god-body", the central totemic object normally kept in the most sacred area of a shrine. Or, to use the honorifics, the o-mikoshi or divine palanquin carries the go-shintai or sacred god-body.
The shintai are not themselves part of the kami or spirit of the shrine. They are temporary repositories which make the kami accessible to human beings for worship. They are also yorishiro, objects that are capable of attracting and, in some sense, capturing kami so they can be "enshrined" or housed within the shrine.
The omikoshi are used to carry the goshintai on a tour of the community at a matsuri or religious festival, or to move it to a new shrine location, or to capture a kami somewhere and transport it to be enshrined at a newly built facility. The omikoshi physically protects the goshintai and hides it from sight.
The first picture below is at Tōshō-gū, a shrine complex at the tomb of Tokugawa Ieyasu at Nikkō.
The picture above is at a small temporary shrine along a side street in the Asakusa district of Tōkyō. Below we see the priests preparing, then conducting a ceremony at this temporary shrine. The group then walked together to a permanent location two blocks away.
When a child is born, the local shrine adds them to their record as a "family child". After their death this ujiko becomes a "family kami" or "family spirit". The name is added regardless of personal or family wishes. It is not intended as a means of imposing Shintō belief on an unwilling individual, but to indicate a sign of being welcomed by the local kami.
The shrine below is at Tō-ji in Kyōto. Notice the outer torii, the bell rope and metal donation box, the stone guardian koma-inu or lion-dogs with open mouth at left and closed mouth at right, and the offerings of sake and candy.
Also notice the shimenawa, the rice straw or hemp rope used for ritual purification and to ward off evil spirits. Shimenawa can bound a sacred space such as a Shintō shrine like this one.
Taimatsuden Inari shrine
The small Taimatsuden Inari shrine in Kyōto, seen below, is associated with a larger shrine. This one is in a small lot beside the end of a bridge over Kamo-gawa, the Kamo River. It actually has two shrines.
In addition to the two Shintō shrines, this complex also has a small Buddhist shrine. It's a small structure on top of the block with the swastika, next to the ablutions fountain. Large Buddhist structures where services are held are called temples. This, however, is just a shrine in the Buddhist sense, with a small statue of a Bodhisattva or other manifestation of the Buddha.
Small shrines are everywhere in Japan. Here is a pair, Buddhist on the left and Shintō on the right. This is along the Takase Canal in Kyōto. A small Shintō shrine is called a hokara.
The Buddhist one has swastikas on the base and in the frame above its doorway. It has a small statue of a manifestation of the Buddha behind the wooden grill. The Shintō shrine has no image.
Okazaki shrine in Kyōto was associated with the Emperor and Empress when it was founded in 1178 CE. The Emperor called for its establishment to protect the Imperial Court and expel evil from the four cardinal directions. Because of that, it is believed to house the god and goddess of dispelling evil related to the compass points. Because the Empress gave it a ritual offering after giving birth, it is also believed to house the god and goddess of easy childbirth.
Rabbits are considered to be the servants of the evil-dispelling deities. Because rabbits reproduce prolifically, and also because of the connection with the Empress's childbirth, this shrine has become quite popular with people wishing for children. They write their wishes for future births, or thanks for successful births, on ema, votive tablets or prayer plaques.
Rabbits flank the approach to one of the shrines along with the koma-inu or lion-dogs. All of them follow the Buddhist pattern of open mouth on the left, closed on the right.
A rabbit presides over the basin in the hand-washing shelter, which is lined with votive plaques. Shintō refers to the ceremonial purification rite as temizu, the water-filled reservoir as mizuya or chōzubachi, and the small shelter or pavilion as the chōzuya or temizuya.
Larger shrine complexes like this one sell charms or omamori, prayer plaques or ema, and other material.
When I was in Japan on this trip I heard far more Russian than English spoken by foreign visitors. But I saw more English wishes written on votive plaques than Russian. So, at least from a Bayesian point of view, English speakers are far more likely than Russian speakers to write wishes on votive plaques.
Shintō and the Emperor in Prehistory
The Kojiki and Nihon Shoki of 711-720 CE were only partially historical. They largely contain the mythology of the Emperor's ancestry. All the figures listed before Emperor Ōjin of the late 3rd century CE are legendary. Emperor Ankō of the 5th century was the earliest historical ruler of at least a part of Japan, and he appeared as the 20th Emperor in the legendary lists. Up to that time, Japan had been entirely clan-based.
The Yamato group of powerful clans had organized by the late 400s CE. They were each headed by a patriarch called the Uji-no-kami who performed rites honoring the clan's kami or spirit. The clans wielded power on a local to regional scale only.
One clan became dominant over the centuries. Its ruler was known within Japan as Yamato-ōkimi, the Grand King of Yamato. Chinese chroniclers called him Wakoku-ō, the King of Wa. As they came to rule larger areas of Japan they become increasingly aristocratic and militaristic. The supposed Japanese Emperor still petitioned China's leadership for the use of the title Ame-no-shita shiroshimesu ōkimi or Sumera no mikoto, the "Grand King Who Rules All Under Heaven".
In the 600s the Emperor came to be called Tennō, the Heavenly Sovereign, the title still used today.
Buddhism was officially introduced into Japan from Korea in 552 CE. Buddhism had spread from India into Central Asia, and then moved along the Silk Road after that travel network was established in the 2nd century BCE. The monk Zhang Quian traveled along the Silk Road between 138 and 126 BCE, and Buddhism was officially introduced into China in 65 CE. Buddhism next spread to Korea.
Buddhist monks from China visited Japan during the Kofun period of 250 to 538 CE, but those visits left few traces. The Chinese Book of Liang recorded in 635 that five Buddhist monks from Gandhāra had visited Japan in 467. In 552 CE, King Seong Myong of Baekje in Korea sent a mission to the Japanese Emperor Kinmei. The group included Buddhist monks and nuns along with an image of the Buddha and a number of sutras introducing and explaining Buddhism.
In 607 the Emperor of Japan sent an envoy to China to obtain more sutras. By 627 there were 46 Buddhist temples in Japan, staffed by 816 priests and 569 nuns.
Buddhism wasn't a practical movement. It wasn't for the masses. For its first few centuries in Japan, Buddhism was staffed by educated priests who prayed for the prosperity of the nation and the Imperial house. Uneducated and unordained "people's priests" ministered to the common people, practicing a combination of Daoist and Buddhist philosophy plus some local elements of shamanism.
Buddhism and Shintō became blended. Shintō shrines were built on the grounds of Buddhist temples, and vice-versa. Both were caught up in the power struggles of weak Emperors.
The Imperial capital moved from Nara to Kyōto in 794. Buddhist monasteries grew more powerful, to the point that some established armies of warrior-monks known as Sōhei. Buddhism and Shintō were both powerful.
A period of crises arose in the late 1100s. The Imperial house lost power as the samurai gained power. In 1185 the Kamakura Shōgunate was established. The Emperor stayed in Kyōto while power moved to Kamakura with the military leadership.
Formalized Shintō, associated with the Emperor, lost influence while Buddhism, more associated with the Shōguns, gained influence. Pure Land Buddhism and Zen Buddhism were introduced. They were quickly adopted by the upper classes, and then by the common people throughout Japan. Zen in particular appealed to the samurai and also had large effects on traditional arts. Pure Land is still the largest Buddhist sect in Japan, and Zen remains very popular.
Buddhism gained a great deal of political and military power. The Shōguns imposed strict control over Buddhism, lest it become so powerful that it took over. Meanwhile, Japan cut itself off from the world. By 1641 the only foreign contact allowed was a small colony of Dutch traders at Nagasaki, on the island of Kyūshū at the far southern end of the country.
The "Black Ships" and the Suppression of Buddhism
U.S. Navy commander Matthew Perry led the "Black Ships" fleet to Japan in 1853, demanding that Japan open its ports to foreign ships. A series of treaties beginning in 1858 opened Japanese ports first to U.S. ships and then to ships from other nations. Confidence in the Shōgunate dropped, made worse by worry over China's experience against Britain in the Opium Wars. In November 1867 the last Shōgunate was terminated and power returned to the Emperor in January 1868. That was called the Meiji Restoration. In 1869 the Emperor moved his court to the Shōgun's city of Edo, now known as Tōkyō.
With the return of the Emperor, Buddhism with its foreign origin was suppressed in favor of the native Shintō. The treaties with the western nations had insisted that Japan must have freedom of religion. First, the Japanese language needed a word for "religion" in the sense used by the other countries.
Japan said that Shintō wasn't really a religion, not in the sense that the treaties used that word. The Emperor's representatives effectively said, "People in Japan can believe whatever they want, so long as they accept the divine nature of the Emperor and take part in Shintō rites honoring the deified Emperors. But since Shintō is a Japanese national tradition and not a 'religion', we have full freedom of religion."
Shinbutsu bunri is the Japanese term for the separation of Shintō from Buddhism. Shintō kamis were no longer considered to be manifestations of Buddha figures. Buddhist temples were closed, monks were forced to return to lay life or become Shintō priests, and many books, statues, bells, and other Buddhist artifacts were destroyed. Shintō became a nationalist movement.
"State Shintō" was a term applied by the U.S. during World War II and continuing through Occupation. It was used to describe the ideological use of Shintō on a national scale after the Meiji Restoration of 1868, and especially after 1900. It never was a Japanese term. Again, see the ambiguity of the word "religion".
The official stance was that Shintō, as defined by the Kojiki and Nihon Shoki documents in the early 700s CE, had established that the Emperor was a descendant of the gods. The divine descent was a fact, not a matter of faith, and was taught as such in schools. Worship of the Emperor, which had never been a part of Shintō, was added to traditional Shintō practices after the Meiji Restoration.
Shrines were redefined as patriotic institutions, not religious sites, and later they were given local political functions. The government took control of training Shintō priests. Traditional activities that non-Japanese people would call "religious", like sermons at shrines, and Shintō funeral services, were prohibited. An estimated 200,000 shrines before 1900 had been reduced to 120,000 by 1914, by closing shrines that did not follow the national directives.
Surprisingly, the Roman Catholic Church went along with the state's explanation. An official publication in 1936 by the Propaganda Fide, the Society for the Propagation of the Faith, agreed with the state's definition. It said that visits to shrines had "only a purely civil value".
The Emperor was still subject to the recommendations of a council controlled by the military. The military controlled the council, the council advised the Emperor, and the Emperor was divine. Military factions now had a strong influence over national rule.
There were three Emperors during the years of Imperial Japan from 1868 to 1945. Emperor Meiji ruled 1867-1912, then Emperor Taishō 1912-1926, and then Emperor Shōwa 1926-1989. Japanese emperors are referred to by posthumous names after their death. The one we call Shōwa today was known until his death by his personal name Hirohito. But in Japan he was simply "The Emperor" throughout his reign.
Emperor Taishō (birth name: Yoshihito) had neurological problems. He succeeded to the throne in 1912 but was kept out of public view as much as possible. By 1919 he no longer carried out any official duties. Hirohito was named Prince Regent in 1921, and took over some of the official duties. In December 1926, Taishō died and Hirohito became Emperor.
The military's increasing influence over the government through the 1920s and 1930s led first to Japan's occupation of multiple Chinese territories, then the Second Sino-Japanese War which started in July 1937, and then expanded into World War II.
Emperor Hirohito was required to denounce godhood after the war. On New Year's Day in 1946 the Emperor issued a statement, sometimes called the Humanity Declaration, saying that he was not an Akitsumikami, a deity in human form, and he was not descended from the sun goddess Amaterasu and her brother the storm god Susano-o. It also said that the stories of the creation of Japan as described in the early 8th century Kojiki and Nihon Shoki were myth, not history.
Or at least that was the Western interpretation... The official English translation included this passage near its end:
The ties between Us and Our People have always stood upon mutual trust and affection. They do not depend upon mere legends and myth. They are not predicated on the false conception that the Emperor is divine, and that the Japanese people are superior to other races and fated to rule the world.
General Douglas MacArthur, the Supreme Commander of the Allied Forces, used the official English translation to promote the idea that the Emperor had admitted that he was not a living god.
However, the statement was phrased in a very stilted way, using the archaic formal language of the Japanese Imperial court. The Japanese people themselves struggled to understand it. Debate continues over the precise meaning of some unusual phrases, but it's clear that Emperor Hirohito did not really renounce the idea that the Emperor should be considered as a descendant of the gods.
In December 1945, the month before the Humanity Declaration, Hirohito said, "It is permissible to say that the idea that the Japanese people are descendants of the gods is a false conception; but it is absolutely impermissible to call chimerical the idea that the Emperor is a descendant of the gods." He, along with other critics of the U.S. interpretation, argued that the point was not to deny divinity. The Emperor started with a recitation of the Five Charter Oath of 1868, and the full statement was intended to make the point that Japan had already been democratic since the Meiji Restoration, and therefore the U.S. occupiers had not been the ones to bring democracy. The Imperial statement was published along with a commentary by Prime Minister Kijūrō Shidehara. That commentary addressed only the prior existence of democracy after the Meiji Restoration, and made no mention of any renunciation of divinity.
MacArthur and the U.S. State Department maintained the authority of the Emperor as the leader of the nation, in order to retain the support of the people during the years of occupation and reconstruction. But now he was just a hereditary Emperor, and not divine.
The U.S.-led Occupation Authorities, known as the GHQ, planned to demolish Yasukuni Jinja, a shrine in Tōkyō which enshrined Japanese war dead from conflicts after the Meiji Restoration of 1868. The GHQ planned to build a dog-racing track there. However, the Roman Curia persuaded GHQ to leave Yasukuni Jinja standing. In 1951 the Roman Curia reaffirmed its 1936 decision that Shintō rites were not religious. Nationalists enshrined several war criminals at Yasukuni, based on secret decisions made in 1969.
After World War II, Buddhism began a resurgence. There was a demand during and after the war for Buddhist priests to conduct funerals. With Shintō still being seen as a non-religious cultural practice that encourages national unity, the people wanted some traditional religious practice and Buddhism has filled that role.
Of course, America has The Apotheosis of Washington. This is a fresco in the dome of the United States Capitol Building that depicts George Washington ascending and becoming a god. He is surrounded by various Roman deities and mid-1800s technology — the goddess Minerva with an electrical generator and batteries, the goddess Venus helping to lay the transatlantic telegraph cable with an ironclad warship in the background, the god Vulcan with a cannon and steam engine, the goddess Ceres sitting on a McCormick mechanical reaper, and so on. I wouldn't want to explain the theological meaning of that to a foreign visitor.
Over a third of Japanese people now identify themselves as "Buddhist", and that number continues to grow, while almost no one calls themself a Shintōist. Over 90% of Japanese funerals are Buddhist.
Smart Cities Need Intelligent Infrastructure Powered by Smart Energy
The rollout of smart cities is well underway, however, to make a city truly smart all the individual elements need to work together, not just independently. Connecting these elements is the infrastructure which, like the IoT devices themselves, are full of sensors but with their own challenges to ensure they work successfully. These may be motion sensors, pollution sensors, parking sensor or moisture sensors to name just a few, and all require safe, reliable and energy efficient power.
Imagine traveling into a smart city in your autonomous car. With connected devices controlling the car and city, you can sit back and relax. But behind the scenes, there is a lot going on to make this possible. Lots of sensors are working flat out in both the city and car, to ensure everything runs smoothly. The car sensors ensure it can read reference points like roadworks, parking, and smart traffic lights. These, in turn, are full of sensors that can read the arrival of autonomous and non-autonomous cars. They gather data on vehicle movement and quantity to ensure smooth traffic flow through the city – safe and efficient movement being a key benefit of IoT smart cities.
On public transport, sensors on the rail tracks monitor where the trains are at all times ensuring smoother running of the trains and more up-to-date information for the passengers. In addition, sensors enable remote condition monitoring of tracks and points collecting data that will flag problems and maintenance issues before they become costly to repair.
Lampposts in a smart city not only light the streets but also provide the opportunity for city management to monitor them. The EU program Sharing Cities is trialing smart city technology in various European cities. It states that Europe's existing lighting network costs €3 billion a year to operate, and that installing smart street lighting could reduce electricity costs to €900 million.
Sensors on lampposts can detect movement so they only light when required thus providing cost and energy savings. Sensors will also provide maintenance and fault detection data in advance so engineers will only need to visit a particular lamppost when it needs maintenance rather than on a scheduled health check routine.
Smart energy management and smart building technology are also on the rise with a predicted increase of approx. 30% a year. Buildings are becoming more complex with interconnected IoT systems offering energy and cost-efficient buildings. IoT infrastructures are being developed to offer many benefits, including optimizing room occupancy, turning lights on/off as needed, monitoring assets movement and thus increasing security by knowing where people are located.
However, these automated systems will have more complex infrastructure requiring more communication technologies and wiring. Sensors will monitor the health of the building, providing predictive maintenance data. Especially in the case of retrofitting existing buildings, these can be in hard to reach locations creating their own set of challenges.
A smart city will also have elements outside the city. To ensure its population has safe, clean drinking water, wastewater, and water treatment plants will benefit from increased sensing and communication technology adoption. Sensors at these facilities will remotely monitor a range of equipment (such as water composition control testers), sending data back to a central control point. It’s not feasible to have engineers at each location full-time on the off chance there’s a problem and the time and expense of them proactively driving round to the different locations is both costly and inefficient.
So whilst it’s clear that smart cities offer many benefits, a major challenge is making it happen to reap the full benefits. Just thinking about each individual IoT device doesn’t work. We need to think about the infrastructure connecting the whole smart city and how it’s installed and maintained. Key to this is powering all the sensors that will be required in the infrastructure. If the sensors required keep powering down and need constant maintenance the city will never be truly smart. City officials don’t want to incur costs of frequently sending out engineers to change batteries powering the ever increasing number of sensors. They need a form of power that is as smart and intelligent as the devices being powered.
Cabling power to the sensors is costly and often impractical. Installing batteries can also be difficult as sensors can be located in hostile conditions where they need to function despite high temperatures, dust, oil and vibration. Smart lampposts need to be powered in an energy and cost efficient way that can handle the intense heat from the bulbs. Batteries powering sensors on rail tracks need to function despite being located in dirty, hot environments covered in oil and dust. Autonomous vehicles will have so many sensors that cabling isn't feasible, as the weight of the cabling required would be too great. This is especially key as there are global environmental targets to lower the weight of cars in order to make them more efficient and environmentally friendly. Batteries need to be lightweight and able to work in high temperatures.
In addition, they need to power the sensors for a long time as frequently changing the batteries can be very expensive negating the cost benefits of a smart city. One of the main goals of intelligent cities and buildings are to be more cost, energy and time efficient. A smart city isn’t smart if the elements that make it smart keep powering down.
Traditional batteries don’t have the lifespan required and vibrations can cause dangerous leakages. Some batteries can handle extreme temperatures but these are often large and heavy. Fortunately, battery technology innovation has moved forward. Solid state batteries are designed for powering wireless sensors in connected IoT devices. They offer:
- Long lifespan up to 10 years with minimal maintenance
- Efficient in hostile environments with extreme temperatures or humidity
- Scalable size from miniature to large scale
- Increased energy density
Smart cities need intelligent infrastructure to become truly smart and as invisible as possible. Infrastructure with ‘Fit and Forget’ powering that is safe, reliable and long lasting is vital to making this a reality. Then not only will all the connected devices work effectively and efficiently but will also produce accurate data that will be used to make the city even smarter. Solid state batteries offer the power required to make this happen.
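To put rough numbers on the "fit and forget" claim, here is a back-of-the-envelope sketch of duty-cycle battery life; every figure is illustrative rather than a specification of any product, and real designs must also budget for self-discharge and temperature effects.

```python
# Illustrative sensor-node figures, not product specs.
CAPACITY_MAH = 1000.0     # battery capacity
SLEEP_UA = 2.0            # sleep current, microamps
ACTIVE_MA = 15.0          # current while measuring/transmitting
ACTIVE_S_PER_HOUR = 1.0   # one second of activity per hour

def lifetime_years() -> float:
    active_frac = ACTIVE_S_PER_HOUR / 3600.0
    # Weighted average current in milliamps.
    avg_ma = ACTIVE_MA * active_frac + (SLEEP_UA / 1000.0) * (1.0 - active_frac)
    hours = CAPACITY_MAH / avg_ma
    return hours / (24.0 * 365.0)

print(f"estimated lifetime: {lifetime_years():.1f} years")  # ~18 years here
```

The exercise shows why duty cycle dominates: a sensor that wakes for one second an hour spends almost its whole life at sleep current, which is what makes decade-scale lifetimes plausible at all.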
This article was written by Denis Pasero of Ilika Technologies, a pioneer in solid-state battery technology. Denis joined Ilika in 2008, as a scientist specializing in battery technology, to manage commercial lithium-ion projects. Today, as Product Commercialization Manager, Denis interfaces between customers and technical teams.
Today is February 10, Internet Safety Day, and an Internet safety campaign called Be Smart wanted to know how kids behave online.
More than half of children online have done something 'risky' or something that's considered 'anti-social', a new study suggests.
The results of a poll, in which 2,000 children aged 11 to 16 participated, show that more than half of children in the UK (57 per cent) have done something 'risky', BBC reports.
Almost two thirds of children (62 per cent) said they'd been pressured into doing those things while around 20 per cent said they'd pressured someone else into such activity.
Those activities include sharing pictures or videos of themselves, saying bad things about others and browsing unsuitable websites.
Almost half of all the children surveyed (47 per cent) said they'd looked at sites their parents wouldn't approve of, while 14 per cent admitted to sharing pictures of themselves – photos their parents wouldn’t want them to share.
Andrew Tomlinson, the BBC's executive producer responsible for digital and media literacy, said: "Internet safety is becoming increasingly important as more families get online and children start to use tablets, computers and smartphones earlier in their lives."
In the meantime, a new mobile app is set to be launched later this year, which will give parents remote access to everything their children do on their smartphones.
Those features include movement tracking and monitoring of texting as well as web browsing. We’re not sure if the app covers Snapchat, but we'll assume it does.
I heard something really worrying yesterday – someone’s got a proof-of-concept that defeats TLS (previously known as SSL) encryption. Security researchers Thai Duong and Juliano Rizzo are planning to demonstrate this at Ekoparty in Argentina this week.
Fundamentally, compression works by removing repeated information found in the uncompressed data. Therefore if you have repetition, the data compresses better. By making a number of requests for differing data (like bogus image file names) you’ll know by the size of the compressed packet whether the unknown login cookie contains data repeated in the file requested, simply because the combined encrypted packet will get shorter. In other words, because the unknown cookie and known file request are compressed into the same packet, you can determine whether there is any repetition simply by comparing the size of the compressed data – when it gets shorter you’ve got a match. Apparently you need to make as few as six bogus requests for each letter in the login cookie to work out its contents.
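Duong and Rizzo's proof-of-concept isn't public here, so the following is only a toy reconstruction of the compression side channel described above, using zlib on plaintext. It ignores the real attack's encryption layer and the padding tricks needed to resolve ties in practice, but it shows why a matching guess compresses shorter.

```python
import zlib

SECRET = "Cookie: session=s3cr3t"   # what the attacker wants to learn

def observed_length(request: str) -> int:
    # Stand-in for the attacker's view: the length of the
    # compressed-then-encrypted record. Encryption hides
    # content, but not length.
    return len(zlib.compress((request + "\r\n" + SECRET).encode()))

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"
known = "Cookie: session="
for _ in range(6):
    # A guess that already appears in the secret compresses
    # best, so the shortest response extends our knowledge.
    known += min(ALPHABET, key=lambda c: observed_length("GET /?q=" + known + c))
print(known)   # with luck: Cookie: session=s3cr3t
```

The practical mitigation ended up being to disable TLS compression altogether; Python's `ssl` module, for instance, exposes `ssl.OP_NO_COMPRESSION` for exactly that purpose.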
Apparently this flaw doesn’t affect IE, but the other browsers were vulnerable until their developers were tipped off about it. Make sure you’re running a current version.
An annoyed user couldn’t fix his printer because the printer software's source code wasn’t available to users; that frustration is often credited with sparking the open source movement. Organizations have since saved billions of dollars and countless man-hours by collaborating and innovating on open source platforms. Open source software is used almost everywhere, and, most importantly, technologists take full advantage of it when the world needs it to solve humanitarian problems.
Here are some humanitarian crises that technologists have built open source platforms for, just to give a new life to those badly affected by it.
Syrian Crisis: 15 March, 2011—present
One of the deadliest civil wars in recent history, the Syrian Civil War has claimed hundreds of thousands of civilian lives and has displaced millions. The Syria Conflict Mapping Project, a joint effort of The Carter Center and Palantir Technologies, has analyzed open source information in minute detail and mapped over 70,000 conflict events as well as the movements of armed forces and civilians. Humanitarian organizations get this information directly through a software tool by Palantir, which can help them mobilize volunteers and aid workers to conflict zones.
2007 Kenyan elections: 27 December, 2007—28 February, 2008
During the disputed 2007 Kenyan presidential elections, riots broke out. More than a thousand people were killed and about half a million were displaced. Out of this crisis Ushahidi was born: a non-profit software company that developed a geospatial map of the riot-affected regions in Kenya. The information was collected via eyewitness reports submitted by email and text message. During the riots, Ushahidi posted several reports which were used by the international media, NGOs and government sources for further action. Ushahidi now uses its open source platform for various other humanitarian crises.
Ebola outbreak: December 2013 – January 2016
With around 29,000 reported cases of the virus in West Africa, the Ebola epidemic took almost 11,000 lives during the two years of the outbreak. A number of technologists came together to use open source platforms effectively. The Humanitarian OpenStreetMap Team tracked infected persons and mapped the locations with the highest concentrations of infections. Another open source platform, mHero, was used for contacting, informing, surveying, and polling health workers on information such as training materials, test results, and equipment.
Nepal Earthquake: 25 April 2015
The 7.8-magnitude earthquake, with its epicentre at Gorkha, and the multiple aftershocks that followed made up Nepal's worst natural calamity in decades. The disaster left almost 9,000 dead, 22,000 injured and 3.5 million displaced. Quakemap.org, an open source platform, collected around 2,000 reports in the aftermath with the help of people around the country. These reports were later used by the government and the Nepalese Army in relief distribution efforts.
Boko Haram: Active since 2002
On 15 April, 2014, Boko Haram, a Nigerian terrorist group, kidnapped 276 girls from the Government Secondary School in Chibok, Borno State, Nigeria. Fifty-seven girls managed to escape, but two years on, the remaining 219 girls are still missing. During this period, the Armed Conflict Location & Event Data Project (ACLED) has been tracking the movements of the terrorist group and examining how it has evolved over the years. The analysis from this project can be used by governments as well as military organizations, and could prove crucial to the defeat of Boko Haram.
A Bot is a computer connected to the Internet that has been surreptitiously compromised with malicious logic to perform activities under the remote command and control of an administrator.
A Botnet is a collection of computers compromised by malicious code and controlled across a network.
A Bot Master is the controller of a botnet that, from a remote location, provides direction to the compromised computers in the botnet.
What does this mean for an SMB?
Bots and botnets are pieces of malware that can infiltrate your company through phishing attacks or through weak remote access that is protected only by a password rather than two-factor authentication. To protect against bots and botnets, SMB owners should always ensure they have the following:
1. Deploy next-generation antivirus software and keep it up to date.
2. Enable two-factor authentication for access to your VPN, O365, GSuite, banking, and any other critical accounts.
3. If you have 1 and 2 in place, consider adopting a password manager across your company. These tools improve security and productivity.
With technology continually proving to be a competitive differentiator across global marketplaces, the California Department of Education (CDE) is exploring a variety of ways to promote the development of transferable digital skills in today’s youth. With that in mind, officials recently announced an ambitious plan to incorporate more computer-based assessments into statewide school testing protocols.
“Multiple-choice, fill-in-the-bubble tests alone simply cannot do the job anymore, and it’s time for California to move forward with assessments that measure the real-world skills our students need to be ready for a career and for college,” CDE superintendent Tom Torlakson explained. “The concept is simple but powerful: if our tests require students to think critically and solve problems to do well on test day, those same skills are much more likely to be taught in our classrooms day in and day out.”
Torlakson’s recommendations for implementing such a system ahead of the 2014-2015 school year came following a mandate from the state legislature calling for California’s standardized testing systems to fall into alignment with the new voluntary, national Common Core State Standards (CCSS). The CDE’s existing Standardized Testing and Reporting (STAR) Program is now scheduled to be phased out in July 2014.
While parents, educators and administrators have generally been amenable to the latest slate of proposed changes, there may still be much work left to be done to forge a smooth transition.
In La Canada Unified School District (LCUSD), the focus is already shifting toward logistical challenges. According to the La Canada Valley Sun, educators will need to plan for the procurement of hundreds of computers and associated classroom software licenses while boosting the bandwidth of the district's core networks. If these improvements are not made in time for the 2014 launch, schools may also be forced to explore phased testing protocols that can make do with limited IT assets.
Additionally, there are sure to be attitudinal shifts required by this new educational assessment paradigm.
“API (Academic Performance Index) scores, standardized tests and all that – it’s something that the community understands,” LCUSD governing board member Andrew Blumenfeld told the Valley Sun. “We’re very successful in that metric. We’re No. 2 in the state. Whenever you shake up a metric like that you’re going to have to rethink some things.”
What are the risks and rewards of embracing computerized testing programs within your district? What safeguards should be in place to ensure classroom technology investments stop short of the point of diminishing returns? Let us know what you think in the comments section below!
In a time of economic crisis, there tends to be an increase in the number of people that turn to criminal activity. Although petty crime is usually one area that shows a significant upswing, an additional form of criminal activity on the rise is fraud.
Before you can stop fraud, you need to know how to define it in order to properly identify it. Fraud is defined as the use of deception to obtain money or something else of value. Although typically carried out online, some fraudsters pursue the riskier physical fraud in which they interact with people face-to-face.
When fraud is carried out online, however, fraudsters can orchestrate an attack on a much larger scale, allowing them to sit back and wait for the goods to arrive.
Define and Identify
To identify fraud, there are some red flags that all businesses should be aware of. Some of the red flags include the following:
- Order velocities — Defined as multiple orders placed within the same day, hour or minute, typically appearing from one device, one address, one card or one user ID (a simple velocity check is sketched just after this list).
- Risky street addresses — Often, you can estimate the risk of fulfilling an order by using Google Maps Street View to look at the shipping address. If the address looks like an abandoned building, it is advisable to call and validate that the cardholder really made the purchase.
- Anonymous/free email accounts — These email accounts illustrate a higher percentage of fraud activity than those associated with a paid Internet service provider or a company email address.
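For illustration, the velocity red flag can be reduced to a sliding-window counter. The thresholds in this Python sketch (three orders per ten minutes, keyed on a device ID) are invented; real systems tune them per merchant and key on cards and addresses too:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Invented thresholds: flag any device placing more than 3 orders
# inside a sliding 10-minute window.
WINDOW = timedelta(minutes=10)
MAX_ORDERS = 3

recent_orders = defaultdict(deque)  # device_id -> timestamps of recent orders

def is_velocity_suspicious(device_id, order_time):
    history = recent_orders[device_id]
    history.append(order_time)
    # Drop timestamps that have fallen out of the window.
    while history and order_time - history[0] > WINDOW:
        history.popleft()
    return len(history) > MAX_ORDERS

now = datetime.now()
for i in range(5):
    print(is_velocity_suspicious("device-42", now + timedelta(seconds=i)))
# False, False, False, True, True -- the fourth rapid order trips the flag
```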
Types of Fraud
There are a number of different types of fraud. Here we provide you with a brief description of some types most frequently encountered within the e-commerce industry:
- Card-not-present fraud — Also known as “CNP fraud,” this is the basic form of fraud carried out online. A purchase can be made with just the card number; no physical card is needed.
- Gift Card Fraud (card purchased in store) — To avoid being caught by initial fraud screening technology, the fraudster pools together several small denomination gift cards to purchase a bigger ticket item online. Typically, the gift cards are purchased with stolen credit card information.
- Gift Card Fraud (card purchased online) — This type of fraud is frequently carried out using a fake email account. Since buying a gift card online requires only an email address to receive the confirmation code, a fraudster can purchase many gift certificates on one [stolen] credit or debit card and send the gift card credits to multiple email addresses. Typically, the fake email accounts are set up with free email services.
- Friendly Fraud — This type of fraud is carried out by someone who places an order online and follows up with a complaint. Usually stating that they never made the purchase or did not receive the merchandise, this is one of the most difficult types of fraud to detect since it crosses into both the online and physical realms. Because of friendly fraud, fraud will never be completely eliminated.
Fraud in the E-Commerce Industry
Fraud ranks as one of the biggest problems within the e-commerce industry. Fraud rings pose the biggest threat, as this technique utilizes the latest technology with one purpose in mind: get away with as much fraud as possible. Fraudsters are also getting better at disguising fraud ring activity, making it difficult for merchants to link related transactions and uncover the fraud. Many merchants rank fraud rings as one of the biggest challenges in fighting online fraud.
An additional emerging threat to the e-commerce industry is the challenge of m-commerce, or mobile commerce. Mobile device users are generally less protected when accessing a merchant’s Web site, frequently due to the merchant’s establishment of “light” versions of the Web site, ironically designed to attract more mobile users. Merchants typically have not yet considered the potential new security threat or established stronger user-authentication on this platform, and fraudsters know it.
At this point, you’re probably wondering if there is even anything that can be done to stop fraud before a company or a legitimate customer becomes a victim. There is. Although fraud may be one of the biggest threats to the e-commerce industry, there exist a number of solutions which focus on utilizing the technology and techniques that are readily available today. Depending on the type of goods/services that are sold, there are two approaches:
- Digital goods (such as music, software and video) — These items are delivered in real time, making it critical to assess the order quickly to determine the likelihood of fraud. Because the goods must be released almost instantly, it is recommended to fulfill any order not immediately deemed fraudulent and to re-screen it later, which enables a more thorough investigation. If, upon further investigation, the order is found to be fraudulent, the card should be credited back for the goods that were purchased. This protects the victim from the charge and the company from an eventual chargeback. (A sketch of this two-pass flow follows this list.)
- All other goods — Since these orders are processed and then scheduled to ship, there is time to allow the fraud detection screening system to fully assess the risk of an order, and then sort-out questionable orders for further review. With this system in place, fraudulent orders can be stopped before being processed. This protects the legitimate customer or fraud victim, and eliminates the fees associated with a future chargeback for the company.
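A minimal sketch of the two-pass flow for digital goods might look like the following Python snippet. It is an assumption-laden toy rather than any vendor's actual pipeline; the blocklist entry stands in for a real-time risk engine and is entirely made up:

```python
from queue import Queue

KNOWN_BAD_CARDS = {"4000-0000-0000-0002"}  # hypothetical blocklist entry
review_queue = Queue()                     # orders awaiting the deep re-screen

def quick_screen(order):
    # Real-time pass: only fast checks, so digital goods ship instantly.
    return order["card"] not in KNOWN_BAD_CARDS

def handle_digital_order(order):
    if not quick_screen(order):
        return "declined"
    review_queue.put(order)  # thorough review happens after release; a
                             # fraudulent finding triggers a credit back
    return "fulfilled"

print(handle_digital_order({"id": "D-100", "card": "4111-1111-1111-1111"}))
```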
Basically, to protect yourself and your customers from becoming victims of fraudulent activity, utilize every aspect of today’s technology to protect the e-commerce venue, including those offered by card issuers. Today’s leading technology enables the use of tagless/covert device ID, risk engines tuned for the environment they support, and link analysis tools for finding additional instances of fraud.
Every device with Web access leaves a digital fingerprint. With device ID technology, the digital fingerprint of these devices is captured and stored, enabling any Web accessible devices to be equally monitored among primary e-commerce orders for fraudulent activity. This information can then be referred to with link analysis; by linking similar transactions, it helps the company determine the risk-level associated with a transaction.
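Link analysis can be as simple as grouping orders on any shared attribute. The Python sketch below uses invented device fingerprints and email addresses; production tools link on many more signals (cards, shipping addresses, IPs) and score the resulting clusters:

```python
from collections import defaultdict

# Toy orders with invented device fingerprints and emails.
transactions = [
    {"order": "A1", "device": "fp-91x", "email": "a@example.com"},
    {"order": "B7", "device": "fp-91x", "email": "b@example.com"},
    {"order": "C3", "device": "fp-22k", "email": "b@example.com"},
]

def link_by(attribute):
    groups = defaultdict(list)
    for tx in transactions:
        groups[tx[attribute]].append(tx["order"])
    # Any value shared by several orders links those orders together.
    return {value: orders for value, orders in groups.items() if len(orders) > 1}

print(link_by("device"))  # {'fp-91x': ['A1', 'B7']}
print(link_by("email"))   # {'b@example.com': ['B7', 'C3']}
```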
It is fair to assume that with the proper tools in place, an enterprise can manually screen fewer than five percent of all orders while capturing upwards of 85 percent of all fraud (excluding friendly fraud). This also plays an important role in reducing the number of chargebacks.
It is important to note that there is no silver bullet to prevent fraud. Some type of fraud will always exist, as evidenced by the presence of friendly fraud. In order to protect both customer and company, it is best to implement a layered security approach to identify potential fraud first and then investigate orders that appear suspicious. This enables both a real-time and time-delayed system to be employed, in addition to human intelligence. This will assist you in achieving maximum security online.
Ori Eisen is founder and chief innovation officer of 41st Parameter, a fraud detection and prevention firm.
Remote access technology is essential in maximizing the efficiency of IT departments. It allows teams to provide support to businesses across significant physical distances. But as easy as it makes delivering tech support, this technology has also become an opening for remote access scams.
So what are remote access scams? In a remote access scam, hackers contact victims and obtain access to their computers by disguising themselves as tech support providers. Once they enter the network, they can start stealing money and other useful data that can be used or sold.
Organizations and individuals can protect themselves against hacking by looking out for suspicious calls and messages, restricting network access, regularly updating the system, and installing protection against malware.
What is Remote Access Hacking?
Remote access hacking happens when scammers and hackers break into vulnerable servers, devices, and networks. By gaining access to networks or devices, they can steal information like bank account details, confidential company information, and personal data. Although this kind of hacking is well known, people still fall victim to remote access hacking schemes.
What Can Remote Access Hackers Do to Your Computer?
Hackers essentially have full control of your computer after they enable remote access on your device. Remote hackers can install software that locks your screen, shares your desktop, or records your passwords through keyloggers.
Aside from utilizing the information for their own benefit, they can also use the access as leverage to demand a ransom payment, an attack more commonly known as “ransomware.” On top of holding crucial data ransom, hackers who get into work devices often sell the stolen assets and information on the dark web.
Common Types of Remote Access Hacking
To protect yourself against these scams and attacks, it’s important to be familiar with how remote access hacks work and what the scheme typically looks like. Here are three examples:
Remote Desktop Protocol (RDP)
Remote Desktop Protocol (RDP) is commonly used by organizations to allow remote work. But when one endpoint device is not secure, the whole IT system becomes vulnerable to cyberattacks. Hackers utilize online scanning tools to search for these vulnerabilities in the RDP endpoints.
The biggest problem with using RDP for remote work is that it cannot differentiate between bad and good actors once they enter the network. This makes it easier for hackers to collect company information without being detected.
Remote Access Trojans (RATs)
Aside from gaining access through stolen credentials, hackers may also use different malware types such as the Remote Access Trojans (RATs). This method involves phishing campaigns where hackers will send files or links to the client. Once the malware has been installed on the victim’s device, it gives the hackers back-door access to the private network.
In a remote work environment, victims might think a RAT is just another program required for the work-from-home setup, so it avoids detection by the employee or even administrators.
Automated Malicious Bots
More and more organizations are starting to use artificial intelligence and automated bots for different purposes. While they are certainly helpful for the company, hackers can also use malicious bots for ill intentions.
Malicious bots can scan websites, apps, and APIs for weak security spots they can utilize for entry.
How People Fall For Remote Access Hacking
Hackers employ different tactics to trick their victims. Spotting these tell-tale techniques as early as possible can keep you from falling for one:
Remote access scammers usually disguise themselves as computer technicians from well-known companies like Microsoft, Telstra, and NBN. They will tell victims that there is a problem with their computer; it might be a connection issue or malware infection. They might also talk in technical language to intimidate the victim into following the instructions.
They will pretend to run a diagnostic test and ask for remote access to the computer. Once they obtain the needed details, they will charge the victims for fixing a problem that doesn’t exist.
To become more familiar with how a phone call with a tech support scammer sounds, here is an excerpt from the Federal Trade Commission. The scammer claims that the computer is infected even though the warning message appears on most computers.
Scammers also lure victims in and make them call first by using false pop-up windows. These pop-up messages often look like error messages with logos from trusted websites or companies. It will warn victims about a security issue on the computer with a phone number where they could get help.
Here’s also a more detailed example from a YouTube video uploaded by Jim Browning. In this video, he calls a number from a company called TechKnacks IT Solutions LLC, which claims to fix computer problems. The scammers used remote access software that can freeze the screen and take over the victim's control of their computer. They also tried to make the victim pay $200 for fixing a problem that doesn't exist.
Online Ads and Listings
Some scammers also try to get their sites to show up in the search results when people look for tech support. They can also run their ads online, hoping that victims will call them for help. If you need tech support, it’s better to get them from a trusted company that you know.
Signs That Your Computer Is Infected
Most of the time, victims don’t realize the scam as it is happening. To see if your network or device has been hacked, here are a few signs to watch out for:
- Computer functions start moving on their own; your cursor starts moving without your control and applications might be downloaded without your consent
- People receive strange messages from friends and colleagues
- Some of your files have been encrypted
- Odd redirections to ads and dodgy sites pop up once the browser is opened
How to Prevent Remote Access Hacking
Remote access hacking can bring severe damages to the victims. To prevent these kinds of hacking from happening, creating a multi-layered security approach is necessary. Here are some methods that organizations and individuals can employ to prevent hackers from gaining access to their devices:
Looking Out for Suspicious Calls and Messages
If someone calls about a tech issue and asks for remote access, hang up immediately. Always remember that companies do not ask for credit card details over the phone to charge victims for fixing computer problems.
Some pop-up windows created by scammers also look realistic but keep in mind that a legitimate malware notice wouldn’t include a number to call. Aside from personal information and bank account details, hackers may also attempt to ask for remote access to the computer.
Restricting Network Access

There are many ways to restrict access: changing default credentials, preventing guest accounts from entering, and using two-factor authentication.
Changing the default username makes it harder for hackers to guess the credentials for the network, and using a password that combines characters, symbols, and numbers further limits access and minimizes the risk of network hacking.
Guest accounts can allow anonymous users to access the device and network. Disabling these accounts can protect the company against unauthorized users who can launch remote access attacks. With guest accounts, it’s easier for bad actors to install the malware in the device or the network.
Two-factor authentication is a verification process that effectively secures remote access networks. To access the device or network, users need to input their username and password, as well as a random security code that will be sent to the email associated with the account.
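For illustration, here is a minimal time-based one-time-password (TOTP) verifier of the kind many two-factor systems use, written against RFC 6238 with only the Python standard library. This is a sketch, not a hardened implementation, and the example secret is the well-known test value, never a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time, digits=6, step=30):
    # RFC 6238: HMAC the time-step counter with the shared secret.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32, submitted, step=30):
    now = time.time()
    # Accept the previous, current and next step to tolerate clock drift.
    return any(hmac.compare_digest(totp(secret_b32, now + drift * step), submitted)
               for drift in (-1, 0, 1))

secret = "JBSWY3DPEHPK3PXP"  # well-known example secret, not a real one
print(verify(secret, totp(secret, time.time())))  # True
```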
Updating the System
Frequently updating the system can also prevent unwanted access through known vulnerabilities. Aside from patching the flaws of the previous version, system updates also provide the latest features and functionality for added protection.
When upgrading the system, make sure that all needed data is still secure and the necessary software for company operations is still compatible.
Installing Protection Against Malware
Downloading good anti-virus software can also help the early detection of malware. Make sure that all anti-malware programs are updated according to the system and that they regularly scan the device and network. Early detection of malware can prevent a couple of remote access attacks from happening.
Before settling for antivirus software, consider the following:
- Download Protection
- Virus and Malware Scans
Implementing Vulnerability Scanning
Vulnerability scans test the networks and systems to find weaknesses. A complete scan will produce a detailed report about the specific weaknesses found in the systems and networks. Using these reports, IT experts can fix the vulnerabilities as soon as they are detected.
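At its simplest, the discovery stage of a scan just checks which services answer. The Python sketch below does a tiny TCP connect check against a few common ports (3389 is the RDP port discussed earlier). It is illustrative only and should be run solely against hosts you are authorized to test; real scanners go much further, matching service fingerprints against vulnerability databases:

```python
import socket

def open_ports(host, ports=(22, 80, 443, 3389)):
    # Tiny TCP "connect" check; a zero return code means the port answered.
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Only scan hosts you are authorized to test.
print(open_ports("127.0.0.1"))
```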
Protect Your Data Privacy with Abacus
Here at Abacus, we ensure that our clients are well-protected against malware and other tech issues that can potentially damage their organization. Our wide range of services can offer comprehensive protection and support for your remote access needs.
To avoid getting scammed by remote access hackers, contact a trusted provider like Abacus for all your IT solution needs. Learn more about us and how we can protect your data by visiting our website.
While the concept of the “smart city” has been around for more than a decade, it is only recently that city planners have started to think seriously about the impact of technological innovation on personal privacy and individual digital rights. With that in mind, a group of three world-class smart cities – New York City, Amsterdam and Barcelona – recently launched a new initiative known as Cities Coalition for Digital Rights that is specifically designed to protect the digital rights of their citizens. It is an important achievement because it is the first international agreement between cities that is designed to protect digital rights on a global, not just local, level.
What are the digital rights of residents in smart cities?
The idea of the Cities Coalition for Digital Rights is simple yet also profound: smart cities can be fully compatible with personal privacy and fundamental digital rights. One of the accomplishments of this Cities Coalition for Digital Rights is simply outlining the types of fundamental digital rights that cities should be protecting. The leaders from New York, Amsterdam and Barcelona settled on several fundamental digital rights that all cities should respect, including the right to Internet access, the right to privacy and data protection and the right to data transparency (including non-discriminatory algorithms).
The basic idea is that the same type of human rights that are enjoyed “offline” should also be enjoyed “online.” The goal, of course, is to protect citizens and create what some have referred to as a “human-centric digital society.” This is a civil society that is humane and respectful of human rights, but also innovative in terms of digital technology and digital media. In coming up with the fundamental digital rights and civil liberties that all cities should be respecting, the city leaders specifically referenced the United Nations’ “Charter for Human Rights and Principles for the Internet.”
In order to protect these digital rights, cities should be taking steps to build “trustworthy” and “secure” digital services. The goal should be to take a more proactive role in making sure that citizens are not snooped on without their consent, or their personal data used in ways they never expected. While cities may not be able to control private corporations that are based within their metropolitan area, they can control the public spaces and public infrastructure that are found within these areas. They can also ensure that the democratic process is open to all, and that they are protecting residents from external threats to their digital rights.
As the Chief Technology Officer of New York City remarked at the unveiling of the new initiative at the Smart Cities Expo World Congress in Barcelona, cities have a civic duty to protect all people from any threats to these digital rights. Specific threats include hate speech on social media, personal identity theft and so-called “black box” algorithms that unfairly profile people based on their racial or economic background.
The fundamental tension between technological innovation and digital rights
On the surface, all of these human rights and principles would appear to be rather non-controversial. The problem, however, is that the rapid pace of technological change is leading to the types of situations that city leaders in the United States and other Western nations never could have imagined. For example, the idea of putting sensors on city infrastructure to make a city more efficient seems like a good idea at first. If those sensors can help to detect when a bridge needs to be repaired, or when a street needs to be plowed of snow, that is a positive benefit for smart city residents. But what if those same sensors are used to eavesdrop on conversations taking place in public places? Or what if those sensors (or other digital technologies) are also collecting personal data on people without their knowledge?
One particular concern highlighted at the Smart Cities Expo World Congress in Barcelona is the problem of black box algorithms that city leaders are increasingly relying on to make their cities more efficient. For example, city police departments may use crime prevention algorithms to determine how to allocate their resources. The problem is that these algorithms might unfairly discriminate against low-income city residents living in specific neighborhoods of a city. In recognition of this fact, New York City has now convened a special task force on “algorithmic bias,” especially as it relates to the New York Police Department and the Office of Criminal Justice.
Responsible digital innovation in smart cities
Smart city leaders are also taking steps to promote what they refer to as “responsible digital innovation.” For example, a new project called DECODE is dedicated to helping smart cities to protect digital rights and personal privacy. And a new special report from Nesta, “Reclaiming the Smart City,” takes a closer look at how cities can be more responsible when it comes to protecting the data of their citizens. The report specifically looked at case studies from cities such as Amsterdam, Barcelona, Ghent, Sydney and Bristol, and covers topics such as access to information, intellectual property, open data and data collection.
One big question still facing leaders of smart cities is how to gather data from Internet users in consent-driven ways and then how to share information responsibly. After all, residents can’t “opt out” of a smart city, so the responsibility is on city leaders to come up with better approaches to common urban problems while also respecting fundamental freedoms. For example, the fight against crime and domestic terrorism is always a concern, but smart cities should not be constructing vast surveillance states to snoop on their citizens. Some smart cities, for example, now record private conversations in public spaces and use sophisticated facial scanning techniques using AI to find potential criminals or terrorists in a crowd as part of law enforcement initiatives. But where is the line between crime prevention and unwanted surveillance?
Principles and best practices for smart cities
With the emergence of any new technology – such as the Internet of Things or artificial intelligence – there will always be compromises that smart city leaders will need to take into account when they offer new digital services. The good news is that initiatives such as the Cities Coalition of Digital Rights point the way to a future where responsible collection, management and use of personal data from residents and visitors is the norm rather than the exception. At the Smart City Expo World Congress, a group of 42 other cities also agreed on the ten principles for a platform economy, and momentum appears to be growing around data privacy and data protection.
Further build-out of these initiatives – and the ability of cities like New York, Amsterdam and Barcelona to promote and track progress – will lead to the development of smart cities that respect privacy and the freedom of expression, and that provide plenty of options for residents to participate in democracy regardless of race, gender, or economic background.
Two provinces and two cities in China have included underwater data centers in their five-year plans, as a means to reduce emissions.
This marks a boost for the technology, which has been demonstrated by Highlander and is now being developed commercially at Hainan Free Port. The coastal authorities are backing underwater data centers, which can operate more efficiently by using seawater for cooling and can potentially be powered by local renewable energy.
Everyone looks to Hainan
The underwater data center concept was pioneered by Microsoft with its Project Natick which operated a 12-rack data center in a pressure vessel on the seabed off the coast of Scotland for two years until 2020.
China's Highlander launched a similar test project earlier this year holding four racks, and has since moved onto a commercial-scale project which will extend to a plan for a data center consisting of 100 such pressure vessels off the coast of Hainan Free Port, and powered with low-carbon energy from a local nuclear power station.
In May, the city of Hainan announced that this 100-module data center was included in its five year plan - the instrument by which every part of the Chinese state is organized.
In 2021, China published the fourteenth national Five-Year Plan since the People's Republic was founded in 1949. Every city and province has to produce its own five-year plan, covering the years 2021 to 2025, before the end of 2021.
Since the announcement of the Hainan project, the province of Hainan has included the project in its five-year plan, alongside plans for special corridors to allow submarine cables such as the Hainan-Hong Kong International Submarine Cable Project. Hainan is also proposing to provide 4G and 5G coverage in the South China Sea, using shore-based, island-reef, and ship-borne communications towers. Alongside this, it wants to improve satellite broadband service.
To build the submarine data center, Hainan will build shore stations, subsea high-voltage composite cables, subsea sub-power stations and subsea data cabins. The 100 data "cabins" described by the Port's plan are just the "first phase" and Hainan says it will gradually build "a comprehensive marine new technology industrial park with a submarine data center as the core."
To back work in the South China Sea, Hainan will also build a real-time 3D database and a data center to handle the background data of the South China Sea and support deep sea development and "South China Sea rights maintenance" (China's claim to the South China Sea is disputed).
Shandong, Xiamen, and Shenzhen follow
Shandong, a province further North is following suit with a promise to develop a "smart marine industry," with a similar 3D observation network, and much infrastructure including foreign communication networks, and optical fiber cables, as well as high-tech equipment such as "measuring robots, unmanned observation boats, manned submersibles, and deep-water gliders."
The Shandong five year plan is less specific about the size, but includes a goal to build submarine data centers.
Meanwhile, two other Chinese coastal cities, Xiamen and Shenzhen, have also included underwater data centers in their plans.
The move is likely to legitimize the idea of sea-bed water-cooled facilities, and may engender local competitors to Highlander, who may offer very similar projects, as intellectual property rights are less rigorously policed in China.
For that reason, Highlander may also be getting a call from Microsoft at some point in the future. Microsoft recently included many aspects of its underwater data center project in a raft of patents that have been opened for free use; however, the Highlander prototype is very similar to the Natick design used by Microsoft.
Underwater data centers may yet appear in other five-year plans, as other provinces and cities are due to publish their plans. Perhaps more importantly, individual national ministries handling industry and information have also yet to publish their plans, and these could include direct guidance on data center technologies such as cooling.
DNS resolvers all across the world suddenly ceased to resolve Facebook's domain names. It happened because DNS, like other Internet systems, depends on routing announcements. Whenever anyone enters https://facebook.com into their browser, a DNS resolver, which is in charge of converting names into IP addresses, first checks whether it has the answer in its cache and returns it.
If that doesn't work, it attempts to get the information from the authoritative nameservers operated by the organization that controls the domain. If the nameservers are unavailable or do not reply for any other reason, the resolver returns a SERVFAIL, and the browser displays an error to the user.
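As an illustration of that resolution path, here is a sketch using the third-party dnspython package (an assumption; install it with pip install dnspython). The exception branches mirror the failure modes described above:

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

resolver = dns.resolver.Resolver()
try:
    answer = resolver.resolve("facebook.com", "A")
    print([record.to_text() for record in answer])
except dns.resolver.NXDOMAIN:
    print("The domain name does not exist")
except dns.resolver.NoNameservers:
    # Every authoritative nameserver failed to give a usable reply,
    # i.e. the SERVFAIL condition seen during the outage.
    print("SERVFAIL: no nameservers could answer")
```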
The DNS resolvers could not reach the nameservers because Facebook had stopped announcing the BGP routes for its DNS prefixes. As a result, prominent public DNS resolvers such as 1.1.1.1 and 8.8.8.8 began issuing (and caching) SERVFAIL answers. Human behavior and application logic then took over, causing a second enormous impact: a flood of extra DNS traffic.
It occurred partly because apps would not accept an error as an answer and began retrying aggressively. Another reason is that end users also will not take an error for an answer, and start reloading pages or closing and restarting their apps, often repeatedly.
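What well-behaved clients should do instead is cap and space out their retries. Here is a minimal backoff-with-jitter sketch in Python (my illustration, not Facebook's or any resolver's actual logic):

```python
import random
import time

def fetch_with_backoff(fetch, attempts=5):
    # Retry a failing call with exponential backoff plus jitter, instead
    # of hammering the resolver the instant an error comes back.
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            # Sleep 1s, 2s, 4s, ... plus jitter so clients don't retry in lockstep.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("service still unavailable after retries")
```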
Since Facebook and other social networking sites are so large, DNS resolvers worldwide were suddenly processing many times as many queries, causing potential lag and timeout difficulties on other platforms. However, 1.1.1.1 was designed to be free, private, fast, and scalable, and it continued to serve users with minimal disruption.
People started looking for alternatives and wanted to learn more about what was happening. As a result, there was an uptick in DNS queries to Twitter, Signal, and other messaging and social media services while Facebook was down.
The unexpected events are a quick reminder that the internet is a vast and interconnected system with billions of algorithms and devices. It works for approximately five billion active users worldwide because of trust, standardization, and collaboration among entities.
The digital revolution, ubiquity of the internet, and rise of Big Data have given government an unprecedented capability to produce, collect, utilize, and disseminate a vast array of information and data.
These trends have ushered in a new era of data-powered government innovation and citizen services based on the undeniable value in making government data widely available – to citizens, activists, companies, academics, and entrepreneurs.
This is often referred to as the “open government” era, which thrives on government transparency, public accountability, and citizen-centered services.
Consequently, the last 20 years have seen a transformation of public policies – legislative, regulatory, and administrative – grounded in the philosophy that access to and dissemination of government data is a public right and that any constraints on access hinder transparency and accountability.
While there is broad recognition of the need to maximize access to government data, the types of government data are increasingly diverse and complex.
For instance, there are many cases where the government collects or licenses private sector data, often combining this data with other data produced by the government.
These data sets are often referred to as “hybrid data” or “privately curated data” – data licensed to or collected by the government that comprises both public and private sources.
Access to and use of hybrid data is increasingly critical for government to transform data into actionable information.
Examples of curated, or hybrid, data sets include the integration of traffic-app data with US Department of Transportation information, the incorporation of private geographic mapping software into local government flood tracking, the federal award infrastructure’s use of the Dun & Bradstreet D-U-N-S® Number to administer and oversee a $1.2 trillion federal grant market, and peer-reviewed scientific and technical literature that is based on government-funded academic research but published in the private sector.
Subjecting this full range of information to unfettered “openness” requirements risks the availability and quality of these valuable data-driven resources.
Such requirements will ultimately harm the public interest when the inevitable “tragedy of the commons” scenario compromises the quality of the data set, as private-sector actors begin avoiding these government partnerships for fear of losing control of their data.
Unfortunately, some current open data policies invite unintended consequences – specifically, well-intentioned but overly broad open data mandates that nullify intellectual property rights by extending to data produced in the private sector and collected by, or licensed to, the government.
In these cases, the pursuit of maximum data-driven transparency often conflicts with other important public-interest goals, such as rewarding data driven innovation, safeguarding individual privacy, protecting intellectual property, encouraging private-sector innovation, and promoting the government’s access to data-driven tools that enable smarter decision-making.
Therefore, policies and requirements for openness of government data must contend with these unique challenges and take care to avoid unintended consequences.
To be sure, resolving these tensions is not easy, as it requires the nuanced balancing of competing public interests (e.g., effective and accessible government versus open government), but it is possible – and urgent.
SpectreRSB leverages the speculative execution technique implemented by most modern CPUs to optimize performance.
Unlike other Spectre attacks, SpectreRSB recovers data from the speculative execution process by targeting the Return Stack Buffer (RSB).
“rather than exploiting the branch predictor unit, SpectreRSB exploits the return stack buffer (RSB), a common predictor structure in modern CPUs used to predict return addresses.” reads the research paper.
“We show that both local attacks (within the same process, such as Spectre 1) and attacks on SGX are possible by constructing proof of concept attacks.”
The experts demonstrated that they could pollute the RSB to control the return address and poison the CPU's speculative execution routine.
The experts explained that the RSB is shared among hardware threads that execute on the same virtual processor, enabling inter-process, or even inter-VM, pollution of the RSB.
The academics proposed three attack scenarios that leverage the SpectreRSB attack to pollute the RSB and gain access to data they weren’t authorized to view.
In two attacks, the experts polluted the RSB to access data from other applications running on the same CPU. In the third attack, they polluted the RSB to cause a misspeculation that exposes data outside an SGX compartment.
“an attack against an SGX compartment where a malicious OS pollutes the RSB to cause a misspeculation that exposes data outside an SGX compartment. This attack bypasses all software and microcode patches on our SGX machine” continues the paper.
The researchers said they reported the issue to Intel, as well as to AMD and ARM. They only tested the attack on Intel CPUs, but it is likely that both AMD and ARM processors are affected because both use RSBs to predict return addresses.
According to the researchers, current Spectre patches are not able to mitigate the SpectreRSB attacks.
“Importantly, none of the known defenses including Retpoline and Intel’s microcode patches stop all SpectreRSB attacks,” wrote the experts.
“We believe that future system developers should be aware of this vulnerability and consider it in developing defenses against speculation attacks. “
The good news is that Intel has already a patch that stops this attack on some CPUs, but wasn’t rolled out to all of its processors.
“In particular, on Core i7 Skylake and newer processors (but not on Intel’s Xeon processor line), a patch called RSB refilling is used to address a vulnerability when the RSB underfills,” the researchers continue.
“This defense interferes with SpectreRSB’s ability to launch attacks that switch into the kernel. We recommend that this patch should be used on all machines to protect against SpectreRSB.”
A spokesperson for Intel told BleepingComputer the Xeon maker believes its mitigations do thwart SpectreRSB side-channel shenanigans:
“SpectreRSB is related to Branch Target Injection (CVE-2017-5715), and we expect that the exploits described in this paper are mitigated in the same manner. We have already published guidance for developers in the whitepaper, Speculative Execution Side Channel Mitigations. We are thankful for the ongoing work of the research community as we collectively work to help protect customers.”
Microsoft Tuesday was awarded a patent on a new technology that may enable security applications to detect and stop malware before it enters the operating system.
In the patent, Microsoft inventor Adrian Marinescu describes a method for creating a virtualized sandbox in which the behavior of incoming executable code can be studied.
The technology would enable a software program to identify malware based on its behavior before it does any damage, rather than relying on post-infection signatures of malware that has already infected some systems. This approach may help mitigate the threats posed by the majority of new malware, which generally riffs on previously-written code.
"The virtual operating environment confines potential malware so that the systems of the host operating environment will not be adversely effected [sic] during simulation," the patent says. "As a program is being simulated, a set of behavior signatures is generated. The collected behavior signatures are suitable for analysis to determine if the program is malware."
The patent was originally filed in 2004. Microsoft has not said when or how the technology might be deployed in its product line.
Tim Wilson, Site Editor, Dark Reading
Using an innovative nanotechnology, IBM scientists have demonstrated a data storage density of a trillion bits per square inch — 20 times higher than the densest magnetic storage available today.
IBM achieved this remarkable density — enough to store 25 million printed textbook pages on a surface the size of a postage stamp — in a research project code-named “Millipede”.
Rather than using traditional magnetic or electronic means to store data, Millipede uses thousands of nano-sharp tips to punch indentations representing individual bits into a thin plastic film. The result is akin to a nanotech version of the venerable data processing “punch card” developed more than 110 years ago, but with two crucial differences: the Millipede technology is re-writeable (meaning it can be used over and over again), and may be able to store more than 3 billion bits of data in the space occupied by just one hole in a standard punch card.
Although this unique approach is smaller than today’s traditional technologies and can be operated at lower power, IBM scientists believe still higher levels of storage density are possible. “Since a nanometer-scale tip can address individual atoms, we anticipate further improvements far beyond even this fantastic terabit milestone,” said Nobel laureate Gerd Binnig, an IBM Fellow and one of the drivers of the Millipede project. “While current storage technologies may be approaching their fundamental limits, this nanomechanical approach is potentially valid for a thousand-fold increase in data storage density.”
The terabit demonstration employed a single “nano-tip” making indentations only 10 nanometers (millionths of a millimeter) in diameter — each mark being 50,000 times smaller than the period at the end of this sentence. While the concept has been proven with an experimental setup using more than 1,000 tips, the research team is now building a prototype, due to be completed early next year, which deploys more than 4,000 tips working simultaneously over a 7 mm-square field. Such dimensions would enable a complete high-capacity data storage system to be packed into the smallest format used now for flash memory.
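As a sanity check on the quoted figures (my own back-of-the-envelope illustration, not a calculation from IBM's paper), the Python snippet below works out the bit spacing that one terabit per square inch implies:

```python
NM_PER_INCH = 25.4e6   # nanometers per inch
BITS = 1e12            # one terabit

area_per_bit = NM_PER_INCH ** 2 / BITS   # ~645 nm^2 of film per bit
pitch = area_per_bit ** 0.5              # ~25 nm between bit centers

print(f"{area_per_bit:.0f} nm^2 per bit, ~{pitch:.1f} nm pitch")
# A ~25 nm pitch comfortably accommodates the 10 nm indentations.
```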
While flash memory is not expected to surpass 1-2 gigabytes of capacity in the near term, Millipede technology could pack 10 – 15 gigabytes of data into the same tiny format, without requiring more power for device operation.
“The Millipede project could bring tremendous data capacity to mobile devices such as personal digital assistants, cellular phones, and multifunctional watches,” says Peter Vettiger, Millipede project leader. “In addition, we are also exploring the use of this concept in a variety of other applications, such as large-area microscopic imaging, nanoscale lithography or atomic and molecular manipulation.”
The core of the Millipede project is a two-dimensional array of v-shaped silicon cantilevers that are 0.5 micrometers thick and 70 micrometers long. At the end of each cantilever is a downward-pointing tip less than 2 micrometers long. The current experimental setup contains a 3 mm by 3 mm array of 1,024 (32 x 32) cantilevers, which are created by silicon surface micromachining. A sophisticated design ensures accurate leveling of the tip array with respect to the storage medium and dampens vibrations and external impulses. Time-multiplexed electronics, similar to that used in DRAM chips, address each tip individually for parallel operation. Electromagnetic actuation precisely moves the storage medium beneath the array in both the x- and y-directions, enabling each tip to read and write within its own storage field of 100 micrometers on a side. The short distances to be covered help ensure low power consumption.
For the operation of the device — i.e. reading, writing, erasing and overwriting — the tips are brought into contact with a thin polymer film coating a silicon substrate only a few nanometers thick. Bits are written by heating a resistor built into the cantilever to a temperature of typically 400 degrees Celsius. The hot tip softens the polymer and briefly sinks into it, generating an indentation. For reading, the resistor is operated at lower temperature, typically 300 degrees Celsius, which does not soften the polymer. When the tip drops into an indentation, the resistor is cooled by the resulting better heat transport, and a measurable change in resistance occurs.
To over-write data, the tip makes a series of offset pits that overlap so closely their edges fill in the old pits, effectively erasing the unwanted data. More than 100,000 write/over-write cycles have demonstrated the re-write capability of this concept.
While current data rates of individual tips are limited to the kilobits-per-second range, which amounts to a few megabits for an entire array, faster electronics will allow the levers to be operated at considerably higher rates. Initial nanomechanical experiments done at IBM’s Almaden Research Center showed that individual tips could support data rates as high as 1 – 2 megabits per second.
Power consumption greatly depends on the data rate at which the device is operated. When operated at data rates of a few megabits per second, Millipede is expected to consume about 100 milliwatts, which is in the range of flash memory technology and considerably below magnetic recording.
The 1,024-tip experiment achieved an areal density of 200 gigabits (billion bits, Gb) per square inch, which translates to a potential capacity of about 0.5 gigabytes (billion bytes, GB) in an area of 3 mm-square. The next-generation Millipede prototype will have four times more tips: 4,096 in a 7 mm-square (64 by 64) array.
The most recent technical report on the Millipede project is published in the June 2002 inaugural issue of IEEE Transactions on Nanotechnology.
When transmitting documents, organizations don’t want sensitive information being intercepted and falling into the wrong hands:
A transmission breach can expose personal information like phone numbers, social security numbers and financial records which can then be exploited by hackers.
Proprietary business secrets can be stolen, resulting in the loss of millions to a corporation.
Inadvertently revealing customer records can result in unhappy – and even litigious – customers.
Medical and financial institutions can face considerable fines from governing bodies if security regulations – such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and the General Data Protection Regulation (GDPR) in Europe – are not strictly followed.
So which type of transmission is more secure: email or faxing? There are security benefits and risks to both email and fax.
Many people believe that emails are secure because their programs can be encrypted and there’s usually a login and password procedure. Computers can guard against cyberattacks through firewalls and anti-malware programs. Further, emails can be sent to specific individuals on their computers, unlike faxes which may transmit to a central machine in a highly trafficked space.
But there are still serious risks to using emails for transmitting sensitive information:
Emails can contain malicious attachments, such as spyware or viruses that circumvent firewalls and infect a recipient’s computer.
“Phishing” scams can dupe email recipients to providing passwords and other sensitive information.
It can be more cumbersome to send legally binding documents via email – and some governing bodies will not accept legal document transmission except through faxing.
Email accounts can be hacked, and sensitive information can be corrupted or stolen.
Filters can reroute messages directly to spam mailboxes, so a recipient may be waiting for a time-sensitive document and miss critical deadlines.
Emails get stopped at many “checkpoints” before they reach the recipient, such as firewalls, ISPs, servers, virus checkers and possibly even data-harvesting bots. At every checkpoint, there is a possibility of interception and hacking. And while the original email may be encrypted, copies are later saved, copied and forwarded without any form of encryption.
All fax transmissions use the Public Switched Telephone Network (PSTN), which ensures point-to-point transmission – making it more secure than email. A sent fax is converted into base64 binary at its source, travels via the PSTN and is then reassembled on the recipient’s fax machine. The PSTN is much less susceptible to hacking because an attack would require direct manual access to the telephone line. Even then, an intercepted file would appear to be simply noise and thus be unreadable.
While hacking is not a major risk, sending documents through fax technology does not eliminate all potential security issues. For instance, when using a traditional fax machine, simply keying in one wrong digit can send confidential information to an unintended destination. In one case, seven doctors’ offices in Texas accidentally faxed sensitive patient records to a local radio station.1 While a sender can confirm unknown fax numbers before sending, this security practice can become impractical for companies that may have hundreds of individual fax machines in use.
There are other serious risks to using faxes to transmit sensitive information:
All fax machines (including electronic or network-based faxes) use the same protocol. So, a fax sent from one machine can be received by any other fax machine – running the risk of interception before it gets to its intended recipient.
Faxed paper documents can be voluminous and difficult to catalog, file and store with ink and paper degrading over time.
Paper documents left unattended in a fax machine at either end of the journey become vulnerable and could be accessed by unauthorized individuals.
Paper documents can be stolen – from the physical fax machine, a person’s desk or a filing cabinet.
Working with traditional fax machines to produce secure faxes adds a burden to an already heavy workload for administrative staff. Because of this, many businesses are turning to web-based electronic (network) faxing. Network faxing uses faxing software and network fax servers to better ensure secure transmission of sensitive information.
Network faxing is designed to work with existing systems and use an organization’s existing network. It needs no dedicated phone line or fax machine. It needs no paper, no ink and no human monitoring. Network faxing enables staff to fax from Electronic Healthcare Record (EHR) applications, Project Management (PM) software, their desktop, from office applications by email, a Customer Relationship Management (CRM) platform and many other applications.
Network faxing eliminates many of the issues that traditional fax machines have in creating secure transmissions:
Faxes are received electronically, eliminating the problem of inadvertently leaving faxes on the fax machine for anyone to read.
Manual phone dialing is removed, so the risk of sending a fax with sensitive information to the wrong fax number is greatly reduced.
Cover sheets with legal confidentiality statements can be automatically programmed into an electronic fax.
No longer do faxes have to be scanned before being entered into various storage applications.
Network faxing software can securely catalog, index and archive faxes automatically – eliminating paper-based storage issues.
Network faxing, along with electronic archiving, enables easier tracking and retrieval of past faxes – creating an accurate audit trail of every fax.
Some network faxing software can monitor all types of communications and even block information from being sent if the transmission would violate regulations or company policies.
While there are security benefits and risks to both faxes and emails, faxing – especially network faxing – remains one of the safest methods of document transmission. In recent years, more companies have adopted network faxing because of the advanced features it provides and its even better security compared with analog faxing. Three security-related advantages that network faxing provides over email are:
Security for legal documents. Because emails are more vulnerable to fraud, manipulation, interception and hacking, it is much harder for them to be accepted as legally binding documents admissible in a court of law. A network fax is generally accepted in court as authentic and thus admissible, and business transactions typically accept faxed signatures as legally valid. Additionally, a network fax provides the convenience of sending faxes through email.
Higher level of encryption. As stated earlier, fax transmissions use the Public Switched Telephone Network (PSTN), which ensures secure point-to-point transmission and greatly reduces the possibility of hacking. Even with encrypted email services, generally only the message is encrypted – the subject line and the recipient’s email address are still vulnerable to exposure. Email encryption is also more cumbersome: the sender and recipient need compatible encryption software or a decryption key. Finally, stored emails are typically not encrypted.
Easier compliance. Businesses and other institutions increasingly face more scrutiny regarding data security through an alphabet soup of regulations – such as GDPR, MiFID II, HIPAA, PCI-DSS and FERPA. For instance, medical and financial institutions can face considerable fines from governing bodies if security protocols – such as HIPAA in the U.S. and GDPR in Europe – are not strictly followed. With network faxes, only the authorized recipient can receive them. Network fax server software can typically track where, when and to whom information has been sent, so organizations can more easily provide evidence to regulators that they are in compliance.
GFI FaxMaker is network fax server software that enables email-to-fax and fax-to-email for Exchange and other SMTP servers in a secure, encrypted environment. Faxing protocols make it nearly impossible to intercept a fax in mid-transmission – making it more secure than email. Electronic faxing with FaxMaker makes it easy to access this more secure protocol.
An organization can install the FaxMaker fax service as a physical, on-premises service with a standard fax modem; as virtual Fax over IP (FoIP) through a gateway or VoIP phone system; or as hybrid faxing with no on-site equipment, integrated with a cloud-based faxing system.
FaxMaker is not only popular because of its greater transmission security, but also because of its ease of use:
Users can sign in to the FaxMaker web client, fill in fax content on-screen, add attachments and simply click send.
FaxMaker allows users to fax directly through an email application. Simply start to compose an email and, in the “To:” box, enter a fax number with “@faxmaker.com” at the end. Fill out the subject line, add body content and attachments, and send (this can also be scripted; see the sketch after this list).
Incoming faxes pass through an OCR (optical character recognition) module that makes it possible to search in the fax body. This feature is useful when older faxes have to be retrieved.
It provides features such as Application Programming Interfaces (APIs), Short Message Service (SMS) alerts and digital signatures.
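Because FaxMaker accepts outbound faxes as ordinary SMTP mail addressed to number@faxmaker.com, the email-to-fax step described above can also be scripted. A minimal sketch (the SMTP host, sender address, fax number and attachment file are placeholder assumptions):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "[email protected]"                # placeholder sender
msg["To"] = "[email protected]"            # fax number + @faxmaker.com
msg["Subject"] = "Purchase order"
msg.set_content("Please find the purchase order attached.")

# The attachment becomes the pages of the outgoing fax.
with open("order.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="order.pdf")

# Relay through the organization's own SMTP/Exchange server (assumed hostname).
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```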
A companion to GFI FaxMaker is GFI Archiver. Businesses need fast, safe and efficient storage software for faxes, and archiving can all be done with GFI Archiver. The system allows for intelligent reporting and is already configured to run reports that comply with record confidentiality mandates.
By: Microtek Learning
Feb. 08, 2021
Last Updated On: Apr. 01, 2022
New technologies are developed every day in the 21st century, and just as quickly, strategies are devised to hack them. To protect personal and corporate information, organizations have established cybersecurity teams that guard their information, data, networks, devices and programs from damage or unauthorized access. Yet most organizations still have weaknesses in their servers that leave them exposed in the cyber world.
Earlier, organizations kept their data in paper records maintained by an appointed panel, which was held completely accountable for monitoring and maintaining all the data and information the organization received. Nowadays, data, information, networks and programs are all maintained digitally, and information can be extracted far more easily with the help of new, advanced technologies.
Cybersecurity is a term we hear thrown around often in today’s technological world. It is an extremely important topic, as it protects the data that is essential to our daily lives: bank accounts, medical details, work records, and perhaps even your e-mail. With the rising threat of cyber-attacks and cybercrime, it is essential for both businesses and individuals to implement cybersecurity measures to protect their information.
Cybersecurity concerns the measures or policies taken by a person, an organization, or a government to protect its data and systems from anything that might endanger them. We probably all feel familiar with the term; however, many of us may still find it puzzling, since so many related terms surround it, such as cybersecurity defence and cyber risk management. One thing is for sure: the emphasis is warranted, since according to experts a single cyber-attack can result in billions of dollars’ worth of damage through both tangible and intangible costs.
According to a 2020 report, 73% of enterprises shifted to specialized cloud servers because of the pandemic, 81% accelerated IT specialization and server modernization, and 32% of large enterprises are moving their servers to artificial intelligence, which automates tools and eases employees’ tasks. In short, everyone is realizing the importance of cybersecurity and gradually owning it.
As per Cisco’s cloud market analysis, 83% of all data center traffic will soon be based on cloud computing. This shift, compounded with the additional spending increases mentioned in Forrester Research’s report, will further boost the requirement for stronger cybersecurity solutions and measures in the years to come. Rising awareness of cybersecurity is also compelling organizations to educate their employees to safeguard them from attacks like phishing and ransomware, which are designed to seize intellectual property or confidential data.
Every organization has some defects in its information security that make it easier for a hacker to grab information and enter the organization’s servers, which can put crucial documents in the hands of a rival. As per new studies by IBM, the average cost of a data breach is USD 3.62 million, which for several enterprises is too steep a cost. Hence, we need to understand why organizations need cybersecurity experts, and the following reasons might help explain their importance.
Every organization wants to safeguard the important data and information that must remain confidential and never pass into the wrong hands. To prevent that from happening, the organization creates a panel that protects the data and keeps it confidential; the data is monitored day to day, and any required changes are made by the panel alone. These professionals curtail damage and threats while keeping your data protected.
Phishing, ransomware and other malicious software are prominent forms of malware that can steal your data, slow down your servers and stage attacks on your business and others; they are considered among the most dangerous malware in the marketplace. Employing cybersecurity professionals, or adopting an AI-specialized server, will help preserve your data: AI-based servers have built-in information security assurance, and the professionals will monitor all the data and servers that need protection.
Whenever there is an information security breach, most companies focus their attention only on the data and the cost, failing to notice the impact on productivity, which can cause severe issues in the future. For this reason, organizations keep a panel of certified professionals appointed to maintain and preserve all the information in the organization. This step should be taken at the very start to conserve and insulate information and strengthen the pillars of the organization.
Cyber attacks will remain at the very least an aggravating concern for the coming decades, which is why the importance of cybersecurity is set to keep worrying IT specialists and business experts for some time. It truly is one of those things that are as significant to our individual lives as they are to our enterprises, with safe and secure computer networks being a major prerequisite for doing commerce online. Cybersecurity professionals therefore do as much as they can to protect even the most fragile assets from damage, which protects the organization’s capital.
Every organization runs on one motto, that the customer is god, which means an organization survives only on its customers’ loyalty and confidence. If there is an information breach, that relationship becomes fragile: customers lose faith in the organization, degrading its goodwill and name in the marketplace. So, to uphold the organization’s brand, the company keeps a panel of professionals to retain that name and preserve customers’ confidence.
Cybersecurity is the practice of protecting computer systems from theft of, or damage to, their hardware, software or information. Organizations need to protect their websites from hacking and other cyber-attacks because one attack can cause a severe breach with wide-ranging effects. To secure your website you need to hire skilled cybersecurity experts who are up to date on the latest trends, know how to keep it secure, and can help you secure your place in the market.
The latest studies suggest that security job postings have grown by 74% and that security jobs take close to 24-25% longer to fill than regular IT posts because of the shortage of qualified candidates in today’s market. With the arrival of DDoS attacks and several other new vulnerabilities that have crippled internet connectivity in recent years, more and more trained experts are required to be on guard in cloud computing, cybersecurity and other strategically significant IT roles.
At Microtek Learning, we provide the simplest ways to level up your skill set, with 24/7 access to experts combined with project-based learning opportunities that enhance proficiency and competency. Our cybersecurity training courses offer superior-quality training in both business and technical skills to guarantee professional success.
By: Microtek Learning
Oct. 06, 2021
Last Updated On: Apr. 01, 2022
There are three main driving forces in modern data solutions: cloud computing, data, and artificial intelligence (AI). As we all know, cloud computing lets us access data and applications from anywhere and on any device simply by using the internet. What’s the benefit of using the cloud as a platform? It is scalable without large upfront investments, and it helps businesses improve how they use IT resources, which leads to savings. As a result, more than 90% of companies have switched to the cloud.
Azure is a cloud computing platform that provides solutions spanning Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and Software-as-a-Service (SaaS). You can use these solutions for storage, networking, analytics, or virtual computing. Azure services became popular during the pandemic because companies moved to remote working and started maintaining their data in the cloud. As a result, employees must become familiar with Azure concepts.
To validate your understanding of cloud service fundamentals, you can take the Azure Fundamentals certification. Cloud computing professionals are in demand nowadays; by earning this certification, you will be among the most in-demand employees in your organization.
These certifications are the foundation of a new career. They are the starting point for pursuing dream job roles like developer, AI engineer, technology manager, data scientist, data administrator, and many more. Obtaining certifications is always a good way to master the basics of the cloud. These certifications measure your potential and skills, including your ability to apply data and AI techniques using Microsoft Azure as a cloud service.
Data and AI are applicable in many sectors such as retail, finance and banking, as reports from Statista and LinkedIn confirm.
The benefit of the fundamentals certification is that both technical and non-technical people can take it; no prerequisites are required.
Basic computer knowledge will help you understand things better. As reported by Gartner, end-user spending on public cloud services worldwide was anticipated to increase by 18.4% in 2021 alone. Hence, there is high demand for professionals at big companies like Google, Microsoft and IBM.
You can also upgrade to Associate- and Expert-level certifications. They can give you professional growth over others, with industry-endorsed evidence of having the right skills.
At Microtek Learning, we have many ways to keep the learners updated with technology trends.
As a Microsoft Certified Learning Partner, Microtek Learning provides skills-based training taught by Microsoft Certified Trainers using the MOC (Microsoft Official Courseware) and labs. Each course is designed around a target job role, so the skills apply to the workplace immediately.
A pair of reports reveal a rising level of vulnerabilities and infections on the World Wide Web.
As e-mail security has improved, the Web now is the primary route used to infect computers, and the United States has the dubious distinction of hosting the most infected sites and having the most compromised computers relaying spam, according to two recent reports on Internet security.
WhiteHat Security Inc. of Santa Clara, Calif., reported in its Website Security Statistics Report that 82 percent of Web sites it examined had at least one vulnerability that could leave them open to attack and exploitation, and that 63 percent had vulnerabilities that are rated at high, critical or urgent severity.
WhiteHat Security uses the Web Application Security Consortium (WASC) Threat Classification for classifying vulnerabilities and the Payment Card Industry Data Security Standard (PCI-DSS) severity system to rate vulnerability severity.
According to the latest Security Threat Report from IT security company Sophos of Boston, the United States hosts 37 percent of the online malware, beating out China for the No. 1 spot. Between them, the United States and China account for nearly two-thirds of the malicious code hosted on Web sites. A whopping 97 percent of business e-mail is classified as spam, and compromised computers in this country also seem to be sending out a disproportionate amount of it—more than 17 percent of the world total, the highest amount for a single country, the company found.
“We would like to see the States making less of an impact on the charts in the coming year,” said Graham Cluley, senior technology consultant for Sophos. “American computers, whether knowingly or not, are making a disturbingly large contribution to the problems of viruses and spam affecting all of us today.”
The problem is a vicious circle, with visitors becoming infected by malicious code hosted on legitimate Web sites. Once compromised, the PC can be used to send spam which can contain malicious code or drive more traffic to infected Web sites.
Sophos said that new infected Web pages are appearing at the rate of one every 4.5 seconds, and its labs are receiving 20,000 new samples of suspected malicious code every day.
The SQL injection attack, which exploits security vulnerabilities to insert malicious code into the database running a site, has emerged as one of the primary ways of infecting legitimate sites. If data supplied to the site by a visitor is not correctly checked, the malicious code peppers the database with malicious instructions that can compromise subsequent visitors.
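In miniature, the flaw looks like this (a hypothetical lookup, not code from either report): when user input is pasted into the SQL text it can rewrite the query, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"   # attacker-supplied value

# Vulnerable: the input is concatenated into the SQL statement itself.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)    # every row comes back -- the injected OR clause always matches

# Safe: the input is bound as a parameter and never parsed as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)    # [] -- no user is literally named "x' OR '1'='1"
```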
WhiteHat reported that site operators are slow to fix vulnerabilities allowing such attacks. The time to fix identified vulnerabilities is in the range of weeks or even months. During the period of its most recent study, from Jan. 1, 2006 to Dec. 1, 2008, only about half of the most prevalent urgent security issues it identified were solved.
Exploiting the vulnerabilities in Web sites is becoming easier, as hacking becomes more automated, Sophos said. Tools use commercial search engines to identify potentially vulnerable sites and inject malicious code. Most of the sites are not being specifically targeted but are caught by automated tools.
At the same time, criminals are building more of their own malicious Web sites and using automated systems to plant links to these sites in legitimate blogs and forums, directing traffic to the malicious sites.
Another way of driving traffic to malicious sites is scareware. The bad guys set up phony Web sites offering malicious faux security scanning and tools, and then bring traffic into the sites with spam and other tricks to convince a user that a PC already is infected. Sophos reported seeing an average of five new scareware Web sites a day, with as many as 20 being seen on a single day.
With worsening economic conditions, things are likely to get worse before they get better, Cluley said.
“As we enter 2009, we are not expecting to see these assaults diminish,” he said. “As economies begin to enter recession, it will be more important than ever for individuals and businesses to ensure that they are on guard against Internet attack.”
Dates are a source of fructose, a natural type of sugar found in fruit.
Compared to similar types of fruit, such as figs and dried plums, dates appear to have the highest antioxidant content.
Dates contain compounds that bind to oxytocin receptors and appear to mimic the effects of oxytocin in the body. Oxytocin is a hormone that causes labor contractions during childbirth.
Dates are rich in vitamin A, which helps combat night blindness.
Dates have the potential to help with blood sugar regulation due to their low glycemic index, fiber and antioxidants.
Dates may also have brain-boosting properties, attributed to their content of antioxidants known to reduce inflammation, including flavonoids.
Technology constantly surrounds us. How many times a day do we come into contact with digital media or a connected device without even registering that we have? From mobile phones to laptops and tablets, technology has become deeply embedded in our lives and impossible to escape from.
In other words, we are firmly in the age of digital transformation, where technology impacts every facet of our lives, both personal and professional, and it shows no signs of slowing down.
As the pace of technology continues to increase and evolve, businesses are being faced with a new reality. One where our workforce is struggling to keep up with the speed of this change. In order for businesses to survive in this day and age, their employees must not only be equipped with the specific skill-set needed to excel in their career of choice, but also possess digital skills.
The UK is one of the fastest growing digital economies in the G20 and we must embrace cutting edge digital technology to up-skill employees. The marketplace is a jungle and only the digitally fittest will survive.
What are digital skills?
Broadly speaking, having digital skills means having the ability to use a broad range of digital devices, including computers, tablets and laptops, competently. GO.ON UK, a charity dedicated to improving the digital skills of people across the UK, uses a measurement framework covering the following topics to determine digital literacy:
- Managing information – find, manage and store digital information and content
- Communicating – communicate, interact, collaborate, share and connect with others
- Transaction – purchase and sell goods and services; organise your finances and use digital government services
- Problem-solving – increase independence and confidence by solving problems using digital tools and finding solutions
- Creating – engage with communities and create digital content
By being able to successfully carry out the above on digital devices, individuals are able to command a basic knowledge of digital devices.
Digital skills are something we can no longer afford to hide from. A recent report by GO.ON UK revealed that in the UK over 12 million people are falling into the digital skills gap. This is an astounding figure: almost a quarter of the UK population is in danger of missing out on the digital revolution. Not only does this affect individual employees and businesses, but it has major implications for the long-term future of the economy. If this issue is not addressed, how will our workforce be able to keep pace?
The report also highlighted that more than a million small businesses do not possess the basic digital skills necessary to succeed. From using a search engine to completing online transactions, many of the things we take for granted are a daily struggle for small businesses. For the UK to stay competitive with other leading economies, we need to ensure our businesses and employees are equipped to cope with digital technology; otherwise we will not survive.
What has caused the digital skills gap?
Simply put, technology has been the primary driver of the digital skills gap. Since the late 1980s there have been three different generations of technology users, and the latest are the millennials, who now make up the majority of the workforce.
The huge expansion of software is also playing a major part in the digital era and in driving the digital skills gap. The meteoric rise of the cloud has meant that technology is no longer always a physical entity; look at consumer services such as Netflix and Spotify – technology is selling a subscription, and business technology is no different. Take software, for example: to keep innovating and ensure the long-term investment of customers, these companies bring out product updates every few months. How many times in the last year alone has Apple launched a new iOS?
The rise of BYOD, coupled with the consumerisation of IT, has meant that we no longer work as part of a simple food chain, but instead as part of a more complex ecosystem. The days of businesses having one computer, in one office, with one Windows license have vanished. The evolution of technology has meant that employees might learn a digital skill today, but if they don’t constantly update this knowledge it will soon become useless.
According to Deloitte’s Global Human Capital Trends 2014 report, the knowledge we consume is doubling every year, and as a result the half-life of acquired skills is now only 2.5 to 5 years. Any new skill learnt today will be only 25 per cent as useful in five years as it is now. It is therefore imperative that organisations recognise their workforce’s need to be constantly learning and developing digital skills. This is where the importance of on-the-job training cannot be underestimated.
Particularly with the rise of BYOD, employees need to be able to learn on any device, at any time and from anywhere. Learning is no longer dictated from the top down; instead, more and more people are learning using a bottom-up approach, when they want and how they want. The future is self-directed learning, allowing employees to develop the digital skills and solutions they need, when they need them. Not only does it empower the workforce, it allows them to determine their own learning programme.
How do we stop the gap from widening?
In order to stop the digital skills gap from widening, it’s imperative that businesses ensure everybody has access to training. As mentioned previously, companies need to make sure their learning solutions are user-centric and mobile. All good solution providers guarantee employees are able to access training materials as and when they need them to improve their digital skills. Learning is now all about fostering an environment of self-reliance, where employees are able to incorporate development into their everyday lives.
Another important step toward helping employees improve their digital skills is understanding the importance of having HR and IT departments working in tandem. With the explosion of the cloud, technology has become a subscription and effectively an operational cost. It is therefore in the best interests of CIOs to educate HR managers and help reduce desk costs by employing a training solution that will reduce the number of unnecessary calls on computer-related questions. The success of any cloud-based training solution depends on the involvement of the IT department and its desire to help.
Additionally, it’s essential that businesses keep up with their software updates. Given the increased frequency of these updates, employees must continuously renew their digital skills and learn how new versions work and differ from previous ones. This requires businesses to proactively implement a future-proof training solution that is flexible, agile and constantly upgrading itself.
As we delve deeper into the digital era, businesses and their employees cannot afford to be outstripped by the pace of technology. Instead, they need to ensure they are standing toe-to-toe with it, continuously updating their digital knowledge to keep up with the pace of change. Organisations will be unable to take advantage of advancements in technology until they learn the importance of enhancing the digital skills of their employees. Investment in human capital is just as important as investment in technology.
A business is nothing without its employees. In order to survive in today’s environment, businesses need to ensure they are continuously developing the digital skills of their workforce.
Kevin Young, Vice President and General Manager, EMEA at Skillsoft
Image source: Shutterstock/Nomad_Soul
This function rounds a date or timestamp value to the specified unit.
- The following elements can be used as format:
- Rounding-up is as follows: years from July 1, months from the 16th of a month, days from noon, hours from 30 minutes, minutes from 30 seconds and seconds from 500 milliseconds. Otherwise, values are rounded down.
- If a format is not specified, the value is rounded to days.
- For data type TIMESTAMP WITH LOCAL TIME ZONE this function is calculated within the session time zone.
| Format | Description |
|---|---|
| YYYY, SYYY, YEAR, SYEAR, YYY, YY, Y | Year |
| IYYY, IYY, IY, I | Year in accordance with international standard ISO 8601 |
| MONTH, MON, MM, RM | Month |
| WW | Same day of the week as the first day of the year |
| IW | Same day of the week as the first day of the ISO year |
| W | Same day of the week as the first day of the month |
| DDD, DD, J | Day |
| DAY, DY, D | Starting day of the week. The first day of a week is defined by the parameter NLS_FIRST_DAY_OF_WEEK (refer to ALTER SESSION and ALTER SYSTEM). |
| HH, HH24, HH12 | Hour |
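To illustrate the rounding thresholds documented above, here is a small sketch that mimics ROUND(date, 'MM'), rounding up from the 16th of the month (an illustration of the documented rule, not Exasol code):

```python
from datetime import date

def round_to_month(d: date) -> date:
    """Mimic ROUND(date, 'MM'): months round up from the 16th."""
    if d.day >= 16:
        return date(d.year + (d.month == 12), d.month % 12 + 1, 1)
    return date(d.year, d.month, 1)

print(round_to_month(date(2023, 5, 15)))   # 2023-05-01 (rounded down)
print(round_to_month(date(2023, 5, 16)))   # 2023-06-01 (rounded up)
print(round_to_month(date(2023, 12, 20)))  # 2024-01-01 (year rollover)
```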
What’s the Difference Between Information Security and Cyber Security?
Whilst closely linked and often used interchangeably, Information Security and Cyber Security are not synonymous. On a basic level both focus on protecting sensitive company data through risk management and data security, yet looked at more closely, Cyber Security is a subset of Information Security rather than a synonym for it.
The difference lies in their names. Information Security is focussed on protecting the ‘information’ and ensures data is handled with confidentiality, integrity and availability across all mediums, whereas Cyber Security looks purely at online data and works to protect this data from online vulnerabilities and network weaknesses.
Information Security, more commonly known as InfoSec, protects data in any form. Keeping this data secure is an Information Security specialist’s main priority. Their role covers physical files (how they are handled, shared and stored) as well as online data. The scope of Information Security is expansive and falls into a broader category than Cyber Security, so one could be an InfoSec professional without being a Cyber Security professional.
Whilst InfoSec is all about protecting data in any form, Cyber Security is focussed on electronic data and protecting this data from outside sources on the internet. A Cyber Security professional is responsible for implementing strategies to protect a company’s data from vulnerabilities on the internet. It has been defined as “the ability to protect or defend the use of cyberspace from cyber attacks”.
With businesses shifting to third-party cloud servers and relying on a multitude of other network platforms, the amount of data stored online has never been greater. This has left businesses more vulnerable to internet threats and hacks. Implementing preventative strategies that protect against such vulnerabilities and against cybercrime is the role of Cyber Security professionals.
As every business goes digital, Cyber Security and Information Security have come together, and many businesses have found themselves with only a Cyber Security expert. Despite the digital shift, the role of Information Security specialists has not been rendered obsolete. The potential risks to physical data remain, and with that threat remains the need for robust strategies that encompass all aspects of Information Security, not only Cyber Security.
With this understanding, it is evident why these two sectors are perceived as synonymous. Despite their differences, both arenas are lucrative and growing. Whether you are looking for a cyber-security-specific role or something in the broader information security industry, begin your job search within these growing areas on CareersinCyber.com.
A server vulnerable to a BREACH attack (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext) allows an attacker to decrypt cookie contents such as session information.
Learn how you can prevent BREACH attacks below.
BREACH Attack Security Assessment Level
CVSS Vector: AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N
BREACH Vulnerability Information
The BREACH attack can be considered an instance of the CRIME (Compression Ratio Info-leak Made Easy) attack vector, as it is based on and largely follows its logic. It targets vulnerabilities arising from data compression in the HTTP protocol.
For a BREACH attack to be successful, several conditions must be met. Vulnerable websites must:
- Use HTTP-level compression
- Reflect user input (e.g., a username that is given from the login form) in the HTTP response body
- Contain a secret (e.g., a CSRF token) in the response body that is of interest to the attacker
A server vulnerable to BREACH attacks allows an attacker to decrypt cookie contents such as session information, including login tokens, email addresses, and other types of sensitive data.
This attack can be successfully executed in less than a minute.
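The core of the leak is easy to reproduce offline with any DEFLATE compressor: when the attacker-controlled reflection shares more bytes with the secret, the compressed response gets shorter. A toy sketch with zlib standing in for HTTP compression (no TLS involved; the token value is made up):

```python
import zlib

SECRET = "csrf_token=8f41ab"   # secret embedded in every response body

def compressed_len(reflected_guess: str) -> int:
    # The response reflects the attacker's input next to the secret.
    body = f"<p>You searched for: {reflected_guess}</p><input value='{SECRET}'>"
    return len(zlib.compress(body.encode()))

print(compressed_len("csrf_token=8"))   # correct next byte: usually compresses shorter
print(compressed_len("csrf_token=Z"))   # wrong byte: usually one byte longer
```

Repeating this measurement for each candidate byte recovers the secret one character at a time; in a real attack, padding tricks and repeated requests are used to resolve ties in the measured lengths.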
How to Prevent a BREACH Attack
Unlike previous attacks such as BEAST or LUCKY 13, this attack does not require SSL/TLS-layer compression and can work against any cipher suite. For this reason, turning off TLS compression does not affect the possibility of a BREACH attack.
The attack is easier to execute against stream ciphers because the size of the responses is easier to establish. Against block ciphers, however, attackers must work on aligning the output to the ciphertext blocks more precisely.
Technically, the easiest form of mitigation is disabling HTTP compression altogether, but this inflates the amount of data that must be transferred and is therefore usually not a viable solution.
Several ways of mitigating this attack exist. These include:
- Disabling compression only when the referrer is not the application's own domain
- Separating any sensitive data (i.e., secrets) from user input
- Protecting pages that contain sensitive information with a CSRF token, combined with the SameSite cookie attribute
- Hiding traffic length by including random numbers of bytes to responses (aka HTTP chunked encoding)
- Randomizing the token value in every response (see the sketch after this list)
- Limiting the rate of requests
- Monitoring traffic to spot attacks as they occur
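For the token-randomization item above, one approach used by several web frameworks is to XOR the real token with a fresh one-time mask and embed mask plus masked token together; every response then carries a different byte string, so its compressed length no longer correlates with an attacker's guesses. A minimal sketch:

```python
import os, base64

def mask_token(token: bytes) -> str:
    mask = os.urandom(len(token))                      # fresh mask per response
    masked = bytes(t ^ m for t, m in zip(token, mask))
    return base64.b64encode(mask + masked).decode()    # value embedded in the page

def unmask_token(value: str) -> bytes:
    raw = base64.b64decode(value)
    mask, masked = raw[:len(raw) // 2], raw[len(raw) // 2:]
    return bytes(m ^ x for m, x in zip(mask, masked))  # recover the real token

token = os.urandom(16)
a, b = mask_token(token), mask_token(token)
assert a != b                                          # looks different every time
assert unmask_token(a) == unmask_token(b) == token     # but validates the same
```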
To disable HTTP compression from requests with different referrers, use the following settings:
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0 no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|zip|gz|tgz|htc)$ no-gzip dont-vary

# BREACH mitigation
SetEnvIfNoCase Referer .* self_referer=no
SetEnvIfNoCase Referer ^https://www\.example\.org/ self_referer=yes
SetEnvIf self_referer ^no$ no-gzip
Header append Vary User-Agent env=!dont-vary
Possible BREACH Attack Solutions
HSTS – Secure Channels: Strict Transport Security
The server declares “I only talk TLS”
Example: HTTP(S) Response Header: Strict-Transport-Security: max-age=15768000; includeSubDomains
The header can be cached and also prevents leakage via subdomain-content through non-TLS links in the content
Weakness: “Trust on first use”
Server identities tend to be long-lived, but clients have to re-establish the server’s identity on every TLS session.
How could Google/Chrome be resilient to the DigiNotar attack?
Google built “preloaded” fingerprints for the known public keys in the certificate chains of Google properties into Chrome, thereby exposing the false *.google.com certificate that DigiNotar signed.
But, preloading does not scale, so we need something dynamic:
Could use an HTTP header, i.e. transmit the SHA-1 or SHA-256 hash of the SubjectPublicKeyInfo structure of the X.509 certificate. (You could pin to the end entity, an intermediary or the root; select your degree of precision.)
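For illustration, such a pinning header existed as HPKP (since deprecated by browsers in favour of Certificate Transparency); the hash value below is a placeholder:
Example: HTTP(S) Response Header: Public-Key-Pins: pin-sha256="<base64 SHA-256 of the SPKI>"; max-age=5184000; includeSubDomains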
Secure Channels: DNSSEC for TLS
DNSSEC can be used to declare supported protocols for domains
DNSSEC can be used to declare a server certificate for the domain
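This is what a DANE TLSA record provides. An illustrative record pinning the server's public key by its SHA-256 hash (hostname and hash are placeholders):
Example: DNS record: _443._tcp.www.example.org. IN TLSA 3 1 1 <SHA-256 hash of the server's SubjectPublicKeyInfo>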
Advantage: a trusted, signed source
Open source has played an important role in software development over the last thirty years. It also matters in some other areas, such as intelligence. Open source intelligence has become increasingly important – especially since 9/11, but recent wars in Syria and Ukraine have made it more well known. Can open source intelligence expand to become much more systematic in tracking and predicting world events?
Bellingcat is an investigative journalism organization that has become the best-known user of open source intelligence. Bellingcat started in 2014 by investigating weapons used in the Syrian war. It analyzed photos from the war, trying not only to identify the weapons and items in them, but also to confirm each photo’s location.
Bellingcat later became especially famous after it identified those responsible for the downing of Malaysia Airlines Flight 17, the Skripal poisoning, and the poisoning of Alexei Navalny. In those cases, Bellingcat combined information from many sources – not just pure open source intelligence, but also information from Russian passport and travel databases.
On the whole, open source intelligence has been the most important data source for Bellingcat. This open source intelligence consists of data from many sources, including social media updates, satellite photos, photos and information people are willing to share.
Most intelligence data is already public
But of course, open source intelligence is much more than Bellingcat. It even has its own acronym: OSINT. Some countries have laws and regulations for OSINT. For example, in the US, the law defines OSINT as “intelligence derived from publicly available information, as well as other unclassified information that has limited public distribution or access.” After 9/11, the CIA launched an open-source directorate. Now US spy agencies have a foundation for this kind of activity.
Collecting intelligence from public sources is nothing new – it has been said that during the Cold War era, 80% of information collected by the intelligence services came from public sources, like newspapers, media, public documents and public speeches. What has changed during the last twenty years is that technology has developed significantly to enable collection of a lot of data that was not available earlier.
Publicly available satellite photos and videos, radar information, social media content, web cameras, public government data, academic databases and many other sources have made a lot of new data available. Nowadays, basically anyone can use powerful tools to search and combine data from many sources. This is what makes open source intelligence such a significant development.
Is the world safer or more dangerous with open source intelligence?
Has open source intelligence made the world safer or more dangerous? Opinions are sharply divided. Some people say open source intelligence can hamper the secret diplomatic negotiations that have sometimes been important for solving conflicts. When all parties can see each other’s actions, they must make countermoves rapidly, which can escalate a situation quickly.
However, another opinion is that open source intelligence can deter parties from taking action – or at least let the public see what they are doing early, which consequently makes it harder for them to prepare something in secret. For example, last winter we saw public information showing that Russia had amassed a lot of troops and weapons at the Ukrainian border. Nonetheless, many parties didn’t want to believe Russia would (or could) actually start a large-scale invasion.
The Ukraine war is also an example of how military personnel become sources of open source intelligence when they publish information on social media. There are even examples of Russian soldiers publishing photos of the entire route from their military base to a battlefield. Such information could help anyone seeking to determine which troops are being used and how their logistics work. It also looks like some soldiers have published photos that could be used against them as evidence in war crimes cases.
Opportunities for more systematic models
Clearly, open source intelligence is already very important for investigative journalism and for intelligence services. But it can be much more in the future. Nowadays, a lot of this information is still analyzed partly manually.
With so much data available all the time, it is also possible to automate many analyses: detecting unusual events automatically and making various predictions based on the data. This in turn could expand the use of data and data analyses. For example, companies could better evaluate risks to their supply chains, investment funds could evaluate risks to their portfolios, and companies could take the latest information into account in their investment decisions.
This requires complex data models, as well as the ability to combine information from many sources and understand the dependency between different events and objects. But at least for certain purposes, this is already very feasible. It is more important to make sure that some parties can start developing this systematically and find good business models for it. It could be something like Palantir, but based more on open source software and intelligence, and more transparent.
Open source software has changed the software industry. Open source intelligence has become an important tool for investigative journalism and intelligence agencies. But when the use of data is automated better, open source intelligence can be applied to many other use cases, including business. There is so much information available in the world nowadays. The question really is: who can make better models and tools to utilize it systematically?
Aug 28 2019
By Scott Shadley.
SSDs have evolved over the past decade to meet the growing demand of AI and Edge-Related workloads and now computational storage takes intelligent storage to the next level.
Today, big buzzwords like “AI” and “Edge Computing” have taken the technology industry by storm, and to the casual observer it all seems very cool and “fashion-forward.” The truth is that a tsunami of data has taken over, and the ability to quickly process and analyze terabytes and sometimes petabytes of data in real time (e.g., in an hour versus a week) presents both a challenge and an opportunity for the computer storage sector.
The computer storage industry is evolving: it is not just about capacity and data recovery anymore, but about intelligently storing and analyzing data – in real time.
Here is the thing: the oft overlooked computer storage function has suddenly gone from being a “supporting player” to having a starring role in this new and interconnected world of AI and edge workloads. IoT devices are now generating five quintillion bytes of data every day (that is 5 million terabytes of data), and this will continue to grow as the number of IoT devices grows to 30 billion connected devices by next year (2020), according to Cisco.
Let’s take a real world example: today’s modern airliners, can generate up to a terabyte of data per flight. Even taking small snapshots of this data, which can reduce it to below a gigabyte per flight, is still far too much to transmit in-flight, so this massive amount of data must be analyzed at the edge if there is any chance of utilizing it in real time.
Another way “intelligent storage” is saving the day is by helping in life-and-death situations. One of the biggest worries a parent can ever suffer is losing a child in a crowd. Fortunately, the ability to track and find people keeps improving, and the use of cameras with facial recognition or object detection continues to drive these enhancements. However, AI is needed to manage these tools and the data they generate, and the need to store and analyze data across multiple cameras and angles requires intelligent storage.
The growing need to store and analyze data at the edge has spawned the need for intelligent storage solutions to solve the low power, more efficient compute needs without strain on the edge platforms.
Computer Storage Memory Lane
To meet the needs of intelligent storage, Computational Storage has emerged as a new approach that can deftly organize raw information into meaningful data. Computational Storage allows an organization to ingest as many bits as possible and churn out just the right information on command, in real time, at the storage level instead of in the CPU.
How did we get here?
Let’s take a trip down computer storage memory lane. SSDs, or solid-state drives, otherwise known as flash, went mainstream around 2005 with the debut of flash-based Apple iPods. Flash technology rendered clunky hard disk drives (HDDs) nearly obsolete for many uses: SSDs were more stable and longer lasting, with no moving parts that could break. SSDs were also able to reduce storage media latency and improve storage reliability, reducing the need for huge RAM buffers.
However, in just the past few years, as IoT and AI-powered devices have become standard, SSDs have needed to evolve.
That is where NVMe (Non-Volatile Memory Express) emerged, marking one of the first major developments for SSDs. NVMe is a streamlined, flash-focused interface that operates at much higher speeds, removing existing storage protocol bottlenecks for platforms churning out terabytes of data on a regular basis.
Is NVMe Enough? Welcome Computational Storage
Analyzing data today often means finding a needle in a large data haystack, and that is where Computational Storage has come in, strengthening the haystack by augmenting the CPUs. If an organization is trying to analyze a small portion of data from a huge data lake, processing can take days or weeks, even with high-capacity NVMe SSDs.
A recent survey by Dimensional Research of more than 300 computer storage professionals demonstrated that bottlenecks can occur at under 10 terabytes.
As such, Computational Storage provides more robust processing power to aid each host CPU, allowing an organization to ingest all the data it can generate but move only what is necessary, keeping the “pipes” as clear as possible. When raw data is needed for analytics, organizations have the freedom to pull out only what is needed, rather than dealing with the entire data set.
This approach is especially essential with high-capacity NVMe SSDs that require help to manage their data locality and storage compute needs. Computational Storage increases efficiency via In-Situ Processing for mass datasets, which reduces network bandwidth and is ideal for hyperscale environments, edge processing and AI/data applications.
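The bandwidth argument behind In-Situ Processing is easy to see in a toy model (pure illustration, not any vendor's API): if the drive can run the filter itself, only matching records ever cross the bus to the host:

```python
# One million small records, as they might sit on a drive.
records = [f"sensor={i % 50},temp={20 + i % 15}".encode() for i in range(1_000_000)]

# Conventional path: ship every byte to the host CPU, then filter there.
host_bytes = sum(len(r) for r in records)
matches = [r for r in records if b"temp=34" in r]

# Computational-storage path: the drive evaluates the same predicate
# in situ and returns only the matching records.
device_bytes = sum(len(r) for r in matches)

print(f"moved to host: {host_bytes / 1e6:.1f} MB vs {device_bytes / 1e6:.1f} MB")
```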
The data tsunami is not lessening – it is increasing in both volume and velocity. That said, storage architects must not look at data throughput as just physically moving or storing data, but rather as intelligently organizing it so that critical analysis can be done, not only to increase efficiency within an organization but also to save lives.
The moving parts of a data center are abundant. Data center managers have to oversee and supervise staff, large amounts of computer hardware, software, maintenance, inventory, and much more. There are many aspects to consider—and one of these aspects is density. Managing data center density environments can be challenging and complicated especially when you’re dealing with a mixed density environment. But what is a mixed density data center and what challenges might one deal with in this type of ecosystem?
Data center density is the electric power consumed by the operation. Density is measured by how much electric power is consumed per square foot of floor space, by the number of servers, and by the load on the cooling system. Increasing the power available to each cabinet means each cabinet can provide better performance. Some data centers now offer higher power, or higher density, servers. There are many advantages to a high-density data center.
Data centers where each cabinet consumes more than 10kW are known as high-density data centers. Data centers that provide higher power per cabinet can offer their clients a better deal than those that don’t: a data center that packs more power into the same space can charge clients less for buying or renting space in the facility.
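In concrete terms (illustrative numbers only), the same floor can carry very different loads depending on per-cabinet density:

```python
cabinets = 200
floor_sqft = 10_000

for kw_per_cabinet in (4, 10, 20):   # traditional, high-density, very high-density
    total_kw = cabinets * kw_per_cabinet
    print(f"{kw_per_cabinet} kW/cabinet -> {total_kw} kW total, "
          f"{total_kw * 1000 / floor_sqft:.0f} W per square foot")
```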
High-density data centers are efficient because, while they host more customers, the space remains the same, so data center managers have a smaller footprint to manage. This can save time and money in a couple of different ways.
A high-density rack will require more power and more cooling than a traditional one, but customers will need fewer racks when using a high-density setup. This will save both customers and the data center provider money. One of the disadvantages is that some data centers have slowly been going towards a high-density environment which is causing a mixed density environment. This can be challenging for a couple of different reasons.
The use of artificial intelligence has changed how data centers operate. Data centers are using artificial intelligence to make managing data centers easier and more efficient. AI is being used to help data centers manage energy consumption, reducing downtime, optimizing workload distribution, improving security, and in the future enabling unmanned automation. But artificial intelligence isn’t just helping data center efficiency. The world’s usage of artificial intelligence is also pushing data centers to become more powerful or higher density. Artificial intelligence requires a tremendous amount of data-crunching which is pushing data centers to adopt higher-density environments.
One of the problems with transitioning to high-density racks is that not all data center racks will be the same. There will be traditional racks (low and medium density) alongside the new high-density racks. Having a mixed density data center can affect cooling. Because high-density racks will generate more heat they will also need more cooling. Data center managers have a couple of different options.
The first option is to lower the temperature of the entire data center hall because the high-density racks require it. The second option is to incorporate different variations of cooling. They can deploy traditional cooling to lower-density racks and deploy liquid cooling to the high-density racks. Containment systems can also be used to give more flexibility when it comes to managing a mixed density environment. There are different alternatives to making this a possibility but will present some specific challenges along the way and may also require rebuilding a good portion of the data center’s operation.
The different alternatives to dealing with the challenges of a mixed density data center also present some challenges within itself, but one of the solutions could potentially be what is called split architecture. The challenge has been the growing number of high-density racks mixed in with the standard racks. Data centers must adjust their operations to fulfill the needs of high-density and low-density racks at the same time.
Split architecture in the data center space means it can accommodate servers and racks with different requirements. This could mean a couple of different solutions as discussed earlier. Split architecture can be something as simple as having the bulk of the data center using traditional air-cooling while a smaller section deals with the high-density portion.
Although artificial intelligence has forced data centers to adapt—artificial intelligence will also aid in managing a split architecture. Having varying densities inside a data center can become complicated, but the use of AI will simplify data center management.
It seems like an endless cycle. Artificial intelligence is causing data centers to change the way it operates, but at the same time it’s also artificial intelligence could also be one of the most promising solutions. What other ways are data centers implementing the use of artificial intelligence?
Data centers have many moving parts and numerous amounts of aspects to manage. One of these facets is security. Data runs the world, which means data centers are a prime target for any sort of cyber threat. Utilizing AI can help data center managers identify malware and recognize loopholes in the data center security system.
Data centers can potentially use a large amount of energy. AI can learn and adapt to the specific thing going on within a data center including analyzing temperature set points, assess flow rates, and gauge cooling equipment, and more. AI can help data centers reduce downtime by using predictive analytics to survey power levels and possible defective areas. It can also optimize servers, monitor equipment, and lastly help with automation.
Technology continues to advance because of and with the help of artificial intelligence. Data centers are now using high-density racks that can keep up with the usage of artificial intelligence. As data centers incorporate high-density racks in their operations the transition of a mixed data center is causing some challenges. There are a couple of different solutions to this issue, but one of the major reasons for this issue may also be its best solution—artificial intelligence. | <urn:uuid:0c3a2f52-45e3-472d-918b-c89d1ff0bb0f> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/mixed-density-data-center-challenges | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00723.warc.gz | en | 0.943441 | 1,168 | 2.671875 | 3 |
“Can Our Company Use AI to Lower Our Energy Consumption?”
Yes, it can. Companies can lead the way in reducing wasted energy, and AI can help them.
How much energy can your business afford to waste?
None, of course. But how much power does your business waste?
More than you want to waste, of course.
You can’t know how much, but it could be quite a lot depending on your sector. About 30% of the energy used in U.S. commercial buildings is wasted. With end-use efficiency estimated at 65% for retail, 49% for industrial, and only 21% for transportation, your business is probably wasting at least a third of the energy it pays for.
Energy is a perennial source of overlooked costs for companies with manufacturing facilities, complex supply chains, and numerous locations. Because it’s big money, energy-intensive companies try to reduce consumption with techniques like buying more efficient machines or turning the lights off at the end of the day.
But what if companies like yours could use artificial intelligence (AI) to reduce energy consumption? What if you could use automation and advanced analytics to help uncover ways to become more energy-efficient in areas it never occurred to you to look before?
Applying AI and analytics to minimize wasted energy
You can connect to data on your energy usage using automation and analytics, even when it’s siloed in different company areas and other formats. Bringing multiple streams into a single one is how you collect enough data for the truly robust modeling and optimization needed to address the problem of wasted energy.
Then, you can apply AI to uncover insights into where and why waste is happening. Those insights enable you to take steps to reduce energy waste, operate with greater energy efficiency, and monitor the effects of your actions.
We examine four real-world areas where businesses can use AI to find ways to cut their energy consumption and dramatically reduce their energy costs.
Building and factory energy
Use AI to help consolidate your utility bills and analyze how you use energy. You can identify energy-intensive operations and times of peak load, then spread those operations to off-peak hours.
For example, Google reduced energy for cooling its data centers by 40%, thanks to the IoT and artificial intelligence from its DeepMind project. Using temperature data recorded in its data centers, the company applied AI and predictive analytics to control air conditioning.
Smart grid management
If your energy strategy includes renewables like solar and wind, you’ll have to factor in their unpredictable nature. You can apply AI to help with energy forecasting and storage in innovative grid management.
Suppose that, based on real-time meteorological data, your models predict a sharp decrease in your rooftop solar generation for the next three days. Either automatically or with human intervention, you can postpone charging your electric fleet and switch to a different energy source.
In moving both passengers and freight, the transportation sector in the U.S. is becoming less efficient while most other sectors of the economy are becoming more efficient. Innovative companies focus on the main factors behind energy efficiency in transportation and act on them:
- Fuel-efficiency — newer, more-efficient vehicles use less fuel to deliver the same load.
- Mode of transport — Trains are generally more fuel-efficient than planes or trucks. The choice pits an energy budget against a financial budget.
Occupancy rate — Between any two points, a single vehicle can usually transport people and goods more efficiently than multiple vehicles.
AI has a role to play in transportation management by helping to consolidate transportation data from numerous sources and applying analytics that reveals inefficiency. For example, in freight operations, the more data you can harvest and the more variables you can identify, the more inefficiency you can find.
AI can improve transportation management by:
- Checking that human drivers are operating within legal guidelines
- Optimizing routes
- Ensuring loads are complete, both outbound and on the return journey
- Notifying business partners of delays well in advance
- Scheduling maintenance to preserve fuel economy
- Automating arrival notifications to reduce wait time for drop-off
Supply chain efficiency
Every supply chain invites inefficiency simply by the nature of cumulative links. AI has the potential to reduce slack in those links and keep supply chain actors like transporters, suppliers, and purchasers synchronized with automatic prediction and decision making.
AI can anticipate the date by which a supplier will run out of a product by working with production and inventory data. It can automatically notify the buyer of the anticipated restocking date and extend the option of changing the order or waiting. Conversely, AI can predict expected product shortages or price fluctuations, allowing buyers to stock up.
AI can also shed light on inefficient planning, scheduling, and building and managing an intelligent warehouse with reduced total costs. AI-enhanced supply chains:
- Optimize inventory management
- Provide more traceability
- Trim waste
- Lower CO2 emissions
- Improve production efficiency
- Reduce delays in supply or overstocking
Again: How much energy can your business afford to waste?
There are easy steps that organizations can take to improve energy efficiency life, ensuring insulation in buildings, using energy-efficient equipment, and making sure lights turn off when they’re not in use. The more complicated steps can be daunting, and hard to measure their impact. That’s where Alteryx comes in.
With Alteryx Designer, you can easily connect to all data sources to view your energy efficiency. With the Alteryx Intelligence Suite, you can even automate pulling data off PDFs like energy bills, energy reports, and more. This data can be used to robust power dashboards that grant insight into the current state of your energy usage.
Download the Alteryx Intelligence Suite Starter Kit to try it yourself. Alteryx Machine Learning makes data science easy and gives anyone the power to understand associations in data and the factors that lead to high energy bills. To learn more about Alteryx ML, you can request a demo as we’ll show you how Alteryx can solve your most daunting energy analytics problems.
Read This Next
How to Increase ROI Through Data Democratization
See 4 things you should do to increase ROI through democratization.
Why Alteryx is a Better Choice for Enterprise Analytics
Why Alteryx is a better choice for organizations that want to experience a unified analytics platform that is easy to use and upskills your existing workforce to enable a democratized approach to data.
Three Ways 7-Eleven Is Optimizing Its Retail Analytics With Alteryx
Throughout the pandemic, 7-Eleven has used Alteryx to make data-driven decisions. | <urn:uuid:02a24256-caa9-45c6-ac59-a6afdde3c655> | CC-MAIN-2022-40 | https://www.alteryx.com/input/blog/how-organizations-are-improving-energy-efficiency-with-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00123.warc.gz | en | 0.924458 | 1,389 | 2.5625 | 3 |
Superior physical and logical access control at your fingertips
Security is an unignorable aspect in all business endeavours and access control is one of the important activities that help achieve the sense of security. Amid today’s ever growing security concerns, sense of security is necessitated more than ever. Placing access control at entry / exit points of sensitive places and information ensures that only right individuals or entities can access a facility or information at the right time and for right reasons.
Access control systems that use biometrics to verify authority of an individual seeking access to a physical or logical facility, are called biometric access control systems and the approach is called biometric security.
Biometric technology works on the idea that some of human characteristics are so unique that they can be used for establishing identity and authenticating it when needed. This is hardly surprising as we have been using this knowledge to identify familiar people with their face, voice, behavior and even gait.
When this intrinsic human ability is given to electronic devices, the biometric technology came to existence. With the use of mathematical and statistical methods and leveraging computerized pattern recognition ability, unique human characteristics like fingerprints, voice, face, etc. are mapped in digital information. Systems that can do biometric mapping and verify it later, are called biometric recognition systems e.g. fingerprint recognition systems, face recognition systems, etc.
There are substantial reasons why biometric security is considered superior than other methods of access control:
- Physical access control methods laid with possession based factors such as keys, fobs, access cards, etc. can be highly insure as these external artefacts can be misplaced, shared, lost or stolen. Unauthorized individuals can gain access to a secure facility with lost or stolen means of access control. This is not the case with biometrics as it uses inherent characteristics of individuals to lay access control.
- Access control laid with human guards is also prone to human errors and manipulation. Human efficiency to secure a facility may not always be at its best. Human efficiency can be affected by many factors like weariness, fatigue, etc. Access control laid with biometric access control systems can work tirelessly without any deterioration in efficiency.
- Biometric security can further be strengthened with the use of multi-factor authentication for more robust access control and identity management.
- Biometric access control is a widely used approach around the globe and this widespread use has brought down the prices of biometric access control systems, enabling price sensitive customers to adopt the technology.
- Wave of mobile devices with biometric security has transformed how people secure their devices. People are not only unlocking their phones with biometrics, they are performing financial and other sensitive transaction via the internet using biometric authentication. It has improved overall trust on biometrics. | <urn:uuid:f24f03a4-9e96-4aa7-ab0a-f6a4a68ccbe1> | CC-MAIN-2022-40 | https://www.bayometric.com/biometric-access-control-fingerprint-security-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00123.warc.gz | en | 0.928858 | 567 | 2.71875 | 3 |
Machine Learning: A Smart Initiative to Improve Livability Index
The world first came across the idea of Artificial Intelligence (AI) when it first saw androids, and intelligent automatons in comics and movies. Not too later, we saw the advent of contextually aware machines that had the ability to perceive its surrounding and take necessary action partly based on predefined logic and cumulative learning experience during its course of existence. The objective was to maximize the machine’s ability to succeed at something that was defined as a goal or designed reaction to a stimulus.
We have seen ‘Hal 9000’ in “2001: A Space Odyssey”, the ‘C-3PO’ in “Star Wars” and the agents in “The Matrix”, as the dramatized fictional representation of AI in machines in Sci-Fi movies. However, in the real world, research on practical applications of AI in machines were initiated in 1950s and continued into the 80s. These were primarily based around expert systems, which failed to live up to their functional expectations that were set in the Sci-Fi movies.
Three decades later, we are witnessing a resurgence in AI research due to availability of increasingly cheap, powerful and complex computing, a quantum leap in data storage capacity, and the rapid use of remote and wireless networking capabilities. Today, AI capabilities are probing into the world of Sci-Fi, that seemed unachievable not-too-long ago. And now, we see that research in AI and its vastness in terms of potential impact on the future are being tested by government, enterprises and individual citizens.
Marching ahead in the AI ecosystem is the world of Machine learning (ML) — a subset of AI, which is much more than systems that use computing for analyzing manually fed data. ML systems of today are powered by massive volumes of real-time data which combined with the capacity to evaluate, learn and improve by using a set of rules and learning algorithms; churn out new insights continuously without requiring any programming or manual intervention to do so.
ML can be categorized into four major sub parts:
• Supervised learning, where observations contain input/output pairs (aka labeled data): These sample pairs are used to "train" the ML system to recognize certain rules for correlating inputs to outputs.
Examples include types of ML that are trained to recognize a shape based on a series of shapes in pictures.
• Unsupervised learning, where those labels are omitted: In this form of ML, rather than being "trained" with sample data, the ML system finds structures and patterns in the data on its own.
Examples include types of ML that recognize patterns in attributes from input data that can be used to make a prediction or classify an object.
• Deep Learning or Representation Learning is a relatively new area in ML, which aims to develop techniques and algorithms to extract features or representations for effective use of ML algorithms using neural networks.
The human mind with its never-ending capacity to learn is the epitome of intelligence as we know it. However, with the continued advancement in AI, ML has the potential to become a complementary resource that has the capability to augment or even help in human decision making process that lead to better insights. In humans, the process - from accumulation of facts to the point where we arrive at a decision, or to create something that aids in reaching a goal - does not happen all at once. ML is also having a huge positive impact in other sectors that cater to the global society.
Further advancements in ML has advanced into solutions that are classified under Deep Learning (DL). Pioneering developments in DL solutions have yielded pathbreaking results in various spheres of ML applications. DL deeply involves a complex combination of speech recognition, computer vision, volume metric brain image Classification, sensory control and natural language processing.
The increasing application of ML also brings to fore a growing discussion on social, ethical and legal implications of such use. There is a demand for socially responsible intelligent systems which provide an explanation to the answers they provide. Most of the ML based systems are trained and fed with existing information to create extended intelligence. In the absence of ethical and moral understanding, these systems could further reinforce certain biases which are harmful for the society. In the past year, there have been several instances of how such AI based systems have gone horribly wrong with unintended consequences.
This brings us to what is on offer when we consider the future of ML and how it will pan out in the future. The next frontier in ML is to make it as close to human intelligence which involves embedding emotional and social ethics built into the system. With one foot evolving from its SciFi past and the other setting into the advanced world of scientific and technological innovations, ML is heading into a unique symbiosis with the realities of the human world. | <urn:uuid:fa2fb2ae-584e-4ac0-b11d-3ffc87323cbf> | CC-MAIN-2022-40 | https://artificial-intelligence.ciotechoutlook.com/cxoinsight/machine-learning-a-smart-initiative-to-improve-livability-index-nid-3271-cid-127.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00123.warc.gz | en | 0.958534 | 984 | 3.015625 | 3 |
In the event of data breach, MFA can be your strongest line of defense.
What is Multi-Factor Authentication (MFA)?
MFA is an authentication method in which users are required to provide more than one form of verification credentials to gain access to an application, website, or information. MFA often takes the form of a password plus an additional, independent credential such as a PIN, a personal security questions, or a one-time passcode sent via text.
Why use MFA?
Can you imagine using only your debit card to withdraw money from an ATM? If your debit card fell into the wrong hands, all the bank accounts tied to that one card would be at risk! The same is true for your passwords, especially if you’re one of the 54% of consumers using five or fewer passwords for all their accounts. To make matter worse, cybercriminals have more than 15 billion stolen credentials at their disposal. If your credentials are selected and you do not have MFA established, your accounts and records are available for the taking. Having MFA in place adds an additional layer of security, making it harder, if not nearly impossible, for cybercriminals to access your information. For example, if you are sent a one-time passcode each time you log in to your email account, the hacker would need both your credentials AND your phone to successfully log in. Odds are, you would notice if your phone went missing. It’s a lot harder to steal a cell phone than a password!
Key Benefits of MFA:
- a) Stronger Security – Risk reduction is a major focus for organizations, particularly those in the dental and medical space under HIPAA regulation. With cyberattacks at an all-time and over 80% of breaches caused by stolen or weak passwords, multi-factor authentication is no longer an option, it’s a necessity.
- b) Adaptive to Workplace Changes – Remote work is here to stay and will continue to change over the coming years. With this, comes the challenge of managing increasingly more complex device networks over increasingly more diverse geographic locations. Adaptive MFA solutions can evaluate and address varied levels of risk based on geographic location. For example, if someone logs onto a device within the office, that space is qualified as ‘secure’, and the user may not be prompted to enter an additional form of verification. Contrastingly, if the same user logs onto their device at the coffee shop down the street, that additional form of verification now becomes mandatory as they are in an unsecure, or untrusted, space.
- c) Improved User Experience – Password management is a pain. Users have many passwords to keep track of related to not only work-related individual and shared accounts but also personal accounts. How many of us are guilty of using the same password for both a personal and work related account? Implementing MFA can simplify password management and add an added layer of distance between personal accounts and work accounts.
We are happy to help set up MFA for your office, just give us a call or send us a message at email@example.com. | <urn:uuid:8869c9db-2d35-4a02-9b14-01de373e9b67> | CC-MAIN-2022-40 | https://www.irissol.com/blog/why-bother-with-mfa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00123.warc.gz | en | 0.941091 | 648 | 3.15625 | 3 |
Times when information was transmitted by signal fires, drums or pigeons have gone. Nowadays, all the information is transmitted both by wire and without them. But the most traditional way of communication is still a call on a mobile phone. And although we are all so used to calling or answering a call, there are myths about mobile communications. Here are 7 of them:
Communication in the aircraft does not work from ground base stations
Exactly. The technology of using cellular communication on board of the aircraft differs from the ground one, as the aircraft receives a signal from the satellite to its own base station. Therefore, to make a call or send an SMS message, you need to install your own compact low-power base station in the cabin. However, it is not done due to the high cost of such a station.
Communication in transport becomes worse due to the traffic
This is a mistake. The fact is that the speed of your movement does not affect the quality of communication. Just on different sections of roads may be different data rates, which depends on the coverage of a particular operator.
Neighborhood with base stations is dangerous
The maximum safe radiation level must be taken into account. The farther from your mobile phone is the base station, the more powerful the level of radiation coming from the phone itself, even in standby mode. So on the contrary, the closer you are to a cellular station, the less power your phone needs, and therefore less radiation.
Communication becomes worse during a thunderstorm
This is only partially true. Because weather conditions are unpredictable and can be reflected in different ways on mobile communications. This can be due to the banal disconnection of the power cable of the station, and the demolition of the tower by strong winds. During the rain, the frequency range may deteriorate. In fact, in this situation, everything depends on the scale of the weather.
Bars show the quality of the signal: the more, the better
It’s not true. The fact is that the “sticks” in the mobile phone displays the signal strength to the nearest base station. The quality of the network these so-called “sticks” do not affect.
If you can not call the right subscriber, it is not due to poor signal quality of the network, it’s because the number of devices serviced by the tower is much larger than usual, simply put, the limit of subscribers in the network is exceeded.
Operator’s phones and towers emit radiation! We are irradiated and we will die!
Radiation and mobile communication are different things. Mobile communication works with radio waves but they are in the area of non-ionizing radiation! They are followed by infrared light (thermal radiation) and visible light. As you can see, mobile communication has nothing to do with radiation.
Mobile phones emit a field that gives you a headache, you can’t sleep, everything hurts, nothing happens, and everyone gets cancer and dies!
The electromagnetic field is a special form of matter through which the interaction between electrically charged particles. Base station radiation is much less powerful than a telephone in hand. The fact is that the base station does not shine a narrow beam directly into the phone, but “smears” all its radiation in the space around it and the signal strength drops rapidly with distance from the station. So this myth is not true.
Today, due to compliance with regulations and standards, you no longer need to install anything in your home to get a better network connection.
To The Moon Mobile is a young mobile virtual operator that can offer you the best network 4G coverage in the UK and fastest 4G download speeds. TTMM offers you flexible bundles with no contract and SIM only, affordable prices and super useful app. Moreover, if you want to keep a mobile number from another provider, To The Moon Mobile can keep your number and transfer within 2 working days. Join “To The Moon Mobile”! | <urn:uuid:df09aa7e-e884-41d2-9b19-e3f143176d30> | CC-MAIN-2022-40 | https://gbhackers.com/7-common-myths-about-mobile-networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00323.warc.gz | en | 0.951094 | 821 | 3.0625 | 3 |
We recently heard in the media about security incidents including data leakage, server infection, and stolen user accounts conducted by malicious parties. In this and future posts I will try to give real life examples of such threats and the way they can be prevented. While analyzing Application Security Manager (ASM) logs I come across this suspicions request:
Looking carefully into this request we can see that the parameter named “author_name” is used to inject something that looks like PHP code. Actually, this seems more like an injection of PHP code in a web application that uses Bulletin Board Code(BBCode).
Another interesting thing with this request is that we can see that the attacker tried to evade security detection by using the “base64_decode()“ PHP function, scrambling the payload of the injected code.
When taking the base64 encoded payload and decoding it, we can see the injected code:
If an attacker successfully injects this code to the web application, the attacker is given control over the web server by exposing the backdoor that allows unauthorized remote control over the web server.
The best way to detect and block such attacks is by combining two known web application firewall methods:
· Signature based – detection of suspicious patterns (such as base64_decode), and block requests that contains this payload.
· Policy anomaly – detection of a change in the way the application is being used. For example, pre-define a set of attributes for the parameter “author_name”, such as a length or allowed meta characters, and if suspicious requests deviates from the pre-defined value then block the request.
And here is how the detection of an injection attempt looks in the ASM log: | <urn:uuid:927672e0-fcbf-462c-8c52-72ed9a277ef6> | CC-MAIN-2022-40 | https://community.f5.com/t5/technical-articles/anatomy-of-code-injection/ta-p/278018 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00323.warc.gz | en | 0.91182 | 350 | 2.5625 | 3 |
U.S. adults will use smartphones for more than three hours a day this year, according to a study from eMarketer. This is a sixfold increase over print media, which receives only 26 minutes of attention each day. Data on citizens access to federal department websites clearly reinforces this trend, indication that of the more than 2 billion visits record over a 90-day period, 38.3 percent from smart devices. As citizens move from “in-line” to “on-line” in their interactions, government agencies must find secure ways to make services accessible through mobile devices.
Every database relies on unique identifiers to track records associated with individuals, and monitoring applicants is crucial to processing requests and fulfilling benefits. Many state services are administered at the county level, providing the opportunity for fraudulent individuals to apply for services from multiple counties. The U.S. Sentencing Commission reported that 682 citizens were convicted of government benefits fraud in 2015, and 16 percent of these offenses involved losses of $1 million or more. With unique identifiers, agencies can prevent these safety-net abuses.
More Efficient Interactions
Single IDs allow state agencies to streamline claims and other applications by linking all information to one tag. Instead of storing documents across multiple offices, medical visit records can instead follow foster children as they move between houses or agencies. When dealing with the most vulnerable parts of our population, government should focus on making interactions as easy as possible to minimize the stress of frequent transitions.
Moreover, single IDs permit integration of government services into smartphones. When citizens need to upload a document, they can just take a picture – no more scanning and emailing. Apple and Amazon have already popularized voice assistants like Siri, and the future of citizen engagement is trending toward the same model of interaction. Authenticated by their unique ID, a citizen can directly ask questions of digital assistants, and the system will find resources. Single IDs open the door to digitizing consultations and services, reducing the amount of time employees spend on administration and freeing up their time for creative tasks.
Use Case: Singapore
The Singapore Government is currently investing in a single ID system that would allow all citizens to access financial, health and government records with a smartphone app, by incorporating biometric authentication elements, such as fingerprints, into its system as well as file encryption. Single IDs will also include open Application Programming Interfaces that will allow private sector businesses to enable unique IDs as login credentials for online banking and other services.
Workforce of the Future
Single IDs are an exciting new development in the tech industry, and adoption by agencies provides a great way to market government IT positions to younger candidates. When the average age of employees is 46 in states, new innovators, attracted by smartphone integrations and other tech drives, can provide the key to invigorating state modernization initiatives. The future of digital services lies with mobile IDs.
Click here to learn more about how single IDs can revolutionize government services and protect sensitive information, or visit CA at booth #526 at the DoDIIS Worldwide Conference August 13-16 at the Carahsoft Partner Pavilion. | <urn:uuid:3cfdb2f9-d9b2-480d-9314-c7b212851bfb> | CC-MAIN-2022-40 | https://www.carahsoft.com/community/single-ids-path-to-success | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00323.warc.gz | en | 0.932792 | 636 | 2.703125 | 3 |
Researchers develop 'world-first' adjustable security cryptoprocessor
Secure cryptoprocessors are getting more advanced, and now there is a world-first in adjustable security, according to researchers at Okayama University and Tokyo Electron.
The secure cryptoprocessors now have the ability to adjust security levels without adjusting the processor itself.
University researchers have developed the adjustable and secure cryptoprocessor that supports both eliptic curve and pairing-based cryptographies.
The researchers say that the cryptoprocessor will have wide-ranging implications for IoT devices and 'ubiquitous terminals' because of its size and power capabilities.
This will provide scalable control, supported by cyclic vector multiplication algorithm, or CVMA. The algorithm is most commonly used for vector multiplication but also has wide uses for security scalability.
The researchers believe IoT and cloud trends will continue to need public key cryptography for security. Elliptic curve cryptography is particularly important for device and user authentication.
However, the researchers also state that computer performance is also on the increase, but that doesn't mean it's any easier to adjust device security.
They say this is because public key cryptography uses complex mathemetical problems, which can be difficult to program for security adjustment.
While traditional RSA cryptography key length has increments of 512, 1024, 2048 and 3072 bits, processors and their mathematical bases need to be upgraded.
The new secure cryptoprocessor offers security strength range from 256-5120 bits through a small circuit area size and 'practical calculation efficiency'. | <urn:uuid:cb3f656f-552d-4673-a92c-a7c800f4a939> | CC-MAIN-2022-40 | https://securitybrief.asia/story/researchers-develop-world-first-adjustable-security-cryptoprocessor | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00323.warc.gz | en | 0.928077 | 324 | 2.65625 | 3 |
China has been working on the router technology for more than two years as part of a wider strategy to foster development of domestic intellectual property. During the past few years, the country has developed technologies in a handful of applications, ranging from mobile phone and wireless LAN communications to optical disc data compression.
The router, codenamed BE12016, was commissioned by the Ministry of Science and Technology and jointly developed by Tsinghua University, Tsinghua Unisplendour Bitway Networking Technology Co., Ltd. and the military's Information Engineering College of the PLA Information Engineering University. It is backward compatible with the current IPv4-based Internet and is capable of transferring 320 billion bits per second, according to a report in local media.
The router comes into service as part of CERNET2, which was launched this week and connects 25 Chinese universities in 20 cities. The network is named after the China Education and Research Network (CERN) and will soon be expanded to 100 universities.
Most of the network operates at speeds up to 10 gigabits per second, but a segment between Beijing and Tianjin clocked in at 40 gigabits per second during a trial in early December. According to an official at CERN, at least half of the "key equipment" for setting up CERTNET2 came from Chinese telecom equipment makers Huawei Technologies and Tsinghua Bitway.
China, as well as other Asian nations like Japan and Korea, have aggressively pursued the development of an IPv6-based Internet because of the vastly higher number of IP addresses it's capable of handling.
Currently, the US controls roughly three-quarters of the 4 billion IP addresses used in the IPv4 networking protocol. China, with its fast-growing Internet community nearing 80 million users, claims that it has only a tiny sliver of the IP addresses available. | <urn:uuid:b0d9b51d-2d51-4978-b00f-48949f6f9da5> | CC-MAIN-2022-40 | https://www.informationweek.com:443/it-life/china-devises-first-core-router-for-ipv6-networks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00323.warc.gz | en | 0.949783 | 376 | 2.84375 | 3 |
Is Linux more secure than Windows, or vice versa? Fueled by conflicting industry reports, this controversy keeps raging. To arrive at a well-informed opinion on the subject, you need to know as much as you can about what kinds of security measures are actually available for Linux. Moreover, if you’re administering Linux already, some implementation tips from Linux security pros can undoubtedly come in handy.
“It’s hard to talk about ‘Linux’ as an operating system, since there are so many different variations. A number of different OSes — such as FreeBSD, VMS, mainframe OSes like VM or VSE, or other proprietary OSes — may lay claim to the title of ‘most secure OS,'” observes Pete Lindstrom, CISSP, research director for Spire Security, LLC.
“The truth is that we don’t, as a community, attempt to figure out which OS is most secure. We rely on an ‘unpopularity’ contest to figure that out. Popularity is a fickle thing, though. Right now, Linux has some momentum in security over Microsoft’s OS family, but that can change quickly.”
The debate over OS security intensified in February of this year, when the Aberdeen analyst group released a report based on publicly available information from CERT. “Contrary to popular misconception, Microsoft does not have the worst track record when it comes to security vulnerabilities. Also contrary to public wisdom, Unix- and Linux-based systems are just as vulnerable to viruses, Trojans, and worms,” the report stated.
Positive Perceptions of Linux Security Pick Up Steam
Meanwhile, though, positive industry perceptions of Linux security actually seem to be picking up steam. A study by Evans Data Corp., released earlier this month, found that the number of developers who regard Linux as “the most innately secure operating system” leaped 19 percent over the past six months.
Jim Dennis, a principal at Starshine.org, is one practitioner who gives Linux a big security nod over other OSes. For one thing, Linux distros have been built from the ground up with security as a major focus, according to Dennis.
Dennis also points to the existence of many “hardened” Linux kernels — such as LIDS, RSBAC, and LOMAC — as well as “hardened” Linux distros, including SELinux, OpenWall Linux (OWL), and Adamantix. (Adamantix was previously dubbed Trusted Debian.)
Even without a hardened distro/kernel, though, there are many ways of battening down Linux’s hatches. Lindstrom and Dennis both provided plenty of advice for Linux administrators, across areas ranging from security policies to secure installation, including cryptography, protection of CGI and dynamic content, replacement of deprecated protocols, and more. | <urn:uuid:f4824635-963c-438e-9a10-62f99217c90e> | CC-MAIN-2022-40 | https://cioupdate.com/linux-security-tips-from-the-experts-4/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00323.warc.gz | en | 0.939331 | 604 | 2.734375 | 3 |
An average website faces 94 attacks every day, and only 26% of all data breaches are attributed to web application hacks. The cyber threat landscape has worsened over the last couple of years, and keeping a continuous tap on all aspects of cyber security can be overwhelming. It is an established fact that frequent penetration testing is the best way to counter this, regardless of the industry you are operating in. But then there is the debate around automated and manual penetration testing slowing the decision down for you.
You need a combination of automated and manual penetration testing to get a complete picture of your organization’s security posture along with actionable insights to fix and prevent the issues.
Now that we have picked an answer and put the debate aside, let us understand why a company needs both manual and automated penetration testing. In the course of this article, you will learn what a penetration test is, what are the different approaches to it, how automated and penetration testing serve your purpose, and how you can choose the perfect pentest partner.
What is Penetration Testing?
Penetration testing is the process of evaluating the security of a system by finding and exploiting vulnerabilities with the help of hacker-like tactics.
It is analogous to finding the ways of breaking into a house, then assessing how many rooms were accessible from each break-in point, how easy it was to break, and how much worth of valuables could be stolen in course of the operation.
The main difference between a hack and a penetration test is that hackers break in to steal actually cause harm, and penetration testers make you aware of the exploitable flaws in your security and help you fix them.
What is Manual Penetration Testing?
Manual penetration testing is the process where security engineers manually perform penetration testing to assess the security posture of a system. They use hacker-style techniques to find ways to break into your system, evaluate the vulnerabilities in terms of impact and exploitability, and prepare a report documenting the vulnerabilities and the way to reproduce and fix them.
This is what a manual penetration testing process looks like
- The secuirity experts prepare a running profile of attack methods that can be used against a target system.
- They prepare test cases and execute them in a way that detects software vulnerabilities without affecting the business functionalities that might be active on the target system.
- After that they customize attack payloads for specific applications and execute them while taking note of the environment.
- They perform an analysis of the data captured through the operation to attain vulnerability patterns, interpret the results, and prepare a plan for remediating the issues.
What is an Automated Penetration Testing?
Automated penetration testing refers to the scanning of your systems for common vulnerabilities with the help of automated tools and processes. It is a faster, and comparatively cheap process that can give you a quick analysis of your website’s or network’s vulnerability status.
An automated penetration test or vulnerability scan produces results in minutes by checking your system for vulnerabilities by referencing a vulnerability database. It is perfect for a small business that does not deal with too much sensitive data and runs a simple application.
What are the advantages of manual penetration testing?
Manual penetration testing comes with a significant edge over vulnerability scanning or automated pentest. The foremost advantage is that it involves both automated tools and human intelligence increasing the depth of the penetration testing naturally. Let us look at some specific advantages.
- Zero False Positives: If you have dealt with vulnerability scanners, or if you are a security aware person in general, you would know how big a deal false positives can be. The real pain of false positives is felt by the developers wasting hours trying to fix an issue that does not exist.
Manual pentesters exploit each vulnerability to ensure that they are genuine issues. This saves you a lot of time and effort. This is one of the most important advantages of manual penetration testing.
- Deep and Exhaustive Testing: Automated vulnerability scanners have become really smart over the last decade with regularly expanding test cases. But let’s face the fact, they still miss vulnerabilities. You cannot have a definitive vulnerability report without a manual pentest. There are security errors like business logic errors that cannot be detected by an automated scanner. We will talk more about it later.
- A Thorough Pentest Report: Security engineers who have run a manual penetration testing of your systems can produce a detailed report with step-by-step guidelines for you to reproduce and fix vulnerabilities. Moreover, you get their assistance while trying to fix the issues. Reading, interpreting and acting upon a pentest report is a pretty tenuous task even for the IT professionals, a little human assistance goes a long way in making the process fruitful.
- Compliance: Some compliance regulations like the PCI-DSS require manual penetration testing. What Are Some Key Differences Between Automated and Manual Penetration Testing?
|Automated Penetration Testing||Manual Penetration Testing|
|Automated penetration testing or Vulnerability Scanning is an automated process of detecting vulnerabilities performed with penetration testing tools.||Manual penetration testing or simply penetration testing is a meticulous assessment of your security infrastructure, performed by competent security researchers.|
|It is quick to execute and saves a ton of time.||Manual pentests can take days on end to complete.|
|It is a low-effort & efficient method of scanning your networks for vulnerabilities.||It requires proper planning and preparation to conduct a full-blown manual penetration test.|
|It does not provide deeper insights into the vulnerabilities.||It provides detailed & deeper insights into the vulnerabilities.|
|It discovers common security misses like a lacking update, flawed permission rules, configuration flaws, with amazing efficiency.||It detects acute flaws that are often missed by a scanner like business logic errors, loopholes, coding flaws, etc. It also involves exploiting these vulnerabilities to gauge the impact on the system.|
|It can be done frequently without much preparation & planning.||It requires effort & time, thus can't be done frequently.|
What Are Some Flaws that Require Manual Penetration Testing to Detect?
While automated penetration testing tools have come a long way in terms of speeding up the process there are certain areas that still require human attention to detail. When it comes to rooting out complex vulnerabilities that do not necessarily show up on vulnerability scans and ensuring zero false positives, manual penetration is irreplaceable.
Here are some vulnerabilities that require manual pentesting to detect
- DOM-based cross-site scripting
- Blind SQL injection
- Business logic errors
- Cross-site request forgery
- Template injection
- Broken access control
An experienced security expert can catch anomalies that may appear to be legitimate to an automated scanner. A lot of difficult vulnerabilities may be found when pentesters follow their instinct and use creative ways to examine in an unexpected direction.
On top of everything, the support you get from manual pentesters in terms of reviewing and remediating vulnerabilities is quite indispensable.
The Advantages of Using Astra’s Pentest
The pentest suite by Astra Security is a complete and elegant solution to your penetration testing needs. The security team at Astra ensures zero false positives by manually exploiting the vulnerabilities. The pentest report produced by them is as thorough as it gets but at the same time, it is easy to follow, thanks to the step-by-step guides, and video POCs. You get best-in-class human support, in case the developers hit a roadblock while remediating the issues. The scope of collaboration between your team and the security experts is ample and smooth.
Here are some key features that set Astra’s Pentest apart
- 3000+ tests to ensure no vulnerability is left unchecked.
- Zero false positives ensured by manual penetration testing.
- Interactive dashboard to visualize vulnerability analysis, assign vulnerabilities to team members, and comminicate with security experts.
- Continuous Scanning with the help of CI/CD integration. You do not have to visit the pentest dashboard to start scans after product updates. You can just automate it.
- Scan behind logged-in pages without manually authenticating the scanner every time the session times out.
- Pentest compliance reporting helps you understand the position of your company in terms of compliance requirements, in real-time.
- A thorough pentest report eases up the process of remediation.
- Best-in-class support from security experts to help the developers interpret and act on the report.
- A publicly verifiable certificate helps you build trust among customers.
A combination of automated and manual penetration testing is a necessity for companies with internet-facing assets and sensitive information. Astra’s pentest makes it super simple for you to evaluate the security posture of your website, applications, or network.
The depth and effectiveness of manual penetration testing cannot be matched by an automated pentest, then again, the speed and scalability of automated tests are incredible. A combination of both received from a perfect pentest partner is what you need.
Your systems are most likely vulnerable to attacks, the sooner you get a pentest, the better your chances are of avoiding the nuisance of getting hacked. Find the right pentest firm to collaborate with and get secure. | <urn:uuid:8e80386f-8f02-4e34-bd60-a76c2f05a435> | CC-MAIN-2022-40 | https://www.getastra.com/blog/security-audit/automated-vs-manual-penetration-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00323.warc.gz | en | 0.912253 | 1,912 | 2.578125 | 3 |
When you point to a hyperlink in Microsoft Internet Explorer, Microsoft Outlook Express, or Microsoft Outlook, the address of the Web site typically appears in the Status bar at the bottom of the window. After you click a link that opens in Internet Explorer, the address of the Web site typically appears in the Internet Explorer Address bar, and the title of the Web page typically appears in the Title bar of the window.
However, a malicious user could create a link to a deceptive (spoofed) Web site that displays the address, or URL, to a legitimate Web site in the Status bar, Address bar, and Title bar. This article describes steps that you can take to help mitigate this issue and to help you to identify a deceptive (spoofed) Web site or URL.
Make sure that the Web site uses Secure Sockets Layer/Transport Layer Security (SSL/TLS) and check the name of the server before you type any sensitive information. SSL/TLS is typically used to help protect your information as it travels across the Internet by encrypting it. However, it also serves to prove that you are sending data to the correct server. By checking the name on the digital certificate user for SSL/TLS, you can verify the name of the server that provides the page that you are viewing. To do this, verify that the lock icon appears in the lower right corner of the Internet Explorer window.Read Full Story | <urn:uuid:2072a35c-4d0f-4b02-bfc9-c249ade5356d> | CC-MAIN-2022-40 | https://it-observer.com/protect-yourself-deceptive-web-sites-malicious-hyperlinks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00323.warc.gz | en | 0.886753 | 291 | 3.015625 | 3 |
Despite the hype around Blockchain over the past few years, a 2018 Gartner survey indicates that Blockchain adoption rates are still as low as 1%. Even within that 1%, doubts remain about how operationally effective and efficient it is. This blog explores the inherent risks of Blockchain and why it may not be the magic cure it is made out to be!
Blockchain is surely a fantastic technology, as cryptocurrencies like Bitcoin have already proven. Cryptocurrency has been a trend since Bitcoin prices soared suddenly in 2017, reaching an all-time high of $19,783 that same year. Ever since, Blockchain, the underlying technology of crypto, has been the talk of the town, and various large global financial institutions were quick to analyze and explore the possibilities of using it.
However, despite the hype, many institutions are still wary of adopting Blockchain. In fact, Ripple, a prominent Blockchain company, decided not to use Blockchain for its cryptocurrency! Unlike Bitcoin, Ethereum and other cryptocurrencies, Ripple developed its own patented technology: the Ripple Protocol Consensus Algorithm (RPCA).
Let’s explore why, despite the fad, Blockchain adoption has not picked up, and whether adopting it is the right decision for your organization.
What Actually Is The Value Of Blockchain?
A Blockchain is an open distributed ledger that records transactions between two parties efficiently and in a verifiable and permanent way. It contains a growing list of records, called blocks, which are linked using cryptography. Each block contains a cryptographic hash of the previous block, which cannot be altered without affecting subsequent blocks. To understand more, read: https://hexanika.com/Blockchain-s-disruptive-technology/
(Image courtesy of Yevgeniy Brikman)
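To make the hash-linking idea concrete, here is a minimal Python sketch (an illustration only, not any production implementation) showing how each block commits to the hash of the block before it, so that editing one block invalidates every block after it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Every new block stores the hash of the block before it,
    # which is what links the "blocks" into a "chain".
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    # Recompute each link; editing any block breaks all later links.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
add_block(chain, "Carol pays Dave 1")
print(chain_is_valid(chain))             # True

chain[1]["data"] = "Bob pays Carol 200"  # tamper with history
print(chain_is_valid(chain))             # False
```

Note how a single edited field breaks every later link; this tamper-evidence is the "immutability" weighed against Blockchain's costs throughout this post.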
Blockchain is not as recent a technological innovation as it is made out to be. It was invented by Satoshi Nakamoto, who published a paper on it in 2008. The concept of a cryptographically secured chain of blocks goes back further still, having first been conceptualized by Stuart Haber and Scott Stornetta in 1991.
The one key benefit of blockchain, or distributed ledgers, is that the central authorities currently used for monitoring and processing, such as stock exchanges (NYSE, BSE, etc.), payment processors (SWIFT, VISA, etc.) and central banks (FED, RBI, BOE, etc.), can be replaced with technology because of its automated consensus mechanism.
While organizations want to adopt new technology, it is important to appreciate the reliability, controls and risk management functions these authorities provide, which typically cannot be delivered by technology alone, especially if we simply consider Blockchain or distributed ledgers.
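The "automated consensus mechanism" mentioned above is usually achieved by making valid blocks expensive to produce. A simplified proof-of-work loop, sketched here with an illustrative difficulty setting and a placeholder header string, conveys the idea:

```python
import hashlib

def mine(block_header: str, difficulty: int = 4) -> int:
    # Brute-force a nonce whose hash starts with `difficulty` zeros.
    # Because producing a valid block costs real computation, the
    # network can agree on one shared history without a central referee.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(mine("prev_hash|merkle_root|timestamp"))  # proof that work was spent
```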
Is Blockchain Really 100% Secure?
If Blockchain enthusiasts are to be believed, one of the biggest reasons to adopt Blockchain is the enhanced security it offers; it is most often promoted as a way to prevent fraud and unauthorized activity. However, in reality, that is not the case.
According to Bitcoin.com, $9 million a day is lost to cryptocurrency scams, a figure that includes activities like phishing, fraud, theft and hacking: the very same activities Blockchain use cases propose to eradicate! Blockchain hacks are not rare, either. Between January and April 2019, a total of eight hacks were recorded, resulting in a loss of $729.03 million for the cryptocurrency ecosystem; according to MEDICI estimates, the number of hacks will increase to 16, resulting in a loss of approximately $1 billion.
Not only that, cryptocurrency being lost altogether is one of the biggest worries of the modern day, as was seen recently with investors in QuadrigaCX, Canada’s largest cryptocurrency exchange. After the death of its founder, Gerald Cotten, about $190 million in cryptocurrency was locked away, and attempts to recover the passwords or security keys have proved futile.
Bernie Doyle, CEO of Refine Labs and head of the Toronto chapter of the Government Blockchain Association, calls what’s happening at Quadriga a “seismic event” in the industry.
The repercussions of cryptocurrency being frozen or lost altogether could become dire for the economy as the numbers keep rising. By completely bypassing the systems, regulators and processes meant to keep the flow of money in harmony, is Blockchain really a solution that can add value to financial institutions? The modest benefits Blockchain may offer can be outweighed by the serious implications of misuse, scams and outright loss.
Digital Transaction > Cryptocurrency
Current market systems provide the risk and control functions required to regulate and protect global institutions and individuals. Cryptocurrency is said to be faster because it bypasses those rules and leverages a shared ledger rather than individual copies. Compared with paper-heavy traditional processes, Blockchain may provide benefits such as the elimination of human error and faster settlement. Compared with ordinary digital transactions, however, the streamlining and automation on offer are not very radical.
| Attribute | Digital Transactions | Cryptocurrency |
|---|---|---|
| Value | Determined in real currencies, e.g. $, £, €, ₹ | Determined by cryptocurrency value, e.g. Bitcoin, Ethereum, Ripple |
| Structure | Centralized and regulated by regulatory bodies, governments and institutions | Decentralized; not overseen by any regulatory authority |
| Anonymity | Requires user identification and documentation | No confidential information known; only a shareable key is used for transactions |
| Transparency | Information is confidential | Transparent; anyone can see any transaction of any user, since transactions are placed on a public chain |
| Legality | Must follow each country's legal framework for digital currencies | No legal establishment currently |
Digital transactions offer almost the same benefits, with the added protection of regulatory frameworks that is completely absent in cryptocurrency. And although cryptocurrency is highly transparent at the ledger level, it masks personal identity and operates without institutional control, making it a convenient platform for transactions tied to illicit activities like drugs, illegal weapons, etc.
While innovators like Elon Musk agree that paper money is going away, it is also a fact that the entrepreneur, investor and Tesla founder and CEO himself owns ‘zero cryptocurrency’ other than the 0.25 BTC that someone gave him years ago.
Performance Vs Operational Cost
The primary strength of Blockchain is the immutability of transactions, achieved mainly through automated consensus with no need for manual management. However, these strengths are essentially its weaknesses as well!
For most use cases, Blockchain:
- Demands heavy storage and compute: Blockchain data is immutable and requires substantially larger storage space and computational power. Additionally, data once created can never be deleted, meaning the cost of maintenance will only rise.
- Doesn’t provide ROI or benefits compared to current solutions: institutions globally are already struggling with growing data, and with the addition of Blockchain systems the extra costs are not always feasible for businesses.
- Is slow and not viable for large-scale applications: according to a Deloitte report, “In contrast to some legacy transaction processing systems able to process tens of thousands of transactions per second, the Bitcoin blockchain can handle only three to seven transactions per second; the corresponding figure for Ethereum blockchain is as low as 15 transactions per second.”
- Is tamper-proof but not error-free: Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy; they merely enable you to audit whether it has been tampered with. It is true that tampering with data stored on a Blockchain is hard, but it is false that Blockchain is a good way to create data that has integrity.
In summary, Blockchain is a good concept that is still waiting for the right use case to unlock its true potential. In most cases, however, it is neither faster nor cheaper than current technology. Blockchain designs have also been proposed by over 200 governments for use in various applications, including voting, property records, and digital identity. But can this technology really provide the significant benefits and insights that something like AI, Big Data or IoT can? That is a big question still waiting to be answered.
Note: These are the author’s personal views. After studying the various proposed implementations for Blockchain, Yogesh is of the opinion that it is a technology still waiting for a truly innovative use case that can add value to those adopting it.
Contributor: Vedvrat Shikarpur
Feature Image: freepik
What Does the Future of Healthcare Look Like?
Fremont, CA: From a greater emphasis on quality care to the use of technologies such as artificial intelligence and virtual reality, the healthcare industry is undergoing more transformation than one might think. But what does this mean for both patients and healthcare professionals in the future?
Wearable health and fitness trackers have grown in popularity in recent years. Fitness enthusiasts are no longer the only ones who track their heart rates and the number of steps they take in a day. People from all walks of life are beginning to recognize the value of health trackers. In the coming years, health trackers and other wearable technology will become the norm.
Healthcare professionals' monitoring technology is also becoming more sophisticated all the time. The Viatom CheckMe Pro, for example, is a palm-sized device that can measure heart rate, ECG, blood pressure, temperature, oxygen saturation, and much more. With the advancement of health monitoring technology, medical professionals will have more time to devote to patient care and will be able to make more accurate diagnoses.
The use of virtual reality is changing the lives of both patients and healthcare professionals. The primary application of VR at the moment is in the training of future surgeons. Thanks to virtual reality, they can practice what it's like to perform real surgeries better than ever before. According to a recent study, surgeons trained in virtual reality surgical procedures improved their overall performance by 230 percent compared to traditionally trained surgeons. The researchers also discovered that surgeons who had used virtual reality training were faster and more accurate in their surgical procedures. Obviously, this benefits both patients and surgeons, and as VR becomes more advanced, surgeons of the future will become more skilled, faster, and accurate.
We’re at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens.
Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people. But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM thinks part of the problem is a lack of standard practices.
There’s no consistent, agreed-upon way AI services should be “created, tested, trained, deployed, and evaluated,” Aleksandra Mojsilovic, head of AI foundations at IBM Research and codirector of the AI Science for Social Good program, said today in a blog post. Just as unclear is how those systems should operate, and how they should (or shouldn’t) be used.
To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets — formally called “Supplier’s Declaration of Conformity” (DoC) — that would be completed and published by companies that develop and provide AI, with the goal of “increas[ing] the transparency” of their services and “engender[ing] trust” in them.
Mojsilovic thinks that such factsheets could give a competitive advantage to companies in the marketplace, similar to how appliance companies get products Energy Star-rated for power efficiency.
“Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics,” Mojsilovic wrote. “The issue of trust in AI is top of mind for IBM and many other technology developers and providers. AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks. These issues must be addressed in order for AI services to be trusted.”
Several core pillars form the basis for trust in AI systems, Mojsilovic explained: fairness, robustness, and explainability. Impartial AI systems can be credibly believed not to contain biased algorithms or datasets, or contribute to the unfair treatment of certain groups. Robust AI systems are presumed safe from adversarial attacks and manipulation. And explainable AI systems aren’t a “black box” — their decisions are understandable by both researchers and developers. […]
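As an illustration only (the field names below are our assumptions, not an IBM-defined schema), a machine-readable factsheet built around those pillars might look like this:

```python
# Hypothetical sketch of what a machine-readable factsheet might hold,
# organized around the pillars Mojsilovic lists. Field names and the
# example service are illustrative assumptions, not an IBM schema.
from dataclasses import dataclass, field

@dataclass
class AIFactsheet:
    service_name: str
    intended_use: str
    training_data: str
    fairness_tests: list = field(default_factory=list)
    robustness_tests: list = field(default_factory=list)
    explainability_methods: list = field(default_factory=list)

sheet = AIFactsheet(
    service_name="LoanRiskScorer",  # hypothetical service
    intended_use="Pre-screening consumer credit applications",
    training_data="Anonymized 2015-2018 application records",
    fairness_tests=["disparate impact ratio across protected groups"],
    robustness_tests=["adversarial perturbation of numeric features"],
    explainability_methods=["per-decision feature attributions"],
)
print(sheet.service_name, sheet.fairness_tests)
```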
Technology has been a major catalyst for revolutionising the healthcare industry in the past few years. From diagnosis, to patient care, up to the formulation of vaccines, technology granted medical practitioners new ways of saving people’s lives, while also enabling patients to become partners in their well-being.
With new developments in the field of genomics, experts are now looking at transforming the field of healthcare from reactive to predictive — an objective that is fast becoming a reality through technology.
During a keynote entitled “Combining High Performance Computing, Genomics, and AI to Enable Precision Medicine,” as part of the latest Healthcare Frontiers online conference organised by Jicara Media, Ananda Bhattacharjee, Head of Business Development and Solution Architect for HPC and AI at Lenovo, illustrated how precision medicine can be achieved through the combination of HPC, genomics, and AI.
“We want to get to a point where we can treat you, to measure the individual susceptibility of your diseases, or protect you, your environment, (and) your response to a specific treatment. We want to know before these things (illnesses) happen,” Bhattacharjee said.
“We want to take into account your genetic background (and) your lifestyle, so that we find the right drug and dosage for you, and tailor healthcare to maximise the benefit and minimise the harm. We want to move away from treating the symptoms after it has happened to knowing before it has happened,” he added.
The journey of precision medicine
According to Bhattacharjee, precision medicine was born around the time that scientists were trying to sequence the first genome in 1990.
“It took around 13 years to sequence the first human genome. And it took around $3 billion. But quickly, once it had been done, the scientists (understood) that sequencing a single genome is (not enough to) decipher the secrets of your health. To find the secrets to longer life, we clearly realised that we needed to sequence many of us. We needed to sequence a multiple of us to get more insights. But together it is a challenge — it is mapping human variation, the differences we can see. Genome to genome, from individual to individual, there is variation, (and) that variation results in susceptibility to diseases. It can be as simple as height,” he explained.
Today, genome sequencing no longer takes 13 years to accomplish, and the costs have also significantly gone down.
“We see (genome) sequences everywhere. (Genome) sequencing costs (have) come down to less than US$1,000. I’m talking only about the sequencing part, not the analytics part (yet). We see genomics in the lab, in the virology and agriculture fields. Plant genomics (is used when) we need a new variety (or) breed of plants. The balance has shifted now from the next generation sequencing side, to more on the analytic side, with the cost per genome coming down drastically from the sequencing side. So, it’s now the analytic side, which is more and more important now, to bridge this gap of sequencing up here (in) our human level population,” Bhattacharjee noted.
Precision medicine has also been instrumental for scientists during the ongoing pandemic, not only for the development of COVID-19 vaccines, but also for studying the behaviour of the virus.
“While there are many unknowns while we talk about COVID, (we can) say (that) researchers are tackling it from multiple places, like tracking the virus origin, or it may be vaccination design. One of the first steps to scientific insight is -omics analytics. In fact, we see that as necessary for any kind of research which is happening in the area of COVID-19,” he said.
The role of HPC, genomics, and AI
The process of genome sequencing, according to Bhattacharjee, starts with a biological sample such as blood, saliva, or tissue, which is then loaded onto a sequencer that converts the sample into digital information.
“Think of it as a puzzle which doesn’t give you a (complete) picture. The information is there, but you can’t make it out until the picture is there. That is what we call genome analytics, where HPC, genomics, and AI play a very big role. That’s where we require a supercomputer on an HPC, which can do this with a good analysis software,” he pointed out.
“The whole workflow starts with sequencers, taking the data out from the samples, (and) giving you raw data (that) needs to be analysed, to find out the properties of the genome or the characteristics of the genome. In the bioinformatics world, this is called variant analytics, where we compare the genome with a standard genome, and find the variants. From the variants we find the characteristics— the phenotype and the genotype,” he added.
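To make the variant-analytics idea concrete, here is a toy sketch of our own (not Lenovo's or GOAST's code): compare a sample sequence against a reference and report positions that differ.

```python
# Toy illustration of the variant-analytics step. Real pipelines work on
# billions of short reads with aligners and statistical callers, not on a
# single string comparison; this only shows the "compare to a standard
# genome, find the variants" concept.
reference = "ACGTACGTACGT"
sample    = "ACGTACCTACGA"

variants = [
    {"pos": i, "ref": r, "alt": s}
    for i, (r, s) in enumerate(zip(reference, sample))
    if r != s
]
print(variants)  # [{'pos': 6, 'ref': 'G', 'alt': 'C'}, {'pos': 11, 'ref': 'T', 'alt': 'A'}]
```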
While genome sequencing has contributed huge strides in healthcare, the genome analytics process that follows sequencing faces a major bottleneck.
“The bottleneck currently, which we are seeing, is the amount of time it takes to analyse a single genome. If you’re talking 60 to 150 hours for analysing a single genome, it’s like one week, (and) this cannot be done if you are talking of a population-level genomics,” Bhattacharjee said.
How then to proceed towards a precision healthcare paradigm from a traditional one? Bhattacharjee enumerated five ways:
- Make genome processing fast.
- Increase throughput of genomics analytics.
- Make it affordable.
- Make it easy to use.
- Make it secure.
As such, Lenovo has introduced its GOAST solution. GOAST, which stands for Genomics Optimisation And Scalability Tool, can analyse up to 27 genome samples per day, according to Bhattacharjee.
Lenovo GOAST comes in two forms: the Lenovo GOAST Base and the Lenovo GOAST Plus. The Lenovo GOAST Base takes about 3.3 hours to analyse one genome sample, making it capable of analysing 7.3 samples per day, or 2,700 per year.
For Lenovo GOAST Plus, it takes about 53 minutes to analyse one genome sample, yielding up to 27 processed samples a day, or 9,700 per year.
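As a quick sanity check on those figures, the arithmetic works out as follows; the annual totals land close to the quoted 2,700 and 9,700 once some downtime is allowed for.

```python
# Back-of-envelope check on the quoted throughput figures.
base_per_sample_h = 3.3       # GOAST Base: hours per genome
plus_per_sample_min = 53      # GOAST Plus: minutes per genome

base_daily = 24 / base_per_sample_h          # ~7.3 samples/day
plus_daily = 24 * 60 / plus_per_sample_min   # ~27 samples/day

print(round(base_daily, 1), round(base_daily * 365))  # 7.3  ~2655/yr
print(round(plus_daily, 1), round(plus_daily * 365))  # 27.2 ~9917/yr
```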
“We spent a lot of time in our lab, understanding the characteristics of the genomics apps and benchmarking them against the computing elements. So it’s an interplay between the hardware and the software, so that the researchers don’t have to spend time, and the bioinformatics guys don’t have to spend time, on parallelising applications. They can start the work on genome analysis from day one,” Bhattacharjee emphasised.
“If you don’t do genome processing, you’re not talking about precision medicine, because you are not tailoring to a particular individual’s genome. So for that, what we do is we make genome processing faster. We talked about 167x, where you can take on population-level genomics. You can analyse hundreds and thousands of genomes per year. We want to make it affordable; we call it ‘GPU-level speeds at CPU-level costs.’ We make it easy with pre-configured and pre-installed systems, which will help you to plug and play and run, and just talk about the computer science part of it,” he added.
Since genomic analysis involves a lot of confidential patient data, Lenovo has also ensured that its GOAST solution addresses security concerns.
“We make it secure. You run it on-premises in your organisation, so that you don’t have to deal with security issues when it comes to human data and putting it onto the cloud,” Bhattacharjee said.
Going forward, a lot more has to be done to mainstream the practice of precision medicine, Bhattacharjee stressed.
“We need to have wearables that can capture data and talk to these ecosystems. We need to improve natural language processing, so that we can mine the electronic health record. We need to do a lot of work to wrangle the data and to curate it — it’s an ongoing process, and we as a society need to do much work in this area,” he remarked.
“I can’t think of a better area where technology can help the medical field and make a real change in the human population. It’s really an area where technology can make a huge change for the generations to come,” Bhattacharjee concluded.
Key performance indicators (KPIs) are values that measure your organization’s success at meeting its objectives. KPIs provide insight into business conditions like:
- Early return on investment (ROI)
- Product quality
- And more
In practice, KPIs measure how a company will strategically grow.
However, behind every KPI is the implication that current conditions influence trends and inform predictions for future growth. Leading and lagging indicators are qualifiers that assess a business’s current state (lagging indicator) and predict future conditions (leading indicator), so companies can achieve accurate projections.
In the following article, we’ll discuss leading and lagging indicators: what they are and how to use them.
What are leading & lagging indicators?
Leading and lagging indicators help enterprise leaders understand business conditions and trends. They are metrics that inform managers that they are on track to meet their enterprise goals and objectives.
Leading indicators are sometimes described as inputs. They define what actions are necessary to achieve your goals with measurable outcomes. They “lead” to successfully meeting overall business objectives, which is why they are called “leading”.
A leading indicator encourages business stakeholders to ask:
- What processes can I employ to achieve this goal to higher levels of success?
- What skills can the team improve to better achieve the desired outcome?
- What steps can be taken to speed up product development?
Leading indicators do this by providing benchmarks that, if met, will be indicative of meeting overall KPIs and objectives. Some examples of leading indicators for an enterprise business software company with an annual subscription fee might be:
- Percent of customers that sign up for two-year agreements
- Number of customers that renew software at or before mid-term alerts
- Number of customers that purchase software add-ons
If a leading indicator informs business leaders of how to produce desired results, a lagging indicator measures current production and performance. While a leading indicator is dynamic but difficult to measure, a lagging indicator is easy to measure but hard to change. They are opposites, and as such a lagging indicator is sometimes compared to an output metric.
A lagging indicator encourages business stakeholders to ask:
- How many people attended an event?
- How much product was produced?
- What response did it receive?
Lagging indicators measure output that’s already occurred to gain insight on future success. They do this by measuring things like:
- Customer participation
- Renewal rates
How to use lagging indicators
Lagging indicators are always triggered by an event that has just occurred, and, in that sense, are a little more self-explanatory than leading indicators.
If you’re measuring the outcome of an event, product release, sales training program or what have you, you’re using lagging indicators to determine, in retrospect, who attended, what was produced, or how it was received by attendees.
Lagging indicators are best used in conjunction with leading indicators to determine trends and if outcomes were met. This can be made simple with the right technology infrastructure that compares leading and lagging indicators, offering insight.
How to use leading indicators
Leading indicators are trickier to measure than lagging indicators. That’s because they tend to be more abstract.
As mentioned, a leading indicator is a measure of where your business is going. For instance, if you stick to lagging measurements, like revenue, you may completely miss an important, but relatively small, segment of your market that is purchasing from another geographical location in which you don’t have a presence.
That’s where leading indicators enter the scene. By creating measurements like tracking individual purchases outside of certain zip codes or regions, you can learn where your company could potentially establish a new foothold.
That’s an insight you can’t understand by looking at overall revenue alone. When you have a question that asks you to look into future growth and success, it’s the right time to use a leading indicator.
Example of lagging indicators in practice
Since lagging indicators measure what’s already occurred, they can be a useful business asset. However, some enterprise organizations rely too heavily on lagging indicators because they are so much easier to measure. As such, they don’t spend a lot of time working on leading indicators.
A best practice is to deploy both. Here are some examples of lagging indicators so you can see how to use them in practice, and how they interact with leading indicators:
The Corporate Retreat
Imagine you’ve just organized a corporate retreat and you’re trying to determine if it was successful. One way you can do that is by using lagging indicators like:
- How many people attended the retreat? This can give you an idea of general interest.
- How much money did the retreat cost? This is helpful for calculating the ROI.
- How many of the attendees signed up for workshops? This metric tells you whether your programming was engaging.
- Which workshops had the most attendees? This indicator implies which parts of the program were most interesting.
Example of leading indicators in practice
Leading indicators may be harder to measure, but they offer valuable insight about the future. They work with lagging indicators to create a set of metrics that serve as key performance indicators of future growth.
The Corporate Retreat
For example, in the previous section, we decided on some lagging indicators from a fictional corporate retreat. One of those indicators was, “how much did the retreat cost?” Imagine the retreat was a sales training seminar and business leaders want to use this lagging metric to determine the potential for ROI in three months, six months and one year.
What values do they need to do that? The answer is they need to calculate leading indicators that determine sales revenue growth in three months, six months, and one year. Once they have those figures, they can measure them against the cost of the retreat to project future ROI over the course of a one-year sales cycle.
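As a sketch with made-up numbers (the retreat cost and the revenue projections below are purely hypothetical), the calculation might look like this:

```python
# Hypothetical figures for the retreat example: the lagging indicator (what
# the retreat cost) combined with leading indicators (projected incremental
# sales revenue) to estimate ROI across the one-year sales cycle.
retreat_cost = 50_000
projected_incremental_revenue = {
    "3 months": 20_000,
    "6 months": 55_000,
    "12 months": 130_000,
}

for horizon, revenue in projected_incremental_revenue.items():
    roi = (revenue - retreat_cost) / retreat_cost
    print(f"{horizon}: projected ROI {roi:+.0%}")
# 3 months: -60%, 6 months: +10%, 12 months: +160%
```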
Here are some other leading metrics that might be associated with a retreat of this nature:
- Where can we expect the most sales growth? Based on attendance and other factors, this indicates what new regional or industry markets you can corner after the retreat.
- What individual sales goals can we predict in new markets? Given that you expect to grow in certain areas, this percentage of growth indicates how much to expect.
- What attendance can we expect at next year’s conference? Using lagging indicators like total attendance, managers can come up with leading indicators like percent of future attendance.
The end game strategy
The bottom line is that if you’re using lagging indicators without leading indicators, you’re only getting half of the KPI picture. Lagging indicators are an important resource for creating leading indicators that can launch your business into growth mode, but they aren’t the entire package. These sets of metrics work best in tandem to produce the most accurate and achievable KPIs.
One of the biggest challenges in Forensic Video Analysis (FVA) is the restoration and enhancement of motion blur. As an object moves within the field of view of a camera, the electronic shutter controls the image data being captured at that moment in time. A fast shutter, for example 1/500s, will produce no blur. A shutter setting that is linked to the frame rate, such as 1/30s, will produce considerable blur. The reason many cameras have a slow shutter speed is to allow more light onto the camera sensor. This results in a much more visible view of the scene.
A good article explaining this can be found over at IPVM.
If only all our vehicles and people moved very slowly!
The consequence of slow shutters therefore is motion blur, and as any forensic video analyst will tell you, blurred footage that is then compressed, is frighteningly common.
If the compression is low enough, however, thereby retaining pixel definition, the blurred pixels can be restored.
In the following image, the licence plate is stretched throughout the period over which the image was captured. The key point, though, is the identification of pixel definition: the differences in light, dark or colour that form the detail to be restored. Without pixel definition, the licence plate would be a single colour, and no amount of deblurring is going to bring back that data.
Amped FIVE has had ‘motion deblur’ since the beginning but it was only suitable for a single image or video frame. When dealing with multiple frames that had a different amount of blur, the trick was to save them out as an image sequence and work on each frame individually. It produced great results but was quite time-consuming. Variable Motion Deblurring to the rescue!
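To give a feel for what such restoration involves under the hood, below is a generic, hedged sketch using the open-source SciPy and scikit-image libraries. It is not Amped FIVE's algorithm, just the textbook approach of modelling the blur as a point-spread function (PSF) and inverting it with Richardson-Lucy deconvolution.

```python
# Generic illustration of motion deblurring, not Amped FIVE's implementation:
# simulate a uniform horizontal motion blur with a 1-D kernel, then invert it.
# The kernel's length and angle play the role of the blur size and angle
# measured in the filter.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera().astype(float) / 255.0

blur_len = 15
psf = np.full((1, blur_len), 1.0 / blur_len)  # horizontal motion-blur PSF

blurred = convolve2d(image, psf, mode="same", boundary="symm")
deblurred = restoration.richardson_lucy(blurred, psf, 30)  # 30 iterations
```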
The filter can be found within the Deblurring category.
The filter is split into two distinct functions. The Variable Settings, those that can be changed on a frame-by-frame basis, and then the Constant Settings which will stay the same throughout the range.
To help us understand the functions better, let us look at this short dashcam clip of a bus travelling from right to left across the field of view as the vehicle containing the dashcam is approaching the junction. The number on the side is blurred across several frames.
After converting to greyscale and selecting the range of frames, further image preparation is required before applying the Variable Motion Deblurring filter. In this example the Crop filter has been used, followed by a Smart Resize, and then CLAHE.
As soon as Variable Motion Deblurring is selected, your tool automatically changes to the ruler, allowing the selection of a line from one pixel to another. The purpose of this is to mark our blur length and angle.
After selection, the blur values are entered for that frame, but they can be manually adjusted by using the sliders within the Filter Settings.
Now comes the fun part: by moving to the next frame in our selected range, it’s possible to adjust the size, angle and blur thickness for each frame. It may not be required to change every frame, and the filter is flexible enough to allow individual frame selection: variable parameters will be automatically interpolated for intermediate frames unless we explicitly ask to keep them fixed from within the Constant Settings tab.
After moving through the six frames in our selection, we have adjusted the blur parameters on three frames (0, 1 and 5).
All of the images are now ‘deblurred’, but it is often necessary to conduct some further processing in order to exploit all of the deblurred pixels and integrate all of the frame data into a single restored image.
As the bus in the example is moving, stabilization is required. However, the vehicle with the dashcam is also moving, which changes the perspective and size of the number in each frame. We therefore require perspective stabilization.
After using Perspective Stabilization, and integrating the stabilized frames together with Frame Averaging, a couple of final small filters produces the result.
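On synthetic data, the benefit of the averaging step looks like this (a sketch of the principle, not Amped FIVE's Frame Averaging filter): averaging N aligned frames suppresses random noise by roughly the square root of N, reinforcing the pixel detail the frames share.

```python
# Synthetic demonstration of frame averaging on 6 noisy, already-aligned
# frames of the same "scene". The averaged error is roughly 1/sqrt(6) of
# the single-frame error.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                              # the underlying scene
frames = truth + rng.normal(0, 0.2, size=(6, 64, 64))     # 6 noisy aligned frames

average = frames.mean(axis=0)
print(np.abs(frames[0] - truth).mean())   # per-frame error, ~0.16
print(np.abs(average - truth).mean())     # averaged error, ~0.065
```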
The chain used here, and variations of it, can produce results in cases you may once have believed impossible. You may only recover 2-3 digits from one camera, but then another 1-2 digits from another camera.
Rebuilding the pixel data in this example was fairly easy, but by learning how the filter works and understanding how it is integrated within a filter chain, Amped FIVE users are able to turn blurred noise into restored and actionable data.
Today nearly 56% of the global population lives in an urban environment. The city has finally become the dominant place to live. Given the changing environment and increasing technology, the city has begun to dramatically change in the past 5-10 years. And it will continue to evolve and change at an increasingly faster pace.
As technology has developed and influenced the city, the term “Smart City” has become prevalent. Technology is an important attribute of a city’s evolution. However, it is just one of the attributes. A more encompassing and enduring term could be a “Sustainable City”. The definition of a Sustainable City that I subscribe to is:
“A vibrant community which can adapt and grow over the years, due to changing demographics and economic conditions. It is based upon multiple attributes.”
This definition begins to describe the holistic and long-term issues associated with the complexities of an urban environment. A Sustainable City has a goal of being an enduring and competitive place to work, live, learn, and play. It requires many aspects. Some of these are:
- Purpose. Entertain, eat, work, live, learn
- Activities. Walk, bike, play, work, learn, etc.
- Scale. Human scale not mega blocks
- Natural Environment. Location, terrain, water, etc.
- Environmental Implications. Resource usage, output disposal, environmental footprint, etc.
- Dynamic. Changing through the day/week as needed (festivals, farmers’ markets, sporting events)
- Transportation. Walk, bike, mass transit – beyond cars
- Connectivity. Smart & effective infrastructure (utility & transportation coordination, etc.)
- Built Environment. Smart & efficiently operated buildings, spaces, etc.
A city is also a three-dimensional physical puzzle. It is composed of multiple layers: Subterranean level (utilities, transportation, walkways, retail); Ground level (streets, walkways, public spaces, open areas, building entrances); Concourse (walkways, retail, elevated rail, etc.); and Air Space (skyscrapers, bridges). This three-dimensional layout adds a level of complexity.
Key Stakeholders in a Sustainable City
Another important layer to consider consists of the four main players that need to work together:
- People (employees, students, families, tourists)
- Businesses (large, small/medium, start-ups, etc.)
- Built Environment (developers, real estate investors, consultants, designers, engineers, etc.)
- Government (local and regional)
For a city to be enduring and sustainable, the four main players need to work in a concerted effort. They need to discuss, advise, decide and provide for an environment which can change or be modified based upon a particular city’s needs. No one player truly has the ability to control how the city develops over time. Instead all of them work together along with the marketplace and land economics to determine the success of a city in the long run. Idea generation can come from any of these players and is tested in the marketplace. Figure 1 shows the interactions between these four key stakeholders. When all the groups work together, they are able to attain that “Sweet Spot” which enables a location to have the characteristics of a sustainable Global City. In the most simplistic terms, the Sweet Spot for a Sustainable City is the on-going quality of life that the city provides to its occupants.
The Dynamic Nature of a Sustainable City
Some believe that once a city or regional masterplan has been developed and approved, the only thing left is to implement and enjoy. As a city and its inhabitants are dynamic, a longer-term sustainable view might be that the completion of the environment is just a starting point. The cases in point are the great global cities whose origins date back many generations, such as London, New York, Berlin and Tokyo. This means that through use, the environment and space will constantly be assessed to evaluate whether they are meeting the changing needs of the city or location. Appropriate adjustments or modifications are required over time. This is what has been done over the past centuries for many global cities. The difference between then and now is that with technology permeating everywhere, the ability to assess and adjust the environment can now be exercised at a much faster rate.
As builders and/or occupants of the environment, we are just the current Stewards of the urban environment. Stewardship is a delicate balance between Return on Investment and Return to Society. The city is a dynamic environment which will continue to evolve over time based upon its changing needs. We have to determine whether we are going to change and improve the environment or just “pluck the fruits” from the existing assets. We should make sure that when we design and develop urban environments, we do so with long-term sustainability in mind.
Background & Timeline
Today, our Threat Guidance team looked at a truly unknown flavor of malware.
With such a high volume of new malware produced 24/7 – numbering in the hundreds of thousands of samples per day – it’s unusual when our Threat Guidance team can’t place one in a known family. However, using the CylancePROTECT® Dashboard, a sample flagged as “Unique to Cylance” was quarantined on a real customer’s endpoint.
All of the sample’s properties – the location of the file, the compile date, and the lack of similar files in known malware repositories – did not match anything that Cylance or anyone else has seen to date.
What Is It?
This piece of malware was fascinating to us, because it was so beautifully simple – just compiled C++ code, straightforward – only connecting to a single (albeit Korean) IP address, and benign at first glance. But upon further inspection, we learned that this seemingly innocent piece of malware was collecting a ton of information from the victim, much like a legitimate asset management agent.
A great deal of endpoint information was collected, including system, network, disk/memory, and process/service details. Taken together, this data could be used to map an individual asset – a “Configuration Item” in Information Technology Infrastructure Library (ITIL) lingo – or to provide a larger view of the organization and its proprietary network.
And that’s where things get dangerous. For example, understanding the local system and network topology makes this malware uniquely suited to reconnaissance – especially since it does not perform any unusual or malicious activity itself.
We named the new malware ‘Paipeu,’ the Korean word for ‘Pipes’ (파이프), due to its hardcoded South Korean IP address and its ability to use named pipes, including its enabling of NULL session pipes. Use of named pipes for communication is not unheard of in malware; PlugX and Duqu are two famous examples that have both been known to use them. When found, it’s typically used for communication between different pieces of malware on a host, or between infected systems inside a LAN.
More information on NULLSessionPipes, including how to enable them and the security implications of that, can be found on Microsoft’s Support site: https://support.microsoft.com/en-us/help/813414/how-to-create-an-anonymous-pipe-that-gives-access-to-everyone
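For defenders, one hedged way to audit this setting is to read the registry value the Microsoft article describes. A minimal Windows-only sketch:

```python
# Defender-side sketch (Windows only): list the pipes whitelisted for NULL
# sessions, using the LanmanServer registry location documented in the
# Microsoft article above. Useful for baselining what anonymous clients
# may reach on a host.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
)
pipes, _ = winreg.QueryValueEx(key, "NullSessionPipes")
print("Pipes reachable over anonymous (NULL) sessions:", pipes)
```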
So What is the Danger?
Since we are still investigating exactly how this piece of malware was delivered, we had to derive its purpose from certain non-essential attributes.
First, a compile date so close to discovery (only two days prior) could indicate a targeted attack. This is also why the customer and Cylance did not see it prior to being flagged by a background check (remember, it did not try to actually execute anything – it just collected information. A lot of information.)
Second, we had to look at the code to see what mechanisms it used to communicate. This was done via the aforementioned named pipes, which proxied command-and-control requests to avoid detection and to reach otherwise blocked hosts.
Finally, a hard-coded IP address shows that it had a sole communication purpose – sending data to an offshore server. This type of nonstandard traffic might be missed not only by antivirus and endpoint detection software, but also by advanced cloud access security brokers (CASBs), which look for known patterns.
Information Targeted for Exfiltration by Paipeu
- NetBIOS name of the local computer
- Standard host name for the local computer
- Language identifier for the system locale
- The type and data for a specified value name associated with an open registry key
- ProcessorNameString, and Major, Minor and Build numbers for the local OS
- List of TCP endpoints available to the application
- Adapter information/network parameters for the local computer
- List of sessions on a Remote Desktop Session Host server
- Session information for a Remote Desktop Session Host (RD Session Host) server
- Retrieves information about the amount of space that is available on a disk volume (total amount of space, free space, and free space available to the user that is associated with the calling thread), drive type (removable, fixed, CD-ROM, RAM disk, or network drive).
- Also fills a buffer with strings that specify valid drives in the system, and obtains information about the system's current usage of both physical and virtual memory.
Processes and Services
- Environment variables
- Active processes (terminal server)
- Services in control manager database
- The account associated with a specified security identifier (SID), and all members of a specified local group
- The ability to add a user account with a password and privilege level
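To underline how ordinary this collection is, the benign sketch below gathers a few of the same details with nothing but the Python standard library; it is illustrative of the inventory style, not Paipeu's code, and defenders can baseline the same data legitimately.

```python
# Benign illustration: standard-library calls expose much of what Paipeu
# harvested (host name, OS build, processor, locale), which is why the
# collection resembled a legitimate asset-management agent.
import locale
import platform
import socket

inventory = {
    "hostname": socket.gethostname(),
    "os": f"{platform.system()} {platform.version()}",
    "processor": platform.processor(),
    "locale": locale.getlocale(),
}
print(inventory)
```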
Does Cylance Protect Against Paipeu?
This is a key example of the industry’s widespread inability to prevent unknown binaries and blind attacks.
Even though we didn’t witness the actual exfiltration of data (due to CylancePROTECT’s pre-execution quarantine of the malware), all signs point to a custom attack which could be the work of a professional paid hacker or state actor. In the case of Korea, this wouldn’t be the first – or last – time we’ll see this.
If you use our endpoint protection product CylancePROTECT, you were already protected from this attack. Our artificial-intelligence-driven models have been trained using millions of data points to ‘learn’ exactly what malicious behavior looks like.
Even though this was a brand new, never-before-seen piece of malware, this sample was instantly quarantined and blocked by Cylance. This highlights the inherent weakness of signatures: as no signature exists for this sample, any legacy antivirus product that relies on signatures would never have detected it in time. If you are a Cylance customer, as this client was, you can rest assured that you are protected from this and similar types of attack.
NASA said Wednesday the team used a robotic system to integrate the first of 18 flight mirrors onto the telescope at the Goddard Space Flight Center.
“This installation not only represents another step towards the magnificent discoveries to come from Webb but also the culmination of many years of effort by an outstanding dedicated team of engineers and scientists,” said Bill Ochs, project manager for the Webb telescope.
Full installation of the 18 mirror segments as part of the final assembly phase is scheduled for completion by early next year.
NASA noted that the mirrors are made of beryllium, which is lightweight and suited for cryogenic temperatures, and have a thin gold coating to reflect infrared light.
Mirror manufacturer Ball Aerospace is the principal subcontractor to Webb prime contractor Northrop, while Harris is the lead integrator for the telescope.
Unsecured network connections open the door to a digital invasion of privacy. Unprotected data is easily accessible to hackers, spies, and other third parties. That data includes your web traffic, personal information, and other online files – all of which can be monetized without your consent. Connecting to public Wi-Fi further increases the chances of leaving your data defenseless.
However, there is a way to protect your information from any unwanted audience. Take precautionary steps to secure your online privacy. A virtual private network or VPN is an encrypted tunnel that reroutes your internet traffic. Your online activity is vulnerable information that can be accessed by other people when not appropriately secured.
Virtual Private Network
A VPN creates an encrypted route that redirects your web traffic to a safe gateway. Remote servers operated by a VPN service secures your internet data and hides it away from peering eyes. The server scrambles your traffic information and gives you a different IP address that masks your identity and location on the web.
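One simple way to see this masking in action is to compare the public IP address a server sees before and after the VPN is switched on. The sketch below uses the public ipify echo service purely for illustration.

```python
# Quick check that a VPN is actually masking your address: compare the
# public IP reported before and after connecting. api.ipify.org is a
# public echo service used here only as an example.
import urllib.request

def public_ip() -> str:
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode()

print("Current public IP:", public_ip())
# Run once on your normal connection and once with the VPN enabled;
# the two addresses should differ.
```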
The concept behind this internet security might be challenging to grasp. But in simpler terms, consider day-to-day scenarios that necessitate a VPN. For example, connecting to free Wi-Fi is an automatic response when seeing any available network in a public area. Usually, there are no hesitations when connecting to public Wi-Fi networks despite its anonymity.
These common practices often lead to an invasion of privacy because of insecure connections. There might be other people nearby lurking and waiting to steal your information. However, with the help of a VPN, you do not have to worry about these problems. Secure your banking data, passwords, social media accounts, and other details online.
Benefits of a VPN
Internet data is sold for a valuable price on the market. This information is used by multiple companies to broaden their product audience. Studying your behavior online can allow them to create promotional advertisements to strengthen their business. Thus, internet behavior is valuable information for many interested parties.
A VPN server intercepts your data, securing every valuable information available on the web. It ensures that no unwanted eyes are stealing your data. Aside from selling your online behavior to companies, identity theft can also be a probable cause. Hackers will be able to retrieve and use your information however they please.
However, it is not only public Wi-Fi that you have to worry about. Your home internet providers monitor a great deal of your online activity. They are also authorized to sell anonymized data to other customers. Technically, they are not only making money out of your monthly bills but off your personal data as well. Most internet users are unaware that their ISP is selling their data, which translates to a lack of consent.
Aside from protecting your data, VPN service providers also allow you to access blocked websites. Since it scrambles your browsing information by modifying your location and IP address, you can view websites that are not visible to your country. For example, if certain web content is inaccessible in the US, you can “change” your location and stream unavailable videos.
The performance of VPN applications varies depending on the device you are using. It is essential to have them installed on all your devices for safe and secure browsing, whether on your laptop or phone. For that reason, most VPN companies offer apps that are compatible with Apple and Android devices.
These devices can now intercept your data when browsing through the internet and hide your information from any lurking parties. These applications are extremely beneficial and offer great deals with promising security. Luckily, some companies provide long-term subscriptions, like the NordVPN three year deal. Similar arrangements are also available on the market.
Although VPN applications are highly accessible, not all devices can run the apps. Smart appliances, for example, cannot handle the encryption themselves. As an alternative, you may configure your router to use a VPN connection. Purchasing a pre-configured router is also an option if you are unsure of the process.
This method keeps your home network hidden from other people. You no longer have to worry about risking data vulnerability from your smart appliances. Once turned on, VPN will give you brilliant internet security but with a few limitations when scouring through the web or testing particular network setups.
Probable App Complications
Despite their many benefits for internet security and data privacy, VPNs do have setbacks. For instance, some online services might deny you access to their webpage if you are browsing through a VPN server. That is because these sites view VPN traffic with suspicion. This can be a huge issue if it is your bank account that you are trying to access.
That said, there are several other apps whose features become limited when utilized from a VPN server. However, some companies put in the extra effort to make sure that their customers can enjoy popular websites and quality internet browsing through a secure network connection.
Protect Your Web Data With VPN
The concept behind the World Wide Web prioritized accessibility rather than security. In short, privacy was not a vital component of its birth. Today, technology advances at an ever-increasing rate, with digital code running everywhere; encrypting and decrypting data can be done in a matter of seconds.
Although technology improved our gadgets such as cellphones and smart devices, the internet fundamentals have not changed much since its creation. Still, it focuses on availability rather than defensive play. This neglect of internet security can lead to various problems such as identity theft, hacked websites, and illegally gathered information.
VPN services can protect individuals from these problems. Keep your passwords secure and your website traffic safe from prowling eyes hiding behind insecure connections. Think of the VPN as a security lock to your home. Without it, your house is left vulnerable and defenseless for anyone who cares enough to look. And honestly, with today’s value for information, people will always pry for illegally gathered data.
A team of scientists comprising Dr Adam Hamdy from Panres Pandemic Research, UK, and Dr Anthony Leonardi from Johns Hopkins Bloomberg School of Public Health, USA, is proposing that SARS-CoV-2 could be a superantigen and that further urgent studies are needed for validation.
It should be noted that superantigens typically cause T-cell dysfunction.
Reference link: Pathogens, https://www.mdpi.com/2076-0817/11/4/390/htm
1. What Is a Superantigen?
The term superantigen was coined in 1989 and defined proteins that hyper-stimulate T cells via the crosslinking of T cell receptors (TCR) with MHC Class II molecules. The definition was expanded following the discovery of B cell superantigens, which hyper-stimulate a large population of B cells without the crosslink.
A superantigen is commonly defined as a molecule that has antigen-receptor-mediated interactions with over 5% of the lymphocyte pool. Put simply, superantigens are potent antigens that can send the immune system into overdrive and stimulate up to 30% of the naive T cell pool [4,5].
Superantigens have also been shown to impair post-vaccination memory cell responses to unrelated antigens and to antagonize memory cell activation. The same superantigen can produce a range of host responses. Toxic shock has been shown to develop more severely in individuals who express certain MHC Class II haplotypes which bind specific superantigens, compared with those who express haplotypes with lower binding affinity.
Responses may also be affected by environmental factors. For example, simultaneous bacterial and viral infections have been shown to increase the effects of superantigens. Superantigens have been shown to impact central nervous system function and are implicated in the development of neurological conditions [12,13,14] and cardiovascular dysfunction [15,16].
Superantigens have diverse interactions with MHC Class II and T-cell receptor molecules, involving a number of different interaction surfaces and stoichiometries [17,18,19]. In addition to superantigens, there are superantigen-like proteins that activate lymphocytes using mechanisms that place them outside the superantigen classification.
Superantigen-like proteins have been implicated in inducing thrombotic and bleeding complications through platelet activation [21,22]. SARS-CoV-2 causes many of the biological and clinical consequences of a superantigen, and we believe that, in the context of reinfection and waning immunity, it is important to better understand the impact of a widely circulating, airborne pathogen that may be a superantigen, be superantigen-like, or trigger a superantigenic host response.
2. Lessons from Dengue
T lymphocyte activation during dengue infection is thought to contribute to the pathogenesis of dengue hemorrhagic fever (DHF). In fact, dengue virus (DENV) causes some of the clinical characteristics seen in COVID-19, including T cell activation, neurological complications and autoimmunity.
DENV-induced autoantibodies against endothelial cells, platelets and coagulatory molecules lead to their abnormal activation or dysfunction. A study of TCR Vβ gene usage in children with DENV infection concluded that dengue is not a superantigen, but rather a conventional antigen.
The authors of the study cautioned that their finding had limitations, but it is widely accepted that DENV is a conventional antigen that causes host reactions typically associated with superantigens. A conventional antigen can still trigger a superantigenic host response. A recent study of the response of human endogenous retroviruses (HERV) to DENV serotype 2 infection found significant differentiation in expression during infection.
HERVs are components of the human genome that likely originated through the historic incorporation of exogenous viruses. HERVs perform important biological functions but are also implicated in the development of autoimmunity and cancer. Certain viral infections have been shown to trigger HERV upregulation and autoimmunity. HERVs can present proteins that act as superantigens. Epstein–Barr virus (EBV) has been shown to transactivate HERV-K18, which encodes a superantigen.
This may have clinical implications. For example, HERV-K18 is significantly elevated in the peripheral blood of patients with juvenile rheumatoid arthritis. HERV loci are upregulated by a variety of viral infections, seemingly as part of an effective innate immune response, but it is possible that a dysfunctional response transactivates a superantigen, which triggers an immune cascade or autoimmunity. In fact, transient elevations of HERV-K and prolonged elevation of HERV-W have been found in COVID-19 patients [35,36]. HERV-W envelope protein (HERV-W-env) has been shown to induce T cell responses with superantigen characteristics.
3. Superantigens and T-Cell Dysfunction
Superantigens have differing effects on immature and mature CD4 and CD8 T-cells (Figure 1). Superantigens can deplete thymocytes, or immature T-cells, but can hyperstimulate mature, antigen-experienced CD4s and CD8s. After hyperstimulation by Staphylococcal enterotoxin B (SEB) superantigen, T-cells can enter a state of unresponsiveness known as ‘anergy’, where they fail to respond, and may sometimes subsequently enter apoptosis, or programmed cell death [39,40]. Not limited to affecting only CD4s by virtue of MHC II, superantigens can cause differentiation of naive T-cells and stimulation of CD8 memory cells through bystander activation via cytokines or through similar Vβ gene segments in their TCRs. Antigen-independent activation, or bystander activation, of CD8 T-cells is a well-studied consequence of viral infection [41,42,43].
Figure 1. Potential mechanisms to induce a superantigenic host response and possible clinical outcomes.
SEB superantigen activates virus-specific CD8 T-cells in vivo, in some cases with direct TCR engagement and in others by the bystander effect. This bystander stimulation is also apparent in vitro. Interestingly, T-cell death elicited by superantigenic stimulation is most apparent among the T-cells activated by the bystander effect rather than those activated by direct TCR engagement. CD8 T-cells that the superantigen directly stimulates via the T-cell receptor β-chain retain their cytotoxic function. The possibility of deletion of antiviral memory by the bystander effect warrants investigation given the involution of the thymus following puberty, as it could compromise microbe clearance. Chronic exposure to a superantigen could continually stimulate T-cells, keeping them in a perpetual state between anergy and hyperstimulation. Furthermore, given that naive T-cells can be activated and differentiated by the bystander effect, this could manifest as an observed naive T-cell depletion in the peripheral blood, where naive cells home to lymphoid tissues, in individuals in whom new naive T-cells are not being readily generated due to thymic involution [47,48]. This effect could explain the paucity of naive T-cells in some Long COVID patients. The loss of naive T-cells is a defining metric in immune aging and dysfunction. Naive T-cells help regulate immune responses and have the highest expansive capacity in response to antigens from cancers and infection [50,51,52].
4. Superantigens and Autoimmunity
Superantigens are implicated in the development of autoimmune diseases [53,54,55,56,57,58]. T-cell clones that are cross-reactive towards both endogenous host and microbial epitopes may be stimulated and migrate to tissue containing an autoantigen, a mechanism believed to play a role in the pathogenesis of rheumatic fever [59,60].
Individuals with autoimmune diseases show an increase in such T-cells in affected organs or peripheral blood. Superantigens stimulate autoantibody production by bridging the MHC class II molecule of B-cells with the TCR on T-cells. Whether deletion or autoimmunity occurs seems to be a function of dose, persistence, host haplotype and severity of the cytokine response.
Persistent subcutaneous exposure to a superantigen has been shown to cause a systemic inflammatory disease mimicking systemic lupus erythematosus (SLE) in mice. Superantigens have been shown to trigger or exacerbate SLE. Interestingly, HERV-E has been implicated in SLE [65,66]. HERV-E has also been found to be upregulated in the bronchoalveolar lavage fluid of COVID-19 patients.
Insulin-dependent diabetes mellitus (IDDM) is a T-cell-mediated autoimmune disease triggered by unknown environmental factors acting on a predisposing genetic background, but there is evidence that superantigen-like exposure, in the form of HERV-W-env upregulation, is implicated in the recruitment of macrophages into the pancreas and in beta-cell dysfunction.
5. SARS-CoV-2 as a Superantigenic, Superantigen-like Pathogen or Superantigen Trigger
We note a recent study of SARS-CoV-2 which found immunological dysfunction following mild to moderate infection, including depletion of naive T and B-cells in individuals with Long COVID, and a single-cell atlas which likewise found depletion of naive T-cells and higher levels of apoptotic T-cells in SARS-CoV-2 infection than in HIV. Taken together with findings on post-SARS-CoV-2 autoantibodies [71,72], the presentation of MIS-C, the activation and depletion of T-cells and a rise in IDDM, these observations are suggestive of a superantigen, a superantigen-like protein or the triggering of a superantigenic host response as a causative agent. Further research is needed into its role and likely long-term effects, particularly since SARS-CoV-2 has been found to persist in the body months after acute infection [76,77,78,79,80,81,82]. SARS-CoV-2’s superantigenic characteristics have been implicated in MIS-C.
The expansion of T-cells carrying the TRBV11-2 gene in combination with variable alpha chains, a hallmark of superantigen-mediated T-cell activation, has been reported in several studies of patients with MIS-C [84,85].

Brodin offers an energy allocation hypothesis for MIS-C, suggesting a choice in favor of disease tolerance over maximal resistance: children are more likely to present with mild or even asymptomatic disease, but might also be less efficient at viral clearance and, consequently, more prone to some level of viral persistence and possibly to other conditions linked to such persistence, such as the superantigen-mediated immune activation seen in MIS-C.
We question why SARS-CoV-2’s superantigenic characteristics would not be assumed to apply to adults, particularly given the clinical and biological manifestations in all age groups, which reflect known prior differences between responses to superantigen exposure in adults and children. Indeed, MIS-A manifests in adults as a consequence of SARS-CoV-2 infection, and rare instances of Kawasaki disease are observed in adults [88,89].
The issue of whether SARS-CoV-2 contains a superantigen is not settled, but the evidence is accumulating [90,91,92,93,94,95], and SARS-CoV-2 is causing superantigen or superantigen-like clinical presentations and biomarkers. In addition to cytokine storms, T-cell activation and deletion, and the presentation of MIS-C [73,97,98] (similar to Kawasaki disease, a suspected consequence of superantigen exposure), individuals who suffer Long COVID following SARS-CoV-2 infection manifest symptoms typically seen in autoimmune conditions such as SLE [101,102,103], and autoantibodies and antinuclear antibodies have been detected in a proportion of such individuals.
In vitro assessments of SARS-CoV-2’s superantigen-like region may not capture the full physiological effect on the immune system in vivo. For example, lipopolysaccharide (LPS) can potentiate the SEB superantigen effect, which could have a synergistic effect on T cells following gut inflammation or injury via LPS translocation [106,107]. SARS-CoV-2 is known to infect gut epithelial cells, persist in the gut [79,109,110] and disrupt tight junctions in bronchial epithelial barriers.
Indeed, hospitalized non-survivors of SARS-CoV-2 infection had increased LPS detected in their blood. While SARS-CoV-2 may not be canonically superantigenic in vitro, the in vivo consequences may be significant due to other danger and death signals. With evidence mounting that SARS-CoV-2 reactivates latent viruses such as Epstein–Barr virus, cytomegalovirus [115,116] and human endogenous retroviruses, which are associated with superantigen expression [31,69,117,118,119], it is important to establish whether SARS-CoV-2 is a superantigen or triggers second-order superantigenic responses in susceptible individuals.

Some countries seem willing to tolerate high levels of infection provided their healthcare systems can cope.
This approach is predicated on the belief that a level of protective population immunity can be achieved and sustained, and that the impact of reinfections will be less severe. If SARS-CoV-2 contains a superantigen or superantigen-like protein, or triggers a superantigenic host response, this strategy may prove a grave error. The effect of a superantigen is dependent on exposure dose, genetic predisposition, environmental conditions and immune response [6,7,12,62].
There is evidence that the toxic effects of superantigens can be inhibited by specific antibodies, but the protection conferred seems to depend on antibody titer and exposure dose. Recent evidence of a reduction in MIS-C following vaccination supports the protective role of antibodies in preventing a clinical manifestation of a superantigen or superantigen-like infection; however, in the context of the waning antibody titers seen following vaccination against or infection by SARS-CoV-2, and the ongoing evolution of the virus, the impact of repeat exposure may be unpredictable.

Rather than proving beneficial, allowing widespread transmission of SARS-CoV-2 could be detrimental, and the growing population suffering from Long COVID marked by a depletion of naive T-cells may be a warning.
Given the adverse impact Kawasaki disease and some autoimmune conditions can have on long-term health and longevity [127,128], national strategies that allow widespread transmission of an airborne potentially superantigenic or superantigen-like pathogen that has demonstrated some evidence of persistence and can inflict repeat infections may be misguided.
These days our money moves through virtual bank accounts, our information sources are vast and thorough, and even online orders arrive the very next day. All this expediency is amazing and convenient, but it comes at a cost: our privacy.
All this information comes with safety issues that must be addressed. Luckily, industries are embracing a new technology to improve online security: blockchain.
Blockchain, a Distributed Ledger Technology (DLT), is built on creating trust in an otherwise untrusting ecosystem, making it a potentially strong cybersecurity technology.
The ledger system is decentralized; however, information is transparently available to members of the specific blockchain. All members (or nodes) can record, pass and view any transactional data that is encrypted in that blockchain.
First used as the operational network behind Bitcoin, blockchain is now used in more than 1,000 cryptocurrencies, a number that is growing steadily.
DLT protects the integrity of cryptocurrencies through encryption methods and public information sharing. The legitimacy of purchases made with crypto is ensured because each transfer can be traced all the way back to the currency’s origin.
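To make the tracing idea concrete, here is a minimal sketch in Python (purely illustrative: it omits the signatures, consensus and networking a real blockchain needs), showing how hash-linking makes a ledger tamper-evident:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash every field except the block's own hash.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    # Each new block commits to the previous block's hash.
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def verify_chain(chain):
    # Any edit to an earlier block invalidates every hash link after it.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(chain))                    # True
chain[0]["transactions"][0]["amount"] = 500   # tamper with history
print(verify_chain(chain))                    # False
```

Because each block’s hash covers the previous block’s hash, rewriting history anywhere breaks every link after it, which is what lets participants trace a transfer back to its origin with confidence.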
Encryption also helps control the amount of cryptocurrency being created, thus stabilizing its value. Companies such as Coinbase, Mobilecoin, Javvy, and Founders Bank use blockchain to safeguard users and the crypto trades that take place on their apps.
Traditional banking is taking a bite out of blockchain as well, which is surprising considering that traditional banking solutions tend to be slow adopters of new technologies.
Trillions of dollars in cash flow combined with outdated and centralized cybersecurity protocols make the largest banks constant targets of hacking and fraud. Today there are at least 85 serious infiltration attempts a year, with cyber criminals focused on operational risks. In its annual report, the US Office of the Comptroller of the Currency (OCC) stated that more sophisticated attacks target employees who have access to credentialed information. Reports suggest multi-layered security protocols to decentralize risk, which is exactly what blockchain can provide.
Like banking, the healthcare industry tackles a constant barrage of cyber-attacks. In fact, healthcare experiences twice as many phishing attacks as any other industry.
Healthcare companies, hospitals, doctors and clinics not only store patient banking information, they also possess important health records (a hacker’s dream come true for targeted info!). Patient data is important to cyber criminals because it commands much more money on the black market, owing to the sensitive information stored within it. Exposing the social security numbers, credit card info, full names, weights, heights, prescriptions and medical conditions of millions of patients can be detrimental to any one individual’s livelihood.
Blockchain could be the badly needed solution to a problem that puts patients and hospitals at severe risk. The DLT’s decentralized state allows only certain individuals to have small amounts of information that, if combined, would comprise a patient’s entire health chart. The distribution of only certain information to credentialed healthcare professionals ensures that cyber criminals cannot gain access to all the good stuff.
Massachusetts-based Philips Healthcare is pairing blockchain with AI to create a new healthcare ecosystem that provides a strong security foothold for its patients.
In partnership with hospitals all over the world, the company uses AI to discover and analyze all aspects of the healthcare system, including operational, administrative and medical data. It then implements blockchain to secure the massive amounts of data collected. Philips Healthcare’s “HealthSuite Insights” platform gives healthcare systems an inside look at key pain points in the current health system and offers AI and blockchain solutions to remediate those issues.
The Office of Management and Budget (OMB) recently released a public report on the damage to U.S. Government cybersecurity infrastructure, where phrases like “do not have the resources to combat the current threat environment” and “agencies lack visibility into what is occurring on their networks” are just the beginning.
This report goes on to claim that:
- Of the almost 31,000 successful compromises in 2017, 38% never had their methods or attacker identified
- Only 27% of agencies can detect large data compromises
- 84% of all government agencies fail at meeting basic encryption goals
Startling stats indeed, and the last one especially could be improved with blockchain. The entire system runs on safe encryption of information, putting a barrier between hackers and identifiable information. Encrypted data, decentralized information storage and distributed ledgers can instill a new set of government cybersecurity priorities. Governmental agencies, then, would be able to quickly identify potential hacks and trace the manipulated data to its starting point. The governments and agencies attempting to be among the first governmental blockchain adopters are pioneering ways to implement DLT into everyday cybersecurity practices.
Innovation within the military and defense sectors has led to some pretty huge technological breakthroughs. The internet and GPS were just two technologies the military tried that soon enough became part of our everyday lexicon. Will blockchain be the next breakthrough technology promoted by the defense sector?
Accenture states that 86% of defense companies plan to integrate blockchain into their protocols within the next three years, with cybersecurity a particular focus. Blockchain is deemed a legitimate data safeguard for the military, defense contractors and aerospace companies that house some of the most sensitive information.
The Internet of Things (IoT) is no longer a growing industry but one that is here to stay, built on creativity and, consequently, cybersecurity issues. Nowadays, IoT products can be found in almost every aspect of our lives. From robot dog walkers to Bluetooth-enabled bike locks to hands-free smart kitchen appliances, wireless tech is everywhere, and 5G will only accelerate the trend.
There have been thousands of reported IoT device hacks over the last few years, a number that will surely increase given that 75 billion IoT devices are expected by 2025. One cybersecurity report found that hackers were able to bypass the security measures in an implantable cardiac device, which gave them the ability to shut down the device’s battery and leak incorrect heartbeat information. Even the cameras in our homes and workplaces can be hacked when attackers get hold of key IP addresses and login credentials for networks.
As the IoT device market continues to grow, so too does the need for an enhanced form of cybersecurity. That is where, you guessed it, blockchain comes in. It can provide a safe infrastructure for the transfer of data from one device to another without the interference of malicious actors, and decentralized control enables IoT devices to create audit trails and tracking methods for registering and using products.
As our world becomes more innovative and connected, there comes, as with all other aspects of our lives, a “wrong” side to it all. Luckily, the good guys are pioneering better security protocols and tools every day to keep us safe, and blockchain is one such innovation.
As more and more social and economic activities move online, the importance of privacy and data protection is becoming increasingly recognised. Of equal concern is the collection, use and sharing of personal information with third parties without notice or consent of consumers. In fact, I read recently on the UNCTAD site that 128 out of 194 countries have put in place legislation to secure the protection of data and privacy.
Whilst the US has been lagging behind other countries in terms of implementing national legislation, the picture is now beginning to take a different path at state level as legislative bodies introduce regulations. Some states, such as California, Vermont, New York and Ohio, have introduced data protection legislation in some form; Alabama has its Data Breach Notification Act; and as recently as last month Colorado passed its new data privacy bill, giving residents the right to stop companies from collecting their data in the future. There is now a significant movement towards safeguarding data privacy and increasing data protection state by state.
We are now seeing moves from the U.S. Federal government as well. In May President Biden published his Executive Order on improving the nation’s cybersecurity as a whole, showing how the thought process has stepped up a notch.
The reason for this is obvious. You don’t have to cast your mind too far back to be able to cite high profile cases in the press which showed us how important strong data protection rules are for society, including the very functioning of the democratic process.
These and other developments have shown that the protection of privacy, as a fundamental individual right, but also as an economic necessity, is crucial. Without consumers’ trust in the way their data is handled, our data-driven economies will not thrive.
As a practitioner working in the field of data security, I’m pleased to see data privacy and protection laws becoming more commonplace across the US. Data protection is the “one constant” that must be maintained across all environments. Organisations hold, and are responsible for safeguarding, vast amounts of data, and this data must be appropriately protected, irrespective of its type or location.
With personal data protection and privacy law rapidly evolving in the United States, and without principal legislation that governs data protection at the federal level in the U.S. as yet, one could be forgiven for wondering which regulations are most critical to be aware of. With that in mind, let us take a whistle-stop tour of some of the important and forthcoming laws you need to be aware of:
General Data Protection Regulation (GDPR)
Though of course not a US piece of legislation, GDPR is a critical one to conform to if a US company transacts with the EU or the UK.
The most important data protection legislation enacted to date is the General Data Protection Regulation (GDPR). It governs the collection, use, transmission, and security of data collected from residents of any of the member countries of the European Union. The law applies to all EU residents, regardless of the location of the entity that collects the personal data. Fines of up to €20 million or 4 percent of total global turnover may be imposed on organizations that fail to comply with the GDPR.
GDPR's seven principles are: lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality (security); and accountability. Some important requirements of GDPR include:
- Though GDPR was established in the EU, it applies to businesses all over the world. If your website collects the personal information of someone from one of the EU member states, then you're required to comply. Otherwise, you could be faced with fines and penalties.
- Organizations are required to notify supervisory authorities and data subjects within 72 hours in the event of a data breach affecting users' personal information in most cases.
- In many cases, the GDPR requires organizations to appoint a data protection officer (DPO): for example, businesses in the public sector, those conducting large-scale monitoring of individuals, or those processing large amounts of criminal data. This independent data protection expert is responsible for monitoring an organization's GDPR compliance, advising on its data protection obligations, and acting as a contact point for data subjects and the relevant supervisory authority.
California Consumer Privacy Act (CCPA)

The CCPA gives California consumers a set of rights over their personal information, including:
- The right to know about the personal information a business collects about them and how it is used and shared;
- The right to delete personal information collected from them (with some exceptions);
- The right to opt-out of the sale of their personal information; and
- The right to non-discrimination for exercising their CCPA rights.
Virginia's Consumer Data Protection Act (CDPA)
Virginia's Consumer Data Protection Act (CDPA) was passed on March 2, 2021. It grants Virginia consumers rights over their data and requires companies covered by the law to comply with rules on the data they collect, how it's treated and protected and with whom it's shared.
The law contains some similarities to the EU General Data Protection Regulation's provisions and the California Consumer Privacy Act. It applies to entities that do business in Virginia or sell products and services targeted to Virginia residents.
Colorado Privacy Act (CPA)
In June 2021, Colorado became the third U.S. state to pass a privacy law. The Colorado Privacy Act grants Colorado residents rights over their data and places obligations on data controllers and processors. It contains some similarities to California's two privacy laws, the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), as well as Virginia's recently passed Consumer Data Protection Act (CDPA). It even borrows some terms and ideas from the EU's General Data Protection Regulation.
While there are similarities, such as the opt-in requirement to obtain consent from consumers before collecting sensitive data, and the adoption of some privacy-by-design principles, the significant differences are in the details.
The CPA applies to businesses that collect personal data from 100,000 Colorado residents or collect data from 25,000 Colorado residents and derive a portion of revenue from the sale of that data.
The CPA is scheduled to come into effect on July 1, 2023.
New York SHIELD Act
In July 2019, New York passed the Stop Hacks and Improve Electronic Data Security (SHIELD) Act. This law amends New York's existing data breach notification law and creates more data security requirements for companies that collect information on New York residents. As of March 2020, the law is fully enforceable. This law broadens the scope of consumer privacy and provides better protection for New York residents from data breaches of their personal information.
Importance of privacy policies
With the implementation of data privacy legislation continuing to sweep through countries globally, a list which now increasingly includes the U.S., awareness of the key tenets of the laws that relate to your organization’s business practices is essential. Once you know how you are expected to protect consumer data, you can build a strategy around your people, processes and technology that ensures you comply with prevailing data privacy laws. In so doing, you are safeguarding your customers against theft, loss, or misuse of their personal information, and also protecting your organization from the risk of hefty penalties for non-compliance.
A SWOT Analysis is a technique used for business planning. It is useful for gaining a thorough overview of a person, business, product, brand, or new project early in its life cycle.
SWOT is an acronym that stands for:
- Strengths
- Weaknesses
- Opportunities
- Threats
Strengths and Weaknesses are internal to your organization, while Opportunities and Threats come from outside. The SWOT analysis is a good starting point for establishing where you are; it should be followed up with planning and development sessions to move your business forward. This tool works best when you have identified a specific goal or objective to analyze.
Use notecards, text, or shapes to fill in the Strengths, Weaknesses, Opportunities, and Threats based on your knowledge of the business or process. Then analyze the completed diagram and use the rest of the workspace to outline the next steps in your plan.
The introduction of neuroscience and cognitive science to marketing is known as neuromarketing. This form of market research can uncover customer needs and motivations, revealing preferences that traditional methods such as surveys and focus groups cannot. Biometrics is a new branch of neuromarketing. Biometrics employs various technologies and applications to better understand participants’ cognitive and emotional reactions. It studies reactions towards a product, advertisement, and brand.
Frequently, what people say does not fully correspond to their bodies’ physical reactions. A physiological response occurs before someone actually expresses what they are feeling. As a result, biometrics is a valuable tool in neuromarketing.
Here are some of the biometric tools used in neuromarketing and their functions:
The Facial Action Coding System (FACS) is a classification system for human facial expressions. It allows practitioners to decode the underlying emotion that registers on the human face in response to stimuli, regardless of how brief the exposure is.
At any given time, facial coding can tell us whether the emotional charge is positive, negative, or neutral. It also sorts the expressions into six different categories.
Researchers can use effective facial coding to gauge consumers’ emotional reactions to advertisements, products, or brand perception. While facial coding has a lot of potential in marketing research, some people are still skeptical and consider it to be a bit too subjective and unreliable.
Eye-tracking is a method of determining how interested someone is in what they see. It enables researchers to track where participants’ eyes travel as they view advertisements online or on television, as well as websites, print, or any other form of media. Without relying on verbal responses, marketing researchers can determine which elements are noticed first and focused on most. Eye movement tracking is an excellent way to tell if a person is paying attention. On the other hand, the technique is unlikely to be helpful in understanding or predicting consumer recall and behavior.
Galvanic Skin Response (GSR)
The galvanic skin response is an excellent predictor of emotional arousal, indicating how strong the emotional charge is. However, it does not disclose emotional expression, so, just as with eye tracking, we cannot tell whether the emotions are positive or negative. Neuromarketers should combine GSR with facial coding to accurately assess this critical dimension. The key benefits of GSR are its low cost, ease of measurement, and simplicity of use, and it is a relatively non-invasive device. Aside from the inability to measure expression, its main flaw is its lack of accuracy: there is a 1 to 5-second delay between the presentation of the stimulus and the phasic, or skin conductance, response (SCR).
Heart Rate Monitoring
Marketing experts can use an electrocardiograph (ECG) device to analyze heart rate, and it helps to understand physiological responses better. The idea is that these physiological responses, like other biometric measures, are thought to underpin emotions and may or may not be visible to people on a conscious level.
Anxiety, relief, and emotional fatigue reactions are all studied to give marketers a better understanding of how a respondent reacts to a set of variables. This allows researchers to see how advertisements affect the brain’s unconscious, customer behavior, and emotional regions.
Functional Magnetic Resonance Imaging (FMRI)

FMRI is a new type of biometric tool in neuromarketing that is gaining a lot of attention. The FMRI scanning technique helps researchers understand how stimulation is analyzed in the brain, and it also helps determine which research technique works best and why. For instance, FMRI has shown that the timing of when consumers see a price can completely change how they buy. The neural data suggested that when the price came first, the decision question shifted from “Do I like this?” to “Is this worth it?”. The researchers were able to predict which types of purchases would benefit from having prices shown early on.
FMRI may also help determine which of several options holds the greatest customer appeal during the product design phase. Similarly, FMRI data may assist in selecting the most effective messages in promotional campaigns, such as movie trailers or other forms of advertising.
Moreover, FMRI can be used to better understand how marketing actions, whether consciously or subconsciously, alter people’s preferences and experiences. This type of market research, however, is relatively new, and it has some methodological and sampling issues.
Electroencephalography (EEG)

The purpose of this device is to track changes in brain wave activity. It is easily accessible and provides accurate and valid information on how the human brain works. Electroencephalography provides far more detailed diagnostic insights than GSR or facial coding due to its superior temporal resolution. It also provides a more comprehensive set of metrics, measuring tiredness, attention, engagement, and workload, and researchers can tell which parts of the brain are active while respondents perform a task or respond to stimuli.
Roles Of Biometrics In Neuromarketing
Biometrics helps marketing researchers to combine traditional research methodology, such as when researchers interview people about thoughts and feelings with non-verbal measures. Researchers can use biometrics to tap into consumers’ minds as they process and subconsciously respond to messaging and overall branding.
Biometrics captures unconscious or subconscious responses to marketing media. The researcher should constantly evaluate the benefits of using biometrics in neuromarketing against practical issues, such as access to equipment, expertise, intrusiveness to research participants, and cost.
Biometric tools are most effective when used in conjunction with one another to reveal different aspects of cognition, emotion, and behavior. Eye tracking, GSR, and facial expression, or eye-tracking and EEG, are two combinations to consider. These tools provide a clear, multi-faceted understanding of respondents’ advertising engagement. However, none of the techniques can dive deeply into the minds of the consumers.
To understand what drives emotions and determines consumers’ reactions, we still need qualitative and quantitative research, but combining these methods with biometrics enables researchers to get a far more accurate response.
Although most people have a vague understanding of what a VPN is and how it’s used, few can explain how it works. VPNs were invented to make sharing data easier, so the terminology can get quite technical and quite confusing.
In this article, I’ll try my best to explain the central part of a VPN setup – tunneling.
What is a VPN tunnel?
A VPN tunnel is an encrypted connection between your device and a VPN server. It's uncrackable without a cryptographic key, so neither hackers nor your Internet Service Provider (ISP) could gain access to the data. This protects users from attacks and hides what they're doing online.
Effectively, VPN tunnels are a private route to the internet via intermediary servers. That's why VPNs are popular among privacy-cautious individuals.
How does VPN tunneling work?
Generally speaking, VPN tunneling means simply using a VPN service. Therefore, the answer to "How does VPN tunneling work?" is virtually the same as to "How does a VPN work?".
And now here's what a VPN tunnel does:
- Traffic encryption. Your data becomes protected from the third-parties.
- Hiding your IP address. The VPN tunnel funnels your traffic through to a VPN server, hiding your IP. Without the IP, there's no way to tell your location.
- Securing wifi hotspots. You no longer have to worry about your safety when using public wifi.
To make VPN tunneling work, you first have to get a VPN service. Once you connect to the desired server, a VPN tunnel is established. Without it, your ISP sees everything you do online; once you connect to a VPN server, that becomes impossible because of the encryption and the hidden IP address.
Most VPN services claim to have a strict no-logs policy, which means they don’t monitor and store personally identifiable information or online activity data. Having said all that, your best bet is to use a reputable VPN service that either has an independently audited no-logs policy or has been tested in the wild.
VPN tunnel security - can it be hacked?
If a VPN connection is so secure, is it actually possible to hack it? Unfortunately, yes - but that’s much less common than you might think. You shouldn’t worry about it if you’re just a regular user, as hackers usually prey only on high-profit targets like million-dollar companies.
So, how can a VPN tunnel be hacked? Well, as breaking the encryption itself is virtually impossible (unless there’s a known vulnerability), the most common way is stealing the encryption key. This can be done in a lot of different ways, however, using a reputable VPN greatly minimizes the risk.
For example, VPNs like NordVPN use a 4096-bit Diffie-Hellman (DH) key exchange, which makes establishing the keys for a VPN connection extremely secure.
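To illustrate the idea (this is a toy sketch, not NordVPN’s actual implementation), here is a minimal Diffie-Hellman exchange in Python. The numbers are kept tiny for readability; real handshakes use primes thousands of bits long, which is exactly why attackers go after the keys instead of the math:

```python
import secrets

# Public parameters (illustrative only; real handshakes use primes
# thousands of bits long, e.g. 4096-bit groups).
p = 4294967291   # a small prime
g = 5            # generator

a = secrets.randbelow(p - 2) + 1   # client's private value, never sent
b = secrets.randbelow(p - 2) + 1   # server's private value, never sent

A = pow(g, a, p)   # client sends A over the wire
B = pow(g, b, p)   # server sends B over the wire

# Both sides arrive at the same shared secret without ever transmitting it.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
print(hex(client_secret))
```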
How to test a VPN tunnel?
Checking your ping will help you know whether your VPN tunnel is working. You’ll need to check your ping twice: when you’re connected to a VPN and when you’re not. Then, simply comparing the results will let you see if the VPN connection was successful. So, here’s how you check your ping if you’re using Windows 10:
- Open Command Prompt
- Type in “ping 8.8.8.8” (8.8.8.8 is the public DNS of Google)
- Press Enter
- Wait for the results
The ping received with a VPN in use will be significantly higher than the one you get when disconnected from a VPN.
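If you prefer to automate the comparison, a small script can do the measuring. This sketch assumes a Unix-like ping command (on Windows, replace "-c" with "-n" and adjust the parsing):

```python
import re
import subprocess

def average_ping(host="8.8.8.8", count=4):
    # Run the system ping and pull the average out of the summary line,
    # which looks like: rtt min/avg/max/mdev = 9.1/9.8/11.2/0.7 ms
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    )
    match = re.search(r"= [\d.]+/([\d.]+)/", result.stdout)
    return float(match.group(1)) if match else None

print(f"average round-trip time: {average_ping()} ms")
```

Run it once while connected to the VPN and once while disconnected, then compare the two averages.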
Types of VPN tunnel protocols
A tunneling protocol, or VPN protocol, is software that allows data to be sent and received securely between two networks. Some protocols excel in speed but have lackluster security, and vice versa.
At the moment of writing this article, the most popular tunnel protocols are OpenVPN, IKEv2/IPSec, and L2TP/IPSec. However, the next-gen WireGuard protocol is being implemented in many premium VPN services.
Below you will find a list of VPN tunneling protocols, ranked from best to worst. Don't forget that not all providers offer the same set of protocols, and even if they do, their availability will be different across desktop and mobile devices.
WireGuard
Security: Very high
Speed: Very high
WireGuard is hands-down the best tunnel protocol available right now. It offers unprecedented speed and security, using top-notch encryption. What's more, this open-source protocol is easy to implement and audit thanks to its lightweight code, consisting of only 4000 lines. That's a hundred times less than OpenVPN, the most popular protocol.
Built from the ground up, WireGuard is free from any disadvantages that come with an old framework. It's also free from the negative impact of network changes, making it a go-to choice for mobile users.
OpenVPN

Released almost two decades ago, OpenVPN is still the most popular tunneling protocol. However, because of WireGuard, it is slowly losing that position. Despite that, you still get first-class security and fast speeds with this open-source VPN tunneling protocol.
You may encounter two versions of OpenVPN – TCP and UDP. The former is more stable and the latter offers a faster connection.
IKEv2/IPSec

This combination of protocols rivals OpenVPN in terms of popularity, security, and speed. IKEv2 excels at maintaining your VPN connection whenever you switch from one network to another. Due to its native support, it’s especially popular on iPhone and iPad devices.
L2TP/IPSec

L2TP/IPSec is a soon-to-be-retired VPN tunneling protocol that you can still find in some services, especially those that have trouble implementing OpenVPN on iOS. I could have ranked its security as “high,” but I can’t ignore that it’s been mentioned in Snowden’s leaks. If what he says is true, then the NSA may have the tools to exploit L2TP/IPSec.
Just like IKEv2/IPSec, this one combines two protocols: one is responsible for encapsulating your traffic, and the other takes care of encryption.
SSTP

When it comes to speed, the difference between SSTP and L2TP/IPSec is not that big. However, the reason the former sits one place below is compatibility: SSTP was created by Microsoft and works on Windows only. What’s more, there’s always a chance that its creators have left some unlocked back doors in case the NSA comes calling. On the bright side, SSTP is great for bypassing the Great Firewall of China.
PPTP

PPTP is an outdated VPN tunneling protocol that I don’t recommend you use. Just like its younger brother SSTP, it was developed by Microsoft, back in the days of Windows 95. And unlike its younger brother, PPTP is available even without a VPN app on all major platforms, including Linux. Unfortunately, there’s more than one widely known security vulnerability that makes using PPTP risky.
VPN split tunneling
Split tunneling allows you to choose which websites or apps should use a VPN tunnel and which ones should stay outside. This feature is useful when you want to watch a show that's available in the US and read a local version of a news portal. Another example would be using your office's printer while torrenting securely with a VPN.
However, not all VPN providers offer this feature. Even if they do, the chances are that split tunneling is available only on certain devices and operating systems. Therefore, always check your options before committing long-term.
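Conceptually, a split-tunneling rule is just a per-destination routing decision. The Python sketch below uses made-up hostnames and is far simpler than the routing-table changes a real client makes, but it shows the logic:

```python
# The hostnames here are hypothetical; a real client applies this kind of
# rule at the routing-table or driver level rather than in application code.
BYPASS = {"printer.office.local", "local-news.example.com"}

def route_for(hostname: str) -> str:
    # Destinations on the bypass list use the regular connection;
    # everything else goes through the VPN tunnel.
    return "direct (no VPN)" if hostname in BYPASS else "VPN tunnel"

for host in ("printer.office.local", "tracker.example.net"):
    print(f"{host}: {route_for(host)}")
```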
Thinking of trying out a VPN service? Read one of our VPN guides or reviews
How does tunneling work on a VPN?
A VPN tunnel links your device with your destination by using a VPN protocol. Your connection becomes encrypted, and your IP address is no longer visible to anyone outside the tunnel. The speed and security of such a tunnel highly depend on your VPN provider’s protocol, encryption type, and additional security and privacy features.
How do I setup a VPN tunnel?
If you’re using a VPN app, you don’t have to set up a VPN tunnel. It will be done automatically, and your only task is to choose between the available VPN protocols (and servers). However, most VPN services offer manual configuration guides on different devices, such as routers or smart TVs. To see your options, visit the provider’s website or contact customer support.
Is tunneling the same as VPN?
No. A VPN merely uses tunneling to connect your device to the VPN server.
The Defense Department viewed “identity dominance” as the cornerstone of multiple counterterrorism strategies.
In the wake of the Taliban’s takeover of Kabul and the ouster of the Afghan national government, alarming reports indicate that the insurgents could potentially access biometric data collected by the U.S. to track Afghans, including people who worked for U.S. and coalition forces.
Afghans who once supported the U.S. have been attempting to hide or destroy physical and digital evidence of their identities. Many Afghans fear that the identity documents and databases storing personally identifiable data could be transformed into death warrants in the hands of the Taliban.
This potential data breach underscores that data protection in zones of conflict, especially biometric data and databases that connect online activity to physical locations, can be a matter of life and death. My research and the work of journalists and privacy advocates who study biometric cybersurveillance anticipated these data privacy and security risks.
Investigative journalist Annie Jacobsen documented the birth of biometric-driven warfare in Afghanistan following the terrorist attacks on Sept. 11, 2001, in her book “First Platoon.” The Department of Defense quickly came to view biometric data and what it called “identity dominance” as the cornerstone of multiple counterterrorism and counterinsurgency strategies. Identity dominance means being able to keep track of people the military considers a potential threat regardless of aliases, and ultimately denying organizations the ability to use anonymity to hide their activities.
By 2004, thousands of U.S. military personnel had been trained to collect biometric data to support the wars in Afghanistan and Iraq. By 2007, U.S. forces were collecting biometric data primarily through mobile devices such as the Biometric Automated Toolset (BAT) and Handheld Interagency Identity Detection Equipment (HIIDE). BAT includes a laptop, fingerprint reader, iris scanner and camera. HIIDE is a single small device that incorporates a fingerprint reader, iris scanner and camera. Users of these devices can collect iris and fingerprint scans and facial photos, and match them to entries in military databases and biometric watchlists.
In addition to biometric data, the system includes biographic and contextual data such as criminal and terrorist watchlist records, enabling users to determine if an individual is flagged in the system as a suspect. Intelligence analysts can also use the system to monitor people’s movements and activities by tracking biometric data recorded by troops in the field.
By 2011, a decade after 9/11, the Department of Defense maintained approximately 4.8 million biometric records of people in Afghanistan and Iraq, with about 630,000 of the records collected using HIIDE devices. Also by that time, the U.S. Army and its military partners in the Afghan government were using biometric-enabled intelligence or biometric cyberintelligence on the battlefield to identify and track insurgents.
In 2013, the U.S. Army and Marine Corps used the Biometric Enrollment and Screening Device, which enrolled the iris scans, fingerprints and digital face photos of “persons of interest” in Afghanistan. That device was replaced by the Identity Dominance System-Marine Corps in 2017, which uses a laptop with biometric data collection sensors, known as the Secure Electronic Enrollment Kit.
Over the years, to support these military objectives, the Department of Defense aimed to create a biometric database on 80% of the Afghan population, approximately 32 million people at today’s population level. It is unclear how close the military came to this goal.
More data equals more people at risk
In addition to the use of biometric data by the U.S. and Afghan military for security purposes, the Department of Defense and the Afghan government eventually adopted the technologies for a range of day-to-day governmental uses. These included evidence for criminal prosecution, clearing Afghan workers for employment and election security.
In addition, the Afghan National ID system and voter registration databases contained sensitive data, including ethnicity data. The Afghan ID, the e-Tazkira, is an electronic identification document that includes biometric data, which increases the privacy risks posed by Taliban access to the National ID system.
It’s too soon after the Taliban’s return to power to know whether and to what extent the Taliban will be able to commandeer the biometric data once held by the U.S. military. One report suggested that the Taliban may not be able to access the biometric data collected through HIIDE because they lack the technical capacity to do so. However, it’s possible the Taliban could turn to longtime ally Inter-Services Intelligence, Pakistan’s intelligence agency, for help getting at the data. Like many national intelligence services, ISI likely has the necessary technology.
Another report indicated that the Taliban have already started to deploy a “biometrics machine” to conduct “house-to-house inspections” to identify former Afghan officials and security forces. This is consistent with prior Afghan news reports that described the Taliban subjecting bus passengers to biometric screening and using biometric data to target Afghan security forces for kidnapping and assassination.
Concerns about collecting biometric data
For years following 9/11, researchers, activists and policymakers raised concerns that the mass collection, storage and analysis of sensitive biometric data posed dangers to privacy rights and human rights. Reports of the Taliban potentially accessing U.S. biometric data stored by the military show that those concerns were not unfounded. They reveal potential cybersecurity vulnerabilities in the U.S. military’s biometric systems. In particular, the situation raises questions about the security of the mobile biometric data collection devices used in Afghanistan.
The data privacy and cybersecurity concerns surrounding Taliban access to U.S. and former Afghan government databases are a warning for the future. In building biometric-driven warfare technologies and protocols, it appears that the U.S. Department of Defense assumed the Afghan government would have the minimum level of stability needed to protect the data.
The U.S. military should assume that any sensitive data – biometric and biographical data, wiretap data and communications, geolocation data, government records – could potentially fall into enemy hands. In addition to building robust security to protect against unauthorized access, the Pentagon should use this as an opportunity to question whether it was necessary to collect the biometric data in the first instance.
Understanding the unintended consequences of the U.S. experiment in biometric-driven warfare and biometric cyberintelligence is critically important for determining whether and how the military should collect biometric information. In the case of Afghanistan, the biometric data that the U.S. military and the Afghan government had been using to track the Taliban could one day soon – if it’s not already – be used by the Taliban to track Afghans who supported the U.S.
Margaret Hu is a professor of law and of international affairs at Penn State.
The recent paper on Diffie-Hellman "precomputation" estimates a cost of 45-million core-years. Of course, the NSA wouldn't buy so many computers to do the work, but would instead build ASICs to do the work. The most natural analogy is how Bitcoin works. Bitcoin hashes were originally computed on CPU cores, then moved to graphics co-processors, then FPGAs, then finally ASICs.
The current hashrate of Bitcoin is 460,451,594,000 megahashes/second. An Intel x86 core computes about 3 megahashes/second, so the network is equivalent to 153,483,864,667 CPU cores. Divide this by 45 million core-years for precomputing 1024-bit DH, and you get 3,410 DH precomputations per year. Thus, we get the following result:
The ASIC power in the current Bitcoin network could do all the necessary precomputations for a Diffie-Hellman 1024-bit pair with 154 minutes worth of work. Or, the precomputation effort is roughly equal to 15 bitcoin blocks, at the current rate. (Update: I did some math wrong, it's 154 minutes not 23 minutes)
Another way of comparing is by using the website "keylength.com", which places the equivalent effort of cracking 1024-bit DH at 72 to 80 bits of symmetric crypto. At the current Bitcoin rate, 72 bits of crypto comes out to 15 bitcoin blocks, matching the estimate above. (I assume precomputation is roughly the same amount of work as computing 1024-bit DH.)
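For anyone who wants to check the arithmetic, the whole calculation fits in a short Python script (the inputs are the figures quoted above):

```python
hashrate_mhs = 460_451_594_000      # Bitcoin network, megahashes/second
core_mhs = 3                        # one x86 core, megahashes/second
precompute_core_years = 45_000_000  # one 1024-bit DH precomputation

equivalent_cores = hashrate_mhs / core_mhs
per_year = equivalent_cores / precompute_core_years
minutes_each = 365.25 * 24 * 60 / per_year

print(f"{equivalent_cores:,.0f} core-equivalents")       # ~153.5 billion
print(f"{per_year:,.0f} precomputations per year")       # ~3,411
print(f"{minutes_each:.0f} minutes per precomputation")  # ~154
print(f"~{minutes_each / 10:.0f} blocks (one every ~10 minutes)")
```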
In this second post of the IONIAN system series, we discuss the reasons for the choice of the submarine route.
IONIAN is a point-to-point submarine cable landing near the cities of Crotone in Italy and Preveza in Greece. It crosses the Ionian Sea in an (almost) straight East-West line. The designed submarine route has a total length of c. 317 km.
IONIAN uses low-attenuation fibers, so no amplifiers are required, while still allowing a very large capacity (> 15 Tbps) per fiber pair with current transmission technology.
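As a rough sanity check on why an unrepeatered design is feasible, one can estimate the end-to-end fibre loss. The attenuation figure in this sketch is an assumption for illustration (ultra-low-loss submarine fibres are typically quoted around 0.15-0.16 dB/km); the actual IONIAN value is not stated here:

```python
length_km = 317                 # designed route length from the text
attenuation_db_per_km = 0.155   # assumed; not an official IONIAN figure

span_loss_db = length_km * attenuation_db_per_km
print(f"approximate end-to-end fibre loss: {span_loss_db:.1f} dB")  # ~49 dB
```

Every hundredth of a dB per kilometre saved is worth about 3 dB over a 317 km span, which is why the fibre choice matters so much.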
Looking at the route map (fig. 1), it is obvious that, from a geographical point of view, there are options to the north of this route that would reduce the cable length. This would shorten the marine route, although, if the end goal is to connect Athens, the saving would be partially offset by a longer terrestrial route.
The reason for the choice of IONIAN’s route is that it has been engineered to maximize the reliability of the cable while keeping the marine stretch short enough to avoid the need for amplifiers with low-attenuation fibers. Given that the major causes of submarine cable cuts are human activities, such as fishing and anchoring, the best way to minimize such risks is by laying the cable in deep waters. Additionally, not having submerged active elements (amplifiers) also increases the reliability and durability of the system. The figure below shows the elevation profile of the IONIAN cable route. As can be appreciated, most (78%) of the cable will be deployed below the 1,000 m water-depth limit.
The Ionian Sea is characterized by its deep waters (the deepest point in the Mediterranean, the Calypso Deep, is in the Ionian Sea). As we move north, towards the Adriatic Sea, the water depth decreases. As can be seen below, a hypothetical route following the shortest path between the Italian peninsula and the northernmost part of Greece would have most of the cable (72%) at a water depth of less than 250 meters.
To summarize, the route has been engineered to:
1. Maximize reliability by avoiding cut hazards >> deep waters
2. Avoid submerged active elements to extend the lifetime >> unrepeated cable
Finally, the choice of the landing points is a very important decision, since the availability of appropriate infrastructure, a reliable supply of energy and a safe location for the cable landing stations are all critical for the system to operate in good conditions. But that is a different story…
ISO 14001 – Global Reach and its application in the Business Industry
The dynamic environment that we are living in has enabled companies to bloom and develop; however, these developments have had an impact on the environment which, as far as we can see, has not been blooming as well. On the contrary, the environment has become so fragile that it needs protection. Hence, organizations now must address their impact on the environment and demonstrate how they are doing so.
For many organizations nowadays, sustainability has become an essential element that needs to be addressed in all their operations. For organizations that want to address the environmental pillar of sustainability, ISO 14001 is a great start, since it helps organizations manage environmental aspects in a systematic way. In addition, an organization can get certified with ISO 14001 and attest to its customers that it is conscious of its impact on environmental issues.
An Environmental Management System (EMS) and environmental performance are the two main components that ISO 14001 focuses on. Organizations that have implemented an EMS based on ISO 14001 consider the impact of environmental aspects and, by fulfilling compliance obligations and reducing internal and external risks, enhance their environmental performance. In other words, environmental performance follows from how organizations manage their environmental aspects.
Where should an organization start?
ISO 14001 is suitable for all organizations; the size, type, and nature of an organization do not limit the implementation of this standard. Top management can start by implementing an EMS and defining new actions that improve environmental performance and, based on these two elements, create an environmental policy. The environmental policy should emphasize how to enhance environmental performance and consider the protection of the environment by preventing and mitigating its negative impacts.
ISO 14001 does not itself prescribe environmental performance criteria; it is up to organizations to determine their environmental impact and, based on that, create policies, objectives, and processes that measure and continually improve their environmental performance.
However, organizations are free to decide whether they want to implement parts of the standard or the standard as a whole. The difference is that only organizations that implement an EMS based on all the requirements of ISO 14001, and have successfully fulfilled them, can get certified. Implementing only parts of the standard would still create a positive impact on your organization and the environment, yet there would be no valid proof that you are doing so.
Thus, it is no surprise that there are more than 350,000 organizations that are certified with ISO 14001 worldwide, more specifically in 171 different countries (ISO Survey, 2020).
Why should an organization consider getting ISO 14001 Certification?
Getting certified with ISO 14001 for many organizations, despite improving their environmental performance, comes with many other benefits, such as:
- Enhanced reputation – With today’s sustainability trends, customers increasingly opt for organizations that consider their environmental impact and their contribution to society. Hence, showing your customers that you have taken meaningful action on the environment by implementing an internationally recognized standard gives your organization credibility and an improved image. This creates a larger customer base, which results in greater market share. Moreover, stakeholders’ confidence in the organization will increase, since stakeholders can see that the organization is improving its environmental performance in line with the objectives they rely on.
- Cost reduction – By following efficient solutions that benefit the environment, organizations also cut costs. This provides a competitive advantage that is financially beneficial as well, since the implemented EMS enables organizations to reduce incidents and waste through the efficient use of resources.
- Compliance validation – For some organizations that operate in industries that need valid demonstration of compliance with the statutory and regulatory requirements, being ISO 14001 certified is mandatory since it confirms that certified organizations follow such regulations.
- Accomplishment of objectives – ISO 14001 helps organizations achieve their environment-related objectives. The standard requires organizations to set objectives with corresponding procedures and policies, always keeping environmental aspects in mind. Following these requirements enables organizations to improve their operations and run them efficiently.
- Employee incorporation & Leadership commitment – Implementing ISO 14001 involves everyone in the organization. While employees are engaged in the process, leadership should show their commitment, and together as a group they can reduce the environmental impact of their organization. This also creates a better impact within their moral, since the organization validates its concern about the environment and shows its commitment towards environmental improvement.
- Continual improvement – Creating a culture of continual improvement makes the staff be more focused on improving the processes to a better level and this also creates an easier way of managing the EMS.
How to get certified with ISO 14001?
To get ISO 14001 certification, you first need to find a third-party certification body to audit your organization's ISO 14001 management system. Certification is not a mandatory requirement, and for some organizations the benefits of implementing the standard in part are sufficient. However, validating your organization's conformity with the requirements of ISO 14001 by getting audited and certified helps you demonstrate to customers and stakeholders that you have implemented all the requirements of the standard properly.
Therefore, for organizations that want to get certified, there are some preliminary steps to take before implementing the standard:
Step 1: Set your objectives. What are the main points that you want to achieve from ISO 14001?
Step 2: Leadership commitment is a core factor in the process of creating an effective EMS. Therefore, make sure you get the buy-in of upper management.
Step 3: Take a look at the current processes and systems that relate to the environmental aspects, and from this create a starting point for the implementation of the EMS. This would help to detect possible gaps and let you focus on what needs to be corrected.
After taking these steps, organizations can start implementing the EMS by creating all the necessary policies, objectives, and procedures, and by defining its scope. Once the documentation is in place, organizations need to test whether the EMS is efficient and effective by conducting an internal audit, holding a management review, and taking the necessary corrective actions. Once the EMS is working as intended, they can proceed to the audit of the EMS by a certification body.
Selecting the right certification body to perform the third-party audit is vital for your organization’s success. The extent to which your certification adds value and provides comparative advantage to the organization depends significantly on this selection.
How would this work with MSECB ISO 14001 certification?
Receiving an internationally recognized certification from a globally renowned certification body, such as MSECB, has proved to have a multidimensional impact on previously certified organizations, and ultimately has increased the market share and recognition of those organizations.
The certification process is separated into two stages:
- During Stage 1 Audit, MSECB would conduct a review of the EMS to verify whether the client is ready for the Stage 2 Audit.
- After Stage 1 is completed successfully, the Stage 2 Audit is conducted. This is a more in-depth audit that verifies whether the client has met all the requirements of the standard.
Upon verifying that your organization is in conformity with the requirements of the ISO 14001 standard, the certification is granted by MSECB. The certification is then maintained through scheduled annual surveillance audits conducted by MSECB, with the recertification audit performed on a triennial basis. Read more about our certification process here.
To sum up, an ISO 14001 certification serves as internationally recognized, valid proof that your organization follows best practices and is conscious of its environmental impact.
By managing, monitoring, and controlling their environmental aspects, organizations that get certified with ISO 14001 not only improve their environmental performance through the efficient use of resources and the reduction of waste, but also gain a competitive advantage and the trust of stakeholders. The organization, its interested parties, and the environment all benefit from the implementation of the EMS.
Is your organization concerned about environmental issues? Start your ISO 14001 certification today by getting a Free Quote.
In general, the terms SMTP address and MIME address refer to the email addresses of the sender and recipients as specified in the email message itself, or during the email transmission. In most cases the SMTP and MIME email addresses are identical. The following representation shows the basic commands used to send an email message over SMTP and highlights the difference between the SMTP and MIME email addresses.
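To make the representation concrete, here is a minimal Python sketch whose low-level smtplib calls map one-to-one onto the SMTP commands; all host names and addresses below are hypothetical placeholders.

```python
import smtplib

# Each call below corresponds to one SMTP command; the comments show
# which address is the SMTP (envelope) one and which is the MIME one.
server = smtplib.SMTP("mail.example.net")
server.helo("client.example.com")
server.mail("alice@example.com")      # MAIL FROM -> SMTP sender address
server.rcpt("bob@example.net")        # RCPT TO   -> SMTP recipient address
server.data(
    "From: alice@example.com\r\n"     # From:     -> MIME sender address
    "To: bob@example.net\r\n"         # To:       -> MIME recipient address
    "Subject: This is the message subject\r\n"
    "\r\n"
    "This is the message body\r\n"
)
server.quit()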
From the above, we can see that the sender's email address is specified twice: once in the MAIL FROM SMTP command, and again in the From: header field.
The same applies to the recipient's email address: it is first specified in the RCPT TO SMTP command, and then again in the To: header field.
The SMTP email addresses are used only by SMTP servers during the transmission of the email, to route the message to its destination. This information is normally lost once the email is saved in the recipient's mailbox.
On the other hand, the MIME addresses are the addresses specified in the From: and To: header fields. These fields are part of the message being transferred, and both appear after the DATA SMTP command. This information is normally not used by the SMTP servers transmitting the email; however, the MIME email addresses are the addresses displayed to the user.
Spam emails sometimes specify different SMTP and MIME email addresses. Spammers may do this on purpose in an attempt to fool anti-spam software or the user, or it may be the result of a mistake or a bug in the software used to send the spam. The following illustrates such an attempt:
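As a hedged sketch of that attempt, again with hypothetical addresses, the session below performs two transmissions: in the first, the SMTP and MIME addresses match; in the second, they deliberately differ.

```python
import smtplib

server = smtplib.SMTP("mail.example.net")
server.helo("client.example.com")

# Transmission 1: SMTP and MIME addresses match (a normal email).
server.mail("alice@example.com")                # MAIL FROM
server.rcpt("firstname.lastname@example.org")   # RCPT TO (must be real)
server.data("From: alice@example.com\r\n"
            "To: firstname.lastname@example.org\r\n"
            "Subject: A normal message\r\n\r\n"
            "Nothing suspicious here.\r\n")

# Transmission 2: the SMTP addresses differ from the MIME headers.
server.mail("spammer@bulkmail.example")         # MAIL FROM (can be fake)
server.rcpt("firstname.lastname@example.org")   # RCPT TO (must be real)
server.data("From: you [firstname.lastname@example.org]\r\n"  # forged MIME sender
            "To: you [firstname.lastname@example.org]\r\n"
            "Subject: This is the message subject\r\n"
            "\r\n"
            "This is the message body\r\n")
server.quit()
```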
- When transmitting an email, only the email address in the RCPT TO command needs to be real, since this specifies the email address where the email will be delivered. All the other email addresses can be fake.
- The email sent using the second SMTP transmission above will be displayed to the recipient as follows:
To: you [firstname.lastname@example.org]
Subject: This is the message subject
This is the message body
- GFI MailEssentials has access to the SMTP information, since it is bound to the Microsoft Internet Information Services (IIS) SMTP server. GFI MailEssentials will also have access to a copy of the message, which will have the MIME information.
Organizations of all sizes all over the world use Active Directory to help manage permissions and control access to critical network resources. But what exactly is it, and how can it potentially help your business?
What is Active Directory?
Active Directory (AD) is a directory service that runs on Microsoft Windows Server. The main function of Active Directory is to enable administrators to manage permissions and control access to network resources. In Active Directory, data is stored as objects, which include users, groups, applications, and devices, and these objects are categorized according to their name and attributes.
What Are Active Directory Domain Services?
Active Directory Domain Services (AD DS) is the core component of Active Directory and provides the primary mechanism for authenticating users and determining which network resources they can access. AD DS also provides additional features such as Single Sign-On (SSO), security certificates, LDAP, and access rights management.
The Hierarchical Structure of Active Directory Domain Services
AD DS organizes data in a hierarchical structure consisting of domains, trees, and forests, as detailed below.
Domains: A domain represents a group of objects such as users, groups, and devices, which share the same AD database. You can think of a domain as a branch in a tree. A domain has the same structure as standard domains and sub-domains, e.g. yourdomain.com and sales.yourdomain.com.
Trees: A tree is one or more domains grouped together in a logical hierarchy. Since domains in a tree are related, they are said to “trust” each other.
Forest: A forest is the highest level of organization within AD and contains a group of trees. The trees in a forest can also trust each other, and will also share directory schemas, catalogs, application information, and domain configurations.
Organizational Units: An OU is used to organize users, groups, computers, and other organizational units.
Containers: A container is similar to an OU, however, unlike an OU, it is not possible to link a Group Policy Object (GPO) to a generic Active Directory container.
Other Active Directory Services
Besides Active Directory Domain Services, there are a handful of other critical services that AD provides. Some of those services have been listed below:
Lightweight Directory Services: AD LDS is a Lightweight Directory Access Protocol (LDAP) directory service. It provides only a subset of the AD DS features, which makes it more versatile in terms of where it can be run. For example, it can be run as a stand-alone directory service without needing to be integrated with a full implementation of Active Directory.
Certificate Services: You can create, manage and share encryption certificates, which allow users to exchange information securely over the internet.
Active Directory Federation Services: ADFS is a Single Sign-On (SSO) solution for AD which allows employees to access multiple applications with a single set of credentials, thus simplifying the user experience.
Rights Management Services: AD RMS is a set of tools that assists with the management of security technologies that will help organizations keep their data secure. Such technologies include encryption, certificates, and authentication, and cover a range of applications and content types, such as emails and Word documents.
The server that hosts AD DS is called a domain controller (DC). A domain controller can also be used to authenticate with other MS products, such as Exchange Server, SharePoint Server, SQL Server, File Server, and more.
Getting Started with Windows Active Directory
A comprehensive step-by-step guide to setting up Active Directory on Windows Server is beyond the scope of this article. Instead, I will provide a basic summary of the steps required to install AD, which should at least point you in the right direction. Assuming you already have Windows Server (2016) installed, you will need to…
- Change your DNS settings so that your server IP address is the primary DNS server.
- Open the Server Manager, which you can access via PowerShell by logging in as administrator and typing ServerManager.exe.
- On the Server Manager window, click on Add roles and features, and click the Next button to start the setup process.
- On the window that says Select Server Roles, check the box that says Active Directory Domain Services. A pop-up box will appear. Click on Add Features, and then click Next to continue.
- Keep clicking the Next button until you get to the final screen. Unless you know what you are doing, you are better off leaving the default settings as they are.
- Once you have got to the end of the wizard, click Install, and wait for the installation process to complete.
Once you have Active Directory Domain Services installed, you will then need to configure your installation, which includes changing default passwords, setting up OUs, domains, trees, and forests. As mentioned, a detailed explanation of setting up and configuring Active Directory is beyond the scope of this article. For detailed up-to-date instructions, you will need to consult the official documentation.
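Because AD DS exposes the directory over LDAP, objects can also be queried programmatically once the domain is up. The sketch below uses the third-party Python ldap3 package; the server name, credentials, and OU are hypothetical placeholders, not values your environment will actually have.

```python
# Requires the third-party ldap3 package (pip install ldap3).
from ldap3 import Server, Connection, SUBTREE

# Hypothetical domain controller, account, and OU.
server = Server("dc01.yourdomain.com")
conn = Connection(server, user="YOURDOMAIN\\administrator",
                  password="********", auto_bind=True)

# The search base is a distinguished name that mirrors the hierarchy
# described earlier: an OU inside the yourdomain.com domain.
conn.search(search_base="OU=Sales,DC=yourdomain,DC=com",
            search_filter="(objectClass=user)",  # every user object in the OU
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "mail"])

for entry in conn.entries:
    print(entry.sAMAccountName, entry.mail)
```

Equivalent queries can be run from PowerShell or any LDAP-aware tool; the point is that the distinguished name directly encodes the domain and OU hierarchy described above.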
What is Azure Active Directory
Given that more and more organizations are shifting their business operations to the cloud, Microsoft has introduced Azure Active Directory (Azure AD), a cloud-based counterpart to Windows AD that can also sync with on-premises AD implementations. Azure AD is said to be the backbone of Office 365 and other Azure products; however, it can also be integrated with other cloud services and platforms. Some of the differences between Windows AD and Azure AD are as follows.
Communication: Azure AD uses a REST API, whereas Windows AD uses LDAP, as mentioned previously.
Authentication: Windows AD uses Kerberos and NTLM for authentication, whereas Azure AD uses its own built-in web-based authentication protocols.
Structure: Unlike Windows AD, which is organized by OUs, trees, forests, and domains, Azure AD uses a flat structure of users and groups.
Device Management: Unlike Windows AD, Azure AD can be managed via mobile devices. Azure AD does not rely on Group Policy Objects (GPOs) to determine which devices and servers are able to connect to the network.
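To illustrate the REST-based model, here is a rough sketch of querying Azure AD users through the Microsoft Graph API. The Graph endpoint is real, but token acquisition is elided and the placeholder token is hypothetical.

```python
import requests

# Acquiring the OAuth 2.0 token (e.g., via MSAL) is elided; the value
# below is a hypothetical placeholder.
token = "<access-token-from-azure-ad>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/users",      # Microsoft Graph endpoint
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for user in resp.json()["value"]:
    print(user["displayName"], user["userPrincipalName"])
```

The contrast with the LDAP example above is the point: Azure AD is consumed over HTTPS rather than through directory-protocol bindings.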
If you are reading an article about Active Directory, it's more than likely that you are not already using it. In which case, you might be better off starting with Azure AD as opposed to Windows AD. One of the main reasons why you might want to use Windows AD is if you are storing large amounts of valuable data and have a team of experienced IT professionals managing your cybersecurity program.
The data center industry is poised to benefit from many of the investments proposed by the $1 trillion, bipartisan Infrastructure Bill currently making its way through Congress. The bill promises “historic investment” in the American economy by creating jobs that facilitate a response to climate and other environmental risks. Though it does not specifically call for investment in data centers, there are broad implications for the industry within two of the bill’s largest line items: electric grid improvements and rural broadband access.
The bill would stimulate development of additional energy generation projects, adding capacity to the grid and making it possible to build new data centers. As demand for data center services grows, the power supply will need to grow even faster. To obtain permits to build new facilities, utilities must have adequate energy resources available, which this bill would help create.
Additionally, the infrastructure bill explicitly intends to accelerate the renewable energy transition, which will support decarbonization efforts across the technology industry landscape.
A more reliable grid
Funding from the infrastructure bill would strengthen grid reliability, which is essential to maintain the critical services data centers provide. Reliability can be improved by increasing the grid’s resilience to environmental disruptions and aiding the flow of energy between regional grids during local energy shortages. Data centers protect and ensure the continued operations of IT infrastructure, which support critical services that drive commerce and communication. They must remain operational 24x7 and run equipment that draws large amounts of power. Improvements to the electrical grid that enhance energy reliability are crucial.
The bill would also expand access to high-speed internet by bringing broadband infrastructure to rural communities and lowering the cost of internet services. This would inevitably lead to an increase in demand for data center services. As more people and more systems come online, the importance of data centers becomes even more apparent.
Other features of the infrastructure bill could also support the data center industry. Environmental remediation of old industrial sites offers an opportunity to develop and work safely on an existing footprint without requiring the industry to expand into natural areas for new development. Increased investment in water storage, conservation, and reclamation would allow data centers to responsibly draw and return water to the watersheds where they operate. Supporting projects that provide clean water and achieve environmental remediation is critical to the health of the nation.
As demand for data center services grows, the supply of clean renewable power will need to exceed its pace. Many in the data center industry have made bold commitments to net-zero operations that are dependent on the development of a clean and reliable electric grid. The scale of renewable energy needed must be enough to power new operations while also displacing fossil fuel use at existing operations. We are pleased to see the bill's explicit intent to accelerate the renewable energy transition, which is welcomed across the technology industry.
The water-electricity trade-off
Another important piece for the data center industry is access to water to operate facilities. The bill's proposed improvements to water storage and reclamation would benefit the industry at large. Many providers, including CyrusOne, have committed to operating without consuming water for cooling; however, water-free data centers trade that water use for a higher electricity demand.
This trade off allows the industry to preserve precious water in regions where it is scarce. When the electricity demand is met with carbon-free, renewable energy generation, we limit our environmental impact both locally and globally by reducing our contribution to greenhouse gas emissions and reducing our dependence on the large volume of water necessary to generate electricity from fossil fuels. This underscores the importance of expanding renewable energy deployment to achieve multiple environmental goals.
Finally, data centers and IT services are the backbone of numerous American companies. The bill's proposed upgrades to transit systems, such as the roads, vehicles, rail lines, and airplanes that enable distribution for these companies, will benefit everyone, including data centers.
We welcome the bill not only for its scale, but for the sustainable ways it would shape our industry. We are committed to conserving water and energy through the effective design, maintenance, and operation of our facilities, and we aspire to be strategic partners for sustainability with our customers. The bill will help this industry achieve its sustainability mission.
What is Minimum Viable Architecture? Why is it important?
Companies have traditionally taken a waterfall-style approach to defining and designing software architecture. Every detail of the development lifecycle is accounted for long before a line of code is written. Though effective in its day, this model has proven unreliable: it is exceedingly slow and thus unsuited to today's rapidly evolving technological landscape.
By the time the architecture is complete for developers to use, the business environment has transformed greatly. Consequently, developers are compelled to either get rid of the design altogether or build solutions for the current reality based on a framework that was built for a different time.
The waterfall-style approach of designing the entire architecture upfront, therefore, is an outdated method as it cripples the company’s steady feedback cycle as well as its ability to adapt to ever-changing requirements, both of which are indispensable qualities.
The persistent issue of over-architecture can only be avoided or countered by what is called ‘Minimum Viable Architecture’. Let us take a look at what it is and how companies can benefit through its implementation.
Understanding Minimum Viable Architecture
To understand the concept of Minimum Viable Architecture, we must first discuss the Agile software development model. The Agile methodology is a software development practice that promotes continuous iteration of development and testing throughout a project's lifecycle. Unlike in the waterfall model, development and testing activities run concurrently in the Agile model. This method relies heavily on responding to change rather than following a rigid plan.
One of the primary elements of the Agile methodology is something called Minimum Viable Product or MVP. This refers to the building of a product with the bare minimum functionality required to be able to deliver a usable product to its early adopters. When this very idea is applied to a whole enterprise, it is called Minimum Viable Architecture or MVA.
Simply put, Minimum Viable Architecture, often referred to as “Just Enough Architecture” due to its nature, is a method of software development that entails the implementation of the core architectural components that comprise the architecture’s foundation such that the end result is good enough to be released. The rest of the architecture can then be built on top of this foundation. In this manner, the end user’s most critical requirements, both functional and non-functional, are prioritized and fulfilled.
The Main Aspects of MVA Are As Follows:
- Build for the most likely scenario. Keep the design flexible enough to tackle edge-case scenarios as they arise.
- Keep the major technologies and the programming language intact. The product is built in small increments over a certain length of time.
- This type of architecture is grounded in concrete and factual requirements instead of assumptions or gut feelings.
A Few Basic Guidelines for Implementing MVA
The very first step to successfully implementing Minimum Viable Architecture is to identify the goals of the business. Just as important as the goals, the target users for the end product need to be defined. Map the complete customer journey, including the problems they are looking to solve.
At this juncture, it is crucial to distinguish between must-have features and good-to-have features. Features that directly address the customers' most pressing pain points should be the top priority. Defining target objectives and the metrics for measuring success can be of great help at this stage.
Next comes the development methodology. It is of utmost importance that the development strategy covers best practices for the code review process, repository management, a persistence strategy, and a quality assurance (QA) strategy. Then comes the complex step of defining the tech stack to be used, such as the programming language and toolkit, and the cloud environments you intend to use.
Once all these steps are taken care of, determining the architectural pattern that best fits what you are trying to accomplish will tie everything together nicely.
The Benefits of MVA
Prioritizing agility has come to be less of an option and more of a necessity. Businesses that do prioritize it by switching to an adaptive business model such as the Minimum Viable Architecture model are bound to have a much higher success rate as they prepare for and embrace the perpetual state of change.
The MVA methodology can be immensely beneficial to a business that relies heavily on investor buy-in, as it allows the company to ascertain whether its product will succeed before pitching the idea to investors. When it does pitch, it can also present a solid business case demonstrating the product's market validity.
MVA also allows the organization to verify market demand for its product, and to discover whether target customers would really use it, without having to invest large amounts of money. Testing the product's UX and usability, as well as building a monetization strategy, also becomes far easier under the Minimum Viable Architecture model.
Lastly, the cost-efficiency factor of the MVA model remains unmatched. Over-architecture from the very outset comes with a hefty price tag, while the minimum approach only requires investments in small increments.
Using the Minimum Viable Architecture model can ultimately result in a highly polished end product, as it relies on testing assumptions with small experiments and guiding development using the findings of those experiments. By providing a flexible framework that helps achieve target business objectives, MVA responds to evolving customer requirements and technologies and can go a long way in promoting agility.
A number of people worry about radiation before they purchase headphones. It is a commonly believed myth that Bluetooth headphones cause cancer and that corded headsets are a safer bet. Although this myth makes intuitive sense, it is not entirely true. While a corded phone line might emit zero radiation, the same is unfortunately not true when it comes to corded headphones.
However, before we delve into finding out which headsets are safe, let’s give you a rundown on how each of them works.
Bluetooth headsets are in fact like mini Wi-Fi routers that you place in your ears. The radiation Bluetooth headsets emit is at the same frequency your microwave uses to cook your food; although the wattage is different, the frequency is exactly the same.
This radiation is emitted continuously, making Bluetooth headsets potentially harmful for people who use them for long periods of time. All smartphones, whether running on 1G, 2G, 3G, or 4G, also emit radiation, but at different frequencies.
According to studies, RF radiation of the kind produced by microwave ovens, or by Bluetooth headsets in this context, is carcinogenic, which means it increases your risk of developing cancer.
Minor side effects of using a Bluetooth headset that emits constant radiation include headaches, insomnia, and neck pain.
Contrary to popular belief, when you use a corded or wired headset, the radiation travels right to your ears through the cord, which acts like an antenna, so the two are basically the same. In fact, when you use a corded headset, the radiation travels through your ear canal directly to your head; wired headsets can increase your brain's exposure to radiation by as much as three times.
Also, because the wires contain metal, radiation is easily conducted from the connected device up to your ear canal and head.
However, corded or wired headsets can be made a little safer than Bluetooth headsets.
Here’s how that can be done
The addition of ferrite beads to your corded earphones is a smart way of making the wired headphones safer.
Ferrite beads are the cylindrical beads found on most charger cords and other corded electronic devices. They are used to suppress high-frequency electrical noise so that frequency surges don't destroy the circuit boards of the device the wire is connected to.
Ferrite beads also reduce RF radiation. Although they do not block the radiation entirely, they can filter out about 90% of the RF radiation that your ear canal would otherwise absorb when you use a wired headset without them.
Ferrite beads are rather inexpensive and easily added to wires. All you have to do is connect one to the base of your headset and you're good to go.
Using ferrite beads will make your wired headphones safer, not only compared with a Bluetooth headset but also compared with an unmodified corded headset.
Bluetooth vs Wired Headphones Radiation- Which One Should You Pick?
So what should be your ultimate choice, ferrite beads on corded headsets or Bluetooth headsets?
Well, to be honest, we'd suggest some other, safer options.
Whenever you answer a call on your cellphone, it is recommended to use the speaker feature, which keeps the radiation source away from your head.
However, if you really need a headset (while you drive or have your hands full), the safest option would be wired headphones fitted with ferrite beads.
Using a Bluetooth headset will only expose you to RF radiation, which is associated with a number of problems beyond cancer: RF radiation is known to cause DNA damage, commonly in a fetus, and is also linked to male infertility.
Another safe alternative to both Bluetooth and wired headsets would be air tube headphones, which emit next to zero EMF radiation. Tests have shown that air tube headphones cause no EMF transmission when used.
Although air tube headphones look like the regular wired headphones, their design and structure ensure that there is no EMF transmission when they are used.
This is achieved through a special design that makes use of hollow air tubes for transmitting sound rather than the traditional metal wires.
Usually, air tube headphones have a braided nylon cord that has a copper core, which helps in blocking low-frequency electrical signals. Also because these headphones have air-filled hollow tubes, the sound quality, tone, and acoustics are enhanced.
The air tube headphones also place the speakers away from the head, whereas other headphones have the speakers right against the ear. By design, air tube headphones convert all electrical signals to acoustic signals before they reach the ear.
After the conversion, the acoustic signals travel through the hollow, flexible air tubes. The sound then reaches the listener through metal-free earbuds, enhancing the listening experience while keeping the user safe from radiation exposure.
Neither Bluetooth headphones nor wired headphones are safe. Wired headphones can be made safer with the use of ferrite beads, an inexpensive and easy way to cut the RF radiation reaching the brain from wired headphones by up to 95%.
However, if you’re looking for an entirely safer option, you have the choice of Air Tube headphones which make use of hollow air tubes to transmit sound signals to your ear, making sure no EMF transmission takes place. | <urn:uuid:5e183919-1d43-475f-9086-ce3e2d3715fa> | CC-MAIN-2022-40 | https://internet-access-guide.com/bluetooth-vs-wired-headphones-radiation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00324.warc.gz | en | 0.954553 | 1,166 | 3.109375 | 3 |
Many encryption systems are based on the premise that it would take too long for anyone to carry out the mathematical calculations required to reveal the encryption keys, but even early quantum computers will be able to perform those calculations fast enough for attackers to exploit.
This would leave critical infrastructure, banking and healthcare networks vulnerable to attack, but to counter this threat, BT has announced the UK’s first practical quantum-secured high-speed fibre network between Cambridge and the BT Labs in Adastral Park, Ipswich.
The announcement comes amid growing concern that huge investments in quantum computing by countries such as China and Russia, as well as US companies like Google and Microsoft, will make quantum computers a reality in the next five to 10 years, and these systems would be capable of cracking most encryption systems in use today.
China, in particular, is known to be investing heavily in developing a quantum computing capability for both defensive and offensive purposes. Europe is also investing in quantum computing capabilities, but the investment pledged so far falls well short of China's.
The collaborative project was led by the Quantum Communications Hub, part of the UK National Quantum Technologies Programme. The hub is a collaboration between eight UK universities, private companies and public sector stakeholders that have common interests in the exploitation of quantum physics for the development of secure communications technologies and services.
Constructed by researchers from BT, the University of York and the University of Cambridge over the past two years, the “ultra-secure” connection, secured by the laws of physics, was built as part of a project co-funded by the Engineering and Physical Sciences Research Council (EPSRC).
The quantum-secured link, which will connect to the Cambridge Metropolitan QKD (quantum key distribution) Network, runs across a standard fibre connection through multiple BT exchanges over a distance of 120km, making it the first high-speed “real-world” deployment of quantum-based network security in the UK.
The network link, which is capable of transferring 500Gbps of data, will explore and validate use cases for QKD technologies. This will include how the technology can be deployed to secure critical national infrastructure, as well as to protect the transfer of critical data, such as sensitive medical and financial information.
The quantum link itself is said to be virtually "unhackable" because it relies on single particles of light (photons) to transmit the data encryption "keys" across the fibre. Should this communication be intercepted, the sender will be able to tell that the link has been tampered with, and the stolen photons cannot then be used as part of the key, rendering the data stream incomprehensible to the hacker.
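To give a feel for why interception is detectable, here is a toy, BB84-style simulation of the idea in Python. It is an illustration of the principle only, not a model of BT's actual system.

```python
import random

N = 1000
EAVESDROP = True  # flip to False to see a clean channel

alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases   = [random.choice("+x") for _ in range(N)]

def measure(bit, sent_basis, meas_basis):
    # Matching bases read the bit faithfully; mismatched bases give a random result.
    return bit if sent_basis == meas_basis else random.randint(0, 1)

bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if EAVESDROP:
        eve_basis = random.choice("+x")
        bit = measure(bit, a_basis, eve_basis)  # Eve's measurement disturbs the photon
        a_basis = eve_basis                     # the photon is re-sent in Eve's basis
    bob_bits.append(measure(bit, a_basis, b_basis))

# Sift: keep only positions where Alice and Bob happened to use the same basis.
sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]

# Publicly compare a sample of the sifted key; errors betray the eavesdropper.
sample = sifted[:200]
error_rate = sum(a != b for a, b in sample) / len(sample)
print(f"Sample error rate: {error_rate:.1%}")  # roughly 25% with Eve, ~0% without
```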
The partners are using equipment from ID Quantique to transmit the data encryption key using a stream of single photons across the fibre network. In parallel, the encrypted data flows through the same fibre, powered by equipment from ADVA optical networks.
The fibre runs from Cambridge University Engineering Department’s Centre for Photonic Systems via quantum repeater stations at Bury St Edmunds and Newmarket before making its way to the BT Labs in less than one-thousandth of a second.
Tim Spiller, director of the EPSRC Quantum Communications Hub, said: “We know that QKD technology works. The importance of this network is the demonstration of its operation to potential users and customers in a practical network environment to stimulate market pull.”
Ian White, head of photonics research at the University of Cambridge, said: “This quantum-secured network is an excellent example of the large-scale collaborative research that is feasible because of the creation of the UK’s Quantum Communications Hub. The network will allow detailed analysis of the potential for this new technology to enhance security in advanced communication networks.”
Tim Whitley, BT’s managing director of research and innovation, said: “With the huge growth in cyber attacks across the UK, it’s more important than ever before that we continue to develop ways to protect the most critical data.
“BT has a long history of pioneering innovation so I’m delighted that we’re able to announce this major breakthrough in the field of quantum communications. This is a brilliant example of how academia and business can work together to develop ultra-secure networks to give us the confidence we need in our future digital economy.”
Prepare for quantum computing
At Infosecurity Europe 2018 in London, a top European chief information security officer (CISO) urged the security community to prepare for quantum computing to ensure their encryption processes and associated hardware are ready in time.
Organisations that do not start preparing now could end up exposing critical data because their encryption methods are not quantum computing-ready, warned Jaya Baloo, CISO of KPN Telecom in the Netherlands.
The good news is that all the symmetric encryption currently in use is unlikely to be affected by the arrival of quantum computing. “As long as we keep refreshing keys and following best practices for transferring keys, we are good to go,” said Baloo.
“The problem arises when it comes to asymmetric encryption. It is all the public key cryptography that is out there because it is based on complex mathematical problems that would take even a supercomputer a long time to solve, but that principle breaks down with quantum computers,” she said.
It may already be too late to ensure organisations' encryption processes are completely secured against cracking by quantum computers, because it could take up to 20 years for quantum computing-proof algorithms to mature and be fully integrated into organisations. Even so, Baloo said there were things information security professionals could and should do now to ensure they are not totally defenceless.
“It is about ensuring that organisations are agile when it comes to encryption and have the ability to adapt and to implement post-quantum ciphers and algorithms when they become available,” said Baloo.
“I want to encourage information security professionals to document their organisations’ current situations, to examine and understand their current cryptographic landscape and consider how to extend that into action,” she said.
Baloo advised companies, at the very least, to consider extending the length of their encryption keys to the maximum possible under whatever encryption system they are using, to consider implementing quantum key distribution to preserve the integrity and confidentiality of data, and to start preparing to replace existing algorithms with post-quantum algorithms.
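The arithmetic behind the key-length advice is worth spelling out. A commonly cited rule of thumb is that Grover's algorithm gives a quantum attacker a quadratic speedup on brute-force key search, so an n-bit symmetric key offers only about n/2 bits of effective security:

```latex
2^{n}\ \text{classical guesses}\ \longrightarrow\ \sqrt{2^{n}} = 2^{n/2}\ \text{quantum iterations}
```

By this reasoning, a 128-bit key retains only around 64 bits of effective strength against a quantum adversary, while a 256-bit key keeps about 128 bits, which is why maximising symmetric key length is a sensible first step.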
How clean is clean energy?
Clean energy. Over the past decade, this term has become universal when describing the future of the energy industry and what is needed to combat climate change. Our communities, our country and our world need to move to cleaner sources of energy — such as solar, wind, geothermal, and hydro — that do not emit pollutants into the air or adversely affect the climate. However, there are questions and concerns around the manufacturing processes for products like solar panels and wind turbines, how electric vehicle (EV) batteries are produced and how these items will be recycled once decommissioned. These concerns and potential issues raise a bigger question: How clean is clean energy?
What is considered clean energy?
Clean energy is generally defined as energy that emits minimal or no contaminants and pollutants into the atmosphere, soil, and water. Well-known examples of clean energy generation include solar, wind, geothermal, biomass, and hydropower. These energy sources all use natural, renewable elements to generate power with no emissions or, in the case of biomass, net zero emissions. According to the U.S. Energy Information Administration, clean energy sources are gaining traction in the United States — representing nearly 13% of total energy consumption.
The dirty challenges
Despite the best intentions of the clean energy movement, there is still a great deal of work to do to make these energy sources cleaner from cradle to grave. China is the world’s dominant manufacturer of solar panels and uses electricity from coal-burning power plants to support the silicon, wafer, cell and panel manufacturing processes. The rapid increase in demand for solar panel installations over the past several years has forced these manufacturers to ramp up production — thereby also increasing electricity consumption and carbon emissions. And for wind turbine manufacturing, the production process has the potential to emit hazardous air pollutants such as xylene and ethyl benzene as well as volatile organic compounds into the atmosphere.
EVs provide a path towards reducing the emission impacts of the transportation sector, but they currently carry a significant environmental burden. According to a study from the Massachusetts Institute of Technology Energy Initiative, EV battery production generates more emissions than those produced for internal combustion engine vehicles — an initial emissions debt with EVs. Additionally, other concerns surrounding EV battery manufacturing include cobalt mining labor practices, the environmental impacts of the lithium extraction process and the potential toxic waste from the disposal of end-of-life batteries.
Recycling is another challenge as the United States and other countries strive towards clean energy sources and net zero goals. According to the National Renewable Energy Laboratory, wind turbine blade waste is anticipated to amount to approximately 2.2 million tons or more by 2050 — equivalent to the weight of over six Empire State Buildings in New York City. Currently, there are no existing cost-effective recycling methods for wind turbine blades, so blades reaching their end of life are landfilled. Solar panels face similar recycling challenges.
Shifting to the transportation sector, just 5% of EV batteries are currently recycled according to 2019 data from the U.S. Department of Energy. While battery recycling has likely improved over the past two to three years, it is safe to assume there is still a sizable recycling lag given the rapid rise in EV sales. Limited raw materials and regulatory policies may force EV manufacturers and other industry stakeholders to build the necessary recycling infrastructure and technologies.
An optimistic outlook on clean energy
Despite the range of challenges, there is growing optimism that the clean energy movement will become more resource efficient — reducing emission levels during manufacturing processes and creating a larger number of recycling and repurposing options for parts and materials. According to the International Council on Clean Transportation, the emissions from battery manufacturing are expected to decline significantly in the coming decades, particularly with the use of cleaner electricity throughout the production process. In fact, a 30% decrease in grid carbon intensity would reduce emissions from the battery production chain by about 17%. And even though the battery production process creates an initial emissions debt, EVs overtake traditional gas-powered vehicles in just two years with fewer overall emissions and reduced impacts to the environment.
On the recycling front, the U.S. Department of Energy recently announced funding to support the creation of new, retrofitted and expanded domestic facilities for battery recycling. The funding will also support research, development and demonstration of second-life applications — such as energy storage — for batteries previously used to power EVs. With wind power, several companies are looking into options for recycling and reusing wind turbine blades to create a more circular economy. Other firms have launched fully recyclable wind turbine blades or have committed to reusing or recycling all turbine blades once decommissioned.
While not perfect, clean energy initiatives are moving the needle towards increased sustainability, reduced emissions and fewer environmental impacts. The replacement of fossil fuel power with cleaner, more renewable sources of energy is a great start in the race to combat climate change. There’s still a significant amount of work to do, but the investments being made towards clean energy by government entities, utilities, companies and organizations, and consumers show the collective commitment to being better stewards of our world. Is clean energy really clean? Maybe not yet, but we’re getting there.
With 35 years of experience in the energy industry, Leidos is assisting utilities and government entities across the country with fleet electrification and carbon reduction initiatives. From EV site sizing studies to carbon-free assessments, our growing team of experts possesses the qualifications and experience to help utilities and organizations confidently develop action plans and achieve their clean energy goals. For more information on Leidos and our clean energy support services, contact our team.
Artificial intelligence (AI) technology has rapidly progressed in recent years thanks to developments in data collection and better algorithm designs. As a result, various industries have embraced AI, allowing the groundbreaking technology to disrupt their work processes. Revolutionary applications have already been seen in a wide variety of sectors, such as healthcare, digital marketing, and telecommunications. The technology does give businesses a competitive edge, and in today’s world, any company that’s not incorporating AI into their operations is lagging behind.
To put things in perspective, current AI technology is what TechTalks labels weak AI. In contrast, strong AI refers to technology that can think and decide like humans, much like what is shown in science fiction films. The AI referred to these days involves machine learning, deep learning, automation, and other features that cater to very specific tasks. Ayima details that current AI technology consists of neural networks inspired by the design of the human brain. They function in two phases: the learning phase, in which the model is trained on large data sets, and the application phase, in which the model takes what it has learned and puts it into action.
It seems that all businesses will find a use for AI, including media, which is an industry that’s rather unexpected as it runs primarily on human labor and creativity. Take a look at some the promising developments for AI in news media.
Research has shown that the gap between those who consume TV news and those who consume online news is rapidly narrowing, with almost half of Americans admitting they regularly read online news sources. Experts predict that at least 50% of adults in developed countries will have at least four exclusive online media subscriptions by 2020.
Aside from a preference for online news, there is also a demand for more video content. However, video can be costly, and most news organizations are still on the fence about placing a significant amount of investment in it. They have to deal with the difficulty of scaling video, as well as the uncertainty of commercial returns.
It is in this light that Chris Richardson predicts that hyper-personalized content will mark a new era for news media. AI will be able to deliver relevant content to each user based on user preferences. It will achieve this by gathering highlights from different videos and compiling them into one customized video that is then sent to the user. In theory, this solution would be scalable, and would require no production costs from news companies. The commercial return would come in the form of consumption-based micro-payments.
News by AI-powered voice assistants
The consumer trend of smart speakers like Amazon’s Alexa, Microsoft’s Cortana, and Google’s Home Assistant shows that people are beginning to adapt to speaking with inanimate objects. While these devices do a remarkable job of serving users in terms of basic inquiries, news media organizations are beginning to experiment in how they can use voice assistants to better deliver news.
The Washington Post experimented with how smart speakers can use the company's unique voice when sharing news stories from its website. The challenge lies in what information the Post can exclusively offer that the main interface of a smart speaker can't. Another aspect worth exploring is how news organizations can send voice notifications through these voice assistants. They would have to consider the time at which users receive these notifications so that the news content would be relevant.
Last but not least is the futuristic notion of robo-journalism. Humans do a pretty good job of getting the news out as events happen, but there are times when news teams aren't fast enough in their coverage, especially given today's rapid-fire content consumption and the importance of the Zero Moment of Truth. AI technology can help by shortening the time in which news stories are reported and delivered.
Michael Spencer wrote about a tool that searches breaking news in social media posts and composes news reports out of them via an algorithm. It does this by analyzing text, photos, and even punctuations like exclamation marks in order to find newsworthy items. The technology has also been claimed to be capable of weeding out false news stories.
Meanwhile, a Silicon Valley startup has developed a news website that is powered by AI algorithms. The technology approaches a news story in a unique way in that it scours other websites for the same report and analyzes any potential biases. After the search, it would then create its own “impartial” version of the story. And the most amazing part? This entire process of researching, analyzing, and writing happens in less than a minute.
While the majority of these developments are still experimental, they do point to a revolutionary future for the news industry. Many worry that AI technology will slowly replace human jobs, but AI proponents argue that humans will instead move into new jobs that involve working with AI to ensure the algorithms do their tasks well. And while we wait for these innovations to become mainstream, there's plenty for us to learn and be excited about in the world of AI.
Distribution Analysis Tool
One Tool Example
Distribution Analysis has a One Tool Example. Go to Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.
Use Distribution Analysis to fit one or more distributions to the input data and compare them based on a number of Goodness-of-Fit* statistics. Based on the statistical significance (p-values) of the results of these tests, you can determine which distribution best represents the data.
The Distribution Analysis tool can be helpful when trying to understand the overall nature of your data as well as make decisions about how to analyze it. For instance, data that fits a Normal distribution is likely to be well-suited to a Linear Regression, while data that is Gamma Distributed might be better-suited to analysis via the Gamma Regression tool.
This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. Visit Download and Use Predictive Tools.
Configure the Tool
Use the Configuration tab to set the mandatory controls for distribution analysis.
- Select a field for analysis: Select a field from the incoming data for analysis.
- Select distributions for comparison: Select one or more distributions to compare. The distribution options are:
- Normal: A commonly occurring continuous probability distribution that is often used in both the natural and social sciences to represent real-valued random variables (in other words, continuous random variables that can take both positive and negative values).
- Lognormal: A continuous probability distribution of a random variable whose logarithm is normally distributed. This distribution is well-suited to the description of natural phenomena such as growth rate and size distributions. In addition, it is often used to describe income distribution in a sufficiently large population.
- Weibull: A relatively flexible distribution that is closely related to the exponential distribution. It's frequently found in data that describes "failure" rates of some kind, for example, random mechanical failure, mortality, churn, mechanical wear-out rates, etc.
- Gamma: A continuous probability distribution characterized by a significant concentration of cases at non-integer, non-negative lower values while also allowing for the reasonable possibility of much higher values. The Gamma distribution has a wide range of uses and is commonly found in data that describes aggregate (or average) amounts per case, for example, the average size of an insurance claim, measured per individual.
The Lognormal, Weibull, and Gamma distributions only work for non-negative data.
Columns that contain unique identifiers, like surrogate primary keys and natural primary keys, should not be used in statistical analyses. They have no predictive value and can cause runtime exceptions.
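For readers who want to see the shape of such a comparison outside Designer, here is an illustrative Python sketch using SciPy. It is an analogue of what the tool reports, not Alteryx's internal implementation, and the sample data is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=500)  # stand-in for the field being analyzed

candidates = {
    "Normal":    stats.norm,
    "Lognormal": stats.lognorm,      # only valid for non-negative data
    "Weibull":   stats.weibull_min,  # only valid for non-negative data
    "Gamma":     stats.gamma,        # only valid for non-negative data
}

for name, dist in candidates.items():
    params = dist.fit(data)  # maximum-likelihood parameter estimates
    ks_stat, p_value = stats.kstest(data, dist.cdf, args=params)
    print(f"{name:9s}  KS statistic = {ks_stat:.4f}  p-value = {p_value:.4f}")

# A higher p-value means the data is consistent with that distribution;
# here the Gamma fit should come out on top.
```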
Graphics Options Tab
Use the Graphics Options tab to set the controls for the graphical output.
- Plot Size: Select Inches or Centimeters for the size of the graph and set the Width and Height values.
- Graph Resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi), 2x (192 dpi), or 3x (288 dpi).
- Lower resolution creates a smaller file and is best for viewing on a monitor.
- Higher resolution creates a larger file with better print quality.
View the Output
A set of report snippets that includes a histogram, basic summary statistics of the test results, goodness-of-fit statistics, data quantiles per distribution, and the distribution parameters.
*D'Agostino, R., Stephens, M.A. (1986) Goodness of Fit Techniques.
What is problem management?
Problem management is one aspect of ITIL implementation that gives many organizations headaches. The difficulty lies in the similarity between incident management and problem management. The two processes are so closely aligned that differentiating their activities can be difficult for ITIL novices: at what point does one turn into the other? In some organizations, the two processes are so closely related that they are combined altogether. The differences matter, however, because the two processes are not the same and have different objectives.
The term “problem” refers to the unknown cause of one or more incidents. A useful metaphor for understanding the relationship between problems and incidents is to think of the relationship between a disease and its symptoms. In this metaphor, the disease is the problem and the symptoms are the incidents. Just as a doctor uses the symptoms to diagnose the disease, so problem management uses the incidents to diagnose the problem.
When incidents occur, the role of incident management is to restore service as rapidly as possible, without necessarily identifying or resolving the underlying cause of the incidents. If incidents occur rarely or have little impact, assigning resources to perform root-cause analysis can’t be justified. However, if an individual incident or a series of repeated incidents causes significant impact, problem management is tasked with diagnosing the underlying cause of the incidents and, ultimately, to identify a means to remove that cause.
Problem management’s first activity is to diagnose the problem and validate any workarounds. Problem management uses a problem database to track problems and to associate any identified workarounds with them. Once the problem has been diagnosed and a workaround identified, the problem is referred to as a “known error.” These are documented in the known error database (KEDB), which may be the same physical database as the problem database. The KEDB is a significant tool for incident management in resolving incidents caused by known errors.
After the known error has been identified, the next step is to determine how to fix it. This will typically involve a change to one or more CIs, so the output of the problem management process would be a request for change, which would then be evaluated by the change management process, or included in the CSI register.
Problem management is thought of as a reactive process in that it is invoked after incidents have occurred, but it is actually proactive, since its goal is to ensure that incidents do not recur in the future, or if they do, to minimize their impact.
Problem Management 101
Problem management is a step beyond incident management in the ITIL service operation lifecycle. Incident management handles any unplanned interruption to or quality reduction of an IT service, whereas problem management handles the root causes of incidents. Or in clearer terms, incident management restores service whereas problem management eliminates the cause of failed services.
A problem is defined by ITIL as the cause of one or more incidents. Some incidents, such as a malfunctioning mouse at a user’s workstation, are not indicative of a problem. Other incidents, such as repeated network outages, create a problem investigation due to their frequency. In this case, problem management is reactive. Proactive problem management involves addressing the state of hardware, software, and processes, and preemptively addressing issues before they cause excessive incidents. Neither incident management nor request management has the ability to be proactive like problem management.
The purpose of problem management
When users continue to face the same incidents without resolution, they lose trust in the service desk’s ability to resolve any problem. Hence the primary objective of problem management is to identify, troubleshoot, document, and resolve the root causes of repeated incidents. Incident information filters up to problem management and problem management, in turn, provides the service desk with the known error and workaround information necessary to mitigate problems in the short term.
Problems include issues such as failing hardware or an inadequately configured database query. Problem management reduces incidents over the long term. Incident reduction decreases the load on the service desk, improves end-user satisfaction, and decreases the long-term costs associated with user and service downtime. When problems cannot be resolved, problem management works with the service desk to mitigate the impact of the related incidents. The end goal of problem management should always be to reduce the overall quantity of preventable incidents and thereby increase the quality of service provided.
The scope of problem management
Problem management has a very limited scope and includes the following activities:
- Problem detection
- Problem logging
- Problem categorization
- Problem prioritization
- Problem investigation and diagnosis
- Creating a known error record
- Problem resolution and closure
- Major problem review
The main function of problem management
While problem management involves several functions, the most important is the service desk. While it is also known as a help desk, this is not the ITIL-preferred term and should be avoided. In ITIL, this function acts as the single point of contact for service customers to report incidents and submit service requests. Without a single point of contact, users may contact staff and expect immediate service without prioritization limitations. Unfortunately, this means that urgent incidents could be ignored while incidents that don’t impact the business get handled first. Another common scenario is that important but low-priority incidents are not handled for weeks while the IT support staff take care of the most pressing issues on their desks, leaving no time for smaller issues. The service desk allows the service provider to address everyone’s issues promptly and sequentially. It also encourages knowledge transfer between departments, collects data on IT trends, and feeds problem management.
This function can be divided into separate support levels called tiers. The first tier is for basic issues. This includes low-priority issues such as basic computer troubleshooting. Tier one incidents are the most likely to be turned into incident models, since these are easy to solve and recur often. Tier-one incidents do not impact the business or other users. They can always be worked around until the service desk resolves them. For example, a Microsoft® Outlook® error can be worked around by using the web-based email application instead.
Then there’s tier two. The second-tier support level handles issues that have some impact on the user but not on the business as a whole. Usually these incidents require more skill or access to resolve. Tier-two incidents are medium priority, and require a more immediate response and higher level of access or training than tier-one incidents.
Tier-three incidents affect the entire organization and many users. Sometimes, a VIP may fall into a tier-two or tier-three categorization to provide a faster response time for these users. Often, these incidents fall into the Major Incident Response (MIR) process. These incidents are defined by ITIL as those that cause significant disruption to the business. These are always high priority. Incidents that require MIR are good candidates as potential problems, since they affect the business and likely have a different root cause than regular incidents.
You’ll know that you’ve accurately assessed tiers and priorities when most incidents fall into tier one/low priority, fewer fall into tier two, and only a few require escalation to tier three.
The service desk interfaces with the problem management team in several ways. The first interaction is when a potential problem is raised. This often happens when an incident is deemed unresolvable at the service desk and must be escalated. This also happens when an incident occurs repeatedly despite normal troubleshooting and resolution steps. Finally, when the problem management or continual service improvement team identifies problems proactively, they may contact the service desk for more information or incident statistics.
The problem management process
The ITIL problem management process has many steps, and each is vitally important to the success of the process and the quality of service delivered.
The first step is to detect the problem. A problem is raised either through escalation from the service desk, or through proactive evaluation of incident patterns and alerts from event management or continual service improvement processes. Signs of a problem include incidents that occur across the organization with similar conditions, incidents that repeat despite otherwise successful troubleshooting, and incidents that are unresolvable at the service desk.
The second step is to log the problem. In an ITIL framework, problems are logged in a problem record. A problem record is a compilation of every problem in an organization. This can be accomplished via a ticketing system that allows for problem ticket types. Pertinent problem data, such as the time and date of occurrence, the related incident(s), the symptoms, previous troubleshooting steps, and the problem category all help the problem management team research the root cause.
The third step is to categorize the problem. Problem categorization should match incident categorization. Incident [and problem] categorization involves assigning a main and secondary category to the issue. This step is beneficial in several ways. One benefit is that it allows the service desk to sort and model incidents that occur regularly. The modeling allows for automatic assignment of prioritization. The third and most important benefit is the ability to gather and report on service desk data. This data allows the organization to not only track problem trends, but also to assess its effect on service demand and service provider capacity.
The fourth step is to prioritize the problem. A problem’s priority is determined by its impact on users and on the business and its urgency. Urgency is how quickly the organization requires a resolution to the problem. The impact is a measure of the extent of potential damage the problem can cause the organization. Prioritizing the problem allows an organization to utilize investigative resources most effectively. It also allows organizations to mitigate damage to the service level agreement (SLA) by reallocating resources as soon as the issue is known.
The fifth step is a two-part process, which involves investigating and diagnosing the problem. The speed at which a problem is investigated and diagnosed depends on its assigned priority. High-priority issues should always be addressed first, as their impact on services is the greatest. Correct categorization helps here, since identifying trends is easier when problem categories correlate to incident categories. Diagnosis usually involves analyzing the incidents that lead to the problem report as well as further testing that may not be possible at the service desk level, such as advanced log analysis.
The sixth step is to identify a workaround for the problem. A workaround should always be indicated, because problems are not resolved at the incident level. A workaround enables the service desk to restore services to users while the problem is being resolved. A problem can take anywhere from an hour to months to resolve, therefore a workaround is vital. A problem is considered open until resolved, so a workaround should only be considered a temporary measure.
Step seven is to raise a known error record. Once the workaround has been identified, it should be communicated to staff within the organization as a known error. It’s good practice to record a known error in both an incident knowledge base and what ITIL calls a known error database (KEDB). Documenting the workaround allows the service desk to resolve incidents quickly and avoid further problems being raised on the same issue.
Step eight is to resolve the problem. Problems should be resolved whenever possible. Resolution resolves the underlying cause of a set of incidents and prevents those incidents from recurring. Some resolutions may require the change management board, as they may affect service levels. For example, a database switchover may cause slowness during the switchover period. All risks should be evaluated and accounted for before implementing the resolution. Document the steps taken to resolve the problem in the organization’s knowledge base.
The ninth step is to close the problem. This step should only occur after the problem has been raised, categorized, prioritized, identified, diagnosed, and resolved. While many organizations stop at this step, it isn’t the last according to ITIL.
The final step is to review the problem. This is also known as a major problem review. The major problem review is an organizational activity that prevents future problems. During the review, the problem management team evaluates the problem documentation and identifies what happened and why. Lessons learned, such as process bottlenecks, what went wrong, and what helped should be discussed. This is where having a complete problem log will help. A completed log will work much better than trying to pull the details from memory. This problem review should result in improved processes, staff training, or more complete documentation.
Problem management process flow diagram
How problem management fits into ITIL
Problem management is only one component of the ITIL service management lifecycle. Within ITIL, it exists in the service operation main process. As a process, it interfaces with many other parts of ITIL. Due to its relationship with the service desk, it is directly affected by and affects incident management. It also interfaces with financial management, since the financial impact of a problem is considered during the prioritization and resolution stages. It interfaces with service design when past and potential problems are considered during the IT design process. It interfaces with knowledge management when known issues are recorded. Finally, it interfaces with continual service improvement when problem management is proactive, since both have the goal of improving the quality of service delivered to internal and external customers.
This process is one that is integral to long-term service delivery success and therefore should not be ignored when designing a robust IT service, whether it’s internally or externally facing. Read on to discover more about the ITIL lifecycle. | <urn:uuid:e01ef442-9ebf-4c0c-9882-050330dfa6b1> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/itil-v3-problem-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00324.warc.gz | en | 0.952868 | 2,778 | 2.953125 | 3 |
It’s not China. Unless it is. Or maybe it’s a 400 lb hacker in their basement. Unlikely. Who can tell who does anything on the Internet and why do we care anyway? Attribution is the practice of taking forensic artifacts of a cyber attack and matching them to known threats against targets with a profile matching your organization. If this seems overly complicated, that is intentional. There are degrees of attribution that map to very specific contexts and painting over that context with a simplistic reading accomplishes very little other than frightening decision makers into unnecessary expenditures. Attribution is something we should care about. Not because successful attribution will cause the authorities to paradrop into hostile territory to neutralize our enemies, but because it is a way of checking the assumptions of our threat model against the real world and revise those assumptions accordingly. (You have a threat model, right? If not, start here .) If we’re going to take a look at what cyber attack attribution is, it might be helpful to look at what it is not.
- Attribution is not a smoking gun that will hold up in a court of law. Even in sensational pieces like Crowdstrike’s ‘hatribution’, they could not prove the individual had hands on the keyboard at the time of an attack, nor could they provide concrete evidence that even if he did, he was doing so at the direction of the Chinese government. Mandiant made a considerably stronger case by bolstering their technical analysis with open source intelligence, but they too lacked the key piece of tying the motivations of an individual to those of a country at large. Unless you are the NSA, attempting to establish that link is a waste of time and resources 10 of 10 times - especially if you're dealing with a common one-off phishing wave rather than a sustained state sponsored infiltration.
- Attribution is not binary. There is no defined state at which you are ‘done’, because again, only an intelligence agency can definitively answer questions of motivation, intent, and capabilities with a simple yes or no answer. Some people take this to mean attribution is not a meaningful pursuit – this goes a little too far in the opposite direction. Taken as a check against assumptions underpinning a company’s allocation of resources, any attribution data grounded in concrete evidence is valuable.*
- One forensic artifact is not an attribution make. Those of us in the security industry have at times seen extensive lists of indicators of compromise (IOCs) definitively attributed to various nation state groups at one point or another, typically by a government agency. The idea here is to take a snapshot in time – a forensic mug shot – and spread it around so defenders can keep their eyes open for similar TTPs within the same timeframe. (Should you be wondering when this is relevant to an organization’s defense, it really isn’t. Establishing a campaign timeframe is mostly relevant to collecting strategic intelligence, which is why it's usually the government releasing these lists to begin with.) Some organizations have interpreted these lists to mean *any* IOC listed is a hard attribution to a particular nation state, and will commence searching their logs for “attacks.” Do not do this. It is a waste of tier II SOC time, money, and presumes an immutability to Tools Tactics and Procedures (TTPs) that is frankly a little baffling. The benchmark should be what a preponderance of evidence would cause a reasonable observer to conclude – not a single IP registered in China.
- Anonymous is not a group. Its non-existence as a group means that it cannot have interests, motivations, or capabilities. More typically Anonymous is a series of pseudo-political statements adopted as cover for ego-bolstering and small political goals of local hackers. Again, Anonymous is not a group and if you have attributed an attack to them you have attributed nothing at all.
- The State Sponsored Thing. You might notice that the two companies I referenced for detailed attribution both focus on state sponsored actors. This is largely because state sponsored actors have a mandate to exploit targets over the longest period of time to extract maximum intelligence. As a result, the odds of a target having a rich trove of forensic data to analyze and correlate goes way, way up. So we tend to see flashy, sensational data on state actors and discount the much more frequent one off opportunistic attacks by financially motivated actors. Don't ignore the OWASP Top Ten to focus on the zero-day attack that probably isn't coming.
Tune in next time for more on what good attribution looks like, and why you should care.
* Things that are not concrete evidence: vendor claims of a forum post without providing any reference to a source, obfuscated or not. Vendors who provide translated threat actor speech without also providing the source language. Mentions of your organization on disreputable websites. Publicizing vulnerabilities in infrastructure that your organization also uses. A threat actor’s membership in a group that at one time has expressed interest in a theoretical attack on your organization. Any single IOC that shows up in your logs without further context. | <urn:uuid:c4b1394e-ab30-401c-94a4-8367ae4510f7> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2016/10/attribution-and-when-you-should-care-part-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00324.warc.gz | en | 0.950778 | 1,048 | 2.625 | 3 |
Gartner defines edge computing as “solutions that facilitate data processing at or near the source of data generation,” a nice, succinct explanation of this booming technology. If you don’t need it already, you most likely will soon as it’s a key enabler of cloud-based applications, including Internet of Things (IoT) applications and others supporting the digital transformation of business.
“Organisations that have embarked on a digital business journey have realised that a more decentralized approach is required to address digital business infrastructure requirements,” says Santhosh Rao, principal research analyst at Gartner. “As the volume and velocity of data increases, so too does the inefficiency of streaming all this information to a cloud or data centre for processing.”
There’s so much buzz around the edge that it has even spawned its own glossary of terms, under stewardship of the Linux Foundation.
The Evolution of Edge Computing
The movement to edge computing follows the cyclical nature of IT trends. We began with a centralised, mainframe-centric model, then moved to a decentralised model with client-server networks, with distributed computing power. The cloud is another example of a centralised model, but this time it’s augmented by the edge, thus creating a hybrid centralised/decentralised model.
This hybrid model combines the best of both worlds. The cloud can be used for data that requires massive amounts of processing or that does not require immediate attention, for example. The edge will support applications that demand lots of bandwidth, rapid response times and low latency. Examples include real-time decision-making and gathering data from intelligent devices, such as in a healthcare setting. The edge is also useful in meeting compliance requirements around where data is physically located.
How Edge Computing Addresses Performance and Regulatory Issues
While they can take many forms, edge data centres generally fall into one of three categories:
- Local devices that serve a specific purpose, such as an appliance that runs a building’s security system or a cloud storage gateway that integrates an online storage service with premises-based systems, facilitating data transfers between them.
- Small, localized data centers (1 to 10 racks) that offer significant processing and storage capabilities. Ideally, these “micro data centres” are delivered in self-contained enclosures that contain all required physical infrastructure, including power, cooling and security.
- Regional data centers with 10 racks or more that serve relatively large local user populations.
As this wide range of options indicates, it’s not the size of the data centre that defines it as edge, but its proximity to the source of data that needs processing, or those consuming it. With edge data centers nearby, bandwidth becomes less of an issue because data often travels over a private, high bandwidth local-area network, where links of 10G or more are common. The close distance likewise solves the latency issue and organisations can place them wherever they need to for regulatory compliance.
The Opportunity at the Edge
With a sound edge computing strategy in place, organisations will be positioned to take advantage of IoT applications – including those incorporating artificial intelligence and augmented reality – to provide significant business benefits.
For example, let’s look at the applications that can help companies improve customer experiences. Retailers are using the technology to enable digital signage that helps customers find their way and alert them to sales, as well as smart mirrors that help customers virtually try on clothes. Industrial field service personnel are using augmented reality applications that help guide them through complex repairs. Healthcare providers use IoT technologies to power digital health records and telemedicine.
Improving operational efficiency is another driver. Artificial intelligence enables predictive maintenance applications, which drives down maintenance costs in areas ranging from manufacturing to data centres, while reducing the risk of failures. Retailers use RFID applications to help manage inventory and reduce losses. Cities can use IoT applications to monitor busy intersections and control signal lights to help with traffic flow.
IoT applications are also driving new revenue streams, and even entirely new businesses. Uber and Lyft depend on it to match drivers with customers. Logistics companies have launched new lines of business around providing real-time status on cargo, including climate controls. Healthcare providers are offering remote device monitoring and analysis services.
The possibilities for IoT applications are virtually endless, but many if not most of them rely on edge computing to deliver the performance they need.
Protecting Edge Computing Environments
Given their critical role in IoT and other critical business applications, edge data centres must be protected in much the same fashion as traditional data centres. That means providing tools that enable remote management as well as physical security.
In terms of physical security, you need to make sure no unauthorised users can access the compute infrastructure – the first step in providing proper cyber security. A regional edge data centre will likely have security such as card readers on the door. But physical security also needs to be addressed at un-manned edge data centres in remote locations. They will likely require sensors and security cameras that can be monitored remotely and issue alerts for everything from excessive temperature levels to water leaks and unauthorised human access.
Similarly, edge data centres with little to no IT staff on site require remote management capabilities, enabling a centralised group to perform day-to-day management of the infrastructure.
And it’s not just compute infrastructure; you also need to manage and protect the network components that connect the edge data center to the cloud, enabling a hybrid data centre-cloud environment. Without network connectivity, the edge environment is just an island.
Access Edge Computing Resources
Many companies, such as retailers with many stores, also need a strategy for how to deliver a series of edge data centers that meet all of these requirements quickly and reliably.
At APC by Schneider Electric, we have invested significant resources in delivering everything customers need to quickly roll out reliable, high-performance, secure edge data centers, including micro data centers, to allow you to achieve Certainty in a Connected World. To learn more, visit our edge computing page. | <urn:uuid:76f92eb0-f61f-4105-ab4f-9e2ef5f78165> | CC-MAIN-2022-40 | https://blog.apc.com/uk/en/2019/01/22/edge-computing-is-here-to-unlock-the-power-of-your-iot-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00324.warc.gz | en | 0.921525 | 1,256 | 2.515625 | 3 |
HOME · Twitter · Flickr · LinkedIn · publications · @ Ars Technica · Running IPv6 (Apress, 2005) · BGP (O'Reilly, 2002) · BGPexpert.com · presentations · firstname.lastname@example.org
If you have any interest at all in using older Nikon lenses, you probably have some understanding of the difference between non-AI, AI, AI-S, AF and AF-S lenses. The trouble is that places on the web that explain the differences easily get lost in the details. This article is intended to serve as a slightly easier to digest version of the story.
A lens is a pretty simple thing. It takes light coming in at one end and bundles it so it can form an image on a piece of film or an image sensor at the other end. To do this well, the lens must be in focus and the lens opening (aperture) must be set to a useful size. So pretty much all older lenses have a focus ring, which usually moves the lens forward and backward to adjust the focus, and an aperture ring that sets the size of the effective lens opening.
If you think manual focussing is hard today, consider the plight of photographers who had to use a camera from the first half of the 20th century. With those, you don't actually look through the camera's lens, so basically you had to guess or measure the distance to your subject and then select that distance using the focus ring or lever. Those cameras also didn't have light meters. But film is relatively forgiving, so outside during the day it's entirely doable to set your exposure based on the time of day and the weather, starting with the sunny 16 rule. Until the 1990s film often came with an exposure guide printed on the box. So you just set the aperture and the shutter time and you're ready to shoot.
With the single lens reflex (SLR) camera, you actually look through the lens and see the image that will be captured by the film or sensor when you press the shutter button. This makes focussing a breeze: just turn the focus ring until the image is sharp! Photography has never been this easy. Of course with your aperture at f/16 in accordance to the sunny 16 rule, the viewfinder image gets rather dim: f/16 lets in only 1.5% of the light that f/2 lets in. SLR makers solved this by keeping the aperture at maximum so the image in the viewfinder is nice and bright, and then quickly closing down the aperture to the selected value when the shutter is pressed. And life was good.
Then came along the next innovation: light meters. A separate light meter tells you how much light falls on the meter. Which is not so helpful when shooting something a little or a long ways away. You also have to adjust manually for filters and special lens behavior, for instance, with macro photography. So light meters were put inside the camera where they could measure the light through the lens (TTL).
However, keeping the aperture open until the shot is taken now starts to complicate matters. If we're still on f/16 on our sunny day, we'd expect the camera's light meter to indicate a shutter time of around 1/125 using 125 ASA (ISO) film. But the lens is still wide open at f/2 at this point. So the light meter needs to take into account the difference between the aperture of the lens wide open and the aperture setting that will be used to take the shot. Which gets us to...
Nikon uses the name "Nikkor" for its lenses. The lenses that Nikon used on its F-mount SLR cameras between 1959 and 1977 have a little fork or "rabbit ears" on the aperture ring. The camera has a pin that rests between the tines of the fork so it can keep track of the position of the aperture ring.
When mounting a lens, obviously the pin had to be inserted in the fork and it was also necessary to turn the aperture ring all the way in both directions so the camera could learn the maximum aperture.
The newest cameras that work like this are now more than 35 years old, so this meter coupling system is very, very obsolete. Later cameras that are made for AI/AI-S lenses (up next) may be damaged if you mount one of these lenses, so don't use an unmodified non-AI lens on them. However, entry-level digital SLRs (DSLRs) such as the D3xxx and D5xxx series can mount these lenses just fine, although those cameras then operate without their light meter. These lenses also work with many non-Nikon cameras through an adapter.
When Nikon introduced the AI system, the company would convert older lenses to the new system for a modest price. There are still a few places that perform this conversion with different levels of finesse, but if you have a nice unconverted non-AI F-mount lens, it's probably worth more in its original state.
In 1977, Nikon came up with a new way to couple the aperture ring to the camera's light meter: Auto Indexing or AI. AI lenses have a ridge that catches a feeler on a ring surrounding the lens mount on the camera. (Mounting a non-AI lens may bend or break the feeler.) AI lenses also have a mechanical system to indicate their maximum aperture to the camera, so now you just mount the lens and you can start shooting, the light meter has all the information that it needs.
AI lenses also have a second, smaller row of aperture numbers, allowing the photographer to see the selected aperture through the viewfinder. In order to let enough light fall on the aperture numbers, the rabbit ears now have a little hole on each side. The rabbit ears are no longer used, but for a long time Nikon kept including them on newer lenses for backward compatibility with older cameras. This video shows the differences between non-AI and AI/AI-S lenses:
Only a few years later, in 1981, Nikon improved the AI system to AI-S. It's not clear what the S stands for. The big difference between AI and AI-S is that the relationship between how much the camera moves the little lever that closes the aperture and how far the aperture closes is now standardized. This means the camera can control the aperture. Before AI-S, automatic exposure meant that the photographer selected the aperture, and the camera would measure the light and select an appropriate shutter speed. (Aperture priority or A on the mode dial.) With AI-S lenses, shutter priority (S on the mode dial) is also possible, where the photographer selects the shutter time and the camera the aperture, or the camera selects both (program or P mode).
Ironically, even though modern cameras still use that same system with modern lenses, they 'll only work in A or M (manual) mode with AI and AI-S lenses, not S or P mode. This also means that modern cameras don't care about the difference between AI and AI-S: since you control the aperture through the aperture ring on the lens, they work exactly the same.
AI and AI-S lenses can be mounted on all Nikon cameras with an F-mount. However, lower-end cameras such as the D3xxx and D5xxx don't have the aperture feeler so you lose light metering. On the D7xxx and higher, you can use these lenses in A and M modes with light metering, but modern cameras lack the mechanism to determine the maximum aperture of these lenses, so you have to go into the menu and enter the focal length and maximum aperture under the "non-CPU lens" settings. The aperture selected with the aperture ring then shows up in the viewfinder and LCD display(s) as well as in the EXIF data, along with the focal length.
You may have noticed that AI and AI-S lenses have colored aperture numbers on the aperture ring. This helps determining the depth of field through the matching colored lines that indicate how far before and after the distance the focus ring is set to sharpness extends. On AI-S lenses, the highest aperture value is in orange.
In 1986, Nikon introduced autofocus (AF) lenses. This works though a little screwdriver that sticks out of the camera, which connects to a screw in the lens that is connected to the focus ring. With this, a motor in the camera can adjust the focus. The camera uses a number of focus sensors to determine whether different parts of the image are in focus and turns the screw accordingly.
All AF lenses also include a CPU. The camera communicates with the CPU in the lens electronically and learns the focal length and minimum aperture of the lens that way. This allows autofocus cameras to do their through the lens light metering and use S and P as well as A and M. (And AUTO.)
In order to manually focus an AF lens on an AF camera, you have to move the AF switch from AF to M. This retracts the screwdriver so it's possible to turn the focus ring freely. Lower end DSLRs such as the D3xxx and D5xxx don't have the focus motor, so on those cameras, AF lenses must always be focussed manually. Metering and program modes are not affected. On cameras without a focus motor or in manual focus mode, the autofocus system in the camera will still tell you if the image is in focus or not with the focus confirmation dot in the viewfinder. There are a few manual focus lenses that do have a CPU, those simply work like AF lenses in manual focus mode on AF/AF-S capable cameras.
Older models AF lenses are completely compatible with the AI-S system. The popular AF Nikkor 50mm f/1.8D, which is still sold new today, even has a couple of tiny indentations on the aperture ring where you would drill the holes for the screws to attach the rabbit ears so it can work on non-AI cameras! However, when used on an AF camera, the aperture must be set to the minimum. There's a little lock tab that keeps it there. The aperture is then set (manually or automatically) through the camera.
The D in model designations indicates that the lens can tell the camera the distance the focus is set to. This makes metering a little easier. At some point Nikon started building lenses that are no longer AI-S compatible. These are lenses with G in the name. Most notably, G lenses lack an aperture ring. All G lenses are also D lenses.
Although the original AF system works well and allows for smaller and lighter lenses, in 1998 Nikon introduced AF-S (not to be confused with autofocus single mode, also called AF-S). AF-S lenses have their own focus motor built in. Nikon uses ultrasonic motors, which they call "silent wave". Makers of third party AF-S compatible lenses such as Tamron, Tokina and Sigma have their own names for this type of motor. (Each company also has their own name for what Nikon calls VR, vibration reduction, which moves a lens element in real time to counteract camera movement during the exposure.)
Most F-mount film SLRs since 1990s (there are some exceptions such as the F55) and all digital SLRs can autofocus with AF-S lenses. Unlike the aperture ring, which is now pretty much a thing of the past, they all do have a manual focus ring. A switch on the lens itself switches between autofocus and manual focus.
On some (mostly cheaper) lenses this is a mechanical switch that disconnects the focus motor from the focus ring so the focus ring can be moved safely and easily. On these lenses, the focus ring rotates with autofocus, and the autofocus switch switches between A and M. Selecting manual focus through the camera will turn off autofocus but not release the motor from the focus ring.
On higher end Nikon lenses, the switch on the lens switches between M/A and M, where M/A means "autofocus but feel free to adjust focus manually at any time". Turning the focus ring doesn't stress the autofocus motor here, so with these lenses you can turn off autofocus with the switch on the camera or through the menu system and leave the switch on the lens in the M/A position. Of course moving the switch on the lens to the M position also turns off autofocus.
With older Nikon lenses, typically turning the focus ring all the way to the right sets focus to infinity. With newer lenses, this is usually not the case: they'll focus a little bit beyond infinity, so you have to use autofocus or manual focus to set the lens to infinity. With astrophotography, where there may not be enough light for the autofocus sensors, this can be a problem—although I've been able to use autofocus to focus on a bright star with an f/1.8 lens. To add insult to injury, many cheaper/smaller AF-S lenses no longer have a distance scale, so there's no way of knowing what distance the lens is focussed at.
35 mm film has an image area of 36 by 24 millimeters. All Nikon lenses until 2003 are designed to project an image that covers a piece of film or an image sensor that size. However, such large image sensors are expensive, so low and medium end DSLRs have smaller sensors, which Nikon calls DX. These are 24 by 16 millimeters (actually 23.5 by 15.6 mm on the D7100), or about a factor 1.5 smaller than a "full frame" FX sensor or 35 mm film. This means that a 50 mm lens on a DX camera produces the same angle of view as a 1.5 x 50 = 75 mm lens on an FX camera. Conversely, to get the same result as with a 50 mm on FX, you need a 35 mm lens on DX. (33 mm to be precise.) So DX is said to have a "1.5 x crop factor".
FX lenses work just fine on DX cameras, but the problem is that a wide angle lens isn't so wide with the effective focal length multiplied by 1.5. So in 2003 Nikon introduced lenses specially made for DX cameras, taking advantage of the fact that the image that the lens has to project is 1.5 times smaller than with FX. For the same or similar focal length or focal length range the DX version is typically cheaper than the FX version, if both are available.
DX lenses are not useful on film SLRs because the corners of the image will be dark or even black. FX DSLRs on the other hand, work fine with DX lenses. They simply switch to crop mode, where only the center area of their bigger sensor is used.
With a 55-year history, there are of course tons of additional details, exceptions, caveats and more. But the above is pretty what you need to know about Nikon lens compatibility before you go lens shopping. Good luck! | <urn:uuid:8decbc5e-5a70-41f4-85d7-358fa64bd4b6> | CC-MAIN-2022-40 | https://www.iljitsch.com/mu2014/05-08-understanding-old-nikon-lenses.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00524.warc.gz | en | 0.949602 | 3,130 | 2.78125 | 3 |
Tuesday, September 27, 2022
Published 3 Months Ago on Tuesday, Jul 12 2022 By Amr Houssein
Industry 4.0 has been well and truly explosive for the last decade, and its growth doesn’t seem to be stopping. The Global System for Mobile Communication Association (GSMA) predicts that, by 2025, there will be more than 25 billion Internet of Things (IoT) connections globally. While IoT has transformed industry, there are still challenges when it comes to streamlining deployment. Here, Amr Houssein, managing director of eSIM pioneer Mobilise, explains how eSIMs complete IoT connectivity.
In short, the industrial IoT (IIoT) enables machine-to-machine (M2M) communication, making manufacturing facilities smart and digitalised. By using sensors to capture factory floor data, manufacturers gain a comprehensive overview of their facility to optimise processes, improve machine performance, reduce waste and energy consumption, and result in less unexpected downtime. But what is the technology behind getting connected?
Getting Manufacturers Connected
Connecting IoT devices over a mobile network is referred to as the cellular IoT. Using existing mobile networks removes the need for a separate, dedicated infrastructure. Instead, a range of networks can be used — whether that’s 3G, 4G, 5G, or IoT-specific networks.
LTE-M and NB-IoT are networks designed specifically for IoT connections. While LTE- M offers a lower price point and voice and SMS support, NB-IoT offers low power, low data usage for long range and reliability. Whichever network is used, connecting devices to the cellular IoT through the traditional SIM cards presents several challenges for manufacturers.
An IoT SIM card has traditionally been responsible for connecting a device to the network. But it doesn’t come without its challenges.
IoT SIM cards typically only allow a device to connect to one carrier network. When deploying devices globally across multiple networks, or working with devices that are involved in the supply chain or logistics that move across the world, this creates a logistical nightmare. Manufacturers must source and distribute physical SIMs for a local network for each device.
As SIM cards need to be removable for maintenance or carrier changes, IoT devices cannot be sealed, meaning that harsh operating conditions are more likely to damage a device. There are also the added concerns that having a removable element exposes IoT devices to risks of service theft.
eSIMs Are the Future
While IoT SIM cards do the job, these challenges are hard to ignore when there’s a solution on hand. eSIMs, or embedded SIMs, are a digital alternative to physical SIMs, connecting devices to a network over the air. Initially adopted for wearable devices and connected cars, eSIMs are also now a key component of the IIoT.
Unlike physical SIM cards, eSIMs download network credentials onto a chip on the printed circuit board of an IoT device through over-the-air provisioning. Eliminating the physical component of a SIM makes the entire network onboarding process remote, which has a wealth of benefits for manufacturers.
eSIMs eliminate the problems associated with IoT SIM cards — the device’s network is determined after the production, shipment and deployment of an IoT device. Manufacturers can easily swap connectivity providers as and when required for ultimate flexibility depending on device location or subscription cost.
Provisioning network credentials over the air means the eSIMs are connected and maintained remotely. There’s no need to physically handle a device to make changes to its connectivity, making devices more durable and less prone to environmental damage.
In terms of security, an eSIM’s location on a small chip on the circuit board means it’s not removable. Being physically soldered to the device eliminates risks of physical theft of the SIM, as it’s hard to identify and impossible to remove.
In this way, IoT devices can be deployed without any local human control of the connectivity — all responsibility lies with the manufacturer’s service provider (SPs). Mobilise’s HERO platform supplies SPs with a cloud based eSIM orchestration layer, to enable eSIM provisioning, management, enterprise billing and CRM systems. This means SPs take responsibility for managing subscriptions, taking the pain out of cellular connectivity for manufacturing users.
While IIoT is nothing new for manufacturers, making it more streamlined, convenient and digital is key to its continued success. Adopting eSIM technology alleviates some of the pain points manufacturers are experiencing, making operations slicker and opening a world of opportunity for more efficient processes.
Mobilise is a leading provider of SaaS solutions to the telecommunications industry. Focused on delivering highly engaging digital-first service propositions with excellent customer experience, Mobilise has a proven track record, deep industry knowledge and a team of specialists to support clients to building and executing transformational strategies.
Clients range from large corporate organisations with over 100,000 employees to small enterprises with under 20 employees. Mobilise has a deep knowledge of the telecoms business model and our experience includes working with over 40 service providers across eight markets for brands including Virgin, Dixon’s Carphone, Red Bull Mobile, Manx Telecom and Freenet.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Expert Insights section to stay informed and up-to-date with our daily articles.
With all its innovation and glory, the Qatar world cup is almost here. Qatar has branded this world cup edition as the world cup of the invention. In many ways, this statement can hold its weight. Some of the technology and aspects of the Qatar world cup are revolutionary. The Arabic nation will be hosting […]
Stay tuned with our weekly newsletter on all telecom and tech related news.
© Copyright 2022, All Rights Reserved | <urn:uuid:23ce9d37-fe00-4735-ba5b-8ac27c697378> | CC-MAIN-2022-40 | https://insidetelecom.com/overcoming-iiot-deployment-challenges-through-esims/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00524.warc.gz | en | 0.915349 | 1,242 | 2.640625 | 3 |
The art of forgetting. It’s been well depicted by human beings on numerous occasions. But what about machines? Are they capable of forgetting data as well?
This article answers that question.
As human beings, we are blessed with several cognitive skills. Including the habit of forgetting. People say it’s a boon to forget what’s unwanted. And a sin to forget the things that are important.
For example, people forget names, places and directions all the time! Forgetfulness can be caused by a number of factors. Such as retrieval failure, interference, failure to store or deliberate motivation.
Is it the same for machines?
We all know that artificial intelligence in machines try to mimic the natural cognitive process. With all the improvements and technical advancements made in machine learning as a field, mimicking the behaviour of the human brain, should be easier.
However, one question arises. Is the advancement enough to teach machines the habit of forgetting?
Yes. It is. Machines can and do forget what they learn in the past. There’s even a name for it. It’s called the “Catastrophic forgetting problem”. And it happens more often in artificial neural networks and deep learning.
Can it be resolved?
Interestingly enough, there are other strategic ways a modeller can force an algorithm to forget data. In systematic ways. Some of these techniques include LSTM — Long short term memory, Elastic Weight Consolidation (EWC) and the Bottleneck Theory.
The catastrophic forgetting problem can be resolved by providing samples from previous data. This data will be used by the model to retain the knowledge it has gained. This technique is called “Pseudo rehearsal”. | <urn:uuid:6406f53f-8eb3-488e-a919-d161eace31e1> | CC-MAIN-2022-40 | https://www.crayondata.com/can-machines-forget-learn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00524.warc.gz | en | 0.934659 | 357 | 3.171875 | 3 |
The emergence of smart cities holds great promise for the economy and the citizens, but it also presents certain challenges. Smart and sustainable cities are data driven and rely on IoT to connect the unconnected. But only capturing data from things is not the key challenge. The question is what to do with all that data. Aggregating the information in a way where it can be analyzed and used for improvements is essential. At the same time, political pressure mandates a green agenda and a more sustainable way of operating large cities.
On its way to a smart, sustainable city, Paris adopted an ambitious Energy Climate Plan: a commitment to reduce greenhouse gas emissions and energy consumption by 30 percent before 2020. Together with Cisco, Paris launched a pilot experiment using Cisco Energy Management to develop a replicable energy optimization plan. By properly instrumenting various buildings and places in the city, Cisco Energy Management helps to benchmark buildings, direct investments where they need to be and measure improvements thanks to sensor networks called the Internet of Things (IoT).
A common mistake I see in many IoT projects is to start with deploying hundreds or thousands of sensors without even knowing what problems should be solved with the accumulated information. So instead, we started this project establishing a series of use cases that are based on real issues that had been brought up by occupants of those places.
For example, there is a dance training room with very specific issues related to how it was built. Temperatures are uneven and depending on the type of activity and external conditions, it can lead to uncomfortable situations for dancers. By measuring real-time temperature, presence and luminosity from both inside and outside the building, we can correlate the data to identify patterns all in the same system. Now it is possible to anticipate when conditions will become uncomfortable and raise an alert in advance, leading to proactive, corrective actions.
After the use cases were defined we installed hundred sensors that were specifically custom-made for this project, and that measure temperature, humidity, brightness, luminosity, noise level and human presence in different buildings. Cisco Energy Management delivers precise and regular data that enables the City of Paris to monitor the “real life” of the studied buildings – what’s happening inside depending on different parameters – to improve the buildings efficiency.
Watch the video: | <urn:uuid:5ee6d6f4-06da-4c28-b081-240faf34834f> | CC-MAIN-2022-40 | https://blogs.cisco.com/analytics-automation/how-paris-improves-building-efficiency-to-become-a-smart-city | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00524.warc.gz | en | 0.942921 | 462 | 2.921875 | 3 |
Why parking management supports a safe and livable city
Everyone who has visited or lived in a large town or city understands the struggle when arriving by car. Each day, people spend a lot of time on the road looking for parking slots, which is valuable time lost from work, family, and life. Finding a parking space ahead of an important meeting, or with a hungry or tired child can be a stressful situation. Frustrated and stressed drivers are often consequences of poor parking management, and it can be far more serious. It’s something that cities definitely need to address.
There’s been a steady growth of population in urban areas over the past decade and that’s not changing. On the contrary, it’s expected that by 2050 the urban share of the world population will grow to more than 6.4 billion. As a consequence, there has also been a growing number of cars in more compressed cities: a combination that negatively impacts city parking.
In smart cities, multiple systems and data silos can be connected to improve the overall accessibility in a city and to use parking resources more efficiently. Enhancing the management of parking also doesn’t necessarily require cities to invest huge sums in completely new technology: existing surveillance technology can be employed to tackle parking issues and improve the situation.
Parking: a seemingly small factor, but with a big impact on livability
Transportation in general is an important factor in the perceived livability of a city, and for those using cars, accessible parking is essential. This is true for permanent residents, visitors and drivers of delivery vehicles. Poor parking management results in stress, inefficiencies, and a broader impact on traffic congestion as vehicles drive slowly through streets looking for a vacant space. Ultimately, this may mean that people park illegally. Not only is this an inconvenience: if emergency service vehicles are blocked, it can be life-threatening.
There are also negative environmental impacts to poor parking management. Cars moving slowly through streets and delivery drivers leaving engines running while parked illegally, causing additional congestion and traffic incidents all add to issues with pollution, air quality and noise levels.
A recipe for parking-success
The solution to urban parking management comes in the form of video surveillance, analytics and data.
Network surveillance cameras – often those that are already implemented in many cities – can be enhanced through specific analytics applications to both alert officers to parking violations and guide drivers to vacant spaces.
Pre-defined detection zones can trigger automated alerts should an unauthorised vehicle stop in the zone for too long. The alerts are sent to authorities or enforcement agents so they can verify the incident and clear potentially important areas. The cameras not only detect violations but also help to prevent congestion and disruptions.
A combination of surveillance cameras and analytics can be used to identify free parking spaces and when connected with, for example, a navigation app can efficiently guide drivers to them. It saves time, avoids traffic jams and improves the residents’ or visitors’ experience.
Further enhancements come through connected data and systems. For example, in combination with payment apps and license plate recognition, parking management systems could be used to pay parking fees automatically. This would also save time, for example when leaving a parking garage, and make processes smoother for drivers.
With license plate recognition it would also be possible to spot cars without appropriate permits for specific parking zones (given these permits are connected with a certain license plate) or if they park in other restricted spaces. Once detected, an officer can be sent to move the respective vehicle or inform the driver to leave.
Through analysis of real-time and historical data it would also be possible to predict peak-times in certain areas and prepare for it, for instance by opening an extra parking lot or informing drivers in time that there is no parking available and direct to other areas. Adding artificial intelligence to the analysis, parking management solutions can even predict where the chance is highest to find an available parking lot when you arrive. In this way, the distribution of cars could be better controlled ensure drivers don’t arrive at a parking garage only to find out it’s full, again causing frustration and further congestion.
An achievable goal
Effective parking management is essential in increasingly crowded and congested cities. It helps to reduce traffic incidents and parking violations, leads to increased safety and security of citizens, reduces stress and improves the environment: all factors which result in an improved quality of life. And it’s a goal well within reach, with many cities already having much of the infrastructure and technology in place to take advantage.
Please find more information about how Axis contributes to livability in smart cities. | <urn:uuid:9d8aabf7-562a-412d-b670-34ac6f6a156a> | CC-MAIN-2022-40 | https://www.axis.com/blog/secure-insights/parking-management-livable-city/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00524.warc.gz | en | 0.948144 | 946 | 2.546875 | 3 |
Imagine you’re a pilot, calmly flying along. Suddenly, alarms blare that a heat-seeking missile has locked onto you. How hard do you think it would be to get rid of a missile that’s tracking you?
Now imagine that you’re calmly browsing the Web. Did you know that you’re being tracked here? You won’t hear any alarms to tell you about it.
Fortunately, there are several ways you can prevent tracking, or limit what information trackers can collect. Let’s dive into what trackers are, how they track you, and what you can do about it.
- The Threats
- Best Anti-Tracking Software: How to Increase Your Privacy
- Additional Resources
- What You Should Do
In a hurry? Here are the best anti-tracking browser extensions. Read the rest of this post for more details, and for other ways to block tracking.
Google trackers are present on 82% of the web traffic.
25% of the web has a hidden Facebook tracking pixel. Facebook knows more than what you just do on Facebook
1881 out of 6000 top websites have more than 10 trackers per page.WhoTracks.me
Who is Tracking You?
Advertising companies such as Google and Facebook make up a large share of trackers. They track advertisements, to learn when they’re seen and clicked, and details about who sees and clicks them.
Many organizations use analytics (such as Google Analytics) to learn how their websites are being used, and details about who uses them.
Social media companies such as Facebook and Pinterest integrate with websites to show like and pin buttons, and to track sites you visit.
You’ve probably noticed that if you’re looking at a product on Amazon or some other shopping site, you start to see ads for that product on other websites. That’s made possible by tracking.
How Are You Being Tracked?
You’ve probably heard of cookies, and they’re still being used. But other methods have arisen in recent years.
Cookie: A file that contains information that identifies you to a website, so that it can keep track of who you are. Cookies are simply part of how users interact with websites, and aren’t inherently a privacy risk. But when cookies are used to track users around the Web, they are considered a privacy risk. A first-party cookie comes from the site you’re on. A third-party cookie comes from a different website; this is the type used for tracking across websites.
Internet Protocol (IP) address: The Internet address given to your device by your network or Internet Service Provider (ISP). If you’re at home, your home’s IP address comes from your ISP, and can reveal your general location.
Fingerprinting: Identifying users based on the characteristics of their device or browser, such as operating system (OS), browser extensions, language, and installed fonts. You can see details about your fingerprint at AmIUnique.
Supercookie or evercookie: A file that’s stored in a different place than normal cookies, making it harder to detect and remove them.
Beacon (AKA tag or pixel): A small, usually invisible object embedded into a webpage or email. When you view the webpage or email, the beacon is loaded, and your activity is recorded.
Best Anti-Tracking Software: How to Increase Your Privacy
There’s a setting in many browsers called Do Not Track. This was supposed to be an easy way to opt-out of tracking, similar to how the National Do Not Call Registry was supposed to be an easy way to opt-out of telemarketing calls. Unfortunately, both efforts have failed to deliver.
There’s no enforcement for Do Not Track. A website can choose to honor your setting or not, and you don’t know if they are. So, we need to take other steps to limit tracking.
You may hear that you should delete all cookies or prevent your browser from accepting cookies to block trackers. This isn’t the solution, because companies are increasingly using methods other than cookies for tracking. Also, preventing or deleting first-party cookies (a cookie from the website you’re using at the moment) can break basic website functionality like the ability to be logged in to a site, or to keep a product in your shopping cart.
Third-party cookies (a cookie from a website other than the website you’re using at the moment) are less-frequently used for functionality and more-frequently used for tracking, so blocking them is a different matter.
Configure Your Browser to Limit Tracking
The place to start in blocking web tracking is your browser. Most browsers have settings that can limit tracking, and some browsers offer more settings than others.
I’ll explain some settings to look for in the desktop versions of these browsers. There are often similar settings in the mobile versions of these browsers.
I recommend that you use one of the browsers that offer better privacy-protection, such as Safari, Firefox, and Brave.
Safari makes it harder for you to be fingerprinted by websites.
… whenever you visit a webpage, Safari presents a simplified version of your system configuration. Your Mac looks more like everyone else’s Mac, which dramatically reduces the ability of trackers to uniquely identify your device.Apple
In Safari Preferences, click the Privacy tab, then check the box to Prevent cross-site tracking.
Safari uses machine learning to identify advertisers and others who track your online behavior and removes the cross‑site tracking cookies and website data they leave behind.Apple
Learn more about increasing Safari security and privacy.
In Firefox Preferences, click Security & Privacy. You can choose the level of Enhanced Tracking Protection: Standard, Strict, or Custom. If you choose Custom, you can choose what to block. I recommend choosing Custom and checking all the boxes, and setting Cookies to Third-party trackers.
Lower on that page, set Send websites a “Do Not Track” signal that you don’t want to be tracked to Always.
You can change Enhanced Tracking Protection on a per-site basis. When you’re on a website, just click the shield icon in the address bar to see your options.
In Chrome, click the More icon (3 vertical dots), then click Settings. The settings screen will appear, with several sections of settings. At the bottom of the Settings screen, you can click Advanced to see more settings.
Under Advanced, in the Privacy and security section, enable Send a “Do Not Track” request with your browsing traffic.
Click Site Settings, then Cookies and site data. Enable Block third-party cookies. Chrome will show a cookie icon in the address bar. You’ll be able to click it and then choose to allow that site to set cookies in the future. When you do, you can also click Show cookies and other site data and then the Blocked tab to see which cookies are being blocked. You then have 2 options (buttons): Allow and Clear on Exit. Allow allows those cookies in the future. Clear on Exit stores the cookie only until you close/quit Chrome, then the cookie is deleted.
Learn more about increasing Chrome security and privacy.
Note: these instructions are for the Chromium-based version of Edge, released in January 2020.
In Edge, click the 3 dots (…), then Settings. On the left, click the menu icon (3 horizontal lines), then Privacy and services. Toggle Tracking prevention to on. Then, choose the level of tracking prevention you want: Basic, Balanced, or Strict. I recommend choosing Strict.
On that same screen, scroll down to the Privacy section, and toggle Send “Do Not Track” requests to on.
If you need to change the tracking prevention settings for a particular site, when you’re on the site, click the padlock or i symbol to the left of the web address (URL), and below Tracking prevention, change the dropdown to On or Off.
Learn more about tracking prevention in Edge.
Brave blocks trackers by default. You can change your settings in Settings > Shields. I recommend enabling Block cross-site trackers and setting Cookies to Only block cross-site cookies. Learn more about Shields settings.
At the bottom of the Settings screen, click Advanced to see more settings. In the Privacy and security section, enable Send a “Do Not Track” request with your browsing traffic.
You can change the Shields settings for particular sites. Just click the Shields icon (the same lion head logo as Brave uses), then toggle Shields on or off, or toggle particular protections. Learn more about using Shields while browsing.
If you want to go the extra mile in protecting your privacy while browsing, look at Tor Browser. It routes your traffic through the Tor network, which hides your real IP address. It’s designed to foil fingerprinting attempts. By default, it doesn’t keep browsing history, and cookies are only valid for a single session. It’s like using private browsing all the time. You can choose between 3 security levels: Standard, Safer, and Safest.
Because of how it works, you probably wouldn’t use Tor Browser as your main browser, but if there are certain situations where you need to ratchet up your privacy, it’s worth considering.
You shouldn’t install extensions in Tor Browser, because they can conflict with its privacy protections.
Private browsing (AKA Incognito mode, AKA InPrivate browsing) by itself doesn’t block tracking, but it’s slightly helpful because of how it handles cookies. It doesn’t share cookies with the regular browsing mode, and it doesn’t preserve cookies once you close it.
Firefox is an exception to the norm, in that it includes tracking protection in its Private Browsing.
Anti-Tracking Browser Extensions
Once your browser is set to limit tracking as much as possible, it’s time to add one or more anti-tracking browser extensions.
The extensions available to you depend on the browser you use. They have varying levels of configurability, ranging from a few settings to a mind-boggling number.
In general, you can use more than one tracking-blocking extension, but you should check the documentation of any extensions you use for any warnings against this. For most people, one extension will be sufficient.
I’m going to cover extensions for desktop browsers. Several of these have related mobile browsers which can usually be set to block trackers.
This is my top recommendation because of its balance of power and simplicity. It blocks third-party trackers and shows a privacy grade for websites. If you notice that it prevents a website from working properly, you can whitelist that site, temporarily or permanently.
This is basically a set-it-and-forget-it option. There’s nothing to configure, because it learns as it goes.
If as you browse the web, the same source seems to be tracking your browser across different websites, then Privacy Badger springs into action, telling your browser not to load any more content from that source. … If it observes a single third-party host tracking you on three separate sites, Privacy Badger will automatically disallow content from that third-party tracker.Electronic Frontier Foundation (EFF)
Download for Chrome, Firefox, Opera.
This is another blocker with fairly simple options. It also anonymizes your data, and lets you customize blocking.
Because of Apple’s restrictions on Safari extensions, Ghostery isn’t available, but Ghostery Lite is. It lets you block trackers in 8 categories. I check all the boxes except Advertising, because I don’t want to hurt sites that rely on ad revenue. You can also individually trust websites, to allow all trackers from a particular site to load.
Download for Chrome, Firefox, Edge, Opera, and Ghostery Lite for Safari.
This is another one that can function as set-it-and-forget-it, but you can configure it if you’d like. By default, it blocks a wide range of trackers. You can manually allow certain trackers, or all trackers on a website.
Disconnect’s Visualize page feature is unique; it shows a graph of third parties that Disconnect is blocking. This is only available on Chrome and Safari.
Download for Chrome, Firefox, Safari, Opera.
This could be a fit if you’re technical and want something with more controls. It can use a large number of filter lists (lists of domains), which you can enable or disable. For example, I disable advertising-related lists, because I don’t want to hurt sites that rely on ad revenue. It only takes a couple of clicks to enable or disable uBlock Origin on a particular website.
I like that uBlock Origin automatically blocks known malware sites.
Note: this is not the same as uBlock (without “Origin” behind it).
Avast AntiTrack stops websites from tracking your online activity. It keeps your true identity private and secure while you browse the Internet.
This paid software feeds fake data to trackers, so they don’t see your true digital fingerprint. It also deletes tracking cookies and other tracking data. It says it does all this without breaking sites, which can happen when blocking trackers.
Download for Chrome, Firefox, Safari, Edge, Internet Explorer, Opera.
A VPN (virtual private network) provides some tracking protection, because you get a different IP address, sometimes every time you connect to the VPN. For sites that track by IP address, that will throw them off the scent. Another advantage is that the IP address you get usually points to a different geographic location than you’re actually in.
Other than changing your IP address, a VPN doesn’t provide other protection against trackers. But, it will protect your data when you’re on public Wi-Fi.
ProtonVPN offers secure VPN through an encrypted VPN tunnel, so your passwords and confidential data stay safe, even when you are using public or untrusted Internet connections.
Private Internet Access provides state of the art, multi-layered security with advanced privacy protection using VPN tunneling. It helps block unwanted connections, hide your IP address, and defend yourself from data monitoring and eavesdropping.
Some anti-malware has anti-tracking technology. If your anti-malware software does, you can learn more about it, and decide whether to use it.
I don’t recommend choosing anti-malware software based on how well it blocks web tracking; instead, focus on how well it prevents and removes malware.
Malwarebytes crushes the latest threats before others even recognize they exist. It helps protect your devices, data, privacy whether you're at home or on the go.
Note that some anti-malware claims to block tracking, but all it does is delete cookies. This isn’t an effective way of blocking tracking, and it can make browsing a pain.
Most people think of blocking web tracking and blocking ads as the same. And web advertising often uses tracking, so it’s understandable to lump them together. But it’s possible to block some tracking without blocking all ads. Why would you do this?
It’s an ethical issue rather than a technical one. Many websites rely on advertising as a source of revenue. Sometimes it’s the main, or only, way they earn money! If those websites don’t earn, they can’t pay their employees, pay for web hosting, etc. These sites (and the people behind them) are financially harmed by ad-blocking.
For this reason, I try to configure my browsers and extensions to allow ads, but block other tracking. I encourage you to do the same.
What You Should Do
- Choose a privacy-protecting browser. I recommend Safari (only available on Apple devices), Firefox, and Brave.
- Configure your browser to limit tracking. See the instructions in this post.
- Install at least one anti-tracking browser extension. If you want the simplest, set-it-and-forget-it option, I recommend Privacy Badger. If you want something with a few more options, I recommend DuckDuckGo Privacy Essentials. You can consider the others I covered in this post.
- For the next few days, pay attention to your browser and any extensions you installed. This will help you learn how they’re blocking trackers, and will help you fine-tune your settings.
Avast AntiTrack stops websites from tracking your online activity. It keeps your true identity private and secure while you browse the Internet. | <urn:uuid:600581c2-3a6b-47f8-86d6-d9503b270e6b> | CC-MAIN-2022-40 | https://defendingdigital.com/best-anti-tracking-software/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00724.warc.gz | en | 0.896818 | 3,570 | 2.75 | 3 |
Exploitation of just ONE software vulnerability is typically all that separates the bad guys from compromising an entire machine. The more complicated the code, the larger the attack surface, and the popularity of the product increases the likelihood of that outcome. Operating systems, document readers, Web browsers and their plug-ins are on today’s front lines. Visit a single infected Web page, open a malicious PDF or Word document, and bang -- game over. Too close for comfort if you ask me. Firewalls, IDS, anti-malware, and other products aren’t much help. Fortunately, after two decades, I think the answer is finally upon us.
First, let’s have a look at the visionary of software security practicality that is Michael Howard as he characterizes the goal of Microsoft’s SDL, "Reduce the number of vulnerabilities and reduce the severity of the bugs you miss." Therein lies the rub. Perfectly secure code is a fantasy. We all know this, but we also know that what is missed is the problem we deal with most often, unpatched vulnerabilities and zero-days. Even welcome innovations such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) only seem to slow the inevitable, making exploitation somewhat harder, but not stopping it entirely. Unless the battlefield itself is changed, no matter what is tried, getting hacked will always come down to just one application vulnerability. ONE. That’s where sandboxes come in.
A sandbox is an isolated zone designed to run applications in a confined execution area where sensitive functions can be tightly controlled, if not outright prohibited. Any installation, modification, or deletion of files and/or system information is restricted. The Unix crowd will be familiar with chroot jails. This is the same basic concept. From a software security standpoint, sandboxes provide a much smaller code base to get right. Better yet, realizing the security benefits of sandboxes requires no decision-making on the user’s behalf. The protections are invisible.
Suppose you are tasked with securing a long-established and widely-used application with millions of lines of insanely complicated code that’s deployed in a hostile environment. You know, like an operating system, document reader, Web browser or a plug-in. Any of these applications contain a complex supply chain of software, cross-pollinated code, and legacy components created long before security was a business requirement or anyone knew of today’s class of attacks. Explicitly or intuitively you know vulnerabilities exist and the development team is doing its best to eliminate them, but time and resources are scarce. In the meantime, the product must ship. What then do you do? Place the application in a sandbox to protect it when and if it comes under attack.
That’s precisely what Google did with Chrome, and recently again with the Flash plugin, and what Adobe did with their PDF Reader. The idea is the attacker would first need to exploit the application itself, bypass whatever anti-exploitation defenses would be in place, then escape the sandbox. That’s at least two bugs to exploit rather than just one. The second bug, to exploit the sandbox, obviously being much harder than the first. In the case of Chrome, you must pop the WebKit HTML renderer or some other core browser component and then escape the encapsulating sandbox. The same with Adobe PDF reader. Pop the parser, then escape the sandbox. Again, two bugs, not just one. To reiterate, this is this not say breaking out of a sandbox environment is impossible as elegantly illustrated by Immunity's Cloudburst video demo.
I can easily see Microsoft and Mozilla following suit with their respective browsers and other desktop software. It would be very nice to see the sandboxing trend continue throughout 2011. Unfortunately though, sandboxing doesn’t do much to defend against SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, Clickjacking, and so on. But maybe if we get the desktop exploitation attacks off the table, perhaps then we can start to focus attention on the in-the-browser-walls attacks. | <urn:uuid:6aef32ae-5750-4780-b98b-63b7f62b373c> | CC-MAIN-2022-40 | https://blog.jeremiahgrossman.com/2010/12/sandboxing-welcome-to-dawn-of-two.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00724.warc.gz | en | 0.927611 | 855 | 2.578125 | 3 |
In recent years the complexity of the IT infrastructures that underpin business operations has increased significantly. This is largely due to the ongoing digital transformation of enterprises, which are replacing manual and error-prone processes with IT-enabled ones. At the same time, the advent of computing paradigms such as the Internet-of-Things (IoT) has to lead to an expanded deployment of more sophisticated systems that comprise Internet-connected devices and smart objects like drones, automated guided vehicles and industrial robots. This rising sophistication of IT infrastructures comes with a host of automation and productivity benefits. Nevertheless, it also introduces new challenges, such as the need for stronger and effective Cybersecurity.
Cybersecurity has always been a major concern for deployers and operators of non-trivial IT infrastructures. However, its complexity and importance has recently risen for a number of additional reasons
As a result, there is a growing need to educate employees and other stakeholders on how cybercrime works, but also on how to engage in the implementation of robust security measures and policies. This need has led to the development of Cyber Ranges, which are emerging training and simulation environments for cybersecurity. The term Cyber Range stems from the concept of a “Shooting Range”: Cyber Ranges are safe places where people can be trained on cybercrime defense practices, much in the same way shooting ranges provide safe places where people fire guns at given targets.
A Cyber Range training environment is typically interactive and comprises a simulated representation of an organization’s cyber infrastructure. The cyberinfrastructure includes models for local networks, systems, applications and tools that are all connected in the simulated Internet-based environment. This environment is destined to support the development of cyber-skills based on practical testing while acting as a sandbox for testing of new products and services in terms of their cybersecurity characteristics.
The simulated environment of a Cyber Range includes a combination of real-life hardware and virtualized software components. It’s the proper mixing and combination of the inputs and outputs of these components that render a Cyber Range environment realistic. A certain portion of the network traffic of a Cyber Range is simulated and may comprise realistic representations of web pages, browsers, and e-mail services depending on the processes that are to be simulated for training or testing purposes.
The simplest form of Cyber Ranges is single stand-alone ranges, that are deployed and used internally by different types of organizations such as private enterprises, industrial organizations, governmental organizations, military agencies, as well as schools and universities. However, it’s also possible to expand the scope of the Cyber Range training environment based on the interconnection of a Cyber Range of an organization with Cyber Ranges established in other organizations. This interconnection can expand the scope of the training activities towards covering larger scale environments. However, the Cyber Ranges interconnection process is not a trivial one: different Cyber Ranges have to be interconnected in an interoperable way.
Let’s have a closer look at some of the main functionalities of a Cyber Range environment:
Cybersecurity professionals are the primary users of Cyber Range environments. However, many other stakeholder groups and professionals’ benefit from the use of Cyber Ranges, including law enforcement employees, IT experts, incident handlers, IT administrators, as well as regular personnel working in critical infrastructures. Furthermore, Cyber Ranges are commonly used by cybersecurity students and trainees, as part of their practical training curricula. In general, Cyber Ranges are closely related to all security training processes, including processes and examinations for obtaining security certifications.
Much as Cyber Ranges are important for individual workers, they are also very useful to entire organizations as tools for evaluating cyber competencies, testing new procedures, training personnel and evaluating new security processes and protocols.
Beyond training and education, Cyber Ranges are about early preparedness. One of the main cybersecurity issues faced by organizations nowadays is that they tend to be reactive when coping with cybersecurity threats and incidents. This reactiveness incurs significant damage, which takes place until the organization realizes the scale of the problem and remedies its root cause. Cyber Ranges can alleviate such poor reactions by making organizations more proactive. In particular, they can train employees to communicate fast about security-related information or security incidents’ indicators such as a dangerous email or an unusual behavior of an IT system. To this end, there is a need for employees to be able to identify these situations and to communicate them to cybersecurity experts. Likewise, cybersecurity teams must be very well trained on all incidents that they are likely to encounter, but also on the remedial actions that they should undertake, especially during the very first moments after the identification of an incident. Cyber Ranges can ensure that both cybersecurity teams and other employees are ready to play their role in the early cybersecurity preparedness of their organization.
Despite significant investments in cybersecurity and regulatory compliance, cybercrime incidents around the globe are still on the rise. Cyber Ranges can be a powerful tool in an organization’s cybersecurity arsenal in the years to come.
Advantages of Data Tokenization for enterprises
The benefits of cybersecurity mesh for distributed enterprises
The Rising Cybersecurity Threats CIOs cannot afford to ignore
Six Factors Affecting Security and Risk Management in the Post COVID Era
Surviving Cybercrime in 2021: Guidelines for Effective Cybersecurity Investments
Significance of Customer Involvement in Agile Methodology
Quantum Computing for Business – Hype or Opportunity?
The emerging role of Autonomic Systems for Advanced IT Service Management
Why is Data Fabric gaining traction in Enterprise Data Management?
How Metaverse could change the business landscape
We're here to help!
No obligation quotes in 48 hours. Teams setup within 2 weeks.
If you are a Service Provider looking to register, please fill out this Information Request and someone will get in touch.
Outsource with Confidence to high quality Service Providers.
If you are a Service Provider looking to register, please fill out
this Information Request and someone will get in
Enter your email id and we'll send a link to reset your password to the address
we have for your account.
The IT Exchange service provider network is exclusive and by-invite. There is
no cost to get on-board;
if you are competent in your areas of focus, then you are welcome. As a part of this exclusive | <urn:uuid:e0e383ae-4155-414e-8b8a-05813c3d6834> | CC-MAIN-2022-40 | https://www.itexchangeweb.com/blog/cyber-ranges-a-valuable-tool-in-your-cybersecurity-arsenal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00724.warc.gz | en | 0.945224 | 1,318 | 2.984375 | 3 |
In this edge computing vs cloud computing comparison, we will explain the definition of both terms, their best examples, and more. Is edge computing just a rebranded form of cloud computing, or is it something genuinely new? While cloud computing use has been on the rise, advances in IoT and 5G have given birth to technological breakthroughs – edge computing being one of them. The hybrid cloud enables IT administrators to leverage the strengths of both the edge and cloud. Still, they must understand the benefits and drawbacks of each technology to integrate them into business operations properly. Edge computing draws computers closer to the data source, whereas cloud computing makes sophisticated technology available over the internet.
Table of Contents
Edge computing vs cloud computing: What do they mean?
Businesses and organizations have already taken their computing activities to the cloud, which has shown to be a successful method for data storage and processing. On the other hand, cloud computing is not efficient enough to handle the fast stream of data produced by the Internet of Things (IoT). So, given current cloud-centric architecture limitations, what else can be done?
Edge computing is the answer. Today’s computers are moving from on-premises servers to the cloud server and then, more rapidly, to the Edge server, where data is collected from the outset.
Edge computing and cloud computing are two important elements of today’s IT environment. But before edge computing vs cloud computing, we should understand what these technologies entail.
What is cloud computing?
Cloud computing is the provision of computing resources, such as servers, storage, databases, and software for on-demand delivery over the Internet rather than a local server or personal computer. Cloud computing is a distributed software platform that employs cutting-edge technology to create highly scalable environments that may be used by businesses or organizations in a variety of ways remotely. IF you wonder about cloud computing vulnerabilities and the benefits of cloud computing, go to these articles.
Any cloud service provider will offer three major characteristics:
- Flexible services
- The user is responsible for the costs of various memory, preparation, and bandwidth services.
- The cloud service providers handle and administer the software’s entire backend.
Cloud computing jobs are also on the rise. We have already explained cloud computing jobs requirements trends and more in this article.
What is edge computing?
One of the most significant features of edge computing is decentralization. Edge computing allows for using resources and communication technologies via a single computing infrastructure and the transmission channel.
Edge computing is a technology that optimizes computational needs by utilizing the cloud at its edge. When it comes to gathering data or when someone does a particular action, real-time execution is possible wherever there is a need for that. The two most significant advantages of edge computing are increased performance and lower operational expenses.
There is also fog computing-related to them. If you wonder, “is fog computing more than just another branding for edge computing?” we discussed fog computing definition, origins, and benefits.
Edge computing vs cloud computing: The differences
The first thing to realize is that cloud computing and edge computing are not rival technologies. They aren’t different solutions to the same problem; rather, they’re two distinct ways of addressing particular problems.
Cloud computing is ideal for scalable applications that must be ramped up or down depending on demand. Extra resources can be requested by web servers, for example, to ensure smooth service without incurring any long-term hardware expenses during periods of heavy server usage.
Edge computing is also well suited for real-time applications that produce a lot of data. IoT, for example, is the networked use of smart devices. The internet of things (IoT) is a type of data collection that involves connecting various physical devices that exist today to the internet.
These devices lack powerful computers and rely on an edge computer for computational demands. Doing the same thing with the cloud would be too slow and infeasible because of the amount of data involved.
In a nutshell, both cloud and edge computing have applications that can be effective, but they must be utilized depending on the application. So, how do we choose? What are the differences between edge computing and cloud computing?
Edge computing vs cloud computing: Architecture
The term cloud computing architecture refers to the many loosely coupled elements and sub-components needed for cloud computing. It describes the components and their connections. Cloud computing provides IT infrastructure and applications as a service over internet platforms on a pay-as-you-go basis to individuals and businesses.
Edge computing is a more advanced version of cloud computing, combining distributed computing and on-premises servers to solve latency, data security, and power consumption by bringing apps and data closer to the network edge.
Edge computing vs cloud computing: Benefits
Things not only consume data, but they also produce it in edge computing. It allows compute, storage, and networking services running on end devices to communicate with cloud computing data centers.
Because the cloud demands a lot of bandwidth, and wireless networks have restrictions. However, edge computing enables you to use less bandwidth. Because devices in close proximity are employed as servers, most concerns such as power consumption, security, and latency are alleviated effectively and efficiently. Edge computing is used to enhance the IoT’s overall performance.
Edge computing vs cloud computing: Programming
Several application programs may be utilized for development, each with a distinct runtime.
On the other hand, cloud development is best when developed for a development environment and uses only one programming language.
Edge computing vs cloud computing: Security
Because edge computing systems are decentralized, the cybersecurity paradigm associated with cloud computing is changing. This is because edge computers may send data directly between nodes without first communicating with the cloud. An edge system that utilizes cloud-independent encryption techniques that work on even the most resource-constrained edge devices is required. However, this may have a detrimental impact on the security of edge computers vis-à-vis cloud networks. A chain is only as strong as its weakest link, after all. On the other hand, Edge computing improves privacy by making data less likely to be intercepted while in transit since it restricts the transmission of sensitive information to the cloud.
Because cloud computing platforms are inherently more secure due to vendors’ and organizations’ centralized deployment of cutting-edge cybersecurity measures, they are often more secure. Cloud providers frequently employ sophisticated technologies, rules, and controls to boost their overall cybersecurity posture. In the case of cloud technologies, data security is simpler due to the widespread adoption of end-to-end encryption protocols. Finally, cybersecurity professionals implement tactics to protect cloud-based infrastructure and applications against potential hazards and advise clients on how to do the same.
Edge computing vs cloud computing: Relevant organizations
Edge Computing may be better for applications with bandwidth difficulties. Edge computing is especially beneficial for medium-scale firms on a tight budget who wish to optimize their money.
Given that large data processing is a typical issue in development programs, cloud computing is more appropriate.
Edge computing vs cloud computing: Operations
Edge computing is when a system rather than an application handles data processing.
Edge computing vs cloud computing: Speed & Agility
Edge technologies take their data-driven counterparts’ analytical and computational capabilities as close to the data source as feasible. This improves responsiveness and throughput for applications running on edge hardware. A well-designed and sufficiently powerful edge platform could outperform cloud-based systems for certain applications. Edge computing is superior for apps that require little reaction time to ensure secure and efficient operations. Edge computing may emulate a human’s perception speed, which is useful for applications such as augmented reality (AR) and autonomous vehicles.
Traditional cloud computing configurations are unlikely to match the agility of a well-designed edge computing network, yet cloud computers have their way of oozing with speed. For the most part, cloud computing services are available on-demand and can be obtained through self-service. This implies that an organization can immediately deploy even huge quantities of computing power after a few clicks. Second, cloud platforms make it easy for businesses to access a wide range of tools, allowing them to develop new applications rapidly. Any business may obtain cutting-edge infrastructure services, massive computing power, and almost limitless storage on demand. The cloud allows businesses to conduct test marketing campaigns without investing in costly hardware or long-term contracts. It also allows enterprises to differentiate user experiences through testing new ideas and experimenting with data.
Edge computing vs cloud computing: Scalability
Edge computing demands scalability according to the heterogeneity of devices. This is because different items have varying levels of performance and energy efficiency. Furthermore, when compared to cloud computers, edge networks operate in a more dynamic environment. This implies that an edge network would require solid infrastructure for smooth connections to scale resources rapidly. Finally, security measures on the network might cause latency in node-to-node communication, slowing downscaling operations.
One of the primary advantages of cloud computing services is scalability. Businesses may quickly expand data storage, network, and processing capabilities by using an existing cloud computing subscription or in-house infrastructure. Scaling is usually rapid and convenient, with no downtime or interruption associated. All of the infrastructures are already in place for third-party cloud services, so scaling up is as easy as adding a few extra permissions from the client.
Edge computing vs cloud computing: Productivity & Performance
In an edge network, computing resources are located close to end-users. This implies that client data is analyzed with analytical tools and AI-powered solutions within milliseconds. As a result, operational efficiency—one of the system’s major advantages—is improved. Clients who meet the specified use case will benefit from increased productivity and performance.
Cloud computing eliminates the need for “racking and stacking,” such as setting up hardware and correcting software related to on-site data centers. This increases IT personnel’s productivity, allowing them to concentrate on more important activities. Cloud computing providers also help organizations improve their performance and achieve economies of scale by constantly adopting the newest computing hardware and software. Finally, companies don’t have to worry about running out of resources because changing demand levels cause fluctuations in supply. Cloud platforms ensure near-perfect productivity and performance by ensuring that there is always the right amount of resources available.
Edge computing vs cloud computing: Reliability
Edge computing services require smart failover management. Users will be able to access a service entirely effectively even if a few nodes go down in an adequately set up edge network. Edge computing vendors also ensure business continuity and system recovery by using the redundant infrastructure. Edge computing can also improve performance by limiting or eliminating duplicate application data and packaging processes that are not directly related to one another. Edge computing systems may provide real-time detection of component failure, allowing IT staff to act promptly. On the other hand, Edge computing networks are less dependable because of their decentralized nature. Finally, because edge computers can function without access to the internet, they have several benefits over cloud platforms.
Edge computing is not as reliable as cloud computing. Data backup, business continuity, and disaster recovery are all simpler and less costly in the case of cloud computing because it is centralized. If the closest site becomes unavailable, copies of critical data are kept at various locations that may be accessed automatically. Even if the entire data center goes down, large cloud platforms are frequently capable of continuing operations without difficulty. On the other hand, Cloud computing requires a solid internet connection to function properly on both the server and client sides. Unless continuity procedures are in place, the cloud server will be unable to communicate with connected endpoints, bringing operations to a halt unless continuity mechanisms are in place.
The hybrid approach
As previously stated, cloud computing and edge computing are not rivals; instead, they address distinct difficulties. That raises the question: can they both be utilized in tandem?
Yes, this is possible. Many applications use a mixed approach that combines both technologies for maximum effectiveness. An on-site embedded computer, for example, is often linked to industrial automation equipment.
The main computer operates the device and handles complicated computations quickly. However, this computer also transmits limited data to the cloud, which manages the digital framework for the entire process.
By combining the power of both technologies, the app draws on the advantages of both paradigms, relying on edge computing for real-time processing while leveraging cloud computing for all other tasks.
Foggy edges of the cloud
As new ingredients are added to the tech term salad, we like to compare them, and the same goes for edge computing vs cloud computing comparison. However, this comparison only gives some of the answers we are after. The real question is how edge and cloud computing change the modern IT infrastructure.
Cloud and edge computing complement each other with several advantages and applications. Edge computing was developed to address cloud technology’s centralized data collection and analysis challenges. However, the cloud is still a great choice with its flexible resource management and higher overall utilization rates equate to cost savings.
The puzzle is completed for the nonce
Edge comes into the picture when there is no time to wait for data to be sent to and analyzed on the cloud. Edge computing completes the contemporary real-time data processing puzzle with the cloud and IoT. These three can work connected for real-time data processing.
Cloud and edge computing can do great things together for an organization, but we will remember what these technologies bring to the table separately before we delve into that. But first thing first, let’s clarify why we can’t wait for data to make its journey to central cloud platforms for analysis anymore.
Need for speed
Cloud computing platforms allow organizations to extend their infrastructure across multiple locations and scale computational resources up or down. Hybrid clouds provide businesses with unprecedented flexibility, value, and security for IT applications.
However, things have changed. Real-time AI apps need a lot of computer power, and they’re frequently located far from central cloud servers. Some workloads must remain on-premises or on a certain site due to security, latency, or residency regulations.
With the introduction of GPU-based AI solutions, organizations looked to augment networks with edge computing, a method of processing that takes place where data is generated. Edge computing refers to handling and storing data on-site in an edge device rather than processing it remotely on the cloud.
The rapid expansion of IoT is one of the cloud’s most challenging problems. Devices are strewn about an organization’s physical IT environment, performing various activities from simple readings to complex operations in response to the production line or smart building requirements. IoT devices are data-rich, but they’re also “noisy,” which means a lot of that data is useless. This information is chatty in nature, as it isn’t a continuous flow rather than a series of incidents over time. Such data does not need to travel across the network; however, many IoT components lack the inherent intelligence to recognize this.
Tackling the complexities of an IoT environment with a cloud-only platform is not the ideal approach. The issue is that to utilize all of this data from these IoT devices, it must travel through the network and reach where that cloud capability exists. The latency caused by these processes also slows down the data itself and imposes a serious bandwidth restriction on the cloud. And this is exactly where edge computing steps in.
Taking the edge off workloads
IoT devices are generating ever more data, but we haven’t seen the peak yet. As 5G networks expand to more mobile devices, the amount of data generated will increase further. The promise of cloud computing and AI has long been to automate and accelerate innovation by promoting actionable knowledge from data. However, the enormous scale and complexity of data produced by networked devices have outpaced network and infrastructure capacity.
That device-generated data would have to go to a centralized data center or the cloud, creating bandwidth and latency problems. Edge computing is more efficient than this approach since data is processed and interpreted at or closer to its source. Thus, latency is considerably reduced because data does not travel over a network to be processed. Edge computing allows faster and more comprehensive data analysis, detailed insights, quicker responses, and better customer experiences.
If a network’s endpoints are connected by edge devices that can provide storage and processing capabilities, those devices’ storage and computing resources will be abstracted, pooled, and shared across a network—essentially becoming part of larger cloud infrastructure. Edge computing is not always connected to the cloud. Actually, edge computing’s usefulness stems from the fact that it is intentionally disconnected from clouds and cloud technology.
And now some fog
The emergence of edge computing also paved the way for developing new computing approaches that are very efficient in some scenarios. And fog computing is one of them. Some consider fog computing as Cisco’s ideal interpretation of edge computing, consequently their latest contribution to the terms bonanza we enjoy (!) today. However, this new “approach” has its differences and a few aces up on its sleeves, including the repeatable structure and scalable performance.
Fog computing also brings computing to the network’s edge Cisco-way. Moving storage and computing systems near the applications, components, and devices that require them reduces processing latency. This is especially important for connected IoT devices that create massive data. Because they are closer to the data source, these devices have less latency in fog computing.
The fog metaphor derives from the meteorological term for a cloud close to the ground, just as fog computing focuses on the network’s edge.
Cloud-optimized fog computing uses standard procedures to guarantee repeatable, organized, and scalable performance within the edge computing framework. But it differentiates by utilizing both edge processing and the infrastructure and networks for data transfer.
Fog computing eliminates the gap between the processing location and the data source by utilizing edge computing methods in an IoT gateway or fog node with LAN-connected processors or within the LAN hardware itself. This approach results in a greater physical distance between the computations and sensors, yet no additional latency. | <urn:uuid:4cdfc53b-5cdd-49d0-9072-69eb207248bd> | CC-MAIN-2022-40 | https://dataconomy.com/2022/05/edge-computing-vs-cloud-computing/?utm_source=s5_mobile_site&utm_medium=mobile&utm_campaign=s5_89983 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00724.warc.gz | en | 0.924081 | 3,732 | 2.96875 | 3 |
The Internet of Things (IoT) has the potential to revolutionize the way systems and businesses operate, allowing for not only greater automation but also greater visibility thanks to the massive amounts of data that can be collected, analyzed, reported, and acted upon – often without the need for human interaction or involvement.
The capacity to collect data efficiently begins with the use of IoT sensors. Sensors are devices that respond to physical inputs and then display, transmit, or employ artificial intelligence (AI) to make judgments or modify operational conditions based on those inputs. In the context of the Industrial Internet of Things, data received from sensors is utilized to assist business owners and managers in making informed choices about their operations, as well as to enable clients and users to use the company’s goods and services more effectively.
As the Internet of Things (IoT) project grows, more sensors will be utilized to monitor and gather data for analysis and processing. This article provides an overview of some of the many types of sensors that will be used to drive data collecting in the IoT effort.
IoT Sensor Types
IoT Sensors are built to respond to particular sorts of physical circumstances and then provide a signal that represents the magnitude of the condition being monitored. Light, heat, sound, distance, pressure, or any more particular scenario, such as the presence or absence of a gas or liquid, are examples of such situations. The following are examples of common IoT sensors that will be used:
- Temperature sensors
- Pressure sensors
- Motion sensors
- Level sensors
- Image sensors
- Proximity sensors
- Water quality sensors
- Chemical sensors
- Gas sensors
- Smoke sensors
- Infrared (IR) sensors
- Acceleration sensors
- Gyroscopic sensors
- Humidity sensors
- Optical sensors
Each of these sensors is described in detail below.
Temperature sensors monitor the temperature of the air or a physical item and convert it to an electrical signal that may be calibrated to represent the observed temperature precisely. These sensors might be used to track the temperature of a crucial piece of equipment to detect when it is overheating or approaching failure.
Pressure sensors detect air pressure, the pressure of a stored gas or liquid in a sealed system such as a tank or pressure vessel, or the weight of an item by measuring the pressure or force per unit area applied to the sensor.
Motion sensors or detectors can sense the movement of a physical item by employing any one of numerous technologies, including passive infrared (PIR), microwave detection, or ultrasonic, which utilizes sound to detect things. These sensors may be utilized in security and intrusion detection systems, as well as to automate the control of doors, sinks, air conditioning, and heating systems, and other systems.
The level of a liquid relative to a normal value is converted into a signal using level sensors. Gasoline gauges, for example, show the level of fuel in a vehicle’s tank and offer a continuous level reading. There are also point-level sensors, which are a digital or go-no-go depiction of the liquid level. When the gasoline level tank gets extremely close to empty, certain vehicles include a light that glows, functioning as an alert to notify the driver that fuel is likely to run out totally.
Image sensors collect pictures that are then digitally stored and processed. License plate readers, as well as facial recognition systems, are examples. Image sensors in automated production lines may identify quality concerns such as how effectively a surface is coated after leaving the spray booth.
Proximity sensors use a range of technological designs to detect the presence or absence of items approaching the sensor. These strategies include:
- Inductive technology that can be used to detect metal items
- Capacitive technologies are those that work with things that have a different dielectric constant than air.
- Photoelectric technologies, which use a beam of light to illuminate and reflect back from an item, or photovoltaic technologies, which use a beam of light to illuminate and reflect back from an object.
- Ultrasonic technologies detect an item approaching the sensor by sending out a sound signal.
Water Quality Sensors
The importance of water to humans on the planet, not just for drinking but also as a critical element in many manufacturing processes, necessitates the ability to feel and evaluate characteristics related to water quality. The following are some instances of what is felt and monitored:
- Chemical Presence – such as chlorine levels or fluoride levels.
- Oxygen Levels – which may impact the growth of algae and bacteria.
- Electrical Conductivity – which can indicate the level of ions present in water.
- PH Level – a reflection of the relative acidity or alkalinity of the water.
- Turbidity Levels – a measurement of the number of suspended solids in water.
Chemical sensors are designed to detect the presence of certain chemical compounds which may have mistakenly escaped from their containers into places that are occupied by workers and are important in managing industrial process conditions.
Gas sensors, like chemical sensors, are calibrated to detect the presence of combustible, poisonous, or flammable gas in the sensor’s surroundings. The following are some examples of particular gases that can be detected:
- Acetone (e.g. Paints And Glues)
- Toluene (e.g. Furniture)
- Ethanol (e.g. Perfume, Cleaning Fluids)
- Hydrogen Sulfide (e.g. Decaying Food)
- Benzene (e.g. Cigarette Smoke)
Smoke sensors or detectors use optical sensors or ionization detection to detect the presence of smoke conditions that might be a sign of a fire.
Infrared (IR) Sensors
Objects generate infrared radiation, which is detected by infrared sensor technology. These sorts of sensors are used in non-contact thermometers to measure the temperature of an item without having to place a probe or sensor on it directly. They’re useful for assessing electronics’ heat signatures and monitoring blood flow or blood pressure in patients.
While motion sensors detect movement, acceleration sensors, commonly known as accelerometers, measure the rate at which an object’s velocity changes. A free-fall state, a quick vibration creating a movement with speed variations, or rotating motion might all cause this shift. Acceleration sensors use a variety of technologies, one of which is:
- Hall-Effect Sensors – rely on magnetic field variations to detect changes.
- Capacitive Sensors – which depend on monitoring changes in voltage from two surfaces.
- Piezoelectric Sensors – create a voltage that varies in response to pressure due to sensor distortion.
Using a 3-axis system, gyroscopes or gyroscopic sensors are used to monitor the rotation of an object and estimate the rate of its movement, known as angular velocity. These sensors allow the orientation of an item to be determined without having to see it.
Humidity sensors can detect the relative humidity of air or other gases, which is a measure of how much water vapor is present. Controlling environmental conditions is crucial in the manufacturing of materials, and humidity sensors allow for measurements and adjustments to be made to minimize rising or falling levels. To maintain desired comfort levels, HVAC systems are a typical application.
Optical sensors respond to light that is reflected off of an object and generate a corresponding electrical signal for use in detecting or measuring a condition. These sensors work by either sensing the interruption of a beam of light or its reflection caused by the presence of the object. The types of optical sensors include:
- Through-Beam Sensors – which detect objects by the interruption of a light beam as the object crosses the path between a transmitter and remote receiver.
- Retro-Reflective Sensors – which combine transmitter and receiver into a single unit and use a separate reflective surface to bounce the light back to the device.
- Diffuse Reflection Sensors – which operate similarly to retro-reflective sensors except that the object being detected serves as the reflective surface.
AKCP Wired and Wireless Sensors
AKCP provides wired and wireless sensors for monitoring a wide range of industries: data center temperature monitoring, remote site sensors, power monitoring, and environmental monitoring. AKCP has over 30 years of experience and the world's largest installed base of environmental monitoring sensors.
AKCP offers powerful yet power-conscious sensors for temperature, humidity, power, contact, and more. They're configurable to transmit infrequently and operate with a 10-year battery life, which means minimal maintenance and a network you can deploy and depend on. Our IoT sensors also integrate seamlessly with our Wireless Tunnel Gateways (WTG), pushing data into the AKCPro Server so you can easily connect your data to a variety of devices and applications.
Sensors enhance our capacity to observe and report on the world around us. What a sensor sees can be the difference between that which is imagined and that which is possible.
A data center could be built on the Moon before the end of the decade as part of an international effort to develop a permanent base on our nearest neighbor.
As part of the wider NASA Artemis Moon program, Italian space agency ASI turned to Thales Alenia Space to study 16 design concepts to support a human presence on the Moon, including a data center.
We caught up with Eleonora Zeminiani, the head of the aerospace company's Human Exploration New Initiatives division, to learn how a data center will be key to living on the Moon.
Building a Lunar data center
By 2024, NASA hopes to put the first woman and person of color on the Moon as part of the first phase of Artemis.
Then comes Phase 2, 'sustainability,' which is all about developing a long-term base. "This means that it is an element needed after 2025 and towards the end of the decade, because that is the timeframe in which consistent lunar infrastructure is currently planned for deployment and the Lunar Data Center will need to be there to serve it," Zeminiani said.
The TAS project will investigate 16 key high-level architectural elements for future sustainable lunar exploration, Zeminiani explained. "For example, rovers, orbital platforms, surface habitats… Among these, one is the Lunar Data Center. In other words, we devoted one of the 16 study streams entirely to the Lunar Data Center.
"This is because we believe the LDC would be a major building block, able to serve most - if not all - of the other elements, and a game-changer in how we design and operate the other systems."
The study will aim to investigate the architecture and design of the data center, with TAS and its partners proposing a few initial solutions, each "extremely different one from another."
With the process still in its early days, TAS is first trying to determine what the LDC will need to be used for. "Then, based on those requirements we will be able to assess the different configurations to find the most promising one," Zeminiani said.
Currently, lunar rovers and proposed systems use a mixture of on-board Edge compute and direct line of sight communication to Earth compute resources.
"However, our goal here is to look beyond that, to truly explore the case for an LDC," Zeminiani explained.
"For many needs, relying on Earth-based computational resources is simply not acceptable, because communications with Earth are subject to a [noticeable] latency, one order of magnitude bigger than what we consider acceptable for today’s VoIP standards and two orders of magnitude bigger than the desired standard for low latency applications such as virtual machines and network storage."
While it is closer than the Earth, the proposed Lunar Gateway is also not suitable for a data center, the company said. The space station expects to serve as a solar-powered communication hub, science laboratory, short-term habitation module, and holding area for robots.
"But it is not designed to sustain heavy computational demand from external clients," Zeminiani said, who did not rule out putting the LDC in an orbital location of its own.
The study is set to be completed before the end of the year, with the company still to determine how much of the results will be made public.
"My goal with LunaNet is that it'll be just as enabling as the Internet was to the Earth," NASA project head David Israel explained to DCD. "Once this whole network-based mindset gets into the user side, the people planning the missions, then there'll be all sorts of new types of missions and applications that just grow out of it.”
In the next issue of the DCD Magazine, we speak to Internet co-founder Vint Cerf about Delay-Tolerant Networking and why it is key to an Interplanetary Internet. Subscribe for free today.
Cloud computing is perhaps one of the most powerful and promising solutions in the modern computing space. The ability to call external sources, create client workstations for users on the fly, and to scale dramatically given a wide range of possible solutions is extremely exciting and makes Cloud solutions poised to become the de facto forward-thinking solution.
Unfortunately, Cloud computing has some serious concerns that many likely haven’t addressed as part of their security considerations. Because of the nature of the cloud and how it’s built to mimic common OS interactions in a classical sense, many think that you can treat the Cloud in the same way you would a local network resource. This is untrue and extremely dangerous.
Today, we’re going to look into cloud computing. We’ll discuss what it is, how it’s typically implemented, and the special security concerns implied in its adoption.
So what specifically is the cloud? In the simplest terms, we can call it a “collection of remote resources tied together to mimic a local resource”. That’s a big, simplistic overview, but it does work in a general sense. When a cloud solution is integrated, each element of a local resource, such as storage, processing, and input/output is moved from the internal network to an external endpoint.
This has a lot of implications for scalability, and that's largely why it has been considered a great technology. If you asked a classical network to add 100 devices, you'd be hard-pressed to do so in an economical, time-efficient way. Under cloud networking, it can be as simple as sending a single spin-up request for new services, and you're set to go.
Part of the problem with the cloud, though, is that it’s a new paradigm designed to function like an old one. Cloud systems have been created with average users in mind, and thus most cloud solutions mimic behaviors of local solutions like your laptop or workstation.
While that's a great thing in terms of UX, it also means the propagation of bad habits and misunderstandings from traditional ecosystems. These bad habits, formed from years and years of using local resources, carry over into cloud solutions in a way that not only retains their negative effects but in fact magnifies them in a more complex, damaging, and complete way.
Bad Habits Die Hard
Part of these bad habits is a misunderstanding of what specifically a local resource does. Mistaking data destruction for data erasure and vice versa, failure to configure proper security policies, and more can drastically harm a local network, but these failures are even worse on a forward-facing, quasi-public cloud solution.
Take, for instance, users' habit of saving revisions of a file. On a local system, this might be acceptable, as the data can be more easily tracked and controlled using revision management. In the cloud, however, multiple revisions might be stored on a device, on the local network, and in the cloud; these extra revisions, kept simply out of habit, form a web of data exposure.
That is really the crux here: a huge drawback of this category is the fact that the resources are not local, but they are often treated as if they were. Consider how much security you expect your email to have versus your basic file management on your operating system. You assume it's local, so your expectations for security are lower. Many people think the same way about the cloud, and this itself is a huge problem.
Identifying these habits is a great first step to fixing them, but the problems are a lot more complex than simple human behavior.
A huge concern for cloud solutions is the misconfiguration of the underlying systems in play. This security concern drastically magnifies small mistakes in ways that aren’t mirrored on the local system. For instance, requesting a write operation for a new drive partition on the local network is not that big a deal – requesting the change over hundreds of devices and improperly configuring this change can cause a cascade failure effect, reducing overall security and availability.
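One common mitigation for this failure mode is to validate a change against policy and canary it on a single instance before fanning it out to the fleet. The sketch below is a generic illustration; the policy rules, the apply_to callback, and the host names are hypothetical.

```python
# Sketch: guard a fleet-wide change behind validation plus a canary step.
# validate(), apply_to(), and the instance list are illustrative placeholders.
from typing import Callable

def validate(change: dict) -> list[str]:
    """Return policy violations; an empty list means the change looks safe."""
    errors = []
    if change.get("partition_format") and not change.get("backup_confirmed"):
        errors.append("destructive operation without a confirmed backup")
    if change.get("scope", 0) > 100 and not change.get("approved_by"):
        errors.append("large-scope change lacks an approver")
    return errors

def roll_out(change: dict, hosts: list[str], apply_to: Callable[[str], bool]) -> None:
    problems = validate(change)
    if problems:
        raise ValueError(f"refusing rollout: {problems}")
    canary, rest = hosts[0], hosts[1:]
    if not apply_to(canary):          # try exactly one machine first
        raise RuntimeError(f"canary {canary} failed; aborting before fleet-wide damage")
    for host in rest:
        apply_to(host)

hosts = [f"vm-{i:03d}" for i in range(5)]
roll_out({"partition_format": False, "scope": 5}, hosts, apply_to=lambda h: True)
print("rollout completed")
```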
Another huge security concern is the fact that, when using a cloud-based system, data will inevitably leave the network. On a local system, data is kept internal – everything is processed internally, and thus data is kept secure during the processing stage.
For cloud services, this is not true — data will inevitably leave the network for the cloud system, resulting in yet another point of potential failure that needs to be checked and monitored for security purposes.
Physical Barrier Elimination
A big benefit of the rather limiting nature of the traditional local network is that in order to use resources on the network, you have to be on the network. This is not so with cloud solutions, where a user can theoretically access resources from almost anywhere on the planet.
This shift means that, as much as security is always a concern, these concerns are amplified due to the ability for users to access these resources using mobile devices, laptops, and other remotely connected devices.
This obviously has serious data implications but also requires more complex security policies to ensure data is not stored locally in cached forms.
A major issue with Cloud Storage is the vulnerability of sharing a server with another subscriber to the cloud provider. In our post, Cloud vs. Physical Data Storage, you can read about the risks of shared access. Sharing virtual space with another business or individual who may have more lax data management practices could open up the other virtual tenants to potential data breaches. If a hacker were able to access one tenant’s data it’s likely the other tenants would be vulnerable to attack.
Multitenancy is largely unexplored and rightfully makes IT professionals nervous; here you can read how researchers were able to access other tenants' private information.
Finally, cloud computing is complex. This complexity adds a lot of functionality, but it also adds many points of failure that would otherwise not exist. Even barring these additional points of failure, though, there are still some issues with increased complexity.
Chief among them is the fact that as a system gets more complex, it becomes harder to check for errors and other issues. A local network has a limited set of functions and elements. A cloud system is much more complex and has more diverse items connecting to it.
This ultimately means that what might be an easy identification for a local network becomes much harder to identify, and as a sum total system, this means dramatically decreased security if not properly managed.
A great tool that can help mitigate many of these issues is Clarabyte Complete. Designed to be a complete suite, this platform leverages several powerful solutions to identify points of failure and mitigate any potential issues long before they become big ones.
Claracheck is a great solution for many of the issues highlighted here. The system automates the diagnostic process, identifying driver injection, managing data migration, and assuring proper security policies.
As part of Clarabyte Complete, ClaraWipe is also a great tool for data destruction and secure wiping. ClaraWipe supports or adheres to the following standards, making it a gold-standard solution for the industry at large:
• Sarbanes-Oxley Act (SOx)
• HIPAA & HITECH
• The Fair and Accurate Credit Transactions Act of 2003 (FACTA)
• US Department of Defense 5220.22-M
• CSEC ITSG-06
• Payment Card Industry Data Security Standard (PCI DSS)
• Personal Information Protection and Electronic Documents Act (PIPEDA)
• EU data protection directive of 1995
• Gramm-Leach-Bliley Act (GLBA)
• California Senate Bill 1386
• and others.
Additionally, Clarabyte Complete offers Clarasell, a feature-rich commerce-oriented application that helps identify added value and potential leads within your own data. While this doesn't help in terms of security, it's an added benefit, and certainly one that takes Clarabyte Complete to a whole new level.
Student interest in ICT finally seems to be growing, with a nearly 13% increase in students opting to take the exam at GCSE this year. Despite the rise, the government still plans to scrap the curriculum, launching a computer science alternative in two years' time.
The number of students taking ICT at GCSE level increased in 2012, despite a decline in top grades for the subject and an overall decline in GCSE results for the first time in 24 years.
An estimated 658,000 16-year-olds across the UK received their results last week. Of those, 53,197 took the ICT exam this year – an increase of 12.8% on last year's 47,128.
The GCSE results come as Education Secretary Michael Gove is considering reforms which could end GCSEs and bring back a tougher, O-level-style exam. Students are expected to sit the first O-level exams in 2016.
Due to pressures from exam regulator Ofqual, harsher grading was used for this year’s GCSE results as a way to curb ‘grade inflation’ following criticism over some subjects' past papers. A WJEC ICT paper from 2011 showed questions such as: “Give one feature of the desktop publishing software which could be used to check for spelling mistakes” and “Give a reason why bank cards have a PIN number.”
Scrapping the current ICT curriculum
Due to a growing debate about whether past papers were asking relevant questions, the ICT curriculum will be scrapped in schools from September and replaced with a computer science qualification from September 2014.
Geoffrey Taylor, head of academic programme, SAS UK said while it’s encouraging to see more students taking ICT at GCSE, we must not get complacent as the number is still fairly low: “There’s clearly still work to be done to make sure that we are equipping our students with skills that match the needs of businesses today.
“It’s no secret that the tech industry finds it difficult to hire people with the right expertise and this poses a serious threat to growth. With the rise of social technologies and the proliferation of mobile devices, organisations are in need of employees that can exploit the great volumes of data they generate, but our graduates lack the analytical skills needed by businesses to navigate big data."
Taylor said a new curriculum for computer science is certainly a step in the right direction: “By involving employers and universities, schools will be able to equip their students with the skills that will really be of use in the future.
“It is vital for us all to encourage students to seriously engage in the perceived tougher subjects of maths and science for GCSE and A-levels as this could ensure their future career paths.”
According to Ian Moyse, sales director at Cloud CRM provider Workbooks.com, to say that the recent decision to scrap GCSE ICT without a suitable replacement already in place is "short-sighted" is an understatement: “IT in business is in the process of change and this provides an opportunity for new, educated blood to step into new shoes with skills at hand. This will be hindered by a lack of progressive thought and education on the areas that businesses seek.
“All students should see ICT as a life skill they will need in their business lives and one that can also serve them well at home.”
Tony Glass, vice president of sales EMEA at Skillsoft said: “We understand why the government has chosen to scrap irrelevant and unpopular ICT courses – but what will replace them and ensure young people can acquire the business-critical IT and digital communications skills they urgently need to be productive members of the workforce? How many degree courses include a mandatory computer skills course?”
Glass said Skillsoft has seen this problem reflected in the way businesses use the resources from the company’s Books24/7 collection: “Last year, the book that soared in the popularity stakes was Microsoft Excel 2012 Step by Step. Outlook and Word basic introductory titles were also in high demand.
“As outdated ICT courses are consigned to the curricular dustbin, we must ensure that they are replaced by something that will prepare young people for the rewarding careers that lie ahead; and which will release employers from the need to top up new recruits’ basic education.”
The exam board OCR launched a pilot computer science GCSE in 2010, before formally launching the qualification in September 2011. An AQA GCSE in computer science has also been accredited and is due to be launched in schools this September.
Drop in number of ICT students at A-Level
Despite a rise in the number of students sitting the GCSE ICT exam this year, the number of students choosing to study ICT at A-level dropped. The results from the Joint Council for Qualifications revealed a near 10% decrease in A-level students sitting the ICT exam for 2012.
This year, 11,060 students opted to take the ICT exam – 872 fewer than in 2011.
Roy Dungworth, managing director at Modis, which is part of Adecco Group’s Unlocking Britain's Potential Campaign, said IT is one of few industries to buck the trend of job shortages for young recruits, yet there has been a 10% drop in the number of students sitting ICT at A-level: “This points to a systemic failure at the heart of IT education.”
Dungworth said the new computing GCSEs, launching in September, will have the potential to refresh young people's perceptions of ICT by teaching them how to build key programmes instead of just using them: "This is a welcome move that will capitalise on young people's natural enthusiasm for technology. It's just as important for every young person to have a basic understanding of technology to thrive in the modern workplace.
“These softer skills are prized by employers and should be valued alongside academic excellence in our education system. As part of the Unlocking Britain’s Potential campaign, we are calling for a long-term work skills strategy to be embedded into the national curriculum.”
Bindi Bhullar, director, HCL Technologies said, with some sources reporting a 44% drop in the number of young people going on to take higher education courses in ICT over the last 10 years, and a 53% drop in computing A-levels taken since 2004, it’s vital that the skills gap is addressed or the IT industry in this country is one of many that could be severely jeopardised.
"Perhaps the government should look to follow the lead of economies like India, and find local government sponsorship for training and support from high-tech multinational corporations," said Bhullar. "There are so many savvy young minds who are facing the prospect of having to do low-skilled, poorly paid jobs, and if the government is truly serious about embracing innovation, it should invest in IT skills for the young as a means of creating jobs, and driving Britain out of economic uncertainty." | <urn:uuid:c109320f-2f9b-43a7-be4f-4991aae355a9> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/If-popularity-of-ICT-is-on-the-rise-why-are-we-scrapping-the-curriculum | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00724.warc.gz | en | 0.963504 | 1,567 | 2.65625 | 3 |
ATM and TDM are two types of data transfer technologies. TDM stands for Time-Division Multiplexing, a method of combining multiple data streams into one and sending them together over a single signal. ATM stands for Asynchronous Transfer Mode; it is based on fixed-size "cells" used to convey multimedia payloads such as voice and data.
Summary: ATM uses "cells", whereas TDM uses a "nailed-up" timeslot to convey information. Internally, HSL uses ATM "cells".
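To make the distinction concrete: an ATM cell is always 53 bytes, a 5-byte header plus a 48-byte payload, while a TDM E1 link carries 32 fixed 64 kbit/s timeslots. The sketch below, purely illustrative and using a toy header, segments a message into ATM-style cells:

```python
# Illustrative only: segment a byte string into fixed 53-byte ATM-style cells
# (5-byte header + 48-byte payload, zero-padding the final cell).
HEADER_LEN, PAYLOAD_LEN = 5, 48

def to_cells(data: bytes, vci: int) -> list[bytes]:
    cells = []
    for off in range(0, len(data), PAYLOAD_LEN):
        chunk = data[off:off + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")
        header = vci.to_bytes(2, "big") + b"\x00" * (HEADER_LEN - 2)  # toy header
        cells.append(header + chunk)
    return cells

msg = b"signalling payload that no longer fits a single 64 kbit/s timeslot"
cells = to_cells(msg, vci=42)
print(f"{len(msg)} bytes -> {len(cells)} cells of {HEADER_LEN + PAYLOAD_LEN} bytes each")
```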
Over 20 years ago, when TDM was overloaded with the signalling load, the industry moved to 2 Mbit/s HSL. This was an interim step before the full adoption of Sigtran (SS7 over IP).
How to Secure a Computer Network?
- June 3, 2016
A computer network is a necessity for every company, be it a small or a large one. Like any other system, a computer network has its own shortcomings. With the expansion of a company, its network also expands. Large networks are complex and a challenge to handle, and the larger a network grows, the more vulnerable it becomes. A computer network has many weak links, and in order to make a corporate network secure and robust, these large and small loopholes must be plugged, especially in the network of a large corporation. A careful plan needs to be in place while establishing and expanding a corporate network. Following are some of the important things to look at while planning and securing a computer network.
Whenever an important task is to be done, planning is the first and the most important step. Planning has to be done in two steps: network planning and then securing the network. At the planning level, policies have to be defined: policies for network and server infrastructure, management, user-level access controls, delegation of duties, hardware and software requirements, and so on.
When planning is completed, it has to be implemented as well. Without implementation, planning is useless. When it comes to network security, some strict steps must be taken. Following are some of the important steps that can be taken in order to secure the network:
- Perimeter security of network and server rooms. Physical access controls need to be applied so that unauthorized access to sensitive areas can be restricted.
- User policies must be strictly implemented. Any change in privileges must be approved by the relevant authority and properly documented.
- Disposing of old hardware is a very critical task that is often overlooked as "not so important". When disposing of old hardware, it must be ensured that no data in readable or recoverable form is left on it. Simply formatting a hard drive is not enough, because data from a formatted hard drive can be recovered, which can disclose important data like usernames and passwords.
- Vulnerable systems in a network can give an attacker a foothold, effectively inviting them into the corporate network. To fix vulnerabilities in software, security updates are normally pushed from time to time, so in order to keep operating systems and software secure, they must be regularly updated.
Often controls are very well planned and implemented but seldom audited. It is human nature to relax and become carefree when performing similar tasks over an extended period of time, no matter how critical the task is. Old policies must also be audited: it is possible that over time, some policies that were relevant, say, two years ago become totally irrelevant. Under such circumstances, routine audits are extremely necessary, and they help in multiple ways. First, they help evaluate the performance of the people who are managing critical tasks; second, they help review the already established controls to understand how useful they currently are. Auditing can fix issues created by negligence in repetitive tasks and helps accommodate changes in company structure that occur over a period of time.
Appropriate hardware is extremely necessary for securing a network. A corporate network has to be layered in order to protect it from outside attackers, and attackers can sometimes be inside the network as well. So a protected zone has to be created within the network, one that is not openly accessible. Following is a list of very important hardware that can keep a network secure:
- IPS / IDS Devices
- Security Cameras
Wireless networks are an integral part of corporate networks. It has come to a point where wireless networks are overtaking wired networks. Penetrating a wired network is difficult compared to a wireless network: to penetrate a wired network, an attacker has to gain physical access, while sometimes an attacker doesn't even have to enter the company's premises to gain access to the wireless network. Special care has to be taken while configuring the security of wireless networks. Following are some tips to keep corporate wireless networks secure:
- Use an SSID that is not associated with your company name, and suppress the signal as much as possible. Although this will not deter a serious attacker, it will keep the noise down.
- Use 802.1X authentication in the wireless network so only approved devices can connect.
- Use the strongest authentication available. Currently, WPA2 Enterprise is the strongest option for wireless networks.
- Every company has visiting guests, and they often need to connect to the internet. It is a good practice to create a separate network for guest users. Make sure that the guest network is isolated from the company network.
All devices come with a default username, password, IP address, etc. for initial configuration. Changing these default values must be made mandatory as a company security policy. Often these values are left at their defaults, which makes it extremely easy for attackers to penetrate networks. There is no point in purchasing, configuring, and implementing high-end devices if you are going to leave the access username and password at their defaults.
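As an illustration of how such a policy could be enforced, the sketch below audits a device inventory against well-known factory defaults. The inventory entries and default pairs are made-up examples; in practice you would attempt authenticated logins against each device rather than store plaintext passwords.

```python
# Sketch: flag inventory entries still using factory-default credentials.
# The inventory and the default credential pairs are made-up examples.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

inventory = [
    {"host": "switch-01", "user": "admin",  "password": "admin"},
    {"host": "router-02", "user": "netops", "password": "S7r0ng&Long!"},
    {"host": "cam-lobby", "user": "root",   "password": "root"},
]

offenders = [d["host"] for d in inventory
             if (d["user"], d["password"]) in KNOWN_DEFAULTS]

if offenders:
    print("Devices still on factory defaults:", ", ".join(offenders))
else:
    print("No default credentials found.")
```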
Network users often turn out to be the weakest link in network security. In a corporate environment, users can be of all types; those who understand the technology and its associated risks are usually few in number, even within a tech environment. Therefore, it is extremely important to educate users on security. This will help greatly in securing corporate assets as well as their personal data and information. Some will argue that they are already well versed in the threats that technology brings, so what is the point of telling the same story over and over again? Actually, security is an issue that needs constant reminders: while attackers are rapidly inventing new techniques and exploits, every computer user must also stay updated so that they can avoid these new hacking techniques.
Securing a network is not a task that can be done by applying a few controls and adding some fancy hardware. It is a continuous process, about doing small things right and continually evaluating and reevaluating. It is about educating every stakeholder; it is about developing a culture in the organization where security is given importance. Hiring a security team and a competent network team may not be enough in certain cases. Upper management has to take ownership and establish a system that influences every task being performed.
New Processor Is Great… At Cracking Passwords
We see it all the time. An idea is developed with a positive outcome and within a short amount of time, someone has found a way to use that great tool for evil! According to a TechRadar article, this recently happened with Nvidia’s RTX 3090. The RTX 3090 is Nvidia’s flagship graphics card which features a GA102 graphics processor with 10,496 cores and 24GB of GDDR6X memory. For those of you who have no idea what that means, it’s a really powerful processor 😉
In fact, that Graphics Processing Unit (GPU) was designed to offer a significant boost in graphical performance in PC games and creative workloads. That's all fun and games… until it was realized that the same GPU is also very good at cracking passwords, especially weak ones!
More reasons why you should use strong, complex passwords.
You’ve heard time and time again about using strong passwords. You may even be fighting an internal battle… strong passwords vs the ease of use. On one hand, you know complex passwords are a great way to keep hackers and bots out of your accounts, but on the other hand, strong passwords make it harder for you to access your own files and accounts.
All of us here at Devfuzion understand this reasoning, and we want to help you make your workday and technology more efficient. But we cannot stress enough how important strong passwords are. Having a complex password is one of the simplest ways to help protect your data against cyber threats. In fact, strong passwords are becoming more and more important as technology becomes more advanced.
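A quick worked example shows why length and character variety matter so much. The guess rate below is a loose, hypothetical figure for a high-end GPU attacking a fast hash; real speeds vary enormously with the hashing algorithm in use.

```python
# Rough keyspace/crack-time arithmetic. The guess rate is a loose, hypothetical
# figure for a high-end GPU against a fast hash; real numbers vary widely.
GUESSES_PER_SECOND = 100e9   # assumed: ~100 billion guesses per second

def worst_case_days(alphabet_size: int, length: int) -> float:
    keyspace = alphabet_size ** length          # total candidate passwords
    return keyspace / GUESSES_PER_SECOND / 86_400

print(f"8 lowercase letters        : {worst_case_days(26, 8):.5f} days")   # seconds
print(f"8 chars, full 95-symbol set: {worst_case_days(95, 8):.2f} days")   # under a day
print(f"12 chars, 95-symbol set    : {worst_case_days(95, 12):,.0f} days") # millennia
```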
What Exactly Is A Strong Password?
View our Strong Password Guide article for some tips and tricks on how to make your password more secure.
Protect Your Data
Using strong and complex passwords is just the tip of the iceberg. There are many ways to help prevent cyber threats from accessing your data. To learn more about cyber security or how to better protect your business, contact Devfuzion today!
Cybersecurity Lingo Everyone Should Know
Table of Contents
- By Steven
- Aug 25, 2022
The language of cybersecurity has changed in the years following the dot com boom. We've rapidly transitioned from America Online (AOL) dial-up internet service through modems connected to landline phones to lightning-fast wireless internet available across nearly the entirety of the globe.
Let's look at some of the top cybersecurity terms everyone should know, regardless of their professional title or occupation. We also delve into some of the most notable web attacks from the month of July.
Accidental Exposure in Plain English
If you hear about an accidental cybersecurity exposure, you may not be as flustered or confused if you understand what the term means. Accidental exposure is a form of data breach that results from human error or insufficient digital security protections.
Anonymous FTP misconfigurations, stolen or missing computers, default settings, exposed cloud databases, and simple mistakes all fall under the umbrella of accidental exposure. Even a smartphone used for work purposes that is lost or stolen, allowing access to job-related information, is viewed as an accidental exposure.
Data Breaches Explained
There will inevitably come a time when usernames, passwords, medical records, and financial records are mistakenly made available or disclosed. Data leaks and hacks have the potential to occur by mistake and intentionally. Breaches typically lead to the sale of data on shady online forums to fellow digital criminals.
The Difference Between Incidents and Other Cybersecurity Attacks
The word "incident" has different meanings according to context. An incident occurs when a business has a vulnerability but cannot be sure whether hackers stole any data. Incidents are eventually resolved, yet they can linger until a digital security team performs an analysis.
July’s Notable Online Attacks
According to Constella Intelligence, more than 1,400 breaches occurred this past July alone. Miltor.ru was hit by an attack that resulted in the theft of millions of usernames, passwords, email addresses, phone numbers, and other information. Wishbone.io suffered data exposure that has the potential to lead to the illegal access of nearly 50 million emails and passwords.
July was an especially bad month for Epik.com, a site that suffered a breach resulting in the exposure of nearly 25 million names, email addresses, passwords, login usernames, USPS addresses, and more. It appears as though this breach occurred in September of 2021 yet was not reported until the summer of 2022.
Cybersecurity Statistics of Note
Incidents, accidental exposures, and data breaches are occurring at record rates. The war in Ukraine and the increase in computer use during the pandemic are two contributors to the rise in breach and attack frequency. However, most people are naïve about the extent and frequency of online attacks, and for good reason.
The mainstream media doesn't highlight attacks on the web as often as physical attacks, partially because digital attacks are ubiquitous, occurring at all times, 'round the clock throughout the year. If you have not updated your digital safeguards, now is the time to do so. Whether you use a work computer, smartphone, or home computer, it is in your interest to protect your tech investment.
What is robotic process automation?
Robotic process automation (RPA) is an application of technology, governed by business logic and structured inputs, aimed at automating business processes. Using RPA tools, a company can configure software, or a “robot,” to capture and interpret applications for processing a transaction, manipulating data, triggering responses, and communicating with other digital systems. RPA scenarios range from generating an automatic response to an email to deploying thousands of bots, each programmed to automate jobs in an ERP system.
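As a toy illustration of the "automatic response to an email" end of that spectrum, the sketch below applies fixed business rules to a structured message, which is the essential pattern a bot automates. All names and fields here are hypothetical; real RPA platforms wrap this same pattern in recorders, schedulers, and application connectors.

```python
# Toy rules-based bot: route structured requests and draft a canned reply.
# The rules and message fields are hypothetical illustrations.
RULES = {
    "invoice": "Your invoice has been received and queued for processing.",
    "refund":  "Your refund request was forwarded to the billing team.",
}

def handle(message: dict) -> dict:
    topic = message.get("topic", "").lower()
    reply = RULES.get(topic, "Thanks for your message; an agent will follow up.")
    # Anything the rules don't cover is escalated to a human.
    return {"to": message["sender"], "body": reply, "escalate": topic not in RULES}

print(handle({"sender": "a@example.com", "topic": "Invoice"}))
print(handle({"sender": "b@example.com", "topic": "complaint"}))
```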
Many CIOs are turning to RPA to streamline enterprise operations and reduce costs. Businesses can automate mundane rules-based business processes, enabling business users to devote more time to serving customers or other higher-value work. Others see RPA as a stopgap en route to intelligent automation (IA) via machine learning (ML) and artificial intelligence (AI) tools, which can be trained to make judgments about future outputs.
What are the benefits of RPA?
RPA provides organizations with the ability to reduce staffing costs and human error. Intelligent automation specialist Kofax says the principle is simple: Let human employees work on what humans excel at while using robots to handle tasks that get in the way.
Bots are typically low-cost and easy to implement, requiring no custom software or deep systems integration. Such characteristics are crucial as organizations pursue growth without adding significant expenditures or friction among workers.
When properly configured, software robots can increase a team’s capacity for work by 35% to 50%, according to Kofax. For example, simple, repetitive tasks such as copying and pasting information between business systems can be accelerated by 30% to 50% when completed using robots. Automating such tasks can also improve accuracy by eliminating opportunities for human error, such as transposing numbers during data entry.
Enterprises can also supercharge their automation efforts by injecting RPA with cognitive technologies such as ML, speech recognition, and natural language processing, automating higher-order tasks that in the past required the perceptual and judgment capabilities of humans.
Such RPA implementations, in which upwards of 15 to 20 steps may be automated, are part of a value chain known as intelligent automation (IA).
For a deeper look at the benefits of RPA, see “Why bots are poised to disrupt the enterprise” and “Robotic process automation is a killer app for cognitive computing.”
The RPA market consists of a mix of new, purpose-built tools and older tools that have added new features to support automation. Some were originally business process management (BPM) tools. Some vendors position their tools as “workflow automation” or “work process management.” Overall, the RPA software market is expected to grow from $2.4 billion in 2021 to $6.5 billion by 2025, according to Forrester research.
Some of the top RPA tools vendors include:
- Automation Anywhere
- Blue Prism
- Cyclone Robotics
- EdgeVerve Systems
- Samsung SDS
For a closer look at these vendors’ RPA offerings, see “Top 21 RPA tools today.”
There are 10 key factors to consider when choosing RPA tools:
- Ease of bot setup
- Low-code capabilities
- Attended vs. unattended
- Machine learning capabilities
- Exception handling and human review
- Integration with enterprise applications
- Orchestration and administration
- Cloud bots
- Process and task discovery and mining
For a more in-depth look at these selection criteria, see “How to choose RPA software: 10 key factors to consider.”
What are the top RPA certifications?
As organizations increasingly adopt RPA, they also need individuals with expertise in RPA tools and implementations. Many of the most popular RPA certifications are offered by vendors, including:
- Automation Anywhere
- Blue Prism
10 tips for effective robotic process automation
Implementing RPA can be challenging, given both the potential complexity of legacy business processes and the level of change management that can be required for RPA to succeed. The following tips can help your organization on its way:
1. Set and manage expectations
Quick wins are possible with RPA, but propelling RPA to run at scale is a different animal. Many RPA hiccups stem from poor expectations management. Bold claims about RPA from vendors and implementation consultants haven’t helped. That’s why it’s crucial for CIOs to go in with a cautiously optimistic mindset.
2. Consider business impact
RPA is often touted as a mechanism to bolster return on investment or reduce costs. But it can also be used to improve customer experience. For example, enterprises such as airlines employ thousands of customer service agents, yet customers are still waiting in the queue to have their call fielded. A chatbot could help alleviate some of that wait.
3. Involve IT early and often
COOs were some of the earliest adopters of RPA. In many cases, they bought RPA and hit a wall during implementation, prompting them to ask for IT's help (and forgiveness). Now "citizen developers" without technical expertise are using cloud software to implement RPA right in their business units. Often, the CIO tends to step in and block them. Business leaders must involve IT from the outset to ensure they get the resources they require.
4. Poor design, change management can wreak havoc
Many implementations fail because design and change are poorly managed, says Sanjay Srivastava, chief digital officer of Genpact. In the rush to get something deployed, some companies overlook communication exchanges between the various bots, which can break a business process. "Before you implement, you must think about the operating model design," Srivastava says. "You need to map out how you expect the various bots to work together." Alternatively, some CIOs will neglect to negotiate the changes new operations will have on an organization's business processes. CIOs must plan for this well in advance to avoid business disruption.
5. Don’t fall down the data rabbit hole
A bank deploying thousands of bots to automate manual data entry or to monitor software operations generates a ton of data. This can lure CIOs and their business peers into an unfortunate scenario where they are looking to leverage the data. Srivastava says it’s not uncommon for companies to run ML on the data their bots generate, then throw a chatbot on the front to enable users to more easily query the data. Suddenly, the RPA project has become an ML project that hasn’t been properly scoped as an ML project. “The puck keeps moving,” and CIOs struggle to catch up to it, Srivastava says. He recommends CIOs consider RPA as a long-term arc, rather than as piecemeal projects that evolve into something unwieldy.
6. Project governance is paramount
Another problem that pops up in RPA is the failure to plan for certain roadblocks, Srivastava says. An employee at a Genpact client changed the company’s password policy but no one programmed the bots to adjust, resulting in lost data. CIOs must constantly check for chokepoints where their RPA solution can bog down, or at least, install a monitoring and alert system to watch for hiccups impacting performance. “You can’t just set them free and let them run around; you need command and control,” Srivastava says.
7. Control maintains compliance
There are a lot of governance challenges related to instantiating a single bot, let alone thousands. One Deloitte client spent several meetings trying to determine whether its bot was male or female: a valid gender question, but one that must take into account human resources, ethics, and other areas of compliance for the business.
8. Build an RPA center of excellence
The most successful RPA implementations include a center of excellence staffed by people who are responsible for making efficiency programs a success within the organization. Not every enterprise, however, has the budget for this. The RPA center of excellence develops business cases, calculates potential cost optimization and ROI, and measures progress against those goals.
9. Don’t forget the impact on people
Wooed by shiny new solutions, some organizations are so focused on implementation that they neglect to loop in HR, which can create some nightmare scenarios for employees who find their daily processes and workflows disrupted.
10. Put RPA into your whole development lifecycle
CIOs must automate the entire development lifecycle or they may kill their bots during a big launch.
Ultimately, there is no magic bullet for implementing RPA, but Srivastava says that it requires an intelligent automation ethos that must be part of the long-term journey for enterprises. “Automation needs to get to an answer — all of the ifs, thens, and whats — to complete business processes faster, with better quality and at scale,” Srivastava says.
Phishing is still on everyone’s lips, and the threat landscape has changed for a number of reasons. Phishing is a type of digital attack that uses fraudulent emails or websites to trick users into revealing personal information such as passwords or credit card numbers.
Phishing attacks are often difficult to detect because they can mimic legitimate emails or websites. Now phishing attacks are becoming even more sophisticated, using artificial intelligence (AI) to assist. With the help of AI, attackers can create realistic and personalized phishing emails that are very difficult for even the most thorough users to detect. Attackers are constantly evolving their methods to stay one step ahead of users and businesses. As phishing attacks become more sophisticated, it is important to be vigilant and educate yourself and others on how to protect yourself from these threats.
Phishing 4.0 – A History
Phishing has been a growing problem since the beginning of the Internet, and we have written about this type of attack a few times on the blog as well. Whether we have conducted our own research on phishing or drawn attention to particularly perfidious scams: the topic belongs to security awareness like spinach belongs to Popeye. Even before the Internet there were scams; at that time, of course, not by email but by letter. Even so, the past shows that criminals were quite creative. However, due to ever faster networking and digitization, the business of criminal emails has become more and more scalable and has thus become a real plague. Spam filters and other security solutions keep trying to keep malicious emails out of inboxes, but their success is rather moderate, partly because criminals are constantly finding new methods and attack vectors to bypass these filters.
Artificial Intelligence – A Driver of Digitalization
Artificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents: systems that can reason logically, learn, and act independently. Research currently focuses on how computers or programs capable of intelligent behavior can be created.
In practice, AI applications can be divided into different categories:
- Machine learning: This is a method of teaching computers to learn from data without being explicitly programmed.
- Natural language processing: This is about teaching computers to understand human language and respond in a way that is natural to humans.
- Robotics: This involves the use of robots to perform tasks that would otherwise be difficult or impossible for humans to accomplish.
- Predictive analytics: This is a method that uses artificial intelligence to make predictions about future events, trends, and behaviors.
The history of artificial intelligence is long and complex, dating back to the dawn of computer technology. The field was formally established at a conference in 1956 and has undergone many changes and developments since then. A current trend in artificial intelligence is GPT-3 (Generative Pre-trained Transformer 3), a machine learning platform designed to write text. The platform is capable of producing human-like text and can even copy the style of a particular author. GPT-3 is also able to understand the context of a text and produce output appropriate to that context. Of course, artificial intelligences are themselves also vulnerable to attacks, but we'll describe that in another blog post.
Phishing 4.0 – Artificial intelligence driving scale?
OpenAI is a company dedicated to research around artificial intelligence. OpenAI has released an API that allows developers to harness some artificial intelligence features: a platform providing tools and services that developers can use to train and deploy AI models. This API could now be exploited by attackers to formulate and send phishing emails in an even more personalized and, above all, completely automated way. The Playground shows how easy this is.
In the Playground, all I have to do is enter whom I want to write a phishing mail to, and the program automatically writes me a plausible phishing message. This can, of course, be fully automated via the API. All an attacker has to do now is insert a corresponding link, and the phishing message is ready to be sent. Especially in combination with big data thefts, which allow automated processing of email addresses, names, and other personal details, we need to be prepared for even more convincing and scalable phishing waves, particularly when the message is peppered with supposedly personal details, as seen in the video below:
How do I protect myself from Phishing 4.0?
There are a few things users can do to protect themselves from Phishing 4.0. They are not that different from the general tips on how to recognize a phishing message.
- Pay even more attention to the domain and the sender name from which the email originates; these must be convincingly forged or chosen for the message to appear truly authentic (a simple automated check is sketched after this list).
- Get informed: By reading this blog, you have taken another step: you should be aware that artificial intelligence is increasingly being used to create phishing emails, and even supposedly personalized emails can have a malicious purpose.
- Be aware that similar attacks exist for other types of communication, for example, phone calls, SMS messages and also chats.
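As a minimal, technology-assisted version of the first tip, the sketch below extracts the sender's domain and compares it to the domains you actually expect, which also catches common lookalike tricks. The expected-domain set is a made-up example; extend it to your own contacts.

```python
# Defensive sketch: extract the sender domain and compare it to expected domains.
# The expected-domain set is a made-up example.
from email.utils import parseaddr

EXPECTED = {"example.com", "example.org"}

def sender_domain(from_header: str) -> str:
    _, addr = parseaddr(from_header)   # strips display names like "Your Bank <...>"
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def looks_suspicious(from_header: str) -> bool:
    return sender_domain(from_header) not in EXPECTED

print(looks_suspicious('"Support" <help@example.com>'))  # False: known domain
print(looks_suspicious('"Support" <help@examp1e.com>'))  # True: lookalike domain
```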
As AWARE7, we are currently working on simple and individually applicable countermeasures, including ones where you don't have to rely on technology.
What a Hard Drive Printed Circuit Board Is (and What It Does)
A hard drive’s Printed Circuit Board, or PCB, allows electricity to pass between various components that allow the hard drive to function. If you have ever handled a hard drive, you may recognize the PCB as the green or blue board on the bottom of the device.
There are various chips on the PCB, including one that contains firmware with a number of drive-specific attributes such as voice coil voltage, spindle speed, head-to-head relation (HHR) and various other information. This information is critical for the drive’s operation.
In extremely simplified terms, the board “tells” the hard drive how to operate. It processes signals from the computer and allows the drive to output information to the central processing unit (CPU). However, it is not the primary storage space for user data, and it does not have any mechanical components.
Common Issues That Affect a Hard Drive’s Printed Circuit Board
A hard drive PCB is fairly resilient, but it can become damaged during normal operation. Some common causes of hard drive circuit board failure are listed below, along with tips for preventing premature damage.
- Extreme Heat – The inside of a computer can get extremely hot, especially if the user doesn’t take precautions to provide adequate ventilation. Heat can eventually cause electronic malfunctions.
- Damage from Improper Handling – Electrostatic discharge can permanently damage electronic components. Always ground yourself before handling your hard drive.
- Damage from Faulty Electrical Supply – A computer power supply may send inconsistent levels of electricity through the motherboard and to components, resulting in damage. High-quality power supplies are well worth the investment.
- Damage from Power Surges – We recommend keeping every computer on an uninterruptible power supply (UPS) to keep power surges from damaging hard drive circuit boards. Power surges can also affect other computer components, so a UPS is an essential means of protection.
- Manufacturing Issues – While rare, some circuit boards have manufacturing issues that cause components to become unseated. This can occur even when the hard drive is operating in a reasonably well-controlled environment.
Failure Symptoms That Accompany PCB Issues
A printed circuit board failure will often cause a hard drive to stop functioning entirely. However, this is not always the case. The drive may appear to start normally; you may hear the platters spinning up to speed, and the drive might not make any unusual sounds.
If your hard drive is a boot drive, it may not be able to load your operating system. If it is not a boot drive, you may not be able to access any files or folders. You may hear clicking or whirring sounds.
The symptoms of PCB failure are not consistent, and there’s a tremendous amount of overlap with other issues such as read/write head failures. Because of this, you should never operate a hard drive that shows any signs of damage, especially if you don’t have a backup of your data. Instead, call us at 1.800.237.4200 to discuss hard drive data recovery options.
Can I Switch Out My Hard Drive’s Printed Circuit Board?
There is an extremely common misconception that hard drive PCBs are interchangeable among certain models (for instance, that Hitachi Travelstar drives have identical circuit boards). While this was true for early hard drives, all modern disks use drive-specific microcode. This allows for excellent dependability, fast read/write speeds, and generally better performance for high-capacity hard drives.
However, it prevents end users from switching out failed circuit boards. Even if you attempted to transplant your hard drive’s original firmware chip onto a new, identical board, you could cause problems that would permanently destroy your files.
Datarecovery.com’s engineers can copy, rewrite, or repair microcode using advanced equipment. We take extensive precautions to protect your disk at all times, and all of our laboratories have microcode repair tools – we don’t automatically outsource electronically damaged drives to a central location, and our process gives you the highest possible chances of a fast, successful recovery. We recover over 97 percent of hard drives with PCB damage.
To learn more or to start a case, call 1.800.237.4200 today. | <urn:uuid:32f9fa85-75fb-4914-846b-1d5f4cb0eaf3> | CC-MAIN-2022-40 | https://datarecovery.com/rd/pcb-swapping/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00124.warc.gz | en | 0.925397 | 900 | 3.8125 | 4 |
Computer Telephony Integration
Computer Telephony Integration or CTI is the ability for computers to interact with telephones. CTI combines computer systems with telephone systems to increase the capabilities of each. A great example of CTI today is our smartphones. It’s literally a combination of a telephone and a computer. In business, CTI takes on a slightly different use case, which is the focus of this blog.
Computer telephony integration started with relatively small ambitions but over the past 30 years it has evolved into its own multi-billion dollar industry. CTI is used by over half the world population every day to drive better interactions with businesses and each other.
The first CTI was based on the CSTA (Computer-Supported Telecommunications Applications) protocol, standardized by ECMA in 1992, which became the most widely adopted protocol for computer telephony integration. AT&T and Novell launched TSAPI shortly afterward, Microsoft jumped into the CTI arena with the introduction of TAPI in 1993 for combining telephones with Windows applications, and an industry was born.
As CTI has evolved, the business use cases have as well. Most businesses look to CTI to improve the agent and customer experience, reduce the time it takes to help customers, and to maximize the number of calls agents can handle over time.
Typically, CTI covers four major functional areas for businesses (a minimal screen-pop sketch follows this list):
- Screen Pop – This industry term refers to a process whereby a caller's information is "popped" to an agent even before the call is answered, so the agent knows more about the customer before saying "Hello".
- Click-to-Dial / Automated Dialing – This time-saving feature lets agents select a record in their application and have the call placed automatically, and it enables even faster outbound dialing through capabilities like power dialing and predictive dialing.
- Phone Controls – Rather than having to use a telephone set to answer the phone, place a call on hold, or initiate a conference call, phone controls can be placed directly in an application through a mechanism commonly referred to as third-party call control.
- Intelligent Routing – The ability to request additional data and assistance from computer systems so calls can be routed to the agent best able to help the customer, or to an agent who is idle and ready for another call.
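To make these ideas concrete, here is a minimal sketch of a screen pop and click-to-dial flow. The CtiClient class, its event names, and the lookupCustomer helper are all invented for illustration; real platforms (TSAPI, TAPI, or a modern CCaaS SDK) expose equivalent primitives under different names.

```typescript
// Hypothetical CTI types -- invented for illustration only.
type InboundCall = { callId: string; ani: string }; // ANI: the caller's number

interface CustomerRecord {
  name: string;
  accountId: string;
}

class CtiClient {
  private handlers: Array<(call: InboundCall) => void> = [];

  // Subscribe to inbound-call events (the trigger for a screen pop).
  onInboundCall(handler: (call: InboundCall) => void): void {
    this.handlers.push(handler);
  }

  // Third-party call control: ask the switch to dial on the agent's behalf.
  makeCall(number: string): void {
    console.log(`[CTI] Dialing ${number} for the agent...`);
  }

  // Test helper that simulates the switch delivering an inbound call.
  simulateInbound(call: InboundCall): void {
    this.handlers.forEach((handle) => handle(call));
  }
}

// Stand-in for a CRM lookup keyed by the caller's number.
function lookupCustomer(ani: string): CustomerRecord {
  return { name: "Pat Example", accountId: `ACCT-${ani.slice(-4)}` };
}

const cti = new CtiClient();

// Screen pop: when a call arrives, fetch and display the caller's record
// before the agent has even answered.
cti.onInboundCall((call) => {
  const customer = lookupCustomer(call.ani);
  console.log(`[POP] ${customer.name} (${customer.accountId}) is calling on ${call.callId}`);
});

// Click-to-dial: the agent selects a record and the platform places the call.
cti.makeCall("+1-555-0100");

// Simulate the switch delivering an inbound call.
cti.simulateInbound({ callId: "C-881", ani: "+1-555-0134" });
```

Whatever the vendor, the underlying pattern is the same: the telephony platform raises events and accepts commands, and the business application reacts to the events and issues the commands.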
CTI today has matured past its initial intentions. With the introduction of digital telecommunication technologies, computing systems and telephone systems have essentially been fused together. CTI has expanded beyond just telephony integration and now includes other channels like chat, SMS, video, and social networks. There are dedicated CPaaS (Communication Platform as a Service) and CCaaS (Contact Center as a Service) companies which focus entirely on CTI. CTI has also deepened within business applications, as CRM companies like Salesforce, SAP, and Microsoft look for ways to combine their applications with CPaaS and CCaaS platforms.
My Own CTI Journey
I started AMC Technology in 1995 to build CTI solutions. The industry was just getting started, and the initial CTI projects were very complex, costly, and lengthy. I built AMC to take these complicated CTI protocols and turn them into solutions that customers could practically use.
Today, just as CTI has evolved, we have as well. Our mission of improving interactions remains the same, but we now offer DaVinci, an XiPaaS (Experience Integration Platform as a Service), so businesses can focus on driving value to their customers through their own unique experiences rather than focusing on the mechanics of CTI.
26 years ago, CTI captivated me with its unlimited potential to bring people together through better interactions and it captivates me even more today. | <urn:uuid:dd94d203-f6f4-428c-898a-d413c188b0c5> | CC-MAIN-2022-40 | https://www.amctechnology.com/blog-what-is-cti/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00124.warc.gz | en | 0.948966 | 779 | 2.59375 | 3 |
When you’re buying a cellular booster, you have a lot to consider. For example, you have to know how many devices you’ll connect to the booster to ensure you buy the correct size. The more you learn about cellular signal, the easier it becomes to find the booster you need. Understanding uplink versus downlink in cell phone signal boosters will help you buy a booster with maximum strength.
Defining Cellular Boosters
Before we dive into uplink and downlink, you may be wondering what a cell phone booster is or, more importantly, why it’s so beneficial. A weak cellular signal hurts call quality, texting reliability, and data speeds. Cellular signal boosters amplify a cellular signal to enhance communication between your phone and the nearest network tower.
A booster comes with three main components (an outdoor antenna, an amplifier, and an indoor antenna) that work together to improve your signal. The outdoor antenna pulls in the “donor” signal from the network tower, the amplifier strengthens it, and the indoor antenna rebroadcasts the boosted signal inside.
What Are Uplink and Downlink?
Knowing about uplink versus downlink in cell phone signal boosters is important, but these terms may sound a bit technical to some people. In practice, they simply name the two directions of communication that determine your upload and download speeds: uplink is the line of communication from your phone to the tower, and downlink is the line from the tower to your phone. Outside factors such as distance, terrain, and building materials can all disrupt these lines of communication, weakening your signal. This is where a booster comes into play.
It’s important to note that signal strength on the uplink and downlink is measured in dBm (decibel-milliwatts), so it’s worth learning what dBm means and how it relates to signal. By understanding this, you’ll have an easier time picking out the right cellular booster.
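Because dBm is a logarithmic scale, a small change in the reading means a large change in power. The conversion formula below is standard; the example readings are made up for illustration.

```typescript
// Convert a dBm reading to milliwatts: power_mW = 10 ^ (dBm / 10).
function dBmToMilliwatts(dBm: number): number {
  return Math.pow(10, dBm / 10);
}

// Illustrative readings: roughly -70 dBm is a strong signal and
// roughly -100 dBm is a weak one.
const strong = dBmToMilliwatts(-70);  // 1e-7 mW
const weak = dBmToMilliwatts(-100);   // 1e-10 mW

// Every 10 dB is a 10x change in power, so a 30 dB gap means the
// strong signal carries 1,000 times the power of the weak one.
console.log(`Power ratio: ${Math.round(strong / weak)}x`); // 1000x
```

This is why a reading that “only” drops from -70 dBm to -100 dBm actually represents a thousandfold loss of power.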
How Does a Booster Help?
Since cellular signal boosters improve communication between your phone and the nearest network tower, they also improve uplink and downlink. However, keep in mind that the gain you receive depends on the type of booster you buy. The right booster strengthens both links, which in turn improves your upload and download speeds. Finding the ideal booster for your home, commercial space, or vehicle may take time as you carefully weigh each factor, but in doing so, you’ll find the best match at the best price.
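As a rough sketch of how gain plays out, a booster’s gain in dB adds to the incoming signal in dBm, up to the limit of the amplifier’s maximum output. All of the numbers below are illustrative, not the specs of any particular booster.

```typescript
// Estimate the rebroadcast signal: incoming strength (dBm) plus booster
// gain (dB), capped at the amplifier's maximum output. Values illustrative.
function boostedSignal(incomingDbm: number, gainDb: number, maxOutputDbm: number): number {
  return Math.min(incomingDbm + gainDb, maxOutputDbm);
}

console.log(boostedSignal(-95, 65, -20)); // -30: gain-limited (-95 + 65)
console.log(boostedSignal(-60, 65, -20)); // -20: capped at the max output
```

The second call shows why a stronger donor signal doesn’t always translate into a proportionally stronger indoor signal: once the amplifier hits its output ceiling, extra gain goes unused.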
You can buy cell phone signal boosters in Canada at SureCall Boosters. Improve your cellular service at a fair price to make meeting all your data needs easy. Whether you’re working from home or traveling the country, we have a booster kit to keep you connected! | <urn:uuid:6d1d51ca-7265-4645-96de-6af17020aacd> | CC-MAIN-2022-40 | https://www.surecallboosters.ca/post/uplink-vs-downlink-in-cell-phone-signal-boosters | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00124.warc.gz | en | 0.932209 | 575 | 2.640625 | 3 |