Do you know the term "cosa nostra"? Roughly translated, it means "our thing." Mobsters allegedly coined it back in the early 1950s to express solidarity with one another, implying that whatever the crime families did to one another was of no concern to society at large. Luckily, that sentiment isn't widespread among systems professionals in the federal government. Most feds express a keen sense of the public nature of their work.
Because GCN goes exclusively to government readers and a few companies that do business
with the government, we don't often hear from those outside cosa nostra, that is,
the citizens who interact directly with government systems.
But a recent story, posted on our Web site at http://www.gcn.com,
drew a response that shows just how much folks out there care about what you do.
That's the benefit, and the curse, of the Internet. Whatever you post on
it is there for the whole wide world to see. And it turns out that the material we send
you in print form is eagerly read by a much wider audience on the Web.
Many people deal with computers, and most people interact with government from time to
time. So it stands to reason that governmental computing is of wide interest.
The story in question was about the Navy's Smart Ship program. It detailed how
software failures disabled the fly-by-wire USS Yorktown [GCN, July 13, Page 1]. Since that story appeared, e-mail letters have been pouring in,
proving that the story is rocketing around software programming circles via the Net.
Reporters from other publications have called the paper to ask us about it.
But the biggest response has come from technicians. Programmers and software engineers,
like artists, are among the most opinionated people you'll find. Some vow with moral
certainty that the Yorktown's network operating system was to blame; others
indignantly put the blame on poor programming.
Blame-fixing and speculation are not a newspaper's mission.
But the incident is a good reminder of the importance of government work and a
barometer of how carefully the public monitors that work.
The great thing about public service is that it is, well, public service.
Unfortunately, it also means you must work in a fishbowl with lots of critics
watching what you do.
Thomas R. Temin
Neolithic-Era Megaliths in Normandy
Pierre Tourneresse is a megalithic structure dating to about 3,900 B.C., in the Neolithic era. It is a passage grave, where the burial structure was buried under a mound of earth and was accessed through a tunnel. It is located on the north edge of the small town of Cairon, in the département of Calvados in Normandy, in northern France, near the city of Caen. You can easily visit it on your own.
The people living here toward the end of the prehistoric period would have been settling down as agriculture was replacing the hunter-gatherer mobile lifestyle. This allowed them to form communities and construct permanent structures like this burial complex.
Until 1992, all that showed above the ground was a broad capstone surrounded by stone slabs. This suggested the presence of some sort of cairn. The town of Cairon acquired about a hectare of land centered on the site. Scientific excavations were carried out in 1996-1999, followed by partial reconstruction.
These pictures show how one to two meters of soil had to be removed in order to expose the original structure. The site seems to be surrounded by a low berm, which is really just the slope up to the contemporary ground level.
My parents, standing in the central chamber, and one of their friends standing near them provide a sense of scale.
In the distance is our rental car, and beyond that are fertile wheat fields.
The structure is built from limestone, all of it probably gathered from nearby. This dates from the Neolithic, the New Stone Age, so metal tools had not yet been developed (as would happen later in the Bronze Age, the Iron Age, and so on). Stone working and transport would have been difficult then.
The plan on the marker shows how they constructed the tomb. The overall structure is nearly circular, about 24 meters in diameter. The low outer boundary wall is of drystone construction. That is, stones tightly fitted together but not mortared.
The central chamber, about five by three meters in size, is close to the center of the structure. It had a small alcove on its north side.
The central chamber was accessed by a corridor about eight to nine meters long and less than one meter wide and tall.
The smaller chamber, about three by two meters, is more directly accessible from the exterior. Its entrance is on the opposite side from the long passage.
Another diagram on the sign shows the process of construction, across the bottom, and a cutaway of the finished structure, above.
The central chamber was built as a dolmen with rows of large slabs and some drystone forming the walls and truly megalithic cover stones as a roof. The access tunnel was made with drystone walls and large slabs as a roof. The smaller chamber was built entirely of smaller stones in an igloo-like shape forming a corbelled vault.
This assemblage was then buried in a mound mostly of earth with some stones, and the outer surface covered with stones.
The smaller chamber may have been built after the main structure was completed, by digging into the mound.
The access passage is aligned to 100°, just a little south of due east. Maybe the orientation was significant to the builders, with the sunrise shining down the passageway on an auspicious day of the year. No one knows; this was a truly prehistoric era, so we have no recorded information about their beliefs.
The Neolithic was the end of prehistoric time, when agriculture was being established and replacing mobile hunting and gathering. Human groups could settle and form small communities, domesticating animals and raising crops. The formation of communities allowed the construction of significant permanent structures like this one.
The remains of at least twelve individuals, a few of them children, were found in the central burial chamber. It was carbon-14 dating on these remains that allowed the dating of the structure to 3,900–3,700 B.C.
The entry passage walls were all drystone construction with small stones.
The main chamber walls were partly built from large stones, with small drystone walls filling the gaps.
Large slabs spanned the roof of the chamber.
The smaller chamber held the body of one child wearing a necklace with a pierced dog tooth as a pendant.
The smaller chamber is oriented with its central axis and opening aligned toward about 270°, roughly opposite the direction of the main passageway.
The human remains and everything else found at the site are now at the national archaeological museum at Saint-Germain-en-Laye on the western outskirts of Paris.
The broken pieces of the cover stones were not replaced on the central dolmen.
Other Megaliths in Calvados
The Calvados département has a number of megaliths, many of them individual stones called menhirs especially in the case of standing stones. Many of them share a common legend about a large stone that can turn itself over. The name of this structure literally means "The Turning Stone" as tourner is French for turn. All that was visible and known about in historic time was the top of one or more of the capstones, and this was said to be one of the "turning stones".
Other nearby similarly named sites are la Pierre Tourneresse à Gouvix, la Pierre Tournante à Fresney-le-Puceux, la Pierre Tournante à Livarot, and also a stone no longer to be found called la Pierre Tourniresse, once located between Thaon and Colomby-sur-Thaon.
The Germans had used this site as a gun emplacement and shelter during World War II, using the exposed capstones as part of that. They destroyed their site as they withdrew in June 1944, damaging the capstones in the process. Until then, the capstones were complete.
A Würzburg radar site is nearby, and a visit there is easily combined with Pierre Tourneresse.
Nazis and self-rotating megaliths, this sounds like a case for the Bureau for Paranormal Research and Defense.
Visiting Pierre Tourneresse
Access is quite easy if you have planned ahead and have a decent map. The 1:200,000 Michelin map can get you to Cairon easily. The excellent 1:25,000 map from the Institut National de l'Information Géographique et Forestière gives detail down to individual buildings. The site is at:
Take the D22 road northwest from the Caen ring road to Cairon. This may appear as Cairon-le-Vieux or "Cairon-the-Old" on some maps. Near the center of this town of about 1,600 people there is a small road leading north along the Vey, a small river flowing toward the Mue.
Just before the highway passes a building with the local boulangerie et patisserie and crosses a small stream, Rue de la Cachette crosses the highway. A sign there points to the town's mairie, école, bibliothèque, and so on, all of that straight ahead on the D22, and a smaller sign points to the right to Pierre Tourneresse. Turn right!
After the street intersection about 100 meters north of the D22, the lane curves to the left and then to the right as it follows the Vey. Along the way it turns into a farm lane, really just two tracks, continuing to the north. The lane is labeled as Chemin Nôtre Dame du Marais (or The Lane of Our Lady of the Swamp) on Google Maps.
Pass the sign saying that no one except residents, bicycles, and agricultural equipment should continue back the lane. Rented cars surely fit into one of those categories, so keep going. About 500-600 meters north of the D22, turn sharply to your right and back and then continue about 400 meters back to the south. The site will be on your right. Here is a view to the north from that lane. The lane passes between a wheat field to its east and the site on its west. From here you can continue on out to the D22.
Alternatively, depending on your direction of travel, you might spot a sign for La Pierre Tourneresse on the D22 and see how to get there directly.
The town of Cairon is mentioned in records from the year 1077, when it was spelled as Karon. It was mentioned as Cayron in 1231. The name is thought to come from the Roman name Carius suffixed with -onis. Since the Celtic word karn meant stone, it is possible that the town got its name indirectly from the exposed capstone.
The town's nearby fortified Church of Saint-Hilaire was built on a low hill in the 13th century. It was a strategic point in the Hundred Years War, a series of conflicts between England and France from 1337 to 1453.
Cairon was liberated from the German occupiers on 11 June 1944 by the 46th Royal Marine Commando.
Green Technology Finland
Last week we touched upon how a project in Finland had blended two of the world’s most important industries, cloud computing and green technology, to produce a data centre that used nearby sea water to both cool their servers and heat local homes.
Despite such positive environmental projects, there is little doubt that large cloud data centres and social networking sites consume vast amounts of electrical power. A recent Greenpeace report claims the Apple data centre in North Carolina uses more power than 250,000 European homes combined. Estimates now predict that cloud computing is responsible for as much as 2 percent of the world’s electricity use.
Clearly, therefore, the data world uses extraordinary amounts of energy – but is it really all bad news?
Migrating to the cloud and sharing resources saves considerable energy costs by removing the need to power countless duplicate data centres around the world.
A report released in 2011 by the Carbon Disclosure Project (CDP) found that North American companies who used cloud computing services achieved both a combined annual energy saving of $12.3 billion and a reduction in carbon emissions equivalent to 200 million barrels of oil. Indeed, not only did they achieve such large energy savings, but by moving to the cloud they also improved operational efficiency by dramatically decreasing capital expenditure on IT resources.
Supporting the CDP's research, a joint report by Microsoft, Accenture and WSP Environment & Energy found that a 100-person company that utilises cloud computing can reduce energy consumption and carbon emissions by more than 90 percent, a figure which scales to 30 percent for a 10,000-person workforce.
Considered to be the most widely adopted green IT project that companies have either implemented or are planning to implement, virtualisation allows a single server to run multiple operating systems concurrently. The consequence is a significant reduction in the size of the physical footprint of a data centre and a substantial improvement in both energy efficiency (with less equipment drawing power) and resource efficiency (with less equipment needed to run the same workload).
When virtualisation is combined with cloud-based automation software, businesses are able to push the limits of their typical consolidation and utilisation ratios. The software allows for rapid provisioning, movement, and scalability of workloads, hence reducing the infrastructure needed and in turn maximising energy and resource efficiencies.
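The consolidation arithmetic behind those efficiency gains can be sketched in a few lines; the utilisation figures below are illustrative assumptions of our own, not numbers from the reports cited above.

```python
import math

# Back-of-envelope sketch: consolidating lightly loaded physical servers
# onto fewer, better-utilised virtualisation hosts reduces the amount of
# equipment drawing power. The percentages here are made-up examples.
servers = 20                # physical boxes today
avg_utilisation = 0.10      # each is only ~10% busy
host_target = 0.70          # aim to run each virtual host at ~70%

hosts_needed = math.ceil(servers * avg_utilisation / host_target)
print(f"{servers} servers -> {hosts_needed} virtualisation hosts")
# -> 20 servers -> 3 virtualisation hosts
```

Even before any cloud automation is layered on top, the same workload now runs on a fraction of the hardware, which is where most of the energy saving comes from.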
Sadly, IT managers and companies don’t necessarily put green tech at the top of their IT strategies. It is estimated only 5 percent of data centres are currently green, primarily as a result of decision-making being predominantly driven by cost-savings and competitiveness. Nonetheless, cloud computing is already greener than many people would believe, but there are some steps the industry can take to become even more green.
Renewable energy is certain to play a major part. Apple has recently boasted of a target of 100 percent renewable energy company-wide, including its data centres (its aforementioned North Carolina centre is powered by a 100-acre, 20-megawatt solar array), and Google currently uses renewable energy to power 34 percent of their systems.
Latest research now suggests the cloud will continue to become increasingly environment-friendly, as the global market for green data centres is expected to grow from $17.1 billion in 2012 to $45.4 billion by 2016.
How do you think data centres of the future will be powered? Let us know in the comments below.
By Daniel Price
Daniel is a Manchester-born UK native who has abandoned cold and wet Northern Europe and currently lives on the Caribbean coast of Mexico. A former Financial Consultant, he now balances his time between writing articles for several industry-leading tech (CloudTweaks.com & MakeUseOf.com), sports, and travel sites and looking after his three dogs.
Has Google just announced quantum supremacy?
A paper from Google and NASA claims that the organisations have achieved quantum supremacy, prompting mixed reactions from the scientific and technological communities, before being taken down.
“To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy.”
This is one of the reported conclusions drawn in ‘Quantum Supremacy Using a Programmable Superconducting Processor’, a paper authored by Google, in collaboration with researchers at NASA, which appeared online briefly (and, we can assume, mistakenly) last week. Since then, the publication has been taken down, hopefully until the final version is ready.
From when the Financial Times broke the story, it didn’t take long for the mainstream media and both the scientific and technological communities to catch up, with articles appearing on Physics World, WIRED, VICE and The Economist to name but a few.
Response to this news has been mixed. Within hours, US Democratic candidate Andrew Yang was warning that Google’s quantum computers could break encryption, while others were questioning what the authors actually meant by the term ‘quantum supremacy’. Meanwhile, neither Google nor NASA were willing to comment.
Whether it meant to or not, this paper places ever-more focus on the world of quantum computing; its benefits and threats. In turn, this means that quantum-safe security is becoming more important than ever.
About the experiment
As described in the leaked paper, the first computational task the research team carried out to demonstrate quantum supremacy was to compare the quantum processor against classical computers in sampling the output of a random quantum circuit.
Rather than using Google’s 72-qubit ‘Bristlecone’ quantum chip, the researchers designed a smaller processor named Sycamore. While the processor originally consisted of 54 qubits, the experiment was carried out using only 53 of them after one failed to function properly.
As described in Physics World “The paper describes how a quantum computer comprising 53 programmable superconducting quantum bits was used to determine the output of a randomly-chosen quantum circuit made from a sequence of quantum gates. The output is a string of binary numbers and if the process is repeated many times, the results can be described as a probability distribution that resembles an interference pattern. This arises from the quantum interference that underlies the operation of quantum circuit.”
This pattern was determined by Sycamore making one million measurements on the quantum circuit, taking about 200 seconds. The authors say that a state-of-the-art (classical) supercomputer would require about 10,000 years to perform the equivalent task; leading to their claims of realising quantum supremacy.
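A back-of-envelope calculation (our own arithmetic, not taken from the paper) suggests why a 53-qubit circuit is so hard to simulate by brute force: storing the full quantum state classically requires one complex amplitude per basis state, and each extra qubit doubles that count.

```python
qubits = 53
amplitudes = 2 ** qubits            # one complex amplitude per basis state
bytes_per_amplitude = 8             # single-precision complex number
total_bytes = amplitudes * bytes_per_amplitude

pib = total_bytes / 2 ** 50         # convert to pebibytes
print(f"{amplitudes:,} amplitudes -> {pib:,.0f} PiB of memory")
# -> 9,007,199,254,740,992 amplitudes -> 64 PiB of memory
```

Real classical estimates, including the 10,000-year figure, rely on cleverer techniques than storing the whole state vector, but this exponential growth is the reason such comparisons are made at all.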
What do we mean by quantum supremacy?
Quantum supremacy refers to the point at which a quantum computer can solve problems that are practically unsolvable by classical computers – i.e. the ability to solve them in a reasonable timeframe.
By this definition, it would indeed seem that Google and NASA have achieved quantum supremacy. After all, solving a 10,000 year classical computing problem in 200 seconds could be said to be a perfect illustration. However, this event only tells part of the story (namely the capabilities of the quantum processor, rather than its practical application), so perhaps it’s time to expand and add context to this definition.
While ground-breaking, the experiment itself was an illustration of quantum supremacy and not one that is going to trouble long-term cryptographic security in its current form. It is currently thought that a feat such as breaking RSA encryption can only be accomplished by a quantum computer with thousands of logical qubits. Current quantum processors all have fewer than 100 physical qubits, with an overhead of at least a factor of 1,000 to obtain a single logical qubit. There is still work to do!
A milestone for deploying quantum-safe security
While the results of this experiment will not directly impact the cryptographic standards that underpin much of the modern economy, it once again highlights the risks of quantum computing to organisations. It also emphasises the importance of deploying quantum-safe security solutions, such as Quantum Random Number Generation (QRNG) and Quantum Key Distribution (QKD), sooner rather than later.
The paper is reported to address the growth of the computational power of quantum computers, expecting it to grow at a double-exponential rate. “As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities. We are only one creative algorithm away from valuable near-term applications.”
Find out more about IDQ’s quantum-safe security solutions. | <urn:uuid:cbc69eed-0b6e-4952-a1f3-d09a6ff82eb6> | CC-MAIN-2022-40 | https://www.idquantique.com/has-google-just-announced-quantum-supremacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00171.warc.gz | en | 0.938606 | 969 | 2.5625 | 3 |
As far as security for computer or network access is concerned, does the real protection really lie in passwords? There are billions of potential combinations a hacker might have to try before finding out your password, but just the same, there are other ways to get it, such as phishing, or the common passwords that some people still take for granted these days.
Unless you have been among the many victims of account hacking, chances are you may not even care whether another person suddenly takes an interest in breaking into your account. Surely, not everyone has something interesting enough to be worth the trouble, but just the same, the bragging rights and distinction of being able to crack the access granted to a certain program, site, or email keep every account vulnerable.
Passwords are slowly losing their usefulness. They are indeed security precautions, but perhaps the best person to make sure they still serve their purpose is the person who is given access. It is not all about making a password hard to guess but making sure that you are the only one who knows it by heart and mind.
Also, do not be content with being assigned one. You should have the freedom to set your own password without anyone else knowing it. This is one shortcoming of security administration these days: administrators should not be the only ones to set passwords; the actual users themselves should.
The Impact That MAC Randomization Has On Location Analytics
Bluetooth is a wireless personal area networking standard for exchanging data over short distances. Bluetooth low energy (BLE) (also known as Version 4.0+ of the Bluetooth specification, or Bluetooth Smart) is the power- and application-friendly version of Bluetooth that was built for the Internet of Things (IoT). The power efficiency and low energy functionality make this protocol perfect for battery-operated devices. Since BLE now comes native on every modern phone, tablet, and computer, it also makes for a perfect starting point to connect with the vast multitude of devices that IoT promises to bring to the world. A Bluetooth device, like any wireless device, announces itself to the world by sending out advertisement packets.
BLE advertisements are a periodic unidirectional broadcast from the peripheral device to all devices around it. A listener can use the information in these packets to gather the information being advertised or connect to the advertiser. Certain devices cannot be connected to, and this depends on what is announced in the advertisement header. The four types of advertisements are:
- Connectable undirected advertising
- Connectable directed advertising
- Non-connectable undirected advertising
- Scannable undirected advertising
Devices that only transmit, such as beacons, use the third advertising type. Devices that need to quickly connect to something else use the second type. Most other devices use the first advertisement type. While advertising, the device can also indicate whether it is using a random MAC address or its own MAC address. This becomes important when doing passive analytics.
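The four advertisement types above correspond to PDU type codes in the advertising packet header, which also carries the bit indicating a random address. The sketch below decodes that first header byte; the field layout follows the Bluetooth Core specification's Link Layer packet format, but treat it as an illustration rather than a complete parser.

```python
# PDU type sits in bits 0-3 of the first header byte; the TxAdd bit
# (bit 6) flags whether the advertiser's address is random.
ADV_PDU_TYPES = {
    0b0000: "ADV_IND (connectable undirected)",
    0b0001: "ADV_DIRECT_IND (connectable directed)",
    0b0010: "ADV_NONCONN_IND (non-connectable undirected)",
    0b0110: "ADV_SCAN_IND (scannable undirected)",
}

def decode_adv_header(first_byte: int) -> dict:
    pdu_type = first_byte & 0x0F
    return {
        "type": ADV_PDU_TYPES.get(pdu_type, f"other (0x{pdu_type:x})"),
        "random_address": bool(first_byte & 0x40),
    }

# A beacon typically sends ADV_NONCONN_IND; 0x42 also sets the TxAdd bit.
print(decode_adv_header(0x42))
# -> {'type': 'ADV_NONCONN_IND (non-connectable undirected)', 'random_address': True}
```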
The advertisement packet has up to 31 bytes that can be used to advertise additional information about the device. The most common payloads are:
- Local Name
- Manufacturer-Specific Data
- Power Level
The manufacturer-specific data, as indicated by the name, is where a device manufacturer can slot in their own specific information, while also identifying the make of the device. Every company that advertises over BLE is supposed to obtain a company identifier from Bluetooth SIG, and these identifiers can then be used to distinguish devices that are heard over the air. The manufacturer-specific data is also where the payloads for beacons such as iBeacon, AltBeacon, and Eddystone are present. For standard BLE devices, this is where Apple, for example, places information that can be used for services such as Handoff, Airdrop, and Airplay.
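The 31-byte payload is organised as a sequence of length-type-value structures, so a listener can walk it field by field. In this sketch the AD type codes (0x09 for the local name, 0xFF for manufacturer-specific data, whose first two bytes are a little-endian company identifier) match the Bluetooth assigned numbers, but the sample payload itself is fabricated for illustration.

```python
AD_TYPES = {0x09: "local_name", 0x0A: "tx_power", 0xFF: "manufacturer_data"}

def parse_ad_structures(payload: bytes) -> dict:
    fields, i = {}, 0
    while i < len(payload):
        length = payload[i]                  # counts the type byte + data
        if length == 0:
            break
        ad_type = payload[i + 1]
        fields[AD_TYPES.get(ad_type, hex(ad_type))] = payload[i + 2 : i + 1 + length]
        i += 1 + length
    return fields

# Fabricated advertisement: local name "Tag" plus manufacturer data
# starting with company ID 0x004C (Apple), little-endian on the wire.
payload = bytes([0x04, 0x09, 0x54, 0x61, 0x67,
                 0x05, 0xFF, 0x4C, 0x00, 0x02, 0x15])
fields = parse_ad_structures(payload)
company_id = int.from_bytes(fields["manufacturer_data"][:2], "little")
print(fields["local_name"].decode(), hex(company_id))  # -> Tag 0x4c
```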
Analytics and Privacy Implications
With just the awareness of this knowledge, we can hypothesize about good mechanisms for ensuring a user’s privacy and then run some real world analysis to see if this holds up.
For starters, any device that does not require a connection should use non-connectable advertisements. If connections are required, and only to specific previously known devices, then the connectable directed advertisement would be a suitable advertisement type to use.
In either case, and as we have seen in the world of WiFi, randomizing the MAC address used to transmit is almost always useful.
Taking iOS and MacOS as an example, we see some interesting patterns (see the following table). Both do a fairly good job of keeping things random and ensuring that the device is not easily trackable. In our experiments, every time an iOS or MacOS device wakes up, it uses a new random MAC address. The device also only advertises in some scenarios. When the screen is unlocked, we were able to connect via BLE to a device and read out basic information like the Hardware Model number, Firmware version and current battery status of the device. Some interesting packets that Apple Devices also send out include those for supporting common features like Handoff, Airplay, and Airdrop provided the device has BLE enabled.
[Table excerpt; only two rows survived extraction] "Text Message Alert" is marked "x", and "Hey Siri" is marked "yes" with the note "No device info – so not identifiable but connectable".
From what we've seen, only the Apple TV does not randomize its MAC address. From an analytics standpoint, this constant randomization does a great job of maintaining user privacy, while also making it seem that there are a lot more devices around than there actually are. In our current analysis, we haven't yet been able to determine a pattern in the randomization, but this continues to be a work in progress.
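Although the randomization pattern itself is opaque, the Bluetooth specification does distinguish sub-types of random address by the two most significant bits, which a listener can at least classify. The addresses below are invented for illustration; the bit patterns follow the Core specification.

```python
def classify_random_address(mac: str) -> str:
    """Classify a BLE *random* address (one already flagged as random
    by the advertising header) by its two most significant bits."""
    top_two = int(mac.split(":")[0], 16) >> 6
    return {
        0b11: "static random",
        0b01: "resolvable private (rotates periodically)",
        0b00: "non-resolvable private",
    }.get(top_two, "reserved")

# A rotating phone address is typically resolvable private; beacons
# often use a fixed static random address.
print(classify_random_address("7B:10:54:2E:90:11"))  # -> resolvable private (rotates periodically)
print(classify_random_address("F2:33:8A:01:CD:42"))  # -> static random
```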
A more detailed capture via Ubertooth sheds some light on BLE behavior by other devices. Of the mobile devices, it turns out that there are a few that never seemed to randomize the MAC address in an advertisement packet, which implies that once traced to a user, that device can be monitored anywhere in the world.
Mobile accessories don’t seem to follow a consistent behavior. At home, for example, smart TVs and headphones all advertise over BLE without any randomizing, while also remaining connectable. Some connectable devices share details like the Device Information Service once a BLE connection is maintained, but also seem to have timeouts in place to kick off random connections. Using a mixture of listening for advertisements and sending scan requests to devices that use Connectable Advertisements, one can also derive the user specific name of a device.
Generically, accessories that need connections tend to avoid the randomized MAC address. We believe this is to facilitate easy connection by an app. Examples of such devices are wireless headphones and headsets and connectable lamps.
While many devices do use methods to obscure themselves from prying eyes, there are still some ways in which you can run passive analytics for BLE devices. This has limited scope, however, and can get into murky waters when it comes to user privacy.
Active analytics shows more promise. By getting people to install apps, you can drive user engagement and make people more aware of the system overall. This also helps in “de-anonymizing” the data coming from devices and opens up the possibility of relying on a mixture of WiFi (connected as well as unconnected) in conjunction with BLE.
High availability cluster configuration overview
High availability (HA) clustering is a method used to minimize downtime and provide continuous service when certain system components fail. HA clusters consist of multiple nodes that communicate and share information through shared data memory grids and are an effective way to ensure high system availability, reliability, and scalability.
- Application and service failures—affecting application software and essential services.
- System and hardware failures—affecting hardware components such as CPUs, drives, memory, network adapters, and power supplies.
For the Automation Anywhere Control Room application, you can add multiple server nodes under a single Load balancer to support high availability. You can further configure failover clusters to support essential services such as DB service and SVN services.
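As an illustration, the load-balancer side of such a deployment might look like the following nginx sketch; the hostnames, port, and TLS details are placeholders, and the product's own installation guide should be consulted for supported settings.

```nginx
# Hypothetical two-node Control Room pool behind one load balancer.
upstream controlroom_nodes {
    server cr-node1.example.com:443;
    server cr-node2.example.com:443;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://controlroom_nodes;
    }
}
```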
The ability to handle failure allows clusters to meet the following requirements in most data center environments:
- High availability—the ability to provide end users with access to a service for a high percentage of time and reduces unscheduled outages.
- High reliability—the ability to reduce the frequency of system failure.
To support HA for Automation Anywhere, configure the selected components in your data center.
- Cluster components—A cluster is a set of servers (nodes) that are connected
using a network and software. In an HA environment, these clusters of servers are
required to be in the same physical data center. Note: In the context of clusters, though the terms server, host, and node each have specific meaning, they are frequently used interchangeably.
- Cluster group (role)—Group of clustered services that fail over together and are dependent on each other.
- Host—The cluster machine that is hosting the services.
- Node—A generic term for a machine in a cluster.
- Primary node—The active database service or SVN service in a HA cluster where the production activities run.
- Secondary node—The passive/standby duplicate database service or SVN service in a HA cluster that is designated as the target in the event of a failover.
- Graceful degradation—Process that allows cluster dependencies to continue operating gracefully on a degraded primary node.
- Redundancy—HA clusters use redundancy to prevent single points of failure (SPOF), such as a failed server or service. HA clusters include primary (active) servers that host services or databases and secondary (standby) servers that host replicated copies of the services and databases.
- Downtime—Duration of time when a cluster is unable to service requests successfully.
- Failover—In HA, failover is the capability to switch essential services such as DB services and SVN services automatically to a standby MS SQL server or SVN server upon failure or abnormal termination of the previously active server.
- Failback—Failback is the process of restoring the application in a state of failover back to its original state (before failure). The cluster service fails back a group using the same procedures it performs during failover.
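The failover and failback ideas above can be sketched in a few lines of Python. This is a generic illustration of the active/standby pattern, not Automation Anywhere or MS SQL code; the node names are invented.

```python
# Minimal active/standby failover sketch: requests are served by the
# primary node; when it fails, the standby takes over (failover); once
# the primary is healthy again, roles are restored (failback).

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class HACluster:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def active(self):
        # Failover: if the primary is down, serve from the standby.
        # Failback happens implicitly here once the primary recovers.
        return self.primary if self.primary.healthy else self.secondary

cluster = HACluster(Node("db-primary"), Node("db-standby"))
print(cluster.active().name)       # db-primary
cluster.primary.healthy = False    # simulate a database node failure
print(cluster.active().name)       # db-standby (failover)
cluster.primary.healthy = True     # node repaired
print(cluster.active().name)       # db-primary (failback)
```

A production cluster manager adds health probes, quorum, and replication on top of this basic role switch.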
Following are the clustered groups (roles) created in HA to support failover:
- Database HA role group is for a group of database servers configured in a high availability cluster.
Database replication—Configure synchronous replication between the primary node (active) and secondary node (standby) MS SQL servers to ensure consistency in the event of a database node failure. In an HA replication system, if the primary database server fails, a secondary database is promoted to primary.
- General service group role is for version control of available resources (SVN) to support failover.
NetworkTigers discusses the difference between scammers and hackers.
When it comes to cybersecurity threats, the terms “hacker” and “scammer” are used regularly. To some, the words may seem interchangeable and, in fact, hackers and scammers often overlap in the methods they use to meet their goals. However, there are differences between the two that define them as separate threats.
What is a hacker?
A hacker is an individual who uses a computer to overcome a technical problem or obstacle. The term, coined in the 1960s, is used to refer to someone especially savvy with coding and programming as well as to describe one who uses said skills for criminal activity.
While there are ethical, “white hat” hackers who are employed to work against criminals, most people associate the word with “black hat” hackers who steal data or commit otherwise unscrupulous deeds.
What hackers do
- Create, deploy or install malicious code that is designed to penetrate victim networks.
- Exploit vulnerabilities within systems to gain unauthorized access.
- Vandalize targeted websites or social media accounts for reasons that range from political motivation to bragging rights.
- Engage in corporate or political espionage by stealing data from competing companies or nations.
- Employ ransomware in order to encrypt a victim’s network and extort them.
- Break into systems to cause disruption.
What motivates hackers?
Because there are so many different kinds of hackers working from all over the world, their motivations vary greatly.
Hackers employed by governments, whether transparently or in secret, typically target websites or systems used by organizations associated with opposing countries. This can include those belonging to third-party contractors and other agencies who may be in possession of, or have access to, government information that is deemed to be valuable for purposes of sabotage or espionage.
In cases like North Korea, state-backed hackers are even used to hack into financial institutions or cryptocurrency exchanges to provide a source of revenue for the country.
Some hackers are motivated by social issues or work independently to bring attention to their causes and inflict damage on organizations and even governments that they disagree with politically. The hacking collective Anonymous is best known for performing this type of “hacktivism.”
Many other hackers are after money. Whether it’s through a ransom or by selling stolen data on the black market, information is valuable to those who wish to do harm to others.
What is a scammer?
A scammer, defined broadly, is a person who participates in a fraudulent scheme or operation.
In the context of cybersecurity, scammers use a variety of means to steal money or information from victims. While there are hackers on both sides of the law, scammers are, by definition, malicious.
What scammers do
- Employ social engineering tactics to fool people into turning over sensitive information such as login credentials.
- Engage in phishing schemes that impersonate trusted companies, organizations or even individuals to convince people to provide everything from financial data to personal information that can be sold.
- Study victims in order to create convincing fake correspondences or scenarios that encourage trust.
- Blast out hundreds or thousands of emails or texts, hoping to play the numbers game and rope in unsuspecting or accidental victims.
- Create “romance scams” in which they adopt a fake identity to gain someone’s affection and then ask for money.
What motivates scammers?
Whereas hackers may be motivated by politics or social justice, scammers are exclusively interested in financial gain. This can be in the form of funds directly siphoned from a targeted victim or money exchanged for stolen data.
From shooting out hundreds of emails that mimic the look and branding of trusted institutions like banks or PayPal to engaging directly with their victims by impersonating a love interest or fellow employee, there are a myriad of ways that scammers can successfully part a target from their money.
While most people can’t conceive of engaging in such unethical behavior, psychologists generally regard scammers and con artists as narcissists. Only interested in their own personal gain, they lack empathy and view themselves as superior to those who are foolish enough to fall for their tricks.
The internet age puts new tools and more anonymity into the hands of scammers, but this behavior is not new. From tricking sailors into believing they were destined for a paradise of riches in the 1820s only to have them find themselves in Honduras to selling city monuments to people eager to make big investments, scammers have been preying on the trust of others for centuries.
How to avoid scammers and hackers
To prevent hacks, it’s important to ensure that your fortifications are strong. Keep your software and firmware regularly updated. Setting up automatic updates keeps your systems current without the risk of forgetting to do so, and it ensures you receive the patches developers release when a new threat or vulnerability is discovered.
The best policy one can have with regard to scammers and hackers alike is one of vigilance. People’s confidence that they cannot easily be fooled is often turned against them: because no one believes in their own gullibility, con artists and criminals are able to manipulate people surprisingly easily.
Tips to avoid becoming a victim
- Check for misspellings and typos in emails from supposedly trusted organizations or companies that have a sense of urgency or remind you of a purchase you don’t recall making.
- Check the email address of origin for messages you receive. If the address is not one you recognize or contains misspellings or other suspicious characteristics, it is likely a scam.
- Never provide login information, personal data or financial information to someone over the phone or via email or text.
- Stick to official platforms for communicating with colleagues at work. Scammers may attempt to impersonate an employee in need of help via messaging services that operate outside of your business.
- If you receive a message from someone you trust that feels off, find another way to reach out to them to confirm that it’s legitimate.
- Never send money to someone that you haven’t met.
- Keep up to date on the latest cybersecurity threats by regularly checking websites that feature news regarding today’s trends.
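Some of the checks above can even be automated. The sketch below is a toy heuristic, not a real anti-phishing filter: the trusted-domain list is invented, and the look-alike test is deliberately crude, but it captures the "check the sender's address" tip in code.

```python
# Flag senders whose domain is unknown but imitates a trusted brand
# (e.g. "paypal-secure.xyz" impersonating "paypal.com").

TRUSTED_DOMAINS = {"paypal.com", "chase.com"}  # hypothetical allow-list

def looks_suspicious(sender: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # Crude look-alike test: a trusted brand name embedded in an
    # unknown domain is a common phishing pattern.
    return any(t.split(".")[0] in domain for t in TRUSTED_DOMAINS)

print(looks_suspicious("service@paypal.com"))         # False
print(looks_suspicious("support@paypal-secure.xyz"))  # True
print(looks_suspicious("friend@example.org"))         # False
```

Real mail filters combine many such signals (SPF/DKIM results, reputation, content analysis) rather than relying on any single rule.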
So, what does it mean to be tech savvy? It’s defined as being well-informed about or proficient in the use of modern technology, especially computers. The Pew Research Center found that today, 73% of people 65 years or older use the internet, where just 15% of this same age group did in 2000.
While internet usage is on the rise for seniors, technology changes quickly, and it can feel like a challenge to keep up. When older generations believe that technology is catering to younger people, it becomes a barrier to computer literacy.
Whatever you may think, you are never too old or too young to become more tech savvy. Learning more about the assortment of digital devices available can help you stay connected with family, shop, or even bank — all online or on your smartphone.
As the world becomes increasingly digital, it is a great time to become more tech savvy. Here are three ways to get started.
1. Have an open mind and positive attitude
Technology can be frustrating, but a great first step is changing your perspective. Instead of thinking “I can’t do this,” think “I will figure out a way to do this.” Even though you may not discover a solution right away, practice makes perfect. Plus, a positive attitude can determine how much and how fast you learn. You may enjoy the experience and even find it to be thrilling.
2. Don’t be afraid to ask for help
You don’t have to do it alone. Younger generations, especially ones that have been surrounded by technology since birth, are skilled when it comes to using various devices and software. Use this time to bond with your child(ren) or grandchild(ren).
There are also hundreds of books, articles, audio recordings, and live coaching available for you to get help that’s tailored to you. Google, Skype, and Zoom are all great tools that can allow you to chat with someone else or do more research on a topic you struggle with. It’s okay if you forget something, make a mistake, or run into a brick wall. You already know that success comes from learning from all the mistakes you made along the way.
3. Use YouTube to your advantage
YouTube provides access to millions of videos on various topics. For example, if you search “How to use Microsoft Word,” hundreds of videos will instantly appear. Videos can range from 3 to 30 minutes, depending on the content. Major companies, such as Apple and Microsoft, have also used their YouTube channels to upload self-help videos. Topics range from how to restore your iPhone if you forgot your password to how to download apps from the Microsoft Store.
Here at CenturyLink, we have a dedicated section of our YouTube channel designed to help you learn more about the internet and how to get online. Check out this one on how to change your wireless password.
The best part about YouTube is that you can stop, pause, rewind, and play a video as many times as you may need depending on your learning speed.
Exploring and learning something new can be scary and intimidating, but trust that you will get better at it over time. Don’t let your fear of technology hold you back from living your best life. Find different ways to make becoming technology proficient fun, like playing games or turning it into a challenge or competition with a friend who wants to become more tech savvy.
Our final tip: never forget why you are learning to begin with. Becoming more tech savvy will have untold benefits along the way. Knowing how to use technology can help you connect with friends and family, no matter where they are in the world. And using technology can help older adults age in place for longer and remain independent.
Do you have tips for other seniors about how to become more tech savvy? We’d love to hear them over on social media. Find us on Facebook or Twitter and drop us a comment!
What would life be without OpenSSL? Can we even imagine it?
By definition, OpenSSL is an open-source crypto library implementing the SSL and TLS protocol RFCs. It is quite easy to use OpenSSL for the routine crypto tasks that come up in everyday programming and administration. By understanding its command-line options, you can run it from your Windows shell or Linux command line. (Just make sure the OpenSSL utility is installed and available.) Nowadays, many Linux distributions come with OpenSSL pre-installed.
The manual of OpenSSL lists the following capabilities:
- Creation and management of private keys, public keys and parameters
- Public key cryptographic operations
- Creation of X.509 certificates, CSRs and CRLs
- Calculation of Message Digests
- Encryption and Decryption with Ciphers
- SSL/TLS Client and Server Tests
- Handling of S/MIME signed or encrypted mail
- Time Stamp requests, generation and verification
But beyond above, OpenSSL is capable of doing much more. For this article, we are only going to scratch the surface and check out how OpenSSL can help us complete simple tasks. OpenSSL is so versatile, powerful, and big that one article is not enough, so let's get started.
The capabilities of OpenSSL range from cryptography to a toolkit for hackers, but that does not mean analysts and sys-admins cannot benefit from its power. All you need is a little bit of command-line knowledge of Linux shell. We will perform some common tasks later in this article, but first, let's look at a full list of command families:
- Message Digest commands (see the `dgst` command for more details)
- Cipher commands (see the `enc` command for more details)
OpenSSL also includes a really excellent SSL/TLS client and server pair for testing:
$ openssl s_client
$ openssl s_server
These test TLS client and server applications can be incredibly useful for a wide variety of testing and diagnostic purposes for programmers, security analysts, or sysadmins.
Here are some test invocations:
$ openssl s_client -connect www.google.com:443
The output includes the server’s certificate chain, the negotiated protocol version and cipher suite, and session details.
You can do a lot of stuff with these SSL wrappers as most protocols are SSL enabled on the Internet today. In addition to checking for the cipher suites that are negotiated in an SSL connection, you can find out certificate’s trust hierarchy or expiry date and so on.
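If you prefer to script such checks, the certificate-expiry lookup can also be done from Python's standard library. This is a minimal sketch; www.google.com is just an example host, and the default SSL context validates the certificate chain for you.

```python
# Connect to a TLS server and read the expiry date from its certificate.
import ssl
import socket
from datetime import datetime, timezone

def cert_not_after(host: str, port: int = 443) -> datetime:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns 'notAfter' like 'Jun  1 12:00:00 2025 GMT'
    return datetime.strptime(
        cert["notAfter"], "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)

# Example (requires network access):
# print("expires:", cert_not_after("www.google.com"))
```

The same information is what `openssl s_client` prints in its certificate section; scripting it makes it easy to monitor many hosts.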
Apropos of this, if you wish to simply SSL-enable a particular insecure app, you have a tool called stunnel.
Now let us do some simple encryption and decryption with OpenSSL. A typical invocation to encrypt a file with AES-256-CBC looks like this:
$ openssl enc -aes-256-cbc -salt -in plain.txt -out secret.enc
To inspect the (binary) result, you can Base64-encode it:
$ openssl base64 -in secret.enc
Then you can use the encrypted file or send it, and finally decrypt it:
$ openssl enc -d -aes-256-cbc -in secret.enc -out plain.txt
You can also compute hash values of big files, or of simple strings—such as the colon-separated userid and password used for HTTP digest authentication:
$ openssl dgst -sha256 largefile.bin
$ echo -n "user:realm:password" | openssl dgst -md5
To do simple cipher manipulations like this, we have another utility in the UNIX universe called GnuPG.
Then there is also X.509 certificate manipulation, including generation, checking, computing the hash, generating RSA keypairs, and converting certificates between PEM, DER, and PKCS12 formats.
In this article we only scratched the surface of its possibilities. What benefits do you see from OpenSSL?
Did you enjoy this content? Follow our LinkedIn page!
Four research teams will work to develop a hardware accelerator and software stack for fully homomorphic encryption that can bring the speed of FHE calculations in line with similar unencrypted data operations.
As agencies struggle to protect personally identifiable information, intellectual property, military secrets and other sensitive data in applications at rest and in motion, many have considered fully homomorphic encryption. Rather than decrypting data to run computations, which opens that data to cyberattack and potential theft, FHE allows users to work with data while it’s encrypted. If implemented at scale, FHE could protect data confidentiality across a range of applications -- from enabling government use of untrusted networks to enhancing data privacy, according to the Defense Advanced Research Projects Agency.
FHE, however, currently requires a prohibitive amount of time and compute power. Each homomorphic computation creates a certain amount of noise that corrupts the encrypted data, DARPA officials said. At some point, the noise accumulates to the point that it becomes impossible to recover the original underlying plaintext. Workarounds can help reduce the noise, but they take massive compute resources.
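FHE schemes are far more elaborate than this, but the core idea—computing on ciphertexts without ever decrypting—can be illustrated with textbook RSA, which happens to be multiplicatively homomorphic. The parameters below are toy values for demonstration only; textbook RSA should never be used in practice.

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n  ==  Enc(a * b mod n)
# so a third party can multiply two values it cannot read.

p, q = 61, 53          # toy primes (real keys use ~1024-bit primes)
n = p * q              # 3233
e = 17                 # public exponent

def enc(m: int) -> int:
    return pow(m, e, n)

a, b = 6, 7
combined = (enc(a) * enc(b)) % n       # computed on ciphertexts only
print(combined == enc(a * b))          # True
```

Fully homomorphic schemes support both addition and multiplication on ciphertexts, which is what introduces the noise-management problem DARPA describes.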
On March 8, DARPA announced four research teams that will work to reduce FHE processing time from weeks to seconds. The Data Protection in Virtual Environments (DPRIVE) program seeks to develop an FHE hardware accelerator and software stack that reduces the computational overhead required to bring the speed of FHE calculations in line with similar unencrypted data operations.
“We currently estimate we are about a million times slower to compute in the FHE world than we are in the plaintext world,” DARPA Program Manager Tom Rondeau said. “A computation that would take a millisecond to complete on a standard laptop would take weeks to compute on a conventional server running FHE today.”
The research teams -- which will be led by Duality Technologies, Galois, SRI International and Intel Federal -- will create accelerator architectures that are flexible, scalable and programmable. They will explore various approaches to making FHE feasible through memory management, flexible data structures and programming models and formal verification methods.
They will also experiment with different native word sizes, which will impact the signal-to-noise ratio of how encrypted data is stored and processed. Current standard CPUs are based on 64-bit words, but the DPRIVE research teams will explore whether a diversity of word sizes -- from 64 bits to thousands of bits -- can solve the challenge, DARPA officials said.
As the concurrent design of FHE algorithms, hardware and software is critical to the successful creation of the target DPRIVE accelerator, each team is bringing varied technical expertise to the program as well as in-depth knowledge on FHE.
“If we are able to achieve this goal while positioning the technology to scale, DPRIVE will have a significant impact on our ability to protect and preserve data and user privacy,” Rondeau said.
NEXT STORY: Achieving air-tight cybersecurity with KVM | <urn:uuid:823e8c1b-1991-4f00-b9db-2ce7d1f33469> | CC-MAIN-2022-40 | https://gcn.com/cybersecurity/2021/03/darpa-picks-teams-to-bring-homomorphic-encryption-to-life/315533/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00372.warc.gz | en | 0.912488 | 616 | 2.546875 | 3 |
- August 18, 2022
- Posted by: Aanchal Iyer
- Category: Data Science
Data is a topic of interest for all of us. Organizations want to learn how data can help them cut costs and increase profits. The healthcare business wants to know how data can help predict illnesses and offer better treatment to patients. Data science also provides an in-depth understanding of financial statistics and the stock market – when to buy, sell, or hold stocks. All of this is done to make money.
Data Science Principles for the Stock Market
There are various ideas and terms in Data Science that most individuals are not familiar with. This blog is here to explain all the essential data science terminology. Let’s go through some financial and stock market-related data science principles.
An algorithm is a set of instructions for completing a task, and algorithms are central to both coding and data science. Algorithmic trading is becoming more popular in the stock market.
For testing and training, the entire dataset is divided into two parts. The portion the model learns from is called the training data or training set; for accurate predictions, the model must learn from this past data. Once the model is trained, we want to know how well it performs on data it has not seen. That held-out portion is referred to as the testing data or test dataset.
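In code, the split is nothing more than partitioning the rows. The sketch below uses made-up closing prices; note that for time series the split should preserve chronological order (train on earlier rows, test on later ones), as done here.

```python
# Chronological train/test split for a toy price series.
prices = [101.2, 102.5, 101.9, 103.4, 104.0, 103.8, 105.1, 106.0]

split = int(len(prices) * 0.75)          # 75% train, 25% test
train, test = prices[:split], prices[split:]

print(len(train), len(test))  # 6 2
```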
Features and Target
In data science, data is represented in tabular form, such as a DataFrame or an Excel sheet. The column whose value we want to predict—here, the stock price—is the target, while the remaining columns, such as the P/B ratio, volume, and other financial information, are the features.
Use of Data Science in the Stock Market
Data Science provides us with a new view of financial data and the stock market. Some concepts, such as purchase, sell, or hold, are followed during trading. The aim is to generate a lot of money. Trading platforms today are more popular. To evaluate if it is sensible to invest in a particular stock and undertake stock market research, one must first comprehend some basic principles in Data Science.
Data science depends heavily on modeling data and projecting future results. In the stock market, time-series models are used to predict the fall and rise of share values.
Modeling uses a mathematical approach to evaluate past behaviors to predict future outcomes. That model is usually a Time-Series model regarding financial data in the stock market. A Time-Series is a series of data; in our example, this would be the price value of a stock. Most data and stock charts are time series.
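As a deliberately simple stand-in for a time-series model, the sketch below forecasts the next price as the mean of the last few observations. The prices are invented; real models (ARIMA, LSTMs, and so on) are far richer, but the shape is the same: past values in, future value out.

```python
# Naive moving-average forecast: predict the next value as the mean
# of the last k observations in the series.
def moving_average_forecast(series, k=3):
    window = series[-k:]
    return sum(window) / len(window)

closes = [103.4, 104.0, 103.8, 105.1, 106.0]
print(round(moving_average_forecast(closes), 2))  # 104.97
```

Such naive baselines are also useful as a sanity check: a sophisticated model that cannot beat the moving average is not adding value.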
Another model in data science and machine learning is called a Classification Model. Models using classification models are provided with specific data points and then classify or predict what those data points represent.
For the stocks or stock market, we can give a machine learning model other financial data such as the Daily Volume, P/E Ratio, Total Debt, and so on to determine if a stock is fundamentally a good investment.
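A toy version of that classification idea might look like this. The thresholds are invented purely for illustration and are not investment advice; a real classifier would learn its decision boundary from labeled historical data rather than use hand-picked rules.

```python
# Rule-based toy classifier: label a stock "buy" or "avoid" from a
# few fundamental features.
def classify(pe_ratio, debt_to_equity, volume):
    score = 0
    if pe_ratio < 20:          # not overpriced relative to earnings
        score += 1
    if debt_to_equity < 1.0:   # manageable debt load
        score += 1
    if volume > 1_000_000:     # liquid enough to trade easily
        score += 1
    return "buy" if score >= 2 else "avoid"

print(classify(pe_ratio=15, debt_to_equity=0.4, volume=2_500_000))  # buy
print(classify(pe_ratio=45, debt_to_equity=2.2, volume=300_000))    # avoid
```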
The topics in this blog are key machine learning and data science concepts you will encounter every day, and they are essential to learning data science. Even with these tools, many interacting factors make stock values volatile and difficult to predict accurately.
What are Rootkits?
Rootkits are covert computer programs designed to provide unrestricted access to a computer without being detected. The term “Rootkit” is the combination of the words “root” and “kit.” Originally, rootkits were the tools that granted administrators access to a computer system or network. “Root” is the term used to refer to the superuser or administrator who, by default, has access to all files and commands in a Unix/Linux system. “Kit” refers to the superuser’s software access to all files and commands. Rootkits initially targeted Linux systems and are associated with malware – such as trojans, worms, and viruses – that hide their existence and actions from users and other system processes.
A rootkit is a component of multipurpose malware that may be able to carry out several functions. These functions include granting attackers remote access to compromised hosts, intercepting network traffic, snooping on users, capturing keystrokes, stealing authentication information, mining cryptocurrency, and assisting in DDoS attacks. A rootkit aims to mask this illegitimate activity on the compromised computer.
How does Rootkit Work?
A rootkit is exceptionally capable of hiding malicious code inside a legitimate program. Installing a rootkit gives remote attackers access to the internal functions of your operating system. Since rootkits cannot spread by themselves, they must use covert methods to infect computers: when a user installs a rootkit-carrying installer, the rootkit hides until the attacker activates it. Rootkits often bundle malicious software, including banking credential stealers, password thieves, keyloggers, antivirus disablers, and bots for denial-of-service attacks. They are generally delivered the same way as other malware—through phishing emails, malicious executables, crafted PDF or Microsoft Word documents, compromised shared drives, or software downloaded from risky websites. What distinguishes rootkits is their focus on exploiting the registry and other root directories and hiding in the system root.
Examples of Rootkits
Let us examine a few notable rootkit models from previous years, some made by well-known programmers while large companies crafted the others:
- 1990: Historically, the rootkit was developed by Lane Davis and Steve Dake at Sun Microsystems for SunOS and UNIX.
- 1999: Greg Hoglund published an article describing a trojan he had created called NTRootkit—the first rootkit developed for Windows.
- 2003: The release of HackerDefender led to a cat-and-mouse race between that rootkit and the detection tool RootkitRevealer. Early trojans of this era altered or hooked the OS at a very low level of functionality.
- 2004: In an attack that came to be known as the Greek Watergate, a rootkit was used to tap the phones of nearly 100 people on the Vodafone Greece network, including the phone of the country’s Prime Minister.
- 2005: Sony BMG has generated a great deal of fury after distributing CDs with rootkits embedded in them – without seeking customers’ consent.
- 2008: The TDL (TDSS) rootkit family appeared; associated with the Alureon trojan, it was used to create and support botnets.
- 2009: Machiavelli was a rootkit that targeted and attacked Mac OS X, demonstrating that Macs were also vulnerable to rootkits and malware.
- 2010: The Stuxnet worm, reportedly developed by Israel and the United States, contained the first known rootkit for industrial control systems. It was designed to hide inside Iran’s nuclear program, though neither nation claimed responsibility for the attack.
- 2018: LoJax is the first rootkit to be able to compromise a PC’s UEFI, the firmware that controls the motherboard. The rootkit can therefore be reinstalled after the operating system has been reinstalled.
- 2019: Scranos, a recent rootkit, was designed to steal passwords and payment details stored in a device’s browser. The malware also turns devices into click farms to generate video revenue and YouTube subscriptions.
Types of Rootkits
There is a wide range of rootkits, depending on where they attack and how deeply they embed themselves in the PC. In particular, they are divided into the following classes:
- Application Rootkits: Application rootkits replace ordinary files on your computer with rootkit files and may change how a normal application executes. Frequently, these rootkits attack Microsoft Office, Paint, or Notepad, and the compromised applications give hackers access to the system. This type of rootkit is difficult to notice because the infected applications still appear to work normally. However, since they operate at the application level, antivirus software and detection programs can identify them.
- Bootloader Rootkits: In bootloader rootkits, the code for initiating the boot process or loading an operating system or application is loaded simultaneously as the operating system boot code and thus targets the Master Boot Record (MBR) or the Volume Boot Record (VBR). Bootloader rootkits attach themselves to these types of records, making it difficult for a rootkit remover or antivirus to detect them.
- Client-Mode Rootkits: Also known as user-mode rootkits, these run with user-level privileges and conceal themselves—and other malware—by intercepting the operating system’s application-level interfaces. They are designed to start alongside your PC’s operating system, so restarting won’t eliminate them. Because the detection code of a malware scanner or removal application can run at the deeper kernel level, client-mode rootkits can still be detected.
- Firmware Rootkits: In a firmware rootkit, the malicious software is stored on the boot-related software of particular hardware components. Their persistence through reinstallation of the operating system makes them especially stealthy.
- Kernel-Mode Rootkits: A kernel-mode rootkit is a sophisticated piece of malware that can modify or add code to the operating system. Kernel rootkits can be complicated to create, and if they’re buggy, they can heavily impact the target computer. An antivirus solution will detect a breadcrumb trail left by a buggy kernel rootkit.
- Virtualized Rootkits: Virtualized rootkits boot up before the operating system, as opposed to kernel-mode rootkits, which load at the same time as the targeted system. Virtualized rootkits can take hold deep within the computer and are extremely difficult – or even impossible – to remove.
Symptoms of Rootkits
The following are some of the symptoms of a rootkit attack:
- Antimalware stops running: An antimalware application that simply stops functioning may indicate an active rootkit infection.
- Windows settings change by themselves: A rootkit infection may cause Windows settings to change without any user action.
- Causes a malware infection: The rootkit can install malicious software containing trojans, worms, ransomware, spyware, adware, and other destructive programs that compromise the performance of devices and systems.
- Performance issues: The presence of a rootkit may also be indicated by unusually slow performance or high CPU usage.
- Computer lockups: Computers fail to respond to input from the mouse or keyboard when users cannot access their computers.
- Removes files: Rootkits gain access to a system and network and can be installed through a backdoor into a system, network, or device. Rootkits can run programs that steal or delete data from an operating system.
- Intercepts personal information: A type of rootkit known as a payload rootkit uses keyloggers to record a user’s keystrokes. When users open spam emails, these rootkits install themselves. The rootkit steals personal information in both cases, including credit card numbers and online banking details.
How to Detect Rootkits?
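Before reaching for dedicated tooling, one classic manual technique is a "cross-view" comparison: enumerate processes in two different ways and diff the results—a PID visible in one view but hidden from the other is a red flag. The Linux-only sketch below compares the `/proc` filesystem against `ps` output; treat it as an illustration, not a guarantee, since sophisticated rootkits can subvert both views.

```python
# Cross-view process enumeration: PIDs present in /proc but absent
# from `ps` output may indicate a user-mode rootkit hiding processes.
import os
import subprocess

def pids_from_proc():
    # Every running process has a numeric directory under /proc.
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def pids_from_ps():
    out = subprocess.run(
        ["ps", "-e", "-o", "pid="], capture_output=True, text=True
    ).stdout
    return {int(tok) for tok in out.split()}

def hidden_pids():
    # Processes start and exit between the two snapshots, so small
    # transient differences are normal; persistent ones deserve a look.
    return pids_from_proc() - pids_from_ps()

if os.path.isdir("/proc"):
    try:
        print("pids hidden from ps:", sorted(hidden_pids()))
    except FileNotFoundError:
        print("ps not available on this system")
```

Tools such as RootkitRevealer applied the same idea to the Windows registry and filesystem, comparing a high-level API view against a raw on-disk view.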
Cyber security threats and ransomware attacks are increasing at a tremendous pace. It is extremely difficult for cyber security analysts and incident responders to investigate and detect cyber security threats using conventional tools and techniques. NetSecurity’s ThreatResponder, with its diverse capabilities, can help your team detect the most advanced cyber threats, including APTs, zero-day attacks, rootkits, and ransomware attacks. It can also help automate incident response actions across millions of endpoints, making it easy, fast, and hassle-free.
Want to see ThreatResponder, our cutting-edge Endpoint Detection & Response (EDR) platform, and ThreatResponder FORENSICS, the Swiss Army knife for forensic investigators, in action? Click the button below to request a free demo of NetSecurity's ThreatResponder platform.
Understanding the Testing of VR Applications
- December 7, 2018
Virtual Reality is the future of technology. Though it has been discussed since the 1950s, it has recently entered the limelight in an altogether new form. As VR applications grow in popularity, the demand for VR testing will take a front seat.
For a quick understanding of the concept, VR is a 3-D, computer-generated environment that a person can explore and interact with. It is an artificial environment created and presented in such a way that the user suspends disbelief and accepts it as a real environment.
What’s VR used for?
The latest VR technology frequently uses virtual reality headsets or multi-projected environments, sometimes in combination with physical environments or props, to generate realistic images, sounds, and other sensations that simulate a person's physical presence in a virtual or imaginary setting.
Different kinds of Virtual Reality:
There are different types of devices that will eventually come together to form a complete set of VR hardware. There is also a variety of VR systems, most easily distinguished by the mode in which each interacts with its users. Some of these modes are described below:
Window on World
This Virtual Reality system is well suited to the field of medicine. It usually uses a desktop monitor rather than an HMD and allows its user to visualize a wide range of medical procedures, from surgeries to something as complicated as colonoscopies. It can also be used to simulate training situations.
Immersion
An immersion system uses a virtual reality headset. By taking users out of the real physical world and instead placing them in a virtual one, the crisp visuals and audio delivered through the HMD help them escape daily life and explore a faraway land.
Telepresence
In telepresence, sensors are controlled and operated remotely by the user. Examples include bomb-disposal robots, underwater exploration vehicles, and drones operated through telepresence VR.
Mixed Reality
Another kind of Virtual Reality is mixed reality. It merges computer-generated inputs with the telepresence inputs mentioned above, or with the user's view of the real world, to create a more valuable output. For example, a fighter pilot can see maps or key information displayed inside the helmet, and a surgeon wearing an HMD can view real-time patient information during a complex surgery in progress.
VR App Testing:
- VR app testing depends on specialized hardware. The only way to make sure these products function properly is to test them on different devices and against all requirements, such as the Oculus Rift or the HTC Vive, both of which connect to personal computers for a powerfully immersive VR experience. Other hardware, such as the Samsung Gear VR and Google Daydream, works with the user's smartphone to create a more mobile VR experience.
- Rigorous compatibility testing is required to ensure that product teams do not face any surprises when they go to market. Compatibility testing aids in measuring the performance of the application when it is accessed on different devices and platforms. It also helps to identify dangerous non-functional issues or bugs, including device overheating.
- In VR testing, testers need to cover the extra dimensions that are added to the field of view.
- A protected VR testing station is required so that engineers and testers can move around without fear of being hurt.
- A team that is less sensitive to VR sickness is required so that testing can be completed on time.
Objective and purpose of VR App testing:
- Testing ensures the correct or expected behavior of any particular application or device at hand. Testing of VR applications has its own nuances, due to its complexity and aspects pertaining to the human-machine interface.
- Manual testing is used to evaluate the application's user interaction, specifically whether the user's interaction with the application leads to the desired outcome.
- Automated testing is implemented for internal application components.
- The purpose of testing is to comprehend the overall impact of the environment on the device and how all this assimilates for the user in the virtual environment.
- In order to process such a high degree of interaction and weigh the input and corresponding output, automating a chunk of tests is essential.
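One concrete output that automated VR tests commonly assert on is frame pacing, since dropped frames are a major cause of the VR sickness mentioned above. The sketch below is a toy illustration: the 90 FPS budget is an assumption (headsets differ), and a real harness would pull frame timestamps from the engine or headset SDK rather than a hand-built list:

```python
def frames_per_second(frame_timestamps):
    """Average FPS over a list of per-frame timestamps in seconds."""
    if len(frame_timestamps) < 2:
        raise ValueError("need at least two frames")
    elapsed = frame_timestamps[-1] - frame_timestamps[0]
    return (len(frame_timestamps) - 1) / elapsed

def meets_frame_budget(frame_timestamps, target_fps=90.0):
    """True if the captured run sustains the target frame rate."""
    return frames_per_second(frame_timestamps) >= target_fps

# 91 frames over exactly one second -> 90 FPS, passes a 90 FPS budget.
steady = [i / 90.0 for i in range(91)]
print(meets_frame_budget(steady))   # True
# The same frame count stretched over 1.5 s averages 60 FPS and fails.
janky = [i / 60.0 for i in range(91)]
print(meets_frame_budget(janky))    # False
```

A check like this is cheap to run on every build, which is exactly where automation earns its keep compared to manual playtesting.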
Types of testing applicable for VR Testing:
Some common types of testing are given below:
Formal testing plans cover the functional and UI/UX issues of the app, focusing on software bugs.
According to experts, there is no specific, widely accepted pattern for automated testing of VR applications. Currently, the industry takes existing software engineering practices and applies them to testing VR applications.
Interviewing groups of people to answer demographic and market-reception questions.
Playtesting is a crucial part of making informed decisions about your VR application. It delivers:
- Real human reactions
- Identification of hidden problem areas
- Idea generation
- Resolution of design arguments
- Quick evaluation of a hypothesis
Testing tools for VR app testing
Some of the tools that are available in the market for VR testing are given below.
SteamVR Performance tool:
Simple tools are available for checking whether a PC is ready for VR. SteamVR Performance Test, for example, helps evaluate your PC's compatibility with VR applications.
360° EYETRACKER™ software solution:
VR experiences allow the user to look anywhere in the environment at any time. As such, it is important to be able to direct audience attention so that viewers stay engaged with the narrative and don't miss the action. By tracking the position and orientation of a user's head movement during a VR experience, the tool identifies exactly where in the environment they are looking and extrapolates their field of view.
Results can be delivered in any (or all) of three formats:
- Heat map with interactive controls
- Video of 360 content in a 2D environment
- Relive viewer experience through VR headset | <urn:uuid:3abaa14e-abc1-415b-aa38-746467859ff8> | CC-MAIN-2022-40 | https://www.kualitatem.com/blog/understanding-testing-VR-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00372.warc.gz | en | 0.92006 | 1,295 | 3.171875 | 3 |
IQ Bot 11.x: High Availability and Disaster Recovery overview
High availability (HA) provides a failover mechanism if an IQ Bot service or server fails. Disaster recovery (DR) enables recovery across a geographically separated distance if a disaster causes an entire data center to fail.
IQ Bot uses a minimum of 3 nodes and a maximum of 5 nodes in a cluster for high availability (HA).
IQ Bot HA and DR solution
In the context of IQ Bot, implementation of High Availability (HA) and Disaster Recovery (DR) reduces downtime and maintains continuity of business (CoB) for your bot activities.
- High Availability (HA)—High availability is an architectural system design that attempts to safeguard a system against certain failure scenarios. Even if parts of the system are failing, the system as a whole is still available and usable. High availability solutions typically protect against specific scenarios such as server failures, single-component failures, dependency failures, variable load increases, and network splits in which dependent system components become unreachable on the network.
- Disaster Recovery (DR)—Disaster recovery involves a set of policies and procedures to enable the recovery or continuation of vital infrastructure and systems following a natural or human-induced disaster. Disaster recovery addresses many different causes of system failure, whereas high availability typically accounts for a predictable few. Disaster recovery focuses on re-establishing services after an incident, not just on failover. Recovery of a system includes scenarios such as restarting a service or system and restoring configuration files or a database from backups.
Required HA and DR infrastructure elements
- Distributed Approach—In addition to clustering IQ Bot related data center components, we also recommend that you deploy IQ Bot on multiple physical and/or virtual servers.
Load balancing—Performed by a load balancer, this is the process of distributing application or network traffic across multiple servers, allowing workloads to be spread out and service activity to be protected. This ensures bot activity continues on clustered servers.
Databases—Databases use their own built-in failover to protect the data. This ensures database data recovery.
Between the HA clusters, configure synchronous replication between the primary (active) and secondary (passive) clustered MS SQL servers in the data center. This ensures consistency in the event of a database node failure.
For the required HA synchronous replication, configure one of the following:
- A backup replica in Synchronous-Commit mode of SQL Server Always On availability groups
- SQL Server Database Mirroring
- Between the DR sites, configure your database to provide asynchronous replication from the primary (production) DR site to the secondary (recovery) DR site that is at a geographically separated location from the primary DR site.
Point all IQ Bot instances within the same cluster to the same database and repository files. This is required to enable sharing data across multiple servers and to ensure data integrity is maintained across IQ Bot servers within a cluster.
HA and DR deployment models
To ensure your IQ Bot is protected by HA and, or DR, configure your data centers according to the deployment models described in:
HA implementation requirements
- Install IQ Bot on multiple servers.
- Access to IQ Bot is through a load balancer.
- Open a RabbitMQ v3.8.18 synchronization port between IQ Bot servers.
- Configure the Microsoft SQL Server in high availability mode.
Installation HA and DR configuration requirements
- The IQ Bot installer does not directly support cluster installation. To set up a cluster, do the following:
- Run the installer on each application server node.
- Share the output folder using the access role
- Post installation, execute messagequeue_cluster_configuration.bat with appropriate command-line arguments.
- Configure IQ Bot in a high availability configuration.
- Open firewall ports: 4369 and 25672.
- Install RabbitMQ v3.8.18 on every IQ Bot node in the cluster.
The first node where IQ Bot is installed becomes the primary RabbitMQ v3.8.18 node. The host name of the primary node is used to configure the RabbitMQ v3.8.18 cluster.
- The load balancer is required to distribute traffic to all IQ Bot server nodes.
- Configure Microsoft SQL Server for high availability. Use the Microsoft SQL Server Always On option.
- For RabbitMQ v3.8.18 specific installation, see your RabbitMQ v3.8.18 documentation.
HA and DR known limitations
- To discover the availability of IQ Bot instances, a load balancer periodically sends pings, attempts connections, or sends requests to test the IQ Bot instances. These tests are called health checks.
- Health checks do not verify the availability of RabbitMQ v3.8.18 instances. | <urn:uuid:e46ce0ee-a48d-43e5-a830-789eba87d271> | CC-MAIN-2022-40 | https://docs.automationanywhere.com/bundle/iq-bot-v6.5/page/iq-bot/topics/iq-bot/architecture/iq-bot-ha-dr-overview.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00372.warc.gz | en | 0.806 | 1,002 | 2.6875 | 3 |
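The documentation does not specify IQ Bot's health-check endpoint, so the URLs and the "HTTP 2xx means healthy" rule below are assumptions; the sketch simply shows the general shape of the HTTP health check that a load balancer (or an ops script supplementing it, e.g. to cover the RabbitMQ gap noted above) performs against each node:

```python
import urllib.request
import urllib.error

def check_health(url, timeout=2.0):
    """Return True if the endpoint answers with an HTTP 2xx status
    within the timeout; False on errors, timeouts, or other statuses."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def cluster_health(nodes, timeout=2.0):
    """Map each node URL to its current health status."""
    return {url: check_health(url, timeout) for url in nodes}

if __name__ == "__main__":
    # Hypothetical node URLs; replace with your actual IQ Bot servers.
    nodes = ["http://iqbot-node1:3000/", "http://iqbot-node2:3000/"]
    for url, healthy in cluster_health(nodes).items():
        print(url, "UP" if healthy else "DOWN")
```

A real load balancer typically repeats such probes on an interval and only marks a node down after several consecutive failures, to avoid flapping on transient errors.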
The Kingdom of Saudi Arabia recently announced the issuance of the Evidence Law, which comes into effect on July 8, 2022. This law governs all commercial and civil transactions and, in part, administrative and criminal cases. It permits and legitimizes the submission of digital evidence in an electronic form and gives it the same status as written evidence.
The law considers but is not limited to, emails, digital signatures, and digital media as evidence, and the fact that digital evidence is admissible in court. More importantly, it indicates that Saudi courts can use digital software and tools in evidentiary procedures.
One might think, what discipline underpins digital evidence? What is it? And how can organizations effectively handle digital evidence?
The Emergence of Digital Evidence
With the spread of personal computing devices in the 1980s, a new scientific field was born: computer forensics. Programs and processes were developed to meet demand from the law enforcement community to examine computer evidence, and the first organizational structure, the Computer Analysis and Response Team (CART), was formed by the FBI. In the 1990s, computer forensics evolved into digital forensics. The Scientific Working Group on Digital Evidence (SWGDE), founded in 1998, defines digital evidence as "information of probative value that is either stored or transmitted in a binary form"; the term "binary" was later changed to "digital".
Ever since, data has grown exponentially as the cost to store it has decreased. However, the spread of data across multiple sources and its complexity have also grown dramatically, posing a governance and retention challenge to organizations.
The Growing Data Landscape
If we examine today’s data landscape in an organization, we will immediately realize the growing digital footprint of any employee across multiple data sources. This increases the need to utilize robust solutions to quickly identify, collect, and preserve such data in case of investigations or disclosure purposes.
To put things into perspective, imagine the number of devices, cloud locations, and data volumes you would need to inspect to identify traces of activity or potential evidence compared to 20 years ago. In 2009, 2.5 billion devices were connected to the internet, and it is estimated that the growth of Internet of Things (IoT) devices will result in more than 100 billion connected devices by 2050. Against an expected world population of around 10 billion in 2050, that is about 10 internet-connected devices per human.
This drastic growth demanded that digital forensic professionals up their game and rapidly innovate to meet data identification, collection, and forensic analysis challenges.
Six Digital Forensics Trends in 2022
In a rapidly changing digital landscape, there are multiple trends that digital forensic professionals regularly assess:
- The rise of remote work culture, due to the COVID-19 pandemic, means that digital assets may not be physically accessible, which has driven the development of forensically sound remote data-preservation techniques. Professionals have found solutions such as mailing preservation kits and assisted remote access to computing and mobile devices.
- Storage technologies have evolved: drives within computing devices rarely use HDD technology anymore, favoring Solid State Drives (SSDs) instead. Unlike HDDs, where the chances of recovering and preserving deleted evidence are high unless the drive is forensically wiped, deleted content on SSDs is much harder to recover.
- The complexity and size of data to be analyzed is growing. In the example of multimedia (pictures and videos), AI techniques are being implemented to identify frames of interest from thousands of hours of video footage. Additionally, AI techniques also help digital forensic professionals in grouping and categorizing multimedia based on content, such as clustering pictures that contain guns in a criminal case.
- The need for advanced mobile forensics and recovery of deleted content is rising. There are more than 115 million WhatsApp subscribers in the Middle East and North Africa region, with adoption rates of 77%. Recovery and analysis of WhatsApp communications is essential to almost any investigation, and there is therefore an increased responsibility to preserve, recover, and present data from mobile sources in a forensically acceptable manner in legal cases.
- The significant growth of many non-traditional data sources, such as drones, digital surveillance cameras and IoT devices. Capabilities to preserve such data sources continue to develop to capture and decode data and metadata associated with such data sources.
- Encryption is on the rise, posing challenges to future digital forensics and to the ability to uncover cyber-attacks and malicious electronic activity.
In an era where data is exponentially growing along with its sources, it is essential to treat data in a forensically defensible manner for litigation purposes. The disruptive trends in the global landscape require swift adaptation and continuous innovation by digital forensic professionals.
Regionally, we are ahead of a transformation in the laws that govern evidence and now accept the submission of digital evidence. Is your organization prepared to retain and handle digital evidence?
© Copyright 2022. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice. | <urn:uuid:0d747701-a5ec-4ad2-af70-adf14371ab61> | CC-MAIN-2022-40 | https://angle.ankura.com/post/102hqmg/breaking-down-saudi-arabias-new-evidence-law | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00572.warc.gz | en | 0.929088 | 1,199 | 2.828125 | 3 |
Spyware is a type of malicious software that enters your computer or mobile device without consent in order to gain access to your personal information and data and relay it to a third party. Considered a type of malware, spyware spies on the computer user, capturing keystrokes, emails, and documents, or even turning on the video camera.
Spyware has been part of the public discourse since the mid-90s, and in the early 2000s the term "spyware" began being used by cybersecurity companies in much the same way it is used today. Spyware remains one of the most common threats on the internet, and because of the way it quietly infiltrates your computer, it can be extremely hard to detect.
Types of Spyware & Other Related Malicious Malware Terms
Trojans
A type of malicious software that disguises itself as legitimate. Often posing as an important update or file, it tricks you into letting the spyware in, then steals, disrupts, or damages your personal data.
Adware
A type of tracking software that tracks your browser history in order to sell your data to advertisers so they can better target you with ads. Adware can be used for legitimate or malicious purposes. In addition to advertising, adware may include spyware that spies on the user's computer activity and browser preferences without their knowledge.
Ransomware
Limits or blocks users from accessing individual files or entire systems until a ransom is paid. Sometimes these attacks use information found in a spyware attack to demand a ransom.
A type of malicious software used to install spyware code, often designed to avoid detection by traditional antivirus protection solutions.
Tracking Cookies
Tracking cookie files can be placed on your device in order to track your web activity and be used for malicious marketing purposes.
A number of applications, such as keyloggers, infostealers, and password stealers, can be deceitfully added to your computer in order to track activity like keystrokes, chatroom dialogues, and websites visited, as well as to collect sensitive information like passwords and health data.
Keyloggers
Also referred to as system monitors, these are applications that capture computer activity, including keystrokes, search history, email discussions, chatroom conversations, and websites visited, often via screenshots.
Infostealers
An application that scans infected computers with the goal of collecting personal information like usernames, passwords, documents, and spreadsheets, and then transmits the information to a remote server.
Password Stealers
A malicious application that steals passwords from infected computers or mobile devices.
How Does Spyware Work?
1. Device Infiltration
Spyware has the potential to infiltrate your device due to a number of factors:
- Your device has security vulnerabilities – such as backdoors and exploits.
- Phishing and spoofing – when criminals try to get you to perform an action, like opening a malware-infected file or giving up your password credentials.
- Misleading marketing – marketing tactics can be effective in tricking users to download their spyware program by presenting it as a useful tool.
- Software bundles – Free software packages are appealing to users, and criminals may conceal a malicious add-on, plug-in, or extension within these packages.
- Trojan horses – Malicious code or software disguised as legitimate but used for the purpose of entering one’s computer and disrupting, damaging, or stealing.
- Mobile device spyware – Malicious apps for Android or Apple users that either contain harmful code, are disguised as legitimate apps or contain fake download links.
2. Steal your Data
Once the spyware is downloaded to your computer, it begins tracking your online activity via keystrokes, screen captures, web searches, and more, in order to collect your data.
3. Sends Data to a Third Party
After the spyware collects your data, it sends it to a third-party source or uses it directly.
What Types of Problems Are Caused By Spyware?
Identity and Data Theft
When personal information like email accounts, saved passwords for online banking, credit card information, and social security numbers, is stolen, it can be used for the purpose of identity theft.
Computer and System Damages
Spyware software is often poorly designed and has the potential to drain your computer’s energy, memory, and processing power. This can result in severe lags between opening applications, your computer overheating, and even the system crashing.
Spyware can manipulate your search engine into delivering unwanted websites that are fraudulent or dangerous. You may also be faced with unwanted advertisements appearing as pop-ups or banners, causing annoyance.
What Are Signs of Spyware Infiltration?
Here are some of the signs you may have been infiltrated by spyware:
- Your device is running slowly
- You’re being redirected to pages you didn’t navigate to
- You’re feeling annoyed by pop-ups
- Your usual homepage isn’t appearing
- You’re noticing icons of applications you don’t remember downloading
- You’re noticing add ons or plug-ins you don’t remember downloading
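Several of these signs can be checked automatically. As a toy illustration of the idea (the watchlist names below are made up; real antispyware products match installed items against large, regularly updated signature databases, not simple name lists):

```python
# Hypothetical watchlist of normalized add-on/app names (made-up examples).
SUSPECT_NAMES = {"searchassistant", "couponhelper", "speedbooster"}

def flag_suspicious(installed_items, watchlist=SUSPECT_NAMES):
    """Return installed add-ons/apps whose normalized names appear
    on a watchlist. Matching is case- and whitespace-insensitive."""
    def norm(name):
        return "".join(name.lower().split())
    return sorted(item for item in installed_items
                  if norm(item) in watchlist)

# Example: scan a (made-up) list of browser add-ons you don't recall installing.
browser_addons = ["AdBlock", "Coupon Helper", "Speed Booster", "uBlock"]
print(flag_suspicious(browser_addons))  # ['Coupon Helper', 'Speed Booster']
```

A hit from a check like this is a prompt for investigation, not a verdict; legitimate software can share a name with malicious look-alikes.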
Examples of Spyware
A program that disguised itself by promising to improve internet speed but instead replaced all error and login pages with advertisements.
Takes advantage of security vulnerabilities in Internet Explorer to hijack it, change its settings, and collect your data.
Uses security vulnerabilities to enter a computer and record search histories and keystrokes. It is also known as the Zlob Trojan.
Monitors victim’s web surfing habits and uses the information to target them with ads.
Who Do Spyware Authors Target?
Spyware authors do not have one specific target – instead, they intend to target as many groups as possible. Everyone is therefore susceptible to spyware. Spyware authors are more concerned with what they are after than who they are after.
What to Do if You Suspect Spyware
Clean your System of Infection
Run a scan to identify any malicious software present and use a reputable virus-removal tool to clean your device. Of course, be mindful of accidentally downloading even more spyware in the process.
Contact Necessary Parties of Fraudulent Activity
Notify your employer, bank, financial institution, or any other relevant enterprise of potential fraudulent activity that may have occurred.
Contact Local Law Enforcement
If your data has been stolen, and especially if it is sensitive in nature, you should alert your local law enforcement.
How to Protect Yourself from Spyware
- Don’t open emails from unknown senders
- Avoid clicking on pop-up advertisements
- Update your computer or mobile device regularly
- Don’t open suspicious email attachments or files
- Mouse over suspicious links before clicking to see where you’ll be taken
- Adjust browser settings to a higher security level
- Know that "free" is almost never free; such offers are often false advertisements
- Read the terms and conditions of anything you download
- Use a reputable malware protection software like Cyren
Ready to make sure your business is protected against spyware? Read more about state-of-the-art spyware protection from Cyren. | <urn:uuid:a44784d3-34c3-4ef1-83eb-accbcd9fb8a9> | CC-MAIN-2022-40 | https://www.cyren.com/blog/articles/what-is-spyware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00572.warc.gz | en | 0.897511 | 1,573 | 3.21875 | 3 |
2018 started with a bang for those in cyber-security — and for almost everyone else running a device with an Intel, AMD, or ARM chip made from the mid-nineties until today. News broke (and spread like wildfire) that three security flaws, named Spectre (Variants 1 and 2) and Meltdown, could be exploited by ransomware and other malicious code to extract sensitive information by abusing the techniques chips use to speed up processing.
The intricacies of how this works and why this has not been made known before have been discussed at large by much bigger cyber-security experts than myself. After giving you the must-know facts about Spectre and Meltdown, I'd much rather focus on who's affected, how to fix it, at what cost, and how to prepare for the future when other decades-old flaws are found.
What Are Spectre & Meltdown?
Modern Intel, AMD, and ARM processors built in the past 20 years use what is called "speculative execution." This process is designed to maximize processing speed by predicting what the user will need to do next before he or she even asks for it. Instead of stopping at a yes/no condition, the processor will execute both branches. The results of the wrongly-guessed branch are stored temporarily in a cache, potentially including sensitive information such as browser history and passwords.
Out-of-order execution is another way modern chips maximize speed. The chip determines the most efficient order in which to execute code and can therefore run several steps ahead. Again, any falsely performed operations are sent to the cache.
This behavior has resulted in two known vulnerabilities: the chip executes commands without checking the validity of the request, and the temporarily stored data is no longer protected.
A potential Meltdown attack looks for any unprotected data stored in the cache by these out-of-order executions. It does not target specific data but instead gathers what it can. An attack leveraging the Spectre vulnerability, on the other hand, induces the victim's processor to perform an irregular speculative execution, meaning it leads the processor astray on purpose and then leaks the sensitive data.
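The code pattern that a Spectre variant-1 attack targets is nothing exotic: it is an ordinary bounds check. The sketch below renders that pattern in Python purely for illustration; a real attack only works against compiled code running on a speculating CPU (Python itself executes this strictly in order, so nothing leaks here):

```python
array1 = list(range(16))          # attacker can influence the index into this
array2 = bytearray(256 * 4096)    # probe array used as the cache side channel

def victim(x):
    # The bounds check is architecturally correct: an out-of-range x
    # never returns data. But a speculating CPU may *predict* the
    # branch as taken and execute the body with the out-of-range x
    # before the check resolves, touching a cache line of array2
    # whose address depends on out-of-bounds memory read via array1[x].
    if x < len(array1):
        return array2[array1[x] * 4096]
    return 0

# Architecturally, the out-of-bounds call is safely rejected...
print(victim(1000))   # 0
# ...yet on vulnerable hardware the speculative access leaves a
# timing-measurable footprint in the cache, which the attacker then
# recovers by timing loads from each page of array2.
```

This is why Spectre is so hard to fix in software: the vulnerable pattern is correct code, and mitigations have to stop the hardware from acting on its own predictions.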
Who Is Affected?
Speculative execution and out-of-order execution in processor chips were not closely-guarded industry secrets. Anyone who took a computer class in college and learned about processor speed would have become aware of this potential vulnerability.
While chip manufacturers and operating system providers are scrambling to provide fixes and patches for this vulnerability, hackers are equally scrambling to create malicious attacks that exploit these security flaws.
Because almost every Intel chip made since the mid-nineties — except some Intel Atom and Itanium chips — uses speculative and out-of-order execution, Meltdown affects nearly anyone running a modern device with an Intel chip. Spectre can take advantage of the Intel chip exploit, as well as AMD and ARM chips.
In essence, anyone running a modern PC, Mac, iPhone, or Android phone is at risk — even cloud servers are not safe since they are typically running virtual machines from one physical computer.
How To Fix It And The Pitfalls Associated With It
Patches are widely available now, with more being developed.
However, they do come at a price: the performance of your machine. Simply put, the older your device and the older your OS, the more significant the decrease in performance. According to Microsoft, most users running Windows 7 or 8/8.1 will notice a significant slowdown. Some users on older machines running Windows 10 will notice a slowdown, while users with the latest CPU chipset on Windows 10 will probably not notice a significant difference.
The decision to update and incur noticeable performance slowdowns depends partly on how many exploitable third-party applications your organization runs. If you run mainly in-house programs, deferring these fixes might not pose as big a security risk.
However, most organizations have applications running that can be exploited and eventually you will need to update. As of now, Windows Update is not pushing out these patches to machines, which could be because there has been no known case of these attacks seen in the wild yet.
What Is The Security Advice?
With such a widespread issue but no known attacks yet, rushing to upgrade might do more harm than the attacks themselves. Of the patches and fixes released so far, besides causing noticeable slowdowns, Intel's own patches contain several bugs. Read here for more information on the problems with the patches that have been released.
This is not to say that you should just wait several months for all patches to be released and all bugs to be worked out. It is always a good idea to use common sense and take security precautions, such as reviewing currently in use third-party apps, cloud services, and browsers and temporarily eliminate unpatched or problematic ones.
How To Avoid A BAU Disruption
While it is not possible to predict how many future flaws will be discovered in the next few years, an incident like this serves as a reminder that something of this scale could be discovered again and leave large organizations scrambling.
And the consequences of a slowed-down IT environment are severe!
As previously discussed, even a small computer distraction (an update, slowdown, or install) can cost a millennial employee half an hour of non-productive time. Another study found that IT issues cost UK businesses the equivalent of $1 million to $60 million a year, depending on company size. Notably, 95% of that cost comes from loss of productivity (78%) and lost revenue (17%)!
Therefore, the Spectre/Meltdown fixes could cause major disruption to your Business-as-Usual operation and potentially impact your company's bottom line.
Tackle The Fix Slowdown Problem With Dashworks
This is a great example of where Dashworks puts you in a position of power. Using Juriba's IT Transformation Management platform, you can deal with the current microprocessors chip issue, as well as future flaws. If your organization is still running on Windows 7 or 8, the time to upgrade has arrived.
Even though extended support for Windows 7 and 8 does not end until January 2020 and 2023 respectively, the slowdown of your devices in the meantime will cost your company millions of dollars in lost time.
Not only can Dashworks make the process of migrating to Windows 10 more efficient, it also helps you understand how much of your legacy estate will be affected by the slowdown fix, and it can help you manage your patch rollout, especially where other readiness items need to be considered. You'll know:
- How many PCs are running on which chipset, so which ones are most affected
- How many, and which, 3rd-party apps are running so you can see which ones are most vulnerable
- Which devices are woefully out-of-date and are already running at a significant performance lag
- Which devices need which patches, or have the patches already installed
And since Windows 10 is a Windows-as-a-Service OS, using Dashworks' central command and control will keep you up-to-date with the twice-a-year feature updates and monthly security updates. Using Dashworks to simultaneously update both your OS and hardware will help you navigate through the Spectre variants and Meltdown so there is as little distraction to your BAU as possible. | <urn:uuid:4228f24b-1ab4-474e-acfe-adb31110dfb3> | CC-MAIN-2022-40 | https://blog.juriba.com/security-or-performance-spectre-meltdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00572.warc.gz | en | 0.956665 | 1,498 | 3.296875 | 3 |
To communicate or transfer data from one computer to another, we need an address. Computer networks use various types of addresses, each working at a different layer. The Media Access Control (MAC) address is a physical address that works at the Data Link Layer. In this article, we will discuss addressing at the Data Link Layer, i.e., the MAC address.
Media Access Control (MAC) Address –
MAC addresses are unique 48-bit hardware numbers embedded into a computer's network card (known as the Network Interface Card, or NIC) at the time of manufacture. The MAC address is also known as the physical address of a network device. In the IEEE 802 standard, the Data Link Layer is divided into two sublayers –
- Logical Link Control(LLC) Sublayer
- Media Access Control(MAC) Sublayer
The MAC address is used by the Media Access Control (MAC) sublayer of the Data Link Layer. A MAC address is globally unique, since millions of network devices exist and each must be uniquely identifiable.
Format of MAC Address –
A MAC address is a 12-digit hexadecimal number (a 6-byte binary number), most often represented in colon-hexadecimal notation. The first six digits (say 00:40:96) identify the manufacturer and are called the OUI (Organizationally Unique Identifier). The IEEE Registration Authority Committee assigns these MAC prefixes to its registered vendors.
Here are some OUIs of well-known manufacturers:
CC:46:D6 - Cisco
3C:5A:B4 - Google, Inc.
3C:D9:2B - Hewlett Packard
00:9A:CD - HUAWEI TECHNOLOGIES CO.,LTD
The rightmost six digits represent the Network Interface Controller-specific part, which is assigned by the manufacturer.
As discussed above, a MAC address is usually written in colon-hexadecimal notation, but this is only a convention, not a requirement. A MAC address can be represented using any of the following formats –
- Colon-Hexadecimal notation: 00:40:96:9d:68:0a
- Hyphen-Hexadecimal notation: 00-40-96-9d-68-0a
- Period-separated Hexadecimal notation: 0040.969d.680a
Note: Colon-Hexadecimal notation is used by Linux OS and Period-separated Hexadecimal notation is used by Cisco Systems.
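Since all of these notations encode the same 12 hex digits, converting between them is a matter of stripping separators and regrouping. A quick Python sketch (the helper names here are illustrative, not from any standard library):

```python
import re

def normalize(mac: str) -> str:
    """Strip separators and validate: returns the 12 lowercase hex digits."""
    digits = re.sub(r"[^0-9a-fA-F]", "", mac).lower()
    if len(digits) != 12:
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return digits

def to_colon(mac: str) -> str:
    """Colon-hexadecimal notation, as used by Linux."""
    d = normalize(mac)
    return ":".join(d[i:i + 2] for i in range(0, 12, 2))

def to_cisco(mac: str) -> str:
    """Period-separated notation, as used by Cisco systems."""
    d = normalize(mac)
    return ".".join(d[i:i + 4] for i in range(0, 12, 4))

def oui(mac: str) -> str:
    """First three octets: the Organizationally Unique Identifier."""
    return to_colon(mac)[:8]

print(to_cisco("00:40:96:9d:68:0a"))   # 0040.969d.680a
print(to_colon("0040.969d.680a"))      # 00:40:96:9d:68:0a
print(oui("3C-5A-B4-12-34-56"))        # 3c:5a:b4
```

Note that the OUI lookup against a vendor database (such as the table above) starts from exactly this kind of normalization.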
How to find MAC address –
Command for UNIX/Linux -
  ifconfig -a
  ip link list
  ip address show
Command for Windows OS -
  ipconfig /all
MacOS -
  TCP/IP Control Panel
Note – LAN technologies such as Token Ring and Ethernet use the MAC address as their physical address, but some networks (e.g., AppleTalk) do not use MAC addresses at all.
Types of MAC Address :
1. Unicast –
A unicast-addressed frame is sent out only on the interface leading to a specific NIC. If the LSB (least significant bit) of the first octet of an address is set to zero, the frame is meant to reach only one receiving NIC. The MAC address of a source machine is always unicast.
2. Multicast –
A multicast address allows the source to send a frame to a group of devices. In a Layer-2 (Ethernet) multicast address, the LSB (least significant bit) of the first octet is set to one. The IEEE has allocated the address block 01-80-C2-xx-xx-xx (01-80-C2-00-00-00 to 01-80-C2-FF-FF-FF) for group addresses for use by standard protocols.
3. Broadcast –
As at the Network Layer, broadcast is also possible at the underlying Data Link Layer. Ethernet frames with ones in all bits of the destination address (FF-FF-FF-FF-FF-FF) are referred to as broadcast frames. A frame destined for MAC address FF-FF-FF-FF-FF-FF will reach every computer belonging to that LAN segment.
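The three frame types above can be told apart from the destination address alone: broadcast is all ones, and otherwise the least significant bit of the first octet decides between unicast and multicast. A minimal Python sketch (the function name is illustrative):

```python
def classify_mac(mac: str) -> str:
    """Classify a destination MAC address as unicast, multicast, or broadcast."""
    octets = [int(part, 16) for part in mac.replace("-", ":").split(":")]
    if len(octets) != 6:
        raise ValueError(f"expected 6 octets: {mac!r}")
    if all(octet == 0xFF for octet in octets):
        return "broadcast"   # FF-FF-FF-FF-FF-FF reaches every NIC on the LAN segment
    if octets[0] & 0x01:     # LSB of first octet set -> group address
        return "multicast"
    return "unicast"         # LSB of first octet zero -> one receiving NIC

print(classify_mac("FF-FF-FF-FF-FF-FF"))  # broadcast
print(classify_mac("01:80:C2:00:00:00"))  # multicast (IEEE group address block)
print(classify_mac("3C:5A:B4:12:34:56"))  # unicast
```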
What is MAC Cloning –
Some ISPs use the MAC address to assign an IP address to the gateway device. When the device connects to the ISP, the DHCP server records its MAC address and then assigns an IP address; from then on, the system is identified by its MAC address. When the device disconnects, it loses the IP address. When the user reconnects, the DHCP server checks whether the device has connected before; if so, the server tries to assign the same IP address (provided the lease period has not expired). If the user changes the router, the ISP must be informed of the new MAC address, because the new MAC address is unknown to the ISP and no connection can be established otherwise.
The other option is cloning: the user can simply clone the MAC address already registered with the ISP. The new router then keeps reporting the old MAC address to the ISP, and there is no connection issue.
Cybersecurity is something everyone should be concerned about. Every business must work to protect sensitive data and personal privacy. Using the internet without a cybersecurity plan is like driving a car without airbags or seat belts.
Here are 7 indispensable cybersecurity tips for any and everyone on the internet:
No. 1 - Understand the threat
More than 80 percent of all businesses are targets for cyberattacks. More alarming still, cyber breaches often go unnoticed for months (200 days on average). In the wake of a cyberattack, 60 percent of small companies are out of business within six months.
What does this mean? It simply means that every business must act as if a cyberattack is not only possible but imminent. And, therefore, every business (and every individual) should be adequately prepared and employ preventative measures.
No. 2 - Keep software up-to-date
One of the easiest preventative measures is simply to update your software. Sometimes we skip updates because they can be inconvenient, time-consuming tasks. This is a mistake. Investing in new technology is always a sound decision. But, you must update consistently to ensure quality.
Updates usually come with a batch of patches, and sometimes these include security fixes. Hackers and other cyber attackers can exploit out-of-date and unsupported software with simple but devastating scripts.
No. 3 - Check firewall permissions
This might be an overly technical task for some, but those unafraid of tackling system permissions should definitely do so. Open up Windows Defender every now and again and see whether there are any programs that shouldn't be there. Check file paths and permissions, keeping a lookout for any suspicious programs.
No. 4 - Understand phishing scams
If we did everything perfectly by the book, there wouldn’t be much threat to our computing devices. In fact, there wouldn’t be any threat at all. Unfortunately, we humans have faults.
Phishing scams and other email scams can spell the end of our personal security if we’re not careful. That’s why it’s important to understand that scammers can try to bait us with social tactics rather than programmatic ones to get what they want. Don’t give out passwords, usernames, and credit card information, and don’t click on suspicious links.
No. 5 - Encrypt sensitive data
Encrypting sensitive data is one of the easiest ways to protect it. Cloud platforms sometimes offer built-in encryption, but this alone isn't always enough. You'll want to encrypt local files and data in transit, too, which can also be a technical process.
On local machines, it's an easy enough task. On Windows, simply select a folder and choose encrypt. On a Mac, you can encrypt files through creating a separate disk image. For transferring encrypted information, you may also want to look into encrypting and securing your local network.
No. 6 - Rotate strong passwords
Strong passwords are hard to crack. That’s because many hackers simply try to brute force login credentials. This means that they’ve got a program running that iterates through lots of different combinations. If you’ve got a password with tons of unique characters with no discernible pattern, these brute force attacks have no power.
No matter how great your password is, you'll need to change it, and change it often. This way, even if someone does gain access, they only have access for a limited amount of time. If you fail to catch a hacking attempt or breach, this acts as another safeguard: hackers who went undetected will be booted automatically. Be sure to change all your passwords every six months.
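As an illustration of the search-space argument above, a strong random password can be generated with Python's standard secrets module. The 16-character length and printable-ASCII alphabet below are arbitrary choices for the example, not a mandated standard:

```python
import secrets
import string

# ~94 printable ASCII characters: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Draw each character independently from a cryptographically secure RNG."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# With 94 characters and 16 positions, a brute-force attacker faces
# 94**16 (roughly 3.7e31) combinations.
pwd = generate_password()
print(pwd, len(ALPHABET) ** 16)
```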
No. 7 - Get a password management program
Generating and rotating passwords is a lot of work. If you're not careful, you'll lock yourself out of your own devices and files. Get a password management program (one that encrypts all of your information) so that this doesn't happen. Whatever you do, don't hold passwords in an unencrypted plain-text file or Excel sheet. This can leave you wide open to a myriad of unwanted consequences.
Keep passwords strong, change them often, and find a program that allows you to manage them securely. Encrypting your data is also of the utmost importance; this ensures that if a leak does happen, the data itself is protected.
These seven tips will help you be more secure. Installing trustworthy antivirus programs can also be extremely helpful. Remember to also keep software up-to-date and check for phishing scams and other forms of social engineering.
Michael Volkmann is an entrepreneur with a focus on business operations and finance. | <urn:uuid:15f50f44-6c2c-42f6-b09f-d040464ac67a> | CC-MAIN-2022-40 | https://www.mbtmag.com/security/blog/13245436/7-indispensable-cybersecurity-tips | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00572.warc.gz | en | 0.915938 | 993 | 2.609375 | 3 |
When most people use the Internet, they use domain names to specify the website that they want to visit. However, computers use IP addresses to identify different systems connected to the Internet and route traffic through the Internet. The Domain Name System (DNS) is the protocol that makes the Internet usable by allowing the use of domain names.
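That name-to-address translation is visible from the socket layer of most programming languages. For example, a short Python snippet can ask the system resolver for the IPv4 address behind a hostname (localhost is used here so the example needs no network access):

```python
import socket

# Resolve a hostname to an IPv4 address: this is exactly what happens
# behind the scenes before any connection to a website is opened.
hostname = "localhost"
address = socket.gethostbyname(hostname)
print(f"{hostname} -> {address}")
```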
DNS is widely trusted by organizations, and DNS traffic is typically allowed to pass freely through network firewalls. However, it is commonly attacked and abused by cybercriminals. As a result, the security of DNS is a critical component of network security.
DNS can be used in different ways. Some threats include attacks against the infrastructure:
DNS can also be abused and used in cyberattacks. Examples of the abuse of DNS include:
DNS is an old protocol, and it was built without any integrated security. Several solutions have been developed to help secure DNS, including:
Monitoring your DNS traffic can be a rich source of data for your Security Operations Center (SOC) teams as they monitor and analyze your company's security posture. In addition to monitoring firewalls and IPS systems for DNS Indicators of Compromise (IoCs), infected hosts, or DNS tunneling attempts, SOC teams can also be on the lookout for lookalike domains.
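As a toy illustration of the lookalike-domain idea (and only a toy; this is not how any commercial product works), a SOC script might flag a newly observed domain that sits within a small edit distance of a protected brand domain:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lookalikes(candidate: str, protected: list, max_dist: int = 2) -> list:
    """Return protected domains the candidate imitates (close but not equal)."""
    return [d for d in protected
            if 0 < edit_distance(candidate.lower(), d.lower()) <= max_dist]

print(lookalikes("examp1e.com", ["example.com", "checkpoint.com"]))  # ['example.com']
print(lookalikes("example.com", ["example.com"]))                    # [] - exact match is legitimate
```

Real detection pipelines also consider homoglyphs, added keywords, and newly registered domain feeds, but the edit-distance check captures the core idea.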
Check Point solutions can help organizations protect DNS infrastructure and detect DNS-based attacks. Next-Gen Firewalls detect malicious traffic and DNS tunneling attacks via reputation filtering and IPS DNS tunneling protections. In addition, Check Point can empower SOC teams to research IoCs and find lookalike domains to protect against cyber threats such as those exploiting DNS in phishing attacks. Check out this demo of Check Point Infinity SOC.
March 11, 2021 — Since 2009, Daniel Tward and his collaborators have analyzed more than 47,000 images of human brains via MRI Cloud—a gateway created to collect and share quantitative information from human brain images, including subtle changes in shape and cortical thickness. The latter was the topic of a recently published study in the journal Neuroimage: Clinical by Tward and his team.
Entitled Cortical Thickness Atrophy in the Transentorhinal Cortex in Mild Cognitive Impairment, the study detailed new findings related to this particular area of the brain’s thinning during the early stages of Alzheimer’s disease and how it impacts mild cognitive impairment.
“Until now, we haven’t been able to measure these changes in living people,” said Tward, assistant professor of computational medicine and neurology at the University of California Los Angeles. “By using supercomputers like Comet at the San Diego Supercomputer Center at UC San Diego and Stampede2 at Texas Advanced Supercomputing Center, we were able to study a large cohort of patient images over time.”
Specifically, Tward said he and his team used allocations from the National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) to access supercomputers that allowed for observation and quantification of thinning in the transentorhinal cortex, in a pattern that agrees with autopsy results. Located in the temporal lobe of the brain, the transentorhinal cortex has long been believed to be the first area impacted by Alzheimer's disease; until now, however, this could only be shown in autopsy results.
He said that confirming that this thinning of the transentorhinal cortex is caused by Alzheimer's could help clinicians provide patients with an earlier diagnosis; currently, the disease cannot be definitively diagnosed until autopsy. Additionally, the newfound discovery could result in shorter and less expensive clinical trials, which again allows for faster discovery of potential treatments for those suffering from Alzheimer's disease.
What Was the Role of Supercomputers?
Tward and his colleagues used XSEDE allocations on Comet and Stampede2 in conjunction with MRI Cloud, to analyze hundreds of large imaging volumes of human brains—with a focus on the transentorhinal cortex.
“Reducing computation time from months to days allowed this complex neuroimaging project to be feasible,” said Tward. “XSEDE provided us with a platform to exceed our expectations as we conducted a study with significant results for both academic researchers and clinicians working on Alzheimer’s disease diagnoses and treatment.”
This work relied on allocations from XSEDE, which is supported by the NSF (ACI-1548562). The research was supported by the National Institutes of Health (P41-EB015909, R01-AG048349, RO1-DC016784, and R01-EB020062).
Source: Kimberly Mann Bruch, SDSC | <urn:uuid:84c45e6b-dd67-4141-be21-4dd23833c927> | CC-MAIN-2022-40 | https://www.hpcwire.com/off-the-wire/xsede-allocated-supercomputers-help-accelerate-alzheimers-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00572.warc.gz | en | 0.933288 | 631 | 3.03125 | 3 |
Automation Anywhere Digital Worker overview
A Digital Worker contains a set of professional skills that enable it to act, process, and analyze in the same ways as a human.
For example, one of the skills of a Digital Accounts Payable Clerk is invoice processing. The invoice processing skill includes tasks that involve the following:
- Acting: Extract the invoice from email.
- Thinking: Identify the correct data items to extract from the invoice via artificial intelligence.
- Analyzing: Monitor straight-through processing (STP) to identify and highlight opportunities to accelerate the transaction process.
With Digital Workers, human workers and organizations benefit as follows:
- Human workers can refocus their efforts on more interesting, value-added activities (instead of repetitive ones), and deliver greater value to their organization.
- Organizations can rapidly scale their automation activities to increase productivity, efficiency, and growth. | <urn:uuid:c397af1c-8e34-4382-ac0c-d29465d3f0ba> | CC-MAIN-2022-40 | https://docs.automationanywhere.com/es-ES/bundle/enterprise-v11.3/page/enterprise/topics/aae-developer/aae-digital-worker-overview.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00572.warc.gz | en | 0.90972 | 181 | 2.9375 | 3 |
Technology In K-12 Education – New Challenges, New Opportunities
I continue to be amazed at both the pace and breadth of innovative technologies available to school districts and professional educators that allow customized learning and the promise of better educational outcomes. Along with these opportunities come significant challenges related to the cost, infrastructure, support and deployment of educational technology, especially for large urban districts.
This time, it’s Personal!
Trends in education have a tendency to come and go, but Personalized Learning is one that looks like it’s here to stay. Supporting initiatives like 1:1 device(s), digital classrooms, immersive learning, etc. hold tremendous potential to allow teachers to design a custom curriculum—complete with course materials, real-time analytics and outcome-based results. The Digital Classroom opens up new opportunities to replace printed books and source materials with their digital equivalents, leading to cost-savings on printed materials but adding cost on the technology side. Backed by cutting edge technologies like Machine Learning/Artificial Intelligence using adaptive software, the latest Personalized Learning model seeks to provide teachers with a set of tools to allow them to truly provide a 1:1 teacher/ student relationship.
Another strong trend in educational technology is the “builder” movement—whether it be robotics, games, mobile applications, 3-D printing, immersive technologies (VR) or Lego. The satisfaction of seeing the tangible results of their efforts can excite and motivate students to greater achievement.
The pace of change for cutting-edge technologies also presents the likelihood of a shorter product lifecycle
The common factor all these exciting trends share is that they are not cheap—increased hardware, support, maintenance and infrastructure costs to accommodate these have to be factored in to any purchase decisions. The Total Cost of Ownership (TCO) can often greatly exceed any preliminary cost estimates. The pace of change for cutting-edge technologies also presents the likelihood of a shorter product lifecycle. Since the majority of districts are unable to fund every new trend, doing the upfront research and due diligence to pick the best fit for the district’s educational goals is extremely important.
The Power of Analytics
While analyzing results has been around as long as teaching has, the ability to collect data, analytics and metrics around educational outcomes is faster and more powerful than it’s ever been. The sophistication of these tools, often powered by neural networks using deep learning allows for much quicker results, the ability to spot weaknesses early on and mid-course corrections leading to better outcomes. While the richer the data, the richer the result, privacy and security concerns also have to be vetted and mitigated.
Full STEAM Ahead
STEAM (Science, Technology, Engineering, Arts+Design & Mathematics) initiatives are taking education by storm. These initiatives have proven very successful in helping shrink the gap among US students for these critical skills. While lauding their effectiveness, it’s also important to note that they don’t come for free – the need for more computers and computer labs, server & disk space, networking & internet bandwidth means greater outlays for equipment, support & staff. Moving to virtualize as many servers as possible, an in-progress hybrid SAN upgrade and aggressive expansion of high-speed access to all schools and facilities allows us to be nimbler and responsive to the ever-changing technology landscape.
Organizations like Hour of Code seek to demystify the process of designing and writing code. With activities starting as early as 2nd grade going through HS and beyond, students will be exposed to the concepts surrounding application design and development. As eager participants in the Hour of Code and other STEAM-based initiatives, we’re embarking on a long-term strategy to engage students early in these critical skills and keep them engaged through graduation.
Looks Phishy to Me!
As the rate of technology use and ubiquity increases, so do the challenges of keeping our systems and data secure and private. The importance of understanding outside threats and being able to respond quickly cannot be overstated. The increasingly inter-connected nature of devices and the move towards everything being internet-capable, or the Internet of Things (IoT), greatly broadens the landscape of potential threats. When everything is a computer, everything is also a potential threat that needs to be managed and mitigated. Add to this an expanding Bring Your Own Device (BYOD) strategy, adopted in response to the needs of the schools and community, and the situation becomes even more complex for school districts.
The increasing frequency and sophistication of phishing emails, Distributed Denial-of Service (DDOS) attacks, port scanning and other attempted breaches require constant vigilance and aggressive response. The trend towards Cloud-based technologies for application hosting and SaaS provides an additional complication. Beyond using best-of-breed tools and products to identify, respond and stop malicious attempts before they become a breach, on-going user education is paramount to limiting these potentially devastating attacks. In a district of our size and diversity, the technology IQ varies greatly across our user base requiring a least-common-denominator approach to training and education on suspicious or malicious activities.
All CIOs face some of the same challenges—cost-cutting pressures, timing of large-scale deployments, rising infrastructure demands, security and privacy considerations, etc. The educational space provides its own unique set of challenges & opportunities for technology professionals, with the additional mandate of being good stewards of the taxpayer’s money. Every company or institution has a mission – for K-12 school districts it’s to provide the best possible tools and environment to prepare the youth of today for success in the area of their choosing. It’s a mission we take very seriously and strive to achieve—the strategic planning and use of technology plays a large role in our ability to fulfill that mission.
School districts across the country, like other taxpayer-funded entities have certainly felt the impact of the current anti-tax sentiment. Here at the School District of Palm Beach County, we have been fortunate to have passed two major referendums in the past two years to address long-standing maintenance and technology issues. These were successful in large part to our ability to convey how students would benefit directly as well as transparency in how the funds would be used. The community’s trust in and support for our mission and purpose is satisfying and reinforces our belief that we’re on the right track. | <urn:uuid:2a9eea7c-3c95-419a-9b91-589c5250e95c> | CC-MAIN-2022-40 | https://education.cioreview.com/cioviewpoint/technology-in-k12-education-%EF%BF%BD%EF%BF%BD%EF%BF%BD-new-challenges-new-opportunities-nid-27994-cid-27.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00572.warc.gz | en | 0.940887 | 1,323 | 2.75 | 3 |
Though it’s been a while since cloud technology was introduced into our world still there is much confusion surrounding Network Security and Cloud Security. If you are one of those who can’t find the difference between these two terms: Network Security and Cloud Security. Then you’re in the right place.
Today in this article you will get to know about the difference between these two domains and the career opportunities and skills required and more. Okay without further ado let’s get started.
What is Network Security?
Network security is the branch of cybersecurity that focuses on protecting data, applications, and systems connected at the network level. To understand more about network security, you should first know what a network is.
Simply put, a network refers to two or more computers or systems that are linked to share resources and communications. Today's network architectures have evolved into more complex ones and are open to various vulnerabilities.
These vulnerabilities are spread across various devices. They can involve unauthorized access to data, hardware or software problems, and so on. A network security analyst is responsible for protecting the data and resources of the computers and other electronic devices connected in a network.
Network Security Control Methods
Network security can be achieved by the following three types of controls –
i) Physical Network Control
Here the security personnel focus on preventing unauthorized access to the network through physical components like routers, cables, etc. Some of the security measures taken are biometric authentication for data or network rooms, locks, and so on.
ii) Technical Network Control
Here both data and systems are protected from malicious activity by outsiders and employees alike. Well-known security measures such as firewalls and antivirus software come under this control; they protect the network from technical threats.
iii) Administrative Control
This control deals with policies and other processes such as user behavior, administrative powers, etc. It is achieved by granting different levels of privilege to each system in the network; in short, it gives the admin special power to access and rewrite the company's data.
What is Cloud Security?
Cloud security refers to protecting the interests of both the cloud provider and the client in a cloud-based infrastructure. Cloud security is a broader concept than network security, covering the whole corporate structure, since cloud offerings are mostly delivered as a service. Before seeing more about cloud security, let's look at what a cloud is.
Cloud computing is an advanced form of networking where computers connect to a particular cloud or server through the internet instead of physical cables. Cloud services are available in three forms: Infrastructure as a Service, Software as a Service, and Platform as a Service.
Different types of cloud security are adopted for each of the above forms. Though cloud service providers take active steps to minimize the risk, threats are increasing as more businesses migrate to cloud-based services.
Cloud Security Solutions
Here are some well-known cloud security solutions –
i) Identity and Access Management (IAM)
Like administrative control in network security, IAM allows the enterprise to apply policy-driven enforcement and protocols to prevent unauthorized access. Separate digital identities are created for each user to achieve this.
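A toy sketch of the IAM idea in Python (all names and the policy format are invented for illustration; real IAM systems are far richer): each digital identity carries roles, and every access request is checked against policy, with deny as the default.

```python
# Policy: which roles may perform which actions on which resource types.
POLICY = {
    ("admin",   "delete", "bucket"): True,
    ("admin",   "read",   "bucket"): True,
    ("analyst", "read",   "bucket"): True,
}

USERS = {  # a separate digital identity per user
    "alice": {"roles": {"admin"}},
    "bob":   {"roles": {"analyst"}},
}

def is_allowed(user: str, action: str, resource_type: str) -> bool:
    """Deny by default; allow only if some role of the user matches policy."""
    roles = USERS.get(user, {}).get("roles", set())
    return any(POLICY.get((role, action, resource_type), False) for role in roles)

print(is_allowed("alice", "delete", "bucket"))   # True
print(is_allowed("bob", "delete", "bucket"))     # False - not granted by policy
print(is_allowed("mallory", "read", "bucket"))   # False - unknown identity
```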
ii) Data Loss Prevention (DLP)
Offers a set of tools and services to ensure the security of data in the cloud, including data encryption, remediation alerts, backup strategies, etc.
iii) Security Information and event management (SIEM )
It focuses on threat monitoring and detection in cloud-based environments, using AI-driven technologies to correlate events with historical data and guard against potential threats.
Difference Between Network Security and Cloud Security
Now we know the difference between network security and cloud security. Let's summarize what we have seen so far as a set of differences:
- Scope: Network security protects data, applications, and systems connected at the network level; cloud security protects the entire cloud-based infrastructure and is the broader concept.
- Controls: Network security is achieved through physical, technical, and administrative controls; cloud security relies on solutions such as IAM, DLP, and SIEM.
- Responsibility: Network security is handled within the organization; cloud security protects the interests of both the cloud provider and the client.
The evolution of cryptocurrency has taken the world by storm in the past decade. In 2013, there were only 66 types of cryptocurrency worldwide. In February 2022, Statista reports around 10,397 cryptocurrencies that people can invest in.
Cryptocurrency, often referred to as just crypto, is a form of digital money. This electronic currency uses blockchain technology, a secure, digital ledger containing crypto transactions.
While many investors are keen on learning about crypto, following the latest crypto trends and making a profit, it’s critical to weigh the risks associated with trading cryptocurrency.
Below, we’re going to discuss some of the cybersecurity risks associated with investing in crypto, as well as some ways to protect yourself when investing your hard-earned money into digital tokens.
Crypto has the potential to yield significant returns for users, especially because exchange rates are so volatile. However, investing in crypto can be potentially dangerous for those who fail to research or exercise best cybersecurity practices.
As crypto grows and becomes more widely used, the easier it becomes for hackers to use various methods to steal sensitive information and investor assets. In one case, the website crypto.com reported that it lost over $30 million in Ethereum and Bitcoin when hackers made unauthorized withdrawals.
To avoid facing any cybersecurity issues, it’s critical to know how investing in crypto can get you into murky waters. Below are some common cybersecurity risks that come with investing in crypto that you should know.
Phishing attacks are common, even outside of the crypto world. Essentially, phishing is a method hackers use to pose as a reputable company, such as a crypto trading platform, emailing users to get them to perform some action: sometimes clicking on a suspicious link, sometimes forwarding their login credentials.
Hackers rely on phishing scams to have crypto users turn over their digital assets. Spear phishing, DNS hacking, phishing bots and fake browser extensions are examples of common phishing attacks hackers will use to take advantage of crypto investors.
Because cryptocurrency is still evolving, new trading platforms are emerging, hoping to gain the trust of people interested in investing in crypto. However, not all of these platforms are legitimate.
Consider One Coin, for example. One Coin was a seemingly reputable cryptocurrency company that lured users in by promising big returns, but the entire currency system ended up being a scam. It was found to be a multi-level marketing scam that ended up costing people a lot of money. Not every risk associated with crypto comes in the form of a hack or data breach. Sometimes, the fraudulent activity is happening in plain sight.
In some cases, crypto investors will rely on third-party applications or software to manage their digital assets. For example, it’s common for investors to use crypto tax reporting services, but this can open them up to more cybersecurity risks.
It was reported that a hacker was able to steal data from over 1,000 users after breaking into CryptoTrader.Tax. The hacker gained access by entering a marketing and customer service representative’s account, which displayed all kinds of sensitive information that put users at risk.
Essentially, crypto-malware is a form of malware that allows unauthorized users to mine cryptocurrencies using someone else’s computer or server. Hackers will use one of two methods to infect someone’s computer:
- Victims are tricked into installing malware code onto their computers using phishing-like tactics.
- Cybercriminals inject malicious code into websites or ads. When victims interact with them, the code runs and gives hackers access.
In 2018, Forbes reported that crypto-malware had grown by 4,000%.
It’s critical to understand that users access their digital assets by using a “private key,” which is essentially a complex password code. Many users will store their private keys on their computers, but that comes with risk. If hackers gain access to your computer, they’ll also be able to use that private key to log in to your digital account.
Once a private key is stolen, there’s no way of getting it back because cryptocurrency is not highly regulated. Investors are the only ones responsible for keeping their private keys out of the hands of hackers, which makes crypto investing riskier compared to traditional investments.
As mentioned above, crypto is almost like the Wild West because it’s unregulated and a bit of a free for all. Cryptocurrency is decentralized, meaning that no agency, organization or governing body oversees the creation, management or movement of cryptocurrencies.
While some believe the lack of regulation is beneficial, it can have its downsides. China even outlawed cryptocurrency transactions in 2019. More countries will likely crackdown on cryptocurrency regulations because they can breed hackers and scammers.
Because cryptocurrency is not yet widely understood, it can lead to detrimental outcomes for unbeknownst investors. The very nature of cryptocurrency, crypto exchanges and blockchain technology are complex. It can be challenging to understand, even for seasoned investors.
Crypto only exists on the ether of the internet. Unlike traditional assets, like money in your savings account, cryptocurrency is generally less secure, making it riskier for investors.
Above, we’ve identified some of the major cybersecurity risks related to cryptocurrency investments. However, how can you protect yourself when investing in crypto, and what are some of the best cybersecurity practices you can employ?
Here are some specific steps you can take:
- Never share your private key or login credentials with anyone, regardless of if they claim to represent a reputable cryptocurrency company. Consider keeping your key stored on an external device, such as a USB.
- Do your due diligence and research companies and their tokens before investing.
- Don’t respond to unsolicited offers to invest in crypto. Avoid clicking on any suspicious links or ads — this could open you up to more cybersecurity risks.
- Keep an eye on the latest crypto trends, news stories and any announcements related to cryptocurrencies you invest in.
- Use strong, unique passwords at all times to make your online accounts more secure and keep hackers at bay.
Keep all of these risks and cybersecurity practices in mind when investing in cryptocurrency. We’re still learning more about digital money, but it’s always wise to be on the lookout for cybersecurity threats so you can protect yourself and your assets.
There are no signs of cryptocurrency slowing down. As it becomes more mainstream, hackers will use all the tools in their arsenal to target unsuspecting victims.
By understanding the risks that come with investing in cryptocurrency, you are better prepared to fend off hackers and keep your assets safe. Consider using some of the tips above to protect yourself and avoid losing out on any significant investments in the future. | <urn:uuid:9a0b2ddf-d4ff-4b16-beb8-0aac4c471e3b> | CC-MAIN-2022-40 | https://cybersecurity-magazine.com/the-cybersecurity-risks-of-cryptocurrency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00772.warc.gz | en | 0.941497 | 1,388 | 2.859375 | 3 |
IBM plans massive data set to improve facial recognition and reduce AI bias
IBM is planning to make the largest facial attribute and identity training set in the world, with more than a million images, available this fall to help improve the training of artificial intelligence facial recognition systems and reduce bias in algorithms.
The dataset, which IBM says in a blog post is five times the size of the largest one currently available, will be annotated with attribute and identity information, with images drawn from different countries using Flickr geo-tags, and sample selection bias mitigated with active learning tools. While currently available datasets include attributes, such as hair color, or tags identifying that multiple images are of the same person, the new set from IBM will include both. A dataset with 36,000 facial images evenly distributed among ethnicities, genders, and ages will also be released specifically to help identify and address bias.
“As the adoption of AI increases, the issue of preventing bias from entering into AI systems is rising to the judgement, intuition and expertise. The power of advanced innovations, like AI, lies in their ability to augment, not replace, human decision-making. It is therefore critical that any organization using AI — including visual recognition or video analysis capabilities — train the teams working with it to understand bias, including implicit and unconscious bias, monitor for it, and know how to address it.”
IBM showed earlier this year that the error rate of its Watson Visual Recognition service for facial analysis has been decreased nearly ten-fold, according to the post.
IBM is one of the facial recognition providers whose algorithms were tested by M.I.T. Media Lab Researcher Joy Buolamwini when she found major differences in error rates between people of different populations earlier this year.
Microsoft, another of the leading facial recognition providers with algorithms demonstrating bias in the same test, just announced a dramatic improvement in its facial recognition algorithm’s ability to recognize the gender of people with darker skin tones, as it attempts to deal with the same issue. | <urn:uuid:de499c28-c703-4043-a905-72937fafaa33> | CC-MAIN-2022-40 | https://www.biometricupdate.com/201806/ibm-plans-massive-data-set-to-improve-facial-recognition-and-reduce-ai-bias | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00772.warc.gz | en | 0.94879 | 416 | 2.890625 | 3 |
Investigators at the Stanford University School of Medicine and several other institutions have shown that a new type of vaccination can substantially enhance and sustain protection from HIV.
A paper describing the vaccine, which was given to monkeys, will be published online May 11 in Nature Medicine. The findings carry broad implications for immunologists pursuing vaccines for the coronavirus and better vaccines for other diseases, said Bali Pulendran, Ph.D., professor of pathology and of microbiology and immunology at Stanford.
The key to the new vaccine’s markedly improved protection from viral infection is its ability – unlike almost all vaccines now in use – to awaken a part of the immune system that most current vaccines leave sleeping.
“Most vaccines aim at stimulating serum immunity by raising antibodies to the invading pathogen,” said Pulendran, referring to antibodies circulating in blood.
“This vaccine also boosted cellular immunity, the mustering of an army of immune cells that chase down cells infected by the pathogen. We created a synergy between these two kinds of immune activity.”
Pulendran, the Violetta L. Horton Professor II, shares senior authorship of the study with Rama Amara, Ph.D., professor of microbiology and immunology at Yerkes Primate Research Center at Emory University; Eric Hunter, Ph.D., and Cynthia Derdeyn, Ph.D., professors of pathology and lab medicine at Emory; and David Masopust, Ph.D., professor of microbiology and immunology at the University of Minnesota. The lead authors are Prabhu Arunachalam, Ph.D., a postdoctoral scholar at Stanford; postdoctoral scholars Tysheena Charles, Ph.D., and Satish Bollimpelli, Ph.D., of Emory; and postdoctoral scholar Vineet Joag, Ph.D., of the University of Minnesota.
Some 38 million people worldwide are living with AIDS, the once inevitably fatal disease caused by HIV.
While HIV can be held in check by a mix of antiviral agents, it continues to infect 1.7 million people annually and is the cause of some 770,000 deaths each year.
“Despite over three decades of intense research, no preventive HIV vaccine is yet in sight,” Pulendran said. Early hopes for such a vaccine, based on a trial in Thailand whose results were published in 2012, were dashed just months ago when a larger trial of the same vaccine in South Africa was stopped after a preliminary assessment indicated that it barely worked.
Vaccines are designed to arouse the adaptive immune system, which responds by generating cells and molecular weaponry that target a particular pathogen, as opposed to firing willy-nilly at anything that moves.
The adaptive immune response consists of two arms: serum immunity, in which B cells secrete antibodies that can glom onto and neutralize a microbial pathogen; and cellular immunity, in which killer T cells roam through the body inspecting tissues for signs of viruses and, upon finding them, destroying the cells that harbor them.
But most vaccines push the adaptive immune system to fight off infections with one of those arms tied behind its back.
“All licensed vaccines to date work by inducing antibodies that neutralize a virus. But inducing and maintaining a high enough level of neutralizing antibodies against HIV is a demanding task,” Pulendran said.
“We’ve shown that by stimulating the cellular arm of the immune system, you can get stronger protection against HIV even with much lower levels of neutralizing antibodies.”
In the new study, he and his colleagues employed a two-armed approach geared toward stimulating both serum and cellular immunity. They inoculated three groups of 15 rhesus macaques over a 40-week period.
The first group received several sequential inoculations of Env, a protein on the virus’s outer surface that’s known to stimulate antibody production, plus an adjuvant, a chemical combination often used in vaccines to beef up overall immune response.
The second group was similarly inoculated but received additional injections of three different kinds of viruses, each modified to be infectious but not dangerous. Each modified virus contained an added gene for a viral protein, Gag, that’s known to stimulate cellular immunity.
A third group, the control group, received injections containing only the adjuvant.
At the end of the 40-week regimen, all animals were allowed to rest for an additional 40 weeks, then given booster shots of just the Env inoculation.
After another rest of four weeks, they were subjected to 10 weekly exposures to SHIV, the simian version of HIV.
Monkeys who received only the adjuvant became infected. Animals in both the Env and Env-plus-Gag groups experienced significant initial protection from viral infection. Notably, though, several Env-plus-Gag animals – but none of the Env animals – remained uninfected even though they lacked robust levels of neutralizing antibodies.
Vaccinologists generally have considered the serum immune response – the raising of neutralizing antibodies – to be the defining source of a vaccine’s effectiveness.
Even more noteworthy was a pronounced increase in the duration of protection among animals getting the Env-plus-Gag combination. Following a 20-week break, six monkeys from the Env group and six from the Env-plus-Gag group received additional exposures to SHIV.
This time, four of the Env-plus-Gag animals, but only one of the Env-only animals, remained uninfected.
Pulendran said he suspects this improvement resulted from the vaccine-stimulated production of immune cells called tissue-resident memory T cells. These cells migrate to the site where the virus enters the body, he said, and park themselves there for a sustained period, serving as sentinels. If they see the virus again, these cells jump into action, secreting factors that signal other immune-cell types in the vicinity to turn the tissue into hostile territory for the virus.
“These results suggest that future vaccination efforts should focus on strategies that elicit both cellular and neutralizing-antibody response, which might provide superior protection against not only HIV but other pathogens such as tuberculosis, malaria, the hepatitis C virus, influenza and the pandemic coronavirus strain as well,” Pulendran said.
The World Health Organization (WHO) identifies uniformed armed personnel among some of the key populations to be focused upon in the national HIV strategic plans for several countries in sub-Saharan Africa, due to their higher risk for HIV infection compared to the general population .
For example, members of the uniformed armed forces in Congo were found to have higher HIV prevalence compared to the general population (3.8% versus 1.3%) .
Similarly, the Uganda Peoples Defence Forces (UPDF) has been listed among the most at-risk population due to high HIV incidence rates compared to the general population (3.56 per 100 person-years, 95% confidence interval [CI]: 1.49–5.52, versus 2.1 per 100 person-years, 95%CI: 1.1–3.1) [3, 4].
This is attributed to the nature of their occupation characterised by mobility and long periods of separation from their families, which predisposes them to risky sexual behaviours .
Members of the Uganda Police Force (UPF) are potentially likely to follow the same trends in HIV risk since they share similar operational structures as those of the army. A study conducted in Tanzania among members of the urban police force demonstrated high-risk sexual practices including low condom use, resulting in high HIV prevalence and incidence .
Although most of the countries in sub-Saharan Africa have implemented efforts to address HIV in the armed forces, there have been gaps noted in the amount of research in this area .
Evaluation of novel HIV prevention strategies (including HIV vaccine research) necessitates the recruitment of populations with presumed high exposure to HIV and high motivation to remain under study .
A study conducted in a population of police officers in Dar-es-Salaam, Tanzania, demonstrated they were a suitable population for HIV vaccine research due to their high HIV prevalence and high rate of willingness to participate in future vaccine trials .
Thus, the first HIV vaccine trial in Tanzania was conducted in a population of police officers . There is surprisingly little reliable data on the prevalence and incidence of HIV among other uniformed personnel in East Africa, including the UPF, and their suitability as potential participants of HIV vaccine trials.
In this paper, we describe the findings of a study to determine the acceptability and suitability of UPF personnel for future HIV vaccine trials by setting up a cohort study to estimate the recruitment and retention rates as well as HIV incidence rate and associated factors over a one-year period.
Our study shows that it is feasible to recruit and adequately follow up volunteers from a population composed of police force and their relatives for research. We established incidence rates of HIV and syphilis in this population.
However, the data show an unexpectedly low HIV incidence and low syphilis prevalence which might be as a result of our recruitment methods. During recruitment some individuals at high risk of infection were selected out at screening and lost to follow up.
The heterogeneous composition of our cohort, including non-police officers could have caused some form of risk dilution by including low risk individuals explaining the low incidence if HIV despite high rates of condomless sex as evidenced, for women, by the high pregnancy rates.
Of the 2059 individuals who attended the community voluntary HIV testing, only 560 (27.2%) made it to the clinic for further study eligibility assessment. This was because majority of those approached only required HIV testing services and although the study sample size of 500 was attained, the testing service continued.
The observed low incidence of HIV and syphilis is encouraging and may be partly due to the education level and prior HIV/STI -prevention knowledge of those recruited into the cohort. In addition, HIV testing is mandatory for recruitment into the police force in Uganda, and only HIV negative individuals are eligible to stay, this is likely to have contributed to the low prevalence and incidence because we had a number of newly recruited members of the force in the cohort. Of note, the low HIV incidence would make this population suitable for phase I and II trials which require low risk individuals for safety and immunogenicity studies .
The observed decline in the reported risk behaviours over one year could be attributed to the risk reduction counselling that was offered at each quarterly visit. A similar finding was observed in a female sex worker cohort study, by Traore et al. in Burkina Faso, where zero HIV infections were observed following a combination intervention over a two year period.
In another study conducted by Kaul et al. among female sex workers in Nairobi, a reduction in risk taking was observed after an intensive period of risk reduction counselling and regular STI treatment.
In another study, Ghys et al. found that HIV prevention intervention contributed to significant lowering of the HIV-1 seroincidence rate during the intervention study than before the study (6.5 versus 16.3 per 100 person-years; P = 0.02).
However, such a reduction as observed in our study should be interpreted with caution, since it could also be a result of social desirability bias [15, 16].
Our study demonstrated a good overall retention rate in this population. In common with similar studies , we observed that retention was better among volunteers who had lived in the facility longer, a possible reflection of their relative stability, which may also have affected their HIV-infection risk.
Volunteers who reported no knowledge of HIV risk were more likely to be retained compared to those reporting to be knowledgeable, possibly because the latter sought to attend in order to acquire more knowledge.
We observed that loss to follow up was associated with volunteers who reported the most high-risk behaviours as well as those who reported having travelled away from home in the last month. This association may also explain the low incidence observed in the study.
Our study had limitations. Firstly, during screening and recruitment of our study population, we did not systematically recruit to ensure a weighted representation from the different police departments and ranks, giving rise to possible selection bias.
From our anecdotal observations, we noted that the majority of enrolled police officers were from lower ranks, and we had no representation from some of the departments such as the traffic and mobile patrol units, who might differ in terms of the variables and outcomes we were investigating. Secondly, our study was not designed to collect specific reasons why volunteers were lost to follow up, which would be useful in explaining the reasons and so inform possible interventions. Thirdly, the study findings are based mostly on self-report which might potentially introduce social desirability bias if participants choose to modify their responses to mask risk behaviour. However, inclusion of biological information such as HIV and syphilis incidence add credence to our findings.
The study showed it is possible to recruit and adequately follow up volunteers from the community of the Uganda Police Force for participation in future HIV vaccine trials. The low HIV incidence and decline in HIV risk behaviour during follow-up, combined with the favourable retention rate could make this population potentially suitable for Phase I & II HIV vaccine trials, where low risk individuals are required.
However, the surprisingly low HIV incidence in our cohort suggests that those at higher risk of infection (i.e., those in mobile divisions in the force) may have been omitted from our cohort, and such population would not be adequate for Phase III HIV vaccine trials. We recommend more stringent sampling to ensure greater representation of different ranks and divisions (i.e. traffic police) in future epidemiological studies in such similar populations.
More information: T cell-inducing vaccine durably prevents mucosal SHIV infection even with lower neutralizing antibody titers, Nature Medicine (2020). DOI: 10.1038/s41591-020-0858-8 , www.nature.com/articles/s41591-020-0858-8 | <urn:uuid:6e49f89e-bc87-431f-9e11-458dc97a63f4> | CC-MAIN-2022-40 | https://debuglies.com/2020/05/12/a-new-type-of-vaccination-can-substantially-enhance-and-sustain-protection-from-hiv/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00772.warc.gz | en | 0.958531 | 2,949 | 3.21875 | 3 |
Education Technology Services
A deep dive into hybrid learning technologies
The COVID-19 pandemic pushed the entire education sector and billions of students worldwide into a frenzy. Institutes scrambled to cope and maintain continuity in education. The time was ripe for online and, eventually, hybrid learning and teaching technology to plant its feet firmly. As we leave the pandemic behind, hybrid learning technologies brought a tectonic shift that is here to stay. Let us examine how the learning ecosystem benefits from technology in the classroom.
What is hybrid learning?
Hybrid learning combines classroom and remote learning using state-of-the-art technologies. Students can spend 25–50% of their time learning in the classroom and the rest online. The benefits of hybrid learning are evident, and over 77% of academic leaders say that the results of hybrid learning are equal to or better than those of in-person classes. There are three key principles for hybrid teaching and learning:
- Inclusivity: Breaking down the barriers and ensuring that everyone is seen and heard and can share their ideas freely
- Engagement: Ensuring smooth collaboration through a variety of designed experiences
- Ease: Having a range of intuitive and easy-to-navigate in-person and online experiences
The importance of hybrid learning
Hybrid learning offers students opportunities other than merely classroom learning, including:
- Networking opportunities: Pure online learning models don’t offer many co-learning opportunities. But with hybrid learning, you can network and still learn remotely. According to a study, 60% of students feel comfortable with a mix of online and in-person learning.
- Higher accountability and self-discipline: When a teacher isn’t physically present to remind you about due dates and you are not surrounded by other students, it’s an opportunity to build self-discipline, an invaluable skill when you enter the professional world. Hybrid learning also teaches you management and organisational skills.
- Flexibility: Many people study while they manage a job, family, and children. Hybrid learning provides them with the flexibility to manage everything without sacrificing their studies. For example, if your child is unwell and needs attention, you can complete the lectures online until you are ready to return to in-person classes.
- Access anytime, anywhere: Students who opt for in-person classes must live near the campus and be available during the lectures. With hybrid learning, you can take some classes in-person and the rest from a remote location or even another country. Technology also makes it easy for differently abled people to study. You may carry the study material on a tablet and not require physical books.
- Affordability: With all the study material available online on a laptop or a tablet, you can save on accommodation and books. There are still upfront costs involved, but it may be considerably less than that of enrolling in a physical classroom.
Hybrid learning technologies
The following hybrid learning technologies make students active rather than passive learners.
- Online training platforms: Cloud-based training platforms pool the tools and technologies needed for a truly hybrid environment. These include videoconferencing, online exam assessment and scoring, automated alerts, remote submissions, etc.
- Collaboration and active learning: Online training platforms connect student devices together in a private network, making it ideal for secure content sharing and collaboration. Teachers can moderate the environment and monitor the classroom through video feeds from various displays in the classroom.
- Multiple video feeds: Students learning online need to see everything that goes on in the classroom, including the teacher, fellow students, study material, and group partners. Multiple video feeds with routing solutions help remote students follow along.
- Cameras and audio: A central camera and microphone may work in a small class. But in a bigger location, you may need multiple cameras, microphones, and audio devices to make the experience immersive for students connecting remotely.
- Student devices: Students normally work on a bring-your-own-device (BYOD) model, using laptops, tablets, or mobile phones. Using their devices, students can easily share learning material and interact with peers.
For organisations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed organisational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like living organisms will be imperative for business excellence. A comprehensive yet modular suite of services is doing precisely that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organisations that are innovating collaboratively for the future.
How can Infosys BPM help?
The AI-enabled and cloud-based edutech solutions reduce time, effort, and costs. Some of the hybrid learning solutions offered by Infosys BPM are:
- Intelligent assessment services
- Smart virtual event hosting services
- Gamification services
- Enterprise services
- Learner segmentation and recommendation services
Read about all the edutech assessment platform solutions in detail. | <urn:uuid:890811ad-e951-4827-9d44-9ad36840813d> | CC-MAIN-2022-40 | https://www.infosysbpm.com/blogs/education-technology-services/deep-dive-into-hybrid-learning-technologies.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00772.warc.gz | en | 0.930748 | 1,080 | 2.96875 | 3 |
Ransomware is a type of malware that specifically prevents victims from gaining access to their files or their entire system, and to regain control, the victim has to pay a ransom.
In the past, ransomware payments had to be sent via snail mail, but the scheme has evolved to the point where cybercriminals now request payment via credit card or, more popularly, cryptocurrency.
The key difference between ransomware and other types of malware is that it is a money-making scheme—the cybercriminal has a financial incentive—whereas other types of malware may have different aims.
For example, certain Botnets may simply aim to harvest some of your device’s computing power, and other types of malware might just aim at stealing sensitive data for corporate espionage or state-sponsored cyberattacks.
Of course, there are some types of malware made by cybercriminals who simply wish to “watch the world burn,” so to speak.
So how did all this start? Well, in the late 1980s, the first ransomware known as PC Cyborg came into the scene demanding $189 by mail, but the encryption used with this attack was fairly simple and easy to reverse.
But in the years that followed, more serious ransomware threats began to appear, such as GpCode and WinLock.
Why does ransomware exist? Because it’s proven to work out well for cybercriminals! With every company or individual that pays a ransom, criminals get more confirmation that this is a reliable way to make money.
Datto’s report states “The average ransom requested by hackers stayed roughly the same year-over-year. MSPs report the average requested ransom for SMBs is $5,600 per incident, compared to $5,900 last year.”
What started as a relatively harmless virus that only asked victims to cough up $189 has now transformed into a billion-dollar industry. And until we all get better about our cybersecurity, these attacks will continue.
How Ransomware Works
How does ransomware work? Well, that depends on the type of ransomware.
For instance, scareware doesn’t work the same way that doxware works, and vice versa.
Cryptolocking malware works by locking your files with strong encryption that can’t feasibly be broken without the key. The criminal then holds your files hostage and offers to give you the decryption key in exchange for payment.
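To see why a victim can’t simply undo the encryption, consider a deliberately simplified sketch in Python. This toy XOR scheme stands in for the hardened ciphers (such as AES) that real attacks use; the document and key here are purely illustrative:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key; applying it twice restores the original.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

document = b"Quarterly payroll records"
key = secrets.token_bytes(32)  # the attacker keeps this secret

ciphertext = xor_cipher(document, key)   # what the victim is left with
recovered = xor_cipher(ciphertext, key)  # only possible with the key
assert recovered == document
```

Without the key, the ciphertext is indistinguishable from noise, and that is exactly the leverage the attacker sells back to the victim.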
But how does the ransomware get on your computer in the first place?
There are a lot of ways this can happen, whether it’s falling victim to a phishing attack, visiting a spoofed domain, or clicking on a suspicious-looking link in your email.
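One classic tell of the phishing emails mentioned above is a link whose visible text names one domain while the underlying address points somewhere else. Here is a hedged, illustrative check in Python; the domains are made up:

```python
from urllib.parse import urlparse

def looks_spoofed(display_text: str, href: str) -> bool:
    # Flag links whose visible text names a domain that differs
    # from the host the link actually points to.
    shown_url = display_text if "://" in display_text else "https://" + display_text
    shown = urlparse(shown_url).hostname or ""
    actual = urlparse(href).hostname or ""
    return bool(shown) and shown != actual

print(looks_spoofed("mybank.com", "https://mybank.example.ru/login"))  # True
print(looks_spoofed("mybank.com", "https://mybank.com/login"))         # False
```

Real mail filters combine many such heuristics; a single check like this is a teaching aid, not a defense.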
This brings us back to the fact that it depends on the type of ransomware you’re dealing with, so let’s go ahead and take a look at the different versions of this cyberattack.
Types of Ransomware
- Scareware — These were prevalent many years ago. Scareware tries to frighten you into downloading a piece of malware that encrypts your data. Typically, it appears as a message, sometimes a popup, from an entity claiming to be the FBI, saying that bad software has been spotted on your computer and offering to remove it for you. There are also many tech support scams where the scammers claim to be from Microsoft and want to help fix your computer.
- Screen lockers — Screen locker ransomware is when the virus infects your operating system and, as the name implies, locks you out of your computer or devices. This blocks you from accessing any of your files and has the potential to create serious downtime for a company.
- Encrypting ransomware (cryptolockers) — This type of ransomware is among the most dangerous and is most prevalent. This is when the malware encrypts your files, folders, and even your hard drives.
- Doxware — Doxing is when someone publishes private or identifying information about a particular person on the internet, usually with malicious intent. Following that vein of thought, doxware is when a cybercriminal threatens to publish your stolen sensitive data online unless you pay a ransom. This particular form of ransomware has become more prevalent as more and more people share their lives and business information online.
- RaaS — Ransomware-as-a-service (RaaS) is a more recent service that cybercriminals offer to potential scammers. There are always those that develop malware to earn money with less risk as they don’t do the attacking, they just create and sell the ransomware. This also allows non-technical criminals to break into this industry. There are even subscription models for this service.
- Ransomware on mobile devices — As the name suggests, this type of ransomware is specific to mobile devices. They infect your phone and steal your private data before demanding you pay them, often in cryptocurrency, in exchange for the return of your information. These forms of ransomware tend to be encountered as a form of social engineering on social media.
Common Ransomware Targets
Ransomware attacks most often target small to medium-sized businesses (SMBs). Why? Because they tend to have the fewest protections in place while at the same time being the most desperately in need of their data should it be taken hostage.
In contrast, larger businesses and enterprise targets are generally going to be more protected, more secure, and have all their critical data backed up. There are some cases where larger organizations get targeted, but the vast majority of attacks are aimed at small businesses.
So who gets targeted the least? Everyday consumers, who have little to offer cybercriminals.
What is it that gets attacked?
At this point, we know who tends to get targeted, but what is it that hackers are trying to break into?
The usual targets for ransomware attacks include Windows endpoint systems (which is to say, your employees' PCs), software-as-a-service applications, data repositories, and databases.
Datto reported that 91% of ransomware attacks this year targeted PCs. The second-highest number of attacks (76%) were aimed at Windows Servers.
Understand that when we say that SaaS apps are targets of ransomware attacks, we don't mean that you could get malware just from using something like Salesforce, but rather that your Salesforce account might be what the cybercriminals intend to take hostage.
Datto’s data on SaaS application incidents states that:
- 64% of MSPs reported attacks within Microsoft 365
- 54% of MSPs reported attacks within Dropbox
- 25% of MSPs reported attacks within Google Workspace
Why Businesses Should Be Concerned about Ransomware
We've already briefly touched on some of the devastating effects that ransomware can have on your business. To further illustrate that point, here are some of the key reasons that businesses like yours should make ransomware one of their top concerns.
Many SMBs Are Still Unaware of Ransomware’s Threat
As mentioned near the start of this article, many SMBs seem to be unconcerned about the potential threat of a ransomware attack while their MSPs seem very concerned.
When businesses aren’t concerned about a potential problem, they don’t prepare for it, and that makes SMBs a vulnerable target of these types of attacks.
There are, however, many SMBs that are waking up to the problem of ransomware and are taking the proper precautions to avoid becoming another victim of the industry.
Ransomware Attacks Keep Getting Past Security Efforts
Although there has been increased spending on cybersecurity, ransomware continues to bypass security measures, including antivirus, employee education, pop-up blockers, email filtering, and even endpoint detection solutions.
How are cybercriminals managing to get past security? Many MSPs reported in the Datto report that criminals consistently make modifications to their malware to avoid detection, and the social engineering attacks have become increasingly sophisticated and difficult to detect.
This is further reinforced by the fact that 54% of ransomware attacks come from phishing attacks. Despite increased awareness training, many end-users continue to fall victim to social engineering tactics.
Since breaches are rarely limited to a single computer, ransomware attacks tend to create a considerable amount of downtime for businesses as the infection usually spreads throughout the entire business network.
This is one reason why many SMBs simply pay the ransom. To get back to operations, they need their data back, and paying the ransom is almost always cheaper than downtime.
To illustrate this point, consider that the average ransom in 2020 cost around $5,600, whereas the average cost of downtime was around $274,200.
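A quick back-of-the-envelope check of those figures shows just how lopsided the comparison is:

```python
avg_ransom = 5_600        # average ransom in 2020, per the figures above ($)
avg_downtime = 274_200    # average cost of downtime ($)

ratio = avg_downtime / avg_ransom
print(f"Downtime costs roughly {ratio:.0f}x the average ransom")  # roughly 49x
```

At nearly fifty times the cost, it's easy to see why many victims conclude that paying is the cheaper path, even though doing so funds the next attack.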
Who’s Getting Targeted?
The top industries being targeted for ransomware attacks include but are not limited to healthcare, finance/insurance, government, professional services, and education.
While SMBs are the primary target of hackers, MSPs are also being targeted more often. The reason for this is that the hackers figure that they can get to the MSP's clients by hacking into their systems and stealing their credentials.
MSPs are, of course, increasing their security as a response to this growing tactic.
How to Protect Your Organization from Ransomware
Now that we’ve gone through what ransomware is, the usual targets of ransomware, and why it’s still a prevalent threat, it’s time to dig into the practical methods you can use to protect your company from it.
Backup Your Data
Data backup means making copies of the files, emails, and databases within your organization. It begins with a full replication of your company's data across all workstations, servers, and even storage appliances.
Once the initial full backup is complete, future backups need only make updates based on what data has changed since the previous backup was completed.
This process saves a considerable amount of storage, bandwidth, and time, compared to the resource costs of running full backups every single time.
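A minimal sketch of that incremental approach in Python (the logic is illustrative; a production backup tool would also verify checksums, handle deletions, and keep version history):

```python
import os
import shutil

def incremental_backup(src: str, dst: str) -> list[str]:
    """Copy only files that are new or changed since the last backup run."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            rel = os.path.relpath(s, src)
            d = os.path.join(dst, rel)
            # Copy when the destination is missing or older than the source
            if not os.path.exists(d) or os.path.getmtime(d) < os.path.getmtime(s):
                os.makedirs(os.path.dirname(d), exist_ok=True)
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied.append(rel)
    return copied
```

After the first full pass, repeat runs over an unchanged tree copy nothing at all, which is where the storage and bandwidth savings come from.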
By backing up your company's data consistently, you significantly lessen the sway a cybercriminal has over you should they hold your data and systems for ransom. Even if you never get your original data back, you still have the backup.
All critical backup files should be given strong encryption and stored in a safe, secure location accessible only to authorized personnel. This creates additional protection should the cybercriminal also intend to attack your backups.
Having a backup doesn’t solve all your problems, however. If the cybercriminal were threatening your organization with doxware, you would still be at risk of having confidential data go public.
In the unfortunate event that you lose your data to a cybercriminal, your company should be able to fall back on a data recovery plan.
This ensures that any critical information that was lost and not backed up is at least recoverable.
Having strong recovery policies will help make the process of data recovery smooth and efficient. When creating these policies, be sure to consider the following questions:
- Which files are more critical than others?
- Is the way you're currently organizing data effective? Is there a better way it could be done?
- How long does it take to restore backups?
- Who are the key figures who are in charge of restoring data should it be lost?
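One of those questions, how long restores take, can be roughly estimated from backup size and network throughput. The figures below are hypothetical:

```python
def restore_hours(backup_gb: float, throughput_mbps: float) -> float:
    """Rough restore-time estimate: data size divided by effective throughput."""
    seconds = (backup_gb * 8_000) / throughput_mbps  # GB -> megabits
    return seconds / 3600

# e.g. restoring 2 TB of data over a 500 Mbps link
print(f"{restore_hours(2_000, 500):.1f} hours")  # 8.9 hours
```

Running this kind of estimate against your real backup sizes is a quick way to sanity-check whether your recovery time objective is actually achievable.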
To learn more about data recovery, please see our article on Data Backup and Recovery (BCDR).
Use Next-Gen Firewall Security Software
Modern firewalls, often called next-generation firewalls (NGFW), are incredibly effective at defending against ransomware attacks.
This sophisticated firewall software grants your company protection from malware that attempts to enter your network. Traditional firewalls fall short in this capacity.
A longtime player in this specific field is Sophos, whose XG Firewall delivers a suite of offerings that includes public cloud protection, enterprise protection, and other services catered to your needs.
If you do get a next-generation firewall, be sure to keep it updated to ensure that it works properly. This goes for any security applications your business uses. If you don’t regularly update these apps, hackers may find ways to sneak into old versions of the software.
Safe Internet Practices
Phishing attacks are still the primary method that hackers use to break into the critical data of SMBs, so practicing safe internet and email usage is a must.
This involves making sure that your employees use secure networks as they browse the internet and avoid clicking on suspicious links within emails.
If an email looks legitimate but it’s asking for something unusual, your employees should know to notify your IT team to check for a phishing attack.
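As a rough sketch of the kind of checks an IT team might automate, the heuristics below are illustrative only and no substitute for a real email-security gateway (the domains and addresses are made up):

```python
import re
from urllib.parse import urlparse

def phishing_flags(sender: str, claimed_domain: str, links: list[str]) -> list[str]:
    """Return a list of red flags for an email claiming to come from claimed_domain."""
    flags = []
    # Does the sender address actually belong to the claimed organization?
    if not sender.lower().endswith("@" + claimed_domain):
        flags.append("sender domain does not match the claimed organization")
    for link in links:
        host = urlparse(link).hostname or ""
        if claimed_domain not in host:
            flags.append(f"link points outside {claimed_domain}: {host}")
        if re.fullmatch(r"[\d.]+", host):
            flags.append(f"link uses a raw IP address: {host}")
    return flags

print(phishing_flags("support@mybank-support.example", "mybank.com",
                     ["http://mybank.secure-login.example/reset"]))
```

A clean email (matching sender domain, links to the real site) returns an empty list; mismatched domains produce flags worth escalating to IT.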
A great way to mitigate the risk of employees encountering cyberattacks online is to implement a company-wide security awareness program. The program would support your employees by helping them stay informed about changes to cybersecurity, cyberattacks, and rising threats.
What To Do if Someone in Your Organization Has a Ransomware Infection
How should your company respond to a ransomware attack?
Firstly, don't pay the ransom, as this may get you into more trouble. Reuters recently released an article addressing how your company could be prosecuted for paying a ransom to cybercriminals.
Another reason to avoid paying the ransom is that doing so supports cybercriminals. The primary reason hackers continue to attack companies in this way is that they’ve successfully made money from the scheme.
If everyone cut off the cybercriminals’ cash flow by not paying ransoms, these kinds of attacks would become less frequent, if not disappear altogether.
Instead, the first thing you do in response to the attack is to isolate the infected device to stop the spread of the infection. This can be achieved simply by disconnecting the device from your network/internet.
Once that’s done, take stock of the damage and identify what data has been affected. You want to know what data you’ve lost in part so that you can know what data needs to be restored (if the relevant backups exist, of course).
The next thing to do is to identify the type of ransomware you’re dealing with. Once it’s been properly identified, or even if you can’t identify it, report the attack to the authorities.
If your business has a disaster recovery plan, this is the time to implement it. If you know what data has been lost you should be able to restore it from your backups.
The most common ransomware recovery method involves using a re-imaging machine, which restores the infected device from a backup.
Don’t Let Your Data Become Hostage for Ransom
There may eventually come a day when avoiding the threat of ransomware is as simple as downloading a single application, but until that day comes, it’s still largely up to you and your employees to keep your data safe from cybercriminals.
By reaching this point in the article, you’ve familiarized yourself with what ransomware is, how it breaks through your security, and what you can do about it. Knowing this information is one thing, but if you aim to keep your private data safe, you’ll have to take appropriate action.
What is Ransomware? — Ransomware is a type of malware that prevents victims from accessing personal data and or entire systems. To regain access to the victim’s data, the hacker demands a ransom.
How Ransomware Works — It depends on the type of ransomware you're dealing with. Cryptolocking ransomware locks your files with strong encryption, while screen lockers shut you out of your device entirely. Doxware steals your private data, and the hacker threatens to make it public unless a ransom is paid. Ransomware-as-a-service lets cybercriminals sell ransomware to less tech-savvy criminals. There is also ransomware that targets mobile devices, usually spread through social engineering attacks on social media.
Common Targets — Ransomware is most often aimed at SMBs because they tend to be less protected against these types of attacks, whereas larger enterprise companies usually have strong guards against it.
Why Businesses Should Be Concerned — If your company isn’t concerned about ransomware then you probably won’t invest money or time in the relevant security measures, which makes your company a more likely target of these types of attacks. Another reason to be concerned is that an attack like this can have costly repercussions, in large part due to the downtime created from the attack.
How to Protect Against Ransomware — Back up your data, create recovery policies, utilize next-generation firewalls, and establish and implement safe internet practices.
What to do in the event of a Ransomware Infection — Don’t pay the ransom. Paying the ransom may be illegal, and it also supports this type of criminal activity. Instead, what you should do is isolate the infected device and cut it off from the rest of your network, find out what data was affected, identify the type of ransomware you’re dealing with, and implement your disaster recovery plan.
Avoid Unnecessary Downtime with Ransomware Security
The amount of time and energy that goes into recovering from a ransomware infection is much more than the costs of investing in ransomware prevention. As the old adage goes, an ounce of prevention is worth a pound of cure.
Although ransomware attacks are still very prevalent in today’s business world, not all who get attacked experience downtime; those who don’t tend to have implemented effective business continuity and disaster recovery solutions.
The Datto report, which was mentioned earlier in this article, indicated that BCDR clients are among the least likely to experience significant downtime as a result of a ransomware attack.
Unlike other cookie-cutter security services, we take a holistic approach to understanding your business’s security strengths and vulnerabilities and then work to address them accordingly. | <urn:uuid:469b9eb8-c06b-44ba-9cb4-ca0932177b7b> | CC-MAIN-2022-40 | https://blog.commprise.com/en/what-is-ransomware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00772.warc.gz | en | 0.946417 | 3,762 | 3.484375 | 3 |
A new report from the Chartered Institute of Insurance warns that Big Data could create an “underclass” of people who cannot afford insurance.
Big Data has been heralded as the future of business, with computer algorithms and machine-learning increasingly being used in tandem to enable companies to gather and act upon new insights. For example, such data could tell a business how a machine is performing on a production floor, or which product is selling best in a retail store.
There has been excitement for Big Data in the insurance world too, with risk managers seeing the potential to know more about their policyholders, and how much of a risk they really are. By knowing more about their customers, and how they behave, insurers believe they can improve risk management, reduce the likelihood of having to pay out on a claim, and ultimately improve their own bottom line.
In addition, through Big Data, insurers believe that they can price more effectively and ultimately advise their policyholders on how to lead a healthier and safer lifestyle.
Despite this, the new paper from CII warns that the Big Data approach threatens to destroy the insurance market model of pooling risk.
“Data is a double-edged sword,” said David Thomson, director of policy and public affairs at the CII, in the report. “The insurance sector needs to be careful about moving away from pooled risk into individual pricing. They need to think about the broader public interest.”
The report says that the concept of pooling risk “underpins the effectiveness of insurance cover”.
“Some people may be identified as such high risk to insurers that they are priced out of insurance altogether,” adds the report.
“Big Data could, in effect, create groups of ‘uninsurable’ people. While in some cases this may be to do with modifiable behaviour, like driving style, it could easily be due to factors that people can’t control, such as where they live, age, genetic conditions or health problems.”
The ethical issues of Big Data
Experts say that pricing around health and, in particular, genetic data is contentious. For example, some insurance professionals have questioned at what point an insurer intervenes in the event of a serious incident – like a heart attack – while basing pricing on genetic conditions seems unfair and exclusive to many.
The UK government acted on the latter in 2000, signing an agreement with the Association of British Insurers (ABI) in order to stop the insurance industry from using predictive genetic test results. That agreement runs until 2019, although a review is due later this year.
“You could price people out of the market for health products. There’s a danger insurers will not offer health cover to some people. The government would intervene if people are doing social sorting,” added Thomson.
Swiss Re customer technology manager Oliver Werneyer touched on some of the difficulties of IoT and Big Data at our recent Internet of Insurance summit.
“It’s great to figure out those people that are now healthier than they were, and for those you can give discounts. That’s exciting, except you now have people that are not as healthy as you thought they were.”
Spiros Margaris, VC and senior advisor at http://moneymeets.com, kapilendo.de, dser.de and ranked No. 1 Fintech Influencer by Onalytica, told Internet of Business that these dangers could be allayed by emerging InsurTech start-ups.
“There is a danger that with Big Data insurance companies will not insure some people anymore and therefore some people might fall through the cracks. Though I truly hope and believe that InsurTech (insurance technology) start-ups would pick up where others fail.
“The Fintech (financial technology) industry’s greatest achievement will be to provide the unbanked – or in this case the uninsured – a possibility for a better life.”
Taking place on 27-28 September in New York, the Internet of Insurance is exploring the profound impact of IoT on insurance business models and customer relationships. Featuring case studies from USAA, Progressive, Liberty Mutual and more – email [email protected] for more information | <urn:uuid:8a5781dd-e648-4957-b73f-d695747a7323> | CC-MAIN-2022-40 | https://internetofbusiness.com/big-data-make-people-uninsurable/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00772.warc.gz | en | 0.956235 | 884 | 2.6875 | 3 |
Understanding the differences between virtual reality, augmented reality and mixed reality
Virtual reality is hot, and enterprise- and consumer-facing organizations are eager to figure out how they can take advantage of the new medium, whether it be for entertainment, productivity, sales, or a myriad of other potential uses.
However, sometimes lost in all this excitement is the difference between virtual reality platforms and whether the required technical underpinnings are in place to deliver a satisfying user experience. It’s important to understand what virtual reality, augmented reality, and mixed reality are in relation to each other, as well as the technical considerations that those hoping to create experiences for these platforms need to keep in mind.
Virtual Reality defined
Virtual reality, or VR, is often used as a blanket term for all digital-reality variations. But in practice, it’s a specific kind of experience. While AR and MR incorporate some aspect of the real environment around the user, VR refers to a 100% virtual, simulated experience. VR headsets cover the user’s field of vision and respond to eye and head movements and shift what the screen displays accordingly, thus creating the illusion that the viewer is actually inside the other location or world.
Virtual reality is exceptionally sensitive to lag and slowdown—delays between when an input is placed and when the system reacts to it, and noticeable disruptions in the consistent stream of data being delivered, respectively. A significant portion of its value proposition includes the experience of actually being transported somewhere, and thus a frozen screen or patch of pixelated haze smashes that illusion quickly, ruining the experience for many—and in some cases causing motion sickness.
When an event is broadcast in VR—a concert, sporting event, or ceremony, for example—camera rigs that capture 360-degree (or 180-degree depending on the event) panoramic views are needed to provide the viewer with the ability to look at every angle. This requires a number of lenses and thus multiple video streams moving side-by-side. To transmit this information, copious amounts of bandwidth are required—up to 4 to 5 times as much for 360-degree video compared to regular video, according to YouTube’s Anjali Wheeler.
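As a rough illustration of what that multiplier means in practice (the 5 Mbps baseline is an assumed figure for a regular HD stream, not from the article):

```python
baseline_mbps = 5                        # assumed bitrate of a regular HD stream
multiplier_low, multiplier_high = 4, 5   # per the YouTube estimate above

low = baseline_mbps * multiplier_low
high = baseline_mbps * multiplier_high
print(f"360-degree stream: roughly {low}-{high} Mbps")  # roughly 20-25 Mbps
```

Scale the baseline up to 4K and the 360-degree requirement quickly outgrows what many home connections can sustain.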
Further complicating bandwidth requirements is whether or not the content being streamed to the VR device is “live,” in real-time. If so, the bandwidth requirements are significantly higher.
Live VR can take two forms: live as in watching an event as it occurs, and "live" involving interaction with others within a virtual environment. The former is like watching an extremely immersive movie: the unit passively accepts the data stream from the network, which requires a low-latency, high-bandwidth connection to achieve high video throughput. The latter, which enables interaction between the VR source and multiple users, requires latency so low that it causes no noticeable delay, even as data moves back and forth between the individual connected VR units and servers.
Augmented Reality blurs the line between real and digital
The crucial difference between VR and AR lies in the way digital content is mixed with reality. Augmented reality (AR) doesn’t block out the world around the user in favor of a new, fabricated one; rather, it places a digital layer between the viewer and reality.
AR units are at least semi-transparent, allowing the user to see the world around them even as web pages, graphs, maps, and more are displayed in front of them (think Google Glass). This kind of technology also allows engineers and designers to see and manipulate models of what they’re working on alongside or overlaid onto their current work. Similarly, a surgeon could use an AR visor to highlight specific anatomy, pull up a model of an organ for reference, or help train other surgeons.
AR can also work through mobile phones using the integrated camera, and is the kind of virtual reality that powers Pokémon Go, the mobile app that surpassed Twitter in active daily users a week after its release. With Go, players wander around in their cities and towns and try to catch creatures that appear on top of the real world through their phone’s cameras. The game’s massive adoption shows the average consumer’s appetite for AR technologies.
Augmented reality is the most versatile of the "VR" technologies today. Unlike VR systems that require users to remain tethered to a stationary unit, AR is typically delivered through a visor or portable screen that allows for mobile use.
The challenge is that the computational ability of a unit small and light enough to comfortably be supported by a human head is considerably small, and as such, services and content that work with AR units, for now, need to be low-bandwidth and require minimal computing power and minimal battery consumption.
If AR is to enter the mainstream and live up to the technology's potential, there needs to be a way to deliver content to the devices with high bandwidth and low latency while allowing compute to happen outside the unit. According to GSMA Intelligence, generic AR applications will require upwards of 100Mbps of bandwidth and latency approaching 1ms, difficult specs for a device you can walk around with.
By offloading compute functions from the portable device to shared data center resources, power consumption of the device can be reduced, and sharing of the compute environment across many users can be maximized. Again, the challenge here is latency, as it must be low enough so as to maintain the experience.
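One way to see why latency matters: at a 90 fps refresh rate, every stage of an offloaded-rendering pipeline must fit inside a frame budget of roughly 11 ms. The render and codec times below are hypothetical figures for illustration:

```python
def frame_budget_ms(fps: int) -> float:
    """Milliseconds available per frame at a given refresh rate."""
    return 1000 / fps

budget = frame_budget_ms(90)   # ~11.1 ms per frame at 90 fps
network_rtt = 1.0              # the ~1 ms round-trip target cited above
render = 7.0                   # hypothetical remote-render time
encode_decode = 2.0            # hypothetical codec overhead

spent = network_rtt + render + encode_decode
print(f"{budget:.1f} ms budget, {spent:.1f} ms spent, "
      f"{budget - spent:.1f} ms headroom")
```

Even with a 1 ms network, nearly the entire budget goes to rendering and coding, which is why every extra millisecond of access-network latency directly threatens the experience.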
Mixed Reality: blending VR and AR into an interactive world
Mixed reality—MR—is a combination of both virtual and augmented reality. Whereas in AR, digital content is simply overlaid onto the real environment being viewed—typically informational content such as a timetable when looking at a train—with MR platforms, the digital world is integrated into the real world in an interactive way. MR use cases are diverse, and reach the everyday consumer. For example, a homeowner could sample new furniture or paint colors as they would appear in their living room, without having to move the existing sofa.
As a combination of both AR and VR, mixed reality needs to be built upon technology capable of both: high bandwidth, low latency, and able to allow the user to explore a digital 360-degree space while the headset reacts to the environment around it. We’re very early in the development of MR technologies, so it’s too soon to say with accuracy what the required network resources will be for these devices, but we know they’ll be significant, and will require an even more robust and flexible network than both VR and AR.
Luckily, we have a little time to figure this out—though there are a number of people anticipating MR to be huge, so we had better sort it out before too long. The MR startup Magic Leap has, despite only showing its technology to a few individuals (and never publicly), raised $1.4 billion in funding, and Microsoft has been developing its own MR platform, HoloLens, for some time now, and has even begun shipping development kits.
Building toward adoption
The key to unlocking the potential of these platforms lies in making sure massive amounts of data can be transferred without being slowed down or limiting the experience. In short, this means we need to look to 5G and the infrastructure to support it, as well as improved wireline access.
Bringing the bandwidth to the user from a wireless perspective will be important. Off-loading compute functions from the device to the cloud will free up significant bulk in the lens or headset. Importantly, there must also be an access infrastructure in place to enable low latency connections to the servers supporting VR platforms. Wi-Fi will allow users to be untethered inside the building, while the bandwidth and latency specifications being driven for 5G will help enable untethered access outside the building.
In addition to wireless, going the wireline route offers the potential to take advantage of fiber technologies to push bandwidth from hundreds of Megabits-per-second into Gigabits, whether into a home or enterprise. Real-time, ultra-high-definition content with high QoE for multiple users is going to require that level of access bandwidth.
There are so many exciting, aspirational VR stories about all the things we can do with the many permutations of this technology, in the enterprise, healthcare, education, and more. The use cases are worth working toward, but we haven’t yet focused enough on the network infrastructure to make them a reality. We need to get to the point in which these capabilities become seamless, and we’re not there just yet. | <urn:uuid:e1a1955c-e936-4896-a5c2-fbfa8a822218> | CC-MAIN-2022-40 | https://www.ciena.com/insights/articles/Understanding-the-differences-between-virtual-reality-augmented-reality-mixed-reality.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00772.warc.gz | en | 0.942332 | 1,777 | 2.984375 | 3 |
With the pandemic still a problem for normal school programs, many institutions have turned to digital tools to keep things going. But with it comes the ugly reality of having to deal with data breaches and theft of students’ private data. This means that students and teachers have to find a way to deal with online vulnerabilities. As a student, you may need to keep in mind several things to ensure your data is safe from those who want to steal it. Here are some hints to stay safe online at school and at home.
- Keep Your Social Media Secure
It’s become almost impossible to engage with teachers and other students without having to rely on Facebook, Twitter, and Instagram. These social apps aren’t only crucial in keeping in touch with school work, they are also a way for students to escape the harsh school environment for some fun. To stay safe on these apps, make sure you review their privacy settings and keep your information hidden from those who don’t need to see it. For instance, for Facebook, you can make your account only visible to friends and ensure that only people you follow can see your tweets on Twitter.
- Only Use Websites and Apps that Encrypt Information

Before entering personal details on a site, check that its address starts with HTTPS and shows the padlock icon in your browser. HTTPS encrypts your data in transit, so anyone snooping on the network can't read what you send.
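A tiny illustration of the check your browser does for you, flagging URLs that won't encrypt your data in transit (the school-portal URLs below are made up):

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Return True if the URL will encrypt data in transit."""
    return urlparse(url).scheme == "https"

for url in ["https://portal.myschool.example/login",
            "http://portal.myschool.example/login"]:
    print(url, "->", "OK" if uses_https(url) else "unencrypted!")
```

The same one-line check is worth doing by eye every time a page asks for a password or payment details.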
- Use a VPN
A virtual private network service can also protect your information from being stolen by hackers. It protects data such as location information. You can use a VPN if you are unsure whether the wi-fi you’re connected to is secure. It’s highly recommended that you use a VPN when using a public network.
- Use strong and unique passwords
Try to use a password that cannot be easily guessed by anyone who may be interested in your information. Cybercriminals sell millions of passwords on the dark web; don't make it easy for them. If you cannot come up with strong passwords, you can use a password manager that helps you generate uncrackable passwords. Also, make sure you don't use the same password on more than one online platform. Again, you can find apps that help you securely store your passwords.
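A small sketch of what a password generator does under the hood, using Python's cryptographically secure `secrets` module rather than the predictable `random` module:

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())
```

Each call produces a fresh, unguessable string, which is exactly why a password manager that generates and stores passwords for you beats anything you could memorize.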
- How do you protect your assignments?
So far we've looked at how to protect your personal information, but how can you protect your schoolwork from being plagiarized? Let's say you've been working on a paper (it could be a lab paper, essay, etc.) that is taking you a long time, and you are worried that it could be stolen by hackers (it happens). The best thing to do here would be to entrust your assignment to a professional. Millions of students trust these professionals to keep their papers safe, and all you have to do is search for someone to do your assignment online and you will have access to the best professionals. Remember, as much as everything else needs to be secured, nothing matters more than your schoolwork.
- Don’t Fall for Phishing Scams
Cybercriminals notoriously use phishing scams to steal your information, and if you are not careful, you could give a criminal access to your crucial data. To be safe from these scams, don’t click on suspicious links, avoid giving financial information in emails you send out, and check to ensure that the email you are replying to is from a trusted sender. If you fall victim to such a scam, quickly follow up with an email to the authorities. Change your financial information immediately if you can and contact your bank.
- Read the Terms and Conditions
It sounds like a tedious chore, but you could save yourself from a lot of trouble by going through a website or an app’s T&Cs. For instance, how else would you know if the app you are using to scan your face isn’t sending out your personal information to third parties?
- Don’t give out personal information
This point may sound obvious, but it cannot be emphasized enough: Do not share your personal information carelessly on the internet. If you have to share something personal, at least make sure that you know who you are sharing it with. All it takes is a slightly determined cyber thief to make everything come crumbling down for you.
- Don’t download attachments from strangers
An email attachment from an unknown sender can be the gateway for all kinds of cyberattacks and information theft. Malware and phishing scams are run this way. If your device is connected to a more extensive network, the damage could be even more devastating.
- Avoid Unprotected Public Networks
Anyone with malicious intentions can easily access public networks that are not protected. By simply connecting to such a network, a criminal can get access to literally everything on your computer or phone. Keep this in mind before you desperately connect to a public wi-fi that seems free and easy.
As much as the internet has been a great place to keep education going during this pandemic, it has also laid bare the need to protect personal information from cybercriminals. These are only a few hints that you can use to protect yourself during this time.
Brandon Kryeger is a freelance writer and journalist. He loves writing about movies and books. When he’s not writing, he loves to cook and enjoys outdoor activities like swimming and cycling. | <urn:uuid:18b1c227-517f-4518-8b2f-65aceb32d425> | CC-MAIN-2022-40 | https://gbhackers.com/ten-cybersecurity-hints-for-students-at-home-and-in-college/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00172.warc.gz | en | 0.950283 | 1,176 | 3.0625 | 3 |
How does cyber crime affect the global finance profession?
Computers play a huge role in the day to day role of finance professionals. According to a recent report, said professionals are right in the line of fire for cyber crime.
According to the report titled Cybersecurity – Fighting Crime's Enfant Terrible, the theft of financial assets through cyber-intrusions is the second largest source of direct loss from cyber crime.
The report also states a key factor to take note of is that cyber security is no longer a purely technical issue. In fact, it is the impact of a cyber-breach that is typically felt across every aspect of a business.
“What is needed, but is still often lacking, is a strategic approach to mitigating cyber crime risks,” the report says.
“Professional accountants and finance professionals can, and should, play a leading role in defining certain key areas of such an approach.
According to the report, these key areas include:
- Creating reasonable estimates of financial impact that different types of cyber security breaches will cause, so that a business can be realistic about its ability to respond to an attack and/or recover from it.
- Defining risk management strategy.
- Helping businesses to establish priorities for their most valuable digital resources, in order to implement a ‘layered' approach to cyber security.
- Closely following the work of governments and various regulators, in order to have clear up-to-date information on relevant legislation and on requirements for adequate disclosure and prompt investigation of cyber security breaches.
The report quotes a survey that showed 48% of respondents were more concerned about cyber crime than they had been 12 months earlier. A total of 85% respondents in Asia had changed their opinions.
The report also says that there really is no ‘silver bullet' solution for cyber crime.
"Even though the benefits that the Digital Revolution has brought us are truly remarkable, but these benefits do not come free." | <urn:uuid:fbc7d8ea-7897-4bb1-9355-a3e9c63b48c5> | CC-MAIN-2022-40 | https://securitybrief.asia/story/how-does-cyber-crime-affect-global-finance-profession | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00172.warc.gz | en | 0.957479 | 406 | 2.53125 | 3 |
Teachers use learning apps, digital tools, technology devices and next-generation technology to promote a fun environment, facilitate communication among students, give every student access to technology, and make learning more personal.
Pearson released its assessment of student engagement using digital learning, including ways that schools use technology to assure intellectual, emotional, physical, and social engagement.
The U.S. Department of Education said that using technology in the classroom can increase productivity, reduce costs of materials, and deploy a new model of connected teaching. The model links students and teachers to learning content that improves instruction and meets each student’s individualized needs.
- Customized Project Programs
“Students embrace learning that allows choice and sparks curiosity,” Pearson said in an infographic. Teachers can encourage intellectual engagement by using programs such as Summit Academy that scales a list of projects created for each student based on data about their academic level.
The program uses smartphones as a virtual reality portal for students to experiment using the Hubble Telescope and the International Space Station.
- Systems that Use Online and Blended Learning
“Students learn best when they feel ‘known’ and cared about as learners,” Pearson said. Schools such as New Directions in Prince William County, Va., and Brooklyn LAB in New York City promote emotional engagement by using virtual spaces to coach students through their personalized learning pathways.
Schools are using Pokémon Go along with maker experiences to encourage students to learn using their whole body and increasing physical engagement in classes other than physical education.
Schools that allow Facebook on their servers can utilize this typically used socializing tool to encourage students to communicate and collaborate on projects, which enables social engagement.
Unlike Facebook, Edmodo is a social media site designed with students, teachers, and parents in mind. Edmodo provides a safe and easy way for teachers and students to communicate, while keeping parents informed.
Wikispaces Classroom allows students to talk with one another about group assignments and teachers to measure performance in real time and give feedback and support as needed.
Forty-eight states and the District of Columbia use some type of online learning to supplement instruction or to enroll students in a full-time online learning institution, according to the Department of Education. A combination of programs and apps could be integrated into the curriculum to foster different types of engagement. | <urn:uuid:4d9ea42c-03fa-4c87-a715-3980e024d49e> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/7-online-tools-that-increase-student-engagement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00172.warc.gz | en | 0.934666 | 478 | 4.09375 | 4 |
When we look at information on a screen or on a piece of paper, our brains perceive that information, process it, and categorize it, seemingly within an instant. Our brains are powerful in that way, capable of reading 200-400 words in a single minute. Even more impressive is the research done by a team of neuroscientists at MIT that found the human brain can process an entire image after seeing it for as little as 13 milliseconds.
Modern technology is starting to catch up with the capabilities of the human brain in some areas. Intelligent Document Processing, or IDP, is a powerful technology that can extract and process information on a level not unlike our brains. IDP processes documents with a high degree of speed and accuracy—frequently outperforming human employees. IDP technologies allow for the extraction of structured data from completely unstructured sources. The data extracted using IDP can then be used for intelligent automation of an end-to-end business process. After being relieved of repetitive and basic document processing tasks, your workers can then spend more time on the more challenging work that only they can do.
IDP can also be called document automation—an end-to-end system of processing documents at scale. Read further to learn how IDP works, and see some valuable intelligent document processing use cases for diverse business fields. (For even more support of IDP’s usefulness, check out these IDP statistics!)
Learn about our intelligent automation platform and low-cost Quick Start Program to find out how it can save you time and money.
Intelligent Document Processing evolved from a technology called Optical Character Recognition, or OCR. OCR has been around quite some time and still has a role to play in IDP, but combining it with more advanced Artificial Intelligence (AI) technologies makes it much more accurate and useful.
When OCR is combined with AI technologies—such as Natural Language Processing (NLP), Machine Learning (ML), and Computer Vision (CV)— you can consider the result to be IDP, or holistic document automation. In IDP, these technologies work together, making it possible for pixels to be translated into characters, recognized as text, combined to make words, and analyzed to ensure the words make sense in context.
All of these cross-checks allow for better extraction of content than a business could achieve with any one of the technologies on its own.
|NOTE: IDP isn’t just about paper documents. These same technologies can be used to process unstructured data on screens, allowing IDP to retrieve data from virtually anywhere and send it to the appropriate place. This is especially useful for legacy systems or anything that must be accessed through a terminal emulator.|
Businesses in all industries occasionally need to reconcile data—cross-check and verify it by comparing it with data located in a different place. This level of document automation is extremely important for industries like insurance, health care, and banking (more on the latter two below). An effective cross-checking method produces a higher accuracy rate, a higher volume of documents processed, and overall faster processes.
Insurance companies have many reconciliation use cases that involve comparing and validating from multiple different sources. They use IDP to perform these reconciliations quickly and accurately with detailed reports produced and used for other downstream tasks, such as account creation, custom policy quoting and underwriting.
Using IDP for data reconciliation saves thousands of staff-hours and can reduce process turnaround time by more than 50%, as it has for multiple Nividous insurance clients.
IDP is very helpful in document automation for healthcare. IDP can assist medical providers with patient identity verification and insurance pre-authorization: Medical providers, after scanning a patient’s insurance card or ID, can use IDP to instantly extract the patient’s member ID and insurance and validate coverage using the appropriate provider portal, to ensure that a patient will be covered for a given visit or procedure within minutes, if not seconds.
Nividous has also employed IDP to help large healthcare companies process seemingly insurmountable numbers of incoming faxes—each file hundreds or thousands of pages long.
Here are some other examples of what IDP can do in health care:
In all these cases, IDP was invaluable in reducing manual effort. It can increase health care productivity as much as 65% and reduce costs by 45% when applied properly.
In one case, IDP was deployed successfully to help a life sciences company that works with pharmaceutical companies on their drug applications.
The drug application process requires pulling in large amounts of data from disparate sources and eventually culling it all down into the final application document. For many pharmaceutical companies, it requires hours of manual effort to sort through sources and put them together in the necessary application.
Nividous used IDP to help this company automate the initial data-gathering and application-building processes. A human worker then takes over to review the different sources and elements of the drug application. This change meaningfully reduced the manual labor involved, as well as time and money. Read the full case study here.
Loan processing is a common use case for IDP in finance.
Loan applications, whether they are constructed manually via handwritten notes from a loan officer or on a web-based form, require a lot of data from many different sources in order for the loan application to be submitted, reviewed, approved, disbursed and serviced. IDP can extract data from the initial application and insert it into the bank’s own loan origination application. This data, retrieved within an instant, allows bank employees to quickly review loan terms and decide whether an applicant will be approved.
IDP can also assist banking and finance firms by:
In IT, IDP can be used to support the help desk.
Customers get support for products and services by calling, emailing, chatting, or even texting. When the IT department receives that information, they have to triage to figure out what departments to contact to solve your problem.
IDP makes it possible for IT to perform that processing much faster. For example, IDP can analyze the text of an email to determine whether the sender is happy, discouraged, or irate about your product or problem. This insight can then be used to route the message to a help desk associate who’s best equipped to help you with your problem, leading to an increase in customer satisfaction and reduced turnaround times.
Whether you’ve pinpointed a specific Intelligent Document Processing use case at your own business, or you simply have a business challenge you suspect IDP and intelligent automation could solve, we’d love to hear from you. Reach out to us today to meet with one of our experts! | <urn:uuid:141e294a-e0ff-4610-8a34-3dc4cb347428> | CC-MAIN-2022-40 | https://nividous.com/blogs/intelligent-document-processing-use-cases | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00172.warc.gz | en | 0.947739 | 1,381 | 2.6875 | 3 |
Scientists have discovered a new material with improved chemical stability, lightness, and flexibility. Which used to make smartphones and other devices that are less likely to break.
Currently, most parts of smartphone made of silicon and other compounds, are expensive and break easily. But with almost 1.5 billion smartphones purchased worldwide last year. Manufacturers are on the lookout for something more durable and less costly, researchers said.
Researchers, including those from Queen’s University Belfast in the UK. Found that by combining semiconducting molecules C60 with layered materials. Such as graphene and hBN, they could produce a unique material technology, which could revolutionize the concept of smart devices.
van der Waals solids Process
The winning combination works because hBN provides stability, electronic compatibility, and isolation charge to graphene. While C60 can transform sunlight into electricity. Any smart device made from this combination would benefit from the mix of unique features. which do not exist in materials naturally.
This process is called van der Waals solids. Allows compounds to be together and assembled in pre-defined way.
The material also could mean that devices use less energy than before because of the device architecture. So could have improved battery life and less electric shocks,” said Santos. One issue that still needs to be solved is that graphene and the new material architecture is lacking a ‘band gap’.Which is the key to the on-off switching operations performed by electronic devices, researchers said. However, the team is already looking at a potential solution transition metal dichalcogenides (TMDs). These are a hot topic at the moment as they are very chemically stable, have large sources for production and band gaps that rival Silicon. | <urn:uuid:8ef21ed7-00b1-4d70-bc3b-69b567f1f1d4> | CC-MAIN-2022-40 | https://areflect.com/2017/06/04/new-material-can-make-unbreakable-smartphones/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00372.warc.gz | en | 0.960003 | 353 | 3.625 | 4 |
MS has long been characterized as a disease of the brain’s white matter, where immune cells destroy myelin—the fatty protective covering on nerve cells.
The destruction of myelin (called demyelination) was believed to be responsible for nerve cell (neuron) death that leads to irreversible disability in patients with MS.
However, in the new findings, a research team led by Bruce Trapp, Ph.D., identified for the first time a subtype of the disease that features neuronal loss but no demyelination of the brain’s white matter.
The findings, published in Lancet Neurology, could potentially lead to more personalized diagnosis and treatments.
This new subtype of MS, called myelocortical MS (MCMS), was indistinguishable from traditional MS on MRI.
The researchers observed that in MCMS, part of the neurons become swollen and look like typical MS lesions indicative of white matter myelin loss on MRI.
The disease was only diagnosed in post-mortem tissues.
The team’s findings support the concept that neurodegeneration and demyelination can occur independently in MS and underscore the need for more sensitive MRI imaging techniques for evaluating brain pathology in real time and monitoring treatment response in patients with the disease.
“This study opens up a new arena in MS research.
It is the first to provide pathological evidence that neuronal degeneration can occur without white matter myelin loss in the brains of patients with the disease,” said Trapp, chair of Cleveland Clinic’s Lerner Research Institute Department of Neurosciences.
“This information highlights the need for combination therapies to stop disability progression in MS.”
In the study of brain tissue from 100 MS patients who donated their brains after death, the researchers observed that 12 brains did not have white matter demyelination.
They compared microscopic tissue characteristics from the brains and spinal cords of 12 MCMS patients, 12 traditional MS patients and also individuals without neurological disease.
Although both MCMS and traditional MS patients had typical MS lesions in the spinal cord and cerebral cortex, only the latter group had MS lesions in the brain white matter.
Despite having no typical MS lesions in the white matter, MCMS brains did have reduced neuronal density and cortical thickness, which are hallmarks of brain degeneration also observed in traditional MS.
Contrary to previous belief, these observations show that neuronal loss can occur independently of white matter demyelination.
“The importance of this research is two-fold.
The identification of this new MS subtype highlights the need to develop more sensitive strategies for properly diagnosing and understanding the pathology of MCMS,” said Daniel Ontaneda, M.D., clinical director of the brain donation program at Cleveland Clinic’s Mellen Center for Treatment and Research in MS.
“We are hopeful these findings will lead to new tailored treatment strategies for patients living with different forms of MS.”
Dr. Trapp is internationally known for his work on mechanisms of neurodegeneration and repair in MS and has published more than 240 peer-reviewed articles and 40 book chapters. He also holds the Morris R. and Ruth V. Graham Endowed Chair in Biomedical Research.
In 2017 he received the prestigious Outstanding Investigator award by the National Institute of Neurological Disorders and Stroke to examine the biology of MS and to seek treatments that could slow or reverse the disease.
Journal reference: Lancet Neurology search and more info website
Provided by: Cleveland Clinic | <urn:uuid:45b690d0-fe9f-46fd-9704-cd10c8129b8b> | CC-MAIN-2022-40 | https://debuglies.com/2018/08/22/cleveland-clinic-researchers-have-discovered-a-new-subtype-of-multiple-sclerosis-ms/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00372.warc.gz | en | 0.940101 | 725 | 3.328125 | 3 |
I came across an excellent story by Ars Technica on the Stuxnet malware. It’s well worth the read as it goes into detail on how the virus originated, how it was analyzed and how security researchers got to the bottom of what it had been designed to do.
Stuxnet is a piece of malware allegedly designed to infect Iran’s nuclear facilities’ systems and damage the centrifuges where uranium enrichment was taking place.
It’s an intriguing story on what people can achieve when they launch targeted cyber attacks on their victims. The Stuxnet malware was quite sophisticated; using obfuscation techniques to avoid detection and reverse engineering, multiple zero-day exploits to help it spread and infect new machines, as well as having a malicious payload targeting specific hardware (the centrifuges). The Stuxnet malware also used stolen digital certificates from two companies, Realtek and JMicron Technologies, to trick the system into accepting it as a genuine piece of software.
The Stuxnet malware was designed to use programmable logic controllers that altered the way the centrifuges worked, allegedly induce stress and, finally leading to a breakdown in the system. By altering the frequency of the centrifuges, the virus forced the centrifuges to rotate at maximum speed for brief periods of time, then at normal speed, and then at the slowest possible speed before rotating against at normal speed again. This occurred only when the hardware met particular specifications.
This story shows how malware can be designed to cause serious damage to a targeted system or organization. Stuxnet hijacked the application controlling the programmable logic controls in such a way that the physical changes to the hardware were made but they would not be noticed by staff checking the system’s operational parameters.
It is unlikely that such complex malware would be engineered to attack non-high profile targets but it’s a great insight into the brains behind malware designers and how their work evolves and hits targets with surgical precision. | <urn:uuid:3fa50fc0-2069-4582-be10-dcab9d96ca67> | CC-MAIN-2022-40 | https://techtalk.gfi.com/stuxnet-malware-story/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00372.warc.gz | en | 0.947781 | 404 | 2.84375 | 3 |
Back in simpler times, school year shopping involved buying things like pencils, paper and notebooks. Now, thanks to increased classroom computer use, notebooks are still essential, except now notebooks are computers instead of paper.As schools increasingly turn to technology to educate students, districts need to properly address the high costs that can be associated with the modern classroom computer, according to one author.
Bridget McCrea, in an August 8 article for THE Journal, wrote that districts need to understand that new technology involves an upfront investment of time and resources that might not be always available.
“The new technology being infused into today’s classrooms doesn’t come cheap; nor is it always easy to install, repair, maintain, and upgrade,” she wrote. “Physical facilities take time and money to upgrade and replace, and teachers must be trained on how to use any new equipment and applications that are introduced into the classroom. For the 21st Century classroom to operate at an optimal level all three legs of the stool must be addressed – and that costs money.”
In addition to the costs related to purchasing and upgrading technology, districts need to account for costs relating to infrastructure upgrades. For example, McCrea said schools might need to obtain new desks that are better fit for laptop use or facilities might need to opt for a higher speed internet connection. Plus, schools need to provide adequate training to teachers so they can properly use any new classroom software given to them.
“Budgets are tight, classes are getting larger, and teachers are having to find ways to do more with less,” Anne Yount, founder of the Boston Tutoring Center, said in THE Journal. “Technology can help bridge that gap in the classroom, but whether it’s accessible or not often comes down to funding.”
How schools deal with technology costs
While the purchase of a new classroom computer or mobile device does sometimes involve a significant upfront investment on the part of districts, school administrators said they expect the technology to reduce costs in the long term in addition to being worthwhile investments for education.
For example, students in Huntsville, Alabama, will receive laptops and tablet devices as the school district shifts to digital textbooks this school year. According to Superintendent Casey Wardynski, textbooks cost the district $5 million annually. In the first year of this new program, the district will spend $3.2 million on new technology and digital textbooks. By year three of the program, Huntsville schools will be paying $2.5 million a year, The Huntsville Times reported.
By using applications instead of textbooks, districts can achieve cost savings by not having to pay for the extra costs associated with the publishing process. Apps for the iPad from Houghton Mifflin Harcourt have all the content available in their algebra and geometry textbooks, except the apps cost $13 less than the books, U.S. News and World Report said.
To help save money on classroom technology, one school district is considering leasing computers and tablets instead of purchasing them. Carol Stream Elementary District 93 in Illinois spent $879,000 to lease 570 MacBooks and 290 iPads from Apple, according to the Daily Herald.
Another way schools can more affordably incorporate technology into classrooms is to let students bring in their own devices. Quakertown Community School District in Bucks County, Pennsylvania, allows students to use a personal laptop instead of a classroom computer offered by the school. Last year, the district was able to spend less money on technology since 30 percent of its students elected to use their own laptops.
“Your district doesn’t have to be rich to create the best possible learning environment,” Kim Klindt, a fourth grade teacher and technology facilitator at Guy Emanuele Jr. Elementary School in Union City, California, said to THE Journal. “You can do it if you have a few computers and teachers who know how to integrate and use the technology.”
Are schools spending too much or not enough money on the latest classroom technology? What are some other ways that schools can make sure enough money is available for the latest gear and software? | <urn:uuid:2d9fe3e4-ddee-4b1a-bf62-2d3b03ce9ba1> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/the-price-of-classroom-software | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00372.warc.gz | en | 0.953994 | 851 | 2.890625 | 3 |
Multipath TCP is an extension of TCP that will soon be standardized by the IETF. It is a successful attempt to resolve major TCP shortcomings that emerged from the change in the way we use our devices to communicate. In particular, devices like iPhones and laptops now talk across networks differently: both the devices and the networks have become multipath. Network redundancy and devices with multiple 3G and wireless connections made that possible.
Almost all of today's web applications use TCP to communicate, thanks to TCP's reliable packet delivery and its ability to adapt to variable network throughput. Multipath TCP is designed to be backwards compatible with standard TCP, so today's applications can use Multipath TCP without any changes; they think they are using normal TCP.
We know that TCP is single path: there can be only one path between two devices that have a TCP session open. That path is sealed as a communication session defined by the source and destination IP addresses of the communicating end devices. If a device wants to switch the communication from 3G to wireless, as happens on smartphones when they come into range of a known WiFi connection, the TCP session is disconnected and a new one is created over WiFi. By using multiple paths (subflows) inside one TCP communication, MPTCP lets that new WiFi connection open a new subflow inside the established MPTCP connection without breaking the TCP session already in place across 3G. Each available path is represented by its own subflow inside one MPTCP connection. A device connected to 3G will expand the connection to WiFi and then use an algorithm to decide whether it will use 3G and WiFi at the same time, or stop using 3G and put all the traffic onto the cheaper and faster WiFi.
TCP's single-path property is its fundamental problem
In a datacenter environment there is a tricky situation: two servers talk to each other using TCP, and that TCP session is established across a random path between the servers and switches in the datacenter (assuming there is more than one path). If another two servers are talking at the same time, and there usually are, this second TCP session may be established partially along the same path as the first. The resulting collision reduces throughput for both sessions, and there is actually no way to control this phenomenon in the TCP world. What holds for our datacenter example holds for every multipath environment, including the Internet.
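The collision above can be sketched with a toy ECMP (equal-cost multipath) model: a switch hashes each connection's 4-tuple onto one of several equal-cost links, and because a TCP session has only one 4-tuple, two sessions can land on the same link and share its bandwidth. The hash function and addresses below are illustrative, not a real switch implementation.

```python
import zlib

def ecmp_link(src_ip, src_port, dst_ip, dst_port, n_links):
    """Toy ECMP: hash the connection 4-tuple onto one of n_links paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Eight unrelated server-to-server sessions in a 4-way multipath fabric.
flows = [("10.0.0.1", 40000 + i, "10.0.1.1", 5001) for i in range(8)]
links = [ecmp_link(*f, n_links=4) for f in flows]

# With 8 flows and only 4 links, at least two flows must share a link
# (pigeonhole), so a collision and throughput loss is unavoidable.
collisions = len(links) - len(set(links))
print(f"link assignment: {links}, colliding flows: {collisions}")
```

Because each flow is pinned to exactly one hash bucket for its whole lifetime, TCP has no way to steer around a congested link; MPTCP's subflows give the hash several chances to find disjoint paths.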
The answer is MPTCP!
Multipath TCP (MPTCP) improves on TCP by enabling the use of multiple paths inside a single transport connection, while meeting the goal of working well anywhere "normal" TCP would work.
Multipath TCP, as the name says, enables the creation of multiple paths within one MPTCP session and in that way achieves better performance and better adaptation of sessions. A great deal of effort went into making MPTCP compatible with TCP, so that no change is needed in networks, devices, or applications. After all, without this compatibility there would be no deployment and therefore no use of MPTCP. Nobody wants to change the whole Internet for the sake of a better protocol!
That is the main reason for TCP's new multipath capability: performance improvement by distributing traffic load over multiple subflows across different paths. That of course additionally requires that MPTCP always performs at least as well as standard TCP with a single path. There are examples where that goal was not met, but solutions exist in buffer-size tuning and in algorithms that mitigate the reduced-performance issues.
MPTCP works if both sides of the communication (client and server) support MPTCP. If only one side has MPTCP deployed, that device will try to use MPTCP but will only succeed in establishing a normal TCP session. It always tries MPTCP first, and if there is no MPTCP answer from the other side it falls back to plain TCP.
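On modern Linux (kernel 5.6 and later), an application can request MPTCP explicitly when creating a socket. The sketch below shows the try-then-fall-back pattern at socket-creation time; note this is an assumption-laden illustration (the protocol number 262 is Linux-specific), and even when the socket is created successfully, the kernel itself negotiates down to plain TCP if the peer never answers with MPTCP options.

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; older Python versions may not expose
# the named constant, so fall back to the raw number (assumption: Linux).
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    """Try to create an MPTCP socket; fall back to plain TCP.

    Even when this succeeds, the handshake on the wire still falls
    back to regular TCP if the remote side does not speak MPTCP.
    """
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
        return s, "mptcp"
    except OSError:
        # Kernel without MPTCP support: behave exactly like today.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        return s, "tcp"

sock, kind = open_stream_socket()
print(f"created a {kind} socket")
sock.close()
```

The application code above is otherwise identical to a plain TCP program, which is exactly the backwards-compatibility story MPTCP aims for.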
Applications are not MPTCP aware; Apple's iOS 7 was the first production operating system to use MPTCP, and solely for the Siri application, which is now able to use WiFi and 3G simultaneously.
But how does this work?
Multipath TCP is an evolution of standard TCP that makes multipath data packet transport over one connection possible. Multipath TCP is made for next-generation devices like iPhones and other smart devices that are multihomed (they use different Internet access options like WiFi and 3G).
To make this kind of communication over multiple paths possible, there is still a need for a data sequence number. In normal TCP, the sequence number is used to put segments in order when they all arrive at the receiver. But now segments can arrive at the receiver over more than one subflow inside one MPTCP connection. How, then, can they be put in order at the receiver? More interesting still, how will the MPTCP connection keep track of lost packets across the connection? One more thing: some IDS middleboxes will not allow a TCP subflow with gaps in its sequence space (from the MPTCP architecture's point of view, each subflow looks like a normal TCP flow).
There needs to be a way to handle loss detection and retransmission of packets on every separate subflow. So the answer is to use a sequence number for each subflow, which tracks loss and retransmission, and a separate data sequence number for reordering packets at the receiver side.
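As a rough sketch of that two-level numbering (the segment values here are invented for illustration), the receiver can reassemble the byte stream by the connection-wide data sequence number, no matter which subflow delivered each segment:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    subflow: str  # which path this segment travelled
    ssn: int      # subflow sequence number: per-path loss detection
    dsn: int      # data sequence number: connection-wide ordering

def reassemble(segments):
    """Order segments by the connection-wide DSN, regardless of path."""
    return [s.dsn for s in sorted(segments, key=lambda s: s.dsn)]

# Segments arrive interleaved from two subflows; each subflow has its
# own contiguous SSN space, while DSNs span the whole connection.
arrived = [
    Segment("wifi", ssn=200, dsn=1),
    Segment("3g",   ssn=300, dsn=2),
    Segment("wifi", ssn=201, dsn=3),
]
print(reassemble(arrived))  # → [1, 2, 3]
```

Note that each subflow's SSN space (200, 201 versus 300) stays internally gap-free, which is exactly what keeps middleboxes happy.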
In the image below we are looking at the standard TCP header from RFC 793, which must stay as it is to keep MPTCP backwards compatible. The creators of MPTCP did play with some header parts, and they decided to put the data sequence number and data ACK inside TCP as a new option. A second way of doing this would have been to encode the data sequence number and data ACK inside the payload (data). Fortunately, they decided to use new TCP options, so there is no chance of deadlocks (I will explain this in another article). One more thing: by using new TCP options, there is a better chance that traversal of various strange firewall middleboxes will be successful. Middleboxes like firewalls tend to remove strange payloads, and sometimes even TCP options, if they don't understand what they are.
What does that mean?
Here’s an example with two subflows inside one MPTCP connection. Data is sent in three data frames, of which two take the red subflow and one takes the green subflow. The data sequence numbers are 1, 2, 3 for the whole MPTCP connection, so that the packets can be ordered on the receiver side and joined back into the complete data. The subflow sequence numbers are 200, 201 for the red subflow and 300 for the green subflow. In that way each subflow can have its own loss detection and retransmission of lost frames.
If for some reason one of the subflows (in our case the green subflow) breaks down, the frame with DATA: 2 will be redirected to the other subflow so that it can still be delivered to the receiver. Its subflow sequence number will be changed in order to travel across the red subflow, but the data sequence number will stay the same (2), as this is the same frame.
All of this happens without breaking the MPTCP connection, so the device will practically not notice anything going on, except maybe a little delay in loading content across that connection.
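The failover described above can be sketched in a few lines (subflow names and sequence numbers follow the hypothetical values from the example): the retransmitted frame gets the next free subflow sequence number on the red subflow but keeps data sequence number 2, so the receiver still reorders the stream correctly.

```python
# Frames originally scheduled: DSN 1 and 3 on the red subflow,
# DSN 2 on the green subflow (SSNs are per-subflow counters).
sent = [
    {"subflow": "red",   "ssn": 200, "dsn": 1},
    {"subflow": "green", "ssn": 300, "dsn": 2},  # green subflow fails
    {"subflow": "red",   "ssn": 201, "dsn": 3},
]

delivered = [f for f in sent if f["subflow"] != "green"]

# Retransmit the lost frame on the red subflow: it takes the next free
# SSN there, but the data sequence number stays 2 -- same data.
next_red_ssn = max(f["ssn"] for f in delivered if f["subflow"] == "red") + 1
delivered.append({"subflow": "red", "ssn": next_red_ssn, "dsn": 2})

stream = sorted(delivered, key=lambda f: f["dsn"])
print([f["dsn"] for f in stream])  # → [1, 2, 3]
print(stream[1])  # the rescued frame: red subflow, SSN 202, DSN 2
```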
I wrote this article over the last few days as I got into learning how this new TCP technology works. I would like to post here all the materials I used to get to know MPTCP theory and the way it functions.
- People behind all the material on MPTCP that I used:
- Costin Raiciu, Universitatea Politehnica Bucuresti; Also speaker at the USENIX from video on the link below
- Christoph Paasch and Sebastien Barre, Université Catholique de Louvain;
- Alan Ford; Michio Honda, Keio University;
- Fabien Duchene and Olivier Bonaventure, Université Catholique de Louvain;
- Mark Handley, University College London
- MultiPath TCP – Linux Kernel implementation project
- Great video session: How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP
(September 22, 2022) Phishing attacks continue to be a preferred method hackers use to propagate malware and steal user credentials and other sensitive information, according to Proofpoint’s 2022 State of the Phish Report. Of the 600 IT security professionals surveyed for the report, 86 percent said their organizations experienced bulk phishing attacks in 2021, up from 77 percent in 2020. Targeted phishing attacks, including spear phishing and business email compromise (BEC), increased 18 percent year over year. That’s why it’s critical that users know how to spot the signs of a phishing attack. Do you know how to spot a phishing attack?
Traditionally, hackers have used email to distribute phishing scams, but other methods are on the rise. Collaboration tools have skyrocketed, and those platforms have become fertile breeding grounds for phishing messages. Many users believe that messaging is internal and controlled, creating a false sense of security that encourages them to let down their guard.
Another method on the rise is the use of social media. The hackers utilize highly emotional topics such as politics to get users to click on links that contain malware.
Phishing capitalizes on the fact that humans are the weakest link in the security chain. Whether from a sense of expediency or a desire to be helpful, people will often click on a link or open an attachment in a phishing email or text message. The risks are even greater in remote and hybrid work models, with people using personal devices that lack many of the security protections provided by the corporate network.
It would be a mistake to assume that all attacks are clumsy and easy to spot. However, these factors can be helpful in identifying phishing scams:
- The email or text asks for personal or sensitive information. Scams commonly involve messages that appear to be from a legitimate business asking you to “confirm” your account information. Legitimate companies will not ask for login details or other personal information by email.
- It is impersonal. Phishing messages often use generic salutations such as “Dear account holder” or “To our valued customer.” Legitimate companies are more likely to address you by name.
- The source is suspicious. Professional organizations won’t send emails from Gmail or Hotmail accounts. Even addresses that look legit at first glance require further scrutiny. Users should hover their mouse pointer over the link or the address to reveal the true source.
- There’s an attachment. An unsolicited email with an attachment is a huge red flag. Legitimate companies rarely do this. They are far more likely to provide directions on how to download a document from their website.
- There’s a suspicious hyperlink. An embedded hyperlink is another red flag. Cybercriminals use embedded links to redirect you to phony websites in an attempt to either extract personal information or download malware.
- It is poorly written. Spoofed messages often originate in countries where English is not the native language, resulting in spelling, grammar, logic, and syntax errors.
- There’s a heightened sense of urgency. Phishing scams are meant to make you act quickly without taking the time to investigate fully. Many suggest there is a risk of having your account suspended or terminated unless action is taken immediately. Legitimate organizations don’t rely on email messages to deliver such news.
- It’s too good to be true. Offers of incredible deals or amazing rewards are also designed to get you to act quickly without considering the risk. For example, phishing scams offering expedited stimulus payments have been widespread during the pandemic.
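One of the checks above, hovering to reveal a link's true destination, can also be done programmatically. Here is a minimal sketch (the message body, domain, and IP address are invented for illustration) that flags links whose display text looks like one site while the underlying href points somewhere else:

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect (display_text, real_href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, ""
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
    def handle_data(self, data):
        if self._href is not None:   # only collect text inside <a>...</a>
            self._text += data
    def handle_endtag(self, tag):
        if tag == "a":
            self.links.append((self._text.strip(), self._href))
            self._href, self._text = None, ""

body = '<p>Please log in at <a href="http://203.0.113.9/login">www.yourbank.com</a></p>'
audit = LinkAudit()
audit.feed(body)
for text, href in audit.links:
    if text.startswith("www.") and text not in href:
        print(f"Suspicious: displays {text!r} but points to {href!r}")
```

Real mail filters do far more than this, but the mismatch between what a link shows and where it goes is the same signal a careful user checks by hovering.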
Cybersecurity training and education programs are essential for boosting the security of your remote workforce. Best-in-class programs offer phishing-specific training and even provide tools for simulating a phishing attack to test user awareness. Your managed services provider (MSP) partner can help you select and implement a security-focused training program while also minimizing the risk that phishing messages will reach your users’ inboxes.
ABOUT MAINSTREAM TECHNOLOGIES
Mainstream Technologies delivers a full range of technology services in Arkansas and the surrounding region, including managed technology services and consulting, custom software development, and cybersecurity services. We also offer industry-leading data center services in our Little Rock facilities. Established in 1996, Mainstream has earned a reputation for delivering quality, reliable, and professional technology services for public and private-sector customers across the United States.
IT Business Development Manager
(479) 715-8629 Office
(501) 529-0008 Mobile
Over the past few years, owners of cars with keyless start systems have learned to worry about so-called relay attacks, in which hackers exploit radio-enabled keys to steal vehicles without leaving a trace. Now it turns out that many millions of other cars that use chip-enabled mechanical keys are also vulnerable to high-tech theft. A few cryptographic flaws combined with a little old-fashioned hot-wiring—or even a well-placed screwdriver—lets hackers clone those keys and drive away in seconds.
Researchers from KU Leuven in Belgium and the University of Birmingham in the UK earlier this week revealed new vulnerabilities they found in the encryption systems used by immobilizers, the radio-enabled devices inside of cars that communicate at close range with a key fob to unlock the car's ignition and allow it to start. Specifically, they found problems in how Toyota, Hyundai, and Kia implement a Texas Instruments encryption system called DST80. A hacker who swipes a relatively inexpensive Proxmark RFID reader/transmitter device near the key fob of any car with DST80 inside can gain enough information to derive its secret cryptographic value. That, in turn, would allow the attacker to use the same Proxmark device to impersonate the key inside the car, disabling the immobilizer and letting them start the engine.
The researchers say the affected car models include the Toyota Camry, Corolla, and RAV4; the Kia Optima, Soul, and Rio; and the Hyundai I10, I20, and I40. The full list of vehicles that the researchers found to have the cryptographic flaws in their immobilizers is below:
Though the list also includes the Tesla S, the researchers reported the DST80 vulnerability to Tesla last year, and the company pushed out a firmware update that blocked the attack.
Toyota has confirmed that the cryptographic vulnerabilities the researchers found are real. But their technique likely isn't as easy to pull off as the "relay" attacks that thieves have repeatedly used to steal luxury cars and SUVs. Those generally require only a pair of radio devices to extend the range of a key fob to open and start a victim's car. You can pull them off from a fair distance, even through the walls of a building.
By contrast, the cloning attack the Birmingham and KU Leuven researchers developed requires that a thief scan a target key fob with an RFID reader from just an inch or two away. And because the key-cloning technique targets the immobilizer rather than keyless entry systems, the thief still needs to somehow turn the ignition barrel—the cylinder you slot your mechanical key into.
That adds a layer of complexity, but the researchers note that a thief could simply turn the barrel with a screwdriver or hot-wire the car's ignition switch, just as car thieves did before the introduction of immobilizers neutered those techniques. "You're downgrading the security to what it was in the '80s," says University of Birmingham computer science professor Flavio Garcia. And unlike relay attacks, which work only when within range of the original key, once a thief has derived the cryptographic value of a fob, they can start and drive the targeted car repeatedly.
The researchers developed their technique by buying a collection of immobilizers' electronic control units from eBay and reverse-engineering the firmware to analyze how they communicated with key fobs. They often found it far too easy to crack the secret value that Texas Instruments DST80 encryption used for authentication. The problem lies not in DST80 itself but in how the carmakers implemented it: The Toyota fobs' cryptographic key was based on their serial number, for instance, and also openly transmitted that serial number when scanned with an RFID reader. And Kia and Hyundai key fobs used 24 bits of randomness rather than the 80 bits that the DST80 offers, making their secret values easy to guess. "That's a blunder," says Garcia. "Twenty-four bits is a couple of milliseconds on a laptop."
When WIRED reached out to the affected carmakers and Texas Instruments for comment, Kia and Texas Instruments didn't respond. But Hyundai noted in a statement that none of its affected models are sold in the US. It added that the company "continues to monitor the field for recent exploits and [makes] significant efforts to stay ahead of potential attackers." It also reminded customers "to be diligent with who has access to their vehicle’s key fob."
Toyota responded in a statement that "the described vulnerability applies to older models, as current models have a different configuration." The company added that "this vulnerability constitutes a low risk for customers, as the methodology requires both access to the physical key and to a highly specialized device that is not commonly available on the market." On that point, the researchers disagreed, noting that no part of their research required hardware that wasn't easily available.
To prevent car thieves from replicating their work, the researchers say they left certain parts of their method for cracking the carmakers' key fob encryption out of their published paper—though that wouldn't necessarily prevent less ethical hackers from reverse-engineering the same hardware the researchers did to find the same flaws. With the exception of Tesla, the researchers say, none of the cars whose immobilizers they studied had the ability to fix the program with a software patch downloaded directly to cars. The immobilizers could be reprogrammed if owners take them to dealerships, but in some cases they might have to replace key fobs. (None of the affected carmakers contacted by WIRED mentioned any intention of offering to do so.)
Even so, the researchers say that they decided to publish their findings to reveal the real state of immobilizer security and allow car owners to decide for themselves if it's enough. Protective car owners with hackable immobilizers might decide, for instance, to use a steering wheel lock. "It's better to be in a place where we know what kind of security we're getting from our security devices," Garcia says. "Otherwise, only the criminals know."
This story originally appeared on wired.com.
September 13, 2016
The Laboratory for Cryospheric Research is dedicated to the monitoring and understanding of the frozen earth, including glaciers, ice caps, ice shelves, snow, and sea ice. The facility was opened in September 2007, with funding from the Canada Foundation for Innovation, Ontario Research Fund, and University of Ottawa.
Laboratory members are undertaking research across northern Canada, including monitoring glacier changes in Kluane National Park, examining ice shelf and sea ice interactions along northern Ellesmere Island, and measuring glacier and ice cap dynamics across the Canadian Arctic Archipelago. The Laboratory for Cryospheric Research is based in the Department of Geography at the University of Ottawa, and directed by Dr. Luke Copland.
Dr Copland is using our RockSTAR product, combined with a solar panel and extra battery pack, to provide long-term position monitoring of the sea ice. He’s sent some wonderful pictures back showing the setup, and it’s amazing to see just how large these floating blocks of ice are.
You can find out more about the project here: https://cryospheric.org/.
1 - The Importance of Facilitation
- Being an Effective Facilitator
- Harnessing Knowledge, Experience, and Diversity
- Encouraging Group Motivation and Commitment
- Observing the Team Process
2 - Facilitating Process and Content
- Identifying Process and Content Elements
- Managing the Flow
- Resolving Tensions and Disagreement
3 - Setting the Stage for Facilitation
- Laying the Groundwork, Educating Participants, and Securing Support
- Selecting the Right Facilitator
- Planning for a Facilitated Meeting
4 - Facilitating Team Development
- Encouraging Participation
- Recognizing Stages in the Team Life Cycle
- Supporting the Team through the Stages
5 - Building Consensus and Reaching Decisions
- Gathering and Presenting Data
- Synthesizing and Summarizing
- Identifying Options and Brainstorming
- Facilitating SWOT Analysis
- Creating a Short List
- Using the Multi-Option Technique
6 - Disruptions, Dysfunctions and Interventions
- Handling Disruptions and Difficult Behavior
- Addressing Dysfunction
- Agreeing on Ground Rules
- Restating and Reframing
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
Leaders and professionals who manage teams or groups and are responsible for their outcomes.
While IoT continues to grow by leaps and bounds worldwide, devices based on ARM processors seem to have taken center stage, mainly because of their orientation toward mobile devices (due to their low power consumption and low cost), which makes them suitable for IoT devices built for basic applications. These send data directly to the cloud: temperature or humidity sensors, energy monitors, and many others. However, with the passage of time, other types of IoT architectures have emerged, such as edge computing, that give hardware based on x86 processors the opportunity to again take an essential role in IoT. What exactly is edge computing, and what role do processors with x86 architecture play in it?
The millions of devices that make up the Internet of Things (IoT) have something in common: they collect information, but they do nothing with it. They send it to the cloud, where large data centers receive it, combine it, and process the data collectively to obtain certain results or activate certain events. This "passive" operation of all these devices is what so-called edge computing wants to change. It is a philosophy applicable especially in business and industrial scenarios that brings much more autonomy to all those devices, making them somewhat "smarter". Edge computing is defined as the IT infrastructure that exists near the IoT devices (such as turbines, production lines, robots, and scanners). Thus, instead of the information having to travel the entire network to reach the IT infrastructure, there are intermediate points that transform the data into valuable information.
x86 based hardware’s critical role
While ARM processors are suitable for the IoT devices themselves, x86-based hardware is better suited for gateways at the edge of the infrastructure that analyze and store important data. Let's see how this would work in the following situation.
For example, one way of making IoT devices smarter would be the following. Suppose it is not necessary to combine data in the cloud to get the desired results. Then IoT sensors simply need to process the collected data and send results only when certain conditions are met. And here is exactly where one can see the benefits of edge computing gateways. If there is less need to send and collect all data in a centralized cloud repository, you can save a lot on the expensive bandwidth used to transport that data. Key data can also be stored on these gateways so it can be compared with newly collected data to determine what should finally be uploaded to the cloud.
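That filtering idea can be sketched in a few lines (the sensor names and threshold are invented for illustration): the gateway retains every reading locally and forwards only the readings that meet a condition upstream.

```python
def edge_filter(readings, threshold=75.0):
    """Keep everything on the gateway; forward only notable readings."""
    local_store, to_cloud = [], []
    for sensor_id, value in readings:
        local_store.append((sensor_id, value))   # retained at the edge
        if value >= threshold:                   # "something important"
            to_cloud.append((sensor_id, value))
    return local_store, to_cloud

readings = [("temp-01", 21.5), ("temp-02", 78.2),
            ("temp-01", 22.0), ("vib-07", 91.4)]
stored, uploaded = edge_filter(readings)
print(len(stored), "readings kept at the edge;", len(uploaded), "sent upstream")
```

In this sketch only two of four readings cross the wire, which is the bandwidth saving the paragraph above describes; a production gateway would add batching, buffering, and retry logic on top.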
Conversely, another option would be for sensors to connect to the cloud only when they have something important to report. This design provides the opportunity to reduce IoT networking costs by leveraging technologies such as cellular connections that use a lower-cost, pay-per-kilobit billing method as opposed to more expensive always-on connectivity. The reason for all this is that edge computing allows the data produced by Internet of Things devices to be processed closer to where it was created (in local gateways) instead of being sent over long routes to data centers and computing clouds. That has another fundamental advantage, since it allows organizations to analyze important data almost in real time, something that is a clear need in many industries such as manufacturing, health, telecommunications, and finance.
What are the main advantages of x86 over ARM processors for this type of IoT edge computing device? Here are some:
If we think of a real-life example, perhaps a SCADA system in an industrial environment, we might picture a number of PLCs all over the production lines measuring various products or specific operations performed by machines or employees. If we add other kinds of systems to the equation, like machine vision or barcode reading, the amount of data increases dramatically. In an edge computing infrastructure, we analyze the data first, before uploading anything to the cloud or sending any kind of response back to the PLCs. In this type of scenario, x86-based hardware does a better job than other options like ARM due to its higher processing capacity. Considering the huge amount of data received by an edge computing gateway, we need an option that is reliable for more complex tasks.
Thanks to the many years that x86-based hardware has been around, manufacturers have designed and produced an immense amount of hardware and supporting software (i.e., drivers) specifically for this architecture, and it is worth mentioning how competition has also raised the bar in device quality. The range of peripheral options available for x86-based gateways and their complementary infrastructure devices makes this option better for a smooth implementation. ARM solutions, on the other hand, still being a newer option, leave many devices unique and incompatible with others of the same architecture.
Software Solutions Compatibility
Due to the well-known and established design of x86-based hardware, it is also compatible with a large number of software solutions already available and ready to be implemented in the workplace.
Based on this example, we can say that x86-based hardware still plays a major role in the Industry 4.0 revolution, in architectures such as edge computing, where mobile devices work as a complement to more powerful, high-end processing devices closer to the process, which then send data to the cloud (data center).
As your trusted partner, Lanner carries long-established expertise in telecommunications and in enabling manufacturing sectors with similar architectures, delivering the real-time, secure, cutting-edge technology required by the next-generation manufacturing revolution. We provide a highly integrated edge server platform with Intel multi-core processors that fully optimizes performance and minimizes latencies while consolidating all the needed network functions.
LEC-3340, a 3U rackmount industrial edge consolidation server, has some key features that are well suited to a corporate or industrial environment with high volumes of data:
- Intel® Xeon® E3-1505L V6, Core™ i3-7100E, or Core™ i5-7442EQ (formerly Kaby Lake-H) processor, to offer outstanding performance.
- Optional redundant power supplies
- Designed to be robust, LEC-3340 is IEC-61850 and IEEE 1613 compliant
- 4 x PCIe slots
- 4 x RJ-45 GbE LAN ports
- 5 x USB 3.0 ports
- 2 x 2.5” swappable drive bays
- DP/DVI display port, IRIG-B
- 2 x isolated COM ports
As IoT continues to evolve toward more mobile and efficient devices, we cannot forget the counterpart where all the collected data is stored and used by companies for critical business decisions. That is where x86-based devices fall into place naturally, due to their unique features.
Most of us have a love-hate relationship with Microsoft Excel. In this data-driven business environment, Excel spreadsheets are an essential tool for organizing and analyzing information. But let’s be honest—copying and pasting numbers into a daily report is no one’s idea of a good time.
It’s not surprising that organizations want to save time by automating Excel. The standard way is through Excel automation using VBA (Microsoft’s programming language), also known as macros. Users write VBA code to run specific tasks within Excel.
Macros, like any type of automation, provide benefits like increased efficiency and time-savings. But they may also be putting your business at risk, and you might need an alternative to Excel macros.
Viruses hidden in Microsoft Office macros were a major threat in the 90s. Probably the most famous macro virus was the Melissa virus, which appeared in 1999. Melissa would arrive in a Word document seemingly sent from one of your contacts. When the document was downloaded, it would send itself to the first 50 people in your Microsoft Outlook address book.
For a while macro viruses seemed like a thing of the past, but in recent years they’re making a comeback. Alternative methods of automation eliminate confusion about which documents are safe to run.
While cybersecurity is a chief concern of all modern businesses, malware isn’t the only reason to avoid macros.
Spreadsheets can be more trouble than they’re worth. The typical Excel document is so riddled with errors that a professor of IT management at the University of Hawaii called spreadsheet errors a “pandemic.” The European Spreadsheet Risks Interest Group keeps an ongoing list of horror stories. Sure, using macros for Excel automation can help you eliminate some of the basic copy/paste mistakes, but they can also contribute to the problem.
Not everyone using your spreadsheets is going to be a VBA expert. Relying on macros means that some members of the team won’t be able to help with Excel VBA automation at all, while others will have just enough knowledge to write bad code.
With macros, your business logic will end up spread across multiple documents. These documents will likely be emailed back and forth, sometimes being altered or duplicated in the process. You have no central way to manage the excel automation for your business.
And what happens when you want to update that Excel document that is so critical to your operations, but the person who created the macros is long gone? They may not make sense to the next user. Hours will be wasted trying to either decipher the VBA scripts or create a new spreadsheet with macros.
Lack of Enterprise Features
Enterprise-class automation software includes central management, error handling, audit logs, and security features. It’s scalable and easy to use without extensive training. If something goes wrong, you can likely get support from the vendor.
At best, you will have to write new scripts to duplicate enterprise features. In many cases they will not be available with Excel macros alone.
Alternatives to Excel Macros
Relying on Excel macros can be a threat to your business. But if you already have macros that are critical to operations, you don’t have to toss them out. Robotic process automation software can run the existing macros.
Managing your macros through RPA software gives you advantages that you don’t get running macros alone. You can monitor and audit all of your Excel automation across the enterprise from a central location, and integrate the macros into workflows involving other applications.
You may quickly find that much of the Excel automation you are used to managing with macros can be handled by RPA instead. Robotic process automation solutions are easy to use even if you have no coding experience—just drag and drop building blocks like “open Excel workbook” or “set value of cell” into a workflow.
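That building-block style can be pictured as a tiny workflow engine. This is a deliberately simplified sketch, not any vendor's actual API; the step names, file name, and macro name are invented:

```python
def open_workbook(state, path):
    state["workbook"] = {"path": path, "cells": {}}

def set_cell(state, ref, value):
    state["workbook"]["cells"][ref] = value

def run_macro(state, name):
    # An existing VBA macro can be just one step in a larger, audited flow.
    state.setdefault("log", []).append(f"ran macro {name}")

# The "drag and drop" workflow is simply an ordered list of steps.
workflow = [
    (open_workbook, ("daily_report.xlsx",)),
    (set_cell, ("A1", "Revenue")),
    (set_cell, ("B1", 42_000)),
    (run_macro, ("FormatReport",)),
]

state = {}
for step, args in workflow:
    step(state, *args)   # a real RPA engine would also log, retry, audit
print(state["workbook"]["cells"])  # → {'A1': 'Revenue', 'B1': 42000}
```

The point of the sketch is the shape: because every step runs through one engine, central logging and auditing come for free, which is exactly what loose macros scattered across emailed spreadsheets lack.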
The best part is that your RPA investment will pay off well beyond Excel automation. Software robots can scale to meet the automation needs of every department in your organization.
Whether it’s speculation or reality, a lot has been said about AI. Most features like social media alerts and notifications, custom search engine results, and e-commerce suggestions and listings are all powered by AI-based algorithms. With this, it has become a key aspect of the technology world. Let us now review what we know about this field.
Written by Danish Wadhwa
While few people know what machine learning is, it is one of the biggest breakthroughs in AI, encompassing complex techniques that allow machines to become efficient at tasks through learning and experience.
In the beginning, the AI technology revolution touched only a few industries with its benefits. New entrants like Uber and Lyft are prominent examples of this; with this technology, they have redefined the boundaries of the cab industry. Now, AI is widening its radius as a growing number of industries join in to get the benefits of automation.
Getting started with AI is not easy, as you need to understand machine learning algorithms, deep learning, big data, superintelligence, business intelligence, etc., to hold the reins. To learn this, you would benefit from an artificial intelligence training course covering the basics. Moreover, you need to understand existing AI applications in different ways to judge whether AI will dominate humans in the future or be our partner.
Let’s figure it out!
While most people are unaware of the difference between machine learning, AI, and deep learning, we cannot deny the impact these innovations have on our daily lives. These computing methods have become important drivers of automated reasoning, learning, and perception. Moreover, our virtual partners Siri and Cortana are always there to help us, no matter what our requirements are.
Let’s talk about some of the everyday uses of AI. Advanced navigation systems like GPS have optimized our driving experience by routing us to destinations on time. Machine intelligence has given an edge to smartphones, as they can predict what we are going to type and correct spelling errors. When we post a picture on a social media site like Instagram, an artificial intelligence algorithm finds and detects a person’s face and tags that individual. Additionally, the financial sector is taking advantage, using AI to organize and manage data, and artificial intelligence helps detect fraud in smart-card-based systems.
Predicting Natural Disasters
With the introduction of Google’s models, forecasting weather conditions has become easier and more accurate. You can plan your trips knowing the weather conditions for the next five days, with information that would have been impossible to obtain using the traditional methods of the 1970s. According to IBM, AI has become a prime medium for predicting disasters. Moreover, IBM Watson can predict the timing of volcanic eruptions by detecting changes in tectonic plates. Likewise, the data that cellphone sensor projects collect through phones’ magnetometers helps analysts make successful earthquake predictions.
The growth of internet messenger platforms has led to an evolution of Chatbots in 2016. Since then about 20,000 Kik bots and 11,000 Facebook Messenger bots came into view. About 100,000 bots were developed for Facebook Messenger in April 2017. Their multitasking nature, such as handling shopping, travel search and booking, payments, office management, customer support, and task management enable their wider usage.
Luvo, a natural language processing AI bot, is launched by Royal Bank of Scotland (RBS) which resolves queries of RBS, Natwest and Ulster bank customer queries while performing banking tasks like transactions. In case Luvo could not answer a query then the human knowledge is used to answer the same. Though it is the first one to launch this type of service, Sweden’s SwedBank and Spain’s BBVA have started following the same path by launching their virtual assistants.
The National Health Services (NHS) in the UK is another example that has launched an AI-powered chatbot on the 111 non-emergency helplines. Now, about 1.2 million residents in North London prefer a chatbot instead of executives on the 111 helplines.
AI is a new Doctor
Healthcare is leveraging artificial intelligence and how! The healthcare industry use AI to make effective treatment plans by analyzing lots of data. Precision medicine by Deep Genomics is a new treatment plan, in which new computational technologies are developed by researchers to understand genetic linkages and mutations.
AiCure is a mobile application which detects whether a patient is taking a medicine or not. It deploys advanced algorithms with a camera that informs whether a patient follows the prescriptions without skipping a single dose. AI has had some significant advances when it comes to the usage of prescriptions. Babylon is yet another healthcare app with which you get medical consultations and health services on your phone in a few clicks. Being an AI-based app, it tracks your medical results and keeps you updated.
Artificial intelligence is a calling card for automation. We are likely to become faster in our regular tasks, personal or professional, as most of the work would be done by robots. When AI and robotics are implemented together, humans are likely to explore new horizons while gaining access to new avenues that we have only expected to achieve. From genetic engineering to cybersecurity, each sector would get a leap by utilizing its data more effectively.
About the Author:
Danish Wadhwa is a strategic thinker and an IT Pro. With more than six years of experience in the digital marketing industry, he is more than a results-driven individual. He is well-versed in providing high-end technical support, optimizing sales and automating tools to stimulate productivity for businesses. | <urn:uuid:ee04fa36-0e31-4f2b-b461-0c71a69caf12> | CC-MAIN-2022-40 | https://swisscognitive.ch/2018/03/31/will-ai-dominate-with-its-uses-in-different-sectors-how/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00572.warc.gz | en | 0.949503 | 1,164 | 2.96875 | 3 |
Who Is the Customer?
This is a very central question. The most obvious answer is that the customer is the one who pays. The relevance of this answer is evident, since it identifies who is directly responsible for generating our economic benefits. Therefore, we have to include the buyer as a critical element of our customer base. However, often we shouldn’t stop there, because the customers of our customer, is either as important as, or even more important than, the buyer. We need to relate to that base for two reasons. First, we would like to help the customers to do a better job with their customers. Second, final consumers could be the most critical element in the economic chain. If they stop buying, everything stops. Think about the auto industry. There are two critical types of customers: dealers and consumers. We will see some examples of how to segment these two groups later on.
Moreover, if we regard the Extended Enterprise as the most relevant entity, we might expand the definition of the customer to include all of the remaining constituencies, meaning particularly suppliers and complementors. In the broadest sense, the relevant customer is everybody who should be the focus of a differentiated value proposition, because that is the foundation of a well-articulated strategy. Having said that, in most of what I write, the customer will be identified as either the buyer, or the consumer, or both.
Why Are Customers Different?
This is also a very critical question, because it will define the segmentation criteria that will be used in our analysis and that will lead to the development of the value proposition. The conventional way of segmenting the customer is using demographic characteristics,
such as age, levels of income, geographical locations, and the like. Another conventional way is to group them according to some generic business characteristics, such as size, vertical markets, levels of profitability, and others. We have found that, with very few exceptions, these criteria are not the most appropriate to characterize the differences across customers. They are useful in segmenting the “markets” but not the “customers,” which is quite a different task. Remember that each resulting customer segment will be the subject of a distinct value proposition.
Suppose that you choose to segment the customers by the size of the enterprise, say large, medium, and small, and even worse, either explicitly or implicitly assign priorities accordingly, meaning that large customers are better than medium-sized, and those in turn are better than small. Two fallacies result from this. One is that we will be treating all large customers the same. From a strategic point of view, that seldom makes sense. We are indiscriminately putting together customers that could have very different needs for support. Second, the priorities might be totally wrong. In fact, it is often the case that large customers are the least desirable ones, because they are totally self-sufficient and, therefore, they tend to commoditize us. On the contrary, medium and small companies can offer us great opportunities for the development of exciting value propositions based on the Total Customer Solution option.
Who Is the Most Attractive Customer?
The test of the value proposition that we have just completed provides the basis to address the question, “Who is the most attractive customer?”
The one who has the greatest gap between its needs and capabilities, and we are in a best position to close that gap
• The one who receives the highest value added
• The one who has the most positive attitudes towards us
• The one with whom we can jointly define a unique sustainable high value-added value proposition leading towards an unbreakable bonding.
If these conditions are met, these customers should also be the most profitable.
Source: Arnoldo Hax | <urn:uuid:737052fa-8ba1-4ffc-a679-16e6ea186b9c> | CC-MAIN-2022-40 | https://globalriskcommunity.com/profiles/blogs/the-delta-model-who-is-the-customer?context=tag-value | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00572.warc.gz | en | 0.962521 | 772 | 2.625 | 3 |
Using AI to fly an airplane will be an enormous achievement; the ultimate reflection of AI’s ability to manage complexity. It will be a critical piece to implementing the changes that will be required in the next decade as A&D organisations reassess manufacturing automation in their factories. The recent pandemic, furloughs, bankruptcies and retirements are all impacting productivity and widening the talent gap, forcing the industry to adopt new technology such as AI to reimagine their business.
What is AI and machine learning?
AI, like so many technology buzzwords, can mean different things to different people. For us, an AI system is one that leverages software functions created through a machine learning process rather than through traditional programming. Data, rather than source code, is the critical element. The performance of an AI application is shaped by the data used to train the application.
Without going into detail on machine learning algorithms or approaches, which is beyond the scope of this paper, we can generalise that the power in AI comes from machine learning’s ability to model complex systems and environments far beyond what we can reasonably build in traditional software.
Imagine building a speech recognition system through traditional programming — having a function for every word or a case statement for every pronunciation or accent. It would take a staggering amount of time to cover even 10 percent of the English language. Machine learning models, on the other hand, have made short work of this task, to the point that robust and accurate systems can understand a full vocabulary from hundreds of languages and accents.
Analytical AI vs. operational AI
Today, two distinct classes of AI applications are emerging across industries. The first is analytical AI, as in the type of system that can predict when a machine is going to fail, detect credit card fraud or recommend the next book to buy on Amazon. Operational AI, the second class of AI applications, actually does something in the physical world. It can manage a factory process, fly a plane, drive a vehicle or act on predicted events. It’s artificial intelligence at work.
Underpinning automation, the industry also needs a strong digital foundation where both machine usage and labor can be tracked and optimized automatically.
Analytical AI is maturing
The development tools and environment for building analytical AI applications are rapidly maturing. Previously, data scientists wrote Python code to enable most algorithms and approaches to machine learning. They did a lot of heavy lifting, extracting from various sources and then transforming the data to ingest into AI algorithms.
We can now address some of the more complex challenges the industry is trying to solve. Where does the industry need to pivot, from a technology point of view, so it can thrive during moments of disruption? We can also see coming technology advancements such as the introduction of more drones and air taxis.
What the industry really needs to do is to scale up the building of complex operational AI systems. We need the self-driving car level of AI across the A&D industry. To do this, we need a different approach to developing AI systems. The scale of data needed to train these complex systems is many orders of magnitude larger than what has been done before. This data must be well-managed through a significantly more complex machine learning system, where AI software is often trained in stages, leveraging data that is synthetically generated at key points. We must also consider the testing and validation environment, since we are talking about building systems that operate complex machinery in the real world.
Operational AI in aerospace and defense: Three use cases
Unlike AI development for analytical use cases, the development of toolchains and methodologies for complex operational AI is only just emerging. We believe that success in any complex operational AI endeavor will be determined most critically by having access to a robust development ecosystem.
Factory automation. Many of the woes in A&D over the past 10 to 15 years have been caused by the inefficient supply chain, which has created cost overruns, delays and even bankruptcies. Many companies are struggling with major supply chain problems, from a lack of control — in both delivery and timeliness — to a lack of quality control. Across the industry, supply chain problems have cost companies tens of billions of dollars in unnecessary costs. In short, the supply chain in the A&D industry has been a major problem.
The automotive industry is highly digitised and highly automated from a manufacturing standpoint, because there is a relatively stable and manageable supply chain of 10 or 15 original equipment manufacturers (OEMs), along with cooperation and partnerships so everyone can invest in automation. By contrast, the global aerospace industry lacks this level of partnership and, critically, the necessary manufacturing volume to make similar investments. A&D companies depend on a very deep and unwieldy supply chain with almost no automation and very little planning control or quality control.
In terms of delays and quality issues, the root causes are poor manual processes and poor manual planning. Underpinning automation, the industry also needs a strong digital foundation where both machine usage and labor can be tracked and optimised automatically. Automation not only brings conformity and control, but it also brings an automatic improvement in quality that is sorely needed. The answer to automating in A&D lies in using general-purpose robotics, more specifically, cobots (collaborative robots), which are general-purpose humanoid robots that can work among a human population on both factory floors and typical A&D production lines. Cobots introduce automation at a much lower cost because retooling of manufacturing programs is not needed.
Underpinning automation, the industry also needs a strong digital foundation where both machine usage and labor can be tracked and optimised automatically.
These cobots still need to be programmed, and that’s where AI comes in. Programming traditional automation solutions is an expensive proposition, as each task needs to be custom programmed to fit a particular factory and production line. With AI, programming countless different tasks one by one is not needed. AI plus cobots enable the automation of that operational program and factory at high, low and no scale, and at a dramatically lower cost, so that implementing automation is achievable, given the constraints of the aerospace industry.
Air traffic management. AI is critical for managing the anticipated disruptions in this industry over the next 10 years. One such disruption being watched closely is the pending introduction of drones and air taxis, often known as urban air mobility systems. Thus, the second use case is building an AI air traffic controller.
Today, air traffic management is generally managed by people. Based on many projections of this growing industry, in 15 to 20 years there will be 30 times the volume of air traffic flying over a large city such as Los Angeles than there is now. Human beings would be hard-pressed to manage that huge amount of air traffic, and it may be impossible.
But this is not the type of problem where more humans can be added and each person given a smaller slice of the air traffic pie when we consider the amount and type of new air traffic expected to operate, especially at the 0 to 3,000-foot level. Given all this, there is wide consensus that the industry needs AI to manage this exponentially higher level of complexity.
Fully autonomous vehicles. This third use case is obvious because we are inundated with news about the self-driving car every day. First, being autonomous is different from being unmanned. Drones flying today don’t have pilots sitting in them, but they are still overseen by a pilot from the ground. Such drones have a ground station that has aspects of a cockpit repurposed on a desk, with a human flying the drone remotely.
The future progression from this state is full autonomy, where an AI system is constantly evaluating and reacting to the airspace and making decisions to act in accordance with its mission. The impetus for fully autonomous flight is the same as it is on the ground. A new, generally accepted roadmap has now been published and with autonomy, not only can we imagine a more efficient world, but we can imagine a safer one as well.
Creating complex operational AI systems
To create complex operational AI systems in the A&D industry, very different needs must be met for data management and for algorithm creation and implementation. Very robust simulation is also needed in the testing phase. Above all, there is a strong need for a solution that enables complex operational AI DevOps.
When training an AI system, there are two datasets. First is the historical dataset for the predictive use case containing the instances of what is being predicted. Next comes a testing or validation dataset. In the complex operational world, organisations often investigate neural networks to handle the pattern recognition. Algorithms can be used to automatically build better and better neural networks, based on the performance of the best-performing neural networks from past iterations.
But given the much higher complexity of the environment in which pattern recognition and training are run, higher-level approaches must be considered. For complex operational AI, the various machine learning approaches demand data at a higher scale and management at a greater complexity.
Building a robust simulated environment that matches real life can be a key factor in acceptance, and where necessary, certification.
What is needed is a tiered approach to building different datasets. Start with a dataset that is focused on the physics of what you are trying to model; then build a dataset that accounts for the operating environment. After that, extrapolate the data to create a dataset that is orders of magnitude larger.
Algorithms then need to be constructed to create synthetic data to fill into that next-larger phase. Plus, building a robust simulated environment that matches real life can be a key factor in acceptance, and where necessary, certification. For example, certifying an airborne system is all about proving it is safe, and complex operational AI DevOps can play a crucial role in providing a robust simulation environment in the testing phase.
Operational AI is not easy, but it is possible
Building complex operational AI systems is no simple task. There are many challenges in managing the development environment and specific workflows. Data management over the full development life cycle is key, as is using the right technology to manipulate, extrapolate and scale data. It is essential to build a system that effectively manages and curates the data that represents the foundational dataset; then integrates, manages and curates the data from that environmental representation dataset; and ultimately provides the platform and simulation for the synthetic extrapolation of that merged dataset into a much, much larger dataset.
It is no easy feat to provide the data hosting, management and curation environment through these data stages that are each so massive in scope. Providing the simulation environment is enormous, because that is the “secret sauce” needed to be able to take that large dataset, combine it and then extrapolate it to ultimately produce a robust algorithm.
The good news — not only for the A&D industry but for all industries — is that a solution does exist that provides the DevOps environment to make all this come together and work at scale. DXC Robotic Drive is the first solution that provides a soup-to-nuts development toolchain and management environment for building complex operational AI.
Learn more about DXC Analytics and Engineering services. | <urn:uuid:3ac67911-8c6e-4fec-a856-18d32f17f985> | CC-MAIN-2022-40 | https://dxc.com/au/en/insights/perspectives/paper/the-future-of-ai-in-the-aerospace-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00572.warc.gz | en | 0.945882 | 2,288 | 3.015625 | 3 |
We all have preconceived notions based on our life experiences, which can sometimes create a flawed sense of intuition. Such human bias can negatively impact our decisions, especially in professional settings such as a college admissions office or a job recruitment company.
Now, data scientists are looking to machine learning (ML) to help neutralize human bias in the workplace because ML offers data-driven insights on any topic without any inherent bias. Machine learning often goes hand in hand with Predictive Analytics (PA), which uses the insights that ML gathers to offer predictions. Due to the objective nature of these algorithms, Predictive Analytics and machine learning can go a long way toward neutralizing human bias in virtually every industry.
Removing Human Bias from Algorithms
The first step to using machine learning and Predictive Analytics to reduce human bias is to make sure the algorithm itself isn’t biased. After all, data scientists can unknowingly insert their own prejudices into the mix in what is known as algorithmic bias or predictive bias.
In order to eliminate the chances of algorithmic bias, follow these best practices:
- Don’t borrow from old software, code and algorithms: While it may be tempting to use pieces of software that were used in other solutions, they may be riddled with biases that will create an impartial product that produces poor quality results.
- Be aware of personal biases: Data scientists need to be aware of their own prejudices and make an active effort to maintain an open mind. They should look at the algorithm development process from a fresh pair of eyes, not allowing previous experiences to influence their decisions.
- Include a variety of data sources: Drawing from many sources allows each dataset to carry the same amount of weight, ensuring that one set of data isn’t tipping the scales too much. Collect as many data points as possible and find the common threads across datasets.
- Utilize black box testing: Black box testing requires software testers to dig deep into a machine learning or Predictive Analytics model to understand how it unearths insights or makes predictions. This form of testing looks for gaps or flaws in functions, errors with the software’s interface, behavior or performance errors, data structure errors, external database access issues, and initialization/termination errors.
By following these guidelines, developers can feel confident that the algorithm itself will present accurate, objective results.
Applications for Predictive Analytics and Machine Learning to Reduce Human Bias
So, what situations would benefit from the use of machine learning and Predictive Analytics to increase objectivity? Well, the applications are virtually limitless, but here are a few specific ways these technologies could make the world a better place.
Everyone needs healthcare, but unfortunately, biases (whether intentional or unintentional) can sometimes lead to certain patients not getting the treatment they deserve. After all, a hospital that only accepts patients who have a certain degree of wealth is doing a disservice to the healthcare industry as a whole by promoting classism. A hospital can reduce such bias by using machine learning algorithms that decide whether a patient is eligible for surgery based on the urgency of their symptoms rather than their economic status.
Alternatively, consider the academic world. In a college admissions office, ML and PA could neutralize human bias that may hurt the chances of applicants who may not be straight-A students but still have a lot to offer in other ways. While grades and athletics are still considered king by admissions offices, this may overlook the value of applicants with artistic talents. A points-based admission software that ranks students based on a variety of factors may shift the way it values students and become more accepting of different talents. In this instance, machine learning can learn more about how data is structured within the application’s algorithm. Then, software testers can adjust the algorithm to reduce any inherent bias.
With the help of machine learning and Predictive Analytics, businesses in all industries can work to give everyone an equal opportunity. By reducing the effects of prejudice, companies will both improve the effectiveness of their work processes and increase parity for their clients.
Machine Learning and Human Bias: Making a Better World
Machine learning and Predictive Analytics have the potential to create a more objective world that treats people from all walks of life fairly. Almost every industry can benefit from what the technology has to offer, and now data scientists are developing sophisticated business solutions that create a more level playing field.
At 7T, we believe that cutting-edge technology can change the world. We offer machine learning and Predictive Analytics services to help companies looking to neutralize human bias in their decision-making processes. In addition, our development team is well-versed in other emerging technologies such as augmented reality, virtual reality, blockchain and natural language processing. | <urn:uuid:d0820072-336a-4786-ac03-d93d2b791237> | CC-MAIN-2022-40 | https://7t.co/blog/how-predictive-analytics-and-machine-learning-neutralize-human-bias/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00572.warc.gz | en | 0.932601 | 961 | 3.078125 | 3 |
What Is Ransomware? Definition, Types, Detection, & Removal
Ransomware is a type of malware that can lock computers, networks, and systems until a ransom is paid. It's a growing problem for businesses and individuals alike.
The Internet Crime Complaint Center received 2,474 ransomware reports in 2020, causing millions of dollars in damages, and those numbers will likely increase when the 2021 numbers are revealed. Ransomware is constantly in the news, leading to the question: what is ransomware and why is it so prevalent?
Various state agencies and private-sector researchers track ransomware attacks and related tactics worldwide, but malicious actors constantly change and evolve their strategies, making it hard to detect and block every attack. We’ve put together a comprehensive guide that defines ransomware, explains how to detect it, and outlines what steps to take if you’ve fallen victim to an attack.
What Is Ransomware?
Ransomware is any type of extortion malware that locks your computer or encrypts your files and demands payment in exchange for restoring access, hence the name. Put simply, ransomware describes any cyberattack in which attackers infiltrate a system, encrypt the victim’s files, and then demand a ransom in exchange for returning access to the data.
As part of the attack, victims are provided with instructions on how to obtain the decryption key by paying the ransom. Ransom fees can range from a few hundred to several thousand dollars, and in rare cases rise into the millions. In recent years, ransoms have typically been paid to attackers in cryptocurrency.
Types of Ransomware
The two most common forms of ransomware are locker ransomware and crypto-ransomware:
Locker Ransomware: Prevents the victim from accessing their machine. Once access is denied, the victim is prompted to pay the ransom to unlock their device.
Crypto Ransomware: Encrypts the user's data and prevents it from being accessed. The cybercriminal then demands money to decrypt the information. Crypto-ransomware has become the most popular type of ransomware in recent years.
Other types of ransomware include:
Lock Screens or Non-Encrypting Ransomware: Restricts access to files and data but do not encrypt them.
Master Boot Record (MBR) Ransomware: Makes it impossible for victims' PCs to boot into a live OS environment.
Extortionware or Leakware: Steals compromising or damaging information that attackers then threaten to release if the ransom is not paid.
Mobile Ransomware: Infects cell phones through drive-by downloads or fake apps.
Is Ransomware A Virus?
The short answer is no, though the distinction can be tricky. Computer viruses attack your software and can replicate themselves, while ransomware scrambles your files, making them useless, and then demands payment to unscramble them. Both can be removed with antivirus software, but if your files have already been encrypted, removing the ransomware alone won't recover them.
Notable Ransomware Attacks
Ransomware attacks have extorted billions of dollars from victims. Here are a few examples of recent ransomware attacks:
- WannaCry: WannaCry was a ransomware outbreak that spread across 150 countries in 2017. Created to exploit a Windows flaw, it had infected over 100,000 machines by May 2017. The attack wreaked havoc on several UK hospital trusts, costing the NHS £92 million after users were locked out and a Bitcoin ransom was demanded. The hack revealed the dangers of relying on out-of-date technology and resulted in approximately $4 billion in global financial damages.
- Ryuk: Ryuk spread in mid-2018. The ransomware disabled the Windows System Restore feature on infected PCs, meaning users couldn't recover encrypted files without an external backup. Victims paid the ransoms, and the total loss is believed to be $640,000.
- KeRanger: KeRanger is considered the first ransomware attack to target Mac machines using the OS X operating system. KeRanger was included in an installation of Transmission, an open-source BitTorrent client. After three days of inactivity, it encrypted 300 distinct sorts of data. It then downloaded a file containing a ransom note that demanded Bitcoin and instructions to pay the ransom. The victim's files were decrypted when the ransom was paid.
- Petya: Petya caused widespread alarm but was considerably less devastating than WannaCry. The attack mostly hit Ukraine, which accounted for more than 90% of infections, though victims reported attacks in other parts of the world as well.
How Does Ransomware Work?
Ransomware attacks may disrupt business operations and leave companies without the data they need to operate or deliver mission-critical services, not to mention the damage to a company's reputation after a security breach. As an additional form of extortion, malicious actors have modified their techniques to pressure victims into paying by threatening to expose stolen data if they refuse. The monetary value of ransom demands has also risen, with some surpassing $1 million in extreme cases.
Malicious actors use lateral movement to target sensitive information and spread ransomware across entire networks. These actors also increasingly employ techniques that make restoration and recovery more difficult (or impossible) for targeted businesses, such as destroying system backups. Ransomware spreads swiftly and strikes hard, from malicious email attachments and false links to social media frauds.
Here are a few methods used by ransomware attackers:
Social engineering is a phrase used to describe the process of tricking individuals into downloading malware via a fake file or link. Malicious files are frequently disguised as legitimate documents (order confirmations, invoices, bills, and notifications) and appear to come from a trustworthy organization. It's as simple as downloading one of them to your computer, trying to open it, and bam! You've been infected.
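One lightweight defense against these disguised attachments is to flag filenames that use a document-style double extension (such as "invoice.pdf.exe") before anyone opens them. The sketch below is illustrative only; the extension lists are assumptions, not a complete policy, and in practice this check would complement real attachment scanning rather than replace it:

```python
import os

# Illustrative lists only, not an exhaustive security policy.
EXECUTABLE_EXTS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd"}
DOCUMENT_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def is_suspicious_attachment(filename: str) -> bool:
    """Flag executables disguised with a document-like inner extension,
    e.g. "invoice.pdf.exe" or "statement.xls.scr"."""
    root, ext = os.path.splitext(filename.lower())
    _, inner = os.path.splitext(root)
    return ext in EXECUTABLE_EXTS and inner in DOCUMENT_EXTS

print(is_suspicious_attachment("Invoice_2021.pdf.exe"))  # True
print(is_suspicious_attachment("Invoice_2021.pdf"))      # False
```

A mail gateway could quarantine anything this heuristic flags for human review; it costs almost nothing to run and catches one of the oldest social engineering lures.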
Malvertising is the term for sponsored advertisements that transmit ransomware, spyware, viruses, and other malicious software at the click of a button. Hackers will invest in ad space on popular websites to obtain your personal information.
Exploit kits are ready-to-use hacking tools that contain pre-written code. As you might expect, these kits are designed to exploit vulnerabilities and security flaws created by out-of-date software.
Drive-by downloads are hazardous files that you didn’t request and may be completely unaware of. While you're surfing an innocent-looking website or watching a video, some dangerous websites take advantage of out-of-date browsers or applications to quietly download malware in the background.
What Is A Ransomware Attack?
If a company has fallen prey to one of the above attacks, how quickly does it escalate? What does a ransomware attack look like? Here’s a general timeline of attacks:
- Infection: The ransomware installs itself on the system and any network devices it can access after being transmitted through an email attachment, phishing email, infected program, etc.
- Secure Key Exchange: The ransomware contacts the attackers' command and control server to create the cryptographic keys used on the local machine.
- Encryption: The malware encrypts any data it finds on local computers and across the network.
- Extortion: Once the encryption is complete, the ransomware shows ransom payment instructions, threatening data destruction or publication if payment is not made.
- Decryption: Companies can pay the ransom and hope the hackers actually decrypt the files, or they can attempt to recover the data themselves by removing infected files and computers from the network and restoring data from clean backups. Negotiating with cyber thieves is typically futile, as a recent study revealed that 42% of businesses that paid a ransom did not get their files decrypted.
Who Does Ransomware Target?
There are many methods through which ransomware criminals select the organizations they attack. It's also a matter of timing. For example, attackers may target colleges since they have smaller security teams and a wide user base that shares numerous files, making it simple to breach their defenses. On the other hand, large corporations are appealing targets because they appear to be more inclined to pay a ransom quickly and have the means to do so.
Government institutions and medical facilities, for example, frequently require rapid access to their information, which makes them more likely to pay quickly. Law firms and other businesses with sensitive data may be particularly vulnerable to leakware assaults, and are more likely to pay to keep an attack hidden from the public.
How To Detect Ransomware
Ransomware attacks are difficult to identify quickly enough to avoid serious consequences. They’re installed through devious social engineering tactics, and sensitive data is scrambled using military-grade encryption algorithms. Once a computer or other endpoint has been compromised, ransomware may swiftly spread throughout the network, making it virtually impossible to respond in real time. Often, the infected business only becomes aware of the attack after the ransomware has encrypted its data and announced its demand for payment. The following are signs of a ransomware attack:
- Hundreds of unsuccessful file changes, among other strange file system activities, due to the ransomware attempting to access those files.
- Unexpectedly high CPU and disk activity due to the ransomware searching for, encrypting, and removing data files.
- Restricted access to some files, a result of ransomware encrypting, deleting, renaming, or relocating data.
- Suspicious network communications as a result of the ransomware's contact with the attackers' command and control server.
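One practical way to automate the file-oriented signs above is to watch for files whose contents suddenly look like random noise: encrypted data has near-maximal Shannon entropy, while ordinary documents do not. The sketch below is a minimal, illustrative Python heuristic, not a production detection tool; the 7.5-bit threshold and sample data are assumptions, and real endpoint tools combine many more signals.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag buffers whose byte entropy is suspiciously close to random noise."""
    return shannon_entropy(data) >= threshold

# Ordinary text sits well below the threshold; random bytes sit near 8.0.
plain = b"quarterly report: revenue up 4% quarter over quarter " * 40
random_like = os.urandom(4096)  # stands in for a freshly encrypted file
```

A monitor built on this idea would sample files as they are rewritten and alert when many of them cross the threshold in a short window.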
How To Prevent Ransomware
The best form of ransomware protection is prevention. In order to take preventative measures, you'll need a keen eye and the proper security software. Vulnerability checks can also aid in the detection of intruders on your network. First and foremost, ensure your machine isn't a prime ransomware target. Make sure that you always keep your device’s software up to date to benefit from the most recent security updates.
Furthermore, proceed with extreme caution online, mainly when dealing with fraudulent websites and email attachments. However, even the most nuanced preventative measures might fail, emphasizing the importance of having a backup plan. A backup of your data is a good contingency plan in the case of a ransomware attack.
While no company is immune to cyberattacks, there are a few best practices that can decrease your chances of becoming a victim:
- Educate your staff. Give workers a checklist of what to do if they get a questionable email or visit a suspicious website. Teach them to look for red flags in phishing emails.
- Analyze your systems for any unusual activity. You should regularly scan file systems for unusual behavior, such as hundreds of unsuccessful file changes.
- Monitor all incoming and outgoing traffic. Determine the usual user activity baseline and search for anomalies ahead of time. Investigate any odd behavior right away.
- Set up honeypots. Honeypots are decoys, or false file repositories, that appear to be authentic. Honeypots will be targeted by hackers, allowing you to detect them before they widen their attack to your system. Early detection aids in the safe eradication of malware and saves your infrastructure from being hacked.
- Implement an anti-ransomware solution. Use whitelisting software in conjunction with antivirus and anti-ransomware software to detect risks.
- Systematically examine and filter spam or questionable email content. Configure email settings so that incoming mail is automatically filtered and suspicious messages are not delivered to a user's mailbox.
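The last practice above can be partially automated. As an illustrative sketch (the extension blocklist here is a made-up minimal example, not a vetted policy), Python's standard-library email module can flag messages carrying attachment types commonly abused by ransomware droppers:

```python
from email.message import EmailMessage

# Illustrative blocklist only -- a real mail gateway uses far richer policies.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".iso"}

def suspicious_attachments(msg):
    """Return attachment filenames whose extension is on the blocklist."""
    flagged = []
    for part in msg.iter_attachments():
        name = part.get_filename() or ""
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            flagged.append(name)
    return flagged

# Build a toy message resembling a ransomware lure.
msg = EmailMessage()
msg["Subject"] = "Overdue invoice"
msg.set_content("Please see the attached invoice.")
msg.add_attachment(b"MZ\x90\x00", maintype="application",
                   subtype="octet-stream", filename="invoice.exe")
msg.add_attachment(b"%PDF-1.4", maintype="application",
                   subtype="pdf", filename="receipt.pdf")
```

A real gateway would also inspect archive contents, macros, and sender reputation rather than relying on filenames alone.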
How To Remove Ransomware
If you’ve fallen victim to a file encryption ransomware attack, you may remove the encryption malware by following these instructions:
- Disconnect from the internet. First, disconnect all virtual and physical connections: wireless and wired devices, external hard drives, storage devices, and cloud accounts. This can help to prevent ransomware from spreading throughout the network. If you believe that additional areas have been impacted, follow the procedures below to restore those areas as well.
- Use your internet security software to investigate. Use the internet security software you've installed to run a virus scan—this aids in detecting dangers. If you find any potentially harmful files, either delete or quarantine them. You can manually delete dangerous files or use antivirus software to do it automatically. Manual virus eradication is only suggested for experts.
- Use a decryption tool. If a system has been infiltrated by ransomware, you will need a decryption program to restore access to your files.
- Recover your data from a backup. If you have backed up your system externally or in cloud storage, restore your data from that backup. Cleaning and restoring your device is far more difficult if you don't have any backups, so generate backups regularly to avoid this problem. If you have a habit of forgetting essential tasks, employ automated cloud backup services or create calendar notifications to remind you.
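To make the regular-backup advice concrete, here is a bare-bones sketch of a timestamped local backup in Python. It is only an illustration: real resilience requires offline or offsite copies that ransomware cannot reach. The paths below are throwaway temporary directories so the example runs anywhere.

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def backup_directory(source: Path, dest_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `dest_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = dest_root / f"{source.name}-{stamp}"
    shutil.copytree(source, target)
    return target

# Demo against throwaway directories so the sketch runs anywhere.
src = Path(tempfile.mkdtemp()) / "documents"
src.mkdir()
(src / "notes.txt").write_text("keep me safe")
made = backup_directory(src, Path(tempfile.mkdtemp()))
```

In practice you would schedule something like this (or a dedicated backup tool) to run automatically and copy to media that is disconnected between runs.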
Step-By-Step Guide: What To Do If You're Under Ransomware Attack
1. Isolate the ransomware.
Detecting ransomware quickly is crucial to countering fast-moving assaults before they propagate across networks and encrypt sensitive data. The first step is to isolate the infected machine from other computers and storage devices. Remove it from the network (wired and wireless) as well as from any external storage devices, as you don't want the ransomware to communicate with its command and control center across the network.
Be careful as there might be more than one patient zero, indicating that the ransomware may have infiltrated your business or household via numerous machines or that it may be dormant and has not yet shown itself on certain systems. Suspect all linked and networked devices and take precautions to guarantee that none of them are infected.
2. Identify the ransomware.
When ransomware requests money, it usually identifies itself. Knowing what you’re dealing with can help you understand:
- The type of ransomware
- How it spreads
- What type of data it encrypts
- What removal options you have
Once you know the type, you can figure out what to do next.
3. Report the attack.
By reporting ransomware to the authorities, you’ll be doing everyone a service. Regardless of the outcome, the FBI's Internet Crime Complaint Center encourages ransomware victims to report their attacks. Reporting allows law enforcement to gain better knowledge of the threat, offers solutions for ransomware investigations, and contributes essential information to ongoing cases. Knowing more about the victims and their ransomware experience can aid the FBI in determining who is behind the attacks and how they identify or target victims.
4. Evaluate your options.
When infected with ransomware, you have the following options:
- Cover the cost of the ransom
- See if it's possible to get rid of the malware
- Completely erase the system(s) and start over
Paying the ransom is typically thought to be a poor choice, as it fosters the spread of additional ransomware, and unlocking the encrypted files is often unsuccessful.
5. Restore the system.
You can either try to eradicate the malware from your devices or wipe and reinstall them from secure backups and fresh OS and application sources. However, it's uncertain whether you can fully eradicate a ransomware infection, as there isn't a viable decryptor for every known ransomware strain. The newer the ransomware is, the more sophisticated it's likely to be, and the less time there has been to build a decryptor.
The most reliable approach to ensuring that malware or ransomware has been eradicated from a system is to erase all storage devices and reinstall everything from the ground up. You should format the hard drives on your system to guarantee that no vestiges of the virus remain.
Ransomware: The Bottom Line
Hackers are constantly refining their methods of delivering ransomware. The only way to mitigate the threat posed by online extortionists is to know how to recognize malicious actors and keep a close eye on the evolution of ransomware attacks. Unfortunately, this requires time and resources that may need to be reallocated from business-critical activities.
To stop ransomware attacks that come via email, you can implement next-generation integrated cloud email security that provides protection against the most advanced attacks, including ransomware, business email compromise, and more. Adding a solution on top of your Microsoft or Google environment will provide you with the best possible protection to prevent malware, ransomware and other attacks.
Want to learn more about how Abnormal stops ransomware attacks? Request a demo today to discover how integrated cloud email security can protect your organization. | <urn:uuid:67e7af76-17ac-4e8a-96e6-ef3f3823dff0> | CC-MAIN-2022-40 | https://abnormalsecurity.com/glossary/ransomware | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00572.warc.gz | en | 0.925649 | 3,376 | 3.078125 | 3 |
What is Cloud all about? How do we use Cloud? Where is Cloud? Do we use Cloud? What is the benefit one gets from it? Such basic questions and many more keep circling around when one hears “Cloud” or “Cloud Computing”.
In simple words, Cloud Computing means storing and accessing data and programs over the Internet instead of your computer’s hard drive. Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices to handle applications.
An example of “Cloud Computing” is Yahoo Mail or Gmail. All one needs is an internet connection to start sending emails. The server and email management software all reside on the cloud (internet) and are managed entirely by the cloud service provider, such as Yahoo or Google. The end user gets to use the software alone and enjoy the benefits. Another example is Google Drive, where one stores personal information over the Internet on the cloud rather than on a local PC or laptop hard drive.
There is also great choice in the level of security and management required in cloud deployments, with an option to suit almost any business –
- A Public Cloud is one where services and infrastructure are hosted off-site by a cloud provider, shared across multiple clients and accessed by these clients via public networks such as the internet. Public clouds offer great economies of scale and redundancy but are more vulnerable than private cloud setups due to their high levels of accessibility.
- Private Clouds, on the other hand, use services and infrastructure stored and maintained on a private network – whether physical or virtual – accessible to only one client. This provides an improved level of security and control. Cost is on the higher side, as the enterprise in question will have to purchase/rent and maintain all the necessary software and hardware.
- The third option is the Hybrid Cloud, which, as the name suggests, combines both public and private cloud elements. A hybrid cloud allows a company to maximise efficiency: by utilizing the public cloud for non-sensitive operations while using a private setup for sensitive or mission-critical operations, companies can ensure that their computing setup is ideal without paying any more than is necessary.
Moving away from deployment models, broadly speaking there are 3 models of cloud computing which describe the service on offer; these are Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
- IaaS (Infrastructure as a Service) takes the traditional physical computer hardware, such as servers, storage arrays, and networking, and lets you build virtual infrastructure that mimics these resources, but which can be created, reconfigured, resized, and removed within moments, as and when a task requires it. The most well-known IaaS provider is Amazon Web Services.
- PaaS (Platform as a Service) provides a method for programming languages to interact with services like databases, web servers, and file storage, without having to deal with lower-level requirements like how much space a database needs, whether the data must be protected by replicating it across three servers, or how to distribute the workload across servers that can be spread throughout the world. Typically, applications must be written for a specific PaaS offering to take full advantage of the service, and most platforms only support a limited set of programming languages. Some examples of PaaS solutions are the “Google App Engine” system, “Heroku” which operates on top of the Amazon Web Services IaaS system, and “Force.com” built as part of the SalesForce.com Software as a Service offering.
- SaaS (Software as a Service) is at the top layer of cloud computing. Software as a Service is typically built on top of a Platform as a Service solution, whether that platform is publicly available or not, and provides software for end-users such as email, word processing, or a business CRM. Software as a Service is typically charged on a per-user, per-month basis, and companies have the flexibility to add or remove users at any time without additional costs beyond the monthly per-user fee. Some of the most well-known SaaS solutions are “Google Apps”, Salesforce.com, and Microsoft’s “Business Productivity Online Suite”.
More on the differences between IaaS, PaaS, and SaaS – http://www.ipwithease.com/saas-vs-paas-vs-iaas/
Benefits of Cloud Computing –
- Reduced cost: Cloud computing can reduce both capital expense (Capex) and operating expense (Opex) costs because resources are only acquired when needed and are only paid for when used.
- Refined usage of personnel: Using cloud computing frees valuable personnel, allowing them to focus on delivering value rather than maintaining hardware and software.
- Robust scalability: Cloud computing allows for immediate scaling, either up or down, at any time without long-term commitment.
Related- Colocation vs Cloud | <urn:uuid:1cec396e-b672-4a10-9914-c00deb07ca63> | CC-MAIN-2022-40 | https://ipwithease.com/cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00772.warc.gz | en | 0.928415 | 1,036 | 3.265625 | 3 |
Interoper - What?
Interoperability is the seamless sharing of data, content and services among systems or applications. As districts’ ecosystems become increasingly digital, the need for interoperability between systems becomes increasingly apparent— for cost efficiencies as well as teaching and learning effectiveness. According to the Consortium for School Networking (CoSN), an extensive study conducted by the state of Michigan found that the lack of interoperability in schools costs an astounding $163,000,000 per year.
Schools' IT network operators are under pressure to build robust connectivity to support the goal of interoperability. Through the years they have purchased additional equipment, including switches and optics, to scale. Sometimes past purchase decisions don't support today's and tomorrow's needs. Ripping and replacing existing systems can be costly, and operators tend to avoid shouldering that responsibility at all costs.
Interoperability challenges also exist in the world of networking infrastructure. When a school network operator purchases a switch from an OEM manufacturer such as Cisco or Juniper, they may also purchase the same branded optical transceivers, but at a substantially higher cost. These optical transceivers, widely used in networking hardware installations, directly impact the bandwidth, speed, and transmission of data. An upgrade to these optics can easily avoid costly replacements while greatly enhancing interoperability.
For the same product, a compatible optic brand such as AddOn can offer up to 70% in cost savings with guaranteed interoperability in different switch manufacturers. This means at a fraction of the cost, compatible optics can work just as well as a Cisco or Juniper branded optic in a Cisco or Juniper system.
K–12 leaders have options to maximize their existing network infrastructure. The switches or routers in use tend to remain in service for years until a new feature is needed, and they are time- and cost-intensive to replace. The transceivers, however, are easy to swap out without replacing major systems and deliver immediate benefits in speed and bandwidth. Many of these are even hot-swappable, meaning they can be replaced while the system is on, avoiding service outages.
Why Ed Tech leaders choose AddOn optical transceivers
- Easy to deploy - AddOn's optical solutions are hot-swappable, which means they can work without any additional configurations. They function identical to the OEM with no extra steps to hot swap or plug. Simply plug and play.
- Keep existing infrastructure- AddOn transceivers leverage data density-improving tech like tunable optics and multiplexing (CWDM/DWDM) to increase performance without costly new fiber links. We maximize the data value of each fiber you already have.
- Immediate benefits- Deploying an AddOn transceiver solution provides immediate relief to bandwidth bottlenecks or speed traps, enhancing the student experience while simultaneously improving the technical capabilities of your educators.
- Cost-effective - With hardware costs at a fraction of the OEM alternative and hands-on expert support available without paid service contracts, choosing AddOn helps bridge the gaps in your school’s IT and service budgets.
AddOn's high-performance optics have empowered schools across the nation.
Be an Ed Tech leader and solve interoperability challenges today! | <urn:uuid:8951f4e6-7d3b-4582-a3d8-c0bc1ad3d488> | CC-MAIN-2022-40 | https://www.addonnetworks.com/news/interoperability-in-school-systems | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00772.warc.gz | en | 0.924192 | 660 | 3.140625 | 3 |
Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video. Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. This post covers steganography in Kali Linux – hiding data in an image. You can pretty much use the same method to hide data in audio or video files.
In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every 100th pixel to correspond to a letter in the alphabet, a change so subtle that someone not specifically looking for it is unlikely to notice it.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal. Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.
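The “every 100th pixel” idea above is a form of least-significant-bit (LSB) embedding. The toy Python sketch below demonstrates the principle on a raw bytearray standing in for grayscale pixel data (no image library needed); a real tool such as steghide adds encryption, compression, and statistical camouflage on top of this basic trick.

```python
def embed(pixels, message):
    """Hide each bit of `message` in the least significant bit of a pixel byte."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("cover image too small for this message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # change at most the lowest bit
    return out

def extract(pixels, length):
    """Recover `length` hidden bytes from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
                 for n in range(0, len(bits), 8))

cover = bytearray(range(256)) * 4      # stand-in for raw grayscale pixel data
stego = embed(cover, b"meet at dawn")  # each pixel value changes by at most 1
```

Because each pixel value changes by at most one, the stego image is visually indistinguishable from the cover, which is exactly what makes statistical steganalysis necessary.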
Steganography in Kali Linux
There’s two primary tools available in Kali Linux for Steganographic use.
Steghide is a steganography program that is able to hide data in various kinds of image and audio files. The color (respectively, sample) frequencies are not changed, making the embedding resistant against first-order statistical tests.
- compression of embedded data
- encryption of embedded data
- embedding of a checksum to verify the integrity of the extracted data
- support for JPEG, BMP, WAV and AU files
Stegosuite is a free steganography tool written in Java. With Stegosuite you can hide information in image files.
- BMP, GIF and JPG supported
- AES encryption of embedded data
- Automatic avoidance of homogenous areas (only embed data in noisy areas)
- Embed text messages and multiple files of any type
- Easy to use
Hiding data in image using steghide
Installation is simple, as steghide is already available in the Kali Linux repository. Run the following command and you’re done.
root@kali:~# apt-get install steghide
Hide text file in Image
I created a folder steghide in the root home folder and placed a secret.txt file in there. picture.jpg is the file where I am going to hide secret.txt. I am going to show the commands here.
To hide text file in Image in Kali Linux using steghide, use the following command:
root@kali:~/steghide# steghide embed -cf picture.jpg -ef secret.txt
Enter passphrase:
Re-Enter passphrase:
embedding "secret.txt" in "picture.jpg"... done
root@kali:~/steghide#
This command will embed the file secret.txt in the cover file picture.jpg. Now you can email, share or do anything with this new picture.jpg file without having to worry about exposing your data.
Extracting text file from Image
After you have embedded your secret data as shown above, you can send the file picture.jpg to the person who should receive the secret message. The receiver has to use steghide in the following way:
root@kali:~/steghide# steghide extract -sf picture.jpg
Enter passphrase:
the file "secret.txt" does already exist. overwrite ? (y/n) y
wrote extracted data to "secret.txt".
If the supplied passphrase is correct, the contents of the original file secret.txt will be extracted from the stego file picture.jpg and saved in the current directory.
Just to be on the safe side, I am checking the content of the secret.txt I extracted. Seems ok.
root@kali:~/steghide# head -3 secret.txt
Linux. It’s been around since the mid ‘90s, and has since reached a user-base that spans industries and continents. For those in the know, you understand that Linux is actually everywhere. It’s in your phones, in your cars, in your refrigerators, your Roku devices.
It runs most of the Internet, the supercomputers making scientific breakthroughs, and the world's stock exchanges.
But before Linux became the platform to run desktops, servers, and embedded systems across the globe, it was (and still is) one of the most reliable, secure, and worry-free operating systems available.
root@kali:~/steghide#
Viewing Info of embedded data
If you have received a file that contains embedded data and you want to get some information about it before extracting it, use the info command:
root@kali:~/steghide# steghide info picture.jpg
"picture.jpg":
  format: jpeg
  capacity: 3.1 KB
Try to get information about embedded data ? (y/n) y
Enter passphrase:
embedded file "secret.txt":
  size: 6.5 KB
  encrypted: rijndael-128, cbc
  compressed: yes
root@kali:~/steghide#
After printing some general information about the stego file (format, capacity) you will be asked if steghide should try to get information about the embedded data. If you answer with yes you have to supply a passphrase.
Steghide will then try to extract the embedded data with that passphrase and – if it succeeds – print some information about it.
If you want more detailed information please read the man(ual) page.
Hiding data in image using Stegosuite
Stegosuite is pretty much a GUI for similar steghide-type functionality.
Installation is simple, as stegosuite is already available in the Kali Linux repository. Run the following command and you’re done.
root@kali:~# apt-get install stegosuite
Embed text file in Image using Stegosuite
You need to run it from the Application menu (or you can just search for it). Go to File > Open and open the image you want to use. Right-click on the file section, select add files, and choose your secret.txt file. Type in a passphrase and click on Embed. A few seconds later it will create a new file, picture_embed.jpg.
Extracting text file from Image using Stegosuite
If you want to extract the text file or data from the image, simply open the image, type in the passphrase, and click on Extract.
Steganalysis and detection
In computing, steganographically encoded package detection is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean-copies of these materials and compare them against the current contents of the site. The differences, assuming the carrier is the same, comprise the payload. In general, using extremely high compression rates makes steganography difficult, but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which facilitates easier detection (in extreme cases, even by casual observation).
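The compare-against-known-originals approach is easy to automate with cryptographic hashes: any file whose digest differs from the clean baseline has been modified and may be carrying a payload. Here is a minimal Python sketch; the file names and contents are invented for the demo, and a real deployment would store the baseline somewhere an attacker cannot rewrite it.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def modified_files(baseline, directory):
    """Names in `baseline` whose current hash no longer matches the clean copy."""
    return [name for name, clean in baseline.items()
            if sha256_file(directory / name) != clean]

# Toy "website" with two assets; record a clean baseline, then tamper with one.
site = Path(tempfile.mkdtemp())
(site / "logo.png").write_bytes(b"\x89PNG original logo bytes")
(site / "banner.png").write_bytes(b"\x89PNG banner bytes")
baseline = {n: sha256_file(site / n) for n in ("logo.png", "banner.png")}
(site / "logo.png").write_bytes(b"\x89PNG original logo bytes\x00payload")
```

Note that hashing only tells you *that* a file changed, not *what* was hidden; pinpointing and decoding the payload still requires dedicated steganalysis.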
I found a few references that I’ve included here, but I am really not able to find a really good source or tool. Perhaps the readers might suggest more tools and methods.
- Steganalysis: Your X-Ray Vision through Hidden Data
- A few tools to discover hidden data
- Steganography Tools
- An Overview of Steganography for the Computer Forensics Examiner
- Steganography Countermeasures and detection
- Digital Forensic Tools: Imaging, Virtualization, Cryptanalysis, Steganalysis, Data Recovery, Data Carving, Reverse Engineering | <urn:uuid:582b43cd-6b48-4dfd-8716-8f093aef4ab0> | CC-MAIN-2022-40 | https://www.blackmoreops.com/2017/01/11/steganography-in-kali-linux-hiding-data-in-image/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00772.warc.gz | en | 0.868576 | 1,797 | 2.609375 | 3 |
Organizations face a variety of different risks, and cyber threats are becoming a top-of-mind security concern. An organized approach to risk management is essential to minimizing the probability and impact of successful cyberattacks.
A risk management framework (RMF) can help with this. An RMF defines the process for identifying and managing the risks to an organization to eliminate or minimize the probability and impacts associated with these risks.
What is the Purpose of a Risk Management Framework?
A risk management framework enables an organization to make intelligent decisions about how it will address the various risks faced by the business. Effectively managing these risks provides a number of different benefits to an organization, including:
- Insider Threat Management: Most data breaches are caused by insiders, either intentionally or unintentionally. By identifying risky behaviors that could lead to a breach and codifying responses, an organization positions itself to respond quickly and effectively to a potential incident.
- IP Protection: An organization’s intellectual property is vital to its ability to compete effectively in the marketplace. A risk management framework can help an organization to identify risks to its IP and develop strategies for minimizing these risks.
- Regulatory Compliance: Many types of customer data are protected by data privacy laws and data protection regulations. To avoid regulatory penalties and legal action, an organization must take steps to minimize the potential for exposure of the protected data in its care.
- Vulnerability Management: Within an organization’s network, different vulnerabilities have varying levels of exploitability and impact. A risk management framework provides a structured process for identifying and managing the risks associated with these vulnerabilities.
How Do You Develop a Risk Management Framework?
The National Institute of Standards and Technology (NIST) has created a risk management framework for securing US government systems. However, the risk management framework steps outlined by NIST are widely applicable to cyber risk management in general.
1. Categorize Information Systems
Different information systems have different levels of importance within an organization. Some computers may be “critical systems” or store and process sensitive data that is protected by laws and regulations. Others, like employee workstations, may be useful but are not vital to operations.
Categorizing information systems based upon their roles and the data that they can access is crucial to risk management. This information helps to determine the impact of attacks and with prioritizing security controls.
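NIST's FIPS 199 formalizes this categorization with a "high-water mark" rule: rate the impact of losing confidentiality, integrity, and availability for each system, and the overall category is the highest of the three. Here is a minimal sketch of that rule; the system names and ratings are invented for illustration, and real categorizations come from a formal review.

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """High-water mark: the system's category is its highest single impact."""
    return max((confidentiality, integrity, availability),
               key=LEVELS.__getitem__)

# Invented inventory for illustration only.
systems = {
    "payroll-db":  categorize("high", "high", "moderate"),
    "workstation": categorize("low", "low", "low"),
    "public-site": categorize("low", "moderate", "moderate"),
}
```

A table like this then drives the next step: the higher the category, the stricter the set of security controls selected for the system.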
2. Select Security Controls
Based on the categorization of each information system, the next step in risk management is selecting a set of security controls for each asset. Security controls should be selected based upon several different factors, including:
- Regulatory Requirements: Data protection regulations like the GDPR, PCI DSS, and HIPAA outline minimum requirements for the security of sensitive data protected under the law. Security controls should be selected to meet or exceed these requirements.
- Corporate Policy: Data protection regulations only cover certain types of data. Additional protections may be required for systems containing intellectual property or other types of sensitive business data.
- Business Needs: Security controls should balance security with usability. Select security controls that meet requirements, but also ensure that it is still possible for employees to do their jobs.
When selecting security controls, it is important to define a policy that is sustainable. Rather than taking a “check the box” approach to compliance, design controls that meet requirements but are also maintainable.
3. Implement Security Controls
After designing security controls for a system or systems, the next step is to implement these security controls. At this stage in the process, it is essential to document the controls put in place to ensure that they can be properly monitored and maintained.
4. Assess Security Controls
After implementing security controls, test them to ensure that they are effective. If they don’t work, return to step 2 and design new controls.
5. Authorize Information System
Once a system is secure it can be authorized for use. Any risks not mitigated by the selected security controls should be documented as accepted risk.
6. Monitor Security Controls
Security is not a “one and done” exercise. Security controls should be regularly monitored and assessed to ensure that they are effective and updated as needed. | <urn:uuid:dbae0a31-15ef-4b56-a2bc-de9a82b5f5df> | CC-MAIN-2022-40 | https://www.code42.com/glossary/risk-management-framework/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00772.warc.gz | en | 0.922277 | 855 | 3.078125 | 3 |
IBM Research has been working on new non-volatile magnetic memory for over two decades.
Non-volatile memory is wonderful for retaining data without power, but it is extremely slow, and does not last forever. Primary computer memory (Dynamic Random Access Memory, or DRAM) is fast but volatile (thats the D part), and on-processor chip memory (Static Random Access Memory, or SRAM) is extremely fast, but is not as dense as we would like. Technology, like life itself, is full of compromises. But, …
Wouldn’t it be nice if computer memory was non-volatile, dense, fast and durable? High-Performance Computing (HPC) and Artificial Intelligence (AI) applications would run faster, and consume less power. And main memory itself could be non-volatile. Well, that day may be closer than you might think.
A team of IBM Research scientists has been working on Magnetic Random-Access Memory (MRAM) for decades. Early MRAM had significant performance and manufacturing limitations, but now these scientists believe they are close to inventing something closer to memory nirvana. Ok, perhaps something short of Nirvana, but closer. If these scientists are correct, they could revolutionize storage for on-processor last-level cache memory and faster non-volatile memory (NVM) on edge devices. Let’s look at why IBM is so excited about this advancement and what it will take to finish the job. We have also published an in-depth report here.
IBM foresees a memory technology that could be fast, dense, non-volatile and durable. (Photo by
The Promise of STT-MRAM
IBM is developing a technology called Spin-Transfer-Torque MRAM. On-processor SRAM memory offers exceptional bandwidth at low latencies, providing a fast cache between DRAM and the processor cores. However, while SRAM is fast, it is not particularly dense, limiting the size of SRAM caches to hundreds of megabytes. Meanwhile, emerging applications such as AI accelerators demand more memory capacity and MRAM could double that capacity at low power and unlimited endurance if a much faster version were available. In this world, ASICs such as AI accelerators could increase performance with more on-chip memory for model weights and parameters. Accelerators needing more memory capacity could also benefit from the reduced frequency of DRAM accesses a larger cache could provide.
BM envisions four eventual markets for STT-MRAM. The first is what most of us think of as stand-alone memory. STT-MRAM could one day even replace DRAM in applications requiring non-volatility. The second market is for embedded non-volatile memory in chips, where Samsung is already fabricating STT-MRAM on 28-nm Silicon on Insulator (SOI) manufacturing lines. Cache memory on slower low-power processors such as used in mobile phones is the third market opportunity. The fourth and largest market opportunity is to replace some SRAM for high-performance computing and Artificial Intelligence as a last-level cache.
IBM envisions four markets for STT-MRAM once the research is complete. Source:IBM
Everspin Technologies has effectively shipped all early STT-MRAM devices into the market thus far, targeting high-end ultra-reliable storage buffers. IBM’s FlashCore module uses this technology today. However, to target the larger market of last-level cache, IBM will need to improve the read-write time from 30-70ns to something like 2ns. And STT-MRAM endurance would need to improve from the current 1010 writes to virtually unlimited data retention, or something like 1018 writes (is that an “Exa-Write”?).
The Challenges of STT-MRAM
Five challenges below could enable last-level cache memory and embedded flash. IBM Research had previously solved the first four challenges, the most advances occurred in 2020. Only one issue remains unsolved to date: IBM must figure out how to reduce the current needed to switch states by about 50%.
- The time it takes to switch states must be fast, in the 2-3 nanosecond range.
- The switching must be reliable, down to 1e-9 write error rate.
- The switching voltage distribution must be in a tight range for consistent operation.
- The fabrication process must be possible on the advanced process nodes used in microprocessors, currently in 5 or 7nm.
- The current required to switch states must be low, about ½ what is presently possible.
Memory technology changes have slowed dramatically over the decades. Core memory was invented in 1964. Then DRAM was invented by Bob Dennard of IBM in 1966. SRAM was invented in 1969, and the first Intel DRAM chip shipped in 1970. NAND flash memory was developed in 1980. However, since these remarkable inventions, changes over the last four decades have been primarily enabled through VLSI manufacturing advancements, not fundamental shifts in the physics of a memory cell.
With STT-MRAM, we are finally looking at an entirely new implementation of a one and a zero. Faster, cheaper, denser, and durable non-volatility combined in a single memory design. STT-MRAM will not replace everything, at least not anytime soon. Level 1 and level 2 cache will remain implemented in SRAM, at least for now. And NAND flash memory will remain the king of the NVM hill for low-cost and high-density. But STT-MRAM may soon challenge existing memory devices in Level 3-4 caches and embedded flash. DRAM could also be built with STT-MRAM where non-volatility is required.
It has been a long journey for many at IBM Research, but success is finally in sight. | <urn:uuid:a7cb087c-d325-4694-8603-23e7c23132ef> | CC-MAIN-2022-40 | https://cambrian-ai.com/ibm-nears-breakthrough-in-new-memory-class/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00172.warc.gz | en | 0.93449 | 1,219 | 3.53125 | 4 |
Leadership is the “exercising of influence over others on behalf of the leader’s purposes, aims or goals.” The debate about leaders “being made” or “being born” is quite old, and both sides have equally strong points in their defense. But plain logic makes us infer that some people are born with certain natural traits that help them become better leaders.
This, however, doesn’t mean that every person with such traits becomes a great leader. It means that when such people are groomed properly, allowed to harness their inherent capabilities and hone their skills, they are highly likely to become good, even exceptional, leaders.
A true leader needs to be self-aware
Self awareness is, by far, the most important trait that potential leaders can have. It allows us to train ourselves to attain true leadership qualities and skills. In many ways, each person is her own best judge.
To be able to learn new skills and progress in life, the primary requirement is the belief that you are deficient in knowledge and skills, and there is scope to learn and acquire more.
“You cannot improve what you cannot manage, and you cannot manage what you are blind to in your personal habits and behavior.”
– Tim Kight, organizational development expert
What ‘leadership’ means to most leaders
Most “leaders” aren’t really aware of what “leadership” entails. Modern-day corporate hierarchy may put managers in a “theoretical leadership position,” but most managers fail to realize that leadership is more than budgeting and scheduling.
Leadership goes beyond everyday organizational activities to far more important things like negotiating with people and “keeping the wheels moving.” Leadership is the art (and science) of inspiring and motivating people to achieve a common goal. Because a leader needs to delegate responsibilities to other people in accordance with their unique talents and skills, it is imperative for a true leader to be self-aware.
A leader is expected to find her way out of challenging situations and find solutions to problems, however difficult they may seem. Coaching and mentoring sessions by experts can go a long way in honing leadership qualities and getting the required skills. But simply undergoing a couple of “leadership coaching” sessions does not make one a good leader. Leadership training is a continuous process. Skills are like an axe. To keep them functional, they need to be sharpened time and again.
Why continual learning matters
Continual learning keeps leaders abreast of the happenings in their chosen fields. It also helps them acquire new insights in the arena of human psychology, a field they need to be well-informed about. Books written by established and reputed leaders also provide tremendous insight.
Learning can take place through fun activities as well. For example, gamification is a great way to learn. It has been proven that the inherent learning it provides makes the game exciting and appealing. Whenever we play a game, reach higher levels and try to complete them successfully, we are essentially learning to face problems and find solutions that overcome them.
Playing simple arcade-like games, like Hexa Dots, can make for a fun, relaxing and engaging way to learn about overcoming challenges and finding solutions, the two most important traits in effective leaders. In this game, players have to move four dots of the same color into one line to eliminate them. However, new dots appear while the player is in the process of moving the dots, making it harder to put four dots in a line. This kind of activity encourages lateral thinking and develops the ability to look at a problem from multiple perspectives to find the best possible solution.
From studies, it is proven being proficient in math can be an advantage for a leader because math skills can help build an analytical mind. Somehow, arithmetic has earned a reputation for being a scary subject. This may have something to do with the way it is taught in schools. However, online tools like Catchup Math can help you brush up on math fundamentals that you may have learned in school but had forgotten over the years. Catchup Math is a platform that uses active and cooperative learning methodologies, with instructional videos, lessons, hands-on activities and practice problems.
There is absolutely no dearth of online tools to sharpen your leadership axe — and these tools help leaders attain the next logical step after self awareness: self improvement. And self improvement is the underlying foundation of being a good leader.
Leadership qualities may be inherited or attained. But what is certain is that leadership qualities can certainly be enhanced and the required skills can be learned and honed. The ambition and the drive to better oneself is a prerequisite to becoming a good leader. A true leader is empathetic to her followers’ or team’s aspirations and shortcomings, and she uses their talents to guide them collectively toward achieving a higher goal.
To be able to improve, leaders should be ready to accept honest feedback. They should adopt continuous learning as a precondition to inspire their teams to achieve greatness. | <urn:uuid:9f373fec-aa17-43f7-a513-3359287d477b> | CC-MAIN-2022-40 | https://www.cio.com/article/236786/continuous-learning-tools-help-leaders-stay-ahead-of-the-curve.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00172.warc.gz | en | 0.963813 | 1,051 | 2.890625 | 3 |
Subnet Mask Cheat SheetRecords Cheat SheetGeoDNS ExplainedFree Network TroubleshooterKnowledge BasePricing CalculatorLive CDN PerformanceVideo Demos
BlogsNewsPress ReleasesIT NewsTutorials
Give us your email and we'll send you the good stuff.
Heather Oliver is a Technical Writer for Constellix and DNS Made Easy, subsidiaries of Tiggee LLC. She’s fascinated by technology and loves adding a little spark to complex topics. Want to connect? Find her on LinkedIn.
If you’re looking for something that can help automate and centralize the management of IP address distribution, DHCP may be for you. In this resource, you’ll learn all about this protocol, including how it works and the pros and cons of using it.
DHCP stands for Dynamic Host Configuration Protocol. This protocol dynamically assigns or leases unique IP addresses for devices (clients) on a network. DHCP also allocates the subnet masks and default gateway addresses for a network. In many cases, routers function as a DHCP server, but they could also be a computer. Before DHCP, network administrators often relied on software like Excel to manage large volumes of IP addresses. On top of that, they all had to be assigned and tracked manually. You can imagine how fun that is! Aside from the headache of such a task, manual assignment increases the margin of error. With DHCP implemented into a network, IP addresses are automatically assigned.
Before we go any further, there are a few terms you should know:
DHCPDiscover: The packet sent from a client or device when connecting to a DHCP network.
DHCPOffer: A DHCP Offer includes predefined rules and settings, as well as an assigned IP address.
DHCPRequest: This refers to when a client asks for permission to use an IP.
DHCP Reservation: A predefined range of IP addresses for a network.
DHCPACK: An Ack (acknowledgment) is confirmation that a device can receive an IP and connect to a network.
DHCPNACK: A Nack (negative acknowledgment) is when a device is denied an IP.
Now that all the terminology is out of the way, let’s get into how DHCP works. This protocol gives you full control over the usable amount of IP addresses in your network. The way this works is by assigning IP ranges. Think of this like a block of rooms reserved at a hotel for an event. You’re just reserving a block of IPs for devices (guests) that connect to your network (hotel). The number of IPs available will depend on your network’s router.
Each time a device connects to a DHCP-enabled network, it sends a DHCPDiscover packet to the server. This is the device’s way into a network. After receiving the signal, the server returns a DHCPOffer. Once the client receives the offer, it sends back an official request to connect to the IP. The DHCP server then sends an ACK or “signs off” on the request, and the client is now connected to the network. If the requesting device doesn’t meet the criteria for the network, the DHCP server will return a NACK.
Here are some common devices that connect to DHCP networks.
Dynamic IP addresses are IPs that periodically change. The DHCP server retrieves IPs from a block of addresses set up by a network administrator. This works the same for home networks, except an internet service provider (ISP) supplies the IPs. In a business setting, it’s not uncommon for a device to receive a new IP address every time it connects to a DHCP network.
While dynamic assignment of IP addresses is ideal for many devices on a DHCP network, some IPs are better off static. Static IP addresses are manually assigned IPs that do not change. Printers or remote file servers are good examples. If these types of clients have dynamic IP addresses, each connected device would require setting updates every time the IP for the printer or server changed. Not everyone understands how DHCP configurations work, and it may not be possible for IT administrators to be present at all locations. Because of this, it’s a good idea to use static IP or DHCP reservations for certain devices.
If you want a device to keep the same IP address or need to set up port forwarding, you can use DHCP reservations. These reservations are pre-set IP addresses for specific clients. They work the same as regular DHCP IP assignments but are considered “permanent” leases. Using reservations is a huge timesaver, especially if a device gets frequent firmware updates. However, some routers require a device’s MAC address to make a reservation.
We’ve covered most of the pros of DHCP already, but here’s a list of them for quick reference.
Now, here is a list of DHCP disadvantages:
Dynamic Host Configuration Protocol (DHCP) is a network protocol that automatically assigns IP addresses and any corresponding information to each host on a network. This allows endpoints to communicate more efficiently and simplifies IP management. DHCP also defines related configuration variables and allocates the subnet masks and default gateway addresses for a network. Because IP addresses using DHCP are generated automatically rather than manually, networks are less prone to experiencing errors.
If you found this useful, why not share it? If there’s a topic you’d like to know more about, reach out and let me know. I can never talk about DNS enough!
If you liked this, you might find these helpful:
Sign up for news and offers from Constellix and DNS Made Easy | <urn:uuid:16d90f48-35a6-430f-8438-0ad1cf6e2e3f> | CC-MAIN-2022-40 | https://constellix.com/news/what-is-dhcp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00172.warc.gz | en | 0.915676 | 1,209 | 2.59375 | 3 |
The Data Protection Act 1998 was enacted to bring British law into line with the EU data protection directive of 1995. Since then, the rate at which technology has advanced has been astronomical, resulting in a surge of innovative ways in which businesses can commercially exploit personal data.
What’s more, the world has become increasingly interconnected, the nature of data exchanges has become more globalised and the legislative approach across EU member states is widely acknowledged as being disjointed.
In response to these changes, and the consequent focus on the importance of protecting personal data, the European Commission has published proposals for the reform and harmonisation of EU data protection law. The Regulation, a supposedly single comprehensive legal framework governing data protection, is expected to overhaul and replace existing data protection legislation.
The Regulation’s objectives remain the same; protecting individuals with regard to the processing of personal data and enabling the free movement of personal data between member states via secure means. However, the effect of the new Regulation will bring significant change to how businesses deal with personal data in practice.
Guidance suggests that the Regulation will swing data protection law in favour of the individual, to ensure their personal data is adequately protected. Any individual data captured by a business will most likely be considered ‘personal data’ and such businesses will therefore need to comply with the Regulation. With the introduction of the Regulation expected over the next year or two, now is the time to consider what steps must be taken to proactively address data protection risks.
How does the Regulation apply to member states in the EU?
Although some, including the UK government, believe reform would be better delivered as a directive, primarily to afford member states some more flexibility and discretion in its implementation, the Regulation would be directly binding on all member states immediately. The Regulation will be self-executing and will not require any implementation measures, meaning there is no two year implementation phase after the date on which it comes into force.
So what’s new?
• Non-EU Companies which offer goods/services to individuals in the EU and/or monitor their behaviour must comply with the Regulation.
• Companies cannot work on the basis of implied consent in certain circumstances. All consent must be explicit, for example by obtaining consent via opt-in tick boxes on websites.
• The extent to which data controllers must collect and process data will be limited to the ‘minimum necessary’ (rather than ‘not excessive’). This is a more robust data minimisation principle.
• Individuals can request that the data controller erase all personal data relating to them (i.e. ‘the right to be forgotten’) and to abstain from further dissemination of that data.
• Data processors are now specifically included within the scope of the Regulation, meaning data subjects have enhanced protection where their data is processed by a party other than the data controller.
• Companies may be fined up to 1m Euros or up to 2 per cent of global turnover for data protection breaches, a significant increase on the maximum fine the ICO can currently impose (£500,000).
• One set of rules will apply across the EU, meaning businesses will not need to deal with member states’ varying rules.
Top tips for compliance
• Conduct regular data protection audits and risk assessments
• Maintain and adhere to a remediation and security plan and appropriate controls and training
• Ensure you have clear internal data protection policies
• For Privacy Policies/Notices:
- Use plain English
- Use language appropriate to the audience
- Transparency about the purpose of collecting data
- Make available before providing goods or services
• Enter into, and vary existing, written agreements with third parties to whom you pass personal data that you control and ensure such agreements are compliant with the Regulation
• Collect and process the minimum data necessary
• Properly inform your users about what will happen to their personal data
• If applicable, identify yourself as a data controller, e.g. provide your email/website address
• Allow users to easily review and change their decisions once you have begun providing goods and/or services
• Remember: Failure to comply with the Regulation comes at a price!
• Take advice
The expected date of the introduction of the Regulation is 2016/2017. Businesses therefore need to start considering, and preparing for, the impending changes to ensure it is data protection compliant on a practical level moving forward.
A failure to do so can lead not only to significant fines, but also damage to business reputation. Implementing new procedures and reviewing those which already exist to ensure compliance are, compared with the enormous costs that may be incurred for non-compliance, relatively small.
Don’t be caught out by the Regulation, start making the necessary changes prior to its introduction.
David McGuire --Outsourcing, Technology and Commercial Team at Wright Hassall LLP (opens in new tab) | <urn:uuid:bd35ff1d-4c81-4e75-8f64-9b1561b2ede0> | CC-MAIN-2022-40 | https://www.itproportal.com/2015/07/19/regulation-changes-to-data-protection-the-importance-of-protecting-personal-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00172.warc.gz | en | 0.93205 | 1,005 | 2.84375 | 3 |
Software-defined networking is giving agencies looking to upgrade their enterprises opportunities to program and virtualize complex networks.
This article was changed March 18 to correct the spelling of Mike Haugh's name.
Science-minded government agencies and universities are exploring software-defined networking (SDN), an emerging field of technology research that seeks to liberate networking from its traditional hardware orientation.
SDN could dramatically change the way governments deploy communications systems, say computer science researchers, who see networks lagging behind servers and storage when it comes to management control and other benefits of virtualization.
Now agencies looking to upgrade their networks may have real opportunities to do so, given recent developments in using SDN to program and virtualize enterprise networks, experts say.
“SDN is exciting because it provides an architectural path forward, cutting through the complexity in networks today and providing more programmability,” noted P. Brighten Godfrey, assistant professor of computer science at the University of Illinois at Urbana-Champaign.
With SDN, software takes on networking chores normally embedded in the guts of routers and switches. In a conventional setup, the network component that determines how data will travel (called the control plane) and the part that actually transmits data (the data plane) remain locked in hardware.
An SDN, however, severs the control plane from the hardware and makes it a software function.
The upshot for IT managers? SDN’s software focus makes networks more flexible and much easier to manage, according to the technology’s advocates.
Currently, network administrators configure individual network devices to accommodate changes in network traffic patterns. SDN, in contrast, lets managers program a network’s multitude of devices from a centralized software controller.
Accordingly, the technology has the potential to reduce the time administrators spend on network management tasks. Security improvements are another possible upside, since admins could more readily flag vulnerabilities and program a network to deal with a particular threat.
In short, SDN enables new ways of building more secure and more efficient networks, said Godfrey, who is working on SDN projects with National Science Foundation backing.
But while SDN appears promising, the technology is far from mainstream. The arrival of SDN-capable networking gear is a fairly recent development. Tools for managing SDNs are also relatively scarce. And while SDN could improve security, it may also introduce risks. As a nascent technology, it has yet to be fully vetted from a security perspective.
Another question mark regarding SDN: How will agencies go about implementing the new networks? An agency early adopter could attempt to introduce SDN in one fell swoop as part of an overarching network upgrade. But an incremental approach that places SDN at the network’s edge offers another possibility and perhaps a more realistic one for budget-constrained agencies.
Getting started with SDN
The technologies underpinning SDN have been under development for a few years, but most of the deployment activity has occurred since 2011. The basic setup includes an SDN controller – a software application that sends instructions to network devices – and a protocol that lets the controller communicate with those devices. The Open Networking Foundation’s OpenFlow protocol supports a number of SDNs, but other protocols can also do the job. ONF describes OpenFlow as a vendor-neutral communications interface.
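The controller-and-protocol arrangement described above can be sketched in a few lines. The sketch below is purely illustrative — real controllers speak OpenFlow or a similar protocol to hardware, and every class and field name here is invented — but it shows the core idea: policy lives in one central controller, which pushes match/action flow rules out to every switch, instead of each device being configured by hand.

```python
# Hypothetical sketch of the SDN control model: a central controller holds
# policy and installs match/action flow entries on every switch. Names are
# invented for illustration; this is not the OpenFlow wire protocol.

class Switch:
    """Data plane: forwards packets by consulting its installed flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match_fields, action) entries

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        # First matching rule wins; unmatched traffic is dropped here
        # (a real switch would usually punt it to the controller instead).
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

class Controller:
    """Control plane: one place to program the whole network."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        for sw in self.switches:
            sw.install_flow(match, action)

switches = [Switch("s1"), Switch("s2"), Switch("s3")]
ctrl = Controller(switches)
# One call reprograms every device -- e.g., send all TCP/80 traffic out port 2.
ctrl.push_policy({"proto": "tcp", "dst_port": 80}, "output:2")

print(switches[2].forward({"proto": "tcp", "dst_port": 80}))  # output:2
print(switches[0].forward({"proto": "udp", "dst_port": 53}))  # drop
```

The point of the design is visible in the last few lines: one `push_policy` call changes behavior networkwide, which is exactly what per-device configuration cannot do.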
SDNs are starting to appear in large commercial enterprises where networking is core to the business. They are also found among cloud providers and other carriers who have plenty of incentive to get out in front of the SDN adoption curve. Among large commercial enterprises, Google deployed SDN in its B4 network, a private wide-area network that links its data centers. Japan’s NTT Communications will use SDN in a virtual network service scheduled to debut in March.
In contrast, government agencies generally find it harder to cost-justify a rapid jump into a new networking approach and are more focused on piecemeal enterprise improvements.
Nevertheless, SDN is in play across the public sector, particularly in the energy and defense spheres. Mike Haugh, senior manager of market development at Ixia, which makes test applications for SDN and other networks, said there are now more than 100 ongoing SDN trials, some of which are in the government space.
The Department of Energy is exploring SDN through the agency’s Energy Sciences Network (ESnet), a high speed network linked to 30 major DOE sites and over 100 other research and education networks. Other current public sector work with SDN tends to be focused in universities, with support from the NSF.
Researchers contend that SDN provides an opportunity for improving network manageability and security in complex, ultra-fast networks. Some of these networks are so complex, in fact, that
operators lack confidence that changes made to a network -- a small tweak to add a new tenant or an alteration to a security policy -- will have the right effect networkwide.
“It has become an inhuman task to understand what the network is really doing,” University of Illinois’s Godfrey said.
To address the difficulty of managing its networks, the University of Illinois is using a tool called VeriFlow, which confirms security levels and ensures interdependent components of a network are working properly. NSF backed the project along with the National Security Agency.
The VeriFlow software runs at the SDN controller, where it observes changes in instructions to the network. The software checks each change in real time and determines whether a change is going to have an adverse impact on the network, such as imposing a critical security flaw. If that’s the case, the change is prevented from going to the network, Godfrey said.
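The check-before-deploy idea behind VeriFlow can be sketched with a toy invariant. The code below is not VeriFlow's actual API — the function names, the next-hop data structure, and the single loop-freedom invariant are all simplifications for illustration — but it shows the pattern: simulate the forwarding behavior a proposed rule change would create, and admit the change only if the network-wide invariant still holds.

```python
# Illustrative sketch of pre-deployment verification in the VeriFlow style:
# a proposed routing change is simulated first and rejected if it would
# create a forwarding loop or a blackhole. All names are invented.

def walk_path(next_hop, start, dest):
    """Follow next-hop entries from start; return the path, or None on a
    loop or blackhole."""
    path, node = [start], start
    while node != dest:
        node = next_hop.get(node)
        if node is None or node in path:   # blackhole or forwarding loop
            return None
        path.append(node)
    return path

def apply_change(next_hop, node, new_hop, start, dest):
    """Admit the change only if the resulting path is still valid."""
    trial = dict(next_hop, **{node: new_hop})
    if walk_path(trial, start, dest) is None:
        return next_hop, False             # rejected: never hits the network
    return trial, True

routes = {"A": "B", "B": "C", "C": "D"}            # A -> B -> C -> D
routes, ok = apply_change(routes, "C", "A", "A", "D")  # would loop A-B-C-A
print(ok)                                          # False: blocked up front
routes, ok = apply_change(routes, "B", "D", "A", "D")  # shortcut A -> B -> D
print(ok)                                          # True: safe, so applied
```

The first change is caught before deployment, which is the property Godfrey describes: errors are stopped at the controller rather than discovered on the live network.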
When university researchers used VeriFlow in pilot deployments, they uncovered inconsistent security policies along with other errors and security vulnerabilities, Godfrey said.
SDN’s security upside
Eric Chiu, president and founder of HyTrust Inc., a Mountain View, Calif., company that focuses on cloud security automation, suggested that SDN can also improve cloud computing security. “[SDN] can enable great security benefits, primarily around network and endpoint security automation and making them dynamic to meet the needs of changing cloud environments,” Chiu said.
With SDN, security policies can automatically follow virtualized workloads wherever they are located, he said. And delivered as a virtualized resource, security can be spun up on the fly as the cloud scales up and then spun down when the cloud scales down. SDN takes virtualization a step further, Chiu said, “by enabling the automated deployment of networking and security services without a tremendous amount of human interaction.”
On the security front, SDN could also provide a more flexible, on-the-fly reaction to denial-of-service attacks and other security incidents, said Inder Monga, chief technologist and area lead for ESnet.
At Energy, SDN is also seen as a means to improve network utilization. One ESnet demonstration project, for example, used OpenFlow to match large data flows to the most efficient network tier. This network management technique becomes particularly important for moving enormous scientific data sets around a network. SDN, for example, can identify a large data flow and send it to a network’s optical transport layer, Monga said.
For others, SDN benefits go well beyond network management. “It’s just that you can innovate much faster,” said Deniz Gurkan, an associate professor of computer engineering technology, at the University of Houston. SDN’s programmable software allows for much quicker development than attempting the same innovation in hardware.
Gurkan’s research focuses on creating network debugging tools using SDN technologies. The tools will be built for use on current networks, not just SDN-enabled ones. The idea is to deploy SDN technology on networks as they exist today. “We cannot really overhaul the network to become SDN overnight,” she said.
SDN security trade-off
Although security is a considered a potential benefit of SDN, it carries some weaknesses too, according to researchers.
An SDN program review, hosted late last year by Energy, NSF, and the Networking and Information Technology Research and Development Program, noted that the security of SDN needed additional R&D, according to Monga.
One vulnerability stems from the SDN’s centralization of control. “SDN concentrates risk given that [it collapses] traditional, physical systems, networks and data onto a single software layer, which leads to a single point of failure and attack,” Chiu said.
“All your eggs are in one basket, so to speak.”
This is similar to what happens with virtualization and cloud infrastructure, Chiu noted. Consequently, organizations adopting SDN will need to pay special attention to securing the SDN controller, a measure that becomes critical for addressing the concentration of risk and the potential for catastrophic failure.
Chris Wright, senior principal software engineer for open software developer Red Hat, came to a similar conclusion. “If you have a logically centralized controller in your system, that becomes a point of interest for an attacker,” he said.
To bolster security, Wright said adopters need to make sure data traffic between the controller and the devices managed on a network takes place in a segment of the network not immediately accessible to an end user. The control plane traffic should remain on an administrative portion of the network, rather than traversing the same network that is providing bandwidth for applications.
“We need to be really clear on what the security threats are to this new model and just engineer around those,” Wright said.
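The segregation rule Wright describes can be expressed as a simple policy check: control-plane flows between the controller and its switches should never leave the administrative segment. The management subnet below is an invented example, not a recommendation.

```python
import ipaddress

# Hypothetical admin segment reserved for controller <-> switch traffic.
MGMT_NET = ipaddress.ip_network("10.255.0.0/24")

def offending_flows(control_flows):
    """Return control-plane flows whose endpoints leave the admin segment.

    control_flows: iterable of (src_ip, dst_ip) strings.
    """
    return [
        (src, dst)
        for src, dst in control_flows
        if not (ipaddress.ip_address(src) in MGMT_NET
                and ipaddress.ip_address(dst) in MGMT_NET)
    ]

flows = [("10.255.0.2", "10.255.0.17"),   # controller -> switch: stays on mgmt net
         ("192.168.1.50", "10.255.0.2")]  # user subnet touching the controller
print(offending_flows(flows))  # [('192.168.1.50', '10.255.0.2')]
```

An audit like this would normally run against flow records or firewall logs rather than a hard-coded list.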
Another SDN hurdle is a lack of management tools, which the program review identified as an important gap. “Vendors are focusing on building the product set, but the people who manage networks need not just the product set but a set of tools to be able to manage that,” Monga said.
SDN equivalents of such common diagnostic tools as traceroute have yet to emerge, Monga said, noting that some academic work and startup activity has taken place in the SDN tool space. But, for the most part, the tools network managers need either don’t exist or aren’t sufficiently mature to help run an operational SDN, he said.
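To make the diagnostic gap concrete, here is a toy version of the kind of tool Monga says is missing: a "traceroute" that walks per-switch flow tables instead of probing the data plane with TTL-limited packets. The flow-table data model is invented for the sketch.

```python
def trace_path(flow_tables, ingress_switch, dst):
    """Follow the next hop chosen for `dst` from switch to switch."""
    path, switch = [], ingress_switch
    while switch is not None:
        path.append(switch)
        next_hop = flow_tables.get(switch, {}).get(dst)  # None = delivered/drop
        if next_hop in path:
            raise RuntimeError(f"forwarding loop at {next_hop}")
        switch = next_hop
    return path

tables = {
    "sw1": {"10.0.0.9": "sw2"},
    "sw2": {"10.0.0.9": "sw3"},
    "sw3": {"10.0.0.9": None},  # delivered locally
}
print(trace_path(tables, "sw1", "10.0.0.9"))  # ['sw1', 'sw2', 'sw3']
```

A production tool would pull these tables from the controller's view of the network and would also have to model priorities, wildcards and group tables, which is exactly why the problem is harder than it looks here.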
Monga said the SDN program review participants discussed building a prototype to explore the architecture’s shortcomings in greater detail. The network, if it were to be built, would operate along the lines of ARPAnet, the predecessor to the Internet, to help government and university network managers gain experience with SDN and to flag security and management issues, Monga said.
A test network could also shed light on how to go about deploying SDN, a matter of some debate at the moment. “That is a question we ask every day,” University of Houston’s Gurkan said.
The deployment method would depend on the type of organization planning to adopt the architecture. An organization with a rigid networking structure and well-defined layers of racks and switches in a single data center may be in a position to pursue an all-at-once SDN deployment, Gurkan said.
An SDN transition isn’t nearly as simple for an enterprise-style network such as a campus environment, Gurkan added. A campus network involves multiple departments, each with its own IT managers and bandwidth needs. One department may need to transmit petabytes of data over high-speed links, but it would prove costly to design an entire network to support such volume.
Another deployment consideration: single vendor versus multi-vendor solutions. Today, many SDN deployments are vertically oriented, single-vendor affairs. Haugh said this situation is due to the limited number of SDN applications available.
“If you choose an OpenFlow solution from vendors like NEC, HP, Cisco and others, they offer the applications, controller and network layer switches,” Haugh said. “The other issue is that most have slightly modified OpenFlow, so they would need other vendors to work with them.” OpenFlow conformance testing will help improve interoperability and limit vendor-only features, Haugh added.
The multi-vendor OpenDaylight Project, a Linux Foundation project that aims to boost adoption of SDN, is also stepping into the SDN arena with an open platform for network programmability. The project last month announced its first open source software release, an offering dubbed Hydrogen, which includes an SDN controller and OpenFlow plugin.
SDN has yet to fully mature, which means agencies can expect further technology approaches and product offerings to emerge, say networking researchers. And while the technology may yet become the standard for virtual networking, the way there is far from settled. “I would say that this area is still innovating in leaps and bounds,” Monga said. “It is very exciting, but it makes it harder to give an accurate prediction on how this market is going to evolve and how this is going to play out.” | <urn:uuid:ee7334c6-0436-4f55-ac4d-4f5332f79203> | CC-MAIN-2022-40 | https://gcn.com/cloud-infrastructure/2014/03/agencies-experiment-with-software-defined-networks/297192/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00172.warc.gz | en | 0.938355 | 2,733 | 2.59375 | 3 |
MIL STD 810H Humidity
MIL STD 810H Humidity Method 507.6 is a test method for evaluating products that are likely to be stored and/or operated in a warm, humid environment. MIL-STD-810, Environmental Engineering Considerations and Laboratory Tests, is a Department of Defense (DoD) standard for military and commercial applications. It is a series of laboratory test methods that replicate the effects of environments on products. These methods are meant to be tailored to the specific environmental effects expected during the life cycle of the product. This is an important consideration because there are few definable goal posts in this standard. Tailoring is required because the environmental effects likely to be encountered by equipment designed for aircraft, for example, will be quite different from those found on a vehicle.
Effects of Humidity
The effects of humidity are often overlooked in favor of more obvious environmental stressors such as temperature, shock, and vibration, but humidity produces numerous physical and chemical effects both within and on the exterior of equipment. Surface effects include oxidation, electrochemical breakdown of coatings, interaction with deposits of materials that produce corrosive films, and changes in friction coefficients. Other effects include loss of physical strength of materials, degradation of insulative properties, changes in elasticity or plasticity, and degradation of lubricants.
Humidity is an extremely complex environmental phenomenon that is intricately linked with temperature, and there are limits to what a laboratory method can reproduce and simulate. Method 507.6 comprises two procedures.
- Procedure I: Induced (Storage and Transit) and Natural Cycles
- Procedure II: Aggravated
For procedure I, induced cycles of temperature and humidity are used to simulate various storage and transit scenarios where equipment is packaged or stored in environmentally uncontrolled warehouses. The standard points out that multiple tests may be applicable for storage or transit based on the nature of those sequences and nature of packaging. Natural cycles are intended for the testing of equipment in its intended environmental conditions.
Procedure II exposes the test item to more extreme temperature and humidity levels than those found in nature, but for shorter durations. While this can be an advantage for early detection of design vulnerabilities, results may not accurately represent those found in nature.
Conditions of humidity vary considerably across the globe. MIL-HDBK-310 defines three geographical categories that are used for generation of cyclic profiles.
B1 – Constant High Humidity
This profile is representative of conditions found in heavily forested areas with little solar radiation exposure. Geographical locations typical of this profile are Congo and Amazon Basins, the jungles of Central America, Southeast Asia (including the East Indies), the north and east coasts of Australia, the east coast of Madagascar, and the Caribbean Islands.
B2 – Cyclic High Humidity
The B2 profile occurs in the same areas as B1 but is more representative of urban areas where solar radiation exposure is expected. When present in the diurnal cycle, solar radiation creates a wider variance in temperature and humidity.
B3 – Hot-Humid
This profile is found in areas near bodies of water with high surface temperatures, specifically the Persian Gulf and Red Sea. Testing for this extreme condition does not verify the unit under test’s ability to endure the rigors of B1 or B2.
Additional categories are provided for induced environments, where temperatures as high as 160 °F (66 °C) can be reached in enclosed conditions with little or no cooling air available. These induced categories are meant to replicate various transport and storage scenarios.
The effects of humidity require lengthy test durations to evaluate potential degradation. Often testing is not performed at adequate lengths to provide meaningful data. MIL STD 810H Humidity Method 507.6 durations are shown in the table below.
MIL-STD-810 states that hazardous test items will generally require longer tests than other items to achieve a desired confidence. The standard defines Hazardous test items as “those in which any unknown physical deterioration sustained during testing could ultimately result in damage to materiel or injury or death to personnel when the test item is used”. It calls for double the number of cycles for hazardous items.
For Natural Cycles, generally intended for operational testing, Method 507.6 calls for 15 to 45 twenty-four-hour cycles of testing, depending on which geographical area the equipment may be used in.
For Aggravated testing per Procedure II, ten cycles are recommended in addition to a 24-hour conditioning period. Again, the proviso for lengthening tests for hazardous items is called out, but no exact measure is indicated.
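The arithmetic behind these durations adds up quickly. The sketch below uses only the figures quoted in this article (15 to 45 natural cycles of 24 hours; 10 aggravated cycles plus a 24-hour conditioning period), and applies the "double the cycles" guidance for hazardous items as an illustration.

```python
HOURS_PER_CYCLE = 24

def natural_cycle_hours(cycles, hazardous=False):
    """Total chamber time for Natural Cycle testing per the figures above."""
    if not 15 <= cycles <= 45:
        raise ValueError("natural testing calls for 15-45 cycles")
    return cycles * HOURS_PER_CYCLE * (2 if hazardous else 1)

def aggravated_hours(cycles=10, hazardous=False):
    """Aggravated (Procedure II) time: conditioning plus test cycles."""
    conditioning = 24
    return conditioning + cycles * HOURS_PER_CYCLE * (2 if hazardous else 1)

print(natural_cycle_hours(15))        # 360 hours (15 days)
print(natural_cycle_hours(45, True))  # 2160 hours (90 days)
print(aggravated_hours())             # 264 hours (11 days)
```

Even the shortest natural-cycle program ties up a chamber for over two weeks, which is why test length is such a common point of compromise.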
For humidity testing there are often more questions than answers. Today’s defense and commercial equipment may be used anywhere in the world. Given that time and money are major concerns for most product developers, it is unlikely that resources will be available to test every climatic category across transit, storage, and operational profiles. While Aggravated testing is tempting because of its shortened test length, it may not provide realistic findings. Unless product specifications spell out exact testing requirements, difficult decisions must be made.
CVG Strategy’s test and evaluation experts have decades of experience in environmental (climatic and dynamic) testing as well as EMI/EMC. We offer a wide variety of services including: EZ-Test Plan Templates, Test Program Management, Test Program Witnessing, and Product Evaluation. We also provide a two day seminar/webinar “Understanding MIL-STD-810” to help your product development team garner the most from their test and evaluation programs. Contact Us today to see how we can help. | <urn:uuid:01b3aaf4-72da-4b1a-a909-d826ef7c043e> | CC-MAIN-2022-40 | https://cvgstrategy.com/mil-std-810h-humidity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00172.warc.gz | en | 0.924424 | 1,194 | 2.671875 | 3 |
Our cell phones are some of our most important electronic devices because we use them daily to check our messages, emails, financial accounts, and stay connected to our family and friends. A mobile phone usually contains a high volume of personal data and information ranging from photos and videos to access to our banks and financial information. It’s important you take mobile device security precautions to protect this information.
Most accounts with enhanced security often use a two-factor authentication system, meaning they call or text you a security code that is then used to verify your identity. Because of this, it is very important to take certain mobile device security precautions to prevent compromising your accounts.
Four Simple Mobile Device Security Precautions
- Never leave your device unsecured. To ensure that your device is secure at all times, you should never leave it in a place where it is vulnerable to theft or use without your authorization. This includes sharing your device with others.
- Set a passcode or fingerprint for use. You should also make sure you set a unique pass code to use the device. Most smartphones are equipped with fingerprint recognition software that verifies that you are the only user. You should enable this feature and use your fingerprint to log into certain applications like banks and money transfer apps.
- Be careful about downloads and links. Be careful about visiting unsecured or potentially vulnerable sites as you can acquire certain malware that can be used to steal your personal information or identity. You should also be careful about clicking on links from non-trusted sites.
- Enable or download applications for lost phones. If your phone is lost or stolen, there are several applications that will use your phone’s GPS to help locate it. These applications can also shut the phone down so that it cannot be accessed until it is reset by you.
Follow these easy and helpful tips to improve your mobile device security. | <urn:uuid:4561ea3d-187a-4fe6-aab0-8cd942731266> | CC-MAIN-2022-40 | https://www.ccsipro.com/blog/simple-things-to-enhance-your-mobile-device-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00172.warc.gz | en | 0.928802 | 381 | 2.703125 | 3 |
It’s a simple form of digital security you’ve been using for as long as you’ve had your bank account: withdrawing cash with your access card and a PIN code is a perfect example of multi-factor authentication. You are using both a possession factor (your physical card) and a knowledge factor (your PIN).
Multi-factor authentication (MFA) is an added security feature that is simple to implement, which assures its growing popularity. MFA requires two or more independent credentials to verify you are who you claim to be. The three most common factor types are what the user knows (a password), what the user has (a security token) and what the user is (biometric verification).
Additional MFA verification credentials can include a time factor and a location factor.
The knowledge factor is a password, hopefully a unique combination of letters, numbers and symbols. With increased CPU speeds, brute-force attacks and password-cracking software are growing more intelligent and faster. If a password that has been reused across multiple accounts is exposed, then without MFA malicious users will have access to your sensitive data and may even gain access to the entire network.
A popular MFA verification method is to generate a security token or one-time use password (OTP) using your phone or email address. The advantage of using possession factor identification is that the browser or app will often remember your device or IP address. This means you only have to perform an MFA when logging in from a new phone or computer.
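One common way such one-time codes are generated on the client side is the time-based one-time password (TOTP) scheme of RFC 6238. SMS-delivered codes work differently, so treat this as a minimal standard-library sketch of one OTP flavor rather than a description of any particular app.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238-style time-based one-time password (sketch)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)          # 64-bit big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32):
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59))  # 287082
```

Because the code depends only on the shared secret and the current 30-second window, the server can recompute and compare it without any message ever carrying the secret itself.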
Another form of possession factor is having physical token access such as key cards or key fobs.
The inherence, or biometric, verification method requires a fingerprint scan, retina scan, iris scan, facial recognition or voice recognition.
Current time can be considered a fourth factor for MFA. If you log in from Canada and two hours later someone attempts to log in from Russia, that will raise a red flag and lock out the malicious user. Some MFAs only allow logins during a specific time frame, say, during an 8-hour shift.
With smartphones attached to our hips, location factors are a great fourth or even fifth option for MFA. Allow the chosen app or browser to use your GPS, and if a login attempt happens in a location halfway across the globe, it will trigger a lockout.
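The time and location factors combine naturally into an "impossible travel" check: if two logins are farther apart than any plane could carry you in the elapsed time, lock the account. A minimal sketch, with an assumed 900 km/h airliner speed cap:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def impossible_travel(prev, curr, max_kmh=900):
    """Flag a login pair if covering the distance would outrun a jet."""
    (lat1, lon1, t1), (lat2, lon2, t2) = prev, curr  # t in seconds
    hours = max((t2 - t1) / 3600, 1e-9)
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Toronto login, then a "login" from Moscow two hours later:
toronto = (43.65, -79.38, 0)
moscow = (55.76, 37.62, 2 * 3600)
print(impossible_travel(toronto, moscow))  # True -- lock the account
```

Real systems also account for VPN exits and GPS error, so a flag like this usually triggers a step-up challenge rather than an outright lockout.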
Oftentimes you have to grant access for MFA to be activated while using an app; you can typically find this access in the application settings. If you are using an app that doesn’t support the technology, we highly suggest using a third-party MFA, or not using the app altogether. Your identity is too important to risk, and we’d consider reaching out to the developers to ask for an MFA update.
Multi-factor authentication is an excellent way to add an extra security layer to your system. It will prevent malicious activity on your network, keeping your sensitive data safe and secure. We suggest you enable multi-factor authentication everywhere it can be used, and if you don’t have the capabilities, perhaps it’s time to invest in a little extra digital security.
If you want to learn more about Multi-factor authentication or how to implement it onto your devices and networks AlphaKOR can help! We have experts available to answer any of your or your employee’s questions so you can take the next step to assure your company remains digitally secure. We offer knowledge resources, security implementation, and employee training at your convenience.
We also offer tips and tricks to take your security knowledge to the next level with our Free eBook “10 things your IT technician wants you to know”. You can never be too safe in this digital era, and by being proactive with your digital security you can prevent emergency situations that would cost your company time and money. | <urn:uuid:a2cc4056-042f-4295-afaa-5850ae23f119> | CC-MAIN-2022-40 | https://www.alphakor.com/blogs/it-services/our-top-cybersecurity-tip-enable-multi-factor-authentication-use/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00172.warc.gz | en | 0.935104 | 826 | 2.734375 | 3 |
The Internet of Things is exploding, and it’s not hard to explain why it’s happening now. The sensors, networking chips and other technology required to connect to the Internet devices ranging from light bulbs to smartwatches to industrial equipment have all become inexpensive.
These connected “things” send and receive data through the network relating to a variety of physical characteristics – temperature, moisture level, pulse rate, light level, velocity or revolutions per minute – as well as more complex data such as maintenance requirements, sounds, and static or moving images.
Most analysts agree that the Internet of Things will be huge. Two-thirds of consumers expect to buy connected technology for their homes by 2019, according to Acquity Group (a part of Accenture); nearly half expect to buy wearable technology. Gartner predicts that the total number of connected consumer, business and industrial “things” will grow to 26 billion units by 2020, representing an almost 30-fold increase over the 900 million things in 2009. (Gartner also says the Internet of Things has reached its hype peak.)
To be of any practical use, things collecting and transmitting data have to be connected to what Jeffrey Hammond, an analyst at Forrester, calls a system of automation. Such a software system intelligently manages the things and the networks they use, organizes and stores the vast amounts of data they generate, and processes it before finally presenting it to end users in a useful way.
Building Internet of Things Apps Begs Important Questions
This begs some important questions for developers. What’s the best way to build an “Internet of Things application” that could do anything from control home appliances remotely, to inform an aero-engine manufacturer that one of its engines somewhere in the world needs servicing, to gather meteorological data from sensors to produce a weather forecast? What skills are needed to do so? Where do you even begin?
[ Analysis: What the Internet of Things Will Mean for CIOs ]
The starting point for Internet of Things applications are the things themselves. These edge devices typically have no screen (although that’s not always the case), a low-power processor, some sort of embedded operating system and a way of communicating (usually wirelessly) using one or more communication protocols. The things may connect directly to the Internet, to neighboring things or to an Internet gateway device – typically a plastic box with blinking lights.
The next tier of the system, an ingestion tier, is software and infrastructure that runs in a corporate data center or in the cloud and receives and organizes the streams of data coming from the things. Software running in the ingestion tier is usually also responsible for managing things and updating their firmware when necessary.
After this comes the analytics tier; this takes the organized data and processes it. Finally, there’s the end-user tier, the application that the end user actually sees and interacts with. This may be an enterprise application, a Web app or, perhaps, a mobile app.
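A toy end-to-end pass through these tiers makes the division of labor concrete: readings from an edge "thing" land in an ingestion tier (a tiny time-series store), an analytics tier summarizes them, and the end-user tier formats the summary. All names and values are invented for the illustration.

```python
from collections import defaultdict
from statistics import mean

class IngestionTier:
    """Minimal stand-in for the time-series archiving an IoT platform does."""
    def __init__(self):
        self.series = defaultdict(list)  # device_id -> [(timestamp, value)]

    def ingest(self, device_id, timestamp, value):
        self.series[device_id].append((timestamp, value))

def analyze(store, device_id):
    """Analytics tier: reduce raw readings to a summary."""
    values = [v for _, v in store.series[device_id]]
    return {"count": len(values), "mean": round(mean(values), 2)}

def dashboard(store, device_id):
    """End-user tier: present the summary in a readable form."""
    summary = analyze(store, device_id)
    return f"{device_id}: {summary['count']} readings, avg {summary['mean']}"

store = IngestionTier()
for ts, moisture in enumerate([31.0, 30.5, 29.8]):
    store.ingest("soil-probe-7", ts, moisture)
print(dashboard(store, "soil-probe-7"))  # soil-probe-7: 3 readings, avg 30.43
```

In a real deployment, each tier is a separate service: the store would be a time-series database, and the dashboard would call it through the platform's API rather than a Python function.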
If you’re looking to build an Internet of Things application, the last two tiers are the ones you’re most likely to have to work on, according to Frank Gillett, a principal analyst at Forrester. “As a developer, you’re unlikely to have the tools for dealing with the edge devices or gateways, or capabilities suitable for the ingestion tier anyway.”
That’s why it usually makes more sense to build an application on top of a ready-made “Internet of Things platform,” Gillett adds. These platforms usually include an ingestion tier that carries out time-series archiving for incoming data, as well as an analytics tier, thin provisioning, activation and management capabilities, a real-time message bus, and an API to allow communication between the platform and applications built on top of it.
[ More: 10 Hot Internet of Things Startups ]
A large number of new companies offer these sorts of platforms. They include Xively, Mnubo, Bug Labs and ThingWorx, and they have the capability to communicate with a range of “things” produced by a large number of manufacturers.
More established companies such as Microsoft, with its Intelligent Systems Service, and enterprise software vendors likes SAP, with its Internet of Things Solutions, are also adding Internet of Things capabilities to their offerings.
“We are likely to see some of these companies acquired by the likes of Oracle and other enterprise software vendors in the future,” says Gillett, “but I think that many of these specialized (Internet of Things) platforms will endure for particular industry use cases.”
Building IoT Platform From Scratch ‘Considerable Amount of Work’
California-based OnFarm used ThingWorx’s cloud-based Internet of Things platform to develop its Web-based farm information application. This collects data from a variety of connected things, such as soil moisture sensors, and integrates it with data from other sources, such as weather information providers. It then presents the information on a customizable dashboard to its farmer customers.
OnFarm CEO Lance Donny briefly considered hiring developers to build an Internet of Things platform from scratch, but the idea was quickly rejected. “That would have been a considerable amount of work. Building our own back end would have slowed us by about one or two years,” he says. “We would be significantly behind if we had done that.”
[ Also: An Internet of Things Prediction for 2025 – With Caveats ]
By using ThingWorx to manage all the data ingestion, he says the amount of programming work was largely reduced to creating the Web dashboard that connects to the data through ThingWorx’s APIs.
OnFarm currently takes readings from more than 5,000 “things” for its customers, taking in more than 7 million pieces of data per month. This figure grows at a rate of 30 percent annually, Donny says.
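That 30 percent annual growth compounds quickly. A back-of-the-envelope projection (illustrative only, assuming the quoted rate holds):

```python
def projected_monthly_readings(start=7_000_000, annual_growth=0.30, years=5):
    """Compound the current monthly data volume forward by `years`."""
    return round(start * (1 + annual_growth) ** years)

# Five years at 30% growth takes 7M readings/month to roughly 26M/month.
print(projected_monthly_readings())
```

That near-quadrupling in five years is the kind of scaling pressure the next paragraph refers to.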
Another advantage of the pre-built platform, he adds, is that its scalability has already been proven. This matters, as Internet of Things applications are relatively new. If the Internet of Things is to succeed as many people expect, then applications vendors such as OnFarm may be required to scale their offerings very rapidly in the coming years. | <urn:uuid:a47002a4-1a5f-454c-bc62-ae684cb46d0e> | CC-MAIN-2022-40 | https://www.cio.com/article/250757/how-to-develop-applications-for-the-internet-of-things.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00373.warc.gz | en | 0.947698 | 1,490 | 2.953125 | 3 |
Chances are, you haven’t thought much about the maritime industry recently. Everyone has a passing awareness that a lot of the goods they use are shipped from around the world, but you really don’t often see the full scope of the maritime industry firsthand.
The fact is, if the maritime industry suddenly disappeared without a trace, the economic, social, and political impacts would be devastating. Billions of tons of vital products like food, medicine and oil are shipped around the world every year, and if these goods stopped flowing, billions of people would suffer the consequences. We saw a taste of this devastation early this year, when a ship lodged itself in the Suez Canal, blocking other ships from getting through. The incident cost the world nearly $10 billion in trade each day it was stuck.
This is only a fraction of the damage that could be caused by cyber attacks in the maritime industry.
There are various vectors hackers can attack, which could result in taking full control of a vessel or fleet, damaging critical systems on board, or simply planting ransomware or a malicious virus to take control. In one recent case, the Colonial Pipeline attack, hackers took control of a pipeline and essentially held it hostage until they were transferred the amount of money they demanded. In the end, faced with no other option, the pipeline company paid $4.4 million in ransom to the foreign hackers, according to the Colonial Pipeline CEO.
The hackers then reopened the pipeline, but the damage had already been done. The Colonial pipeline transferred huge amounts of oil across the country, and the shutdown caused massive shortages and panic buying. Gas prices went up across the country as a result of just a few hackers managing to exploit a vulnerability in the pipeline’s system. It’s easy to see from this one incident, how cyber attacks can affect much more than your personal computer.
Now, it is evident that the greatest cyber threat lies in the maritime industry. The COVID-19 pandemic sped up the already occurring digitization of the world, as a result of guidelines that required people to work from home over the internet. As such, the maritime industry also had to rely more heavily on the internet than ever before. You may not think of vessels and fleets as deeply connected with technology, but vessels are constantly connected to the internet.
Here’s where the real problem lies: the computers on these vessels often run incredibly complicated and outdated systems, which makes them much harder to protect from cyber attacks. The systems that these ships use are so complexly intertwined that there are many blind spots that are virtually unknowable.
Since the maritime industry is shifting into the digital age, and since the pandemic has forced it to rely even more heavily on the internet, there have verifiably been more cyber attacks on vessels recently. In only the first few months of the pandemic alone, attempted cyber attacks on maritime vessels shot up by 400%. This dramatic increase has truly sent a shockwave through the maritime community. The industry is one of the oldest industries in the world, and so it was surprising to some, how much they could be affected by just a few hackers.
Imagine if a hacker took control of a ship that was carrying something truly vital, like COVID vaccines. At this point, the internet is so deeply integrated with maritime systems that it would be impossible to switch to a manual system, so hackers would have full control. The hacker could shut down the ship for as long as they wanted, and as in the case of the Colonial pipeline, there would be nothing the owner of the vessel could do but give them whatever they were asking for. Significant delays could cause millions, even billions of dollars in economic damage, and have even greater social and political effects.
Imagine if a hacker with malevolent intent took control of an oil tanker, containing millions of gallons of flammable liquid, and decided to do something terrible with it? We’ve seen oil spills before, but LNG tankers are so dangerous that even a small amount of damage could cause an explosion on the scale of a nuclear bomb. So what can we do?
The first thing is being aware of the potential for destruction and the likelihood in which these types of events may occur.
The second thing is taking action and conducting assessments and cross checks to make sure the vessel and fleet are not exposed to cyber threats. The maritime industry now has to become more proactive in order to make sure its operation is not interrupted in any way and that no hacker is taking advantage of its shipping lines.
It is a relief that countries are now taking these threats seriously, as with the new executive order from President Biden aimed at preventing and protecting against cyber threats. The directive requires pipeline companies to report any cyber incidents to federal authorities, which will hopefully further educate the people in power as to the massive scale of this threat.
We can also research and invest in greater cybersecurity measures made specifically for the maritime industry. There are already some cybersecurity products that have been adapted to work on vessels, but the types of systems used in the maritime industry merit their own solutions. Some comprehensive solutions have cropped up recently, like Cydome, but quality cybersecurity systems are few and far between.
The most frightening thing in the world is the unknown, and the scale on which cyber attacks could affect the world is still very much unknown. All we know is that there is a significant danger that has yet to be addressed, and we should address this problem sooner rather than later. | <urn:uuid:d2550eee-748a-4f84-9e8a-553c5725be61> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/maritime-cyber-attacks-are-among-the-greatest-unknown-threats-to-the-global-economy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00373.warc.gz | en | 0.975749 | 1,133 | 2.890625 | 3 |
Even though it’s the holiday season, when we talk about packet loss in the world of voice over IP, we are not talking about misplaced shipments from your local UPS driver! Packet Loss refers to the data within your VOIP system that fails to reach its connection point.
Each time you send data across your computer network, small chunks of information called packets travel to their destinations. When talking about voice over IP, packet loss is the failure of that data to make a connection. Most packet losses last a short amount of time (1-3 seconds) and can go undetected in many systems, but for the user they create recurring gaps and jitter.
If your VOIP system is experiencing packet loss, it is important to pinpoint the source of the problem and overcome it. In general, there are two types of packet loss when it comes to VOIP: Received Packet Loss and Receipt Packet Discard. Let’s take a few minutes to discuss each one.
In Received Packet Loss, the data has been dropped somewhere in the network; it has been thrown to the wayside, in a way, while the system continued to move forward. This can be a result of:
• Malfunctioning links
• The system is overloaded
• Connecting paths fail
• Problems with MSU or DSP
With Receipt Packet Discard, the data has failed to reach its destination on time. This is a case of too little, too late, and it is the most frequent type of packet loss. This can be a result of:
• Malfunctioning QoS
• Packets that are occurring in the wrong sequence
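The two failure modes can be told apart from RTP-style sequence numbers and arrival times: a packet that never arrives is Received Packet Loss, while one that arrives after the playout deadline is a Receipt Packet Discard. The 150 ms deadline below is an invented illustrative figure, not a value from any standard.

```python
PLAYOUT_DEADLINE_MS = 150  # hypothetical jitter-buffer deadline

def classify(sent_seqs, arrivals):
    """arrivals: {seq: latency_ms}. Returns (received, lost, discarded)."""
    received, lost, discarded = [], [], []
    for seq in sent_seqs:
        if seq not in arrivals:
            lost.append(seq)                      # receive packet loss
        elif arrivals[seq] > PLAYOUT_DEADLINE_MS:
            discarded.append(seq)                 # receipt packet discard
        else:
            received.append(seq)
    return received, lost, discarded

ok, lost, late = classify(range(5), {0: 20, 1: 25, 3: 400, 4: 30})
print(f"loss rate: {(len(lost) + len(late)) / 5:.0%}")  # loss rate: 40%
```

To a listener, both categories sound the same (a gap in the audio), which is why monitoring tools report them separately: the fix for network drops is different from the fix for late delivery.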
If you are experiencing packet loss in your system, please contact us. If you are not connected to our network, contact the vendor who provides your VOIP. Your administrator should be able to access their monitoring system to pinpoint the origin of the problem.
Up To: Contents
See Also: Active Checks, Host Checks, Check Scheduling, Predictive Dependency Checks
The basic workings of service checks are described here...
When Are Service Checks Performed?
Services are checked by the Nagios daemon:

- At regular intervals, as defined by the check_interval and retry_interval options in your service definitions
- On-demand, as needed for predictive service dependency checks
On-demand checks are performed as part of the predictive service dependency check logic. These checks help ensure that the dependency logic is as accurate as possible. If you don't make use of service dependencies, Nagios won't perform any on-demand service checks.
Cached Service Checks
The performance of on-demand service checks can be significantly improved by implementing the use of cached checks, which allow Nagios to forgo executing a service check if it determines a relatively recent check result will do instead. Cached checks will only provide a performance increase if you are making use of service dependencies. More information on cached checks can be found here.
Dependencies and Checks
You can define service execution dependencies that prevent Nagios from checking the status of a service depending on the state of one or more other services. More information on dependencies can be found here.
Parallelization of Service Checks
Scheduled service checks are run in parallel. When Nagios needs to run a scheduled service check, it will initiate the service check and then return to doing other work (running host checks, etc). The service check runs in a child process that was fork()ed from the main Nagios daemon. When the service check has completed, the child process will inform the main Nagios process (its parent) of the check results. The main Nagios process then handles the check results and takes appropriate action (running event handlers, sending notifications, etc.).
On-demand service checks are also run in parallel if needed. As mentioned earlier, Nagios can forgo the actual execution of an on-demand service check if it can use the cached results from a relatively recent service check.
Services that are checked can be in one of four different states:
Service State Determination
Service checks are performed by plugins, which can return a state of OK, WARNING, UNKNOWN, or CRITICAL. These plugin states directly translate to service states. For example, a plugin which returns a WARNING state will cause a service to have a WARNING state.
Services State Changes
When Nagios checks the status of services, it will be able to detect when a service changes between OK, WARNING, UNKNOWN, and CRITICAL states and take appropriate action. These state changes result in different state types (HARD or SOFT), which can trigger event handlers to be run and notifications to be sent out. Service state changes can also trigger on-demand host checks. Detecting and dealing with state changes is what Nagios is all about.
When services change state too frequently they are considered to be "flapping". Nagios can detect when services start flapping, and can suppress notifications until flapping stops and the service's state stabilizes. More information on the flap detection logic can be found here. | <urn:uuid:4be43ac8-ee1b-405c-88e4-5b6543154d9a> | CC-MAIN-2022-40 | https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/servicechecks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00373.warc.gz | en | 0.886157 | 624 | 2.515625 | 3 |
The digital transformation of manufacturing goes by many names — Industry 4.0, Smart Manufacturing, The Fourth Industrial Revolution. Cyber spies like to think of it as the Mother Lode.
The potential advancements arising from the interconnection of everything from manufacturing design to maintenance and repair to enterprise business and supply chain systems are exciting. The ripple effects are wildly disruptive — we’ll be able to produce consumer goods and build airplanes in ways we never imagined. But with the possibilities come risks. As more equipment, processes, suppliers, and people are connected online together to form the digital thread connecting everything inside factories and extending across the value chain, the cyber attack surface grows exponentially.
Bad actors — organized cybercriminals, state-sponsored hackers, and even hacktivists — see newly connected Industrial Control Systems (ICS), factories, and public utilities as a unique opportunity to steal trade secrets, carry out extortion schemes through threats to public safety, make some quick bitcoin via ransomware, or sabotage operations.
The global race for dominance in smart factories, shipyards, energy systems, and aerospace and defense has already begun. Made in China 2025, Make in India, the EU’s Factories of the Future, and Australia’s Advanced Manufacturing Growth Centre are but a few examples. The massive efforts to create digital factories and supply chains by integrating operational technology (e.g., factory equipment), and traditional IT, and then collecting related data in real-time across the extended enterprise are still nascent. Manufacturers and their suppliers are making heavy use of commercial cloud computing infrastructure and software to replace and connect outdated proprietary systems.
The stakes are high. In the U.S. alone, manufacturing still accounts for approximately 10 percent of GDP. With a trade-and-tariff war looming on the horizon, manufacturing and related industries are under immense pressure to stay ahead, reduce costs, and beat competitors in terms of delivery speed, innovation, and quality. And increasingly, they have to defend against cyberthreats that could lead to disaster.
Attackers Are Active and on the Move
Unfortunately, these threats are not theoretical. In October 2017, the US government issued a rare public warning about the targeted attacks on critical nuclear, energy, aviation, water, manufacturing, and government entities, the purpose of which was to gain access to the organizations’ networks. The activity observed appeared to be the work of groups associated with the Russian government. Other groups being monitored are connected to China, Iran, and North Korea. National Intelligence Director Dan Coats reiterated the warning in July, saying, “the warning lights are blinking red again” in reference to intelligence channels tracking these threats.
According to the 2018 Verizon Data Breach Industry Report, state-sponsored attackers caused more than half of the data breaches in manufacturing. Along with these state-sponsored attacks, the Verizon report reveals that cyberespionage was the leading motive behind these breaches.
In the new 2018 Spotlight Report on Manufacturing, Vectra reveals that attackers who evade perimeter security can easily spy, spread and steal, unhindered by insufficient internal access controls.
The manufacturing industry exhibits higher than normal rates of cyberattack-related reconnaissance and lateral movement activity. This is due to the rapid proliferation of Industrial Internet of Things (IIoT) devices, many of which were not robustly designed for security, on enterprise IT and OT networks that were traditionally air-gapped or isolated from the outside world.
The information in the spotlight report is based on observations and data from the 2018 Black Hat Edition of the Attacker Behavior Industry Report from Vectra. The report reveals attacker behaviors and trends in networks from over 250 opt-in customers in manufacturing and eight other industries.
From January-June 2018, a cyberattack-detection and threat-hunting platform from Vectra monitored network traffic and collected enriched metadata from more than 4 million devices and workloads from customer cloud, data center and enterprise environments.
The three key findings that were of most interest in the report are the frequency of external remote access, the volume of internal movement between systems, and the way data was stolen, or exfiltrated, from manufacturing networks.
How Attackers Infiltrate
The use of external remote access tools is the most common command-and-control behavior observed in manufacturing. External remote access occurs when an internal host device connects to an external server.
While external remote access is common process in manufacturing business operations, it also runs the risk of allowing attackers to infiltrate networks. Cyberattackers perform external remote access, just like in manufacturing operations, but with the intent to disrupt industrial control systems.
Sometimes attackers hijack already-established external remote access connections. For example, IIoT devices can be used as a beachhead to launch an attack. Once an attacker establishes a foothold in IIoT devices, it is difficult for network security systems to identify the backdoor compromise.
Control system owners and operators who make use of remote access technology should be asking:
- What is connected and remotely connecting to my systems?
- Do I have visibility and adequate security controls on my external and internal connections?
- How can risks and rewards with remote access be responsibly balanced?
What Are Attackers Doing Once Inside?
Manufacturing networks consist of many gateways that communicate with smart devices and machines. These gateways are connected to each other in a mesh topology that simplifies peer-to-peer communication.
Cyberattackers leverage the same self-discovery used by peer-to-peer devices to map a manufacturing network in search of critical assets to steal or damage. This type of attacker behavior is known as internal reconnaissance and lateral movement.
IIoT systems make it easy for attackers to move laterally across a manufacturing network, jumping across non-critical and critical subsystems, until they find a way to complete their exploitative missions.
Consequently, a higher-than-normal rate of malicious internal reconnaissance behaviors were detected. And an abnormally high level of lateral movement behaviors indicated that attacks are proliferating inside the network.
What Are They Getting Away With?
IIoT devices exhibit behavior in which an internal host acquires a large amount of data from one or more internal servers and subsequently sends a significant amount of data to an external system.
IIoT network architectures reflect this behavior, where multiple sensors will aggregate data at a network gateway that sends the clustered data to a cloud database for monitoring and analytics. This IIoT architecture is common within the manufacturing industry and does not normally indicate an attack.
However, sometimes these exfiltration behaviors are associated with other threat behaviors across the attack lifecycle that point to an in-progress attack. It is critical to ensure that systems are sending data to the intended and approved external systems instead of attackers who are trying to steal intellectual property and other critical assets.
What Can Manufacturers Do to Stop Attacks and Exfilatration?
Many factories connect IIoT devices to flat, unpartitioned networks that rely on communication with general computing devices and enterprise applications. These digital factories have internet-enabled production lines that support data telemetry and remote management.
In the past, manufacturers relied on customized, proprietary protocols, which made mounting an attack more difficult for cybercriminals. The conversion from proprietary protocols to standard protocols makes it easier to infiltrate networks to spy, spread and steal.
For business reasons, most manufacturers do not invest heavily in security access controls. These controls can interrupt and isolate manufacturing systems that are critical for lean production lines and digital supply-chain processes.
Consequently, network visibility and real-time monitoring of interconnected systems is essential to identify the earliest signs of attacker behaviors in the manufacturing infrastructure.
However, network-wide visibility can be a double-edged sword. Manually monitoring network devices and system administrators creates a challenge for resource-constrained organizations that cannot hire large security teams.
Numerous security analysts are needed to perform the manual analysis required in identifying attacks or unapproved behaviors in large, automated networks that have IIoT and IT/OT devices.
In the end, both cybersecurity and manufacturing are continuous exercises in optimizing operational efficiency — and in applying systems data intelligently to solve dynamic problems. Organizations have limited resources to address unlimited risks, threats and attackers. Network security must always be evaluated in terms of efficiency as well as its impact on the operational fitness of the organization.
As manufacturing supply chains grow more dispersed and complex, they introduce similar risks and management challenges. In both disciplines, artificial intelligence is essential to augment human experts as we face unprecedented challenges. In the global race for resources, technological innovation, and trade dominance, we need to develop a whole new level of visibility, control, and speed to stay ahead of attackers and competitors.
Christopher Morales is the head of security analytics at Vectra. | <urn:uuid:e393d44a-a98b-49f7-a1ac-0313865c9e48> | CC-MAIN-2022-40 | https://www.mbtmag.com/security/press-release/13245340/under-threat-of-global-cyberattacks-cybersecurity-in-manufacturing-industry-must-keep-pace-with-digital-transformation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00373.warc.gz | en | 0.933106 | 1,799 | 2.65625 | 3 |
Fight against Cybercrime : The importance of the fight
Cybercrime has been on the rise the last few years. Annually, an average of 1.5 million cyber-attacks are reported; 4000 every day and 170 every hour (“Cybercrime Statistics”, 2017). The days to come seem to be becoming gloomier. c are devising new techniques and coming up with new tools each passing day. In 2016, it was estimated that the global economy lost $450 billion to cyber-crime (“Cybercrime Statistics”, 2017).
That was an almost quadruple increment from the estimated $168 billion lost in 2015 (“Cybercrime Statistics”, 2017). It has been estimated that this figure will reach $2 trillion by 2019 (“Cybercrime Statistics”, 2017). This is barely a year and a half from today. Even with these shocking figures, individuals and businesses are failing to put up basic security mechanisms on their devices.
They are also acting recklessly on social media by giving away information that could lead to being hacked. It was said in 2014 that 47% of all American adults had some piece of private data about them stolen in data heists done by hackers in large companies (“Cybercrime Statistics”, 2017).
Justice against cyber criminals is hardly gotten in court. It has proven to be very difficult to catch and prosecute today’s cyber criminals (Grimes, 2017). They are advanced and know how to cover their trails. They have enough money to hire the best lawyers to defend them. Only one in 10,000 hackers gets caught and only one out of 100 successfully gets prosecuted in court (Grimes, 2017).
This calls for governments, organizations, and individuals to fight collaboratively against cyber-crime. The most important reason to fight cybercrime is to fight for future prosperity. Cybercrime is growing at a fast rate and it may bring down the global economy if it is given the chance to thrive. Once the estimated $2 trillion loss to cybercrime is reached in 2019, there will be a cross cutting effect to the economy that will be felt. Therefore become a fight for future prosperity.
Another importance of fighting against cybercrime is to gain the assurance of privacy in the future. Almost half of all adult US citizens have lost their privacy so far (“Cybercrime Statistics”, 2017). If the cyber criminals continue hacking big companies, not a single person will claim to have privacy. Lastly, the fight against cybercrime is important since it will assure the integrity and availability of systems in the future. There is no worse scenario that people doubting data contained in banks, stored by governments or health care centers.
Cybercrime has already threatened the integrity of such data with hackers compromising and modifying data stored by such institutions. There is an ongoing trend where most things such as paying bills, buying items and communication are being done mostly online. Cyber criminals are threatening the availability of such systems with denial of service attacks that are being supported by armies of botnets (Mazurczyk, Holt & Szczypiorski, 2016).
Importance of sharing the experience
It is important for individuals and organizations that have successfully fought cyber-crime to share their experience. This will enable other organizations and individuals to pick up the best cyber security practices. There is a lot that goes on in the preparation for cyber security incidences. It could certainly help if organizations had a good example to learn from. It is also important to share to support the collaborative efforts towards preventing future hacking attempts. If an organization that has been hacked releases this information in time, other organizations will act quickly to prevent the same type of an attack. Lastly, sharing the experience will psychologically demotivate future hacking attempts on the same organization.
Cybercrime Statistics. (2017). CBS. Retrieved 23 August 2017, from http://www.cbs.com/shows/csi-cyber/news/1003888/these-cybercrime-statistics-will-make-you-think-twice-about-your-password-where-s-the-csi-cyber-team-when-you-need-them-/
Grimes, R. (2017). Why it’s so hard to prosecute cyber criminals. CSO Online. Retrieved 23 August 2017, from http://www.csoonline.com/article/3147398/data-protection/why-its-so-hard-to-prosecute-cyber-criminals.html
Mazurczyk, W., Holt, T., & Szczypiorski, K. (2016). Guest Editors’ Introduction: Special Issue on Cyber Crime. IEEE Transactions on Dependable and Secure Computing, 13(2), 146-147. | <urn:uuid:f2d9a10e-0a88-49b5-bc8a-5a371cdb3d55> | CC-MAIN-2022-40 | https://www.erdalozkaya.com/fight-against-cybercrime/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00373.warc.gz | en | 0.940266 | 995 | 2.765625 | 3 |
Security information and event management (SIEM) refers to a set of tools which assist detection and response efforts by centralizing security data. Because it’s impractical for security operators to manually collect the vast amounts of information needed to properly understand the security posture of their network, SIEMs serve to aggregate relevant data and present it to operators within a coherent and single user interface. Within these interfaces, SIEMs serve the twin functions of collecting and categorizing information.
Typical information collected by SIEMs might include event logs, user activity, permission changes, known vulnerabilities, and network traffic. As this information is collected, it is further categorized in a variety of manners – depending on the needs of the security operations center. Categorization might distinguish between high or low network traffic, known and unknown IP addresses, or routine or unexpected permission changes.
The real benefit of a SIEM system is combining the collection and categorization functions to generate tailored security alerts by establishing filters to better guide the attention of security operators. For example, perhaps there is a device within a network that regularly communicates with addresses outside of the local network. Rather than reviewing all such traffic, there may only be a need to review such traffic that is also considered abnormally high. While a simple example, this ability to collect, categorize, and filter demonstrates the core functionality of SIEM tools.
The primary benefit of a SIEM is to maximize personnel effectiveness by minimizing redundant activity. Without a centralized solution, security operators often lose situational awareness within the noise of security alerts. Carefully tailored system filters limit redundancies and the time security personnel must spend sifting through system noise. Acknowledging this, the April 2022 draft of NIST’s OT Security Guide recommends using SIEMs to, “help filter the types of events and reduce alert fatigue.”
Aside from enhancing security, SIEM tools have the added benefit of increasing system efficiency when deployed for industrial control systems. OT efficiency is heavily reliant on assets that are not easily replaced. Therefore, closely monitoring system activity can also inform predictive maintenance efforts, aid in coordinating asset updates, and increase response time to non-malicious outages. In IT networks such problems are more easily resolved without centralized management, but having effective SIEM functionality is critical to mitigate both security and operational threats within OT architectures.
Potential problems are most likely to arise during the collection portion of the SIEM workflow. Using traditional monitoring methods to feed a SIEM may not be effective for OT environments since they weren’t built for these unique systems which prioritize safety and availability over confidentiality and integrity. Many of the devices supporting a SCADA system are not designed to handle various scanning procedures that some SIEM systems might employ, which could result in safety hazards and loss of availability. Subsequently, it is important to feed any SIEM systems using tools that are designed specifically for OT.
Context is critical to identify system threats and vulnerabilities. Since malicious activity is not always immediately identifiable through individual device monitoring or discrete network packets, contextual information is required to determine system anomalies. Context can be collected from many different data points such as: user behaviors, network activity, event timelines, and system configurations. Manual or siloed collection efforts are simply impractical when capturing this data. Instead, the collect, categorization, and filter functions of a SIEM are well designed for the task. Many threats and vulnerabilities are uncovered through the interaction of different systems. SIEMS, therefore, do not simply draw operators’ attention to problems, but can illuminate vulnerabilities that were not previously visible.
Context is simultaneously important to respond to system threats and vulnerabilities. Once a threat or vulnerability is detected, contextual knowledge informs the method of response. Security operators need to understand the nature of a given exploit, affected systems, and operational dependencies when responding to a security incident. A lack of information may cause operators to remove more devices than necessary from operation. Conversely, the true extent of a system penetration may be obscured without enough contextual information. Having accurate data reveals the true depth of a threat and decreases the mean time to respond (MTTR). This is an important benchmark for security operators as it translates directly to safety and business continuity.
Contextual data can also help predict threats and vulnerabilities. As threats continue to grow and diversify, contextual data can feed machine learning (ML) security solutions. The fundamental approach of machine learning solutions is to train software with a model of a specific environment. Within OT environments, however, it is important that the model is accurate, complete, consistent, timely, unique, and valid. Unlike IT systems, there is significant variation in equipment logic and system activity that prevents ML tools from being integrated into a network it was not trained within. Therefore, having accurate contextual data will be increasingly necessary in OT infrastructure to take advantage of emerging security solutions.
Security orchestration, automation and response, or SOAR, technologies enable organizations to efficiently observe, understand, decide upon and act on security incidents from a single interface. As the name suggests, SOARs automate responses to common types of security events. Since many events require predictable responses, having those actions immediately triggered when an event is first detected by a monitoring system provides the advantage of further decreasing MTTR and maximizing the efficiency of security workflows. The extent to which these benefits are realized, however, is heavily determined by the amount of contextual information integrated into a SOAR program.
SOAR functionality has been difficult to establish within OT environments. Given the uniqueness of network architectures and sensitivities of industrial control systems, automation can result in unanticipated effects, including safety issues. In IT environments, security responses are easily reversable. Within the OT landscape, however, network dependencies and legacy equipment make automated actions riskier and harder to reverse. For example, quarantining a personal computer from the network may be inconvenient for one user within the office. On the other hand, quarantining a single PLC may cause an entire factory to go offline. For these reasons SOAR adoption has been difficult and slow in OT infrastructure. Building a data repository of operational technology asset information, including location, criticality and function is the only way that companies can take the first step towards implementing SOAR in their OT infrastructure.
Two integral components of SOAR functionality are Endpoint Detection and Response (EDR) and Managed Detection and Response (MDR). EDR systems monitor endpoints directly to identify and respond to security incidents. With the rise of the Internet of Things (IoT) and the subsequent proliferation of network endpoints, EDR has become a more popular method for securing digital systems. MDR solutions remotely monitor network architectures. An MDR solution might centrally manage detection for an enterprises cloud services and network activity across multiple physical presences.
Both EDR and MDR have limitations with respect to the needs of OT environments. Most importantly, EDR can easily pose specific threats to maintaining safety and availability. Within industrial control systems, endpoint traffic must be carefully managed – adding monitoring or scanning traffic can easily disrupt established processes. Likewise, remotely managed activity with MDR can also generate the same concerns. Having remote access set up for a manufacturing or factory location may itself create additional security concerns – particularly if that system previously had a more secure airgap.
Therefore, an EDR/MDR alternative for OT systems must be tailor-made around how industrial environments operate. This alternative should be based upon an intentional integration of SIEM and SOAR capabilities. SOAR protocols are only as effective as the information provided to them. Therefore, they must be integrated with an established SIEM to optimize operation. This will limit the potential for false positives which might otherwise trigger unnecessary actions.
Given the cybersecurity advantages gained through contextual information, OT security operators must establish, customize, and integrate SIEM solutions within their security practices. When establishing a SIEM solution, it’s important to determine whether the methods for collecting, categorizing, and filtering work seamlessly for an OT environment.
Once established, to be effective, the SIEM must be customized to the particular security needs of the specific network. This can be done by determining filters in an iterative manner which verifies that the workflow is being streamlined and that relevant information is not being lost.
Finally, the SIEM should be integrated with a SOAR system as an alternative to EDR/MDR approaches. This design will enhance the response time of security operators while maintaining the maximum system safety and availability.
The Industrial Defender for Splunk application equips security operators with the right OT data to integrate SIEM and SOAR functionalities into their security architectures. Designed with OT environments in mind, the application can detect changes in network activity, provide contextual data, and help facilitate IT/OT collaboration.
Additionally, Industrial Defender’s OT Machine Learning Engine enhances security by incorporating information from OT environments into existing data models for detecting, investigating and responding to cyberthreats such as ransomware. This functionality is particularly useful for larger systems needing additional mechanisms to limit alert fatigue and mitigate false positives. | <urn:uuid:d2b2e12a-4179-4539-8a2d-5e925168cdb4> | CC-MAIN-2022-40 | https://www.industrialdefender.com/blog/how-to-centralize-ot-security-data-in-siem | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00373.warc.gz | en | 0.928947 | 1,834 | 2.53125 | 3 |
About the Chip Shortage
The Covid-19 global pandemic brought chaos to many segments of the economy. From travel to fuel to electronics, the pandemic caused shortages and restrictions that will reverberate for years to come. The focus of this article is pertinent to the shortage impacting General Informatics — computers. Specifically, the tiny chips that make everything work.
All over the world, there are issues stocking computers or even the parts to maintain the ones currently in use. Anyone who tried to buy a computer recently probably noticed either long wait times, price increases, or worse, no availability. Should we be optimistic and hope that after almost two years of this pandemic the end is in sight? Unfortunately, the answer is no. We likely have another 3-4 years until the supply chain will fix itself and be normal again. The reasons for the chip shortage stem from high demand, lack of labor, and transportation. Let’s look at each of those a little further.
When the first quarantine was announced in early 2020, people were sent home to work or attend classes remotely. As a result, the world saw massive increased demand for laptops. Prior to March 2020, the average American household had .8 computers. This low figure was likely due to the increased internet accessibility of smart phones. Once the country was in “lockdown”, families found themselves needing on average 2-3 computers depending on how many school-age children were in the household. Naturally, supply was reduced significantly.
Another factor affecting demand was the large increase of cloud computing. Simply put, the “cloud” is a collection of resources typically run on an off-site massive server farm, requiring extensive resources and computer processing power. As such, they demand a large supply of computer components to maintain, adding to the supply shortage.
Lack of Labor
By the end of 2020, the surplus of laptops in warehouses were non-existent, and the next trial to overcome was producing computer chips as quickly as possible.
The United States manufactures on average about 12% of the world’s computer chips annually. However, in February 2021, a natural disaster in Texas, home of Intel’s largest chip producer, caused a power crisis (a/k/a “The Freeze”) that completely halted production. Additionally, Taiwan, the global leader in chip production, worked for several months at reduced capacity, and at times had to completely shut down most of its factories.
What's Next for the Chip Shortage?
While this supply chain issue is affecting multiple markets, it appears to hit the electronic industry hardest. What can we do to resolve this issue? One option for consumers is to stretch the life of our electronics. Examples include:
- Foregoing annual or bi-annual phone upgrades, opting to keep the phone for 3-4 years instead.
- Buying more powerful laptops. This may mean spending 30-40% more on a laptop initially but doing so could extend its use by 3-4 years over a cheaper, less equipped version.
In summary, best estimations state that the chip shortage will start to get better around Q1 2023, but we are likely to deal with its ramifications well into 2024.
IT Management and Support
The best department in your company.
At General Informatics, we work closely with clients to fuel business growth through technology. As a result, our clients consistently land on local and national top company lists. Contact us to schedule a FREE in-depth IT assessment. | <urn:uuid:6ea7308d-45e8-44f7-a42a-bef7b2c4d63f> | CC-MAIN-2022-40 | https://geninf.com/the-continuing-chip-shortage/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00573.warc.gz | en | 0.949137 | 888 | 3.40625 | 3 |
As more and more websites turn on HTTPS and online communications rely on cryptographic protocols such as Transport Layer Security, the Internet is increasingly encrypted. Except for one significant part: the Domain Name System.
DNS acts as the phonebook for the Internet and translates human-readable domain names to the actual address of the machine (numeric string for IPv4, alpha-numeric for IPv6) hosting the content or application the user is interested in. Since DNS queries are typically sent in plaintext via UDP or TCP, the entity operating the DNS server can see all the requests—essentially, the entirety of the user’s online activity. For many users and organizations, the internet service provider provides DNS, which means the ISP can monitor what websites the user visited, when the visits occurred, and what device was used.
Encrypting DNS traffic would make this kind of web surveillance harder because ISPs and other on-path observers won't be able to see what users are doing online. A number of technology companies have been working on alternatives to sending DNS queries in plaintext over UDP and TCP. DNS over HTTPS, based on the Internet Engineering Task Force’s RFC 8484 standard adopted last October, is perhaps the most well-known. Another is DNS over TLS.
There are several options for DNS over HTTPS, including Cloudflare with its 1.1.1.1 service and the non-profit Quad9's 9.9.9.9 service. Cisco's OpenDNS offers encrypted DNS, and Mozilla has been working on its own efforts for Firefox. This week, Google announced general availability of DNS over HTTPS for its own public DNS service on 8.8.8.8.
“Today we are announcing general availability for our standard DoH service. Now our users can resolve DNS using DoH at the dns.google domain with the same anycast addresses (like 8.8.8.8) as regular DNS service, with lower latency from our edge PoPs throughout the world,” wrote Google product manager Marshall Vale and security engineer Alexander Dupuy.
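To make the mechanics concrete, here is a minimal sketch of how a DoH client forms an RFC 8484 GET request: the DNS question is serialized in ordinary DNS wire format, base64url-encoded with padding stripped, and placed in the `dns` query parameter. The resolver URL below is Google's public endpoint; the sketch only builds the URL and does not send any network traffic.

```python
import base64
import struct

def build_dns_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet in wire format (qtype 1 = A record)."""
    header = struct.pack(">HHHHHH",
                         0,       # ID 0, as RFC 8484 suggests for HTTP cache friendliness
                         0x0100,  # flags: standard query, recursion desired
                         1,       # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    name = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"                              # root label terminates the name
    question = name + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(domain: str, resolver: str = "https://dns.google/dns-query") -> str:
    """Encode the query for an RFC 8484 GET request (base64url, padding removed)."""
    query = build_dns_query(domain)
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

print(doh_get_url("example.com"))
```

An on-path observer of this request sees only an HTTPS connection to the resolver, not which domain is being looked up.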
Right now, if governments want to see where users are going online, they can demand to see the ISP’s records. In fact, in the United Kingdom, ISPs are required to track all the sites citizens visited for the previous 12 months under the 2016 Investigatory Powers Act (IPA). ISPs are also allowed to share the data with third parties for content filtering and advertising purposes. Using public DNS services such as the one provided by Google (8.8.8.8) meant bypassing the ISPs, but it also meant giving the data-hungry search giant access to all of the DNS requests.
Encrypted DNS queries simply cut out the ISP, or attackers lurking on the network. The DNS provider (say, Google or Cloudflare) can still see the DNS query, so there is a tradeoff on who gets to see the user's entire browsing history. Cloudflare, to its credit, has pledged to keep only 24 hours' worth of DNS queries, to keep the amount of data being collected low.
Along with boosting user privacy, DNS over HTTPS will reduce the threat of man-in-the-middle attacks against DNS infrastructure via DNS spoofing, DNS hijacking, and DNS poisoning. Transmitting DNS queries through an encrypted HTTPS tunnel prevents anyone on the path from hijacking those queries to redirect users to some other site.
Mixture models are used to discover subpopulations, or clusters, within a set of data; a Gaussian mixture model has parameters that correspond to a probability that a specific data point belongs to a specific subpopulation. The probability function is a Gaussian distribution – the traditional bell-shaped curve with a mean and standard deviation – and can be used for single or multiple variable models.
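As a toy illustration of how those parameters are estimated, the sketch below fits a one-dimensional, two-component Gaussian mixture with a bare-bones expectation-maximization loop (the same idea that production libraries such as scikit-learn implement in full generality). The synthetic data, component count, and iteration budget are all illustrative choices, not part of any particular product.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm_1d(data, iters=60):
    """Fit a 2-component 1-D Gaussian mixture by expectation-maximization."""
    weights = [0.5, 0.5]
    means = [min(data), max(data)]   # crude but deterministic initialization
    sigmas = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas)]
            total = sum(p)
            resp.append([pi / total for pi in p])
        # M-step: re-estimate mixture weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(data)
            means[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigmas[k] = math.sqrt(max(var, 1e-9))
    return weights, means, sigmas

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
_, means, _ = fit_gmm_1d(data)
print(sorted(means))  # two recovered cluster centers, near 0 and 5
```

The responsibilities computed in the E-step are exactly the "probability that a specific data point belongs to a specific subpopulation" described above.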
Gaussian mixture models, typically deployed in unsupervised machine learning, are widely used in applications like financial investments and pricing, natural language analysis, image recognition, and predictive maintenance. They are widely available in open-source libraries, are easy to implement, and are faster and more stable than other solutions like gradient descent in converging to a minimum.
C3 AI makes it easy to apply Gaussian mixture models to address domain-specific AI applications to deliver business value today. The C3 AI Application Platform is a complete, end-to-end platform for designing, developing, deploying, and operating enterprise AI applications at industrial scale. C3 AI supports Gaussian mixture models to analyze data within an ML pipeline using either the low-code C3 AI ML Studio development environment or the no-code C3 AI Ex Machina tool. Within Ex Machina, for example, a business analyst can graphically link a data set to an analytics model to run a Gaussian mixture model to determine the optimal segmentation into clusters, and then apply that model to classify unlabeled incoming data, all without writing a line of code. | <urn:uuid:f902602f-990a-40c7-b95d-cbc53e4c6c74> | CC-MAIN-2022-40 | https://c3.ai/glossary/data-science/gaussian-mixture-model-gmm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00773.warc.gz | en | 0.89057 | 304 | 2.515625 | 3 |
3 Elements of magic and the bond between technology and illusion
Augmented Reality (AR) is melding the real world with technology to create some amazing experiences that can help people do all kinds of things like perform medical procedures remotely, teach users new skills, and even enhance entertainment. At AT&T SHAPE in Los Angeles, cyber illusionist Marco Tempest demonstrated how magic and technology could merge to create fascinating tricks at his talk on “Inventing the Impossible.” At the heart of his work is discovering new ways to blend illusion with technology to create new magic tricks. However, there’s a lot more that goes into creating a good trick than just some sleight of hand and implementing technology in the right way. Magic is a deception, and in order to truly enjoy it, an audience must suspend disbelief – something we do every day.
Jean Robert-Houdin was the first to recognize the role of the magician as a storyteller, which means every trick is a story that follows the archetypes of narrative fiction. However, in magic, they are stories with a twist. Magicians tend to tell these stories dramatically (think: catching a bullet or trying to escape from a locked box submerged in water). “The finale of a trick defies logic, gives new insights into a problem, and the audience laughs,” said Tempest. “It’s fun to be fooled.”
The Art and Craft of Creating Contemporary Illusions
1. Advanced Technology as Magic
The connection between magic and science is not new; magicians have known about it for over 2,000 years. The temples of ancient Greece in 150 BC were filled with magic: doors mysteriously opened and altars burst into flames. These were all magic tricks that applied the science of the time. Attendees, however, didn’t know that and thought it was magic. Even today, when magicians hear about a new technology, they find a way to use it in their tricks before the public realizes its capabilities. You might call magicians the first “early adopters” of technology.
2. Magic and Psychology
We may think that magic is about deceiving the eye, but Tempest explains this is not true. The success of a magic trick depends on deceiving the brain, which makes magicians take on the role of psychologists. The brain can be lazy, and to save energy it looks for patterns. “The magician exploits this, setting up familiar patterns, so the brain jumps to the wrong conclusions,” said Tempest.
When designing a trick, everything must seem familiar, so magicians use ordinary props and straightforward language. This familiarity is a mask for the deception that’s actually occurring. Interestingly, the eye sees everything, but the brain is fooled. Tempest performed the “Princess Card Trick” (a variation of the familiar pick-a-card trick, first developed in 1903) using AR to demonstrate the connection between the brain and magic. Sure enough, the card the audience picked disappeared.
3. Secrecy and Collaboration
Magicians are keepers of secrets. Consider the fact that the slogan of the Magic Castle in L.A. is indocilis privata loqui, which translates roughly to “not apt to disclose secrets.” After all, if you know the secret to a trick, it’s unlikely to fool you. Tempest has a different take on secrecy and believes that he can give his audience a truly magical experience by collaborating with writers, technologists, software developers, artists, engineers, and designers. Each person brings their skillset, and the result is a trick far better than anything a single magician could do on their own. “Sometimes when devising a trick for the 21st century, secrecy is replaced by collaboration,” suggested Tempest.
Magic and the Future
When you think about magicians of the past, such as France’s Jean Robert-Houdin, you start to see how they were instrumental in pre-visualizing the future. For example, Robert-Houdin created one of the first incandescent light bulbs and had an electronic gate installed in his home long before such things were available to the public. “Sometimes a well-performed piece of magic looks like advanced technology,” explained Tempest.
In doing so, the illusion becomes so convincing that it is almost indistinguishable from reality, which over time can turn the illusion into reality. Magicians are not only entertaining; they are also providing their audience with a glimpse of what the future might look like. They already show us what it might be like to fly or read minds.
Watch Marco Tempest’s session at SHAPE here and the tricks he performed during his presentation. Let us know what you think of the session in the comments below. Do you think man will control machines or the other way around?
Marco Tempest, Executive Director of the NYC MagicLab and a Director’s Fellow at MIT Media Lab.
Marco Tempest is a cyber illusionist, combining magic and technology to produce astonishing illusions. He began his performing career as a stage magician and manipulator, winning many awards and establishing an international reputation as one of the world’s most unique performers. His interest in computer-generated imagery led him to incorporate video and digital technology in his work and to develop a new form of contemporary illusion. The expansion of the Internet and social media provided more opportunities for digital illusions, new ways of interacting with audiences, and magically augmented realities. Tempest is a keen advocate of the open source community, working with artists, writers, and technologists to create new experiences, and he researches the practical uses of the technology of illusion. He continues to perform around the world, is a media consultant on the subject of magic and illusion, and lectures at international conferences on the psychology of deception and creative thinking.
There seems to be some confusion about the differences between routers and firewalls. One contributing factor is that device manufacturers tend to combine the two functionalities into one device. Traditionally, each of these devices was specialized hardware that did one specific job well.
Both of these devices have advantages and disadvantages over the other, unique features, and different purposes. In this article, we will define what they are, identify their primary use in your network, and explain why you may need both.
What is a router?
A router is a device that quickly forwards data from one network to another. For example, for your devices to communicate to the Internet, you need a networking device to transmit the traffic from your home to the Internet Service Provider (ISP). Typically, this device is a router that either you purchased or provided by your ISP.
The type of router found in most homes and some small businesses is called a wireless router. The wireless router combines the functionalities of multiple devices: wireless access point, switch, and a router.
Furthermore, many routers on the market provide some level of network security by including features like Network Address Port Translation (NAPT), Stateful Packet Inspection (SPI), and so on.
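To make NAPT concrete, here is a toy sketch (not any vendor's implementation) of the translation table such a router maintains: outbound flows are rewritten to the router's public IP and a freshly allocated port, and replies are mapped back to the originating private host. The addresses and port numbers are invented for illustration.

```python
class NaptTable:
    """Toy Network Address Port Translation table."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.out:          # new flow: allocate a public port
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def translate_inbound(self, public_port):
        # Replies to unknown ports are dropped -- a side effect that
        # blocks unsolicited inbound traffic, which is why NAPT is often
        # described as providing "some level" of security.
        return self.back.get(public_port)

napt = NaptTable("203.0.113.7")
print(napt.translate_outbound("192.168.0.10", 51515))  # ('203.0.113.7', 40000)
print(napt.translate_inbound(40000))                   # ('192.168.0.10', 51515)
print(napt.translate_inbound(40001))                   # None (dropped)
```

Note that this mapping hides internal addresses but is not a substitute for a real firewall's rule-based inspection.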
What does a router do?
The principal function of a router is to route network traffic between networks. The job of a router is similar to the role of the United States Postal Service (USPS). The router tries its best to forward the data between the sender and the receiver in different networks.
Since the majority of routers in a lot of small businesses are wireless routers, they also allow the connection of wired and wireless devices such as computers, printers, mobile devices, etc.
What is a firewall?
A network-based firewall is a device that provides security by monitoring incoming and outgoing traffic and deciding whether to allow or deny specific traffic based on a set of rules.
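The rule sets mentioned above can be pictured as an ordered list evaluated top-down, with an implicit deny at the end. The sketch below is illustrative only, not any particular vendor's syntax, and the sample rules are invented:

```python
# Each rule: (action, protocol, destination_port); None matches anything.
RULES = [
    ("allow", "tcp", 443),   # permit HTTPS
    ("allow", "tcp", 22),    # permit SSH (illustrative only)
    ("deny",  "udp", None),  # block all UDP
]

def evaluate(protocol, dst_port, rules=RULES):
    """Return the action of the first matching rule; default-deny otherwise."""
    for action, proto, port in rules:
        if proto in (None, protocol) and port in (None, dst_port):
            return action
    return "deny"  # implicit deny: anything not explicitly allowed is blocked

print(evaluate("tcp", 443))   # allow
print(evaluate("udp", 53))    # deny
print(evaluate("tcp", 8080))  # deny (falls through to the implicit deny)
```

First-match-wins ordering is why rule order matters so much in real firewall configurations.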
For many years, the firewall has been an integral part of any successful security program. It serves as the first line of defense in network security.
Today’s operating systems, such as Windows and macOS, include a software firewall that provides added network protection. A host-based software firewall functions similarly to a traditional network-based firewall.
Nowadays, firewall manufacturers add extra features like anti-malware, an Intrusion Prevention System (IPS), application awareness, and URL filtering; a firewall with these capabilities is referred to as a next-generation firewall (NGFW). An NGFW offers far better security than a router or a traditional firewall.
What does a firewall do?
The principal function of a firewall is to provide network protection by blocking unwanted traffic. A job of a firewall is similar to the role of the Transportation Security Administration (TSA). The firewall inspects network traffic to make sure everything looks good before it is allowed to pass through.
Some firewalls designed for small businesses or branch offices also combine functionalities of wireless routers, allowing both wired and wireless network connectivity.
Which one should you buy?
Unfortunately, the answer to this question is: it depends. Determining the right device for your business requires an understanding of your goals and requirements.
For a small coffee shop, a wireless router from your favorite retailer may be sufficient. Some small and medium-sized businesses (SMBs) may opt to purchase an NGFW for better security.
In some scenarios, you might need to purchase both a router and a firewall. For example, if a branch office has the following requirements: WAN connectivity options (both wired and wireless), VoIP, switching, NGFW, and computing. Then, buying a router that can do the majority of these requirements and a separate NGFW could be a suitable solution.
There are some instances where you don’t want to, by default, restrict network traffic. For example, in higher education space, the researchers may expect no restrictions and a fast network to transfer data between each other.
Both devices can provide a level of network security. However, NGFW gives a higher level of protection compared to a router with some firewalling features.
Choosing between a router and a firewall will vary from one company to another. The key to determining the proper device is by gathering the requirements, goals, and business and technical constraints.
If security is paramount to your company, then purchasing a next-generation firewall with a subscription to the advanced features is the right way to go.
Still unsure about what to get?
Let us answer your questions by contacting us. We’ll help you with hardware selection, design, configuration, and implementation.
NetworkJutsu provides networking and network security consulting services for startups, a more established small and medium-sized business (SMB), or large business throughout the San Francisco Bay Area. | <urn:uuid:cf83ac34-2bd5-4345-9bcc-b24cc75900a3> | CC-MAIN-2022-40 | https://networkjutsu.com/router-versus-firewall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00773.warc.gz | en | 0.941342 | 986 | 3.125 | 3 |
Huawei Router Interface Configuration
Huawei Router Interface Configuration is one of the first configuration that Junior Network Engineers learn on Huawei Routers. Here, we will show how to configure Huawei Router Interfaces with given IP addresses.
You can test youself with Huawei HCIA Questions Page.
For our example, we will use the below simple topology.
You can download this configuration on Huawei eNSP Labs Page.
As you can see above, there are two types interfaces. These are :
- Pyhsical Interfaces
- Loopback Interfaces
In this configuration example, we have one loopback and one physical interface per router. The physical interface configuration enables communication between the two routers. Loopback interfaces are usually configured for special purposes such as management or routing protocol router IDs; here, we will configure one only to show that it is no different from a physical interface configuration.
Let’s start the configuration with Router A.
In Router A, we will first configure the GigabitEthernet 0/0/0 interface IP address with its subnet mask. After this, we will open the port with the “undo shutdown” command, because router ports are shut down by default.
<Huawei-RouterA> system-view
[Huawei-RouterA] interface GigabitEthernet 0/0/0
[Huawei-RouterA-GigabitEthernet0/0/0] ip address 192.168.0.1 255.255.255.0
[Huawei-RouterA-GigabitEthernet0/0/0] undo shutdown
The second interface configuration on Router A is Loopback 0. Again, we will configure the IP address and subnet mask. The subnet mask can be written in long form (255.255.255.255) or short form (32).
[Huawei-RouterA] interface Loopback 0
[Huawei-RouterA-Loopback0] ip address 10.10.10.10 32
Now, it is time to configure Router B. We will do the same configuration steps for Router B too. Only the IP Addresses will change.
<Huawei-RouterB> system-view
[Huawei-RouterB] interface GigabitEthernet 0/0/0
[Huawei-RouterB-GigabitEthernet0/0/0] ip address 192.168.0.2 255.255.255.0
[Huawei-RouterB-GigabitEthernet0/0/0] undo shutdown | <urn:uuid:bf1bc1e5-6885-4857-a097-5d6c68638884> | CC-MAIN-2022-40 | https://ipcisco.com/huawei-router-interface-configuration-with-ensp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00773.warc.gz | en | 0.706381 | 610 | 2.515625 | 3 |
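Once both routers are configured, we can verify the setup. The commands below are standard VRP verification commands; the exact output depends on your eNSP version, so it is omitted here.

[Huawei-RouterB] display ip interface brief

This lists each interface with its IP address and physical/protocol state. Finally, from user view, a ping to the peer confirms connectivity:

<Huawei-RouterB> ping 192.168.0.1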
In August, Russian hackers stole 1.2 billion username and password combinations and more than 500 million email addresses from thousands of websites. It’s unclear if this was a general trolling exercise or a set of targeted attacks. Either way, sensitive information was compromised.
There are numerous other examples from the large to small of malicious or inadvertent data breaches throughout businesses and organizations of all types and sizes. Hackers get all the press headlines, but insiders pose as great a risk as any external party when it comes to vulnerabilities. Regardless of who you are, your information is under attack.
With the end of the year holidays approaching, now is a good time for a few tips on preventing a data breach.
- Secure sensitive customer, employee or patient data – store paper files and removable devices containing sensitive information in a locked drawer, cabinet, safe or other secure container when not in use. Only give access to those who need it to do their jobs, whether in paper or electronic form.
- Properly dispose of sensitive data – shred documents containing sensitive data prior to recycling. Remove all data from computers and electronic storage devices before disposing of them.
- Use password protection – password protect your computers, including laptops and smartphones, and access to your network and servers. Require employees to have a unique user name and a strong password that is changed at least quarterly and don’t share credentials with other users.
- Control physical access to your computers – make sure servers, desktops and laptops are locked in place when unattended. Limit network access on computers in public spaces, such as the reception area.
- Encrypt data – encryption helps protect the security and privacy of files as they are transmitted, while on the computer and in use. Encrypt all sensitive information with a data-centric security policy.
- Protect against viruses and malware – install and use antivirus and antimalware software on all of your computers. Don’t open email attachments or other downloads unless you’re sure they’re from a trusted source.
- Keep your software and operating systems up to date – install updates to security, web browser, operating system and antivirus software as soon as they are available.
- Secure access to your network – ensure your network firewall is up to date with patches. Enable your operating system’s firewall. Ensure your Wi-Fi network is password protected, secure, encrypted and hidden so that its network name or SSID can’t be picked up by the public.
- Verify the security controls of third parties – before working with third parties that have access to your data or computer systems or manage your security functions, be sure their data protection practices meet your minimum requirements and that you have the right to audit them.
- Train your employees – people are the weakest link in security, so make sure your employees understand your data protection practices and their importance. Document your policies and practices and distribute them to everyone. Review your practices regularly and update them as required. Be sure to retrain your staff as updates are made.
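On the password-protection point above: systems that store credentials should never keep them in plaintext. Below is a minimal sketch of salted password hashing using only Python's standard library; the iteration count and 16-byte salt are illustrative choices, and real deployments should follow current hardening guidance.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    if salt is None:
        salt = os.urandom(16)          # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because each user gets a unique salt, identical passwords produce different digests, which blunts precomputed-table attacks if the credential store is ever breached.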
Photo credit Sam Churchill | <urn:uuid:a58ebff9-2c17-43df-8009-69afa6c9c2d4> | CC-MAIN-2022-40 | https://en.fasoo.com/blog/top-tips-to-prevent-a-data-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00773.warc.gz | en | 0.920863 | 626 | 2.640625 | 3 |
The world has been intrigued by cryptocurrency and cryptomining for almost a decade now. This virtual or digital currency is secured by cryptography, which makes it nearly impossible to counterfeit or double-spend. While many cryptocurrencies technically run on decentralized, blockchain-based networks enforced by many computers, this may not be the entire picture.
The entire internet that we use today is controlled by only a couple of technology giants. All crypto relies on storage, websites, and application interfaces. These gatekeepers include Amazon, Google, Facebook, etc. If only a handful of companies control the entire Internet, these companies also get to decide the fate of the cryptocurrency market. A simple barring of crypto companies from using their services should immediately put a halt to all crypto as we know it. This is where the Internet Computer Protocol or ICP comes into the picture.
The Internet Computer was first founded in 2016 by Dfinity. Internet Computer is a digital token that uses its own registered protocol named Internet Computer Protocol. It lets anyone publish content or build software without using services from the tech giants mentioned earlier.
The idea behind ICP is to allow people to build a new open Internet giving users a better deal. Dfinity founder Dominic Williams says that this new protocol is a hackproof platform that will help bring down user costs. ICP uses the same idea as other blockchain cryptocurrencies which use smart contracts or codes serving has an agreement between users.
The Internet we know today was created by a decentralized protocol called IP or Internet Protocol. IP intertwines millions of private networks forming a single global network. It’s durable and easy to use as it frees connected software from worrying about how the information is being routed throughout the network.
Internet computer is also created on a decentralized protocol, but it uses blockchain technology instead. The ICP uses the compute capacity of node machines connected by data centers around the world to make up a cohesive system that can host smart contract software and all data is used. The Internet Computer can be utilized to build websites, internet services, applications, and enterprise systems.
The Internet Computer uses an algorithm called Threshold Relay, a modification of the Proof of Stake algorithm, to reach consensus. Proof of Stake is what the Ethereum cryptocurrency uses for its consensus, requiring users to stake their ETH to validate the network. In ICP’s version, nodes produce a random number called a random beacon, which is used to select the next group of nodes; this drives the platform’s protocols. The mechanism is called the Threshold Relay consensus model and is one of the main aspects that make ICP what it is.
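As a purely illustrative sketch of the idea (the real Internet Computer derives its beacon from threshold BLS signatures, not from a simple hash chain), the following shows how an unpredictable per-round value can drive deterministic committee selection:

```python
import hashlib

def next_beacon(previous_beacon: bytes, round_number: int) -> bytes:
    """Illustrative stand-in for a random beacon: each round's value is
    derived from the previous one, so every participant computes the same
    result but cannot predict it far in advance."""
    return hashlib.sha256(previous_beacon + round_number.to_bytes(8, "big")).digest()

def select_committee(beacon: bytes, nodes, size):
    """Rank nodes by a beacon-keyed hash and take the top `size` of them."""
    ranked = sorted(nodes, key=lambda n: hashlib.sha256(beacon + n.encode()).digest())
    return ranked[:size]

beacon = next_beacon(b"genesis", 1)
nodes = [f"node-{i}" for i in range(10)]
print(select_committee(beacon, nodes, 3))
```

Because the selection is a pure function of the beacon, every honest node agrees on the next group without any extra communication.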
The Internet Computer also uses what it calls Chain Key Technology, which splits smart contract function execution into two kinds of calls: “update calls” and “query calls”. This split helps blockchain applications deliver competitive user experiences. Update calls make persistent changes and cannot be tampered with, because the Internet Computer Protocol runs them on every node in the subnet. Query calls work differently: any changes they make to memory are discarded after they run, which allows query calls to execute in milliseconds. ICP could be thought of as an improvement to blockchain technology, as it is faster and could have more potential.
The Internet Computer is an open network that allows users to vote on the future of what the Internet Computer becomes. This is decided by ICP token holders. These tokens are used to vote on proposals that shape what the Internet Computer is in the future. ICP tokens are available to trade at exchanges including Coinbase Pro, Huobi Global, Binance, and more.
ICP tokens give their holders the power to help shape and govern this new Internet. This network aims to help developers make websites, internet services, enterprise IT systems, and applications by installing the code directly onto the public internet. The potential of an Internet without the big technology giants making all the rules is intriguing to many people.
The potential of the Internet Computer is intriguing, but let’s weigh some of its benefits and drawbacks. As mentioned earlier, its potential could be broad, which means its scalability can also be effectively unlimited. Security is another advantage: Dfinity states that its system of checks is superior even to Ethereum’s. It is compatible with smart contracts, which means decentralized apps can be developed on the platform. Another advantage of ICP is its speed. Bitcoin transactions can take around 10 minutes and Ethereum around 15 seconds, subject to network congestion; Dfinity states the Internet Computer can finalize transactions in one to two seconds.
Because the Internet Computer is still fairly new, the protocol isn’t battle-tested. Some argue that a great deal of proprietary code is involved. Critics also say the blockchain is closely controlled and that the true value of the protocol is uncertain.
Cryptocurrency continues to evolve in many different ways. Even art is now a cryptocurrency. The Internet Computer is still in its early stages, but the many possibilities make the future of the Internet Computer fascinating. Being at the forefront of a potential new internet can be beneficial for early adopters and investors. The Internet Computer’s purpose is to create a superior blockchain for the world to build upon. The idea of a platform controlled by a larger group of people instead of only a couple of tech giants is interesting. If the Internet Computer does what it’s set out to do, the way the internet currently is will change forever. It will be a truly open internet that also democratizes tech opportunities to a larger group of people. Only time will tell the repercussions of the Internet Computer and how the rest of the world is affected. | <urn:uuid:2dcb7e64-96f3-45f2-8776-a7d9516f3804> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/what-is-icp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00773.warc.gz | en | 0.94303 | 1,190 | 2.984375 | 3 |
Last year, it often seemed that the topic of fraud was in the news more days than it wasn’t. With data breaches at many big name retailers, researchers began developing solutions that would protect merchants while making consumers feel safe when making purchases and the fix may just be quantum physics.
A group of researchers in the Netherlands has come up with a way to prevent criminals from forging documents and gathering useful data from breaches like the ones at Home Depot and Target. Quantum physics, the study of matter and energy at the smallest scales, could be used to, in essence, confuse criminals.
When cards or IDs are used in an ATM or other reader, light patterns are used to authenticate the card. Criminals can intercept these patterns, steal them, and use them to make fraudulent transactions. The new Quantum-Secure Authentication approach would involve painting a small white strip of nanoparticles on a card, driver’s license, or passport. When the strip is created, a laser fires small bundles of light into it; these bundles bounce around among the nanoparticles and create a pattern that is impossible to copy. When the card is used, the ATM sends quantum light into the paint, which reflects a pattern that is impossible for criminals to decode.
Sound like something from the far-off future? Maybe not. The technology is already available and is surprisingly cheap. There are some downsides, though, as stolen cards could still be used by criminals if companies don’t have proper identity verification procedures, like EVS’s Consumer Identity Verification, in place. By utilizing solutions like IdentiFraud, retailers and consumers can feel safer knowing there is an additional line of defense before they become a victim of fraud.
[Contributed by EVS Marketing] | <urn:uuid:d447a748-593b-42bb-ba6f-9a1a5a59dc31> | CC-MAIN-2022-40 | https://www.electronicverificationsystems.com/blog/quantum-physics-as-the-future-of-fraud-prevention | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00173.warc.gz | en | 0.947773 | 359 | 2.890625 | 3 |
The Benefits of Certified Ethical Hacking Course
The majority of people have either been “hacked” or know someone else who has been. It is unnerving and even devastating for the victims of such evil acts. Not all hackers are cybercriminals though and not all hacking is illegal. Much of the hacking done today is not only legal but also necessary for the betterment of humankind.
In Canada, the Criminal Code considers hacking, or the aiding and abetting of others to hack, to be an act of “mischief” punishable by up to 10 years in jail. In learning how to hack ethically, you will be part of a worldwide, innovative team practically deterring cybercrime.
In a fast-growing field that demands ever more cybersecurity talent, you will learn innovative practices for preventing the destruction of data and the disruption of computer system security, websites and digital networks, while practicing techniques to counter the mindset of malicious attackers.
Computer crimes are not limited to your own homeland and cybercrime conversation is not limited to computer programmers. Cybersecurity has great importance to all people and businesses so much so that in October 2017, Public Safety Canada deemed October to be Cyber Security Awareness Month. While theories of information warfare may be circulated in Washington, D.C. conferences, in most places you travel you will hear conversations about how to limit your personal and business exposure.
As part of a new generation of ethical hackers, your desire to learn offensive security, and stake a claim as an expert in your field of study, will not only set you apart in Canada but will also open opportunities for you globally. Whether you desire to work in the banking industry to prevent customers from having their financial records compromised, opt to work for government creating policy, academia teaching others about cybersecurity risks or begin your own agency offering services to others, you too may find yourself taking the place of and becoming the next John Austen – Head of Crime Unit, New Scotland Yard. You have the opportunity to create and make a safer cyber environment for all.
Things to Prevent
Today, with increased technology, there are limitless possibilities for harm. It is human nature that there will always be someone who wishes to push the limits of either good or evil; however, as a well-educated and expertly trained individual, you will be part of an initiative that curbs their appetite and helps block their troublemaking.
“Hacking has evolved from teenage mischief into a billion-dollar growth business.” Knowing how to prevent Botnets, Browser hijacks, Ransomware, Trojans, Viruses, and Worms, to name a few, will place you in the enviable position of having the cutting edge knowledge to assert yourself in organizations, businesses, the military and other fields of loss prevention.
Catapulting Yourself: How and Where you Work
Being a Certified Ethical Hacker will afford you the luxury of working remotely or working in-house for a big bank. Whether you are in a home office, travel to exotic workplaces or have a designated office on a job site, you will have the opportunity to experience remuneration commensurate with the ever-increasing demands for scrutinizing the evil deeds of the Black Hat community.
What does Compensation Look Like for Hackers?
Depending on your education level, work experience, geographic location and desired field of work, opportunities and salaries are limitless. Presently, the average salary for ethical hackers is $76K with higher salary ranges for IT security architects.
At the end of the day, most folks want to be compensated for a job well done. In the case of ethical hacking, there are a couple more built-in by-products: loving what you do and curtailing the deeds of evildoers.
Wear the White Hat
If you are an ethical and creative problem solver in search of great learning, desire to be part of an exclusive club of knowledge experts and wish to pledge the infamous Hacker Manifesto, then you are someone who will be called upon for years to come to advise and consult, solve mysteries and prevent potential crimes. There is no time like the present to get on the bandwagon and posse up to ride into the sunset with a team of cutting-edge cybercrime preventers. Get out the tape measure and gauge your head for your Smithbilt. You will be fitted for a lifetime of adventure in the new cyber Wild West.
Read our other blog: How to Become a White Hat Hacker | <urn:uuid:16f42d0c-e455-4e9f-962d-6515dd73f578> | CC-MAIN-2022-40 | https://technoedgelearning.ca/the-benefits-of-certified-ethical-hacking-course/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00173.warc.gz | en | 0.932493 | 921 | 2.859375 | 3 |
"Grand challenges" are ambitious goals, legendary events such as landing on the moon. DARPA is looking for some help in determining what the 21st century's grand challenges will be.
What do putting a man on the moon, the Human Genome Project and Wikipedia have in common? They are all examples of solutions to grand challenges – ambitious but achievable goals that captured the public’s attention and were fueled by innovation. So what will the next grand challenge be?
The Defense Advanced Research Projects Agency is looking to discover just that. In a new request for information, DARPA is asking for input to identify the Grand Challenges of the 21st Century -- part of a broader national innovation strategy under the Obama administration.
"Grand Challenges are not restricted to projects to be undertaken under government sponsorship, but will likely be tackled by groups both within and outside the United States, using both public and private resources," the RFI noted. "Because of the cross-disciplinary nature of the most vexing problems facing the world today, Grand Challenges that are simply posed, inspirational, and easy to visualize for a variety of audiences are desired."
DARPA hopes to attract ideas from a diverse audience – "young and old, scientist and layperson, domestic and international." Responses are due by Jan. 1, 2013.
Last spring, Thomas Kalil, deputy director for policy at the White House Office of Science and Technology Policy, highlighted Grand Challenges as a critical part of American innovation and as an area some agencies are targeting to help their missions.
Kalil pointed out that the Energy Department is supporting the Grand Challenges movement in some of its clean energy initiatives, as well as USAID, which has launched grand challenges addressing newborn health and literacy. He also noted that the administration has encouraged incentivization to help meet Grand Challenges goals.
"Incentive prizes work as one tool to address Grand Challenges because they shine a spotlight on an ambitious goal without having to predict which team or approach is most likely to succeed," Kalil said at the Information Technology Innovation Foundation in Washington. "Incentive prizes help us reach beyond the 'usual suspects' to increase the number of minds tackling a problem, bringing out-of-discipline perspectives to bear and inspiring risk-taking by offering a level playing field."
It’s not clear what kind of incentives DARPA could offer, or if the agency will offer any – the RFI notes only that some selected responses may be featured or otherwise promoted by OSTP and DARPA. While DARPA did not respond to an FCW request by press time, finding such wide-reaching solutions themselves could be the ultimate incentive.
"These Grand Challenges can help solve an important societal problem by serving as a ‘North Star’ to provide focus and cohesion among disparate but potentially complementary research and development efforts," the RFI noted. "The consequences of these achievements will often affect many different disciplines, and the full ramifications may not be known for decades to come."
NEXT STORY: NIST fellow wins Nobel Prize for physics | <urn:uuid:846d2565-1a9a-4840-82dd-244fd7b5bc8f> | CC-MAIN-2022-40 | https://fcw.com/people/2012/10/darpa-seeks-the-next-great-challenges/206637/?oref=fcw-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00173.warc.gz | en | 0.946883 | 621 | 2.75 | 3 |
A cyberattack targeting the 2018 Winter Olympics in Pyeongchang, South Korea aimed to cause disruption at the start of the Games and required deep knowledge of the infrastructure - a sign the attackers had previously compromised it, according to researchers.
The attack took place prior to the Opening Ceremonies held on Friday, Feb. 9 and interfered with TV and Internet systems. Olympics officials confirmed technical issues affecting non-critical systems and completed recovery within 12 hours. On Sunday, Feb. 11, they confirmed that a cyberattack had taken place but didn't offer additional details.
Researchers at Cisco Talos identified malware samples used in the attack "with moderate confidence" and report the infection vector is currently unknown. Evidence indicates the actors responsible were not seeking information or monetary gain: Their primary goal was likely to cause destruction.
The so-called "Olympic Destroyer" malware studied by Cisco renders machines unusable by deleting shadow copies and event logs, and tries to use PsExec and WMI to move across the environment. Talos analysts point out they had previously seen this behavior in both the BadRabbit and Nyetya (NotPetya) attacks.
The initial malware sample is a binary that drops multiple files onto the target machine. From there, the malware moves laterally throughout the network, using two information stealers and hardcoded credentials within the binary. Talos found 44 individual accounts in the library and says the malware author knew several technical details about the Olympics infrastructure including usernames, domain name, server names, and passwords.
"This is a targeted attack and this involves some reconnaissance," says Craig Williams, director of Cisco Talos outreach. "The attacker came into the campaign knowing a large number of accounts. That involves, obviously, a phishing campaign or an intelligence-gathering campaign."
A key takeaway is this malware doesn't use an exploit to spread, Williams continues. It spreads through normal tools using valid credentials, a tactic that will help attackers evade most security tools.
The destructive part of the attack starts during execution. After files are written to disk, the malware deletes all possible shadow copies on the system. It then takes steps to complicate file recovery and ensure the Windows recovery console doesn't try to repair anything on the host.
"Wiping all available methods of recovery shows this attacker had no intention of leaving the machine usable," Talos researchers report. The purpose of the malware is to perform destruction of the host, leave the system offline, and wipe remote data. It also disables all services on the system.
Earlier Attacks on the Olympics
This isn't the first instance of an attack targeting the 2018 Winter Games.
McAfee Advanced Threat Research previously detected a fileless attack targeting organizations involved with the Pyeongchang Olympics. The threat used a PowerShell implant to connect target machines with the attacker's server and transfer system-level data. At the time, researchers were unsure what happened after the attacker gained access.
Now they say this attack had a second-stage payload in the form of Gold Dragon, a Korean-language implant detected in December 2017. Gold Dragon has stronger persistence than the original PowerShell payload and expanded capabilities for profiling target systems. It lets an attacker gather information on system processes, files, registry content, and data.
In early February, prior to the Opening Ceremonies, researchers updated their findings to report another variant of the fileless implant in a new malicious document. This document had the same metadata properties and same information as the campaign discovered in January.
"It's an indication the attacker has resumed deploying a new version of this implant," says Ryan Sherstobitoff, senior analyst of major campaigns at McAfee. "Gold Dragon is a more persistent type of implant that gave them far-reaching capabilities on the network."
Targeted attacks have different stages of payloads, he explains. The first gives them access; the second installs something more persistent. In this case, the earlier fileless attack could have given a threat actor the entry to drop Gold Dragon on the target network.
Sherstobitoff emphasizes there is no indication the attacker behind the earlier campaign is connected to the Opening Ceremonies-timed attack. However, Gold Dragon could have given them the level of access to collect the information they needed to conduct it.
CrowdStrike identified samples of a previously unknown malware family seemingly designed for data destruction. Earliest samples were detected on Feb. 9, the day of the Opening Ceremonies. All samples have sets of hard-coded credentials belonging to Olympics-related targets that let threat actors spread in a target network. Several attackers had access to organizations related to the targets through malicious backdoors, CrowdStrike reports, but it can't confirm whether anyone used this access to deliver malware.
Too Soon to Determine Whodunnit
"I don't want to say it's trivial, but it's not the most complicated piece of malware," says Warren Mercer, Cisco Talos technical lead for engineering, of the attack his team studied. "There's no crazy effort to try and obfuscate their code; there are no super-advanced techniques."
However, he continues, it's likely a sophisticated attacker is at play given the previous access to Olympics systems and ability to hardcode lifted credentials. The question is, which one?
"It's a tricky question when it comes to who could be behind a threat like this," adds Williams. This could be a new threat actor or group, he says, adding that many well-funded campaigns have pockets of developers. Attribution is further complicated by the publicity of widespread attacks like NotPetya, which have given rise to "copycats" who may be responsible, he notes.
Meanwhile, the US-CERT has issued a statement on cybersecurity at the Olympics and offered guidance for attendees to protect themselves against threats including data theft and third-party monitoring, as attackers may take advantage of the large audience to spread messages.
Engin Kirda, cofounder and chief architect at Lastline, points out that denial-of-service campaigns are among the easiest attacks to mount against large events like the Olympics. Beyond event attendees and organizers, fans are often targeted with phishing emails, domain theft, ransomware, and fake social media posts. These days, employees can expect to see malicious emails related to the Games.
"If an employee falls victim to one of these attacks on a work machine, it may put their business at risk as well," Kirda notes. "IT teams should caution employees about clicking on links or attachments from Olympics-related emails."
- Emailed Cyberattack Targets 2018 Pyeongchang Olympics
- 8 Nation-State Hacking Groups to Watch in 2018
- Back to Basics: AI Isn't the Answer to What Ails Us in Cyber
- New POS Malware Steals Data via DNS Traffic
Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register. | <urn:uuid:6e80479b-02ce-4dd8-b6cf-17f549455792> | CC-MAIN-2022-40 | https://www.darkreading.com/attacks-breaches/cyberattack-aimed-to-disrupt-opening-of-winter-olympics | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00373.warc.gz | en | 0.949284 | 1,464 | 2.546875 | 3 |
Berkeley Labs Applies Quantum Computing to a Particle Process
(NewsWise) A team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) used a quantum computer to successfully simulate an aspect of particle collisions that is typically neglected in high-energy physics experiments, such as those that occur at CERN’s Large Hadron Collider.
The quantum algorithm they developed accounts for the complexity of parton showers, which are complicated bursts of particles produced in the collisions that involve particle production and decay processes.
“We’ve essentially shown that you can put a parton shower on a quantum computer with efficient resources,” said Christian Bauer, who is Theory Group leader and serves as principal investigator for quantum computing efforts in Berkeley Lab’s Physics Division, “and we’ve shown there are certain quantum effects that are difficult to describe on a classical computer that you could describe on a quantum computer.” Bauer led the recent study.
Their approach meshes quantum and classical computing: It uses the quantum solution only for the part of the particle collisions that cannot be addressed with classical computing, and uses classical computing to address all of the other aspects of the particle collisions. | <urn:uuid:4f6e4c7f-f0ed-469e-9e59-1082345282de> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/berkeley-labs-applies-quantum-computing-to-a-particle-process/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00373.warc.gz | en | 0.915405 | 243 | 3 | 3 |
The Internet of Things (IoT) today connects a multitude of devices through the Internet, integrating greater compute capability and using data analytics to extract meaningful information. With over 50 billion connected devices and 212 billion sensors available today, it has tapped into the opportunity to deploy new intelligent devices. Owing to its massive adoption, the costs of sensors, bandwidth, and computer processing are going down. These trends have unleashed the IoT's potential, impacting the way we currently work and live. Such next-generation intelligent systems collect and analyze large volumes of raw data, enabling manufacturers to act on the results to reach new levels of factory automation. Predicting new events from big data provides a concrete foundation for planning new projects, but not every insight among millions of events is workable or interesting. Revealing the meaningful ones is a challenge for data scientists, who must write suitable algorithms.
Additionally, considering the connected car as a prototype for predictive analytics on industrial big data, analyzing the error messages these cars generate provides manufacturers with useful insights that assist in optimizing the service and production of vehicles. This leaves manufacturers, dealers, exporters, selling agents, and service providers in the automobile industry with heaps of data every day. Integrating and analyzing this big data assists in streamlining productivity, enhancing product quality, and understanding market demand, customers’ interest in a specific model, and vehicle cost. In an Industry 4.0 context, the collection and comprehensive evaluation of data from many different sources—production equipment and systems as well as enterprise and customer-management systems—will become standard to support real-time decision making.
Frozen neon invention jolts quantum computer race
(SpectrumIEEE) New findings now suggest that electrons trapped on frozen solid neon could prove a simple yet powerful kind of qubit for use in future quantum computers.
In the new study, to create a qubit protected from environmental disruptions, the scientists experimented with neon, a noble gas like helium that virtually never reacts with other elements, potentially making it an ideal host for a qubit. Neon freezes into a solid when cooled to below roughly minus 248.6 degrees C and brought to pressures of more than 0.42 atmospheres.
“This is a completely new qubit platform. It adds itself to the existing qubit family and has big potential to be improved and to compete with currently well-known qubits,” said Dafei Jin of Argonne National Laboratory.
Electrodes in the microchip can keep electrons that get trapped on the solid neon in place for more than two months. A superconducting microwave resonator on the chip, much like a microscopic version of a microwave oven, then emits microwaves to help control and read the qubit.
The scientists argue that useful qubits require three key qualities:
–They can show long coherence—that is, stay in superposition for long stretches of time—ideally more than a second.
–They can quickly change from one state to another to help perform operations rapidly, ideally roughly a billionth of a second.
–They can scale up to link with many other qubits via a quantum mechanical phenomenon known as entanglement so they can work in parallel together.
The group’s experiments reveal that even without optimization, the new qubit can already stay in superposition for 220 nanoseconds and change state in only a few nanoseconds, outperforming the charge-based qubits that scientists have worked on for 20 years.
The researchers suggest that by developing qubits based on an electron’s spin instead of its charge, they could develop qubits with coherence times exceeding one second. They add the relative simplicity of the device may lend itself to easy manufacture at low cost.
The new qubit resembles previous work creating qubits from electrons on liquid helium. However, the researchers note frozen neon is far more rigid than liquid helium, which suppresses surface vibrations that can disrupt the qubits. | <urn:uuid:e25580c6-18d8-4ad4-816d-36aeb8097882> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/frozen-neon-invention-jolts-quantum-computer-race/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00573.warc.gz | en | 0.921947 | 484 | 3.375 | 3 |
High blood pressure, also known as hypertension, is a very common health problem that can appear even at an early age. Blood pressure is the pressure of the blood flowing in the arteries that carry it from the heart to the rest of the body.
How is high blood pressure measured?
Arterial pressure is measured with two values: systolic pressure, the pressure in the arteries while the ventricles contract, and diastolic pressure, the pressure between beats. In a reading of 120/80, the upper value is the systolic pressure and the lower value is the diastolic pressure.
Readings above 119 systolic and 79 diastolic are known as pre-hypertension, and you need to take precautions to keep them from rising further. The American Heart Association has since lowered the threshold for hypertension in its guidelines from 140 over 90 to 130 over 80. If high blood pressure is accompanied by chest pain, shortness of breath, headache, dizziness, or back or abdominal pain, seek medical care immediately.
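The thresholds described here can be expressed as a simple classification. The sketch below is illustrative only (not medical advice) and assumes the 2017 American Heart Association cut-offs of 130/80 for hypertension and 120 systolic for the elevated (pre-hypertension) range:

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify a blood-pressure reading in mmHg using the thresholds
    discussed above (illustrative sketch, not medical advice)."""
    if systolic >= 130 or diastolic >= 80:
        return "hypertension"          # 2017 AHA guideline threshold
    if systolic >= 120:
        return "elevated (pre-hypertension)"
    return "normal"

# A few example readings and their categories:
assert classify_bp(118, 76) == "normal"
assert classify_bp(124, 78) == "elevated (pre-hypertension)"
assert classify_bp(142, 92) == "hypertension"
```

Note that a single reading is not diagnostic; clinicians base a diagnosis on repeated measurements over time.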
Types of high blood pressures
Primary: High blood pressure that develops without any underlying disease is considered primary hypertension. It is linked to the activity of the hormones that regulate blood volume and pressure, and it is also influenced by environmental factors, such as stress and lack of exercise.
Secondary: Secondary high blood pressure develops as an effect of other diseases in the body. Several diseases can cause hypertension.
Causes of high blood pressure
The causes of high blood pressure involve various factors, and blood pressure itself differs throughout the day. The main cause is age: the features of high blood pressure are observed more frequently as people grow older.
A diet of high-salt, high-fat processed foods that is low in potassium is one major cause of high blood pressure. High salt intake can also lead to kidney failure, and treating high blood pressure in obese people is complex.
High blood pressure can also be inherited genetically from parents. In addition, high cholesterol levels narrow the artery walls, blocking blood from flowing freely through the arteries.
Further, people who are inactive or obese, or who consume alcohol and smoke, are affected by high blood pressure earlier in life.
Diseases developing high blood pressure
Sleep apnea, a condition that reduces oxygen levels in the body, increases blood pressure and puts stress on cardiovascular activity.
Hyperthyroidism causes an irregular heart rhythm and blood-pumping pattern, which develops into various cardiovascular problems that may lead to high blood pressure.
Diabetes narrows the arteries, making it difficult for blood to flow, which may give rise to adverse effects such as high blood pressure.
There are also other diseases of the kidneys and other organs that may lead to high blood pressure.
Effects of High blood pressure
Chronic Kidney Disease: High blood pressure can cause kidney failure when the blood vessels in the kidneys become narrow.
Eye Damage: Under stress and pressure, blood vessels in the eyes can burst or bleed. This may lead to vision changes or blindness.
Heart Attack: The most common warning symptoms of a heart attack are chest pain or discomfort, upper-body discomfort, and shortness of breath.
An individual with hypertension may not notice any symptoms, yet it can damage the cardiovascular system and internal organs, such as the kidneys.
Moreover, long-term hypertension can cause atherosclerosis, which results in the narrowing of blood vessels. This makes the heart work harder to pump blood to the body.
Reducing salt intake to under 5 g per day decreases the risk of hypertension. This can benefit people both with and without hypertension.
It is especially important to eat fruits and vegetables; whole-grain, high-fibre foods; beans, pulses, and nuts; omega-3-rich fish twice a week; non-tropical vegetable oils, for example olive oil; skinless poultry and fish; and low-fat dairy products.
Hypertension is related to excess body weight, and reducing weight is typically followed by a fall in blood pressure.
Meanwhile, a healthy, balanced diet with a calorie intake that matches the individual’s size, sex, and activity level will also help.
Patients with hypertension should perform 30 minutes of moderate-intensity dynamic aerobic exercise, such as walking, jogging, cycling or swimming, on 5 to 7 days of the week.
Vehicle authentication is a process enabling a person to gain access to a connected auto or smart car. This process assumes that the device is an Internet-connected auto and not a legacy analog vehicle.
With the emergence of the Internet of Things (IoT), vehicle authentication may also refer to vehicles authenticating themselves (M2M) to other devices such as roadside kiosks, toll authority resources, gasoline pumps, and restaurant drive-thru collections.
When authenticating to the vehicle, like any online resource, a driver or passenger would present one or a combination of authentication factors such as knowledge, possession, or inherence. Access of this kind can be facilitated via a password-based or passwordless architecture; however, IoT adoption is heavily reliant on seamless experiences. This points to password-based systems being unfeasible for the use case.
Other ways solutions can be architected include transforming the device into a standalone validation server to more directly close the loop between car and driver, instead of routing through the automaker/automotive service first, and by selecting from among the communication protocols over which the authentication occurs (e.g. data, BLE, NFC).
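A common way to realize the possession factor described above is a challenge-response exchange over a pre-provisioned shared key. The sketch below is a hypothetical illustration, not any automaker's actual protocol; the function names and the key-provisioning step are assumptions:

```python
import hashlib
import hmac
import secrets

# Hypothetical symmetric key provisioned to both the vehicle and the
# driver's device (fob or phone) during enrollment.
SHARED_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Vehicle sends a fresh random nonce so old responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Possession factor: only a device holding the key can compute this MAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Vehicle recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = issue_challenge()
assert verify(nonce, respond(nonce, SHARED_KEY), SHARED_KEY)           # right key
assert not verify(nonce, respond(nonce, secrets.token_bytes(32)), SHARED_KEY)  # wrong key
```

A real deployment would carry this exchange over BLE, NFC, or a data channel and bind the response to the vehicle's identity, but the nonce-plus-MAC core is the same idea.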
"The world's top automakers are implementing smart car access systems such as True Keyless Authentication. Vehicle authentication that provides drivers a seamless user experience, is realistic from a hardware and connectivity standpoint, and is secure is what is going to work in the long run." | <urn:uuid:44365609-a646-474e-8df7-51ac0a6a75ff> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/vehicle-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00573.warc.gz | en | 0.924265 | 295 | 2.921875 | 3 |
As the popularity of mobile apps grows exponentially, so does the need for companies to ensure customer data stays safe and the integrity of their systems and intellectual property remains protected. More than ever before, data security is paramount.
We delve into ways your business can navigate the treacherous waters of app development and discuss how to keep customer data safe. We also cover the various security measures your business can implement to ensure industry compliance and build customer trust.
Are businesses obligated to keep customer data safe?
In Australia, data sovereignty laws require personal data to comply with the Australian Privacy Principles (APPs) and to be kept in Australian data centres.
The Australian Government has provided guidelines on how responsible business owners should handle personal information under the Privacy Act 1988, which include:
- authorised access
Information covered under the Act includes personal information such as a customer’s name, signature, contact details, medical records, bank details, photos and videos, IP address and even their opinions.
Every business is responsible for protecting customer data and is obligated, under the Notifiable Data Breaches (NDB) scheme, to notify affected individuals and the OAIC about any security breach.
What is meant by data security?
Data security is the process of ensuring sensitive data remains safe and inaccessible to unauthorised persons. There are several types of data security, such as physical security, network security, internet security, endpoint security and encryption, which are in place today to protect personal information and prevent devices and individuals from being exploited by malicious attacks.
What are the types of data security?
There are several security measures companies can take to protect client information:
- Physical security: Physical security refers to a more traditional but essential process of protecting corporations from data loss or corruption from individuals intent on inflicting severe loss or damage.
- Encryption: Encryption is the process of disguising or “scrambling” data to make it unreadable by people not authorised to access it.
- Password Protection: The first line of defence in safeguarding sensitive company or customer data.
- Tokenisation: Tokenisation refers to the process of replacing sensitive data with a unique numerical code. This process can also be referred to as “data masking” and protects data by destroying the original information and using a code instead.
- Multi-factor authentication: Multi-factor authentication is a process where two or more pieces of information are required to authenticate to gain access to sensitive data.
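As a concrete illustration of tokenisation, the sketch below swaps a card number for a random token and keeps the real value in a protected mapping. The `TokenVault` class and its in-memory storage are hypothetical simplifications; production systems rely on hardened, audited token vaults:

```python
import secrets

class TokenVault:
    """Minimal tokenisation sketch: replace a sensitive value with a random
    token and keep the mapping in a protected store (here, a plain dict)."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)   # random code; carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]      # only authorised code should reach this

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert token != "4111 1111 1111 1111"                 # shared form reveals nothing
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

Systems downstream of the vault can store and pass around the token freely, because a breach of those systems exposes only meaningless codes.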
Why is data security important?
The legal implications of a data breach are extensive, with far-reaching consequences including loss of business, fines, a damaged reputation, and even penalties from retailers who sell products associated with your company.
The risks don’t stop there, even from within your organisation. The abundance of mobile storage devices such as laptops, USB, flash drives and smartphones add to the complexity of keeping data out of the hands of would-be thieves or hackers.
With these types of consequences in mind, why would any company delay securing its data rather than making it a high priority?
What is the primary threat to information security?
The largest threat to information security that corporations need to be aware of is malware located on mobile devices. These are also referred to as “malicious apps” and are a popular way for hackers to gain access to company data.
Think of your smartphone as a mini-computer: every app you download is an addition that can open access to sensitive personal and corporate data. Hackers often use apps as a front for their hacking operations to gain access to valuable user information.
What is the difference between data privacy and data security?
Data privacy and data security are two terms often used interchangeably; however, the two are quite different.
The term data security refers to the various security measures that ensure a company’s data remains safe and not accessible by unauthorised individuals. Data privacy refers to the rights of the individuals who entrust their personal information/data to a specific company or organisation.
Combating security threats to your organisation.
Companies are required by law to keep customer data safe and secure. Many businesses do not know how vulnerable they are until a breach occurs.
The biggest security threat to your company’s data comes from where it is stored. It might be possible for an employee to download a virus onto an unsecured server or external hard drive; the virus can then make copies of itself and spread to other computers and devices.
Common security threats to organisations include;
- Mobile apps
- Denial-of-Service (DoS) Attacks.
- Viruses and worms
- Trojan horses
- SQL Injection
- Password attacks
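Of these, SQL injection is one of the most preventable. As a brief illustration (using Python’s built-in sqlite3; the table and input are hypothetical), a parameterized query keeps attacker-controlled input from rewriting the query logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe alternative: string concatenation would let the input
# rewrite the WHERE clause and match every row.
# query = f"SELECT role FROM users WHERE name = '{user_input}'"

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(rows)  # [] -> the injection string matches no real user
```

The same placeholder pattern exists in virtually every database driver, so there is rarely a reason to build queries by concatenation.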
For an extensive list of the best cybersecurity tools to help detect and close security holes and block network attacks, we recommend reviewing the article from Software Testing Help.
What is website vulnerability?
Any weakness in the security system of a website classifies as a ‘vulnerability’. The first step in preventing hackers from exploiting website vulnerabilities is to perform a website and server audit, and to repeat those audits periodically. Even if you cannot find any vulnerabilities, you will at least have confirmed that none are readily detectable.
PCI security compliance and corporate obligations.
PCI security compliance standards resulted from a combined effort by the major credit card organisations and were introduced in 2004. The standards dictate corporate obligations and operational requirements designed to protect customer credit card and account data.
PCI guidelines include:
- installation and maintenance of firewalls
- protection of stored cardholder information
- encryption of cardholder information transmitted across public networks
- use of anti-virus software
- tracking and monitoring of all network access
For those looking for a more detailed outline of the PCI DSS requirements, you can check out the PCI Security Standards Council website.
What type of information do these hackers use?
Hackers often target data that pertains to your business and technology assets to get access to sensitive information, often for criminal purposes.
According to the PCI Security Standards Council, “a data breach happens when personal information is accessed or disclosed without authorisation or is lost.”
Organisations are obligated under the Privacy Act 1988 to notify affected individuals immediately upon detecting a breach whenever personal information is likely to have been compromised and the breach is likely to result in serious harm.
App-level security issues every developer should consider.
Security breaches are increasing in frequency and have become a major concern to governments globally and the private sector. Some of the vulnerabilities often overlooked include;
- not scanning their code for vulnerabilities
- insufficient budget dedicated to mobile security
- lack of testing
- pressure to rush to release
- lack of mobile expertise in app development
We spoke to Rocket Lab for their thoughts on app development and security. Julien’s advice for those considering building their app in-house was: “be sure you have the expertise to not only develop your application but also thoroughly test its usability and security.”
Testing the integrity of app security before launch.
Testing is crucial to the success of your app, as it is a way to catch errors in the design and implementation and ensure your app is ready for public release.
Some of the essential components to testing your app are;
- create personas that reflect your audience’s problems and their needs and consider how closely your product addresses those needs
- choose the right beta testers: qualified testers help you detect bugs and provide constructive feedback on your product before its official launch
- consider all feedback
- be prepared to make adjustments if necessary.
As you can see, data security is not something to approach lightly; the prevalence of hacking and phishing has had enormous ramifications for corporations and individuals over the last two decades.
As audiences become ever more reliant on mobile technology and apps to deliver the services they need, the window of opportunity widens for unscrupulous individuals. How well your organisation takes up the challenge to secure its data will determine whether your company becomes a victim of cybercrime or a trusted source in the marketplace.
What is File Encryption?
File and database encryption solutions serve as a final line of defense for sensitive volumes by obscuring their contents through encryption or tokenization.
What are the key challenges facing businesses today?
The sheer volume of data that enterprises create, manipulate, and store is growing, and drives a greater need for data governance.
What are the new privacy regulations?
Fueled by increasing public demand for data protection initiatives, multiple new privacy regulations have recently been enacted, including Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
What is AI and how does it affect data security?
AI amplifies the ability of a data security system because it can process large amounts of data.
What are the challenges facing data security?
These include understanding where data resides, keeping track of who has access to it, and blocking high-risk activities and potentially dangerous file movements.
What are the key data protection solutions?
Data discovery and classification tools. Sensitive information can reside in structured and unstructured data repositories, including databases, data warehouses, big data platforms, and cloud environments.
What are the key areas of data discovery and classification?
Data discovery and classification solutions automate the process of identifying sensitive information, as well as assessing and remediating vulnerabilities.
What are the key security concerns?
Physical security of servers and user devices. Regardless of whether your data is stored on-premises, in a corporate data centre, or in the public cloud, you need to ensure that facilities are secured against intruders and have adequate fire suppression measures and climate controls in place.
What are the key security measures you can take to protect your data?
Backups. Maintaining usable, thoroughly tested backup copies of all critical data is a core component of any robust data security strategy. | <urn:uuid:cedc4944-5e78-45eb-8c18-07662f41b492> | CC-MAIN-2022-40 | https://gbhackers.com/data-security-app-development-technology-strategy-obligations/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00573.warc.gz | en | 0.929214 | 2,056 | 2.53125 | 3 |
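As a minimal illustration of “thoroughly tested” backups, the Python sketch below (file names are hypothetical) copies a file and then verifies the copy against a SHA-256 checksum, which is one simple way to confirm a backup is actually usable:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "records.db")       # hypothetical data file
backup = os.path.join(workdir, "records.db.bak")

with open(original, "wb") as f:
    f.write(b"critical records")

shutil.copyfile(original, backup)

# A backup is only trustworthy once verified: compare checksums after copying.
print(sha256_of(original) == sha256_of(backup))  # True
```

Running such a verification on a schedule, rather than only at copy time, also catches silent corruption of the backup media.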
Basically, you need to reverse engineer an app or a feature when you do not have source code, but still need to know how it works. If it sounds a bit suspicious to you, here are some all-legal business situations when reversing comes at hand:
- Researching and fixing complicated software issues
- Improvement of the interaction between a software system and the platform
- Advanced software system compatibility with third-party solutions
- Research of various types of malware.
Thus, being a rather complicated practice, iOS reverse engineering is very interesting and useful for a broad range of tasks. A large set of tools is available to help with this process.
First, a couple of words about the internal architecture, as it dictates tool selection and the general reversing approach.
iOS mobile devices are built using armv7, armv7s and arm64 CPUs. The corresponding reversing work requires the researcher to be familiar with the instruction sets, calling conventions, and some ARM-specific details (such as Thumb mode or the opcode format).
As for the cache: on iOS, system frameworks and dylibs are merged into a single file called the shared cache, which can be found at /System/Library/Caches/com.apple.dyld/.
iOS Reversing Tools
Apple provides several standard command-line tools for iOS app research out-of-the-box:
- lldb. A feature-rich default debugger in Xcode. It can be a useful reverse engineering tool for C++, Objective-C and C code, supporting debugging of such code on desktop and iOS devices and simulators. It is based on the larger LLVM project, re-using libraries such as its disassembler. See details: https://lldb.llvm.org/;
- otool. Complete console solution for exploring and in-place editing Intel and ARM binaries.
- nm. Console tool to browse names and symbols in mach-o executables. Get details here: https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/nm.1.html
- codesign. A tool to get information about, create, and manipulate code signatures. Get details here: https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/codesign.1.html
Besides standard tools for reverse engineering provided by the vendor, there are several very useful third-party utilities:
- IDA (Interactive DisAssembler). It is probably the most wide-known and popular disassembler. Being almost a reversing standard for complex tasks, this system should be mentioned among the best iOS reverse engineering software products. Get details here: https://www.hex-rays.com/products/ida;
- Hopper. Another interactive reverse engineering tool, native MacOS disassembler. It is a shareware with limited demo version. Get details here http://www.hopperapp.com/;
- MachOView. An alternative to otool and nm but with GUI, which enables mach-o file structure visualization. It is a freeware tool. Get details here https://sourceforge.net/projects/machoview/;
- class-dump. This tool allows dumping classes declarations from executable headers. Get it here https://github.com/nygard/class-dump;
- dsc_extractor. This tool can be used to extract libs and frameworks from the dyld_shared_cache. When extracting, it saves the locations and original names of all objects being extracted. It is provided by Apple as open source software.
IDA provides an ultimate feature set for effective reverse engineering
As stated on the official website: “IDA is a Windows, Linux or Mac OS X hosted multi-processor disassembler and debugger that offers so many features it is hard to describe them all.”
IDA Pro includes such features, as:
- same interface for dozens of different processors
- multitarget debugger (supports different types of OSes)
- large and flexible plugin architecture
- great interactivity
- Intel & ARM x32 and x64 pseudocode generator
- Finally, IDA 6.9 (latest version at this time) supports pseudocode generation for ARM 64 binaries
In general, IDA has so many great features that covering them all would require a separate, large article.
Hopper is a macOS/iOS-oriented disassembler. It is designed to run on macOS and Linux.
Using Hopper you can also perform reversing of any macOS / iOS binaries.
Some of Hopper’s benefits:
- oriented to work with Objective-C: specialized in retrieving Obj-C-specific information from the binary
- uses lldb or gdb as debugger
- most functions can be accessed from the python scripts
- displaying assembly, pseudocode and CFG (control flow graph) at the same time, which makes reversing more effective
- support of Swift names
- customizations: create own types, semantic coloration, user comments
All of the above makes reversing iOS applications with Hopper more effective and comfortable.
Using Tools to Reverse Engineer
The simplest reverse engineering task is to research ipa or app executable. The executable itself can be easily obtained: no problem at all for an app, and for an ipa, which is a zip archive, it can be found in the Payload/*.app subdirectory. Then any reversing tool from the list above can be used to work on this executable.
The more complicated task is to reverse engineer a part of iOS. It usually requires a jailbroken device, but even without it, you can try to get the file using the Document Interaction functionality.
If you cannot get an executable from the device, you can try the iOS simulator. Note that the simulator is based on the x86 architecture, so its code differs from the code on a real iOS device. Nevertheless, daemon and framework interfaces correspond to those on iOS devices.
Reversing kernel extensions (.kext)
Sometimes it’s necessary to perform reversing on kernel extensions (the macOS analogue of drivers on Windows). macOS kernel extensions are simple folders with the .kext extension and a bundle-like structure. The target file for reversing is the file with the same name as the .kext, located in the /Contents/MacOS subfolder.
Reversing a kernel extension is much the same as reversing a usual application, but be warned that the majority of kexts are written partially in C++.
The process of reverse engineering on a closed platform like iOS can require significant time and effort as well as a set of specific skills. Nevertheless, a set of iOS reverse engineering tools and approaches has been developed to facilitate this task.
This year we’ve seen climate emergencies declared locally by 265 different local authorities across the UK.
Copyright by data-economy.com
And the alarm has been raised at the supranational level too, with the EU Parliament declaring a global climate emergency.
At the same time, blazing bush fires in Australia, toxic levels of air pollution in India and severe flooding here in the UK have dominated news headlines – demonstrating the dangers climate change is already creating.
With concern rising about the impact of climate change, we need to find ways of tackling its causes and mitigating its effects. Here, data has a crucial role to play.
Whether it’s information on how the world’s climate has changed over time or key sources of carbon emissions, the data we collect holds crucial insight.
This insight is already shaping our responses to climate change and will become even more important in 2020.
Driving sustainability through data
Earlier this year, Google launched its Environmental Insight Explorer (EIE), a free online tool that uses mapping data to estimate the carbon emissions of buildings and transport in cities across the globe.
Its purpose is to allow urban planners to recognise key sources of pollution and reduce emissions by planning cities more sustainably.
Tackling carbon emissions generated by the built environment is crucial to combating climate change – so this is a significant development.
Buildings and construction are responsible for 39 per cent of carbon emissions worldwide, with 28 per cent of those emissions resulting from the energy used to heat, cool and light buildings (World Green Building Council, Bringing Embodied Carbon Upfront, 2019).
And the EIE initiative puts data centre stage in trying to address the sustainability of our towns and cities.
This is not the first time Google has used data to highlight carbon emissions that come from the built environment.
In fact, it has already used data to create models to improve the sustainability of its data storage centres themselves – significantly reducing its carbon footprint.
Google used its data to challenge the perceived wisdom that data storage facilities needed to be cooled to 18 degrees to operate properly – with higher temperatures leading to poor performance.
By using real-time data analysis to measure the performance of its data centres Google found this was not true.
Its analysis found that its data centres could operate in temperatures up to 27 degrees without any reduction in performance. The energy required to cool the centres further was completely redundant.
At St Vincent’s Hospital in Australia, installing a data-led predictive model within the building’s HVAC system has led to a 20 per cent reduction in energy consumption.
Here, software is used to monitor data points including the weather conditions, building occupancy, energy prices and tariffs in real time. […] | <urn:uuid:46d6d141-27cc-486b-b00c-afa38ddcf388> | CC-MAIN-2022-40 | https://swisscognitive.ch/2019/12/27/data-will-lead-the-fight-against-climate-change/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00573.warc.gz | en | 0.935679 | 555 | 3.515625 | 4 |
What is the Difference Between Wi-Fi 5 and Wi-Fi 6?
Wireless Internet is changing. New devices are demanding even more from your Internet network. Wi-Fi 6 is here to answer those demands.
Wi-Fi standards are the services and protocols that determine your network connection. They are set by the Institute of Electrical and Electronics Engineers (IEEE), which designates the family of standards with the number 802.11; individual specifications are then further identified with letters. For example, 802.11ac (also known as Wi-Fi 5) has been the predominant Wi-Fi standard since 2014; however, a new, upgraded standard called 802.11ax, or Wi-Fi 6, is now available.
Wi-Fi standards determine the speed and frequency of your network. Most current home wireless routers are 802.11ac compliant, meaning they’re operating on Wi-Fi 5 and support 2.4GHz and 5GHz frequencies. Wi-Fi 5 was underperforming in crowded areas with multiple devices such as stadiums and airports. WiFi 6 incorporates many new technologies to help with this congestion when dozens of Wi-Fi devices are on a single network. It lets routers communicate with more devices at once, and allows routers to send data to multiple devices in the same broadcast signal. Wi-Fi 6 devices can then communicate simultaneously with your Wi-Fi 6 router, thereby creating a stronger connection that supports more and more devices that are demanding data.
Four key areas where Wi-Fi 6 excels:
Wi-Fi 6 performs 4 times better in crowded areas that have numerous devices (such as stadiums, hotels, and airports).
Wi-Fi 6 delivers up to 40% higher peak data rates for faster throughput.
Wi-Fi 6 increases network efficiency by four times.
Target Wake Time (TWT) extends device battery life.
Wi-Fi 6 Timeline
New routers labeled “Wi-Fi 6 Certified” or “Wi-Fi 6 Compatible” were announced in late 2018 for home networks and are expected to emerge more in 2019 as the standard becomes more commonplace.
The Samsung Galaxy S10 is the first smartphone that supports Wi-Fi 6. As new generations of devices come out, they will adopt Wi-Fi 6 standard technology and hardware. In order to see the full benefits of Wi-Fi 6, you will need both a compatible router and devices. In other words, if you want Wi-Fi 6 performance on your smartphone, you will need both a router and a smartphone that supports Wi-Fi 6.
Should you run out and buy all new hardware that is Wi-Fi 6 certified? Not yet. As you replace your devices over the next two to four years, you will bring home new ones that include this latest version of Wi-Fi certification. There is one thing that you will have to make a point of going out and purchasing: a new router. If your router doesn’t support Wi-Fi 6, you will not see any benefits of the improved technology, regardless of how many Wi-Fi 6 devices you have in your home.
If you are experiencing a slow Internet connection, consider enhancing your network with Actiontec’s Optim Managed Wi-Fi, only available through your Service Provider. Optim’s easy-to-navigate dashboard allows you to monitor and manage your entire home network and all of its connected devices with a click of a button. Optim’s advanced intelligence provides your Service Provider with the tools and data needed to remotely troubleshoot complicated WiFi problems and help you resolve them without the pain of waiting for a technician to come to your home.
Does your Service Provider have Optim? Learn More about the Advanced Tools available for your Internet Service Provider and ask them about Optim Managed WiFi.
For everything you need to know about Wi-Fi networking and Wi-Fi standards, check out Actiontec’s Complete Guide to WiFi Networking. | <urn:uuid:a9951880-8b50-4055-934b-f8d51f16bc88> | CC-MAIN-2022-40 | https://www.actiontec.com/what-is-the-difference-between-wi-fi-5-and-wi-fi-6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00573.warc.gz | en | 0.935872 | 827 | 2.71875 | 3 |
It is no longer a secret that mobile technology has taken over virtually every sector of the world, with health care being no exception. Novel technology and engineering have pushed health care in many parts of the globe toward huge positive developments. It is thus no surprise that mobile computing is one of the most widely adopted technologies in the realm of health care, the basic reason being that mobile devices are now as much a part of health care employees’ work as they are common in other industries. The adoption of mobile devices in health care has set a trend known as Bring Your Own Device (BYOD). You may have heard of the term, but what exactly is BYOD?
BYOD Definition and Benefits
In simple terms, BYOD is a concept whereby employees and professionals are allowed to use their own mobile devices for work-related tasks. It is being encouraged in health care facilities given the number of benefits associated with the trend, including the following:
1. Utilization of Healthcare Infrastructure
The main advantage of BYOD is that professionals no longer need to carry multiple devices to accomplish tasks. As a result, the infrastructure deployed by health care facilities is better utilized. A good example of such infrastructure is a wireless communication network in the form of a WLAN. Through such a network, cloud computing is feasible and very efficient in supporting user productivity. Most health care companies spend a great deal of time and money setting up such infrastructure, so why not make good use of it?
2. Employee Satisfaction and Attraction of Talent
The key to business development is having a good Human Resource management base to build a company on. Human Resources entails proper employee management coupled with spot-on talent recruitment techniques. Research from Xigo’s 2012 “Mobility Temperature Check” study found that the main reason behind the deployment of BYOD in organizations is to keep employees happy. Think of it this way: employees use devices like smartphones in virtually every activity, so wouldn’t it be wise to let them use the same smartphone to access various functions at work? This generation is full of passion for mobile devices. It is even safe to say that our lives are driven by this kind of technology. In terms of attracting talent, BYOD adoption in health care can be used to lure new employees owing to the flexibility it affords. The point here is that BYOD can be a selling point in a bid to beat off competitors when hiring raw talent.
3. Employee Productivity
BYOD is known to provide the flexibility that brings out the best in employees. Employees value the chance to work from any place without too many impediments. In fact, research has shown that a health care organization’s productivity peaks when employees are allowed to access health care functions from home. An employee can log into work-related applications at home and put in extra hours, boosting organizational productivity in a way the traditional way of working cannot.
4. Reduced Costs
BYOD is a great trend for reducing costs for health care organizations around the world. The key reason is that an organization can passively shift a portion of its hardware acquisition costs to employees. The tricky part with technological hardware and software is the rate at which it turns obsolete; companies are otherwise forced to incur costs frequently to replace it. With BYOD, employees use the latest technology themselves, which eases the budget pressure of buying new models.
Of course, as with any technology, there are some cons. With BYOD, a number of challenges are holding the trend back in health care, but the major one has got to be data security breaches. Statistics even show that health-care-related data breaches have grown faster than any other category of security breach! In 2013, for example, health care accounted for 44% of all breaches, according to the Identity Theft Resource Center. Perhaps what has been driving these numbers is the invaluable nature of Personal Health Information in identity theft.
Another fact is that 1 out of 10 American citizens has been affected in one way or another by a health care data breach. The interesting fact is that employee negligence was found to be the loophole behind this shocking statistic. According to the HIMSS Security Survey, employee access to and handling of data is a major concern in the fight against this vice. Now, this is where the data security risk of BYOD makes health care organizations cautious. The question that begs an answer, then, is: how can BYOD data be secured?
5 strategies to secure healthcare data in BYOD
Regardless of a company’s stand on BYOD, the trend has in one way or another found its way into many institutions. This implies that BYOD data security is colossal both for companies with policies against the trend and for employees already implementing it. A few strategies can be used effectively to curb data breaches in BYOD:
1. Risk Assessment
The first step before implementing BYOD in health care is to assess all the risks associated with the system. For example, in the health care business, how many staff members have access to patient information? The essence of this is to identify any loopholes that could be exploited in the future. Carrying out a data inventory and threat analysis, plus an analysis of the current BYOD status, is one way to ensure that Personal Health Information will be kept safe on implementation.
2. Mobile Device Management
BYOD security in communication is best achieved by protecting data as it flows from one end to another. Mobile Device Management (MDM) is one powerful way of implementing this kind of security in mobile communications. This approach uses software platforms to manage configurations and software updates while keeping an eye on the security of information. Choosing an MDM system that can manage phones from different platforms is a cost-effective option for health care institutions.
The other key part of MDM is encryption of data in networks and on devices. This can be done through the use of multi-factor authentication and encryption algorithms. Containerization is another scheme that works effectively: it allows IT to not only secure the data on a device, but also control which apps may access data and how data sharing is managed.
3. Clear Rules and Regulations
Creating clearly defined rules and regulations before implementing BYOD is also a very important element in fighting data breaches in health care. This ensures that users have a clear understanding of the device specs allowed, their roles, and the consequences of breaking the regulations. For example, users should not be allowed to share PHI (personal health information) through file-sharing platforms.
4. Invest in Securing PHI
The biggest mistake that many health care organizations make is focusing on securing devices in a BYOD environment instead of securing the PHI itself. There is a limit to how far one can secure a mobile device, so one is advised to secure data flow and data access before allowing users onto the system. Drawing a map of data flow is a common way to check and deploy PHI security procedures.
5. Do Not Compromise on Usability
Data security and integrity will not be well enforced without a great user experience in BYOD. User mobility is the ultimate goal of BYOD, and it should not be compromised even though security layers have to coexist with this feature. The best way to do this is to ensure enough IT support staff to smooth the use of BYOD systems in the health care facility.
It is expected that health care will grow more prone to hackers in the future; thus, data security will become a primary concern in all health care facilities. As hacking grows in scale, so should the ranks of data security experts. To be safe, protection is a great step towards mitigating cybercrime disasters: prevention will always be much better than cure!
Modern networks are getting increasingly complex. So many things can go wrong with them. Business can suffer, and that is something we definitely want to avoid. Monitoring network performance helps us anticipate incoming problems and troubleshoot them ahead of time.
IPFIX provides a sufficient level of information on several network traffic parameters so that we can understand traffic structure.
IPFIX ElementID ipTTL and ipTotalLength are primarily related to network performance and network attacks. Let’s look at how to monitor and collect them on Cisco IOS devices.
1. ipTTL – IPFIX ElementID
The IPFIX Information Element 192 value corresponds to the Time to Live (TTL) field in the IPv4 packet header. The TTL field is 8 bits long, giving us a maximum value of 255. For IPv6, the value of the information element matches the value of the Hop Limit field in the IPv6 packet header.
TTL is a mechanism that prevents packets from looping endlessly in the case of routing loops. Without this mechanism, packets would be sent endlessly between the same routers. Figure 1 shows the location of the 2B Total Length and 1B TTL fields in the IPv4 header.
Figure 1 – The Total Length and TTL in IPv4 Header
The TTL field in the packet header is set by the sending host. Its value depends on the operating system being used. For example, the Linux operating system sets the TTL to 64, while the Windows operating system generates a TTL of 128. Cisco, in turn, generates IP packets with a TTL of 255.
The TTL value is decremented by 1 on each router (hop) before the packet is forwarded out another interface. When a hop receives a packet with TTL set to 1 and the packet is not destined for the hop itself, the packet is dropped. In this case, the router sends an ICMP type 11 “Time Exceeded” message, code 0 (“time to live exceeded in transit”), to the source IP of the packet.
The TTL value used by your OS can be easily checked by pinging a loopback IP address. In the following example, the TTL value on a Linux system was temporarily changed from the default value of 64 to 1 using the following command:
$ sudo sysctl -w net.ipv4.ip_default_ttl=1
Figure 2 – ICMP Echo Replies with TTL value 1 while Pinging Loopback IP
The sending host (freepc) receives an “ICMP Time to live exceeded” message from its default gateway (router-lan) when pinging noction.com with the TTL set to 1.
Figure 3 – Default GW Generates Time Exceeded When TTL is set to 1
So why do we need to monitor IPFIX flows with a certain TTL value or hop count?
Firstly, the TTL should remain constant between two hosts in the backbone; if it does not, it could mean that the routing has changed. This could trigger an alert that is set for a particular TTL value.
Secondly, unauthorized NAT configured on end devices can pose a significant security problem, as it can hide a number of hosts. It can, however, be detected from an unexpectedly low TTL in flows: every hop reduces the TTL by 1, so traffic forwarded through a rogue NAT device arrives with a TTL below the value expected from that host.
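As a rough illustration of the idea, a collector could flag source IPs whose flows carry inconsistent TTL values, one symptom of traffic from several machines being forwarded through a single NAT device. The function and data layout below are assumptions for the sketch, not a vendor API:

```python
# Sketch: a plain host tends to emit one consistent TTL; a rogue NAT
# device forwards packets from machines behind it, so its source IP
# shows decremented and often mixed TTLs (e.g. 63 from Linux clients
# and 127 from Windows clients behind the same box).
def suspect_nat(ttls_by_source):
    """Return source IPs whose flows show more than one TTL value."""
    return [src for src, ttls in ttls_by_source.items()
            if len(set(ttls)) > 1]

observed = {"10.0.0.5": [64, 64, 64],    # consistent -> plain host
            "10.0.0.9": [63, 127, 63]}   # mixed, decremented -> possible NAT
print(suspect_nat(observed))             # ['10.0.0.9']
```

A production check would also compare against a per-host baseline, since legitimate multihoming or OS upgrades can change TTLs too.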
Finally, TTL expiry attacks can be detected from a large number of flows with the ipTTL value set to 1. If an attacker sends a flood of packets whose TTL values make them expire on the device, it is forced to generate many ICMP Time Exceeded messages. CPU usage then climbs, and all network services on that device are affected.
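On the collector side, such an attack shows up as a burst of flow records with ipTTL equal to 1. A minimal sketch of that check (the record layout and threshold are illustrative, not part of any IPFIX collector's API):

```python
# Sketch of a collector-side alert: count flows whose first-seen TTL is 1.
def ttl_expiry_alert(flow_records, threshold=100):
    """Return True when the number of expiring flows crosses the threshold."""
    expiring = [f for f in flow_records if f["ipTTL"] == 1]
    return len(expiring) >= threshold

records = [{"ipTTL": 1}] * 150 + [{"ipTTL": 64}] * 20
print(ttl_expiry_alert(records))  # True -> likely TTL expiry flood
```

In practice the count would be taken per source and per time window, since a handful of traceroute probes also legitimately produce TTL-1 packets.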
To collect the first seen TTL/hop limit value fields in the IPFIX or NetFlow v9 flows, the following must be configured on Cisco IOS devices:
Router(config)# flow record FLOW-RECORD-1
Router(config-flow-record)# collect ipv4 ttl
Router(config-flow-record)# collect ipv6 hop-limit
To collect the lowest IPv4 TTL and IPv6 hop limit values seen in the lifetime of the flow:
Router(config)# flow record FLOW-RECORD-1
Router(config-flow-record)# collect ipv4 ttl minimum
Router(config-flow-record)# collect ipv6 hop-limit minimum
Similarly, the following example configures the highest value for IPv4 TTL and IPv6 hop limit seen in the flows as a nonkey field:
Router(config)# flow record FLOW-RECORD-1
Router(config-flow-record)# collect ipv4 ttl maximum
Router(config-flow-record)# collect ipv6 hop-limit maximum
2. ipTotalLength – Element ID 224
IPFIX ipTotalLength (Element ID 224) reports the total length of the IP packet. Monitoring packet length helps network admins identify performance issues caused by fragmented or unusually small packets.
For instance, packets must be fragmented when the packet size exceeds the maximum transmission unit (MTU). Fragmentation can then cause excessive retransmissions: when fragments encounter packet loss, reliable protocols such as TCP retransmit all of the fragments to recover from the loss of a single one.
On the other hand, too many small packets add significant overhead, because a larger share of the bandwidth is spent on headers rather than payload.
The Total Length is the length of the IP packet, measured in octets, including the internet header and data. This field allows the length of a datagram to be up to 65,535 octets.
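For reference, both fields discussed in this article sit at fixed offsets in the IPv4 header, so they are easy to pull out of a raw packet. A minimal sketch (the sample header bytes are fabricated for illustration):

```python
import struct

# Offsets per the IPv4 header layout: Total Length at bytes 2-3 (big-endian
# 16-bit), TTL at byte 8.
def parse_ipv4_header(header):
    total_length = struct.unpack("!H", header[2:4])[0]
    ttl = header[8]
    return total_length, ttl

# 20-byte header of a 44-byte packet with TTL 255 (values illustrative)
hdr = bytes([0x45, 0x00, 0x00, 0x2C,   # version/IHL, TOS, total length 44
             0x00, 0x01, 0x00, 0x00,   # identification, flags/fragment
             0xFF, 0x06, 0x00, 0x00,   # TTL 255, protocol TCP, checksum
             10, 0, 0, 1,              # source 10.0.0.1
             10, 0, 0, 2])             # destination 10.0.0.2
print(parse_ipv4_header(hdr))          # (44, 255)
```

The 16-bit Total Length field is also where the 65,535-octet maximum datagram size comes from.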
To collect IPv4 total length of the first seen packet for Cisco IOS devices, the following must be configured:
Router(config)# flow record FLOW-RECORD-1
Router(config-flow-record)# collect ipv4 length total
To collect both the largest and smallest value for IPv4 length seen in the flow:
Router(config)# flow record FLOW-RECORD-1
Router(config-flow-record)# collect ipv4 length total maximum
Router(config-flow-record)# collect ipv4 length total minimum
The IPFIX flow shown in Figure 4 was captured by Wireshark. The TTL value set by the host 192.168.88.102 is 255, so the host is likely a Cisco router. The default TTL value has not been decremented, which means there are no additional hops between hosts 192.168.88.101 and 192.168.88.102.
Figure 4 – IPFIX Flow with IP TTL and IP total Length Information Elements
The destination TCP port is 22, meaning SSH is used to remotely manage the device with IP 192.168.88.101. The reported IP packet length is only 44 bytes, and the flow consists of two packets. The presence of the SYN and ACK TCP flags confirms that the flow is part of a three-way TCP handshake.
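The TCP flags in such a record arrive as a bitmask (the tcpControlBits Information Element), which can be decoded like this (bit positions follow the TCP specification; the sample value is illustrative):

```python
# Low six TCP control bits, least significant first.
TCP_FLAGS = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
             0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def decode_flags(bits):
    """Return the names of the flags set in a tcpControlBits value."""
    return [name for mask, name in TCP_FLAGS.items() if bits & mask]

print(decode_flags(0x12))  # ['SYN', 'ACK'] -- a handshake in progress
```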
As cybercrime becomes more prevalent, international governing bodies and individual governments have set standards for cybersecurity in companies that handle their customers’ data. I’ve previously talked about the FTC and the NIST Cybersecurity Framework on this blog, but now I’m going to explain the European Union’s General Data Protection Regulation, also called the GDPR. The GDPR effectively replaced the E.U.’s privacy law from 1995. It is a wide-ranging, practically global privacy and data security regulatory scheme that affects anyone who touches European Union citizens’ data. Companies that aren’t compliant are punished with massive fines. But what if you’re a company based in the U.S.? Can these fines reach you over the ocean? They most certainly can, and the E.U. has no problem fining American companies that operate in Europe and hold data from E.U. citizens.
When I say fines, I mean hefty ones. Violators can be fined up to 4% of their annual revenue or 20 million euros, whichever is higher. With consequences like that, you need to know the rules and stick to them if you want to process or control European Union citizens’ data, regardless of where you are based. To illustrate just how much this fine is, I’ll go back to TalkTalk, a company in the U.K. that had a data breach and was fined 400,000 pounds. Under the GDPR, that fine could have been 59 million pounds, quite a difference. And while the GDPR has some very technical language and phrases that are commonplace in Europe but rare in America, Article 4 of the document defines all of these problematic terms, so anyone could understand it given enough time reading.
The GDPR was written and published on May 4th, 2016, but was not enforced until May 25th, 2018, giving companies two years of notice before the regulations came into effect. The GDPR has over 200 provisions with many sub-parts, but it comes down to two core areas: substantive privacy rights and substantive security requirements. The privacy rights are detailed in a chapter of 18 articles, giving data subjects the right to, essentially, transparency. The ability to access your data, change it if it’s wrong, and even remove it are all detailed. Retained data, however, must be anonymized as well. The security requirements are about data controllers (who hold the data) and data processors (who work with the data). So, a data controller could be a bank that holds account information, and a processor could be the company that prints bank statements with that data on them. I’m going to put this on hold for now, and next week I’ll write some more about the liabilities of data controllers and data processors.
A new report has added to the debate over whether cloud computing is a greener approach to running data centres. A survey by Pike Research indicates that the energy savings of cloud computing are “substantial”.
In its report, Cloud Computing Energy Efficiency, the market intelligence firm claimed that the adoption of cloud computing would lead to a 38 per cent reduction in worldwide data centre energy expenditures by 2020.
As part of its cloud computing adoption scenario, Pike Research forecasts that data centres will consume 139.8 terawatt hours (TWh) of electricity in 2020, a reduction of 31 per cent from 201.8 TWh in 2010. The reduction will drive total data centre energy expenditures down from US$23.3 billion in 2010 to US$16.0 billion in 2020, as well as causing a 28 per cent reduction in greenhouse gas emissions from 2010 levels.
According to Pike Research, the report shows that computing clouds can achieve industry-leading rates of efficiency. It highlights the fact that only the largest organisations have the financial resources to reach the same levels of efficiency in their own data centres. Pike predicted that much of the processing handled by today’s data centres will have moved to the cloud by 2020.
“The growth of cloud computing will have a very significant positive effect on data centre energy consumption,” said senior analyst at Pike, Eric Woods. “Few, if any, clean technologies have the capability to reduce energy expenditures and greenhouse gas production with so little business disruption. Software as a service, infrastructure as a service, and platform as a service are all inherently more efficient models than conventional alternatives, and their adoption will be one of the largest contributing factors to the greening of enterprise IT.” | <urn:uuid:a83a1aab-1247-45f0-9d2e-162b2783a5ad> | CC-MAIN-2022-40 | https://channeldailynews.com/news/research-supports-claim-that-cloud-is-energy-efficient/7175 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00773.warc.gz | en | 0.95092 | 367 | 2.734375 | 3 |
Network Attached Storage (NAS) is a dedicated file storage and sharing server. What benefits do NAS systems offer?
For businesses, data is a critical asset to function successfully and get things done. Companies might not be able to provide the quality of service their consumers require without proper access to their business data. Lack of access to corporate information at the right time may lead to decreased sales, poor customer service, or even business collapse. However, it is also true that most enterprises place more emphasis on the applications that use the storage than on the storage itself.
Centralized data storage offers various benefits over decentralized approaches. Centralized business databases enable staff to work together on a single version of a file. They also make backups more effective at protecting data. One such centralized system with a wide range of functions is network attached storage (NAS) technology. NAS systems are now employed more frequently because they support multiple users and client devices, cost less, and make it easy to expand storage capacity. NAS systems are so adaptable that they may be used in homes and businesses alike, unlike file servers and SANs.
What is Network Attached Storage?
A Network Attached Storage (NAS) system is connected to a network and enables authorized network users and clients to store and retrieve data from a centralized place. These devices typically include a file-service engine (the NAS device) and one or more storage devices (the NAS drives).
A NAS system’s function is to offer shared file-based storage in the form of an appliance designed for speedy data storage and retrieval to a local area network (LAN). Only the most frequently accessed data should be stored on NAS because it is a costly storage solution.
Many enterprise IT organizations are considering moving NAS and Object data to the cloud to cut costs, increase agility, and boost productivity.
Benefits of Network Attached Storage
NAS systems are easy to use, and an IT specialist is not required. The time it takes to set up and administer the system is significantly reduced because NAS designs are sometimes offered with streamlined scripts or even appliances preconfigured with a streamlined operating system. Accessing data across the network, including cloud-based applications and data, is made faster by using a NAS. Additionally, it includes built-in data security, simple data backup and recovery, and interoperability with redundant storage arrays. To ensure data integrity, NAS can be formatted to enable duplicated drives, a redundant array of separate discs, or erasure coding.
Without replacing or upgrading the current servers, you can expand the storage capacity of NAS and add new storage without shutting down the network. It allows authorized network users and clients to centralize data storage in a secure, dependable manner. Compared to other storage systems like SAN, it is substantially less expensive and can save wasted space. NAS systems are simple to scale up and expand upon and offer a wide range of applications.
Although NAS technology has been around for a while, its use has recently increased. Due to rising storage demands, we are seeing an increase in the number of businesses running their apps and workloads on all-flash arrays. NAS is mainly used for unstructured data storage like surveillance videos, backups, files, snapshots, and emails. NAS technology is perfect for an office setup that works on effective data sharing among various departments. The COVID-19 pandemic has accelerated business transformation initiatives and cloud migration, which are the main factors driving the growth of unstructured data and the necessity for these services. One needs to store these workloads together to enable insights and learning as firms want to utilize cutting-edge technologies like edge computing, AI, and machine learning. Finally, NAS systems are frequently used to support cloud storage providers as a data backup, archiving, and disaster recovery system, contributing to increased productivity of the team and the company.
Thanks to COVID-19 and the need to social-distance from one another, many companies shifted rapidly to remote operations. Teams that used to work side by side had to move to a home office or remote location. It also meant that all of their equipment, the systems they required access to, and related technologies had to be adapted for the new conditions.
Because things moved so quickly, it opened up many of these technologies and systems to outside attacks. It’s no surprise that we saw a significant increase in phishing, malware, and ransomware attacks. Key findings from a recent report revealed a 72% increase in ransomware attacks amid the COVID-19 crisis. The report also showed a 50% jump in mobile vulnerabilities.
What are some of the new and lucrative attack vectors that have appeared during the pandemic? Moreover, how can companies expect to address these issues?
Problem: Managing the Basics
Weak passwords are broken all the time, as are passwords used across multiple accounts. Hackers and thieves regularly share data dumps containing old and commonly used passwords. They use this information to gain access to various systems, including vital business networks, online services, and more.
A dump from 2019 included 1.1 billion login credentials and was one of the largest at the time. Since then, many more have happened, both big and small. Yet, people still use the credentials contained within these dumps.
Companies should be resetting passwords on a schedule, and it should especially be done as workers return to the office. You never know if or when passwords are compromised. The exception is if the passwords are auto-generated, but even then people usually have the option to change or customize them.
More importantly, proper password etiquette should be used to create strong, uncommon passwords. They should be at least eight characters long, composed of lowercase and uppercase letters, as well as numbers and symbols. Anniversaries, birthdates, and other publicly accessible details should never be used.
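The policy just described is easy to enforce mechanically. A minimal check (an illustration of the rules listed above, not a complete password-strength audit):

```python
import string

# Enforce: at least 8 characters, with lowercase, uppercase, digit, symbol.
def meets_policy(pw):
    return (len(pw) >= 8
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(meets_policy("Summer2020"))   # False -- no symbol
print(meets_policy("t7#Kq!9zR"))    # True
```

A real deployment would also reject passwords found in known breach dumps, which a composition check alone cannot catch.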
Problem: Improper Data Handling
Large datastores should have an expiration date, especially when they contain highly personal and sensitive information. Recent legislation has made it necessary to not only purge data regularly but also provide full access controls to customers and clients. They should be able to opt out of data collection and request deletion of all related information, at any time. Even so, when the data is stored for long periods, it’s vulnerable.
In some industries, such as health care, data must be stored indefinitely. That is where data cleaning comes into play.
Data cleaning is primarily used to prepare and improve the accuracy of collected information, by weeding out unnecessary details. However, it also improves data security by ensuring only the information that is needed is retained.
There is a specific process for collecting information during surveys and polls, storing it, and tidying it up. It’s something that must be implemented foundationally, as opposed to just at the end of a data collection operation.
Problem: SaaS and Cloud System Attacks
Many companies turned to powerful SaaS (software-as-a-service) and cloud platforms to support remote work and always-on-access. Whether managed internally or by a third party, these systems open a network and data up to potential attacks. In the age of COVID-19, cloud attacks are on the rise, most likely due to the increase in remote access system deployments.
Hackers are bypassing advanced security, including multi-factor authentication, by leveraging unsecured devices with shared access.
Tighten up access protocols by locking out unsecured devices. Even if remote access is necessary, no one should be connecting using an unauthorized or unsafe device or terminal. It is possible to lock down employee equipment, company-owned or not.
Problem: Employees Coming Back With Their Devices
As workers start returning to the office or workplace, they will be bringing either their assigned equipment back or their personal devices, which may or may not be infected. This is where the repercussions of a rapid remote work transition come into play.
Policies that are too lax, alongside improper security protocols, could mean a massive surge in attacks and infections. This is augmented even more by the fact that mobile vulnerabilities and mobile-related cyberattacks are on the rise.
The only solution to this is to prevent employees from bringing personal or outside devices to work, at least until they can be evaluated properly. Assigned equipment should go through an assessment and cleansing process before it’s issued again or provided access to company networks.
Advanced security solutions must be implemented, including firewalls and AI-based monitoring, with real-time authentication and reactions.
Preparing for the Big Return
For many organizations, the biggest security concern is going to be the eventual return to the office or the workplace. As everyone has been working remotely for some time, they will need to access internal systems, machines, and terminals. Moreover, they will be bringing either their personal devices or assigned equipment onto company property, ultimately connecting to the business’s network. That could bring a host of breaches or attacks, as could cloud or SaaS vulnerabilities.
Cybersecurity solutions should be readied for this big return, as should the necessary systems. All passwords should be reset and specific guidelines issued for creating new, stronger ones.
Hopefully, proper data handling and storage protocols have been leveraged all the while during the pandemic. If not, this is the moment to start. | <urn:uuid:31382f3a-7d72-4bfa-abdc-48ebe5244ee0> | CC-MAIN-2022-40 | https://www.drchaos.com/post/security-vulnerabilities-generated-by-covid-19-and-how-to-address-them | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00173.warc.gz | en | 0.955053 | 1,132 | 2.65625 | 3 |
Paestum and Magna Graecia
Paestum was a major city of Magna Graecia, a Greek area of the Roman Empire in the southern part of the Italian peninsula. It's south of Salerno, easily visited on a day trip from the Amalfi Coast. It could also be visited on a full day trip from Naples.
Magna Graecia was the Latin for "Greater Greece". The people who settled it would have called it Μεγαλη Ελλας or Megale Hellas, "Greater Greece".
Greek settlers colonized southern Italy and Sicily in the 8th Century BC. It was absorbed into the Roman Republic after the Pyrrhic War (280-275 BC). However, a small population in the "heel" of Italy still speaks Griko, a language combining ancient Doric, Byzantine Greek, and Italian.
To get to Paestum from Salerno, take the bus. The schedule should be something like the following. Do check the return schedule carefully, to avoid getting stuck in Paestum overnight without any place to stay!
CTSP bus number 34 runs south through Paestum from Salerno. It leaves from the bus stop along Piazza della Concordia, about halfway from the train station to the ferry pier.
The city remained faithful to Rome during Hannibal's invasion of Italy, winning it special favors such as the minting of its own currency. It prospered for centuries, but declined as Rome did.
Paestum was abandoned by the Middle Ages and largely forgotten. Drainage had changed, leading to swampy conditions and malaria.
When Pompeii and Herculaneum were rediscovered in the 1700s, these massive ruins started to get some attention again. Now it's hard to imagine it being abandoned due to its being a malarial swamp, as the area has become pretty dry.
The Plumbing of Paestum
As a culturally Greek but Roman administered city, Paestum had major bath facilities and other water infrastructure. Here you see the supports for the raised and heated floors in one of the major baths in Paestum.
I believe that this was a pool, although it may have been a bath instead.
The Salerno Landing of 1943
When you're done seeing the ancient history at Paestum, it's just a short 1.5-kilometer walk to the beach and the site of the 9 September 1943 landing of the U.S. 36th Infantry Division during Operation Avalanche, the Allied invasion of Italy.
After the defeat of the Axis Powers in North Africa, the Allies disagreed as to the next step. Winston Churchill especially wanted an invasion of Italy, "the underbelly of Europe". However, General George Marshall and most American planners wanted to avoid all delay of the Normandy invasion.
When it became clear that the Normandy invasion could not happen until 1944, Operation Husky, the invasion of Sicily, was approved. It happened in July 1943 and was very successful, soon followed by a coup deposing and imprisoning Benito Mussolini.
Rather than try to gradually move up the rugged Italian peninsula, the Allies wanted to take the major port at Napoli (Naples). However, Napoli was beyond (or just barely at) the range limit for Allied air cover. The beaches south of Salerno were a little closer, and they provided much better landing opportunities as shown below.
The U.S. 36th Infantry Division landed right at Paestum, and the initial hours of the battle passed through the ruins.
As shown on the map, most of the forces landed on the relatively flat river deltas south of Salerno. The coast west from Salerno through Amalfi to the tip of the peninsula is very rugged, with cliffs and nearly vertical slopes 100 to 200 meters high and only very small beaches or piers at a few towns. See my pictures of the coast at Salerno and to its west for why landing sites on the Italian coast were so limited.
The invasion went well.
However, the following war up the length of the Italian peninsula was brutal.
The German forces had been in place for a few years, and had had plenty of time to plan and build defenses.
The Allies slowly pushed them north up the peninsula, but it was a matter of hard fighting for each defensive line (typically along a river running down from the central Apennines to the coast). The Germans would then fall back to their next hardened defensive line.
Today the beach near Paestum is a holiday spot. You see the restaurants and cafes as you approach from Paestum.
One reminder of its heritage is the small Italian military logistics facility there.
Meta engineers have asked the ITU to end the practice of adding "leap seconds" to keep atomic clocks in sync with the rotation of the Earth.
Leap seconds are occasionally added to correct universal time (UTC), just as leap years include an extra day to keep our calendars in sync with the Earth's motion around the sun. Unfortunately, they create anomalous time stamps which have caused major network outages in the past. Meta, the owner of Facebook, has issued a strongly-worded blog post arguing that leap seconds are a "risky practice which does more harm than good," and urging the International Telecommunications Union (ITU) to put an end to it.
The ITU is due to make a decision in 2023, to be enacted by the International Earth Rotation and Reference Systems Service (IERS).
Look before we leap
Most networks and computers use UTC, but the Earth's rotation is very slightly irregular, and slows down imperceptibly over time. In 1972, the first leap second was introduced to keep official clocks in line with the Earth's rotation, and there have been 27 leap seconds in total since then.
In this century, leap seconds have become more of a problem, because networks and other aspects of human existence increasingly rely on UTC. Software usually assumes that time will always move forward, and if this does not happen, the software can crash.
In 2012, a leap second caused a major Facebook outage, as Facebook's Linux servers became overloaded trying to work out why they had been transported one second into the past.
In 2016, a similar thing happened to Cloudflare, when a leap second at midnight on December 31 extended 2016 by a single tick of the clock.
Now, Facebook wants to call a halt to the practice, because it is dangerous to systems relying on UTC. Instead, it wants to allow clocks to slow down fractionally for a short period, so the extra second can be "smeared" across most of a day.
"While the leap second might have been an acceptable solution in 1972, when it made both the scientific community and the telecom industry happy, these days UTC is equally bad for both digital applications and scientists, who often choose TAI or UT1 instead," says a Meta engineering blog dated yesterday. TAI is the precise global atomic time standard, while UT1 is the more imprecise observed solar time.
"Introducing new leap seconds is a risky practice that does more harm than good, and we believe it is time to introduce new technologies to replace it," says the anonymous blog. "As engineers at Meta, we are supporting a larger community push to stop the future introduction of leap seconds and remain at the current level of 27, which we believe will be enough for the next millennium."
One easy answer would be for all systems to continue to use TAI, and simply apply a conversion factor (the cumulative number of leap seconds) to convert to UTC whenever talking to humans. According to Wikipedia, the TV industry, electric grids, and Bluetooth mesh networks have settled on this practice.
An alternative is to apply the leap second but to "smear" it, breaking it down into a large number of smaller steps, effectively slowing system clocks by a tiny fraction for a period until they have absorbed a whole second.
Meta has taken this approach to handle the two leap seconds since Facebook was tripped up in 2012: "There is no universal way to do this, but at Meta we smear the leap second throughout 17 hours, starting at 00:00:00 UTC based on the time zone data (tzdata) package content."
Smearing over 17 hours makes the process more reliable, says Meta, because if Facebook's network time (NTP) servers are brought into line gradually with tiny steps, none of them ever register as faulty compared to other servers. However, this approach requires "nontrivial conversion logic" inside timing systems, such as Facebook's Time Appliance.
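In its simplest form, a smear is just a ramp: the extra second is absorbed in proportion to how far into the smear window the clock is. The sketch below uses a 17-hour window, as in the Meta approach described above, but assumes a linear ramp (real deployments may use other curves):

```python
# A 17-hour smear window, expressed in seconds.
SMEAR_SECONDS = 17 * 3600  # 61200

def smear_offset(elapsed):
    """Fraction of the extra leap second already absorbed `elapsed`
    seconds into the smear window, clamped to [0, 1]."""
    return min(max(elapsed / SMEAR_SECONDS, 0.0), 1.0)

print(smear_offset(0))      # 0.0 -- smear just started
print(smear_offset(30600))  # 0.5 -- halfway through the 17 hours
print(smear_offset(61200))  # 1.0 -- full leap second absorbed
```

Each NTP response during the window reports a time shifted by this offset, so no client ever sees time step backward.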
In fact, smeared time is also used at other web giants, including Amazon and Google, who both do leap smearing by different methods. This has raised the unfortunate prospect of there being multiple time smearing standards.
However, Google has got its standard out to the public some years back. In late 2016, before the leap second which took Cloudflare down, Google offered its internal smeared NTP time service as a public smeared time service.
It will be interesting to see if Meta adopts Google's smeared time. | <urn:uuid:bd8bedce-b508-494c-b2cd-14c906c9661a> | CC-MAIN-2022-40 | https://direct.datacenterdynamics.com/en/news/meta-asks-the-itu-to-stop-adding-leap-seconds-to-alter-time/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00173.warc.gz | en | 0.95535 | 926 | 2.8125 | 3 |