We are at a very special moment in history right now. Never before in modern times have we seen such a global impact and a global response to a crisis, which largely ignores geopolitical borders. The Covid-19 outbreak and its repercussions have put cities, countries, and entire regions on hold. The news each day brings new stories of economic hardship, of fear, and of grief, peppered with signs of hope – hope for a cure, for a vaccine, for ways to work and earn a living whilst facing lockdown, hope for the time after the coronavirus has lost its capacity for destruction.
One saving grace of this crisis is that the global digital infrastructure – the terrestrial and mobile networks, the data centers, the undersea cables and the satellite connections that support the global Internet – is by now well enough developed for people in most countries to stay in constant contact despite isolation.
Lockdown does not mean shut-down
Where possible, people are finding ways to make the best they can of the situation. Companies have sent their workforce to work at home, keeping staff in employment and their operations running. Logistics are being maintained, and retailers have been quick to offer delivery services for those unable to leave their homes. Freelancers are seeking creative new ways to make ends meet using digital tools, and communities are looking for ways to support local initiatives.
Digital communication is vital to this. It enables people to stay in contact with loved ones they can’t meet with. It enables children and students of all ages to continue with their education. It helps doctors to – when appropriate – provide consultations and therapy via telemedicine to avoid unnecessary contact. Even medical researchers, on whom we all pin our hopes, are using digital applications to remain in contact and share data in their efforts to understand the virus and find a vaccine.
Digital applications are key
So digital applications that enable communication and collaboration are key to enduring the current crisis. Globally, we now see a very high demand for digital applications, and these are becoming crucial for enabling business and private life to function, not only in times of crisis. But even the best application cannot perform if the underlying digital infrastructure is not as solid, resilient, and secure as possible.
Of course, this is even more critical in the current situation, but it also highlights the general trend that, as our economy becomes more global, as our planet becomes home to more and more people, and as more regions need to be enabled in terms of communication, the only answer can be digitalization. Because otherwise it won't scale, and therefore won't work in the long run.
Reliable digital infrastructure is the only answer
Therefore, one answer to some of the challenges posed by the Covid-19 pandemic – and the modern world in general – is sophisticated digital infrastructure, because this allows the use of smart digital applications and solutions which will make people’s lives better. In a globalized world, economic growth and the development of societies in most regions is now based on digital communication and digital services, and these in turn depend on the underlying digital infrastructure.
As a result, the interconnection community – more than ever before – must deliver continuous and high-performance connectivity: everywhere, for everybody, and for everything. This community, and the infrastructure that it builds and maintains, is just as vital as other critical services in a crisis. It is essential that this digital infrastructure is as global, open (neutral), resilient, scalable and secure as possible, in order to deliver the many and varied services needed by people, institutions, and businesses.
As an element of this crucial digital infrastructure, interconnection services need to allow communication to occur along the shortest route and in the most secure way, in order to enable the digital applications and digital communication needed by businesses, medical facilities, education, recreation/entertainment, and news and media outlets.
Digital communication on the rise
These times of global lockdown are having a strong impact on how we interact with each other and how we behave, how we work and how we communicate with each other. Internet traffic is growing, together with demand for quality. While different regions are at different stages of development, depending on when the Covid-19 infections began to take off in their locality, the trend is valid from North America to Europe, to the Middle East, and on to the Indian sub-continent: DE-CIX’s Internet Exchanges on four continents are all recording the same trend.
Three types of Internet traffic in particular have risen substantially: traffic from collaborative communication tools has doubled since the crisis began, as has traffic from streaming services. This indicates both enterprises and the education sector are migrating their activities online. Added to this, we see around a 50 percent increase in traffic from online gaming. Everywhere, we see a similar demand for reliable digital infrastructure.
Communication behavior will change
Even before the current crisis, we were seeing huge investments in new streaming services. But what’s happening now, in response to lockdowns around the world, will change the game in many areas of activity. The current transformation of attitudes, processes, and systems will continue to echo through the post-Corona era. People are taking the time to keep in touch with their loved ones on a more regular basis, and to value the time they have together despite the distance.
Many employers are looking into how remote working benefits business continuity and supports their employees to master challenges. But beyond this, decision makers are also beginning to recognize the long-term benefits of a more profound digital transformation. Companies are taking a long, hard look at how they manage their offices, how staff interact, how teams collaborate, what business travel is actually essential, whether meetings can be reconceived to be more productive. They are becoming aware of how the move online can unlock the potential to save money and increase revenues.
This won’t only have an effect in the short-term – it will be a game changer for business and private communication behavior, and is likely to lead to even higher usage of digital applications than we would have forecast in the pre-Covid-19 world. This is not to suggest that now everything will be digital only. Rather, that we need to recognize that we have options: processes should be reviewed, and we should learn from the times of crisis how to make our life better in general.
While the virus itself currently remains a serious threat, we are all doing what we can towards minimizing its impact, stopping its spread, flattening the curve, and finding a cure. But this crisis can also offer us a chance to re-evaluate, to see things differently. Right now, we are being forced to do that. Let’s also take the time to learn to manage tasks more efficiently, to operate more profitably, and to discover successful modes of business and private communication which we can apply now and in the future.
Make meaningful investment decisions
We have to learn from this so we can make meaningful investment decisions in the future. Digital infrastructure is the enabler of this long-term transformation, and it helps to ease the pain of today’s lockdown. The Corona crisis throws into stark relief the regions that have solid, reliable digital infrastructure, and those regions of the globe that remain underserved. The digital divide must be eliminated so that all communities can in future have access to information, access to digital communication tools, and access to digital content. The Internet industry must take as its mandate the goal of a minimum level of digital infrastructure everywhere.
Nothing will be the same after Covid-19. Not for humankind itself, not for how we do business or how we (inter)connect in this new decade. This century, like the last, is presenting us with global challenges. However, these challenges – though of a different nature – can also be overcome by people and businesses. The current global crisis will change our lives going forward, and to survive in the present and prepare for the post-Corona future, this global lockdown needs a full digital unlocking.
Most professional résumés say the person has “good” interpersonal skills because companies have listed them as a requirement for years. But what are they, and why are they so critical to your professional IT career?
Much of how we work — communicating, collaborating, problem solving, and negotiating — never happens in a vacuum. IT professionals, it may be argued, particularly rely on their interpersonal skills to get the job done. More and more IT training includes subjects for written communication. However, working with and for others deserves more attention if you expect to be a high-quality IT professional.
Communication typically begins with what you wish to say. Consider how your audience might understand what you say. Put yourself in their position as a reader or listener. Here’s a favourite quote of mine, attributed to Richard M. Nixon: “I know you believe you understand what you think I said, but I am not sure you realize that what you heard is not what I meant.” More than pure clarity, interpersonal communication skills focus on how to communicate the message.
Good communication balances detail with concision. Consider your audience and balance the details accordingly. If you are focused, you will sound assertive. Be blunt when necessary. Cloudy messages are poorly received. If you struggle to be concise, step back and think more about the message and how it will be perceived.
Good listening skills are much more than being polite. How you present yourself and participate as a listener is your responsibility. Genuine listening gets little thought from many IT professionals, yet it is a critical interpersonal skill. Depending on culture and working environment, connect with the speaker: face them and make eye contact. This creates the connection necessary for great communication. During long meetings or presentations, try to picture what the speaker describes. Sketching, focusing on keywords or phrases, or taking notes on the agenda can help us remember and keep our attention from wandering. Develop different tools for different situations.
Do you actively notice verbal and nonverbal cues? Likely you do. These cues offer good insight. Even if others don’t look at you, check out your audience for shyness, uncertainty, shame, guilt or other emotions. While you have to work with the cards dealt you, play them as best you can. Good interpersonal skills mean making the most of the situation and becoming aware of your environment is your first step.
And deal with the logistics. Prep the meeting space and put aside papers, books, and other distractions. We often forget such simple steps or consider them ‘administrative’; however, that’s all part of your professional skills.
Interpersonal skills are about you and your professional ability. How you conduct yourself can advance or terminate your IT career. Everyone gets angry, and how you deal with your anger matters. Politeness, congeniality, and not interrupting or imposing your ideas are all good interpersonal skills.
As important as these skills are to our daily personal and work lives, they are often taken for granted. It’s always a good time to invest some time in honing your interpersonal skills. As an IT professional, turn your good interpersonal skills into great ones!
Don’t forget you’re not alone. You’ll be pleasantly surprised by the interest from your peers. Professional societies such as CIPS offer valuable networking opportunities for informal discussions with your peers on everyday issues.
According to a recent IBM report, the average cost of a data breach in 2020 was $3.86 million. Even though it’s a slight decrease from 2019, recent trends connected with the global pandemic and the increasingly remote workforce show that the number of data breaches is constantly rising.
It is a common misconception that cybercriminals only target bigger companies with revenue in the millions. That’s simply not true. Cybercriminals also target small businesses and startups, presuming that they have weaker security protocols and haven’t invested as much money and expertise into protecting their online data.
No matter the size, all businesses are at risk of a data breach. This is particularly true for tech companies and companies that handle sensitive customer data, such as credit card information, social security numbers, and other confidential digital records. Recovering from a data breach is a daunting proposition and the results of an attack can be devastating for any business.
The best thing you can do for your company is to be proactive instead of reactive and take steps that can help minimize the chances of a data breach occurring. Creating a risk management policy and incident response program, educating your staff, and keeping up with technology solutions when establishing your security protocols are just some of the measures businesses should take to prevent criminals from hacking into their systems.
However, perfect protection from cyberattacks doesn’t exist, which is why you should consider transferring some of your risk to a third party. An adequate cyber insurance policy can help businesses deal with the consequences of a data breach by minimizing the damage inflicted on your business, both financially and structurally.
Data Breach Sources Cyber Insurance Can Cover
The majority of data breach incidents stem from the digital world. Cyber crime is extremely sophisticated and hackers are constantly seeking different methods for compromising their victims’ networks. However, sometimes even a simple human error or an oversight can lead to a data breach with severe consequences. Let’s have a look at what attack sources cyber liability insurance would respond to:
- Hacking attacks: The most common ones are malware attacks. They come in many forms, but the most concerning is the rise of ransomware attacks, where hackers hijack your data and demand money to release it back to you. It is predicted that a business will fall victim to a ransomware attack every 11 seconds in 2021. Phishing attacks have also become commonplace, and they are particularly dangerous because they rely on psychological manipulation to trick their targets into giving away sensitive information. Given that human error causes 90% of cyberattacks, it is essential that you provide adequate training for your employees and implement strong security protocols.
- Data theft/leaks: Your employees could access confidential information with the intent of distributing or copying it without authorization. They could also grant criminals access to proprietary data that would give them material for insider trading schemes. Given that 20% of cyberattacks are caused by the misuse of privilege, businesses should be careful when giving access to staff members and keep the circle small.
- Loss of devices or physical theft: Although it’s not a very common source of a data breach, a stolen laptop, smartphone, or even thumb drive could lead to a company’s network being compromised.
How Does Cyber Insurance Respond to a Data Breach?
A cyber liability policy is split into two types of coverage: first-party and third-party. First-party coverage pays your own costs related to a data breach, whereas third-party coverage responds to the damages suffered by other direct victims of the breach.
Suppose hackers steal or expose your confidential information. An extensive data breach insurance policy would pay for the following:
- Notification costs: This is a significant expense because when a company becomes aware of a data breach, it is obligated to investigate the extent of the damage and notify everyone affected.
- Credit Monitoring: Simply put, your insurance policy would kick in to cover credit monitoring for all the victims. Regulators require credit monitoring to ensure proper protection and compensation for all affected parties.
- Computer Forensics: Cyber insurance covers the costs of hiring cybersecurity experts to look into the source and scope of the breach and contain further damage. They should also help improve security protocols and prevent future exposure.
- Data Loss, Recovery, and Recreation: This process requires expert investigative work and can be a very lengthy and expensive process.
- Extortion Attempts: If hackers steal valuable information that’s crucial both for your business and your clients or partners, they could ask for ransom money to return that data to you. Your insurance company will help you assess the situation and cover the ransom if paying it is in your best interest.
- Deceptive Transfer of Funds: This type of fraud is often initiated as a social engineering attack where the cybercriminal poses as a company executive urging employees to provide information that can give them access to bank accounts, or to simply make fraudulent transactions themselves. If one of your employees gets tricked into transferring money from the company account into the hacker’s account, that loss could be covered by your insurance policy.
- Business Interruption/Loss of Revenue: Businesses are sometimes obligated to shut down operations for months until they can contain all the consequences of a data breach. If that happens, your insurance policy would cover your lost business income during that downtime.
- PR Activity: Cyberattacks can have serious PR implications for any company if they are serious and involve a large number of third parties. The insurer covers the cost of hiring public relations experts that could help minimize the reputational damage and any other public relations fallout resulting from the data breach.
While cyber insurance can help your business with many of the financial strains that a cyberattack, such as a data breach, could cause, it’s important to remember that insurance does not protect you from cyber crime.
Businesses need to invest in risk management processes first to minimize the frequency and the extent of potential data breaches. Cyber liability and other insurance policies serve as financial support nets that can help businesses survive data breaches that might have been financially crippling had it not been for the many costs that well-constructed insurance policies cover.
In this special feature, John Kirkley sits down with Dr. Saul Perlmutter. A Nobel Prize-winning astrophysicist, Dr. Perlmutter focused his SC13 talk on how integrating big data — and careful analysis — led to the discovery of the acceleration of the universe’s expansion.
In 1997, based on his study of the exploding stars known as supernovae Type 1a, astrophysicist Saul Perlmutter expected to find a universe whose expansion, kicked off by the Big Bang some 13.7 billion years ago, was slowing. He found just the opposite.
Incredulous, he and his team checked and rechecked the data and finally published their results. Those results earned Dr. Perlmutter the Nobel Prize for physics in 2011.
He is an astrophysicist at the Lawrence Berkeley National Laboratory and a professor of physics at the University of California, Berkeley. He heads the Supernova Cosmology Project at the Lab.
John Kirkley: Saul, tell me about the work that led up to the Nobel Prize.
Saul Perlmutter: In the early part of the (Supernova Cosmology) project we knew we had a really great opportunity to measure something absolutely profound – the state of the universe, whether the universe was going to last forever, and whether we lived in a universe that was infinite or finite.
Ever since Einstein’s theory of general relativity gave us a basic feel for the universe, we’ve been wondering: is it slowing down enough to come to a halt and collapse? (If there is enough stuff in the universe, gravity will do that.) Or will it expand forever? Also, is there enough stuff in the universe, enough gravity, to curve space in on itself so that there’s only a finite amount of real estate out there? These were the big questions we had on our minds at the time.
We had an opportunity to make the measurements using these exploding stars, these supernovas that we started discovering in the mid-eighties – specifically one kind of supernova, called Type 1a, that makes a great “standard candle.” You can identify it by its structure, and once you’ve recognized it, you know it will always be roughly the same brightness – it reaches its peak in a couple of weeks and fades away over a couple of months. So it tells you how far you’re seeing across the universe. These are vast distances – light has been traveling toward us for five billion, ten billion years from these exploding stars, and we use that to tell us how much the universe has stretched since those early times.
The project took much longer than we had originally estimated. We thought it would be a three year project, but by the end of three years we hadn’t yet found our first supernova. After five years we had developed the technique that would help us find the supernovas; after nine years we used those techniques to find 42 standard candles. We were ready to plot the information. When we created the plot, we were in for a surprise. We realized that the universe was not slowing down enough to come to a halt, it was actually speeding up. And ever since then we’ve been trying to figure out why, to understand what can be causing the universe to accelerate in an expansion that is getting faster and faster.
The two main options we’re now considering are the effects of a new energy that pervades all of space – what we call “dark energy;” or it could be that we have to change Einstein’s theory of general relativity.
John Kirkley: But Einstein’s theory is pretty sacrosanct, isn’t it?
Saul Perlmutter: Yes, it would be pretty amazing if that were true. Einstein’s theory works well to many digits of precision – it seems to be a very robust theory so changing it would be very difficult. And nobody knows what this change might be.
John Kirkley: During the period when you first started this work and now, have you developed some automated systems to find the Type 1a supernovas?
Saul Perlmutter: Yes, over the years we have advanced our capabilities to find these standard candles and have developed techniques that are running on large telescopes around the world every night. Originally we could only get a few nights per semester, but now it’s become a real effort for many groups. Where once we had 42 supernovas, we now have over 700 that people use for this purpose.
Even so we need a whole new level of precision, so we are developing techniques to calibrate the supernovas to a much higher degree of accuracy than was ever done before. We are now building the next generation of instruments that will allow us to actually collect that data to very high resolutions. We need more supercomputers to narrow down the history of expansion because there are subtle differences in the histories that you get when using the theory of dark energy or Einstein’s theory of general relativity.
John Kirkley: Obviously high performance computing plays a huge role.
Saul Perlmutter: This is all data intensive. From the very moment that you’re trying to find the supernova, you’re hunting for a little, tiny spark of light embedded in collections of thousands of images, where each image is many megapixels, or even gigapixels, collected by the big mosaic cameras. And then, to analyze the data and compare your results to different cosmological models also requires large computers, as do the Monte Carlo simulations and all the statistical models you need. Finally, comparing these many models derived from first principles requires simulations of exploding stars – so that’s another large computing job that is part of the story.
John Kirkley: Big Data with a vengeance.
Saul Perlmutter: Absolutely. And that’s just one technique. Now other techniques are being built. For example, baryon acoustic oscillations (BAO). So we will be able to compare results using different techniques, which is crucial.
John Kirkley: Will the work that you’re doing actually shed light on the nature of dark matter and dark energy?
Saul Perlmutter: We hope that by doing a very precise measurement of the history of expansion, it will tell us about the properties of dark energy, which dominates the expansion today. Dark matter is in play in other ways – e.g., the clumping of matter in the universe. If we are able to study how matter clumps, this will help us understand the behavior of gravity and help us distinguish Einstein’s theory of relativity from the dark energy explanations of expansion.
John Kirkley: And maybe come up with a “theory of everything?”
Saul Perlmutter: Exactly. That’s the big goal. We’ll eventually get a handle on both mysteries – dark energy and dark matter.
John Kirkley: What do you see coming up in the near future?
Saul Perlmutter: There are a number of ambitious projects in development which, in the next ten years, will allow us to conduct a more precise measurement of the universe than we’ve ever seen before. These projects use ground-based telescopes and robotic controllers so you can position thousands of optical fibers exactly where the light from a galaxy will fall into the telescope. This allows us to study millions of galaxies in just a year of observation.
Another advance is that we’re developing new space missions like the Wide-Field Infrared Survey Telescope (WFIRST), which comes out of an earlier project we developed call SNAP (Supernovae Acceleration Probe). These are cameras with a field of view that is hundreds of times bigger than previous equipment. You can get a much more comprehensive survey of the universe than you ever could with Hubble-based telescopes, allowing us to find fascinating and unusual targets. For this kind of work, the surveys are the entire story. If you can understand the behavior of big chunks of the sky all at one time, we think that we’ll get a real handle on dark energy and dark matter.
We’re hoping that WFIRST will move ahead about the same time as the James Webb Space Telescope (JWST), because traditionally you always want a wide field telescope to find targets while at the same time you have the capabilities of a deep focusing telescope like the JWST.
John Kirkley: Over the centuries physics and astronomy have repeatedly changed the way we view the universe and our place in it – from Ptolemy to Copernicus and Galileo to Newton, Einstein and quantum physics. Do you see this happening again?
Saul Perlmutter: Absolutely. I think it’s a very interesting period for human cosmology because it’s a golden age in which every ten years we learn breathtaking new parts of the story. We have never before been able to do that in human history – to have the chance we have now to look at and understand the universe we live in. We will find bigger and bigger surprises as we move along. Already we’re discovering that the universe is not a simple, static place; we now know that it’s expanding and this expansion is accelerating. We also know that the behavior of the universe is changing – it’s gone through periods of rapid acceleration, then slowed, then sped up again. We also know now that some of the basics of how the universe works have changed. Which is remarkable. We have assumed that we lived with a simple set of laws, but apparently some very big attributes of the universe are contingent. This is our opportunity to think about the universe in which we live in ways that are very different from how we think about it now.
John Kirkley: So perhaps even the laws of physics themselves are changing?
Saul Perlmutter: It’s very possible that some of what we take as absolute – the givens of physics – may turn out to be evolving as well.
For several years, most news articles about a computer, network, or Internet-based compromise have mentioned the phrase "zero day exploit" or "zero day attack," but rarely do these articles define what this is. A zero day exploit is any attack that was previously unknown to the target or to security experts in general. Many believe that the term refers to attacks that were just released into the wild or developed by hackers in the current calendar day. This is generally not the case. The "zero day" component of the term refers to the lack of prior knowledge about the attack, highlighting the idea that the victim has zero days' notice of the attack. The main feature of a zero day attack is that, since it is an unknown attack, there are no specific defenses or filters for it. Thus, a wide number of targets are vulnerable to the exploit.
Zero day attacks have been discovered recently that are potentially at least seven years old. I'm specifically referencing the Flame (also known as Skywiper) malware discovered in early 2012. However, it is much more common for zero day exploits to have existed for months before discovery. Again, whenever you see the phrase zero day exploit, keep in mind it just means a newly discovered, previously unknown attack for which there is no defense at the time of discovery.
Once security researchers become aware of a new zero day exploit, they quickly develop detection and prevention measures in the process of their forensic analysis. These new detection and defense options are distributed and shared with the security community. Once organizations and individuals install updates or make configuration changes, they can be assured that their risk of compromise from that specific attack has been significantly reduced or eliminated. Once detection and defense are possible, an exploit is no longer considered a zero day, as there is now notification of its existence.
A search using the term "zero day" reveals numerous recent compromises and exploitations. In fact, this should be obvious as new attacks are by nature zero day. But since we often label the attack as a zero day exploit at the time of discovery and for a moderate period of time afterwards, that label is a useful term for tracking down the appearance of historical attacks.
In 2012, there have been several fairly significant discoveries of exploits and attacks that were labeled as zero day. These include:
Flame/Skywiper is used for targeted cyber espionage against Middle Eastern countries
An IE exploit that allows hackers to remotely install malware onto Windows systems running IE 7, 8, or 9
A Java exploit that allows hackers to remotely install malware onto system running Java 5, 6, or 7
Exploit and Vulnerability Awareness Sites
To learn of more examples of zero day exploit discoveries, I recommend visiting a few sites on a regular basis.
Exploit Database (exploit-db.com) is a community driven notification site about newly discovered zero day attacks. What I like about this site, and which is unique to this site, is that in addition to disclosing the attack, it also provides access to the exploit itself. Most other vulnerability and exploit research sites do not provide you the actual attack code. I think this is an overlooked opportunity. When you have access to the exploit code, you can develop your own filter for the attack. You don't need to wait for a vendor to release a patch or a security vendor to update their tool's database; instead, you can add in your own detection filter and stop the attack.
The MITRE organization's Common Vulnerability and Exploit database (cve.mitre.org) is one of the better known collections of attack and compromise information and research. Perusing their collection will help you stay aware of recently discovered exploits and steps you can take to avoid compromise and reduce your vulnerability.
The US CERT site (us-cert.gov) is a US government-managed site with an emphasis on providing security and exploit information to protect the nation's IT. Their mission is to provide information, promote awareness, and assist in protection preparations against all forms of compromise and abuse of computers and networking. I recommend signing up for their weekly bulletin, which summarizes the previous week's newly discovered exploits and vulnerabilities and also provides references back to their main site as well as the CVE list from MITRE.
If you visit these exploit and vulnerability awareness sites, you will notice that new zero day exploits are uncovered on a fairly consistent basis - daily. Malicious hackers across the globe (as well as security experts we perceive as being good guys) are writing new attack code with new exploits in an attempt to develop the next best computer weapon. They seek to compromise the most systems in the shortest amount of time, while gaining the most control, learning the most information, all while going without detection for as long as possible. It is a race and a battle of intelligence and creativity.
How Do Hackers Uncover New Vulnerabilities and Weaknesses?
A common question I hear from students is, "How does a hacking programmer learn about a flaw or vulnerability in the first place?" There are many ways by which new weaknesses or vulnerabilities are uncovered, but three are the most common: source code review, patch dissection, and fuzz testing (fuzzing).
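Of those three, fuzz testing is the easiest to sketch in code. Below is a minimal, hypothetical mutation fuzzer in Python – not any real tool, and far simpler than production fuzzers such as AFL or libFuzzer, which add coverage feedback and smarter mutation strategies. It randomly flips bytes in a known-good input and records every mutant that crashes the target; the parse() function is a deliberately buggy stand-in for whatever parser you might be testing.

```python
import random

def parse(data: bytes) -> None:
    """Stand-in for the code under test, e.g. a file or protocol parser."""
    # Deliberately buggy example: chokes on NUL bytes in the payload.
    if b"\x00" in data:
        raise ValueError("unexpected NUL byte")

def mutate(sample: bytes, max_flips: int = 4) -> bytes:
    """Return a copy of the sample with a few random bytes changed."""
    buf = bytearray(sample)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(sample: bytes, iterations: int = 10_000) -> list:
    """Feed mutated inputs to parse() and collect the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(sample)
        try:
            parse(candidate)
        except Exception:
            crashes.append(candidate)  # potential vulnerability to triage
    return crashes

if __name__ == "__main__":
    found = fuzz(b"GET /index.html HTTP/1.1")
    print(f"{len(found)} crashing inputs found")
```

Each crashing input is a lead: the researcher then works backward from the crash to decide whether it is merely a bug or an exploitable vulnerability.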
Cyberattacks rank as the eighth risk the world will face in 2023.
The latest Global Risks Report from the World Economic Forum lists the top risks the world will face in the next decade, including the cost of living and environmental issues. The report uses data from the Global Risk Perception Survey 2022-2023 to understand the risks the world is likely to face in the next 10 years.
Within the list of the 10 global risks, both in the short term (2 years) and in the long term (10 years), cybercrime and generalized cyber insecurity occupy eighth place.
In this regard, Israel Gutiérrez, CTO of the global cybersecurity company A3Sec, comments that cyberattacks will be increasingly common, not only against individuals’ information – whether identity theft or digital assets such as email accounts or social media – but also against companies: “we start from the digitization that we are experiencing; little by little companies are changing their processes, many of them are digitized, and either that information is stored on the company’s own computers or servers or hosted in the cloud. This completely increases the attack surface and increases the potential windows that cybercriminals can find.”
According to IBM, the average cost of a cyberattack for a company nowadays reaches 4.35 million dollars. The data also shows that ransomware, or data hijacking, has increased by 41%, which raised the cost of containment by up to $150,000. The industry that is most exposed is healthcare: the cost of a breach in the healthcare sector has increased 42% since 2020, and for the 12th consecutive year it has the highest average cost of a data breach of any industry.
“For companies, it is not a question of if a data breach or hijacking will occur, but of when it will happen – and it should be noted that this can happen more than once. The important thing here is to have the necessary protection to detect threats faster, react at the right moment, and be able to recover from them in the best possible way. In organizations that use Artificial Intelligence and automation, breaches had a 74-day shorter life cycle and cost an average of 3 million dollars less than in those that do not use them,” says Israel Gutiérrez.
The World Economic Forum report indicates that the growing interrelation of technologies with the critical functioning of societies exposes populations to direct threats, including threats to the functioning of society itself. Along with the rise in cybercrime, attempts to disrupt critical technology resources and services are expected, with anticipated attacks against financial systems, public safety, transportation, energy, and communication infrastructures. Technology risks are not limited to criminals, either: sophisticated analysis of large data sets will enable the misuse of personal information through legitimate legal procedures, undermining digital sovereignty and the right to privacy, even in well-regulated democratic regimes.
It should be noted that cybersecurity is more important for companies than for governments: at the business level it ranks fourth in importance, whereas for governments it ranks ninth.
Part 2 of CampusGuard’s series covering each of the critical controls from NIST SP 800-171
In nearly every data breach that occurs, there is a human failure somewhere in the chain of events. Human errors include mistakes like sending sensitive information to the wrong person, publishing private data to a public website, insecure disposal of sensitive information, or failing to comply with internal policies. Other incidents that are officially categorized as malware or hacking are also often due to a human failing to install updates or patches, programming errors that allowed access into a network, or clicking a link in a phishing e-mail.
All staff members, employees, temporary employees, and third-party vendors and contractors with access to your environment, should receive information security awareness training to help protect your organization from threats and safeguard your operations. Users are the single most important group who can directly impact the security of your environment by reducing errors and potential vulnerabilities. It is critical for you to educate your users so they are aware of actions they should or should not take to better protect sensitive information. But how do you implement an effective information security awareness program?
Below are some best practices for planning and implementing a program within your organization:
While training is a requirement in NIST SP 800-171, there are no prescriptive controls defined. Organizations can determine the appropriate level and methods for providing awareness training based on their specific business, the types of critical data that users have access to, and the information systems that staff are accessing. Consider referring to the NIST SP 800-50, Building an Information Technology Security Awareness and Training Program, for guidance on how to build an effective training program. This document will walk you through four critical steps, namely Program Design, Material Development, Program Implementation, and Post Implementation.
Program Design and Delivery:
Whether you are designing a new program, or updating your current one, you will want to be sure that the training addresses two basic questions:
- What behavior do we want to reinforce?
- What skills do we want the users to learn and apply in their daily roles?
The content must be relevant to the users and they need to be able to integrate this information into their daily work. Tailoring the training so that it clearly relates to their individual roles and responsibilities will make it more applicable and valuable, and avoid the feeling that this is just another checkbox requirement taking time away from their day. If users can reference the training back to specific job scenarios or things they have seen before, they will be more likely to remember and follow through with the recommendations when they are confronted with a similar threat in the real-world.
Technologies and risks are constantly changing, and the old risks aren’t exactly going away, so the options for security training topics may seem endless. Below are some general topics that we recommend that you include:
- Access Control
- Password Usage and Management
- Identity Theft
- Incident Management/Reporting
- Desktop Security
- Mobile Devices
- Data Recovery/Backup and Storage
- Information Disposal
- Violations of Security Policy
- Internet Usage
- Viruses and Malware
- Anti-virus Protection
- Vulnerability Patching
- Secure messaging/e-mail
- Social Engineering
- Insider Threats
- Physical Security/Visitor Management
You can train your users using a variety of methods. Comprehensive training should be conducted when a new team member is hired and annually for all staff. But you should also provide updates throughout the year as new risks emerge, new phishing schemes target your industry, etc. Share information from online security news websites, industry-hosted newsletters, professional organizations and vendors, periodicals, conferences, seminars, etc. Post reminders or posters in the breakroom. Varying the method of communication, and doing so throughout the year, will help remind staff of the ongoing importance of information security awareness and emphasize the lessons learned during their training. Consider conducting social engineering tests (e.g., phishing or tailgating attempts) to validate the training and identify any potential weaknesses that need to be addressed.
When launching your training initiative, be sure to clearly articulate expectations of each user, expected results of the program, and reasons why it is being implemented. As much as possible, tie the requirements back to existing policies and procedures. Support from your executive leadership team and their inclusion on communications to managers and employees can help achieve buy-in across the organization. Management should set an example for proper behavior by promoting and completing the training. Funding details should be addressed with department managers so they know whether the training is funded by the central organization or if they will share in the expense for their employees.
In addition, schedules and completion requirements must be communicated. Clearly define due dates, provide reminders, and explain the consequences for failing to comply. Verify that your teams have the tools for accurately tracking progress and then documenting all user completions.
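What that tracking looks like will vary by organization; the sketch below is a hypothetical, LMS-agnostic example of the minimum logic involved. Given each user’s last completion date (for instance, exported from your LMS or HR system), it flags anyone whose annual training is overdue or coming due.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical export from an LMS or HR system: user -> last completion date.
completions = {
    "asmith": date(2023, 11, 2),
    "bjones": date(2024, 9, 30),
    "clee":   None,  # never completed
}

REQUIRED_EVERY = timedelta(days=365)   # annual training requirement
REMINDER_WINDOW = timedelta(days=30)   # nag users due within 30 days

def training_status(last_done: Optional[date], today: date) -> str:
    """Classify a user's training as current, due soon, or overdue."""
    if last_done is None:
        return "OVERDUE (never completed)"
    due = last_done + REQUIRED_EVERY
    if today > due:
        return f"OVERDUE (was due {due})"
    if today > due - REMINDER_WINDOW:
        return f"DUE SOON ({due})"
    return f"current (next due {due})"

for user, last in sorted(completions.items()):
    print(f"{user:10} {training_status(last, date.today())}")
```

However you implement it, the output of this step is your documentation: a dated record of who completed training, who is due, and who was reminded.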
Once your information security awareness program has been implemented, processes must be put in place to monitor compliance and overall effectiveness. Training should be conducted upon hire, and at least annually. Ensure that the training materials continue to be updated as new technology and associated security issues emerge. This will keep the subject matter fresh and relevant, and keep your staff from dismissing its value due to stale content. Continuous improvement should always be the goal for security awareness and training initiatives to avoid the most common vulnerability – human error.
Some additional guidance from the CRM Team below:
[Rivkin]: The basic security requirements from the NIST SP 800-171 covering Awareness and Training (Requirements 3.2.1 and 3.2.2) state that your organization should “ensure that managers, systems administrators, and users of organizational systems are made aware of the security risks associated with their activities and of the applicable policies, standards, and procedures related to the security of those systems.”
When an institution is implementing security and compliance training, people, processes, and technologies must all be considered. In my experience, it’s evident that consistent communication with employees and students around the importance of PCI compliance is especially important given the unique environment of higher education. Establishing a structured PCI training program is essential to ensuring everyone on campus abides by university expectations and policy, remains up to date on industry changes and related risks, and is prepared to handle incidents if and when they arise.
Since its inception, WiFi has played an integral role in keeping us connected at home and in public. We’ve come to expect a standard degree of connectivity wherever we go and regularly rely on WiFi to maintain our productivity, our organization, our health, and even our protection.
Recent advances in WiFi technology have greatly contributed to the Internet of Things, allowing us to be even more connected than ever before. But how many of us know the full history behind WiFi technology? When was WiFi invented? How exactly does it work? And just how far has it come in 25 years? Here we explore the history of WiFi: where it began, what it has helped us achieve, and what future it promises as we become increasingly interconnected.
What is WiFi, and How Does it Work?
At a base level, WiFi is a way of getting broadband internet to a device using wireless transmitters and radio signals. Once a transmitter receives data from the internet, it converts the data into a radio signal that can be received and read by WiFi-enabled devices. Information is then exchanged between the transmitter and the device.
1997 – When was WiFi invented?
WiFi was invented and first released for consumers in 1997, when the IEEE 802.11 working group was formed.
This led to the creation of the IEEE 802.11 standard, which refers to a set of standards that define communication for wireless local area networks (WLANs).
Following this, a basic specification for WiFi was established, allowing two megabits per second of data transfer wirelessly between devices.
This sparked the development of prototype equipment (routers) to comply with IEEE 802.11, and in 1999, WiFi was introduced for home use.
WiFi uses electromagnetic waves to communicate data that run at two main frequencies: 2.4GHz (802.11b) and 5GHz (802.11a). For many years, 2.4GHz was a popular choice for WiFi users, as it worked with most mainstream devices and was less expensive than 802.11a.
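For a concrete sense of how these bands are laid out, each band is divided into numbered channels whose center frequencies follow simple formulas from the 802.11 standard (2407 + 5 × channel MHz for 2.4GHz channels 1–13, and 5000 + 5 × channel MHz for 5GHz channels). The short Python sketch below uses those formulas to show why 2.4GHz channels 1, 6, and 11 are the classic non-overlapping trio: channel centers sit only 5MHz apart, while each transmission is roughly 20MHz wide.

```python
def center_mhz_24(channel: int) -> int:
    """Center frequency of a 2.4 GHz channel (1-13), per IEEE 802.11."""
    return 2407 + 5 * channel

def center_mhz_5(channel: int) -> int:
    """Center frequency of a 5 GHz channel (e.g. 36, 40, 44, ... 149)."""
    return 5000 + 5 * channel

# Channels 1, 6 and 11: centers 25 MHz apart, just wider than a
# 20 MHz-wide signal, so they do not overlap each other.
for ch in (1, 6, 11):
    print(f"2.4 GHz channel {ch}: {center_mhz_24(ch)} MHz")

for ch in (36, 40, 149):
    print(f"5 GHz channel {ch}: {center_mhz_5(ch)} MHz")
```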
2003 – Getting stronger
In 2003, the faster speeds and the distance coverage of the earlier WiFi versions were combined to make the 802.11g standard. Routers were getting better too, with higher power and further coverage than ever before. WiFi was beginning to catch up – competing with the speed of the fastest wired connections.
2009 – The arrival of 802.11n
2009 saw the final version of 802.11n, which was even faster and more reliable than its predecessor. This increase in efficiency is attributed to multiple-input, multiple-output (MIMO) technology, which uses multiple antennas to enhance communication at both the transmitter and the receiver. This allowed for significant increases in data throughput without the need for higher bandwidth or transmit power.
The downside of 2.4GHz’s extended range was that an increasing number of devices (from baby monitors to Bluetooth) were using the same frequency, causing it to become overcrowded and slower. Consequently, 5GHz became the more attractive option.
Introducing simultaneous dual-band routers
To solve this issue, dual-band routers were created. These routers contained two types of wireless radios that could simultaneously support connections on both 2.4 GHz and 5GHz links. By default, devices in the range of a dual-band router would automatically connect to the faster, more efficient 5GHz frequency. However, if a device was further away or behind walls, the 2.4Ghz could be used as a backup.
2014 – Introducing WiFi 5
Primarily known as WiFi 5, the 802.11ac protocol aimed to make the 5GHz range better: it had four times the speed of 802.11n, a greater channel width, and the ability to support more antennas, meaning data could be sent more quickly. During this time we also saw the birth of the beamforming concept, which Eric Geier explains as focusing signals and concentrating data transmission so that more data reaches the target device. He notes: ‘Instead of broadcasting a signal to a wide area, hoping to reach your target, why not concentrate the signal and aim it directly at the target?’
2019 – Highly anticipated WiFi 6
The way in which WiFi works fundamentally hadn’t changed in almost a decade, and neither had its purpose.
The highly anticipated release of WiFi 6 in 2019 promised faster connectivity and better linkup between technologies, with speeds of up to 9.6Gbps – nearly three times the 3.5Gbps ceiling of WiFi 5.
The reason for WiFi 6’s jump in speed capabilities comes down to the technologies used to mitigate the problem of networks becoming overwhelmed with connected devices. WiFi 6 routers are also capable of communicating with more devices at once, enabling them to send data to multiple devices in the same broadcast.
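Those headline figures are theoretical maxima rather than real-world throughput, but the relative jump is easy to sanity-check using the 9.6Gbps and 3.5Gbps numbers quoted above:

```python
wifi5_gbps = 3.5   # theoretical maximum quoted for 802.11ac (WiFi 5)
wifi6_gbps = 9.6   # theoretical maximum quoted for 802.11ax (WiFi 6)

ratio = wifi6_gbps / wifi5_gbps
print(f"WiFi 6 ceiling is {ratio:.2f}x WiFi 5's")      # ~2.74x
print(f"i.e. a {(ratio - 1) * 100:.0f}% increase")      # ~174% increase
```

So the ceiling is roughly 2.7 times higher – a substantial jump, even if individual devices rarely see those maximum rates in practice.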
What’s the difference? – WiFi 5 vs WiFi 6
As we’ve covered, in 2014 the WiFi 5 standard (802.11ac) was introduced using the less congested 5GHz band, which was a huge improvement over the older, heavily congested 2.4GHz band. But with the newer WiFi 6 standard (802.11ax) released in 2019, we saw a huge migration in our customer base to the new standard – so why is it better?
WiFi 6 was built from the ground up to support the IoT world we live in now, providing support for more connected devices and preserving their battery life through lower power consumption.
Beyond that, the full benefits include faster speeds, better security protocols, and backward compatibility. The real bonus for Purple customers, though, is the ability to onboard more connected devices through its expanded MU-MIMO (multi-user, multiple-input, multiple-output) support, with lower latency and a robust WiFi experience for a venue’s users.
WiFi 6 is fast becoming the preferred standard for all WiFi networks. It does require a hardware upgrade, but the benefits far outweigh the cost of entry, as connectivity remains central to modern enterprises.
Looking to the future – WiFi 6e
WiFi 6e takes this a step further into the 6GHz band, which is even less crowded than the previously available bands and, at the moment, operates without interference or overlap. Again, the benefits here are lower latency and, of course, faster speeds.
It’s still early for the WiFi 6e standard and the cost is still a little high, but with the new frequency it brings, it won’t be long before this standard starts to see wider adoption.
WiFi Today and the Internet of Things
The use of WiFi today is summed up nicely by Rethink Wireless: “WiFi performance continues to improve and it’s one of the most ubiquitous wireless communications technologies in use today. It’s easy to install, simple to use, and economical too. WiFi Access Points are now set up at home and in public hotspots, giving convenient internet access to everything from laptops to smartphones. Encryption technologies make WiFi secure, keeping out unwanted intruders from these wireless communications.”
But WiFi is about more than simply getting online to check email or browse social feeds. It has also enabled a mind-blowing number of consumer electronics and computing devices to become interconnected and exchange information – a phenomenon known as the Internet of Things.
According to Wi-Fi.org, the IoT is “one of the most exciting waves of innovation the world has witnessed” and “its potential has only just begun to emerge.” WiFi-based businesses like Purple demonstrate just how much potential can be leveraged for businesses: with an ever-increasing number of WiFi-enabled devices coming on the market, Purple allows its customers to gain incredibly thorough user data through the likes of location services, social login, and a wealth of digital marketing tools.
It’s clear that WiFi is no longer a one-way street – it has become an essential part of our personal and professional day-to-day, constantly improving our efficiency and our communication, and persistently encouraging the technology industry to push the boundaries of what’s possible.
All in all, the capabilities of WiFi are endless, and with the way things are going, we are incredibly excited to see what the future holds.
Distributions, also called distros, are different operating system versions built on top of the underlying Linux kernel to support a variety of use cases and preferences. Since all distributions are built on Linux, most are similar and can be used interchangeably. Ubuntu, for example, is the most popular for its ease of use and its ability to abstract smaller configuration tasks for you by default. Arch Linux, on the other hand, favors a high level of control over simplicity so that you can fine-tune the way your system functions.
Below is a full list of distributions available on our platform. Once you've decided on a distribution, we recommend selecting the latest LTS (Long Term Support) release for systems intended for production use. This ensures that the system receives security updates for as long as possible. For release and version information, see Supported Distributions.
| Distribution | Description |
|---|---|
| AlmaLinux | A nearly binary compatible derivative of RHEL intended to provide a long-term stable replacement for CentOS. Made by the same team as CloudLinux OS. |
| Alpine | Recommended for advanced Linux users only. Lightweight distribution popular with Docker and security-minded users. |
| Arch | Recommended for advanced Linux users only. Powerful and detail oriented, empowering more advanced users to fine-tune their configuration. |
| CentOS | Widely popular in professional and business settings while still being accessible to the average user. Versions 8 and earlier are binary equivalents of their corresponding RHEL (Red Hat Enterprise Linux) release. CentOS Stream has replaced CentOS and receives updates just ahead of the corresponding RHEL version. |
| Debian | A popular and stable distribution that's been actively maintained longer than most other distributions. |
| Fedora | Implements bleeding-edge software. Fedora is similar to, though more advanced than, CentOS and great for users who want the newest of the new and don't mind an added layer of complexity. |
| Gentoo | Recommended for advanced Linux users only. Advanced distribution designed for power users who want more control over their configuration and are comfortable compiling everything from source. |
| Kali Linux | Recommended for advanced Linux users only. A specialized and advanced Debian-based distribution designed for penetration testing and security auditing. This is a minimum installation, allowing you to install only the tools and metapackages you require. |
| openSUSE Leap | Provides powerful tools specific to system administration tasks. Starting with version 15.3, this distribution maintains parity with SLE (SUSE Linux Enterprise), making it a great choice for users of SLE or those looking to benefit from enterprise-grade stability. |
| Rocky Linux | A nearly binary compatible derivative of RHEL intended to provide a long-term stable replacement for CentOS. Built by a community team led by the founder of the CentOS project. |
| Slackware | Recommended for advanced Linux users only. The oldest actively maintained distribution, and one of the most UNIX-like Linux distributions available. |
| Ubuntu | Arguably the most popular Linux distribution, widely regarded for its ease of use. The LTS versions of Ubuntu are featured heavily in our guides and across the community. |
Though this list covers the most popular distributions, you can also create a Compute Instance using a distribution that we do not provide. See our Custom Distribution Guide for more information.
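If you're not sure which distribution and release an existing system runs, most modern distributions record this in the /etc/os-release file. The short Python snippet below is one illustrative way to read it; the field names follow the freedesktop os-release convention:

```python
import pathlib

def read_os_release(path="/etc/os-release"):
    """Parse the os-release file into a dict of KEY: value pairs."""
    info = {}
    for line in pathlib.Path(path).read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            info[key] = value.strip().strip('"')
    return info

release = read_os_release()
print(release.get("PRETTY_NAME"))  # e.g. "Ubuntu 22.04.4 LTS"
print(release.get("VERSION_ID"))   # e.g. "22.04"
```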
Creator: University of Minnesota
Category: Software > Computer Software > Educational Software
Tag: Automation, development, industry, processes, software
Availability: In stock
Price: USD 59.00
Software is quickly becoming an integral part of human life as we see more and more automation and technical advancement. Just as we expect a car to work all the time and can't afford to have it break down or reboot unexpectedly, the software industry needs to keep learning better ways to build software if software is to become an integral part of human life.
In this course, you will get an overview of how software teams work, what processes they use, and what some of the industry-standard methodologies are, along with the pros and cons of each. You will learn enough to hold a meaningful conversation about software development processes.
After completing this course, a learner will be able to: 1) apply core software engineering practices at a conceptual level for a given problem; 2) compare and contrast traditional, agile, and lean development methodologies at a high level, including Waterfall, Rational Unified Process, the V-model, Incremental, and Spiral models, along with an overview of the agile mindset; and 3) propose a methodology best suited for a given situation.
Introduction: Operating System & Application Software
Are you new to this computer revolution? Then you are probably confused about the difference between an operating system and application software. The two differ in many ways; in this article, we will discuss the differences between them. Let's start with a proper introduction to both.
What is an Operating system?
An operating system is computer software that acts as an interface between the hardware and the other software running on the computer. It is the prime, or core, software that runs all the other software on the system. There are different types of operating systems based on the needs and functions of the device.
The most commonly known operating systems are Windows and macOS. The operating system is the first software installed on a computer when you purchase it.
Features Of Operating System
Here are the features of the operating system –
- The main feature of the operating system is the user interface, which gives users a friendly way to work with the machine. Each operating system has its own user interface.
- The next important feature is memory management. The OS controls the hardware and assigns memory space to the respective programs.
- Next is process management. The OS runs all the programs and allocates resources like RAM and CPU cores to the respective processes.
- Scheduling – the OS manages regular updates and schedules tasks based on their importance.
- Security management – the OS provides three levels of security to the user: file access level, system level, and network level.
What is an Application Software?
Application software is commonly known as an app. It is a computer program created to perform certain tasks, helping people complete their work faster. You see various application software in your day-to-day life: when you want to hear music, you use VLC or Windows Media Player, application software created to play audio files.
Features of Application Software
Here are the features of application software –
- It performs specific tasks like word processing, spreadsheets, and so on.
- It needs memory space according to its size.
- It is designed to be easy to use and to interact with the user.
- There is both free and paid application software.
- It is generally written in high-level, modern languages.
Difference Between Operating System & Application Software:
Though they work together, there are many differences between them. You can easily understand the distinction if you learn the relationship between them. Here is an example –
- When the user gives a command to application software, say the VLC media player, the application communicates the request to the operating system, and the OS then uses the hardware resources (the speaker) to play the music.
In simple terms, the operating system is like the ground or foundation, while application software is the building.
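To make the relationship concrete, here is a minimal, illustrative Python sketch in which an ordinary application asks the operating system for the kinds of services described above: process information, hardware details, file handling, and launching another program. The file name and command below are placeholders:

```python
import os
import platform
import subprocess

# Process management: the OS assigned this program a process ID.
print("Process ID assigned by the OS:", os.getpid())

# The OS reports on itself and the hardware it manages.
print("Operating system:", platform.system(), platform.release())
print("CPU cores available:", os.cpu_count())

# File management: the OS mediates all access to storage.
with open("example.txt", "w") as f:  # placeholder file name
    f.write("The OS wrote this to disk on the app's behalf.\n")

# Process creation: the app asks the OS to start another program.
# ("echo" is illustrative and assumes a Unix-like system.)
subprocess.run(["echo", "Launched via an OS service"])
```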
Comparison Table: Operating System vs Application Software
Here are the major differences between them:
PARAMETER | OPERATING SYSTEM SOFTWARE | APPLICATION SOFTWARE |
--- | --- | --- |
Definition | A computer program that manages the hardware resources of a computer and creates an environment to run other software. | A computer program created to perform specific tasks. |
Installation | Installed on the system when the device is purchased. | Purchased or downloaded from the internet, then installed using installation packages. |
Objective | To effectively manage the hardware resources. | To perform a specific task. |
Dependency | Independent; all other software, including application software, depends on it. | Highly dependent on the operating system. |
Run Time | Starts (boots) when the user turns on the computer and stops at shutdown. | Opened by the user when a particular task needs to be done; its run time depends on the task's duration. |
Size | Usually measured in gigabytes (GB). | Size differs based on the complexity of the task, usually megabytes (MB). |
Programming Language | Developed using C, C++, and assembly languages. | Developed using Visual Basic, C, C++, and Java. |
Importance | Very important: a computer cannot run without an operating system. | Less critical than the operating system; it cannot be used without one. |
Functions | Handles the primary functions of process management, memory management, task scheduling, hardware identification, and so on. | Performs a specific task; for example, a music player plays audio files. |
Examples | Windows, Linux, NOS, DOS, Unix, etc. | Word, VLC, PowerPoint, Chrome, Skype, etc. |
If you want to learn more about operating systems, check our e-book of Operating System interview Q&A, in an easy-to-understand PDF format with relevant diagrams (where required) for better understanding.
ABOUT THE AUTHOR
I am here to share my knowledge and experience in the field of networking with the goal being – “The more you share, the more you learn.”
I am a biotechnologist by qualification and a Network Enthusiast by interest. I developed interest in networking being in the company of a passionate Network Professional, my husband.
I am a strong believer of the fact that “learning is a constant process of discovering yourself.”
– Rashmi Bhardwaj (Author/Editor) | <urn:uuid:e19f23c0-c27c-44e5-84a7-bc9cf4d4ded3> | CC-MAIN-2024-38 | https://ipwithease.com/operating-system-application-software/?wmc-currency=INR | 2024-09-08T16:16:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00186.warc.gz | en | 0.938476 | 1,136 | 3.546875 | 4 |
Botnets are trending in 2024. We’ve scoured the web to find some of the most interesting and relevant statistics surrounding the use of botnets and the concerning ways they are being used.
What is a Botnet?
A botnet is a collection of devices connected via the internet, each running one or more bots. Distributed denial-of-service (DDoS) attacks are the most common use of botnets, though botnets are also used for crypto mining and click fraud once attackers have gained access to the devices. Once a botnet is set up, the attackers can use command-and-control (C&C) software to direct it.
One of the main advantages of a botnet for an attacker is harnessing the computing power of hundreds or thousands of machines. Because attacks come from so many different devices, the botnet hides the attacker's C&C, making attacks harder to block or trace.
1. Botnet attacks increased by 23% in a quarter
2. Russia has the most significant proportion of Botnet C&Cs
The report continued, showing that Russia saw a 64% rise in botnet C&Cs between Q2 and Q3 2021. Q3 to Q4 saw a much more dramatic increase with a 124% rise in these attacks.
3. The most used malware type was credential stealers
Malware droppers, bots that perform a series of malicious activities like downloading and executing additional malware, participating in DDoS attacks, and stealing login information, were the most recorded malware bots in Q3 2021. Spamhaus's report showed they were overtaken in Q4 2021 by an even greater threat: credential stealers.
4. DDoS attacks set a new record in the first half of 2021
A 1H 2021 threat intelligence report from NetScout found that 5.4 million DDoS attacks were launched in the first half of 2021, which shows an 11% year-on-year increase. DDoS attacks are typically launched using botnets to overwhelm a target’s servers.
5. DDoS extortion attacks target ISPs
NetScout also reported a rise in cybercriminals targeting ISPs in 2021 via the Lazarus DDoS extortion campaign, focusing on authoritative domain servers. These DNS servers match domain names with IP addresses and tell recursive DNS nameservers where websites can be found.
6. Adaptive DDoS attacks increased in 2021
In 2021, attackers developed adaptive DDoS attack strategies to evade traditional mitigation techniques. Adversaries tailor each attack to bypass several layers of DDoS mitigation, whether cloud-based or on-premises.
7. The finance sector is the most targeted industry for botnets
Interpol’s ASEAN Cyberthreat Assessment 2020 measured how botnets affected different organizations based on their industry. The report found that the finance sector and its customers were prime targets in 2019. Attackers aimed to gain remote control of victims’ computers to harvest personal data such as banking details or to install and spread other types of malware.
8. Gafgyt and Mirai IoT botnets make up half of all DDoS attacks
The NetScout 1H 2021 report also showed that adversaries used tracked botnet clusters worldwide to facilitate more than 2.8 million DDoS attacks, with Gafgyt and Mirai botnets making up more than half of them. The report states, “Ransomware gangs added triple-extortion DDoS tactics to their repertoire. Simultaneously, the Fancy Lazarus DDoS extortion campaign kicked into high gear threatening organizations in multiple industries with a focus on ISPs and specifically their authoritative DNS servers.”
9. Networks in the LatAm region hosted the most active botnet C&Cs
Spamhaus reports the LatAm region hosted 60% of active botnet C&C listings in Q4 of 2021. In its chart of the networks that hosted the greatest number of botnet C&Cs in the final quarter of 2021, Spamhaus explained that the listed hosts either have an abuse problem or don't take the required action when abuse reports are received.
10. Canada has the most abused domain registrar overall
The United States and China came in the top three locations where the most domain registrars are abused, with 926 and 864 botnets respectively at the end of 2021. Meanwhile, Canada took the top spot with 1,115 botnets overall, claiming over 23% of the total botnets identified in Spamhaus's survey.
11. 1 in 20 IoT devices is protected by a firewall or similar software
12. Credential theft is on the rise
Enisa’s findings showed that 60% of botnet activity is associated with stealing user credentials.
13. 300,000 instances of Emotet were observed in 2019
Enisa found a significant increase in notifications of Emotet botnet traffic in 2019, up 100,000 from the previous year. Comparing the second half of 2018 with the second half of 2019, researchers believe Emotet botnets surged by 91.3%.
14. Mirai variants increased significantly in 2019
There was a 57% increase in Mirai botnet variants identified in 2019. Mirai variants are typically used for brute force attacks on IoT devices. These attacks increased by 51%, while web exploits rose 87% in 2019.
15. The Andromeda botnet dominated in 2019
Although the Andromeda botnet was dismantled in 2017, Interpol reports that it was the most dominant botnet variant in the first half of 2019, accounting for 15% of all botnet infections. It was closely followed by Conficker, Necurs, and Sality at 14% each.
16. Gafgyt grew by 3.9 times in 2019
The Botnet Trend Report from NSfocus showed that Gafgyt was still very much active in 2019: there was 3.9 times as much Gafgyt malware as in the previous year, and the average number of C&C servers rose by 34.5%.
How Do Botnet Takeovers Work?
The first step for an attacker creating a botnet is obtaining or writing malware that can be remotely controlled over the internet.
The attacker deploys malware on the target computer over the internet. This might be by hosting the file on a compromised website for a victim to download, sending the file as an email attachment, or sending a fake URL via a messaging app that infects the device when the user opens the link. The hacker could also scan the web for vulnerable IoT devices to perform brute force login attempts to infect the device remotely.
Once a computer is infected, it makes the hacker’s job easier to spread the infection to other devices that interact over the internet. For example, the hacker could set up the malware so that the infected system sends spam emails to other systems to spread the infection, thus giving them control of a larger botnet cluster.
The malware used is often a trojan-style variant disguised as a legitimate file hiding behind an executable. Opening the executable is enough to infect a device, though some IoT infections, like Mirai (see the example below), don't require any user input to infect and spread between devices.
Examples of botnet attacks
Malware used to create a botnet comes in many forms, but the following examples are some of the commonly used methods to trick users into opening an infected file:
- You visit a compromised website (without knowing it) and find what appears to be a valuable piece of software designed to speed up your computer. You download the executable, and it has the filename you expected, so you go ahead and open it. This infects your device, allowing the hackers to add it to a botnet.
- Someone sends you an email message; the sender looks authentic, and an image attachment or a billing agreement asks you to act urgently. You open the attachment, but it's not a PDF or JPG, it's an executable containing malware ready to infect your device.
- Malvertising is a newer method attackers use to infect devices. They inject malware into a pop-up ad or a banner ad. When you visit the compromised website, you don't even need to click the ad to get infected; sometimes it auto-downloads a file to your device containing the infection needed to grow the botnet.
- An IoT (Internet of Things) botnet attack called Mirai (named after a popular anime TV series) emerged in 2016, when a botnet consisting of 100,000 computers was created after being infected with malware. Mirai launched attacks against major cybersecurity organizations and generated traffic volumes in the region of 1Tbps, which took down a significant part of their infrastructure. Mirai began by sending TCP SYN probes to random IPv4 addresses on Telnet TCP ports 23 and 2323. Once it detected open Telnet ports, it performed brute-force login attempts to infect and spread the malware to more computers.
This article addresses some of the challenges of symmetric cryptography as applied in banks, describing how it works and what are its unbeaten advantages as opposed to asymmetric cryptography.
Some major challenges of symmetric cryptography
Firstly, the complexity of key management grows significantly with every additional user who requires access to the secret key to encrypt or decrypt data. If several users require access to the secret key, you may need special processes to manage it.
Also, symmetric keys have no metadata inherently attached to a key - therefore key expiration tracking and rotation must be maintained by an additional tool (such as a central key-lifecycle management system).
Furthermore, unauthorized disclosure of one symmetric key makes all users/subscribers vulnerable - therefore symmetric keys need to be protected by highly secure storage units, such as Hardware Security Modules (HSMs). At the end of the key lifecycle, the key should be updated to a new key and the old one terminated or retired for all users.
Symmetric and asymmetric cryptography
In a symmetric scheme, there is a “secret” or a “key” that is used both to encrypt and to decrypt. This key must be protected.
The asymmetric cryptographic scheme appeared after 1970. The idea here is to use one key to encrypt, and another to decrypt.
The asymmetric scheme, with variations, is widely used for protecting online communications nowadays. Public Key Infrastructure (PKI) is implemented everywhere and uses asymmetric cryptography.
There are people and organizations, such as the military, who do not trust asymmetric schemes. Why? There is a strong reason for this distrust: a weaker mathematical basis. Asymmetric cryptography relies on “computational complexity”, whereas symmetric schemes rest on proven mathematics. Computational complexity is always “relative” or “comparable”, so your enemy or attacker could be more powerful than you. For example, you might use a 2048-bit RSA key, but they could use an IBM Parallel Sysplex (parallel processing across a cluster of high-performing computers) to attack your cipher.
Despite some potential weaknesses, asymmetric cryptography is widely used and will continue to be used due to its advantages over symmetric schemes.
How does the symmetric scheme work in reality
Symmetric-scheme devices are bulky and not user-friendly; they were designed to resist any exploitation attempt. Typical fields of application are banking and military-grade equipment.
How does it all work? Imagine we have two devices designed to maintain a point-to-point encrypted channel of some kind. Each device holds a few symmetric keys:
- A master key or keys. This is a long-term, highly physically protected key, used to decrypt other keys.
- KEK - a key encryption key. Also heavily protected and stored long-term.
- A session or block key, generated on board by a hardware random number generator.
KEKs are installed by a security officer or, in an automated arrangement, by a key management system. A key distribution center sends a KEK encrypted under the master key.
The device decrypts the KEK and stores it. When the device has data to encrypt, it performs the following actions:
- Creates a session key using a hardware random number generator
- Encrypts a small amount of the data with the session key
- Encrypts the session key with the KEK, taking certain precautions (key expansion and wrapping)
- Sends the encrypted data and the encrypted session key to the other side. The dispatch itself is performed by different equipment.
- Destroys the session keys
- Every N kilobytes of plain input data, re-creates the session key and repeats steps 1-3.
- The receiving party gets an encrypted piece of data and an encrypted session key.
The scenario can vary, but the problems remain the same.
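The session-key-and-KEK pattern above is essentially envelope encryption, and the core steps can be sketched in software. Below is a minimal, illustrative Python example using the cryptography library; os.urandom stands in for a hardware random number generator, AES-GCM stands in for the bulk cipher, and AES key wrap (RFC 3394) plays the role of the KEK step, so treat it as a sketch of the idea rather than banking-grade practice:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Long-term key-encryption key (KEK). In real equipment this would be
# delivered by a key distribution center and held inside an HSM.
kek = os.urandom(32)

# Step 1: create a session key (os.urandom stands in for a hardware RNG).
session_key = os.urandom(32)

# Step 2: encrypt a small amount of data under the session key.
plaintext = b"funds transfer: account 1234 -> 5678"
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)

# Step 3: wrap (encrypt) the session key under the KEK.
wrapped_session_key = aes_key_wrap(kek, session_key)

# Step 4: send ciphertext plus wrapped key; then destroy the session key.
message = (nonce, ciphertext, wrapped_session_key)
del session_key

# Receiving side: unwrap the session key with its own copy of the KEK,
# then decrypt the data.
nonce, ciphertext, wrapped = message
recovered_key = aes_key_unwrap(kek, wrapped)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))
```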
There are many practical problems arising in this scenario:
- How can we keep master keys secret and protected for a long time?
- KEKs have to be rotated periodically - how often, and by whom?
- A KDC - key distribution center - must be used. How do we secure the delivery channel from the KDC to a subscriber?
- Every communication link between two subscribers has to use a common key (in this scheme, a KEK). The challenge is how to rotate these keys at the same time in all equipment owned by all subscribers.
Why is the symmetric scheme still in use?
Symmetric algorithms resist exploitation attempts in a way that remains unmatched by any other encryption scheme. In addition, thanks to the short recommended key lengths and the simplicity of the algorithms, data processing is much faster, and thus more economical, than asymmetric encryption.
Imagine the millions of concurrent transactions handled during payment card processing or financial transactions. It is vital to have a fast and highly safe scheme, and symmetric key encryption is the right fit here.
But how do we handle the complex key handling challenges?
Let us turn the challenge into a feature. Cryptomathic provides an industry-grade crypto key lifecycle management system, CKMS, built around a hardware security module (or a cluster of hardware security modules). This system automates all the tedious management tasks and provides convenient, auditable analysis and logs in one central place, so compliance audits, like PCI-DSS, are facilitated as well.
With such a key management system, symmetric keys can even be distributed into the most remote peripherals of a corporate or intercorporate architecture without compromising security.
Symmetric key schemes have proved to be the right choice for banking-grade security, well outbalancing their disadvantages compared to asymmetric cryptography. Professional banking-grade key management systems like CKMS help compensate for the disadvantages of symmetric cryptography and turn them into advantages.
Take a look around you. Whether you’re in the comfort of your own home or in a public place, chances are you can spot some “smart” devices. As most of you tech savvy individuals know (I’m giving you the benefit of the doubt here), smart in this context refers to devices that have the capability of going online. But don’t get too comfortable with what you refer to as smart because the technological world as you know it is changing. Let’s just hope you can keep up!
Understanding The Internet of Things
To fully understand this technological advancement, you'll want to be familiar with the “Internet of Things” (IoT). In broad terms, this means connecting devices with an on and off switch to the internet or even to each other. This includes common devices such as smartphones (e.g. iPhones) and tablets (e.g. iPads), as well as household appliances from coffeemakers to washing machines. One analyst firm based out of Connecticut estimates that there will be over 26 billion connected devices by the year 2020. That's nearly three times the current population of the world - isn't that crazy? It's almost as if technology is invading this place we call home.
Data Creation In The IoT Network
As more devices join the IoT network, the amount of data being created will increase. With a whole lot of data being created each year (2.6 quintillion bytes, to be exact, for you computer geeks), 90% of all the data in the world has been created in the past two years. This abundance of available data is known as Big Data (who would've guessed?), and businesses utilize this consumer information to optimize performance, or so they say to make you less suspicious.
Looking At The Pros & Cons
So what exactly are the pros and cons of IoT, you may ask? Well, in the business sector, companies can use this data for better marketing and brand-building potential. Between you and me, this is just a fancy way of saying that companies will have more information on you and more ways to interfere in your daily life. While this is a pro for companies, it very well could be a con for all consumers. After all, what happened to that little thing called privacy? Even when you think what you're doing is off the radar, you can never be too sure, because someone or something is always watching. The list of cons doesn't even stop here. With IoT, companies will face even greater competition because they are no longer concentrated on just selling the product, but also the service. And, yes, this does affect you as a consumer too - lucky you. It means that companies can increase the prices of their products because they won't be just products anymore.
That about sums up the whole IoT phenomena. Now it’s up to you what you want to do: join the revolution or go against it? | <urn:uuid:51b8f720-e4cb-4590-807b-93e9b86a9d8f> | CC-MAIN-2024-38 | https://www.databahn.com/blogs/fortune-1000-sales-trigger-events/the-internet-of-things-iot-or-the-invasion-of-technology-iot | 2024-09-21T01:05:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00086.warc.gz | en | 0.958784 | 619 | 2.734375 | 3 |
What Problems Can Cybersecurity Solve: Applying Yesterday’s Solutions to Today’s Problems
One popular adage that resonates in the cyber world goes, “the best jam is made in old jars.” It is possible to juggle future technologies while relying on existing skills and updating old, tested methods.
The cyber-world is limitless, but no one knows its extent. The existing knowledge and technologies in cybersecurity help create a solid skills base by recycling old techniques as the core principles required in the cyber world.
Here, we focus on some of the skills used before but that are still applicable in current cybersecurity to offer best security practices. Also, find all the best information to understand what’s happening in information technology and how you can apply it in your organization.
Time-Resistant IT Skills Suited For Cybersecurity
Anytime you think about cybersecurity, the first thing that comes to mind is IT, the general term that covers everything related to the internet and computing. People think less of OT (operational technology), which is more oriented toward the industrial sector. However, that sector has seen tremendous changes and transformations that bring their own share of cybersecurity issues.
IT is dynamic and constantly evolves with new applications, new uses, enhanced user groups, and new devices. Cybersecurity players continuously adapt to the frantic rate of change by devising protective systems even when there are no clear visions of what could potentially happen.
The stakes are high, and the need for improvisation has increased. The cyber world's ability to recycle its various tried and tested defensive techniques helps deliver more streamlined responses to these issues. Also, skills that become obsolete in one sector can be applicable in another. For example, protective methods formerly used in IT are relevant to many current OT sector issues.
A significant example is the IPS (Intrusion Prevention System). The solution offers detailed analytics of network communications, allowing users to check for flaws such as an unexploited protocol or a malicious command in their systems. IPS is still extensively used in the IT world, but it is proving even more valuable in industries where a single altered connection may cause catastrophic results.
The takeaway here is that, although OT previously had no ties to the internet, the operational networks used as production lines are now exposed to cyber threats and will need the measures long used against IT attacks for security.
New Solutions Learn From Old: Same Goes For Cyber Attacks
The changes in cybersecurity occur mainly due to the changing landscape of cyber attacks. Notably, cyber attacks take advantage of both yesterday's skills and today's technological advancements. According to Adrien Brochot, about 90% of new attacks build on old attacks; attackers adapt them to the security barriers and break into systems by applying old techniques.
New attacks create further vulnerabilities in systems as attackers change their methods and how they access systems. They recycle everything, including malware that is often just a variation of its predecessors. An attacker infiltrates a system by looking at its vulnerabilities and using prior cases to exploit them.
Therefore, creating old methods, tailoring cyberattacks, and using technological advances in the right way summarizes the evolution of cybersecurity. Since attackers exploit the areas that are likely to be less protected, the focus remains on how an organization can protect the integrity of their systems, using both yesterday’s methods and current technologies.
Role of Cybersecurity Solutions
The central role of cybersecurity solutions is to protect. However, to thoroughly examine their protective function and understand how they help, it would be best to establish how the solutions work and understand the world of cyber attacks.
Analyzing how attackers operate is key to discovering the flaws and bridging the necessary gaps to secure a system.
Security analysis helps offer accurate insights on how to identify hacks and solve them before they blow up. It will also help to understand the actions of an attacker and how it spreads across systems, whether in an IT or OT environment.
All this information becomes important when used to control a cyber ecosystem by listing attacks and assessing vulnerabilities and patches in an organization's IT system. With detailed information, it is easy to form a broader picture of the possibilities and tailor cybersecurity solutions accordingly.
The changes taking place in the cyber world pose severe challenges to cybersecurity players. But, creating a hybrid environment where they get information from old techniques and mix it with current technologies can form tools and better opportunities to address any issues.
The cyber community should also share tools and knowledge to leverage the advantages of creating a hybrid environment.
Problems Cybersecurity Solutions Solve
Businesses of any size and type experience various possible security vulnerabilities daily. Cybersecurity helps to address these problems to enable your business continuity and boost production in the bottom line. Some of the issues mitigated include:
- Human errors: Employee errors are by far a significant reason for most data breaches. Therefore, cybersecurity solutions help reduce risks of human errors by reducing access to harmful sites and identifying phishing schemes.
- External threats: With advancements in technology, hackers have become increasingly skilled at finding ways through firewalls and stealing company data. Cybersecurity helps ensure company firewalls, anti-virus software, and other solutions are up to date to effectively protect the IT infrastructure and eliminate downtime.
- Third-party app security: Most companies use third-party apps whose creators did not have your business's safety in mind. Some of these apps lack adaptable security measures, which could expose the business to severe attacks. Cybersecurity allows you to identify such vulnerabilities and weed them out.
- Unsecured cloud storage: Cloud servers are the new normal for most businesses, but they also carry security risks. Security breaches have increased as more businesses migrate to the cloud. Network security services will help ensure all your cloud systems are safe and properly secured against such breaches.
Contact Mathe Inc. to get expert help with withstanding cyber threats and protecting your company’s valuable data. You will find the best cyber security experts with the ability to assess and diagnose risks. Request a proposal today!
With over 35 years in the business of supporting and implementing technology for the SME market, and 6 years previously in Corporate IT and Voice. I have seen a great deal of change. The only common thread is I have always focused on the Business Wise application of Technology. We always try to look 5 years ahead of the current technology to make sure our clients are on the right track to meet current and future needs. | <urn:uuid:d3fd9437-2f82-489a-a4dc-fb7c4f563162> | CC-MAIN-2024-38 | https://www.mathe.com/cybersecurity-solve/ | 2024-09-08T18:35:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00286.warc.gz | en | 0.947288 | 1,335 | 2.75 | 3 |
What Is Open Banking?
Open banking is a system in which third-party financial service providers access consumer transactions and other financial data from banks and nonbank financial institutions through the use of APIs. Most people use open API systems on a regular basis, even if they don’t realize it.
Have you used CashApp or Venmo to pay friends back for picking up a dinner check? That’s open banking. People also use open banking when they transfer money from their checking accounts to their Robinhood accounts, or if they use budgeting apps that connect with their banks or credit card providers. According to Visa, 87 percent of U.S. consumers use open banking, but only 34 percent know that open banking drives their connected financial services.
But what truly distinguishes open banking from any other financial process? The answer is APIs.
What Are Open Banking APIs?
IBM defines an API as “a set of rules or protocols that enables software applications to communicate with each other to exchange data, features and functionality.” This allows previously siloed data to exist across many applications.
There are several different types of APIs:
- Data APIs: These provide read-only access to financial data such as account information, balances and transaction history.
- Transaction APIs: These enable everything from transferring funds to making direct deposits and sending payments.
- Product APIs: These enable third parties to list financial products, rates and terms. They are often used by comparison websites or marketplaces.
APIs are the backbone of open banking. They essentially act as bridges, allowing secure and standardized methods for third parties to access financial data with the consumer’s consent.
“While the sharing of customer-permissioned banking data with third parties is common in traditional banking, open banking differs in that it relies on APIs and advanced data aggregation techniques to systematically access and share this data,” notes CB Insights.
How Do Open Banking and APIs Work?
The open banking process generally involves four parties: the end customer, the financial institution, the data aggregator, and the third-party app. While the specifics of each step may vary depending on which third party is being connected (a brokerage app versus a budgeting tool, for example), there's typically a three-step process.
- The bank builds dedicated endpoints. These endpoints are what third parties call to obtain the specific financial data that consumers have given permission to be shared.
- The data aggregator serves as an API bridge. The data aggregator serves as the middleman, accessing consumer-permissioned data from the bank and then sending it securely to the third-party service.
- Banks give encrypted access to the third party. When end consumers successfully sign in to their financial institutions to allow third-party access, their banks create encrypted API tokens. These tokens work between the banks and the data aggregator, creating an ongoing connection so that the aggregator doesn't have to store the consumer's bank account credentials.
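To make step 3 concrete from the third party's side, here is a hedged Python sketch of presenting a consumer-permissioned token to a read-only data API. Every URL, endpoint, field name, and token below is hypothetical; real open banking APIs differ by aggregator and region:

```python
import requests

# Hypothetical values; a real aggregator issues its own endpoints and tokens.
API_BASE = "https://api.example-aggregator.com"     # placeholder URL
ACCESS_TOKEN = "consumer-permissioned-oauth-token"  # obtained via consent flow

def get_account_balances(account_id: str) -> dict:
    """Call a (hypothetical) read-only data API for account balances."""
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/balances",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of bad data
    return response.json()

if __name__ == "__main__":
    print(get_account_balances("acc-123"))
```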
NOTE: Modem pacing is set from the
It is possible for Reflection to transmit data to a serial device faster than the device can process it, or for a serial device to transmit data to Reflection faster than Reflection can process it.
Should this continue for too long, the slower system's buffer overflows and data is lost. If the serial device recognizes the XON/XOFF handshake, you can prevent the buffer from overflowing by keeping this value set to XON/XOFF. XON/XOFF transmit pacing works as follows:
When the receive buffer has a limited amount of space left, an XOFF (DC3) character is sent as a signal to stop transmitting.
After processing most of the backlog of characters in the receive buffer, an XON (DC1) character is sent as a signal to resume transmission.
The two systems continue in this stop-and-go fashion until all the data has been transmitted.
When RTS/CTS is selected, the RTS and CTS pins on the RS-232 serial cable control data flow. When both the transmit and receive pacing options are set to RTS/CTS and you're emulating a VT series terminal, Hold Session (VtF1) has no effect.
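Outside of Reflection, the same two pacing choices appear in most serial programming libraries. The following illustrative Python sketch uses the pyserial package; the port name and baud rate are placeholders for your own environment:

```python
import serial  # pyserial package

# Software flow control: XON/XOFF characters pace the data stream.
sw_port = serial.Serial(
    port="/dev/ttyUSB0",  # placeholder; use e.g. "COM1" on Windows
    baudrate=9600,
    xonxoff=True,   # honor DC1/DC3 pacing characters
    rtscts=False,
    timeout=1,
)
sw_port.write(b"data paced by XON/XOFF\r")
sw_port.close()

# Hardware flow control: the RTS/CTS pins pace the data stream instead.
hw_port = serial.Serial(
    port="/dev/ttyUSB0",
    baudrate=9600,
    xonxoff=False,
    rtscts=True,    # pause transmission while CTS is deasserted
    timeout=1,
)
hw_port.write(b"data paced by RTS/CTS\r")
hw_port.close()
```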
- Select a flow control method to use when Reflection transmits data to a serial device on this port.
- Select a flow control method to use when the serial device on this port transmits data to Reflection.
- It is possible for Reflection to send data to the host faster than the host can receive it. For example, if you paste text from the Clipboard into a host editor such as EDT, you may overrun the host's buffer. By setting a delay between characters, you can specify how long Reflection should wait after each character when transmitting blocks of characters to the host. This delay also affects character transmission during file transfers. Setting a value of 3 at 9600 baud lowers the effective speed of data transmission to about 2400 bits per second. On a VMS host, setting the terminal's HOSTSYNC characteristic can also help prevent overrunning the host's buffer when pasting data. To do this, enter the following command at the DCL prompt: SET TERMINAL/HOSTSYNC. For backward compatibility, you can enter a character delay value of up to 255; however, the maximum in Reflection always reverts to 100.
- Set the amount of time Reflection should wait after transmitting a carriage return character (the line delimiter) before transmitting the next line. This setting also affects the delay between frames during file transfer using the WRQ/Reflection protocol. Assigning a delay may help if you are experiencing file transfer problems over an X.25 connection.
- Select this option to send communication calls to an independent thread that handles I/O (input/output) processing, thus improving performance.
To pass certificate verification on edge servers, clients need to provide the root CA certificate or the root and intermediate certificates that signed the certificate that client devices use to identify themselves. The certificate authority and intermediate certificates, known as a certificate chain, are a list of certificates issued by successive certificate authorities (CAs) that enables edge servers to verify that the client and all CAs are trustworthy.
In the figure, you can see a certificate chain that leads from a certificate that identifies a client through two intermediary CA certificates to the CA certificate for the root CA.
A certificate chain follows a path of certificates in the hierarchy up to the root CA certificate. In a certificate chain, each certificate must meet the following conditions:
Each certificate is followed by the certificate of its issuer.
Each certificate contains the name (DN) of that certificate's issuer, which is the same as the subject name of the next certificate in the chain.
Each certificate is signed with the private key of its issuer. The signature can be verified with the public key in the issuer's certificate, which is the next certificate in the chain.
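As a rough illustration of the first two conditions, the Python sketch below (using the cryptography library) checks a single link in a chain. The file names are placeholders, the issuer is assumed to use an RSA key, and real chain validation also checks validity periods, extensions, and revocation status:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load a certificate and the CA certificate assumed to have issued it.
with open("client.pem", "rb") as f:   # placeholder file names
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Condition: the certificate names its issuer, which must match the
# subject name of the next certificate in the chain.
assert cert.issuer == issuer.subject

# Condition: the certificate's signature verifies with the issuer's
# public key (RSA assumed here; EC certificates use a different call).
issuer.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    cert.signature_hash_algorithm,
)
print("Chain link verified")
```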
Today Fujitsu Laboratories announced a collaboration with the University of Toronto to develop a new computing architecture to tackle a range of real-world issues by solving combinatorial optimization problems that involve finding the best combination of elements out of an enormous set of element combinations.
Combinatorial optimization is a subset of mathematical optimization that is related to operations research, algorithm theory, and computational complexity theory. It has important applications in several fields, including artificial intelligence, machine learning, mathematics, auction theory, and software engineering.
Fujitsu’s architecture employs conventional semiconductor technology with flexible circuit configurations to allow it to handle a broader range of problems than current quantum computing can manage. In addition, multiple computation circuits can be run in parallel to perform the optimization computations, enabling scalability in terms of problem size and processing speed. Fujitsu Laboratories implemented a prototype of the architecture using FPGAs for the basic optimization circuit, which is the minimum constituent element of the architecture, and found the architecture capable of performing computations some 10,000 times faster than a conventional computer. Through this architecture, Fujitsu Laboratories is enabling faster solutions to computationally intensive combinatorial optimization problems, such as how to streamline distribution, improve post-disaster recovery plans, formulate economic policy, and optimize investment portfolios. It will also make possible the development of new ICT services that support swift and optimal decision-making in such areas as social policy and business, which involve complex intertwined elements.
In society, people need to make difficult decisions under such constraints as limited time and manpower. Examples of such decisions include determining procedures for disaster recovery, optimizing an investment portfolio, and formulating economic policy. These kinds of decision-making problems–in which many elements are considered and evaluated, and the best combination of them needs to be chosen–are called combinatorial optimization problems.
With combinatorial optimization problems, as the number of elements involved increases, the number of possible combinations increases exponentially, so in order to solve these problems quickly enough to be of any practical utility for society, there needs to be a dramatic increase in computing performance. The miniaturization that has supported the improvements in computing performance over the last 50 years is nearing its limits (Figure 1), and it is hoped that devices will emerge based on completely different physical principles, such as quantum computers.
Conventional processors have a great deal of flexibility in handling combinatorial optimization problems because they process these problems using software, but on the other hand, they cannot solve them quickly. Conversely, current quantum computers can solve combinatorial optimization problems quickly, but because they solve problems based on a physical phenomenon, there is a limitation of only adjacent elements being able to come into contact, so currently they cannot handle a wide range of problems. As a result, developing a new computing architecture that can quickly solve real-world combinatorial optimization problems has been an issue (Figure 2).
Using conventional semiconductors, Fujitsu Laboratories developed a new computing architecture able to quickly solve combinatorial optimization problems. This technology can handle a greater diversity of problems than quantum computing, and owing to its use of parallelization, the size of problems it can accommodate and its processing speed can be increased. Key features of the technology are as follows.
- A new computing architecture for combinatorial optimization problems. The architecture uses a basic optimization circuit, based on digital circuitry, as a building block. Multiple building blocks are driven in parallel in a hierarchical structure (Figure 3). This structure minimizes the volume of data moved between basic optimization circuits, making it possible to implement them in parallel at high densities using conventional semiconductor technology. In addition, thanks to a fully connected structure that allows signals to move freely within and between basic optimization circuits, the architecture is able to handle a wide range of problems.
- Acceleration technology within the basic optimization circuit. The basic optimization circuit uses techniques from probability theory to repeatedly search for paths from a given state to a more optimal state. One technique calculates the scores of multiple candidate next states at once, in parallel, which increases the probability of finding a good next state (Figure 4, left). Another technique detects when a search process becomes stuck at what is called a “local minimum” and facilitates the transition to the next state by repeatedly adding a constant to the score values, increasing the probability of escaping from that state (Figure 4, right). As a result, an optimal answer can be expected quickly.
Fujitsu Laboratories implemented the basic optimization circuits using an FPGA to handle combinations that can be expressed as 1,024 bits, and found that this architecture was able to solve problems some 10,000 times faster than conventional processors running a conventional software process called “simulated annealing.” By expanding the bit scale of this technology, it can be used to quickly solve computationally intensive combinatorial optimization problems, such as optimizing distribution to several thousand locations, optimizing the profit from multiple projects with a limited budget, or optimizing investment portfolios. It can also be expected to bring about the development of new ICT services that support optimal decision-making at high speed.
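For context, here is a toy Python sketch of simulated annealing, the conventional software baseline cited above, applied to a small fully connected problem. The random problem instance, single-flip move rule, and cooling schedule are illustrative only and are not Fujitsu's implementation:

```python
import math
import random

random.seed(0)

# Toy fully connected problem: choose s[i] in {-1, +1} to minimize
# E(s) = -sum over i<j of W[i][j] * s[i] * s[j]
N = 16
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def energy(state):
    return -sum(
        W[i][j] * state[i] * state[j]
        for i in range(N)
        for j in range(i + 1, N)
    )

state = [random.choice([-1, 1]) for _ in range(N)]
cur_e = energy(state)
best_e = cur_e
T = 2.0  # starting temperature

for step in range(20000):
    i = random.randrange(N)      # propose flipping one variable
    state[i] = -state[i]
    new_e = energy(state)
    delta = new_e - cur_e
    # Always accept improvements; accept worse states with a
    # temperature-dependent probability to escape local minima.
    if delta <= 0 or random.random() < math.exp(-delta / T):
        cur_e = new_e
        best_e = min(best_e, cur_e)
    else:
        state[i] = -state[i]     # reject: undo the flip
    T = max(0.01, T * 0.9997)    # cool the temperature gradually

print("Best energy found:", best_e)
```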
Fujitsu Laboratories is continuing to work on improving the architecture, and, by fiscal 2018, aims to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation. | <urn:uuid:ee83ebd0-0004-4317-a539-f04ce0efeff2> | CC-MAIN-2024-38 | https://insidehpc.com/2016/10/fujitsu-laboratories-develops-new-architecture-that-rivals-quantum-computers-in-utility/ | 2024-09-07T18:19:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00486.warc.gz | en | 0.936858 | 1,123 | 2.859375 | 3 |
This year’s theme for World Food Safety Day – ‘Safer food, better health’, underlines the need for continuous efforts toward keeping food safe that is accessible to people from all walks of life. The theme emphasizes the golden rule of “Safety First”, which has a direct connection to ensuring food safety and security that aids in ensuring health and wellness but also influences a nation’s economic prosperity and sustainable development. It recognizes the growing concerns about food-borne illness and draws attention to the need for sustained action to detect and manage food-borne risks and calls for a more pre-emptive approach to ensure food security for all. Continuous monitoring for food safety hazards and food-borne diseases is one of the important ways to guarantee food security as it can reduce spoilage and losses during supply chain transit and decrease the incidence of food-borne diseases.
The US Centers for Disease Control and Prevention (CDC) defines a food-borne disease (FBD) outbreak as “two or more illnesses/diseases caused by the same microorganism such as virus, fungi, or bacteria, which occurs after consumption of the same food.” These outbreaks often affect populations of all ages while placing a significant economic burden on government and industry, as they result in product recalls and the need for extensive source tracing. Federal agencies across the globe have laid down policies and guidelines for ensuring food safety throughout the supply chain. However, statistics still show that food-borne illness outbreaks are an ongoing concern. WHO highlights that food-borne diseases affect at least 1 in 10 people worldwide. The US FDA reported 585 food recalls in 2019, 495 in 2020, and 427 in 2021. CDC and Singapore Food Agency (SFA) reports have resulted in product recalls ranging from onions to drinking water due to pathogenic contamination. Africa and Southeast Asia have the highest food-borne disease burden, with around 150 to 200 million cases reported annually.
These incidences show the global nature of food safety risks and the need for preventive action to ensure safe food that can meet the required nutritional demands of all age groups, leading to a healthier population. A key aspect of ensuring food safety is to continuously track and monitor potential contamination routes throughout the value chain and devise preventive strategies. Existing techniques and tools for contaminant tracking and detection are often time-consuming and are not widely adopted globally due to a lack of infrastructure, cost, availability, or awareness. This poses a problem when food products are one of the most exported/imported commodities. Technological developments in contaminant indicating materials, bio- and nano-sensors, imaging techniques, and molecular assays can overcome the challenges of existing techniques. The global transition to the digitalization era has led to the adoption of IoT, AI, and data analysis tools for predicting possible contamination and facilitating timely action. The development of miniaturized, hand-held, and remote monitoring devices for monitoring and detecting microbes, allergens, and other contaminants can ensure food safety.
Convergence of digital technologies with analytical and diagnostic tools and sensors can ensure rapid detection through real-time monitoring and efficient prediction, while the integration of spectral techniques and imaging tools, such as hyperspectral imaging or Terahertz sensing can improve physical and microbial contaminant detection. These technologies can provide quantitative and qualitative detection and be adopted by all stakeholders, including food producers, manufacturing and processing, and retail units for better food safety management.
There is a need for multisectoral collaboration to design, implement, educate on, and follow up on food safety programs for a healthier today and tomorrow. At present, despite the presence of international food safety standards, country-specific and regional standards and guidelines often take precedence. While these ensure the safety of imported and locally sold food products, they can make global trade and exports a laborious undertaking. A global legal framework with support measures for all stakeholders in the agriculture and food value chain could help establish global food safety systems that comply with international safety standards. It could also create a sustainable food production and transport system that gives end-consumers across geographies better access to safer and healthier foods. However, this is not an easy task, and a collaborative approach between policymakers, industries, and consumers across nations and geographies is needed to ensure that current food chains are transformed sustainably to provide food safety and security for all, paving the way for a healthier tomorrow.
Schedule your Growth Pipeline Dialog™ with the Frost & Sullivan team to form a strategy and act upon growth opportunities: https://frost.ly/60o. | <urn:uuid:eb9bf2e7-196e-4773-acc4-055eae482620> | CC-MAIN-2024-38 | https://dev.frost.com/growth-opportunity-news/continuous-emphasis-on-food-safety-for-health-and-well-being/ | 2024-09-08T23:41:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00386.warc.gz | en | 0.934625 | 932 | 3.578125 | 4 |
Top 10 Most Expensive Camera Lenses In The World
Most Expensive Camera Lenses – As an integral component of a camera system, camera lenses gather and focus light onto the camera's sensor or film. A lens shapes an image's angle of view, depth of field, and overall quality, and its focal length dictates the angle of view. Every type of lens serves a different purpose: wide-angle lenses provide a wider field of view for landscape photography, while telephoto lenses have narrower fields of view that make them suitable for portraits or wildlife images. The amount of light passing through a lens, and its depth of field, depend on its maximum aperture, expressed as an f-number (the focal length divided by the aperture diameter, so a 50mm lens with a 25mm aperture opening is f/2); more light passes through at smaller f-numbers, less at larger ones.
History of the Camera Lens
The history of the camera lens began alongside photography itself in the early 19th century. Early cameras used simple glass lenses, sometimes consisting of just one element, to focus light onto light-sensitive plates or films. By the mid-1800s, photographers were experimenting with increasingly intricate lens designs; among these was Joseph Petzval's Petzval lens, which offered a faster aperture and produced crisper photographs than earlier lenses.
Lens makers continuously improved their designs through the 19th and early 20th centuries, producing lenses with multiple elements that corrected for various aberrations such as distortion and chromatic aberration. Professional photographers relied on lenses from makers such as Carl Zeiss and Schneider-Kreuznach for high-quality images, and amateur photographers could take advantage of them as well. When 35mm cameras and interchangeable-lens systems became mainstream, switching lenses to achieve different focal lengths or perspectives became easy, allowing photographers to capture diverse subjects and styles more readily than ever before.
Lens design evolved along with photography's rapid growth. Apochromatic and aspherical lenses emerged in the mid-20th century thanks to new materials and production methods, further reducing aberrations. Autofocus technology and electronic cameras arrived later and presented designers with additional challenges: autofocus lenses needed quick, accurate focusing mechanisms driven by built-in motors, while electronic cameras required lenses that could communicate with the camera body.
Camera lenses have undergone a massive evolution over the years to meet ever more demanding photographic requirements, and they are no longer limited to traditional cameras. From tiny lenses designed for smartphone cameras all the way up to giant telephoto lenses used professionally for sports and wildlife photography, there is now an amazing variety of camera lenses on the market. Thanks to cutting-edge optics and computer-aided design tools, current lens designs continue to push beyond what was once thought feasible, as manufacturers and photographers work to capture every scene with greater clarity and detail than before.
Types of Camera Lenses
There are countless types of camera lenses on the market, each offering distinct qualities and applications.
- Regular Lenses: Also referred to as standard lenses, these have a focal length of approximately 50mm on a full-frame camera and render scenes with a perspective similar to human vision. Regular lenses make excellent everyday photographs.
- Wide-Angle Lenses: Wide-angle lenses feature a wider field of view and a shorter focal length (usually under 35mm on full-frame cameras), making them well suited to real estate, architectural, and landscape photography.
- Telephoto Lenses: Telephoto lenses are great for shooting sports, wildlife, and portraits. Their longer focal length (usually over 70mm on full-frame cameras) and narrow field of view magnify distant subjects and compress the apparent distance between subject and background.
- Zoom Lenses: With their adjustable focal lengths, zoom lenses allow photographers to reframe a shot without moving. Zooms are widely used for photojournalism, travel, and event photography.
- Prime Lenses: Prime lenses have a fixed focal length and often deliver superior image quality compared with zoom lenses. Primes are popular for low-light, street, and portrait photography.
- Macro Lenses: Macro lenses are great tools for taking close-up images of small subjects like flowers, insects, and textures, thanks to their close focusing distance and high magnification ratio.
- Fisheye Lenses: Fisheye lenses have an extremely wide field of view, producing strongly distorted, sometimes circular images. They're commonly employed in experimental and artistic photography.
- Tilt-Shift Lenses: Tilt-shift lenses let users tilt the lens forward or backward and shift it up, down, left, or right, making it possible to control the plane of focus and correct perspective distortion. They are often employed in architectural photography.
Uses of Camera Lenses
Camera lenses have many uses in photography and cinematography, including:
- Portrait photographers typically turn to longer focal lengths, such as 85mm or 135mm primes, which create a pleasing bokeh effect and a shallow depth of field, perfect for isolating subjects from their backgrounds and giving images a more pleasing look.
- Landscape photographers commonly employ wide-angle lenses with focal lengths of 10-24mm, as these provide the broad field of view ideal for photographing open spaces, urban scenes, and natural environments.
- Wildlife photography frequently relies on lenses with focal lengths of 300mm or greater, enabling photographers to capture detailed shots of distant animals without endangering or distressing them.
- Sports photography requires lenses with fast focusing and wide apertures, such as 70-200mm zooms or 300mm primes, that deliver excellent images even in low light and can freeze moving subjects for stunning photos.
- Macro lenses, with their tight focusing distances and high magnification ratios, are used to capture close-up images of insects, flowers, and textures at close range.
- Lenses with 35mm or 50mm focal lengths are often used for street photography in urban environments, allowing photographers to capture natural moments without drawing too much attention to themselves.
- Filmmakers looking for a cinematic feel typically use cine lenses, which offer smooth focus and aperture adjustment, can be used to achieve particular looks or effects, and are built for various formats.
Camera lenses are an integral component of any camera system and play a critical role in determining image quality and character. Their importance shows in several key areas:
- The focal length of a lens determines both the angle of view and the size of subjects in the frame. A wide-angle lens, with its shorter focal length and larger field of view, is an excellent choice for landscape and architectural photography, while a telephoto lens, with its longer focal length and narrower angle of view, is better suited to portraiture and wildlife. By changing lenses when necessary, photographers can adapt quickly to various shooting situations and achieve diverse images.
- Depth of field, the range of distances that appear sharply in focus, is controlled by the aperture of a lens. Larger apertures (smaller f-numbers) produce a shallower depth of field, which photographers use to isolate subjects against blurred backgrounds; smaller apertures (larger f-numbers) keep images clear and crisp from front to back.
- Lens quality has a dramatic impact on the photos taken with it. A good lens creates sharp, finely detailed images with little distortion, aberration, or color fringing, and lens coatings can reduce glare while increasing contrast for improved color rendition.
- Photographers can be more versatile and creative by changing lenses to obtain various focal lengths, apertures, and effects, and by experimenting with different approaches and methods to capture a variety of themes and emotions.
- Lens technology has advanced tremendously over time, enabling photographers to take precise, artistic photographs with ease. New lens designs and materials have delivered higher-quality optics and enhanced low-light performance.
No one can overstate the significance of camera lenses in our world today; they are critical tools in accurately and creatively documenting what lies around us. Camera lenses offer unlimited opportunities for artistic vision and creative expression as technology develops and new lens designs are created.
The Popularity of Camera Lenses
Camera lenses are indispensable parts of a camera system, as their use directly impacts image quality and adaptability.
One factor contributing to camera lenses' popularity is their variety. From wide-angle to telephoto, each type offers different focal lengths and suits specific tasks: wide-angles work well for photographing nature and architecture, while telephoto lenses excel at shooting action and animals. Changing lenses lets photographers adapt quickly to changing conditions or subjects.
Lenses have also become popular because of their ability to produce different effects and styles, giving photographers control over depth of field and the means to create shallow-focus effects. Furthermore, coated lenses may offer different color renderings or contrast levels, which lets photographers achieve a particular look or style in their pictures.
The rising popularity of camera lenses can also be attributed to technological innovation. New lens designs and materials have enabled photographers to capture sharper, more detailed pictures. Thanks to autofocus technology, photographers can now focus on subjects quickly and precisely, while image stabilization has made handheld and low-light photography far easier.
The growth and accessibility of photography gear have contributed to an explosion of camera lens popularity. Thanks to digital photography, taking and sharing high-quality photos has never been simpler; third-party lenses give photographers more options at reasonable prices when exploring various lens designs and styles.
Photography has long been an enjoyable pastime and career option. Many still find the ability to take and share images of the world fascinating; camera lenses play a critical role here by providing the clarity, accuracy, and creativity needed for great photographs.
Why Are Camera Lenses So Expensive?
Camera lenses can be expensive for several reasons, including these:
- High-quality lenses are constructed from costly components, such as special types of glass designed to minimize optical aberrations like color fringing and distortion. Their manufacturing processes are more complex, so production costs run higher.
- Camera makers fund continuous research into new lens technologies, including novel glass varieties, coatings, and optical designs; these costs are often passed on to users through higher prices for cutting-edge lenses.
- Precision manufacturing processes such as grinding, polishing, and coating the glass elements must adhere to stringent specifications to guarantee that each lens performs as intended and meets professional photographers' high standards.
- Some lenses are made in limited production runs, which can drive prices up, whether because they use hard-to-source materials or because they target niche markets.
- Some camera manufacturers are known for producing superior lenses. Even when similar lenses are offered at lower prices by less renowned makers, customers may still prefer to invest in a renowned brand.
- Some lenses include extras such as image stabilization, fast focusing motors, and weather sealing, features that add cost but may be necessary for specific types of photography.
- Supply and demand also shape prices: lenses in high demand command more, and scarce supply pushes acquisition costs up further.
Lenses have an enormous effect on the quality of images captured, prompting many photographers to invest in high-grade lenses.
Top 10 Most Expensive Camera Lenses In The World
- Leica Apo-Telyt-R 1:5.6/1600mm – (Worth USD 2 Million)
- Nikkor 6mm f/2.8 Fisheye Lens – (Worth USD 160,000)
- Carl Zeiss 50mm Planar f/0.7 Lens – (Worth USD 140,000+)
- Zeiss Apo Sonnar 1700mm f/4 – (Worth USD 100,000+)
- Canon EF 1200mm f/5.6 L Lens – (Worth USD 90,000)
- Nikon 1200-1700mm f/5.6-8.0 Lens – (Worth USD 60,000)
- Canon 5200mm f/14 Lens – (Worth USD 45,000)
- Sigma APO 200-500mm f/2.8 with 2x Teleconverter – (Worth USD 26,000)
- Nikon AF-S Nikkor 800mm f/5.6E FL ED VR w/1.25 Teleconverter – (Worth USD 16,300)
- Horseman 23mm f/5.6 HR Digaron-S Lens Unit – (Worth USD 15,620)
#1. Leica Apo-Telyt-R 1:5.6/1600mm – (Worth USD 2 Million)
The Leica APO-Telyt-R 1:5.6/1600mm holds the distinction of being the world's most expensive camera lens ever made, and only two units are known to exist. Leica's Solms factory displays a functional prototype, while the other was commissioned by Sheikh Saud bin Muhammad Al Thani of Qatar, who reportedly paid over $2 million for it, making it one of the most expensive camera lens sales in photography history.
This lens measures around 1.2m long (1.55m with the lens hood attached), has a maximum barrel diameter of 42cm, is compatible with Leica R-series manual focus SLR cameras, weighs at least 60kg, and covers only a 1.5-degree diagonal angle of view at its 1600mm focal length. Leica's 1.4x and 2x APO teleconverters are compatible with the lens, extending it to 1:8/2,240mm and 1:11/3,200mm optical systems respectively. Sheikh Al-Thani likely purchased the lens to photograph wildlife.
#2. Nikkor 6mm f/2.8 Fisheye Lens – (Worth USD 160,000)
One of the costliest camera lenses ever offered for sale in London is the Nikkor 6mm f/2.8 fisheye, which boasts a 220-degree field of view, so wide that it can effectively see behind the camera. The lens was originally unveiled at the 1970 Photokina trade show in Cologne, Germany, and was one of only four lenses available at the time that could cover a 24x36mm picture area with such an extreme angle. Gray Levett of Westminster, London, discovered this 5.2kg lens during his travels; it measures an impressive 171mm in length and 236mm in diameter, and it stands second on this list of the most expensive camera lenses in the world.
The Nikkor 6mm f/2.8 fisheye stands out even among its large and innovative peers for its unparalleled wide-angle perspective. It features 12 elements in 9 groups and six built-in filters, and was designed with purpose and precision. Although it carries a hefty price tag of $160,000 and was only produced on demand, its exceptional capabilities made it an invaluable choice for professionals and enthusiasts alike. Production was deliberately limited to orders from scientific and commercial customers, so only a small number were ever made.
#3. Carl Zeiss 50mm Planar f/0.7 Lens – (Worth USD 140,000+)
If Nikon's Nikkor Z 58mm F0.95 S Noct seems impressive, consider the extremely rare Zeiss Planar 50mm f/0.7 from 1966, which NASA used to capture images of the far side of the moon during the Apollo missions. Only 10 of these lenses were ever produced, six of which went to NASA, and one is now being auctioned by Leitz Photographica. Naturally, such a rare piece commands a premium price.
Although the auction does not officially commence until June 12, the highest bid has already reached EUR 55,000 (approximately $67,000), EUR 5,000 above the starting price, and the organizers expect a hammer price of about EUR 120,000. Carl Zeiss is known for its super-large-aperture lenses, yet few stand out like this unique Planar 50mm f/0.7, which NASA ordered back in 1966 to photograph the shadow side of the moon.
#4. Zeiss Apo Sonnar 1700mm f/4 – (Worth USD 100,000+)
Carl Zeiss created the extraordinary Zeiss APO Sonnar T* 4/1700, tailored to the needs of a customer from Qatar who enjoys long-distance wildlife photography, including photographing storks; if you have enough money, Carl Zeiss can evidently custom-design something truly one-of-a-kind. Pricing was never made public, but given the lens's enormous size, cutting-edge technology, and complex construction (15 elements in 13 groups), it would take an extraordinary budget to afford it.
To achieve optimal image quality, the customer paired the lens with a Hasselblad 203FE 6x6cm medium format camera. In collaboration with Zeiss, an innovative focus servo control system similar to those found on astronomical telescopes was developed, and optical glass manufacturing was pushed to its limits: achieving a 1700mm focal length at an f/4 aperture required casting delicate, rare types of glass never before produced at such a scale.
#5. Canon EF 1200mm f/5.6 L Lens – (Worth USD 90,000)
Telephoto lenses dominate this list in complexity, speed, and size, and Canon's EF 1200mm f/5.6L USM is a prime example. At its debut, it was the largest lens available in terms of both focal length and maximum aperture, and crafting such an immense lens took considerable time and resources, including special optical materials and coatings. The design comprises 13 elements in 10 groups with an eight-blade aperture; the lens weighs 16.5kg, measures 836x228mm, accepts 48mm filters, and is driven by a USM ultrasonic focusing motor.
Canon's EF 1200mm f/5.6L USM was created for the most demanding long-reach applications and cost a whopping $90,000, making it one of the most expensive lenses ever produced. The design was originally commissioned for the 1984 Los Angeles Summer Olympics: Canon built five lenses for photojournalists covering the games and afterward returned them to its headquarters in Japan, later producing further examples only in tiny batches. Each lens required months of highly skilled hand craftsmanship, which kept supply severely limited and prices sky-high.
#6. Nikon 1200-1700mm f/5.6-8.0 Lens – (Worth USD 60,000)
With 18 elements in 13 groups and a close-focusing distance of roughly 10 meters, the Zoom-Nikkor 1200-1700mm f/5.6-8P IF-ED weighs more than 16kg. A standout feature is its two-piece construction: one half of the barrel remains fixed while the camera can be rotated to correct horizontal and vertical alignment, with the two halves linked mechanically and electrically through this rotation mechanism. According to its manufacturer, the 1200-1700mm f/5.6-8.0 was created specifically for sports and wildlife photography.
The lens has no internal focus motor; rotation of the focus ring is translated mechanically into movement of the focusing group. A prototype was completed in February 1990, and optical design, lens processing, metalwork, assembly, and adjustment all proceeded smoothly into trial production. Initial tests took place at Koshien Stadium during that year's high school baseball tournament, followed by comprehensive photographic testing that completed its development as the highest-precision Nikon lens of its time.
#7. Canon 5200mm f/14 Lens – (Worth USD 45,000)
The Canon 5200mm 1:14 Mirror Lens is a remarkable achievement, the largest SLR lens of its kind. It can resolve subjects up to 32 miles away and has a minimum focusing distance of 120 meters. Transporting the lens is no small task given its roughly 100kg weight. On full-frame DSLRs it delivers a 5200mm focal length; on APS-C bodies the equivalent reach is 8320mm. Because only three of these lenses were ever produced, an auction listing for one generated quite a stir on eBay.
That particular lens was crafted in Japan, purchased by a Chinese company, and subsequently put up for sale after hardly any use. Its optics appeared to be in excellent condition, leading a team of optical specialists to consider devising a custom SLR/DSLR/EF mount for it. The enormous lens may be best suited to astronomical purposes: it takes two people to lift, is best mounted on an SUV or a modified vehicle, and needs a motorized or large geared support head to achieve its full magnification potential.
#8. Sigma APO 200-500mm f/2.8 with 2x Teleconverter – (Worth USD 26,000)
Due to its emerald hue and immense size, the Sigma APO 200-500mm could easily be described as the Incredible Hulk of lenses. Just like its comic counterpart, this beast moves swiftly while smashing both your finances and your expectations.
The lens is an apochromatic ultra-telephoto zoom with an unprecedented maximum aperture of f/2.8 at a 500mm focal length, an engineering feat no other production lens has matched. An internal battery-powered motor drives the lens, and an LCD panel displays the current focus distance and magnification.
Wildlife and sports photographers will find the lens particularly appealing, and it is available in Canon, Nikon, and Sigma mounts. More than ten years after its debut, the APO 200-500mm remains one of the most expensive lenses available through standard channels, and something of an enigma to most buyers: even with endless funds, most would probably find it more intriguing than necessary, given the spectacular high-ISO capabilities of modern cameras.
While its specs outdo all others on paper, in practice the lens can be challenging to operate due to its sheer size and its electronic focus system, both of which set it dramatically apart from the competition.
#9. Nikon AF-S Nikkor 800mm f/5.6E FL ED VR w/1.25 Teleconverter – (Worth USD 16,300)
To say this lens was a long time coming is an understatement. Nikon had not updated its 800mm f/5.6 ED-IF manual focus lens in over 25 years, and had been discussing an autofocus version for nearly a decade. When the autofocus 800mm finally arrived, Nikon delivered without question.
The Nikkor 800mm marks an important advance as Nikon's first photographic lens designed with fluorite elements, which the company had previously reserved for microscopy and medical applications. Nikon also broke with tradition by employing an electromagnetic diaphragm mechanism for accurate electronic aperture control, allowing photographers to achieve consistent exposures during rapid continuous shooting in autofocus mode.
Thanks to a narrower barrel and its fluorite elements, the 800mm lens is lighter, more stable, and more user-friendly than Nikon's 600mm and 400mm telephotos. Autofocus is fast, accurate, and straightforward. It was well worth the wait.
#10. Horseman 23mm f/5.6 HR Digaron-S Lens Unit – (Worth USD 15,620)
Komamura Corporation was once best known for its lightweight 6×9 and large format 4×5 field and technical cameras, which photographers lauded for their handling of larger-format film photography. In recent years, however, the company has become equally recognized for its specialist digital solutions.
The Horseman VCC Pro handheld view camera is a popular choice among users who need to stitch images and control tilt, swing, shift, fall and rise motions with ease. It is compatible with Canon and Nikon SLRs. The Horseman Axella SX view camera is another one that's compatible with backs from Canon, Nikon, and Fujifilm (X and GFX), as well as Phase One and Hasselblad cameras.
For users who need full control over front and rear movements as well as vertical and horizontal stitching to produce high-resolution images, the Horseman LD Pro was created specifically for use with Hasselblad, Mamiya, and Phase One digital backs.
It's worth noting that because large-format digital capture places heavy demands on optics, super-wide-angle lenses for it are in limited supply. The Horseman 23mm f/5.6 Digaron-S is currently one of the widest medium and large format lenses obtainable, and it is mounted in a standard Copal #0 shutter.
Every camera system needs lenses, which play an integral part in creating high-quality photographs, and there are many kinds on the market, each with unique qualities and applications. Though camera lenses can be expensive, serious photographers often consider them an invaluable investment in the best possible images; high-quality lenses require costly materials and complex production processes, which adds to the price.
Professional photographers require specific features from their lenses. The right lens for photography or art photography depends on both your subject matter and your artistic vision, so becoming familiar with the available types will help you select one with the appropriate features.
Factors such as resolution, lens aperture, focal length and zoom range, lens quality, sensor sensitivity, and camera software all influence an image's composition and quality.
One of the key components of any camera is its viewfinder, the window on the back that lets you view and frame your subject. Many viewfinders also display shooting information such as shutter speed, aperture, and ISO before you take the photo.
In most cases, using a lens superior to the one that comes standard on a camera provides a noticeable improvement in image quality. While the sensor plays an integral part in how a camera processes images, the lens plays the greatest role in the technical quality of the final image.
Due to its outstanding optical qualities and scratch resistance, glass is the material most often used for camera lens elements. Other materials include germanium and meteoritic glass, quartz glass, fluorite crystals, and acrylic (Plexiglass).
Aditi is an Industry Analyst at Enterprise Apps Today and specializes in statistical analysis, survey research and content writing services. She currently writes articles related to the "most expensive" category. | <urn:uuid:045a01ed-1363-4d31-9858-00427d90e7a4> | CC-MAIN-2024-38 | https://www.enterpriseappstoday.com/top-10/most-expensive-camera-lenses.html | 2024-09-08T22:46:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00386.warc.gz | en | 0.943109 | 5,855 | 3.265625 | 3 |
Web APIs are the lifeblood of web development, providing the back-end and internal communications for the vast majority of modern web and mobile applications. Web API calls account for over 80% of all web traffic and cybercriminals are increasingly targeting APIs, so ensuring web API security is crucial – and REST APIs are by far the most common type of web API for web services and microservices. This post shows how to test your web application interfaces for vulnerabilities and ensure REST API security.
What is a REST API?
REST (short for Representational State Transfer) is a software architecture style for web development, most often used with HTTP communication. RESTful APIs (or simply REST APIs) are application programming interfaces that follow REST principles, allowing web clients and servers to interact with a huge variety of web resources. REST APIs are accessed via HTTP URLs and are the most popular way to communicate with web services.
By convention, REST APIs use standard HTTP verbs (methods) and status codes, with JSON being the usual data format, but REST itself is not a strict standard or format. Other, more standardized web API formats do exist, two popular technologies being SOAP (which is XML-based) and GraphQL (which allows database-like API queries).
Also note that REST APIs are stateless like the HTTP protocol itself, meaning that they don’t store any information about current connections or sessions. RESTful web services provide ways to access and manipulate resources, while session management should be handled by the application. This is where many API security issues arise, as insecure application code may lead to vulnerabilities that enable API-based attacks.
Two levels of REST API security
One important thing to note before we get into the technical details. A web API provides a way to access a web application, so you need to think about security on two levels: access to the API and then access to the application. Security on the API level means that you need the proper authentication, authorization, and access privileges to ensure that only permitted clients can use the interface and only execute permitted operations. On the application level, you need to ensure that your application endpoints (i.e. the URIs used to access the interface) are not vulnerable to attacks that get through the interface or bypass it.
Let’s see how you can ensure REST API security on these two levels. For a detailed discussion of API security best practices, see the OWASP REST Security Cheat Sheet.
Ensuring secure API access
Most web APIs are exposed to the Internet, so they need suitable security mechanisms to prevent abuse, protect sensitive data, and ensure that only authenticated and authorized users can access them.
Security starts with the HTTP connection itself. Secure REST APIs should only expose HTTPS endpoints, which will ensure that all API communication is encrypted using SSL/TLS. This allows clients to authenticate the service and protects the API credentials and transmitted data from man-in-the-middle attacks and other traffic sniffing.
API access control
Most web APIs are available only to authenticated users, often because they are private or require registration or payment. Because REST APIs are stateless, authentication and access control must be enforced on every request at each endpoint. The most common REST API authentication methods are:
- HTTP Basic Authentication: Credentials are sent directly in HTTP headers in Base64 encoding without encryption. This user authentication method is the simplest and the easiest to implement. However, it is also the least secure because it sends sensitive information in plain text, so it should only ever be used in combination with HTTPS to ensure that traffic containing these requests is encrypted.
- JSON Web Tokens (JWT): Credentials and other access parameters are sent as JSON data structures. These access tokens can be signed cryptographically and are the preferred way of controlling access to REST APIs (a minimal validation sketch follows this list). See our blog posts for more information about JWT security in general and JSON Web Token vulnerabilities in particular.
- OAuth: Standard OAuth 2.0 mechanisms can be used for authentication and authorization. OpenID Connect allows secure authentication over OAuth 2.0. For example, Google’s APIs use OAuth 2.0 for authentication and authorization.
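To make token-based access control concrete, the sketch below validates a JWT on an incoming request using the open-source PyJWT library. The secret handling and error responses are illustrative assumptions, not a prescribed design:

```python
import jwt  # PyJWT: pip install PyJWT

SECRET_KEY = "replace-with-a-strong-secret"  # illustrative; load from a vault in practice

def authenticate_request(headers: dict) -> dict:
    """Validate the Bearer token from the Authorization header.

    Returns the decoded claims on success, raises ValueError otherwise.
    """
    auth_header = headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        raise ValueError("Missing or malformed Authorization header")
    token = auth_header[len("Bearer "):]
    try:
        # Verifies the signature and the exp (expiry) claim by default
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError as exc:
        raise ValueError(f"Invalid token: {exc}") from exc
    return claims
```

Pinning the accepted algorithms (here HS256) is deliberate: accepting whatever algorithm the token declares is a classic JWT vulnerability.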
User authorization based on API keys
API keys provide a way of controlling access to public REST services. Operators of public web services can use API keys to enforce rate limiting for API calls and mitigate denial-of-service attacks. For monetized services, organizations can use API keys to provide a level of access corresponding to a specific access plan purchased by the user.
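As a sketch of how an API key can drive rate limiting, here is a minimal fixed-window counter keyed on the API key. The window size and quota are placeholder values, and a production service would normally use a shared store such as Redis rather than in-process memory:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60        # illustrative fixed window
REQUESTS_PER_WINDOW = 100  # illustrative per-key quota

# api_key -> (window_start, request_count)
_counters = defaultdict(lambda: (0, 0))

def allow_request(api_key: str) -> bool:
    """Return True if the caller is within its per-key quota."""
    now = int(time.time())
    window_start, count = _counters[api_key]
    if now - window_start >= WINDOW_SECONDS:
        _counters[api_key] = (now, 1)  # start a new window
        return True
    if count < REQUESTS_PER_WINDOW:
        _counters[api_key] = (window_start, count + 1)
        return True
    return False  # reject, e.g. with HTTP 429 Too Many Requests
```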
API client restrictions
To minimize security risks, REST service operators should restrict API requests to the bare minimum set of capabilities required for the service. This starts with restricting supported HTTP methods to make sure that misconfigured or malicious clients can't perform any actions beyond the API specification and permitted access level. For example, if the API only allows GET requests, POST and all other request types should be rejected with the response code 405 Method Not Allowed.
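As a sketch of this behavior in code (Flask is assumed here purely for illustration), declaring the permitted methods on a route makes the framework reject every other method with 405 Method Not Allowed automatically:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Only GET is declared, so Flask answers POST, PUT, DELETE, etc.
# on this route with 405 Method Not Allowed automatically.
@app.route("/api/items", methods=["GET"])
def list_items():
    return jsonify(["item1", "item2"])
```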
Protecting applications that expose APIs
Once a web API client has obtained access, you need to protect the underlying web application from malformed and malicious inputs. REST API calls and responses may also include confidential data that needs a suitable level of protection.
Sensitive data in API communication
API calls often include credentials, API keys, session tokens, and other sensitive information. If included directly in URLs, these details could be stored in web server logs and leaked if the logs are obtained by cybercriminals. To avoid leaking confidential information, RESTful web services should always send it in HTTP request headers or in the request body (for POST and PUT requests).
Content type validation
As part of imposing API client restrictions, REST services should precisely define permitted content types and reject all requests that don't have the correct declarations in their HTTP headers. This means carefully specifying permitted types in the Content-Type header and rejecting requests that don't match, which helps to prevent header injection attacks. Similarly, if your endpoint never returns HTML, setting the correct content type reduces the risk of the response being parsed as HTML.
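Continuing the hypothetical Flask service, a request hook can reject any body-carrying request whose declared media type is not the single permitted one; the JSON-only policy here is an assumption for illustration:

```python
from flask import Flask, request, abort

app = Flask(__name__)

@app.before_request
def enforce_content_type():
    # For requests that carry a body, accept only the declared JSON type.
    # request.mimetype strips parameters such as "; charset=utf-8".
    if request.method in ("POST", "PUT", "PATCH"):
        if request.mimetype != "application/json":
            abort(415)  # 415 Unsupported Media Type
```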
Security headers in responses
Additional HTTP security headers can be set to further restrict the type and scope of requests. These include X-Content-Type-Options: nosniff to prevent cross-site scripting (XSS) attacks based on MIME sniffing, and X-Frame-Options: deny to prevent clickjacking attempts in older browsers.
If your service doesn’t support cross-domain calls, you should disable CORS (Cross-Origin Resource Sharing) in its response headers. If such calls are expected behavior, you should set CORS headers that precisely specify the permitted origins.
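Here is a hedged sketch of attaching these response headers in the same hypothetical Flask service; the permitted origin is a placeholder value:

```python
from flask import Flask

app = Flask(__name__)

ALLOWED_ORIGIN = "https://app.example.com"  # placeholder origin

@app.after_request
def set_security_headers(response):
    response.headers["X-Content-Type-Options"] = "nosniff"  # block MIME sniffing
    response.headers["X-Frame-Options"] = "deny"            # block framing/clickjacking
    # Only set CORS headers if cross-domain calls are expected behavior:
    response.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    return response
```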
APIs are designed for automated access without user interaction, so it is especially important to ensure that all inputs are valid and expected. Any requests that don’t conform to the API specification must be rejected. Here are some general guidelines for REST API input validation:
- Use built-in validation and encoding functionality where available.
- Treat all parameters, objects, and other input data as untrusted by default.
- Always check the request size, content length, and content type.
- Use strong typing for API parameters (if supported).
- To prevent SQL injection, avoid building queries manually – use parameterized queries instead (see the sketch after this list).
- Whitelist parameter values and string inputs wherever possible (rather than blacklisting).
- Log and monitor all input validation failures to detect credential stuffing attempts.
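To illustrate two of these guidelines, whitelisting parameter values and using parameterized queries, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for the example:

```python
import sqlite3

ALLOWED_STATUSES = {"active", "archived"}  # whitelist, not blacklist

def find_orders(conn: sqlite3.Connection, status: str, limit: int):
    # Validate inputs before they reach the database
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"Unexpected status value: {status!r}")
    if not (1 <= limit <= 100):
        raise ValueError("limit must be between 1 and 100")
    # Parameterized query: values are bound, never concatenated into SQL
    cursor = conn.execute(
        "SELECT id, status FROM orders WHERE status = ? LIMIT ?",
        (status, limit),
    )
    return cursor.fetchall()
```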
Why REST API security is so important
Web APIs are the backbone of modern web and mobile development. They allow applications and services to communicate and exchange data across hardware and software platforms. While SOAP APIs still have their uses and GraphQL is on the rise in big data applications, REST APIs are by far the dominant type, accounting for over 80% of all public web APIs. They provide the back end for the majority of mobile applications and IoT devices and allow easy integration across systems and applications.
Because they use the same technologies as web applications, REST APIs can be vulnerable to the same attacks. At the same time, APIs are not designed for manual access, so they can be difficult to test, especially if some endpoints and features are undocumented. API security testing requires accurate automated tools to ensure complete coverage. Alongside its industry-leading web application security testing features, Invicti also allows REST API vulnerability scanning out-of-the-box, with support for all popular authentication methods (including OAuth2) and automatic URL rewriting.
At Invicti, we believe that application security testing should include web APIs as a matter of course, not as an optional extra. Get our white paper on web API security to learn more, see the Invicti REST API test site documentation for technical details of REST API testing, and read our full article on scanning REST APIs for vulnerabilities with Invicti. | <urn:uuid:27dfa5c6-5b1e-4d67-a34f-756ab2d16621> | CC-MAIN-2024-38 | https://www.invicti.com/blog/web-security/rest-api-web-service-security/ | 2024-09-10T04:56:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00286.warc.gz | en | 0.881481 | 1,806 | 3.015625 | 3 |
What is Typosquatting?
Typosquatting, also known as URL hijacking, is a form of cybersquatting (registering domain names that trade on someone else's brand or trademark) that targets Internet users who incorrectly type a website address into their web browser (e.g., "Gooogle.com" instead of "Google.com"). When users make such a typographical error, they may be led to an alternative website owned by a hacker and usually designed for malicious purposes.
Hackers often create fake websites that imitate the look and feel of your intended destination so you may not realize you’re at a different site. Sometimes these sites exist to sell products and services that are in direct competition with those sold at the website you had intended to visit, but most often they are intended to steal your personal identifiable information, including credit cards or passwords.
These sites are also dangerous because simply visiting them can download malicious software to your device, so you don't even need to click on a link or accept a download for dangerous code to install on your computer, smartphone or tablet. This is called a drive-by download, and many typosquatters use it to spread malicious software whose purpose is to steal your personal information.
In some cases, typosquatters employ phishing in order to get you to visit their fake websites. For example, when AnnualCreditReport.com was launched, dozens of similar domain names with intentional typos were purchased, which soon played host to fake websites designed to trick visitors. In cases like this, phishing emails sent by scammers spoofing a legitimate website with a typosquatted domain name make for tasty bait.
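Typosquatting detection tools work on exactly this insight: a domain that is almost, but not quite, a well-known name is suspect. The sketch below, using Python's standard difflib module, is a toy illustration; the threshold value is an arbitrary assumption, and real brand-protection services use far more sophisticated checks:

```python
from difflib import SequenceMatcher

LEGITIMATE = "annualcreditreport.com"
THRESHOLD = 0.85  # arbitrary cutoff for "suspiciously similar"

def looks_typosquatted(domain: str) -> bool:
    """Flag domains that are close to, but not exactly, the real one."""
    domain = domain.lower().strip(".")
    if domain == LEGITIMATE:
        return False
    similarity = SequenceMatcher(None, domain, LEGITIMATE).ratio()
    return similarity >= THRESHOLD

print(looks_typosquatted("annualcreditreports.com"))  # True
print(looks_typosquatted("example.com"))              # False
```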
In order to protect yourself against typosquatters, I recommend you:
- Pay close attention to the spelling of web addresses or websites that look trustworthy but may actually be close imitations of the online retailer you are looking for.
- Instead of typing the web address into your browser, use a safe search tool such as McAfee® SiteAdvisor®, which comes with McAfee® LiveSafe™ and warns you about malicious sites right in your browser search results.
- Don’t click on links in emails, texts, chat messages or social networking sites.
- Invest in a comprehensive security solution like McAfee LiveSafe™ service that protects all your devices, your identity and data.
There are more ways to scam people online than ever before. Your security intelligence is constantly being challenged, and your hardware and software are constant targets so make sure you stay educated and use common sense! | <urn:uuid:00ebdca9-4b8d-470b-b30a-3e87eacb7f89> | CC-MAIN-2024-38 | https://www.mcafee.com/learn/what-is-typosquatting/ | 2024-09-10T04:46:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00286.warc.gz | en | 0.942864 | 546 | 2.890625 | 3 |
Gen AI: In June 2024, an artificial intelligence-generated picture of the well-known K-pop star G-Dragon holding a transparent bag of bread while making peace signs with his hands went viral on X in South Korea.
The popular post on X showed an image with the caption “G-Dragon returning home after buying bread from Sungsimdang at Daejeon station, following his classes at KAIST”, and the AI creator’s name watermarked at the bottom.
The image was produced after the viral news of the celebrity accepting his position as a visiting professor of the mechanical engineering department at Korea Advanced Institute of Science and Technology, which is located in Daejeon. Paired with his apparent splurge at the famous local bakery chain in the photo, the AI-generated picture quickly garnered views across different social media platforms. On X alone, it recorded 2.8 million views, with many believing the picture to be real, quoting the post and commenting on how happy he seems.
The picture soon grabbed the attention of Korean media as well, which clarified that it was an AI creation, not a real photograph. Outlets reported that the original creator of the image was RYAN OHSLING, who makes fake images on commission via Instagram and Pinterest.
With the image spreading faster than the news, recently posted comments still express shock at learning it was made with AI. Some users suggested that this shows how easily AI can deceive people online, even when a watermark states the creator's name.
4i Magazine reached out to the creator to ask about the reasons behind his production but did not receive a reply.
Difficulty of Telling What’s Fake and Real
Hyper-realistic gen AI productions have been causing a stir outside of South Korea, too.
One of the biggest examples is the deepfake video in which Ukraine's President Volodymyr Zelensky appeared to concede defeat to Russia, which spread mainly on YouTube and Facebook. Shortly after, a video of Russia's President Vladimir Putin declaring peace was uploaded to YouTube. Both videos were identified as fake AI productions and promptly deleted by YouTube, but some of the clips can still be found across social media.
Productions assisted by gen AI can potentially undermine democracy, with people not being able to discern between fake and real content.
For their 2023 study "How AI Threatens Democracy", professors Sarah Kreps and Doug Kriner ran a field experiment in 2020, sending randomised advocacy letters composed by humans and by GPT-3 to 7,200 legislators. The aim was to learn whether legislators could distinguish AI- from human-composed messages, as revealed by lower response rates to machine-generated letters.
For three of the six issues tested, the response rates for machine- and human-written letters were statistically almost identical. For the remaining three, the AI-generated letters recorded a response rate only 2 per cent lower.
Based on these results, the researchers concluded that legislators cannot reliably perceive differences between AI- and human-generated content. This could skew the direction of political discourse if a malicious actor generated unique messages at scale, influencing "legislators' perceptions of which issues are most important to their constituents as well as how constituents feel about any given issue."
Most recently, fake AI creations of the candidates running for the 2024 United States presidential elections have been at the centre of discussion for spreading misinformation online.
These productions included Republican candidate Donald Trump being surrounded and supported by groups of black people, which reportedly was shared among many African Americans who believe the photo to be true.
Another example was a robocall to residents of New Hampshire claiming that they would lose their right to vote if they cast a ballot in the state's primary, delivered in a voice that mimicked incumbent President Joe Biden. It was later found that the call had been commissioned by a political consultant who wanted to push for AI-related rules.
Understanding the potential risks of affecting politics, some AI platforms are introducing solutions to curb malicious uses. For example, the image generator platform Midjourney started blocking users from making fake images of Donald Trump and Joe Biden in March 2024.
Efforts to Regulate Fake Productions and Prevent Real Problems
Ian Bremmer, a professor at Stanford University, wrote in his essay for Foreign Affairs last year that AI can affect the structure of current world powers. In democratic countries, AI widely collects personal information without consent and utilises it for commercial purposes, and it can transform into a tool to suppress freedom of speech and expression by authoritative regimes, he said.
Professor Bremmer added that AI regulations are necessary to maintain the justification of social consensus in a state that protects democracy and provides public resources.
Some experts emphasise the role of legacy media in the ocean of AI-generated misinformation. Legacy media should maintain a gatekeeping system to crosscheck facts or newly obtained information from various sources while following journalism ethics.
Imran Ahmed, the CEO and founder of the nonprofit Center for Countering Digital Hate, says that social media platforms and AI companies need to increase their efforts to shield users from the negative impacts of AI-generated content.
Referring to the picture of Donald Trump with black people, he was quoted as saying by AP: “If a picture is worth a thousand words, then these dangerously susceptible image generators, coupled with the dismal content moderation efforts of mainstream social media, represent as powerful a tool for bad actors to mislead voters as we’ve ever seen.”
Many countries in the West have already been taking legislative actions against the spread of AI-powered productions.
In May 2024, the European Union adopted the Artificial Intelligence Act, the first comprehensive international regulation of AI. The act requires providers and users of AI systems to maintain transparency, including clear labelling of AI-generated content. It defines four levels of risk, ranging from minimal risk through limited risk and high risk up to unacceptable risk. Most AI-generated content falls under the limited-risk category, where the main concern is providers and users failing to identify the AI sources.
A company violating these limited-risk transparency obligations can be penalised with a fine of up to 15 million euros or 3 per cent of its global annual turnover in the previous financial year.
In the U.S., there is no federal law to regulate gen AI, but several states have introduced deepfake-related rules for election campaigns and sexual content. For example, the state of Texas revised its election law in September 2019, banning deepfake videos of candidates published or distributed within 30 days of an election.
Experts say that Asian countries should also strengthen their digital literacy education for the public and penalties for malicious uses of gen AI.
Lee Dae-sung, a professor at Gyeonggi University, emphasised that the foremost concern in regulating advanced AI-generated content, such as deepfakes, is the establishment of a legal framework to mitigate malicious applications of AI technologies. He noted that while many companies are introducing relevant legislation, there remain areas where the current legal system is inadequate.
“Especially concerning AI-generated content of a sexual or political nature, we require an institutional strategy to identify and punish (the distributors and providers) and safeguard the victims,” he stated. | <urn:uuid:1cda018a-d435-4f09-bbb1-5b168f9ce9fd> | CC-MAIN-2024-38 | https://4imag.com/gen-ai-productions-are-becoming-more-believable-what-can-we-do/ | 2024-09-12T15:08:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00086.warc.gz | en | 0.962745 | 1,510 | 2.5625 | 3 |
Teach and learn from anywhere.
In the wake of a health crisis — and to best keep students and educators safe — schools are re-evaluating their teaching practices and learning environments to accommodate an at-home, virtual learning experience.
The ability to make this fully functional and efficient falls squarely on IT. But, with the right tools in place, remote learning can be as simple and intuitive as in-person.
In our five-part blog series, we identify five ways you can best serve students and teachers when you move learning online. We cover how to:
Part 3: Promote peer-to-peer interaction
Part 4: Offer active learning through projects and breakout groups
Part 5: Leverage apps and games
Promote peer-to-peer interaction
Just as teachers and students interact one-on-one, students can work on group projects using the Classes and Lessons function within Jamf School. Teachers can create ad hoc classes with select students and assign projects to only those students.
Using the Chat functionality, teachers can stay in active contact with individual students and groups. With the Classes and Lessons feature, a teacher can create assignments shared among multiple students, who can then use third-party apps vetted by the teacher to collaborate on the assigned project outside of Jamf School.
Schools with Managed Apple IDs can choose to allow students to use them for iMessage or Group Facetime which is a phenomenal way to collaborate in real time. Other companies such as Zoom and Discord make their services free or discounted for education purposes.
Managed Apple IDs are a special type of Apple ID for students that empower IT to create the Apple ID on their behalf and dynamically update the student’s information. Managed Apple IDs are created in the Apple School Manager portal and can sync with Classroom data, as well as a school’s Student Information System (SIS).
Teachers can also request apps within Jamf Teacher and make them available to students. Once an IT administrator approves the app request from the teacher, the students can use as many apps as the teacher allows for productivity within the group.
Be prepared for all learning environments
Facilitating remote learning is only going to be more commonplace — with or without health scares. With a plan in place and the right tools at the ready, transitioning from in-class to online learning can be as easy as reciting the ABCs.
If you’re ready to deliver the best learning experience to every student regardless of location, we’re ready to help. Get started with a free trial of our education solutions. And once a customer, take advantage of over 130 free online training modules on how to best leverage Jamf Pro to empower your educators and students.
Be on the lookout for part four of our blog series on ways to offer active learning through projects and breakout groups. In the meantime, sign up for our April 2 webinar for additional ways to support remote learning.
Vulnerability Prioritization is Vital for IoT Security and Risk Management
The scale of vulnerability management has grown complicated in recent years. More than 28,800 vulnerabilities were reported in 2023, according to the National Vulnerability Database, an increase of 3,000 from the prior year. That’s about 80 a day, every day. The growth in identified vulnerabilities in software and hardware alone makes it clear that legacy processes for patch management are untenable.
This is especially clear with regard to Internet of Things (IoT) devices. The scale of IoT devices deployed within many organizations means that security teams need a more effective plan than “find and fix” when it comes to vulnerability management. What they need is to prioritize their vulnerabilities. Only then can they be effective at managing the risk of a cyberattack.
What is Vulnerability Prioritization?
Vulnerability prioritization is the practice of addressing identified weaknesses based on their impact on the business, rather than on a severity score alone. The Common Vulnerability Scoring System (CVSS) assigns a severity score to each reported vulnerability that receives a CVE (Common Vulnerabilities and Exposures) identifier. This score runs from 0.0 to 10.0, with ranges connoting Low through Critical severity in terms of the potential damage the vulnerability could cause when exploited.
This practice has led to traditional patch management focusing on CVEs with higher scores as opposed to lower scores. Many organizations have internal service level agreements (SLAs) that tie the allowable time to fix a vulnerability straight to its CVSS score. In theory, a higher CVSS means that the vulnerability is more damaging and thus should be patched quickly. A lower score means that the weakness can be patched later. This isn’t necessarily true though.
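To make the traditional approach concrete before examining why it falls short, here is a sketch of a severity-driven SLA policy; the day counts are invented for illustration and do not reflect any particular organization's policy:

```python
def sla_days(cvss_score: float) -> int:
    """Map a CVSS base score to a patch deadline (illustrative values only)."""
    if cvss_score >= 9.0:   # Critical (9.0-10.0)
        return 7
    if cvss_score >= 7.0:   # High (7.0-8.9)
        return 30
    if cvss_score >= 4.0:   # Medium (4.0-6.9)
        return 90
    return 180              # Low
```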
A High or Critical severity vulnerability could be extremely difficult to exploit as part of an attack chain. In this case, a threat actor may not use the Critical vulnerability because it would require too much effort and too much time to do so. Keep in mind that cybercriminals, by and large, are opportunists. They’re looking for the path of least resistance and thus would be more likely to ignore weaknesses that take too much time or require too much skill to use. A CVE with a lower severity score may thus be more attractive and ultimately cause more damage because it’s easier to exploit.
Vulnerability prioritization means examining identified vulnerabilities for what could actually have the most impact on a specific company’s systems. In practice, this means looking at which flaws within your systems are the most exploitable and which ones could cause the most damage. For example, a temperature sensor in a remote facility that’s accessible from the open internet without sufficient mitigations could be a major risk. A vulnerability that allows remote code execution on that machine could be potentially damaging.
Vulnerability prioritization also takes into account differing risk levels based on device configuration. That same remote sensor may have been installed with a configuration just different enough to render an identified vulnerability less impactful. In that case, the weakness on that specific machine may become a lower priority than a vulnerability on a security camera.
Prioritizing vulnerabilities based on actual impact on your systems means adopting a risk-based approach to patch management. Identified weaknesses are examined based on the specific context of your systems as opposed to an external CVSS severity rating that doesn’t take into account any defenses you may have in place.
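By contrast, a risk-based approach weighs exploitability and business context alongside severity. The sketch below is a simplified illustration with invented weights; it is not Asimily's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    cvss: float                # vendor-reported severity, 0.0-10.0
    exploit_available: bool    # public exploit code exists
    internet_exposed: bool     # device reachable from the open internet
    device_criticality: float  # 0.0-1.0, how important the device is

def risk_score(v: VulnContext) -> float:
    """Blend severity with context; higher means patch sooner."""
    score = v.cvss / 10.0                        # normalize severity
    score *= 1.5 if v.exploit_available else 0.75
    score *= 1.5 if v.internet_exposed else 0.75
    score *= 0.5 + v.device_criticality         # scale by business impact
    return round(score, 2)

# A "Medium" CVE on an exposed, critical device can outrank
# a "Critical" CVE on an isolated, low-value one:
exposed = VulnContext(cvss=5.5, exploit_available=True,
                      internet_exposed=True, device_criticality=0.9)
isolated = VulnContext(cvss=9.8, exploit_available=False,
                       internet_exposed=False, device_criticality=0.2)
print(risk_score(exposed), risk_score(isolated))  # 1.73 vs 0.39
```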
How Vulnerability Prioritization Informs IoT Risk Management
Vulnerability prioritization is a necessary component of an effective IoT security and risk management strategy. Identifying weaknesses in your connected devices is only the first step in this process. Following that, you need to take a harder look at what these weaknesses are in the context of your unique system architecture.
Prioritizing these weaknesses based on your unique environment ultimately means that you can reduce the risk of a cyberattack spreading throughout critical systems and disrupting operations, which in healthcare settings can mean impacting patient care. Scoring identified IoT weaknesses based on the likelihood of exploitation and the damage exploitation could cause means that you can allocate resources more effectively. Resolving the riskiest vulnerabilities first can eliminate many possible attack pathways and quickly make your company more secure.
Your risk mitigation and management strategy requires this ability to prioritize identified weaknesses based on impact. There are too many CVEs and too many devices to patch each and every weakness as it comes up. Moreover, the lack of vendor support that’s common in the IoT space means that not every vulnerability even has a patch or vendor-identified mitigation measure.
There may need to be alternate methods of limiting the spread of a cyberattack that originates with legacy technologies that are connected to the internet and still in use. IoT devices that run on outdated operating systems are common. A remote sensor running an old version of Windows may have a critical-severity weakness that the vendor is never going to resolve, even though the device itself still serves the company’s needs just fine. Security teams need to mitigate that vulnerability anyway, and prioritizing it ahead of other issues may ultimately make the company more secure than it would otherwise be.
In this way, the ability to prioritize vulnerabilities based on the specific system context makes the company more secure. Understanding where the biggest risks are thus means that security teams can do their work more efficiently and more effectively. This makes your organization more secure and reduces the opportunities threat actors have to exploit vulnerabilities to achieve their goals.
How Asimily’s Advanced Vulnerability Prioritization and Management Informs Your Strategy
The Asimily platform features advanced vulnerability prioritization functionality designed to streamline your risk management and ensure appropriate resource allocation. The platform’s patented vulnerability prioritization capabilities provide holistic visibility into all IoT devices connected to your networks for IT and security teams to begin working toward a comprehensive security program.
This includes automated inventory identification through passive scanning to surface:
- Operating system
- IP address
- MAC address
- Port numbers
- Version number
This information means that you get a richer picture of all the IoT devices attached to your systems. Effective vulnerability prioritization means understanding the full scope of your inventory first, and Asimily’s passive scanning empowers that step. Asimily also scours security data provided by manufacturers, open source software repositories, attacker activity, and vulnerability criticality information to identify weaknesses and assign contextual criticality to them.
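The attributes surfaced by that passive scan map naturally onto a per-device inventory record. The shape below is a purely hypothetical illustration of such a record, not Asimily's actual data model.

```python
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    """One entry in a passively built IoT inventory (illustrative fields)."""
    operating_system: str
    ip_address: str
    mac_address: str
    open_ports: list[int]
    version: str

sensor = DeviceRecord(
    operating_system="Windows 7 Embedded",
    ip_address="10.0.4.17",
    mac_address="00:1b:44:11:3a:b7",
    open_ports=[80, 502],
    version="6.1",
)
```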
Asimily customers can efficiently identify high-risk vulnerabilities with our proprietary, patented algorithm that cross-references vast amounts of data from resources like Software Bills of Materials (SBOMs), Common Vulnerabilities and Exposures (CVE) lists, the MITRE ATT&CK Framework, and NIST guidelines. The Asimily platform uses this information in the context of your unique environment to allow our deep contextual recommendation engine to provide actionable remediation steps that reduce your risks and save time.
Leveraging Asimily’s advanced vulnerability prioritization and management capabilities empowers companies to mature their IoT security and risk management programs. Ultimately, this makes companies safer and reduces the risk of long-term business disruption.
To learn more about Asimily, download our IoT Device Security in 2024: The High Cost of Doing Nothing whitepaper or contact us today.
Wal-Mart is testing blockchain technology to improve food safety, according to a Bloomberg report. The retail giant is currently tracking two items in a test that includes thousands of packages shipped to various stores.
The company believes blockchain could potentially speed up the process of identifying and removing recalled food. Currently, it can take days to identify and remove a product once a customer becomes sick.
Using blockchain, Wal-Mart can conduct more targeted recalls rather than pulling all products from the shelves when a recall is announced.
The company is using blockchain technology co-developed by IBM. If the test is successful, it could change how Wal-Mart monitors food and launches recalls. If Wal-Mart adopts blockchain to track food globally, it would be one of the largest implementations of the technology to date, according to the report.
Blockchain technology helps guarantee that information carries a timestamp and is recorded whenever any change happens, ensuring data can be trusted in real time. It’s now being used in a growing number of industries, including education, banking and financial services.
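The tamper-evidence property comes from chaining records together cryptographically: each entry embeds a timestamp and the hash of the previous entry, so altering any historical record invalidates every hash that follows it. Here is a minimal sketch of that idea, a conceptual illustration rather than the IBM implementation Wal-Mart is testing.

```python
import hashlib, json, time

def add_block(chain: list, data: dict) -> None:
    """Append a timestamped record whose hash covers the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

ledger: list = []
add_block(ledger, {"item": "mangoes", "event": "shipped", "farm": "A-12"})
add_block(ledger, {"item": "mangoes", "event": "received", "store": "0451"})
# Tampering with ledger[0]["data"] breaks every later prev_hash link,
# which is what makes the recorded history trustworthy.
```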
Swedish telecom company Ericsson, for example, is using blockchain to ensure data integrity and to serve as a strong foundation for its growing cloud presence.
Toyota Develops Wearable Mobility Device For The Blind
Project BLAID is dedicated to helping blind and visually impaired people navigate via a device worn around the shoulder.
A company best known for affordable cars is branching out in an unexpected direction, as Toyota announces Project BLAID, a wearable device for the blind and visually impaired.
Worn around the user's shoulders, the device is designed to help blind and visually impaired people better navigate indoor spaces, such as office buildings and shopping malls, by helping them identify everyday features, including restrooms, escalators, stairs, and doors.
Users will be able to interact with the device by means of voice recognition and buttons. The device itself is equipped with cameras that detect the user's surroundings and communicate information to the individual via speakers and vibration motors.
Toyota also plans to eventually integrate mapping, object identification, and facial recognition technologies.
"Project BLAID is one example of how Toyota is leading the way to the future of mobility, when getting around will be about more than just cars," Simon Nagata, executive vice president and chief administrative officer of Toyota Motor North America, said in a statement. "We want to extend the freedom of mobility for all, no matter their circumstance, location or ability."
The device will begin beta testing soon. As part of Project BLAID, the company is launching an employee engagement campaign that invites team members companywide to submit videos of common indoor landmarks. These videos will subsequently be used by Project BLAID developers to teach the device to better recognize these landmarks.
"Toyota is more than just the great cars and trucks we build; we believe we have a role to play in addressing mobility challenges, including helping people with limited mobility do more,” Doug Moore, manager of partner robotics at Toyota, said in a statement. "We believe this project has the potential to enrich the lives of people who are blind and visually impaired."
The company has also produced a short video that showcases a young man who is blind testing an early-stage version of the device.
"This has the ability to transform and change people's lives," Rajiv Daval, a Project Blaid engineer, explains in the video. "Toyota's commitment is to develop a product that will actually improve the mobility for the blind and visually impaired community."
Earlier this week Toyota introduced a different kind of innovation with the debut of a two-seater concept car, called the Setsuna, which runs on an electric motor and is made largely from wood.
Setsuna, which means "moment" in Japanese, is made from a variety of distinctive types of wood for different parts of the car, including the exterior panels, frame, floor, and seats.
Set to premiere at Milan Design Week in Italy, the vehicle, which is not headed for mass production, was constructed using a traditional Japanese joinery technique called okuriari, which does not use any nails or screws.
Toyota's philosophy behind the concept is to imbue a family vehicle with the type of personality and memory association one has for a member of the family, a history "absorbing the aspirations, memories, and emotions of multiple generations" of users.
At a Glance
- OpenAI tested GPT-4 to see if it helps users create biological weapons. The answer: just slightly.
OpenAI is developing a blueprint to determine if their large language models could aid someone in creating a biological threat.
To that end, the maker of ChatGPT conducted a study to find out if its most powerful LLM, GPT-4, helps users create bioweapons. The answer: It provides only a “mild uplift” to their efforts.
OpenAI asked 100 biology experts with doctorates and students to complete tasks that covered the complete process for creating bioweapons, from coming up with ideas, to acquisition, magnification, formulation and release. Participants were placed in groups; some were given access to the internet while the others were given access to both the internet and GPT-4. The experiment tested for performance improvements in accuracy, completeness, innovation, time taken and self-rated difficulty.
The study found that GPT-4 did not provide a significant improvement for creating biological weapons, with the large language model sometimes either refusing some inputs or generating errors in responses.
“While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation," including more investigation into "what performance thresholds indicate a meaningful increase in risk," according to OpenAI.
‘Tripwire’ for caution
The findings of this latest study could contribute to creating what OpenAI described as a “tripwire,” an early warning system for detecting bioweapons threats.
But OpenAI also warned that while GPT-4 did not raise the risk, a future system might.
“Given the current pace of progress in frontier AI systems, it seems possible that future systems could provide sizable benefits to malicious actors,” the study reads.
The existential risk of AI, particularly the use of language models for nefarious purposes, has become an increasing concern for AI experts and lawmakers.
Apocalyptic concerns were exacerbated last October when the Rand Corporation published a study that said chatbots like OpenAI’s ChatGPT could help plan an attack with biological weapons.
OpenAI appears to be treading more cautiously following the high profile firing and rehiring of its CEO Sam Altman purportedly in part due to AI safety concerns. The startup recently unveiled its Preparedness Framework that would assess a model’s safety prior to deployment.
From Castles to Cities with Zero Trust
Once an opaque term used only in niche cybersecurity circles, “Zero Trust” is now the centerpiece of a sweeping Executive Order (EO) signed by President Biden in May 2021. With that action, Zero Trust has transformed from a trendy buzzword to a core part of an upcoming federal mandate. Yet while the term may have entered our common lexicon, it’s still not entirely clear what exactly Zero Trust really means.
Castles and Cities
Let’s start with what Zero Trust isn’t. Zero Trust is not a technology. It’s neither hardware nor software nor even a network or digital ecosystem. Zero Trust is a concept, a mindset that challenges traditional cybersecurity thinking. If you’ve made it this far, you may be wondering why we’re referring to castles, moats, and…cities? We’ll explain.
Traditional cybersecurity practices focus on a “castle and moat” model, where security protocols concentrate on keeping threats out. Most importantly, the castle and moat approach assumes that any user with the right credentials to access a network has done so legitimately and can be trusted to move freely through the system. As the trend towards the cloud picks up speed, the concept of a security perimeter as we know it is becoming obsolete.
Zero Trust makes a different assumption: that networks are either actively under attack or already breached. After all, recent cyberattacks have made it clear that internal threats are more pervasive than external attacks and almost always more damaging. If the traditional cybersecurity model sees networks as castles to be isolated and protected from outside danger, Zero Trust sees networks as cities, where communication with external applications and networks is constant, and users need to move freely without sacrificing usability or security.
If we continue with the “city” analogy, Zero Trust architecture can be thought of as a type of internal police force or traffic enforcement agency. Zero Trust represents many different validation points, barriers around sensitive content, and strict controls even on verified users. An individual user may be a citizen in good standing in her virtual city with valid credentials. However, according to Zero Trust, that still doesn’t give her free rein around the city or allow her to access any information she wants. Zero Trust limits the actions and access even credentialed "citizens" can have by segmenting off only the streets and pathways through the city needed for access and by implementing controls such as multi-factor authentication, specific permissions levels, and greater levels of encryption.
Implementing Zero Trust Architecture
Every organization is different, with specialized needs and priorities. As a leading provider of cybersecurity to the Federal Government, Leidos is uniquely equipped to help customers plan and navigate the roadmap to Zero Trust. With that said, there are some basic guidelines agencies should have in mind when beginning their path towards Zero Trust. In fact, the Biden cybersecurity EO does not dictate a standard approach (other than the instruction to follow NIST guidance) to Zero Trust but asks agencies to work with the rest of the federal government to understand their best path forward.
In an upcoming post, we’ll get into the details around migrating to a Zero Trust framework and go in-depth on our recommended four-phased approach to Zero Trust Readiness. Here’s a quick preview:
Phase 1: Assess
Assess data/mission criticality, system complexity, attack surfaces, and risk.
Phase 2: Discover
Identify and inventory all actors, assets, processes, organizational structures, and executive buy-in.
Phase 3: Plan
Create a prioritized plan to incrementally introduce new capabilities.
Phase 4: Execute
Implement new capabilities and work into true Zero Trust security structure.
Finally, in large networks, implementing AI/ML capabilities can be critical in building adaptive access control systems that continuously calculate risk. Zero Trust with AI/ML can also orchestrate and automate security responses to detect and mitigate attacks at machine speed.
In most of today’s cybersecurity models, systems look for known patterns of attack or data signatures known to be associated with malware. As the threat ecosystem evolves, machine learning allows us to learn continuously and dynamically, detecting new types of malicious activity as they emerge.
Zero Trust by the Numbers
- 80% of data breaches involve compromised privileged credentials (Forrester)
- In 2020 alone, there were over 1,000 known data breaches in the United States, affecting over 150 million individuals (Statista)
- 72% of organizations planned to implement Zero Trust in 2020 (IntelligentCISO)
- By 2022, 80% of new business applications opened up to ecosystem partners will be accessed through zero-trust network access (Gartner)
- By 2023, 60% of enterprises will phase out most of their remote access virtual private networks (VPNs) in favor of Zero Trust Network Access (Gartner)
- 40% of cyber breaches actually originate with authorized users accessing unauthorized systems (IDC)
- By 2025, more than 85% of successful attacks against modern enterprise user endpoints will exploit configuration and user errors rather than make use of advanced malware (Gartner)
Leidos is a leader in researching, advising, and implementing Zero Trust architecture for both the private sector and government customers. Our consultative approach makes each project unique with one priority in mind—your security. | <urn:uuid:b071bc38-730b-43d1-8abf-d54486ada9db> | CC-MAIN-2024-38 | https://www.leidos.com/insights/castles-cities-zero-trust | 2024-09-10T07:59:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00386.warc.gz | en | 0.929991 | 1,122 | 2.515625 | 3 |
About forty years ago, Richard Feynman postulated that quantum systems could be simulated much more efficiently by using computer hardware that operates quantum mechanically. This means that information is stored and processed in a system that obeys the laws of quantum mechanics and that these laws form the basis for the information processing. To do this, the information in a quantum processor is encoded into the states of its qubits (quantum bits), a two-level system with some remarkable properties.
As opposed to the classical bit, where information is represented by the “on” or “off” states of a transistor, a qubit is not limited to being only in the states “0” or “1”, but can be in both at the same time. This fundamental concept in quantum physics is known as superposition.
Superposition reflects the statistical nature of quantum mechanics and is maintained for as long as the system is not observed or measured. When we measure the quantum state, the system collapses, probabilistically ending up in one of the two states. For this reason, we need to keep the qubit protected and unobserved throughout the course of a quantum algorithm.
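In standard notation, a single-qubit superposition and its measurement statistics read:

$$
|\psi\rangle = c_0\,|0\rangle + c_1\,|1\rangle, \qquad |c_0|^2 + |c_1|^2 = 1
$$

Measurement collapses this state, returning 0 with probability $|c_0|^2$ and 1 with probability $|c_1|^2$.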
Decoherence – Loss of “quantumness”
Although the idea of a quantum processor has been around for a long time, there is not yet a large-scale realization of a quantum computer. One of the main reasons why a quantum computer is so hard to build is that the lifetime of the qubit superposition is limited. One of the main challenges for quantum engineers is therefore to build as long-lived quantum systems as possible, without compromising the performance of the processor.
The loss of quantum coherence is known as decoherence and it happens when quantum superpositions collapse into classical states when they are measured. This happens regardless of whether it is an intentional measurement by an observer or caused by interference from the environment – the quantum system simply cannot tell the difference.
Measuring qubit coherence time is therefore a cornerstone activity in any quantum lab, as it provides important information about the quality of the qubit and its shielding, about how it is operated using quantum gates, and about the characteristics of the qubit readout.
The coherence time of a qubit can be divided into two different timescales, depending on the impact that interference has on its state. The first is the energy relaxation time – the time after which the qubit state relaxes to its ground state. This type of decay is non-reversible, since the energy leaves the system. The second is the dephasing time – the time scale after which the state loses its phase information. Loss of phase coherence can, under certain circumstances, be reversed by applying a refocusing gate sequence.
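These two timescales are conventionally labeled $T_1$ (energy relaxation) and $T_2$ (dephasing). In the simplest phenomenological model, the excited-state population and the phase coherence decay exponentially:

$$
p_{|1\rangle}(t) \propto e^{-t/T_1}, \qquad |\rho_{01}(t)| \propto e^{-t/T_2}, \qquad \frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_\varphi}
$$

where $T_\varphi$ is the pure-dephasing time. Since energy relaxation also destroys phase information, $T_2 \le 2T_1$.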
Entanglement and quantum interference
Quantum superposition states can also extend between quantum systems, so that an operation on one system immediately affects the others. This type of interaction is called entanglement and provides quantum computers with their advantageous scaling in computational power with each added qubit.
To illustrate this, consider the number of complex coefficients needed to describe the full quantum register. A single qubit state can be represented by two complex coefficients, c0 and c1, corresponding to the probability of finding the system in states |0〉 and |1〉, respectively. If we want to represent the states of two entangled qubits, we need four complex coefficients, i.e. c00, c01, c10, c11, and so on. In fact, the number of coefficients scales as 2N, where N is the number of qubits. This means that in a qubit register containing 300 qubits, the number of coefficients is larger than the number of atoms in the known universe.
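To make that scaling concrete, here is the memory a classical computer would need just to store the $2^N$ amplitudes, assuming 16 bytes per double-precision complex coefficient:

```python
def state_vector_bytes(n_qubits: int) -> int:
    """Memory to hold 2**n complex128 amplitudes of an n-qubit register."""
    return (2 ** n_qubits) * 16  # 16 bytes per double-precision complex

for n in (10, 30, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 1e9:.3g} GB")
# 10 qubits: 1.64e-05 GB; 30 qubits: 17.2 GB; 50 qubits: 1.8e+07 GB
```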
In a quantum algorithm, performed on a qubit register containing many qubits, the expected outcome contains a statistical distribution between the probabilities to find the qubits in certain states. Given that the qubits are entangled with each other, the answer to a computational task is retrieved by analyzing the quantum interference pattern (or correlations) of its outcome states. | <urn:uuid:843afb7e-4a36-4806-92a4-6fa2936ebd42> | CC-MAIN-2024-38 | https://www.frontier-enterprise.com/quantum-computing-a-paradigm-shift-for-information-technology/ | 2024-09-15T02:54:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00886.warc.gz | en | 0.948965 | 854 | 3.671875 | 4 |
How augmented reality bridges the gap between humans and machines
Not so long ago, we were worried that robots were coming to take away our jobs. While the fear remains, many are now convinced that a fully automated world is not what we are likely to witness – mainly because of the failures that have occurred in autonomous systems. Many tech leaders are of the view that it is not “robots” but “cobots” that will take centre stage in the future!
Cobots are robots built for man-machine collaboration. One of the key technologies that foster a culture of collaborative work between man and machine is Augmented Reality (AR).
What is AR?
Augmented Reality is a technology that blurs the gap between the real and virtual worlds. It overlays virtual elements onto real-world environments, paving the way for endless possibilities. AR facilitates human interaction with virtual objects and data within natural environments. With the power of Machine Learning (ML), computer vision, and advanced sensors, AR enhances our physical reality by augmenting it with digital content. This revolutionary technology is disrupting virtually every industry from retail to manufacturing by bridging the gap between humans and machines. It drives better collaboration, improves product development processes, enhances the training and hiring of staff, and more.
Let’s delve into how AR is bridging the gap between humans and machines across sectors:
Aeronautics
Various processes in aeronautics, such as design-to-market and maintenance, are complex and demanding. These processes require highly specialised technicians who have expertise in specific aircraft systems.
AR technology supports aeronautical technicians by:
- Clearly displaying the specific parts in the aircraft that require maintenance.
- Verifying the processes performed by the technicians with the help of an AR-based body tracking software.
- Enhancing the precision of assembly through CAD model superimpositions in AR that provide a step-by-step virtual assembly process.
- Facilitating a seamless collaboration of experts through AR remote assistance.
Construction
Building Information Models (BIM) are created by overlaying CAD models in AR/VR. The data generated through these models helps in optimising construction projects, leading to enhanced productivity and cost savings.
Another application of AR in construction boosting the efficiency of the sector is the use of AR devices like tablets or smartphones to visualise the precise location of the construction process. These devices can also help determine where future installations can be envisaged.
AR visualisation helps urban planners model the layout of an entire city.
Automotive
AR technology is being widely used to track the surroundings of a vehicle and display vital information on the driver’s panel. This includes static GPS data, dynamic direction arrows, road-sign reading, a preview of the vehicle’s direction in the driver’s rear camera, and more.
Healthcare
AR-based systems provide healthcare professionals with computer-processed imaging data in real time. Wearables, head-mounted displays, and mobile apps have enhanced the quality of healthcare delivery and patient experience.
AR helps healthcare professionals overcome limitations in areas like physical therapy and rehabilitation, surgery, accessing real-time patient data, seamless hospital navigation, and more.
AR technology is making huge strides in medical training as well. Medical students can learn human anatomy with the help of AR which enables them to delve into the human body in an interactive 3D format. AR can also boost the depth and effectiveness of medical training in multiple areas including the operation of medical equipment or performing complex surgeries.
Retail
AR technology is being harnessed to enhance the customer experience in both online and brick-and-mortar stores. The technology helps shoppers view a product in realistic environments on their smartphones. For instance, shoppers can view how a furniture item will look in specific locations at their home and assess its suitability.
Manufacturing
AR has revolutionised the manufacturing sector by fostering increased innovation. It facilitates improved product quality, better decision-making, enhanced operational efficiency, greater employee safety, and reduced costs.
AR provides product specialists with a 3D representation of various elements of a product. This helps them to enhance the product quality and deliver the product to the market quickly.
AR offers professionals contextualised information and insights into equipment performance. This data speeds up problem-solving and helps optimise processes.
Logistics
AR has made inroads into different logistics functions to enhance productivity. The technology provides real-time insights into operations, thereby boosting accuracy and efficiency.
AR enhances inventory tracking accuracy by quickly scanning and monitoring product locations in a warehouse. It enhances the speed, accuracy, and safety of maintenance by providing remote assistance to technicians. AR optimises the efficiency and accuracy of warehouse operations by recommending layout and workflow changes.
AR-based systems provide an immersive training experience to new employees by generating realistic simulations of operating procedures and safety protocols.
Besides these, AR has revolutionised several other processes and systems by working as an efficient and fast support system for professionals. AR technology has filled gaps in human efficiency by facilitating seamless man-machine collaboration. These are some of the reasons behind Statista’s finding that AR software is the sector’s largest segment, recording a market volume of USD 11.58 billion in 2023.
For organizations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organizational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism, will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organizations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organizations that are innovating collaboratively for the future. | <urn:uuid:068605cd-90a1-4cea-8d65-876eea0728a1> | CC-MAIN-2024-38 | https://www.infosysbpm.com/blogs/business-transformation/how-augmented-reality-bridges-the-gap-between-humans-and-machines.html | 2024-09-16T08:19:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00786.warc.gz | en | 0.939541 | 1,206 | 3.453125 | 3 |
Zero trust is an identity-focused security framework that uses identity and access management (IAM), microsegmentation, and the principle of least privilege to independently assess user requests before permitting them access to corporate resources. This strategy holds true whether the request comes from within or outside the enterprise’s network perimeter.
The zero-trust model works on the principle of not trusting anyone or anything until they are verified. By default, zero trust assumes that until a device or a user is authenticated, they’re untrustworthy and not allowed access to the corporate network. Zero-trust architecture microsegments networks and uses multi-factor authentication (MFA) and other granular controls to provide access on a need-to-know basis. This increased level of security helps enterprises tackle cybersecurity threats and future-proof the network.
The COVID-19 pandemic has had far-reaching consequences on how companies operate. Many businesses were forced to adopt the hybrid work model, where a significant chunk of the workforce worked outside of the secure perimeter of the office. Additionally, the pandemic also accelerated digital transformation efforts, with enterprises realizing the benefits of moving to cloud-based models.
However, this quick pivot to newer working methods led to increased cybersecurity challenges. As a result, security incidents became more common and more drastic. As per the 2021 IBM Cost of a Data Breach Report, data breaches cost companies $4.24 million per incident on average — the highest cost in the 17-year history of the report. Nevertheless, the report highlighted that embracing automation, artificial intelligence (AI), and building a zero-trust deployment can reduce security incidents to a large extent.
Also see: Top Zero Trust Networking Solutions
The Principles of Zero Trust
Historically, enterprises have secured their networks with virtual private networks (VPNs), where data travels through secure tunnels within a company’s network. But this method doesn’t serve its purpose well when workers are distributed across the globe. So, how do you secure your networks in light of these new changes?
“Zero trust is a trend that has continued to grow over the past few years,” said Tommy Gallahger, founder of Top Mobile Banks. “This approach has many benefits, including increased security, faster response times to cybercrime scenarios, and improved compliance standards. In addition, it allows companies to operate autonomously without relying on third-party providers or intermediaries.”
What enterprises need to remember is that zero trust is not a single tool that can magically secure their endpoint devices and networks. Instead, zero trust is a methodology that radically changes how we have been approaching network security up till now.
Zero trust works on the following principles (a minimal policy-evaluation sketch follows the list):
- All assets require mandatory authentication like MFA before granting access.
- All data is identified and microsegmented to inhibit the damage caused by lateral traffic.
- Continuous security management is required for all accounts and sessions.
- Access is provided on a need-to-know basis.
- The policy of least privilege is enforced by the enterprise.
- All endpoints are validated.
- All activities are documented, and all traffic within the network is inspected.
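The sketch below shows how several of these principles (mandatory authentication, endpoint validation, microsegmentation, least privilege, and activity logging) combine into a single deny-by-default access decision. Real ZTNA products are far richer; this is only a conceptual illustration.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool        # did the user pass multi-factor authentication?
    device_validated: bool    # is the endpoint known and healthy?
    segment: str              # microsegment the requested resource lives in
    permitted_segments: set   # least-privilege grants for this identity

def evaluate(req: AccessRequest) -> bool:
    """Deny by default; re-evaluate every request, internal or external."""
    if not req.mfa_verified:
        return False          # authenticate before granting anything
    if not req.device_validated:
        return False          # all endpoints must be validated
    if req.segment not in req.permitted_segments:
        return False          # access only on a need-to-know basis
    print(f"audit: {req.user} -> {req.segment}")  # document all activity
    return True

req = AccessRequest("alice", True, True, "hr-payroll", {"hr-payroll"})
assert evaluate(req)  # verified user, validated device, permitted segment
```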
Also see: Steps to Building a Zero Trust Network
Zero Trust Going Forward
Challenges posed by the pandemic, like work-from-home and bring-your-own-device (BYOD) policies, have increased cybersecurity risks. Further, the process of securing endpoints has become much more difficult. Together, this has contributed to the increased interest in zero-trust network access (ZTNA). Organizations now want ZTNA not only for their remote workers, but they also want to implement it in their offices.
Given the shift to a distributed enterprise, and naturally an increase in the attack surface, it is no wonder Gartner predicts that ZTNA will grow 31% in 2023. Gartner also reports that by 2025, at least 70% of remote deployments will implement ZTNA — up from less than 10% at the end of 2021.
Further, it is worth mentioning that the U.S. federal government has made a big push to include zero trust as part of improving U.S. cybersecurity. The executive order states:
“The Federal Government must adopt security best practices; advance toward Zero Trust Architecture; accelerate movement to secure cloud services…drive analytics for identifying and managing cybersecurity risks; and invest in both technology and personnel to match these modernization goals.”
Looking Ahead to Handle Zero-Trust Challenges
While security has to move beyond the traditional moat-and-castle concept to focus on all the endpoints, several challenges can hinder its adoption. Yet, with the right measures, the challenges can easily be overcome.
Patchwork zero-trust solutions
A zero-trust solution has the potential to lessen an organization’s cybersecurity woes. Organizations buy a patchwork of zero-trust products and believe they have taken all adequate precautions. However, this leaves security holes in the network cyber criminals can easily exploit.
To overcome an overdose of tools, buy tools that can solve multiple problems. Also, identify the gaps in your security posture that require immediate attention, and fix them fast.
Legacy systems
Zero trust involves a new way of protecting resources. Unfortunately, businesses with legacy systems have difficulty adapting to zero trust.
Enterprises need not adopt zero trust in a single step. Instead, they can start small and begin with the most critical areas. Then, once they develop confidence, they can gradually extend it to the entire network.
Ongoing management needs
With zero trust, you cannot just implement it and forget about it. It requires continuous monitoring of systems and resources. Often, organizations, especially small and medium businesses (SMBs), do not have the means to invest in ongoing management.
Routine checks are necessary to ensure everything is patched and that measures are in place to counteract zero-day attacks. Automation tools can help businesses monitor attacks and stay a step ahead of threat actors.
Productivity concerns
Constant and ongoing management of networks has another drawback: monitoring and troubleshooting network problems can hinder staff productivity.
“Today, some may say that zero trust can hinder productivity, which could be the case if back-end management processes and governance operations are granted manually,” said CTO at StrongDM, Justin McCarthy. “In the future, zero-trust technologies must also take the user experience into account, improving productivity as well as security.
“One example is making it easy to grant access and audit access control. This type of zero-trust architecture can drive higher overall levels of security, simplify accessibility, and deliver reduced operational overhead.”
Zero Trust in the Years Ahead
With the ever-growing trend in digital transformation, further exacerbated by the changes caused by the COVID-19 pandemic, zero trust has garnered increased adoption across many industries. Despite its challenges, there’s no denying the way zero-trust policies ensure organizations’ networks are secured from many cybersecurity threats among other benefits.
As organizations place greater importance in cybersecurity and data protection, we’ll begin to see an increase in the importance of zero trust. | <urn:uuid:fff9909b-51e4-4db1-96ff-83a6008c593f> | CC-MAIN-2024-38 | https://www.enterprisenetworkingplanet.com/data-center/handling-zero-trust-problems/ | 2024-09-07T23:15:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00686.warc.gz | en | 0.94485 | 1,473 | 2.65625 | 3 |
Technology has greatly expanded the methods of creating, editing, maintaining, transmitting and retrieving records. From creation to disposition, records in electronic recordkeeping systems may now utilize a variety of media. An example of an electronic recordkeeping system is one in which a personal computer generates the original records, which are subsequently stored on a secondary electronic resource. While paper copies of the electronic records may be printed for distribution, the original records are transferred electronically.
After a specified period, the inactive records are transferred to condensed electronic long-term storage and the original media upon which the records were stored may be erased and reused. Other electronic records may be created, manipulated, maintained and disposed of on the computer without ever being produced in hard copy.
Paper forms traditionally have been the primary medium for conducting business transactions, in both private and public environments. With the explosive growth of internet-based transactions for commercial and retail business, the electronic form is becoming the primary user interface—the visual record—when transacting electronic commerce. As such, the graphics and text of the forms, as well as the information that is entered into the forms, compose the "record" of transacted business. Records of electronic transactions will have an important role as evidence in legal disputes and regulatory audits.
With the rapid advancement in business systems, practices and procedures must be established to guide public and private entities through the potential minefield of electronic records management issues. What the courts require and auditors expect—therefore what commercial and public entities should be striving for—are systems and processes that are accurate and reliable and, as such, are viewed as trustworthy in the management of their electronic transaction records.
Managing records and documents refers to the process of controlling electronic or hardcopy documents over the course of their lifecycle—from inception through permanent retention or destruction. The activities in this process ensure the availability of documents to people who need them, while at the same time, protecting them from access by people who don't. The activities also ensure a company's compliance with regulations affecting its documents and reduce the legal exposure documents present when they are held past their useful life. In the business world, this process is usually referred to as document management. Although some use the terms "document" and "record" interchangeably, "record" historically has referred to a paper document.
The Difference Between “Electronic Document Management” and “Electronic Record Management”
Broadly speaking, electronic document management deals with documents that are still in active use: being created, edited, shared and revised. Electronic records management, by contrast, governs documents that have been declared records: fixed, final evidence of business transactions that must be retained, protected from alteration, and ultimately dispositioned according to a retention schedule.
You can find more information about IT performance risk on KnowledgeLeader by accessing the document Records Management Risk Key Performance Indicators. Here are other KnowledgeLeader resources that might be of interest to you as well: | <urn:uuid:48047e1f-441e-4562-a9a7-ace1885ae71a> | CC-MAIN-2024-38 | https://info.knowledgeleader.com/guide-to-records-management | 2024-09-15T09:14:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00086.warc.gz | en | 0.950974 | 529 | 3.09375 | 3 |
Back in December 2018, we predicted that 2019 would bring more of the same problems in the world of cybersecurity. We also said that advanced email attacks would become more sophisticated in this year and that spear phishing would become more prevalent. From what we can see now, most of what we’ve said is, unfortunately, coming to fruition.
However, a lot more has happened. Headlines of massive data breaches, privacy violations, email breaches, and other forms of cybersecurity threats are all under the spotlight. These problems, affecting both businesses and consumers, continue to make cybersecurity a vital topic for everyone.
Two quarters into 2019, let’s review our predictions and see what to expect for the coming years.
The IoT continues to grow and so does the Attackers’ Interest in It
As expected, the Internet of Things and all its connected devices is spreading all over the globe. In 2018, there were around 23.14 billion connected devices, while in 2019, there are 26.66 billion. That number is set to reach 75.44 billion as early as 2025. Such growth inevitably brings the attention of hackers.
Ever since IoT devices first appeared, experts have been saying that there are many inherent security flaws in all of them. One of the most frequently highlighted flaws is the ability of IoT devices to trigger communication to users and to other devices (e.g., receiving a warning from your refrigerator). This communication, which can arrive as an app notification or an email, can be tampered with and used for phishing or for obtaining sensitive information.
However, there is hope that their increased popularity will bring more regulations that can protect IoT devices. Even though many ignore it, there are already several ways to safeguard IoT devices from cyber threats.
Data Breaches Continue to Affect Everyone
Data breaches continue to be a big problem in 2019, and the trend is unlikely to stop. Massive companies like Facebook are still experiencing leaks. More importantly, the number of data breaches is continuing to rise, most notably in the health industry, which has seen an 80% increase from 2017 until today. One of the reasons for this rise is legal: several jurisdictions now require organizations to disclose breach incidents, so minor and major incidents alike, as well as mere suspicions of an incident, are published.
It’s important to note that one of the causes making it a huge problem comes from the fact that most companies take too long to identify data breaches. The average time across all industries is 197 days, and some 69 more days to contain the breach. During such long periods, most companies stand to spend millions to remedy the situation.
Advanced Email Attacks Continue to Affect Companies
Email attacks are always changing, but 2019 is proving to be the year of the most dangerous email attacks – impersonation attacks. Such sophisticated email threats - Business Email Compromise - have been on the rise lately, and they continue to scare many business owners. 61% of organizations have stated that they are worried about email attacks on their businesses.
One of the most aggressive attacks on businesses is certainly a business email compromise, which continues to be the most significant risk for companies of all sizes. Hackers are increasingly using these sophisticated attacks while turning away from less profitable cyberattacks like ransomware. BECs usually lead to data breaches, in the form of information exfiltrated from organizations. The email attack vector is actually one of the main drivers for some of the highly publicized data breaches we’ve seen this year.
Improving Cybersecurity Culture in your Organization
Email will remain the primary targeting method of advanced attacks. Having advanced email security software in place is crucial for detecting and preventing the most sophisticated email attacks on your company.
With so many sophisticated email attacks and other cyber threats on the rise, companies need to start paying more attention to their cybersecurity. All a company can do is strive to create a safer digital environment with proper cybersecurity technology and employee education. By enforcing those security practices, you’re very likely to reduce the risk and keep your company safe from the most advanced threats.
Piz Daint is a 5,272-node Cray XC30 supercomputer with Intel Sandybridge processors and NVIDIA K20X GPUs, with a theoretical peak performance of 7.8 petaflops. It runs software from a wide variety of disciplines, including climate science, geoscience, chemistry, physics, biology and materials research.
With many of the software codes customized or written by the scientists that use Piz Daint – often utilizing high-performance programming models such as OpenACC or CUDA – CSCS recognized reliable and bulletproof tools were needed to ensure that the codes reach their potential and chose to work closely with Allinea and deploy the Allinea DDT debugger.
“Saving time on resolving software problems is key. We want to help people focus on their core activity – whether in their science or in improvements on their existing codes. Allinea DDT helps us to do that,” says Jean-Guillaume Piccinali, Scientific Computing Support Specialist.
Allinea and CSCS work together to test and validate pre-production environments, which enables changes to be rolled out with confidence, and problems in systemware or libraries to be triaged quickly. In addition, they run training programs to help the users work productively, while focusing on their science.
“With today’s rapidly changing HPC environments, collaboration is the key to enabling the rapid deployment of new technology – and then to delivering on the potential of HPC,” said Patrick Wohlschlegel, Technical Services Manager of Allinea Software. “Together CSCS and Allinea are shortening the time it takes for developers to solve bugs on the system.”
What’s the big deal about edge computing? Does it have an advantage over cloud computing? Can it offer anything that the cloud can’t? What’s the point of having an edge when you already have a cloud? These are just some of the questions we will answer in this article.
Related: Why Edge Computing Is Gaining Adoption Among Big Data Companies
Edge Computing Goes Mainstream in 2022
Edge computing is gaining traction as a powerful addition to cloud computing. Unlike cloud computing, where all data processing takes place in centralized servers, usually in a different location from where the application is being used or accessed, edge computing brings all processing closer to users.
It is applicable across several different use cases, but its main advantage is that it allows users to process data close to the source without being dependent on the cloud.
Everything from IoT devices to self-driving cars uses edge computing to identify and respond quickly to certain events.
What Is Edge Computing?
The term “edge” refers to any location outside traditional data centers in which data processing occurs. It features low latency, closer proximity to users and devices, and local processing capabilities for IoT applications.
Edge computing environments can be on-premise or smaller regional data centers located nearer to end-users. They can process data in real-time without relying on centralized cloud platforms for resources.
As per Gartner’s prediction, “75% of enterprise-generated data will be processed outside the cloud or traditional data centers by 2025.” This means enterprises should prepare for a decentralized IT environment with distributed computing power across multiple locations.
Keep reading: The Differences Between Cloud and Edge Computing
Top Edge Computing Trends That Will Dominate in 2022
The amount of data generated worldwide is growing exponentially. The IDC predicts that within the next few years, there will be 175 zettabytes (175 trillion gigabytes) of data created every year. This growth is driven by the rise of IoT devices, providing businesses with more data and insights than ever before. This data helps organizations improve their processes and customer experiences.
While cloud computing has helped companies benefit from the power of this data, it’s not enough. According to NVIDIA, sending all this data to the cloud takes too much time — up to a tenth of a second. A tenth of a second might not seem like much, but autonomous vehicles need to make decisions within milliseconds if they want to avoid crashes. That’s why edge computing is growing in popularity: It enables businesses to quickly access data from IoT devices without relying on cloud computing alone.
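A quick back-of-the-envelope calculation shows why that tenth of a second matters; the 5 ms figure below is simply an illustrative edge-class response time, not a measured benchmark.

```python
def meters_traveled(speed_kmh: float, latency_ms: float) -> float:
    """Distance a vehicle covers while waiting on one round trip."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

print(meters_traveled(100, 100))  # ~2.78 m covered during a 100 ms cloud round trip
print(meters_traveled(100, 5))    # ~0.14 m with a hypothetical 5 ms edge response
```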
Want to learn more about edge computing? Watch the replay of our Living on the Edge webinar where our speakers break down the complexities of edge computing and provide strategies for successfully deploying edge solutions.
The Internet of Things (IoT) has been the defining trend of the last decade. Gartner predicts that by 2025, there will be 75 billion connected devices in use worldwide. This effectively means that everything from our streetlights to our cars to even our fridges will soon be connected to the internet.
Edge computing is the best way to handle this massive expansion of connected devices. It allows data to be processed at the source, reducing bandwidth requirements and ensuring faster network speeds for billions of users. This is especially important as more and more devices join IoT networks every year.
5G and Edge Computing
With every new generation of mobile technology, data speeds increase exponentially. According to Nokia, 5G networks are 100 times faster than 4G LTE networks and can simultaneously support ten times more devices. In a nutshell, 5G will connect everyone on the planet and make lightning-fast data transmission possible for all devices.
Suffice it to say, edge computing will be instrumental in making this a reality. With more users connecting at once, it’s going to be essential that we have distributed cloud platforms available for processing data at the source.
Edge computing is already seeing massive growth in the automotive industry. The tech will fuel the rise of autonomous vehicles (AVs).
The reason for this lies in AVs’ need for a low latency connection. They will also require real-time insights into their surroundings. These include road conditions and other vehicles on the road. All of this can only be achievable with edge computing.
As a result, edge computing will help make AVs a reality by 2022. It will ensure that the vehicles are safe and efficient on the road.
Read more: The Shift to Digital-First Retail
How Can Edge Help Businesses?
Edge computing is the future of data centers. It enables real-time analytics, increases speed, and reduces latency. The technology is already in use across several industries, such as retail, healthcare, and finance.
For example, it can help retailers in the following ways:
Real-time analysis – Edge computing allows stores to capture customer data and analyze it on the go. This information can serve to improve product selection and inventory management.
Speed – Edge computing allows retailers to process data faster than ever before. This helps them streamline their operations and offer customers a better shopping experience.
Latency reduction – Edge computing reduces latency by routing traffic from central servers to local ones or even directly through the cloud. This ensures that customers have access to the information they need when they need it without delays or interruptions due to congestion on the network (which helps reduce costs).
Challenges to Edge Computing
Here are the top challenges of edge computing and how your organization can overcome them:
Security
Edge computing requires a greater focus on security than most organizations are used to. The more places you have data, applications, and hardware, the more potential points of vulnerability there are for hackers to exploit. In other words, security becomes even more important with edge computing than before. Organizations need to prioritize security at every step when implementing edge computing, including in their vendor contracts.
Latency
For some applications, network latency — the delay between sending and receiving data — doesn’t matter much. For others, it’s critical. If an application will be used at the edge of a network, it’s important to understand how that application will be affected by inevitable delays in transmitting data over long distances.
Some applications that work well on the cloud might not work as well at the edge because of latency issues. Until 5G equipment becomes available for everyone, this will be an ongoing problem for any organization attempting to create applications for the edge of networks.
Edge computing is going mainstream, and the trends, opportunities, and challenges discussed above are the ones to watch out for in 2022.
You’ve heard about ChatGPT, OpenAI’s revolutionary tool. Ever wondered how it might shake up your industry? From writing to customer service, even legal drafting, it’s not just automating jobs but creating new opportunities. Let’s delve into ‘How does ChatGPT Affect Industries?’ Get ready for a deep dive into AI’s potential and its implications for your work. It’s time to explore the brave new world of AI.
Understanding ChatGPT and AI
To kick off, you must understand that ChatGPT, a cutting-edge AI tool developed by OpenAI, has the potential to shake up various industries in ways you might not have imagined. It’s not just about automated responses or voice assistants; it’s about the profound implications of AI that can redefine the way we work.
Understanding AI isn’t only about grasping its technical aspects. It’s also about appreciating AI’s capabilities and limitations. While AI can efficiently handle tasks like customer service inquiries or even writing code, it isn’t capable of replicating human emotions or intuition. It’s a tool, not a replacement for human reasoning.
AI’s future appears promising, but it’s not without ethical considerations. AI’s implications include job displacement, privacy concerns, and potential misuse. Yet, there’s also immense potential for growth, innovation, and efficiency.
Balancing these considerations is key. As we move forward, we must adapt to AI’s capabilities, prepare for its limitations, and mitigate potential ethical issues. Your understanding of ChatGPT and AI isn’t just about the tech—it’s about preparing for a future where AI becomes an integral part of our industries.
AI’s Impact on Traditional Jobs
As you delve into the world of AI, you’ll quickly see how it’s shaking up traditional jobs in unprecedented ways. The impact on employment is tangible, with job displacement becoming a growing concern. AI, like ChatGPT, is automating tasks traditionally done by humans, causing a workforce transformation that’s both exciting and daunting.
The automation challenges are real. From customer service to legal documentation, AI’s capabilities are redefining roles, sometimes even rendering them obsolete. However, it’s not all doom and gloom. While some jobs are disappearing, others are evolving, and new ones are emerging. Future job prospects lie in roles that leverage AI, not compete with it.
Think of AI as a tool, not a threat. It’s here to augment human capabilities, not replace them. Harnessing its power can lead to increased efficiency, productivity, and innovation. Yes, there’s displacement, but there’s also opportunity. As traditional jobs transform, the key is to adapt, upskill, and embrace the change. AI is reshaping the world of work, and it’s up to you to seize the potential it brings.
ChatGPT’s Potential Applications
Building on the idea of AI as a tool, let’s explore how you can harness the potential of ChatGPT in various industries. This AI is revolutionizing customer service. It can answer queries swiftly and accurately, reducing wait times and improving user experience.
Legal document automation is another area where ChatGPT shines. It can draft basic legal documents, saving time and minimizing human error. This automation can streamline workflows, freeing legal professionals to handle more complex tasks.
But ChatGPT isn’t just about mundane tasks; it’s also about creativity. It can write compelling narratives, opening up possibilities for storytelling with AI. It’s revolutionizing the literary landscape, offering a new way to generate content.
AI in code writing is another exciting application. ChatGPT can generate snippets of code, helping programmers automate repetitive tasks. This not only saves time but also reduces the likelihood of bugs, enhancing the quality of software.
The key is adapting workflows for AI integration. It’s about understanding how to best use ChatGPT, aligning its capabilities with your needs to optimize productivity and efficiency. Trust me, once you begin to understand and utilize ChatGPT, you’ll wonder how you ever managed without it.
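To make that concrete, here’s a minimal sketch of wiring ChatGPT into a customer-service workflow via OpenAI’s Python client. The model name and prompts are illustrative assumptions, not a prescribed setup, and a human should still review each draft before it goes out:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_reply(customer_message: str) -> str:
    """Ask the model for a first-draft support reply for a human to review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a polite customer-support assistant."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("My order #1234 arrived damaged. What are my options?"))
```

Notice how little glue code the integration needs; the real work is deciding where in your workflow a drafted reply helps and where human judgement must stay in the loop.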
Job Automation and Opportunities
The shift toward AI like ChatGPT in various industries isn’t just about automating tasks; it’s also about the new job opportunities it creates. AI job displacement is a real concern as AI-driven innovation automates tasks once done by humans. Yet, it’s crucial to see this as part of a broader workforce transformation rather than a simple replacement scenario.
Emerging job opportunities arise as AI uncovers new areas of potential. Roles in AI ethics, AI training, and AI system management are just a few examples. With the right training and adaptability, you can position yourself to tap into these new opportunities despite the automation challenges.
Consider this table:
| AI Job Displacement | Emerging Job Opportunities |
| --- | --- |
| Customer service agents | AI trainers |
| Data entry clerks | AI ethics managers |
| Assembly line workers | AI system managers |
| Retail cashiers | Data analysts |
| Truck drivers | AI application developers |
Adapting to AI Capabilities
In your industry, you’ll need to adapt to the capabilities of AI tools like ChatGPT, as they’ll play a significant role in shaping the future of work. AI capabilities are not just about automating tasks; they’re about transforming workflows, innovating business models, and restructuring organizations.
You’ll see workflow transformation as AI takes over mundane tasks, allowing you to focus on more strategic roles. You’ll need to innovate your business model to leverage AI’s potential, creating more value for your customers and stakeholders. Organizational restructuring will be inevitable as you align your human resources with AI to optimize productivity.
Skill development is another facet to consider. AI competence is becoming a critical skill in many industries. You’ll need to invest in upskilling your workforce, equipping them with the necessary knowledge to work alongside AI tools.
AI Versus Human Reasoning
As you navigate this new AI-driven landscape, it’s vital to consider the ongoing debate between AI and human reasoning. AI’s impact on decision making is profound, yet it’s not without flaws. For instance, AI lacks the emotional intelligence and contextual understanding integral to human reasoning.
However, human adaptability is a remarkable trait that allows us to coexist with AI. Despite AI’s effect on the job market, potentially replacing certain roles, it also creates new opportunities where human skills are indispensable.
It’s paramount to balance AI and humans in the workforce. While AI can handle repetitive tasks efficiently, it still requires human oversight to manage unpredictability. This interplay confirms that AI is not set to replace us, but rather reshape our roles.
Human Adaptation to AI
How will you adapt to the increasingly AI-driven world, especially with tools like ChatGPT transforming traditional industries? This question is both exciting and daunting as you navigate the ethical implications and potential job displacement created by AI developments.
The key lies in skill enhancement and the cultivation of unique human traits. AI may generate code and write articles, but it lacks emotional intelligence and the ability to make nuanced ethical decisions. Therefore, enhancing your abilities in these areas could make you invaluable in an AI-enhanced workplace.
Consider collaborative intelligence, the intersection of AI capability and human ingenuity. By learning to work alongside AI, you can complement its strengths and compensate for its weaknesses.
Consider the table below, which evokes the emotional journey you may experience in this process:
| Stage | Emotional Response | Action |
| --- | --- | --- |
| Realization | Anxiety | Seek knowledge |
| Adaptation | Acceptance | Enhance skills |
| Integration | Confidence | Collaborate |
Don’t fear AI; instead, see it as a tool that, when combined with your unique human skills, can lead to greater productivity and creativity.
Disruption of Traditional Judgment
Every sector of industry faces significant shifts in decision-making as AI tools like ChatGPT disrupt traditional judgment methods. This disruption exposes the flaws of AI, yet also highlights its transformative power. You can no longer rely on old models when AI is redefining how industries operate.
AI’s continuous evolution brings new business models and workflow changes to the table. It’s not just about automating tasks; it’s about reimagining how work is done. For instance, AI can handle customer service inquiries, which frees you to focus on tasks that require a human touch. The transformation of industries is inevitable, and embracing this change is the key to staying competitive.
However, this transformation isn’t without challenges. Adapting to these changes requires lessons learned from previous tech disruptions. Remember, every innovation comes with its own set of drawbacks and AI is no exception. It’s crucial to be aware of AI’s limitations, like its requirement for large amounts of data and its vulnerability to biases.
Expert Opinions on AI Disruption
In light of AI’s growing influence, you’ll find it interesting to explore expert insights on the disruption caused by tools like ChatGPT. The consensus among experts is that AI disruption brings future challenges but also opportunities for industry transformation.
Consider three experts: Ajay Agrawal, Joshua Gans, and Avi Goldfarb, who co-authored ‘Prediction Machines’, a book that explores AI’s economic impact. Their research predicts creative disruption but also highlights the need for effective adaptation strategies.
| Expert | Key Insight |
| --- | --- |
| Ajay Agrawal | AI is a tool that lowers the cost of prediction, transforming industries |
| Joshua Gans | Humans will need to focus on decision-making, where AI falls short |
| Avi Goldfarb | The economic impact of AI will be vast, but manageable with strategic adaptation |
They argue that AI tools like ChatGPT are not a threat, but a tool to be leveraged. It’s a shift that will require a new mindset and adaptation to AI-driven changes. The question is not whether AI will disrupt industries, but how and when. The key is in understanding the potential, preparing for changes, and strategically adapting to the new AI-driven landscape.
Influence of AI on Entrepreneurship
As an entrepreneur, you’ll find that AI’s disruption, especially tools like ChatGPT, can significantly reshape your approach to business. This AI-driven innovation offers you exciting entrepreneurial opportunities, paving the way for AI-powered startups to dominate the future of entrepreneurship.
Consider the following:
- AI disruption strategies
  - You could leverage ChatGPT to automate customer service, reducing costs and improving efficiency.
  - Innovating new services using AI could put your startup ahead of the curve.
  - AI can help you understand your customer base better, leading to more targeted marketing strategies.
- Entrepreneurial opportunities
  - With AI, you could develop unique solutions to problems, creating niche markets.
  - AI can help you scale your business faster and more efficiently.
- Future of entrepreneurship
  - AI is bound to become an integral part of business, and early adoption could give your startup a competitive edge.
  - The evolution of AI technologies like ChatGPT could lead to previously unimagined business models.
Embrace the change, adapt, and you’ll find that AI isn’t an adversary, but a partner pushing your entrepreneurial journey to new heights.
AI’s Role in Healthcare
Shifting your focus to healthcare, you’ll see that AI’s transformative power, exemplified by ChatGPT, can revolutionize this industry in unimaginable ways. It can contribute to diagnostics, patient care, drug development, and medical research, while raising new ethical questions for healthcare.
Consider AI in diagnostics. It can analyze patterns in data to detect diseases, even before symptoms appear. Then there’s patient care. AI can enhance care by predicting patient needs and personalizing treatment plans.
Next, think about drug development. AI can accelerate the process, predicting how different compounds interact, thus decreasing the time and cost of creating new drugs.
Finally, AI in medical research can process vast amounts of data, finding correlations and making predictions that would take humans years to identify.
| AI in Healthcare | Benefits | Challenges |
| --- | --- | --- |
| Diagnostics | Faster, accurate detections | Ensuring accuracy & reliability |
| Patient Care | Personalized treatment plans | Maintaining privacy |
| Drug Development | Speeding up processes | Ensuring safety & efficacy |
| Medical Research | Processing vast data | Ensuring scientific validity |
However, while AI’s potential is enormous, it’s crucial to consider the impact on healthcare ethics. Balancing AI’s benefits with privacy, accuracy, safety, and scientific validity is paramount.
AI as a Disruptive Economic Force
You might wonder how AI, like ChatGPT, acts as a disruptive economic force in the world of industries. The economic implications are profound as this technology is driving industry transformation. By harnessing the capabilities of ChatGPT, businesses can innovate, altering traditional models to become more efficient and competitive.
However, with technological disruption come challenges:
- Workforce displacement: This is the elephant in the room. As AI takes over more tasks, the potential for job losses is real. You might feel uneasy, even anxious about this.
  - But remember, new technology also creates new jobs. The key is to adapt and reskill.
- Industry transformation: AI is not just changing how we work but what we work on. Entire industries are being shaken up.
  - This can be scary, but also exhilarating. Change brings opportunities.
- Business model innovation: Companies need to rethink their strategies and business models to incorporate AI.
  - This might seem daunting, but it’s also a chance for businesses to become more efficient and competitive.
Lessons From the Taxi Business
Understanding the disruption in the taxi industry gives insight into how AI, like ChatGPT, could potentially shake up traditional business models. The transformation was swift and pervasive, as apps like Uber and Lyft leveraged technology to offer better services. They’ve effectively redefined the taxi industry.
Here’s a comparison:
| Taxi Industry Disruption | Lessons for AI Adaptation |
| --- | --- |
| Traditional business model disrupted by tech | Existing workflows could be disrupted by AI |
| Need for rapid adaptation | Businesses must adapt quickly to AI |
| Creation of new opportunities | AI, like ChatGPT, offers new opportunities |
| Transformation of customer experience | AI can enhance customer interactions |
| Resistance and eventual acceptance | Initial resistance to AI will give way to acceptance |
Similarly, ChatGPT, with its language processing capabilities, is poised to transform creative work. It’s not just about replacing humans; it’s about redefining roles, creating new opportunities, and enhancing the value delivered to customers.
Adapting to AI isn’t optional; it’s essential. The lessons from the taxi industry disruption serve as a guide. Just as the taxi industry evolved, so too will industries impacted by AI. The future belongs to those who can harness AI’s potential effectively.
AI’s Impact on Creative Work
Just like the taxi industry, your creative work sector stands on the brink of significant transformation due to AI tools like ChatGPT. This shift isn’t merely about automating tasks; it’s about reshaping the very fabric of creativity.
Consider these key areas:
- AI’s impact on artistic expression:
  - Imagine AI helping you unlock new artistic styles, or crafting narratives you’d never considered.
  - Fear not the machine, but instead see it as a collaborator in your creative journey.
- Transforming the advertising industry:
  - ChatGPT and AI can generate compelling ad copy and innovative campaigns.
  - The future of advertising could be a dance between human creativity and AI efficiency.
- AI in music production and AI’s influence on content creation:
  - AI can compose melodies or write engaging blog posts.
  - The fusion of AI and human creativity promises a symphony of innovation.
The future of AI in design, and indeed all creative work, is not about replacement but about transformation. It’s a brave new world, one where you’ll harness AI to push the boundaries of what’s possible in your creative work. Embrace the change, and let’s shape this future together. | <urn:uuid:e1462f8f-b45d-41fd-993d-d5c41cea955c> | CC-MAIN-2024-38 | https://disruptionhub.com/how-does-chatgtp-affect-industries/ | 2024-09-20T08:08:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00586.warc.gz | en | 0.919501 | 3,504 | 2.53125 | 3 |
In the evolving landscape of modern manufacturing, a transformation is underway. By blending Computer-Aided Manufacturing (CAM) with Additive Manufacturing (AM), more commonly known as 3D printing, industries are gearing up for a significant upheaval. This convergence, known as hybrid manufacturing, promises to optimize production workflows by harnessing the respective strengths of both manufacturing processes. As we delve into the nitty-gritty of hybrid systems, we see a future where high-density parts feature complex geometry and production steps are condensed for ultimate efficiency.
The Emergence of Hybrid Systems in the Manufacturing Landscape
Addressing Remote and Constrained Production Environments
Aboard the USS Bataan, a U.S. Navy ship sailing through vast and treacherous waters, a revolution has taken root. In this isolated environment, traditionally unreachable for rapid manufacturing aid, a hybrid manufacturing system emerges as a beacon of self-sufficiency. With metal Additive Manufacturing and CNC milling available in the hull, the crew can fabricate critical parts as needed, overcoming the obstacles of distant shores and supply chain constraints.
In such remote settings, space is invaluable. The compact footprint of hybrid systems is a game-changer, enabling operators to perform additive processes to build a part and then seamlessly transition to subtractive methods to refine it. This dual capability not only saves precious space but also reduces the logistical burden of storing a vast array of spares.
The Benefits of Combining Additive and Subtractive Manufacturing
In aerospace, defense, automotive, and mold-making industries, the demand for parts with complex geometries is non-negotiable. Hybrid manufacturing rises to this challenge with aplomb. It merges the additive process of building layer upon layer for the initial form with the subtractive precision of milling to sculpt the final design. This symbiosis offers an unparalleled level of detail and density, typically unachievable through singular manufacturing methods.
Not stopping at just delivering superior parts, the hybrid process paves the way for streamlining. Direct transitions from 3D printing to finishing processes eliminate the cumbersome steps of moving components between different machines, slashing time and labor expenses dramatically. Furthermore, this integration promotes sustainability—by commingling these methods, there is a substantial reduction in material waste, as only the needed amount is added, and excess can be precisely trimmed.
Hybrid Manufacturing in Action: Case Studies and Applications
Industry Leaders and Their Hybrid Systems
Companies like Phillips and 3D Systems are not just mere participants in this trend but are driving forces. Phillips pairs the Meltio wire-laser metal 3D printing platform with the muscle of Haas TM-1 CNC capabilities, exemplifying a sterling instance of hybrid ingenuity. Meanwhile, 3D Systems ushers in its brand of innovation by combining pellet extrusion with CNC milling, underscoring the versatility that can be achieved within hybrid systems.
The foray of these frontrunners into hybrid manufacturing is no random venture; it’s a calculated stride toward customization. Each application in these diverse industries is considered, and hybrid solutions are tailored to meet their unique demands. From producing large-scale, high-mass parts to intricately designed elements necessitating labor-intensive adjustments, these systems adapt with astonishing nimbleness.
Operational Advantages in Challenging Scenarios
The strategic adoption of hybrid manufacturing sheds light on a crucial aspect—the quest for autonomy in production. The U.S. Navy’s foray with a Phillips Additive Hybrid system exemplifies this advantage. Beyond the bravado of technical superiority, these next-gen systems signify an era of enhanced operational readiness and logistical prowess, particularly in daunting scenarios where traditional supply chains falter.
In such spheres, the importance of immediate repairs or modifications is paramount. Hybrid manufacturing caters to this urgency by allowing in-situ adjustments and tooling changes, redefining the whole notion of ‘manufacturing lead time.’ This approach is not just a convenience; it’s a vital component in maintaining continuous operations where downtime equates to liability.
Limitations and Challenges in Adopting Hybrid Manufacturing
Skill Gaps and Training for Hybrid Manufacturing
Despite the allure of hybrid systems, they are accompanied by their share of hurdles. A significant concern is the training and skills required to operate these multi-faceted machines. Mastery over both additive and subtractive manufacturing becomes essential. Nevertheless, hybrid systems could be a beacon of hope, simplifying the complexity by providing an integrated solution that requires a less steep learning curve, despite the intricacies involved.
Issues such as maintenance and potential downtime cannot be overlooked. When multiple technologies converge into one machine, the risk of system failures or bottlenecks can increase. It’s a delicate balance between enjoying the fruitfulness of a collective system and being vulnerable to the intricacies of its integrated nature.
Weighing Efficiency Against Flexibility
In high-volume production environments, the transition to hybrid systems is not as clear-cut. Integrated processing may be enthralling in principle, but it can’t always match the agility some manufacturers need. For mass production, separate specialized systems may still hold the edge over hybrid solutions, particularly for items requiring less intricate machining or those with extensive print times.
However, hybrid systems are tailoring themselves to the realms where they shine the brightest—parts that need repairs, modifications, and those with significant weight. The operational efficiency in these particular scenarios hints at a bright future for hybrid manufacturing, where its application is strategic and impactful.
The Future of Manufacturing with Hybrid Systems
Streamlining Production Workflows
Imagine a workflow so in tune that it bridges the intricacies of 3D printing and the artistry of CNC milling, all within a single, seamless process—this is the vision wire-laser DED technology offers. With such capabilities, the manufacturing scene is on the cusp of an era where precision is heightened, and production times are dramatically reduced. Not only does this bode well for cost savings, but it also spells an upturn in the quality of the final products.
As the world of manufacturing laps up this vision, the various sectors are waking up to the fact that a synergy between printing and machining is within reach. A promise of a new age of manufacturing beckons, where these merged workflows become less of an exception and more the rule.
Pioneering On-Site, On-Demand Production Capabilities
In the dynamic arena of today’s manufacturing scene, a remarkable shift is taking place. The integration of Computer-Aided Manufacturing (CAM) with Additive Manufacturing (AM), also known as 3D printing, is setting the stage for a significant industrial revolution. This blend, termed hybrid manufacturing, is slated to revolutionize production processes by leveraging the unique benefits of both CAM and AM approaches.
Hybrid manufacturing is emerging as a cutting-edge method that synergizes the digital precision of computer-aided designs with the layer-by-layer construction capabilities of 3D printing. This merger facilitates the creation of intricately designed, high-density components while streamlining production steps, thereby ramping up efficiency to unprecedented levels. The implications for the future are clear: as industries adopt these powerful hybrid systems, we can anticipate the production of complex parts with greater speed and exactitude.
Through this evolution, manufacturers are not only looking at enhanced productivity but are also contributing to innovative advancements in design and engineering. The result is expected to be a significant elevation in the quality and performance of manufactured goods. As these technologies continue to mature and their adoption broadens, hybrid manufacturing stands poised to redefine the standard for what is possible in the manufacturing domain. | <urn:uuid:9c6717ec-e2cd-4247-b0ca-a3d6b667df7c> | CC-MAIN-2024-38 | https://manufacturingcurated.com/manufacturing-technology/hybrid-manufacturing-fusing-3d-printing-with-cnc-milling/ | 2024-09-09T09:25:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00686.warc.gz | en | 0.92092 | 1,578 | 2.828125 | 3 |
Retail, CPG and Logistics
The role of big data analytics in logistics optimization
In today's fast-paced and interconnected world, logistics optimisation plays a crucial role in efficient and cost-effective supply chain management. Big data analytics has transformed traditional logistics management and enhanced logistics operations. Businesses can access vast amounts of data to mitigate risks and ensure hassle-free supply chain management.
Big data has made “big” strides in the logistics ecosystem. Research shows that 98% of third-party logistics companies and 93% of shippers believe that big data analytics facilitates intelligent decision-making.
Big data in logistics entails the collection, processing, and analysis of complex datasets about logistics management. It integrates various systems like sensors, RFID tags, GPS devices, and Enterprise Resource Planning (ERP). Big data analytics facilitates a comprehensive approach that enhances transportation, inventory and warehouse management, demand forecasting, and more.
How Big Data Analytics Optimise Logistics Processes and Improve Overall Efficiency
- Facilitates Real-time Tracking and Visibility
- Demand Forecasting and Inventory Management
- Route Optimisation and Fleet Management
- Risk Mitigation and Supply Chain Resilience
- Continuous Improvement and Cost Reduction
- Maintaining Assets
One of the key benefits of big data analytics in logistics optimisation is the ability to track and monitor shipments in real time. Big data gathers and analyses data from multiple sources such as GPS trackers, sensors, and RFID tags and provides valuable insights into the location, condition, and status of goods. This real-time visibility facilitates proactive decision-making, allowing companies to address any potential issues promptly and ensure timely delivery.
Big data analytics empowers logistics professionals to make accurate demand forecasts and optimise inventory levels. Big data analyses historical sales data, market trends, and customer behaviour to predict future demand patterns. These forecasts help businesses adjust their inventory accordingly, reducing stockouts, minimising excess inventory, and ultimately enhancing the customer experience.
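As a toy illustration of the forecasting idea, the sketch below predicts next week's demand as a moving average of recent sales and applies a simple reorder rule. The sales figures and the rule are made-up assumptions; production systems would use far richer models:

```python
def moving_average_forecast(history: list, window: int = 3) -> float:
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_units = [120, 135, 128, 150, 160, 155]  # made-up historical sales
forecast = moving_average_forecast(weekly_units)
print(f"Forecast for next week: {forecast:.0f} units")

# Illustrative reorder rule: hold roughly two forecast-weeks of stock.
on_hand = 180
if on_hand < forecast * 2:
    print("Reorder suggested to avoid a stockout.")
```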
For logistics firms, effective route mapping holds paramount importance in cutting transportation expenses and enhancing delivery timelines. Leveraging big data analytics empowers businesses to scrutinise elements like traffic flow, weather forecasts, and past data for route and schedule optimisation. Pinpointing the most optimal routes enables companies to curtail fuel usage, diminish carbon footprints, and elevate the management of their entire fleet.
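At its core, route optimisation is a shortest-path computation over a network whose edge weights encode cost (here, travel minutes). Below is a minimal Dijkstra sketch over a hypothetical depot network; real systems would update the weights continuously from traffic and weather feeds:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts graph of travel times (minutes)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical stop network; edge weights are travel minutes.
network = {
    "depot": {"A": 15, "B": 25},
    "A": {"B": 8, "C": 22},
    "B": {"C": 10},
    "C": {},
}
minutes, route = shortest_route(network, "depot", "C")
print(f"{' -> '.join(route)} in {minutes} min")  # depot -> A -> B -> C in 33 min
```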
Big data analytics plays a vital role in identifying and mitigating risks in logistics operations. It analyses data from multiple sources such as weather forecasts, supplier performance, historical data, and more. These insights enable logistics teams to identify potential disruptions and take preventive measures. They also make the supply chain resilient by minimising the impact of unforeseen events, such as natural disasters or supplier delays.
Big data analytics facilitates continuous tracking and analysis of operations, helping logistics professionals identify areas for improvement. Analysing data on factors such as transportation costs, warehouse efficiency, and order fulfilment helps businesses pinpoint bottlenecks and inefficiencies, implement targeted improvements, streamline processes, and reduce costs, ultimately enhancing profitability.
Big data analytics can be leveraged to predict equipment failure and maintenance needs. It helps reduce equipment downtime, enhances equipment utilisation, and extends the lifespan of assets.
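A common first step toward failure prediction is flagging sensor readings that drift sharply from their recent baseline. The sketch below applies a rolling z-score to simulated vibration data; the window and threshold are illustrative assumptions, and real deployments would train models on labelled failure history:

```python
import statistics

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the rolling baseline (z-score test)."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
        if abs(readings[i] - mean) / stdev > threshold:
            alerts.append((i, readings[i]))
    return alerts

vibration = [1.0 + 0.02 * (i % 5) for i in range(60)] + [2.5]  # spike at the end
for index, value in flag_anomalies(vibration):
    print(f"Reading {index}: {value} -- schedule an inspection")
```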
Key Technologies Used in Big Data Analytics:
Machine Learning (ML):
ML algorithms can identify patterns and trends in logistics elements such as delivery times, transportation routes, inventory levels, and more. The analysis of these factors helps logistics teams to optimise operations, boost supply chain visibility, and streamline data-driven decisions.
Artificial Intelligence (AI):
AI bots automate processes such as logistics scheduling and routing. With the help of predictive analysis, AI enables logistics professionals to anticipate potential risks and mitigate them.
Internet of Things (IoT):
IoT sensors are deployed in logistics management to gather data on shipments, vehicles, and warehouse operations. This information can be analysed to optimise supply chain visibility and improve decision-making. IoT sensors can be used for tracking inventory levels, monitoring the condition of products in transit, optimising warehouse space, etc.
Big data analytics has revolutionised the logistics industry by providing businesses with valuable insights and tools to optimise their operations. From real-time tracking and visibility to demand forecasting and route optimisation, the role of big data analytics in logistics management cannot be overstated. By harnessing the power of data, businesses can enhance efficiency, reduce costs, and improve customer satisfaction, ultimately gaining a competitive edge in the dynamic world of logistics. | <urn:uuid:caff56ce-5c8d-427b-b703-6ce97592029e> | CC-MAIN-2024-38 | https://www.infosysbpm.com/blogs/retail-cpg-logistics/the-role-of-big-data-analytics-in-logistics-optimization.html | 2024-09-09T09:40:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00686.warc.gz | en | 0.907652 | 932 | 2.640625 | 3 |
ITIL service knowledge management system
Apart from solution articles, other details like user and technician data, analysis from reports, and monitored metrics fall under the umbrella of "knowledge." This knowledge in the service desk usually lacks structure and is hard to document, and preserving its integrity in a database is also challenging. These challenges are addressed by a service knowledge management system (SKMS).
The SKMS is the central repository of the data, information, and knowledge that the IT organization needs to deliver services. The SKMS stores, manages, updates, and presents all the information an IT service provider needs to manage the full life cycle of IT services.
A configuration management system (CMS) and a configuration management database (CMDB) are also part of the SKMS. The CMS contains all configuration items (CIs) within the service desk along with the relationships between those items. The CMDB contains all configuration data related to an organization's IT infrastructure.
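To make the CMDB idea concrete, here is an illustrative (not ITIL-prescribed) sketch of configuration items with "depends on" relationships, plus the kind of impact query this structure makes possible:

```python
# Illustrative configuration items and their "depends on" relationships.
cmdb = {
    "email-service": {"type": "service", "depends_on": ["mail-server-01"]},
    "mail-server-01": {"type": "server", "depends_on": ["rack-07", "switch-02"]},
    "rack-07": {"type": "rack", "depends_on": []},
    "switch-02": {"type": "network", "depends_on": []},
}

def impacted_by(ci: str) -> set:
    """Return every CI that directly or indirectly depends on `ci`."""
    impacted = set()
    for name, item in cmdb.items():
        if ci in item["depends_on"]:
            impacted.add(name)
            impacted |= impacted_by(name)
    return impacted

print(impacted_by("rack-07"))  # {'mail-server-01', 'email-service'}
```

A query like this is what lets an incident ticket show, at a glance, which services a failing rack puts at risk.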
The SKMS represents all the knowledge within the service desk, helping tie together all the information needed to deliver a service. It makes it easier for technicians and end users to access information they need when they need it.
The SKMS is virtually divided into four distinct layers. These four layers are built into most service desks.
Data and information layer:
This layer of SKMS focuses on storing all the data in an IT service desk. This layer houses various databases such as the CMDB and the known error database (KEDB). This layer also includes other configuration, management, and audit tools and applications.
Information integration layer:
This layer integrates the data entered into the service desk. From the data stored in the data and information layer, this layer derives structure and relationships. For instance, the CIs and their relationships with other CIs can be accessed from various parts of the service desk, such as an incident ticket.
Knowledge processing layer:
The knowledge base exists in this layer, where any knowledge required can be accessed. The knowledge processing layer acts as an interface for the user to analyze and obtain reports on all the information present.
Presentation layer:
All the knowledge gathered and analyzed is presented to the end users in this presentation layer. For example, the self-service portal can act as the presentation layer. Knowledge is structured in an accessible format for ease of access and use. With visual materials and other features like browse and search buttons, this layer helps users get all the knowledge they need.
AI is rapidly transforming the healthcare landscape. Its remarkable ability to analyze vast datasets and uncover hidden patterns has opened new frontiers in disease diagnosis, treatment optimization, and patient management.
While implementing AI in healthcare presents special considerations and challenges, its successful adoption has the potential to revolutionize the field. From early disease detection to personalized treatment plans, AI is on track to become an essential tool for healthcare professionals.
Unleashing AI’s Potential in Healthcare: Benefits and Use Cases
AI offers a wide range of applications that enhance patient care, streamline processes, and drive innovation. Here are some of AI’s top benefits and use cases in the healthcare industry.
Enhancing Patient Care with AI
AI’s potential to improve patient care is immense. By analyzing patient data, AI can identify trends, predict health outcomes, and personalize treatment plans, particularly for chronic diseases. For example, AI algorithms can analyze medical images, such as X-rays and MRIs, to detect early signs of cancer, often with greater accuracy than human radiologists. This early detection allows for timely intervention, leading to improved treatment outcomes and increased survival rates.
It’s important to emphasize that AI is a tool designed to complement, not replace, human judgment. The most effective approach involves integrating AI-generated insights with the expertise of healthcare professionals to create a comprehensive and informed decision-making process. This collaboration ensures that clinical experience remains at the heart of patient care, mitigating the risk of over-reliance on technology.
AI in Drug Discovery and Development
Beyond patient care, AI is revolutionizing the way new drugs are discovered and developed. By analyzing vast datasets, AI algorithms can rapidly identify potential drug candidates and predict their efficacy. This significantly accelerates the research and development timeline, potentially bringing new treatments to market faster and at a lower cost. Additionally, AI can optimize clinical trials by analyzing patient data to identify suitable candidates, improving the efficiency and success rates of these trials.
The application of AI in drug discovery showcases its potential to transform healthcare on a global scale. By providing more accurate and efficient methods for identifying new therapies, AI can address unmet medical needs and improve health outcomes for patients worldwide.
AI in Population Health Management
AI offers powerful tools for managing the health of entire populations. By analyzing data from various sources, including electronic health records, social determinants of health, and environmental factors, AI can identify patterns and predict disease outbreaks. This enables healthcare providers and public health officials to take proactive measures, such as targeted interventions and resource allocation, to prevent and manage health crises.
For instance, AI can analyze environmental data, socioeconomic factors, and healthcare access to pinpoint communities at higher risk for specific diseases. This information can guide public health initiatives, ensuring that resources are directed where they are needed most and that vulnerable populations receive adequate care.
AI and Wearable Technology
Wearable technology, such as fitness trackers and smartwatches, is generating an abundance of health data. AI can leverage this data to provide personalized health insights and recommendations. Wearables can continuously monitor vital signs, activity levels, and sleep patterns, alerting users to potential health issues before they escalate. This continuous monitoring is particularly beneficial for managing chronic conditions, offering real-time feedback and early warnings to both patients and healthcare providers.
AI-powered apps can analyze data from wearables to identify trends and offer personalized suggestions for improving health. For example, an AI-driven app could analyze sleep patterns and provide recommendations for improving sleep quality, ultimately enhancing overall health and well-being. The combination of wearables and AI empowers individuals to take a more active role in managing their health and making informed decisions about their lifestyle choices.
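As a toy example of the kind of analysis involved, the sketch below compares recent resting heart rate against a personal baseline and raises an alert when it stays elevated. All thresholds and data are illustrative assumptions, not clinical guidance:

```python
def resting_hr_alert(daily_resting_hr, baseline_days=14, rise_bpm=8, sustained_days=3):
    """Alert when resting heart rate stays well above the personal baseline."""
    baseline = sum(daily_resting_hr[:baseline_days]) / baseline_days
    recent = daily_resting_hr[-sustained_days:]
    if all(hr > baseline + rise_bpm for hr in recent):
        return f"Resting HR elevated for {sustained_days} days (baseline {baseline:.0f} bpm)"
    return None

history = [58, 60, 59, 57, 61, 58, 59, 60, 58, 57, 59, 60, 58, 59, 70, 71, 69]
alert = resting_hr_alert(history)
if alert:
    print(alert, "- consider discussing this with your provider")
```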
Key Considerations of Implementing AI in Healthcare
While the potential benefits of AI in healthcare are vast, its implementation requires careful consideration of various challenges and ethical concerns. Balancing innovation with responsible use is crucial to ensure that AI serves the best interests of patients and the healthcare system as a whole.
Ethical Considerations in AI Implementation
The ethical implications of AI in healthcare are significant. Ensuring data integrity is paramount. The potential misuse of data, such as data poisoning (where corrupted data leads to inaccurate diagnoses and treatments), poses a serious risk. Even minor inaccuracies can have severe consequences for patient outcomes, making it imperative to maintain the accuracy and reliability of data used in AI systems.
Algorithmic bias, which can result in unequal treatment of patients based on race, gender, or other factors, is another critical concern. Rigorous testing and validation of AI algorithms are essential to mitigate these biases and guarantee fair and equitable treatment for all individuals.
Security Concerns and the Role of CAIOs
Patient data privacy and security are paramount in healthcare. Any breach can erode trust in AI technologies and have far-reaching consequences. To safeguard sensitive patient information, robust data governance frameworks are essential. This includes implementing stringent security measures, adhering to data protection regulations (such as HIPAA in the U.S.), and ensuring that AI applications are used equitably across all demographic groups.
Integrating AI into existing healthcare workflows can be complex. While cloud-based solutions offer flexibility and scalability, many organizations prefer on-premises infrastructure due to concerns about data security and regulatory compliance. Striking a balance between innovation and security is crucial for the successful adoption of AI.
Chief AI Officers (CAIOs) play a pivotal role in mitigating AI security concerns. CAIOs manage data strategies, ensure data privacy, and develop synthetic data capabilities for training AI models. They bridge the gap between the technical and clinical aspects of AI, ensuring that technologies are practical, compliant, and ultimately beneficial for patient care. CAIOs also focus on building robust data infrastructures, including data lakes for storing and organizing vast amounts of patient data, which form the foundation for developing meaningful AI applications.
The Importance of Interdisciplinary Collaboration
Successful AI implementation in healthcare requires collaboration among diverse disciplines, including medicine, data science, ethics, and law. Interdisciplinary teams can proactively address potential issues and ensure that AI systems are not only technologically advanced but also ethically sound, transparent, and aligned with patient needs. Involving ethicists and legal experts early in the development process helps identify and mitigate ethical and legal risks, fostering responsible AI development.
The Importance of Explainability in AI
Explainability is a fundamental aspect of AI in healthcare. Complex AI models often operate as “black boxes,” making it difficult to understand their decision-making processes. This lack of transparency can hinder trust and acceptance among healthcare providers and patients. Developing AI systems that provide clear and understandable explanations for their decisions is crucial for building confidence and ensuring that AI is used responsibly. Explainability also helps identify and address potential biases in AI systems, ensuring recommendations are based on sound reasoning and accurate data.
The AI-Powered Future of Medicine
AI holds the promise of revolutionizing healthcare, offering improved diagnostics, personalized treatments, and enhanced patient care. However, realizing this potential requires addressing significant challenges, including data privacy, ethical considerations, and integration into existing workflows. Robust data orchestration and governance frameworks are essential to ensure that AI is developed and implemented responsibly.
As we continue to explore and innovate, the collaboration between technology and healthcare professionals will be key to unlocking AI’s full potential. By working together, we can harness the power of AI to transform healthcare and improve patient outcomes. For more insights and updates on AI in healthcare, visit our dedicated website at www.connection.com/helix. | <urn:uuid:d65a6d57-ece3-45dc-aefc-f9513708bddb> | CC-MAIN-2024-38 | https://community.connection.com/ai-in-healthcare-a-revolution-in-progress/ | 2024-09-11T21:21:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00486.warc.gz | en | 0.930129 | 1,549 | 2.9375 | 3 |
Your Life Is the Attack Surface: The Risks of IoT
To protect yourself, you must know where you're vulnerable — and these tips can help.
Today, there are more connected devices than humans. The unprecedented growth of connected devices has created innumerable new threats for organizations, manufacturers, and consumers, while at the same time creating opportunities for hackers. The world has seen the risks of this firsthand: Internet of Things (IoT) devices now constitute the largest-scale botnets, able to take down major websites like Twitter, GitHub, and the PlayStation Network. The many ways a hacker could access connected devices and their data are apparent and quite disconcerting. The first step to protecting yourself is knowing where you're vulnerable.
Connected Devices as the Fastest-Growing Attack Surface
A growing number of households now have an IoT hub — be it Echo or Google Home — a device that takes the place of or attaches to your wireless router and has permissions to do things on your behalf. One of the most immediate security concerns comes with this permission. If your device is set up to purchase things on your behalf, there is nothing to stop someone else within the microphone's listening range, even a voice on your TV or radio, from commanding "Alexa" to buy something for you.
This issue extends to other personal devices as well. For home security cameras, it might be backing up or storing video images. For health tracking devices, it's personal health data such as heart rate, pulse, diet, etc. An Internet-connected stuffed animal was recently found to have exposed more than 2 million voice recordings of children and parents, as well as e-mail addresses and password data for more than 800,000 accounts. In other words, this seemingly innocuous data is highly personal on the individual level and therefore a great risk to individual security.
The Role of Policy and Defenders
Thus far, IoT has gone unregulated and largely unsecured, and given the rapid growth of IoT devices it's no surprise that these devices represent a major and growing threat — and a major opportunity for adversaries. The sheer number and variety of devices being networked and connected to cloud interfaces and Internet-facing APIs is one of the greatest challenges in security today. Each device has its own set of technologies, and thus its own set of security vulnerabilities. Additionally, some of these industries have never dealt with Internet-facing devices before, and their development staff is just not trained in the ways of web application security. High pressure, low awareness, and the absence of a governing body to police the market have resulted in an increase in attacks on these devices. That's why it's becoming imperative to implement global security standards.
Before the industry really starts inking policy, however, we'll continue to rely on hackers to identify vulnerabilities and ultimately improve the way the industry addresses potential risks. This group will be essential for improving the security maturity of the market and ensuring the implementation of security controls for IoT devices, such as toys, thermostats, and even smart cars, which provides a fascinating breeding ground for best practices.
How to Prevent Cyberattacks
There is a lot of work to do for manufacturers, policymakers, researchers, legislators, and companies that are releasing IoT devices, identifying risks, and creating regulations. And unfortunately, IoT extends far beyond household gadgets. From your car to your pacemaker and your Fitbit, any device that connects to the Internet is a potential attack surface.
While the broader security industry addresses these issues, how can you personally prevent cyberattacks in your own digital life?
Research your device before purchase: For any device you're considering buying that's connected to the Internet, determine whether the vendor is paying attention to security. Does it have security notes online? Has it had any security research directed at it before, and if so, has it responded well to that research? Use the answers to make a decision about which device to purchase. Amazon reviews and Better Business Bureau reports can be great indicators here.
Use strong Wi-Fi encryption: Securing your Wi-Fi at home goes beyond plugging it in and setting a password. The choices for encryption standards typically can be found on vendors' websites, so if you're unsure, it's a good idea to do some due diligence before choosing one. Implementing the most advanced encryption that your router can support (usually called WPA) is the difference between offering someone easy access to your home network and being secure.
Check the device for additional security configurations: While updating the device regularly will help avoid unnecessary breaches, it's also a good idea to ensure additional security configurations are in place if available. To find these, log in to the control panel of the device. In the settings section, there will often be additional controls. They can be cumbersome to set up but useful to keep you secure.
Disable features not being used: These features will vary by device, but an example would be your laptop's webcam, which could be a threat if it's not disabled or obscured, especially in light of numerous well-documented attacks. Being aware of all enabled features is a great way as a consumer to protect yourself against IoT hacks and malicious actors accessing your personal devices on your network or other places you use devices.
The Future of IoT Security
From the takedown of Dyn to the distributed denial-of-service attack on Brian Krebs' website, the industry has learned some major lessons around IoT security in the past few years. This is causing standards to be created that will help reduce risks. However, change takes time. IoT security is in the standards phase right now, which means that legislators haven't yet prescribed specific policies around what security devices need to have in place for manufacturers to ship them. Given this, consumers must take personal action and be aware of the risks.
IPA Vs RPA
With the digital revolution in full swing, and companies striving to become more resilient than ever before, automation is gaining momentum. To automate successfully, businesses are turning to technologies like Robotic Process Automation (RPA), Intelligent Process Automation (IPA), and Artificial Intelligence (AI). Because so many comparisons have been drawn between RPA and IPA, businesses are often confused about which to implement for their needs. Here we discuss the key differences between RPA and IPA.
a. What is IPA? What can IPA do?
Intelligent process automation (IPA) is a combination of technologies used to manage and automate digital processes. IPA should increase operational efficiency, worker performance, and the efficiency of responses to customers. It should also reduce operational risk, with the goal of creating an effective operations environment.
Intelligent Process Automation (IPA) can do the following:
- Reduce your business costs.
- Save time for more important tasks.
- Avoid the risk of human error.
- Ensure traceability for analytics and audits.
- Reconcile data from various systems.
- Improve service times for clients.
- Increase end customer satisfaction.
b. What is RPA? What can RPA do?
Robotic process automation (RPA) is the use of computer software ‘robots’ to handle repetitive, rule-based digital tasks such as filling in the same information in multiple places, re-entering data, or copying and pasting.
More broadly, RPA is a software technology that makes it easy to build, deploy, and manage software robots that emulate human actions interacting with digital systems and software.
c. Key Differences
RPA vs IPA:
The difference between Robotic Process Automation and Intelligent Process Automation is that RPA is a component of IPA rather than an alternative to it. RPA is used to perform repetitive tasks with minimal variation whereas IPA (RPA + AI) is used to tackle more complex end-to-end processes.
RPA is rules-based and applies rules stipulated by humans to perform a task, for example, sending an automatic reply to an email. IPA, meanwhile, incorporates AI technologies such as machine learning and so it can perform tasks that require judgement and analysis (a loan approval is a good example), without the need for human intervention. Plus, IPA is capable of handling exceptions and continuously learns from patterns in data to improve performance.
Additionally, RPA can only deal with structured data whereas IPA can handle both structured and unstructured data.
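A toy sketch makes the contrast concrete: the RPA-style handler below applies a fixed, human-written rule to structured data, while the IPA-style handler uses a small trained model (scikit-learn is assumed to be installed) to judge unstructured text. The training examples are made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# RPA-style: a fixed rule over a structured field.
def rpa_route(ticket: dict) -> str:
    return "billing-queue" if ticket["category"] == "billing" else "general-queue"

# IPA-style: a model learns to judge free text (tiny toy training set).
train_texts = ["refund my payment", "invoice is wrong", "app keeps crashing", "login fails"]
train_labels = ["billing-queue", "billing-queue", "tech-queue", "tech-queue"]
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

print(rpa_route({"category": "billing"}))                 # the rule fires
print(model.predict(["wrong payment on my invoice"])[0])  # the model judges new text
```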
Although RPA alone can do wonders for productivity and cutting costs in a short timeframe, IPA’s extended capabilities provide insights into process improvement opportunities across the enterprise enabling you to transform the way you work for long-term success.
d. What is suitable for your business?
Both are suitable. RPA is often the preferable starting point; we are key partners for the UiPath and Automation Anywhere RPA tools.
About the author
Suman Mahapatra is working as Delivery Manager - RPA for DynPro and has 17 years of experience spanning RPA, Testing, Software Development, and Business Operations. He has been instrumental in team management and in delivering RPA projects in the ERU, Healthcare, Banking, and Media sectors across the US, UK, and Australia. You can reach out to him at firstname.lastname@example.org
Environmental Monitoring Systems (EMS) are of critical importance across a variety of industries. At its heart, an EMS is a collection of sensors for detecting environmental conditions. Temperature, humidity, airflow, and water leak sensors are commonly deployed. These sensors are usually connected to a device that monitors the parameters and alerts the operator should a sensor go outside of pre-determined thresholds. Alerts can be sent by e-mail, SNMP traps, or SMS messages.
Data Center Environmental Monitoring
Data Center Environmental Monitoring typically consists of temperature sensors, humidity sensors, and water leak detection. Rack-mounted servers have optimum operating temperatures that improve their reliability and longevity. Heat can kill electrical components quickly, and much of the supporting infrastructure in a computer room is aimed at controlling the environmental conditions. Racks have cooling fans that draw cool air from outside into the cabinet, and the room itself is equipped with air conditioning units and other HVAC equipment. In order to properly control these conditions, data center monitoring systems with temperature and humidity sensors are vital. Raised access flooring is also monitored for water leakage. The sensors are monitored and alerts are generated in the form of SNMP traps, e-mails, SMS, and so on. This allows quick response to potentially catastrophic situations that could lead to permanent damage and downtime of the rack-mounted servers or other electrical equipment. Sensor-controlled relays can also be deployed to switch on additional cooling fans or air conditioning units (ACUs) should high temperature or humidity be detected.
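The monitor-and-alert loop at the heart of such a system is conceptually simple. The sketch below is a minimal illustration: the sensor source is simulated, the thresholds are placeholders, and a print statement stands in for the e-mail, SNMP trap, or SMS transport a real EMS would use:

```python
import random
import time

THRESHOLDS = {"temperature_c": (18.0, 27.0), "humidity_pct": (40.0, 60.0)}

def read_sensors() -> dict:
    """Placeholder for real sensor input (SNMP poll, serial bus, vendor API)."""
    return {"temperature_c": random.uniform(17, 30), "humidity_pct": random.uniform(35, 65)}

def check(readings: dict) -> list:
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not low <= value <= high:
            alerts.append(f"{name}={value:.1f} outside {low}-{high}")
    return alerts

for _ in range(3):  # a real system would loop continuously
    for alert in check(read_sensors()):
        print("ALERT:", alert)  # stand-in for e-mail, SNMP trap, or SMS
    time.sleep(1)
```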
Environmental Monitoring For BTS locations
BTS telecom tower sites contain shelters and cabinets that house sensitive equipment. EMS is employed to monitor these sites for damage caused by high temperatures, failed air conditioning units, and water leaks. The typically remote nature of telecom tower infrastructure and the number of sites involved can make managing the maintenance and uptime of these sites a challenge. An EMS solution designed for monitoring many remote sites is the answer. Hundreds of thousands of telecom tower shelters can be monitored for various environmental and physical threats, with all the data feeding back to regional or centralized remote management software. This allows a smaller number of engineers to monitor and maintain a larger number of sites, respond to alarms and alerts as necessary, and avoid tower downtime caused by faulty equipment.
Environmental Monitoring For Pharmaceutical Industry
Controlling the environment in which pharmaceutical goods are manufactured is an important aspect of GMP. Temperature, humidity, airborne particles, and microbiological contamination all require monitoring and control in any manufacturing plant if they are not to negatively affect the quality of the product being created. In this post, we’ll look at environmental microbiological contamination.
Depending on the nature of the product and the desired administration method, pharmaceutical production techniques might be non-sterile, use aseptic filling, or use terminal sterilization. In all circumstances, however, it is critical to retain control and knowledge of the microbial environment. Contamination of non-sterile or aseptically filled items might cause the medication to lose its efficacy due to microbial metabolism of the API, or have a negative effect on the patient. Prior to terminal sterilization, increased bioburden in medications raises the risk of the sterilization system failing. Remember that sterilization technologies rarely sterilize; instead, they guarantee substantial log reductions in bioburden. As a result, the higher the initial bioburden, the more likely an organism will survive the sterilization process.
Just as you can't simply 'test quality into a product', environmental monitoring strategies and data help make the invisible environment visible, allowing decisions to be made that improve your environmental quality.
Environmental Monitoring Systems From AKCP
AKCP provides a solution for EMS in almost any industrial or commercial application. With a variety of intelligent base units (sensorProbe, securityProbe, and the sensorProbe+ series) and sensors together with central management software (AKCPro Server) a system that meets the demands of the industry can be found. | <urn:uuid:0574e471-0ecd-459d-937f-63b148502f5b> | CC-MAIN-2024-38 | https://www.akcp.com/blog/environmental-monitoring-systems/ | 2024-09-09T11:05:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00786.warc.gz | en | 0.909326 | 802 | 2.859375 | 3 |
RMI and RMI-IIOP
This description compares the two types of remote communication in Java™: Remote Method Invocation (RMI) and RMI-IIOP.
RMI is Java's traditional form of remote communication. It is an object-oriented version of Remote Procedure Call (RPC). It uses the nonstandardized Java Remote Method Protocol (JRMP) to communicate between Java objects. This provides an easy way to distribute objects, but does not allow for interoperability between programming languages.
RMI-IIOP is an extension of traditional Java RMI that uses the IIOP protocol. This protocol allows RMI objects to communicate with CORBA objects. Java programs can therefore interoperate transparently with objects that are written in other programming languages, provided that those objects are CORBA-compliant. Objects can still be exported to traditional RMI (JRMP) and the two protocols can communicate.
A terminology difference exists between the two protocols. In RMI (JRMP), the server objects are called skeletons; in RMI-IIOP, they are called ties. Client objects are called stubs in both protocols.
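To ground the comparison, here is a hedged Java sketch. The remote interface is written identically for both protocols; the difference appears in the base class the server implementation extends and in how clients obtain references. The Greeter interface is a made-up example, and note that the RMI-IIOP classes were removed from the JDK in Java 11.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

/** A remote interface looks the same under RMI (JRMP) and RMI-IIOP:
 *  it extends Remote, and every method declares RemoteException. */
public interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

// The protocol choice surfaces in the implementation and the client:
//
//   JRMP:     the server class typically extends
//             java.rmi.server.UnicastRemoteObject, and the client casts
//             the reference obtained from the RMI registry directly.
//
//   RMI-IIOP: the server class instead extends
//             javax.rmi.PortableRemoteObject (shipped with the JDK
//             through Java 8, removed in Java 11), and the client
//             narrows the reference looked up via JNDI/COSNaming:
//
//             Greeter g = (Greeter) PortableRemoteObject.narrow(
//                     ref, Greeter.class);
```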
Artificial intelligence (AI)
Artificial intelligence (AI) is the theory and development of computer systems that can perform tasks that commonly require human intelligence. AI leverages computing power and approaches such as machine learning (ML) to learn from data how to mimic human reasoning and take on various tasks.
Applications of AI include natural language processing, speech recognition, computer vision, and robotics.
AppDirect AI Marketplace—Pioneering AI Innovation & Monetization
It’s clear that the rise of artificial intelligence (AI) has ushered in a new era of innovation and efficiency. Generative AI-powered bots (a.k.a. companions, co-pilots), in particular, have become indispensable tools for businesses seeking to enhance customer engagement and streamline operations. We've already seen a significant surge in demand from companies eager to create AI bots for internal use while also exploring opportunities to monetize them externally.
Read more about AI on our blog
WHAT IS SCALE-OUT STORAGE?
Scale-out storage is a system built on an infrastructure of expandable storage arrays that can be grown to provide more capacity as an enterprise's needs demand.
Why is scale-out storage necessary?
Scale-out storage is a storage system that supports heavy workloads. Network and memory are scaled within a single cluster, which optimizes functionality. Being able to scale storage gives organizations the flexibility to control costs as their needs expand. In the past, organizations were either deeply restricted by limited storage capacity, forced to purchase excessive amounts of storage for fear of running out of space, or left without the information needed to determine how much storage to purchase.
Scale-out storage ensures that storage scalability can be established quickly and efficiently by simply adding hardware to the existing arrays via network-attached storage (NAS). This expansion occurs through groups of servers forming clustered arrays and sharing files over a given network. The benefit of scale-out storage lies not only in the ability to expand or shrink capacity at any moment, but also in the simplicity of the setup. Increasing capacity is as simple as adding nodes. As continual rapid data growth demands new storage, an enterprise can build capacity onto the hardware in a scale-out architecture as needed.
How is scale-out storage set up?
The infrastructure of scale-out solutions is made up of nodes that form clusters which incrementally combine processing power, RAM, and storage. This setup enables capacity and performance to grow simultaneously. Scale-out storage works with structured and unstructured data; however, its best application is through unstructured data.
Scale-out storage does require some level of ongoing maintenance; it is not simply a place to house data. Data must be distributed deliberately across the nodes from the ground up to prevent bottlenecks on individual nodes. This not only prevents a slow-down in processing speeds but also boosts data-processing efficiency.
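As an illustration of why adding nodes is cheap, the sketch below uses consistent hashing, one common placement technique (an assumption here, since vendors implement placement differently): when a node joins the ring, only a small slice of keys moves to it, so the cluster rebalances incrementally instead of rehashing everything.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

/** Sketch of consistent-hash placement across storage nodes.
 *  Generic technique, not any particular vendor's design. */
public class HashRing {
    private final SortedMap<Long, String> ring = new TreeMap<>();

    /** Add a node; call at least once before looking up keys. */
    public void addNode(String node) {
        // A handful of virtual points per node smooths the distribution.
        for (int i = 0; i < 16; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    /** The node owning a key is the first ring point at or after its hash. */
    public String nodeFor(String key) {
        long h = hash(key);
        SortedMap<Long, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey())
                              : tail.get(tail.firstKey());
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xff);
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Adding a fourth node to a three-node ring moves only about a quarter of the keys, which is the property that makes capacity growth incremental.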
Scale-up vs. scale-out storage: What’s the difference?
The sheer volume of data being processed and analyzed is exponentially greater than it was even just a few years ago. Given that, a storage solution’s ability to maintain and manage incoming data is critical to enterprise success. It is also a deciding factor in which type of data storage systems are required in certain industry settings.
Scale-up storage was the traditional system of block and file storage. This system has two parts: a set of controllers and a shelf of drives. In order to acquire more storage capacity, another shelf of drives needs to be added. The physical architecture of a scale-up storage system requires more time and physical effort to add to.
Due to the rapid uptick in data volume, scale-up storage cannot keep up. It typically struggles with the workload that comes with the rapid increase in data that today's enterprises are processing. Its internal architecture is only as scalable as its separate controllers, which makes the integration and accessibility of new data slower as each new drive is added to the existing server shelves.
There are definite benefits to both scale-up and scale-out storage; however, the two systems offer very different levels of scalability. For instance, when there is a need for more storage space and processing, scale-out storage is far easier to scale.
This is where the usability and capacity of scale-out storage surpasses scale-up storage. Adding storage capacity is seamless, as nodes are added to the cluster. This also directly affects processing capacity, efficiency, and speed, as it is not limited to any single controller the way scale-up storage systems are.
What is the difference between scale-out storage and object storage?
Scale-out network-attached storage and object storage are both scalable solutions for unstructured data storage. Both systems are adaptable for ever-changing business requirements and are two of the most popular storage architectures. While similar when it comes to overall capacity and performance abilities, there are several differences in application settings.
Object storage is best suited for retaining an immense volume of unstructured data. It is also significantly adaptable and scalable. Object storage systems utilize unique identifiers (metadata) that flag files according to tasks and promote correlations. This allows for quicker processing and analysis of data. The capabilities are in storage, archives, backups, and the general management of unstructured data. This is the preferred method of storage for files that need to be accessed but will not need to be changed.
Scale-out NAS is a desirable option when your data storage infrastructure must adapt to constantly changing requirements and needs. A scale-out NAS grows by adding nodes to the existing storage array. With a large part of the workforce transitioning to hybrid settings, there is an increasing need for hybrid file storage. When using cloud-native data management, scale-out NAS supports enterprise storage infrastructure through data accessibility. Within a scale-out architecture, stored data can be easily managed, and performance holds up even as storage scales with growth. In fact, as the system's capacity grows, performance increases with it.
What are the benefits of scale-out storage?
Scalability and Limitless Growth
Since there is no limit to how many nodes or clusters you can add to your server systems, there are also no limits on data processing within your systems. Being able to build onto the existing servers is one of the most desirable elements of implementing this system. This system will balance the workload, and invariably improve the performance of the storage architecture by preventing network bottlenecks through the equal and even distribution of tasks across respective clusters.
Accessibility of Data
Scale-out NAS is best for environments where files need to be edited, accessed, reviewed, or shared by various users. There is minimal lag when locating or retrieving specific data from scale-out storage systems. Such a high level of accessibility is due to scale-out storage systems utilizing metadata to communicate between the nodes.
Through enterprise use of scale-out storage systems, as the system expands, processing speeds and RAM capacity increase. Every time you upgrade your storage infrastructure, the system works faster. It is not overwhelmed by more data; rather, tasks are funneled through the clusters efficiently thanks to the increased availability of nodes. This dual growth between the physical system and its capabilities is hugely beneficial to overall efficiency.
When an enterprise needs varying amounts of storage capacity, scale-out storage is the best solution. Storage capacity, capabilities, and efficiency can be determined on an as-needed basis. If the volume of data approaches current capacity limits, adding storage capacity is as easy as building onto your current system with additional hardware. There are endless options to configuring the existing arrays to expand capacity while simultaneously boosting performance and efficiency.
HPE scale-out storage solutions
HPE data storage solutions deliver optimized business outcomes and unlock the full potential of your data workloads. Data insights, analytics, AI, and deep learning are all readily available to be accessed, developed, and applied in moments.
With a built-in buffer in the storage service offerings, there is no risk of overprovisioning, and no worry about handling an unexpected surge in resource demand.
Your data is one of the most invaluable assets as it drives innovation. Because of this, it needs to be protected, automated, and accessible at all times. Maximize the potential of your data storage infrastructure with HPE GreenLake through intelligent storage. Consumption-based offerings allow for storage capacity and support on an as-needed basis, saving both time and capital.
For data-intensive apps, HPE Scality provides on-premises cloud storage solutions by allocating unstructured data to tiers, arrays, or archives. Enterprise IT can format and build object storage clusters on-site with HPE Scality.
HPE GreenLake services support your IT department by maintaining storage management needs and allowing them to focus on high-value strategic business initiatives.
With HPE as a partner, your data ecosystem has the potential for comprehensive insights and support.
Until now, Artificial Intelligence processing has been a centralized function. It featured massive systems with thousands of processors working in parallel. But researchers have discovered that lower-precision operations work just as well for popular applications like speech and image processing. This opens the door to a new generation of cheap and low-power machine learning chips from companies like BrainChip.
AI is Moving Out of the Core
Machine learning is incredibly challenging, with massive data sets and power-hungry processors. Once a model is trained, it still requires some serious horsepower to churn through the real-time data. Although many devices can perform inferencing, most use dedicated GPU or neural net processing engines. These typically draw considerable power since they’re large and complex, which is why most machine learning (ML) processing started in massive centralized computer systems.
One reason for this complexity is that machine learning processing chips typically use multiple parallel pipelines for data. For example, Nvidia's popular Tesla chips have hundreds or thousands of cores, enabling massive parallelism and performance. Many of these chips are designed as graphics processors (GPUs), so they include components and optimizations that go unused in ML processing. These cards typically draw quite a lot of power and require dedicated cooling and support infrastructure.
Recently, a new generation of ML inferencing processor units has been appearing. Apple’s Neural Engine, which first appeared in the A11 CPU in 2017, is optimized for machine learning in portable devices. The company has improved this ever since, with the latest A14 boasting 11 TOPS of ML performance! Huawei recently introduced their neural processors in the Kirin mobile chip, and we expect others to follow.
Reducing Complexity with BrainChip
But, even these processors aren’t bringing machine learning right to the edge. Internet of Things (IoT) devices must be far cheaper and less complex than flagship mobile phone processors, and they must be capable of running on battery power for months, not days. IoT devices often have to make do with a tiny power budget, and their processors cost pennies to produce.
If ML is to jump from mobile phones to every device, a radical rethink is needed. I recently spoke with chip design company BrainChip, who told me about their super low-power chips. Their Neural Processing Engine (NPE) is designed to be added to popular IoT cores to enable on-chip machine learning. The company claims that its design reduces memory, bandwidth, and operations by 50% or more, resulting in a dramatic reduction in power and transistor count.
BrainChip’s secret is relatively straightforward: They claim that most edge and IoT use cases don’t need even 8-bit precision, so the Akida processor relies on 4-bit operations. This reduces every aspect of the chip design, requiring less memory, bandwidth, and operations. The Akida chip is also designed to be scalable, with the option to add multiple 8-core Neural Processing Units, all communicating over a mesh network.
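To see why 4-bit precision is such a lever, consider the simplest linear quantization scheme, sketched below. This is a generic textbook illustration, not BrainChip's proprietary design: sixteen signed levels replace each 32-bit float, cutting weight memory roughly eightfold once two levels are packed per byte.

```java
/** Minimal linear, symmetric 4-bit quantization sketch.
 *  Generic illustration only, not BrainChip's actual scheme. */
public final class FourBitQuant {

    /** Quantized tensor: 4-bit signed levels plus the shared scale. */
    public record Q4(byte[] levels, float scale) {}

    /** Map floats onto the 16 signed levels -8..7. */
    public static Q4 quantize(float[] w) {
        float max = 1e-9f;
        for (float v : w) max = Math.max(max, Math.abs(v));
        float scale = max / 7f;              // 7 = largest positive level
        byte[] q = new byte[w.length];
        for (int i = 0; i < w.length; i++) {
            int level = Math.round(w[i] / scale);
            q[i] = (byte) Math.max(-8, Math.min(7, level));
        }
        return new Q4(q, scale);             // two levels can pack per byte
    }

    /** Recover approximate floats; the gap versus the originals is the
     *  quantization error that accuracy benchmarks measure. */
    public static float[] dequantize(Q4 q) {
        float[] out = new float[q.levels().length];
        for (int i = 0; i < out.length; i++) {
            out[i] = q.levels()[i] * q.scale();
        }
        return out;
    }
}
```

The accuracy results quoted below are essentially measurements of how little that round-trip error matters for workloads like keyword spotting and image classification.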
ML For IoT
This sounds great, but does it actually work?
IoT devices don’t typically perform complex or critical operations. They have simple jobs to do, including speech recognition, object detection, and anomaly detection. In their tests, BrainChip found that 4-bit precision was almost as accurate as 32-bit in Google Speech, ImageNet, Visual Wake Word, and similar processing tasks. It also reduced memory usage in a medium-scale CNN from 30 MB to under 10 MB, compared to a generic DLA (deep learning accelerator).
This is perhaps the most exciting aspect of BrainChip’s presentation. If low-precision ML processors can reduce power usage and complexity, they enable machine learning to move into every device. This reduction in complexity also reduces cost since it can use a larger and cheaper process node. There is no need to use a fancy 5nm process like Apple if the chip uses orders of magnitude fewer transistors for processing and memory.
Many people are scared of ‘AI everywhere’, but those of us in the industry know that these devices operate more like ants than killer robots. Cheap low-power chips bring simple speech and image processing to every device, opening new doors for convenience and productivity. Plus, consumers will love the new world of appliances, lightbulbs, and door handles that recognize their faces or voices. BrainChip’s 4-bit breakthrough makes all this possible!
Niederösterreichisches Landesarchiv secures document preservation for posterity through digitisation
Protecting irreplaceable historic records while transforming operational efficiency.
Conserving one of Austria’s largest public archives
Located in St. Pölten, the Landesarchiv stores records produced by the administrative bodies and courts that have been and are currently active in Lower Austria. Extensive holdings, which date back to the Middle Ages and the time of Emperor Frederick Barbarossa, include manuscripts, administrative and court records, photographs, plans and cadastral items as well as other sensitive materials that are to be preserved for posterity.
“All citizens have a right to inspect records, and we very often attract researchers, historians, and government officials seeking clarification of specific events,” said Dr. Stefan Eminger, Head of Contemporary History at Niederösterreichisches Landesarchiv. That demands not only a data-protected environment but also careful handling of the documents in terms of conservation.
Taking charge of history
The archive’s principal duty is to preserve all records of state administration worthy of archiving, including physically endangered and very old documents that should remain untouched and safe from harmful environmental factors like damp and dust.
Dr. Eminger added: “The best way to store an asset would be if no one had access to it, but that misses the point of an archive because our goal is to make historical documents accessible to everyone. Our role as an archive means we must meet both objectives and through our restoration department ensure the originals remain in good condition.”
With that goal in mind, as part of a large international digitisation project the archive set out to convert official documents from the Holocaust and the Nazi regime, many of which were fragile paper files and handwritten notes, thereby improving access for researchers, relatives of victims, and other interested parties around the world.
Significant Leap Forward on Path Toward Quantum Internet
(DigitalJournal) The path towards a quantum Internet has taken a significant leap forward as scientists demonstrated a new method for transmitting quantum information across long distances.
The quantum Internet is a concept for a new kind of network built on quantum computers. Where the traditional Internet operates by exchanging binary signals in data packets, the systems forming a quantum Internet would instead send information to each other via quantum signals.
One area hampering progress is stability. To achieve a quantum Internet, light signals need to be able to interact with electron spins inside distant computers. New research from Osaka University achieves this by using laser light to transmit quantum information to a quantum dot, changing the spin state of a single electron.
The research has been published in the journal Nature Communications, in a paper called “Angular momentum transfer from photon polarization to an electron spin in a gate-defined quantum dot.”
In the ongoing aftermath of the COVID-19 pandemic, a new and concerning phenomenon has emerged: post-COVID-19 pulmonary fibrosis. This condition, which involves the scarring and thickening of lung tissue, bears a striking resemblance to idiopathic pulmonary fibrosis (IPF), a chronic and often fatal lung disease. What makes post-COVID-19 fibrosis particularly troubling is its apparent ability to develop in individuals who previously had no underlying lung conditions, or in those with preexisting interstitial lung disease (ILD), exacerbating their condition.
| Medical Concept | Simplified Explanation | Relevant Details | Examples |
|---|---|---|---|
| Pulmonary Fibrosis | A condition where the lung tissue becomes thickened and scarred, making it hard to breathe. | It can be caused by various factors, including infections like COVID-19. | Difficulty in breathing, dry cough, and fatigue are common symptoms. |
| Idiopathic Pulmonary Fibrosis (IPF) | A type of lung disease where the cause of lung scarring is unknown. | “Idiopathic” means the cause is not known. It leads to stiff, thick lung tissue, making it difficult for oxygen to pass through. | Patients may need oxygen therapy or lung transplants as the disease progresses. |
| Interstitial Lung Disease (ILD) | A group of diseases that cause scarring and inflammation of the lung tissue. | Includes diseases like IPF and those related to connective tissue disorders. | Exposure to toxins, infections, or autoimmune diseases can cause ILD. |
| Small Airway Disease | A condition affecting the tiny airways in the lungs, leading to inflammation and narrowing. | Often occurs early in lung diseases like IPF and can result in breathing difficulties. | Smoking, viral infections, and pollution are common causes. |
| ATP12A | A protein involved in maintaining the pH balance in the lungs, which can affect mucus production. | Overexpression of ATP12A is linked to increased mucus production and lung scarring. | Blocking ATP12A could be a potential treatment to reduce lung damage. |
| Mucus Accumulation | Buildup of mucus in the airways, which can block airflow and lead to infection. | In conditions like COVID-19, excessive mucus can worsen lung function. | Similar mucus buildup occurs in cystic fibrosis, leading to breathing problems. |
| Goblet Cells | Cells in the lungs that produce mucus to trap and remove particles from the airways. | Overactive goblet cells can lead to excessive mucus, contributing to airway obstruction. | Increased goblet cell activity is seen in conditions like asthma and chronic bronchitis. |
| In Situ Hybridization | A lab technique used to detect specific RNA or DNA sequences in tissue samples. | Helps in visualizing where certain genes, like ATP12A, are being expressed in the lungs. | Used to study gene expression in diseases like cancer and genetic disorders. |
| Fluorescence Immunohistochemistry | A method used to visualize proteins in tissue samples using fluorescent dyes. | Allows scientists to see the presence and location of proteins like ATP12A in lung tissues. | Commonly used in research to study diseases like cancer and autoimmune conditions. |
| MUC5AC and MUC5B | Types of mucus proteins produced in the lungs. | Overproduction of these proteins can contribute to airway obstruction and fibrosis. | MUC5AC is often elevated in asthma, while MUC5B is linked to pulmonary fibrosis. |
| COVID-19 Pulmonary Fibrosis | Scarring of lung tissue following a severe COVID-19 infection. | COVID-19 can cause lasting damage to the lungs, leading to fibrosis even after the infection has cleared. | Patients may experience long-term breathing difficulties and reduced lung function post-recovery. |
| ECMO (Extracorporeal Membrane Oxygenation) | A life-support machine that takes over the function of the heart and lungs. | Used in severe cases where the lungs cannot provide enough oxygen to the body on their own. | Commonly used for critically ill COVID-19 patients. |
| SARS-CoV-2 | The virus responsible for COVID-19. | It can cause severe respiratory illness, leading to complications like pneumonia and pulmonary fibrosis. | The virus primarily spreads through respiratory droplets. |
| RNA Sequencing | A technique used to study the complete set of RNA in a cell or tissue sample. | Helps in understanding which genes are active and how they contribute to diseases. | Used in research to study everything from cancer to infectious diseases like COVID-19. |
| Mice Models in Research | Laboratory mice used to study human diseases and test potential treatments. | Mice share many biological similarities with humans, making them useful in medical research. | Often used to study the effects of drugs, vaccines, and disease progression. |
| Proton Pump Blockers | Medications that reduce the production of acid in the stomach, and can also affect lung pH. | In this context, they may help reduce lung damage by blocking ATP12A activity. | Commonly prescribed for acid reflux, but research is exploring their use in lung diseases. |
| Bleomycin Mouse Model | A research model where mice are given bleomycin to induce lung fibrosis, mimicking human conditions like IPF. | Helps researchers study the development of fibrosis and test potential treatments. | Used to test the effects of ATP12A inhibitors in reducing fibrosis. |
| Post-COVID Complications | Health issues that arise after recovering from COVID-19, including lung damage and fibrosis. | These complications can be long-lasting and require ongoing medical care. | Other post-COVID complications include heart issues, fatigue, and cognitive problems (“long COVID”). |
The Role of Small Airway Injury and Loss
Small airway injury and loss are now recognized as potential early events in the development of fibroproliferative diseases such as IPF. In the context of COVID-19, small airway disease has been identified as a significant feature, not just during the acute phase of infection but also in the long-term sequelae observed in survivors. Recent research has highlighted the upregulation of ATP12A, an α-subunit of the nongastric H+, K+-ATPase that plays a crucial role in acidifying the airway surface liquid in the distal small airways of the lungs. This upregulation has been observed in the explanted lungs of patients with IPF, suggesting a potential link between ATP12A expression and the pathogenesis of lung fibrosis.
In a study that sought to investigate this connection further, researchers focused on ATP12A expression in lung samples from patients with post-COVID-19 pulmonary fibrosis. The results of this investigation have provided new insights into the mechanisms underlying the development of post-COVID-19 fibrosis and suggest possible therapeutic interventions.
Patient Characteristics and Study Design
The study involved lung explant samples from 18 subjects who underwent lung transplantation. Of these, 11 had non-COVID-19-related fibrotic ILD (including eight with IPF and three with fibrosis associated with connective tissue disease), while seven had post-COVID-19 pulmonary fibrosis. The researchers also included control samples from donor lungs unsuitable for transplantation or lobectomy samples. The clinical characteristics of the patients, including age, sex, race, and pre-transplant pulmonary function, were documented, providing a comprehensive overview of the study population.
To assess ATP12A protein and RNA expression, the researchers used fluorescence immunohistochemistry and in situ hybridization, respectively. They also examined the expression of mucin proteins MUC5AC and MUC5B, which are associated with mucus production in the airways.
Findings: ATP12A Upregulation in Post-COVID-19 Fibrosis
The study found that ATP12A expression was significantly upregulated in the small airways of patients with post-COVID-19 pulmonary fibrosis compared to those with non-COVID-19-related fibrotic ILD. This finding suggests that ATP12A overexpression may play a key role in the fibroproliferative process observed in post-COVID-19 lungs.
One of the critical aspects of this study was the correlation between ATP12A expression and distal airway mucus obstruction. The researchers demonstrated that higher ATP12A expression was associated with increased mucus viscosity and accumulation, which are hallmarks of both IPF and post-COVID-19 pulmonary fibrosis. This relationship was further supported by the analysis of single-cell RNA sequencing data from post-COVID-19 lungs, which revealed that ATP12A is overexpressed in several airway cell types, including goblet cells, which are known for their role in mucus production.
Mechanisms of Mucus Accumulation and Airway Obstruction
The accumulation of mucus in the airways can lead to significant clinical complications, including airway obstruction and an increased risk of bacterial infections. The study’s findings suggest that ATP12A overexpression contributes to mucus accumulation through both pH-dependent and pH-independent mechanisms. This dual mechanism of action makes ATP12A a particularly intriguing target for therapeutic intervention.
In the context of COVID-19, excessive mucus production and accumulation have been well-documented. The study draws parallels between the thick, viscous respiratory mucus observed in COVID-19 patients and the mucus found in cystic fibrosis (CF) patients. Notably, ATP12A has also been shown to be overexpressed in the airways of CF patients, further supporting the hypothesis that ATP12A plays a critical role in mucus-related pathologies.
Animal Models: Evidence from SARS-CoV-2 Infected Mice
To further investigate the role of ATP12A in post-COVID-19 fibrosis, the researchers conducted experiments using BALB/c mice infected with a mouse-adapted strain of SARS-CoV-2. The lungs of these mice were examined 21 days post-infection, and the researchers observed significant upregulation of ATP12A in the small airways. This upregulation was associated with increased mucus accumulation, mirroring the findings in human post-COVID-19 lungs.
These animal model results provide compelling evidence that SARS-CoV-2 infection can induce ATP12A expression in vivo, leading to mucus accumulation and potentially contributing to the development of pulmonary fibrosis. The study also found that ATP12A induction began as early as four days post-infection, suggesting that early intervention could be crucial in preventing the progression of fibrosis.
Therapeutic Implications: Inhibition of ATP12A as a Potential Treatment Strategy
Given the role of ATP12A in promoting mucus accumulation and fibrosis, the study explored the potential of targeting ATP12A as a therapeutic strategy. Previous research has shown that inhibition of ATP12A with a potassium-competitive proton pump blocker, such as vonoprazan, can block the augmentative effects of ATP12A-mediated expression in fibrosis models. This finding suggests that ATP12A inhibitors could be a promising avenue for treating post-COVID-19 pulmonary fibrosis.
Moreover, the study’s findings underscore the importance of early detection and intervention in patients at risk of developing post-COVID-19 fibrosis. By identifying those with elevated ATP12A expression, clinicians could potentially intervene with targeted therapies before significant fibrosis develops, improving patient outcomes.
A Path Forward for Post-COVID-19 Pulmonary Fibrosis
The study of ATP12A expression in post-COVID-19 pulmonary fibrosis represents a significant step forward in understanding the mechanisms underlying this condition. The upregulation of ATP12A in small airways, its association with mucus accumulation, and the potential for therapeutic intervention provide new insights into the pathogenesis and treatment of post-COVID-19 fibrosis.
As the world continues to grapple with the long-term effects of the COVID-19 pandemic, understanding and addressing conditions like post-COVID-19 pulmonary fibrosis will be critical. The findings from this study suggest that targeting ATP12A could offer a novel and effective approach to managing this challenging condition, offering hope to the many individuals affected by the long-term consequences of COVID-19.
Further research is needed to fully elucidate the role of ATP12A in pulmonary fibrosis and to develop effective therapies that can mitigate the impact of this condition. However, the current evidence provides a strong foundation for future investigations and underscores the importance of continued research in this area.
In summary, the increased expression of ATP12A in small airway epithelia of post-COVID-19 pulmonary fibrosis patients highlights a potentially critical pathway in the development of fibrosis. By targeting ATP12A, there is potential to reduce mucus accumulation, prevent airway obstruction, and ultimately slow or halt the progression of pulmonary fibrosis in COVID-19 survivors. This study represents an important step in the ongoing effort to understand and treat the long-term pulmonary consequences of COVID-19.
Resource: https://www.atsjournals.org/doi/10.1165/rcmb.2023-0419LE
In an era defined by rapid technological advancement and digital transformation, two technological forces have emerged as pivotal in reshaping the landscape of modern businesses: Artificial Intelligence (AI) and Robotic Process Automation (RPA). These technologies, once considered separate domains, are now converging, creating a synergy that is driving unprecedented levels of innovation and efficiency in the corporate world.
AI, with its ability to mimic human intelligence and decision-making, has revolutionized the way businesses interact with data and extract insights. It has opened doors to predictive analytics, intelligent decision-making, and personalized customer experiences. On the other hand, RPA has made significant strides in automating routine and repetitive tasks, allowing businesses to streamline operations, reduce errors, and free up valuable human resources for more strategic initiatives.
As we delve into this discussion, it’s crucial to understand how the integration of AI and RPA is not just an enhancement of capabilities but a transformative movement in the business world. This combination, often referred to as Intelligent Automation, is redefining what automation means – turning it from a tool for cost reduction into a driver of innovation and strategic growth. In this article, we will explore how AI and RPA are working together, the innovations they are fostering, and the impact they are having across various industry sectors.
Understanding AI and RPA
In the dynamic world of technology, AI and RPA stand out as two of the most influential developments. Understanding how AI and RPA work together, and how they differ, will help readers appreciate their combined power.
Artificial Intelligence refers to the simulation of human intelligence in machines. These AI systems are designed to think like humans and mimic their actions, encompassing learning, reasoning, problem-solving, perception, and language understanding.
Capabilities of AI
- Learning and Adaptation: AI systems can learn from data and experiences, adapting to new inputs and changes in their environment.
- Decision Making: AI excels at making informed decisions by analyzing large volumes of data, identifying patterns, and predicting outcomes.
- Natural Language Processing: AI can understand and interpret human language, facilitating interactions between humans and machines, as seen in chatbots and virtual assistants.
- Problem Solving: AI can tackle complex problems, offering solutions that might be challenging or time-consuming for human beings.
Robotic Process Automation: Definition and Primary Functions
Robotic Process Automation involves using software robots or ‘bots’ to automate routine, rule-based tasks that are usually performed by human workers. These bots interact with digital systems and software to execute business processes.
- Primary Functions of RPA:
- Task Automation: RPA bots can handle tasks like data entry, invoice processing, and email responses, automating repetitive and time-consuming work.
- Workflow Management: RPA can manage workflows by routing information to the right systems and stakeholders, ensuring smooth operational processes.
- Data Manipulation: These bots can extract, organize, and process data, facilitating data-driven business processes.
Differences and Individual Strengths of AI and RPA
- Task Suitability: AI is designed for tasks that require cognitive abilities and decision-making, whereas RPA is best suited for repetitive, rule-based tasks that don’t require complex decision-making.
- Learning and Adaptability: AI systems can learn and adapt over time, improving their performance and capabilities, while RPA bots follow predefined rules and processes.
- Complexity and Application: AI can handle complex tasks involving unstructured data and natural language, making it ideal for applications like customer service, predictive analytics, and personalized recommendations. In contrast, RPA is more focused on structured tasks in defined environments, such as back-office processes and data management.
How Do AI and RPA Work Together: The Synergy
The integration of Artificial Intelligence and Robotic Process Automation marks a significant milestone in the evolution of business automation. This synergy, often termed as Intelligent Automation, capitalizes on the strengths of both technologies to create systems that are not only efficient but also intelligent and adaptive. Understanding how AI and RPA complement each other reveals the immense potential of this blend in revolutionizing business processes.
Complementary Nature of AI and RPA
- Enhanced Decision-Making: RPA, in its traditional form, follows predefined rules and lacks decision-making capabilities. AI fills this gap by providing the cognitive ability to analyze complex data, make informed decisions, and learn from outcomes. This combination allows for the automation of more complex tasks that require both action and decision-making.
- Increased Scope of Automation: While RPA is excellent at handling structured data, AI extends this capability to unstructured data, such as text, images, and speech. This broadens the scope of automation to include tasks like data interpretation, sentiment analysis, and natural language processing.
- Adaptive and Scalable Solutions: The incorporation of AI into RPA solutions allows for more adaptable and scalable automation. AI algorithms can learn and evolve, enabling the automation system to improve over time and adapt to changing business environments.
Intelligent Automation: The Blend of AI and RPA
Intelligent Automation represents the fusion of AI’s cognitive capabilities with RPA’s efficiency. It’s a holistic approach to automation that encompasses learning, analyzing, and decision-making, alongside executing routine tasks.
- Enhancing RPA with AI: Integrating AI technologies like machine learning, natural language processing, and computer vision into RPA systems elevates the functionality of bots from rule-based tasks to more intelligent operations.
- Dynamic Process Optimization: Intelligent Automation enables dynamic optimization of processes. It allows businesses to automate complex workflows that require both repetitive actions and cognitive input, leading to greater efficiency and accuracy.
Examples of AI-Enhanced RPA Automation
- Customer Service Chatbots: AI-powered chatbots integrated with RPA can handle customer queries, provide personalized responses, and perform backend tasks like updating records or processing orders, offering a comprehensive customer service solution (the AI-decides, bot-executes pattern behind these examples is sketched in code after this list).
- Financial Data Analysis: In finance, AI can analyze market trends and company performance, while RPA automates routine financial operations like data entry and report generation, resulting in efficient and insightful financial analysis.
- Healthcare Patient Management: In healthcare, AI can assist in diagnosing and recommending treatment plans, while RPA takes care of administrative tasks like patient scheduling and record maintenance.
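The common shape of these examples, where an AI model makes the judgment call and deterministic bot steps carry it out, can be sketched in a few lines of code. Everything below (interfaces, labels, helper names) is hypothetical and product-agnostic.

```java
import java.util.Map;

/** Sketch of intelligent automation: an AI model classifies an incoming
 *  document, then deterministic RPA-style steps act on the result.
 *  Interfaces and names are hypothetical, not a specific RPA product. */
public class InvoiceBot {

    /** Stand-in for an ML service (e.g. a document classifier). */
    interface Classifier {
        String label(String text);   // e.g. "invoice", "complaint"
    }

    private final Classifier ai;

    public InvoiceBot(Classifier ai) { this.ai = ai; }

    /** AI decides; the bot executes fixed, auditable steps. */
    public void handle(String documentText, Map<String, String> fields) {
        switch (ai.label(documentText)) {
            case "invoice" -> {
                // Rule-based RPA part: structured, repeatable actions.
                validate(fields.get("amount"));
                postToLedger(fields);
            }
            case "complaint" -> escalateToHuman(documentText);
            default -> queueForReview(documentText);
        }
    }

    private void validate(String amount) { /* schema / range checks */ }
    private void postToLedger(Map<String, String> f) { /* ERP data entry */ }
    private void escalateToHuman(String t) { /* human-in-the-loop */ }
    private void queueForReview(String t) { /* manual triage queue */ }
}
```

Note the division of labor: the probabilistic component only chooses a branch, while every action the bot takes remains deterministic and auditable.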
AI and RPA Integration Challenges and Solutions
Integrating Artificial Intelligence with Robotic Process Automation can unlock tremendous value for businesses, but it’s not without its challenges. These challenges range from technical issues like data quality and system compatibility to strategic concerns such as aligning AI and RPA with business objectives. This section explores these common hurdles and offers strategies to effectively overcome them.
- Data Quality and Accessibility: High-quality, accessible data is crucial for AI algorithms to function effectively. However, businesses often struggle with fragmented or poor-quality data, which can impede the effectiveness of AI-enhanced RPA solutions.
- Strategy: Implement robust data governance practices to ensure data quality and accessibility. This involves standardizing data collection processes, ensuring data accuracy, and making data readily available for AI and RPA applications.
- System Compatibility and Integration: Integrating AI with existing RPA setups and other business systems can be challenging due to compatibility issues, legacy systems, or different technology stacks.
- Strategy: Conduct a thorough assessment of current systems and infrastructure. Choose AI and RPA solutions that are known for their interoperability and offer flexible integration capabilities. Employ middleware or API-based integrations to bridge gaps between disparate systems.
- Balancing Automation and Human Oversight: Finding the right balance between automation and human intervention is critical. Over-reliance on automation can lead to oversight of complex issues that require human judgment.
- Strategy: Establish clear guidelines on the roles of AI and RPA in business processes. Maintain a system of checks and balances where automated solutions are regularly monitored and supplemented by human decision-making where necessary.
- Skill Gaps and Training Needs: The successful implementation of AI and RPA integration often requires skills that may not be present in the existing workforce.
- Strategy: Invest in training programs to upskill employees in AI and RPA-related technologies. Consider hiring or collaborating with experts in the field to fill immediate skill gaps.
Strategies to Overcome the Challenges
- Phased Implementation Approach: Adopt a phased approach to integration, starting with smaller, manageable projects before scaling up. This allows for the identification and resolution of issues in the early stages.
- Tactic: Begin with pilot projects that demonstrate the potential of AI and RPA integration, then gradually expand to more complex applications as confidence and expertise grow.
- Strong Collaboration Between IT and Business Units: Encourage collaboration between IT departments and business units to ensure that the AI and RPA integration aligns with business goals and operational needs.
- Tactic: Establish cross-functional teams that include IT specialists, business analysts, and end-users to guide the integration process.
- Continuous Monitoring and Optimization: AI and RPA integrations require ongoing monitoring and optimization to ensure they continue to meet business needs effectively.
- Tactic: Implement continuous monitoring tools and conduct regular reviews to assess the performance of AI-RPA systems and identify areas for improvement.
The Future of How Do AI and RPA Work Together
As businesses continue to navigate the digital landscape, the integration of Artificial Intelligence and Robotic Process Automation is poised to create more profound impacts across various industries. Understanding the future trends and potential advancements in these technologies can help organizations stay ahead of the curve and fully harness their transformative power.
Predicting Future Trends in AI and RPA Integration
- Greater Cognitive Capabilities: Future AI advancements are expected to bring enhanced cognitive capabilities to RPA, enabling bots to handle more complex decision-making and problem-solving tasks. This will likely broaden the scope of automation to include processes that require sophisticated analyses and judgments.
- Hyper-Automation: The trend towards hyper-automation, where businesses automate as many processes as possible, is expected to gain momentum. AI and RPA will play a central role in this movement, driving the creation of highly efficient, automated digital ecosystems.
- Seamless Human-AI Collaboration: The future will see more seamless collaboration between humans and AI-enhanced RPA systems. This collaboration will focus on complementing human creativity and strategic thinking with the efficiency and accuracy of AI-RPA systems.
Potential Impact of AI and RPA on Different Industries
- Healthcare: In healthcare, AI and RPA integration can revolutionize patient care management, from automated patient scheduling and billing to AI-driven diagnostics and personalized treatment plans.
- Finance and Banking: The finance sector may see significant advancements in fraud detection, risk assessment, and customer service optimization, with AI and RPA offering more secure, efficient, and personalized financial services.
- Manufacturing and Supply Chain: In manufacturing and supply chain, AI and RPA can lead to smarter inventory management, predictive maintenance, and optimized logistics, enhancing operational efficiency and reducing costs.
Advancements to Watch for in AI and RPA Technologies
- Enhanced Learning Algorithms: The development of more sophisticated machine learning algorithms is expected, which will enable RPA bots to learn and adapt to new tasks and environments more quickly and effectively.
- Improved Natural Language Processing: Advances in natural language processing (NLP) will allow for more nuanced and complex interactions between AI systems and human language, enhancing the capabilities of AI in understanding and responding to human inputs.
- Integration with IoT and Edge Computing: The integration of AI and RPA with IoT (Internet of Things) and edge computing is likely to grow, allowing for real-time data processing and automation in a broader range of contexts, including remote and mobile environments.
- Autonomous Decision-Making: Future developments may enable AI-enhanced RPA systems to make autonomous decisions based on real-time data and predictive analytics, further reducing the need for human intervention in routine processes.
Getting Started with AI and RPA Integration
Integrating Artificial Intelligence with Robotic Process Automation is a strategic move that can significantly enhance business operations. However, for many organizations, embarking on this journey can seem daunting. The steps below offer a practical starting point.
- Assess Your Business Needs: Begin by identifying the areas within your business that could benefit most from automation. Look for processes that are repetitive, time-consuming, and prone to human error. Consider how AI can add value, perhaps through enhanced decision-making, data analysis, or customer interaction.
- Start Small and Scale Up: It’s advisable to start with a small, manageable project. This approach allows you to test the waters, understand the integration complexities, and gauge the impact before scaling up to more complex systems.
- Build a Skilled Team: Assemble a team with the right mix of skills. This team should ideally include members with expertise in AI, RPA, data analysis, and project management. If such skills are not available in-house, consider hiring new talent or upskilling existing employees.
- Prioritize Data Management: Good data is the lifeblood of effective AI-RPA integration. Ensure that you have access to high-quality, relevant data and that your data management practices are up to the mark in terms of accuracy, consistency, and accessibility.
Tips on Selecting the Right Tools and Partners
The market is replete with AI and RPA tools, each offering different features and capabilities. Evaluate tools based on how well they align with your business processes, the level of customization they offer, and their ease of integration.
Opt for solutions that are scalable and flexible. As your business grows and evolves, your AI and RPA solutions should be able to adapt to changing needs and scale accordingly.
Choose vendors that offer robust support and have an active user community. Good vendor support can be invaluable during the integration process, and a vibrant community can provide additional insights and troubleshooting assistance.
Collaborating with experienced partners can significantly smooth out the integration process. Look for partners or consultants who have a proven track record in implementing AI and RPA solutions in your industry.
Once the tools are selected, invest in training your team to ensure they are equipped to use the new systems effectively. Continuous learning should be encouraged to keep up with the evolving nature of these technologies.
The integration of AI and RPA heralds a future where automation extends beyond routine tasks to encompass complex decision-making and problem-solving capabilities. This evolution paves the way for businesses to not only optimize efficiency and reduce costs but also to unlock new opportunities for innovation and strategic growth. With AI-enhanced RPA, organizations can tackle more intricate tasks, gain deeper insights from their data, and offer more personalized and effective services to their customers.
Looking ahead, the potential impacts of AI and RPA integration span across industries, reshaping everything from healthcare and finance to manufacturing and customer service. This integration promises to bring about smarter, more responsive, and adaptive business solutions. As these technologies continue to advance and mature, we can expect to see even more innovative applications and solutions emerging.
In a world facing pressing environmental challenges, the role of technology in driving positive change has never been more critical. From devices that can measure water usage, smart vehicles that make transporting goods more efficient, or even waste containers that are smarter than your average trash bin, IoT is being applied for good in new and surprising ways.
These emerging advancements are exciting, and they offer incredible potential for the way we live and how businesses operate. With each new development, however, we often must look back and ask: is this benefiting the Earth? Let’s explore how IoT for Good initiatives have emerged as a powerful tool for environmental monitoring, conservation, and sustainability.
Reducing Carbon Footprint through Renewable Energy
Renewable energy sources like solar, wind, hydroelectric, and geothermal power are increasingly becoming mainstream. In 2023, roughly 50 percent more renewable energy capacity was added to energy systems around the world than in the previous year.
Technology is actively tackling some of our biggest issues with non-renewable energy, reducing our carbon footprint and making the deployment of renewable, clean energy easier and more efficient. Imagine, for instance, all energy systems connected by Artificial Intelligence (AI), monitoring usage and flagging high-emission consumption for elimination.
This kind of technology makes reliance on renewable energy systems increasingly practical. Experts anticipate that renewable energy will overtake coal as the largest source of electricity by 2025.
Optimizing Resource Management
Effective resource management is essential for preserving Earth’s natural resources. Technologies such as AI, drones, and sensors play a crucial role in monitoring and optimizing resource usage and can be implemented in a variety of ways such as:
- Agriculture: The ability to precisely monitor crop growth and harvest yields allows for an increase in food accessibility. By leveraging connectivity solutions, BinSentry makes feed monitoring more efficient while reducing waste.
- Monitoring Water Usage: Water is one of our most critical resources, which makes responsible management that much more important. By leveraging ground sensors in agriculture and recreation, GroundWorx is not only reducing water usage but also saving money and enhancing smart output.
- Waste Management: Smart waste container management systems help monitor and minimize waste while promoting recycling and paving the way to a more sustainable future.
Promoting Sustainable Transportation
Transportation is one of the largest sources of carbon emissions, but technological advancements offer promising solutions. The goal is to use technology to create more efficient methods of transportation that cut down on emissions while also taking advantage of monitoring systems.
KORE’s Fleet technology, for instance, enables businesses to optimize route selection and increase productivity while minimizing the overall environmental impact and time spent on the road.
Harnessing the Power of IoT for Good
Let’s harness the power of IoT for Good to create a sustainable future for generations. Together, we can pave the way to a greener tomorrow through innovative technology and collective action. At KORE, we are committed to leveraging our expertise in IoT connectivity to support environmental sustainability efforts around the globe.
As more companies adopt remote and hybrid work models, cybersecurity risks also increase. A 2022 Verizon Mobile Security Index report revealed that 45 percent of respondents had suffered a cyberattack in the past year, with attacks rising by 22 percent compared to 2021. Sometimes, the success of a cyberattack is the result of a highly sophisticated, well-planned, coordinated attack. In other instances, a successful cybercrime results from human error, with employees falling victim to malware attacks, phishing scams, or accidental information leaks.
Companies can mitigate the risk of cyberattacks by implementing a zero-trust security model. Zero trust is a security approach following the “never trust, always verify” principle. It requires constant monitoring and verifying of all users and devices to ensure they meet an organization’s security rules. In a zero-trust system, each action and connection needs to be authenticated and authorized.
This post will discuss the main sectors that would benefit from adopting a zero-trust infrastructure, plus the top tools and software to implement it.
What Is Zero-Trust Infrastructure?
Implementing zero-trust infrastructure is one of the most efficient ways for companies to control access to their platforms and digital interfaces. But what is this type of cybersecurity measure?
First conceptualized in 2010 by cybersecurity analyst John Kindervag, zero trust is a comprehensive cybersecurity defense framework that ensures that every attempt to access secure network information passes through a series of security checkpoints. Unlike other systems, such as VPNs (Virtual Private Networks), which may grant restricted access to pre-approved employees, clients, customers or colleagues, zero-trust infrastructure assumes that every login presents a possible threat.
This approach makes secure company data inaccessible to anyone who fails to prove that they are a legitimate system user.
The zero-trust method reduces the possibility of human error. Since every login is viewed with the same threat level and the same restrictions on authority, it becomes significantly more difficult for hackers to impersonate authorized employees or run phishing campaigns to harvest login information. Each device and user must pass through advanced authentication and authorization protocols, so even an authorized user attempting to log in through a compromised device, or via an unauthorized network, will face restrictions on access to system data.
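Schematically, a zero-trust gate re-runs the same checks on every request rather than trusting an earlier session. The sketch below is an illustrative assumption: the interfaces stand in for whatever identity provider, device-posture service, and policy engine an organization actually deploys.

```java
/** Schematic per-request zero-trust gate: nothing is trusted from an
 *  earlier request; identity, device, and entitlement are re-verified
 *  every time. All interfaces here are illustrative stand-ins. */
public class ZeroTrustGate {

    interface IdentityProvider { boolean verifyToken(String token); }
    interface DevicePosture   { boolean isCompliant(String deviceId); }
    interface PolicyEngine    { boolean allows(String user, String resource); }

    private final IdentityProvider idp;
    private final DevicePosture posture;
    private final PolicyEngine policy;

    public ZeroTrustGate(IdentityProvider idp, DevicePosture posture,
                         PolicyEngine policy) {
        this.idp = idp;
        this.posture = posture;
        this.policy = policy;
    }

    /** Deny by default; every check must pass on every single request. */
    public boolean authorize(String token, String user,
                             String deviceId, String resource) {
        if (!idp.verifyToken(token))        return deny("bad credentials");
        if (!posture.isCompliant(deviceId)) return deny("unmanaged device");
        if (!policy.allows(user, resource)) return deny("no entitlement");
        return true;
    }

    private boolean deny(String reason) {
        // Log for continuous monitoring; never reveal details to the caller.
        System.err.println("access denied: " + reason);
        return false;
    }
}
```

The deny-by-default ordering matters: a request must clear every check, and failing any single one ends the evaluation immediately.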
The U.S. government recently announced its widespread adoption of zero-trust infrastructure to shore up national cybersecurity. Its plan outlines advanced-level adoption of multi-factor authentication, network segmentation, deep-level encryption methods, tightened identity management, and continuous, ongoing enforcement of cybersecurity policies.
Cybersecurity professionals like Therese Schachner argue that all industries should follow the lead of the U.S. government, noting that companies and other organizations holding sensitive medical, financial, and personal data should adopt zero-trust infrastructures to “help stave off cyberattacks and keep this data private.”
Zero-trust infrastructure can protect data for companies that consumers rely on heavily, including water and electricity providers, software developers, and shipping and transportation operations.
Manufacturing will also benefit from adopting this advanced preventive approach. In the last several years, the sector has seen widespread adoption of digital, AI, and machine learning technology. For example, 43 percent of manufacturing companies report increasing their machine learning budgets by up to 25 percent, while 29 percent have increased their AI and ML budgets by up to 50 percent.
To keep pace with this technological adoption, manufacturing companies will need to develop equally advanced cybersecurity measures to protect proprietary software, strategies, and data.
Zero trust will ensure a streamlined security system, as every login attempt is routed through the same singular connection point. This tightens the hold of security, as missteps and potential bad actors can easily be identified and rooted out at the first point of entry.
A zero-trust approach also boosts access security for hybrid and remote companies. Remote workers will have to go through the same level and process of authentication and authorization as other employees, meaning they can access top-level, sensitive data more securely.
Zero-trust infrastructures also ensure that businesses and organizations across all industry sectors remain compliant with the latest cybersecurity policies. Since zero trust assumes that 100 percent of all interactions present a possible threat, it offers the most industry-compliant level of cybersecurity for businesses and organizations.
Top Implementation Tools and Software
Implementing zero-trust infrastructure provides a more streamlined approach to protecting company resources. For companies with hybrid business models, features such as strong protection for remote network logins will be invaluable, while large, primarily in-person corporations will need more protection against identity theft and for customer data.
For companies that are ready to adopt zero-trust infrastructure to shore up their cybersecurity, there is a bevy of zero-trust tools and software options to choose from.
Some options, such as BetterCloud, provide advanced customer service and fast responses but are designed to be implemented by an in-house IT expert or team. Other tools, such as GoodAccess, a cloud-based VPN with zero-trust policies, are designed to be implemented by the CEO or company owner.
Each company must consider its needs and requirements before choosing the appropriate zero-trust software. Luckily, many cybersecurity software developers offer free trial periods, allowing companies to test before they invest.
At the same time, businesses should hire backend developers who understand security and make sure they have the budget for this. Generally, companies can expect to pay at least $60 per hour for an experienced freelance developer who understands security and zero-trust measures, which should be cheaper than working through an agency.
The networks that companies and organizations are increasingly relying on are highly complex entities. They are distributed across vast geographical regions and accessed via countless devices. Each network presents a dynamic landscape with a continuously shifting arrangement of users, applications, devices, services, and data flowing in and out of the network.
Since this is such a dynamic situation, the boundaries are constantly shifting, making it a difficult space to protect. The zero-trust infrastructure approach addresses the potential cybersecurity risks, monitoring the access of each user, application, piece of data, and attempted interaction.
This advanced-level cybersecurity infrastructure protects all sensitive systems and data sets, no matter where users may be located, or who is trying to access these details. And it does so without interrupting or noticeably affecting user engagement.
Isla Sibanda is an experienced cybersecurity analyst and penetration testing specialist with a background in computer science and ethical hacking.
Scalability and elasticity are related, though they are different aspects of database availability. Both scalability and elasticity help to improve availability and performance when demand is changing, especially when changes are unpredictable.
Scalability refers to the capability of a system to handle a growing amount of work, or its potential to perform more total work in the same elapsed time when processing power is expanded to accommodate growth. A system is said to be scalable if it can increase its workload and throughput when additional resources are added.
A related aspect of scalability is availability and the ability of the system to undergo administration and servicing without impacting applications and end user accessibility. A scalable system can be changed to adapt to changing workloads without impacting its accessibility, thereby assuring continuing availability even as modifications are made. In other words, a scalable system can be adjusted without requiring any downtime.
Elasticity is the degree to which a system can adapt to workload changes by provisioning and deprovisioning resources in an on-demand manner, such that at each point-in-time the available resources match the current demand as closely as possible.
The goal of elasticity is to balance the amount of resources allocated to a service with the amount of resources it actually requires. In so doing, both over- and under-provisioning can be avoided. Over-provisioning is when more resources are allocated than are required, and it should be avoided, particularly in a cloud model because the service provider must pay for all allocated resources, which can increase the cost. With under-provisioning fewer resources are allocated than are required, and this can be problematic because it usually results in performance problems. When performance is slow enough it can look like downtime to the end user, resulting in customers abandoning the application… and that has a financial impact.
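A toy autoscaling loop makes this balancing act concrete. The demand trace, per-node capacity, and scaling rule below are invented for illustration:

```python
# Toy elasticity loop: provision or deprovision capacity so supply tracks demand.

demand = [40, 55, 90, 140, 120, 60, 30]   # e.g., requests/sec sampled each interval
CAPACITY_PER_NODE = 50
nodes = 1

for d in demand:
    target = -(-d // CAPACITY_PER_NODE)    # ceiling division: nodes needed now
    if target > nodes:
        print(f"demand={d:3}: scaling out {nodes} -> {target} (avoid under-provisioning)")
    elif target < nodes:
        print(f"demand={d:3}: scaling in  {nodes} -> {target} (avoid over-provisioning)")
    else:
        print(f"demand={d:3}: holding at {nodes} node(s)")
    nodes = target
```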
From a database perspective, elasticity implies a flexible data model and clustering capabilities. The greater the number of changes that can be tolerated, and the easier clustering is to manage, the more elastic the DBMS.
Ways to Scale Databases
There are two broad categories for scaling database systems: vertical scaling and horizontal scaling. With vertical scaling, also called scaling up, resources (such as memory or more powerful CPUs) are added to an existing server. Removing resources, on the other hand, is called scaling down.
Replacing or adding resources to a system typically results in performance improvement, but realizing such gains often requires reconfiguration and downtime. Furthermore, there are usually limitations to the amount of additional resources that can be applied to a single system, as well as to the software that uses the system.
Vertical scaling has been a standard method of scaling for traditional RDBMSs that are architected on a single-server type model. Nevertheless, every piece of hardware has limitations that, when met, cause further vertical scaling to be impossible. For example, if your system only supports 256 GB of memory, when you need more memory you must migrate to a bigger box, which is a costly and risky procedure requiring database and application downtime.
Horizontal scaling, also known as scaling out, is the process of adding more hardware to a system. This typically means adding nodes (new servers) to an existing system. Doing the opposite, that is removing hardware, is referred to as scaling in.
With the cost of hardware declining, it can make sense to adopt horizontal scaling using low-cost commodity systems for tasks that previously required larger computers. Of course, horizontal scaling can be limited by the capability of software to exploit networked computer resources and other technology constraints. Keep in mind, too, that traditional database servers cannot run on more than a few machines (that is, you can typically scale to 2 or 3 machines, not to 100 or more).
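One reason software limits horizontal scaling is data placement. The sketch below, using naive modulo sharding, shows how adding a single node forces most keys to relocate, which is the kind of problem a scale-out database must solve internally (illustrative only):

```python
# Naive modulo sharding: adding a node reshuffles almost every key, one of
# the practical limits on horizontal scaling mentioned above.

import hashlib

def shard(key: str, n_nodes: int) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_nodes

keys = [f"customer-{i}" for i in range(1000)]
before = {k: shard(k, 3) for k in keys}
after = {k: shard(k, 4) for k in keys}          # scale out from 3 to 4 nodes
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved}/{len(keys)} keys relocate after adding one node")
```

Techniques such as consistent hashing exist precisely to shrink this relocation cost, which is part of what distinguishes databases architected for elasticity.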
Horizontal and vertical scaling can be combined, with resources added to existing servers to scale vertically and additional servers added to scale horizontally when required. It is wise to consider the tradeoffs between horizontal and vertical scaling as you consider each approach.
Horizontal scaling results in more computers networked together and that will cause increased management complexity. It can also result in latency between nodes and complicate programming efforts if not properly managed by either the database system or the application. That said, depending on your database system’s hardware requirements, you can often buy several commodity boxes for the price of a single, expensive, and often custom-built server that vertical scaling requires.
On the other hand, depending on your requirements, vertical scaling can be less costly if you’ve already invested in the hardware; it typically costs less to reconfigure existing hardware than to procure and configure new hardware. Of course, vertical scaling can lead to over-provisioning which can be quite costly.
Finally, you might want to look into some of the newer database systems (such as NuoDB, Cockroach, and FaunaDB) that were architected with scalability and elasticity in mind.
The New York Times recently published a captivating article that delves into the mesmerizing yet dangerous phenomenon of drifting sheets of ice. These vast, floating expanses of ice, which form and break away from larger ice formations, are both a wonder of nature and a stark reminder of the delicate balance within our planet’s ecosystems. In this piece, we explore the highlights of the NYT article, shedding light on the beauty, the science, and the growing peril of these drifting ice sheets.
The Allure of Ice Sheets
There’s something undeniably magical about vast sheets of ice floating across the ocean. Their shimmering surfaces reflect the sunlight, creating a dazzling display that has inspired artists, photographers, and nature lovers for centuries. These ice sheets are not just visually stunning; they play a crucial role in regulating the Earth’s temperature by reflecting sunlight back into space, a process known as the albedo effect.
However, the NYT article highlights that beneath their serene appearance lies a dynamic and often treacherous environment. Ice sheets are constantly in motion, driven by wind, ocean currents, and changes in temperature. This movement can create breathtaking ice formations, from towering icebergs to intricate patterns etched into the ice by the forces of nature.
The Science Behind Drifting Ice
Drifting sheets of ice are formed when large pieces of polar ice break away due to natural processes or climate change. As temperatures rise, the stability of these ice sheets is compromised, leading to the formation of floating ice masses that drift across the ocean. The NYT article emphasizes the interconnectedness of these ice sheets with global climate systems. As they drift and eventually melt, they contribute to rising sea levels and disrupt marine ecosystems.
Scientists have been studying the behavior of drifting ice sheets for decades, using satellites, drones, and underwater sensors to monitor their movement and predict their impact. The data collected from these studies is crucial for understanding the broader implications of melting ice on a warming planet.
The Perils of Drifting Ice
While drifting ice sheets are undeniably beautiful, they also pose significant dangers. The NYT article discusses the peril they represent for ships navigating the polar regions. Modern technology, such as icebreakers and satellite navigation, has reduced these risks, but the danger remains, particularly as climate change increases the unpredictability of ice movement.
Beyond the immediate hazards to human activity, the melting of these ice sheets has far-reaching consequences for global sea levels. The NYT article underscores the alarming rate at which these ice sheets are melting, contributing to rising sea levels that threaten coastal communities worldwide. The loss of ice also disrupts habitats for polar species, from seals to penguins, which rely on stable ice for breeding and hunting.
The Cultural and Environmental Significance
Drifting sheets of ice have long held cultural significance for indigenous communities living in the Arctic and Antarctic regions. The NYT article explores how these communities have traditionally relied on the ice for hunting, fishing, and travel. However, as the ice becomes less predictable and more prone to melting, their way of life is under threat. The article highlights the voices of indigenous leaders who are calling for greater awareness and action to address the impacts of climate change on their communities.
These ice sheets are also a critical component of the Earth’s climate system, helping to regulate temperatures and maintain oceanic currents. The NYT article draws attention to the need for global cooperation in addressing the root causes of climate change, including reducing greenhouse gas emissions and protecting vulnerable ecosystems.
The Beauty in the Fragility
Despite the dangers and the environmental challenges posed by drifting ice sheets, the NYT article doesn’t shy away from celebrating their beauty. The article features stunning photography that captures the ethereal quality of these floating ice masses. From the vibrant blue hues of ancient ice to the stark contrast of white against a deep blue sea, the visual appeal of drifting ice is undeniable.
This beauty, however, is inextricably linked to its fragility. The NYT article poignantly reminds us that what we find so captivating about these ice sheets—their pristine, untouched quality—is also what makes them so vulnerable. As they melt and disappear, we lose not only a key element of our planet’s climate system but also a source of inspiration and wonder.
The NYT article on drifting sheets of ice serves as both a celebration and a warning. It invites readers to appreciate the natural beauty of these ice formations while also urging them to recognize the urgent need to protect them. As climate change continues to accelerate, the fate of these drifting ice sheets hangs in the balance, with consequences that reach far beyond the polar regions.
In the end, the article leaves us with a powerful message: the time to act is now. Preserving the beauty and stability of our planet’s ice sheets requires collective effort, global cooperation, and a commitment to sustainable practices. Only then can we hope to safeguard the delicate balance of our world’s ecosystems and ensure that future generations can marvel at the wonders of drifting ice.
A quantum computing task force should be created for the UK financial services sector to help businesses harness the potential of quantum technology, an industry group says. The task force could also help organisations guard against the risks posed by quantum algorithms, according to two new reports published by UK Finance.
The papers contain a series of opportunities and threats posed by quantum technology, a new type of computer currently in development. Once reliable quantum machines are in circulation, it is likely they will be able to far exceed the processing power of classical machines for performing specific tasks. Many experts believe this point – dubbed quantum advantage – will occur at some point in the next decade.
Financial services is often cited as an area where the first quantum computers could be deployed, which is why UK Finance, a body representing 300 banking and finance companies, has been studying the technology.
Jana Mackintosh, managing director of payments, innovation and resilience at UK Finance, said quantum computing “is going to transform the financial services industry, it’s just a matter of when”.
Mackintosh said: “It represents a multibillion-pound opportunity, from new levels and speeds of risk analysis to radically streamlining compliance processes that currently cost the sector billions every year. We cannot ignore the risks, however, and the reality that when a cryptographically relevant quantum computer is developed, it could break the encryption underpinning all payments and electronic commerce.”
The opportunities and risks of quantum computing for financial services
The reports, produced for UK Finance by IBM consulting, outline eight areas where the authors believe quantum computing can benefit financial services companies. These are: enhancing risk analysis capabilities, streamlining compliance processes, improving portfolio optimisation and asset management, securing sensitive data, revolutionising data management, optimising operational processes, enhancing sales strategies and improving pricing models.
However, the reports warn that unless companies in the sector are “systematically safeguarded against this risk,” a cryptographically relevant quantum computer would be able to unravel the security on all payment systems and access secure personally identifiable information. This refers to the ability of future quantum computers to crack the encryption standards currently used to secure transactions. Post-quantum cryptography, designed to resist such attacks, is in development by teams around the world.
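A toy example shows what is at stake. The snippet below breaks a deliberately tiny RSA key by brute-force factoring; Shor's algorithm running on a cryptographically relevant quantum computer would do the equivalent to real-world key sizes:

```python
# Toy illustration of the threat: RSA security rests on factoring being hard.
# Classical brute force breaks this tiny key instantly; a large quantum
# computer running Shor's algorithm would do the same to realistic key sizes.

n, e = 3233, 17            # toy RSA public key (n = 61 * 53)

def factor(n):
    p = 2
    while n % p:
        p += 1
    return p, n // p

p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # recover the private exponent
ciphertext = pow(42, e, n) # "intercepted" message: 42 encrypted with the public key
print(pow(ciphertext, d, n))  # prints 42: the encryption is unraveled
```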
Other risks highlighted are possible market instability caused by unequal access to the technology, insufficient quantum talent in the UK, a technological debt from legacy systems, as well as a number of environmental considerations associated with quantum technology.
Is a quantum task force needed in financial services?
The report makes seven recommendations, including the setting up of a quantum computing task force “responsible for monitoring advancements in quantum computing, identifying potential use cases relevant to the financial services sector”. This “should also, as a priority, work on developing a strategic roadmap for safely integrating quantum computing solutions,” the report says.
Alongside this, a separate quantum-safe task force should be established with a mandate to “develop and implement sector-wide quantum-safe transformation strategies”. It should “represent all parts of the financial services ecosystem as well as national critical infrastructure”, the authors write.
UK Finance’s Mackintosh added: “Industry and government must work together now to secure the financial services industry against these threats, and put the UK at the head of the pack to seize the opportunities.”
As reported by Tech Monitor, financial companies including HSBC and Goldman Sachs are already experimenting with quantum technologies. The UK government set out a £2.5bn quantum strategy, aimed at boosting UK expertise in the technology, earlier this year.
When the European Commission proposed the first EU regulatory framework for AI in April 2021, few would have imagined the speed at which such systems would evolve over the next three years. Indeed, according to the 2024 Stanford AI Index, in the past 12 months alone, chatbots have gone from scoring around 30-40% on the Graduate-Level Google-Proof Q&A Benchmark (GPQA) test, to 60%. That means chatbots have gone from scoring only marginally better than would be expected by randomly guessing answers, to being nearly as good as the average PhD scholar.
The benefits of such technology are almost limitless, but so are the ethical, practical and security concerns. The landmark EU AI Act (EUAIA) legislation was adopted in March this year to overcome these concerns, by ensuring that any systems used in the European Union are safe, transparent and non-discriminatory. It provides a framework for establishing:
Rules to ensure AI technology is used in a responsible and fair manner.
Safeguards to prevent individuals from being impacted by decisions made by AI.
Mechanisms to verify the decisions made by AI are correct.
Avenues to ensure individuals can be held accountable for incorrect decisions made by AI.
Approaches for enforcing governance, ethics and safety requirements of AI.
The regulations apply to any company involved in building or using AI systems, regardless of industry or sector. Given the utility and predicted ubiquity of AI, this means virtually every large organization needs to start looking at how the legislation affects them and what needs to be done to ensure compliance.
Intelligent Regulatory Approach
Regulating any new technology is always difficult, much less one that’s evolving at the breakneck speed of AI. It can be tricky even to define what counts as an AI system, never mind parsing which elements need to be regulated and how. Moreover, unlike other tools and technologies, AI systems are highly unpredictable, as the models evolve with each interaction. This means even the engineers who build them often can’t be sure how their AI will behave in any given situation.
Nevertheless, regulation is important and it’s encouraging that so many stakeholders in both the public and private sectors agree on the need for effective guardrails. At the same time, overly burdensome regulations could hold back the development of important innovations. As such, AI regulations must strike the right balance between mitigating potential harms and abuses, while also enabling companies to reap the rewards, such as improved R&D. Happily, by opting for an outcomes- and use-case-based approach to regulation instead of a process-based one, the EUAIA should avoid subjecting companies to overly onerous red tape, while still putting effective safeguards in place.
Brand New Skillset
Yet even well-designed regulations place several obligations on businesses and the EUAIA will require all affected organizations to be a great deal more transparent about their use of AI. By June 2026, organizations will need to have classified and registered their AI models based on the four risk categories outlined by the EU. They must also establish the data governance, risk and quality management, and incident reporting processes needed to ensure the accuracy, robustness and security of their AI systems. Finally, they must incorporate the tools to maintain human oversight of AI systems and build out the technical documentation needed to assess and demonstrate compliance.
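As a rough illustration (a simplified sketch, not legal advice), the snippet below maps hypothetical example systems onto the Act's four risk tiers (unacceptable, high, limited, and minimal) with abbreviated obligations for each; the example systems and their tier assignments are invented for demonstration:

```python
# Illustrative mapping of example systems onto the EU AI Act's four risk
# tiers. Assignments and obligations are simplified for demonstration.

RISK_TIERS = {
    "social-scoring of citizens": "unacceptable",  # prohibited outright
    "cv-screening for hiring":    "high",          # strict obligations apply
    "customer-service chatbot":   "limited",       # transparency duties
    "spam filter":                "minimal",       # largely unregulated
}

OBLIGATIONS = {
    "unacceptable": ["do not deploy"],
    "high": ["register system", "risk management", "human oversight",
             "technical documentation", "incident reporting"],
    "limited": ["disclose AI use to users"],
    "minimal": ["voluntary codes of conduct"],
}

for system, tier in RISK_TIERS.items():
    print(f"{system}: {tier} risk -> {', '.join(OBLIGATIONS[tier])}")
```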
Organizations will therefore need to create some brand new workflows, as well as reengineering existing ones. Unfortunately, this is unlikely to be a simple process. New roles, such as chief AI officer, will need to be created and integrated into existing management teams, while functions like content creation and visual design will find that they have new responsibilities that require in-depth training.
Managing such a significant organizational change in mere months is a huge challenge for any organization, especially large enterprises. It is even more difficult given the worsening AI skills gap. The EUAIA hasn’t even come into effect, but demand for AI professionals is far outstripping supply. Indeed, between November 2022 and August 2023, the number of AI-related job roles rose 21-fold. With thousands of organizations potentially having to present metrics, models and reports to demonstrate compliance, the AI skills gap could quickly slip into a full-blown crisis.
The first step in preparing for the EUAIA is therefore to develop a comprehensive strategy that helps organizations to identify and address their own skills gaps that need to be filled to ensure implementations of AI are effective and compliant. Given that barely one in ten companies have a chief AI officer, the development and execution of this strategy will likely be driven by other technology executives in conjunction with appropriate AI experts. If that subject matter expertise does not exist internally, businesses will need to turn to external partners, to ensure they can avoid poor decision-making, misaligned expectations and regulatory non-compliance.
Dawn of the AI-Powered World
The potential benefits of AI in almost every industry, from medical science and green tech to communications and content creation, are enormous. However, any technology as powerful as AI presents some very serious risks and it is hugely encouraging, therefore, that the EU is moving swiftly to bring in robust but flexible legislation to protect citizens while not stunting innovation.
Meeting these obligations is important, but businesses will need to plan carefully and invest wisely to do so. However, they must also remember they are not alone. The breakneck speed at which AI is evolving means that organizations are going to have to be more collaborative, sharing knowledge and expertise, to successfully make the transition. By working with experts who have the insights and knowledge to deliver successful projects in a way that is compliant with the evolving regulatory landscape, businesses can position themselves to be dominant players in the new AI-powered world.
With the cost of a data breach at an all-time high of $4.35M and more data security and privacy regulations coming into force, organizations worldwide must ensure that they have the security controls and measures to keep their data safe.
- What is the Center for Internet Security (CIS)?
- What are the CIS Controls?
- Why are the CIS Controls important?
- The CIS Controls’ structure
- CIS Critical Security Controls
- Inventory and control of enterprise assets
- Inventory and control of software assets
- Data protection
- Secure configuration of enterprise assets and software
- Account management
- Access control management
- Continuous vulnerability management
- Audit log management
- Email and web browser protections
- Malware defenses
- Data recovery
- Network infrastructure management
- Network monitoring and defense
- Security awareness and skills training
- Service provider management
- Application software security
- Incident response management
- Penetration testing
- How do the CIS Controls work with other compliance standards?
- How to implement the CIS Controls
- How to achieve CIS Compliance with NordPass
By implementing the Center for Internet Security (CIS) Controls, organizations can establish a firm base for an effective cybersecurity approach to counter ever-evolving cyber threats. Today, we’re talking CIS Controls compliance.
What is the Center for Internet Security (CIS)?
The Center for Internet Security (CIS) is a non-profit organization established in 2000. With the primary goal of enhancing cybersecurity readiness and response, the CIS leverages its expertise to promote and share essential cybersecurity guidelines. Its flagship best practices, the CIS Controls, were formerly known as the SANS Critical Security Controls.
The CIS aims to develop and promote best cybersecurity practices to help individuals, businesses, and other organizations protect themselves from evolving cyber threats.
What are the CIS Controls?
The CIS Controls are a set of prescriptive cybersecurity best practices and frameworks. The CIS Controls provide guidance and a clear path for organizations to improve their cybersecurity posture and mitigate the risk of the most pervasive and dangerous security threats. One of the main benefits of the CIS Controls is that they provide a small but focused set of actions that almost any organization can achieve. And yet, these simple steps still yield significant results in deterring possible cyber threats.
Developed by top security experts from around the world, the CIS Controls are regularly updated according to the latest trends in the cyber threat landscape. The latest version of the CIS Controls includes 18 controls that aim to prevent cyberattacks, support compliance, and improve security posture.
Why are the CIS Controls important?
The CIS Controls are important because they combine the easiest and most effective recommendations to reduce the risk of cyberattacks. In the current cybercrime landscape, the CIS Controls can help organizations establish a secure perimeter and maintain the necessary defenses.
The CIS Controls’ structure
The 18 CIS Controls are split into three implementation groups: basic cyber hygiene (IG1), enterprise-level protection for regulated businesses (IG2), and protection against targeted and zero-day attacks (IG3).
The IG1 controls are a set of 56 safeguards that provide a basic level of cyber hygiene for an organization. Organizations should implement IG1 controls to defend against the most common cyber threats.
The IG2 controls include 74 additional safeguards on top of the 56 in IG1. The IG2 controls and safeguards require specialized expertise and dedicated individuals to manage the security infrastructure. Organizations with more resources and a moderate risk of data exposure should implement IG2 controls alongside IG1 controls.
The IG3 controls come with 23 additional safeguards designed to help mature organizations facing significant security risks or handling the most sensitive data.
CIS Controls Version 8
With the ever-evolving threat landscape, the Center for Internet Security (CIS) continuously updates its set of best practices and frameworks to stay abreast of cybersecurity developments.
The latest iteration of these guidelines, known as the CIS Controls Version 8 or the CIS Critical Security Controls V8, which incorporates revisions and enhancements from the previous Version 7, was introduced in 2021. It builds on its predecessors by refining existing controls and introducing new measures that reflect current security needs.
CIS Controls V8 is designed to address the risks of today's working environment, which heavily involves cloud-based interactions and remote work. For more comprehensive insights into the changes and updates in CIS Controls Version 8, refer to the official CIS documents and Security Metrics' detailed analysis.
CIS Critical Security Controls
Inventory and control of enterprise assets
You need to know what you must protect. Therefore, the first CIS Control focuses on identifying all the assets within the organization. Those assets can include mobile devices, network devices, servers, and Internet of Things (IoT) devices connected to the infrastructure, whether they are connected virtually, physically, or remotely. By getting an accurate inventory of your assets, you will understand what you need to monitor and protect within the organization.
Inventory and control of software assets
The second control on the list focuses on providing safeguards to ensure you know what software your team is using and that no unauthorized software can make its way onto your network.
The third CIS Control urges organizations to develop a technical process and security controls to classify, identify, handle, retain, and dispose of data.
Secure configuration of enterprise assets and software
Default software configurations are usually developed to prioritize ease of development or ease of use rather than security. This CIS Control focuses on establishing a more secure configuration and maintaining it to lower the risk of cyber threats.
Account management
The fifth CIS Control focuses on using tools and processes to manage how, when, and why accounts are issued and ensuring they are terminated once they’re no longer in use.
Access control management
Like the previous controls, this control urges organizations to use tools to create and manage access privileges to enterprise assets and software.
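A minimal role-based access control sketch illustrates the idea behind this control; the roles and permissions below are hypothetical:

```python
# Minimal RBAC sketch: privileges are granted to roles, users hold roles,
# and every access is checked against that mapping.

ROLE_PERMISSIONS = {
    "analyst":  {"reports:read"},
    "engineer": {"reports:read", "configs:read", "configs:write"},
    "admin":    {"reports:read", "configs:read", "configs:write", "users:manage"},
}

USER_ROLES = {"dana": ["analyst"], "lee": ["engineer", "admin"]}

def can(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS[r] for r in USER_ROLES.get(user, []))

print(can("dana", "configs:write"))  # False -- least privilege enforced
print(can("lee", "users:manage"))    # True
```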
Continuous vulnerability management
The cyber threat landscape is constantly evolving. Following the seventh CIS Control will help you develop a strategy to ensure continuous vulnerability management and analysis.
Audit log management
The eighth CIS Control sets out guidelines on how the organization should collect, audit, and examine its logs to ensure they are protected.
Email and web browser protections
This CIS Control focuses on using appropriate security measures to ensure browsers are not vulnerable to threats and email communications are always secure.
Malware defenses
The 10th CIS Control outlines how organizations can prevent malicious software from entering the organization’s network with the help of appropriate security tools.
Data recovery
The 11th CIS Control focuses on developing a data recovery plan and processes that would be sufficient to restore any enterprise assets to a pre-incident state.
Network infrastructure management
To help organizations lower the potential attack vector, this CIS Control outlines how to actively manage organizational networks and the devices on those networks.
Network monitoring and defense
CIS Control number 13 encourages companies to monitor their network for suspicious activity and ensure that appropriate defense mechanisms are in place to deter bad actors.
Security awareness and skills training
The 14th CIS Control outlines the benefits of regular cybersecurity awareness and training sessions to ensure that the entire organization understands the risks and security strategy.
Service provider management
These days, supply chain attacks are rampant. To reduce the chances of being hit by a supply chain attack, this CIS Control urges companies to develop a comprehensive process to evaluate their partners and service providers to ensure their security practices are up to standard.
Application software security
CIS Control number 16 urges organizations to manage the security of software they develop in-house or commission externally to prevent security vulnerabilities.
Incident response management
To help organizations manage crises effectively and efficiently, this CIS Control stresses the importance of developing a comprehensive cybersecurity incident response.
Penetration testing
The final CIS Control defines the importance of testing the resilience and effectiveness of organizational security posture and encourages the company to run penetration tests that simulate real-life cyberattacks.
How do the CIS Controls work with other compliance standards?
Today, any business that wishes to handle sensitive data must adhere to various security and compliance regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). The CIS Controls are cross-compatible with various compliance regulations because they provide clear and in-depth best practices regarding cybersecurity in a business environment, which are in line with compliance standards.
CIS vs. NIST: What’s the difference?
In the cybersecurity world, CIS and NIST are two often-mentioned acronyms. Understanding the differences between the CIS framework vs. NIST is critical for effective information security. Both contribute significantly to cybersecurity, but their focuses and approaches vary.
Organizational background: The Center for Internet Security (CIS) is a non-profit organization established in 2000, known for creating the CIS Controls, a set of best practices for securing systems and data. NIST (the National Institute of Standards and Technology), a federal agency within the US Department of Commerce, has a broader mission, focusing on advancing measurement science, standards, and technology, with a growing emphasis on cybersecurity issues.
Mission and resources: CIS focuses specifically on cybersecurity, offering resources like security benchmarks and threat intelligence. In contrast, NIST's scope is wider, focusing on developing standards and guidance.
Approach: CIS takes a pragmatic approach, collaboratively developing tools and resources, while NIST's research-oriented approach focuses on studying and understanding cybersecurity issues.
Size and impact: CIS, with a staff of just over 100, influences numerous organizations globally, whereas NIST, with a staff of over 3,000, primarily impacts US government agencies.
While both offer valuable tools, they differ in approach. The differences between NIST vs. CIS could influence their relevance depending on your specific needs and the nature of your organization.
How to implement the CIS Controls
In order to successfully implement the CIS Controls, organizations must be ready and willing to take action to establish a secure organizational perimeter. The CIS Controls implementation guide recommends taking a phased approach.
Phase 1 should include an audit of your entire network, what’s connected to it, and why. During this phase, you will clearly understand your organization’s cybersecurity baseline.
During phase 2, you should focus on securing your baseline through staff education and implementing security tools and processes. Phase 3 is the time to craft incident-response plans to ensure your organization acts in a well-coordinated manner in case of an emergency.
How to achieve CIS Compliance with NordPass
Besides the fact that a business password manager is a must-have tool for any organization that seeks to remain secure these days, corporate password managers are also handy compliance-wise.
A password manager such as NordPass Business can help organizations meet many of the benchmarks set by CIS.
NordPass provides companies with a single secure place to store passwords, credit cards, and other sensitive data. Thanks to end-to-end encryption and zero-knowledge architecture, everything stored in the NordPass vault is highly protected.
Furthermore, NordPass’ advanced security tools, such as Password Health and the Data Breach Scanner, can help organizations further assess their password strength and determine whether any of their emails or domains have appeared in a data leak.
Finally, NordPass Business allows organization owners and admins to have a complete overview of user activity and manage access privileges according to specific needs.
If you are looking for a business password manager for your organization and want to know how NordPass Business can improve your organization’s overall security stance and compliance posture, please get in touch with our representative to schedule a demo.
Grouping data in Excel is pivotal for anyone dealing with large or complex data sets. It not only helps in tidying your spreadsheet but also enhances your ability to analyze the data profoundly. This functionality allows users to collapse and expand data views, helping focus on necessary data segments without distractions. It is particularly helpful in summarizing large data amounts, making it easier to view only what is necessary at any given time. Grouping thus acts as a powerful tool to boost productivity and efficiency, especially when working with extensive spreadsheets that might otherwise seem overwhelming.
Welcome to our latest tutorial on Excel Tips & Tricks! In this video, you'll learn how to group data by rows and columns in Excel, a powerful feature that can help you organize and analyze your data more efficiently. Whether you're working on a complex data set or simply want to tidy up your spreadsheet, this guide will walk you through the steps with clear, easy-to-follow instructions.
In this tutorial, you'll discover the basics of grouping rows and columns in Excel. You'll also learn how to collapse and expand grouped data, which is essential for managing large datasets efficiently. These tips are designed to help you streamline your workflow, enhance productivity, and improve your data analysis capabilities.
Grouping data in Excel is particularly beneficial for those looking to summarize and analyze large amounts of information. It allows users to concentrate on specific sections of a spreadsheet, thereby improving readability and efficiency. Understanding how to effectively group data can significantly enhance your ability to work with large sets of information.
Grouping in Excel is a fundamental skill for anyone dealing with large datasets or complex information structures.
The grouping feature is tailored to both novice and advanced users, ensuring that anyone can improve their spreadsheet management skills. This function not only aids in data analysis but also simplifies the navigation and organization of information.
Using the grouping function in Microsoft's spreadsheet software allows for a tidier display of data and facilitates a focused analysis. By collapsing sections you don't immediately need to observe, your workspace becomes less cluttered, enhancing both focus and performance.
Certainly, mastering these grouping techniques provides significant advantages in various professional settings, making the process an indispensable part of any data analysis toolkit. This video tutorial aims to provide viewers with the confidence to apply these techniques in their everyday tasks to maximize productivity and efficiency.
For those interested in further exploring the capabilities of grouping in spreadsheets, the video provides a comprehensive guide that includes a practice sheet available without external links. This resource is designed to offer hands-on experience, allowing you to apply what you've learned directly to real-world scenarios.
This tutorial offers a unique opportunity to deepen your understanding of Excel's features, ensuring you can handle your data with greater expertise. Whether for personal projects or professional assignments, these skills will help you manage and display your data more effectively.
This tutorial from Teacher's Tech highlights critical features and benefits of grouping in Excel, making it a valuable resource for anyone looking to improve their data management skills. The techniques discussed are applicable in a variety of contexts, enhancing the utility of Excel for users across different disciplines and industries.
By embracing these grouping techniques, users can significantly enhance their analytical capacities, leading to more insightful data interpretations and better decision-making based on well-organized information. This guide is an essential step towards mastering Excel and leveraging its full potential to benefit your data analysis and presentation needs.
Excel remains a cornerstone tool for data analysis and management, offering a range of features to enhance user experience and productivity. Grouping data in Excel is one such feature that simplifies data handling and improves the efficiency of data-related tasks. This tool is especially beneficial in environments where large amounts of data need to be organized and analyzed quickly and effectively.
Mastering grouping techniques allows professionals in various sectors to streamline their workflows, making data easier to navigate and insights simpler to derive. As such, learning how to group rows and columns effectively is an essential skill for anyone looking to enhance their analytic capabilities using Excel. The ability to quickly segment and manage data can help transform complex data sets into more understandable and actionable information.
For users on Windows, the combination of Shift, Alt, and the Right Arrow key allows for grouping of the desired rows or columns. Mac users should replace "Alt" with "Option" to perform the same task.
After grouping rows, an outline bar appears to the left of the row numbers to indicate the grouping; grouped columns are marked by a similar bar above the column letters.
In Excel, you can manage grouping under the Data tab within the Outline group by clicking the "Group" button. Alternatively, the keyboard shortcut Shift + Alt + Right Arrow can be used. If selected areas are not entire columns, a Group dialog box will prompt you to confirm your grouping preference, typically opting for Columns and confirming with OK.
In Excel, the functionality to group two or more rows or columns is accessed through the shortcuts Shift+Alt+Right Arrow for grouping, and Shift+Alt+Left Arrow for ungrouping, facilitating easy management of grouped data through collapsing or expanding.
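Grouping can also be scripted. The sketch below uses the third-party openpyxl library, whose row and column dimension collections expose a group() helper in recent versions; verify the exact behavior against your installed version's documentation.

```python
# Programmatic grouping of rows and columns with openpyxl (check your
# version's docs for the group() helper on dimension collections).

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for row in range(1, 11):
    for col in "ABCDEF":
        ws[f"{col}{row}"] = f"{col}{row}"

ws.column_dimensions.group("B", "D", hidden=True)  # collapse columns B-D
ws.row_dimensions.group(2, 5, hidden=False)        # outline rows 2-5, expanded

wb.save("grouped.xlsx")
```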
This chapter covers the following topics:
Characteristics of roaming
Layer 2 roaming
Layer 3 roaming and an introduction to Mobile IP
This book covers the major components of 802.11 wireless LANs (WLANs). Fundamental concepts such as medium access mechanisms, frame formats, security, and the physical interfaces build the foundation for understanding more advanced and practical concepts.
In keeping with this theme, this chapter covers mobility. Mobility is the quality of being capable of movement or moving readily from place to place. 802.11 WLAN devices provide this kind of untethered freedom. But there's more to mobility than the lack of a network cable. Understanding how mobility is implemented in 802.11 arms you with the knowledge you need to support or facilitate mobile applications. Many terms describe mobility, but this chapter uses the terms mobility and roaming to describe the act of moving between access points (APs).
Characteristics of Roaming
Defining or characterizing the behavior of roaming stations involves two forms:
Seamless roaming is best analogized to a cellular phone call. For example, suppose you are using your cellular phone as you drive your car on the freeway. A typical global system for mobile (GSM) communications or time-division multiple access (TDMA) cell provides a few miles of coverage area, so it is safe to assume that you are roaming between cellular base stations as you drive. Yet as you roam, you do not hear any degradation to the voice call (that is what the cellular providers keep telling us). There is no noticeable period of network unavailability because of roaming. This type of roaming is deemed seamless because the network application requires constant network connectivity during the roaming process.
Nomadic roaming is different from seamless roaming. Nomadic roaming is best described as the use of an 802.11-enabled laptop in an office environment. As an example, suppose a user of this laptop has network connectivity while seated at his desk and maintains connectivity to a single AP. When the user decides to roam, he undocks his laptop and walks over to a conference room. Once in the conference room, he resumes his work. In the background, the 802.11 client has roamed from the AP near the user's desk to an AP near the conference room. This type of roaming is deemed nomadic because the user is not using network services when he roams, but only when he reaches his destination.
What happens to application sessions during roaming? Many factors influence the answer to this question. Consider the following:
The nature of roaming in 802.11.
The operation of the application. Is the application connection-oriented or connectionless?
The roaming domain. Does roaming occur within a single subnet or across multiple subnets?
Roaming duration. How long does the roaming process take?
The Nature of Roaming in 802.11
802.11 roaming is known as "break before make," referring to the requirement that a station severs its association with one AP before creating an association with a new one. This process might seem unintuitive because it introduces the possibility of data loss during roaming, but it facilitates a simpler MAC protocol and radio.
If 802.11 were "make before break," meaning a station could associate to a new AP before disassociating from the old AP, you would need safeguards in the MAC to ensure a loop-free topology. A station connected to the same Layer 2 broadcast domain via simultaneous network connections has the potential to trigger broadcast storms. A "make before break" architecture would necessitate an algorithm such as 802.1D spanning tree to resolve any potential loops, adding overhead to the MAC protocol. In addition, the client radio would have to be capable of listening and communicating on more than one channel at a time, increasing the complexity of the radio (and adding to the overall cost of the devices).
Operation of the Application
The way the application operates directly correlates to its resilience during the roaming process. Connection-oriented applications, such as those that are TCP-based, are more tolerant of packet loss incurred during roams because TCP is a reliable, connection-oriented protocol. TCP requires positive acknowledgments, just as the 802.11 MAC does. This requirement allows any 802.11 data lost during the roaming process to be retransmitted by TCP as the upper-layer protocol.
Although TCP provides a tidy solution for applications running on 802.11 WLANs, some applications rely on User Datagram Protocol (UDP) as the Layer 4 transport protocol of choice. UDP is a low-overhead, connectionless protocol. Applications such as Voice over IP (VoIP) and video use UDP packets. The retransmission capability that TCP offers does little to enhance packet loss for VoIP applications. Retransmitting VoIP packets proves more annoying to the user than useful. As a result, the data-loss roaming might cause a noticeable impact to UDP-based applications.
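A toy simulation makes the difference tangible: a TCP-like sender retransmits frames lost during the roam, while a UDP-like sender does not. The loss window and probability below are invented for illustration:

```python
# Toy comparison: TCP-like retransmission repairs roam-time losses;
# UDP-like delivery leaves gaps (audible as dropouts in a VoIP call).

import random
random.seed(7)

ROAM_LOSS = 0.5               # chance a packet is dropped during the roam
packets = list(range(20))
roaming = set(range(8, 12))   # packets sent during the roam window

def send(seq):
    return not (seq in roaming and random.random() < ROAM_LOSS)

udp_delivered = [p for p in packets if send(p)]

tcp_delivered = []
for p in packets:
    while not send(p):        # positive acknowledgment + retransmission
        pass
    tcp_delivered.append(p)

print(f"UDP delivered {len(udp_delivered)}/20 (gaps remain unrepaired)")
print(f"TCP delivered {len(tcp_delivered)}/20 (losses repaired, at a latency cost)")
```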
Roaming Domain
Chapter 1, "Ethernet Technologies," defines a broadcast domain as a network that connects devices that are capable of sending and receiving broadcast frames to and from one another. This domain is also referred to as a Layer 2 network. The concept holds true for 802.11 as well. APs that are in the same broadcast domain and configured with the same service set identifier (SSID) are said to be in the same roaming domain. Recall from Chapter 2, "802.11 Wireless LANs," that extended service set (ESS) is similarly defined as multiple basic service sets (BSSs) that communicate via the distribution service (wired network). Therefore, a roaming domain can also be referred to as an ESS.

Why are 802.11 devices limited to a Layer 2 network for roaming? What about roaming between Layer 3 subnets? Remember that 802.11 is a Layer 1 physical interface and Layer 2 data link layer technology. The 802.11 MAC protocol is Layer 3 unaware. That is not to say that Layer 3 roaming is impossible, because it is not. It means that Layer 2 roaming is natively supported in 802.11 devices, and some upper-layer solution is required for Layer 3 roaming.
The distinction between whether a device roams within a roaming domain or between roaming domains has a large impact on application sessions. Figure 5-1 depicts a Layer 2 roaming domain. The roaming user can maintain application connectivity within the roaming domain and as long as its Layer 3 network address is maintained (does not change).
Figure 5-1 Roaming in a Layer 2 Roaming Domain
Figure 5-2 illustrates roaming across roaming domains. The roaming user is roaming from an AP on Subnet A to an AP on Subnet B. As a result, the Layer 3 network address must change to maintain Layer 3 connectivity on Subnet B. As the Layer 3 address changes, the station drops all application sessions. This scenario is described later in this chapter in the section, "Mobile IP Overview."
Figure 5-2 Roaming Across Roaming Domains
Roaming Duration
Roaming duration is the time it takes for roaming to complete. Roaming is essentially the association process that is described in Chapter 2; it depends on the duration of the following:
The probing process
The 802.11 authentication process
The 802.11 association process
The 802.1X authentication process
The cumulative duration of these processes equates to the roaming duration. Some applications, such as VoIP, are extremely delay-sensitive and cannot tolerate large roaming durations.
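A back-of-the-envelope budget illustrates the point. The stage durations below are hypothetical placeholders rather than measurements; real values vary widely with drivers, channel count, and the 802.1X method in use:

```python
# Hypothetical roaming budget: sum the four stages and compare against a
# rough VoIP tolerance. All numbers are illustrative placeholders.

stages_ms = {
    "probing (scan channels)": 80,
    "802.11 authentication":   5,
    "802.11 association":      5,
    "802.1X authentication":   350,   # a full EAP exchange tends to dominate
}

total = sum(stages_ms.values())
VOIP_BUDGET_MS = 150   # rough ceiling before users notice a gap in audio

for stage, ms in stages_ms.items():
    print(f"{stage:28} {ms:4} ms")
print(f"{'total roaming duration':28} {total:4} ms "
      f"({'OK' if total <= VOIP_BUDGET_MS else 'too slow'} for VoIP)")
```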
Passkeys are a new way of signing into websites and applications without using passwords. They are based on the FIDO web authentication standard, which uses cryptographic keys to verify an individual's identity. Passkeys are more secure and convenient than passwords because they replace passwords with cryptographic keys, and they can be used with biometric sensors (such as fingerprint or facial matching) or a PIN. You can create and use passkeys on multiple devices, such as Windows, Android, or iOS, or on dedicated hardware tokens such as a YubiKey.
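The toy below sketches the public-key challenge-response at the heart of passkeys, using the third-party cryptography package. Real passkeys use the full WebAuthn/FIDO2 protocol with origin binding and attestation; this only demonstrates the core idea that the server stores a public key and verifies a signed, single-use challenge:

```python
# Conceptual toy of passkey-style authentication: no shared secret ever
# crosses the network; the server verifies a signature over a fresh challenge.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a key pair; only the PUBLIC key leaves it.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge; the device signs it after the
# user unlocks the key with a biometric or PIN (not modeled here).
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification: raises an exception if the signature is invalid.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified -- user authenticated without a password")
```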
Enhanced Security: Passkeys provide a superior level of security compared to traditional passwords. They are based on public-key cryptography, replacing passwords with a cryptographically signed authentication process that eliminates the risk of interception and compromise while providing a passwordless experience. Additionally, passkeys are resistant to phishing attacks, as the private key used for authentication remains securely on the device.
User-Friendly Authentication: Passkeys eliminate the need for traditional passwords, which are often forgotten, weak, or susceptible to social engineering tactics. Instead, users can authenticate using their device's biometrics or PIN, providing a more convenient and secure login experience. With passkeys there are no more passwords to remember, as they are replaced with a more secure and convenient alternative.
No More Password Resets: With passkeys, password resets become a thing of the past. Users no longer need to wait for emails, enter recovery codes, or navigate complex password reset processes. This reduces the burden on IT support and improves user productivity.
Cross-Platform Compatibility: Passkeys are designed to work seamlessly across different devices and platforms, including Windows, macOS, Android, and iOS. This ensures that users can access their accounts securely and conveniently from any device they prefer, as the underlying WebAuthn technology is widely supported by various platforms.
Reduced Risk of Data Breaches: Passkey-based authentication
can significantly reduce the risk of data breaches and associated financial
losses. By eliminating the vulnerabilities of passwords, organizations can
protect sensitive data and minimize the likelihood of cyber-attacks.
Compliance and Regulatory Benefits: Passkeys adhere to FIDO
Alliance standards and help you to meet compliance requirements, such as NIST
SP800-63B, PCI DSS, GDPR, and HIPAA. Adopting passkeys can help organizations
demonstrate their commitment to data security and comply with industry regulations.
Passkeys bring a variety of benefits across the enterprise:
- Improved security - based on public-key cryptography, passkeys remove the need to enter your email address and password into login fields. Passkeys are phishing resistant and offer significantly better protection than passwords, as they cannot be guessed or shared, since the user doesn’t know the key.
- Easy integration – MyID MFA allows organisations to protect cloud applications, on-premises applications and the Windows desktop login process with passkeys by supporting standard protocols such as OpenID Connect and SAML.
- Securely synced across devices - Passkeys can be synced across devices enabling recovery and backup.
- Reduction in account lockouts - As users don't need to remember passwords, they are less likely to be locked out of their accounts, meaning no more password resets.
- Reduced Password Fatigue - Passkeys eliminate the need to create, remember, and manage complex passwords for multiple accounts. They are automatically generated and stored on the user's device, making the authentication process seamless and hassle-free.
- No need to remember multiple credentials - simply use Face ID or Touch ID for identity authentication.
- Enhanced Security: Passkeys are significantly more secure than passwords due to their use of asymmetric (private/public key) cryptography and device-based authentication. They are stored securely on the user's device and never transmitted over the internet during authentication, eliminating the risk of phishing attacks and password breaches.
- Compliance with Regulatory standards: Gives your organization the peace of mind that you have systems in place to meet regulatory cyber security standards and are following best practice.
- Improved User Experience: Passkeys offer a more seamless user experience compared to passwords. Users don't need to juggle multiple passwords or deal with the frustration of forgotten or compromised credentials. The intuitive authentication process reduces friction and improves overall user satisfaction.
BENEFITS OF PASSKEYS
Passkeys offer a compelling alternative to traditional passwords, providing enhanced security, a simplified authentication process, and reduced administrative burden. As passkeys become more widely adopted, they are expected to become the preferred authentication method for modern digital interactions, providing a more secure, efficient, and user-friendly experience for both organisations and their users.
Trusted by Governments and Enterprises Worldwide
Where protecting systems and information really matters, you will find Intercede. Whether its citizen data, aerospace and defense systems, high-value financial transactions, intellectual property or air traffic control, we are proud that many leading organizations around the world choose Intercede solutions to protect themselves against data breach, comply with regulations and ensure business continuity. | <urn:uuid:6ba94880-9958-4cc7-ac4d-6f537c007f13> | CC-MAIN-2024-38 | https://www.intercede.com/solutions/technologies/passkeys/ | 2024-09-19T16:33:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00250.warc.gz | en | 0.927328 | 1,066 | 2.65625 | 3 |
This blog post is a transcript of Christian Espinosa’s explanation of cybersecurity hashing and collisions and covers the following:
- What is hashing?
- What is a hashing collision?
- What are hashing birthday attacks?
- Includes a demonstration of a 3-way MD5 collision
Check out my latest book: https://christianespinosa.com/books/the-smartest-person-in-the-room/
In Dec 2020, Alpine Security was acquired by Cerberus Sentinel (https://www.cerberussentinel.com/)
Need cybersecurity help? Connect with me: https://christianespinosa.com/cerberus-sentinel/
Complete Cybersecurity Hashing and Collisions Explanation Video Transcript
How’s it going? This is Christian Espinosa with Alpine Security. Today’s topic is on hashing and collisions. First off, what is hashing? A lot of people think hashing is encryption, but hashing is really not encryption. It is a mathematical function, but when you do encryption you typically need a key to decrypt something. With hashing, there’s no key involved. Hashing is really a one-way function. What that means is if we take some data and cram it through a hashing algorithm, out spits what’s called a message digest. The message digest is a fixed length. The data we cram through the algorithm can be a variable length. We can cram one byte or one terabyte through, let’s say, an MD5 hash algorithm and out spits a 128-bit message digest, regardless of the size of data we put through it. It’s called a one-way function because you can’t take the message digest value and go backwards through the hash algorithm and reproduce the data that was used to create the message digest. It doesn’t work that way. It’s only one way.
Hashing is used for integrity and to store passwords. Passwords should never be stored in cleartext. They should be stored in a hashed format or message digest. That way, if someone types in a password on the password system, the password is hashed. If the hash value matches what’s stored, you know the password is correct. There’s a little bit more to it than that, but that’s basically the concept.
Hashing is also used for integrity because if I take some data and run it through the hash algorithm, I get a message digest. If the data changes at all and I run it through the hash algorithm, I should get a different message digest, which tells me that the data has changed. If the data is the same and I run it through the hash algorithm, I get the same message digest, and I know I have integrity and the data has not been altered. One of the primary uses of hashing is for integrity and another use is for passwords.
There’s a lot of discussion about collisions and I’m going to do a demonstration here. Collisions are when you have two different inputs, you take them through the hash algorithm, and both different inputs produce the same message digest. That’s called a collision. Two different things produce the same thing, collision. Technically, that should not happen with hashing. If it does happen, it kind of breaks the concept of hashing because now we can have two different things that produce the same message digest. If we think this thing here hasn’t been altered, but it has been, and we look at hashing to prove that it hasn’t been altered, we may have a false sense of integrity, because two different things can produce the same output or the same message digest, which is the collision.
I’ll give an example of a collision. MD5 is a common hash algorithm that’s been broken, as they like to say, and there are examples of collisions with MD5. MD5 uses 128 bits. SHA1 is another type of hash algorithm that uses 160 bits, and there are quite a few other ones. SHA512 uses 512 bits. But basically, the larger the message digest, the more bits, the less likely a collision, based on simple math. With MD5, there’s 128 bits. There’s a lot of discussion about collisions, but the bottom line is collisions are highly unlikely, even with MD5. It’s very unlikely somebody can intelligently alter some bit of data and generate the same message digest as some other signed piece of software, for instance.
Let’s just look at the math for this. I’m going to bring up a calculator in scientific mode and with 128 bits, that’s 2 to the 128th power, because each bit is a zero or a one, which is the 2, and there are 128 bits. Those are the number of combinations with 128 bits. Let’s just look at that. So 2 to the 128 equals about 3.4 times 10 to the 38. Basically, that’s a really large number, so the probability of creating a collision is very small. It is possible though. This probability is going to be even smaller with SHA512, as there are more bits.
The example that this one guy has figured out with MD5, is he’s had three images that produce the same message digest. What this guy did, if you actually look at the images in a hex editor, is he basically took one base image, which had a specific message digest, then he took another image and altered a bit at a time in the header of the image and kept running it through the hash algorithm until it basically produced the same messages digested. Then he stopped altering the bits, and the images look different because we’re not looking at the metadata, we’re just looking at the actual image data, visually. Then he did it again with a third image. This is the proof that there are collisions with MD5.
Let’s look at this here. We’re going to use this tool called HashCalc to do the calculation. HashCalc is one of my favorite hashing calculators. If we just search for White, Black and Brown MD5, we should find what this guy did. Three-way MD5 collision, Nat McHugh’s the dude’s name. Basically, we have these three pictures, Jack Black, James Brown, Barry White, I believe. I’m going to save each of these. I’m going to save this one as Jack Black. This one is James Brown. This one is Barry White. Then we’ll run these three through the hash calculator. And you can see, I’ll go ahead and open the images. If we look at the images, this one is obviously different than this one, and different than this one right here. The three images are different. The three data sets are different, but they produce the same MD5 message digest.
Let’s check this out. Here’s HashCalc. I’ll put the link to HashCalc in the video. With HashCalc you can just drag and drop the image. Here is Black, I’m going to drag it over here. Before I do that, I’ve got MD5 selected and I also have SHA1 selected. We’ll do the message digest for both of those algorithms. So there’s Black, here’s the message digest. I’ll go ahead and copy this. I’ll put this here in our notepad. Black equals this. That’s the MD5, and the SHA1 though, is this, for Black. That’s the SHA1. Let’s try White. White equals… Let’s take White and we’re going to drag the White image to HashCalc. So the MD5 for White is this. That’s MD5. The SHA1 is this. And with Brown, let’s try that one. You kind of know where this is all headed, right? With Brown, we expect the MD5 to be the same. We’ll drag it over here, I’ll go ahead and copy that. Brown MD5, it looks like it’s the same. The SHA1 should be different.
All right. With Jack Black, MD5 right here, Barry White right here, and James Brown, same MD5, three different inputs, that’s a collision. You notice the SHA1 is different for all three though. It is extremely, extremely unlikely you can create a collision that works with all hash algorithms. I’ve never seen that done. I doubt anyone can do it. If you want to check for collisions, just simply use two different hash algorithms.
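If you want to reproduce the HashCalc demo in code, a few lines of Python do the same double check with the standard hashlib module. The filenames below are placeholders for wherever you saved the three images:

import hashlib

# Placeholder filenames; point these at your saved copies of the images
for name in ("black.jpg", "white.jpg", "brown.jpg"):
    data = open(name, "rb").read()
    print(name,
          "MD5:", hashlib.md5(data).hexdigest(),
          "SHA1:", hashlib.sha1(data).hexdigest())

# The MD5 values print identical (the collision); the SHA1 values all differ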
I was going to talk about the birthday attack. That’s the other thing I have in the list here. The birthday attack is simply this mathematical probability. They like to talk about it a lot in certifications like the CISSP and Security+. All it really is, is probability. The whole idea is that if we have 23 or more people in a room, the likelihood of two people sharing the same birthday, not the same birth year, but the same birthday, is over 50%. That’s because we’re not comparing everyone to you. We’re comparing everyone to each other. It’s just simple probability.
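If you want to see that 23-person figure fall out of the math, a few lines of Python compute the probability of at least one shared birthday:

def shared_birthday_probability(people: int) -> float:
    # Probability that at least two of `people` share a birthday (365-day year)
    p_all_distinct = 1.0
    for i in range(people):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(shared_birthday_probability(23))  # ~0.507, just over 50%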
They like to use this concept to say how easy it is to create a collision with hashing. It’s not really that easy. It might be relatively easy for MD5, but still, like the Jack Black, James Brown and Barry White example, those are simply images where the header or metadata of the image was altered. To be able to intelligently alter something to make a malicious file look like a signed piece of software, in my opinion, is not going to happen. Regardless of what people say about this birthday attack and how easy it is, it’s really not that simple.
That’s all I wanted to talk about today, what hashing is, one-way function. You can’t take the message digest and go backwards and recreate the piece of data used to generate the message digest. We talked about collisions. There are collisions with MD5. It’s been broken per se, so has SHA1. But if you use two different hash algorithms, even if you have a collision on one, you could easily tell the data is different. We went over an actual example with collisions. We also went over HashCalc. I’ll put the link to HashCalc as well as a link to Nat McHugh’s site where you can download the Barry White, Jack Black and James Brown images and test out the collisions yourself.
If you have any questions or comments, you can leave them beneath the video. Please subscribe to our channel. Click on the little bell so you get notified when we have new videos. Thanks for watching. Have a great one. | <urn:uuid:89791656-d6b8-492b-a8f2-f6a94749c772> | CC-MAIN-2024-38 | https://christianespinosa.com/blog/tag/md5/ | 2024-09-10T01:33:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00250.warc.gz | en | 0.931353 | 2,333 | 3.21875 | 3 |
Digital Signal 3 (DS3)
DS3 stands for “Digital Signal Level 3.” A DS3 is a type of circuit.
How is Bandwidth involved with DS3
In each market, or LATA, Bandwidth has at least one DS3 (sometimes multiple, for heavy-traffic markets). The DS3 is a Point to Point (P2P) circuit that has one end attached to the LEC’s central office and the other to a Bandwidth data center. This ‘pipe’ carries calls from the LEC’s switching equipment to Bandwidth, then to our customer, and ultimately to the end customer.
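For a sense of scale, the standard T-carrier figures behind a DS3 work out as follows; the Python sketch below just restates the well-known hierarchy constants:

DS0_KBPS = 64       # one digitized voice channel
DS1_CHANNELS = 24   # a DS1/T1 carries 24 DS0s
DS3_DS1S = 28       # a DS3 multiplexes 28 DS1s

voice_channels = DS1_CHANNELS * DS3_DS1S
print("Voice channels per DS3:", voice_channels)                  # 672
print("Payload: %.3f Mbps" % (voice_channels * DS0_KBPS / 1000))  # 43.008
print("Line rate with framing overhead: 44.736 Mbps")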
What are the benefits of Bandwidth’s DS3
Bandwidth uses DS3 in order to ensure that our services are efficiently and successfully being transported from their origination source to the intended end user. By using DS3, calls are being connected and our customers can be confident in their voice integrations. Bandwidth has an extensive background within the telecommunications industry and the evolution of communications software. As an industry leader, Bandwidth handles communications channels from start to finish so you can focus on what matters most to your business. | <urn:uuid:5080fa7f-7f7b-45ce-9970-0e950ad85000> | CC-MAIN-2024-38 | https://www.bandwidth.com/glossary/digital-signal-3-ds3/ | 2024-09-10T01:58:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00250.warc.gz | en | 0.909811 | 241 | 2.796875 | 3 |
What is Core Layer?
The Core Layer in networking serves as the backbone of a hierarchical network design, forming a critical component within the three-layer model alongside the Access and Distribution layers. Situated at the center of network architecture, the Core Layer is designed for high-speed, high-capacity packet switching, ensuring swift and efficient transport of data across the entire network.
Unlike the Distribution Layer, the Core Layer typically focuses on rapid data transfer without applying extensive processing or policy-based decision-making. Its primary objective is to facilitate seamless and fast communication between different parts of the network.
Duty of Core Switches
In the enterprise hierarchical network design, the core layer switch is the topmost one, which the access and distribution layers rely on. It aggregates all the traffic flows from distribution layer devices and access layer devices, and sometimes core switches also need to deal with external traffic from egress devices. So it is important for core switches to forward large volumes of packets as quickly as possible. The core layer always consists of high-speed switches and routers optimized for performance and availability.
Located at the core layer of enterprise networking, a core layer switch functions as a backbone switch for LAN access and centralizes multiple aggregation devices to the core. Of these three layers, the core layer places the highest demands on switch performance. Core switches are usually the most powerful, in terms of forwarding large amounts of data quickly. In most cases, core switches manage high-speed connections, such as 10G Ethernet, 40G Ethernet or 100G Ethernet. To ensure high-speed traffic transfer, core switches should not perform packet manipulation such as inter-VLAN routing or access lists, which are performed by distribution devices.
Note: In small networks, it is common to implement a collapsed core, combining the core layer and the distribution layer, as well as their switches, into one. More information about the collapsed core is available in How to Choose the Right Distribution Switch?
Factors to Consider When Choosing Core Switches for Enterprises
Simply put, core layer switches are generally layer 3 switches with high performance, availability, reliability, and scalability. In addition to basic specifications like port speed and port types, the following factors should be considered when choosing core switches for an enterprise network design.
The packet forwarding rate and switching capacity matter a lot for a core switch in enterprise networking. Compared with access layer switches and distribution switches, core switches must provide the highest forwarding rate and switching capacity possible. The required forwarding rate largely depends on the number of devices in the network, so core switches can be selected from the bottom up, based on the distribution layer devices.
For instance, network designers can determine the necessary forwarding rate of core switches by examining the various traffic flows from the access and distribution layers, and then identify one or more appropriate core switches for the network.
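As a rough illustration of that sizing exercise, wire speed is usually quoted against worst-case 64-byte packets. The Python sketch below computes the aggregate packet-per-second requirement for a hypothetical core switch terminating forty-eight 10G links; the port count is an assumption chosen only for the example:

PORTS = 48            # hypothetical number of 10G links to aggregate
PORT_GBPS = 10
FRAME_BYTES = 64      # worst-case minimum Ethernet frame
OVERHEAD_BYTES = 20   # 8-byte preamble plus 12-byte inter-frame gap

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
pps_per_port = PORT_GBPS * 1e9 / bits_per_frame  # about 14.88 Mpps per 10G port
print("Wire-speed requirement: %.1f Mpps" % (PORTS * pps_per_port / 1e6))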
Core switches pay more attention to redundancy than other switches. Since core layer switches carry much higher workloads than access switches and distribution switches, they generally run hotter than the switches in the other two layers, so the cooling system should be taken into consideration. As is often the case, core layer switches are equipped with redundant cooling systems to help the switches cool down while they are running.
The redundant power supply is another feature that should be considered. Imagine that a switch loses power while the network is running: the whole network would shut down until the hardware replacement is complete. With redundant power supplies, when one supply fails, the other one instantly takes over, ensuring the whole network is unaffected by the maintenance.
FS provides switches with hot-swappable fans and power supply modules for better redundancy.
Typically core switches are layer 3 switches, performing both switching and routing functions. Connectivity between distribution and core switches is accomplished using layer 3 links. Core switches should perform advanced DDoS protection using layer 3 protocols to increase security and reliability. Link aggregation is needed in core switches, ensuring distribution switches deliver network traffic to the core layer as efficiently as possible.
Moreover, fault tolerance is an issue to consider. If a failure occurs in the core layer switches, every user would be affected. Configurations such as access lists and packet filtering should be avoided here so that network traffic does not slow down. Fault-tolerant protocols such as VRRP and HSRP are also available to group the devices into a virtual one and ensure communication reliability in case one physical switch breaks down. What’s more, when there is more than one core switch in an enterprise network, the core switches need to support functions such as MLAG to ensure the operation of the whole link if a core switch fails.
QoS is an essential service for certain types of network traffic. In today’s enterprises, with the growing amount of data traffic, more and more voice and video data are required. What happens if network congestion occurs in the enterprise core? That is where the QoS service comes in.
With the QoS capability, core switches are able to provide different bandwidth to different applications according to their various characteristics. Compared with traffic that is not time-sensitive, such as e-mail, critical traffic that is sensitive to time should receive higher QoS guarantees, so that more important traffic can pass first, with high data forwarding and low packet loss guaranteed.
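In miniature, the ordering logic looks like the Python sketch below: tag each packet with a class and always serve the most delay-sensitive class first. Real switches do this with DSCP markings and hardware queues; the class names and packets here are only illustrative:

import heapq

PRIORITY = {"voice": 0, "video": 1, "email": 2}  # lower number is served first

queue = []
for pkt in ("email-1", "voice-1", "video-1", "voice-2"):
    klass = pkt.split("-")[0]
    heapq.heappush(queue, (PRIORITY[klass], pkt))

while queue:
    _, pkt = heapq.heappop(queue)
    print("forwarding", pkt)  # voice-1, voice-2, video-1, email-1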
As you can see from the contents above, there are many factors that determine what enterprise core switches are most suitable for your network environment. In addition, you may need a few conversations with the switch vendors and know what specific features and services they can provide so as to make a wise choice. | <urn:uuid:797074ca-9e8d-477d-992d-10a25efcae76> | CC-MAIN-2024-38 | https://www.fiber-optic-cable-sale.com/category/core-switch | 2024-09-10T00:23:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00250.warc.gz | en | 0.931275 | 1,153 | 3.421875 | 3 |
In this course, instructor Alan Thorn takes you through the process of building a 3D puzzle game inside the Unity engine. Here he will cover a range of topics including project creation, level design, lightmapping, skinning, animation state machines, collision detection, and camera creation, just to name a few.
(Students - please look under Section 1 / Lecture 1 downloads for the source files associated with the lesson.)
More about the Instructor:
Alan Thorn is a game developer, author and educator with 15 years industry experience. He makes games for PC desktop, Mobile and VR. He founded 'Wax Lyrical Games' and created the award-winning game 'Baron Wittard: Nemesis of Ragnarok', working as designer, programmer and artist. He has written sixteen technical books on game development and presented ten video training courses, covering game-play programming, Unity development, and 3D modelling. He has worked in game development education as a visiting lecturer for the 'National Film and Television School', as a Lead Teacher for 'Uppingham School', and is currently a Senior Lecturer at 'Teesside University' where he helps students develop the skills needed for their ideal role in the games industry. | <urn:uuid:78c0ca78-9e17-49ad-8945-a47294a5f24f> | CC-MAIN-2024-38 | https://www.mytechlogy.com/Online-IT-courses-reviews/29985/learn-to-build-a-3d-puzzle-game-with-unity/ | 2024-09-10T00:57:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00250.warc.gz | en | 0.952645 | 247 | 2.75 | 3 |
Definition of XHTML in Network Encyclopedia.
What is XHTML?
XHTML stands for Extensible Hypertext Markup Language, a proposed version of Hypertext Markup Language (HTML) from the World Wide Web Consortium (W3C). XHTML 1 is basically a reformulation of HTML 4 in Extensible Markup Language (XML) and can smooth the migration from HTML to XML by allowing developers to create HTML documents that contain XML functions.
The advantages of using XHTML instead of HTML for Web content development include the following:
- Easier portability to nonstandard user interfaces
- The ability to create new document type definitions (DTDs)
Web sites can already be migrated to XHTML because XHTML conforms to existing Hypertext Transfer Protocol (HTTP) user agents (Web browsers). Migrating ensures that content is XML-conforming, which is advantageous because XML is the future Web content paradigm.
Validating XHTML documents
An XHTML document that conforms to an XHTML specification is said to be valid. Validity assures consistency in document code, which in turn eases processing, but does not necessarily ensure consistent rendering by browsers. A document can be checked for validity with the W3C Markup Validation Service. In practice, many web development programs provide code validation based on the W3C standards.
The root element of an XHTML document must be html, and must contain an xmlns attribute to associate it with the XHTML namespace. The namespace URI for XHTML is http://www.w3.org/1999/xhtml. The example tag below additionally features an xml:lang attribute to identify the document with a natural language:
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
In order to validate an XHTML document, a Document Type Declaration, or DOCTYPE, may be used. A DOCTYPE declares to the browser the Document Type Definition (DTD) to which the document conforms. A Document Type Declaration should be placed before the root element.
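For example, the declarations for XHTML 1.0 Strict and XHTML 1.0 Transitional documents are:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">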
The system identifier part of the DOCTYPE, which in these examples is the URL that begins with http://, need only point to a copy of the DTD to use, if the validator cannot locate one based on the public identifier (the other quoted string). It does not need to be the specific URL that is in these examples; in fact, authors are encouraged to use local copies of the DTD files when possible. The public identifier, however, must be character-for-character the same as in the examples.
A character encoding may be specified at the beginning of an XHTML document in the XML declaration when the document is served using the application/xhtml+xml MIME type. (If an XML document lacks encoding specification, an XML parser assumes that the encoding is UTF-8 or UTF-16, unless the encoding has already been determined by a higher protocol.)

For example:

<?xml version="1.0" encoding="UTF-8" ?>
The declaration may be optionally omitted because it declares as its encoding the default encoding. However, if the document instead makes use of XML 1.1 or another character encoding, a declaration is necessary. Internet Explorer prior to version 7 enters quirks mode if it encounters an XML declaration in a document served as text/html. | <urn:uuid:68b76d44-5562-403f-969f-e1ee60b8db4e> | CC-MAIN-2024-38 | https://networkencyclopedia.com/xhtml/ | 2024-09-11T07:21:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00150.warc.gz | en | 0.81859 | 704 | 4.09375 | 4 |
Whenever you are struggling with slow internet speed, the frustration is real: every new page takes forever to open. There are multiple reasons for a slow internet connection, but sometimes the slow computer system is the culprit behind the slow internet. That’s because the computer might be running memory-intensive apps and programs, or might simply be so old that it negatively impacts the internet speed.
First of all, the internet plan will directly impact the internet connection. In addition, people choose computers that suit their needs and help them perform regular tasks. For instance, Mac computers are usually chosen for web design, but a PC is suitable for regular tasks, such as MS Office. On top of everything, the computer’s speed can impact the internet speed as well.
Can Computer Speed Affect Internet Speed?
For instance, computer speed will adversely impact internet speed if the Wi-Fi adapter is not designed to support high-speed internet protocols. In that case, the adapter will slow down an otherwise fast internet connection. This creates the impression that the computer system is slowing down the internet speed. In this article, we have outlined the different components that impact the computer system and its speed.
1. Processor
The processor directly impacts the internet tasks happening in the background. Macs are usually designed with faster processors, which is the prime reason that internet speed is faster on Macs and iOS devices. On the other hand, a Windows PC with a slower processor will see its internet speed adversely impacted.
2. RAM
The RAM directly impacts the loading speed of websites. When there is more than one page open in the internet browser, there will be an additional burden on the RAM, leading to slow internet speed. When it comes down to RAM, Windows PCs typically offer more RAM space.
3. Hard Disk Drive
When it comes down to Macs, the hard disk drive space is usually lower compared to a Windows PC. The hard drive space will directly influence the internet speed. That’s because hard drives are responsible for reading and writing commands, downloading and uploading files, and temporarily storing data for the browser. With this being said, the Windows PC will have better internet speed if the hard disk drive storage is higher.
On top of everything, multiple other factors can impact the internet speed, such as the number of computers connected to the network. In addition, if your computer has outdated drivers and software, the internet speed will drop. When there are malware and viruses in the computer system, the speed will be impacted significantly.
Before you complain about the internet speed to your ISP, it is better to run an internet speed check and compare it with computer performance. The bottom line is that computer types don’t impact the speed, but there are contributing factors, such as RAM, hard disk drive storage, and processors (and how can we forget about the viruses?) | <urn:uuid:b5e3a0f0-7fea-40ec-b541-d61457b3c4cc> | CC-MAIN-2024-38 | https://internet-access-guide.com/can-computer-speed-affect-internet-speed/ | 2024-09-16T03:51:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00650.warc.gz | en | 0.932637 | 597 | 3.125 | 3 |
In order to get maximum performance from the main CPUs or coprocessors, it is necessary to understand the algorithms as well as to help the compiler generate optimal code. Although the compiler can do an excellent job given the right conditions, there is a lot of work that can be done by the developer to help the compiler.
- Create loops that can be vectorized – understand the limits of the compiler and write code that the compiler can vectorize.
- Aid the compiler by putting directives into the source code. This allows the compiler to generate more optimal code, overriding its own internal checks.
- Use SIMD directives with care. Make sure that the developer understands the algorithms fully.
- VECTOR and NOVECTOR directives – These directives can be used as hints to the compiler.
- Data alignment – make sure that the data is aligned for the target machine. This allows the compiler to create data objects in memory for faster execution. It is important to also tell the compiler that the data is aligned.
- Tell the compiler through a pragma that the data is aligned. This also allows for faster movement of the data between the main CPU and the coprocessor, such as the Intel Xeon Phi coprocessor.
- Use array sections, which encourages the compiler to use vectorization.
- Don’t overlap array sections.
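The directives above are hints for C/C++ compilers, but the payoff they chase is easy to demonstrate in any language. Purely as an illustration of why vectorized execution wins, this Python/NumPy sketch (assuming NumPy is installed) compares an element-by-element loop with a single vectorized operation over the same arrays:

import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
c_loop = np.empty(n)
for i in range(n):        # scalar loop: one element per iteration
    c_loop[i] = a[i] * b[i]
t1 = time.perf_counter()
c_vec = a * b             # vectorized: many elements per instruction
t2 = time.perf_counter()

assert np.allclose(c_loop, c_vec)
print("loop %.2fs vs vectorized %.3fs" % (t1 - t0, t2 - t1))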
The vector parallel capabilities of the Intel Xeon Phi coprocessor are similar in many ways to vectorizing code for the main CPU. The performance improvement when coding smartly and using the tools available can be tremendous, since the Intel Xeon Phi coprocessor can show very large gains in performance due to its extra-wide processing units.
Although it is time-consuming to look at each and every loop in a large application, doing so, both telling the compiler what to do and letting the compiler do its work, can produce quite large performance increases, leading to shorter run times and/or more complete results.
Source: Intel, USA | <urn:uuid:125daab5-2f0c-4c82-9776-9a6e0becd03f> | CC-MAIN-2024-38 | https://insidehpc.com/2016/06/helping-the-compiler/ | 2024-09-17T09:23:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00550.warc.gz | en | 0.933457 | 417 | 2.828125 | 3 |
The uniform is one size too big. The shoes need breaking in. The brand-new backpack hangs awkwardly off a pair of tiny shoulders. There’s a brave smile with a missing tooth or two, a final holding of hands, a hug, a kiss, a hesitant wave, and inevitable tears.
Before COVID-19 disrupted our lives and forced our kids to open their laptops and learn from home, the first day of school was a rite of passage — the start of a life-determining journey that has broadly followed the same shape and rhythm for generations.
From kindergarten to Year 12, classrooms are run by teachers who deliver lessons that start and end with a bell. They set tests, watch over examinations, and post grades that might delight, disappoint, or even surprise parents.
This one-size-fits-all approach to education has been in place for a couple of hundred years. Now, however, it is undergoing unprecedented change and not just because of COVID.
The response to the coronavirus has demonstrated how technology can help transform how we teach and learn. But the push for change started long before the pandemic struck, and it will go on long after the threat subsides. For years, policymakers have been exploring new transformative approaches to K-12 education that go far beyond just online lessons at home.
As lockdowns ease and schools start to reopen in some places across our region, it’s as good a time as any to take stock and look at the likely future of education.
Children who start school from now on will grow up to be workers and leaders in a digital-first world that will demand new skills and new ways of thinking.
To succeed in life and at work, they will need all the social, emotional, and academic support they can get via rich and flexible learning experiences that will differ vastly from the schooldays of their parents.
In short, education’s age-old three Rs – Reading, Writing, and Arithmetic – are being joined by a fourth: Rethink.
New data-based technologies are opening up ways to transform practices, structures, and even cultures in schools.
“Technology has changed many aspects of our society over many years, but school structures have largely stayed the same,” says Sean Tierney, Microsoft’s Director for Teaching and Learning Strategy, Asia.
“Now, we have solutions that have the potential to transform and improve the system so students can achieve more and develop valuable skills with better outcomes. The question for us now is: How can we use technology to rethink education?”
Tierney and others want a systemic shift in which education will move away from “a teaching culture to a learning culture.”
Real-time data, innovations like artificial intelligence (AI), and a range of new devices and tools, will help transform the roles and relationships of students, teachers, and parents.
Students will be empowered to learn for themselves in flexible, often collaborative ways, both inside and outside classrooms at their own pace. They will be able to follow their own interests and be challenged where appropriate. “The real learning is that learning can be hard,” Tierney adds.
Teachers will have access to individualized real-time data on how well each of their students is progressing – scholastically and emotionally – so they can devise new challenges and offer appropriate support for each child to move ahead.
Parents will be better connected to, and involved with, their child’s education with certainty, detail, and confidence.
The classroom, as we have known it for centuries, will also be re-imagined. Anthony Salcito, Vice-President of Education at Microsoft, predicts technology will see schools morphing into “learning hubs.”
“When you think about the three big investments that schools make, they’re constantly thinking about what’s happening with instruction in the classroom, what’s happening with the operations of their school, and also learning beyond the classroom,” Salcito recently told Bett 2020, a global education conference.
“Over the past few decades, the focus has been heavily weighted on the classroom experience. I think we will see a shift where schools will create a foundation of inclusive, flexible, data-driven buildings and spaces that will enable students to learn beyond those walls.”
Tierney also sees the physical formality of classroom culture melting away. “The classroom was important when you had to broadcast a certain message at a certain time to a certain group of kids. You had to have them in proximity. But this management and teaching model doesn’t have to dominate anymore,” he explains.
“In many ways, the classroom has become a physical barrier and just a way of holding onto the past. We are no longer bound by limitations that used to require us to have 30 kids in a classroom with one teacher.
Beyond classroom walls
“Now we can rethink that model. It can be multiple teachers with multiple kids. They can be places where kids can move around more flexibly. They don’t have to do the same thing at the same time in the same way. Schools have been exploring this for some time – technology changes the success rate.”
Nonetheless, Tierney believes bricks-and-mortar schools will play a valuable role in the future. For instance, a school is a safe place for children to learn social skills while their parents are at work.
“That won’t change. But with data-based technologies, educators will also be able to create flexible learning spaces and continuous on-learning environments, which will spread across the home, schools, and communities.”
Perhaps technology’s most direct impact will be the emergence of “personalized learning” where each student enjoys focused individual attention from teachers who will access real-time data on their progress and problems.
Tierney regards this as a fundamental breakthrough for learning. “Knowing what is happening in the lives of each student might spell the difference between a toxic path and a prosperous path in the future. With data-rich models, we can help support kids holistically.”
He further explains: “If I am one teacher and I have 30 kids in my class, I will only have the chance to have a cursory look at every child. But if I had ten really experienced teachers in that classroom, they could watch three kids each closely and look for problems and opportunities for each.
“We now have technology that can act as those ten extra teachers. It can give me the ability to observe all kinds of details and to understand at a much deeper level what the needs of those kids are. Not just in regard to content, but also pastoral care and life in general.
“Technology can recognize patterns and certain conditions that might need intervention. We can become much better at supporting them. Some educators describe this as data-driven learning. But that is a horrible term. It’s really people-driven learning.”
Social and emotional well-being
Personalized learning is a holistic approach that must do more than only focus on academic progress.
“It will also help teachers stay on top of, and adjust to, factors that affect social and emotional well-being. Teachers will be able to ensure students feel inspired, safe, valued, and able to learn in ways previously not possible.”
New learning tools will also be able to adjust to the needs of individual students – without instructions or intervention from their teachers.
“It would be like one of those virtual ten teachers turning up the brightness of a screen without bothering to tell the teacher. The smarter the technology gets, the more the teacher is supported and empowered.”
Personalized learning and real-time data could also see an end to the current cycle of lessons and tests.
“A test gives a teacher a snapshot in time about a whole bunch of kids. But once you have the results, it can be very difficult to adjust your teaching to address shortfalls because it is too late,” he says.
“Whereas if we are measuring all the time in real time, we know exactly where every child is because each will be on a continuum at any point in time. So they will still be graded, but based on real-time assessment that looks at a much deeper range of intelligences.”
To make all this work, the profession of teaching must transform, and that will be a challenge for some, Tierney admits.
Teachers learning alongside students
“There are teachers who teach in the traditional way. And there are great teachers who are also model learners. They learn with the kids. They don’t feel like they have to know everything, but they have to show what great learning looks like,” he says.
“Overall, it means inspiring students onto a path of lifelong self-learning. And that can include learning about new technology, which they can learn with the kids. If they can explore new ways of doing things, they can all grow together.”
Tierney says some teachers might struggle with this cultural shift. “When traditional teaching is your paradigm, you can get trapped inside a rigid mindset of feeling that you must know everything about the subjects you teach and that you can’t show weakness.”
Instead, teachers of the future “may need to spend less time designing the content component (of their subjects) and more time around the learning experience so that kids can find and create their own meaning around that content.
“A teacher should be an expert in learning and demonstrate the habits of mind that require great learning. They should be a model on these things for their students.”
The ability of teachers to keep adapting and innovating will be crucial, according to Salcito.
“What we want educators to do is not be bound by the structure of a 40-minute lecture, classroom dynamic, or assessment that’s connected to a curriculum, but recognize their goal and mission to expand upon every student’s potential.
“The best innovation that inspires most young people is the teacher.”
The post Schools after COVID-19: From a teaching culture to a learning culture appeared first on Microsoft Malaysia News Center. | <urn:uuid:7db4a933-3cb7-4260-9920-813a584756f3> | CC-MAIN-2024-38 | https://www.glocomp.com/schools-after-covid-19-from-a-teaching-culture-to-a-learning-culture/ | 2024-09-18T14:03:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00450.warc.gz | en | 0.967754 | 2,141 | 2.78125 | 3 |
How A SIP Trunk Works
Posted on: 2016-05-16 | Categories: SIP, SIP Trunking

Organizations looking to upgrade their voice communication systems are quickly inundated with a barrage of technical terms, jargon and buzzwords. It is often difficult to distinguish between genuine product/service differentiation and what is just a marketing term. But for any company to realize the many advantages of VoIP, an understanding of the underlying technology and implementation practices will be immensely helpful.
Anyone who has started looking into the field will come across two phrases – hosted VoIP and SIP trunks. Both these configurations are used to deliver VoIP services to enterprises, but the difference lies in the method of delivery. With hosted VoIP, all the equipment is owned, operated and maintained by the external service vendor, and the various phone services are delivered over the Internet for the company. SIP trunks are slightly different and come in many configurations.
What does SIP trunk mean?
To those working in the telecommunications field, a trunk or trunking is a pretty familiar word. It is simply a physical connection between the switching equipment located in two places – the business and the phone operator. It is a dedicated circuit that allows concurrent calls to be placed, depending on the number of channels.
A SIP trunk is also a connection between the client and the ITSP, but this connection is IP-based rather than physical. With SIP trunks, companies are able to extend VoIP calls beyond the enterprise firewall. Normally this would require a PSTN gateway, but utilizing SIP trunking eliminates that requirement. So instead of routing voice calls over dedicated TDM circuits, SIP trunks do so over data networks, including the public Internet.
How a SIP trunk works
As the term itself hints, SIP trunks utilize the Session Initiation Protocol which is an IETF standard that dictates how VoIP sessions are initiated, handled and terminated. While there are other proprietary protocols available, SIP has become the most commonly utilized and implemented format. The benefit of using it is that most equipment and hardware that you purchase will be interoperable as long as it is compatible with SIP.
When a phone call is made over a SIP trunk, it may originate as an IP call or be converted into one at some point. This point may be an ATA adapter, the IP PBX itself or even a SIP-compatible router. Once it becomes an IP call, it is transmitted over the data network to the ITSP office and then on to its destination. The last leg of the journey may be over the PSTN, which necessitates the payment of termination charges. Quite a few calls never land on the PSTN at any point, which is how service providers are able to offer free calls and inexpensive international calling.
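To make that concrete, SIP signalling is human-readable text. A simplified INVITE in the style of RFC 3261 looks like the following; the example.com addresses and header values are hypothetical, and the SDP body that would normally follow is omitted:

INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/UDP pc33.example.com;branch=z9hG4bK776asdhds
Max-Forwards: 70
To: Bob <sip:bob@example.com>
From: Alice <sip:alice@example.com>;tag=1928301774
Call-ID: a84b4c76e66710@pc33.example.com
CSeq: 314159 INVITE
Contact: <sip:alice@pc33.example.com>
Content-Type: application/sdp
Content-Length: 142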
How is SIP trunking useful?
In many organizations, SIP trunking powers more than just voice calls through VoIP. It is also the basis for unified communication systems since SIP trunks can carry instant messages, different types of media including video and many other types of real-time communication services. SIP trunks can convey user presence information and offer an alternative to emergency dialing referred to as E911.
SIP trunks bring enterprise class communication within the reach of smaller organizations which previously could not afford dedicated, leased lines for their business. SIP trunks are easier to design, operate, develop and upgrade since it uses packet switching technology. It allows providers to deliver these services at low costs, which is a boon for businesses which have large call volumes.
From a business perspective, it allows consolidation of two different networks – voice and data – into a single one. This reduces maintenance costs as well as the time spent in troubleshooting and bug fixing. If and when something goes wrong, the company does not have to deal with multiple vendors and points of failure. SIP trunks and VoIP are also more suitable for modern business processes and offer a much wider range of communication services than traditional carriers.
Should you go for SIP trunks?
Whether or not SIP trunks are suitable for your business depends on a number of variables. The business requirements, external environment and even the comfort level of employees with new technology can influence the decision. Switching to VoIP does not require employees to relearn processes such as making or receiving calls but they will have to become familiar with the workflows/procedures for utilizing advanced features including visual voicemail, digital faxing etc.
In some cases, industry regulation and legislation can have an impact as well. Various nations around the world have different levels of VoIP adoption, legal frameworks and broadband penetration. So even if your business switches to SIP trunks, offices in certain regions may have to stick with PSTN services for security, privacy and legislative reasons.
Since on-premises SIP trunking requires considerable investment, some organizations may choose to go with hosted VoIP as a sort of demo or trial before committing to the new technology. Larger organizations that have in-house expertise and personnel who can deploy the new systems will generally prefer to go with SIP trunks, as it is cheaper than hosted VoIP over the long term.
If you do chose to go with SIP trunking, the choice of SIP trunk provider will greatly influence your experience with VoIP. Other than the price of the offered services, the portfolio, customer and technical support as well as the experience of the vendor are all aspects to be considered. SIP trunks enable companies to be agile, flexible and scale according to their needs, so it is no wonder its adoption is growing exponentially. | <urn:uuid:069a344c-2b6d-45a4-8591-881fbd225c36> | CC-MAIN-2024-38 | https://gotrunk.com/blog/sip-trunk-works/ | 2024-09-19T21:43:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00350.warc.gz | en | 0.961164 | 1,149 | 2.765625 | 3 |
What is Personally Identifiable Information (PII)?
Personally Identifiable Information (PII)
Personally Identifiable Information, also known as PII, is any information that can be used to identify an individual. This kind of information can be used on its own, or in conjunction with other information that helps to identify an individual. Some examples of PII would be your full name, driver’s license number, social security number, passport number, email address, vehicle registration number, biometric data (face, retinal, or fingerprint scans), birthdate, and telephone number.
Keep in mind that the above list is not “all-inclusive”, and there are many other forms that PII can take, like your medical, employment, or financial histories.
Why should I care about my PII?
Your PII is just that – yours. So, issues arise when PII is mistreated and ends up in the wrong hands. Most often, PII is referenced when speaking about identity theft, data breaches, or general online privacy and security issues. When a company experiences a data breach, any PII that is being stored by that company can end up being sold on the dark web with the intention to be used to commit identity theft.
Once your PII ends up on the dark web, it can be assembled by hackers and ID thieves to take over your identity. Typically, a single piece of PII is not enough for an ID thief to commit fraud, but by compiling a few pieces of PII from sources on the dark web or from data brokers, an ID thief can begin strategizing against a potential target.
Identity Theft is Real
It’s important to emphasize that identity theft is very real, and should not be taken lightly – someone new is impacted by identity theft every two seconds.
According to Javelin Strategy & Research in 2017:
- There was a record high 16.7 Million victims of Identity Fraud in the U.S.
- That’s 6.6% of all American consumers – yikes!
- 30% of U.S. consumers were notified of a data breach in 2017
- 1.5 million victims of identity fraud had an intermediary account opened in their name
Your PII and You
PII serves at least one legitimate purpose online: to allow marketing companies to serve you better ads. By compiling general information like your email address, gender, age range and browsing tendencies, data mining companies can create profiles about you which are then sold to marketing companies who serve ads on your Facebook or Instagram feeds, or banner ads on that website you’re visiting.
Unfortunately, there are negative implications with data profiles like this being created. Medical records have been sold to data brokers since the introduction of HIPAA (Health Insurance Portability and Accountability Act) in 1996 using “anonymized data” (health records with unique identifiers removed, like name, social security number, or address). Even though the data is mostly “anonymized”, when cross-referenced against other databases, it can be used to build a health record about you, in turn destroying the data’s anonymity.
These kinds of profiles open the doors to prejudice from health insurance providers or even potential employers – imagine a world where you’re denied a job or health insurance because you did a Google search for “cancer symptoms”.
Protecting Your PII
Even in today’s world of frequent data breaches, consumers are still forced to give out their personal information on a regular basis in order to use the products and services that they need. Because of this, it seems like an impossible task to try and protect your PII from getting into the wrong hands.
Thankfully, you’re not helpless, and you can use these tips to protect your PII:
- Be cautious of what you share on social media
- Remove your personal information from data broker websites (or use DeleteMe)
- Use a Masked Email address when signing up for a new service or mailing list online
- Use a Masked Credit Card when asked for a credit card online
- Use a VPN to disguise your device’s IP address and encrypt your browsing activities
- Request that your information be erased from a company’s database if you no longer need their product or service
The simplest and best thing you can do to protect your PII is to stop giving it out whenever it’s asked of you. Unfortunately, many times you’re still required to provide your real information in order to access a product or service, and you’re forced to trust the company you’re giving it to. Still, there are many things you can do to protect your PII, and your safety should always be prioritized.
Our privacy advisors:
- Continuously find and remove your sensitive data online
- Stop companies from selling your data – all year long
- Have removed 35M+ records of personal data from the web
Don’t have the time?
DeleteMe is our premium privacy service that removes you from more than 750 data brokers like Whitepages, Spokeo, BeenVerified, plus many more.
Save 10% on DeleteMe when you use the code BLOG10. | <urn:uuid:56a57608-9e3c-48b1-a90b-10330994cbc7> | CC-MAIN-2024-38 | https://joindeleteme.com/blog/what-is-personally-identifiable-information/ | 2024-09-07T18:38:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00550.warc.gz | en | 0.931219 | 1,105 | 3.09375 | 3 |
Engineers at General Atomics have completed fabrication of the first ITER Central Solenoid Module for nuclear fusion. When the ITER fusion energy project in France powers up in 2025, its first operations will be enabled by the largest pulsed superconducting magnet ever built – a magnet that has roots in Southern California.
The magnet is known as the ITER Central Solenoid, and a critical milestone was reached this month when General Atomics (GA) completed fabrication of the first of six modules that will make up the solenoid (a seventh is being fabricated as a spare). The Central Solenoid, sometimes referred to as the heart of ITER, will be 59 feet tall and reside in the center of the ITER tokamak. Operation of the magnet will enable ITER to create and sustain fusion on a scale unprecedented on earth by providing 15 million amperes of plasma current. This current simultaneously heats and confines the plasma – a state of matter with large quantities of ionized particles – enabling sustained temperatures in ITER that are over ten times those of the sun.
“The Central Solenoid is essential to the operation of ITER, which makes this a significant milestone for the global fusion energy community,” said Jeff Quintenz, Senior Vice President of GA’s Energy Group. “The engineering challenge we took on with this project was enormous, but today we are seeing the payoff for that effort. The successful completion and operation of ITER should help lead to the development of fusion energy, a nearly limitless, clean energy source for our world.”
The completed Central Solenoid will consist of six stacked modules made of niobium-tin cable encased in a stainless-steel jacket. Fabrication of each module occurs over a 24-month period, beginning with precise winding and joining of seven conductor segments totaling 3.6 miles to create a layered module with strict tolerances – the two ends must be within a quarter of an inch. The module then spends a month in a massive oven to convert the niobium and tin core into superconducting material. Every turn of the conductor must be separately wrapped with six layers of insulating tape, totaling over 180 miles per module. The entire module is then insulated before being encapsulated in a special epoxy resin to stabilize and protect the coil.
All of the manufacturing processes were developed by GA under a contract from UT-Battelle, which manages Oak Ridge National Laboratory (ORNL) for the U.S. Department of Energy (DOE).
“GA has done an outstanding job to reach the difficult and important milestone of completing Module 1 fabrication,” said Wayne Reiersen, ORNL-based US ITER Magnet Team Lead. “This is the culmination of an eight-year effort involving concurrent engineering of the module design, creating a facility in which these powerful superconducting magnets could be built and tested, qualifying the manufacturing processes, and then building this first-of-a-kind module. It is a tribute to the excellence of the GA team and the corporate commitment of GA to the success of this project.”
GA is a leader in the fusion energy community, with nearly six decades of cutting-edge research addressing the most challenging aspects of fusion. In addition to its work on ITER, the San Diego-based company operates the DIII-D National Fusion Facility owned by DOE, which is the largest magnetic fusion research facility in the U.S.
Like the other six ITER members (the EU, Japan, South Korea, Russia, China and India), the U.S. directs much of its ITER contribution toward domestic manufacturing of components. This approach allows member countries to support the global effort while developing expertise and resources at home. While the U.S. is contributing about nine percent of ITER’s construction cost, it will have access to 100 percent of the results. The U.S. investment in ITER to date surpasses $1.1 billion, with more than 80% spent in the U.S. to enable domestic industries, national labs, and universities to deliver U.S. in-kind contributions for the ITER project.
GA will now perform rigorous testing on the first module to confirm coil performance. The module will be placed on a specially designed trailer after testing is complete and trucked to a port to be loaded on a ship bound for France.
In a digitally interconnected world, hackers, cybercriminals, and other adversarial parties are provided with unprecedented opportunities to launch cyber-attacks and commit cyber-crime. Cutting edge technologies like broadband connectivity, Artificial Intelligence (AI), Distributed Ledger Technologies (DLT), Cryptocurrencies, and the Internet of Things (IoT) provide enterprises with a host of innovation opportunities, yet they also increase the number and type of their cybersecurity risks. This is evident in a significant number of notorious security incidents during the past decade. For instance, the WannaCry ransomware attack back in 2017 targeted Windows computers at a large scale and demanded ransom payments in Bitcoin cryptocurrency. Likewise, in 2016 the world witnessed the first large IoT-based cybersecurity attack, when the Mirai malware exploited IoT devices to launch a distributed denial of service attack. More recently, in 2020, the SolarWinds hack took place, when cybercriminals leveraged a vulnerability in SolarWinds’ Orion software to penetrate thousands of organizations worldwide, including parts of the United States federal government. The SolarWinds attack enabled the installation of additional malware in the compromised computers and resulted in a series of serious data breaches. SolarWinds customers comprise many of the US Fortune 500 enterprises, including some of the world’s top telecommunications companies and financial firms. Moreover, this cybersecurity breach affected the US Military, the Pentagon, the State Department, and 100s of research and academic institutions around the globe, which is indicative of the impact of the attack.
All these recent cyberattacks took place despite the ever-increasing investments on cybersecurity solutions and cybersecurity consulting. Cyber security solution providers are offering novel cyber-defense solutions, yet they can rarely address all the different types of cyber crimes.
Digital Infrastructure Complexity and Cybercrime Intelligence
There are various reasons why it is so difficult for cybersecurity professionals to address modern cyberattacks. First and foremost, cyber security experts are nowadays confronted with a very broad spectrum of cybersecurity threats and vulnerabilities. This is because of the expansion of the IT infrastructures in every aspect of the modern enterprise environment, as well as due to the deployment of novel and complex IT technologies. For instance, the expanded use of Machine Learning (ML) and Artificial Intelligence (AI) brings to the foreground new types of cyber-attacks, like for example data poisoning and evasion attacks. Such attacks used to be very rare before the advent of AI/ML in enterprise environments.
Another reason behind the spread of cybercrime lies in the innovation and intelligence of hackers, who keep finding novel ways to launch adversarial attacks. When distributed denial of service attacks first appeared, enterprises found them very difficult to understand and mitigate. Some years later, hackers invented ransomware attacks, which yielded monetary benefits for cybercriminals while putting enterprises in new ethical and technical dilemmas. Recently, the SolarWinds hack revealed a novel type of supply chain security attack which is very hard to detect. Specifically, in the SolarWinds case, hackers compromised an application monitoring platform (i.e., Orion), which was used as a Trojan horse. Given that large enterprises tend to trust interactions with their major providers, the detection of such trojan attacks across a supply chain of trusted organizations is particularly challenging. Likewise, supply chain organizations are typically unprepared to deal with attacks from trusted parties.
Supply chains tend to be complex from a security perspective. This is mainly because they are as strong as their weakest link. Therefore, hackers are seeking ways for breaking vulnerable parts of the chain, including IT systems and human factors related vulnerabilities. In this context, organizations must invest in cybersecurity solutions and cybersecurity services, while applying cybersecurity best practices from the top cybersecurity companies.
Guidelines for Cyber-Resilience
To boost their cyber-resilience in a hyperconnected and ever-evolving digital environment, companies should consider the following best practices:
- Risk Assessment: The planning and implementation of cyber-defense strategies must always start from a risk assessment. The latter is used to identify the assets that are at risk, including the probability and the expected impact of each risk factor. In this way, the outcomes of the risk assessment can be used to prioritize security investments and alleviate the risks with the highest likelihood and impact on the enterprise (see the sketch after this list). This is important given that most companies operate under budget limitations.
- Education: Nowadays there is a proclaimed talent and skills gap in advanced digital technologies like AI, IoT and BigData. This gap extends to their cyber-security implications. Therefore, companies must strive to improve the security knowledge of their employees based on focused upskilling and reskilling processes.
- Up to date Knowledge Bases: Risk assessment and mitigation processes can be automated and made more efficient based on the use of knowledge bases that comprise information about known threats and vulnerabilities. Instead of trying to identify attack patterns from scratch, knowledge bases provide readily available information about the characteristics of certain attacks and the measures that can be taken to mitigate them. Hence, it is important for enterprises to deploy and maintain knowledge bases with up-to-date security information.
- Advanced Tools and Automation: Companies should turn to automation to monitor the ever-increasing number of IT assets like computers, devices and software programs. In this direction, technologies like AI/ML can boost automation and enable the development of advanced security monitoring tools. Advanced technology is not only the source of cybersecurity problems: in many cases it is part of effective IT security consulting solutions as well.
- Collaboration and Joint Responsibility: When it comes to securing industrial value chains, cybersecurity is all about stakeholders’ collaboration. In this direction, companies must consider sharing information with their supply chain partners, while at the same time engaging in collaborative security processes (e.g., collaborative cyber risk management) as part of their cybersecurity strategy.
- Consideration of the Human Factors: The development of cybersecurity solutions must consider the human factors as well. Security solutions need to be simple, user-friendly, and effective, in order to be accepted by their end-users. This is particularly important for solutions that address non-tech savvy audiences such as solutions for the public administration, home users and SMB (Small Medium Businesses) that do not typically possess in house security expertise.
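As an illustration of the prioritization step above, the following sketch ranks risk factors by expected loss (likelihood times impact). The asset names and figures are invented for the example; a real assessment would draw them from an asset inventory and historical incident data.

```python
# Minimal risk-prioritization sketch: rank risks by likelihood x impact.
risks = [
    {"asset": "customer database", "likelihood": 0.3, "impact": 500_000},
    {"asset": "build server",      "likelihood": 0.6, "impact": 80_000},
    {"asset": "marketing site",    "likelihood": 0.8, "impact": 20_000},
]

for r in risks:
    # Expected annual loss is a common, if crude, prioritization metric.
    r["exposure"] = r["likelihood"] * r["impact"]

# Spend the limited security budget on the highest expected losses first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{r['asset']:<20} expected loss ${r['exposure']:,.0f}")
```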
Overall, cybersecurity remains a long-standing concern for Chief Information Officers (CIOs). CIOs must invest in effective solutions that secure their most important assets against all threats that could cause essential damage to their company. The selection of such solutions requires a deep understanding of the cybersecurity infrastructure and challenges of their organization, along with good knowledge of what the cybersecurity industry has to offer.
It is one thing to know that large-scale, expensive supercomputers back the world’s capability to provide accurate global and regional forecasts, but it is another entirely to understand that the investments made in improving those systems can translate to billions, if not tens of billions, of dollars.
Wrapped together, the agricultural, transportation, tourism, general business, and other economic impacts of accurate (or woefully inaccurate) forecasts are easy to overlook. An even more difficult tale to tell is just how big an impact an order-of-magnitude boost in computing power can have. For the public and funding agencies alike, the argument runs that for the same cost as, say, a fighter jet, the country can get far higher-resolution weather models of both global and regional scope that provide far more accurate results. This counts both for severe weather events and for more spot-on extended forecasts that drive our economy in more ways than might be quickly visible (transportation alone is rife with examples).
For a country like the United States, which was the leader for decades in numerical weather prediction modeling and weather systems operation, investments like these dwindled due to lack of leadership, funding, and misalignment with the research community—all of which compounded in some spectacular failures. The most obvious sign of reduced investments in critical forecasting infrastructure is in the computing investments, argues Dr. Cliff Mass, Professor of Atmospheric Sciences at the University of Washington, and prolific writer on all things weather-related, including some detailed pieces on the need for sustained supercomputing investments for global and regional weather prediction and how the U.S. edge has dulled.
Mass says that until three years ago, the United States had fallen so far behind in its computational investments in weather forecasting that it had one-tenth the computing power of the European center. This was an embarrassing place to be overall, and it came to full light during Hurricane Sandy, when the Europeans saw the storm coming but the much lower-resolution U.S. models could not come to the same conclusions. Since that time, NOAA has made fresh investments and brought the U.S. back to the top of the pack for global and regional forecasts, but there is still a long way to go. "Theoretically, the more computing power you throw at weather models like these, the better the predictions get. There is no such thing as having too much compute power here. And the economic incentive is clear."
As Mass tells The Next Platform, highly publicized failures include the costly inability to deliver accurate forecasts that could have provided faster warning about Hurricane Sandy, and could have diminished the economic impacts of over-predicting the great snowstorm (that really never happened) in New York City last winter. But even with these shortcomings, the case for funding large-scale supercomputers for weather prediction is a tough one, at least compared to how Federal funds seem to be looser when it comes to paying for big supercomputers that target climate modeling research at NASA centers and various national labs.
To put all of this in perspective, consider that large-scale global forecasting in the United States is limited to two centers: NOAA and the U.S. Navy's Fleet Numerical center are the only organizations tasked with long-range, complex ensemble-based forecasts that require massive supercomputers. To be fair, this limited number is not out of balance with other countries, or even continents for that matter. Although there are many regional centers that run heavy-duty (HPC) forecasts, global modeling requires far more brute computational force, and there are only a few companies on the planet catering to this small but ultra-high-margin market. So far, the one company with the dominant share of weather forecasting systems is supercomputer maker Cray.
As we described in detail earlier this year, Cray has been on a winning streak when it comes to top global weather forecasting centers. One of the most notable wins as far as the United States goes, and in fact the one that tipped the scales back in the U.S. favor in terms of competitive national weather systems, was a win at NOAA.
The NOAA supercomputer deal will deliver a 10X increase in application performance at the center, and interestingly will do it by moving from a 2.5 petaflops machine to a 5 petaflops system. (The performance scales non-linearly, and to their benefit, with the addition of more cores.) But what is often missing when news hits the mainstream about big investments like this ($44.5 million) is what it means for the future of weather forecasting in the U.S. That 10X improvement will, according to Dr. Louis Uccellini, director of NOAA's National Weather Service, translate into the ability to process "quadrillions of calculations per second that all feed into our forecasts and predictions. This boost in processing power is essential as we work to improve our numerical prediction models for more accurate and consistent forecasts required to build a Weather Ready Nation."
As NOAA explained earlier this year, in advance of the upgrade, each of the two operational supercomputers will first more than triple their current capacity later this month (to at least 0.776 petaflops for a total capacity of 1.552 petaflops). With this larger capacity, NOAA’s National Weather Service in January began running an upgraded version of the Global Forecast System (GFS) with greater resolution that extends further out in time. “The new GFS will increase resolution from 27 km to 13 km out to 10 days and 55 km to 33 km for 11 to 16 days. In addition, the Global Ensemble Forecast System (GEFS) increased the number of vertical levels from 42 to 64 and increasing the horizontal resolution from 55 km to 27 km out to eight days and 70 km to 33 km from days nine to 16.”
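Those resolution numbers explain where the new capacity goes. As a rough back-of-the-envelope sketch (a simplification that ignores physics packages, I/O, and scaling overheads), halving the horizontal grid spacing quadruples the number of grid columns and also forces a proportionally smaller time step, so total work grows roughly with the cube of the resolution ratio:

```python
# Rough cost scaling for a grid-point weather model (illustrative only).
# Two horizontal dimensions plus a proportionally smaller time step give
# a cubic term; vertical levels add a linear factor.
def cost_factor(old_km, new_km, old_levels=1, new_levels=1):
    return (old_km / new_km) ** 3 * (new_levels / old_levels)

# GFS horizontal upgrade cited above: 27 km -> 13 km
print(f"GFS 27->13 km: ~{cost_factor(27, 13):.0f}x more work")
# GEFS: 55 km -> 27 km plus 42 -> 64 vertical levels
print(f"GEFS: ~{cost_factor(55, 27, 42, 64):.0f}x more work")
```

By this crude measure the GFS upgrade alone consumes roughly a 9x increase in computing, which is why a 10X system boost buys one generation of model improvement rather than a surplus.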
Cray Supercomputers A Port In Many Storms
And if even that is difficult to process in terms of value, consider the impetus. These investments were spurred by the Hurricane Sandy "wake up call" that Mass says was sad, "but one of the best things that could have happened for weather prediction" because it connected the value of high-resolution weather models to real-life consequences if those fall short. The Disaster Relief Appropriations Act of 2013 went into effect following the public shaming of the U.S. weather system in its failure to accurately predict the storm, and was originally to give IBM first crack at the system (although the Lenovo acquisition caused concern, making way for Cray to step into the deal).
Cray had another big win with the Met Office in the UK for $156 million, and another, for $53 million, to provide a new Cray XC40 supercomputer at the Bureau of Meteorology in Australia, which, for context, will allow the center to run nearly eight times the number of forecasts it could with its current machines. The point is, capacity and capability matter when it comes to global forecasts, and the U.S. is finally getting current. The company is pushing its streak of weather wins further today with the announcement that the Danish Meteorological Institute is investing over $6 million in a new system that will provide a 10X boost to its capabilities, and we expect this run to continue globally.
The point is, at least from the U.S. perspective, delivering highly accurate weather models is critical to national and regional economies. Although weather forecasting at global scale is expensive, the economic impacts of not deploying the highest-resolution models can be catastrophic. Although the White House has recently backed future investments in exascale-class supercomputers (future systems that Mass says can and should eventually be used for highly accurate, long-range weather ensembles), he says it will be an uphill battle to keep investments flowing to extend the capabilities of these new supercomputers once they're installed, and to keep pushing the capabilities of ensemble forecasts in the years ahead.
An Introduction to Application Delivery Controllers
Application delivery makes online applications, such as websites, SaaS apps or ecommerce sites, available, secure and reliable. Application delivery solutions ensure access to online services, with the application delivery controller (ADC) the core component of an application delivery model.
An ADC (application delivery controller) is a network appliance that optimizes and manages how enterprise and web application servers connect with client machines. Traditionally these appliances have been hardware-based but software-defined ADCs are increasingly common. Purpose-built networking components, application delivery controllers improve the performance, resiliency, and security of applications delivered online.
In this context, the term controller refers to the data flow between computing systems. The two systems transferring the information might both be servers, as when an application accesses sensitive information. The systems might also be a client and server, as when a user buys items on an e-commerce website through their web browser.
As it manages internet traffic, an ADC directs data packets as they move in and out of the server environment. Modern packet transmission is reliable and fast. The primary goals for application delivery solutions are to ensure that applications are delivered successfully, quickly, and securely.
Today, application delivery includes cloud and mobile access. Applications must function outside a physical work location and across all types of networks. Business applications at the enterprise level have transitioned away from locally installed, desktop-bound software accessed by users across the LAN.
Crucial to this transition are application delivery controllers, which assist applications in adapting to current protocols and networks. ADCs also ensure that applications are always available, perform optimally, and don’t present security risks to the business or to users.
Application Delivery Controller Definition
Often part of an application delivery network (ADN), an application delivery controller is a network device or application that assists with common tasks such as load balancing and SSL termination. For example, many act as web accelerators to ease load from the web servers themselves or offer load balancing. Application delivery controllers are generally located in the DMZ, between a web farm and the outer router or firewall.
Application delivery controllers include traditional load balancing capabilities that automatically distribute communications and processing evenly across computer servers and networks to prevent any single device from becoming overwhelmed. ADCs have additional features such as application access management (AAM), automatic application health checks, RAM caching, proxy and reverse proxy capabilities, SSL offload, TCP reuse, web application firewalls (WAF) and DNS application firewalls (DAF).
Together these ADC features prevent performance degradation and potential downtime, ensuring that an enterprise’s networks and applications remain accelerated, highly available, and secure.
How Do Application Delivery Controllers Work?
An application delivery controller distributes inbound application traffic using algorithms and policies. Basic round robin load balancing forwards client requests in turn to each server, but it will not account for responsiveness or health, assuming all servers are the same. In contrast, an administrator can direct an ADC via additional policies to determine which servers should receive which inbound requests based on a number of criteria. For example, the application delivery controller can inspect and route packet headers based on requested file types or keywords.
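As a toy illustration of such policies (the rules, pools, and host names below are invented), requests can be matched against ordered rules before falling back to plain rotation:

```python
import itertools

# Invented backend pools for the example.
pools = {
    "static":  ["img1.internal", "img2.internal"],
    "api":     ["api1.internal", "api2.internal"],
    "default": ["web1.internal", "web2.internal"],
}
cycles = {name: itertools.cycle(hosts) for name, hosts in pools.items()}

def route(path: str) -> str:
    """Pick a backend pool from the request path, then rotate within it."""
    if path.endswith((".png", ".jpg", ".css", ".js")):
        pool = "static"            # file-type rule
    elif path.startswith("/api/"):
        pool = "api"               # keyword/prefix rule
    else:
        pool = "default"
    return next(cycles[pool])

print(route("/api/orders"))  # api1.internal
print(route("/logo.png"))    # img1.internal
```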
In a typical configuration, the ADC mimics a single virtual server from the perspective of an end user, sitting in front of a cluster of application and web servers and mediating responses and requests between them and clients. An application delivery controller employs various methods to improve web application performance.
What Does an Application Delivery Controller Do?
An application delivery controller typically uses a number of techniques to enhance performance:
Administrators rely on application delivery controllers for their monitoring capabilities, which extend to confirming a server’s operability and health far past the standard ping. The ADC avoids potential disruptions by routing traffic to alternate servers when monitoring indicates that specific health criteria for server responsiveness are not being met, or when a server is experiencing an issue.
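A minimal sketch of an active health check (the hosts and the /health endpoint are assumptions; production ADCs add thresholds, hysteresis, and passive checks) might look like this:

```python
import urllib.request

SERVERS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]  # assumed hosts

def healthy(base_url: str, timeout: float = 2.0) -> bool:
    """A server stays in rotation only if /health answers 200 in time."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as r:
            return r.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

active = [s for s in SERVERS if healthy(s)]
print("in rotation:", active or "none -- fail over to an alternate site")
```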
Application delivery controllers also offer both historical and real-time analysis of all user and network traffic, with metrics for bandwidth usage, round-trip times, and WAN and datacenter latency. This data can minimize the time help desk staff spends identifying the source of problems and provide quick solutions.
ADCs sit in the data path between applications and clients, with the opportunity to offer visibility of application behavior and performance. Application delivery controller systems monitor, analyze, and log incoming client requests and responses from the application, offering real-time insight into bad server and application behaviors. However, traditional ADC appliances focus on delivering packets efficiently and do not offer application insights.
Load balancing means using algorithms to distribute incoming requests across servers. Simpler round robin algorithms send requests sequentially, while more sophisticated algorithms weigh factors such as client location, server capacity, fields in the HTTP header, type of content requested, and more.
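The difference between the two families of algorithms fits in a few lines (server names and weights are invented for the example):

```python
import itertools, random

servers = ["a.internal", "b.internal", "c.internal"]

# Round robin: each request goes to the next server in turn.
rr = itertools.cycle(servers)
print([next(rr) for _ in range(4)])   # a, b, c, a

# Weighted: 'a' has double the capacity, so it draws ~half the traffic.
weights = {"a.internal": 2, "b.internal": 1, "c.internal": 1}
def weighted_pick() -> str:
    return random.choices(list(weights), weights=list(weights.values()))[0]

print(weighted_pick())
```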
An application delivery controller is also capable of global server load balancing, redirecting traffic to server clusters located in a separate datacenter. Another ADC may work in tandem with the first application delivery controller, front-ending those servers at the offsite datacenter. These sites may be configured in active-active mode, in which both sites support inbound traffic actively, or in active-passive mode. Each ADC routes client requests to the nearest datacenter for a given user, minimizing round trip times and latency and ensuring a better experience.
In case of a shutdown, this configuration also supports business continuity, allowing the ADC to divert traffic from the affected area to an available application delivery controller at a co-located site. The new ADC then directs requests to viable resources. Acting as a reverse proxy, the application delivery controller interacts with application servers for the client, performing application monitoring, management, acceleration, and security services right in the data path.
Caching reduces server load and speeds delivery by storing content locally on the ADC rather than fetching it from the backend servers again each time a client requests it.
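The mechanism reduces to a time-bounded lookup that only falls through to the origin on a miss or after expiry. In this sketch the TTL is an illustrative value and fetch_from_origin is a stand-in for the real backend request:

```python
import time

CACHE: dict[str, tuple[float, bytes]] = {}   # url -> (stored_at, body)
TTL = 30.0                                   # seconds; illustrative value

def fetch_from_origin(url: str) -> bytes:
    return f"<origin body for {url}>".encode()  # placeholder backend call

def get(url: str) -> bytes:
    now = time.monotonic()
    hit = CACHE.get(url)
    if hit and now - hit[0] < TTL:
        return hit[1]                  # served locally; origin untouched
    body = fetch_from_origin(url)      # miss or expired: go to the backend
    CACHE[url] = (now, body)
    return body

print(get("/index.html"))  # first call hits the origin, later calls do not
```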
Offloading SSL processing allows the ADC to decrypt requests and encrypt responses in place of the servers, essentially replacing the backend servers as the SSL endpoint for connections with clients. This speeds content delivery by freeing servers to do the tasks they are designed for, handling these computationally intensive operations. Server load balancer systems provide application availability, performance, and scalability.
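In miniature, SSL offload is a TLS socket facing the client and a plain TCP socket facing the server. The sketch below handles a single request with no concurrency, and the certificate paths and backend address are assumptions; a real ADC multiplexes thousands of such connections:

```python
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("adc-cert.pem", "adc-key.pem")  # assumed file names

BACKEND = ("10.0.0.10", 8080)                       # plain-HTTP upstream

with socket.create_server(("0.0.0.0", 443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()          # TLS handshake here
        with conn:
            request = conn.recv(65536)              # already decrypted
            with socket.create_connection(BACKEND) as upstream:
                upstream.sendall(request)           # forwarded in cleartext
                conn.sendall(upstream.recv(65536))  # re-encrypted going out
```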
Application delivery controller platforms guard against and mitigate DDoS attacks. With SSL offloading capabilities enabled, the ADC can detect and block DDoS attacks that use SSL tunneled traffic without exposing servers and applications.
Web Application Firewall
Certain application delivery controller platforms include a built-in web application firewall (WAF). A WAF helps prevent web attacks and protects applications from security vulnerabilities such as cookie poisoning, cross-site scripting (XSS), data form overruns, SQL injection, and malformed HTTP packets.
Application delivery controllers may serve as central authentication points which verify authentication and authorization for clients. In this way, application access management (AAM) systems and ADC systems can interface, offloading the task of processing central authentication services away from the application servers and reducing the complexity of the application environment.
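Conceptually, the ADC validates a signed credential once at the edge so that backends never see unauthenticated traffic. A stdlib-only sketch (the shared secret and token format are invented for the example):

```python
import hashlib, hmac

SECRET = b"shared-adc-secret"   # invented; would live in secure storage

def sign(user: str) -> str:
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{mac}"

def verify(token: str) -> bool:
    user, _, mac = token.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)   # constant-time comparison

token = sign("alice")
print(verify(token))                  # True -> forward to the backend
print(verify("alice:deadbeef"))       # False -> reject at the edge
```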
Multi-tenancy designs support multiple application tenants such as engineering, marketing, and sales, or multiple application teams such as customer services, inventory, and payments. Better utilization of underlying infrastructure, and the cost benefits that follow, drive these designs.
Organizations can achieve agility and savings in administration, acquisition, and ongoing support costs by consolidating more services onto fewer devices, whether virtual or physical. Time for provisioning and de-provisioning new services falls dramatically once the existing infrastructure can be used to provision them without creating or purchasing new components. An expandable pool of resources can service new business units, customers, or projects as it supplies the required layer 4–7 services for those applications.
This rapid infrastructure provisioning not only adds value to service consumers, but also facilitates monitoring and control for the IT department. IT can allot infrastructure costs to specific tenants by using the services fabric to create standardized services and assigning resource controls to the tenants.
Application Delivery Controller Explained
Who Uses Application Delivery Controllers?
Almost any enterprise that uses large-scale CDNs, or that needs to provide fast software or web application services while ensuring the availability and security of high-traffic websites, uses application delivery controllers. ADCs are frequently used as a reverse proxy server, and to ensure high availability for a seamless end-user experience.
DevOps engineers—who build the applications that depend on application delivery controllers—are increasingly responsible for managing and analyzing ADC performance and optimizing for application acceleration as more companies move toward digital transformation and update their network architectures. However, in many organizations, IT and network teams still manage application delivery, especially where legacy or application delivery controller hardware is deployed.
What Is an Application Delivery Controller in Networking?
Application delivery networking refers to creating a technological framework to provide appropriate levels of acceleration, security, availability, and visibility to networked applications. Typically, application delivery controller technologies reside at network endpoints and feature their own proprietary CPUs. They deploy a range of optimization techniques to enhance application delivery across the internet and use real-time data to prioritize access and applications.
The technologies, once deployed, form an application delivery network (ADN). Application delivery controllers and WAN optimization controllers (WOCs) are the two components that comprise application delivery networking.
The ADC is located near the end of the application delivery network close to the data center. The WOC is located both near the end point and in the data center. It focuses on latency optimization, improving application performance, and also handles compression, caching, protocol spoofing, and de-duplication.
Although an ADN is sometimes called a content delivery network (CDN), ADNs optimize the acceleration of dynamic content while CDNs focus on static content.
What is an Application Delivery Platform?
An application delivery platform (ADP) is a suite of application delivery management technologies that handles services such as security controls, load balancing, and traffic management in cloud environments and data centers. The role of the application delivery platform is to securely and reliably deliver applications to end users.
What is Application Delivery Management?
Application delivery management (ADM) is the implementation of best practices in analytics, application security, throughput optimization, and troubleshooting to achieve fast, secure, responsive, reliable user access to applications.
Web Application Security
Online application delivery generates unique vulnerabilities that are distinct from those that arise with traditional LAN-bound applications. Stricter safeguards against data leakage and external attacks are critical as workers require remote access to data and applications.
Application delivery controllers function as the natural gateway to the network, authenticating each user’s attempt to access an application. The ADC can use an on-premises active directory data store to validate user identity for SaaS-based applications. This eradicates the need to store credentials in the cloud, providing single sign-on capabilities and enhancing the user experience as well as improving web application security.
The XML-based protocol SAML is widely used to simplify the application login process. An application delivery controller can authorize users via data stores, acting as a SAML agent and confirming their identities. For example, when platforms such as Twitter or Facebook grant the use of credentials to applications to validate identity, ADCs can serve as the SAML identity or service provider.
The application delivery controller can guard against specially designed DDoS attacks by protecting internal server resources from being targeted with rate-limiting measures. Once such measures are implemented, the ADC can suppress any unusually large surges of inbound requests and minimize how much available bandwidth they consume—or reject such requests completely.
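Such rate limiting is often implemented as a token bucket per client: each request spends a token, tokens refill at a fixed rate, and surges beyond the bucket size are rejected. A compact sketch with illustrative rates:

```python
import time
from collections import defaultdict

RATE, BURST = 5.0, 10.0   # tokens per second and bucket size (illustrative)
buckets = defaultdict(lambda: [BURST, time.monotonic()])  # ip -> [tokens, last]

def allow(ip: str) -> bool:
    tokens, last = buckets[ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
    if tokens < 1.0:
        buckets[ip] = [tokens, now]
        return False               # surge: drop the request outright
    buckets[ip] = [tokens - 1.0, now]
    return True

print(allow("203.0.113.7"))        # True until the bucket runs dry
```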
Traditionally, advanced Layer 7 protection and load balancing were only available as standalone solutions. Application delivery controllers have brought those capabilities together, creating application firewalls that detect malicious scripts or suspicious content in data packet headers that the network firewall may miss.
ADCs support both positive and negative security models. They can integrate with third-party security providers to employ signature-based protection. An application delivery controller can also determine usage patterns in “learning” mode, analyzing which traffic patterns signify normal behavior. Based on that learning, the ADC will automatically flag and block any malicious inbound requests sent by attackers using cross-site scripting or SQL injection. Together, these techniques enable the ADC to generate a comprehensive, hybrid positive and negative security model for users and applications.
Application Delivery Controller Benefits
Enhanced application performance is central to the use of application delivery controllers. Particularly over high-latency and mobile networks, an application delivery controller can employ various techniques to expand application services that improve application performance.
SSL and TLS are essential to e-commerce, but it is very CPU intensive to manage traffic encrypted with new ciphers. Application delivery controllers decrypt traffic before it reaches the server and manage certificates, and can handle massive volumes of both encrypted and unencrypted traffic.
Application delivery controllers also engage in TCP multiplexing to more effectively manage high volumes of inbound server requests. The ADC routes requests using open channels as traffic arrives, eliminating the inefficient open and close overhead for every transaction.
Application delivery controllers can also optimize performance across mobile networks using domain sharding, breaking down content on each page into a sequence of subdomains that enable many channels to open at once. This improves performance and decreases page load time.
Application delivery controllers offer visibility into delivered content, and can further optimize delivery of image-heavy web pages by converting GIF files into PNG formats. ADCs can also reduce download times by compressing other large components of web pages including cascading style sheet (CSS) files by removing white space and unnecessary characters.
Application availability is critical to consumers, and application environments should be built for fault tolerance as much as possible. Typical failover strategies include deploying additional servers at a co-located site or in the datacenter. ADCs can help provide seamless failover, ensuring high availability of applications, by balancing application workloads across multiple active servers and sites.
Application Delivery Controller vs Load Balancer
It is a common error to think of an application delivery controller as just a next-generation load balancer.
While an application delivery controller is a network component that provides various OSI layer 4-7 services, including load balancing, it also offers many other features. Most ADCs also include GSLB, CGNAT, DNS and IPAM, proxy/reverse proxy, IP SSL offload, traffic chaining/steering, traffic optimization, and web application firewall services. Many also provide more advanced application delivery controller features such as server health monitoring, content redirection, and application performance monitoring services.
Virtual Application Delivery Controllers
All application delivery controllers function as single points of control. They are the location where an application’s security needs, authorization, authentication, and accounting can all take place.
Virtual application delivery controllers offer these same application delivery controller services, but specifically tailored for cloud computing systems and virtualized data centers. Virtual ADCs are similar in architecture and function to hardware ADCs but offer lower performance since they don’t have the hardware acceleration of customized computer chips (ASICs) or specialized network interfaces.
What is Virtual Application Delivery?
Virtual application delivery controllers (vADCs), or virtual load balancers, provide application management and load balancing capabilities. Typically this is the same software found in a hardware appliance, but running in a virtualized environment in a cloud. In contrast to a hardware-based ADC, virtual application delivery can run on any infrastructure, including the public cloud. However, virtual ADCs are still architected for monolithic applications and would typically benefit from the hardware acceleration of the appliance, so they are not as performant running on standard cloud infrastructure. This is a hybrid application delivery controller architecture that might suit organizations wanting to maintain a familiar hardware control interface, but it lacks the full potential of a cloud-native platform.
With the adoption of cloud computing, many public cloud providers recognized the shortcomings of virtual application delivery and developed their own cloud-native solutions. Systems such as Amazon ELB (Elastic Load Balancer) offer the advantage that they were built for microservices applications and can scale up as demand increases, unlike a virtualized version of a traditional ADC. Tools like Amazon ELB offer the basic features to enable organizations to run their applications in the cloud, but do not provide the advanced capabilities – such as intelligent autoscaling, service discovery and automation – provided by software-defined, cloud-native application delivery platforms.
Some vendors also offer the application delivery controller as a service. This is a SaaS-based approach to hosting the control application with all the benefits in terms of uptime, maintenance, patching, and scaling which come with on-demand applications. The data plane or agents must still be deployed locally to the applications in this architecture.
Benefits of Virtual/Cloud-based Application Delivery
There are several important benefits inherent to cloud-based application delivery:
Cloud-based application delivery replaces a hardware-based solution. A public cloud service is better-suited to deliver quality while scaling globally.
A cloud-based application delivery process obviously reduces hardware acquisition and maintenance costs. Businesses also spend less on support as application performance improves—and user experience with it.
Cloud-based application delivery enables faster performance from cloud-based applications, and users can immediately access services and information anywhere, from any device.
History of Application Delivery Controllers
Beginning around 2004, first-generation ADCs offered simple load balancing and application acceleration. By 2006, application delivery controllers began featuring advanced application services such as caching, compression, traffic shaping, connection multiplexing, SSL offload, application layer security, and content switching. Together with existing services in an integrated framework, application delivery controllers optimized and secured mission-critical application flows.
By 2007, many companies made application acceleration products available. Cisco Systems offered ADCs until 2012, and Citrix, F5 Networks, and Radware maintained the application delivery control market in its absence.
The application delivery controller market divided into two areas: general network optimization and framework/application-specific optimization. Both types improve performance, but framework/application-specific optimization typically focuses on optimization strategies for particular application frameworks, such as optimization for AJAX or ASP.NET applications.
In 2012, Avi Networks introduced a software-defined networking architecture. This is a cloud-native approach that separates the central control plane from the data plane of load balancers. It enabled centralized configuration, management, governance and monitoring of load balancers across a distributed infrastructure fabric spanning on-premises, hybrid and public cloud. This architecture enables intelligent automation, autoscaling, centralized policy management, and observability of applications from a central console, no matter where they run.
In 2016, Webscale launched a cloud-based application delivery platform, and today cloud-based application delivery solutions are common.
Going back to the basics – what exactly is IVR?
IVR stands for Interactive Voice Response and describes the technology that lets humans communicate with computers via voice.
In the world of telephony, IVR is an automated system that can interact with users to effectively process their requests. This may mean routing calls, providing information, or many other things.
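At its simplest, the routing half of an IVR is a mapping from caller keypresses (DTMF digits) to destinations. A toy sketch with invented queue names:

```python
# Toy IVR menu: map caller keypresses to destinations (names invented).
MENU = {
    "1": "sales_queue",
    "2": "support_queue",
    "0": "operator",
}

def handle_keypress(digit: str) -> str:
    """Route the call, or replay the menu on unrecognized input."""
    return MENU.get(digit, "replay_menu")

print(handle_keypress("2"))  # support_queue
print(handle_keypress("9"))  # replay_menu
```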
Bicom Systems PBXware has an IVR feature.
Spilled Water on Laptop – What’s At Risk?
A glass of water, juice, tea, or anything else can wreak havoc on your laptop computer's keyboard. Spilling water on laptop computers can mean that water seeps through the keyboard and gets on the motherboard, and perhaps even on components like the RAM or CPU. The liquid can cause electrical shorts or corrosion on the electronics, resulting in failure. Every model of laptop has its guts arranged a little differently, but all the same, only specially-constructed laptops are designed to resist failure due to water or liquid damage.
Internal corrosion can be a serious issue that can permanently damage your computer. All laptops have some type of iron inside the computer. When iron is exposed to water, it will begin a process called oxidation, where the iron is converted into a more chemically stable oxide. This process is where rust comes from.
When you get water damage to your laptop, the worst thing that can happen is water damage to your hard drive. Other components may be expensive, but they are interchangeable. If your hard drive suffers permanent damage from water, replacing it with a new one means losing all of your data as well.
Fortunately, the engineers who design computers usually take this into account. Your hard drive is likely tucked away from the areas that are most likely to get wet in the event of a spill. Another factor that is working in your favor is that most hard drives have some level of water resistance themselves.
With the exception of high-end helium-filled drives, hard drives aren’t hermetically sealed. Most drives need the air pressure inside and outside the drive to be equal. However, hard drives have filters protecting their breathing holes. Even if they penetrate the laptop keyboard cover, water or other liquids can usually only get into the drive with a great deal of force. This is why we usually only see internal water damage in cases of flooding—it usually takes a lot of water pressure to force liquid into the inside of a hard drive.
There is one part of the hard drive on the outside, though, that is vulnerable to liquid damage. Liquid on the laptop hard drive's printed control board (PCB) can cause an electrical short and stop the drive from spinning up. This is what happened to the client in this data recovery case study. When a hard drive's PCB sustains water damage, the computer may display a "hard drive not recognized" or "device not detected" error, even if you swap the hard drive over to another computer.
Are you dealing with a wet laptop and no data?
Why Hard Drive PCB Repair is Best Left to Data Recovery Experts
Every so often, we get a data recovery case where the hard drive’s owner, or a computer repair shop technician, tried to replace the drive’s PCB to get it up and running again. Inevitably, the replacement components wouldn’t work. The hard drive still wouldn’t spin up. Or, if it did, the hard drive would start clicking instead of running smoothly.
Put simply, hard drive control board swaps don't work anymore. Or, at least, not without putting extra work into it. When hard disk drives were simpler, the control boards from identical models were identical as well. This hasn't been true, though, for over ten years. Increases in the capacities of hard disk drives mean massive increases in the density of their hard disk platters. In order to accurately read and write data, the hard drive's components must be calibrated. The drive consults this calibration data, which is stored on a ROM chip on the PCB, all the time. If the calibrations are wrong, the hard drive can't work. A bad problem can even become worse!
And yet, we still see people trying simple board swaps before sending their failed drives to us, even though that technique hasn’t worked reliably for over a decade.
If you're going to swap a control board, make sure you have an experienced electrical engineer to handle moving the ROM chip from the bad board to the good board. You also need an experienced engineer to make sure the failed PCB is the only problem with the drive's components. Sometimes a hard drive can have a problem compounded by failures from multiple components. Even if properly swapping the control board proves easy enough, fixing any other problems would not be. Hard drive control board swaps and HDD PCB repair are best left to the kind of professional laptop data recovery specialists you can find at Gillware Data Recovery.
Gillware Data Recovery Case Study:
Drive Model: Seagate Momentus Thin ST500LT012
Drive Capacity: 500 GB
Operating/File System: Windows NTFS
Data Loss Situation: Orange juice spilled onto laptop-hard drive not spinning
Type of Data Recovered: Photos and documents
Binary Read: 45.1%
Gillware Data Recovery Case Rating: 10
Breakfast in bed is nice, unless you try to bring your laptop with you. You could be one errant gesture away from giving your computer a wholly-unnecessary bath. In this laptop recovery situation, the client wasn’t having breakfast in bed when they had their accident. But they did spill a glass of orange juice onto their laptop keyboard.
The client couldn’t turn the computer on after that – the power button was unresponsive. With the power off, they couldn’t even get into the BIOS. They pulled the hard drive out of their laptop and found that the drive wouldn’t spin up when they tried to power it on. Thankfully, rather than trying DIY fixes that could make the problem worse (like isopropyl alcohol, a hair dryer to dry the liquid, or trying to fix the hard drive outside of a lint free cleanroom environment), at the behest of one of their local computer repair shop technicians, the client got in touch with the specialists at Gillware.
HDD PCB Board Repair Results
Our engineers didn't find any additional problems with other components after successfully replacing the hard drive's PCB. With the control board successfully replaced, the hard drive came back to life. This water spillage on laptop scenario ended up with a very happy result for our data recovery client. We successfully recovered all of the client's user-created files from the failed hard drive, including all of their photos and documents after the water spill on laptop incident. We rated this "water spilled on laptop" data recovery case a perfect 10 on our case rating scale.
BGP is a protocol which performs routing information exchange among routers to determine the optimal paths for the traffic flow. A BGP router forms a neighbor relationship by connecting to its neighbors and exchanging the routes, once the connection is established.
BGP Best Path Selection Algorithm is used to choose and install the best routes into the router's routing table. Because the full Internet BGP routing table includes well over 400,000 routes, and because a BGP router can receive numerous copies of those routes from various providers, the router has to compare the candidate paths and choose the optimal route.
If there are no specific settings that affect the outcome, the BGP Best Path Selection Algorithm determines the best route by selecting the path that traverses the fewest autonomous systems. An Autonomous System (AS) is a single network or a set of networks and routers under the control of one administrative entity. Nevertheless, network administrators frequently manipulate options such as local preference, lowest multi-exit discriminator and weight.
The list of the selection criteria is presented below in the same order in which BGP uses them to select the optimal routes to be injected into the IP routing table (a simplified comparison sketch follows the list):
1) Weight — weight is the first criterion used by the router. It is set locally on the router and is not propagated in routing updates. In case there are multiple paths to a certain IP address, BGP always selects the path with the highest weight. The weight parameter can be set through the neighbor command, route maps, or the AS-path access list.
2) Local Preference — this criterion indicates which route is preferred within the local AS, and BGP selects the one with the highest preference. The default local preference is 100.
3) Network or Aggregate — this criterion prefers the path that was originated locally via a network or aggregate command. Aggregating several routes into one is quite effective and helps save a lot of routing-table space.
4) Shortest AS_PATH — BGP uses this criterion only when it detects two paths with the same weight, local preference, and locally originated or aggregate status; it then prefers the path with the shortest AS_PATH.
5) Lowest origin type — this criterion prefers the path with the lowest origin code: IGP is lower than EGP, which in turn is lower than INCOMPLETE.
6) Lowest multi-exit discriminator (MED) — this criterion, representing the external metric of a route, gives preference to the lower MED value.
7) eBGP over iBGP — this criterion prefers paths learned via eBGP over paths learned via iBGP.
8) Lowest IGP metric — this criterion selects the path with the lowest IGP metric to the BGP next hop.
9) Multiple paths — this criterion determines whether multiple paths need to be installed in the routing table (BGP multipath).
10) External paths — out of several external paths, this criterion selects the first received path.
11) Lowest router ID — this criterion selects the path which connects to the BGP router that has the lowest router ID.
12) Minimum cluster list — in case multiple paths have the same router ID or originator, this criterion selects the path with the minimum length of the cluster list.
13) Lowest neighbor address — this criterion selects the path, which originates from the lowest neighbor address.
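A simplified sketch of the first few comparison steps (attribute names and values are illustrative, the remaining tie-breakers are elided, and real implementations also restrict the MED comparison to paths from the same neighboring AS):

```python
# Compare two candidate paths: higher weight wins, then higher local
# preference, then shorter AS_PATH, then lower MED.
def better_path(a: dict, b: dict) -> dict:
    if a["weight"] != b["weight"]:
        return a if a["weight"] > b["weight"] else b
    if a["local_pref"] != b["local_pref"]:
        return a if a["local_pref"] > b["local_pref"] else b
    if len(a["as_path"]) != len(b["as_path"]):
        return a if len(a["as_path"]) < len(b["as_path"]) else b
    return a if a["med"] <= b["med"] else b

p1 = {"weight": 0, "local_pref": 200, "as_path": [65001, 65010], "med": 0}
p2 = {"weight": 0, "local_pref": 100, "as_path": [65002],        "med": 0}
print(better_path(p1, p2) is p1)   # True: local preference decides
```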
The BGP best path selection algorithm also provides a mechanism to discard paths that are not considered candidates for the best path. The following paths will be discarded:
- “Not synchronized” marked paths;
- Paths without access to the NEXT_HOP;
- Paths originating from an eBGP neighbor, in case the local AS is shown in the AS-PATH
- In the case that BGP enforce-first-as is enabled and the update does not contain the AS of the neighbor as the first AS number in the AS-SEQUENCE;
- “Received-only” marked paths;
The Best Path Selection Algorithm makes the basic decisions needed to select the best routes to install into the routing table. Currently, BGP is considered the standard protocol for inter-domain routing information exchange.
The world produces 57 million tons of plastic pollution every year and spreads it from the deepest oceans to the highest mountaintop to the inside of people's bodies, according to a new study that also said more than two-thirds of it comes from the Global South.
That is enough pollution each year – about 52 million metric tons – to fill New York City's Central Park with plastic waste as high as the Empire State Building, according to researchers at the University of Leeds in the UK. They examined waste produced at the local level in more than 50,000 cities and towns across the world for a study in Wednesday's journal Nature.
The study examined plastic that goes into the open environment, not plastic that goes into landfills or is properly burned. For 15% of the world's population, government fails to collect and dispose of waste, the study's authors said – a big reason Southeast Asia and Sub-Saharan Africa produce the most plastic waste. That includes 255 million people in India, the study said.
Lagos, Nigeria, produced the most plastic pollution of any city, according to study author Costas Velis, a Leeds environmental engineering professor. The other biggest plastic-polluting cities are New Delhi; Luanda, Angola; Karachi, Pakistan; and Al Qahirah, Egypt.
India leads the world in generating plastic pollution, producing 10.2 million tons a year (9.3 million metric tons), far more than double the next big-polluting countries, Nigeria and Indonesia. China, often villainized for pollution, ranks fourth but is making tremendous strides in reducing waste, Velis said. Other top plastic polluters are Pakistan, Bangladesh, Russia and Brazil. Those eight countries are responsible for over half of the world's plastic pollution, according to the study's data.
The United States ranks 90th in plastic pollution with more than 52,500 tons (47,600 metric tons) and the United Kingdom ranks 135th with nearly 5,100 tons (4,600 metric tons), according to the study.
In 2022, most of the world's nations agreed to make the first legally binding treaty on plastics pollution, including in the oceans. Final treaty negotiations take place in South Korea in November.
The study used artificial intelligence to focus on plastics that were improperly burned – about 57% of the pollution – or just dumped. In both cases, extremely small microplastics, or nanoplastics, are what turn the problem from an aesthetic nuisance at beaches and a threat to marine life into a human health hazard, Velis said.
Numerous studies this year have examined how widespread microplastics are in our drinking water and in people's organs, such as hearts, brains and testicles, with doctors and scientists still not quite sure what it means in terms of human health hazards.
"The big bomb of microplastics is these microplastics released in the Global South mostly," Velis said. "We now have a huge dispersal problem. They are in the most remote places … the peaks of Everest, in the Mariana Trench in the ocean, in what we breathe and what we eat and what we drink."
He called it "everybody's problem" and one that will haunt future generations.
"We shouldn't put the blame, any blame, on the Global South," Velis said. "And we shouldn't praise ourselves for what we do in the Global North in any way."
It's simply a lack of resources and of government capacity to provide the necessary services to people, Velis said.
Outdoors specialists fretted that the research’s concentrate on air pollution, as opposed to general manufacturing, allows the plastics market off the hook. Making plastics gives off large amounts of greenhouse gas that add to climate change.
” These men have actually specified plastic air pollution in a much narrower method, as truly simply macroplastics that are discharged right into the atmosphere after the customer, and it risks us shedding our concentrate on the upstream and claiming, hey currently all we require to do is take care of the waste much better,” claimed Neil Tangri, elderly supervisor of scientific research and plan at GAIA, an international network of campaigning for companies dealing with no waste and ecological justice campaigns. “It’s needed yet it’s not the entire tale.”
Theresa Karlsson, scientific research and technological consultant to International Pollutants Removal Network, one more union of campaigning for teams on atmosphere, health and wellness and waste concerns, called the quantity of air pollution recognized by the research “worrying” and claimed it reveals the quantity of plastics being generated today is “uncontrollable.”
However she claimed the research misses out on the value of the worldwide sell plastic waste that has abundant nations sending it to bad ones. The research claimed plastic waste profession is lowering, with China outlawing waste imports. However Karlsson claimed general waste profession is really raising and most likely plastics with it. She pointed out EU waste exports going from 110,000 bunches (100,000 statistics bunches) in 2004 to 1.4 million bunches (1.3 million statistics bunches) in 2021.
Velis claimed the quantity of plastic waste traded is tiny. Kara Lavender Legislation, an oceanography teacher at the Sea Education And Learning Organization that had not been associated with the research, concurred, based upon united state plastic waste fads. She claimed this was or else among the a lot more detailed research studies on plastic waste.
Authorities in the plastics market applauded the research.
“This study highlights that uncollected and unmanaged plastic waste is the largest contributor to plastic pollution and that prioritizing adequate waste management is essential to ending plastic pollution,” Chris Jahn, council secretary of the International Council of Chemical Associations, said in a statement. In treaty negotiations, the industry opposes a cap on plastic production.
The United Nations projects that plastics production is likely to grow from about 440 million tons (400 million metric tons) a year to more than 1,200 million tons (1,100 million metric tons), saying “our planet is choking in plastic.”
Jennifer McDermott contributed from Providence, Rhode Island.
Follow Seth Borenstein on X at @borenbears
Read more of AP’s climate coverage at http://www.apnews.com/climate-and-environment
The Associated Press’ climate and environmental coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org. | <urn:uuid:d4628fc6-0e3b-4ae0-9df9-71747f8222f9> | CC-MAIN-2024-38 | https://ferdja.com/index.php/2024/09/04/the-world-is-pumping-out-57-million-tons-of-plastic-pollution-a-year/ | 2024-09-07T20:03:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00650.warc.gz | en | 0.9326 | 1,470 | 2.90625 | 3 |
How to Troubleshoot General Networking Issues
Quick Definition: Network troubleshooting is the systematic process of identifying and fixing issues within computer networks. Network engineers perform tasks like verifying device configurations, checking routing tables, and inspecting network interface statuses. Common troubleshooting areas include issues like collisions, broadcast storms, and duplicate addresses.
Troubleshooting network problems is a foundational aspect of both the Network+ exam and the role of a network engineer. A network engineer's credibility is often reflected in the consistent uptime of their network.
High levels of uptime correlate with a systematic and structured approach to diagnosing a network. Whether you are troubleshooting an internal or external network, there are tried and true methods for achieving success.
In this article, we’ll walk through numerous considerations when troubleshooting a network as well as some common follies. First, let's explore five distinct approaches to consider when evaluating any network.
Why is Network Troubleshooting Important?
One fundamental principle in engineering is that any human-made creation will inevitably experience wear and eventually fail. Whether it is the International Space Station or a sump pump, something is bound to fail – including your network topology.
Having a rigorous and structured plan is not for “just in case” but to prepare for the inevitability of network downtime. Establishing a consistent troubleshooting pattern streamlines and consolidates your problem-solving strategy, ultimately saving time and providing a traceable process for auditing purposes. Let’s walk through five troubleshooting topics to incorporate into your attack plan.
5 Common Considerations in Network Troubleshooting
As you gain experience troubleshooting networks, clear patterns emerge in the chaos. The same five or so issues are often the culprits contributing to degraded connectivity. Let’s talk about them in more detail, starting with device configuration.
Device Configuration
Time and time again, device misconfiguration is the culprit. If network access is blocked, verify the port is open on your firewall software. Often, ports are closed by default for security purposes. Additionally, check the router and switches to verify port forwarding rules are enabled. It is critical to have a diagram of your network’s topology to verify the data’s chain of custody.
Routing Tables
Routing tables often reside on the router or appliance configured to transmit data over the network. Verify all the routes are correct, none are missing, and none of them conflict with each other.
That list is not exhaustive, but routing table errors are often the source of network woes. The routing tables should always be on your network diagnostic chart.
Interface Status
Verifying interface status is generally the very first step of troubleshooting. Even at home, it is easy to check — is the router “internet light” red or green? Verify all network interface devices in the chain of custody between the internet and your device.
This process can be done by physically checking the machines, pinging relevant appliances, or using administrative tools like ipconfig, ifconfig, or netstat.
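To make that kind of check concrete, here is a minimal Python sketch that pings a chain of hosts from nearest to farthest and reports where connectivity first breaks. The gateway and DNS addresses in it are placeholder assumptions to replace with your own, and it simply shells out to the system ping command.

```python
import platform
import subprocess

# Hosts to test, ordered from "closest" to "farthest". The addresses
# below are placeholders; substitute your own gateway, DNS server, etc.
HOPS = [
    ("loopback (local TCP/IP stack)", "127.0.0.1"),
    ("default gateway", "192.168.1.1"),
    ("internal DNS server", "192.168.1.53"),
    ("external host", "8.8.8.8"),
]

def ping(host: str) -> bool:
    """Send a single ping and return True if the host replied."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for label, address in HOPS:
    status = "OK" if ping(address) else "FAILED"
    print(f"{label:35} {address:15} {status}")
    if status == "FAILED":
        print(f"Connectivity breaks at: {label}")
        break
```

The first hop that fails tells you roughly where to look next: a dead loopback points at the local stack, a dead gateway at the LAN, and a dead external host at the uplink or firewall.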
VLANs
VLANs are used to logically separate a physical network into smaller, more manageable logical networks. However, they can often be a source of heartburn when troubleshooting connectivity issues. VLANs are often used to enforce network policies and are frequently misconfigured. Node overlap can also contribute to network problems: node overlap is when one device is incorrectly mapped to more than one VLAN.
Network Performance Baselines
Network performance baselines are well-documented cases of the network operating at optimum efficiency.
Monitor any deviation from its typical performance, and consistently gauge whether your troubleshooting actions have brought your transmission speed closer to baseline.
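One way to act on a baseline is a small script that samples current latency and compares it against the documented figure. The sketch below is only illustrative: the target address, baseline value, and tolerance are assumed numbers you would take from your own baseline documentation.

```python
import platform
import re
import statistics
import subprocess

BASELINE_MS = 12.0      # documented "normal" latency to the gateway (assumed)
TOLERANCE = 1.5         # alert when latency exceeds 1.5x baseline (assumed)
TARGET = "192.168.1.1"  # placeholder gateway address
SAMPLES = 10

def ping_ms(host: str) -> float | None:
    """Return one ping's round-trip time in milliseconds, or None on failure."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    out = subprocess.run(["ping", count_flag, "1", host],
                         capture_output=True, text=True)
    match = re.search(r"time[=<]([\d.]+)", out.stdout)
    return float(match.group(1)) if match else None

samples = [t for t in (ping_ms(TARGET) for _ in range(SAMPLES)) if t is not None]
if samples:
    avg = statistics.mean(samples)
    print(f"average latency: {avg:.1f} ms (baseline {BASELINE_MS} ms)")
    if avg > BASELINE_MS * TOLERANCE:
        print("latency is well above baseline: start troubleshooting")
else:
    print("no replies at all: check interface status first")
```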
13 Common Areas to Troubleshoot Network Issues
Now that we have walked through broader considerations, let’s look at the common areas where network issues come into play. There are only so many moving parts in a network, and it is important to have each one listed to ensure all possible components are verified.
Collisions
Collisions occur when two devices transmit at the same time on a shared network segment, such as older hub-based or bus topologies. They are rarely seen in a modern switched, full-duplex environment.
Broadcast Storms
Broadcast traffic consists of data packets that are sent from one device to every device on the network. (Think of how a TV station “broadcasts” shows to all viewers.) Broadcast storms stem from misconfigurations that cause nodes to respond to broadcasts with broadcasts, thus commencing an infinite loop. This broadcast storm can cause significant latency and downtime.
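A crude way to spot a storm in progress is to count broadcast frames per second on the wire. The sketch below is Linux-only and needs root privileges for the raw socket, and the threshold of 500 frames per second is an arbitrary assumption, since sensible limits depend entirely on your environment.

```python
import socket
import time

# Linux-only sketch (requires root): count Ethernet broadcast frames per second.
ETH_P_ALL = 0x0003
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
BROADCAST = b"\xff\xff\xff\xff\xff\xff"  # destination MAC of a broadcast frame

window_start, count = time.time(), 0
while True:  # Ctrl+C to stop
    frame = sock.recv(65535)
    if frame[:6] == BROADCAST:  # first 6 bytes are the destination MAC
        count += 1
    now = time.time()
    if now - window_start >= 1.0:
        if count > 500:  # threshold is environment-specific (assumed here)
            print(f"{count} broadcast frames/sec: possible broadcast storm")
        window_start, count = now, 0
```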
Duplicate Addresses (including MAC and IP)
This can occur in a few different ways. If IP addresses are assigned statically, the network administrator may have assigned the same address to two different devices.
If addresses are assigned dynamically, a misconfigured DHCP server (for example, one with overlapping scopes) can hand out duplicate addresses.
Lastly, some DHCP servers allow administrators to reserve specific IP addresses for devices based on their MAC addresses. If the administrator configures conflicting reservations or makes mistakes in the reservation settings, duplicate address conflicts will arise.
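The observable symptom of a duplicate address is often the same IP answering from two different MAC addresses over time. As a rough sketch, the script below polls the local ARP cache and warns when an IP's MAC changes; it assumes the arp utility is installed on the machine.

```python
import re
import subprocess
import time

# Watch the local ARP table; if the same IP shows up with a new MAC
# address, a duplicate address (or ARP spoofing) may be in play.
seen: dict[str, str] = {}
ARP_ROW = re.compile(
    r"(\d+\.\d+\.\d+\.\d+)\D+((?:[0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2})"
)

while True:  # Ctrl+C to stop
    table = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    for ip, mac in ARP_ROW.findall(table):
        mac = mac.lower().replace("-", ":")  # normalize Windows/Linux formats
        if ip in seen and seen[ip] != mac:
            print(f"WARNING: {ip} changed from {seen[ip]} to {mac}")
        seen[ip] = mac
    time.sleep(30)
```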
Multicast Flooding
Multicast differs from broadcast in that it is sent to multiple recipients but only to recipients with a specific interest in the message being transmitted. Multicast flooding is when ALL machines receive the message, regardless of whether they care about it or not.
Asymmetric Routing
Asymmetric routing is when a packet is sent to a recipient, but the recipient returns transmission on a different route. This can be a security issue because Access Control Lists (ACLs) are often configured with symmetry in mind. Additionally, asymmetric routing is far more difficult to troubleshoot.
Switching and Routing Loops
These are characterized by packets being unintentionally forwarded in a loop within a network. This can cause extensive network latency and is generally due to misconfigured switches.
Rogue DHCP Server
Rogue DHCP servers are unauthorized servers connected to the network that distribute IP addresses, subnet masks, gateways, and more.
Rogue DHCP servers are typically introduced to a network maliciously or accidentally. These can cause duplicate addresses and significant latency.
Expired Certificates
Certificate issues occur when an HTTPS certificate is not renewed in time. Often, this happens because the network administrator forgets to renew it. Certificate renewal should always be automated with a scheduler like cron.
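Expiry is also easy to monitor before it bites. The following sketch, which uses only Python's standard library, connects to a host, reads its TLS certificate, and reports the days remaining. The hostname and the 30-day warning window are placeholder assumptions.

```python
import datetime
import socket
import ssl

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Return the number of days before the host's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # cert["notAfter"] looks like "Jun  1 12:00:00 2025 GMT"
    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"])
    )
    return (expires - datetime.datetime.utcnow()).days

for host in ["example.com"]:  # replace with your own hostnames
    days = days_until_expiry(host)
    print(f"{host}: certificate expires in {days} days")
    if days < 30:  # warning window is an assumption; tune to your renewal process
        print(f"  RENEW SOON: {host}")
```

A script like this dropped into a daily cron job turns a surprise outage into a routine ticket.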
Hardware Failure
Hardware failures are when significant components of the network become compromised. For example, if the router short-circuits, this will bring traffic to a dead halt.
Firewall Misconfiguration
In my experience, this is always the biggest culprit. Often, firewalls are misconfigured to block ports that should otherwise be open. For instance, if a firewall blocks port 443, no HTTPS traffic can enter or leave the network. Modern firewalls generally have the ability to block by service, port, or address.
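A quick way to confirm whether a firewall is the culprit is to test TCP reachability of the ports a service needs. This minimal sketch uses a placeholder hostname and port list; a refused or timed-out connection from a host that should have access points to a filtering problem.

```python
import socket

HOST = "intranet.example.com"   # placeholder host behind the firewall
PORTS = [80, 443, 3389]         # services you expect to be reachable (assumed)

for port in PORTS:
    try:
        # Attempt a plain TCP connection with a short timeout.
        with socket.create_connection((HOST, port), timeout=3):
            print(f"port {port}: open")
    except OSError as exc:
        print(f"port {port}: blocked or closed ({exc})")
```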
Licensed Feature Issues
Yet another common issue: when licenses on network devices expire, the network can grind to a halt, because proprietary software requires a valid license to operate.
Network Performance Issues
Most network performance problems are usually rooted in one of the mentioned issues. By examining baseline performance expectations, you can pinpoint the most probable cause of the network's troubles.
As with anything, effectively troubleshooting a network comes with time and experience. However, life will be much easier if a strategy is already in place to diagnose the issue. For any structured analysis, start with the common problems mentioned earlier: verify nothing is physically broken first, then check interface statuses and determine which nodes can be pinged from where — that will help isolate the problem.
Compiling and thoroughly reviewing these solutions will not only prepare you for the Network+ exam but also establish you as the trusted network expert within your organization.
Not a CBT Nuggets subscriber? Sign up for a 7-day free trial.
| <urn:uuid:1d8a9c45-862e-45a1-9bf5-1b701027f23d> | CC-MAIN-2024-38 | https://www.cbtnuggets.com/blog/technology/networking/how-to-troubleshoot-general-networking-issues | 2024-09-10T07:26:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00450.warc.gz | en | 0.930391 | 1,595 | 2.78125 | 3 |
November 30th is National Computer Security Day. First established in 1988, National Computer Security Day is the perfect time for businesses to launch new security awareness initiatives. In fact, this holiday spawned from one of the world’s first cyberattacks: an attack on ARPANET (the internet’s predecessor) that harmed 10% of connected computers. In the wake of that eye-opening incident, IT professionals realized that there was a need for increased security awareness in order to reduce the chance of future similar events. The Washington, D.C. chapter of the Association for Computing Machinery’s Special Interest Group on Security, Audit and Control chose November 30th to remind folks to be security conscious before the busy holiday season.
Read more about the fascinating history of National Computer Security Day and look at the timeline of cybercrime here: https://nationaltoday.com/national-computer-security-day/
How Can Organizations Benefit from National Computer Security Day?
When everyone is more security conscious, the whole company benefits (especially the IT department!). National Today offers a great menu of options that enable companies and IT professionals to choose fun, engaging ways to fight back against surging cybercrime by raising security awareness in their organizations like:
- Get high-level buy-in for security awareness initiatives and launch cyber security awareness programming to communicate critical messages about risks like phishing.
- Host a lunch-n-learn to reinforce positive and motivational messages about cybersecurity.
- Recycle material from a webinar in a simple, fun trivia contest with fun prizes.
- Run a competition for IT team members that requires resolving simulated cyberattacks to win a prize.
- Answer employee questions. Your organization may have unique programs, policies and best practices. Help employees understand material for which they cannot find answers on the internet.
- Share information about the latest phishing tactics and popular methods of propagating cyber threats like ransomware. Explain who to contact in the event of questions about a specific online interaction or email.
- Include content about remote work safety as pandemic challenges evolve. The majority of remote workers communicate primarily through email, elevating phishing risk.
- Remind employees that security protocol extends to every work device they’re using, even when using them for personal tasks, and vice versa. Remote workers regularly use the same devices for work and personal tasks, making security awareness paramount.
Strong Security Starts with Phishing Awareness
A huge part of any company’s security awareness initiative has to be phishing awareness. After all, stopping ransomware, fraud, business email compromise, malware and credential compromise starts with stopping phishing. Make sure that staffers know how to spot a phishing email, and what they should do when they encounter a dodgy message. Here’s a guide to red flags that may indicate that a message is actually phishing.
Red Flags That Point to Phishing
Adapted from our blog post What is a Common Indicator of a Phishing Attempt?
Is the subject line accurate? Subject lines that feature oddities like “Warning”, “Your funds has” or “Message is for a trusted” should set off alarm bells. If the subject or pre-header of the email contains spelling mistakes, usage errors, unexpected elements like emojis or other things that make it stand out from emails you regularly receive from the sender, it’s probably phishing.
If the greeting seems strange, be suspicious. Are the grammar, punctuation and spelling correct? Is the greeting in a different style than you usually see from this sender? Is it generic when it is usually personalized, or vice versa? Anomalies in the greeting are red flags that a message may not be legitimate.
Check the sender’s domain by looking at the email address of the sender. A message from a major corporation is going to come from that company’s usual, official domain. For example, if the message claims to be from a well-known company but the sender’s address uses a different or subtly misspelled domain, you should be wary.
Word Choices, Spelling & Grammar
This is a hallmark test for a phishing message and the easiest way to uncover an attack. If the message contains a bunch of spelling and usage errors, it’s definitely suspicious. Check for grammatical errors, data that doesn’t make sense, strange word choices and problems with capitalization or punctuation. We all make the occasional spelling error, but a message riddled with them is probably phishing.
Does this look like other messages you’ve received from this sender? Fraudulent messages may have small variations in style from the purported sender’s usual email style. Beware of unusual fonts, colors that are just a little off, logos that are odd or formats that aren’t quite right.
Using malicious links to capture credentials or send victims to a web page that can be used to steal their personally identifiable information (PII) or financial information is a classic phishing scam. Hovering your mouse or finger over a link will usually enable you to see the path. If the link doesn’t look like it is going to a legitimate page, don’t click on it. If you have interacted with it, definitely don’t provide any information on the page that you’re directed to because it’s almost certainly phishing.
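That display-text-versus-destination comparison can also be automated when triaging a suspicious HTML email. The snippet below is a simplified sketch built on Python's standard library; the sample message and its lookalike domain are invented purely for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect each anchor's href together with its visible text."""
    def __init__(self):
        super().__init__()
        self.links = []          # (href, visible_text) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# A hypothetical phishing snippet: the text claims one site, the href goes elsewhere.
html = '<p>Update your details at <a href="http://paypa1-secure.example.net">paypal.com</a></p>'

auditor = LinkAuditor()
auditor.feed(html)
for href, text in auditor.links:
    target = urlparse(href).hostname or ""
    # Flag anchors whose domain-looking text does not match the real destination.
    if text and "." in text and text.lower() not in target.lower():
        print(f"MISMATCH: text says '{text}' but link goes to '{target}'")
```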
Never open or download an unexpected attachment, even if it looks like a normal Microsoft 365 (formerly Office) file. Almost 50% of malicious email attachments that were sent out in 2020 were Microsoft Office files. The most popular formats are the ones that employees regularly exchange every day — Word, PowerPoint and Excel — accounted for 38% of phishing attacks. Archived files, such as .zip and .jar, account for about 37% of malicious transmissions.
Is this someone or a company that you’ve dealt with before? Does the message claim to be from an important executive, politician or celebrity? A bank manager or tax agent you’ve never heard of? Be cautious about interacting with messages that seem too good to be true. Messages from government agencies should also be handled with care. Phishing practitioners love using fake government messages. In the United States, the federal government will never ask you for PII, payment card numbers or financial data through an email message out of the blue – that’s phishing.
Increase Computer Security by Protecting Computers from Phishing-Related Threats
The fastest and easiest way to prevent a phishing-related disaster is to prevent employees from receiving a phishing message with Graphus. Automated, AI-powered protection for email boxes is the best way to guard against phishing risk – an automated security solution like Graphus catches and kills 40% more phishing threats than conventional security or a SEG. The ideal choice to combat the flood of dangerous phishing email heading for every business, Graphus layers security for more protection with three powerful shields.
- TrustGraph uses more than 50 separate data points to analyze incoming messages completely before allowing them to pass into employee inboxes. TrustGraph also learns from each analysis it completes, adding that information to its knowledge base to continually refine your protection and keep learning without human intervention.
- EmployeeShield adds a bright, noticeable box to messages that could be dangerous, notifying staffers of unexpected communications that may be undesirable and empowering staffers to report that message with one click for administrator inspection.
- Phish911 enables employees to instantly report any suspicious message that they receive. When an employee reports a problem, the email in question isn’t just removed from that employee’s inbox — it is removed from everyone’s inbox and automatically quarantined for administrator review.
| <urn:uuid:2410ea3b-6c03-4b97-be5e-8c01fcc6d91d> | CC-MAIN-2024-38 | https://www.graphus.ai/blog/national-computer-security-day-tips-to-reduce-your-companys-phishing-risk/ | 2024-09-10T08:22:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00450.warc.gz | en | 0.905223 | 1,669 | 2.71875 | 3 |
In an increasingly digital world, protecting your online privacy is more important than ever. One effective way to safeguard your personal information and hide your location is by using a Virtual Private Network (VPN). This guide will explain how VPNs work, their benefits, and how to use them to hide your location and enhance your online security.
What is a VPN?
A VPN, or Virtual Private Network, is a service that encrypts your internet connection and routes it through a server located in a different geographic location. This process masks your real IP address, replacing it with the IP address of the VPN’s server and effectively hiding your true location from websites and potential hackers.
Benefits of Using a VPN
1. Enhanced Privacy: By masking your IP address, a VPN ensures that your online activities cannot be easily traced back to you. This is particularly useful for protecting your privacy when browsing the internet, using social media, or accessing sensitive information.
2. Secure Browsing: VPNs encrypt your internet connection, making it difficult for anyone to intercept and read your data. This added layer of security is especially important when using public Wi-Fi networks, which are often vulnerable to cyberattacks.
3. Access Restricted Content: Many online services and websites restrict access based on geographic location. By connecting to a VPN server in a different country, you can bypass these geo-restrictions and access content that may be unavailable in your region.
4. Avoid Censorship: In countries with strict internet censorship, a VPN allows you to access blocked websites and services by routing your connection through a server in a different country.
How to Use VPNs to Hide Your Location
1. Choose a Reliable VPN Provider: Select a reputable VPN provider that offers robust security features, a wide range of server locations, and a strict no-logs policy. Some popular VPN services include NordVPN, ExpressVPN, and CyberGhost.
2. Sign Up and Download the VPN Software: Visit the website of your chosen VPN provider, sign up for a subscription plan, and download the VPN software or app compatible with your device (Windows, macOS, iOS, Android, etc.).
3. Install and Launch the VPN: Install the VPN software or app on your device and launch it. You may need to log in using the credentials you created during the sign-up process.
4. Connect to a VPN Server: Within the VPN app, choose a server location from the list of available options. Selecting a server in a different country will change your apparent location to that country. Click the “Connect” button to establish a secure connection to the VPN server.
5. Verify Your New IP Address: Once connected, you can verify that your IP address has changed by visiting a website like WhatIsMyIP.com or IP Location. These sites will display your current IP address and location, which should match the location of the VPN server you selected. (For a scripted version of this check, see the sketch after this list.)
6. Browse the Internet Securely: With the VPN connection active, your internet traffic is encrypted, and your real IP address is hidden. You can now browse the internet securely and access geo-restricted content as if you were in the location of the VPN server.
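If you prefer to verify the change programmatically, a few lines of Python can query a public IP-echo service before and after connecting. This is just a sketch, not part of any VPN product; api.ipify.org is one such service, and any equivalent endpoint would work.

```python
import json
import urllib.request

# api.ipify.org is one public "what is my IP" service; any similar echo
# service works. Run this once before connecting the VPN and once after:
# the two addresses should differ if the tunnel is active.
with urllib.request.urlopen("https://api.ipify.org?format=json", timeout=5) as resp:
    ip = json.load(resp)["ip"]
print(f"Your public IP address is: {ip}")
```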
Best Practices for Using a VPN
1. Keep Your VPN On: For maximum privacy and security, keep your VPN connection active whenever you are online, especially when using public Wi-Fi networks or accessing sensitive information.
2. Use Strong Passwords: Combine the use of a VPN with strong, unique passwords for your online accounts to further protect your personal information from unauthorized access.
3. Regularly Update Your VPN Software: Ensure that your VPN software is always up to date with the latest security patches and features to protect against new vulnerabilities and threats.
4. Be Mindful of VPN Limitations: While a VPN provides significant privacy benefits, it does not make you completely anonymous online. Be cautious about the information you share and the websites you visit, and consider additional privacy tools if needed.
Using a VPN to hide your location is an effective way to enhance your online privacy and security. By following the steps outlined in this guide, you can protect your personal information, access restricted content, and browse the internet with greater peace of mind. Embrace the power of VPNs to take control of your online presence and keep your digital life secure. | <urn:uuid:e48196cf-4b19-47d9-8b2d-68a760c039e9> | CC-MAIN-2024-38 | https://garage4hackers.com/vpns-to-hide-your-location-a-step-by-step-guide/ | 2024-09-12T18:33:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00250.warc.gz | en | 0.915549 | 882 | 2.9375 | 3 |
Cybercafes and libraries are perfect places to surf the Internet in a relaxed atmosphere. Unfortunately, they are also the perfect place for something else: man-in-the-middle (MITM) attacks.
An MITM attack is just like the childhood game of keep away, where two people throw a ball back and forth and a person in the middle tries to take the ball.
This childhood game is what happens in a MITM attack with a few key differences; rather than playing in a park, users are on their computer. And, instead of passing a ball back and forth, they're passing their personal information. But the biggest difference is that unlike the children with the ball, users are unaware that the MITM is trying to get their information.
An MITM can use various software programs to sniff Internet traffic at a cybercafe, coffee shop, library, hotel, or university to discover the IP addresses of potential victims and the WiFi router on the network.
Through ARP spoofing, the attacker can redirect the traffic to flow through their own computer before getting to the server or hotspot. In essence, the attacker's computer becomes (at least as far as the victim's computer is concerned) the WiFi hotspot.
While in this position, anything passed between the user and the router the attacker can see in plain text (usernames and passwords, credit card numbers and PINs, account numbers, and any other sensitive information).
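One do-it-yourself way to notice this kind of ARP trickery is to record ("pin") the MAC address of your gateway the first time you join a network you trust, then compare it on later visits. The sketch below assumes placeholder gateway and MAC values and that the arp utility is available; dedicated detection tools do this far more robustly.

```python
import re
import subprocess

GATEWAY_IP = "192.168.1.1"            # your hotspot/router address (assumed)
KNOWN_GOOD_MAC = "aa:bb:cc:dd:ee:ff"  # recorded on a trusted connection (assumed)

# Read the local ARP cache and find the entry for the gateway.
table = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
match = re.search(
    re.escape(GATEWAY_IP) + r"\D+((?:[0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2})", table
)
if match:
    current = match.group(1).lower().replace("-", ":")
    if current != KNOWN_GOOD_MAC:
        print(f"WARNING: gateway MAC is {current}, expected {KNOWN_GOOD_MAC}.")
        print("Someone may be ARP-spoofing this network.")
    else:
        print("Gateway MAC matches the pinned value.")
else:
    print("Gateway not found in the ARP table.")
```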
You can imagine what an attacker could do with your sensitive information. If an attacker intercepted your username and password while you were on a website, the attacker could do anything there without you knowing.
Once you found out that your username and password had been compromised your trust in that particular website could be irrevocably damaged. All this can be done within a few minutes, by anyone with access to the software and without you knowing that your information had been compromised.
However, as with any attack, there are ways to help prevent you and your users from becoming victims.
Unless you are absolutely sure that you are safe, save sensitive tasks like online banking and shopping for a secure network.
Web browsers come with a list of trusted Certificate Authorities (CA). If there is a problem with the certificate because it has expired or was not installed correctly, a warning message will pop up. Similar messages will pop up if the certificate is not trusted or if it is a self-signed certificate.
There are a couple visual cues admins can educate users to look for before they log in to an email or bank account. The first visual cue is the https in the address bar. The s means the site is secure. When the s is not present users should not enter sensitive information in to the website.
Any keen or even not so keen Internet surfer has no doubt received the message “Do you trust this computer?” or “This Connection is Untrusted.” With so many of these messages popping up it is very easy for a user to become frustrated. Once users become frustrated they may ignore the message and continue doing what they were doing, making them vulnerable to MITM attacks.
There are many benefits to using an EV SSL certificate. For example, the green browser address bar that an EV certificate provides is a better visual cue for users than the https is alone. Studies have shown that customers are more likely to purchase from a site displaying an indicator that the site is secure.
By securing your site with an EV SSL certificate you are showing you clients that you care about keeping their information safe.
While man-in-the-middle attacks aren't new, it's important to make sure you're taking the necessary precautions to guard against security attacks.
As an SSL Certificate provider, DigiCert is aware of the threats to your data security and is constantly working to prevent cyber criminals from accessing your data. We promote high-assurance certificates and are on the forefront of emerging markets, as demonstrated by our work with securing healthcare record transfers and using certificates to secure the Internet of Things. | <urn:uuid:ab0d3b39-3072-4c8b-908f-a36f2f6904fe> | CC-MAIN-2024-38 | https://www.digicert.com/blog/thwarting-man-middle | 2024-09-12T20:05:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00250.warc.gz | en | 0.965439 | 824 | 2.9375 | 3 |
What Is Social Engineering?
Social engineering is when someone manipulates someone else into sharing confidential information (personal or financial) or performing an action they wouldn’t normally do that compromises security (for example, giving access to a computer system or a physical location).
Social engineering exploits the natural human tendency to trust. It relies more on human vulnerability than on vulnerabilities in software or hardware systems.
Social engineering refers to all techniques aimed at talking a target into revealing specific information or performing a specific action for illegitimate reasons. – ENISA
Social Engineering Techniques
Common social engineering techniques include:
- Phishing. Sending fraudulent emails or messages that mimic legitimate organizations/individuals to get recipients to reveal sensitive information or download malware.
- Spear phishing. When the attacker customizes their phishing approach for a specific individual or organization, often using personal information to enhance their credibility.
- Pretexting. The attacker invents a scenario or pretends to need information for a legitimate reason, like a security verification or company survey, to extract sensitive data from their target.
- Baiting. This technique involves offering something enticing to the victim in exchange for private information. It could be as straightforward as a free music or movie download that leads to malicious software installation.
- Quid pro quo. Similar to baiting, but the attacker promises a benefit in return for information. This could involve a hacker posing as a technical support agent offering to resolve a non-existent issue on the victim’s computer.
- Tailgating or piggybacking. The attacker seeks physical access to a restricted area by following closely behind a legitimate employee or convincing someone to hold a door open for them.
- Vishing (voice phishing). Using phone calls to scam the user into divulging confidential information, often with the pretense of being from a legitimate organization like a bank or tax authority.
- Smishing (SMS phishing). Like vishing, but instead of calls, the attacker texts their victim.
- Whaling. Attack that targets high-profile individuals like top executives, politicians, or celebrities. These high-level targets are referred to metaphorically as “whales,” hence the term “whaling.”
- Business email compromise (BEC). When attackers compromise or impersonate corporate email accounts to defraud the company, its employees, customers, or partners out of money or sensitive information.
- Honey trap. The use of romantic or sexual attraction to manipulate, deceive, or capture a target, typically for political, military, or espionage purposes. The person setting the trap will use allure and charm to form a relationship with the target, gain their trust, and then extract sensitive information or influence them to take certain actions.
- Watering hole attack. Compromising a frequently visited website to target specific groups of users, infecting their devices with malware when they visit the site.
Why Social Engineering Is Dangerous
One of the main reasons social engineering is so dangerous is because it’s tough to spot and stop.
Unlike traditional attacks, which target technical vulnerabilities, social engineering exploits basic human traits such as trust, fear, and curiosity. These psychological manipulations are much harder to detect and prevent.
It doesn’t help that attackers often personalize their approach based on detailed research about their targets. This makes their fraudulent requests or offers seem more legitimate and increases the likelihood of success.
Everyone is susceptible to social engineering, regardless of their position or the level of their technical expertise. Even the most cautious individuals can be deceived if the attacker’s approach is convincing enough.
4 Stages of Social Engineering
Social engineering typically happens in four stages:
1. Research
The attacker gathers as much information as possible about the target. This can include information about the individual, such as their job role, interests, social habits, and broader information about their organization.
Public sources like social media, company websites, data brokers, and professional networking sites are common research tools. This stage is crucial for planning and making the attack as believable and effective as possible.
2. Establishing trust and rapport
Once the attacker has enough information, they initiate contact with the target.
The goal here is to establish a relationship and build trust. The attacker may impersonate someone the target knows or create a scenario that seems legitimate.
They might pose as a co-worker, an IT department member, a trusted vendor, or even law enforcement. The attacker uses the information gathered in the research stage to make their approach convincing.
3. Exploitation
With trust established, the attacker exploits the relationship to manipulate the target into divulging confidential information or performing actions that compromise security.
This can involve asking for sensitive information directly, tricking the target into breaking normal security procedures or convincing them to install malicious software.
4. Execution
In the final stage, the attacker uses the information or access they’ve gained to achieve their goal. This might be stealing funds, obtaining confidential data, installing malware, or gaining access to restricted areas.
After reaching their objective, attackers try to cover their tracks to avoid detection.
Reducing the Risk of Social Engineering Attacks
While training can help reduce the risk of engineering attacks, it’s not foolproof. If an attacker can find enough personal information about a target, they will more than likely be able to socially engineer them.
For this reason, one of the best ways to protect yourself against social engineering attempts is to reduce your digital footprint and opt out of data brokers. Data brokers compile vast amounts of personal information, including addresses, phone numbers, email addresses, and employment history.
Removing your information from these databases means there’s less publicly available data for social engineers to use in crafting targeted and convincing scams. If a data broker doesn’t have your details, it’s more challenging for an attacker to impersonate someone in your circle or to create a scenario that feels personally relevant to you.
Even if you’re not the direct target, removing your personal information from data broker sites means that attackers trying to get to someone through you (like in a business email compromise attack) will find it harder to use your information as a leverage point.
However, remember to opt out of data brokers regularly. These sites relist people’s information as soon as they find more of it online. Alternatively, you can subscribe to a data broker removal service like DeleteMe.
Don’t stop at data brokers, either. Look at your overall digital footprint. If someone were to search for you online, what would they find? Depending on what comes up when you search for your name on your preferred search engine, you might want to remove your personal information from forum posts and old blogs and make your social media profiles visible to friends and family only.
Other steps you can take to avoid social engineering attacks are to use antivirus software, enable multi-factor authentication, and be skeptical of unsolicited requests. | <urn:uuid:ec82c05a-819f-45f5-8a18-a01eca9db729> | CC-MAIN-2024-38 | https://joindeleteme.com/glossary/social-engineering/ | 2024-09-15T06:42:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00050.warc.gz | en | 0.931735 | 1,420 | 3.46875 | 3 |
From cell phone producers to automotive manufacturers, and everything in between, the electronics industry is facing an unparalleled shortage of chips.
Laptop on back order? New phone model release delayed by two months? Unclear delivery window for office desktops? The global chip shortage is to blame.
We’ve gathered some resources below that explain the shortage—and how we got here—in detail.
What is a chip and why does it matter?
Microchips are small electronic devices consisting of integrated circuits laid out on a thin wafer of semiconductor material (typically silicon). These chips are at the heart of practically every electronic device used today. This includes computers, cell phones, appliances, televisions, GPS trackers, and credit cards among countless other devices.
What has caused this chip shortage?
There are several factors at play here. Shortages in labor caused by COVID-19 combined with political tensions and an untimely demand cycle have resulted in what many are referring to as the “perfect storm”.
Is the answer as simple as just producing more chips?
Unfortunately, no. It costs billions of dollars and many years to build out a production facility due to the complex nature of chip manufacturing. On top of that, competitive pressures in the chip manufacturing market are forcing out all but the finest (and most cost-effective) manufacturers.
How does this affect me?
IRIS is committed to fulfilling orders as quickly as possible. We are working with our suppliers to ensure we are able to provide our clients with the products you need, when you need them. That being said, we may occasionally run into some delivery delays. Rest assured we will set clear expectations during the ordering process and will of course keep you updated with shipping developments. In an effort to get products into your hands as soon as possible, we may have to revert back to the quote approver with equivalent, but suitable, alternative products should product availability become scarce after quote approval. | <urn:uuid:db967f83-491b-4de5-9e1e-b6beb72747c5> | CC-MAIN-2024-38 | https://www.ipv4.irissol.com/blog/semiconductor-chip-shortages-causing-worldwide-supply-chain-delays/ | 2024-09-20T02:58:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00550.warc.gz | en | 0.966624 | 402 | 2.59375 | 3 |
When discussions of cybersecurity arise, there’s one topic that’s impossible to avoid these days: ransomware. Ransomware is a type of malware that infiltrates computer systems and networks and restricts access to data by encrypting files until the victim agrees to pay a ransom. In theory, once the ransom is paid, the cybercriminal provides a decryption tool so companies can regain access to their critical information.
These types of attacks have been surging in the last year, with threat actors striking major pieces of critical infrastructure like the Colonial Pipeline and JBS. But these large companies are far from alone. According to the Department of Justice, more than 4,000 ransomware attacks have occurred daily in the U.S. since 2016, and ransomware is currently the most prominent malware threat going. Everyone from higher education to health care facilities to small manufacturers is in the crosshairs, as attacks continue to get more sophisticated and more costly. The pandemic, coupled with more people working from home, has only increased the prevalence of ransomware.
So why are these attacks, especially on national critical infrastructure, on the rise?
“The one thing about critical infrastructure is that it’s all based about productivity,” said Ron Brash, director of cybersecurity insights at Verve Industrial Protection. “If the product needs to move from point A to point B, in the case of a pipeline, if you’re manufacturing, most of the stuff is just-in-time, so you’re going to have a sense of urgency. And that sense of urgency and that nonstop flow makes you a very good target for criminals because they know that you’re more likely to pay.”
Mastering the basics
In most of these incidents, companies that have fallen victim to ransomware are missing the cybersecurity basics, making them more susceptible to attack. Often, critical cybersecurity positions are going unfilled, legacy systems are left unpatched and significant budgets are not being appropriated to information technology (IT) and operational technology (OT) security. And it’s not just small companies leaving the front door unlocked, said Brash. Large, sophisticated technology companies like SolarWinds and Kaseya are just as guilty.
“The problems that we’re seeing in ransomware and cybersecurity basics are largely around the fact that, to correct the rot, the security rot, that’s present in these environments is going to be very costly,” Brash said. “Most organizations don’t put aside the appropriate amount as a capital expenditure to put in place the basics, to re-orchestrate their networks, to harden those old legacy systems that they’re dependent on. Like scheduling for pipelines to get product in and out, those are ancient systems that are probably running on an old IBM AS400 kind of system. That’s the nature of the beast.”
One of the reasons the cybersecurity basics haven’t improved in many companies is the process requires downtime. Systems might need to go offline, which means loss of profit. If a pipeline, for example, can’t move oil, the pipeline company can’t make money until its systems are back online.
“Basically, we bought a car, we didn’t change the oil in it, we didn’t change the brakes on it, and we’re wondering why the engine is blown,” Brash said. “That’s where we’re at today with ransomware.”
To protect against ransomware and other cyberattacks, many companies are using tools like sensors and passive network monitoring. While those do have value, they should not be a starting point. Brash said they are a capability best added later, once you have the basics down pat.
“What good is having an alarm on your house if the thief is going to come into my back office here, smash the window, grab my laptops, grab my monitor and be gone in 30 seconds,” he said. “That’s ransomware. They get in, it’s over in a couple of minutes and it’s spreading like wildfire. If you’ve got alarms, what are you going to do? By the time the service ticket winds up on a bunch of people’s desks, it’s already over. You’re dealing with an incident.”
Money well spent
Having a good backup is an essential part of cybersecurity, but that’s only part of the solution. Brash said company money might be better spent elsewhere.
“If I were to spend $300,000 on sensors in the equipment, I would much prefer that asset owner spend 300 grand on other things that are more enabling,” he said. “If your network is on old managed switches or 1 gigabit but you actually need 4 gigabits of bandwidth, spend the 300 grand on things that are going to give you something today but also into the future. That are going to allow you to then do backups and recovery at scale, because now you have the bandwidth to do so on your network infrastructure. You need those core things.
“If you build a house, if you’re going to build it on sand, you’ve got to drive down piles. You don’t want to have a house on shaky ground. So do the right things to solidify that basis before you move forward into bells and whistles and silver bullets that don’t solve your problems.”
As Brash said, ransomware attackers are always looking for critical pain points, industries that must maintain their service and can’t afford to go offline. This can include everything from manufacturers to hospitals. Most cyberattacks begin on IT systems, but because modern networks are interconnected, they often spill over onto OT systems. Threat actors understand they don’t have to break or disable complicated OT technology to shut down production.
Brash uses the example of a pulp and paper manufacturer. There are machines that make big rolls of paper, but there are also machines tracking the paper and checking for defects. This eventually leads to accounts receivable so companies can handle payment and processing.
“If all that goes down, you don’t even have to break the pulp and paper machine,” Brash said. “You just need to break the tracking, and [systems go offline]. That was true at Honda. Honda had a bunch of just-in-time manufacturing facilities. So you’ve got a bunch of cars on the road. They’re all welded, they’re all set up, and they’re waiting on the engines to be delivered that day for that batch at a certain time. No engines, the line doesn’t move, cars don’t come off the line into the next step, which might be putting wheels on it. You break the system very easily, even though it’s not proper OT or industrial control system-related.”
And businesses need to game plan for almost any cyber eventuality. Once networks go down, many are forced into using old-school paper and pen tracking, so they need to have systems and training in place to facilitate that.
“It’s not just about backups and it’s not just about basics, but do you have the processes and the business continuity and disaster-recovery plans, DCPs (data continuity plans) and BRPs (business resumption plans), to continue moving on?” Brash said. “That’s the piece that’s not there yet, and that’s why people choose to pay the ransom. A: because there’s no consequence really for paying it, and it’s cheaper than actually doing the right thing in the first place.”
The ransomware payout
With ransomware, attackers are usually just looking for a simple — and sizable — payout. The question of whether a company should pay or not is a complex one, but the answer often comes down to a cost-benefit analysis.
If a company’s profit margins are strong and they’re making money, the last thing they’re going to want to do is shut down. Companies generally look at how much the outage is costing them and compare that to the amount being demanded by the attacker. Unfortunately, it’s often cheaper to pay the ransom and move on. But Brash said there’s more to consider than just the profit-loss sheet.
“Because [many businesses] just pay it, as if it’s like a tax that someone decided — it’s like a toll going over a bridge that you didn’t really want to pay, but you will pay — they’ll just do it and they’ll write it off. It’s a business loss. Great. Shareholders don’t care. The company is still making money. Everything’s wonderful.
“That’s where we start to wind up in problems, where you start to apply the ethics of it. Does it make sense to be paying someone that’s very likely to attack you again? Or are you financing something else that you shouldn’t be financing in another country? That’s another conflict of it. So I think what needs to happen is paying ransom should not be your playbook. That should not be what your go-to plan is when this event occurs.”
Brash argued that company money is much better spent preventing attacks than responding to them. Instead of losing profits to hackers, forward-thinking manufacturers should put that money into improving their systems. If threat actors are willing to expend time and effort, businesses must be, as well.
“When you think about it, most critical infrastructure and most manufacturing environments are papier-mâchéd together. We’ve got to start doing some proper capital expenditures to actually get those things moving and get them maintained,” Brash said. “To fix it, you’ve got to at least put in the core infrastructure that would enable your business to run on the worst days possible.”
Keep an eye out for Part 2 of our interview with Verve’s Ron Brash in the coming weeks, where he will discuss the government response to increasing cybersecurity threat. And check out our Industrial Cybersecurity Pulse YouTube page to view previous installments from our expert interview series. | <urn:uuid:28fb821f-49be-4a5c-9ab2-0c02fe73f3da> | CC-MAIN-2024-38 | https://www.industrialcybersecuritypulse.com/networks/defending-against-ransomware-expert-interview-series-ron-brash-verve-industrial-protection/ | 2024-09-08T00:19:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00750.warc.gz | en | 0.95731 | 2,189 | 2.625 | 3 |
Tips to protect yourself from Cyberstalkers
What is cyberstalking?
Cyberstalking's definition is quite simply, “the use of the internet, or other electronic means, to harass and intimidate a selected victim”.
Common characteristics include (but aren't limited to) classic 'stalking' behavior — tracking someone's location and monitoring their online and real-world activities. Cyberstalkers have been known to fit GPS devices to their victims' cars, use geolocation spyware on their phones, and obsessively track their victims' whereabouts through social media.
Cyberstalking can include other behavior that's intended to intimidate victims or make their lives unbearable. For instance, cyberstalkers might target their victims on social media, trolling and sending threatening messages; they might hack emails, to communicate with the victim’s contacts, including friends and even employers. Social media stalking can include faking photos or sending threatening private messages. Often, cyberstalkers will spread malicious rumors and make false accusations, or even create and publish revenge porn. They might also engage in identity theft and create fake social media profiles or blogs about their victim.
So, we know what cyberstalking is. But who are its victims? You might be surprised. While most cyberstalking victims are women, 20 to 40 percent of victims are actually men.
Cyberstalking goes a lot further than just following someone on a social network. It's the intent to intimidate, which is the defining characteristic of cyberstalking.
How to avoid being stalked online
One good exercise you should carry out now is to Google yourself and find out just what information a potential cyberstalker could find online. You may be shocked by how easy it is to track you down. Not to mention, find your home address, phone number, and other personal details.
And if that's bad, you might want to check how much data someone could compile on you if they had access to your friends' and family's social media, too. For instance, they might find out which bar you were in, with which friends, or where you'll next be going on holiday and when.
You might even find stuff purporting to be from you that someone else has uploaded: a fake blog, or a Craigslist account putting your phone number and home address out there.
This is how cyberstalkers get started - Googling their victims and finding out everything they can. That means you’ll certainly want to make that information as hard to obtain as possible.
Tips for protecting yourself from cyberstalkers
Increase your privacy settings
Start off with your own data. Take a good look at your social media accounts and if you haven't done already, enable strong privacy settings.
- Make your posts 'friends only' so that only people you know get to see them.
- Don't let social networks post your address or phone number publicly. (You might even want to have a separate email address for social media)
- If you need to share your phone number or other private information with a friend, do so in a private message - not in a public post.
- Use a gender-neutral screen name or pseudonym for your social media accounts — not your real name
- Leave optional fields in social media profiles, like your date of birth, blank.
- Only accept friend requests from people you have actually met in person. Set your social networks to accept friend requests only from friends of friends.
- Disable geolocation settings. You may want to also disable GPS on your phone.
If other personal data is up on the web outside your social media accounts, start removing it. In the case of your SSN being displayed, Google will help you remove that. You may need to contact third party websites to get some of the data taken down. If you need a postal address for business, or for registering your web domain, use a post box address or office address (like your accountant's, for instance), not that of your home.
If you are using an online dating service, don't provide your full ID on the site or over email. Only give out your phone number to someone you've actually met and wouldn't mind seeing again. The best security advice is to not even give your full name online, only your first.
Be cautious of any incoming phone calls or emails which ask you to give personal information, however reasonable the purported request. If a bank or credit card company phones, get off the phone, and use another phone (for instance, if they rang your landline, use your cellphone) to ring back to verify, using the HQ or branch phone number that's on your paperwork — not the one you've just been given. And never, never, never give out your SSN.
Secure your PC and phone
Securing your data won't help you if your smartphone or PC is hacked. To prevent being stalked online you should build basic security into your online life.
- Be wary of public Wi-Fi, which can be hacked easily. If you need to log on in Starbucks or hotels, ideally use a Virtual Private Network (VPN) to prevent anyone from eavesdropping on your communications. Kaspersky's VPN can give you a secure connection wherever you are.
- A VPN will also hide your IP address - which could be used to track your internet provider account, and through that, your address, credit card number, and so on.
- Be careful where you leave your smartphone. It's not difficult to install spyware without leaving a trace - just leaving your phone on your desk for a few minutes is long enough.
- Make sure your phone and computers are password protected. Use a strong password, not something easy to guess, and reset your passwords regularly.
- Use anti-spyware software to detect any malicious software that's installed. Delete it, or better still, back up your data, then do a factory reset to ensure the spyware is completely eradicated. Kaspersky's antivirus comes in both PC and Android versions to keep all your devices safe.
- Remember to always log out of your accounts when you're done — don't leave social network accounts running.
- Beware of installing apps that want access to your Facebook or other contact lists. Do you know what they're planning to do with it?
What is catfishing?
Catfishing is a form of fraud or abuse where someone creates a fake online identity to target a particular victim. Catfishers may lure their victims into providing intimate photos or videos, then blackmail them, or may develop a relationship and then ask for money for a sudden emergency.
Catfishers can be very convincing, but you can discover their scam in several ways.
- If all their online photos are selfies or studio shots, with no other friends, no family, and no context, that's a big clue.
- Do a Google reverse image search against the online photo on a dating site. You may find the person has multiple online profiles with the same photo but different names.
- Ask if you can do a video call on Skype. Guess what? Catfishers will usually make their excuses - and you won't hear from them again.
What to do if you're cyberstalked
If you're being stalked online don't wait and hope the problem will go away — act immediately.
- Make it clear to the cyberstalker that you don't want to be contacted. Put it in writing, and warn them that if they continue, you'll go to the police. Don't engage with them at all once you have issued this warning.
- And if they continue, go to the police. Many police departments have a special cyberstalking team, but they're not going to quibble about a cyberstalking definition. If you've been threatened or you're being harassed and intimidated, then they'll deal with it—whether it's on Facebook, email or through spyware on your phone.
- If you think someone is tracking you through spyware, don't use your own computer or phone to get help - borrow a family or friend's phone.
- Get your computer and phone checked over by a professional for spyware or other signs of compromised accounts.
- Change all your passwords.
- In the case of social media stalking, use your privacy settings to block the person, and then report the abuse to the network. You can easily find out how to report cyberstalking in most social networks' help and support pages.
- If you have been sent abusive or threatening emails, you probably know the stalker's ISP - the bit after the @ in their email address. Contact abuse@domainname or postmaster@domainname. Most ISPs take cyberstalking very seriously. If they're using Gmail, there's a reporting mechanism you can use at https://support.google.com/mail/contact/abuse.
- You can filter abusive emails to a separate folder so that you don't have to read them.
- If you think the cyberstalker might harass you in the workplace, tell your employer.
Save copies of any communications involved, including your own, police reports, and emails from the networks. Back up the evidence on a USB stick or external drive.
Cyberstalking is subject to general laws on harassment, such as the Violence Against Women Act 1994 in the US, and the Protection from Harassment Act 1997 in the UK. California created the first state-level law specifically addressing cyberstalking as an offense in 1999, and other states have followed suit.
It's good that cyberstalking is now being recognized as the serious crime that it is: cyberstalking can wreck people's lives, but it doesn’t have to wreck yours. | <urn:uuid:8200eed3-f9a2-4977-bfc6-2a2f282a9259> | CC-MAIN-2024-38 | https://www.kaspersky.com.au/resource-center/threats/how-to-avoid-cyberstalking | 2024-09-09T04:47:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00650.warc.gz | en | 0.950569 | 2,020 | 2.703125 | 3 |
I recently attended the 65th annual Arizona Association of School Business Officials (AASBO) conference with K-12 administrators from throughout the state.
While there, I asked the CIO of a large Arizona school district what he did. Without hesitation, he responded: “My primary responsibility is the safety and security of the students entrusted to us.”
I was taken aback. I was expecting something about computers and connectivity. But his ultimate goal was the kids’ safety and security.
Parents concerned for their children’s safety at school
His concerns are shared by parents. In a recent PDK Poll, 34 percent of parents indicated that they fear for their child’s physical safety at school—a sharp increase from 2013 when that number was just 12 percent.
Certainly, recent school shootings play a part in these parents’ fears. But there’s a continuing pattern of day-to-day violence and threats—from bullying and fighting to assaults and destruction of school property to vaping and “Juuling”—that also play a role. Consider these statistics from the Centers for Disease Control & Prevention (CDC):
- 8 percent reported being in a physical fight on school property.
- 6 percent reported that they did not go to school on one or more days because they felt unsafe on their way to or from or at school.
- 1 percent reported carrying a weapon (gun, knife or club) on school property.
- 2 percent reported being bullied on school property and 15.5 percent reported being bullied electronically.
See something, say something…do something
There was significant discussion about safety and security at the conference and the current “see something, say something” school safety campaign. The message is important, to be sure.
But how do we better involve kids in school safety and help them continue to get the word out?
My own kids attend a large Los Angeles area high school. Every morning, an administrator makes announcements over the public address system—the same way I received them twenty years ago! Parents also receive autodialed messages over the phone, as well as a monthly newsletter.
Compared to the available technologies, these communication methods are archaic—particularly for students who were raised on and are completely comfortable with technology.
Newer digital displays and software, for example—relatively inexpensive tools—can involve students and help them create and broadcast videos throughout the school.
3-point action plan for K-12 school safety
In the PDK Poll mentioned previously, an astonishing 72 percent of parents were “somewhat” or less than confident in their school’s security. Clearly, there’s much to be done to help increase parents’ confidence in physical school safety. This 3-point action plan for school safety is a great start.
- Assess the current threat environment: Get stakeholders involved in assessing the current threat level in your school or district. Develop criteria to assess and evaluate threats, and then determine what controls are currently being used. Use an “inside out” approach—starting from the inside of the school and working your way out to school grounds and the perimeter.
- Formalize a communications plan: Evaluate your current communications and determine ways to streamline and improve communications. Transform legacy communications systems into modernized communications that students will embrace. For example, instead of morning announcements over the PA system, consider adding them to student information systems or learning management systems. Or send SMS messages to students’ and parents’ mobile phones or devices.
- Perform a gap analysis: Find the gaps in your current security posture so you can take action. What aren’t you seeing on the video surveillance system? Can unauthorized persons walk onto school grounds and into your school without being detected? How long does it take to summon help? How quickly can you notify students, parents, teachers, and other stakeholders of an emergency? Analyze where the gaps are and how you can fill them.
The Logicalis approach to K-12 school safety
Logicalis got its start in security and networking, and we now have a dedicated GovEd practice with considerable experience in K-12 education. We can work closely with you and your stakeholders—teachers, administrators, students, parents, school boards, law enforcement, and elected officials—to provide a thorough assessment of the current threat level, formalize a communications plan, and perform a gap analysis.
We’ll then provide objective advice on the best solution for your school or district using industry-leading access control, video surveillance, and mass notification technologies. Newer technologies can offer significantly better functionality than legacy equipment—for example, better images, more and expansive ways to notify, and automated door locks. Based on your needs, we’ll also add our award-winning services—from assessment and design to implementation and integration to training and support—for a complete solution.
So if the kids are concerned enough to stage a National School Walkout, shouldn’t schools make protecting those students their primary responsibility?
Download our K-12 school safety brief to learn more about how Logicalis can help you improve school safety for your students.
Adam Petrovsky is the GovEd Practice Leader for Logicalis US, responsible for defining go-to-market strategy for K-12, Higher Education, Local Government and State Government. | <urn:uuid:4ea79346-7821-44da-a466-12be7be188e6> | CC-MAIN-2024-38 | https://logicalisinsights.com/2018/08/10/a-3-point-school-safety-action-plan-for-k-12-administrators/ | 2024-09-10T10:41:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00550.warc.gz | en | 0.955974 | 1,107 | 2.578125 | 3 |
The Law of Georgia On Personal Data Protection (N5669-RS, 28/12/2011) (‘PDP Law’).
Definition of Personal Data
Personal data: any information connected to an identified or identifiable natural person. A person is identifiable when he/she may be identified directly or indirectly, in particular by an identification number or by any physical, physiological, psychological, economic, cultural, or social features specific to this person.
Definition of Sensitive Personal Data
Special category data: data connected to a person's racial or ethnic origin, political views, religious or philosophical beliefs, membership of professional organisations, state of health, sexual life, criminal history, administrative detention, putting a person under restraint, plea bargains, abatement, recognition as a victim of crime or as a person affected, also biometric and genetic data that allow to identify a natural person by the above features.
Biometric data: Any physical, mental, or behavioural feature which is unique and constant for each natural person and which can be used to identify this person (fingerprints, footprints, iris, retina (retinal image), facial features).
Genetic data: Unique and constant data of a data subject relating to genetic inheritance and/or DNA code that makes it possible to identify them.
With certain exceptions (discussed below), there is no requirement under PDP Law to notify or register before processing personal data.
The registration requirement applies to the databases. According to the PDP Law, a database is any structured set of personal data where data is arranged and can be accessed based on certain criteria. The PDP Law uses the term filing system to denote a database. For example, a customer database or a registry of employees and clients that is subject to processing may qualify as a filing system.
The data controller is obliged to have a catalogue on each filing system that provides a detailed description of the filing system's structure and content.
According to the PDP Law, before creating a filing system and entering in any new category of data, a data controller shall notify the State Inspector and register the following information about the filing system:
- The name;
- The names and addresses of a data controller and a data processor;
- The place of storing or processing of data;
- The legal grounds for data processing;
- The category or categories of data subjects;
- The data category or categories in a filing system;
- The purpose of data processing;
- The period of data storage;
- The facts and grounds for restriction (if any) of any data subject rights;
- The recipient of data stored in a filing system, and their categories;
- Information on any cross-border data transfer and transmission of data to international organisation and the legal grounds for the transfer;
- A general description of the procedure established to ensure data safety.
The data controller shall regularly update the filing system catalogue and notify the Inspector about any alteration made to the information, no later than 30 days after the alteration.
The notification requirement also applies to cross-border data transfer and a private organisation's processing of a biometric data.
Before using the biometric data, a data controller must provide the State Inspector with the same information that is provided to the data subject, specifically the purpose of data processing and the security measures taken to protect the data.
The following minimum requirements must be met when collecting or otherwise processing the personal data:
- A proper legal ground (for example, a data subject's consent) exists to process the data;
- The personal data is processed for specific, clearly defined, and legitimate purposes;
- The personal data is processed only to the extent necessary for legitimate purposes;
- The personal data is adequate and proportionate to the purpose or purposes for which it was collected and processed;
- The data is kept only for the period necessary to achieve the processing's purpose;
- The data controller or data processor takes technical and organisational security measures to ensure the protection of personal data against accidental or illegal destruction, modification, disclosure, access, and any other form of illegal use or accidental or illegal loss;
- The security measures implemented are appropriate to the risks related to the data processing.
Transfer of personal data outside Georgia is admissible without a separate authorisation from the State Inspector if one of the two following conditions apply:
- A respective legal ground for data processing exists and the proper standards for the safety of data are secured in the relevant country. The State Inspector has approved the list of such countries;
- The processing of data is stipulated under an international agreement between Georgia and the relevant country;
However, the general data processing rules will still apply, including securing a necessary legal ground such as the data subject's consent and the requirements of proportionality and necessity.
If neither of these conditions apply, then there should be a formal written agreement between the transferor and the data's recipient under which the data's recipient shall commit to ensure proper guarantees to protect the data. In this case, the State Inspector must be presented with such agreement and other relevant information or documents for data transfer approval.
A data processor must implement technical and organisational security measures to ensure the protection of personal data against accidental or illegal destruction, modification, disclosure, access, and any other form of illegal use or accidental or illegal loss. The security measures implemented must be appropriate to the risks related to the data processing.
A record must be kept of all data processing activities carried out on personal data stored in electronic form. A record must also be kept of any disclosure or modification of personal data contained in non-electronic form.
Employees of a data controller or a data processor who are involved in data processing must not act beyond the scope of the powers conferred upon them. Employees must be bound to protect confidentiality of the personal data, including after termination of their official duties.
The State Inspector has power to carry out inspections of any data controller and data processor on its own initiative or based on complaints received from data subjects.
The State Inspector may order:
- Temporary or permanent termination of data processing;
- The blocking, destruction, or depersonalisation of personal data;
- The termination of transfer;
- An issuance of administrative fines.
The State Inspector also has a duty to report any violations of a criminal nature to the competent authority. The liability for violation of the data privacy can be criminal, administrative or civil.
Criminal liability: a fine, correction labour, imprisonment for three years, or all three may result from illegal collection, retention, use, or dissemination of personal data that caused substantial damage; A legal entity may be imposed a fine, deprivation of the right to run the business, liquidation and a fine for the same action.
Administrative sanctions: ranging from GEL500 (app. USD 160) to GEL10,000 (app. USD 3200) depending on the type of violation.
Civil: claims can be brought by individuals, depending on the damage the breach of the PDP Law caused.
PDP Law defines direct marketing as offering of goods, services, employment, or temporary work by mail, telephone calls, email, or any other telecommunication facility.
Consent is not required to process personal data obtained from public sources for direct marketing purposes. The data permissible to be collected from publicly available sources is limited to: name and surname, telephone number, email, and fax number.
However, written consent is required if the data processor wishes to use other types of personal data for direct marketing purposes.
Individuals are entitled to demand the termination of using their data for direct marketing purposes at any time in the form under which the direct marketing is conducted.
There is no special regulation with respect to cookies and general rules on data collection and processing applies. Georgian web-sites routinely ask for cookie consent.
There is no requirement to store data in Georgia. However, rules on cross border data transfer will apply. | <urn:uuid:b5e9d9fc-4f70-46ed-8deb-dda875ebf2f4> | CC-MAIN-2024-38 | https://www.dlapiperdataprotection.com/index.html?t=data-protection-officers&c=GE | 2024-09-12T22:50:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00350.warc.gz | en | 0.894537 | 1,629 | 2.609375 | 3 |
In this Introduction to Cybersecurity training course, you will learn to protect your organization from social engineering attacks and cyber threats through disaster recovery methods that ensure continuity of operations. In this cyber security awareness course, you will learn how to establish security in your organization by assessing your environment, identifying needs and deficits, and enumerating critical elements of security awareness.
Introduction to Cybersecurity Delivery Methods
Introduction to Cybersecurity Course Information
In this course, you will learn how to:
- Achieve the goals of cybersecurity.
- Promote user responsibility and data security.
- Examine malicious insider threats.
- Fortify the organization against outside attacks.
- No prior experience is required
- Identifying threats, vulnerabilities and consequences
- Examining and testing malware - rootkits and backdoors
- Assessing defenses against cyber threats
- Being prepared for social engineering attacks
- Preparing for problems - disaster recovery and continuity of operations | <urn:uuid:0333ad87-e97e-4fdb-add3-56eb210d7019> | CC-MAIN-2024-38 | https://eresources.learningtree.com/courses/it-security-online-cyber-security-training/ | 2024-09-15T08:57:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00150.warc.gz | en | 0.921261 | 189 | 2.859375 | 3 |
NASA used an ATK-built rocket motor and test vehicle for an exercise the agency carried out to evaluate a vehicle the agency wants to use for landing payloads on Mars.
The Low-Density Supersonic Decelerator test at the Pacific Missile Range Facility in Hawaii carried the vehicle from the test range into the atmosphere, ignited the ATK STAR 48B motor and accelerated the craft to speeds up to Mach 3.8, ATK said Tuesday.
“ATK was an integral part of the team for Mars Pathfinder and Mars Exploration Rover, providing propulsion for the Delta II launch vehicle, and retro rockets and gas generators for the entry, descent and landing system used to safely deliver the rovers to the surface of Mars,” said Cary Ralston, vice president and general manager of ATK’s missile products division.
The flight test was developed to represent the low pressure and high-speeds in payloads dropped into Mars’ atmosphere, the company said.
An ATK-made STAR 48B rocket motor served as the base for two technologies produced out of NASA’s Jet Propulsion Laboratory — an inflatable Kevlar tubing known as the Supersonic Inflatable Aerodynamic Decelerator; and a gargantuan drag device called the Supersonic Disk Sail parachute.
Both technologies are built to help bushwhack the path to effortless delivery of larger payloads to the surface of Mars.
Two more NASA test vehicles are scheduled for testing in Summer 2015. | <urn:uuid:b7d778d6-6453-4f59-b501-622fde6804b3> | CC-MAIN-2024-38 | https://executivebiz.com/2014/07/cary-ralston-nasa-uses-atk-rocket-motor-test-vehicle-for-mars-payload-exercise/ | 2024-09-17T19:36:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00850.warc.gz | en | 0.922865 | 309 | 2.8125 | 3 |
Are you ready to step into the fast-paced world of digital disruption? Just like a tornado sweeping through a quiet town, digital disruption is reshaping industries and revolutionizing the way we do business. But why is it so important? Well, digital disruption is a catalyst for change, forcing companies to innovate and adapt. It pushes businesses to rethink their strategies, technology, and customer experiences. In today’s ever-evolving market, staying agile and anticipating possibilities is crucial for growth and remaining competitive. In this article, we will explore the definition and impact of digital disruption, uncover the latest disruptive technologies, and provide real-life examples of its transformative power. Get ready to embrace the potential of digital disruption and propel your business forward.
Definition and Impact of Digital Disruption
Experiencing digital disruption can be transformative for businesses, propelling them into a new era of innovation and growth. Digital disruption refers to the fundamental shifts in business models, technology, and customer behavior that drive the need for transformation. It revolutionizes industries, creates impact, and adds value to the customer experience. The role of technology is crucial in enabling digital disruption, with advancements like blockchain, cloud computing, artificial intelligence, and virtual reality shaping the landscape.
In various industries, we can observe the examples of digital disruption. Ride-sharing apps like Uber and Lyft have transformed the taxi business, while online video streaming services like Netflix and Amazon Prime Videos have replaced physical storage devices. Email has disrupted the traditional way of writing letters, and platforms like Airbnb have transformed the accommodation market. Short-form video consumption on platforms like YouTube, Instagram, and TikTok has disrupted long-form video consumption.
Embracing digital disruption offers numerous benefits for businesses. It opens up new opportunities and allows companies to enter existing markets or create new ones. It improves products and services, enhances customer experiences, and boosts efficiency and productivity. However, embracing digital disruption also comes with its own set of challenges and risks. Companies must navigate through the complexities of changing technologies, adapt to evolving customer demands, and address potential security and privacy concerns.
Importance of Digital Disruption
You need to adapt to digital disruption in order to survive and thrive in today’s business landscape. The importance of digital disruption cannot be overstated. It has the power to transform industries, revolutionize business models, and create new opportunities for growth and innovation. Here are three key reasons why digital disruption is crucial:
- Role of innovation in digital disruption: Digital disruption is driven by innovation. It pushes businesses to think outside the box, challenge traditional norms, and embrace new technologies and strategies. By fostering a culture of innovation, companies can stay ahead of the curve and capitalize on emerging trends and customer demands.
- Digital disruption and business transformation: Digital disruption forces businesses to undergo transformation. It requires organizations to reevaluate their strategies, processes, and technologies to stay relevant and competitive in the digital age. Embracing digital disruption is not just about survival, but also about thriving in a rapidly changing landscape.
- Impact of digital disruption on industries: Digital disruption has already had a profound impact on various industries. Companies that fail to adapt to the digital revolution risk becoming obsolete. On the other hand, those that embrace digital disruption can gain a competitive edge, attract new customers, and open up new revenue streams.
Role of Technology in Digital Disruption
Digital disruption is driven by the role of technology in transforming industries and revolutionizing business models. The rapid advancements in technology have paved the way for disruptive innovation, which has had a profound impact on various sectors. The role of technology in digital disruption cannot be overstated. It has enabled businesses to embrace digital transformation and leverage technological advancements to gain a competitive edge.
Digital transformation involves the integration of digital technologies into all aspects of a business, fundamentally changing how it operates and delivers value to customers. This transformation is made possible by the continuous evolution of technology, which provides new opportunities for businesses to innovate and disrupt traditional industries.
Technological advancements have facilitated the emergence of disruptive innovations that challenge established norms and reshape industries. These innovations often leverage technology disruption to create new business models, products, and services that meet evolving customer needs.
Technology disruption has enabled businesses to enhance efficiency, improve customer experiences, and drive growth. It has allowed for the automation of processes, the collection and analysis of vast amounts of data, and the creation of personalized and targeted marketing strategies.
Examples of Digital Disruption in Industries
Industries across the globe have been profoundly impacted by the disruptive power of digital technologies. The following examples illustrate the significant effects of digital disruption in various industries:
- Retail: The rise of online shopping has revolutionized the retail industry, challenging traditional brick-and-mortar stores. Companies like Amazon have disrupted the retail landscape by offering convenient and personalized shopping experiences.
- Transportation: Ride-sharing apps such as Uber and Lyft have transformed the taxi industry by providing affordable and efficient transportation alternatives. These platforms have disrupted the traditional taxi business model and changed the way people commute.
- Media and Entertainment: Streaming services like Netflix and Amazon Prime Video have disrupted the traditional television and movie industry. With on-demand content and personalized recommendations, these platforms have changed the way people consume entertainment.
Digital disruption in industries has had a profound impact on business models, customer behavior, and technology. Embracing disruptive change is crucial for companies to stay competitive and thrive in the digital age. By recognizing the transformational technologies and disruptive innovation examples, businesses can adapt and evolve to meet the changing needs and expectations of their customers.
Benefits of Embracing Digital Disruption
By embracing digital disruption, businesses can unlock numerous benefits that propel them ahead of their competitors and position them for long-term success. The advantages of embracing digital disruption are vast, offering transformation opportunities, a competitive edge, business growth, and innovation potential.
One of the key benefits of embracing digital disruption is the opportunity for transformation. By adopting new technologies and innovative business models, companies can revolutionize their operations and stay ahead of the curve. This transformation allows businesses to adapt to changing customer expectations and market dynamics, ensuring their relevance in the digital age.
Embracing digital disruption also gives businesses a competitive edge. By leveraging digital technologies, companies can streamline processes, improve efficiency, and deliver better customer experiences. This enables them to differentiate themselves from competitors and attract and retain more customers.
Furthermore, embracing digital disruption opens up new avenues for business growth. By embracing digital transformation, companies can tap into new markets, reach a wider audience, and expand their offerings. This not only increases revenue potential but also allows businesses to diversify their portfolio and mitigate risks.
Finally, embracing digital disruption unleashes innovation potential. By embracing new technologies and digital strategies, companies can foster a culture of innovation and creativity. This enables them to develop new products, services, and business models that can disrupt the market and create new growth opportunities.
Challenges and Risks of Digital Disruption
Navigating the challenges and risks of digital disruption requires businesses to anticipate and adapt to the ever-changing landscape. In this era of rapid technological advancement, companies face numerous challenges and risks that can hinder their growth and success. Here are three key challenges and risks of digital disruption:
- Cybersecurity threats: With the increasing reliance on digital technologies, businesses face the constant risk of cyberattacks and data breaches. These threats can result in significant financial losses, reputational damage, and legal implications. To mitigate these risks, businesses must invest in robust cybersecurity measures, including encryption, multi-factor authentication, and regular security audits.
- Skill gaps and talent shortage: The fast-paced nature of digital disruption often outpaces the skills and capabilities of the workforce. Businesses struggle to find employees with the necessary digital skills, such as data analytics, artificial intelligence, and digital marketing. To overcome this challenge, organizations need to invest in training and upskilling programs to bridge the skill gaps and attract top digital talent.
- Resistance to change: Digital disruption requires businesses to embrace new technologies, processes, and business models. However, resistance to change can hinder the adoption of digital transformation initiatives. To mitigate this risk, organizations need effective change management strategies that involve clear communication, stakeholder involvement, and a focus on the benefits and opportunities that digital disruption can bring.
Strategies for Successful Digital Disruption
To achieve successful digital disruption, businesses must implement effective strategies that embrace innovation and adapt to the changing landscape. Implementing disruptive technologies can be challenging, but with the right strategies in place, businesses can overcome these obstacles and enhance the customer experience. By future-proofing through digital disruption, companies can stay ahead of the competition and thrive in the digital age.
Here are some key strategies for successful digital disruption:
Digital Disruption Strategies | Implementing Disruptive Technologies |
Embrace Innovation | Identify and adopt new technologies and business models that disrupt the industry. |
Foster a Culture of Agility | Encourage a mindset of adaptability and flexibility to quickly respond to market changes. |
Invest in Digital Talent | Attract and retain skilled professionals who can drive digital transformation and innovation. |
Collaborate with Partners | Form strategic partnerships with other businesses to leverage complementary capabilities and resources. |
Enhance Customer Experience | Use data analytics and customer insights to personalize and optimize the customer journey. |
Digital Disruption and Customer Experience
Enhancing the customer experience is a crucial aspect of digital disruption. In today’s rapidly evolving business landscape, companies must leverage technology and innovation to transform customer engagement and achieve customer centricity. Here are three key ways in which digital disruption is impacting customer experience:
- Role of innovation in customer experience: Digital disruption has pushed companies to think outside the box and come up with innovative solutions to meet customer needs. This includes developing new products and services, improving existing offerings, and finding unique ways to engage with customers.
- Digital disruption in the retail industry: The retail industry has been greatly impacted by digital disruption, with the rise of e-commerce, mobile shopping, and personalized experiences. Companies that embrace digital transformation are able to provide seamless online shopping experiences, personalized recommendations, and convenient delivery options, all of which enhance the customer experience.
- Impact of digital disruption on customer loyalty: Digital disruption has changed the way customers interact with brands, making it easier for them to switch to competitors if their needs are not met. However, companies that successfully leverage technology to enhance the customer experience can build strong customer loyalty. By providing personalized experiences, proactive customer service, and convenient digital platforms, companies can create a loyal customer base that is less likely to churn.
Future Trends in Digital Disruption
As we look ahead to the future of digital disruption, it is important to understand the emerging trends that will shape the business landscape. Emerging technologies and disruptive innovations are driving transformational shifts in various industries, leading to an industry revolution. These trends will have a profound impact on future markets and the way businesses operate.
One of the emerging technologies that will play a significant role in digital disruption is artificial intelligence (AI). AI has the potential to automate processes, improve decision-making, and create personalized experiences for customers. It will revolutionize industries such as healthcare, finance, and manufacturing.
Another trend to watch out for is the Internet of Things (IoT), which refers to the network of interconnected devices and objects that can communicate and exchange data. IoT has the potential to transform industries by enabling real-time data collection, improving operational efficiency, and creating new business models.
Blockchain technology is also set to disrupt various industries by providing secure and transparent transactions. It has the potential to revolutionize sectors such as finance, supply chain management, and healthcare. | <urn:uuid:4aa004fb-e882-4d66-a2c9-1a4414eaf230> | CC-MAIN-2024-38 | https://disruptionhub.com/why-is-digital-disruption-important/ | 2024-09-20T07:02:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00650.warc.gz | en | 0.929718 | 2,384 | 2.75 | 3 |
Telecommunications copper cables mainly belong to the group of outside plat cables (aerial cables, duct/buried cables, last link wire cables) and to the group of central office cables, which serve for signal transmission within buildings. Another producer describes their copper cables per usage, as such: “conventional telephone cables, RF carrier high frequency cables and xDSL cables, automation and control cables, signaling cables and special railway signaling cables, telephone switchboard cables.” Specialized manufacturers improved copper cable construction over time, yet fiber-optic cables gained ground over copper cables, since their efficiency in data transmission is considerably higher. In 2012 the Japanese from NTT showcased a speed of one pentabit per second via a special fiber that “packed 12 cores into a single strand”, marking the beginning of a new era. Telecommunications companies re-oriented versus fiber-optic wiring since it is considerably more efficient.
August 2016 saw the FCC urging carriers to upgrade their networks to fiber to the detriment of copper cables, now deemed outdated. Let’s see what would be the benefits and the pitfalls in this process, for both customers and carriers.
Fiber cable benefits and disadvantages
This type of cables comprises “single-mode, multi-mode or mixed cables, indoor cables (FR LSZH), underground ducting or direct burial cables, aerial installation cables ADSS or shape” and full dielectric cables or steel reinforcement (CSTA or SWA).
Of course, fiber being a step ahead in infrastructure for all involved in telecommunications services, it brings a suite of advantages, such as:
- Better data transmission speeds (as we have seen above);
- Increased bandwidth potential;
- Cost-saving infrastructure;
- Increased efficiency;
- Less mandatory maintenance, less incidents due to cables being sensitive to moisture or temperature fluctuations;
- Smaller size and weight of the cables, which makes them take up less space;
- Longer lifetime expectancy;
- The potential to fully support future Telco services without failing.
On the other hand, fiber-optic cables have one major fault that generated additional FCC regulations – it is incapable to withstand power outages, unlike the traditional landline.
Where copper-based phones used central office electricity in the event of an outage, fiber-based phones are dead as soon as the power goes out locally. Individual backup systems have to be set up in order to provide fiber-cable telecommunications service a minimum of functionality during outages. Mind you, this is not feasible for the cases where clients replace the old landline with wireless-based telecommunications, since the networks might also go out, leaving no signal available for the phone users. During emergencies the telephone communications are compromised. This issue generated a lot of concern and debates, as well as opposition to complete fiber adoption in some areas; Verizon reacted by calling the fiber-cable opponents in South Jersey “outdated”.
Therefore, the FCC intervened with a set of regulations destined to protect customers against the effects of copper retirement. The telecommunications companies whose tasks include making the switch have been trying to elude some of the spending on their side, since it may be cost-effective to make the transition to fiber, but it is surely a burden to provide alternative solutions for permanent connectivity to all the transitioning clients.
Another disadvantage is related not so much to the fiber-cable itself, but rather to the moment of its full-on implementation. Regions where maintaining copper is not economical, and nor is implementing fiber-optic cable risk to remain uncovered once copper cables wear out, a thing companies have been accused to intentionally allow/encourage.
The copper-to-fiber transition has also raised monopoly issues, since wireless facilities (the solution pushed in areas where cost-efficiency for new fiber is reduced), are not a must share element for giant Telco companies, therefore smaller entities find themselves excluded, unless they pay fees some of these companies claim they cannot afford.
Ethernet over copper (EoC) is envisaged (as an alternative to unfeasible fiber networks in less populated areas), by Sprint and Windstream, but the potential Verizon and AT&T cooperation in this field might “choke off enterprise business solution competition nationwide”.
Copper, fiber optic and wireless
Since regulations seem to leave a certain liberty of choice regarding which type of cables the telecommunications companies go with (while nevertheless moving on towards the most progressive available solution), a differentiated analysis on what is best for each area would be only practical and fair to the customers.
Hybrid cables also exist, yet replacing the old copper infrastructure with hybrid-based networks might prove too costly. In rural or remote areas, the already existing copper cables and their maintenance costs could in fact be the most economical way to go, when taking into consideration the need of those customers to be able to always connect to emergency services, when necessary, as well as their lack of viable alternatives.
Meanwhile, for urban, modernized and equally overpopulated areas, the most sustainable solutions are fiber-optic and wireless-based services, which are open for whatever the technology of the future might bring in terms of innovation and increased data traffic – at least for the next 4 to 9 years.
Balancing these extremes, while adding more in-between solutions that satisfy both customers as well as Telco companies and their sub-contractors, should be a quite ample activity for the near to medium future – and hopefully a successful one. | <urn:uuid:c9993b2e-3265-4743-8536-a7192dc4c813> | CC-MAIN-2024-38 | https://networkingcurated.com/editorial/what-are-the-dilemmas-in-copper-moving-out-to-let-fiber-in/ | 2024-09-20T06:43:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00650.warc.gz | en | 0.942977 | 1,129 | 2.84375 | 3 |
There might not be a word that hits harder in the workplace than “credentials.” Need to access the building? Credentials. Need to login to your machine? Credentials. Want to keep the “bad” out of anyplace they shouldn’t be? Credentials.
So, when it becomes known that there are actors out their trying to phish your company’s credentials, credential phishing prevention should be taken as serious as any other cyber threat—with the goal of protecting your brand and bottom line.
What is Credential Phishing?
Credential phishing is a type of cyber attack that involves tricking individuals into providing their login credentials, such as usernames and passwords, to unauthorized third parties. Credential phishing is typically performed through deceptive methods, such as fraudulent emails, websites, or messages that appear to be from a legitimate source, such as a trusted organization or service.
Read more about impersonation attack examples
As you might have guessed, the goal of credential phishing is to gain unauthorized access to sensitive information, such as personal or financial data, by exploiting individuals’ trust and willingness to provide their login details. Then, once attackers have obtained these credentials, they can use them to access the victims’ accounts, steal information, conduct fraudulent activities, or even launch further attacks.
Despite increasingly sophisticated filters created to defeat these schemes, credential phishing is a growing problem.
The first step toward credential phishing prevention? Understanding how attackers are going about their business.
Credential Phishing Techniques
While there are various techniques used in credential phishing attacks, one common method is email phishing, where individuals receive emails that appear to be from a reputable organization, such as a bank or online service provider, requesting them to verify their account information or reset their password. These emails often contain links to malicious websites that mimic the legitimate ones, tricking users into entering their login credentials, which are then captured by the attackers.
Another technique is known as spear phishing, which is a more targeted approach. In spear phishing attacks, the attackers gather information about their victims, such as their job title, company, or interests, and then customize their phishing messages to appear more genuine and relevant to the recipients. This digital manipulation increases the likelihood of victims falling for the scam and providing their user credentials.
A third method is vishing (voice phishing), where attackers use phone calls instead of emails to deceive victims into revealing their credentials. The attacker might impersonate a trusted entity, such as a bank representative or IT support, and create a sense of urgency to coax the victim into disclosing sensitive information. For example, they may claim there is a problem with the victim’s account that requires immediate verification of login details.
Last, smishing (SMS phishing) is another prevalent technique. In smishing attacks, cybercriminals send text messages that appear to be from legitimate sources, such as a bank or service provider. Similar to the above, these messages often include urgent requests to verify account information or follow a link to prevent account suspension. The link typically leads to a fake website where the victim is tricked into entering their login credentials, which are then harvested by the attackers.
Credential Phishing Prevention & Defense
To protect against credential phishing attacks, it is important for individuals and organizations to be aware of the signs of phishing.
Here are some best practices to follow for credential phishing prevention:
Be cautious of unsolicited emails: Be skeptical of emails that ask for your login credentials or personal details. Legitimate organizations usually do not request such information via email, so this should be an immediate red flag.
Verify the source: Before entering your login credentials on a website, ensure that the website is legitimate. Check for secure connections (https://), valid SSL certificates, and familiar domain names.
Keep software up to date: Regularly update your operating system, web browsers, and security software to protect against known vulnerabilities.
Use strong, unique passwords: Choose passwords that are long, complex, and difficult to guess. Avoid using the same password for multiple accounts. Might seem basic, but you’d be surprised!
Enable multi-factor authentication (MFA): MFA adds an extra layer of security by requiring additional verification, such as a code sent to your phone, in addition to your login credentials.
Educate employees: Organizations should provide training on phishing awareness and best practices to their employees. This includes teaching them how to identify phishing emails, what to do if they encounter one, and how to report suspicious activity.
Implement email filters and spam detection: Implementing email filters and spam detection can help identify and block phishing emails before they reach users’ inboxes.
Regularly backup data: Regularly backing up important data can help strengthen your credential phishing prevention defense. In the event of a successful attack, organizations can restore their data from a backup and minimize data loss.
Conduct phishing simulations: Organizations can conduct phishing simulations to test the awareness and response of their employees. This can help identify areas of improvement and provide targeted training.
Stay informed: Stay up to date on the latest phishing techniques and trends. Cyber criminals are constantly evolving their tactics, so it is important to stay informed and adapt security measures accordingly.
By following these best practices, individuals and organizations can significantly reduce their risk of falling victim to credential phishing attacks. And it’s not just a “one and done” approach to credential phishing prevention—implementing a multi-layered approach that combines technology, employee education, and regular security updates is crucial in protecting against this growing threat.
Bolster proactively monitors for potential threats and provides options for neutralizing those threats. From malicious app store postings, to threats on the dark web, Bolster can provide support that’s tailored to your business. Request a demo with us today to start protecting your business. | <urn:uuid:8bfbe74b-8820-41cc-80aa-d7a7882759db> | CC-MAIN-2024-38 | https://bolster.ai/blog/credential-phishing-prevention | 2024-09-09T08:40:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00750.warc.gz | en | 0.946138 | 1,237 | 3 | 3 |
A blind signature scheme is a type of digital signature that conceals the identity of the message contents and the sender.
In these schemes the sender’s message is concealed — or blinded — prior to the recipient signing it. Blind signature schemes are useful in applications where information on the sender is an important feature of the communication.
Electronic voting is a real-world example of a blind signature scheme is a properly functioning electronic voting system. In this example, the sender (voter) and the signer (recipient, voting authority) are unrelated and the sender’s personal privacy and voting preference are paramount. The signature is considered valid enough to warrant the vote being recorded with confidence and the voter remains anonymous.
Going further, should the voting authority be called upon to validate the information they received they are able to validate the message’s authenticity but unable to connect it with the sender (called unlinkability).
"Digital currency transactions on the blockchain use a blind signature scheme so that the sender and transaction detail are secure and untraceable." | <urn:uuid:71ad63b1-69be-4e57-8888-a034add18e83> | CC-MAIN-2024-38 | https://www.hypr.com/security-encyclopedia/blind-signature-scheme | 2024-09-10T14:11:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00650.warc.gz | en | 0.905995 | 215 | 3.34375 | 3 |
The database manager compares character data using a collating sequence. This is an ordering for a set of characters that determines whether a particular character sorts higher, lower, or the same as another.
The Unicode Collation Algorithm (UCA) uses weight tables to determine the collating sequence.
For example, a collating sequence can be used to indicate that lowercase and uppercase versions of a particular character are to be sorted equally.
The database manager allows databases to be created with custom collating sequences. The following sections help you determine and implement a particular collating sequence for a database.
Each single-byte character in a database is represented internally as a unique number between 0 and 255 (in hexadecimal notation, between X'00' and X'FF'). This number is referred to as the code point of the character; the assignment of numbers to characters in a set is collectively called a code page. A collating sequence is a mapping between the code point and the requried position of each character in a sorted sequence. The numeric value of the position is called the weight of the character in the collating sequence. In the simplest collating sequence, the weights are identical to the code points. This is called the identity sequence.
For example, suppose the characters B and b have the code points X'42' and X'62'.
If (according to the collating sequence table) they both have a sort
weight of X'42' (B
), they collate the same.
If the sort weight for B
is X'9E', and
the sort weight for b
is X'9D', b
will be sorted before B
. The collating sequence
table specifies the weight of each character. The table is different
from a code page, which specifies the code point of each character.
Consider the following example. The ASCII characters A through Z are represented by X'41' through X'5A'. To describe a collating sequence in which these characters are sorted consecutively (no intervening characters), you can write: X'41', X'42', ... X'59', X'5A'.
The hexadecimal value of a multibyte character is also used as the weight. For example, suppose the code points for the double-byte characters A and B are X'8260' and X'8261' respectively, then the collation weights for X'82', X'60', and X'61' are used to sort these two characters according to their code points.
The weights in a collating sequence need not be unique. For example, you could give uppercase letters and their lowercase equivalents the same weight.
Specifying a collating sequence can be simplified if the collating sequence provides weights for all 256 code points. The weight of each character can be determined using the code point of the character.
In all cases, the Db2® database uses the collation table that was specified at database creation time. If you want the multibyte characters to be sorted the way that they appear in their code point table, you must specify IDENTITY as the collating sequence when you create the database.
Once a collating sequence is defined, all future character comparisons for that database will be performed with that collating sequence. Except for character data defined as FOR BIT DATA or BLOB data, the collating sequence will be used for all SQL comparisons and ORDER BY clauses, and also in setting up indexes and statistics.
- An application merges sorted data from a database with application data that was sorted using a different collating sequence.
- An application merges sorted data from one database with sorted data from another, but the databases have different collating sequences.
- An application makes assumptions about sorted data that are not true for the relevant collating sequence. For example, numbers collating lower than alphabetics might or might not be true for a particular collating sequence.
A final point to remember is that the results of any sort based on a direct comparison of character code points will only match query results that are ordered using an identity collating sequence.
In a Unicode database, graphic data is sorted using the database collation mechanism. In a non-Unicode database with SYSTEM collation, the graphic data is collated based on the weight of each byte. | <urn:uuid:29149b20-604e-4970-9eed-6fcbbd5ccdf5> | CC-MAIN-2024-38 | https://www.ibm.com/docs/en/db2/11.5?topic=collation-collating-sequences | 2024-09-11T21:32:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00550.warc.gz | en | 0.916357 | 903 | 4.03125 | 4 |
At the most basic level, the Internet consists of interconnected networks that communicate using standard protocols such as the Border Gateway Protocol (BGP) and the Domain Name System (DNS). As such, it is built on trust or an honor system—trust that routing requests received from another network are valid, and the traffic sent in response to requests is legitimate.
Far from this system of trust that the Internet runs on is the reality that end-users face, however. Too many threat actors lurk behind requests and traffic, all of whom are looking for weaknesses that they can use as points of entry into target networks.
Phishing, for instance, works by abusing the trust that users place on reputable organizations. And so threat actors pretend to be employees or officials of these organizations, leading users to visit fake websites. The goal is to inject malware into their computers or extort money.
Anyone Can Pretend to Be Part of the Government
Government agencies, surprisingly, are not safe from impersonation. Even the .gov top-level domain (TLD) reserved for government agencies can easily be used, as Brian Krebs revealed. Kreb’s informant claimed to have successfully registered exeterri[.]gov by falsifying the documents required by the U.S. General Services Administration—the agency that oversees the registration process of all .gov domains. The goal was to impersonate a Rhode Island town’s website, which uses the .us TLD.
While the informant claimed he didn’t abuse the said domain and only went through the registration process as an experiment, his success tells us that any cyber attacker could quickly obtain a .gov domain and use it in phishing attacks.
Our Investigative Tools: Email Verification API, Reverse MX API, and Others
Pretending to be part of the government is not a new attack method. In 2015, a Twitter user posted a screenshot of a fake subpoena from a U.S. District Court.
Notice that the sender’s email address is subpoena@www1[.]united-usa[.]org. To learn more about the email address, we ran the said address on Email Verification API. We found out that its mail exchanger (MX) record points to phishguru[.]com.
We then ran the MX record on the Reverse MX API to determine all of the domains that used the same mail server and found 20 of them.
The domain united-usa[.]org is among the results returned, along with others spanning different fields such as:
While we did not dig further into the connected domains, any of them could be easily used for phishing campaigns.
The underlying question, though, is who is behind phishguru[.]com. Are they threat actors? To find out more, we ran the domain on WHOIS Search but found that its registrant details have been redacted, so we could only see that its registrar is Instra Corporation.
We then used WHOIS History Search to see the domain’s registrant details before the redaction. We found that its registrant from 2012 to 2018 is a company called Wombat Security Technologies.
The company develops security training software for end-users, and PhishGuru is one of its products. PhishGuru, according to a quick web search, trains employees to spot phishing attacks using mock attempts. The suspicious-looking domains may thus be part of their training resources.
* * *
Not all phishing emails are part of a training simulation, however; the vast majority belong to actual attacks. In fact, around 1.5 million new phishing domains are created every month, so you never know whether to trust a domain unless you thoroughly check its background.
Investigations into potential phishing attacks are made possible with the help of domain verification and research tools such as Email Verification API, Reverse MX API, WHOIS Search, and WHOIS History Search.
NAT, which is critical to the IPv4 networks we still use today, has been hotly debated as IPv6, with its far larger address space, gains ground. However, since the transition to IPv6 is not yet complete, NAT still has a place. This article defines NAT, explains how it works, and shows why we need it.
What Is NAT?
NAT, short for network address translation, is the method a firewall or router uses to represent the devices on a private network with public addresses.
It translates the private IPv4 addresses used on internal networks into public IPv4 addresses that can be routed over the internet. Private addresses are assigned to local devices such as computers, game consoles, phones, and fiber switches so they can communicate with the modem/router and with one another on the same network. A home connection, however, typically has only a single public IP address for internet access. NAT therefore translates the address of every device behind the router into that public address at the gateway, allowing all of them to reach the internet.
NAT: Why Do We Need It?
Suppose you have three PCs, a gigabit Ethernet switch connecting six PCs, a 10-gigabit switch connecting six more, a smartphone, and two iPads, and all of them need to be online at the same time. Without NAT, each would need its own internet-routable IP address. Given the scarcity of IPv4 address space, it is hard to accommodate the massive number of devices we use every day that way. Network address translation, proposed in 1994, became a popular and necessary response to IPv4 address exhaustion by presenting all internal devices to the outside world behind a single shared public address. Together with its extension, port address translation (PAT), NAT conserves IP addresses.
Security is another concern when accessing the external internet, and NAT partly addresses it by acting as a strict gatekeeper for access to resources on both sides of the firewall. Outside attackers cannot directly reach hosts on the internal network, and internal information cannot flow to the outside world casually.
How Does NAT Work?
A NAT router maintains a table of pairs of local IP addresses and globally unique addresses. It translates local addresses to global addresses for outgoing traffic and does the reverse for incoming traffic. All of this is done by rewriting the headers of data packets so that each packet carries the correct IP address to reach its proper destination.
There are generally two types of NAT: dynamic and static.
In dynamic NAT, inside local addresses on the internal network are mapped to global addresses drawn from a pool. When a host wants to reach the internet, the router assigns it an available public IPv4 address for the duration of the session.
In static NAT, an internal local address is mapped one-to-one to a global address, typically so that hosts on public networks can reach a device on the internal network.
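To make the table-and-rewrite mechanism concrete, here is a toy Python model of a NAT/PAT translation table. It is illustrative only, not a real packet-processing implementation, and all addresses and port numbers are made-up examples:

# The router's single public address (example value).
PUBLIC_IP = "203.0.113.10"

nat_table: dict[tuple[str, int], int] = {}      # (private ip, port) -> public port
reverse_table: dict[int, tuple[str, int]] = {}  # public port -> (private ip, port)
next_port = 40000

def translate_outgoing(src_ip: str, src_port: int) -> tuple[str, int]:
    """Rewrite a private source address to the shared public one."""
    global next_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_port      # allocate a fresh public port
        reverse_table[next_port] = key
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_incoming(dst_port: int) -> tuple[str, int]:
    """Map a reply arriving at the public address back to the private host."""
    return reverse_table[dst_port]

# A host at 192.168.1.20 opens a connection from port 51000:
print(translate_outgoing("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
# The reply addressed to public port 40000 goes back to that host:
print(translate_incoming(40000))                  # ('192.168.1.20', 51000)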
In short, until the transition to IPv6 is complete, NAT keeps internet access working smoothly no matter how many devices you have. Knowing what it is and how it handles network addresses will give you a clear understanding of it so that you can make good use of it.