text stringlengths 234 589k | id stringlengths 47 47 | dump stringclasses 62
values | url stringlengths 16 734 | date stringlengths 20 20 ⌀ | file_path stringlengths 109 155 | language stringclasses 1
value | language_score float64 0.65 1 | token_count int64 57 124k | score float64 2.52 4.91 | int_score int64 3 5 |
|---|---|---|---|---|---|---|---|---|---|---|
Table of Contents
BGP Confederations is an another mechanism to avoid iBGP Full Mesh topology like Route Reflectors. As a basic definion, BGP Confederaition mechanims is Autonomous Systems (AS) in Autonomous Systems (AS). In other words, in this BGP mechanism, there are Sub Autonomous Systems inside the iBGP Topology. The top Autonomous System is still exist and the new Sub ASs are connected to this AS and form BGP Confederation.
BGP Confederation Sub ASs aer created with the Private AS Numbers between 64512 and 65535.
With BGP Confederations, BGP Autonomous System can be divided into small Autonomous Systems. This reduces the connection number that can be evry high without using Confederations and Route Reflectors. So, the confederation mechanism in BGP is very important especially for the large networks.
In the Autonomous System, there can be one more Sub Confederations. These Confederations can connect together like eBGP neighbors. And the routers inside any Confederation can exchange routes like they are iBGP neighbors. And routers in SubASs should be connected as Full Mesh.
Next Hop, Local Preference and Metric values ara preserved in this mechanism. Only AS Path is changed. Although the mechanims is like this inside the top Autonomous System, from the outside of AS, BGPConfederation that has different Sub ASs, seems as one Autonomous System (AS).
In the BGP Confederation mechanism, we should use a Loop Preventation mechanism to avoid Routing Loops. In Confederations, to avoid Routing Loops, AS Path Path Attribute is used.
BGP AS Path Attribute is a Well Known Mandatory Path Attribute. Basically it has a list consist of AS Number of the originating router and the routers it traverse during its destination. From the beginning to the destination, at every AS, AS Number is prepended to this AS List.
Normally two parameters are used with AS Path. These AS Path parameters are given below:
AS_SET is the list of ASs that is unordered and AS_SEQUENCE is the list of ASs as ordered.
With Confederations and Sub ASs, two additional parameters are added to AS Path attribute of BGP. These new parameters are : | <urn:uuid:a6269f4e-c1b3-4c4c-abdd-9ef868f3ae10> | CC-MAIN-2024-38 | https://ipcisco.com/lesson/bgp-confederation/ | 2024-09-20T15:02:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00105.warc.gz | en | 0.890668 | 480 | 3.34375 | 3 |
There are currently believed to be seven billion internet of things (IoT) devices worldwide. Forecasts vary, but the consensus is the number will grow exponentially over the coming years, with some estimates as high as 40 billion connected IoT devices by 2025.
Rob Scammell, Technology reporter at GlobalData, noted: “Security in IoT devices has repeatedly been shown to be lacking: from a vulnerable child location-tracking watch to office printers at risk to Russian cyberattacks.
“Often, this is as simple as device owners failing to change the password from a weak factory setting. In the race to get products to market ahead of competitors, security is also often an afterthought.
“The ever-growing number of IoT devices, in combination with this lax security, is a perfect storm for cyberattacks.”
The proliferation of “stupid” internet-connected smart devices will be the “IT asbestos of the future”, cybersecurity expert Mikko Hyppönen has warned in an interview with GlobalData’s Verdict website:
“Asbestos was such a great innovation. It looked like a miracle material, originally,” explained Hyppönen, chief research officer at Finnish cybersecurity firm F-Secure.
Hyppönen draws parallels between the rampant use of cancer-causing asbestos in the 1960s and 1970s to the cybersecurity risks that come with the explosion of smart devices worldwide today.
“Such a great innovation, which then decades later turned out to be the worst innovation.
“What’s happening right now, around us, I guess would be characterised as IT asbestos.”
“We are currently in the early stages of this revolution, but eventually anything that uses electricity will be online.
“So this is going to happen, whether we like it or not. Everything will become a computer and right now this seems like an excellent idea, to many of the companies in this business.
“This is not the first time technology has taken us in the wrong direction. So I think this is dangerous. It’s very dangerous for our privacy. It’s dangerous for our security.
“This is going to be the IT asbestos of the future. This is what our kids will hate us for.”
“As connectivity becomes cheaper and cheaper, eventually, it’s not going to be just smart things going online, it’s going to be stupid things. I’m actually much more worried about stupid things online than smart things.”
He gives the example of smart toasters and fridges – “things consumers don’t really need to be online”.
“For tech companies this data will be valuable – the time you toast, your favourite settings, how many people are making toast around the world, the country that makes the most toast, and so on. However, there is an asymmetry in value for the consumer and for the company and when the security risks are factored in, it becomes a pretty bad deal for consumers.” | <urn:uuid:d8551e40-5318-42c9-83af-5616aa622ebe> | CC-MAIN-2024-38 | https://drasticnews.com/iot-devices-could-be-asbestos-of-the-future/ | 2024-09-08T10:27:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00305.warc.gz | en | 0.965487 | 643 | 2.6875 | 3 |
‘Phishing’ is an attempt by fraudsters to ‘fish’ for your personal / financial / investment details via email.
‘Phishing’ attempts usually appear in the form of an email appearing to be from the very known company to gain readers trust. Within the email you are then usually encouraged to click a link to a fraudulent page designed to capture your details.
While some e-mails are very easy to identify as fraudulent due to poorly designed and bad grammar, others may look legitimate source. However, you should not rely on the name or address in the “From” field alone, as this can be easily manipulated. For example, look at the below image, which looks to arrive from PayPal asking you to take some action.
However, one must be very careful about the link destination i.e. if you hover on the link then it would be enough for you to identify authenticity of this email.
Although it can be difficult to spot, ‘phishing’ emails generally ask you to click on a link which takes you back to a spoof web site that looks similar to the one mentioned in email, wherein you are asked to provide, update or confirm sensitive personal information. To prompt you into action, such emails may signify a sense of urgency or threatening conditions.
The information most commonly sought through such means can be:
Website spoofing is the act of creating a website, as a hoax, with the intention of performing fraud. To make spoof sites seem legitimate, phishers use the names, logos, graphics and even code of the actual website. They can even fake the URL that appears in the address field at the top of your browser window and the Padlock icon that appears at the bottom right corner.
Fraudsters send e-mails with a link to a spoofed fraudulent website asking you to update or confirm account related information in the email only and submit. These emails also direct you to fraudulent Web sites and pop-up windows and try to collect your personal information.
This is done with the intention of obtaining sensitive account related information like your User ID, Password, bank details, etc.
One way to detect a phony Web site is to consider how you arrived there. If you type, or cut and paste, the URL into a new Web browser window and it does not take you to a legitimate Web site, or you get an error message, it was probably just a cover for a fake Web site.
Below are two most important points to help to identify a genuine website | <urn:uuid:2fcc8319-718d-4a64-9523-cef0e087d759> | CC-MAIN-2024-38 | https://www.adwebtech.com/phishing-and-spoofing-your-guide-to-protect-against-them/ | 2024-09-10T19:53:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00105.warc.gz | en | 0.945843 | 518 | 2.9375 | 3 |
Meet the Father of Cloud Computing
Cloud computing is reshaping the technology world. A genuinely innovative medium of the computing world, enabling leaps of discovery and innovation. It is easy to forget where it all started when dealing with something of that mass. Let us take a trip to where it all started and meet the father of cloud computing.
Who is the father of cloud computing
Doctor Joseph Carl Robnett Licklider was the pioneer and formulated the earliest grasp on the global networking idea. Time after time, he showcased great mastery of the science of computing. Well respected and revered, Licklider had far-sighted ideas ahead of his time. He is responsible for the foundation of many features of our current day internet. From user-friendly interfaces to cloud computing and all that’s in between, Licklider never failed to deliver the most extraordinary innovation.
Additionally, in 1963 while assuming the role of director at the U.S department of defense advanced research projects agency (DARPA), Licklider struck gold. Through an imposing and detailed plan to challenge the time-sharing network and communication as we know it, Licklider created ARPAnet. Before his work, People thought of computers as just machines to finish Mathematical chores and tasks, but throughout his career, Licklider unlocked the communication aspect of computers.
Finally, in the late 1960s, Licklider envisioned a world in which everyone was connected to a grid of computers—granting people the ability to access programs and data regardless of location. ARPAnet is considered the predecessor of the internet, and it served as the foundation for the internet we know today.
What Did the Father of Cloud Computing Create?
Besides laying the foundation of cloud computing and the internet, the ideologies and theories that Joseph Carl Robnett Licklider developed concerning computing and information technology. With his ideas on creating a galactic network, he also contributed to funding the creation of the systems that gave rise to modern computers and the foundation for the internet. LickLader envisioned the future of what is conceivable and defined what would be required to achieve it at a time when computers needed analog controls and assumed the entire room. Even among the first academics to make predictions about the development of artificial intelligence
Duplex Pitch Perception Theory
Furthermore, Licklider published this article in 1951 for a project with the Department of Defense. The theory and how it is precieved is described by the article. This paper describes the earliest known recording of binaural audio recognition using mathematical theory and the fundamentals of sound. This paper cemented J.C.R. Licklider’s reputation as a recognized expert in psychoacoustics.
The Intergalactic Computer Network’s Members and Affiliates, a memorandum
Finally, we have the famous memo sent to all the localized small research organizations across the nation. It benefited from funding and assistance from ARPA’s IPTO. The memoires outlined early cloud storage and distributed network applications across what he called a galactic network. It also hypothesized the requirement for global interoperability. He also added the need for higher-level computer languages, issues with managing memory, and the need for creating international norms and standards.
His last chapter
The father of cloud computing lived a long and fruitful life. Academically eventful life, he was loyal to his wife, Alberta Louise Carpenter. The couple had two children, and after a long life-giving to the world. innovating technology with each effort he could, Licklider passed away in 1990. The same year, ARPAnet, his most prized invention, was shut down. Even though it is a point of contention as to who coined the name or even who was an influential figure, it is safe to say that Licklider is the father of cloud computing as we know it. The internet today stands on the ruins of his creation.
Lastly, the world of cloud computing is vast and offers many opportunities to gain experience and achieve greatness. From cloud storage to cloud gaming, entertainment and efficiency combine to form endless possibilities. The world is currently trying to get the best out of cloud computing, and hopefully, it is safe to say it is succeeding. There are drawbacks and even skeptics, but it is a healthy incentive to push great minds like Licklider to do more for the community and the world.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Community section to stay informed and up-to-date with our daily articles. | <urn:uuid:35f77a9e-9fc6-41d9-a162-029d7bd29055> | CC-MAIN-2024-38 | https://insidetelecom.com/meet-the-father-of-cloud-computing/ | 2024-09-12T02:05:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00005.warc.gz | en | 0.956921 | 945 | 2.8125 | 3 |
Table of contents
Life before document scanning seems almost alien in our current age, doesn't it? From heaving stacks of papers to sleek digital records, the journey has been nothing short of revolutionary. Join us as we flip through the pages of history, decipher the present-day scenario, and peek into what the future might hold.
Evolution of Document Scanning
The good ol' days: an era of ink-stained hands and weighty tomes. But then, technology happened.
The Humble Beginnings
It might boggle your mind, but the concept of scanning dates back to the early 1860s, with the invention of the pantelegraph. It was a contraption straight out of a Jules Verne novel, transmitting images over telegraph lines. The advent of the fax machine in the 20th century gave a push to the notion of 'sending' documents electronically.
Enter Digital Scanning
Fast forward to the late 20th century, and the digital revolution was afoot. Early scanning systems, bulky and costly, were the reserve of elite corporations. As technology sprinted ahead, these devices became more accessible, and suddenly, a paperless office wasn't just a pipe dream anymore.
Today’s Document Management
In today's age, document scanning services have become the backbone of efficient enterprises. A blend of AI, Optical Character Recognition (OCR), and cloud storage has made data retrieval, sharing, and management as easy as pie. Organizations are leaning into this trend, leveraging these technologies to drive efficiency and security.
A Glimpse into the Future
So, where's this journey headed next? Think Augmented Reality (AR) for instant document overlays. Ponder over quantum computing bolstering the speed and storage capacities. The horizon is shimmering with possibilities of instant data transfer, holographic displays, and perhaps even brain-computer interfaces. The sky's the limit, folks!
Now, if you're looking to stay ahead of the curve and integrate modern scanning solutions, a top-tier document scanning company is what you need.
FAQs: Quenching the Curiosities
How did businesses manage data before the age of document scanning? Good ol' physical storage! Think file cabinets, storage rooms, and yes, the occasional lost document saga.
What's OCR and why's it a big deal? OCR stands for Optical Character Recognition. It translates scanned images into editable and searchable data. In short, it's a game-changer!
Is there a notable difference between the early scanners and today's devices? Night and day! Earlier devices were bulkier, pricier, and less efficient. Modern scanners? Compact, affordable, and lightning fast.
Why are businesses today opting for digital formats? It's simple – space-saving, cost-effective, eco-friendly, and enhances data accessibility and security.
What role does AI play in modern document scanning? AI enhances accuracy, automates categorization, and even predicts storage needs. Talk about smart, eh?
Are paper documents becoming obsolete? Not entirely. But with the digital wave, they're certainly taking a backseat.
The realm of document management has evolved, adapting and morphing with each technological stride. From rudimentary image transmissions to sophisticated document scanning services, this journey reflects humanity's quest for efficiency and innovation. With such a vibrant past and present, one can only anticipate a future brimming with potential. Ready to be a part of this future? DocCapture is here to guide you on this exciting path.
You May Also Like
These Related Stories | <urn:uuid:2b47aba6-23ac-45db-b935-3c4df1ddf540> | CC-MAIN-2024-38 | https://www.documentscanning.ai/blog/the-evolution-of-document-scanning-past-present-and-future | 2024-09-15T16:19:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00605.warc.gz | en | 0.930248 | 736 | 2.671875 | 3 |
Hello Friends, today we will learn about Sniffing network traffic ? So, Have you ever sniffed a network before or do you know what that is , don’t worry because you are going to learn a lot in this article about it. Which is going to be really valuable later on. Another popular technique that can be used to gain access to systems is network sniffing. Sniffing is the process of capturing and viewing traffic as it is passed along the network. Several popular protocols in use today still send sensitive and important information over the network without encryption. Network traffic sent without using encryption is often referred to as clear text because it is human readable and requires no deciphering. Sniffing clear text network traffic is a trivial but effective means of gaining access to systems. Before we begin sniffing traffic, it is important that you understand some basic network information. The difference between promiscuous mode and non-promiscuous network modes will be discussed first.
By default most network cards operate in non-promiscuous mode. Non-promiscuous mode means that the network interface card (NIC ) will only pass on the specific traffic that is addressed to it. If the NIC receives traffic that matches its address, the NIC will pass the traffic onto the CPU for processing. If the NIC receives traffic that does not match its address, the NIC simply discards the packets. In many ways, a NIC in non-promiscuous mode acts like a ticket taker at a movie theatre. The ticket taker stops people from entering the theatre unless they have a ticket for the specific show. Promiscuous mode on the other hand is used to force the NIC to accept all packets that arrive. In promiscuous mode, all network traffic is passed onto the CPU for processing regardless of whether it was destined for the system or not.
In order to successfully sniff network traffic that is not normally destined for your PC, you must make sure your network card is in promiscuous mode. You may be wondering how it is possible that network traffic would arrive at a computer or device if the traffic was not addressed to the device. There are several possible scenarios where this situation may arise. First any traffic that is broadcast on the network will be sent to all connected devices. Another example is networks that use hubs rather than switches to route traffic. A hub works by simply sending all the traffic it receives to all the devices connected to its physical ports. In networks that use a hub, your NIC is constantly disregarding packets that do not belong to it. For example, assume we have a small 8-port hub with 8 computers plugged into the hub. In this environment when the PC plugged into port number 1 wants to send a message to the PC plugged into port number 7, the message (network traffic) is actually delivered to all the computers plugged into the hub.
However, assuming all the computers are in non-promiscuous mode, machines 2–6 and 8 simply disregard the traffic. Many people believe you can fix this situation by simply swapping your hubs with switches. This is because unlike hubs that broadcast all traffic to all ports, switches are much more discrete. When you first plug a computer into a switch, the MAC address of the computer’s NIC is registered with the switch. This information (the computer’s MAC address and switch’s port number) is then used by the switch to intelligently route traffic for a specific machine to the specific port. Going back to your previous example, if a switch is being used and PC 1 sends a message to PC 7, the switch processes the network traffic and consults the table containing the MAC address and port number. It then sends the message to only the computer connected to port number 7. Devices 2–6 and 8 never receive the traffic.so there you have it, I hope this can get you started you can research on tutorials and stuff and work some network sniffing projects. | <urn:uuid:4bb55598-f630-4c39-bbfe-6e7877a05f17> | CC-MAIN-2024-38 | https://www.hackingloops.com/tag/how-to-sniff-network-traffic/ | 2024-09-16T23:30:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00505.warc.gz | en | 0.949392 | 803 | 2.875 | 3 |
Vulnerabilities are the anomalies such as programming errors or configuration issues of the system. Attackers exploit the weaknesses in the system and can, in turn, disrupt the system. If these vulnerabilities are exploited, then it can result in the compromise of confidentiality, integrity as well as the availability of resources that belong to the organization.
Table of Content
How Can We Detect and Prevent These Vulnerabilities?
Vulnerability assessment is the risk management process that defines, identifies, classifies, and prioritizes vulnerabilities within computer systems, applications as well as network infrastructures. This helps the organization in conducting the assessment with the required knowledge, awareness, and risk posture for understanding the cyber threats. Vulnerability assessment is conducted in two ways.
Types of Vulnerability Assessment
Automated tools such as Vulnerability scanning tools scan applications to discover cyber security vulnerabilities. These include SQL injection, Command Injection, Path Traversal, and Cross-Site scripting. It is a part of Dynamic Application Security Testing that helps in finding malicious code, application backdoors as well as other threats present in the software and applications.
Manual testing is based on the expertise of a pen-tester. They are the experts that dive deep into the infrastructure that will help them in finding out the vulnerabilities that cyber attackers can exploit.
Different Types of Manual Testing
Following are the types of vulnerability assessment and penetration testing:
Application Security Testing
It is the process of testing and analyzing a mobile or web application. This methodology helps pen-testers in understanding the security posture of websites and applications.
The application security testing process includes:
- Password quality rules
- Brute force attack testing
- User authorization processes
- Session cookies
- SQL injection
- Server Security Testing
Servers contain information including the source code of the application, configuration files, cryptographic keys as well as other important data. Pen-testers perform an in-depth analysis of the server in the server security testing. Based on this analysis, testers perform an approach to mimic real-time cyber attacks.
- Infrastructure Penetration Testing
Infrastructure penetration testing is a proven method to evaluate the security of computing networks, infrastructure as well as the weakness in applications by simulating a malicious cyber attack.
- Cloud Security Testing
Every organization that keeps its platforms, customer data, applications, operating systems as well as networks over the cloud; must perform cloud security testing. Cloud security is essential for assessing the security of the operating systems and applications that run on the cloud. This requires equipping cloud instances with defensive security controls and regular assessment of the ability to withstand cyber threats.
- IoT Security Testing
With our increasing engagement with technology, we are becoming more advanced in incorporating technology with things that we use on a daily basis. Pen-testers are aware of the complexities and how cyber criminals exploit them.
IoT penetration and system analysis testing considers the entire ecosystem of IoT technology. It covers each segment and analyses the security of the IoT devices. The testing services include IoT mobile applications, communication, protocols, cloud APIs as well as the embedded hardware and firmware.
Which is the Better Method of Vulnerability Assessment?
Manual vulnerability assessment is better than vulnerability scanning tools since automated tools often give false results. This can seriously hamper the process of vulnerability assessment. Although automated tools make the assessment process faster and less labor-intensive, the tools are not capable of identifying vulnerabilities.
This can be far better done by observant pen testers who use systematic technology with years of experience. Manual vulnerability assessment requires time but, it is far more effective and accurate than vulnerability scanning tools. The reason behind preferring manual assessment is the lack of an in-depth understanding of the system to discover vulnerabilities. Therefore, it is always better to consult a leading cyber security company for investing in VAPT services that can help you strengthen your organization’s security infrastructure.
Turn Your Employees Into A Cyber Threat Shield
Make your employees proactive against prevailing cyber attacks with ThreatCop!
Content Marketer and Team Leader | <urn:uuid:f0dd006f-a8ae-439e-a813-789c8dc60ddd> | CC-MAIN-2024-38 | https://kratikal.com/blog/vulnerability-assessment-importance/ | 2024-09-19T10:07:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00305.warc.gz | en | 0.928234 | 818 | 3.40625 | 3 |
The struggle within the educational system in the United States is significant, with growing classroom sizes, a shortage of staff, and widespread teacher burnout. National assessments indicate many students don’t meet academic expectations, suggesting systemic issues in education. However, Artificial Intelligence (AI) offers a promising solution, capable of personalizing learning to each student’s unique needs, and potentially addressing the inadequacies of traditional educational practices.
The Crisis in Education and the Call for Innovation
The call for educational innovation has never been more critical. Teachers and resources can’t keep up with the unique needs of individual students, and the traditional educational models fail to offer the required support, leaving many students at risk. Technological advancements and customized teaching methods are being eyed as the key to a much-needed paradigm shift.
The Limitations of Traditional Education Models
Traditional one-size-fits-all teaching methods are being replaced by the growing awareness that individualized instruction is necessary for effective learning. However, the optimal one-on-one teaching model is difficult to implement due to financial, logistical, and temporal constraints, leading to a growing demand for innovative solutions that can address individual learning needs.
AI as a Catalyst for Tailored Education
Artificial Intelligence is becoming instrumental in creating customized learning experiences. Through AI, educational content and methodology can be tailored to the pace and style of each student, allowing for more effective teaching and learning processes. This new era of personalized education offers a solution to the challenges faced by conventional methods.
Insights from the Stanford University AI+Education Summit
At the Stanford University AI+Education Summit in 2023, experts discussed how AI could overhaul curriculum design and assessments. AI’s potential to help fine-tune learning paths was a key topic, suggesting a future where teachers can shift their focus towards deep, meaningful interactions with students rather than administrative tasks.
AI-Driven Creativity in Learning
AI’s ability to adapt educational material to match student interests presents a unique opportunity for creativity in learning. For example, converting study materials into music can significantly enhance student engagement and knowledge retention, showcasing the vast potential for personalized learning experiences that AI can offer.
The Human Touch in a Technologically Advanced Classroom
Despite AI’s advancements, it cannot replicate human attributes such as empathy and motivation, which are crucial in teaching. The role of educators transcends instruction—teachers inspire and motivate, proving that the essence of teaching lies in human connection. The technology should complement, not replace, the educator’s role to ensure that the educational experience remains rich and personal.
Preparing Students for a Technologically Fluent Future
Integrating AI into education prepares students for a future where technology prevails. This step is pivotal, analogous to the impact of the internet on information access. Introducing students to AI equips them with essential technological skills for the workforce, creating a tech-fluent generation.
Balancing AI Potential with Educational Values
As AI reshapes education, we must ensure it aligns with the fundamental principles of the field. AI should augment rather than substitute the invaluable human elements that educators bring. Striving for balance ensures that technology enhances the educational process, with teachers continuing to play a pivotal role. By prudently leveraging AI, we can craft an educational system that combines the best of technology with the noble essence of teaching. | <urn:uuid:9b2d6794-b435-49c5-ba6e-be17d88f5ef2> | CC-MAIN-2024-38 | https://educationcurated.com/education-management/can-ai-revolutionize-personalized-learning-in-schools/ | 2024-09-15T19:45:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00705.warc.gz | en | 0.934033 | 684 | 3.3125 | 3 |
The venerated BBC World Service recently commissioned a polled involving more than 27,000 people across 26 countries. The findings are unremarkable: some 87% of Internet users believe that Internet access should be a basic right, and more than 70% of non-users believe that they should have access to it.
Depending on your country, the Internet has been available for ten years or more, and for individuals—at least in the developed world—it has since become ingrained in psyches as an essential commodity, akin to access to fixed-line telephony, electricity and potable water. For a growing number, the Internet is essential for work, for a greater number it is the first port of call for problem solving and information (Wiki and online Yellow pages come to mind) or getting things done (banking, finding out timetables for travel, etc). Most governments, too, now take the Internet as a key component of infrastructure, crucial to a nation’s future socio-economic potential.
What governments may do with the Internet is another matter. A decade’s experience and use of the service has enabled a growing number of governments to manhandle the potential dangers of hacking, fraud and privacy as a means to tighten the screws on their own control of access, and of their nationals’ use of it. This is rightly opposed by the users themselves, over half of whom surveyed for the BBC believing that no government should be empowered to regulate the Internet.
In Europe, the ‘three strikes rule’ threatens to become more fashionable, following measures first proposed in France: there, the Création et Internet Bill failed in 2009 when France’s Conseil Constitutionnel ruled that it leaned too much to ‘guilty until proven innocent’ and that it threatened major sanctions (Internet disconnection and a national blacklist on access) without judicial oversight. Nevertheless, the government shoehorned the Bill a second time, which this month came before the National Assembly for debate.
The Bill proposes that the scheme be administered by a newly formed group called HADOPI. ISPs notified about alleged file-sharing would be required to send an e-mail to the customer involved, a registered letter at the second alleged offence and, for a third offence, terminate access for up to a year. A database managed by HADOPI could presumably prevent blocked users from switching ISPs.
Italy looks like adopting a similar approach. Having in 2009 sued the Swedish The Pirate Bay site and attempted to force ISPs to block access to its content, the more recent charging of Google executives with criminal charges resulting from YouTube content denotes a government leaning towards authoritarianism regarding the Internet. The Italian three-strikes proposal would be complemented by a requirement that all blogs register with the government.
In the UK, meanwhile, the government is pushing through its controversial Digital Economy Bill, which proposes empowering regulators to disconnect or slow down Internet connections of persistent illegal file-sharers. Amendments to the Bill passed this month at the report stage at the House of Lords before its third and final reading in the House of Commons, could in theory force sites such as YouTube which host copyright-infringing material to be blocked or forced offline. The UK’s three-strikes rule is similar in its essentials to those of France and the UK, with disconnection following two warnings.
At the European Union level, the European Parliament was initially critical of the three-strikes schemes, largely due to the absence of judicial review. However, this month the Anti-Counterfeiting Trade Agreement (ACTA) was put forward for debate between the US, the EC, Japan, Switzerland, Australia, New Zealand, South Korea, Canada and Mexico. Aimed at preventing online counterfeiting, it threatens to punish ISPs for content delivered.
Polls show the sincerity of popular regard for a free Internet, and suggest that to tackle piracy other solutions than blocking ISPs and throwing citizens offline should be considered. Until they are considered, citizens should, as always, be vigilant about what their governments are legislating, lest they find themselves with a thoroughly policed Internet far removed from what they now know it to be. | <urn:uuid:884d0695-ba45-400a-9025-4326c8de614a> | CC-MAIN-2024-38 | https://circleid.com/posts/20100311_the_free_internet_in_jeopardy | 2024-09-17T01:15:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00605.warc.gz | en | 0.954128 | 848 | 2.59375 | 3 |
The concept of gamification – the principles and theories behind playing games – has been central to the swift progression of virtual training. Gamification techniques are now being applied to real-world industrial learning use cases. One of the most notable international gamification successes was the smash-hit Pokémon Go in 2016.
In the last five years, augmented reality has been adapted to industries as varied as oil plants and marine yards, alongside other gamification technologies such as virtual reality. Within these industrial environments, gamification helps simplify the transfer of complex knowledge for field and plant operators or to general staff.
In the future, the single biggest game-changer for virtual training will be an extended reality experience created with a potent and realistic combination of artificial intelligence, virtual reality, augmented reality and mixed reality, powered by the cloud.
Extended reality is a catchall term referring to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables.
The global success of Pokémon Go demonstrated how users are far more engaged when immersed in contextual gameplay. In the industrial world, the very same principles apply for improving trainee interactivity, retention and engagement.
When artificial intelligence is fused into the gamified training process, it is possible to leverage the principles of gamification in real time. However, it is still vital that the learning system is closely mapped to the company’s actual processes.
In an industrial learning sense, it is now possible to familiarise yourself with the digital asset and see the real-time outcomes of your actions as-you-go in case of malfunction or failure. This type of training will only become more immersive as nascent virtual reality and augmented reality technologies become further actualised by developments in artificial intelligence.
What’s more, entire teams are learning simultaneously – as well as learning how to communicate with each other on the job. This type of collaboration will only accelerate as extended reality technology advances.
By leveraging extended reality, organisations can simultaneously run employee-training programmes while the plant is being constructed. This way, the plant is up-and-running as soon as it is opened, saving on valuable and costly downtime. Entire teams can be trained up during the construction process, while also learning to work together in a replicated virtual environment.
Augmented reality can help guide new hires through step-by-step procedures while the plant operates normally – again, saving on idle plant time. Extended reality makes it possible to simulate several tasks and processes overlaid over real data.
Most 3D visualisation takes place by using overlay apps over tablets or monitors. In the future, the human machine interface HMI will be formed of wearable devices. This means there will be no time lag and the user will be able to place multiple data streams in context.
Trainees will be able to use their normal vision to see live complex data streams in 3D vision, as well as connect with others in the same domain. In the coming years, we will be able to view multiple complex data sets through new and exciting lenses.
Most global extended reality adoption will take place in the next five to ten years.
- The global success of Pokémon Go demonstrated how users are far more engaged when immersed in contextual gameplay
- Gamification techniques are now being applied to real-world industrial learning use cases.
- When AI is fused into the gamified training process, it is possible to leverage principles of gamification in real time.
- By leveraging extended reality, organisations can run employee-training programmes while the plant is being constructed.
The single biggest game-changer for virtual training will be extended reality created with artificial intelligence powered by the cloud. | <urn:uuid:770201d4-231e-472b-b870-737fd1597dde> | CC-MAIN-2024-38 | https://globalcioforum.com/extended-reality-is-transforming-industrial-training/ | 2024-09-17T01:09:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00605.warc.gz | en | 0.94384 | 746 | 3.0625 | 3 |
What is Zero-Trust and why does your business need it?
What is the Zero-trust framework? How to implement a zero-trust strategy?
WHAT IS A ZERO-TRUST STRATEGY
As stated by VMWare, "Zero Trust is the name for an approach to IT security that assumes there is no trusted network perimeter and that every network transaction must be authenticated before it can transpire."
Zero trust is a strategy that treats every network connection as untrusted by default and requires users to be authenticated before accessing any private information in or outside the organization's network. This approach uses advanced technologies, including multifactor authentication and identity and access management (IAM), to verify the user's identity.
Zero Trust is a significant evolution from traditional network security known as the "trust but verify" method. The "trust but verify" assumes that users and endpoints are trustworthy as long as they are within the organization's perimeter.
While this strategy may seem safe, it actually leaves the company vulnerable to both malicious internal actors and external malicious actors who might take over legitimate credentials and who will have wide access once inside.
This cybersecurity strategy that worked for businesses operating in a homogenous corporate environment became obsolete with the advent of cloud computing and distributed workforces due to the COVID-19 pandemic.
HOW DOES THE ZERO-TRUST STRATEGY WORK?
The Zero Trust strategy relies on other network security methods, such as strict access controls, but also network segmentation. Network segmentation is a network security technique that divides a large and complex network into smaller subnetworks, each with its own unique rules for sharing information.
Zero Trust policies rely on real-time visibility into user and application identity attributes such as:
- User identity and type of credential
- Credential privileges on each device
- Connections patterns for the credential and device (considered as normal behavior)
- Endpoint hardware characteristics
- Authentication protocol
- Operating system versions and patch levels
- List of the applications installed on an endpoint
- Security or incident detections (including suspicious activity and attack recognition)
Contrary to the "trust but verify" model, in a Zero Trust network environment, the location of a resource is no longer an indication of its security. Moreover, instead of being segmented into rigid networks, data and other network elements are protected by software-defined micro-segmentation. This allows organizations to keep them secure anywhere, whether in your data center or in distributed hybrid and multi-cloud environments.
BENEFITS OF A ZERO-TRUST FRAMEWORK:
- It reduces the number of potential entry points for a cyberattack, protecting an organization's network and data from unauthorized intrusion while reducing the time and cost of responding to and cleaning up after a breach.
- It enables network traffic and user activity detailed monitoring.
- It addresses the modern challenges of securing remote workers, hybrid cloud environments, and ransomware threats.
- It reduces the risk of a data breach: every request is examined, users and devices are authenticated, and permissions are evaluated before the system allows access. This granted trust is then continually reassessed as context changes, such as the user's location or accessed data.
HOW TO GET STARTED WITH A ZERO-TRUST STRATEGY?
There is no magic bullet or one-size-fits-all solution to implementing Zero Trust. The Zero Trust framework will depend on the size of the protected surface and its micro-segmentation. While designing the Zero Trust network architecture and policies, it is crucial to consider their impact on the user experience for affected applications, databases, and other resources.
As a first step, we recommend answering these two questions:
- "What are we trying to protect?"
"From whom are we trying to protect it?"
Answering these questions will help to design the best architecture possible. Then, the most effective approach is to layer technologies and processes on top of your strategy, not the other way around. Worried about implementing a whole zero-trust strategy at once? It is totally possible to take a phased approach before implementing the strategy more broadly.
To support a Zero Trust model, organizations use a variety of tools, including:
- Zero trust network access (ZTNA) or software-defined perimeter (SDP) tools
- Secure access service edge (SASE) or VPN solutions
- Microsegmentation tools
- Multifactor authentication (MFA)
- Single sign-on (SSO) solutions
- Device approval solutions
- Intrusion prevention systems (IPS)
It can be difficult to implement a homogenous and comprehensive zero-trust strategy because many of these tools are specific to operating systems, devices, and cloud providers.
Feeling overwhelmed? We get it. This is why at LENET, we propose both cybersecurity strategy consulting and solutions to make sure that you implement the right strategy for your organization. | <urn:uuid:b0aa29b8-21b4-4c01-b5f6-27b5963025ae> | CC-MAIN-2024-38 | https://lenet.com/blog/what-is-zero-trust-and-why-does-your-business-need-it | 2024-09-07T11:31:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00669.warc.gz | en | 0.916853 | 1,018 | 2.671875 | 3 |
Data is a valuable tool that helps firms run more efficiently, make better decisions, and gain a competitive advantage in the marketplace. Unfortunately, fraudsters wishing to access and tamper with sensitive information may readily target today's data. This is why cybersecurity is rapidly becoming a key strategic concern for organizations and companies of all sizes.
As the threat of cyberattacks grows, security teams are putting in more effort to develop the capabilities needed to plan for and respond to these threats. And, as data quantities increase, remote working models need enterprises to strike a balance between productivity and data accessibility while maintaining security.
Furthermore, maintaining compliance with the General Data Protection Regulation (GDPR) and other data legislation implies that, in addition to securing information, organizations must also ensure its privacy in terms of how data is acquired, transferred, and used. As a result, regulatory data governance is in high demand.
In this article, we’ll break down what data governance is and why it’s so essential to cybersecurity.
What is Data Governance?
Data governance is the practice of applying internal data standards and regulations to manage the availability, accessibility, integrity, and security of data in business systems, as well as to regulate data consumption. Data governance guarantees that data is accurate, reliable, and secure, as well as that it is not misused. As organizations face new data privacy regulations and rely on data analytics to help them simplify operations and make better business choices, it's becoming increasingly vital.
A properly done data governance program often includes a number of teams, including a governance unit, a steering committee, and an individual or group of data stewards. They collaborate to develop data governance principles and standards, as well as implementation and enforcement methods, which are frequently performed by data stewards. In addition to the IT and data management teams, executives and other authorities from an organization's business operations should participate.
While data governance is an important part of an overall data management strategy, success depends on firms focusing on the intended business advantages of a governance program.
Why Does Data Governance Matter in the Context of Cybersecurity?
Without appropriate data governance, data inconsistencies in multiple systems within a corporation may not be managed. This might make data integration and data integrity more complex, lowering the accuracy of BI, corporate reporting, and analytics systems. In addition, data mistakes may go undiscovered and untreated, decreasing BI and analytics accuracy.
Data governance issues might potentially stymie regulatory compliance efforts. Companies that must comply with a rising number of data privacy and protection rules, such as the GDPR of the European Union and the California Consumer Privacy Act, and may face difficulties as a result. An effective enterprise-level data governance program often entails the creation of company-wide data definitions and standard data formats that are implemented across all business systems, resulting in improved data consistency for both business and compliance purposes.
The Relationship Between Cybersecurity and Data Governance
At its most fundamental level, cybersecurity is safeguarding an organization's infrastructure and data from assault, damage, or unauthorized access. Data governance, on the other hand, aims to specify what data assets the company has, where it resides, and who may act on it when and under what conditions.
This is why data governance is so important when it comes to implementing a company's security plan. Because you can decide how to effectively devote resources to secure your data until you know how much it's worth, where it's stored, and who has access to it. To put it another way, once you know how sensitive a data collection is, you can apply the proper information security rules to keep it safe. In this approach, good data governance is an important part of a company's overall cybersecurity strategy.
Understanding the Main Elements of Good Data Governance
Data governance requires that data be protected according to its worth or sensitivity. The strategy adopted for data discovery and classification, on the other hand, can make or fail a data governance program that aims to show compliance by regularly applying rules and standards.It will be hard to properly use, administer, or safeguard your data assets if you don't know what they are or where they are. The increased use of cloud-based 'as-a-service' platforms and technologies has made this work far more difficult.
The data discovery process, which is the first step toward efficient governance, necessitates an end-to-end software solution that can connect to any sort of data source and identify data assets – no matter where they are located. If an unprotected data asset suffers a security or privacy breach, this skill is critical; otherwise, companies will be exposed to substantial risk.
Similarly, firms' data categorization methods for identifying unique assets and applying the right level of security will be crucial. Organizations should use easy classification categories based on well-understood rule sets – such as GDPR and CCPA sensitivity or Personal Information (PI) and Personal Identifiable Information (PII).
This can be a stumbling block for organizations that don't have a discovery tool that can tell the difference between PI data (which doesn't identify a specific person and isn't usually responsible for governance violations) and the more sensitive PII data in order to provide a truly precise classification. Users will have no alternative but to manually process and separate more sensitive PII data if this feature is not provided.
Organizations will be able to make better-educated decisions about what level of data protection and security policies should be implemented for each data collection once they have these data governance insights. It can also help with better decision-making when it comes to allocating security resources to meet data protection objectives.
Without these insights, some businesses may take a "belt and braces" approach, applying security technologies such as least privilege management to all data sets they hold, regardless of classification. While zero trust is a highly effective way to improve security and protection, it may rapidly become a costly choice if used across the board rather than simply for high-value or high-risk data assets.
Other companies, on the other hand, will treat batches with zero trust depending on their view of which function originates and consumes the most sensitive data. Finance and human resources are common examples. However, without a thorough data discovery and classification procedure, it's possible that other vital data sources inside the ecosystem could be overlooked, possibly resulting in major governance blind spots.
The Benefits of Data Governance
A fundamental goal of data governance is to break down data silos inside a firm. Individual company divisions build their own transaction processing systems without the need for a centralized coordination data architecture, which results in silos. Data governance aims to integrate the data in such systems in a collaborative manner including stakeholders from a variety of business divisions.
Another purpose of data governance is to guarantee that data is utilized correctly, both to avoid introducing data mistakes into systems and to prevent the abuse of personal data and other sensitive information about clients. This may be achieved by establishing consistent data-use regulations, as well as mechanisms for monitoring and enforcing the policies on an ongoing basis. Furthermore, data governance can aid in achieving a balance between data gathering techniques and privacy regulations.
Data governance delivers enhanced data quality, cheaper data management expenses, and increased access to essential data for data scientists, other analysts, and business users, in addition, to more accurate analytics and higher regulatory compliance. Finally, data governance may assist executives in making better company decisions by providing them with more information. This, in theory, will result in competitive advantages as well as greater revenue and profits.
Implementing Data Governance
Organizations should make data governance a strategic priority. You'll need to identify data assets and current informal governance mechanisms, improve end users' data literacy and abilities, and determine how to evaluate a governance program's performance. Before creating a data governance framework, you must first identify the owners or custodians of various data assets within an organization and include them — or designated surrogates — in the governance program. The program's structure is then created by the acting CDO, executive sponsor, or dedicated data governance manager, who then works to staff the data governance team, select data stewards, and establish the governance committee.
The true job of data governance begins after the structure is in place. Data governance policies and standards, as well as regulations defining how data can be utilized by authorized individuals, must be defined.
Cybersecurity necessitates data governance. Organizations must know what data to safeguard and how to effectively protect it in order to defend against attacks. Data governance enables an organization to identify its high-value, high-risk datasets and, if necessary, commit additional resources to their protection. | <urn:uuid:c2d33a2b-b5cb-4575-bc0f-1d6a00093928> | CC-MAIN-2024-38 | https://www.data-sentinel.com/resources/what-is-data-governance-and-its-role-in-cybersecurity | 2024-09-07T11:55:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00669.warc.gz | en | 0.934305 | 1,749 | 2.875 | 3 |
What does privacy mean? Short answer: not much—at least not in our current state of digital transformation. Indeed, while I love technology and everything we’re now able to do to learn more about our customers through personalization, the cold hard truth is that there is no way to guarantee privacy in today’s digitized world. It was reported this month that Alexa is listening and recording your conversations. Data breaches continue to happen. And even though Mark Zuckerberg is preaching the future is about privacy, can we really trust him? It’s time to be honest. I don’t think so. At least, not as long as someone is using a computer, or living in proximity to one. Will that change? I sure hope so. Here’s why.
What Does Privacy Mean: A User-Focused Definition
Although we bandy the terms around loosely, there is a big difference between security/protection and privacy. Privacy is the empowered act of boundary-setting that allows me and other users to determine who can access what data about me, where, when and for what purpose. Privacy is—or at least should be—an inside job. It’s the act of determining when you want to close the blinds on your internet surfing, shopping habits, music preferences, or book club selections. It’s deciding when you are okay with someone gaining financially from your personal information and when you aren’t.
On the other hand, security/protection is the nuts and bolts of how a company keeps our personal user data safe. It’s the firewalls, the two-step authentication, the password guidelines, and the laws that require it. Security protection is the company’s responsibility—and not all companies take that responsibility seriously. In fact, we know most companies will experience a data breach of some kind. That means ultimately, the protection of one’s personal data is forever out of users’ hands.
Transparency is Required
Unfortunately, the only way a user can be empowered to establish privacy boundaries is through transparency from companies themselves. For instance, users need to know what types of information is being gathered from them in order to be able to make informed decisions about what information they want to share. Unfortunately, most companies bury this information in terms and services. Has anyone ever read one of those fully before clicking I agree?
But it gets complicated. We get ads for things we just had a private conversation about with a friend on the phone, for instance, or we get a text message requesting that we give to a certain political campaign or charity after listening to a certain podcast. Or what about the story of Target sending a coupon to a woman who didn’t even know she was pregnant yet. Clearly, some companies are overstepping. But many of us are okay with that.
The Privacy Paradox
According to a recent study, 54 percent of Americans don’t believe companies have their best interest at heart but will still give up their data if they get some benefit in return. Consumers want personalized experiences. They want to be treated like a human not a number. But in order to do that, data has to be collected. It’s a trade off and companies need to do a better job of building trust when it comes to privacy and security.
In addition to being transparent up front with data collection, businesses need to be transparent if a breach does happen. Be more like Tylenol and less like Toyota. Companies are learning the hard way that consumers will see through any phony ploy to hide a data breach and apologize for it months later. We are giving up our privacy, becoming loyal customers and we want to be treated with respect and assured that our data is secure.
Legacy-Era Definitions Will Die—Eventually
Mark Zuckerburg recently put together a fairly lame attempt at a privacy manifesto that discussed Facebook’s commitment to reducing the permanence of information gathered on Facebook and making systems more interoperable. His manifesto was lame because despite its name, it didn’t address the underlying issue of privacy. Can you say what data Facebook collects about you? Is it just your general information? Information about your friends? Your shopping habits? Music tastes? Movie preferences? Political leanings? While I’d venture to guess those are all yeses, we don’t know for sure. People are starting to see through his phony dedication to privacy.
In my view, Facebook is a good example of a new type of “legacy era” company—digital businesses who got it wrong the first time around. Facebook is the perfect example of a company who made the wrong assumptions—built an empire on sand—and is trying to double back but is finding it difficult to rework its old system to meet new user demands. (Sounds similar to the issues experienced by legacy-era businesses today, doesn’t it?)
At the end of the day, companies that incorporate transparent privacy policies into the building blocks of their companies are the ones that will see increased brand loyalty moving forward. They’re the ones who are actively pursuing ways to incorporate blockchain into the processes—who are actively working to not just meet but exceed the guidelines of the General Data Protection Regulation. They’re the ones who actively empower their customers to offer them information, knowing it will be used to enhance their user experience—no more, no less.
What does privacy mean? Today, not much. It’s a buzzword without much traction in the wild west of AI and digital transformation. But in the next five to 10 years, I anticipate privacy will become a game-changer for the companies that do it right. It will bolster trust—and ultimately sales. And customers will, thankfully, be all the wiser for it.
The original version of this article was first published on Forbes.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise. From Big Data to IoT to Cloud Computing, Newman makes the connections between business, people and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.Com, CIO Review and hundreds of other sites across the world. A 5x Best Selling Author including his most recent “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post Contributor. MBA and Graduate Adjunct Professor, Daniel Newman is a Chicago Native and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future. | <urn:uuid:3b182f08-b0c3-49fa-b24b-a8e78ba30487> | CC-MAIN-2024-38 | https://convergetechmedia.com/what-does-privacy-mean/ | 2024-09-08T15:35:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00569.warc.gz | en | 0.955447 | 1,397 | 2.6875 | 3 |
Here are ten detailed reasons with examples to debunk the myth that small organizations are not on hackers’ radar:
1. Valuable Data: Even small organizations possess valuable data, such as customer information, financial records, or intellectual property. Cybercriminals can exploit this data for financial gain or espionage.
Example: A small law firm may hold sensitive client data, making it an attractive target for cybercriminals seeking to steal confidential information.
2. Supplier Vulnerability: Hackers often target smaller organizations as entry points to larger supply chains. Weak security in one small company can lead to compromises in more extensive networks.
Example: An HVAC contractor with weak cybersecurity can be exploited to infiltrate a larger retail chain’s network.
3. Lack of Resources: Smaller organizations may have limited cybersecurity resources, making them easier targets for cybercriminals. They often lack dedicated IT staff to monitor and respond to threats.
Example: A small online retailer may not have the resources for robust cybersecurity measures, making it vulnerable to attacks like DDoS or data breaches.
4. Botnets and Automation: Hackers use automated tools and botnets to scan the internet for vulnerable targets. They don’t discriminate based on an organization’s size.
Example: A small non-profit website can be targeted by a botnet conducting brute force attacks to gain unauthorized access.
5. Ransomware: Ransomware attacks, which lock or encrypt data until a ransom is paid, target organizations of all sizes for financial gain.
Example: A small manufacturing company may be hit with ransomware, crippling its operations until a ransom is paid.
6. Distributed Denial of Service (DDoS) Attacks: Small organizations can be victims of DDoS attacks that disrupt their online services, impacting their reputation and revenue.
Example: An e-commerce startup can be targeted by DDoS attacks, leading to service interruptions and financial losses.
7. Social Engineering: Hackers often use social engineering tactics like phishing emails to trick employees into revealing sensitive information or providing access to systems.
Example: A small tech startup’s employees can fall victim to phishing attacks, compromising login credentials.
8. Branding: Cybercriminals may target small organizations to exploit their branding and reputation for phishing scams or distributing malware.
Example: A small charity’s website can be compromised to host phishing pages impersonating well-known brands.
9. Cryptocurrency Mining: Hackers use compromised systems for cryptocurrency mining, consuming resources without the organization’s knowledge.
Example: A small healthcare clinic’s servers can be hijacked for cryptocurrency mining, slowing down their operations.
10. Competitive Advantage: Smaller companies in competitive industries may be targeted to steal proprietary information, gaining a competitive edge.
Example: A small technology startup’s innovative designs could be stolen by a competitor through cyber espionage.
The notion that small organizations are immune to cyber threats is a dangerous misconception. It’s crucial for businesses of all sizes to implement robust cybersecurity measures to protect themselves from the evolving tactics of cybercriminals. | <urn:uuid:b4eb68e2-2efc-4c48-8bca-93af31e4edfd> | CC-MAIN-2024-38 | https://foresite.com/blog/myth-were-too-small-to-be-hacked/ | 2024-09-08T17:12:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00569.warc.gz | en | 0.90616 | 643 | 2.765625 | 3 |
Librarians and archivists have been saving artifacts, newspapers, photographs and books for years to preserve historical records. Today, their work is made even more complicated because so much of our unfolding history is chronicled on the Internet.
That has meant changes to archiving procedures, including decisions over what electronic information should be saved and how to store it.
According to Elizabeth Adkins, global information manager at Ford Motor Co., “There is a general recognition in the archival profession that people want to do this, but technology hasn’t caught up yet.”
These issues also affect businesses, Adkins and others pointed out. But in many cases, they said, companies haven’t been paying close attention to their own digital histories. Although large, established companies have for years saved much of their past on paper, it’s unclear whether they have been as thorough with their first forays on the Web.
“There’s a whole lot that’s come and gone,” said Carol Baroudi, owner of research firm Baroudi and Associates in Arlington, Mass.
Some companies have made progress. Amy Fischer, corporate archivist at Procter & Gamble Co., said the Cincinnati-based consumer products maker has been saving digital records since it started its first Web site in 1994.
But there has been no archiving road map, she said, and much of what has been done was by trial and error. “There’s been a lot of hand-wringing in the business archive community,” Fischer said.
Procter & Gamble, established in 1837 as a small, family-operated soap and candle business, has for years collected and archived its printed ads for products ranging from Ivory soap to Pepto-Bismol. “It makes sense that we save the electronic stuff, too,” Fischer said.
The company saved its earliest Web sites only on paper printouts because no one knew how to save them digitally at the time, she said. Now all of the company’s Web and intranet sites are archived electronically to maintain a link to the company’s past.
“Not everyone is doing it,” Fischer said of other companies. “But by now, most people are aware that they need to be doing it.”
Adkins said Ford is getting there. Several years ago, the company began looking at what to do about its digital history. Next year, it will begin a formal archive using Portable Document Format files created with Adobe Acrobat software, she said.
Ford’s first Web site was launched in 1995, but neither that version nor the updates that followed were ever officially saved, she said.
“The Internet is a tool that’s designed for moving a business forward, and the people involved don’t necessarily think of the ground they’re breaking,” Adkins said. “People are more caught up in getting the job done than about preserving their efforts.”
Last month, the Library of Congress announced it had begun collecting a massive digital record comprising gigabytes of data culled from thousands of news, personal and tribute Web sites related to the Sept. 11 terrorist attacks on the U.S. The project began when a library official realized that if the electronic history of the unfolding events wasn’t carefully preserved, it would probably be lost forever.
The official, Diane Kresh, director of public service collections at the Washington, D.C.-based library, said the agency entered into a US$100,000 contract with non-profit Internet Archive, which has been saving history digitally on its network of servers since 1996. Under the deal, Internet Archive continues to do daily digital captures of Web sites related to the disaster. The archive did a similar project last year for the Library of Congress, creating and maintaining a digital history of the tumultuous U.S. presidential election that included news Web sites and the original campaign sites for both George W. Bush and Al Gore.
Brewster Kahle, CEO of San Francisco-based Internet Archive, said his group has been saving Web pages from approximately 1,900 sites for the Sept. 11 project. The organization’s mission is to build a digital library of the Internet, which it’s doing with several partner groups. The data is stored using software called a Wayback Machine, which lets users go back to sites and see them as they were when first posted on the Web.
Internet Archive has a total of about 100TB of storage space so far and has currently saved 5TB of information in the Sept. 11 archive, which can be seen at http://september11.archive.org.
The goal is to save “material that is here today and gone tomorrow,” Kresh said. “This was an event that had a profound influence,” she said. “The Internet has become the public commons and has connected people in ways that were unimaginable.” | <urn:uuid:e700bba7-7bba-4a3b-b00c-0935a6e09c41> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/lost-in-cyberspace/33368 | 2024-09-09T21:42:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00469.warc.gz | en | 0.972843 | 1,040 | 2.59375 | 3 |
In today's technology landscape, disk imaging and disk cloning still play significant roles in IT workflows. But it's crucial to understand the benefits and limitations of each technology before deciding which one to use and how it aligns with your organization's IT infrastructure. While many IT professionals have relied on both techniques for years, some companies are now exclusively adopting disk imaging.
Defining Disk Imaging and Disk Cloning
Disk imaging and disk cloning are often mistakenly used interchangeably because they achieve the same outcome—creating an exact copy of a drive. This copy includes all data, files, software, the master boot record, allocation table, and other components necessary for booting and running an operating system. However, there are fundamental differences between the two methods.
Disk Imaging: Larger Files, (Sometimes) Slower Restores
TechTarget defines disk imaging as “software…used to make an image of a completely configured system. Because of its large size, a disk image file is compressed and stored on a secondary storage device or in cloud storage. Hard drive disk images can be saved as a virtual hard disk.”
Disk imaging involves creating a byte-by-byte archive of a hard drive, resulting in a compressed file format (typically saved as an ISO file). These compressed image files are often stored on external drives or in the cloud due to their substantial size. However, they offer the advantage of granular data restoration.
There are two types of disk image files: full and differential. Full images encompass the entirety of the source drive, while differential images capture only the changes made since the creation of the last full image. As a result, differential images do not contain all the necessary data for drive restoration; a full disk image is required.
Restoring and accessing data from disk images involves using imaging software to install and open them on a new or existing drive, which can be a time-consuming process. Additionally, disk images can consume significant backup storage space.
Arcserve ShadowProtect saves your backups as images but eliminates the time it takes to restore data using patented VirtualBoot technology to instantly boot a backup image into a virtual machine. And ShadowProtect lets you run automated, easy-to-use tests on your backup images, so you’ll be sure your backups will work when a disaster strikes.
Disk images differ from snapshots in that a differential image stores the changes made to a file system since the last full image. In contrast, a snapshot reflects the contents of a persistent disk at a specific point in time. Arcserve Unified Data Protection (UDP) software offers hardware snapshot support for fast backups and recovery.
Disk Cloning: Uncompressed Replication
TechTarget defines disk cloning as “the act of copying the contents of a computer's hard drive. The contents are typically saved as a disk image file and transferred to a storage medium, such as another computer's hard drive.”
Disk cloning entails creating an exact, uncompressed replica of an entire drive or specific partitions. Since disk clones are uncompressed, they can be immediately replicated to a target backup drive or the cloud, resulting in an up-to-date, identical copy of the data. The critical advantage of cloning over disk imaging is speed. In the event of a hard drive failure, the cloned drive can be quickly replaced, minimizing downtime.
Disk Imaging vs. Disk Cloning: Pros and Cons
Both disk imaging and disk cloning offer advantages and disadvantages. Cloning excels in some rapid recovery scenarios, while imaging gives you greater backup flexibility. But taking incremental backup snapshots stands out because it lets you store multiple images without using a lot of storage space. Snapshots can be invaluable when recovering from ransomware attacks or other data disasters, where reverting to an earlier disk image is your best chance of recovery.
Bit cloning limits you to a single copy per drive. Although you can overwrite one clone with another, each clone version requires its own drive. Meanwhile, imaging gives you the convenience of remote storage, employing compression to minimize storage demands and letting you store images in a way that best suits your infrastructure.
Find Out Which Is Best for You
By talking to an Arcserve technology partner, you’ll have access to the expertise you need to decide the best data resilience solutions for your organization, including assessing disk imaging vs. disk cloning and where snapshots fit into the picture.
Find an Arcserve technology partner here.
You May Also Like
- Backup and Disaster Recovery Business Continuity Cloud Data Protection Data Resilience Data Storage
Microsoft 365 Backup Is Here: How Its Integration With Arcserve SaaS Backup Ensures Cost-Effective Backup and Recovery
August 15th, 2024 - Backup and Disaster Recovery Business Continuity Compliance Cybersecurity Data Management Data Protection Data Resilience Data StorageAugust 13th, 2024 | <urn:uuid:0197a0ba-f2b1-4110-9188-4774ff558c36> | CC-MAIN-2024-38 | https://www.arcserve.com/blog/disk-imaging-vs-disk-cloning-key-differences | 2024-09-16T00:50:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00869.warc.gz | en | 0.91006 | 984 | 2.71875 | 3 |
A virtual machine can be considered as a virtualized instance that can perform almost all of the same functions as a physical server, including running applications and operating systems.
Virtual machines are used for running programs and operating systems, storing data, connecting to networks, doing other computing functions, and require maintenance such as updates and system monitoring.
In this article we will go through steps to create a virtual machine in Google Cloud Platform and do the configuration.
Log in to your Google Cloud Platform https://console.cloud.google.com
Switch to the project in your organisation in which you wish to create the Virtual Machine. Click on the drop down button and select the project name
Navigate to Compute Engine
Once the Compute Engine window opens click on Create Instance.
Enter the Name of the instance you want to give.
You can click on the Add Labels button and enter the Key and the Value and then select Save.
Select the Region and Zone in which you want to create the instance.
In the Machine Configuration section, select the Series and Machine Type
In the Boot Disk section, change the Operating system, the Version, the Boot disk type, and the Size according to your requirement. Then click on Select.
In Identity and API access, select the Service account and the Access Scope.
Select the Firewall rules to allow specific network traffic.
Select Create and the Virtual Machine will be created successfully.
This article gives you a step by step guide to create a Virtual Machine on Google Cloud. If you are a beginner or just want to start a compute as smoothly as possible, the instructions in this article will help you get started. | <urn:uuid:2cfa070e-15a2-4198-9bc6-edb50b6bbfd8> | CC-MAIN-2024-38 | https://kloudle.com/academy/how-to-create-a-virtual-machine-on-google-cloud/ | 2024-09-18T11:06:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00669.warc.gz | en | 0.885854 | 338 | 3.171875 | 3 |
Originally published October 10, 2019 and updated July 13, 2021.
It may be daunting to broach the subject of mental health and students, but there’s reason to be optimistic: 70% of mental health cases that appear in children can be addressed through early intervention. Getting ahead of mental health disorders and offering support to those who need it helps at-risk students before their mental health issues become more serious. The good news? There are a multitude of ways to do just that.
While there are technological tools such as web filtering and monitoring that are essential for safeguarding students and can help identify mental health issues early, we believe it’s even more important to learn how to identify students struggling with mental health issues and focus on helping students become resilient. In this blog post, we have put together a list of resources you can use to help promote mental health awareness in your school and create a positive mental health culture.
Why mental health awareness in schools is important
The World Health Organization estimates that close to 20%, or one in five students, are actively dealing with a mental health issue. For students, suicide is the second leading cause of death. The number of students today experiencing mental health problems is also on the rise.
For help identifying students at risk of suicide and how to help, download our Suicide Awareness and Prevention in Students Whitepaper.
Identifying the signs that something is wrong
There are many signs and circumstances that may identify a student suffering from mental health issues. Small changes in behavior (such as being withdrawn or outraged after texting or using the internet) can be an indication of a much bigger problem (bullying). It’s important that we all become aware of the signs associated with mental illness, so we can provide support for those that need it. The earlier an issue is picked up, the better it is for the student.
For help identifying the students struggling with a mental health issues, plus tips on how to help them, download our Children and Mental Health Whitepaper.
What can I do to increase awareness?
It is essential that teachers provide students with the opportunities, resources, and support they need. Here are seven ways you can promote mental health awareness in your school:
- Promote positive self-esteem. Provide students with the tools and skills necessary to resolve conflict and the inevitable setbacks they’ll face. Boost their self-confidence by supporting good decision making, assertiveness, perseverance, and self-determination
- Encourage students to eat healthy and provide healthy eating options
- Provide outlets to relieve anxiety and stress. Physical activity, meditation and the arts are super for self-expression, growth, and work wonders on a student’s overall mental health and ability to handle stress.
- Promote school policies that support mental health such as bullying prevention
- Display relevant material to students and families, such as the materials found on the websites for the National Alliance on Mental Illness and the World Health Organization
- Have an open-door policy – communicate to your students that you are available to listen to their concerns and issues. Communicate openly, honestly and often. Notice the little conversational openers students may offer up and ask non-judgmental questions. Be sure to pause and listen to what they have to say
- Provide places where students can relax, such as quiet areas and lounge chairs
Mental health disorders in students is a complex issue that requires a coordinated effort and multilevel approach from parents, schools, health care organizations, digital media outlets, and community outreach. Early detection and intervention are crucial factors in the goal towards reaching at-risk students before conditions manifest into more serious issues.
For help identifying the students struggling with a mental health issues, talk to a safeguarding expert today. | <urn:uuid:08779cd2-6d94-4d71-a212-913b80244a44> | CC-MAIN-2024-38 | https://www.netsweeper.com/education-web-filtering/7-ways-to-promote-mental-health-awareness-in-schools-2 | 2024-09-18T12:35:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00669.warc.gz | en | 0.957802 | 769 | 3.703125 | 4 |
Collaborative working can be an arduous process. The scenario is all too familiar: emails get lost or accidentally deleted; versions of spreadsheets or word processing documents are fuddled; a member of the project team goes on leave and no one can access their files.
Desperate for an effective alternative, companies are turning to an open-source online collaboration tool called the wiki. Taken from the Hawaiian word for "quick", a wiki is server-based software that enables users to create and edit content on simple web pages from any browser. Wikis solve many of the problems associated with traditional content management systems, yet they are simple enough for non-technical workers to use.
"Think of enterprise wikis as an electronic version of the post-it note," says Jonathan Spira, CEO and chief analyst of research group, Basex. "They are informal, quick, easy to use, and allow workers to record thoughts and comments contextually."
The success of the Wikipedia, an online encyclopaedia written collaboratively by the public, has attracted attention to the technology – as well as to the risks of letting anybody anonymously edit content. While most articles are accurate, its founders admit that errors and even malicious changes have been discovered in some prominent entries.
In spite of the negative press that Wikipedia generated in the latter half of 2005, wikis continue to be taken up by enterprises for a variety of purposes, including document management, project management and as an enterprise knowledge base, working best when recording ephemeral knowledge.
Knowledge workers – particularly those involved in group projects – are increasingly on the lookout for substitutes to endless group emails, which jam up inboxes and fail to capture information in an accessible place. Currently, 90% of collaboration and 75% of a company's knowledge assets exist in emails, according to Ross Mayfield, CEO and founder of wiki vendor Socialtext, but there is no value for the organisation apart from what people produce from the information.
"Wikis work better than email as a group productivity tool because they offer project groups the benefit of capturing the tacit knowledge that manifests itself as a by-product of daily work," says Mayfield.
Alongside the growing email headache, the fact that wikis are affordable and easily adopted, as they are largely open-source and web-based, has prompted a rash of recent adoption of this decade-old technology.
Wikis are often grouped with web logs ('blogs'), another technology that has risen to prominence in the last two years, under the heading of 'social software'. But, says Martin Wattenberg, a researcher on the collaborative user experience team at IBM, the two play different roles.
"Blogs are based on an individual voice; a blog is sort of a personal broadcasting system," he says. "Wikis, because they give people the chance to edit each other's words, are designed to blend many voices. Reading a blog is like listening to a diva sing; reading a wiki is like listening to a symphony."
For example, a blog might be great for reporting a series of customer focus group discussions, but a wiki would be better suited to listing project issues, modifying them and later adding the resolution of each.
Speed is another winning feature. Content in a wiki can be updated without time delay, without administrative effort and without the need for distribution – users simply visit and update an existing web page. "The key difference between wikis and other enterprise collaboration tools – whether portals, Lotus Notes, intranets or the Internet – is how much work is required to get information into or out of the system," says Spira.
Mayfield upholds that wikis can accelerate a project cycle by as much as a quarter or a third. "They are even more asynchronous than email, so they work well across time scales in a corporate setting," he says.
But while user groups are increasingly adopting the collaboration technology, the very nature of a wiki – organic, constantly evolving and unstructured – is posing a challenge to CIOs, who are used to far more rigid information management environments.
Advocates argue that what is lost in control is more than made up for in information flow. But the case of the Los Angeles Times newspaper, which used a wiki to allow the public to edit an online opinion piece, only to have it vandalised with pornography, will prompt many IT managers to think twice before adopting a public-facing wiki.
Intranets are safer places to deploy a wiki. But, like any business tool, it will be a boon to some companies and almost useless to others, according to Elton Billings, a worldwide authority on intranets and creator of wiki forum, Cluebox.com. "Much of this difference has to do with corporate culture and the type of work being done," he says.
Benefits of enterprise wikis:
- No prior knowledge of HTML editing tools needed
- Low upfront investment
- Wiki pages can be edited online in a browser, rather than requiring a separate application
- Facilitates processes of collaborative writing, editing, peer review, commenting and annotating
- Searchable content
- Customisation and branding options are available for the enterprise
- Wikis are scalable and many include useful features for large deployments such as advanced access control lists and integration with directory services
Other useful features include revision control to access any previous versions of each page, rollback to older versions and tracked changes.
However, the lack of quantifiable benefits makes it difficult to measure the return on investment (ROI) provided by wikis. Billings suggests that the same was true of email when it was first adopted in the enterprise, and, instead of ROI, suggests CIOs consider the potential business advantages wikis could yield.
In this assessment, they should consider the types of work being done within their organisation, and whether their corporate culture truly promotes sharing of information and knowledge. "If people routinely work in teams and successful teamwork is a strongly rewarded behaviour, wikis can be a great tool for promoting even greater success," he says.
Entering the mainstream
Smaller companies have often been the pioneers of this technology. A recent survey of 73 companies by IT analyst group Gilbane found that SMEs have been the first to trial wikis, thanks largely to their affordability and ease of use.
Research group Gartner observes that the use of wikis among SMEs is currently dominated by open-source versions of the technology – the low implementation cost is clearly a factor in their adoption – but notes that "the commercialisation of wiki products has now begun".
In the past 18 months, many large companies, including mobile phone giant Nokia and investment bank Dresdner Kleinwort Wasserstein (DrKW), have adopted commercial versions of wikis. DrKW uses its wiki to encourage geographically dispersed employees to publish and collaborate on reports, for tracking project development, for reducing the number of emails in circulation and in storage, and for sharing and developing new product specifications.
According to Gartner, the three leading vendors of enterprise wikis at present are JotSpot, Socialtext and Atlassian Software Systems. Each of these commercial wiki vendors offer value-added features such as a more user-friendly interface, database integration and forms, while also contributing to the open-source community.
Gartner predicts that by early 2006, one-third of collaboration tools vendors, including IBM and Microsoft, will have begun to incorporate wiki-like functions into existing content management systems and workgroup software. Ward Cunningham, inventor of the wiki, worked at Microsoft for two years before leaving in October 2005 to join the Eclipse Foundation, which promotes open-source development tools.
Currently, the biggest obstacle to the adoption of wikis in the enterprise is acceptance. Users need time to adapt to the notion of editing and adding to other people's work, while managers need to become more trusting of users and embrace the idea that any user with access to the wiki can make changes responsibly.
But Cluebox's Billings argues that CIOs should consider the risk of not implementing wikis. "For companies that do truly promote teamwork and information sharing, employees will likely be looking for any tool that can help them be successful in those areas. If this need for an information sharing tool is not met through a corporate initiative, employees may simply implement wikis on their own," he says.
The pattern will be familiar to any organisation that was slow to recognise other popular, easily adopted technologies. 'Rogue' wikis can become important enough to demand support from the IT department – as was the case with instant messaging when it first emerged in the enterprise.
"Suddenly, the CIO will be faced with the choice of inheriting and supporting multiple different solutions, or migrating existing wikis to a consolidated platform that lends itself to enterprise-level support," says Billings. "Many companies are still trying to figure out how to deal with departmental intranet sites that were established before their IT departments had a web publishing solution in place."
To avoid repeating past mistakes, and for both the benefits of the business, IT management would be wise to investigate wiki technology sooner rather than later. | <urn:uuid:3e25feb5-648c-49fd-86f0-63ff4e137d11> | CC-MAIN-2024-38 | https://www.information-age.com/wiki-wild-world-21021/ | 2024-09-19T18:27:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00569.warc.gz | en | 0.9452 | 1,879 | 2.546875 | 3 |
IT teams in most organizations are familiar with disaster recovery and business continuity processes. However, some may not be aware of the importance of conducting a business impact analysis (BIA). A BIA is one of the most important elements of a business continuity plan. It helps companies determine the financial impact of outages or any other disruption to their business.
What Is a Business Impact Analysis and Why Is it Important?
A BIA identifies the impact of a sudden loss of business functions, usually in terms of cost to the business. A BIA also identifies the most critical business functions, which allows you to create a business continuity plan that prioritizes recovery of these essential functions. However, the reason behind the business disruption is not important. It could be due to negligence, natural disaster, cyberattack or other causes. Instead, it looks at the business impact of the disaster, prioritizes resources and determines the best approach to recovery.
A BIA is comprised of three key components:
- Business impact
- Time frames
Each of these is discussed further below. As a part of the foundation of a business continuity plan, a BIA is essential to business recovery in the event of a disaster.
Determine the most critical business functions based on cost to the business
A BIA determines a company’s most important functions that keep it afloat — its comprehensive set of business processes, the resources needed to execute these processes and the systems required for these. The potential cost associated with a business disruption, such as loss of revenue, regulatory compliance penalties, contractual penalties due to missing service-level agreements (SLAs), increased operational costs, etc., is calculated in terms of real dollars for each business function.
To assess the financial impact, one approach is to use a questionnaire to ask questions, with answers rated on a scale from 1 to 5. For example:
- What would the potential loss in revenue be if this business function went down?
- What fines and penalties would the business incur?
- What increase in operating costs would the business experience?
There could be non-dollar costs to the business as well. These include reputation damage and loss of goodwill. Your questionnaire could also include questions such as:
- What would be the potential damage to the business’ reputation?
- What would be the impact on customer service?
Identify potential threats to these functions
Once your BIA identifies the critical business functions, it determines the risks associated with them as well as the conditions that may trigger a business process outage and the probability of the recurrence of the risk.
There are three timeframes that your BIA should address:
- Recovery Point Objective (RPO) — Typically the time between data backups that represents the maximum time during which data may be lost during a disaster.
- Recovery Time Objective (RTO) — The time it would take you to recover from backup.
- Maximum Allowable Downtime (MAD) — The maximum tolerable period of downtime a particular business function can afford. It should include the time it would take to restore the function to full operation after a backup has been restored.
A BIA should determine the dependencies between business processes and systems. This helps prioritize the systems that need recovery first. A BIA helps you discern the order in which lost functions or processes must be restored. A business function that has more business processes relying on it to be operational will have a higher priority in the recovery process than others.
There could also be dependencies regarding certain vendors that you’ll need to work with to restore various systems and functions. These could include IT vendors, and Internet service providers, and should be documented in your BIA.
Are There BIA Standards?
Several standards provide guidance on how to create a BIA. These include the International Organization for Standardization (ISO) 22301, National Fire Protection Act 1600 and the Federal Financial Institutions Examination Council’s (FFIEC) BCP standard for financial institutions.
Business Impact Analysis as Part of Business Continuity Planning
A business continuity plan (BCP) describes what steps must be taken in case of an outage or disruption, whereas a BIA identifies the risk that could prompt the outage as well as the critical business functions that could be impacted by the outage and prioritizes these for recovery. A BIA lays the foundation for a solid business continuity plan and prepares an organization for the inevitable effort required to recover from a business disruption. BCPs not only focus on technical operations (hardware/software issues) but also take into account the personnel and other resources associated with business continuity.
Once your BIA is in place, it is a good practice to periodically review and update it, as your business changes over time. This allows you to leverage the BIA effectively to handle new risks and challenges. It is recommended that you do this at least every two years. A BIA, in conjunction with business continuity planning, enables an organization to minimize downtime and ensure workforce productivity even in the event of a crisis.
Learn more about business continuity planning in our ebook Transforming a Crisis Into an Opportunity. | <urn:uuid:9d3a39b7-4fe0-4e51-8fbc-f0a335e7d5c8> | CC-MAIN-2024-38 | https://www.kaseya.com/blog/business-impact-analysis-and-business-continuity-planning/ | 2024-09-12T12:02:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00369.warc.gz | en | 0.946782 | 1,046 | 2.5625 | 3 |
What is HIPAA?
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law enacted in 1996. It sets national standards for protecting the privacy of individually identifiable health information (PHI) transmitted, stored, or maintained electronically by covered entities (healthcare providers, health plans, and healthcare clearinghouses). HIPAA compliance ensures that patients have control over their medical information and reduces the risk of healthcare fraud and abuse.
Amazon MSK and HIPAA Compliance
Before discussing Amazon MSK's specifics, it's crucial to grasp the nuances of HIPAA compliance. HIPAA is a complex regulatory framework that imposes stringent requirements on covered entities and their business associates. While MSK offers robust security features, it's essential to understand that the platform does not guarantee HIPAA compliance. Instead, it provides a foundation for organizations to build a comprehensive compliance program.
Ensuring HIPAA compliance is paramount for healthcare organizations leveraging Amazon Managed Streaming for Apache Kafka (MSK) to process and analyze patient data. Fortunately, Amazon MSK offers a robust set of security features that can significantly aid in achieving HIPAA compliance:
- Customer Key Management (CMK): Amazon MSK lets you encrypt data at rest and in transit using your AWS Key Management Service (KMS) keys. This ensures that only authorized entities have access to decrypted data, even if AWS personnel were to gain access to the underlying storage infrastructure.
- Data Access Control (IAM): Fine-grained access control through AWS Identity and Access Management (IAM) enables healthcare organizations to restrict PHI access based on the least privilege principle. This ensures that only authorized users with a legitimate need to access PHI can do so.
- Auditing and Logging: MSK integrates seamlessly with AWS CloudTrail, which provides detailed logs of all API calls made to MSK. CloudTrail logs capture who, what, when, where, and how actions were performed on MSK resources. These logs can monitor user activity and identify potential security breaches.
- Network Isolation: Amazon MSK utilizes Virtual Private Clouds (VPCs) to isolate customer data from other workloads running on the AWS cloud. This network segmentation prevents unauthorized access to PHI across different accounts or services.
- Secure Communication: MSK enforces encryption for all data in transit between clients and MSK clusters. This protects against eavesdropping or data tampering during network communication.
- Compliance Certifications: Amazon MSK adheres to a wide range of security standards and certifications relevant to HIPAA compliance, including ISO 27001, SOC 1 & 2, and PCI DSS. These certifications demonstrate AWS' commitment to security best practices.
Additional Considerations for HIPAA Compliance
While Amazon MSK offers robust security features, achieving complete HIPAA compliance requires a comprehensive approach. Here are some additional considerations:
- HIPAA Security Rule: Ensure your organization implements administrative, physical, and technical safeguards outlined in the HIPAA Security Rule. This includes conducting security risk assessments and implementing a data security and privacy program.
- HIPAA Privacy Rule: Understand and comply with the HIPAA Privacy Rule, which governs the use and disclosure of PHI. This includes obtaining patient authorization to use their data for specific purposes.
By leveraging Amazon MSK's security features along with a comprehensive HIPAA compliance strategy, healthcare organizations can effectively protect sensitive patient information and ensure patient trust. Remember, a layered security approach encompassing infrastructure, access control, data encryption, and ongoing monitoring is crucial for achieving HIPAA compliance in the cloud.
Want to build secure healthcare infrastructure but don't know how? | <urn:uuid:7efb952e-3eb2-4196-9356-2ab598d9454f> | CC-MAIN-2024-38 | https://mactores.com/blog/hipaa-compliance-securing-patient-data-with-amazon-msks-key-features | 2024-09-13T17:52:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00269.warc.gz | en | 0.901123 | 733 | 2.640625 | 3 |
Over the past year, businesses around the globe have accelerated their digital transformation efforts to stay operational. The explosion in demand for remote connectivity required extensive infrastructure remodelling, and whilst these changes took place at pace, they also opened the door to a greater threat of cyber crime. Although cyber crime in its many forms saw a boost, whether it be double extortion ransomware or the supply chain attacks most notably seen with SolarWinds, distributed denial-of-service (DDoS) attacks have perhaps seen the greatest uptick. The most recent NETSCOUT Threat Intelligence Report revealed record-breaking DDoS activity in 2020, as attackers launched more than 10 million DDoS worldwide.
These DDoS attacks have targeted a variety of different sectors, from manufacturing to financial services, travel, education and more, but fundamentally the goal has been the same: to take critical systems offline and cause maximum disruption.
Creating and rolling out an effective cyber security strategy
New attack types – reflection/amplification
What’s more, attackers are leveraging new DDoS attack types in their bids for disruption, and the rise in threats has forced security professionals to forge new strategies for securing networks and systems. One such popular attack type is the reflection/amplification attack, which enables attackers to generate higher-volume attacks. Attackers achieve this by combining two distinct methods.
The first, reflection attacks, see adversaries spoof a target’s IP address and send a request for information, using either the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP). The server responds to the request, sending an answer to the target’s IP address instead of the adversaries IP address. This “reflection”— spoofing the source IP of the packet and having the reply reflected to the target’s IP address — is why this is called a reflection attack. As such, any UDP- or TCP-based service on the Internet can in theory be targeted and used as a reflector.
The second method of attack, amplification attacks, generate a high volume of packets which overwhelm the target website without alerting the intermediary. The attacker sends a small request, often called the trigger packet, which results in a vulnerable service responding with a large reply. The attacker sends many thousands of these requests to vulnerable services, resulting in responses which are far larger than the original request, amplifying the size and bandwidth issued to the target. The amplification can include multiple response packets to a single packet, or larger packet sizes than the original.
By using both of these approaches in concert, attackers can both magnify the amount of malicious traffic they can generate, as well as direct the flood of malicious traffic towards the target. The millions of exposed DNS, NTP, SNMP, SSDP, and other UDP/TCP-based services have made reflection/amplification attacks highly prevalent.
The seven elements of successful DDoS defence
Defending against reflection/amplification
As with all DDoS attacks, the ultimate goal of reflection/amplification is to overwhelm systems and cause disruption to services. However, what makes this type of attack especially dangerous is that ordinary servers or consumer devices, with no clear sign of compromise, can be used as a vector. This, coupled with the low barrier to entry to launch a reflection/amplification attack (it doesn’t require any sophisticated knowledge or tools), means that not only are these attacks hard to detect, but attackers can create very large volumetric attacks using only a modest source of bots or a robust server.
Blocking the spoofed source packets is the first line of defence against reflection/amplification attacks. However, because the spoofed attack is often crafted to look like it stems from legitimate services and trusted sources, it’s difficult to determine which traffic is genuine user workloads and what is reflected traffic generated by attackers.
Additionally, when a service does indeed come under attack, legitimate user traffic may be forced to retry responses due to the slowdown in service. These retries could then themselves be falsely identified as a DDoS attack, obfuscating the real cause of the slowdown. However, there are several means to mitigate reflection/amplification attacks.
Rate limiting restricts sources and destinations based on previously established access policy. However, rate limiting may inadvertently impact legitimate traffic, so rate limiting must be used only as a last resort type mitigation.
Port blocking unneeded ports can reduce organisations’ vulnerability to attacks. However, this doesn’t prevent attacks on ports that are used by both legitimate and attacker traffic.
Traffic signature filters can be used to identify those repetitive structures that can signal an attack. However, the downside of filtering is a potential impact on performance.
Threat intelligence services can be a useful tool for security professionals for identifying the vulnerable servers which attackers use to launch attacks on a regular basis. This enables organisations to more proactively block IP addresses and cut attacks off at the knees.
Whichever method, or combination of methods, an organisation chooses, they need to take action sooner rather than later. The DDoS attacks being mounted against businesses today are more sophisticated and coming in higher volumes than ever before. As digital transformation efforts continue at pace, it’s likely cyber criminals will see more opportunities to profit, and businesses need to ensure their security posture stacks up to their changing infrastructure. | <urn:uuid:e3072ac6-5680-43c3-a004-a024e8bd36a4> | CC-MAIN-2024-38 | https://www.information-age.com/digital-transformation-growth-ddos-attacks-18095/ | 2024-09-13T17:50:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00269.warc.gz | en | 0.945542 | 1,094 | 2.8125 | 3 |
The cyber security industry is a good example of a field where artificial intelligence (AI) is both being looked to as a near-magical perfect solution while also already being deployed in a practical way every day. But can we trust it?
The cyber world is notoriously unbalanced, with the hostile attackers having their pick of thousands of vulnerabilities to launch their strikes, along with deploying an ever-increasing arsenal of tools to evade detection once they have breached a system. While they only have to be successful once, the security teams tasked with defending a system have to stop every attack, every time.
The inhuman speed and power of an advanced AI would be able to tip these scales at last, levelling the playing field for the security practitioners who are constantly on the back foot.
The perfect AI would be able to detect and thwart even the most well-planned, high level attacks – all without the need for any human intervention.
What we currently have – machine learning helping people
While we wait for our perfect, genius artificial intelligence to appear, AI is currently being heavily used in the security industry in the form of machine learning (ML).
Essentially a system that can learn without being explicitly programmed to do so, ML lacks the self-awareness that is popularly ascribed to AI. However, it is still incredibly valuable when it comes to handling large amounts of data and identifying patterns and trends.
This capability is used by cyber security practitioners to better get to grips with the vast amount of potential evidence they need to sift through after a cyber attack.
One of the earliest tenants of forensic criminal investigations is Locard’s Exchange Principle, the idea that all crime scenes involve an exchange of the perpetrator taking something away, but leaving something behind in return. The investigators are tasked with finding and understanding these traces in order to help them understand what has happened, and hopefully track down the criminal.
Cybercrime has upended Locard’s Exchange Principle because the average attack creates an exponentially larger amount of potential evidence to be examined – with many specifically designed to conceal or disrupt the evidence and hinder the investigation.
Analytical tools powered by ML enable cyber investigators to regain the advantage by handling the heavy lifting of sorting through the enormous piles of digital evidence and breaking it down into key points and trends. Rather than having to tediously comb through everything themselves, the human practitioners can focus on the most important evidence first.
Every minute counts when it comes to investigating a breach, so the support of ML is proving to be increasingly invaluable.
But humans still have a place – turtle stuff
As powerful an asset as ML is however, I believe it will be sometime before the self-sufficient cyber security AI the industry dreams of becomes reality. One of the biggest challenges facing AI developers in every field is the fact that unlike a real human brain, an artificial mind is not truly capable of intuition or assumption, and runs purely on data.
A powerful example of this issue appeared in October when MIT researchers were consistently able to trick Google’s AI into identifying a 3D printed model of a turtle as a rifle. MIT’s Labsix team achieved this bizarre feat by adding visual noise designed to confuse the AI – a concept known as an adversarial image. The turtle was so successful it fooled the AI from every angle, and the researchers also had success in convincing it that a baseball was an espresso and a kitten was a bowl of guacamole.
This same technique could also be applied by cyber criminals to fool a forensic AI and throw it off the scent of an investigation.
If attackers are aware of the data points and trends used by the AI, they can hide their activity within noise that will have the AI thinking nothing is amiss, or lead it in the wrong direction.
Just as the human eye will obviously know the difference between a turtle and a rifle, a real security professional will be able to see through this adversarial noise.
The human brain’s capacity to make logical leaps and work by intuition rather than cold data means that the expertise of real human professionals will be a vital part of cyber security for many years to come.
Originally posted at ACCESS AI | <urn:uuid:189417dd-384e-4e72-a123-8b6bdd8c8cfc> | CC-MAIN-2024-38 | https://resources.experfy.com/bigdata-cloud/should-we-give-ai-the-key-to-our-security/ | 2024-09-16T07:22:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00069.warc.gz | en | 0.968065 | 851 | 2.65625 | 3 |
Adam Katz shares how to design your life with the power of design thinking
NEW YORK, NY / ACCESSWIRE / August 19, 2020 / Adam Katz learned about the concept of design thinking while attending the School of Visual Arts MFA in Design. It was an eye-opening experience for him and allowed him to find his true calling in the world of design. Adam Katz, formerly with Google Creative Lab, now uses design thinking with many clients. He’s able to find innovative solutions that better his client’s lives.
Design Thinking Isn’t Just for Designers
According to Adam Katz, designers can and should do everything. They aren’t limited to choosing fonts or crafting the pixel-perfect layouts. Designers aren’t just here to create new products or change the way we interact with old ones. They are here to change everything for the better with design thinking. Design thinking isn’t just limited to designers. Adam Katz believes there’s a designer in all of us, and design thinking can allow it to shine.
What is Design Thinking?
Adam Katz says the simplest definition for design thinking is, “taking a holistic approach to developing solutions to complex problems”. Design thinking has five stages.
These five stages of design thinking are:
Empathize: understand the user’s p.o.v.
Define: state the user’s problem and needs
Ideate: generate ideas move past previous assumptions
Prototype: create testable solutions
Test: test your solutions and ideas
Designers often run several stages at one time or run them out of order. Designers rarely think or do things linearly, so it’s no surprise they use the stages non-linearly. However, if you are new to design thinking, following these stages linearly can help you get accustomed to design thinking according to Adam Katz. He explains that we are all creatures of habit. Structure can help us create new habits and guide us when we are learning something new. So, design thinking allows you to think creatively while still providing a process for you to follow, making it easier to transition to this problem-solving style.
Be human-centered: it’s always about the user or viewer.
Be unique: it’s about being creative, playful, and fun.
Be iterative: reexamine, redesign, and redefine often to add improvements.
Be collaborative: bring together diverse groups to solve problems uniquely.
Be action-driven: prototype, build, and create always to represent an idea.
Design Your Life With Design Thinking
It’s easy to see how design thinking can apply to creating new products or solving problems in the workplace. Adam Katz says it can also transform your life. You can adjust the processes to help you solve any problem life throws at you. Design thinking gets you to think proactively and to take action, instead of endlessly analyzing trying to come up with the perfect solution. Adam Katz says it can help you find your way when you don’t know where you want to go.
Web Presence, LLC
SOURCE: Adam Katz
View source version on accesswire.com: | <urn:uuid:961b0638-bb4c-467d-8838-3cd48817d6b2> | CC-MAIN-2024-38 | https://itbusinessnet.com/2020/08/design-thinking-adam-katz-believes-its-the-best-thing-youve-never-heard-of/ | 2024-09-18T16:01:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00769.warc.gz | en | 0.92251 | 664 | 2.609375 | 3 |
In the simplest terms, a logical data model is a visual representation of the business rules and requirements covering the universe-of-discourse for a given solution or enterprise, along with some textual support. Keeping this in mind, the logical data model is a metaphor describing the piece of the organization under analysis. A first step in composing a logical data model is listing the entities, or business objects, that are within the scope of the solution in focus. These entities are the important nouns for the elements that exist or need to be created. The list can start off hitting the obvious items, others can be added later. Next, start working through the interrelationships of the business objects. Objects relate in a limited array of possibilities. There is a low end and a high end of occurrences; there is optional vs. mandatory—zero, one, only one, many. As an example, a CUSTOMER has zero to many ORDERs. Make sure you think through each direction. Flipping our CUSTOMER and ORDER gives us the idea that an ORDER has one and only one CUSTOMER. Every one of these relationships is a business rule that should be enforced by one’s finished solution.
Progress through every entity and identify the valid possibilities. What does each listed object relate to? Think through every rational relationship and make decisions. And again, make sure you address each direction of a relationship. As the pieces start taking shape, you may determine a few more objects that need to join the party. While the business objects and their relationships are taking shape, dig deeper into the nature of each object. What pieces of information, or attributes, need to exist for a given entity? A CUSTOMER entity might need a First Name, Last Name…. These attributes are additional business rules, such as “A customer must have a Last Name; A customer might have a First Name” and so forth. Of those attributes, decide which ones may serve to uniquely identify an instance of each object. Often this labeling is straight-forward and obvious, like using a “Customer Number” or an “Email Address” or whatever may be most relevant to your organization for the CUSTOMER object. And yet again, these choices too are business rules, “A customer is uniquely identified by a Customer Number.”
Is your model good or bad? Does it successfully convey an understanding of the focused-on-area? Or at least does it provide an understanding of the area given a certain perspective? The success or failure of your data modeling efforts is directly linked to this rationalizing of the business. A data model that accurately describes the organization will be considered good and fit-for-use. At the highest level, a good logical data model should make sense to business personnel. The reason for this connection is that the model involves logic, and not technology. Rational thought is all that is needed; no expertise in a specific technology or tool is required. If business personnel cannot relate, then perhaps the data model might need to be refined and simplified.
Building a logical data model can be done quickly. The steps for composing that logical data model are as set forward above. What are the objects of significance? How do they relate to each other? How does one identify an instance of each object? What things need to be known about those objects? What does EVERYTHING mean? As the answers to these questions are unraveled, the data model arises. Your data model is a creature built of logic and as such is something that can be easily digested by business personnel. Each and every item, be it entity, attribute, or relationship, is instantiating and riddled with business rules. And as one should expect, the business rules rule. | <urn:uuid:6f6d68fb-9189-4db5-a363-53e0d7b9a7de> | CC-MAIN-2024-38 | https://www.dbta.com/Columns/Database-Elaborations/What-Becomes-a-Logical-Data-Model-Most-155872.aspx | 2024-09-19T20:47:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00669.warc.gz | en | 0.949153 | 765 | 3.5625 | 4 |
In this post, I’m going to go over some email basics you should know. I apologize in advance for those of you who are well-versed in email terms. Still, we often have people who log into Remote Technical Support with email issues. Many of these problems could be resolved by the customer if they knew these basic facts.
POP Vs. IMAP
For home customers, there are two kinds of email services. The first is called POP. That’s another of our famous computing-related acronyms. POP stands for Post Office Protocol. POP is the older of the two services dating back to 1984. With POP, a computer user downloads all their emails onto their computing device and the email is deleted from the server. In the early days of computing, when most of us only had one computer, POP worked just fine. However, it doesn’t work well when you are receiving email on more than one device. You could wind up having to move or delete an email individually on each of your computers. Or, you might find that an email downloads onto one device, is deleted from the server and never shows up on additional devices.
Next, we have IMAP. This acronym stands for Internet Mail Access Protocol. In this method, emails stay on the server. This means all devices can receive (sync) emails from the same account. Additionally, if you move or delete on one device, that email will be moved or deleted on all other devices, as long as all devices are set up with IMAP. With the explosion of cell phones, tablets and smartwatches, almost all email is now set up as IMAP.
Webmail Vs. Email Clients
Of all the email basics I’m explaining in this post, these are two of the most misunderstood terms. Yet, they really are the simplest to define. If you open a browser like Microsoft Edge, Google Chrome or Firefox to go to your email, you are using webmail. On the other hand, if you open an app to read your email, that app is called an email client. Some examples of email clients:
- Microsoft Outlook
- Mozilla Thunderbird
- Mac Mail
- The email app on your smartphone or tablet
- Yahoo Mail app
- Gmail app
If your email is set up on all your devices as an IMAP account, whether you use webmail or a client doesn’t really matter. In the old days, when most email was POP, there was a difference. Mainly, if you used an email client, you were responsible for backing up your email. If your computer crashed and you didn’t have your email backed up, you lost the emails you had saved. With IMAP, your email provider is responsible for backing up your email on their server.
However, I would mention two situations where you might lose all your IMAP mail: 1) if a hacker breaks into your account, he or she could delete all your email (and contacts, by the way) and you’d lose everything; 2) if the email provider has a server crash that only affects a few people, they usually will not restore from backups. (I once lost 6000 emails in this situation. Generally, a provider will restore from backup if an entire email server goes down but not for just a few people.)
You should note: there is a disadvantage to using IMAP and/or webmail. All email providers have a limit on how much space you can use on their server. For instance, Gmail limits you to 15GB of free storage space. Once you hit that number, you either have to delete emails and attachments or purchase more space. (Fortunately, extra storage space is reasonably priced.)
When emails were stored on our own computers via POP, the only limitation was hard drive size.
Another Of The Email Basics: Encryption
The email most of us use on a daily basis is open text. This means that anyone who intercepts our email can read it. For this reason, you should never put critical personal or financial information in an email. Never email your credit card information, your social security number or your banking log-in data.
True, there are some encrypted email providers such as Proton and Hushmail. There are two downsides to using these types of services: 1) most of them are not free; 2) all parties in an email must have the same service or it’s not true encryption. Still, these types of services are available if you and someone else needed encryption. | <urn:uuid:7f733431-fc5c-4e03-89e6-32981eb0abd4> | CC-MAIN-2024-38 | https://www.4kcc.com/blog/2021/07/22/email-basics-you-should-know/ | 2024-09-21T03:53:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00569.warc.gz | en | 0.956135 | 935 | 2.515625 | 3 |
According to the National Institute of Standards and Technology (NIST), complex passwords that contain a variety of characters are strong, but the longer a password is,
As AI becomes more advanced, it’s important to consider all the ways AI can be used maliciously by cybercriminals, especially when it comes to cracking passwords. While AI password-cracking techniques aren’t new, they’re becoming more sophisticated and posing a serious threat to your sensitive data. Thankfully, password managers like Keeper Security exist and can help you stay safe from AI-password threats. Keeper auto-generates strong and unique passwords for each website, application and system you use – then, it securely autofills your login credentials so that you don’t have to worry about remembering complex passwords.
Read on to learn how cybercriminals are using AI to crack passwords and how Keeper Password Manager can help protect you from AI password cracking.
What Is AI?
Artificial Intelligence, also known as AI, refers to computer systems that can perform tasks that usually require human intelligence. These systems can learn from data and experiences, allowing them to make decisions, solve problems and recognize patterns. AI is used in various applications such as language translation, image recognition and making predictions based on data. You may be familiar with AI because of the increased popularity and use of AI machine learning models like ChatGPT.
What Is Password Cracking?
Password cracking is the process of attempting to guess passwords without authorization from the individual who owns the account the password belongs to. Cybercriminals employ various techniques to uncover passwords, including educated guesses based on a victim’s personal information, trying common or previously leaked passwords and using advanced algorithms that rapidly test a wide range of password combinations. This can be done through the use of powerful computer programs and tools like AI.
When a password is successfully cracked, cybercriminals are able to gain unauthorized access to sensitive accounts, data and systems.
How Cybercriminals Use AI To Crack Passwords
Cybercriminals are using AI to crack passwords by leveraging them for acoustic side-channel, brute force and dictionary attacks.
Acoustic side-channel attack
In an acoustic side-channel attack, cybercriminals use AI to analyze the distinct sound patterns produced by keyboard keystrokes. Each key on a keyboard emits a slightly different sound when pressed, which can be captured and analyzed to determine the characters being typed. For instance, the time delay between keystrokes and the unique sounds of various key combinations provide valuable information. By processing these sound patterns using AI algorithms, cybercriminals can determine the password being entered and use it to compromise an account.
Brute force attack
In a brute force attack, AI is used to automate guessing various password combinations until the correct password is found. This method is particularly effective against weak or short passwords because they’re not at least 16 characters long or complex. With AI, cybercriminals can quickly cycle through an immense number of password combinations, dramatically increasing the speed at which they crack passwords. For instance, a simple password like “123456” would be easily cracked using AI-powered brute force due to its limited complexity.
According to a study done by Home Security Heroes, a password that contains numbers, upper and lowercase letters and symbols, but is only 5 characters long, would be cracked instantly by AI. Whereas a password that is 16 characters long, and contains numbers, upper and lowercase letters and symbols, would take 1 trillion years for AI to crack.
A dictionary attack is a cyber attack where cybercriminals use common words or phrases to crack passwords. With the power of AI, cybercriminals are able to automate the testing of a large list of common words and phrases often used as passwords. These lists can include words from dictionaries, leaked password databases and even terms specific to a target’s interests. For example, if “football” is a common word in the dictionary list and someone uses “football” as their password, the AI-powered cybercriminal could quickly identify and exploit it.
Keeper Protects Your Passwords From AI Password Cracking
Securing your passwords against evolving threats like AI-powered password cracking is crucial to keeping your data safe. Using Keeper as your dedicated password manager shields your sensitive data from these sophisticated attacks by aiding you in creating strong passwords, providing weak and reused password warnings, autofilling your credentials and sending you dark web alerts.
Creates strong, unique passwords for you
Keeper Password Manager assists you in generating strong and unique passwords for each of your accounts with its integrated password generator. These complex passwords are designed to resist common password-cracking methods, including dictionary attacks, because they follow password best practices. By incorporating a mix of upper and lower case letters, numbers and special characters, Keeper ensures that your passwords are always strong, protecting them against cybercriminals who use AI algorithms.
Warns you of weak and reused passwords
Keeper constantly scans your passwords for weak and reused passwords and provides you with a warning next to the associated record in your Keeper Vault. This feature helps you identify potential vulnerabilities that cybercriminals could exploit to gain access to your online accounts. Knowing which of your passwords are weak prompts you to change them, minimizing the risk of your accounts being compromised through password-cracking techniques like brute force attacks.
Autofills your credentials
Keeper’s password autofill feature, KeeperFill®, ensures that you can securely and conveniently log in to your accounts. Whenever you go to log in to an account, KeeperFill will prompt you to autofill your credentials if you have them saved in your vault. This not only saves time but also shields you from acoustic side-channel attacks since you won’t have to manually type in your passwords. By streamlining the password input process, Keeper minimizes the exposure of audible keystrokes, safeguarding you against cybercriminals attempting to decipher your passwords by sound.
Sends you dark web alerts
A popular Keeper Password Manager add-on is the dark web monitoring feature, BreachWatch®. BreachWatch constantly scans the dark web for credentials that match the ones stored in your vault. If your credentials are found on the dark web, BreachWatch immediately notifies you so you can change your password right away.
By staying informed about any breaches involving your passwords, you can take swift action before a cybercriminal is able to exploit a compromised credential.
Ensure Your Security Against AI-Powered Password Cracking
In a world where AI poses growing risks to password security, it’s crucial to stay protected with the use of a password manager. With features like strong password creation and autofilling, Keeper Password Manager is a reliable solution against AI-powered password attacks.
See how Keeper enhances your online security by starting a free 30-day trial of Keeper Password Manager today. | <urn:uuid:39936e41-78a0-45bc-86bc-e097f6ae1d81> | CC-MAIN-2024-38 | https://www.keepersecurity.com/blog/2023/08/17/how-ai-can-crack-your-passwords/ | 2024-09-21T02:15:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00569.warc.gz | en | 0.932076 | 1,414 | 3.203125 | 3 |
AI Case Study
Yixue Squirrel AI Learning maximises students' progress through a individualised AI-powered adaptive learning system
Yixue Squirrel AI Learning has developed a system that is able to provide highly personalised course syllabus for each individual student with the use of AI. The technology is used for diagnosis and prescription, where the content of courses is adjusted to each student according to their performance and progress.
Public And Social Sector
Education And Academia
"One such programme, provided by Chinese edtech company Yixue Squirrel AI Learning, goes much further than a simple online course. Boasting over 1,000 education centres across the whole of China, Squirrel AI provides students with AI-powered adaptive learning system which tutors in many different subjects. Working from either a supervised learning centre after school, or via an online space featuring regular video call contact with personal mentors, Yixue tailors a course exactly according to a student’s strengths and progress thus far.
After school, students are able to study in a supervised environment in the education centres or at home. They can work through the course content at their own pace, with each new course item a unique data point of thousands designed to optimize learning efficiency.
Diagnosis & prescription: using AI to adjust content by performance
Any adaptive learning system, explains Yixue’s Chief Data Scientist Dan Bindman, is made possible by three components. The first is a necessary part of all learning: the content itself. Some subjects are better-suited to this than others. Math, for example, has a natural structure and progression to it, whereas more open-ended subjects such as the humanities are more difficult to model. For Bindman, it comes down to the strength of the content itself: “How strong is the content? How complete is the content? How deep is the content? These questions apply whether you’re going to introduce adaptive learning or not: it’s the content you’re going to be teaching.”
Secondly, there’s what Bindman calls ‘diagnosis’. This is where the system identifies exactly what the student does and doesn’t know at an extremely high resolution; something traditionally performed—painstakingly—by a teacher. This is achieved through breaking up the content into units made up of thousands of ‘items’—a group of similar questions that aim to teach students solutions to specific problems.
“If I showed you how to do x+7 = 14 once, I wouldn’t give you that problem again because you’d just do it by memory. Instead, I’ll give you a different problem,” Bindman says. “That makes up a single item, and in all of Beginners’ Algebra there might be 1000 to 2000 items that make up the entire course.”
What makes adaptive learning special is that it figures out exactly which of those items a student is strong and weak at in order to identify what they’re ready to learn. Unlike a traditional classroom where all 30 or 40 students are taught the same thing—regardless of individual progress—adaptive learning is able to provide a highly individualised course syllabus in order to maximise a student’s progress."
"The traditional model of classroom education, sadly, continues to be very much one-size-fits-all. A course syllabus is interpreted and delivered by a teacher en masse to large groups of students, with little flexibility allowed for the bright sparks or the slow learners. This is as true of schools the world over.
However, this one-size-fits-all approach is problematic—especially in highly competitive education systems like that of China. China’s high school and university admissions tests are among the most challenging and demanding in the world, and with every parent looking to get their child ahead, there’s a huge market out there for innovative, individualized e-learning tools." | <urn:uuid:44cf4421-ed39-4d4f-954c-8bd49c4e2384> | CC-MAIN-2024-38 | https://www.bestpractice.ai/ai-case-study-best-practice/yixue_squirrel_ai_learning_maximises_students'_progress_through_a_individualised_ai-powered_adaptive_learning_system_ | 2024-09-07T17:45:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00869.warc.gz | en | 0.939917 | 821 | 2.59375 | 3 |
With the explosion of 5G applications and cloud services, traditional technologies are facing fundamental limits of power consumption and transmission capacity, which drives the continual development of optical and silicon technology. Silicon photonics is an evolutionary technology enabling major improvements in density, performance and economics that is required to enable 400G data center applications and drives the next-generation optical communication networks. What is silicon photonics? How does it promote the revolution of 400G applications in data centers? Please keep reading the following contents to find out.
What Is Silicon Photonics Technology?
Silicon photonics (SiPh) is a material platform from which photonic integrated circuits (PICs) can be made. It uses silicon as the main fabrication element. PICs consume less power and generate less heat than conventional electronic circuits, offering the promise of energy-efficient bandwidth scaling.
It drives the miniaturization and integration of complex optical subsystems into silicon photonics chips, dramatically improving performance, footprint, and power efficiency.
Conventional Optics vs Silicon Photonics Optics
Here is a Technology Comparison Chart between Conventional Optics vs Silicon Photonics Optics, taking QSFPDD DR4 400G module and QDD DR4 400G Si for example:
The difference between a 400GBASE-DR4 QSFP-DD PAM4 optical transceiver module and a silicon photonic one just lies in: 400G silicon photonic chips — breaking the bottleneck of mega-scale data exchange, showing great advantages in low power consumption, small footprint, relatively low cost, easiness for large volume integration, etc.
Silicon photonic integrated circuits provide an ideal solution to realize the monolithic integration of photonic chips and electronic chips. Adopting silicon photonic design, a QDD-DR4-400G-Si module combines high-density & low-consumption, which largely reduces the cost of optical modules, thereby saving data center construction and operating expenses.
Why Adopt Silicon Photonics in Data Centers?
To Solve I/O Bottlenecks
The world’s growing data demand has caused bandwidths and computing power resources in data centers to be used up. Chips have to become faster when facing the growing demand for data consumption, which can process information faster than the signal can be transmitted in and out. That is to say, chips are becoming faster, but the optical signal (coming from the fiber) must still be converted to an electronic signal to communicate with the chip sitting on a board deep in the data center. And since the electrical signal still needs to travel some distance from the optical transceiver, where it was converted from light, to the processing and routing electronics — we’ve reached a point where the chip can process information faster than the electrical signal can get in and out of it.
To Reduce Power Consumption
Heating and power dissipation are enormous challenges for the computing industry. Power consumption will directly translate to heat. Power consumption causes heat, so what causes power dissipation? Mainly, data transmissions. It’s estimated that data centers have consumed 200TWh each year — more than the national energy consumption of some countries. Thus, some of the world’s largest Data Centers, including those of Amazon, Google, and Microsoft are located in Alaska and similar-climate countries due to the cold weather.
To Save Operation Budget
At present, a typical ultra-large data center has more than 100,000 servers and over 50,000 switches. The connection between them requires more than 1 million optical modules with around US$150 million-US$250 million, which accounts for 60% of the cost of the data center network, exceeding the sum of equipment such as switches, NICs, and cables. The high cost forces the industry to reduce the unit price of optical modules through technological upgrades. The introduction of fiber optic modules adopting Silicon Photonics technology is expected to solve this problem.
Silicon Photonics Applications in Communication
Silicon photonics has proven to be a compelling platform for enabling next-generation coherent optical communications and intra-data center interconnects. This technology can support a wide range of applications, from short-reach interconnects to long-haul communications, making a great contribution to next-generation networks.
- 100G/400G Datacom: data centers and campus applications (to 10km)
- Telecom: metro and long-haul applications (to 100 and 400 km)
- Ultra short-reach optical interconnects and switches within routers, computers, HPC
- Functional passive optical elements including AWGs, optical filters, couplers, and splitters
- 400G transceiver products including embedded 400G optical modules, 400G DAC Breakout cables, transmitters/receivers, active optical cables (AOCs), as well as 400G DACs.
Now & Future of Silicon Photonics
Yole predicted that the silicon optical module market would grow from approximately US$455 million in 2018 to around US$4 billion in 2024 at a CAGR of 44.5%. According to Lightcounting, the overall data communication high-speed optical module market will reach US$6.5 billion by 2024, and silicon optical modules will account for 60% (3.3% in 20 years).
Intel, as one of the leading Silicon photonics companies, has a 60% market share in silicon photonic transceivers for datacom. Indeed, Intel has already shipped more than 3 million units of its 100G pluggable transceivers in just a few short years, and is continuing to expand its Silicon Photonics’ product offerings. And Cisco acquired Accacia for US$2.6 billion and Luxtera for US$660 million. Other companies like Inphi and NeoPhotonics are proposing silicon photonic transceivers with strong technologies.
Original Source: Silicon Photonics: Next Revolution for 400G Data Center | <urn:uuid:a7976a41-3d75-40a8-badc-49d161bbbc22> | CC-MAIN-2024-38 | https://www.cables-solutions.com/silicon-photonics-next-revolution-for-400g-data-center.html | 2024-09-07T17:29:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00869.warc.gz | en | 0.900149 | 1,215 | 3.046875 | 3 |
Who are you? What are you allowed to do?
In tech circles, those two questions are formally known as authentication (“who are you?”) and authorization (“what are you allowed to do?”) and are collectively referred to as “auth”.
Clearly, service providers have a vested interest in making sure that they know the correct answers to both of those questions; they don’t want to allow the wrong user to log into an account and they also don’t want to allow users to interact with data that they are not permitted to access.
Authentication and authorization are shared responsibilities between service providers and end-users.
This shared model is necessary to achieve the ultimate goal: avoiding unauthorized account access and keeping data secure. Neither service providers nor end-users can achieve that goal single handedly.
Service providers are responsible for implementing strong, easy to use security features for end-users. They are responsible for documenting how those features work, maintaining them over time, and promoting their use. In turn, users are responsible for taking advantage of those security features and actually enabling them. A user facing feature doesn’t provide any effective security if it goes unused.
Service providers are also responsible for implementing transparent security mechanisms on behalf of end-users. However, regardless of how much effort is taken to make a service secure, there will always be some action that careless users can take to put their account at risk. Users are responsible for educating themselves about the best practices for staying safe online so that they can avoid common pitfalls that are outside the control of service providers.
Let’s explore some specific examples to clarify why service providers and end-users both have a role to play in achieving effective security when it comes to authentication and authorization.
Imagine that you are a service provider
You have a team of the most talented engineers, designers, writers, researchers, marketers, and do-ers of all kinds. You have overwhelming internal support and the necessary budget to do everything within your power to build an inherently secure service.
It sounds like you’re setup for success! Surely, you’ll reach the ultimate goal of achieving effective security, right?! Frustratingly, even after taking all of these necessary steps to secure your service, end-users play a huge role in determining whether their interaction with your service is actually secure.
Here is a (very) non-exhaustive list of security related projects that your team might have tackled:
You implement password best practices
- You enforce minimum password requirements and explain to users how to create a strong passphrase.
- You prevent users from using commonly known passwords like “password”, “123456” and “qwerty”.
- You follow best practices for securely storing passwords in your database.
- You even have a recommendation on your signup page for users to download and use a password manager.
And then a user ends up reusing their favorite password that they use for every other service. One of those other services isn’t as diligent as you are, stores the password insecurely, has a breach, and now the user’s password is in the hands of a hacker. That hacker happily tries the user’s password on your service and is pleasantly surprised when the login succeeds.
This scenario is a scary reality. LastPass found that “91% [of people] know there is a risk when reusing passwords, but 61% continue to do so.” Google analyzed 1.9 billion usernames and passwords exposed via data breaches and estimates that “7-25% of exposed passwords match a victim's Google account.” For a real world example, read about how a Dropbox employee’s password reuse led to the theft of over 60 million user credentials.
You configure HTTPS so that user connections are secure
- You make sure that all connections to your service are served over HTTPS so that the green lock appears in the browser address bar to inform users that connection is secure.
- You use an EV certificate so that your company name appears next to the green lock in the browser address bar for additional user trust.
- You make sure that you support only the recommended versions of TLS, configure the strongest cipher suites and important features, and get high ratings from respected TLS analysis tools.
And then a user gets an email that looks like it came from you, clicks the link asking them to login, and doesn’t verify the URL in the address bar before entering their username and password. They’ve just fallen victim to a phishing attack and handed their credentials over to a hacker.
The Anti Phishing Working Group (APWG) stated in their 2016 Q4 report that phishing attack campaigns in 2016 shattered all previous years’ records with over 1.2 million phishing attacks, a 65% increase over 2015. To give an example closer to home for all of those Gmail users out there, Google recently found that “victims of phishing are 400x more likely to be successfully hijacked compared to a random Google user.”
You support multi factor authentication (MFA)
- You support the best implementations of two factor authentication (2FA).
- You support the best biometric authentication options available in the industry.
- You implement adaptive authentication and dynamically change security and authentication policies based on user and device context.
- You allow administrators to force all users within the organization to enable multi factor authentication (MFA) on their accounts.
And then an administrator chooses not to check the box next to “require MFA on all accounts” and users in the organization choose not to enable MFA. That password leaked in the phishing attack now grants full access to the user’s account.
Duo, a leading provider of two factor authentication (2FA) solutions, found that only ~28% of people in the US use 2FA and only ~56% even knew what 2FA was before their survey. Pew Research Center conducted a cybersecurity quiz in which only 18% of users could correctly identify multi factor authentication (MFA). The adoption rates of MFA are disappointingly low and general education is clearly lacking. However, even tech folks have experienced account hijacks that could have been prevented by MFA. ArsTechnica reported that Fox-IT, a Dutch IT firm, “fell victim to a well-executed attack that allowed hackers to take control of its servers and intercept clients' login credentials and confidential data. The biggest lapse on Fox-IT's part was the failure to secure its domain register account with two-factor authentication.”
You specifically address organizations with multiple users
- You allow organizations to provision accounts for each individual user.
- You implement robust audit capabilities so that administrators can verify which employees accessed certain data and how they used it.
And then users decide to share a single account and password instead of creating one for each employee in the company. Perhaps, one of those employees uses the shared account for something nefarious; good luck figuring out who is responsible!
Laughably, a similar scenario played out recently when UK politician Damian Green got into hot water for allegedly accessing pornography on his government computer. A colleague, Nadine Dorries, jumped to Green’s defense implying it could have been someone else on his PC using his identity.
My staff log onto my computer on my desk with my login everyday. Including interns on exchange programmes. For the officer on @BBCNews just now to claim that the computer on Greens desk was accessed and therefore it was Green is utterly preposterous !!
— Nadine Dorries (@NadineDorries) December 2, 2017
Troy Hunt, a well known security expert, explained why sharing a single account in this way is problematic and also highlighted many common tools which allow users to maintain individual accounts while still sharing data to get their jobs done.
You provide robust permission framework to help restrict access to data
- You implement a robust permission framework which allows administrators to limit sensitive data to smaller groups of users based on their roles.
- You have a team of tech writers who develop useful tutorials, documentation, and examples of how to get the most value out of the permission framework.
And then a user decides to assign everyone in the organization the administrator role. Now, all users have access to all data within the organization and the intern decides to access sensitive customer data just for fun.
Imagine that you are a highly motivated user
You have taken the initiative to research and understand all of the cybersecurity best practices recommended by industry experts. Where appropriate, you have spent a small amount of money so that you have the best tools available to help you stay secure.
It sounds like you’re setup for success! Surely, you’ll reach the ultimate goal of achieving effective security, right?! Frustratingly, even after taking all of these necessary steps to secure your account, the service provider still plays a massive role in determining what level of security you can actually achieve.
Here is a (very) non-exhaustive list of steps you might have taken to remain secure:
You are prepared to enable multi factor authentication (MFA)
- You purchase a few U2F security key that works with your computer and your phone so that you can leverage the strongest two factor authentication (2FA) available on all of your devices.
- You look forward to enabling Push Notification 2FA, such as Google Prompt, where available.
- You are ready to swipe your thumb, scan your iris, or take a photo of your face while enabling biometric authentication options.
And then the service provider claims that the knowledge questions they require you to fill out qualify as 2FA when combined with your username/password and that your account is secure. Of course, you know that knowledge questions count as a duplicate knowledge factor rather than real 2FA and are so weak that they border on useless anyways. Your account remains highly vulnerable since you cannot enable a possession or inherence factor of authentication to achieve actual 2FA.
How about losing millions of dollars for a real world example? Hackers stole millions of dollars worth of bitcoin from a victim’s account that had SMS based 2FA enabled. Wait, what?! They got hacked even with 2FA enabled? Unfortunately, not all 2FA implementations provide the same level of security and SMS is notoriously one of the weakest. The victim’s account likely would have been safe with a strong type of 2FA, but imagine how trivial it would have been for hackers if no 2FA was enabled at all!
You use a password manager
- You use a password manager to ensure that each service has a unique strong password.
And then the service provider stores your password insecurely in the database, has a breach, and your password is now in the hands of a hacker. Since the service provider doesn’t implement actual 2FA, your password and some easily answered knowledge questions allow hackers to easily hijack your account.
It is shocking and frightening that even with all of the readily available resources that cover, in great detail, the best practices for storing passwords in a database, some service providers throw security out the window entirely and store passwords in plain text. In late 2015, Troy Hunt reported that 000webhost.com, a low priced web host, was hacked and lost 13 million user emails and plain text passwords. Similarly, The Hacker News reported in mid 2016 that VK.com, Russia's biggest social networking site, lost 100 million email addresses and plain text passwords in a breach.
You install browser extensions to secure your web traffic
- You install HTTPS Everywhere in your browser so that your connections to websites default to secure HTTPS as much as possible.
And then the service provider only supports deprecated TLS versions, such as SSLv3, or maybe doesn’t even support HTTPS at all. Now, anyone on the same network as you can easily record the data you send to the service, such as your username and password; the hacker sitting next to you in the coffee shop now has your login credentials.
Let’s Encrypt, a project that offers free TLS certificates, reports that the Web went from 46% encrypted page loads to 67% in 2017 - a gain of 21 percentage points in a single year! This is truly encouraging news, but it also highlights that a third of the Web is still using HTTP and is entirely unencrypted! Hardenize reports that only 38% of the top 500 websites are using well configured HTTPS; 19% of the top 500 are using configurations with severe deficiencies. As recently as late 2017, a UK bank called NatWest was serving their homepage over insecure HTTP.
You try to follow the principle of least privilege
- You work with the security and compliance teams at your company to document which information employees need access to in order to efficiently do their jobs. You sit down to configure the service to enforce those permission policies.
And then the service doesn’t support the concept of an organization with individual user accounts and you’re forced to have multiple employees share credentials to a single account. You don’t have any ability to limit access to data based on an employee’s department or role. Everyone has access to everything! Also, you cannot enable 2FA for the account since it is shared between many people and they cannot all share the same trusted device, thumb print, or iris.
We’re all in it together
With a steady cadence of massive data breaches making front page news, cybersecurity is more present in the public conversation now than ever before.
In addition to the less known examples discussed in previous sections, there are many high profile cases where fault lies clearly with the service provider. Equifax exposed the data of 147 million people after it failed to patch server software for two months. Hackers stole the data of 57 million customers and drivers when Uber engineers shared a secret credential in a code repository on GitHub. Sadly, there are too many examples to list here; it is quite clear that service providers have a massive amount of work to do in order to fulfill their responsibilities of implementing secure systems.
Unsurprisingly, users are having trouble trusting service providers to keep their data safe. However, as we’ve discussed, users also have an important responsibility to educate themselves to avoid contributing to the problem. According to a 2017 Pew Research Center survey, “many Americans do not trust modern institutions to protect their personal data – even as they frequently neglect cybersecurity best practices in their own personal lives.”
Tenable conducted a survey in late 2017 which draws similar conclusions on the sad state of cybersecurity literacy among users. They wrap up their report by highlighting the same tenent of the Shared Responsibility Model discussed in this essay: “[Organizations] need to lead the way in basic security practices that keep their customer and critical business data safe. But individuals must do their part, too – both as consumers and, in many cases, as employees of those same enterprises – and that starts with cyber literacy.”
Authentication and authorization are shared responsibilities between service providers and end-users.
All Things Auth is dedicated to helping service providers and end-users alike navigate those shared responsibilities and make progress towards the ultimate goal: avoiding unauthorized account access and keeping data secure. Neither group can do it alone. We’re all in it together. | <urn:uuid:6e0cd05e-a275-4028-8b99-2ce4380c4da2> | CC-MAIN-2024-38 | https://allthingsauth.com/2018/01/12/shared-responsibility-model/ | 2024-09-08T21:40:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00769.warc.gz | en | 0.944153 | 3,169 | 2.640625 | 3 |
California-based SETI Institute has secured a contract to help NASA implement standards across the space agency's planetary protection efforts.
SETI Institute will help the agency's Office of Planetary Protection report on biological cleanliness, review technical matters, administer training, inform the masses about these efforts and develop implementation guidelines, NASA said Saturday.
NASA's planetary protection efforts aim to prevent biological contamination between Earth and planets explored during missions.
“The depth of mission experience and breadth of knowledge on the SETI Institute team will help NASA meet the technical challenges of assuring forward and backward planetary protection on the anticipated path of human exploration from the Moon to Mars,” said Lisa Pratt, planetary protection officer at NASA.
SETI has contributed its microbiology expertise to NASA's planetary protection over the past decade. Efforts with SETI's involvement include the Curiosity rover and the New Horizons flyby mission. | <urn:uuid:70cd6304-d7b1-4a7a-9f29-cf6ea3f61c55> | CC-MAIN-2024-38 | https://executivegov.com/2020/07/seti-institute-awarded-new-nasa-contract-to-support-planetary-protection/ | 2024-09-11T10:18:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00569.warc.gz | en | 0.902463 | 180 | 3 | 3 |
What is Database Virtualization?
Database virtualization, also called Database Cloning, is a method of creating a virtual copy of data without making physical duplicates. Instead of manually copying data, professionals use a mathematical technique to produce exact replicas of the original dataset. This approach is particularly useful in testing and development environments.
Why Database Virtualization Matters
Data accuracy is crucial for data-driven projects, and professionals rely on real-time information to build data models. However, they often work with replicas rather than the original dataset. Database virtualization reduces the need for extra hardware and storage, resulting in cost savings.
Benefits of Database Virtualization
Here are some advantages of database virtualization:
- Improved Agility: Database virtualization allows organizations to swiftly provide data to different teams and departments, expediting application development and reducing time to market.
- Reduced Costs: By creating virtual copies instead of physical ones, database virtualization cuts down on the need for additional hardware and storage, saving organizations money.
- Increased Productivity: Database virtualization eliminates the manual copying and synchronization of data, freeing up resources to concentrate on more critical tasks.
- Enhanced Security: Database virtualization solutions can include features like data masking and encryption to ensure sensitive data remains secure.
- Better Collaboration: Database virtualization lets teams work on the same data sets, ensuring consistency and accuracy across the organization.
In summary, database virtualization offers organizations a flexible, scalable, and cost-effective way to manage data, which is crucial in today’s data-driven business landscape.
Use Cases of Database Virtualization
Database virtualization finds application in various scenarios:
- DevOps and Agile Development: DB virtualization facilitates rapid testing and development by providing instant access to multiple versions of a database, enabling teams to iterate and innovate quickly.
- Data Warehousing and Analytics: It allows for efficient testing of different data models and analytical queries without affecting the production database, ensuring data accuracy in analytical processes.
- Disaster Recovery Planning: DB virtualization assists in creating and managing multiple backup data environments, ensuring organizations can quickly recover and resume operations in case of data loss or disasters.
- Software Testing: DB virtualization is invaluable for testing applications against various database states and conditions, ensuring software reliability and functionality.
- Training and Education: It provides a safe and controlled environment for training users on database management, allowing them to gain skills without the risk of affecting real data.
- Compliance and Security Testing: DB virtualization helps in testing security protocols and compliance requirements in a controlled setting, ensuring that data remains secure and compliant with industry regulations.
- Test Environment Management: Database virtualization allows for rapid provisioning and de-provisioning of test environments. Teams can easily spin up and spin down test environments as needed, streamlining test environment management and reducing resource overhead.
Database Virtualization Tools
Several commercial tools are available for database virtualization:
- ACCELLARIO: Accellario is a database virtualization tool that simplifies data management for development and testing, ensuring efficient replication and access to database copies.
- DELPHIX: Offers secure data management, automation, and fast access for development, testing, and analytics without copying or moving data.
- REDGATE SQL CLONE: Copies SQL server databases efficiently, enabling developers and testers to work with updated and isolated database copies.
- vME (VirtualizeMe): Enables database virtualization and provisioning for testing and development. It is part of the Enov8 Suite and integrates with the sister product “Enov8 TDM” to provide additional capabilities like Data Profiling, Data Masking, and Data Synthetics.
- WINDOCKS: Provides a platform for delivering and managing data for development and testing, supporting multiple databases and applications.
The Database Virtualization Workflow
The database virtualization process typically involves four stages:
- Ingestion: Import data from the source into the chosen tool, view schema or database connections, and verify imported data.
- Snapshot: Take snapshots of the data using the virtualization tool, selecting the data tables to clone.
- Clone: Clone the data snapshots, providing an address for saving the clone data.
- Provision: Import the cloned data into development or testing environments for analysis or testing.
In conclusion, database virtualization is a versatile tool that enhances efficiency, data security, and agility across various domains, making it a valuable asset for modern organizations in today’s data-driven landscape. | <urn:uuid:115edc49-16db-41f9-bbd2-86854573bff8> | CC-MAIN-2024-38 | https://www.dataopszone.com/database-virtualization-tools/ | 2024-09-13T20:53:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00369.warc.gz | en | 0.857304 | 930 | 3.1875 | 3 |
For as long as medicine has existed, cancers have ravaged humanity.
The search for an eventual cure has been a tremendous & tireless pursuit, selflessly undertaken by doctors, researchers, scientists & the entire healthcare community.
With each passing year and the availability of sophisticated technologies, their efforts continue to go from strength to strength, expanding our means to adequately cope, manage and possibly rebuff the disease which has so often reminded us of our mortality.
Up until recently, Chemotherapy, Surgeries, Radiation & Targeted Drug therapies had been the go-to for oncologists when it came to the preferred choice of cancer treatment.
Also known as Immuno-Oncology, it’s a form of cancer therapy which stimulates, improves & harnesses the strength of our bodies’ natural immune system to fend off, fight & eradicate cancerous cells. Immunotherapies can be further categorized as Active or Passive, depending on the purpose they serve.
Some of the better-known varieties are as follows:
- Immune Checkpoint Inhibitors
These refer to certain drugs which allow killer T-cells in our body to identify & obliterate cancer cells.
T-cells are designed to recognize & destroy any foreign bodies and do so, by an exchange of signals called checkpoints where they use their surface proteins (a.k.a receptors). As cancerous cells are mutated body cells, they evade these T-cells by sending deceptive signals to our immune system.
Checkpoint Inhibitors disrupt these efforts by the cancer cells, lifting their veil, hence allowing T-cells to thwart their disguised attempt to hide in plain sight.
- CAR T-Cell Therapy (Adoptive Cell Transfer)
In this method, a patient’s T-cells are extracted & genetically engineered to add a special receptor called chimeric antigen receptor (CAR) having the ability bind to a specific protein only produced by cancer or tumor cells. These T-cells are then grown to a huge quantity (billions) and injected back into the patient’s bloodstream to attack the cancer.
- Monoclonal Antibodies These refer to antibodies created by cloning white-blood cells that are designed to bind to a singular kind of protein or antigen, effectively the tumor cells. Also referred to as targeted antibody treatments they are used for various purposes. Some are required for diagnostic purposes to passively mark the cancer cells to stimulate destructive action by the immune system while others known as antibody-drug conjugates (ADCs) carry highly cytotoxic(toxic to living cells), anti-cancer drugs to induce self-destruction of the tumor cells.
One of the major benefits of immunotherapy is its potential to be more comprehensive in the long-term as well as relatively much less toxic unlike chemotherapy or radiation.
Chemotherapy drugs are not designed to be target specific but attack all rapidly growing cells within our bodies, including but not limited to dangerously multiplying cancer cells.
The occurrence of vigorous hair-fall & visible skin rashes can be accounted to the loss of these frequently replicated yet normally functional cells.
Immunotherapies may work well in certain cases while not responding in a similar manner for other patients. They’re already in use treating cancers like melanoma, lung, kidney, bladder & lymphoma.
Scientists are still trying to fine-tune this method of treatment and it may be prescribed in combination with chemo or in isolation as well.
As research into immuno-oncology deepens, optimization of therapies shall continue with many paths emerging to successfully eradicate cancers as the elusive search for a cure forges on.
The 5th Annual MarketsandMarkets Next-Gen Immuno-Oncology Conference scheduled for 23rd to 24th June, 2022 in Boston, USA aims to discuss these very topics & act as a medium to share this enriching knowledge. | <urn:uuid:4af8b159-981d-4343-8e2c-31f3b24f57ea> | CC-MAIN-2024-38 | https://events.marketsandmarkets.com/blog/immuno-oncology-the-latest-frontier-in-cancer-therapy/ | 2024-09-17T16:15:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00069.warc.gz | en | 0.952708 | 796 | 2.9375 | 3 |
Network Configuration Protocol (NETCONF) is a standard based IETF Network Configuration Management Protocol. With this Network Configuration Management Protocol, we can install, modify and remove the configuration of the network devices. Additionally, NETCONF Protocol reduces the cost. It also reduces the given time to the network device configuration management. Here, NETCONF does the configuration automatically, not manually.
Network Configuration Protocol (NETCONF) is a Network Management protocol like SNMP (Simple Network Management Protocol). But it is a better protocol than SNMP for Network Management.
NETCONF Protocol is used in the Southbound Interface of SDN. As we have talked about before, Southbound Interface is the SDN interface that connects the Forwarding Plane and the Control Plane.
Basically, NETCONF Architecture is consist of two main elements. These elements are:
• NETCONF Client
• NETCONF Server | <urn:uuid:487ab360-5e2e-4c4e-8583-28a2a9fb20da> | CC-MAIN-2024-38 | https://ipcisco.com/lesson/netconf-overview/ | 2024-09-18T19:33:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00869.warc.gz | en | 0.815046 | 184 | 3.390625 | 3 |
The U.K.’s National Cyber Security Centre offers simple advice for people and organizations for more secure options.
Passwords are the keys to our digital lives but a recent analysis by the U.K.’s National Cyber Security Centre found we’re using the same easy-to-guess ones over and over again.
NCSC and Troy Hunt, the security professional behind the breach database Have I Been Pwned, released a data set of the 100,000 most used passwords that are already easily available to potential attackers.
“Attackers commonly use lists like these when attempting to breach a perimeter, or when trying to move within a network to potentially less well defended systems,” NCSC said.
More than 23.2 million accounts could be accessed with the simple password “123456” and another 7.7 million used “123456789.” The rest of the top five is just as dumb, with “qwerty,” “password” and “1111111” each being used more than 3 million times.
The data set also shows that people are hooked on names, sports teams, fictional characters and bands. For example, current Premier League leader Liverpool also top the table of football—uh, soccer—teams that are used as passwords. Superman appears as a password more than 333,000 times, beating out Batman’s 203,116.
People who find their passwords in the data set should change it immediately and avoid reusing passwords, NCSC said.
“Using hard-to-guess passwords is a strong first step and we recommend combining three random but memorable words. Be creative and use words memorable to you, so people can’t guess your password,” NCSC Technical Director Ian Levy said in the blog post.
Though a lot of password security is foisted on the user, NCSC also suggests organizations and systems administrators can take several steps. First, they can look into blacklisting the passwords in the data set. They can also lessen the user’s password burden by offering single sign-on or alternative authentication methods. They should also lets users take advantage of password managers.
“Recognising the passwords that are most likely to result in a successful account takeover is an important first step in helping people create a more secure online presence,” Hunt said in the statement. | <urn:uuid:6b02be88-d69a-4866-b444-086d0b416c7f> | CC-MAIN-2024-38 | https://www.nextgov.com/cybersecurity/2019/04/how-build-better-password-123456/156468/?oref=ng-next-story | 2024-09-20T01:31:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00769.warc.gz | en | 0.945537 | 499 | 2.546875 | 3 |
Interoperability in healthcare is what allows diverse databases to exchange information and data seamlessly and cohesively between systems. It provides a universal language between disparate systems, like mobile apps, electronic health records, third-party systems, and more.
Healthcare interoperability has been essential for the healthcare industry since 2009, when Congress established the American Recovery and Reinvestment Act (ARRA). This law required medical organizations to switch to electronic records, outlining expectations for electronic data exchange. The ARRA is likely the primary factor pushing the healthcare industry toward interoperability.
Why Is Interoperability Beneficial for Healthcare Providers?
Patients commonly receive care from multiple providers operating within separate healthcare systems. Without interoperability, these different healthcare providers can’t know the patient’s care history, leading them to create individual medical files with missing and repeated information.
Consider, for example, if a patient had a bad reaction to medication and was rushed to the hospital via ambulance. The EMT in the ambulance wouldn’t have access to the patient’s list of prescribed drugs and wouldn’t know how to treat them properly.
Developers design interoperability in healthcare information systems to benefit patients. It helps medical professionals, wherever they are, get the information they need to provide the best care possible.
How Does Interoperability in Healthcare Apply to Software Development?
Interoperability in healthcare is what unifies systems. It isn’t easy to do well and offers a unique challenge to software developers. Accomplishing interoperability from a coding perspective isn’t hard, but there’s an abundance of red tape to cut through. Much of the information healthcare professionals need access to is highly confidential and requires intense care and sensitivity.
Software developers may run into some of the following issues:
- Variability in exchange standards: While Electronic Health Records (EHRs) are the norm, their organization isn’t standardized. The lack of standardization requires data manipulation before enabling integrations with other applications.
- Information blocking: Healthcare payers have been reluctant to disclose information outside of their systems, providing a roadblock to interoperability.
- Insufficient resources: Small healthcare organizations may not adopt interoperability because they lack the resources to build and maintain integrated systems.
Fortunately, software development can provide straightforward solutions to these issues. A lot depends on a well-maintained application programming interface (API) toolset.
When you connect health applications, medical records, and lab results using API’s to a connected care platform, patients and healthcare professionals can more easily access important information. In addition, care providers can share information with patients on a patient portal or mobile application.
With the right software development team, healthcare payers and providers can create custom software that can do more with efficiency and transparency from filing insurance claims to updating patient records.
Integrate Your Healthcare Information Today
Do you need to ensure interoperability between your applications and healthcare information systems? Talk with experienced engineers who can help. Fill out our contact page to see how Excel SoftSources can help your interoperability endeavors. | <urn:uuid:e2f839c1-5961-4ee9-a4f1-4f79b0cf7c06> | CC-MAIN-2024-38 | https://excelsoftsources.com/blog/what-is-interoperability-in-healthcare/ | 2024-09-09T03:10:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00869.warc.gz | en | 0.918071 | 636 | 3.265625 | 3 |
October 20, 2023
Lifestyle-mediated chronic conditions, such as heart disease, cancer, obesity and diabetes, account for 74% of annual deaths globally. These conditions are multifaceted in their origins, arising from the complex interactions between environment (diet, lifestyle) and multi-omics profile (genome, epigenome, microbiome, proteome and metabolome). The complex nature of these conditions poses difficulty in their research, prevention, management and treatment.
Individuals' unique multi-omics profile alters their response to dietary intake. This has led to the emergence of ‘precision nutrition,’ which considers the interactions between diet and multi-omics profiles to provide nutritional advice personalized to each individual, using AI.
Traditional statistical approaches are adequate for analyzing single biomarkers but cannot effectively analyze the interactions between diet and multi-omics data. Hence the requirement for machine learning algorithms to analyze and interpret the data.
Supervised machine learning algorithms, such as OPLS-DA and PLS-DA are the most popular approaches to multi-omics research as they are ideal for predicting response to dietary intake. Features with the known greatest impact on the prediction output are analyzed by the algorithm to create a prediction model.
Unsupervised machine learning algorithms, such as PCoA and PCA, are suited to exploratory analysis as they can identify patterns in the data and can effectively stratify populations into subtypes. Semi-supervised algorithms can improve classification with partially labelled data.
The ability of machine learning to analyze the complex interactions between multi-omics data and nutritional intake makes it a valuable tool in the advancement of precision nutrition research.
Obtaining accurate dietary intakes poses a barrier to nutrition research and healthcare. Traditional dietary assessment methods rely on self-reporting, leading to under- and over-reporting.
Image-recognition AI could overcome these problems. Smartphone apps have been developed to identify foods and use food databases and barcode scanning to calculate nutritional content.
Computational models using deep neural networks can learn to identify relevant features from input images by classifying each pixel. Models can learn to segment out different foods in an image by grouping similar pixies. Hence deep learning models have the potential to recognize unlimited food and drink items and determine volume.
Accurately identifying foods and drinks is challenging for AI, as the appearance of food changes during preparation and cooking, multiple foods can be combined, and different foods and drinks can appear similar, resulting in low classification accuracy. Even when the correct food and drinks are identified, the nutritional content of food can be altered by cooking and this needs to be reflected in the databases used by AI.
Deep learning food image recognition models have been typically trained using images of signal food items or fake food images. Accuracy is reduced when determining real-world food images with higher variance. Volume estimation can be improved by providing multiple images of food from different angles to enable 3D image construction. Future image recognition models need to be trained with real-world food images and food databases from global cuisines to improve accuracy.
Apps and commercially available biometric sensors, known as wearable devices like smartwatches, not only record dietary intake but also track and monitor body composition, physical activity, blood pressure and blood glucose levels in real-time.
Accuracy of measurements can vary between devices and between markers being measured. Step count is most accurate in the Fitbit Charge and heart rate is most accurate in the Apple Watch and the measurement of energy expenditure was least accurate across multiple devices. There is a need for validated wearable devices to ensure the accuracy of nutrition research and disease monitoring.
With the use of AI, wearables can non-invasively monitor blood glucose and HbA1c. Glucose variability and HbA1c correlate with changes in autonomic nervous system activity, wearables can measure these changes via accelerometry, heart rate, electrodermal activity and temperature. Researchers inputted these features into random forest models and tuned using leave-one-person-out cross-validation. Root mean squared error and mean average per cent error were used to measure model accuracy. This determined that the models had a high level of accuracy for predicting glucose variability and HbA1c, with the HbA1c prediction model being as accurate as continuous glucose monitoring devices.
These AI models allow for the detection of prediabetes and early preventative intervention and for diabetics to monitor their glucose levels and HbA1c without the need for invasive continuous glucose monitoring and clinical blood tests.
There are commercially available omics testing companies that will test consumers' genetics, epigenetics or microbiome. Based on these tests consumers are offered personalized dietary and exercise recommendations and customized supplements. Some companies have employed algorithms to synergize results from multiple data points into clear recommendations and disease risk scores.
Algorithms have been developed that can predict the optimal nutritional recommendations for an individual, such as the algorithm developed to predict post-meal glucose response based on subjects’ nutritional intake, activity levels, blood biomarkers and gut microbiome. These features were used in a gradient-boosting regression model consisting of thousands of decision trees which were inferred sequentially. The algorithm was assessed firstly by leave-one-out cross-validation and then tested on a separate cohort. The algorithm had higher accuracy in predicting which meals would produce low or high increases in glucose levels compared to a clinical dietitian. This demonstrates the potential of AI in precision nutrition as an effective method of disease management and prevention.
The commercial precision nutrition market currently lacks regulation for data transparency and for upholding scientific rigor in the products and services offered. The demand for precision nutrition is growing faster than the scientific evidence base, leading to unsubstantiated claims based on association studies that lack clinical relevance and actionable advice.
Algorithms with limited predictive power are being implemented and are trained on consumers through use. This is putting the advancement of the industry at the expense of consumers who may be receiving ineffective recommendations.
There is a lack of research on diverse populations, applying the same algorithms developed using one population to another population will reduce accuracy and lead to inappropriate recommendations.
Repeated omics testing is expensive and time-consuming which prevents much of the population from accessing commercially available tests and hinders the implementation of precision nutrition into healthcare.
Future of precision nutrition
The future of precision nutrition requires the implementation of regulations and guidelines for research, industry and health care. Industry standards of scientific rigor will protect the public from unvalidated recommendations and ensure data protection.
Future research needs to focus on the validation and standardization of biomarkers within diverse populations. Further development of AI models will lead to the analysis of increasingly complex data sets and more precise recommendations.
The integration of precision nutrition approaches into health care can shift the focus from management and treatment towards prevention, which could lead to a reduction of disease and improved population health outcomes. Nutritional recommendations need to incorporate individuals' food preferences and cultural and social circumstances to provide a holistic approach.
Current precision nutrition approaches stratify individuals based on similarities in biomarkers and genetic variants. The concept of digital twins involves complete personalization based solely on a single individual. An individual is deeply phenotyped to produce a high-resolution model, which can be subjected to unlimited nutritional interventions to determine dietary effects for optimal health outcomes. The creation of digital twin models requires the combination of quantum and conventional processing to analyze the highly complex datasets of multiple interacting variables. This exceeds exciting computer capabilities but could be feasible in five to 10 years.
AI brings many benefits to the advancement and implementation of precision nutrition. AI can improve data collection in research and enable large multi-omics data sets to be analyzed. AI offers a holistic approach to identifying risk, diagnosis, management and prevention of disease. AI can empower individuals to take control of their health and provide personalized nutritional recommendations to optimize health. There is a need for greater regulation of commercially available wearables and direct-to-consumer testing companies to ensure consumers are receiving validated and scientifically backed information.
Read more about:
Health careAbout the Author
You May Also Like | <urn:uuid:66e849ae-9a78-4e85-93e5-e12fe734824b> | CC-MAIN-2024-38 | https://aibusiness.com/ml/AI-in-the-advancement-precision-nutrition | 2024-09-12T19:54:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00569.warc.gz | en | 0.935152 | 1,650 | 3.34375 | 3 |
Table of Contents
In recent times, the world of cybersecurity has been forced to confront a new and unsettling reality: the emergence of malicious Generative AI, often embodied by ominous names like FraudGPT and WormGPT. These tools, hailing from the darker corners of the internet, present a unique challenge to the digital security landscape. In this blog post, we will explore the nature of Generative AI Fraud, dissect the messaging from those who productized it, and assess their potential impact on the cybersecurity landscape. While it's important to remain vigilant, we'll also emphasize that the situation, although concerning, is not yet cause for widespread panic.
What are FraudGPT and WormGPT?
FraudGPT is a subscription-based malicious Generative AI that employs sophisticated machine learning algorithms to create deceptive content. Unlike ethical AI models, FraudGPT knows no boundaries, making it a versatile tool for various nefarious purposes. It can craft spear phishing emails, fraudulent invoices, fake news articles, and more which can be used in cyber attacks, online scams, public opinion manipulation, and allegedly the creation of "undetectable malware and phishing campaigns".
WormGPT, on the other hand, is FraudGPT's sibling in the dark realms of AI. Developed as an unsanctioned counterpart to OpenAI's ChatGPT, WormGPT lacks ethical safeguards and can respond to queries related to hacking and other illegal activities. While its capabilities are limited compared to the latest AI models, it represents a stark example of the evolution of malicious Generative AI.
Posturing from the GPT Villains
The creators and purveyors of FraudGPT and WormGPT have not hesitated to advertise their wares. These AI-based tools are marketed as "cyber-attacker starter kits," providing a range of attack resources. They are offered for subscription with affordable pricing, furthering the accessibility of advanced tools for upstart cybercriminals.
Upon deeper investigation, the tools themselves don't appear to do much beyond what a cybercriminal could leverage from existing generative-AI tools with request query workarounds. The reason for this could be attributed to the older model architecture and the training data used. The creator of WormGPT claims to have built their model using a diverse range of data sources, with a focus on malware-related data. However, they have not disclosed the specific datasets used.
Similarly, the marketing surrounding FraudGPT does little to inspire confidence in the performance of the LLM. On dark web forums, the creator of FraudGPT describes it as cutting-edge, boasting that the LLM can create "undetectable malware" and identify websites vulnerable to credit card fraud. However, beyond the mention of it being a variant of GPT-3, the creator provides limited information about the architecture of the LLM and no evidence of an undetectable malware, leaving much to speculation.
How Malicious Actors Will Harness GPT Tools
The inevitable use of GPT-based tools like FraudGPT and WormGPT is still a real concern. These AI systems have the ability to generate convincing content, making them attractive for activities like crafting phishing emails, pressuring victims into fraudulent schemes, or even generating malware. Of course, there are security tools and countermeasures to address these new breeds of attacks, but the challenge continues to raise in difficulty.
Some potential use cases of Gen-AI tools for fraud include:
Enhanced Phishing Campaigns:
These tools can automate the creation of super personalized phishing emails (spear phishing), in multiple languages, increasing the chances of success. However, their effectiveness in evading detection by advanced email security tools and vigilant recipients remains questionable.
Accelerated Open Source Intelligence (OSINT) Gathering:
Attackers can expedite the research phase of their operations by using these tools to gather information about targets, such as personal details, preferences, and behaviors in addition to detailed company information.
Automated Malware Generation:
Generative AI has the concerning capability to generate malicious code, streamlining the creation of malware...even for individuals with limited technical expertise. However, while these tools can generate code, their output may still be basic and require additional steps for successful cyberattacks.
Weaponized Gen-AI Impact on the Threat Landscape
The emergence of FraudGPT, WormGPT, and similar malicious Generative AI tools undoubtedly raises concerns in the cybersecurity community. There is potential for more sophisticated phishing campaigns and a proliferation of generic malware. Cybercriminals may leverage these tools to lower the entry barrier into cybercrime, attracting individuals with limited technical expertise.
It's important not to succumb to alarmism in the face of these emerging threats. FraudGPT and WormGPT, while intriguing, aren't game-changers in the world of cybercrime, yet. Their limitations, lack of sophistication, and the fact that the most advanced AI models are not being harnessed in these tools make them far from invincible to more sophisticated AI-powered tools like IRONSCALES that can automatically detect AI-generated spear-phishing attacks. While the actual effectiveness of FraudGPT and WormGPT remains unverified, it is important to note that social engineering and targeted spear phishing have already proven to be successful tactics. These malicious AI tools, however, provide cybercriminals with greater accessibility and ease in crafting such phishing campaigns.
As these tools continue to evolve and gain popularity, organizations must brace themselves for an onslaught of highly targeted and personalized attacks on their employees.
Don't Fear Today, but Prepare for Tomorrow
The emergence of Generative AI fraud, as demonstrated by tools such as FraudGPT and WormGPT, is a cause for concern in the cybersecurity landscape. However, it is not surprising, and security solution providers have been diligently working to address this issue. While these tools present more difficult challenges, they are far from unbeatable, and the criminal underground is still in the early stages of their adoption while security vendors have been at this for a long time. There already exists AI-powered security tools, like IRONSCALES, to combat AI-generated email threats with great effectiveness.
To stay ahead of the evolving threat landscape, organizations should invest in an advanced email security solution that provides:
- Real-time advanced threat protection with specific capabilities to defend against social engineering attacks like BEC, impersonation, and invoice fraud.
- Automated spear-phishing simulation testing to empower end users with personalized employee training.
Additionally, it is important to remain informed about the developments in Generative AI and the tactics employed by malicious actors with these technologies. While there's no need for panic today, preparedness and vigilance are the keys to mitigating the potential risks posed by adoption of Generative AI for cybercrime.
Explore More Articles
Say goodbye to Phishing, BEC, and QR code attacks. Our Adaptive AI automatically learns and evolves to keep your employees safe from email attacks. | <urn:uuid:796450b0-c715-45a2-b2e3-4088db368ebd> | CC-MAIN-2024-38 | https://ironscales.com/blog/generative-ai-fraud-fraudgpt-wormgpt-and-beyond | 2024-09-12T19:41:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00569.warc.gz | en | 0.934228 | 1,438 | 2.609375 | 3 |
When you think of a TAP, what comes to mind? Perhaps you imagine it to be a water tap on a sink for getting water or maybe a Beer tap?
Well in our case, “TAP” is an acronym for “Traffic Access Point” or “Test Access Point” and is a hardware device inserted at a specific point in a network where data can be accessed for testing or troubleshooting purposes. They are mainly used to monitor the network traffic between two points in a network infrastructure.
A network TAP typically consists of four ports: a network port A and B and two monitoring ports A and B. The network ports collect traffic from the network. Network port A receives the Eastbound traffic and port B receives the Westbound traffic. The monitoring ports provide a copy of this traffic to an attached monitoring device. Monitor port A will copy the Eastbound traffic and monitor port B will copy the Westbound traffic.
Typically, a network TAP is placed between two points in the network. The network cable between points A and B is replaced with a pair of cables, which are then connected to the TAP. Traffic is passively routed through the TAP, without the network’s knowledge. This allows the TAP to make a copy of the traffic, which is sent out of the monitoring port to be used by another tool without changing the network traffic flow.
Why do I need a Network TAP?
There are many different methods for gaining access to your network. Some of the traditional methods used for gaining access to network traffic include using a SPAN/VACL port on your switch or connecting a monitoring device in-line on the network. There are challenges with both scenarios.
Using a SPAN port can often be the lowest cost solution, but using this method has many hazards. Often, when SPAN ports are over-subscribed, packets are dropped before data reaches the monitoring tool. There is also the risk of the losing some of the error packets that may be causing problems. If the data never reaches the monitoring tool because it is being dropped, it is impossible to effectively troubleshoot, no matter how advanced a tool you may be using “You can’t fix what you can’t see!”.
There are different problems when a tool is installed in-line. Especially when dealing with a critical network, it is essential that the network is available all the time because down time can be very costly. When a device is installed in-line, the network must be brought down every time updates are required or the tool needs to be re-booted. Similarly, if the monitoring tool fails, the network will go down as well.
These problems can be solved by using a TAP. When using a TAP, you will be guaranteed that every packet is being sent from the network to the monitoring tool. Because these devices are never over-subscribed, they always pass every packet including layer 1 and layer 2 errors.
Types of Network TAPs
There are several types of TAPs to choose from to achieve different functionality according to the structure and needs of your network.
Breakout TAPs are the simplest form of TAP. A Breakout TAP consists of four ports: two input ports and two output ports. The two input ports each collect traffic from the network; one collecting traffic traveling from point A to point B on the network, the other collecting traffic from point B to point A on the network. The Breakout TAP then sends a copy of this traffic out of the monitoring ports - the A to B traffic is passed out of monitor port C and the B to A traffic out to monitor port D.
Both monitoring ports are then connected to some form of monitoring device. This allows a copy of the traffic from a single network segment to be monitored and/or analyzed without disturbing the network.
Aggregating TAPs allow you to take the eastbound and westbound network traffic and aggregate it out to a single monitoring port. This will allow you to use just one monitoring port instead of two, to see your network traffic saving you ports on your monitoring appliances.
But if your troubleshooting requires that you see all of the traffic with Real Time correlation this may not be a good solution because the aggregating TAP interleaves or aggregates the 1G eastbound and 1G westbound traffic into a single 1G monitor port. This can cause the monitor port to be oversubscribed whenever the link gets too busy. Also, how the data is aggregated is not always correct, especially when there are bursts of data or long files, the timing can be significantly off.
For these and many other reasons the Breakout TAP is the best TAP to use for serious and accurate troubleshooting.
Or, an alternative would be to use a unique Aggregating TAP available from Profitap called the Profishark. The ProfiShark series provides flexibility without compromising performance, as it captures 100% of full-duplex 1G traffic passing through it.
ProfiShark comes in 1 G and 10G versions for deep capture and if added to Iota one can deep capture and visualization Remotely with total Security!
Combine the ProfiShark 1G with a Windows, Linux or macOS computer through its USB 3.0 port for a complete network analysis solution. The ProfiShark 1G captures packets of all sizes and types and offers a wide assortment of capture and configuration options, information, and statistics, through the ProfiShark Manager.
SPAN/Regeneration TAPs will permit you to take unidirectional traffic from one network segment and send it to multiple monitoring tools. This allows you to send a single traffic stream to a range of different monitoring tools, each serving a different purpose.
The SPAN port from Switches and Routers are also subject to losing some traffic because the switch or router will forgo servicing the SPAN ports whenever they get too busy.
Bypass TAP (also known as Inline TAP) allow you to place an active network tool "Virtually Inline". These TAPs are used where monitoring devices need to be placed in-line on the network to be effective but putting these devices inline will compromise the integrity of a critical network.
By placing a ‘Bypass TAP’ in its place and connecting the monitoring tool to the ‘Bypass TAP’, you can guarantee that the network link will continue to flow, and the in-line device will not become a ‘point of failure’.
"When the monitor appliance goes off-line for any reason, the heartbeat packets are no longer returned to the TAP causing the TAP to bypass the monitor appliance and keep the critical link running."
Author - George Bouchard - George is a Technology Writer and Evangelist for ProfiTAP, a worldwide leader in providing unique and the highest quality visibility and access solutions for Network Visibility and Testing.
“It All Starts with Visibility!”
George has been in associated with many network analysis and testing companies in his many years in the networking industry, Network General makers of the original network “Sniffer”, Netcom (now Spirent), NetIQ (now part of Micro Focus) and ClearSight.
The technology industry has always amazed me because the technology of my youth was the Monroe Calculator and the IBM Electric Typewriter (before Selectric) I am always in awe on how far the industry has advanced in my lifetime.
**Note from the Editor - I have known George for many decades and not only is he a super friend but an awesome and very experienced technologist and that is why he is writing the "Know Your Network" series. | <urn:uuid:382cac18-dae4-4bae-86be-29452d3bb5ca> | CC-MAIN-2024-38 | https://www.networkdatapedia.com/post/what-is-a-network-tap-and-why-do-we-care | 2024-09-14T01:42:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00469.warc.gz | en | 0.939981 | 1,609 | 3.390625 | 3 |
It’s 2020 and yes we’re long past the days back when we used to line up inside the vicinity of the school library to get to photocopy a couple of Encyclopedia pages, to use as a reference for our school projects.
With this generation has grown up with the privilege of having access to technology at their fingertips, the arena of education has massively revolutionized and overturned in this digitally driven world. Don't have an Encyclopedia? All you need is an Internet connection! Don't have a chalkboard? Interactive whiteboards are in! In the times of the COVID19 crisis, quarantined to your home and not able to physically attend classes? Enters online classes and tutorials!
Recommended blog - Things To Do In Quarantine
One such technology that has slowly paved its way inside the world of education is Artificial Intelligence. Who said this technology still belongs merely in the sphere of science fiction? Artificial intelligence is all set to sink its paws into any sector dealing with large portions of data, so why should Education get left out? While the Academic sector is still assumed to be a largely human sector, more human than most, yet that doesn’t really reduce the involvement of AI in this sphere. There is still a multitude of ways that teachers and educational staff can gain from employing this technology.
Slowly yet consistently seeping into this sphere AI has slowly begun making its place in the academic sphere, making it more accessible and one of the examples is personalized learning using ai. The technology has overturned the world of learning as educational materials become more accessible to all with the use of smart devices and computers while also automating all complicated administrative tasks, allowing faculties to invest more time in focusing on their students.
Applications of AI in Education
Although it's too early to expect our fantasies of getting to see humanoid robots serving the role of teachers anytime soon, there are a whole plethora of projects being introduced or worked on which are employing AI for aiding the educational faculty as well as the students to get the most out of the educational experience.
Recommended blog - RPA
Below are a handful of applications of AI which are helping in shaping and defining the educational experience of the present and the future.
1. Personalized Learning
Artificial Intelligence is being employed for personalizing learning for each student. With the employment of the hyper-personalization concept which is enabled through machine learning, the AI technology is incorporated to design a customized learning profile for each individual student and to tailor-make their training materials, taking into consideration the mode of learning preferred by the student, the student's ability and experience on an individual basis.
Teachers can break down their lessons into smaller study guides, smart notes or flashcards in order to help the student in comprehending. With AI assisting in generating digital content, learning is proposed to become more digital and less reliant on paperbacks and hard copies.
Recommended blog - AI applications
Various AI-powered apps and systems help the students in accessing instant and customized responses as well as in getting their doubts cleared from their teachers. AI is also playing a role in augmenting tutoring and designing personal conversational, education assistants who can offer them aid in education or assignment tasks. For instance smart tutoring systems such as Carnegie Learning offers instant feedback and works directly with students. These education assistants are also attempting to improve their feature of adaptive learning so that all the students are allowed to learn at their own pace and at their own suitable time.
2. Voice assistants are in
Yet another AI component being fruitfully employed by educators in learning is voice assistants. These include Amazon’s Alexa, Apple Siri, Microsoft Cortana, etc. These voice assistants allow the students to converse with educational materials without the involvement of the teacher. They can be employed in home and non-educational environments for facilitating interaction with educational material or to access any extra learning assistance.
Traditional methods of learning are slowly being discarded in the case of higher education environments, with various universities and colleges offering students voice assistants rather than the traditionally printed student handbooks or complicated websites for assistance with their campus-related informational needs. For instance, as per a podcast by Cognilytica, the Arizona State University is offering various incoming college freshmen of the institution an Amazon Alexa as an attempt to provide them much more regular and concise information regarding their campus needs.
The aim behind these voice assistants is to supply answers for all common questions regarding campus needs as well as for them to be customized for the particular schedule and courses of each student. This helps in reducing the requirement for internal support as well as cuts down the expense of printing college handbooks that are only temporarily used.
The employment of these voice assistant systems breaks the monotony and fetches an exciting prospect for the students. The employment of this technology is expected to escalate in the coming years.
3. Aiding educators in administrative tasks
Teachers don't just battle the work of education-oriented duties but are also handed the responsibility of handling the classroom environment and dealing with a variety of organizational tasks.
They are handed a variety of non-teaching duties which include evaluation of essays, exam papers grading, dealing with the necessary paperwork, handling HR and personnel-related issues, arranging and managing classroom materials, handling the duties relating to booking and managing field trips, interacting and responding with parents, aiding with interaction and issues relating to the second language, keeping track of sick or absentees, as well as providing a learning environment.
Since half the time of the educators is invested in non-educational activities, AI systems have been a crucial aid at dealing with back-office and task-related duties such as grading tasks as well as facilitating personalized responses for students. Alongside they can also deal with the routine and monotonous paperwork, matters related to logistics as well as personnel issues.
Recommended blog - AI in hype or reality
These AI systems can also arrange personal interactions with guardians and parents, facilitate feedback relating to routine issues as well as resource access thus offering more time to teachers to invest in students.
Alongside educators, the administrations have also been enjoying the AI benefits by employing intelligent assistants to aid in various complicated admin tasks such as budgeting, arranging student applications and enrollments, managing courses, HR-related topics, expenses as well as facilities.
The AI-powered systems add to the efficiency of educational institutes, reducing their operating costs and offer, and facilities management. Using intelligent AI-powered systems can greatly improve the efficiency of many educational institutions, lower their operating costs, give them greater visibility into income and expenses, and improve the overall responsiveness of the educational institutions.
In the case of higher education, AI-powered systems are being used to reduce human bias during the process of admission and enhance the credibility of the process since these systems use given specific criteria to select applications in admissions. Hence these systems have helped enhance oversight in the process of admissions.
4. Breaking barriers
Artificial intelligence tools and devices have been aiding in making global classrooms accessible to all irrespective of their language or disabilities. These programs are all-inclusive.
For instance, Presentation Translator is a free PowerPoint plug-in that develops subtitles in real-time of what the teacher is saying. This also helps aid the sick absentees as well as students requiring a different pace or level when it comes to learning or even in case they wish to understand a particular subject that is unavailable in their own school. Barriers are being torn down like never before.
5. Differentiated and individualized learning
Adjusting learning on the basis of the specific requirements of individual students has been the priority of educators for years, yet AI will enable a differentiation level, that is highly strenuous for teachers who have to handle 30 students in every class.
Many companies like Content Technologies and Carnegie Learning are presently developing intelligent instruction design and digital platforms which adopt AI for offering learning, testing, and feedback to students from pre-K to college level that offers them the challenges they are prepared for, detects knowledge gaps, and redirects to fresh topics whenever suitable.
6. Smart Content
Yet another way in which AI revolutionizes the education industry is by adding fresh approaches for students to achieve success. Smart content is a pretty popular term among educators, organizations, students, and educators as it makes learning more simple.
When we refer to smart content we actually imply varying types of virtual content that includes digitized guides of textbooks, video conferencing as well as video lectures.
The learning experience can now be enhanced by robots by developing customizable learning interfaces and digital content that is applicable to students of different grades, for elementary and post-secondary schools. The content becomes easy to grasp by separating it into coherent chunks, shedding light on integral lesson stuff, and also summarizing the main points.
Audio and video content can also be created. Through this, students can easily gain access to all important materials, grasp in a more swift manner and achieve their academic targets.
An example of one such platform is Netex Learning. The platform enables professors to develop, manage as well as update digital content in a single location. It also encourages students with a high-impact learning experience by enabling microlearning, skills mapping, as well as content recommendations
Not so far in the future, we can expect Artificial Intelligence and machine learning to occupy an integral place in all educational experiences. AI has started to prove its advantages and power in a wide range of educational areas, and it remains to be seen how the technology will empower and enhance overall learning outcomes for all. | <urn:uuid:a87564d7-0ded-47fc-8a17-d192aeb3fa54> | CC-MAIN-2024-38 | https://www.analyticssteps.com/blogs/4-major-applications-artificial-intelligence-education-sector | 2024-09-16T11:49:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00269.warc.gz | en | 0.964204 | 1,937 | 3.078125 | 3 |
In this guest blog, Jennifer Major, head of internet of things (IoT) at SAS UK & Ireland, discusses the obstacles and the opportunities involved creating diversity in analytics.
In the face of the constant soundbites we hear about the British tech sector’s “worrying” lack of diversity, what are the steps being taken to resolve this mounting problem?
Globally, the context and cultures groups find themselves in have restricted their potential to excel in tech. Although progress has been achieved towards equality and increasing diversity, it has not been a straightforward process. There is still an uphill battle to be won.
It has to be clarified that this is not about “political correctness”! Increasing diversity in tech makes business sense. The impact that these previously excluded groups can have on science and analytics needs to be realised and harnessed.
The doorway to diversity
Many of the obstacles facing excluded groups are cultural. Habitual bias, or the assumption that a certain race, gender or religion is less capable, can exist in many situations, whether explicitly or implicitly. This might be the case for educational institutions, employers and recruitment processes, or just the domestic environment in which people grow up.
To break down the barriers of exclusion, our societies, institutions and decision-makers need to be made more representative. Diversity isn’t only important to a single race, religion, gender or group – depending on the context, the barriers of bias can limit anyone’s potential.
Educating earlier is the way to encourage greater diversity in science, technology, engineer and maths (Stem) subjects. By focusing on young people as a group and ensuring they have a level playing field on which to start their journey, we stand the best chance of increasing diversity in the field as a whole by helping to build a meritocratic industry.
Equality of educational opportunity is not a new idea, but the need for it is real and pressing. Indeed, in some advanced economies we seem to be moving backwards. In the US, for example, the number of computer science bachelors’ degrees awarded to women peaked in the 1980s and has been dropping ever since.
At the same time, however, there are causes for celebration. Rapid progress has been achieved in BRIC countries where, despite challenges, large numbers of women are eschewing arts subjects for degrees and careers in technology. In China, Brazil and Mexico, women make up 36%, 38% and 45% of the total IT workforce respectively. This is compared to only 17% in the UK.
To make further progress at home, we need to address perception problems towards Stem subjects. Old stereotypes of the sciences being the preserve of middle-class white men or awkward social rejects are being undermined every day. Yet those ideas still shape public thought. If children are repeatedly given perceptions that Stem is for a certain or special kind of person, it can reduce the chances they will pursue careers in maths or science.
We are often told that Stem skills can be the foundation of a lucrative career, but is that the most effective message to get children’s attention? In the end, we need to do more to convince future generations that Stem can be fun, fulfilling and useful. Maths and science aren’t all about dry formulae: they are practical and powerful, driving amazing innovation everywhere.
Analytics for everyone
Success in data science depends on a wide range of complementary skills. As data and analytics move closer to the top of the corporate agenda, the ability to communicate simply and inclusively will be crucial.
The real power of analytics is that it democratises data for everyone, from the boardroom to the factory floor and everyone in between. That means that the demand for more skilled staff in analytics is increasing day by day. Despite the rise of Artificial Intelligence, the crucial role human analytics teams play in extracting valuable insights from raw data cannot be ignored.
Businesses need to ensure they are recruiting as widely and with as much diversity as possible to ensure they get the most skilled people for the job. | <urn:uuid:8bfd09b0-e023-4891-8be5-4e97c9334de5> | CC-MAIN-2024-38 | https://www.computerweekly.com/blog/WITsend/Diversity-in-analytics-how-to-overcome-the-obstacles-and-optimise-the-opportunities | 2024-09-17T19:54:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00169.warc.gz | en | 0.955795 | 837 | 2.53125 | 3 |
Superfast broadband is increasingly recognized as a vital enabler of future economic and social development. Full fiber and 5G could provide an economic uptick of $1.3 trillion, forming the connecting thread for the Internet of Things (IoT), interlinking everything from smart healthcare services to Industry 4.0 assembly lines. The accompanying value chain could provide some 22.8 million jobs and $3.8 trillion in economic output by 2035.
Yet the US currently lags behind international competitors in implementing this vital enabling technology. For example, while Nokia started producing 5G equipment in India in 2008, Ericsson only delivered its first 5G base stations in Texas in 2020. 14% of all US children aged between 3-18 are without home Internet access, and over a third of those in rural areas are denied access to quality broadband.
The US is enacting several measures including financial support to accelerate implementation by diversifying the telecoms market. The Biden administration recently put superfast broadband at the heart of its post-COVID economic rebuild and pledged $100 billion of a $2 trillion infrastructure plan to improve broadband infrastructure, with the aim of providing high-speed broadband connectivity for all citizens by 2029. The FCC has removed regulatory barriers to deploying 5G base stations in cities. This is creating a highly competitive US telecoms marketplace reminiscent of the 1990s era where the industry included a diverse array of players such as Alcatel, Nortel, Lucent, Siemens and Motorola.
It is vital to have a diverse market of start-up providers, Wireless Internet Service Providers (WISPs), electric co-ops, and public and private organizations willing to share spare bandwidth to augment the efforts of large operators to achieve nationwide coverage and reach areas that are otherwise commercially unviable. A telecoms market that is both competitive and cohesive is needed to achieve the ambition of nationwide coverage. This requires collaboration and sharing of dark fiber, data, and even infrastructure across organizations and sectors to avoid unnecessary new infrastructure. It also requires smarter sharing of data on everything from local human settlements and services to meteorological and ecological hazards across each area to help plan the most cost-effective routes to the widest market with new infrastructure.
The Data Divide
The challenge is that many telecommunication operators, fiber network providers, and other commercial and governmental organizations, are not capturing, integrating, or opening up their rich network data resources across departments and across the sector. They are struggling to integrate internal information with data sources outside their organization that could help plan new infrastructure.
Crucially, they rarely link data with location to reveal new pathways to wider coverage and identify the site and source of hazards to predictively maintain infrastructure and reduce the cost and risk of new deployments. Workflows such as engineering and construction and important datasets such as as-builts and asset registries are fragmented in different formats, proprietary apps, or on spreadsheets and paper maps.
Many smaller operators are not documenting their network in a digital System of Record (SoR) and what data does exist is siloed and inaccessible to key stakeholders across the business. Network data is rarely held in open, mobile-friendly formats which can accommodate a remote workforce or BYOD practices and thus excludes vital input from workers and contractors on the ground. All too often data is also inaccessible to other departments or incompatible with other data sources such as local meteorological, social, or economic data.
The absence of effective, integrated, and remotely available systems of record across many providers creates major organizational blind spots and leaves sales, marketing, executive and NOC teams in the dark about their own assets.
Offices are deprived of vital information such as:
- the number of dark strands in every cable,
- the amount of unused glass that is stranded or unallocated,
- the cost of serving new prospects or building-out a region,
- the most critical customers and circuits that serve them,
- and the percentage of dark fiber available for new customers or to lease to others.
Other information they struggle to access includes:
- how many miles of fiber they have in each region,
- whether a particular building is on net,
- which enclosures have reached capacity and cannot be considered for augments, and
- what alternative circuits could restore service to areas with outages and even the exact location of their infrastructure in the field.
- Many fiber network operators rely on free tools such as Google Earth and spreadsheets which are disconnected, lack mobile tools, and are simply not capable of interrogating networks in this kind of depth.
- Network data is also often collected in a form that is not amenable to integration with external datasets.
This data disconnect can hamper efforts to diversify the market as operators and other commercial organizations are missing opportunities to sell data or lease bandwidth to smaller or rural providers. Likewise, data blind spots mean smaller providers cannot easily identify cost-effective routes to expand or improve existing networks. Without a comprehensive overview of their own assets, they cannot easily integrate external datasets such as maps of existing pole ducts or nearby hazards to guide expansion.
There is an urgent necessity for adoption of optimized systems of record technology that can be shared across players in the industry to help smaller operators fill gaps in superfast broadband provisioning and ensure that every inch of unused cable is fully exploited to achieve the widest possible coverage.
As networks expand and grow to be more complex, having an accurate, live, and location-based view of existing assets is increasingly vital to planning efficient new infrastructure investments. A precise overview of where existing networks serve nearby infrastructure can help identify gaps in provisioning and opportunities to reach lucrative new customers or critical services, avoid signal interference from man-made structures or harness rooftops to help signals reach wider areas.
Some smaller organizations are now disrupting the market by adopting modern SoR solutions with a more open and decentralized approach to network data. This involves the decentralized digital capture and consolidation of data from all assets in the field and the use of open APIs to integrate with information from external sources. It involves mobile-friendly geospatial systems capable of drawing on information from a diverse array of sources from sensors to smartphones.
For example, a major US university is using advanced mobile-friendly geospatial systems to plan the most cost-effective fiber pathways to deliver coverage across its satellite campuses and support their local communities by sharing unused fiber capacity. This offers a small glimpse of what can be achieved on a national scale if companies harnessed smart data to identify more cost-effective ways to extend coverage or opportunities to lease out broadband capacity to others.
Another smaller provider in Texas is now using a purpose-built fiber system of record to consolidate location-based data from all network assets to fuel smarter planning, design, and construction, of new fiber networks.
To realize the vision of nationwide and global superfast broadband, the fiber network industry needs to dissolve data silos and embrace open and digital network information. In the future, this enables digital network twins, multi-layered geospatial information systems, and machine learning algorithms capable of everything from predictive maintenance to planning of new provisioning.
Resources and Notes
For more information, visit https://www.iqgeo.com/contact-us, and follow us on Twitter @IQGeo_software. Follow Wade on Twitter @waderyan. You can also follow us on LinkedIn @IQGeo, and on YouTube @IQGeo. | <urn:uuid:28bfb052-7a7a-4c90-bd1a-7e703f693194> | CC-MAIN-2024-38 | https://www.isemag.com/5g-6g-and-fixed-wireless-access-mobile-evolution/article/14266398/the-data-divide | 2024-09-20T04:59:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00869.warc.gz | en | 0.929437 | 1,514 | 2.703125 | 3 |
How did we arrive at this point where a wireless connection, invisible to the naked eye, has become an intrinsic part of our daily lives?
The journey began in the late 90s when technology firms started exploiting the newly available radio spectrum. But without a unified wireless standard, the movement was fragmented. Imagine a world where devices from different manufacturers couldn't connect with each other. Sounds like a tech nightmare.
Enter the Institute of Electrical and Electronics Engineers (IEEE). In 1997, they approved a common standard called 802.11, which served as the foundation for wireless connectivity as we know it today. Two years later, major companies formed the Wireless Ethernet Compatibility Alliance (WECA), now known as the Wi-Fi Alliance, to promote this new wireless standard.
But what's in a name? Well, quite a lot, as it turns out. A marketing firm coined the term 'Wi-Fi', chosen for its catchy sound and similarity to 'hi-fi’.
But Wi-Fi isn't just about a catchy name. It's about constant evolution to meet our ever-growing needs. From the original 802.11 standard that allowed a maximum data transmission rate of 2 Mbps, we've come a long way to the 802.11ax or Wi-Fi 6, introduced in 2019, boasting a maximum theoretical rate of 9.6 Gbps. That's a leap of almost 5,000 times!
The point is, business would look very different today without Wi-Fi. If your business could do with a Wi-Fi boost, let us help. Book a 15 minute call on my live calendar. | <urn:uuid:15268877-ed37-4f4d-a36b-2338bae157c4> | CC-MAIN-2024-38 | https://www.infostream.cc/2023/11/have-you-ever-stopped-to-consider-the-miracle-that-is-wi-fi/ | 2024-09-08T04:29:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00169.warc.gz | en | 0.948846 | 329 | 2.640625 | 3 |
Cyber threat actors use darknet forums to find and participate in "botnet opportunities" which may be both for hacking purposes or for investments in cryptocurrency silent mining. The Dark Web has grown to be an active stage for botnet discussion and commerce, rendering botnet-based cyber-attacks more likely.
Once a cyber threat actor takes control of a computer by using a trojan or another kind of malicious program, full access to the computer is gained and the actor is free to use that access for DDoS attacks, sending spam emails for phishing attacks, for spreading malware, for generating traffic on a website, and for other kinds of attacks. In essence, hackers can use botnets just like weapons.
Darknets provide the hacker with a platform through which an army of botnets can be recruited. Some cyber-attacks consume a massive number of botnets and require a longer preparation period with more intensive efforts on the part of the hacker. For example, in order to generate a successful DDoS attack against a large corporation's DNS server, the hacker would have to recruit a huge number of botnets that would repeatedly send queries until the server crashes. The hacker may be able to reduce some of the preparation burden by purchasing some of the botnets on a Dark Web forum.
With an increasing awareness of the vulnerability of devices in the IoT era, cyber threat actors will likely find the use of botnets more and more attractive. Searching for the right opportunities on the Dark Web, they will no doubt find willing partners for botnet-based cyber-attacks and botnets for sale on demand. | <urn:uuid:14e353da-58dd-491d-ab49-c7a9af894516> | CC-MAIN-2024-38 | https://cybersixgill.com/news/articles/the-dark-web-platform-botnet-commerce | 2024-09-09T07:49:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00069.warc.gz | en | 0.933627 | 323 | 2.65625 | 3 |
Passive optical components market is propelled by the accelerating bandwidth requirements coupled with the growth of passive optical network (PON). Usage of passive optical components to obtain energy efficient network solutions is gaining popularity. This article will introduce some Fiberstore passive optical components.
Optical Attenuators: an optical attenuator is a device that is used to reduce the power level of an optical signal. Optical attenuators are commonly used in fiber optic communications, either to test power level margins by temporarily adding a calibrated amount of signal loss, or installed permanently to properly match transmitter and receiver levels.
Optical Circulator: an optical circulator is a multi-port (minimum three ports) non-reciprocal passive component. The function of an optical circulator is similar to that of a microwave circulator — to transmit a light wave from one port to the next sequential port with a maximum intensity, but at the same time to block any light transmission from one port to the previous port.
Fiber Collimator: a fiber collimator is a device for collimating the light coming from a fiber, or for launching collimated light into the fiber. It is used to expand and collimate the output light at the fiber end, or to couple light beams between two fibers. Both single-mode fiber collimators and multimode fiber collimators are available.
Optical Isolator: an optical isolator is a passive optical component that allows light to propagate in only one direction. Optical isolators are typically used to protect light sources from back reflections or signals that can cause instabilities and damage. The operation of optical isolators depends on the Faraday effect, which is used in the main component, the Faraday rotator.
Fiber Optic Sensor: a fiber optic sensor is a sensor that uses optical fiber either as the sensing element (intrinsic sensors), or as a means of relaying signals from a remote sensor to the electronics that process the signals (extrinsic sensors). Fiber optic sensors are immune to electromagnetic interference, and do not conduct electricity so they can be used in places where there is high voltage electricity or flammable material such as jet fuel.
Pump Combiner: a pump combiner is a passive optical component built based on fused biconical taper (FBT) technique. Pump combiners are widely used in fiber laser, fiber amplifier, high power EDFA, biomedical and sensor system etc. Three types of pump combiners are available: Nx1 Multimode Pump Combiner, (N+1)x1 Multimode Pump and Signal Combiner, PM(N+1)x1 PM Pump and Signal Combiner.
Polarization Components: polarization is the state of the e-vector orientation. Polarization components are used to isolate and transmit a single state of polarized light while absorbing, reflecting, and deviating light with the orthogonal state of polarization. Polarization components can be utilized in high power optical amplifiers and optical transmission system, test and measurement.
Fiberstore has all of the above passive optical components with high quality and reasonable price. You can select excellent passive optical components or other optical products for your network at www.fs.com. | <urn:uuid:1a2a4612-b442-4ffb-83d7-5731709c3e75> | CC-MAIN-2024-38 | https://www.fiber-optic-components.com/tag/pump-combiner | 2024-09-12T22:25:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00669.warc.gz | en | 0.891228 | 661 | 3.359375 | 3 |
ColdQuanta Previews its Cold Atom Quantum Computer Technology Built on its Quantum Core™.
(YahooFinance) ColdQuanta, the quantum atomics company, previewed its quantum computer technology, built on its Quantum Core™.
The ColdQuanta quantum computer is built around a unique glass cell that maintains a vacuum and houses a checkerboard-like array of cesium atoms, each of which acts as an individual qubit. Lasers and other photonic technologies cool the atoms to ten millionths of a degree above absolute zero, then initialize the qubits and orchestrate computations. The final state of the qubit array is photographed and analyzed.
In developing the technology for a quantum computing platform, ColdQuanta is leveraging its deep expertise and more than a decade of experience delivering quantum products and systems. In April of this year, the Defense Advanced Research Projects Agency (DARPA) selected ColdQuanta to develop a scalable, cold-atom-based quantum computing hardware and software platform that can demonstrate quantum advantage on real-world problems. The work is being led by ColdQuanta Chief Scientist Mark Saffman. In October, ColdQuanta recently announced cloud access to a quantum matter system that lets users generate, manipulate, and experiment with ultracold matter.
“ColdQuanta has successfully developed and deployed many kinds of quantum systems, all based on our Quantum Core platform,” said CEO Bo Ewald. “This means that most of the technology needed for cold atom quantum computing has already been validated by customers. This gives us a significant advantage in the race to deliver a quantum computer that can address some of the most complex computing challenges we face today.”
ColdQuanta’s approach has significant advantages over other implementations:
- The qubits are all atoms of the same element and are identical, so there are no manufacturing defects.
- The qubits are cooled to ten millionths of a degree above absolute zero, which is much colder than other technologies. Quantum effects typically operate better and longer at colder temperature. This combination allows for longer and more complex computations.
- Two dimensional cold atom arrays scale from tens to thousands of qubits, enabling bigger computations to address real-world problems. The DARPA ONISQ program, awarded to ColdQuanta, calls for a demonstration of a system with over 1000 qubits running a Department of Defense application.
- Gates can entangle distant qubits, allowing larger logical circuits on the same qubit array. This should allow more computational work to be accomplished per unit time with more advanced qubit connectivity.
- Advanced vacuum cell technology does away with the need for cryogenics.
- The computational platform is dynamically reconfigurable, which shortens the development cycle and leads to quicker system improvement. | <urn:uuid:f910bba6-e8c3-40cf-a636-ff8b79031f8c> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/coldquanta-previews-its-cold-atom-quantum-computer-technology/ | 2024-09-12T21:51:14Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00669.warc.gz | en | 0.922648 | 574 | 2.53125 | 3 |
That’s a mouthful, but it can be broken down piece by piece.
HIPAA Stands for Portability:
Health insurance portability is an employee’s legal right to maintain group health plan coverage when switching employers or leaving the workforce.
HIPAA Stands for Continuity:
Health insurance continuity is an employee’s legal right to obtain coverage, ensuring continuity of care, even if the employee has a pre-existing medical condition. Title I of HIPAA requires group health plans to enroll qualified individuals with pre-existing conditions. Title I of HIPAA also limits the restrictions that a group health plan may place on benefits for preexisting conditions. A preexisting condition is a health problem an individual seeking to enroll in a group healthcare plan had, before the date the new coverage begins. Under Title I of HIPAA, a group health plan may only refuse to provide benefits that relate to preexisting conditions for 12 months after enrollment or 18 months after for late enrollment.
Title I of HIPAA also allows individuals to limit the “preexisting condition” exclusion period. Group health plans must take into account how long an individual was covered before enrolling in the new plan after any periods of a break in coverage.
Title I of HIPAA applies the concept of “creditable coverage” to preexisting exclusion periods. Creditable coverage is previous health coverage that meets certain requirements. This coverage can be applied, as a day-for-day credit, against the application of a preexisting condition exclusion period when an individual moves from one group health plan to another. The law requires a group health plan to apply day-for-day creditable coverage credit to reduce the preexisting condition exclusion, as long as the enrollee has not experienced a break in creditable coverage that amounts to less than 63 days.
HIPAA Stands for… Simplification of the Administration of Health Insurance:
The long title of HIPAA states that HIPAA was passed “to simplify the administration of health insurance.”
The HIPAA Administrative Simplification Rules were implemented to achieve this purpose. These rules establish nationwide standards for electronic transactions and code sets to maintain the privacy and security of protected health information (PHI). The standards are often referred to as electronic data interchange or EDI standards.
The HIPAA regulations – specifically those found in 45 C.F.R. § 162 (Title 45, Section 162, of the Code of Federal Regulations) simplify the administration of health insurance by streamlining the paperwork associated with billing, verifying patient eligibility, and payment transactions. | <urn:uuid:2858e47d-d716-4760-ac73-ddb5580c8fa3> | CC-MAIN-2024-38 | https://compliancy-group.com/hipaa-stands-for/ | 2024-09-14T03:57:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00569.warc.gz | en | 0.937705 | 539 | 2.5625 | 3 |
Cybersecurity is the process of implementing technologies, procedures, and controls to provide protection for systems, networks, devices and programs, and data from cyber-attacks. Security operation usually requires physical work by humans. These tasks when automated, can increase the rate of accuracy and save time. Automation allows faster analysis, detection, and solutions in situations of threat, with or without human intervention. Automation in cybersecurity will definitely replace many roles or their aspects in the coming years.
Cybersecurity is of extreme importance because it protects any kind of data from theft and damage. It is nearly impossible for companies to function without cybersecurity because protecting sensitive information from breach campaigns is a necessity. It takes ceaseless efforts in keeping data systems from getting breached because attackers keep changing and resorting to new methods powered by social engineering and artificial intelligence to circumvent traditional data security controls
The majority of such third-party enterprises lead to complications because usually, they lack digitalization and a multidimensional approach. Companies have a hard time looking for service providers with sound knowledge regarding technology and efficiency rates on logistical, advisory, and administrative solutions.
This is where entities like HelpSystems come into play. It is a people-first software company that helps exceptional organizations build a better IT™. Its security and automation software simplifies complex IT processes, to ease the mind of the customer and give them a comforting experience. It provides enough resources to fulfill the various responsibilities that rest on companies in charge of cybersecurity. Helpsystems, with the help of intelligent automation, has devised ways in which one can do a lot more, using lesser resources. With traditional automation techniques, it has enabled new levels of productivity by empowering a small group that is capable of doing the work of many with decreased number of errors and reduced costs. Implementing traditional automation with AI and machine learning unlocks its complete potential, helping its users to emerge more accurately with each passing day because it ensures continuous self-learning. As a result, the systems, programs, and devices that need to be protected, also experience a consistent development in productivity.
In a discussion about the expectations that rest on companies such as HelpSystems, the CEO Kate Bolseth said, “Our global customers trust us to provide them with powerful, reliable security software to protect their data and infrastructure from malicious adversaries. Beyond security meets strong demand from overburdened IT and security professionals whose hybrid environments grow more complex every day. We’re pleased to help customers get control of this and are delighted to welcome the team of vulnerability experts get control of this and are delighted to welcome the team of vulnerability experts and their well-known solutions to HelpSystems.”
When it comes to the dealing of data and batch processes, businesses everywhere are hitting the bottoms of their barrels. Automation has grown to emerge as the key to success. HelpSystems is undoubtedly present for its customers across all processes of automation, irrespective of the operating system. Through online and in-person training and services, HelpSystem’s team of product consultants will work with you to attain the best possible solution for your individual technology-related frustrations. Whether you are looking for strategies, best practices, or industry news, the seasoned experts at HelpSystems have created hundreds of helpful resources that are categorized in a manner that allows customers to avail them easily.
HelpSystems is on a journey to help organizations over the world build a better and secure IT™. It has succeeded in assigning consistent successful teams that strive for attainability, and function on accuracy. It aims at making a strong suite of most effective, and time-sensitive solutions that make security simpler and stronger. | <urn:uuid:c4bb8cc9-89ab-46d7-94fc-f17b6e19f672> | CC-MAIN-2024-38 | https://www.ciocoverage.com/helpsystems-cybersecurity-automation-software-solutions/ | 2024-09-14T03:54:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00569.warc.gz | en | 0.944624 | 741 | 2.9375 | 3 |
Companies handle large volumes of information and in recent years, thanks to Big Data, many of them have understood the value of this data. However, organizations are not the only ones that understand how valuable the information they collect is. Cybercriminals also know this and, as a result, set in motion different threats that put the stability of companies at risk.
The virtual world is full of dangers, from phishing attacks and scams that can reach through spam mail or social engineering techniques, to ransomware attacks and other methods of information theft. In 2018, for example, the number of cyberattacks broke records with more than 10.5 billion incidents worldwide.
In the face of these alarming figures, it is vital that companies take urgent action. However, it is important to note that a company’s security must involve each one of its ports of access. This means that organizations must have integrated security, that is, implement a system of protection measures both digital and physical. Key to this is the awareness and training of staff in cybersecurity.
Steps to implement an integrated security system for companies
When we speak of integrated enterprise security, we mean a global and active system characterized by the establishment of maximum levels of protection. In order to achieve this objective, it is necessary to set up a management system to prevent and control the risks to which companies are exposed.
In this sense, it is essential that your organization implements a series of actions that will lead it to protect the information of third parties who want to make malicious use of it. These are the basic steps to follow to establish integrated security in your company.
#1 Define strong security policies
Before establishing any integrated security system for a company, it is essential to analyze the degree of compliance with the company’s risk prevention practices. In other words, what security policies and procedures are being applied and how personnel complies with them.
Once the status of the company is known and its needs with their strengths and weaknesses are detected, security policies adapted to these needs must be defined. These are divided into three categories: prevention, detection and recovery.
The first, prevention, involves taking measures such as creating control of access to the company and its information, as well as an identification and authentication system and establishing a communications security system. These actions will help prevent the risks of data theft or loss. Detection is used to discover if there are violations or attempted violations of system security. And recovery is a procedure that is applied in the event that a system breach has actually been detected.
#2 Create a cybersecurity culture
It is useless to create an integrated security system in the company if users do not comply with established internal policies and procedures. People are usually the weakest element of the security chain in companies. This is so because humans tend to look for shortcuts, can be misled with tricks, and also often act unprepared.
For this reason, it is essential to create a culture of cybersecurity in the company. Employees not only need to know the practices that are essential to keeping data and equipment safe, but they also need to understand the importance of complying with them.
To this end, organizations must conduct workshops and training sessions so that users know the basic concepts of computer security and the actions to protect equipment, applications, and software, physical files, among others. Likewise, constant awareness-raising actions must be implemented to inform about the great importance of each of the users in the company’s security chain, from directors and managers to the work team.
#3 Establish physical security mechanisms
Once you have completed the two previous steps, the organization is now ready to establish physical security mechanisms. These are procedures to control threats to the physical space where the company is located, such as natural disasters, acts of vandalism, theft, or sabotage.
In order to avoid these risks, you can establish different measures such as the implementation of interior access controls that include access control to the enclosure, alarm systems, and CCTV or user authentication. It is also important to restrict access to some spaces with sensitive information, such as the Data Center or areas where physical files are stored.
#4 Defines logical security mechanisms
Once physical security is in place, you will need to create protection mechanisms for your computer systems. This implies the creation of procedures and configurations that allow the protection of the access to data and information of the company. In this way, you prevent it from being used without authorization, either by divulging it, altering it or even deleting it.
Logical security mechanisms are applied in several areas: in the company’s network and infrastructure, in the workplace and, of course, in mobile devices. Among the actions that can be applied in these fields are:
- Data encryption.
- The implementation of new generation firewalls.
- Periodic backups.
- Control of remote access to company data.
- The establishment of password policies or the implementation of multifactor authentication systems.
Among many others.
#5 Monitor the security system
The above measures will serve to establish the integrated security of the company. But in order for them to fulfill their function, it is important to monitor them. It is essential to have a Security Operations Center (SOC), since this is responsible for identifying, prioritizing, and solving problems that could affect the security of critical data of an organization, as well as the infrastructure.
An excellent alternative is USM Anywhere AT&T Cybersecurity, which offers a smarter SOC solution as it enables unified security management. It’s software that brings together all the essential security capabilities your business needs to monitor its local, cloud or hybrid environments and discover critical assets, all from a single, easy-to-manage console.
USM Anywhere AT&T Cybersecurity differs from other similar systems for the following reasons:
- It offers unified and coordinated security monitoring.
- It actively scans critical assets for vulnerabilities that cybercriminals can exploit and prioritizes the most severe vulnerabilities.
- Presents friendly reports for a better understanding of the risks.
- Eliminates security blind spots by correlating events across all devices, applications, and servers.
Enterprise-integrated security delivers increased profitability and business productivity. That’s why solutions like AT&T Cybersecurity are ideal for protecting sensitive information. To learn more about this and other software for your infrastructure, GB Advisors has specialists dedicated to providing you with the information you need. Contact us now. | <urn:uuid:eaf3cc38-fb1f-44c0-9703-307b4cb3dcf5> | CC-MAIN-2024-38 | https://www.gb-advisors.com/business-integrated-security-guide/ | 2024-09-14T04:05:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00569.warc.gz | en | 0.955282 | 1,311 | 2.71875 | 3 |
Exploits and Vulnerabilities
Vulnerabilities – within an operating system (OS) or an application – can result from:
- Program errors
Whereby an error in the program code may allow a computer virus to access the device and take control - Intended features
Legitimate, documented ways in which applications are allowed to access the system
If vulnerabilities are known to exist in an operating system or an application – whether those vulnerabilities are intended or not – the software will be open to attack by malicious programs.
Eliminating System Vulnerability
Of course, it’s possible to design an OS in a way that prevents new or unknown applications from gaining reasonably broad or complete access to files stored on the disk – or getting access to other applications running on the device. In effect, this type of restriction can boost security by blocking all malicious activity. However, this approach will also impose significant restrictions on legitimate applications – and that can be very undesirable.
Closed and partly-closed systems
Here are some examples of closed and partly-closed systems:
- Closed systems on mobile phones
The operating systems in many basic mobile phones – as opposed to smartphones and phones that support the use of third-party, Java-based applications – provide an example of widely-used, protected systems. These devices were not generally prone to virus attacks. However, new applications could not be installed – so, the devices were severely limited in their functionality. - Java virtual machine
The Java machine partly satisfies the ‘closed’ protection condition. The machine runs Java applications in the ‘sandbox mode’ – which strictly controls all potentially hazardous actions that an application may try to execute. For quite a long period, no ‘real’ viruses or Trojans – in the form of Java applications – have occurred. The only exceptions have been ‘test viruses’ that were not particularly viable in real life. Malicious Java applications generally only occur after the discovery of methods that bypass the security system that’s embedded into the Java machine. - The Binary Runtime Environment for Wireless Mobile Platform (BREW MP)
The BREW platform is another example of an environment that is closed for viruses. Mobile phones that run this platform will only allow the installation of certified applications with crypto signatures. Although detailed documentation is published – to help third-party software producers to develop applications – the certified applications are only available from the mobile service providers. Because each application has to be certified, this can slow down software development and delay the commercial release of new applications. As a result, the BREW platform is not as widely used as some other platforms – and some other alternatives offer a much wider selection of applications for users to choose from.
Are ‘closed’ systems practical for desktops and laptops?
If desktop operating systems, such as Windows or MacOS, were based on the principle of the ‘closed system’, it would be much more difficult – and maybe impossible in some cases – for independent companies to develop the wide range of third-party applications that consumers and businesses have come to rely on. In addition, the range of available web services would also be much smaller.
The Internet – and the world in general – would be a very different place:
- A lot of business processes would be slower and less efficient.
- Consumers would not benefit from the rich ‘customer experience’ and dynamic Internet services that they’ve come to expect.
Guarding against the risks
To some extent, the risks that system vulnerability and malware bring may be the price we have to pay for living in a world where technology helps us to achieve our work and leisure objectives… more rapidly and more conveniently. However, choosing a rigorous antivirus solution can help to ensure you can enjoy technology’s benefits – in safety.
Other factors that help malware to thrive
To discover the other factors that enable malware to thrive and survive, please click the following links: | <urn:uuid:0ed827e1-dc8b-478f-80c1-90bcd0c65025> | CC-MAIN-2024-38 | https://www.kaspersky.com/resource-center/threats/malware-system-vulnerability | 2024-09-19T04:06:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00169.warc.gz | en | 0.950468 | 810 | 3.75 | 4 |
5 Digital Health Care Advances
Now that digital health has been a buzzword of sorts for the past decade and a half, we look at overlooked digital health care advances.
September 23, 2019
Like many other umbrella technological concepts, digital health has no clear definition, but its promises are vast. Emerging as a buzzword in the mid-2000s, digital health was a broader and more-advanced-seeming successor to prior terms such as mHealth and wireless health.
The extent to which the health care and health-tracking landscape has evolved in the past 15 years is difficult to answer, but, broadly speaking, it’s perhaps fair to say the health care ecosystem hasn’t yet seen a revolutionary level of change.
Still, it would be a shame to overlook the wave of innovative developments that highlight the potential of technologies like IoT to drive breakthroughs in the field — both in terms of patient care and beyond. Here, we highlight several such advances, including digital technologies to help facilitate quick emergency response times.
An Autism Wristband That Predicts Aggressive Meltdowns
Researchers at Northeastern University developed a wristband that probes physiological data to predict autistic episodes 60 seconds in advance with 84% accuracy. The scientists were able to achieve that outcome by measuring variables such as heart rate, skin surface temperature, perspiration and movement via a wristband. Rather than create a bespoke wristband, the researchers used an off-the-shelf E4 device from a company known as Empatica in a trial involving 20 youth.
Empatica’s E4 wristband.
The E4 includes a photoplethysmography sensor, a 3-axis accelerometer, electrodermal activity sensor, infrared thermopile and an internal clock that can measure time increments as small as 0.2 seconds. While the prediction of an outburst a minute in advance may seem like a relatively small amount of time, it could be enough to enable a caregiver to help calm the patient. “If we could give caregivers advance notice, it would prevent them from getting caught off guard and potentially allow them to relax the individual and make sure everyone in the environment is safe,” said Matthew Goodwin, a Northeastern professor in a statement.
The group experimented with a variety of wearable technologies in the research, but “but ultimately selected the E4 because it includes multiple autonomic [variables],” Goodwin said via email. In addition, the E4 hardware also had the benefit of logging physical activity via accelerometry and made raw data accessible. It was “not pre-processed with a black box algorithm as is the case with most consumer wearables,” Goodwin said. In addition, the system enabled streaming data in real time, which supports cloud-based analytics and IoT integration. Furthermore, E4 has further advantages, according to Goodwin. The system “has been validated in scientific publications, has a relatively discrete form factor, and is packaged in a waterproof and shockproof housing that makes it highly durable. Additionally, the sensor builds off a predicate device that I was involved in developing.” For disclosure, Goodwin is a scientific advisor to Empatica.
Using Drones for Medical Samples (and Emergencies)
The idea of using consumer drones to respond to medical emergencies is not new. In 2014, a Dutch engineer created a drone carrying a defibrillator payload. According to researchers at Delft University of Technology, the technology could boost the chances of survival for a patient suffering from cardiac arrest from 8% to 80%.
Perhaps unsurprisingly, researchers have since demonstrated that drones can respond to simulated emergencies substantially faster than traditional ambulances. In a recent experiment, researchers using the DJI Phantom 3 Professional drone competed against ambulances in the Iraqi city or Erbil. A drone equipped with first-aid supplies reached patients in an average of 90 seconds — a full 120 seconds faster than the ambulance.
While a variety of medical drone-related projects remain in development, a few are being commercialized. Earlier this year, for instance, UPS and urban delivery firm Matternet teamed up to use drones to deliver medical samples at WakeMed, a North Carolina health care system. The samples in question include blood and tissue collected during procedures at WakeMed. FAA sanctioned the project.
A Wearable Sweat Tracker
Those two words have become commonplace in the United States in recent decades, leading belated comedian George Carlin to ask in the mid-1990s: “When did we get so thirsty in America? Is everybody so dehydrated they have to have their own portable supply of fluids with them at all times?”
Yet the risks of dehydration are real. Side effects from drinking insufficient water include increased risk of injury to athletes, stress to the cardiovascular system, temporary brain shrinkage including diminished attention and memory, as well as other problems.
But determining how much water to drink has traditionally been difficult. Researchers from the University of California, Berkeley and at Northwestern University seek to improve the situation with the invention of a patch that measures sodium in sweat from the skin. By measuring sweat through the device, athletes could determine how much to drink to rehydrate.
The research efforts at Berkeley at Northwestern are not related.
The Northwestern effort, backed by Gatorade, enables patients to determine if they should drink more water and replenish electrolytes. The technology could also spot a biomarker for cystic fibrosis. Its diagnostic capabilities could evolve to include other diseases over time. ‘
The Berkeley research aims to improve scientists’ understanding of sweat metrics and understand differences in sweat-to-blood ratios in healthy and diabetic patients. The Berkeley scientists also concluded in an article published in Science Magazine that sweat monitoring is a “convenient tool for tracking hydration status,” while also determining more research is needed to understand how factors such as age, body mass and diet influence sweat composition.
In the long-run, sweat-monitoring sensors could be integrated directly into workout clothing. “For example Lycra or yoga pants would be perfect because they will be permanently in contact with your skin for longer periods of time,” Mallika Bariya from the University of California at Berkeley told NPR.
The underpinnings of the modern insurance industry stretch back to circa 1686, when Edward Lloyd opened a new coffee shop near London’s docks. Patrons came to gather to drink tea and coffee, of course, but they also came to gossip and bet. As BBC explains, some visitors of the establishment bet Royal Navy officer Admiral John Byng would be executed “for his incompetence in a naval battle with the French.” That bet turned out to be right.
Out of that culture of gambling, insurance opportunities began to dawn. If sea captains wanted to insure a ship, they could approach patrons to sign the bottom of a contract. The term “underwriter” emerged from that context, as did the insurance company Lloyd’s of London.
Information — about the ships, crews and anything else relevant — was an important part of the business. But once a seafarer sets sail, it was often difficult to determine what happened to a given vessel until it returned to the London port.
The traditional industry is similarly reactive. In the case of life insurance, for instance, an applicant needs to fill out scores of documents and undergo medical exams while trying to makes sense of commission-driven sales staff.
Companies like Ethos Life Insurance is aiming to change that equation. Arming itself with predictive analytics the company says applying for life insurance can take minutes for the majority of applicants. The majority of applicants also are not required to undergo a medical exam. Other companies such as Ladder, Fabric and Quotacy have similar business models.
From more of an IoT perspective, John Hancock announced it would cease offering traditional life insurance, replacing it with interactive policies that draw data from wearable devices and smartphones. The company launched an interactive life insurance option in 2015. The company’s policy awards policyholders for achieving wearable-based fitness goals and for entering workouts and healthy meals into an app.
Policyholders score premium discounts for hitting exercise targets tracked on wearable devices such as a Fitbit or Apple Watch and get gift cards for retail stores and other perks by logging their workouts and healthy food purchases in an app.
Smart Speaker Skills for the Elderly
While the advocacy group Consumer Watchdog, Electronic Frontier Foundation and others accuse Google and Amazon of spying on users, the devices continue to be popular. More than 133 million smart speakers are in use around the world. The microphones embedded in such devices — and the cameras included in some home hubs — can provide a convenient means of communication with the elderly and their loved ones.
Amazon Echo Show
On that note, State Farm and Amazon have teamed up to create a “skill” for owners of the Amazon Echo Show to help create “a virtual circle of support, coordination and communication at any time of the day while delivering a personalized experience to the senior,” according to a statement shared with CNBC. Owners of the Echo Show device can share alerts and allow visitors and caregivers to check in on them. Separately, Amazon is working on developing mechanisms for its Alexa devices to help manage health information and provide health-related reminders.
About the Author
You May Also Like | <urn:uuid:20771931-b09d-4234-9432-69d8d7197500> | CC-MAIN-2024-38 | https://www.iotworldtoday.com/health-care/5-digital-health-care-advances- | 2024-09-09T12:32:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00169.warc.gz | en | 0.954942 | 1,921 | 2.765625 | 3 |
Access to quality education remains a significant challenge, particularly in areas with limited infrastructure and socio-economic barriers. However, the advent of the Internet has played a crucial role in bridging the gap for remote and economically disadvantaged communities. It has provided equal learning opportunities by overcoming geographical constraints.
Despite these advancements, disparities in teaching standards, outdated curricula, and insufficient funding contribute to an unequal quality of education. These factors create an unfair advantage for a privileged few, while leaving many students behind. Additionally, the lack of qualified teachers, particularly in rural or remote areas, further exacerbates the problem. To address this issue, it is imperative to invest in continuous professional development and training programs for educators, enabling them to enhance their skills and stay updated with evolving pedagogical approaches.
Another pressing concern is the growing educational divide based on income levels. Parents with higher incomes tend to invest more in their children's education, leading to a widening gap in academic achievement, college attendance, and overall success. Schools face the challenge of improving education quality within limited resources, often driven by government regulations that demand high standards. However, low teacher salaries hinder progress, as schools with better funding and incentives can attract more qualified educators. This perpetuates a cycle where underprivileged schools struggle to improve their foundation, attract skilled teachers, and provide better education for their students. Consequently, marginalized communities face limited access to quality education. Insufficient collaboration among teachers and underutilization of technology also contribute to compromised education quality, particularly in underdeveloped communities.
Benefits of Utilizing Mobile Security and UEM Technologies
Effective integration of mobile technologies can help level the playing field for schools lacking in digital resources. Mobile devices enable distance learning through web-based platforms, allowing students to access learning materials anytime and anywhere. Moreover, teachers can collaborate more effectively in designing learning programs and teaching strategies. The mobility provided by these devices extends the benefits of technology beyond the confines of traditional classrooms.
Encouraging programs that provide mobile devices to every student, especially in low-income schools, can significantly impact student retention rates. Digital textbooks and other educational resources can be accessed conveniently, while teachers can monitor students' progress. Mobile clouds facilitate seamless sharing of work, including daily lessons and homework. Instant messaging and online forums enable continuous question and answer sessions, fostering active engagement. Additionally, collaborative projects and lessons can be conducted more efficiently, even among students with conflicting schedules.
Utilizing mobile device management and enterprise mobility security solutions, such as Codeproof Cyber Device Manager MDM, academic institutions can optimize the use of smartphones and tablets to create a technology-driven educational ecosystem. These platforms allow for device management, app installation and updates, web content filtering, device inventory, and even locating missing devices, ensuring a seamless and secure learning experience.
By addressing these challenges and embracing the potential of technology, we can pave the way for a more equitable and comprehensive education system, enabling every student to thrive and reach their full potential. | <urn:uuid:51a875d6-1c92-4085-82b1-f81eeeb40b3d> | CC-MAIN-2024-38 | https://www.codeproof.com/industries/educational-organizations-mobile-security-mdm/ | 2024-09-10T18:36:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00069.warc.gz | en | 0.942117 | 600 | 3.46875 | 3 |
– Brook Reams, Architect, Core Technologies, Brocade (www.brocade.com), says:
The first wave of virtualization, mainframe computing, is the grandfather of all virtualization techniques. It’s the ultimate virtualized computing platform and continues to be deployed today. For example, IBM hosts Linux virtual machines on its mainframe platforms and this option has grown in popularity extending the mainframe into the 21st century. The next wave, minicomputers in the 1970’s and 1980’s also implemented virtualization techniques (anyone remember the VAX cluster?), but that technology wave is largely absent from most data centers today. Shortly after came the UNIX platforms of the 1980’s and 1990’s. HP, IBM and Oracle/Sun adapted virtualization technologies for their UNIX platforms and these virtual platforms are running in just about every data center today.
The latest wave of virtualization adoption targets the x86 platforms from Intel and AMD. Microsoft, VMware and Zen provide virtualization software using a technique called a hypervisor. The hypervisor virtualizes the x86 hardware creating multiple virtual machines each of which hosts an operating system (Microsoft or Linux) and its associated application. As servers designed with x86 microprocessors started to appear in the data center, they often were used to run Microsoft’s Windows operating system and various compatible client/server applications. Soon thereafter, servers began to proliferate. Client/server means the application is broken up into components each of which runs on its own server. For example, the client (your desktop PC) connects over the LAN (or if you are at a hotel, over the internet) to the data center. That connection is hosted by servers called web servers. The application components are running on another set of servers, application servers, and the data is stored on a database hosted on, you guessed it, database servers. Servers, servers everywhere.
When a new IT project was requested by a business unit, it was common to “add servers” to the datacenter to host the required application components. Compared to UNIX servers (or even mainframes), x86 servers were pretty inexpensive, so a project centric “just add more servers” approach for data center infrastructure made economic sense, at least from a capital cost perspective. Fast forward to today. The result of the “add servers” approach is each of those servers is not highly utilized, takes up space, requires power and cooling and has to be connected to each other (as in lots of cables and network switches).
Well, data centers are running out of space, power, cooling and “spaghetti cabling” has become far too common hindering routine maintenance and preventing efficient cooling of the servers adding even more to the cooling bill. Server virtualization can reduce physical servers by 10 to 20 due to running 10 to 20 applications on a single virtualized server. That’s an enormous reduction in space, power, cooling and cables. This is compelling, and it’s what initially made server virtualization attractive and drove the early adoption of this technology.
Virtualization “does more with less”, improves hardware utilization which lowers operating cost, can dramatically reduce the time to deploy or extend the resources applications use, and make applications more available. Virtualization groups a few servers together into a cluster where each server can host 10, 20 or more virtual machines. A small cluster of servers can replace 10 to 20 times as many physical servers. With virtualization, multiple applications securely run on the same server hardware, but each application thinks it has its own dedicated computer system. Hardware resources needed by the application are monitored and can be automatically adjusted as necessary providing high utilization with corresponding reductions in data center operating cost and less intervention required by server administrators.
There are unintended consequences for IT operations and application SLAs. Server virtualization software vendors built a LAN switch (in software) inside the hypervisor so they could manage virtual server movement between physical servers in the cluster. That solved the immediate problem of managing server mobility, but this isn’t scalable in the long run. As the number of virtual machines per physical server grows, the workload on the physical server when moving a virtual machine begins to negatively impact all the other applications and their SLAs on those servers. And, having a LAN switch inside the server complicates IT operations since the LAN doesn’t end at the Ethernet access switch anymore, it extends into the server cluster. That blurs the simple but effective separation of responsibilities between the server and network administration teams making configuration changes complicated, take longer and increase the potential for configuration mistakes.
Virtualization vendors are working on solutions to these unintended consequences. They have plans to support pass through switching so the LAN switch and its configuration and security policies are once again managed by the network administrator. When virtual machines move, the hypervisor isn’t required to handle that workload spike. Instead, the LAN switch would handle migration traffic eliminating the negative impact of this high volume traffic on the hypervisor and the applications SLAs it’s supporting. And, storage access and management continues to require integration of virtualization with proven shared storage technologies including Fibre Channel, Fibre Channel over Ethernet and Fibre Channel over IP. As more critical applications are virtualized they already use Fibre Channel storage and the migration onto virtual servers should not require moving storage off the SAN. As with the LAN, higher utilization of the server when combining 10 or more applications per server also increases the load on storage traffic. Higher speed storage links such as 8 Gigabit and soon 16 Gigabit Fibre Channel are able to stay ahead of the growing storage IO.
When considering wide scale virtualization projects it is important partner with a vendor that has expertise in data center networking solutions for the access, aggregation and core of the network that are optimized for server virtualization. Brocade has a complete line of data center networking products that allow application SLAs to extend across the fabric ensuring high priority applications are segregated from lower priority ones. We are also working with leading virtualization vendors so their management and virtual server orchestration software can plug into our management framework providing in depth understanding of the storage traffic, end to end, from the virtual machine to the storage port. Brocade has pioneered innovations such as Adaptive Networking, Advanced Performance Monitoring, and virtual port technology to simplify operation and management of storage in highly virtualized data centers. | <urn:uuid:0ac0f9c8-61d0-4f4a-8d90-810be454f773> | CC-MAIN-2024-38 | https://datacenterpost.com/virtualization-history-and-benefits/ | 2024-09-16T18:26:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00469.warc.gz | en | 0.92061 | 1,318 | 2.96875 | 3 |
Ransomware can strike any industry, from logistics and media companies to non-profit organizations and governments. Even hospitals are targets for ransomware, holding data and lives hostage.
Ransomware can cause irreparable damage. Understanding how it works and how to detect it can help prevent attacks.
How Does Ransomware Work?
Ransomware is a type of malware that locks files on a victim machine, making data inaccessible. A ransom note appears on the victim's computer with instructions for paying the attacker (usually in a cryptocurrency such as Bitcoin) to unlock the files.
A ransomware attack can originate from a malicious link, email attachment, exploited vulnerability, attack campaign, or worm. After ransomware malware is installed on the victim's machine, the malware often spreads to other devices on a network and connects to a command-and-control (C&C) server controlled by the attacker. The ransomware then waits for a command (such as "encrypt files") from the attacker.
Typically, ransomware locks files with asymmetric encryption, which is a strong cryptographic method that requires two keys (a private key and public key) to encrypt and decrypt data. The attacker controls a private key, and sends a public key to the victim's computer. The ransomware begins encrypting data based on information in the public key. Ransomware usually encrypts non-critical data based on file extensions (such as .txt, .jpg, .xls, or .doc) to make sure that the victim computer functions well enough for the victim to pay the ransom.
After data encryption begins, the encryption process can quickly spread throughout the network and across file shares. The only way to decrypt the data is to receive the private key from the attacker, after paying a ransom.
Examples of Ransomware Variants
Encrypting malware and the extortion activities associated with it have been around for decades. But the type of malware known as ransomware first appeared in 2012. Many variations of ransomware have since emerged, helping attackers evade anti-virus software and network defenses. Let's compare notable ransomware variants to learn how ransomware generally infects computers and encrypts files.
CryptoLocker appeared in September 2013, making it one of the earliest examples of ransomware. CryptoLocker was a trojan virus that spread through a botnet and malicious email attachments claiming to be FedEx and UPS tracking notifications. Files on the local hard drive and mounted file shares were encrypted with RSA algorithms. CryptoLocker was stopped in 2014 when the private keys were captured by law enforcement and a decryption tool was released.
CryptoWall emerged in 2014 and is still seen today. Attackers distribute CryptoWall malware to victims through exploit kits, phishing emails, or malicious links within ads. CryptoWall injects code into explorer.exe, infecting system processes on Windows machines. After the code is run, user information is encrypted and sent to the C&C server to generate a unique public key. Files on the local hard drive and mounted file shares are encrypted with the public key and an algorithm (such as RSA-2048 or AES-256). CryptoWall 3.0 from 2015 has been the most lucrative version for attackers.
Locky emerged in 2016 and is mainly distributed to victims through an emailed malicious Microsoft Word attachment. The Word document includes a malicious macro that, once enabled, downloads a trojan virus with the encryption malware. Keys are generated on the C&C server, and files on the local hard drive and mounted file shares are encrypted with RSA-2048 and AES-128 algorithms.
WannaCry ransomware variants appeared in May 2017 during an infamous global attack. WannaCry spread as a worm (meaning no user interaction was required to install and spread malware to other devices) by leveraging EternalBlue. EternalBlue is an exploit of a vulnerability in legacy versions of the SMB file-sharing protocol (MS17-010). WannaCry leveraged the DoublePulsar tool to install a backdoor on the victim's computer to manage communication with a C&C server. WannaCry was stopped after the discovery of a "kill switch" in the malware.
Petya and NotPetya
Petya (also referred to as GoldenEye) malware appeared in 2016 and was distributed as email attachments. Unlike most ransomware, Petya often encrypted local system files that prevented victims from accessing their machines. Another variant, referred to as NotPetya, appeared shortly after WannaCry in June 2017. This variant spread as a worm through the same EternalBlue exploit seen in the WannaCry attack and encrypted the master boot record on Windows machines. NotPetya did not provide an option for decrypting files and caused billions of dollars in damage across the globe.
Ryuk ransomware appeared in 2018 and initially targeted large enterprises. In 2020, Ryuk ransomware was linked to hundreds of U.S. hospital and healthcare targets. Ryuk is typically delivered through a Trojan virus called Trickbot, which is known to install a backdoor (anchor_dns) on the victim machine. This backdoor manages encrypted communication with a C&C server through DNS tunneling. To spread across the network, the malware leverages a variety of tools and protocols, including Mimikatz, PowerShell, and Remote Desktop Protocol (RDP). Ryuk encrypts files with a combination of symmetric (AES) and asymmetric (RSA) encryption. Files are encrypted with AES-256 and the AES key is encrypted with an RSA public key. The encrypted key is embedded into the executable file sent to the victim. To evade detection, malware components—the executable file and C&C server domains—are unique to each victim.
How to Detect Ransomware
Detection methods can include log, process, and network traffic monitoring. One approach is to monitor logs and processes for binary files involved in data destruction, such as vssadmin, wbadmin, and bcdedit. Ransomware typically destroys shadow copies of data to prevent data recovery efforts that don't involve paying the ransom.
ExtraHop Reveal(x) automatically detects unusually large volumes of file modifications performed over file-sharing protocols such as SMB, as well as the presence of abnormal file extensions and ransom notes.
Specific variants can also be identified by file names or extensions appended to encrypted files. For example, the Brrr variant of Dharma ransomware adds the file extension .brrr to encrypted files. These ransomware extensions identify which files are encrypted and no longer accessible, persuading the victim to pay the ransom.
Network defenders can also leverage threat intelligence, which identifies suspicious IP addresses, hostnames, and URIs associated with threat groups. Threat intelligence data can be found in free and commercial sources provided by the security community. ExtraHop Reveal(x) includes curated threat collections, which are continuously updated to cover new ransomware variants. C&C domains associated with ransomware can be automatically detected by Reveal(x) in HTTP and DNS traffic.
How to Prevent Ransomware Attacks
One way to avoid the damage inflicted by ransomware is to maintain off-site backup files that can restore critical systems. Periodically test these backup files to make sure they are working and updated.
To reduce the number of ransomware attack vectors, disable internet access for internal services, especially services that run over file-sharing or remote access protocols such as RDP. For services that must connect to the internet, monitor incoming connections with a firewall or gateway that scans traffic for malicious content. Another strategy for preventing the spread of ransomware is to segment networks and create policies that limit the device interactions to a sub-network. Finally, make sure that servers are routinely updated and patched to reduce the number of vulnerabilities that an attacker can exploit. | <urn:uuid:6a370983-00cb-4b8b-ac33-acc34499c1ae> | CC-MAIN-2024-38 | https://hop.extrahop.com/company/blog/2020/ransomware-explanation-and-prevention/ | 2024-09-20T14:26:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00169.warc.gz | en | 0.932093 | 1,577 | 3.75 | 4 |
Technical debt arises when IT or development teams do not improve inefficient, outdated processes. Often, technical debt accumulates when teams consciously make a decision to choose a “quick fix” to a problem as opposed to a comprehensive long-term solution. Technical debt can apply to outdated equipment, hardware, or software.
A Technical Definition of Tech Debt
Technical debt accrues when data centers rely on outdated technology. Much like monetary debt, technical debt must be “repaid” some way or another down the line. Like monetary debt, technical debt can accrue “interest” if systematic problems compound overtime. When an organization neglects to modernize a system, which is a potentially costly endeavor, it instead applies fixes to the legacy system in place. These quick fixes turn into bugs, which requires more fixes and results in high maintenance costs. In fact, a recent study found that a typical company wastes 23% to 42% of development time on technical debt.
A Simple Definition
Technical debt is the cost of maintaining outdated systems as a cost-effective measure as opposed to investing in better solutions. When it comes to data centers, technical debt refers to the usage of outdated infrastructure and hardware systems that reduce data center efficiency. This type of debt accumulates when data centers keep legacy systems in place as opposed to investing in new and improved technologies. Data centers with high technical debt typically have higher energy costs that harm sustainability initiatives.
Origins of Technical Debt
First coined in the 1990s by Ward Cunningham in the Agile Manifesto, technical debt originally referred to software coding. “Shipping first-time code is like going into debt. A little debt speeds development so long as it is paid back promptly with refactoring,” Cunningham said in 1992. “The danger occurs when the debt is not repaid. Every minute spent on code that is not quite right for the programming task of the moment counts as interest on that debt.”
Since then, the concept of technical debt has been applied to a variety of operations, including data centers. Every organization has some degree of technical debt, but it’s best to keep debt low so personnel can focus on new initiatives as opposed to applying endless maintenance to outdated systems.
Why Should You Care About Tech Debt?
While it saves money in the short term, technical debt costs more in the long-term. High technical debt can lead to malfunction and outages. A survey indicated that 45% of data center managers said that their most recent outage cost between $100,000 and $1 million in direct and indirect costs. The same study indicated that most outages are preventable with better management and configuration.
How Do You Identify Tech Debt?
It can be challenging to identify technical debt because the causes are myriad. In order to ascertain what type of technical debt there is and how to fix it, you should:
Talk to on-campus workers about possible inefficiencies. Individuals involved in the day-to-day operations will be aware of issues and can advise on how to fix them.
Take an inventory of equipment, assess the lifespan, and determine if it is operating efficiently.
Analyze data to identify potential power waste and opportunities to automize inefficient manual processes.
Types of Technical Debt
Many factors can increase the amount of technical debt an organization faces in its day-to-day operations. Some technical debt is deliberate, such as cutting corners to make a deadline with the idea that the problem will be fixed later on, while other technical debt is accidental, such as systems aging and failing over time. Successful data centers refurbish old equipment regularly and invest in new technology to ensure sustainability and efficiency.
Server racks, power systems, cooling equipment, and other physical infrastructure become outdated and inefficient, requiring a higher cost of power and maintenance. Outdated equipment fails more often and requires repairs, increasing downtime. Systems that require complicated configurations, which are often applied as a quick fix as opposed to a system-wise overhaul, are more prone to manual error.
Older software and applications often have security vulnerabilities that leave data centers exposed to breaches and hacks. Logging is typically inadequate in legacy systems so attackers can take advantage of flaws. Many times these systems are kept in place because operators do not want to touch older (but workable) code, fearing they will break it.
Legacy software may not run with newer equipment. Software built in 2008 may either not run with newer infrastructure or the operating system may sunset support for the program all together. Legacy software also may not automate some functions requiring a greater deal of manual labor, which can result in a higher rate of human error.
Documentation on systems may be old, outdated, or poorly communicated to workers. If workers do not know how systems work or how errors were solved in the past, they will have trouble fixing future problems.
Real World Technical Debt Examples
Seventy-eight percent of data center managers believe downtime is preventable with better management, processes, or configuration. In 2019, Wells Fargo famously experienced an outage that left customers unable to access their bank accounts due to a preventable fire in a data center. Instances of data center outages have gone down in recent years, but the cost of failure went up.
Uptime Institute recently studied data center outages and discovered that many failures are due to operators cutting short-term costs, which made systems more susceptible to catastrophic failure. For example, ATS components were replaced with customized switchgear. These custom parts have a greater number of controls, and maintenance technicians must ensure everything is synchronized. This high-level of customization opens up systems to failure, documentation technical debt, and software technical debt.
Read more about technical debt in these Data Center Knowledge articles:
About the Author
You May Also Like | <urn:uuid:b284e1e0-f0da-474f-9a1d-b37a068d1e31> | CC-MAIN-2024-38 | https://www.datacenterknowledge.com/data-center-infrastructure-management/what-is-technical-debt-definitions-examples-and-more | 2024-09-20T14:51:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00169.warc.gz | en | 0.943394 | 1,184 | 2.796875 | 3 |
AD Set Up: Adding Users, Machines and Using GPO
This scenario serves as a guide on how to add users, machines and using GPO in Windows Active Directory
Active Directory is a directory service developed by Microsoft for Windows domain networks.
OBJECTIVES AND OUTCOME:
After completing this scenario, you will be able to:
– Add users and machines in active Directory and use GPOs.
In order to get the full benefit from this scenario, it is suggested that you have competencies in the following areas:
– A basic understanding of Windows Server and Active Directory.
It is suggested that you consult with these recommended reading resources/scenarios:
This scenario was created by Mercy Mwikali. | <urn:uuid:96e3a466-f938-492a-a41d-1f04ff9a087c> | CC-MAIN-2024-38 | https://www.cyberranges.com/ad-set-up-adding-users-machines-and-using-gpo/ | 2024-09-08T08:47:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00369.warc.gz | en | 0.884345 | 149 | 2.625 | 3 |
Research from University of Chicago Could Pave Way for Future Quantum Communication Channels
(Inverse.com) New research from the University of Chicago has uncovered a potential way to bring quantum technology to our everyday electronics.
The research investigated the properties of a wafer-thin material called silicon carbide and how it could be used to control capricious quantum states. From the two studies, the University of Chicago team determined that could not only tune the silicon carbide’s quantum states using run-of-the-mill electric fields but that the quantum states of silicon carbide uniquely emitted photons conducive for communications.
David Awschalom, the team’s lead researcher and Liew Family Professor in Molecular Engineering at UChicago, said in a statement that this exciting quantum property of silicon carbide could pave the way for future quantum communication channels. “This makes them well suited to long-distance transmission through the same fiber-optic network that already transports 90 percent of all international data worldwide,” said Awschalom.
These results are still very far from commercializing such tech, the authors hope their discoveries will play a part in enabling the next generation of quantum technology. “This work brings us one step closer to the realization of systems capable of storing and distributing quantum information across the world’s fiber-optic networks,” Awschalom said. | <urn:uuid:cd6b812b-e188-47e4-b75d-73b4abfa836a> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/research-from-university-of-chicago-could-pave-way-for-future-quantum-communication-channels/ | 2024-09-08T09:35:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00369.warc.gz | en | 0.878963 | 285 | 2.875 | 3 |
The emergence of network transports has given rise to new applications and services that take advantage of low-overhead access to the network interface from user or kernel address space; remote direct memory access (RDMA); transport protocol offloading; and, hardware support for event notification and delivery. The Direct Access File System (DAFS) is a new commercial standard for file access over this new class of networks. DAFS grew out of the DAFS Collaborative, an industrial and academic consortium led by Network Appliance and Intel. DAFS file clients are usually applications that link with user-level libraries that implement the file access Application Programming Interface (API). DAFS file servers are implemented in the kernel.
DAFS clients use a lightweight Radio Port Controller (RPC) to communicate file requests to servers. In direct read or write operations, the client provides virtual addresses of its source or target memory buffers; and, data transfer is done using RDMA operations. RDMA operations are always issued by the server.
Optimistic DAFS Clients
In DAFS direct read and write operations, the client always uses an RPC to communicate the file access request along with memory references to client buffers that will be the source or target of a server-issued RDMA transfer. The cost associated with always having to do a file access RPC is manifested as an unnecessarily high latency for small accesses from server memory. A way to reduce this latency is to allow clients to access the server file and virtual memory (VM) cache directly, rather than having to go each time through the server vnode interface via a file access RPC.
An Optimistic DAFS improves on the existing DAFS specification by reducing the number of file access RPC operations needed to initiate file I/O and replaces them with memory accesses using client-issued RDMA. Memory references to server buffers are given out to clients or other servers that maintain cache directories. They are then allowed to use those references to directly issue RDMA operations with server memory. To build cache directories, the server returns to the client a description of buffer locations in its VM cache. These buffer descriptions are returned either as a response to specific queries (i.e., client asks: give me the locations of all your resident pages associated with file”) or piggybacked in the response to a read or write request (i.e., server responds: “here’s the data you asked for, and by the way, these are their memory locations that you can directly use in the future”). In Optimistic DAFS, clients use remote memory references found in their cache directories, but accesses succeed only when directory entries have not become stale. For example, this is the result of actions of the server pageout daemon. There is no explicit notification to invalidate remote memory references previously given out on the network. Instead, remote memory access exceptions thrown around by the target NIC and caught by the initiator NIC, can be used to discover invalid references and a switch to a slower access path using file access RPC. Therefore, by maintaining the NIC memory management unit in the case where RDMA can be remotely initiated by a client at any time is tricky, and needs special NIC and OS support. Finally, an Optimistic DAFS requires maintenance of a directory on file clients (in user-space) and on other servers (in the kernel).
Kernel Support For DAFS Servers
Special capabilities and requirements of networking transports used by DAFS servers expose a number of kernel design and structure issues. In general, a DAFS file server needs to be able to:
- Do asynchronous file I/O.
- Integrate network and disk I/O event delivery.
- Lock file buffers while RDMA is in progress.
- Avoid memory copies.
Now, lets look at various kernel support mechanisms for DAFS servers. What follows is a description of each kernel support mechanism:
- Event-Driven Design Support: Consists of kernel asynchronous file I/O interfaces and integrating network and file event notification and delivery.
- Vnode Interface Support: P Consists of a vnode interface designed to address these needs.
- VM System Support: Consists of kernel support for memory management of the asymmetric multiprocessor system that consists of the NIC and the host CPU.
- Buffer Cache Locking: Consists of modifications to buffer cache locking.
- Device Driver Support: Consists of device driver requirements of memory-to-memory NIC.
Event-Driven Design Support
An area of considerable interest in recent years has been that of event-driven application design. Event-driven servers avoid much of the overhead associated with multithreaded designs, but require truly asynchronous interfaces coupled with efficient event notification and delivery mechanisms integrating all types of events. The DAFS server requires such support in the kernel.
Vnode Interface Support
Vnode/VFS is a kernel interface that separates generic file-system operations from specific file-system implementations. It was conceived to provide applications with transparent access to kernel file-systems, including network file-system clients such as a Network File System (NFS). The vnode/VFS interface consists of two parts: VFS defines the operations that can be done on a file-system. Vnode defines the operations that can be done on a file within a file-system.
VM System Support
Maintaining virtual/physical address mappings and page access rights, used by the main CPU memory-management hardware, is done by the machine-dependent physical mapping (pmap) module. Low level machine-independent kernel code such as the buffer cache, kernel malloc and the rest of the VM system, are using pmap to add or remove address mappings and alter page access rights.
Symmetric multiprocessor (SMP) systems sharing main memory can use a single pmap module as long as translation lookaside buffers (TLB) on each CPU are kept consistent. Pmap operations apply to page tables shared by all CPU. TLB miss exceptions thrown by a CPU, result in a lookup for mappings in the shared page tables. Invalidations of mappings are applied to all CPUs.
Memory-to-memory NIC, store virtual-to-physical address translations, and access rights for all user and kernel memory regions directly addressable and accessible by the NIC. Main CPUs use their on-chip translation look-aside buffer (TLB) to translate virtual to physical addresses. A typical TLB page entry includes a number of bits such as verification/validation (V) and Analog Control Channel (ACC)–signifying whether the page translation is valid, and what the access rights to the page are, along with the physical page number. A miss on a TLB lookup requires a page table lookup in main memory. NIC on the Protocol Capability Indicator (PCI–(or other I/O)) bus have their own translation and protection (TPT) tables. Each entry in the TPT includes bits enabling RDMA Read or Write (i.e., the W bit in the diagram) operations on the page; the physical page number; and, a Ptag value identifying the process that owns the pages (or the kernel). Whereas the TLB is a high-speed associative memory, the TPT is usually implemented as a dynamic random access memory (DRAM) module on the NIC board. To accelerate lookups on the TPT, remote memory access requests carry a Handle index that helps the NIC find the right TPT entry.
Buffer Cache Locking
In an RDMA-based data transfer, the server sets up the RDMA transfer in the context of the requesting RPC. Once issued, the RDMA proceeds asynchronously to the RPC. The latter does not wait for RDMA completion. To serialize concurrent access to shared files in the face of asynchrony, the vnode (vp) of a file needs to be locked for the duration of the RPC. However, the data buffers (bp’s) transferred need to be locked for the full duration of the RDMA. Locking the vp (i.e., the entire file) for the duration of the RDMA would also work, but would limit performance in case of sharing, since requests for non-overlapping regions of a file would have to be serialized.
A multithreaded event-driven kernel server that directly uses the buffer cache and does event processing in kernel process context, faces problems in the following circumstances: When a thread tries to lock a buffer, it is already locking (because a transfer is in progress on that buffer) and expecting to block until that lock is released by some other thread; and, when a buffer is released from a different thread than the one that locked it.
Transferring lock ownership to the kernel during asynchronous network I/O, does not help, since the lock release is done by some kernel process (whichever happens to have polled for that particular event), rather than by the kernel itself. The solution presently used, is for the kernel process that issued an RDMA operation to wait until the transfer is done in order to release the lock. This also prohibits that process from trying to lock the same buffer again, thus causing a deadlock panic. A better solution is to enable recursive locking and allow lock release by any of the server threads.
Device Driver Support
Memory-to-memory network adapters virtualize the NIC hardware and are directly accessible from user space. One such example is the virtual interface (VI)–where the NIC implements a number of VI contexts. Each VI is the equivalent of a socket in traditional network protocols, except that a VI is directly supported by the NIC hardware and usually has a memory-mapped rather than a system call interface. The requirement to create multiple logical instances of a device, each with its own private state (separate from the usual device softcopy state), and to map those devices in user address spaces, requires new support from Berkeley Software Design (BSD) kernels.
Network Driver Model
Finally, network drivers in BSD systems are traditionally accessed through sockets and do not appear in the file-system name space (i.e., under /dev). User-level libraries for memory-to-memory network transports require these devices to be opened and closed multiple times, with each opened instance appearing as a separate logical device, maintaining a private state, and be memory-mapped.
Summary And Conclusions
As previously explained, the Direct Access File System (DAFS) is an emerging commercial standard for network-attached storage on server cluster interconnects. The DAFS architecture and protocol, leverage network interface controller (NIC) support for user-level networking, remote direct memory access, efficient event notification, and reliable communication. This article demonstrated how the current server structure can attain read throughput of more than 100 MB/s over a 1.25 Gb/s network, even for small (i.e., 4K) block sizes, when pre-fetching and using an asynchronous client API. Finally, to reduce multithreading overhead, you should integrate the NIC with the host virtual memory system.
About the Author :John Vacca is an information technology consultant and author. Since 1982, John has authored 36 technical books including The Essential Guide To Storage Area Networks, published by Prentice Hall. John was the computer security official for NASA’s space station program (Freedom) and the International Space Station Program, from 1988 until his early retirement from NASA in 1995. John can be reached at email@example.com. | <urn:uuid:856a9560-dc38-4611-902d-e5f34331f61d> | CC-MAIN-2024-38 | https://www.enterprisestorageforum.com/hardware/storage-technology-in-depth-dafs/ | 2024-09-09T14:18:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00269.warc.gz | en | 0.916203 | 2,391 | 2.796875 | 3 |
API security warrants its own specific solution
Application programming interfaces (APIs) enable developers to quickly and easily roll-out services but they’re also equally attractive to attackers. This is because they can provide ready access to back-end systems and sensitive data sets. What makes these attacks so interesting is how they are executed: unlike a traditional “hack,” an API attack doesn’t hinge on there being something wrong with the API. Rather, attackers can legitimately use the way an API functions against it and can simply find out if it hasn’t been developed securely through standard interaction.
The OWASP Foundation recognizes this fact via the API Security Top 10 list of vulnerabilities and security risks. When we look at the list, there are six common methods of execution. Three of the issues occur due to weak access control and three to business logic abuse, with the remainder existing due to insufficient traffic management, application vulnerabilities, lack of visibility and lack of operational security readiness.
These issues are unique to APIs and make them particularly challenging to secure, so let’s look at each in detail.
1. Broken object level authorisation (BOLA)
Formerly known as Insecure Direct Object References (IDOR), BOLA allows the attacker to perform an unauthorized action by reusing an access token. This method has been widely used to attack IoT devices, for instance, as it can be used to allow the attacker to access other user accounts, change settings and generally wreak havoc much to the embarrassment of the IoT vendor.
The attack relies on the API’s resource IDs or objects not having sufficient validation measures in place. In some cases, the data used by the API has no user validation and is accessible to the public, while in other cases error messages return too much information, providing the attacker with more information on how to abuse the API.
Defending against BOLA attacks requires the validation of all user privileges for all functions across the API. API authorization should be well defined in the API specification and random/unpredictable IDs. It’s also important to test these validation methods on a routine basis.
2. Broken user authentication
An attacker can impersonate a genuine user if there are flaws with user authentication. Mechanisms such as log-in, registration, and password reset can be bombarded with automated attacks and, if poorly secured, will allow weak passwords, return error messages to the user with too much information, lack token validation or have weak or non-existent encryption.
Preventing these abuses requires security to be prioritized during development. All the authentication mechanisms mentioned above need to be identified and multi-factor authentication (MFA) needs to be applied. The development team should also look to implement volumetric and account lockout protection mechanisms to prevent brute force attacks.
3. Excessive data exposure
Some published APIs expose more data than is necessary as they rely on the client app rather than back-end systems to filter. Attackers can use this information to carry out enumeration attacks and build up an understanding of what works and what doesn’t, allowing them to create a “cookbook” for stealing data or for orchestrating a large attack at a later stage.
Limiting data exposure requires the business to understand and tailor the API to user needs. The aim is to provide the minimum amount of data needed, so the API needs to be highly selective in the properties it chooses to return. Sensitive or personally identifiable information (PII) should be classified on backend systems and the API should never rely on client-side filtering.
4. Lack of resources and rate limiting
If the API doesn’t apply sufficient internal rate limiting on parameters such as response timeouts, memory, payload size, number of processes, records and requests, attackers can send multiple API requests creating a denial of service (DoS) attack. This then overwhelms back-end systems, crashing the application or driving resource costs up.
Prevention requires API resource consumption limits to be set. This means setting thresholds for the number of API calls and client notifications such as resets and lockouts. Server-side, validate the size of the response in terms of the number of records and resource consumption tolerances. Finally, define and enforce the maximum size of data the API will support on all incoming parameters and payloads using metrics such as the length of strings and number of array elements.
5. Broken function level authorization
Effectively a different spin on BOLA, this sees the attacker able to send requests to functions that they are not permitted to access. It’s effectively an escalation of privilege because access permissions are not enforced or segregated, enabling the attacker to impersonate admin, helpdesk, or a superuser and to carry out commands or access sensitive functions, paving the way for data exfiltration.
Stopping this level-hopping activity requires authentication workflow to be documented and role-based access to be enforced. This requires a strong access control mechanism that flows from “parent to child” and doesn’t permit the reverse.
6. Mass assignment
The attacker discovers modifiable parameters and server-side variables that they then exploit by creating new users with elevated privileges or by modifying existing user profiles. This can be prevented by limiting or avoiding the use of functions that bind inputs to objects or code variables. The API schema should include input data payloads and enforce segregation by whitelisting client-updatable properties and blacklisting those that should be restricted.
Incomplete, ad-hoc or insecure default configurations, misconfigured HTTP headers, unnecessary HTTP methods, permissive cross-origin resource sharing (CORS), and verbose error messages containing sensitive information are, unfortunately, all too common in APIs. They’re usually the result of human error, due to a lack of application hardening, poor patching practices or improper encryption and, when discovered by an attacker, can be exploited, leading to fraud and data loss.
Configuration is all about putting in place the right steps during the API lifecycle, so it is advised to implement a repeatable hardening process, a configuration review and update process, and regular assessments of the effectiveness of the settings. Defining and enforcing responses (including those for errors) can also stop information getting back to the attacker. CORS policies should also be put in place to protect browser-based deployments.
A staple of the OWASP Web Application top 10 list, injection attacks see the untrusted injection of code into API requests to execute commands or to gain unauthorized access to data. These attacks can happen when the database or application lacks filtering or validation of client or machine data, allowing the attacker to steal data or inject malware by sending queries and commands direct to the database or application.
The mitigation of injection attacks requires separation between data/commands and queries. Data types and parameter patterns should be identified, and the number of records returned should be limited. All the data from clients and external integrated systems should be validated, tested, and filtered.
9. Improper asset management
Poorly secured APIs such as shadow, deprecated, or end-of-life APIs are highly susceptible to attack. Other threat vectors include pre-production APIs that may have been inadvertently exposed to the public, or a lack of API documentation that has led to an exposed flaw, such as authentication, errors, redirects, rate limiting, etc.
Here it’s critical to look at the API publication process by replacing or updating risk analyses as new APIs are released. Continuous monitoring of the entire API environment, from dev to test, stage and production, including services and data flow is also advised. Adopting an OpenAPI specification can help simplify the process.
10. Insufficient logging and monitoring
Attackers can evade detection entirely if API activity isn’t logged and monitored. Examples of insufficient logging and monitoring include misconfigured API logging levels, messages lacking detail, log integrity not being guaranteed, and APIs being published outside of existing logging and monitoring infrastructure.
Logging and monitoring need to capture enough detail to uncover malicious activity, so it should report on failed authentication attempts, denied access, and input validation errors. A log format should be used that is compatible with standard security tools and API log data should be treated as sensitive whether in transit or at rest.
All ten attack methods reveal how difficult it can be to secure APIs, which are continuously being spun-up, updated or replaced, sometimes daily. In fact, they’re so numerous that their security can only be enforced using automation. Consequently, many organizations have tried to use rules-based security solutions and code-scanning tools, although these are not equipped to spot the types of abuses identified in the OWASP list. Web application firewalls (WAFs), for instance, offer limited protection because they look for known threats, while an API gateway can create more problems by acting as a single point of failure.
It’s for these reasons that Gartner recently created a distinct API security category, separate from these other tools, in acknowledgement of the fact that APIs have their own set of problems (that are also often unique to the business itself).
In the “Advance your Platform-as-a-Service Security” report, analyst Richard Bartley reveals API security tooling for API discovery and protection should be regarded as having equal importance to and sit between internet edge security (i.e., WAF) and the data plane security layers (i.e., the Cloud Workload Protection Platform or CWPP). This new breed of API security is therefore cloud-native and behavior-based, allowing it to spot and respond to API-specific anomalous activity.
These new tools specifically focus on the prevention of automated attacks against public-facing applications and the persistence of API coding errors. They use machine learning to analyze APIs and web applications coupled with behavioral analysis to determine whether the intent behind API interaction is malicious or benign. They can also act by blocking, rate limiting, geo-fencing and even deceiving attackers, thereby buying time to respond. Such capabilities mean that API-specific security solutions can be applied to aid the developer and to monitor the security of the API throughout its entire lifecycle, thereby preventing the automated attacks and vulnerability exploits identified in the OWASP API Security Top 10.
With APIs continuing to outstrip web apps in the rollout of new services, we must attend to how these are secured or risk building these services on shaky foundations. The hope is that with the OWASP Project highlighting how APIs can be exploited and Gartner creating a distinct new category, the tech sector will finally realize that API security is an anomaly that merits its own solution. | <urn:uuid:59a023f3-9027-426d-8356-4ea09da97811> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2022/06/13/risks-api-security/ | 2024-09-09T15:33:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00269.warc.gz | en | 0.91512 | 2,199 | 2.796875 | 3 |
What Types of Data Breaches do you Need to Know About in 2023?
By now, all firms should be aware that cybersecurity threats are among the leading risks any business faces. Within this, those that directly compromise sensitive data have the potential to be especially damaging.
With the typical cost of a data breach now reaching almost $4.5 million – a 15 percent increase over the last three years – it’s critical that enterprises have the correct defenses in place. This means data protection at every level of the business, from initial perimeter defenses to preventing data exfiltration. But in order to implement this effectively, it’s vital to understand what cybercriminals are looking for and the methods they’ll use to get it.
The Importance of Data Security in the Enterprise
Poor data security can have a wide range of repercussions for a business. Failing in this area can do a lot more than simply disrupt activity in the short term. Serious incidents can not only cost huge amounts of money, but lead to an exodus of customers and even threaten the future viability of the organization.
What Information is Typically Targeted in a Data Breach?
Many hackers today make extracting sensitive and confidential information the primary goal of their attacks. This can be anything from financial details that can be sold on for use in identity theft to trade secrets or intellectual property that would be highly valuable to competitors.
However, some categories of data are particularly valuable. Healthcare information, for instance, is often a key target for hackers, because its sensitive nature means organizations are more likely to give in to any demands in order to restore access or ensure it is not publicly disclosed.
Indeed, BlackFog’s 2022 State of Ransomware report revealed the most-targeted industries include education, healthcare and government, all of which are heavily dependent on confidential citizen data and also often have limited resources to dedicate to defending against attacks.
What are Some of the Main Ways a Data Breach Can Occur?
While there is a popular image of shadowy hackers on the dark web targeting firms with advanced threats, this is far from the only way in which data breaches can occur. In fact, the vast majority of cyber security issues – as much as 95 percent according to some studies – can be traced back to human error within the business.
This could be a mistake that leaves the door open for a cybercriminal. For example, failing to configure systems correctly or not recognizing vulnerabilities can leave firms open to techniques such as an SQL injection attack or advanced persistent threats. Relying on weak or reused passwords can also act as an invitation to hackers.
Falling victim to social engineering attacks is also a common way in which data breaches can occur, so it’s vital all employees are fully trained on best practices for data protection.
Why do Data Breaches Happen?
As well as knowing the ‘how’ when it comes to data breaches, it pays to be aware of the ‘why’. Knowing what hackers are looking for ensures you know what parts of the network to prioritize when building a data breach presentation strategy.
In the past, the main goal for many attacks was to gain access to precious personal data – such as financial details or social security number information – that could be sold on or used for the criminals’ own gain. However, in today’s environment, motivations have shifted. Nowadays, extortion is often the primary aim of a threat actor as it offers a relatively cheap and reliable way of making money.
Many firms will pay up to make a problem go away, regain access to critical files or prevent negative publicity, and those that do are often marked as an easy target that can be attacked again.
What are the Most Common Types of Data Breaches?
In order to prevent data breaches, you must first understand what they look like, the methods hackers use to gain access to businesses, and the different ways data exfiltration can take place. Therefore, familiarizing yourself with the most common types of data breach attack vectors is an essential first step in protecting your most sensitive information.
A wide-ranging term, malware is a catch-all phrase that can refer to any type of malicious software hackers seek to infect a network with. This can then be used by cybercriminals to gain unauthorized access to confidential information, exfiltrate data, disrupt systems, spy on a user’s activities or delete data on the network.
The most common way for malware to enter a network is via a phishing attack, which is the root cause of over 90 percent of incidents. These may invite users to open a file directly in order to inject malicious code or lead them to a website that can use a drive-by download to infect a system.
Getting more specific, ransomware is a particular type of malware that has become one of the most popular forms of cyberattack over the last few years. Indeed, according to Verizon’s 2022 Data Breach Investigations Report, ransomware was involved in 25 percent of all breaches last year.
The nature of these attacks has also changed. Traditionally, a malicious actor seeking a ransom would encrypt data or systems, preventing mission-critical business activities from taking place. They would then demand money in exchange for the decryption key needed to recover.
However, today, by far the most dangerous threat is double extortion ransomware. This type of ransomware attack also infiltrates key business or customer data and then threatens to release it publicly if a ransom is not paid. According to our latest annual data breach report, 89 percent of all ransomware attacks in 2022 involved data exfiltration, a nine percent rise on the previous year.
According to Verizon, 83 percent of data breaches involve external actors – which of course means around one in six incidents originate within your business. Insider threats, as these are known, can either be down to human error, such as an individual emailing sensitive information to the wrong recipient, or be intentional.
A malicious insider threat can do huge damage and be particularly hard to spot. Such individuals often know exactly what data will be the most valuable, how to access it, and how to cover their tracks and evade standard security measures. The motivations for this can vary, from retaliation for a perceived slight to blackmail or bribery.
Technology such as access management tools and anti data exfiltration (ADX) software can be highly useful here, as they can detect unusual behavior within the businesses and block any attempts to exfiltrate business or personal information as it occurs.
As well as being used as a channel to directly deliver malware, emails can present a range of other risks. These include targeted spear phishing attacks that are tailored to an individual victim, business email compromises that appear to be from trusted contacts such as suppliers or executives, and other social engineering attacks.
It’s estimated that over three billion phishing scam emails are sent around the world every day, which makes a strong email security solution an essential first line of defense. This should be used in conjunction with other solutions such as multi factor authentication, which can prevent an attacker from using login credentials they’ve acquired via phishing to access business or customer data.
The other critical defense against email-based threats is employee training. By ensuring everyone in the organization is aware of the threats that arrive in their inboxes and knows what telltale signs to look at or to determine if a message is genuine, firms can greatly reduce their risk.
Finally, there are also data beaches that may not come from a hacker, but from more traditional criminal activities. Lost or stolen devices remain a common source of data breaches and can cause major headaches to businesses, particularly when confidential data is being handled on portable endpoints such as smartphones.
Clear policies to and remind employees of their responsibilities when handling company data are essential in minimizing these risks, but tools that can remotely wipe data from company-owned devices will also be important. However, these may not always be in place if workers are using their personal devices, so it’s vital you have visibility into every endpoint that may hold company data.
Protecting Your Company From Data Breaches
Data breach prevention is always better than cure. Once you discover a breach, the damage is already done. However, by putting in place the right technologies and processes to address the specific risks posed by each of the above types of data breaches, firms can go a long way toward minimizing their risk.
How can you Prevent a Data Breach in Your Company?
Preventing data breaches from happening in the first place is the best form of defense against these attacks. While the first step in this should be solutions such as firewall, email security and antivirus to prevent a hacker from gaining access to your system in the first place, these solutions alone can never be 100 percent effective.
However, even if a cybercriminal has breached your perimeter, there are still steps you can take to stop them from stealing your data. Strong access management tools to prevent unauthorized access and ADX tools that can automatically step in to prevent data being removed from a network can ensure that your sensitive information remains protected.
How can you Tell if You’ve Fallen Victim to a Data Breach?
It can often be very difficult for firms to identify if they have been compromised. Indeed, according to Blumira, the average security breach goes unnoticed for 212 days before detection, while a further 75 days are needed to contain it.
This is where advanced, behavioral-based tools such as ADX come in. Unlike other solutions, which rely on matching potential threats with known signatures, these tools use machine learning to build up a complete picture of what typical user behavior in the business looks like.
If it spots activities that fall outside this – such as a user account suddenly attempting a large data transfer at an unusual time – it can block these before they have a chance to steal information.
What Should a Company do After a Data Breach?
If all else fails, it’s vital to have a comprehensive mitigation plan for responding to a data breach. This should cover a range of issues, from who in the business will take responsibility for managing the response, to determining if and when data protection bodies need to be informed, and putting in place improved systems to avoid future issues.
Even if firms act swiftly the expenses of a breach can be large and wide-ranging. Direct lost business, reputational damage and the prospect of fines and legal action all add to the financial losses firms can face. If customer data is compromised, costs may even include providing credit monitoring services to affected individuals, as well as any regulatory action.
Therefore, it’s clear that data breach prevention to block attacks before they happen is always the better option. While no system can be 100 percent secure, a defense in depth approach that runs from perimeter defenses through to endpoint protection and ADX is the best way to avoid a messy and expensive cleanup operation. | <urn:uuid:9e59f9b4-65e8-4ac9-9741-d782e8c57e86> | CC-MAIN-2024-38 | https://www.blackfog.com/what-types-of-data-breaches-do-you-need-to-know-about-in-2023/ | 2024-09-10T19:56:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00169.warc.gz | en | 0.950995 | 2,228 | 2.609375 | 3 |
Service providers and communities across the United States are always dealing with the same question: preemptively lay fiber optic cables and wait for demand or wait until customers demand the service?
Compared to the rest of the country, Maine ranks 49th in Internet speeds. In an attempt to improve the state’s ranking, the governor’s office spearheaded a report: Broadband: The Road to Maine’s Future. The report determined that Maine citizens don’t use the Internet to its maximal potential for expanding economic opportunities and improving educational outcomes. To bring Maine into the 21st Century, residents need to learn how to the Internet can help them grow and learn.
Compared to other states, Maine residents use the Internet significantly less, and many see no use for it. For example, across the United States, 97 percent of consumers shop online, yet less than 41 percent of Maine businesses have a website. While much of Maine doesn’t regularly use the Internet, that doesn’t mean that there isn’t demand. When new areas get Internet access, an estimated 75 percent of households sign up.
To expand broadband access, Maine’s government needs to actively stimulate Internet use. By training small and medium-sized business to create webpages for business promotion and explaining how they can use Internet services to grow their business, the entire state could benefit from economic development. But for this to happen, communities first need to have broadband access.
As with any broadband fix, there needs to be an investment into infrastructure. Over 40,000 Maine households don’t have Internet access, but the report estimates it will cost an estimated $60 million to build the necessary infrastructure.
At GeoTel Communications, our fiber network maps integrate telecom infrastructure data with geospatial technologies so that cities and local governments can analyze fiber network assets in a spatial, map-like environment and make decisions about planning new networks, such as building their own fiber optic network. To order any of GeoTel’s data sets for a particular city or metro area, give GeoTel Communications a call at (800) 277-2172. | <urn:uuid:0485f1c0-c210-4dbd-ba3f-2de0f8ff075c> | CC-MAIN-2024-38 | https://www.geo-tel.com/maine-build-fiber-wait-demand/ | 2024-09-15T17:51:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00669.warc.gz | en | 0.91498 | 433 | 2.5625 | 3 |
File Transfer Protocol (FTP) Port Exposed to the Internet
Category | SECURITY_MISCONFIGURATION |
Base Score | 3.0 |
FTP, an application protocol that is not encrypted, is exposed to the internet.
Attackers often gain access to file servers through credential attacks by obtaining passwords leaked in data breaches and by password spraying weak passwords. This access allows attackers to steal or ransom off data contained within the file server. In some cases, file server access may allow an attacker to compromise the host. | <urn:uuid:11d784da-94fd-4ca3-ae74-61ce78a9e4fc> | CC-MAIN-2024-38 | https://docs.horizon3.ai/knowledge_base/weaknesses/H3-2022-0008/ | 2024-09-16T22:20:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00569.warc.gz | en | 0.899939 | 111 | 2.859375 | 3 |
Two-factor authentication (2FA) provides an additional layer of security for authenticating users. The user is granted access only after successfully presenting two pieces of evidence (or factors): knowledge (something the user and only the user knows), and possession (something the user and only the user has).
Naverisk uses a password and/or a PIN code as the knowledge factor, and a Time-based One Time Password (TOTP), generated by the Google Authenticator (or compatible) application as the possession factor. A person without knowledge of the Password/PIN code and possession of the device running Google Authenticator is unable to log into Naverisk.
This document details the steps of setting up Google Two-Factor Authentication for users of Naverisk. This guide will also cover setting up Google Authenticator on an iPhone to generate authentication codes.
It is recommended that the Google Authenticator or compatible application such as MS authenticator is downloaded onto a smartphone to generate the authentication code needed when logging in. This can be found by simply searching for the Authenticator app on the Apple App Store (iPhone) or Google’s Play Store (Android).
2.0 Configuring Two-Factor Authentication
2.1 Changing User Password Type in Naverisk
Note that 2FA can only be configured by the individual user for themselves. It is not possible to set up 2FA on behalf of another user, as the 2FA key is not visible for user accounts other than your own.
Log in to Naverisk, click on Home, then click My Profile and go to Settings
2. Under the Authentication section, click the drop-down arrow for Authentication Type.
2.1.1 2FA with PIN
1. Select Google 2FA. (select this for MS authenticator if using)
2. Take note of your 2FA Key, as you will need this shortly. Enter a PIN number that you will remember. PIN Numbers can only contain numbers, and be up to 6 digits long.
Note: while the PIN number is not mandatory, it is strongly recommended to configure a PIN number. Failure to configure a PIN will reduce authentication back to single-factor (can log in with possession of the device with the Google Authenticator app).
3. If you wish to scan a QR code, click on Show QR Code.
4. It will then show you a QR code to scan to your phone.
5. Then click Save.
2.1.1 2FA with Password
1. Select Google 2FA + Password.
2. Take note of your Google 2FA Key, as you will need this shortly. You may optionally also enter a PIN number that you will remember. PIN Numbers can only contain numbers, and be up to 6 digits long.
3. Your original Naverisk password will remain the same when used with 2FA. You can enter a new password if you wish to change it.
4. If you wish to scan a QR code, click on Show QR Code.
5. It will then show you a QR code to scan to your phone.
6. Then click Save.
2.2 Setting up your Smart Phone
On your phone, open your Authentication app.
2. In the Authenticator app, tap the + icon.
3. You can then either choose to scan a QR code or enter manually. In this case, we will enter the code manually.
4. Enter an identifiable name under Account (e.g. Naverisk), then under Key enter the Google 2FA key you noted down above. Then tap the Tick button.
5. If successful, you should then see your Naverisk Authentication code.
Go back to the Naverisk login portal, when you type in your Username you should now see Google 2FA Code next to the password field.
Using your phone, type in the Google Authentication code displayed on the app, followed by your pin number you entered previously (CODEpin). You should now be logged in.
It is important to make sure that your phone is using the correct time, otherwise your Authentication Code will not work – even if the time is slightly out.
If you are running an Onsite instance of Naverisk, please also check that your Site Controller’s time is also correct.
If you are using Naverisk in the Cloud, please contact email@example.com so we can investigate.
If the initial setup of the Authenticator app is not accepting your 2FA key, please ensure you are entering the exact key. It may also be related to time sync problems
Go to the main menu on the Google Authenticator app
Click Time correction for codes
Click Sync now
On the next screen, the app will confirm that the time has been synced, and you should now be able to use your verification codes to sign in. The sync will only affect the internal time of your Google Authenticator app, and will not change your Device’s Date & Time settings. | <urn:uuid:9fc3fd2b-e698-4a43-b35e-fb2adea5229b> | CC-MAIN-2024-38 | https://kb.naverisk.com/en/articles/4454494-2fa-configuration | 2024-09-18T06:14:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00469.warc.gz | en | 0.862523 | 1,033 | 2.8125 | 3 |
Developing technology ethically: who is responsible?
With the rapid advancement in artificial intelligence (AI) and machine learning (ML) capabilities, methods of data gathering, and the associated use cases, have become much more sophisticated for businesses in recent years. This has led in turn to increased concerns around how this data can be governed and used in an ethical way. Ethical management of data is increasingly at the forefront of developers’ considerations when building new applications.
In fact, research by InterSystems found that 82% of developers say ethical concerns are now much more of a consideration than ever before. The COVID-19 pandemic has undoubtedly heightened these perceptions given there is now an increased requirement for people to share sensitive information, such as for Track and Trace purposes with apps collecting customer data in restaurants or gyms. As the use cases multiply, a question persists around who exactly should take responsibility for the ethical aspects of application developments, and what role the developer plays.
The current situation
Organisations are currently split in their ethical approach towards development, as 28% of developers report ethical issues to the legal or HR team, 23% find it’s sorted at C-level and only 19% actually have a dedicated ethics officer. Surprisingly, the actual responsibility for dealing with ethical issues lies with the developer within 16% of organisations. There isn’t a consistent approach across the board, and within many organisations the approach and responsibility is not clear at all.
Due to this confusion around responsibility, and the variety of different roles across organisations that are incorporating the management of ethical issues, it’s no surprise 13% of developers don’t know who to report these concerns to. There is an issue around lack of awareness of ethical issues, but there’s also a major issue around accountability – something that needs to be rectified when sensitive data is at play.
The ethical challenges developers face are likely to only increase in the coming years, whether in relation to cybersecurity, the use of data or the growing use of technologies like AI and ML. With more and more issues relating to the use and treatment of data, organisations need to decide who is responsible for the ethical considerations around data and its usage.
Solutions to the ethical question
In creating clearly defined roles within an organisation, such as a data protection or chief ethics officer, businesses can send a strong message to employees and customers that trust, and by extension, privacy, security, and ethics, are at the forefront of the culture of an organisation.
For developers, having a clear picture of who they should talk to about ethical concerns and the use of data will help them ensure that the applications they are building comply with best practices. Increasingly, more developers (52%) are looking to consult those responsible for ethical considerations as they build applications, both to achieve compliance with regulations today, and to establish technology and processes that ensure ethical data handling including compliance going forward.
Creating strong accountability and processes and removing the onus for these issues from developers, will help with their workload and their stress levels. Currently more than eight out of ten developers feel that they work in a pressured environment. The solution to this can include the use of intelligent data platforms with integrated frameworks to ensure that developers build every application in accordance with regulations.
This kind of solution is particularly crucial for financial and healthcare institutions where personal data is stored in large volumes. The use of technology makes it easier for ethical data handling to be built into applications from the outset and reduces the number of people and processes that need to be involved.
The wider considerations of ethical technology
More than ever, consumers are now choosing the organisations they interact with based on their ethical efforts and no longer just economical or convenience factors. With this shift in consumer demand, how an organisation protects, processes and safeguards personal data is increasingly becoming a competitive differentiator, so businesses must make this a priority when developing new technologies.
As part of this, businesses must continually come back to the idea of what should they do with data, rather than what could they do with it. This mindset should be adopted across the organisation. There should also be individuals whose clear duty is to govern the use of data and ensure that these morals are upheld. By always considering the ethical standpoint, an organisation can ensure compliance to regulations, enhance its brand, reduce stress on its employees, and also ensure the loyalty of its customers.
By Jeff Fried, Director of Product Management, InterSystems | <urn:uuid:90f3e8d1-53f4-41de-9dcb-3a80f178c9d8> | CC-MAIN-2024-38 | https://aimagazine.com/ai-strategy/developing-technology-ethically-who-responsible | 2024-09-19T09:39:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00369.warc.gz | en | 0.956099 | 903 | 2.734375 | 3 |
We describe how to create a universal radio receiver with a recording function based on an Arduino controller. The article explains in detail what components are needed for the project, how to connect the radio receiver and radio transmitter modules, and how to program the Arduino to receive and play radio signals.
Our scheme is connected and controlled from a laptop. We sit and wait in listening mode until someone presses the remote control (opens our slagboom, for example) and we receive a code that is memorized and can be played.
Arduino (eg Arduino uno)
This is our brain. Without it, we cannot transmit and read the signals we transmit/receive. For practicality, we can take the arduino uno controller and plug the wires into it, thereby bypassing the breadboard. But you can also take an arduino nano, but necessarily with soldered legs.
Radio receiver and radio transmitter modules
There are different modules, but I advise you to take the 433MHz frequency because it is more common than 315MHz. Example.
Also, after purchase, you need to solder the antenna to the receiver and transmitter. Video instruction.
Breadboard and connecting wires
It will be more convenient to connect parts on a breadboard, and connecting wires can be connected without soldering.
Laptop for Arduino programming and use
To program the controller, we need to download the arduino IDE environment, as well as the driver for your controller.
https://www.youtube.com/watch?v=3dk8okIryvk&ab_channel=GeekyScript – loading Wednesday.
https://www.youtube.com/watch?v=D94t3Bm9plA&ab_channel=ManmohanPal – driver download for ch340 controller.
To connect parts to the breadboard, it is recommended to watch a video that explains how to use it.
Connect the radio module to the Arduino:
– Receiver VCC to 5V on Arduino
– Receiver GND to Arduino GND
– Data of the receiver to a digital pin (for example, D2) on the Arduino
Connect the radio transmitter module to the Arduino:
– VCC transmitter to 5V on Arduino
– GND of transmitter to GND on Arduino
– Data of the transmitter to a digital pin (for example, D3) on the Arduino
Note – vcc is our + and gnd is – also the data transmitter is the pin with which we will communicate
For the first time, we go in and simply connect the controller and be sure to remove it from the breadboard so that the voltage does not go where it is not needed.
I choose our controller in the list. We also need to download a library for communicating with radio components.
Loading this library (https://www.arduino.cc/reference/en/libraries/rc-switch/)
And download it according to the instructions (https://www.youtube.com/watch?v=jMSic83Prs8&ab_channel=ZenoModiff)
https://pastebin.com/Sina4qkD – project code on the example of 433MHz frequency. Codes are not stored when the power is turned off. Use with a laptop.
The code watches the 433MHz radio band in real-time and adds the codes to the array and wrote the number in the serial under which the code is located, then we will reproduce it. Also, don’t forget to convert your Serial to Arduino IDE to 9600 because the coding will fly.
We now have a powerful device for copying and reproducing radio code. Before creating, I advise you to look at the basics of Arduino, because it is very easy to burn everything.
Disclaimer. This article is created for informational purposes only. All advice and instructions are provided for educational purposes and we are not responsible for any possible consequences related to the implementation of this project. Always use safety precautions when working with electronic components.
If you have any problems or problems, you can contact us via: [email protected]. | <urn:uuid:314372da-6549-4f8b-8995-6b3215d5235e> | CC-MAIN-2024-38 | https://hackyourmom.com/en/osvita/yak-zrobyty-radiopryjmach-iz-funkcziyeyu-zapysu-na-arduino/ | 2024-09-20T17:13:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00269.warc.gz | en | 0.915369 | 851 | 3.078125 | 3 |
Properly securing your network requires a firm understanding of all possible points of entry for an adversary. This is known as an attack surface.
What is an attack surface?
NIST defines an attack surface as “The set of points on the boundary of a system, a system element, or an environment where an attacker can try to enter, cause an effect on, or extract data from, that system, system element, or environment.”
The attack surface should not be confused with an attack vector, which is an individual entry point. However, reducing the number of attack vectors also reduces the size of your attack surface and lowers the risk of a data breach.
Digital versus physical attack surfaces
A network will have two attack surfaces—digital and physical.
Digital attack surfaces
The digital attack surface includes anything that connects to the network. Hardware (servers and switches) and software (applications, websites, ports, and individual pieces of code) are examples.
Physical attack surfaces
Endpoint devices that cyber attackers can gain access to are the most common examples of physical attack surfaces. However, this can also include careless user behavior such as discarded or stolen login information written on a sticky note or other obtainable items like hardware keys or USB drives that disclose login credentials.
Contributors to Attack Surface Size
The size of an organization’s attack surface will depend on the number of devices, or points of entry, on your network. Reducing the number of entry points (vectors), such as eliminating passwords, will reduce your attack surface.
For a long time, IT administrators only needed to worry about equipment and devices within the walls of their organization. Work from home, an increasingly mobile workforce, and COVID-19 changed that. Users now expect to access company resources outside of the office, and IT administrators were forced to adapt.
Other factors, like the Internet of Things (IoT), only exacerbate the problem. With nearly 30 billion IoT devices expected to be connected to the internet by the end of the decade, combined with inevitable software bugs and vulnerabilities, most organizations’ attack surfaces will grow over the next decade.
Many attacks are successful because they exploit employees' often poor security practices. Whether writing credentials on a piece of paper, choosing a weak or reused password, or even attempting to circumvent security measures, organizations are all too familiar with users being the weak point in their cybersecurity efforts.
What is an attack surface analysis?
An attack surface analysis is vital to understanding your vulnerabilities. It will also help you prevent future attacks. Your analysis should:
- Identify any vulnerabilities: As explained, any point of entry, digital or physical, is a weakness attackers can exploit. Carefully review data entry and exit points.
- Know your users and their needs: Each organization will have unique user types, each with specific daily needs.
- Know your risk: With a better understanding of your users, you’ll be able to perform a comprehensive risk assessment. Review and secure your most common operations first.
- Set up monitoring and reporting if necessary: The final step of your analysis is using all you’ve learned to set up real-time monitoring of any potential attack vectors. You should also plan to respond to and comply with any disclosure requirements required by law.
Remember that attack surface analyses are large projects that take weeks or months to complete. While that might be intimidating, you want your analysis to be as thorough as possible.
How to reduce your attack surface
After completing your attack surface analysis, we recommend you consider the following strategies to reduce your attack surface.
- Limit access and permissions: Access management can reduce the attack surface by limiting what a user may have access to while connected to the network. Review access to sensitive information and your logs to determine normal bandwidth usage and access patterns, and adopt zero trust.
- Eliminate passwords: Passwordless authentication uses cryptographic keys, which are unphishable. Each username and password pair is an attack vector, a huge security risk.
- Eliminate complexity: Complex networks can make attack response difficult since you don’t have complete visibility into whom and what is connecting to your network. Begin by inventorying all software, hardware, and access points, and eliminate unnecessary ones. Follow this by adopting centralized identity management.
- Train your employees: Attack surfaces aren’t limited to hardware or software. Human assets and the data they leave behind across the internet adds to this surface. Providing employees and contractors with anti-phishing and social engineering training is one way to limit risk.
How Beyond Identity can help
Beyond Identity’s passwordless MFA combines all of the steps you need to take to reduce attack surface size in a single package we call Secure Workforce. Our platform helps you achieve zero trust by eliminating the password and choosing cryptography and frictionless, invisible MFA over the traditional one-time password or code.
Each key pair is tied to both the user and the device, allowing you to achieve certainty of identity you can’t achieve with the password. Beyond Identity also authenticates the user continuously while monitoring over 25 risk signals of potentially malicious activity.
The best part about Secure Workforce is the ease of integration with the most common authentication platforms, including Auth0, ForgeRock, Microsoft Active Directory, Okta, and Ping Identity, in as little as 30 minutes. Our customers can typically fully deploy Secure Workforce across their organization in 90 days or less.
We’d love to show you how Beyond Identity and Secure Workforce can drastically reduce the size of your attack surface while making authentication seamless, easy, and invisible with passwordless MFA. Ask for a demo today.
Other Glossary Items
Secure Access Platform Overview
Learn more about Beyond Identity's secure-by-design Secure Access platform.
An Avalanche of News About Snowflake Security
Learn the facts about what happened in the recent attack on Snowflake and how Beyond Identity secured Snowflake's enterprise systems. | <urn:uuid:84af45b7-4651-4b65-b8c5-8c293a583c36> | CC-MAIN-2024-38 | https://www.beyondidentity.com/glossary/attack-surface | 2024-09-08T12:51:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00469.warc.gz | en | 0.929072 | 1,225 | 3.625 | 4 |
Ransomware groups have been exploiting the switch to remote work unlike any other. Ransomware attacks increased by more than 485% in 20201. By 2031, a new organization is expected to fall prey to a ransomware attack every 2 seconds2. Multiple reports by threat hunting firms confirm that the primary attack vector they are using to infiltrate corporate networks are poorly guarded Remote Desktop Protocol (RDP) connections.
RDP lets a user access and control another computer located elsewhere. Say computer 1 wants to establish a RDP connection with computer 2. In that case, the former must be running RDP client software and the latter should have RDP server software. Once the connection has been established, the user who initiated the RDP connection will be able to access the device they connected with.
Though RDP has been in use for a long time, the rush to the remote work last year skyrocketed the number of people who rely on it. In early 2020, in just two months, the number of RDP ports exposed to the internet grew from three million to four and a half million. Most employees access their corporate devices via RDP from their homes. Network administrators now use RDP more frequently to troubleshoot remote systems. This waterfall effect has led to an enormous number of RDP ports being left open to the internet. Attackers are taking full advantage of it.
Worrisome is that threat actors aren’t relying on any sophisticated technique to exploit open RDP ports. They are attaining great success with brute-force attacks.
A brute-force attack is a trial-and-error method to compromise user credentials. Security researchers have observed that hackers are using different brute force techniques such as password spray attacks or credential stuffing using RDP credentials that can be purchased on the dark web to compromise RDP connections.
While some basic steps, like closing down unnecessary open RDP ports and periodically re-evaluating who has RDP access, slightly reduce an organization’s chances of being compromised, they aren’t sufficient. Attackers exploit organizations struggling to weed out the poor password practices of their users, and improving password security should be an organization’s continuing priority.
Learn how RDP brute-force attacks can be used to take over an organization’s Active Directory infrastructure.
Review a Dharma ransomware case study that reveals key aspects about the highly profitable ransomware technique that infiltrates through RDP brute-force attacks.
Discover how sound password security thwarts various types of password attacks.
Learn how to deploy the five brute-force attack defensive strategies that keep organizations safe.
It’s true that today’s cybercriminals are always finding novel ways to launch attacks. Organizations can’t afford to let their guard down against simple yet proven techniques like brute-force attacks. | <urn:uuid:e7655672-11bc-48b3-bce6-1a31ec724c10> | CC-MAIN-2024-38 | https://blogs.manageengine.com/active-directory/adselfservice-plus/2021/09/17/how-brute-force-attacks-are-spearheading-ransomware-campaigns.html | 2024-09-11T00:53:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00269.warc.gz | en | 0.940692 | 576 | 2.5625 | 3 |
All these days we have reported about companies that are or have been targeted by ransomware and some remedies to isolate a company’s digital infrastructure from getting infected by the file-encrypting malware.
In this article let’s discuss some common ways on how ransomware can infect an organization irrespective of its size and the business field it holds.
Phishing and Social Engineering led data leaks– As ransomware spreading gangs are becoming sophisticated; they are using phishing emails to gain trust and trick potential victims to submit credentials. As they mimic genuine files, victims often end up clicking on the malicious links which then leads to malicious downloads and data encryption.
Malevolent websites- Often hackers are seen trapping their victims through compromised websites which are becoming easy places to insert malicious codes. E-commerce websites, web browser plug-ins, and payment gateways are few examples of compromised websites which either lead to malware downloads or malware activation through click and bait installers.
Malvertising- Often hackers are seen targeting vulnerable browsers or unpatched Operating systems with malvertising attacks. In such practices, victims are led to ad portals where cyber crooks start inducing malicious codes that eventually lead to malware downloads. Security analysts say that such kind of attacks is notorious as they do not require victims to download a file or to enable macros.
Infected files and applications- Often files and applications downloaded from illegal websites lead to malware infections. For instance, the recently discovered MBRLocker which was distributed by hackers through legitimate websites to deliver infected executables.
Brute Force scams via RDP- As ransomware spreading gangs are becoming highly sophisticated; they are seen targeting endpoints with brute force attacks through the Remote Desktop Protocol (RDP). SamSam ransomware is one such malware which offers the hackers the privilege to use a remote device to launch an attack.
Social Media- Often links being distributed on social media platforms as Facebook and WhatsApp can lead to ransomware infections. One such infection vector is scalable vector graphics which uses the image file extension to distribute payloads. Although the companies offering networking services are trying their best in filtering and isolating such content from their users, still some payloads find their way to user devices in some way or the other.
Note- Creating awareness among staff on the dos and don’ts on a network, installing threat monitoring solutions, maintaining regular data backups are some of the Cybersecurity measures to take well in advance to prevent such malware from targeting your company network-‘Prevention is always better than cure’. | <urn:uuid:0d92d389-afb5-4779-a20e-07886bd85180> | CC-MAIN-2024-38 | https://www.cybersecurity-insiders.com/this-is-how-ransomware-can-infect-a-business/ | 2024-09-11T00:50:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00269.warc.gz | en | 0.947687 | 509 | 2.640625 | 3 |
Hackers can exploit smart home appliances like coffee machines and TVs to steal people’s sensitive information. According to Vince Steckler, Chief Executive at security firm Avast, the Internet-connected devices used in the home, like laptops, mobile phones, and other smart gadgets, aren’t secure as they allow hackers to use them to get hold of bank details and other personal information.
Steckler stated that cybercriminals can make use of potential vulnerabilities in the Internet of Things (IoT) devices and compromise them to steal their owner’s sensitive details.
“Coffee machines are not designed for security. TVs are not designed for security. What they are is additional vectors to get into your network. And you can’t protect them,” Steckler said in a media statement.
Many of the recent surveys discovered the unknown vulnerabilities in the smart devices that we use often. Recently, cybersecurity expert Yossi Atias took the stage at Mobile World Congress to demonstrate a live hack of the Amazon Ring video doorbell, exposing a previously unknown vulnerability in the popular IoT device. The hack revealed unencrypted transmission of audio and/or video footage to the Ring application allows for arbitrary surveillance and injection of counterfeit video traffic, effectively compromising home security and putting family members’ safety at risk.
Earlier in 2018, a research from cybersecurity solutions provider Check Point revealed how organizations and individuals are vulnerable to hacking through their fax machines. The researchers at Check Point stated that fax machines have security vulnerabilities which could possibly allow a hacker to steal data through a company’s network using just a phone line and a fax number. The researchers also showed how they were able to exploit security flaws in a Hewlett Packard all-in-one printer. The findings were presented by Check Point’s researchers Yaniv Balmas and Eyal Itkin at DEFCON 26.
Also, the security experts from the University of Texas stated the hackers can make use of internet-connected light bulbs as a covert channel to exploit the user’s private data. The researchers have taken the LIFX and Phillips Hue smart light systems for the study. The research stated the hackers can launch an attack by manipulating the infrared light by creating a communication channel between the smart lights and a device that senses infrared light. And by installing a malicious agent on the phone the attackers can encode the private data and transfer them through the infrared covert channel.
The researchers also specified that the proposed threats can be mitigated by enforcing strong network systems and reducing the light transmittance and the brightness of the bulbs that stops the attacks to perform. | <urn:uuid:7447ad9e-6c13-4b4e-ae7c-eab3c1da81d9> | CC-MAIN-2024-38 | https://cisomag.com/hackers-can-steal-your-identity-and-bank-details-from-a-coffee-machine/ | 2024-09-12T05:41:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00169.warc.gz | en | 0.933538 | 537 | 2.671875 | 3 |
ISACA COBIT 5 – Define (BOK IV) Part 8
20. Analytical Tools – Tree Diagrams (BOK IV.D.2)
So after talking about affinity diagram, let’s talk about the second tool and which is tree diagram. So tree diagram is to break down a goal or a broad category into fine levels of detail and fine level of details that contributes to that issue. If this doesn’t make sense, don’t worry about that. Once we look at the next slide which actually shows what a tree diagram is, that will make a better sense. So let’s look at an example of tree diagram on the next slide to understand what this diagram is and what purpose this serves. So here I have a tree diagram.
So I have a broad objective or a broad subject which is passing the certified Six Sigma Black Belt Exam. So that’s a broad theme. And then you can subdivide that into number of sections. The subcomponents of that could be motivation. You need to have motivation, you need to have books, you need to have videos, you need to have quizzes, and then in motivations, you need to have organizational support, finance, et cetera, et cetera.
So you keep on breaking one big item into smaller items to have a better understanding of that. Similar thing, you would have seen earlier in WBS work breakdown Structure, where we broke down the physical thing into sub components. This is similar to that. So the similar concept is used in fault tree diagram where you have a fault and then the causes of that you break down. Or this could be similar to cause and effect. For example, let’s take an example of welding, which you take. So here I have welding defect.
This could be crack, this could be porosity. So these are type of weld defects. And then the causes of that porosity could be moisture because of wind. So this is how you use tree diagram to break down a big thing into smaller pieces so that you can take care of those smaller pieces. So if you have to deal with weld defects, then you can look at these components here. Once you take care of these, the weld defect will be taken care of here. Same thing here. If you take care of this, if you have a motivation, if you have a books, handbooks, YouTube videos, a Udemy course, then you can consider that you might be able to pass your certified Six Sigma Black Belt Exam. So that completes our discussion on tree diagram.
21. Analytical Tools – Matrix Diagrams (BOK IV.D.3)
So coming to the next tool, which is matrix diagram, what is matrix diagram? Matrix diagram shows relationship between two or more groups and what are those groups? What relationship? If things do not make sense, don’t worry. Let’s see one example and that will make things much clearer. Let’s go to the next slide and look at one example of matrix diagram. So here I have an example of matrix diagram. This is L shaped matrix because this is in the form of L. If you look at here, this is an L shaped matrix. There are other shapes also we will be talking about that as well in this section. So what does this matrix shows? This matrix shows that in a course, in a training course, there are five sections, section one, section two, section three and section four, five.
And then each of these section is evaluated on a set of criteria and those criteria includes what’s a difficulty level, what is the level of conceptual knowledge in that, what is the level of statistical knowledge in that and what is the business application of that. So each section, if you look at that, let’s look at section one. So section one has a difficulty level of one and all these number are on the scale of one to five, where one is low and five is high. So if I say the difficulty level of section one is one, that means it’s a low difficulty. Conceptual knowledge is four. So it’s sufficiently high conceptual knowledge involved in section one. There is no statistical or a very low statistical knowledge here and there is a limited business application of this.
So that’s how you represent that with numbers in a matrix. Similarly, if you look at some, let’s say section three, section three has a very high business application or if you look at section five, section five has a very high statistical knowledge involved in that. So this forms a matrix which shows relationship between two groups and two groups. Here are the sections of the book or sections of the course and the characteristics of that particular section. So this is how you create an L shaped matrix. You would have seen this sort of a matrix a lot of places. This is quite commonly used tool. So after talking about Lshaped matrix, let’s move on to the next slide to see some other shape. And that shape is Tshaped matrix. Let’s look at a T shaped matrix.
So here I have a T shaped matrix. So earlier in L shaped matrix we were looking at the relationship between two groups. Now here you can have relationship between three groups using a P shaped matrix. So a hypothetical example here is something a simplified version of a car manufacturing company which has five products, product 12345. And then it is looking at the characteristics of that product, the efficiency, pickup, look and comfort and the market of that. So here you have a market here you have the characteristics of that and here you have a product. So in this you have three groups. So if you look at the bottom part of the matrix here, this shows the relationship between the product and the characteristics of the product.
So if you see product number one, product number one has a low rating on efficiency pickup, look and comfort. Similarly, if you look at product number three, product number three has a very good efficiency pickup, look and comfort. So looking at that, you can see that product number three is a sort of a premium product. But how does this behave in the market? For that you can see at the top part of this matrix, which tells that returns are low, complaints are low and sale is high. And once again these numbers are one to five, one is for low and five is for high. Similarly, if you look at the product number one, product number one was a cheaper product, had a low efficiency, low pickup, poor look, less comfort, and same thing here. If you see at the top it has a high return, high complaints and low sale.
So in this example you can see that, you can see the relationship between three groups. If you want to see a relationship between these things, the relationship between the market and the characteristics of the product, that is not shown here. What you see here is the relationship between the product and the market which is at the top and the product and the characteristics which is at the bottom. But you don’t see the relationship between these two things, the market and characteristics. Market and characteristics, you don’t see the relationship between that and what does that mean? Is does look leads to more sale. So if you need to make that sort of a matrix, then you need to go for a Y matrix. So you have a T matrix here, just like this. If I just turn it around by 90 degree, this is your T matrix. And what will happen in Y matrix is something like this. Here you have relationships mentioned here, but you don’t have relationship mentioned here.
But in Y matrix if you just turn these axes and then you can have relationship between this and this here and relationship between this and this, you can mark it here and relationship at the top can be marked here, so that will become a Y matrix. It looks slightly complicated. So that’s the reason I have not covered here with an example. But you can understand, with a slight change your T matrix could be converted to a Y matrix. So after talking about L shaped, T shaped and Y shaped matrix, let’s talk about X shaped matrix. So just like T, we had this portion as a T matrix and here in X matrix this another dimension has been added, another group has been added here. So earlier in T shaped you had three groups. So group one, two and three, the relationship between these three. But in X shape, you can add a group number four also. So that becomes an X shaped matrix.
The next one is roof shaped matrix. So if you look at this, and if you remember earlier, we have talked about quality function deployment. This has been picked from the QFD, quality function deployment here. In roof shaped matrix, we are not showing relationship between two groups. We are showing relationship between one group only within that group. So suppose in the earlier case of car, if we wanted to look at the relationship between the look, the pickup, the comfort, that relationship could have been shown using a roof shaped matrix.
So you would have a look here, you will have a comfort here, you will have a pickup here, and you might have an efficiency here. Then you can look at relationship between these. So if you want to look at the relationship between efficiency and look, so this is the way, this is the box where you will put that relationship between look and efficiency. And if there’s no relationship between that, you will say that there’s a poor relationship between look and efficiency. There is a strong relationship between pickup and efficiency. So that’s where you will put that. Okay, this is what will have a very strong relationship between pickup, pickup and efficiency. So if pickup gets more, efficiency goes low. So that’s how you create a roof shaped matrix. So with that, we completed our discussion on matrix.
22. Analytical Tools – Prioritization Matrices (BOK IV.D.4)
So coming to the next tool which is prioritization matrix. Prioritization matrix is similar to matrix diagram which we have learned earlier. So we talked about Lshaped matrix earlier, let’s look at that L shaped matrix and see how you can use that as a prioritization matrix. And why do you need prioritization matrix? You need that because when you need to choose out of number of options, if you need to select a project, that’s where you need a prioritization matrix. So let’s look at the example of that to have better understanding. So this example is similar to what we learned in Lshaped matrix diagram. So we have a relationship between product and the product characteristics, characteristics for Efficiency, pickup, look and Comfort. And we have five products here, but one thing which has been added here is importance of those characteristics.
What is the importance which is being given to these characteristics? So efficiency has been given an importance of zero three. Pick up zero one, look is zero four and comfort is zero two and together they make it one. So 30% importance is for efficiency and 40% importance is for look. So look is given more importance here. Then you look at each of your product and see where they fall on the scale of one to five. For example, so product number one was at the scale of at two. On the scale of one to five, pick up at one, look at one and comfort at two.
So all these values are low values and if you look at product number three, which is all these are high values, so that means product three looks better, but then which one to select that? So what we do here is mathematical calculation is you multiply this importance with this number, so 0. 3 multiplied by two becomes 0. 60. 1 here multiplied by one becomes 0. 10. 4, which is the importance of look. When you multiply it by rating one, this becomes 0. 4 and when you multiply zero two, which is the importance for comfort with the rating of two, so this becomes 0. 4. And once you add all these things, that gives you this number which is 1. 5. So product number one has 1. 5 as a final rating and product number three has a 4. 7 as a final rating. So based on these ratings, you can decide which product product to launch or which project to go for. So this was a simple example showing how to use prioritization matrix when you have to make a choice.
The fintech industry is experiencing an unprecedented boom, driven by the relentless pace of technological innovation and the increasing integration of financial services with digital platforms. As the lines between finance and technology blur, the need for highly skilled professionals who can navigate both worlds is greater than ever. One of the most effective ways… Read More »
In today’s digital world, cybersecurity is no longer just a technical concern; it’s a critical business priority. With cyber threats evolving rapidly, organizations of all sizes are seeking skilled professionals to protect their digital assets. For those looking to break into the cybersecurity field, earning a certification is a great way to validate your skills… Read More »
If you’ve been in the IT service management (ITSM) world for a while, you’ve probably heard of ITIL – the framework that’s been guiding IT professionals in delivering high-quality services for decades. The Information Technology Infrastructure Library (ITIL) has evolved significantly over the years, and its latest iteration, ITIL 4, marks a substantial shift in… Read More »
As cybersecurity threats become increasingly sophisticated and pervasive, traditional security models are proving inadequate for today’s complex digital environments. To address these challenges, modern security frameworks such as SASE (Secure Access Service Edge) and Zero Trust are revolutionizing how organizations protect their networks and data. Recognizing the shift towards these advanced security architectures, Cisco has… Read More »
The cybersecurity landscape is constantly evolving, and with it, the certifications that validate the expertise of security professionals must adapt to address new challenges and technologies. CompTIA’s CASP+ (CompTIA Advanced Security Practitioner) certification has long been a hallmark of advanced knowledge in cybersecurity, distinguishing those who are capable of designing, implementing, and managing enterprise-level security… Read More »
The cloud landscape is evolving at a breakneck pace, and with it, the certifications that validate an IT professional’s skills. One such certification is the Microsoft Certified: DevOps Engineer Expert, which is validated through the AZ-400 exam. This exam has undergone significant changes to reflect the latest trends, tools, and methodologies in the DevOps world.… Read More » | <urn:uuid:fff65b3c-1dad-4ccd-b8b1-d21ef3b1cf7b> | CC-MAIN-2024-38 | https://www.examcollection.com/blog/isaca-cobit-5-define-bok-iv-part-8/ | 2024-09-13T11:08:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00069.warc.gz | en | 0.964991 | 3,076 | 3.453125 | 3 |
Edge computing is a distributed computing framework that brings applications closer to the point of data creation such as Internet of Things - Physical objects with the ability to connect and exchange data with each other over the Internet. A network of objects that are embedded with sensors, processing ability, software, to connect to and exchange data with other such objects or networks. More devices (cameras, microphones, and sensors), smartphones, or computers. This proximity to the consumption point can deliver strong business and application benefits, including faster insights, improved response times and better Bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. It generally refers to the size of the pipe and the speed that carries the data back and forth, not necessarily the volume of traffic that passes through. More availability.
In simpler terms, edge computing shifts computing from the cloud to places that are closer to the application or device. Another way to look at this is that the app requesting compute augmentation, such as a user’s computer or IoT device, will run the compute at an edge server located at a cell tower. Shifting compute to a closer location minimizes the amount of long-distance communication that has to happen between a client and server.
What differentiates edge computing from other computing models? The first computers were large, bulky machines that could only be accessed directly or via terminals that were basically an extension of the computer. With the invention of personal computers, computing could take place in a much more distributed fashion. For a time, personal computing was the dominant computing model. Applications Radio Access Network "The part of a mobile network that connects end-user devices, like smartphones, to the cloud. This is achieved by sending information via radio waves from end-user devices to a RAN's transceivers, and finally from the tranceivers to the core network which connects to the global internet" Read Full Definition from Red Hat More and data was stored locally on a user’s device, or sometimes within an on-premise data center.
Cloud computing, a more recent development, offered a number of advantages over this locally based, on-premise computing. Cloud services are centralized in a vendor-managed “cloud” (or collection of data centers) and can be accessed from any device over the Internet.However, cloud computing can introduce Time required to send data over two points in a network. More because of the distance between users and the data centers where cloud services are hosted.
Edge computing moves computing closer to end users to minimize the distance that data has to travel, while still retaining the centralized nature of cloud computing.
- Early computing: Centralized applications only running on one isolated computer
- Personal computing: Decentralized applications running locally
- Cloud computing: Centralized applications running in data centers
- Edge computing: Centralized applications running close to users, either on the device itself or on the network edge
What are the benefits of edge computing?
Edge computing helps minimize bandwidth use, which is finite and costly. As IoT devices such as cameras become smaller and higher definition, the amount of data they produce is increasing dramatically. This combined with people working from home and streaming video calls are taxing the internet connectivity. Even from the business side, every camera can be considered a live video stream and we all have experienced bandwidth bottlenecks when there are too many people in the house streaming video. In terms of industry reports, Statista predicts that by 2025 there will be over 75 billion IoT devices installed worldwide.
Another significant benefit of moving processes to the edge is reduced latency. Augmented reality, virtual reality, and real time applications demand lower latency compute. Every time a device needs to communicate with a distant server somewhere, that creates a delay which can range from annoying to unacceptable.
For example, consider two coworkers in the same city collaborating on a video call. They would notice delays or service outage if their video packets were sent out over the open internet and interference were to occur. If the video application runs at the cell tower and routes the packets locally to the city, the application response time appears faster and doesn’t experience as much network congestion issues. The duration of these delays will vary based upon their available bandwidth and the location of the remote server, but these delays can be avoided altogether by bringing more processes to the edge.
Secure localized data processing is another feature of edge computing. Processing data near the source removes any security risk of transporting the data over the open internet. Additionally, users who need to authenticate to a physical location will have to authenticate to a Citizens Broadband Radio Service Radio frequency band between 3.5 GHz and 3.7 GHz that can be used for 5G, 4G or LTE communication. The FCC has recently opened these band to general use. Learn more about CBRS More radio that has a known and unmoveable physical location.
To recap, the key benefits of edge computing are:
- Decreased latency
- Decrease in bandwidth use
- Increase in data, application, and user security
What are A distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. It is an architecture rather than a specific technology. It is a topology- and location-sensitive form of distributed computing. Alef uses edge computing concepts to offer MNaaS. (Mobile Network as a Service) More Use Cases?
Edge computing can be incorporated into a wide variety of applications, products, and services. A few possibilities include:
- Security system monitoring: Facial recognition via a large number of cameras for access control. All recognition is run on site and response times are near instantaneous.
- IoT devices: Smart devices that connect to the edge can provide more efficient user interactions.
- Self-driving cars: Autonomous vehicles need to react in real time, without waiting for instructions from a server.
- More efficient caching: By running code on a CDN edge network, an application can customize how content is cached to more efficiently serve content to users.
- Medical monitoring devices: It is crucial for medical devices to respond in real time without waiting to hear from a cloud server.
- Video conferencing: Interactive live video takes quite a bit of bandwidth, so moving backend processes closer to the source of the video can decrease lag and latency. | <urn:uuid:3c4e9caa-259e-4597-b779-556d7d448e1a> | CC-MAIN-2024-38 | https://alefedge.com/2021/09/10/what-is-edge-compute/ | 2024-09-19T15:20:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00469.warc.gz | en | 0.934754 | 1,310 | 3.828125 | 4 |
There are millions of pages on the internet however about 90% of the pages are not indexed by search engines like Google, Yahoo, Bing ..etc. Which means only a tiny portion of the internet is accessible through search engines or standard means. Deep Web is the internet that cannot be accessed through standard search engines or the pages that are not indexed in any way.
Surface Web vs Deep Web vs Dark Web
If we imagine web as an ocean, the surface web is the top of the ocean which appears to spread for miles around, and which can be seen easily or "accessible"; the deep web is the deeper part of the ocean beneath the surface; the dark web is the bottom of the ocean, a place accessible only by using special technologies.
- Surface web: Surface web is the portion of the World Wide Web that is readily available to the general public and searchable with standard web search engines. It is the opposite of the deep web. The section of the internet that is being indexed by search engines is known as the “Surface Web” or “Visible Web”.
- Deep web: Deep web is part of the World Wide Web whose contents are not indexed by standard web search engines for any reason.The content of the deep web is hidden behind HTTP forms, and includes many common uses such as web mail, online banking, and services that users must pay for, and which is protected by a paywall, such as video on demand, some online magazines and newspapers, and many more.Content of the deep web can be located and accessed by a direct URL or IP address, and may require password or other security access past the public website page.
- Dark web: The Dark Web is defined as a layer of information and pages that you can only get access to through so-called "overlay networks", which run on top of the normal internet and obscure access. You need special software to access the Dark Web because a lot of it is encrypted, and most of the dark web pages are hosted anonymously.
Surface web VS Deep web VS Dark Web VS Darknet
Clearnet VS Darknet
What Should a CISO be Concerned About?
Once a CISO is aware of what is available on the dark web, deep web or surface web, its easier to take steps to defend & protect those data from being used by the attackers.
- Exposed DB Servers & S3 Buckets (due to misconfigurations etc.)
- Exposed applications & websites, files & documents which are accessible
- Exposed services like APIs, FTP Servers etc.
- Personnel data which is available freely on the internet, including email addresses, phone numbers etc.
For more information on how to Discover & Map your Applications & Services which are publicly exposed on the internet, intentionally or unintentionally: Click Here | <urn:uuid:bfeee05e-33c7-4d12-a0bb-ccdea7ff9c61> | CC-MAIN-2024-38 | https://www.cisoplatform.com/profiles/blogs/surface-web-deep-web-and-dark-web-are-they-different?context=tag-dark | 2024-09-20T19:53:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00369.warc.gz | en | 0.920918 | 581 | 3.25 | 3 |
Connected devices refer to the growing network of devices, appliances, and equipment that are connected to the internet and capable of collecting and exchanging data.
Connected devices, also known as Internet of Things (IoT) devices, are a growing network of physical objects that are embedded with sensors, software, and network connectivity, allowing them to collect and exchange data. These devices can range from simple consumer devices like smart thermostats and wearables, to more complex industrial equipment like manufacturing robots and transportation vehicles.While the proliferation of connected devices has brought numerous benefits such as increased efficiency and convenience, it has also introduced new security risks. These devices often have limited processing power and memory, making them vulnerable to cyber attacks and data breaches. Therefore, specialized security controls and technologies, such as encryption, access controls, and secure communications protocols, are required to ensure the confidentiality, integrity, and availability of information transmitted and stored by these devices. | <urn:uuid:2968ca6a-8419-47ca-b2f5-6d2f7ca4ad52> | CC-MAIN-2024-38 | https://www.kudelski-iot.com/glossary/connected-iot-devices | 2024-09-07T11:05:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00733.warc.gz | en | 0.958605 | 188 | 3.734375 | 4 |
Ten years after the 2003 meltdown, weather-related outages are on the rise.
Today marks the 10th anniversary of the Northeast blackout of 2003, the largest power outage in North American history.
For a while, this colossal disaster was a shared reference point for millions. We salvaged a tub of ice cream on the roof, hitched a ride home with strangers, directed traffic at a darkened intersection, and so on. Everyone had a story.
But a decade later, it seems a faded memory. For those in the affected area, recollections have been supplanted by disaster stories of Superstorm Sandy, and then some. Despite regulations tightened in response to the 2003 blackout, power outages – particularly those caused by the weather – are more common than ever.
According to a new report released by the Department of Energy this month [ PDF ], there were 679 widespread (affecting at least 50,000 people) power outages due to severe weather between 2003 and 2012. The cost of these incidents has been calculated to be anywhere between $18 and $70 billion per year.
There are two explanations for this spike. First, the grid is getting older. Second, the weather is getting (or at least has gotten, in the last ten years) worse .
NEXT STORY: Udall floats plan to streamline tech transfer | <urn:uuid:017eb517-9686-4c74-be2b-ac82ef0afd67> | CC-MAIN-2024-38 | https://www.nextgov.com/digital-government/2013/08/get-ready-more-nights-without-power/68681/ | 2024-09-08T15:24:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00633.warc.gz | en | 0.974317 | 274 | 2.890625 | 3 |
In this article, we continue our series on the top threats to smartphones, with network spoofing and phishing attacks.
This security vulnerability posed to smartphones goes back to the previous example of the public Wi-Fi hotspots that are found in the public places (once again like Starbuck’s or Panera Bread). Before you can actually gain access to the free data connection, you have to login with the username and password that is given (which is also publicly available) and agree to their terms of usage.
Usually this is an entire webpage, and there is no way to prove its authenticity. For the most part, these websites are actually legitimate, but given the level of the sophistication of the cyber attacker these days, a spoofed-up web page can be designed and replaced quite easily.
Typically, the cyber attacker names these spoofed public Wi-Fi hotspots such as “Airport Wi-Fi” or even “Coffeehouse Wi-Fi”. Unlike the legitimate login pages, the customer actually has to create an account in order to gain access the supposedly free data connection.
But, given how much people hate remembering hundreds of passwords, we tend to use the same username/password combination for just about everything we login into. The cyber attacker is also fully aware of this, and after you have created this account and logged in, and also in a manner very similar to that of a man in the middle attack, the cyber attacker will see the websites you are accessing.
If anything looks private and confidential, they will then later on use that same username/password combination that you have created, login, and covertly hijack everything and anything that they can.
This is a type of social engineering attack, and although it is has been around for a long time, it is still widely used by the cyber attacker of today, especially those on smartphones. Phishing can be defined as follows:
“It is a cybercrime in which a target or targets are contacted by email, telephone, or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data such as… banking and credit card details and passwords”. (SOURCE: 1).
The tell-tale signs of a phishing attack include the following:
- The content of the e-mail message has poor spelling or grammar:
Phishing e-mails often contain misspelled words, or even extra digits in the telephone number in the signatory component of the message. At first glance, these can be exceedingly difficult to find, but after a second or third look, they can be spotted. For instance, a phony message would contain the salutary line of “Dear eBay Costumer” instead of “Dear eBay Customer”. Also, look in the subject line as well for any misspellings. Most e-mail applications are good in catching this, but some still fall through the cracks and make their way into your inbox.
- The hyperlinked URL is different than the one that is presented:
Most phishing e-mail messages contain the name of a legitimate organisation, but with a phony URL that is hyperlinked to it. For example, you could get what looks like a legitimate e-mail message from PayPal, and towards the end of the message, it will say something like:
“Check your PayPal account here.”
Obviously, the name looks authentic enough, but instead of taking you to www.paypal.com, the hyperlink displays a different URL.
- The e-mail message has a sense of urgency to it:
The content of a phishing e-mail will often have a strong sense of action to take. For example, it may say that your PayPal account has been closed, put on hold, or that there is even some sort of fraudulent activity that has occurred on it. In these instances, there will also be a link to take you to your account, but once again, it will be a phony one.
- It will contain a suspicious attachment:
Most legitimate business entities or even individuals will not send you an attachment unless you have specifically requested one. Sometimes, phishing e-mails will contain an attachment, which will very often be in a .DOC or .XLS file extension. It will look like that these attachments are coming from somebody you know. These attachments contain a malware or a spyware executable program which will launch onto your computer or wireless device once they are downloaded and opened.
Our next article will continue to examine the top threats to smartphones, focusing upon the latest threat out there, and perhaps the most dangerous: ransomware. | <urn:uuid:05b47974-ae95-4d81-b042-ff4f7035ce45> | CC-MAIN-2024-38 | https://platform.keesingtechnologies.com/smartphone-security-2/ | 2024-09-18T10:53:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00733.warc.gz | en | 0.94594 | 952 | 3 | 3 |
There are several theories floating around about why there aren’t more women in IT — and it’s a pressing matter, considering the impending labour shortage here. Some claim that IT doesn’t appeal to women, or girls just aren’t good at math. But when it comes to IT, are men really from Mars and women really from Venus?
Many industry experts agree there are differences — but it’s the industry that needs to change, not women.
The Athena Factor, a recent report published by the Harvard Business Review, examined the brain drain issue of women in science, engineering and technology.
“What they found was that over half of the women in these fields eventually left their jobs, most of them in their mid ’30s,” said Jenny Slade, communications director with NCWIT (National Center for Women & Information Technology). Reasons ranged from working in a hostile male culture, to not being aware of a career path to the top, to not having the flexibility to juggle small children at home with a fast-track career.
“What surprised us is there’s no lack of love for their careers, and they’re not leaving because they don’t like their work,” she said. “They’re leaving because they’re finding the work incompatible with the environment in which they have to work.”
IT has been taught the same way
Stemming this exodus of women by just 25 per cent would add more than 200,000 women into the IT workforce in the U.S. “They’re trained, they’re skilled, so losing them comes at a huge cost,” she said. “Not just an opportunity cost, but an innovation cost.”
And women aren’t shying away from hard sciences either. In the U.S., women are earning half of all math degrees, more than half of all chemistry degrees and about 60 per cent of all biology degrees. Women also comprise half of all incoming medical school classes.
So why aren’t they jumping into IT?
“The way that technology is marketed to them, frankly, is completely unappealing,” said Slade. “Computer science has been taught the same way for almost 30 years.” Women have few role models and are forced to feel that if they’re not compatible, if they don’t learn well within the computer science paradigm that’s been established at the academic level, then they’re not right for computer science. “We need to stop trying to change the women and realize that it might actually be beneficial…if we change the way it’s taught,” she said.
And this starts at an early age, by trying to make technology cool. Deirdre Athaide, who studied mathematics and computer science at the University of Waterloo, joined IBM in 2004 and now has a patent in digital rights management. It was through IBM that she became involved with the Excite girls’ summer camp program. There are 53 camps worldwide.
Women want an overall vision
At last summer’s Excite camp in Toronto, the girls built robots out of Lego, but instead of having a “Battle of the Bots” competition, they took part in a reality TV show themed “So You Think Your Robot Can Dance” competition — which included compulsory moves and a free dance portion.
So what about the stereotype that girls don’t like computers? “It was totally opposite of the stereotype,” said Athaide, solutions manager for sensor and actuator solutions with the IBM Toronto Software Lab. “They were all over it.”
Anything taught to the girls had a purpose. Camp organizers chose to use a completely graphical user interface, called Scratch, from MIT’s Lifelong Kindergarten Group. In every module, the girls did a non-computing activity and then did it again in a programming environment. In one case, they took part in a physics module that focused on roller coasters. First they built their own accelerometer, then went to Canada’s Wonderland to apply this knowledge.
They don’t care what a programming language is, said Athaide, but they want to know how it relates to YouTube. “Having a vision of what can be accomplished has more of an impact on women,” she said. She believes, however, that oftentimes girls aren’t encouraged enough early on, and those stereotypes feed themselves.
There’s also a perception of what a computer scientist is supposed to look like. “People say, ‘You don’t look like a computer scientist,’” she said. Just because you have a passion for technology, however, doesn’t mean you have to look like you just walked out of Revenge of the Nerds. That’s why, in the Excite camps, girls are presented with a variety of different role models, she said.
Back in 1985, close to 40 per cent of computer science undergraduates were women. Clearly, that number has plummeted. “The whole field of computer science has lost its luster,” said Telle Whitney, president and CEO of the Anita Borg Institute. “It’s not just women, but it has impacted women an inordinate amount because of that image.”
High school girls perceive computing and engineering as geeky, and don’t see how it makes a difference in the world. “If you demonstrate that technology can have a positive impact on the world, it attracts a different type of person because they create a different type of technology,” said Whitney.
Several studies show that girls who have the math and science scores to be able to study IT often don’t — but boys do. And this is where there could be psychological differences: a girl comes out of an exam room in tears because she knows she missed at least five problems, giving her an A-, while a boy comes out doing a happy-dance because he got a passing grade. “Study after study shows that girls just don’t feel confident in their ability to do it, whereas the actual data does not support that lack of confidence,” she said. “And it could be from that lack of encouragement really early on.”
Several social scientists at NCWITT who have studied this area for decades say that women self-select out of studying a field they don’t think is interesting because they’ve been told for half their lives that it’s not interesting. Women believe they’re not supposed to be interested in working with inanimate objects like computers, said Slade, and computing is seen as nerdy and isolating.
The whole field of computer science has lost its lustre.Telle WhitneyText
At April’s International Collegiate Programming Contest (ICPC) sponsored by IBM, only five girls competed on 100 teams — comprised of three students each — from 33 countries in this Battle of the Brains. “I took a female to the contest once,” said Gordon Cormack, who has been coaching the University of Waterloo team for the past 12 years.
IT is more than just crunching numbers
It’s fair to say that females don’t find the contest attractive, he said, because they’re not into the coding marathon. “There’s a distinction between competitive and combative,” he said. “Females compete, but don’t like the same flavour of competition that males do. I don’t think it’s just females — it’s a great sport, but it appeals to a small fraction of males.” Female coaches were much better represented at the competition.
Kaitlin Sherwood, coach for the University of British Columbia team, got her Bachelor of Metallurgical Engineering in 1984 — and she’s used to being one of few women in her field. “Men are good at taking risks,” she said. “With risk, sometimes you fail. Computer science is riskier than vet-med — it goes through boom and bust cycles.” And if you’re trying out for a team, you don’t know if it’s going to pay off — if you’ll get on the team or go to the finals, or if the investment in time might hurt your grades.
“Part of the reason I didn’t go into computer science in the first place was because I wanted to use computers, I didn’t want to study computers,” she said. “We still had the stereotype that computers were used for crunching numbers, and that wasn’t interesting to me.” But using computers to solve mathematical and engineering problems is only a small part of the industry, she added, and far more jobs require business skills than pure algorithms.
“I worry about talent,” said Margaret Ashida, director of talent with the IBM Software Group. “Half of IBM’s business is in services — we need architects who can support our clients.”
And in North America, it’s a generational issue in addition to a gender issue. The millennial generation is wondering how their work is going to make a difference in the world. “We need help in helping people understand how technology makes a difference in the world, such as mapping the world’s DNA,” she said.
In recent research that delved into gender challenges in advanced technology, CATA WIT (Women in Technology) found that in the absence of industry and company strategies, women are dealing with these challenges themselves — either by working harder, working longer or deciding not to get into the field. Challenges include lack of advancement opportunities, lack of role models and lack of work/life balance. This is why we’re seeing fewer women enter computer science at the post-secondary level, or opt out of their careers earlier, said Joanne Stanley, managing director of CATA WIT.
A report recently published by the Information and Communications Technology Council (ICTC) was the result of two roundtables — one in Vancouver and one in Toronto — where senior women in ICT came up with ideas about how to encourage more women to get into the industry. The report outlines an action plan that includes showcasing and profiling more female role models. In the fall, CATA WIT will be launching an online mentoring program, and it’s also looking to create a national network that will profile women in IT.
Less than 26 per cent of the IT workforce in Canada is female, said Stanley, but we don’t have any idea how many women lead companies or hold senior positions. “We need to have that benchmark — where we are today — to know where we’re going,” she said. A number of companies have active “diversity” programs, but we need to hear about what they’re doing in order to share best practices.
Mixed-gender teams make better products
There are 636,000 ICT workers in Canada and 89,000 positions to fill. And there’s a 70 per cent drop in enrollment in computer science and engineering. One university, which can accept 440 first-year computer science students, has only 23 applicants this year. “So in fairness this is an industry issue, not just as it relates to women,” said Stanley. “The industry has to go through a major branding and repositioning.” But when we consider that 52 per cent of our population is made up of women, we’re already losing 52 per cent of the potential workforce, she added.
Aside from the labour shortage, there’s also a requirement for diversity in product research and development. Studies have shown that products developed with a diverse workforce are more successful overall, said the Anita Borg Institute’s Whitney.
Having women on your development team, she said, can make a huge difference if your customers are women. Flickr and Evite were both created by women, for example.
Last year NCWITT ran a study on the U.S. patenting database to see if it could connect diversity and innovation at a statistical level. It found that women’s patenting rates are far below men’s for U.S. technology patents — however, patents from teams comprised of both men and women are cited up to 42 per cent more than patents overall (citation rates are a strong indicator of how useful a patent is).
“If you want to encourage innovation within your own company, think about putting together mixed gender teams and encouraging more women to participate in the patenting process,” said NCWITT’s Slade.
People have been working on this problem for decades, but they’ve been doing it in isolated pockets and haven’t seen a lot of success — in part because they haven’t been able to leverage their efforts. “Our end goal is to put ourselves out of business,” she said, “where we don’t need to exist because it’s become so commonplace for women to choose technology.”
We'd love to hear your opinion about this or any other story you read in our publication.
Jim Love, Chief Content Officer, IT World Canada
Our experienced team of journalists and bloggers bring you engaging in-depth interviews, videos and content targeted to IT professionals and line-of-business executives. | <urn:uuid:6b5af19a-babe-44db-9817-d638027ebb5b> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/president-and-ceo-the-anita-borg-institute/3203 | 2024-09-19T19:12:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00633.warc.gz | en | 0.967401 | 2,854 | 3.109375 | 3 |
If your programs (or service programs) might be run in multi-threaded jobs, then they must be thread-safe. Among other things, thread-safety requires that if you have any storage that can be accessed by more than one thread, you must ensure that threads do not interfere with each other when they use the storage.
The static storage used by a program can be accessed by multiple threads, and RPG programs are heavily dependent on static storage. If you have an ILE RPG subprocedure that uses only local automatic variables (no global variables), that subprocedure is still dependent on some internal static storage added to your program to handle the statements in your procedure.
If you allow multiple threads to have simultaneous access to the static storage in an ILE RPG module, you may see some very strange, unpredictable, and usually irreproducible results.
To protect the static storage in an ILE RPG module from access by more than one thread at once, you must code THREAD(*SERIALIZE) on the H spec. This causes the RPG compiler to generate code to ensure that only one thread can be running in any procedure in the module at any one time. If thread A is running procedure PROC_ONE and thread B wants to call PROC_TWO in the same module, thread B must wait until thread A has finished running PROC_ONE before thread B's call can complete. This ensures that only one thread can access the static storage at once.
However, while using THREAD(*SERIALIZE) is required for thread-safety, it does not ensure thread-safety. Protection of the module's static storage is only one aspect of thread-safety, and many other aspects of thread-safety cannot be handled by simply serializing access to your ILE RPG modules.
Here are some rules of thumb:
- Code THREAD(*SERIALIZE) in every module that might be run in a multi-threaded job.
- Do not assume that coding THREAD(*SERIALIZE) makes your module thread-safe.
- Do not use OPM RPG in a multi-threaded job.
- Understand everything that "thread-safety" entails before putting a multi-threaded application into production.
For more information on using threads, read the following topics in the
V5R4 Information Center:
- Navigate to Programming -> Languages -> RPG -> ILE RPG -> ILE RPG Programmer's Guide. Start with the "Multithreaded Applications" topic in Chapter 2. Follow the link to "Multithreading Considerations" and read to the end of the chapter.
- Navigate to Programming -> Multithreaded Applications. Read all the topics carefully.
Barbara Morris joined IBM in 1989 after graduating from the University of Alberta with a degree in computing science. Within IBM, she has always worked on the RPG Compiler team. You can contact Barbara at | <urn:uuid:4a735481-2fcc-4aa8-aba3-64d3a2d35d96> | CC-MAIN-2024-38 | https://mcpressonline.com/programming/rpg/techtip-running-rpg-programs-in-multithreaded-jobs | 2024-09-10T01:35:14Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00633.warc.gz | en | 0.904166 | 599 | 2.515625 | 3 |
1. Cyberattacks Affect All People
Cyberattacks are so common in today’s world and recent reports show that hackers attack a computer in the US every 39 seconds! Once an attack happens millions of people could be harmed, businesses could be forced to shut down, and operations could be halted. Cyberattacks can happen on a global scale as well with hackers breaching government organizations. The National Cyber Security Centre (NCSC) has warned businesses and citizens that Russia is exploiting network infrastructure devices, such as routers around the world. The attackers are aiming to lay down the groundwork for future attacks on critical infrastructure such as power stations and energy grids. It is such a threat that nuclear plants can be attacked causing a nuclear disaster with millions of lives lost. It is important that you are actively taking the steps to protect yourself from cyber attacks. Check out our Cyber Security services page to learn more about how we can help you stay protected.
2. Fast Changes In Technology Are Causing A Boom In Cyberattacks
With the emergence of 5G network, which is a new network “ecosystem of ecosystems” that requires a similarly redefined cyber strategy. This creates a greatly expanded, multidimensional cyberattack vulnerability that is difficult for organizations to secure. Cyberwarriors are increasing their knowledge while hackers can now utilize artificial intelligence and machine learning to trigger automated cyberattacks that can easily compromise secure systems without any human intervention. These automated cyberattacks pose a global scare and can be done on a mass volume.
3. Damage To Businesses And Loss Of Jobs
There has been an influx of hacks and breaches of name brand companies in recent years. These attacks have costed organizations millions of dollars in damages to recover the data and penalties paid through fines. All these expenses could cause people to lose their jobs due to the company needed to cut costs. An attack can also halt your business operations and potentially cause the company to shut down if the proper backup and recovery process is not in place.
4. Cybersecurity Threats Faced By Individuals
It’s not only nations and businesses that face threats from hackers, but individuals face many risks as too. Identity theft is a huge issue, where hackers steal an individual’s personal information and sell it for profit. This also puts the personal safety of an individual and his or her family at risk. This happened numerous occasions and millions of dollars lost at the expense of the victim. In other cases, the hackers use blackmail and extortion after stealing their identity and demand ransom money to take no further action. Hackers also attack through household camera devices to invade people’s privacy. Hackers can speak to individuals that live inside the home, and make ransom demands which causes many privacy concerns.
5. Cyber Concerns May Result In Increased Regulations And Legislation
With cybersecurity threats increasing new laws can be placed to protect the consumer from potential attacks. This would mean that increased regulations and legislation may soon become a reality. Harsher penalties need to be placed on perpetrators of the attack. Citizens need to be made aware of laws passed and make sure that their businesses comply with the laws. | <urn:uuid:9ee17433-3f0d-4e2b-abe8-f2fab2deb613> | CC-MAIN-2024-38 | https://www.bvainc.com/2021/09/16/5-reasons-why-cybersecurity-is-important-now-more-than-ever/ | 2024-09-12T11:37:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00433.warc.gz | en | 0.950906 | 632 | 3.015625 | 3 |
Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) are two different, yet similar, types of cybersecurity attacks that online businesses are at risk for.
During a Denial of Service attack, a computer sends an enormous amount of traffic to the victim's computer. The web resource is unavailable to users by flooding it with more requests than the server can handle. During that attack period, regular traffic will be slowed down or completely interrupted.
There are various ways to perform a DoS attack. For example, an attacker may exploit vulnerabilities in the target application to cause it to crash. Another example of a DoS attack is when the attacker sends many spam requests to a server to overwhelm it.
There are also several types of DoS attacks, including
A DDoS attack is basically a multiplied DoS attack. Instead of using a single computer to send an attack, the attacker uses various internet-connected devices to launch a coordinated attack against the target. The more devices the attacker uses, the greater the possibility of taking the target system offline.
DDoS attacks are usually performed using botnets. Botnets are networks of computers that the attacker controls which can be built using cloud computing systems. However, cybercriminals commonly build botnets from the systems compromised during their attacks.
There are three main categories of a DDoS attack:
As mentioned before, DoS and DDoS attacks are very similar. However, there are key differences.
The following table lays out these differences in an easy-to-read manner:
Regardless of which type of attack, there are various reasons that a cybercriminal may want to take businesses and websites offline.
Typically, the reasons fall into one of the following categories:
Regardless of the reason, a DoS or DDoS attack can do significant harm to a business or website.
Multiple types of cybercriminals could conduct a DDoS or DoS attack. It could be an individual hacker or a hacking group trying to get a large payout from a company.
Anonymous is a hacking group that targets companies that they disagree with politically. In recent years, major websites and services like Wikipedia and Paypal were victims of these groups.
The best way to protect against DDoS and DoS attacks is to deploy anti-DDoS software that identifies and blocks malicious traffic before reaching the mark. However, scrubbing network traffic can be difficult, especially if the attack is highly sophisticated. Experienced DDoS attackers use traffic that is similar to legitimate traffic, which means the scrubber could miss it. Even worse, the scrubber could mistake legitimate traffic for the fake ones, doing the attacker's job for them.
There are some essential security practices that businesses and websites can do to help avoid attackers' attention.
If the site is continuously up-to-date, it helps mitigate the risk of attackers exploiting vulnerabilities.
Additionally, the risk of the site becoming a bot network is significantly reduced if it is updated.
DoS and DDoS attacks exploit issues like Slowloris (a DDoS attack software that allows one computer to take down a web server.) To resolve these issues, enabling a robust security plugin is recommended.
Websites have logs that help identify malicious behavior on the site. These logs allow you to find the exact source of a cyber attack.
It is essential that you enforce strong password policies for every user. Additionally, it is crucial to add two-factor authentication to your website. These security policies make it more difficult for attackers to hack user accounts.
Increasing authentication policies may also lessen your consumer's concerns, as 92% of Americans have concerns regarding their privacy on the Internet.
DoS and DDoS attacks are similar in that they are usually going for the same end but with different methods of attacking. As we can take away from this article, the key differences between them are:
Regardless of how the attack is executed, your site is shut down for a long time, and it can cause serious system malfunctions. Every second your system is down is lost revenue and costly recovery processes.
Speak with the Accountable HQ team today to learn how your business can protect against DoS and DDoS attacks. | <urn:uuid:c9316c5c-2545-42cb-b32b-3afa18c7f4a1> | CC-MAIN-2024-38 | https://www.accountablehq.com/page/the-difference-between-dos-and-ddos-attacks | 2024-09-13T17:21:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00333.warc.gz | en | 0.955391 | 858 | 3.5 | 4 |
In case you missed it, Tesla Motors Chairman Elon Musk is back in the news. His most recent prediction: driving may eventually be banned. The reason? Computers do a much better job of driving vehicles than humans could ever do.
“It’s too dangerous,” Musk noted. “You can’t have a person driving a two-ton death machine.”
At this point, I’m not sure anyone is ready for cars that drive without any human intervention, but we are certainly careening toward autonomous vehicles. Already, a growing number of manufacturers are including LIDAR (Light Detection and Ranging)-based collision avoidance systems, and some cars can park themselves.
Meanwhile, Google’s self-driving car has logged more than 700,000 flawless miles, and Tesla (surprise!) has developed a self-driving vehicle, the Model S.
With all the technology on board and the growing problem of distracted driving, automating functions is a good idea. Over the short term, let’s hope that in-vehicle systems and interfaces such as Apple’s CarPlay and Google’s Android Auto make things safer by getting phones out of motorists’ hands, making it easier to control functions without fiddling with buttons and dials, and, in the end, helping everyone keep their eyes on the road.
Yet, even then, you’re still stuck with horrible drivers who are perpetually in a hurry and are driving recklessly—something that seems to be in abundance these days. There’s also the reality that even the best controls and interface can’t stop some people from creeping into the distraction zone, and safer cars actually cause some motorists to take greater risks, including driving faster.
To be sure, all roads lead to semi-autonomous and autonomous vehicles—perhaps in a decade or two. However, the technology must advance further, bureaucrats and politicians need to move faster, and security must get better.
Then there’s a human element. A lot of motorists somehow equate driving with freedom and want the control of having a steering wheel in hand. No amount of logic or research is likely to change their minds—though radically different configurations and designs might. In fact, Musk later spun into reverse and tweeted that he wasn’t advocating outlawing humans from driving.
Of course, the bigger question is: How do we approach and manage machines that do things better than humans do? The same questions and scenarios will play out across a wide swath of industries and situations.
Do we value safety and lives over choice? Do we value cost and efficiency over the feeling— and sometimes the illusion—that we’re in control? | <urn:uuid:edfd5528-eb4e-40fb-baa1-15c199a8f928> | CC-MAIN-2024-38 | https://www.baselinemag.com/blogs/autonomous-vehicles-drive-forward/ | 2024-09-13T17:04:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00333.warc.gz | en | 0.955653 | 560 | 2.734375 | 3 |
Exciting new innovations and advancements are continually being introduced to us by the ever-changing technological landscape. Among the most recent tech-related stories covered in this monthly roundup are an innovative approach to 3D printing for robots and a fresh proposal for US space regulation.
A Revolution in Robotics: The 3D-Printing Revolution
Thanks to the latest innovation in 3D printing, robotics has taken a giant leap forward. Printing out a robotic hand with polymer bones, ligaments, and tendons was a success for researchers at ETH Zurich. A novel laser method allowed for this success.
Rigid materials, like metal, have traditionally been used to construct robots. But there are a number of benefits to robots constructed from pliable materials.
Robots that can interact with people securely and manage fragile objects with greater ease are now within reach, thanks to the ability to incorporate flexible, elastic, and rigid materials into robotic structures. A number of sectors stand to benefit from this innovation, including logistics, healthcare, and manufacturing.
A Novel Approach to U.S. Space Regulation
The rising demand for oversight of private-sector space activities has prompted the United States administration to announce a new proposal for space regulation. Expanding the responsibilities of the Federal Aviation Administration (FAA) and the Department of Commerce in supervising critical areas, the proposed legislation would divide regulatory powers between the transportation and commerce departments.
Both crewed and uncrewed space activities would be added to the FAA’s purview under the proposed legislation. The Department of Commerce, however, would hold more power over a variety of unmanned spacecraft, including those that service spacecraft while in orbit.
This proposal is in reaction to an international agreement that mandates the authorization and oversight of non-governmental organizations’ space operations by member states. The United States intends to uphold its international responsibilities and guarantee the safe and responsible expansion of the private space sector by strengthening regulatory oversight.
Extraordinary Tech Tales
There are a number of other noteworthy technological stories to mention alongside these ground-breaking innovations:
1. Carbon Dioxide Fuel
A novel method for producing fuel from carbon dioxide (CO2) has been devised by scientists at MIT. This novel method produces formate from carbon dioxide, a solid fuel with possible uses in heating homes and industries. This new discovery provides a viable option for meeting the challenge of lowering CO2 emissions without sacrificing the production of a valuable energy source.
2. Desalination by Wave Power
A novel method for desalinizing saltwater without using fossil fuels has been developed: a floating desalination machine that is propelled solely by waves. This device provides an eco-friendly and long-term answer to the problem of water scarcity on a global scale. A game-changer in desalination could be this cutting-edge technology that uses waves to its advantage.
3. Ariane 6 Rocket Succeeds at Crucial Rehearsal
Airbus and Safran’s Ariane 6 rocket has passed a critical rehearsal, according to the European Space Agency. The rocket is now one step closer to its debut flight, which is set for next year, thanks to this milestone. Future missions may rely heavily on the Ariane 6 rocket, which is a huge step forward in space exploration.
4. The Expansion of the Smartphone Market and Mobile Data Traffic
Mobile data traffic in Europe is projected to triple in the next five years, according to industry group GSMA. This indicates a growing demand for connectivity. The widespread availability of smartphones and other new technologies are two of the main causes of this dramatic increase in mobile data consumption.
For the first time since the middle of 2021, the worldwide smartphone market saw year-on-year growth in October. The smartphone market appeared to be on the mend, despite difficulties like component shortages. The industry’s resiliency and the persistent demand for cutting-edge mobile devices are demonstrated by this growth.
5. A Japanese Manufacturer of Chip Materials Ramps Up Research and Development
A research and development center is set to be established in Silicon Valley, USA, by the Japanese chip materials producer Resonac. The expansion serves as a testament to the company’s dedication to semiconductor industry research and development. Resonac plans to use the knowledge and resources in Silicon Valley to propel innovation and maintain its position as a semiconductor industry leader by setting up shop in one of the world’s most prominent tech hubs.
Technological Projection and Its Consequences
It is essential to keep up with the most recent developments in technology and their possible consequences since technology is changing at a rapid pace. An era of tremendous change, fueled by lightning-fast technological advancements, is about to begin: the Fourth Industrial Revolution. Artificial intelligence, driverless cars, blockchain, and the internet of things are some of the new technologies that are defining this revolution.
Addressing the possibilities and threats posed by the Fourth Industrial Revolution has been a top priority for the World Economic Forum. In order to govern new and emerging technologies, the Forum is working with partners from different sectors to co-design and test agile frameworks through its Centre for the Fourth Industrial Revolution Network. These structures are put in place to make sure that technology progress helps people and doesn’t hurt them.
It is critical to think about the social, legal, and ethical consequences of new technology as we move through this revolutionary age. To maintain equity, privacy, and security while keeping up with innovation, new rules, standards, and regulations are required. The future of technology can be shaped for the betterment of all people if the government, businesses, academic institutions, and civil society work together.
See first source: WEForum
1. What recent innovation in 3D printing has impacted robotics?
Researchers at ETH Zurich have achieved success by 3D printing a robotic hand with polymer bones, ligaments, and tendons using a novel laser method. This innovation enables the use of pliable materials in robotics.
2. What are the benefits of constructing robots from pliable materials?
Robots made from pliable materials can interact with people securely and handle fragile objects with greater ease. This innovation has potential applications in sectors like logistics, healthcare, and manufacturing.
3. What is the new proposal for U.S. space regulation, and who is involved?
The U.S. administration has proposed expanding the responsibilities of the Federal Aviation Administration (FAA) and the Department of Commerce in supervising space activities. Crewed and uncrewed space activities would fall under the FAA’s purview, while the Department of Commerce would oversee unmanned spacecraft, including those servicing other spacecraft in orbit.
4. Why is the U.S. proposing this new space regulation?
This proposal is in response to an international agreement that mandates the authorization and oversight of non-governmental organizations’ space operations. The U.S. aims to strengthen regulatory oversight to ensure safe and responsible growth of the private space sector.
5. What are some other noteworthy technological stories mentioned in the article?
Some other technological stories include advancements in producing fuel from carbon dioxide (CO2), eco-friendly desalination powered by waves, the success of the Ariane 6 rocket, the growth of the smartphone market, and a Japanese chip materials producer’s expansion into Silicon Valley for research and development.
6. What is the Fourth Industrial Revolution, and why is it significant?
The Fourth Industrial Revolution is characterized by rapid technological advancements, including artificial intelligence, driverless cars, blockchain, and the internet of things. It is significant as it brings about tremendous change and requires addressing its possibilities and threats, including social, legal, and ethical consequences.
7. How is the World Economic Forum addressing the Fourth Industrial Revolution?
The World Economic Forum is working with partners from various sectors to co-design and test agile frameworks through its Centre for the Fourth Industrial Revolution Network. These frameworks aim to govern new and emerging technologies to ensure they benefit people and do not harm them.
8. What is the importance of considering social, legal, and ethical consequences in the age of rapid technological change?
Considering these consequences is crucial to maintain equity, privacy, and security while keeping up with technological innovation. It requires the establishment of new rules, standards, and regulations, with collaboration among government, businesses, academic institutions, and civil society to shape the future of technology for the betterment of all.
Featured Image Credit: Photo by NASA; Unsplash – Thank you! | <urn:uuid:a24f7c23-5e79-4876-a18a-a532f709d94a> | CC-MAIN-2024-38 | https://www.baselinemag.com/tech/the-latest-innovations-in-robotics-and-space-regulation/ | 2024-09-13T17:17:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00333.warc.gz | en | 0.937825 | 1,733 | 3.015625 | 3 |
Artificial Intelligence (AI) has made its way into our everyday lives. From interacting with virtual assistant technology such as Siri and Alexa to watching Netflix-recommended movies, AI is present all around us. According to a study by Statista, the global AI market was worth $327.5 billion this year and is expected to grow significantly in the upcoming years. Nonetheless, with all its advantages, the limitations of AI are numerous, such as lacking consciousness, morals, and ethics. But can AI overcome its limitations? Keep reading to find out.
Before we jump into the limitations of AI, let’s understand what AI is. AI allows machines to complete tasks that require human intelligence. Any program capable of learning, thinking, and performing a task like a human is an artificial intelligence program. Applications of AI include natural language processing, speech recognition, and machine vision, such as voice assistants, online search engines, and email filters, to name a few.
There are three main types of AI that are categorized based on their capabilities:
Undoubtedly, AI does a great job of reducing human error, performing repetitive tasks more efficiently, and making smart and fast decisions without wearing out. Nonetheless, artificial intelligence has limits that may be difficult to overcome, especially now. Below are eight limitations of AI:
AI can only perform tasks based on the data it is exposed to and trained on. That’s why AI excels in pattern recognition, which takes much longer for humans to complete.
On the other hand, this means that AI cannot adapt to changing circumstances or think on its own to demonstrate flexibility. As a result, AI cannot derive logical traits, question conclusions, or think critically about problems outside of its monotonous pattern recognition.
Simply put, AI cannot think because it doesn’t have a consciousness, as it is programmed with algorithms. Hence, the algorithm cannot react flexibly to new problems as it cannot recognize or establish any logical connection in concepts outside its program. The fact that AI cannot think also poses severe issues related to self-driving cars. For example, if a traffic sign has been vandalized and looks different, the AI can no longer recognize the sign, which can have fatal consequences.
Nonetheless, there are already attempts to intentionally incorporate errors into the data to increase AI’s ability to learn. However, this approach has been unsuccessful so far.
We know by now that AI can only complete tasks it has been programmed to do. As a result, it frequently fails if asked to complete another job outside of its programmed parameters.
Since AI can only learn over time with past data, it cannot learn to think divergently or be creative in its approach. For example, Poem Portraits is a collaboration with Google Arts and Culture. It generates a poem based on a single word you input using an algorithm and a database of over 20 million words of 19th-century poetry. Hence, AI mimics creativity to create art from previous work, as it cannot read human emotions and only performs based on pre-fed data.
We also know by now that AI is skilled at repeatedly carrying out the same task. However, if we want to make adjustments or improvements to the machine, humans have to alter the codes to make the necessary changes manually. Hence, machines cannot improve themselves.
Only humans can use their cognitive abilities and creative, associative intelligence to conceive and build optimized machines. On the other hand, machine learning can only learn from past data and increase its speed in decision-making.
There is a significant deficit in the traceability of AI decision-making. In other words, AI cannot explain its decision-making process. The process of learning and coming to a result cannot be viewed at any point in time. Thus, this complicates the transparency that is required in competitive activities, such as operating, financing, and investing.
Breaking down the learning and decision process into distinct AI tools is already underway. For example, explainable AI is a field of study that allows researchers to use mathematical techniques to analyze patterns in AI models and draw conclusions about how those models reach their decisions. The explanations could help data scientists get better results or detect faulty models. However, much time is required to achieve a breakthrough in this area.
Although AI aims to eliminate human bias, human biases have made their way into AI models producing distorted results. For example, a criminal justice algorithm used in Broward County, Florida, mislabeled African American defendants as high risk at nearly twice the rate compared to white defendants.
In addition, Amazon found that its hiring algorithm was biased toward women. The computer model favored applicants based on words like “executed,” a similarity commonly found on men’s resumes. Hence, the algorithm learned that male applicants were preferred and penalized female applicants. Nonetheless, Amazon made the necessary changes to create an unbiased model.
AI is built into almost all everyday items, such as our loudspeakers, smartphones, and cameras. One can argue that these digital devices are virtual mini spies that endanger our privacy, even when we are not interacting with them. In addition, our private data, like age and location, including sensitive information, is continuously being processed by AI models to analyze and customize online experiences.
However, this limitation may be overcome with appropriate data protection functions such as immediate deletion after data usage, granting anonymity, and less traceable systems.
You may have heard about killer robots, officially known as lethal autonomous weapons systems (LAWS), that use artificial intelligence and machine learning algorithms to identify and destroy a target autonomously. An example of a killer robot includes the SGR-A1, jointly developed by Samsung and Korea University to assist South Korean troops in the Korean Demilitarized Zone. The robot can identify humans from 2.5 miles away and warn them before shooting them.
AI can make such inhuman, immoral, and unethical decisions because it is programmed to do so. Therefore, the use of LAWS requires ethical guidelines to prevent misuse. An example could be ensuring a degree of human control over them to prevent them from operating independently.
There is a growing fear that automation and AI will replace people and force them into unemployment. For example, robots are frequently employed to replace humans to increase efficiency in monotonous tasks that do not require social or emotional intelligence, such as data entry, reception, or courier services. Nonetheless, AI also creates opportunities for humans to learn new skills and work in more rewarding fields requiring creativity. Recent studies show that it is forecasted that AI will create $2.3 million jobs while eliminating $1.8 million at the same time.
AI cannot exhibit human emotions, think like humans, or make moral and ethical decisions. Nonetheless, regardless of the disadvantages of artificial intelligence, the AI market continues to grow.
That said, humans are responsible for ensuring that the rise of artificial intelligence does not get out of hand. At the same time, until AI learns to overcome its limitations, it doesn’t mean that humans should refrain from using AI-powered solutions in multiple industries, such as healthcare, finance, and marketing.
Try our real-time predictive modeling engine and create your first custom model in five minutes – no coding necessary! | <urn:uuid:1cfd4ac3-c0c3-4b8d-8239-c704de485272> | CC-MAIN-2024-38 | https://plat.ai/blog/can-ai-overcome-its-limitations/ | 2024-09-16T06:38:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00133.warc.gz | en | 0.961537 | 1,472 | 3.546875 | 4 |
Certificate Enrollment Protocol
CEP is a protocol jointly developed by Cisco and VeriSign, Inc. CEP is an early implementation of Certificate Request Syntax (CRS), a proposed standard to the IETF. CEP specifies how a device communicates with the CA, how to retrieve the CA's public key, and how to enroll a device with the CA. CEP uses Public Key Cryptography Standards (PKCS).
CEP uses HTTP as a transport mechanism and uses the same TCP port (80) used by HTTP.
To declare the CA that a Cisco IOS router should use, use the crypto ca identity name command in global configuration mode. The CA might require a particular name, such as the domain name.
Finally, to cover the exam blueprint, this chapter closes with a short explanation of some of the security protocols used in today's networks to ensure security over wireless connections. | <urn:uuid:f221a18a-0a96-4b65-9333-b32e19bcf04b> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=422947&seqNum=6 | 2024-09-16T05:14:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00133.warc.gz | en | 0.894608 | 183 | 2.734375 | 3 |
The ultimate stress test was placed on UK IT infrastructures as a temperature of 40.3°C was reached in July 2022. Data centre operators fought to ensure that cooling functionalities could deal with the challenge to prevent any connectivity outages and service disruptions.
By tradition, many legacy data centres were not designed to manage temperatures, and therefore, struggled to deal with the heat. Keeping close control over the ambient temperature inside the halls of a data centre is a challenge, with even incremental temperatures outside of the building threatening success.
It’s a problem that’s likely to persist in the coming years. In fact, the Royal Meteorological Society states that 40°C temperatures in the summer months will soon become commonplace. It places urgency on operators to prepare their sites as extreme temperatures become the norm. Therefore, what lessons can they learn from the situation last year and what are the actions they can take to deal with the heat in future?
Futureproofing older buildings
Preparing for extreme weather events equates to having the correct processes, procedures and the right infrastructure underpinning them to ensure resilience. Modern data centres are designed to withstand temperatures of around 44°C, but for older ones which were designed for temperatures up to 35-38°C, it’s certainly a challenge to retrospectively fit and prepare them for extreme heat.
It’s not just high temperatures that need to be considered either; events like Sudden Stratospheric Warmings (SSWs), which can cause cold conditions in winter, are also being affected by climate change. The industry needs to be prepared for these colder temperatures in winter too, all the while meeting ESG and sustainability targets.
Making use of hot and cold aisle containment, high temperature liquid cooling systems, low power servers and reusing waste heat are all tools that can help prepare vulnerable buildings in the coming years. Many of these systems also have a dual purpose in making the building more energy efficient. For example, liquid used for immersion cooling is a more efficient conductor of heat than air.
Maintenance and servicing all year round
With the correct infrastructure as the foundation, processes and procedures are the next step. Here it’s about planning the holistic year-round servicing, maintenance and testing cycle with the changing seasons in mind.
In advance of the summer and the subsequent winter season, data centres need to be prepared for the strain ahead. For example, it’s essential that cooling units are in peak condition ahead of summer, whereas some of the cooling equipment can be turned down in winter to ensure energy efficiency.
As the seasons change, regular services and planned preventative maintenance will need to be factored in. This will not only ensure equipment is prepared for the stresses it faces, but also that major works aren’t required to take place during extreme weather events. It’s often during these events that supply issues can occur, so again, preparedness is key.
All hands on deck
With the correct equipment and a holistic maintenance schedule as the foundation, the third component for peak preparedness is proactivity, especially in the case of emergencies. When extreme temperatures strike, a well-rehearsed plan should be implemented to minimise downtime risk and ensure any issues that do occur can be swiftly resolved.
Where non-essential maintenance is postponed in the event of an emergency, engineers should be positioned in strategic locations around the building to react accordingly. It should not be ‘business as usual unless something breaks’. Instead, these teams can jump on issues as soon as they occur, with each minute saved critical. For example, a reactionary team of engineers should be ready on the roof to fix the cooling systems if needed.
Data centres that offer colocation also provide a number of resources, such as people and skills, to ensure continuity of services. For some pieces of equipment, there may only be a handful of engineers deployed by a manufacturer in a certain geographic location. Larger data centre providers have the size, scale and relationships to ensure priority assistance from these providers in an emergency.
Preparing for the new norm
More extreme temperatures are a certainty as climate change escalates. To deal with these changes and protect the UK’s IT infrastructure, data centres need to be suitably equipped and have the right people in place to respond to any issue.
Little separates success and failure for data centres, where one seemingly small hitch can have huge ramifications on operations. The 2022 heatwave made it clear that preparedness and proactivity are critical to ensuring that success remains a reality and sites can continue to be operational as temperatures rise. | <urn:uuid:ca8cb7d6-f010-476f-97d9-48e7d4f1473e> | CC-MAIN-2024-38 | https://datacentrereview.com/2023/03/how-data-centres-can-meet-the-challenge-of-escalating-temperatures/ | 2024-09-18T13:58:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00833.warc.gz | en | 0.949106 | 941 | 2.859375 | 3 |
This chapter covers the following subjects:
- Large-Scale Internet Threats: Reviews IPv6 worms, DDoS attacks, and botnets
- Ingress/Egress Filtering: Describes filtering at network perimeters to prevent spoofed packets
- Securing BGP Sessions: Describes securing the Internet routing protocol
- IPv6 over MPLS Security: Explains security in IPv6 service provider networks
- Customer Premises Equipment: Describes security of IPv6-capable end-user devices
- Prefix Delegation Threats: Describes issues related to providing IPv6 addresses to service provider customers
- Multihoming Issues: Explains connecting to multiple service providers
Many people are surprised to learn that IPv6 is already running on the Internet. The Internet can run both IPv4 and IPv6 simultaneously because the protocols are independent of each other. Those who do not have IPv6 connectivity cannot access IPv6 services provided over the Internet.
There are many large-scale threats on the current IPv4 Internet, and IPv6 will be evaluated to improve this situation. These threats have the potential to deny service to critical services and spread malware. IPv6 can reduce many of the attacks that are so prevalent on the IPv4 Internet. Attackers can forge packets, so filtering based on IP address is a requirement. One of the key security measures when connecting to the Internet is to perform ingress and egress filtering of IPv6 packets. Because the IPv6 addresses are quite different than IPv4 addresses, filtering IPv6 addresses is also unique.
Security within a service provider's environment is also a focus area. How a service provider secures its network directly impacts the security of the Internet at large. Service providers use Border Gateway Protocol (BGP) extensively, so the secure use of this routing protocol is a fundamental practice. Service providers make use of Multiprotocol Label Switching (MPLS) in their core networks. This chapter covers the security of this protocol with respect to IPv6.
Service providers must connect millions of customers and their customer premises equipment (CPE) to the Internet. This must be done securely to provide worry-free Internet access to the general public. Because IPv6 addresses are assigned hierarchically, the assignment of addresses to customers must also be done safely.
Many enterprise customers want to be connected to multiple service providers for added assurance that their networks will remain operational if a single service provider's network has problems. However, this provides challenges for IPv6, so there are some emerging solutions to this conundrum.
This book starts out covering IPv6 security from the outside inward, so it is logical to start by looking at the Internet-facing network components. This chapter covers how to secure your network when it is connected to the IPv6 Internet.
Large-Scale Internet Threats
The Internet is not a safe place anymore. Back in the late 1980s, the cooperative organizations that made up the Internet were primarily universities, research institutions, and military organizations. However, this changed on November 2, 1988, when the Morris Internet worm was unintentionally released. The Morris worm was the first large-scale Internet denial of service (DoS) attack. Until that time, the Internet was a communication tool for sharing information between collaborative and friendly organizations. After that event and as the Internet grew, the Internet started to have a sinister shadow that meant organizations connecting to the Internet needed to protect themselves.
Now that the Internet has evolved to use both IPv4 and IPv6, the threats have also evolved. Packet-flooding attacks are possible using either IP version. Internet worms operate differently in IPv6 networks because of the large address space. Distributed denial of service (DDoS) attacks are still possible on the IPv6 Internet, but there are some new ways to track them. This involves the use of tracing back an attack toward its source to stop the attack or find the identity of the attacker. The following sections cover each of these large-scale Internet threats and discuss prevention methods.
IPv4 networks are susceptible to "Smurf" attacks, where a packet is forged from a victim's address and then sent to the subnet broadcast of an IPv4 LAN segment (for example, 192.168.1.255/24). All hosts on that LAN segment receive that packet (icmp-echo with a large payload) and send back an echo reply to the spoofed victim address. This overloads the victim's IP address with lots of traffic and causes a DoS. Many DoS attacks are easy to disable by simply entering no ip directed broadcasts to every Cisco Layer 3 interface within an organization. However, the default router behavior has been changed so now disabling directed broadcast forwarding is the default setting. This mitigation technique is documented in BCP 34/RFC 2504, "User's Security Handbook."
Because IPv6 does not use broadcasts as a form of communication, you might assume that these types of attacks are limited. However, IPv6 relies heavily on multicast, and these multicast addresses might be used for traffic amplification. An attacker on a subnet could try to send traffic to the link-local all nodes multicast address (FF02::1) and the link-local all routers multicast address (FF02::2).
One such example of using multicast to leverage an amplification attack is demonstrated with The Hacker's Choice (THC) IPv6 Attack Toolkit. It contains two utilities named smurf6 and rsmurf6. They operate much the same as the original IPv4 Smurf attacks but instead use multicast to amplify the attack. The smurf6 tool sends locally initiated Internet Control Message Protocol version 6 (ICMPv6) echo request packets toward the multicast address FF02::1, and then the hosts on that LAN that are vulnerable to the attack generate ICMPv6 echo response packets back to the source, which is the unknowing victim. The smurf6 victim can be on the local subnet with the attacker or on a remote subnet.
Example 3-1 shows how smurf6 can be used to affect a computer on the same subnet as the attacker. If the victim is on a different segment, the systems on this segment send the echo replies to the remote victim's system. The first parameter is the local attacker's interface, and the second parameter is the victim's IPv6 address.
Example 3-1. Smurf6 Attack
[root@fez thc-ipv6-0.7]# ./smurf6 eth0 2001:db8:11:0:b0f7:dd82:220:498b Starting smurf6 attack against 2001:db8:11:0:b0f7:dd82:220:498b (Press Control-C to end) ... [root@fez thc-ipv6-0.7]#
The rsmurf6 tool is coded a little differently. It sends ICMPv6 echo reply packets that are sourced from ff02::1 and destined for remote computers. If the destination computer (victim) is a Linux distribution that can respond to packets sourced from a multicast address, it responds to the source, which causes a traffic flood on the remote LAN. This form of amplification is particularly dangerous because each packet generated by rsmurf6 would translate into numerous packets on the remote LAN. Rsmurf6 is like a reverse smurf6 and only works on incorrectly coded implementations of the IPv6 stack. Therefore, it is not as effective as it once was when more vulnerable operating systems were in existence.
Example 3-2 shows how the rsmurf6 tool can be used. The first part of the example targets a victim's computer on a remote subnet. The second part of the example is destined for the link-local all nodes multicast address FF02::1 and essentially denies service to the entire local LAN that the attacker is connected to. Even the smallest systems can generate 25,000 pps, which is about 25 Mbps of traffic to all hosts.
Example 3-2. Rsmurf6 Attack
[root@fez thc-ipv6-0.7]# ./rsmurf6 -r eth0 2001:db8:12:0:a00:46ff:fe51:9e46 Starting rsmurf6 against 2001:db8:12:0:a00:46ff:fe51:9e46 (Press Control-C to end) ... [root@fez thc-ipv6-0.7]# ./rsmurf6 -r eth0 ff02::1 Starting rsmurf6 against ff02::1 (Press Control-C to end) ... [root@fez thc-ipv6-0.7]#
It should be mentioned that these rsmurf6 attacks are only effective on computers that have IPv6 stacks that allow them to respond to an ICMPv6 packet that was sourced from a multicast address. Most modern IPv6 implementations are intelligent enough to recognize that this is not a valid condition, and they simply drop the packets. In other words, IPv6 hosts should not be responding to echo request packets destined to a multicast group address.
In Chapter 2, "IPv6 Protocol Security Vulnerabilities," you learned that it is a good practice to limit who can send to multicast groups. Because IPv6 does not have broadcast as a form of communications, multicast is the method for one-to-many communications. For this reason, multicast can be leveraged by attackers for packet amplification attacks. Therefore, the solution is to tightly control who can send to multicast groups and when it is appropriate to respond to a multicast packet. Service providers can also consider rate-limiting user connections and particularly rate-limit IPv6 multicast traffic. Most multicasts should be confined to the LAN, so if an attacker is already on your LAN, you need to use other means to protect against that. Physical security, disabling unused switch ports, enabling Ethernet port security, and using an 802.1X or Network Admission Control (NAC) technology are options to prevent unauthorized access to the internal networks.
DoS attacks can be performed using a feedback loop to consume resources or amplify the packets sent to a victim. In Chapter 2, you saw how RH0 packets could be created with a list of embedded IPv6 addresses. The packet would be forwarded to every system in the list before finally being sent to the destination address. If the embedded IPv6 addresses in an RH0 packet were two systems on the Internet listed numerous times, it could cause a type of feedback loop.
Figure 3-1 shows how this type of ping-pong attack would work. The attacker would first send the crafted packet to a network device on the Internet that is susceptible. That system would forward it onto the next system in the list. The two systems could continue to do so until they ran out of bandwidth or resources. However, sometime soon, this type of attack will have limited success because RFC 5095 has deprecated the use of Type 0 routing headers in IPv6 implementations.
Figure 3-1 Internet Feedback Loop
DoS attacks might not just be about flooding traffic. With IPv6, there are going to be a wider variety of nodes attached to the network. IPv6-enabled appliances, mobile devices, sensors, automobiles, and many others can all be networked and addressable. DoS attacks could simply target a specific model of device and render it inoperable. The results could be far more tragic if your IPv6-enabled automobile suddenly stops while on the autobahn. The benefits of using IPv6 are great but so are the consequences if the communication is not secured properly.
Worms are a type of attack that requires no human interaction. This is different than a virus, which usually requires some form of human interaction to activate. Worms spread by themselves, infect vulnerable computers, and then spread further. Worms perform the entire attack life cycle in one small amount of code. That small amount of code contains the instructions for reconnaissance of new systems, scanning for vulnerabilities, attacking a computer, securing its access, covering its tracks, and spreading further.
Worms can be affected by the introduction of IPv6. This new protocol can affect a worm's ability to spread. It can also affect the techniques that worm developers use to make their code propagate. There are already examples of worms that leverage IPv6. The following sections cover these topics and discuss ways to help prevent worms.
Many of the widespread worms in the past eight years have leveraged some vulnerability in software running on a computer. Worms such as Code Red, NIMDA, MS/SQL Slammer, W32/Blaster, W32/Sobig, W32/MyDoom, W32/Bagel, Sasser, and Zotob all took advantage of some Microsoft service vulnerability. Some of them spread over the Internet, and some used email as the medium for reaching other systems. Many worms now spread through email (executable attachments, address books), peer-to-peer, instant message, or file sharing. These types of worm propagation techniques are unaffected by IPv6's introduction.
In the past, worms have used network scanning or random guessing to find other systems to spread to. Worms that spread to random IPv6 addresses cannot spread as fast as in IPv4 networks because IPv6 addresses are sparsely populated while IPv4 addresses are densely populated. Worms have been successful at scanning other IPv4 systems to infect because of the density of the current IPv4 space. Some worms have spread randomly (Code Red, Slammer), while others have spread sequentially (Blaster). It could be postulated that the Sapphire/SQL Slammer worm would not have been as successful on an IPv6 Internet because the size of the IPv6 address space is so large compared to IPv4. The Sapphire/SQL Slammer worm would take many thousands of years to reach its maximum potential on the IPv6 Internet. Given IPv6's immense address space, these types of worms will not be able to guess the addresses of other victims to spread to and infect. Random scanning will not be an option for worms on IPv6 networks. However, if IPv6 addresses are allocated sequentially or are otherwise densely packed, scanning can be just as fast as with IPv4.
Speeding Worm Propagation in IPv6
As worms get smarter, they can overcome many of the issues related to scanning a large IPv6 address space. Worms can increase their scan rate to try to reach more hosts each second. IPv6 worms need to overcome the problems with performing reconnaissance on IPv6 networks. As discussed in Chapter 2, there are many places for a worm to look to help the worm find other hosts to spread to. Worms can also improve their knowledge of the population. This could be done by recognizing only the currently allocated IPv6 address blocks or by seeding their code with several vulnerable systems. Worms could also work to find new targets by looking at other sources of IPv6 addresses.
Worms could consult the infected computer's neighbor cache to find other local systems. The worms would also look anywhere IPv6 addresses are stored to help them identify new targets. Domain Name System (DNS) lookups, local DNS files, /etc/hosts, registries, SSH known_hosts, and other lists of hosts could be consulted. Worms might also listen to the LAN traffic to find other hosts. Sniffing neighbor solicitation packets, Duplicate Address Detection (DAD) packets, and routing updates would help them target specific populations of hosts rather than randomly scanning. Even information about IPv6 addresses stored in logs like syslog, /var/log/messages, and search engine logs would be valuable to a worm.
Worm developers will likely adjust their strategy for IPv6 networks. A worm could infect a single host, and then the worm could use that host's ability to send IPv6 multicast packets within the organization (for example, FF02::1, FF05::1, FF08::1). An example of this can be seen in "Windows Kernel TCP/IP/IGMPv3 and MLDv2 Vulnerability" (MS08-001, CVE-2007-0069), which was discovered early in 2008 (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-0069). This vulnerability leveraged a bug in the Windows multicast code using malformed Internet Group Management Protocol version 3 (IGMPv3) packets. A worm could leverage this vulnerability to attack nearby IPv6 hosts and spread to those infected computers. Therefore, a method for mitigating worm attacks could leverage the practice of constraining communication with IPv6 multicast addresses.
It is predicted that worms that check for routable address space can spread even faster. A worm could contain all the routable IP prefixes, and that list would help it eliminate "black" unallocated space. A worm could also look at a host's routing table or passively listen for routing updates (FF01::1 all routers multicast group) on a LAN to learn about other local networks to start scanning. For example, scanning could also be accelerated if the worm could perform a MAC address flood (CAM overflow attack) of the local LAN switch and then listen to all the packets.
Dual-stack worms could leverage either IPv4 or IPv6 protocols to spread in even faster ways than previously using only IPv4. However, with the density of the population using IPv4, worms could spread quickly over only IPv4. Some worms can use a dual-stack approach to infect systems rapidly over IPv4. The worms can check whether the system is dual-stacked and then perform a multicast probe. The systems that respond to the link-local multicast (FF02::1) are then attacked using IPv6. This technique could even accelerate worm propagation in the short term. However, eventually as more IPv6-only hosts exist, this technique will lose its effectiveness.
IPv6 worms must have more advanced techniques to overcome the problem of scanning IPv6 addresses to spread. As these worms are made more sophisticated, more code is required, and the size of the worm increases. This makes it more difficult for the worm to spread because the transmission of the worm requires multiple packets and slows the spread.
Current IPv6 Worms
A few worms have already leveraged IPv6, and unfortunately there will be more in the future. The Slapper worm was released in 2002. It targeted Apache web servers on TCP port 80. After the worm attacked an Apache server, it would then create a copy and spread to other Apache web servers by randomly finding IPv4 servers. It had a sophisticated command and control channel that would allow a hacker to create send commands to the infected servers. One command would send a flood of IPv6 packets toward a victim. Slapper was the first worm that had any type of IPv6 component to it.
W32/Sdbot-VJ is a spyware worm that tries to use the popularity of IPv6 to disguise itself. It does not use IPv6 to spread to other machines; however, it installs the program wipv6.exe and installs several registry entries. The user might be hesitant to delete the file because it might have something to do with the Windows IPv6 drivers. Therefore, it was less likely to be deleted from a computer.
Preventing IPv6 Worms
A few techniques can help contain IPv6 worms. You must keep your antivirus and intrusion prevention system (IPS) signatures up to date so that they can identify new threats. Many worms leverage recent vulnerabilities that have been patched by the manufacturer, but not all customers have implemented the patch. Therefore, keeping software patched on computers and servers is a must. You can also use anomaly detection systems to identify an abnormal spike in traffic of any single protocol type. This would be one way to detect a problem, but the quicker you can detect a rapidly spreading worm and respond to block the propagation, the easier your remediation.
Distributed Denial of Service and Botnets
Sophisticated hackers try to strive for elegant attacks that satisfy their need to prove their superiority. However, many times an advanced attack is not possible and an attacker might still want to perform some type of disruption. Oftentimes it is the less-experienced attackers that simply try to negatively impact a site after they fail at a more sophisticated attack. When their attempts are thwarted, they fall back to trying to cause damage by simply breaking the system and taking it offline. This attack performs a DoS and makes the system unable to provide service to the legitimate users. Attacks of this style that involve a large number of geographically disperse computers are called distributed denial of service (DDoS) attacks.
DDoS attacks are performed by a large set of many Internet-connected computers that have been compromised. These large numbers of computers are controlled by other compromised systems called handlers. The hacker that controls all these computers can send commands to their vast army of "zombies" to send traffic to a victim. These zombie computers are typically Internet-user PCs that have been turned into robots (bots for short) through malicious software. When the "bot herder" directs the botnet to send the large volume of traffic toward the victim, it prevents the victim from being able to communicate. Thus the attack denies the victim Internet access or denies the user's access to the victim's website.
DDoS on IPv6 Networks
DDoS attacks can exist on an IPv6 Internet just like they exist on the current IPv4 Internet. Botnets, which are large networks of zombie infected computers, can be created, and their attacks can be focused on a victim. The use of IPv6 will not change the way that botnets are created and operated. DDoS botnets will unfortunately still exist on IPv6 networks. Botnets can also be used to send email spam and conduct other types of mischief. IPv6 will allow the Internet to contain many more devices than the IPv4 Internet. Imagine if many of these devices were to launch a DDoS attack. The results could be more devastating than today's attacks on the IPv4 Internet.
Because an IPv6 address is allocated in a fully hierarchical manner, it would be easier to track down where the traffic is coming from and going to than on the IPv4 Internet, where addresses are not hierarchical. Because of fully hierarchical addressing, inbound/outbound source IP address filtering and unicast Reverse Path Forwarding (RFP) checks will be possible. Viruses and worms that spread using spoofed source addresses will be limited in an IPv6 network if Unicast RPF checks are deployed. Ingress and egress filtering will also limit these types of attacks.
Figure 3-2 shows how two Internet service providers (ISP) have assigned address space to two organizations. If one organization connected to ISP1 sends a large volume of traffic to the victim's host, it could be filtered by ISP1. The traffic could be validated to have legitimate source addresses coming from its assigned address space. Packets with spoofed source addresses would not be allowed to leave the organization. Therefore, if the victim saw attack traffic coming from the 2001:db8:1000::/48 address space, it could be traced back to its source. If an attacking host was using privacy addressing for the network ID portion of the address, the attack could only be traced back as far as the organization.
Figure 3-2 Internet Ingress/Egress Filtering
The hope is that if all ISPs and end-user organizations were to implement full ingress and egress address-spoofing filtering, this would help with tracking down the DDoS attacks. The infected computers could then be quickly determined, and the malicious software could be remediated more quickly.
In the unfortunate circumstance where you have fallen victim to a DoS attack, your first instinct is to look upstream for assistance. The goal is to try to identify the source of the traffic that is coming your way and stop it as close to the source as possible. You must coordinate with your ISP to help contain a DoS attack. Your organization should not wait until this happens to work out procedures with your ISP to help you handle this. An organization should know ahead of time the contact information and procedures to follow to perform last-hop traceback.
Traceback in IPv6 networks involves finding the source address of the offending packets and then tracking down the offending host to a subnet. Then, tracking down the IPv6 address and the binding to the Layer 2 address (or asymmetric digital subscriber line [ADSL] port) of the host can be done at that site. Then, one could find out what Ethernet port the user is connected to and then investigate further. This procedure should be documented ahead of time so that it can be used quickly during an attack.
This process is time consuming and takes coordination between your own ISP and many others, and it is not applicable if the attack is a DDoS because there are literally thousands of attack sources. If you are trying to stop an attack by a botnet that could potentially contain thousands of bots, the task is overwhelming. Each of these bots is not sending traffic sourced from its own IP address, so tracing back to this many systems seems futile. The zombie hosts create traffic that looks like normal web traffic, so finding out which connections are legitimate is nearly impossible. The traffic patterns that these botnets create can be observed by using NetFlow to track statistics about each protocol flow. The flow records can be checked for traffic coming to or from an organization or service provider network. The collected NetFlow data can help trace the source of the traffic back to the source organization's network. However, the act of reaching out to that many users to have them remediate their systems is not feasible in most situations.
Your organization probably has a firewall, and you might have an IPS. Those two systems can try to stop the attack by filtering out traffic. However, the web requests will not match any known "signature," but either system can easily be configured to simply drop all traffic. That can stop the attack, but it would also stop all other valid users from reaching your servers. Furthermore, your Internet connection can be so saturated with traffic that blocking at your site has limited value.
If the attack that is hitting your network is a SYN-Flood attack, a solution is available. A SYN-Flood attack is where the packets with spoofed source addresses are sent to the web servers and they have the SYN TCP flag set. The server tries the second part of the three-way TCP handshake by sending back a SYN-ACK TCP flag packet to the spoofed source address. Because that packet never reaches the spoofed source, the three-way handshake never takes place and the web server retains the state of the connection for some time. Meanwhile the web server is hit with many of these false connections, and they drive up its CPU and memory utilization.
A technique that would help in this instance is to leverage an application front-end system or server load-balancing system that can terminate those SYN packets and send back the SYN-ACK on behalf of the server. The SYN cookie technique can also be used to verify the initial sequence number (ISN) of the client connection. If the client sends back the legitimate final ACK to complete the three-way handshake, the connection is legitimate. The server load balancer can then make the connection to the web server on behalf of the client, and the HTTP request can take place normally. False SYN-Flood traffic does not reach the server, but legitimate connections are served.
Black Holes and Dark Nets
During any type of attack or for other reasons an ISP can create a situation where traffic destined for a site can be dropped. The traffic is routed into a black hole, where it is simply discarded. To do this, the service provider creates a route to Null 0 on its routers and redistributes that route to the other peering routers in its infrastructure. The route can be for an entire prefix or for a specific IP address. All the routers with this null route simply drop the packets destined for that prefix. This technique was defined in RFC 3882, "Configuring BGP to Block Denial-of-Service Attacks," and is also known as a Remotely Triggered Black Hole (RTBH). The problem with this technique is that it is crude and can block legitimate traffic as well as the malicious traffic from the attack. However, this same technique can be applied to the IPv6 Internet.
ISPs also can use the RTBH technique to trace the source of the malicious traffic. When the traffic is routed to the black hole, ICMP error messages are created. Monitoring the ICMP error messages gives an indication of where the traffic entered the service provider's network. There are many different versions of this same technique. Different ISPs use different solutions to help them track down where the malicious traffic is entering their network. The goal is to identify where the traffic is coming from and then work back toward the source. This usually involves cooperation with other ISPs.
Another technique for learning about Internet threats involves the creation of a darknet for some portion of public address space. A public prefix is advertised by a service provider to the Internet, but that prefix has no services within it. Instead that network contains a computer that is monitoring all traffic coming into that network. Any packets that are on the service provider's network destined for that address space end up being monitored. Because that prefix has never been used, there is no legitimate reason for any packets to be going to it. Therefore, the only things going to the darknet network are transient packets that can be the results of scanning attacks.
Darknets, or network telescopes as they are also known, help researchers understand hacker behavior. They are similar to a honeynet, but there is no interaction with the hacker. No packets leave the darknet, but anything that enters the darknet is seen by a protocol sniffer. The sniffer can archive the data for future analysis and it can also pick up trends. However, few packets enter an IPv6 darknet, so it can be difficult to interpret results. However, there is a lot of public IPv6 address space available to perform these types of experiments. | <urn:uuid:40f6c4e3-b06e-4abe-9649-9161c2e5299e> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=1312796&seqNum=6 | 2024-09-19T20:39:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00733.warc.gz | en | 0.939784 | 6,277 | 3 | 3 |
The Physics Behind Fiber Optics
A fiber-optic cable is composed of two concentric layers, called the core and the cladding, as illustrated in Figure 3-1. The core and cladding have different refractive indices, with the core having a refractive index of n1, and the cladding having a refractive index of n2. The index of refraction is a way of measuring the speed of light in a material. Light travels fastest in a vacuum. The actual speed of light in a vacuum is 300,000 kilometers per second, or 186,000 miles per second.
Figure 3-1 Cross Section of a Fiber-Optic Cable
The index of refraction is calculated by dividing the speed of light in a vacuum by the speed of light in another medium, as shown in the following formula:
Refractive index of the medium = [Speed of light in a vacuum/Speed of light in the medium]
The refractive index of the core, n1, is always greater than the index of the cladding, n2. Light is guided through the core, and the fiber acts as an optical waveguide.
Figure 3-2 shows the propagation of light down the fiber-optic cable using the principle of total internal reflection. As illustrated, a light ray is injected into the fiber-optic cable on the left. If the light ray is injected and strikes the core-to-cladding interface at an angle greater than the critical angle with respect to the normal axis, it is reflected back into the core. Because the angle of incidence is always equal to the angle of reflection, the reflected light continues to be reflected. The light ray then continues bouncing down the length of the fiber-optic cable. If the angle of incidence at the core-to-cladding interface is less than the critical angle, both reflection and refraction take place. Because of refraction at each incidence on the interface, the light beam attenuates and dies off over a certain distance.
Figure 3-2 Total Internal Reflection
The critical angle is fixed by the indices of refraction of the core and cladding and is computed using the following formula:
qc = cos1 (n2/n1)
The critical angle can be measured from the normal or cylindrical axis of the core. If n1 = 1.557 and n2 = 1.343, for example, the critical angle is 30.39 degrees.
Figure 3-2 shows a light ray entering the core from the outside air to the left of the cable. Light must enter the core from the air at an angle less than an entity known as the acceptance angle (a):
qa = sin1 [(n1/n0) sin(qc)]
In the formula, n0 is the refractive index of air and is equal to one. This angle is measured from the cylindrical axis of the core. In the preceding example, the acceptance angle is 51.96 degrees.
The optical fiber also has a numerical aperture (NA). The NA is given by the following formula:
NA = Sin qa = ?(n12 n22)
From a three-dimensional perspective, to ensure that the signals reflect and travel correctly through the core, the light must enter the core through an acceptance cone derived by rotating the acceptance angle about the cylindrical fiber axis. As illustrated in Figure 3-3, the size of the acceptance cone is a function of the refractive index difference between the core and the cladding. There is a maximum angle from the fiber axis at which light can enter the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this maximum angle is the NA of the fiber. The NA in the preceding example is 0.787. Fiber with a larger NA requires less precision to splice and work with than fiber with a smaller NA. Single-mode fiber has a smaller NA than MMF.
Figure 3-3 Acceptance Cone
The amount of light that can be coupled into the core through the external acceptance angle is directly proportional to the efficiency of the fiber-optic cable. The greater the amount of light that can be coupled into the core, the lower the bit error rate (BER), because more light reaches the receiver. The attenuation a light ray experiences in propagating down the core is inversely proportional to the efficiency of the optical cable because the lower the attenuation in propagating down the core, the lower the BER. This is because more light reaches the receiver. Also, the less chromatic dispersion realized in propagating down the core, the faster the signaling rate and the higher the end-to-end data rate from source to destination. The major factors that affect performance considerations described in this paragraph are the size of the fiber, the composition of the fiber, and the mode of propagation.
The power level in optical communications is of too wide a range to express on a linear scale. A logarithmic scale known as decibel (dB) is used to express power in optical communications.
The wide range of power values makes decibel a convenient unit to express the power levels that are associated with an optical system. The gain of an amplifier or attenuation in fiber is expressed in decibels. The decibel does not give a magnitude of power, but it is a ratio of the output power to the input power.
Loss or gain = 10log10(POUTPUT/PINPUT)
The decibel milliwatt (dBm) is the power level related to 1 milliwatt (mW). Transmitter power and receiver dynamic ranges are measured in dBm. A 1-mW signal has a level of 0 dBm.
Signals weaker than 1 mW have negative dBm values, whereas signals stronger than 1 mW have positive dBm values.
dBm = 10log10(Power(mW)/1(mW)) | <urn:uuid:bb78d0fb-0698-44d1-947f-81a1773946f6> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=170740&seqNum=3 | 2024-09-19T19:53:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00733.warc.gz | en | 0.906583 | 1,230 | 4.03125 | 4 |
The recent news of a cyberattack on a water treatment plant carried out by a remote perpetrator came as a shock to organizations around the world.
Earlier this month, an unauthorized threat actor had remotely accessed the plant’s control systems via TeamViewer and used it to increase the amount of sodium hydroxide (lye) in water to dangerously higher levels. Fortunately however, a vigilant operator at the plant identified this anomalous activity in real time and blew the whistle internally to prevent any potential damage. While the incident is still under investigation, security analysts across the globe have unanimously agreed on the fact that poor access controls and security hygiene have paved the way for this incident.
To put things in perspective, there was no sophisticated or complex attack strategy involved in this incident; the attacker was able to breach the public infrastructure by simply taking advantage of the treatment plant’s lax security practices.
The silk route to privileged information
Attackers do not always need to design advanced hacking algorithms to carry out their plans; sometimes they simply pick stolen or compromised credentials from the dark web to hack into critical networks. They may also use simple techniques, such as phishing, keylogging, and brute forcing to gain access to their target machines.
While it is true that attack methods are rapidly evolving, it’s more often misuse of administrative privileges and weak or stolen credentials that are enough to breach any critical infrastructure. Let’s take the attack on the Florida water treatment plant for example—all it took the unidentified perpetrator was one unprotected password to access and handle the control systems remotely.
With work from home being a prevailing necessity among the global workforce, corporate VPNs and privileged remote sessions are the only way through which employees can access their corporate resources. However, with remote work growing popular across the globe, there has also been a significant surge in the number of remote-session-based attacks, where cyber criminals break into critical infrastructure using compromised credentials. Since the credentials are legitimate, attackers can mimic legitimate users to avoid being detected.
Simply put, it is often the known, neglected, and underestimated vulnerabilities that provide cybercriminals with an opportunity to exploit the administrative access to privileged resources. Time and again, incidents like this prove that when passwords are stored in secure vaults and are subject to standard security practices, the chances of getting hacked are far lower.
The Goldilocks approach to proactive cybersecurity
Security is not a one-time process; it has to be approached and improved holistically. While it’s crucial to stay on top of threats by employing advanced defence controls, it is equally imperative to consistently ensure that the often ignored or neglected fundamental elements of security (read credentials) are fortified. This involves following a certain set of basic security hygiene, such as:
Ensuring and mandating strict password policies
Including multi-factor authentication controls
Securing privileged credentials in encrypted databases
Monitoring remote user sessions in real time
Identifying and terminating suspicious user activities
Periodic vulnerability scanning and patching of endpoints
Poor password practices, such as reusing and sharing critical credentials, are not uncommon and could open several security loopholes for attackers to exploit. Manual management and tracking of privileged credentials using spreadsheets is not just cumbersome, but also not reliable owing to the fact that one malicious or ignorant insider is all it takes to expose the credentials to criminals. Furthermore, remote sessions, when accessed by unauthorized users, could open the floodgates to sensitive information worth hundreds of millions of dollars.
That said, it’s imperative for organizations to employ sound privileged access security controls to safeguard access to sensitive information systems and monitor live remote sessions. This can be achieved by investing in a reliable privileged access management (PAM) solution that automates the mundane tasks of:
Discovering, consolidating, and storing privileged passwords in secure vaults.
Automatically resetting passwords based on existing policies and rotating passwords after every one-time use.
Assigning the least privileges possible to normal users and elevating their privileges if and when required.
Enforcing multi-factor authentication controls to authorize access to privileged resources.
Establishing a request-release workflow to validate user requirements before providing them with access to critical resources.
Monitoring remote user sessions in real time, terminating suspicious sessions, and revoking user privileges upon expiration of their sessions.
In addition, PAM solutions can proactively aid in eliminating the silos and monotony associated with access management controls. They provide effective automation to streamline credential and access security workflows, which allows IT admins to save their time and efforts for more important tasks.
How does ManageEngine help?
ManageEngine PAM360 is a unified, enterprise-grade privileged access management solution designed to help IT teams centralize the management and security of privileged credentials, and secure access to privileged remote sessions, such as databases, servers, endpoints, and more. PAM360 is secure by design, and complies with the mandatory requirements of Federal Information Processing Standards Publication (FIPS) 140-2. The salient features of our PAM solution include, but are not limited to:
Enterprise password vaulting and management
PAM360 enables periodic auto-discovery and vaulting of proprietary enterprise entities, such as passwords, documents, digital signatures, SSH keys, TLS/SSL certificates, and more. All credentials are stored in a secure repository that is encrypted using the advanced AES-256 algorithm. The solution also offers options for password admins to reset and rotate passwords periodically based on existing password policies, which helps in eliminating unauthorized access.
Secure remote access to privileged information systems
With PAM360, users can launch direct, one-click remote access to privileged systems, like servers, applications, databases, and network devices without exposing credentials to end users. Our solution provides an encrypted, agentless gateway for initiating RDP, VNC, and SSH sessions with bidirectional secure file transfer capabilities.
Security by design
PAM360 is designed to echo customer data security and offers military-grade protection to end-user data. Our product includes built-in multi-factor authentication controls, where users will be required to authenticate through two successive stages to access the web interface. Further, we provide integrations with top single sign-on (SSO), multi-factor authentication, and identity management solutions to enable seamless and secure logins.
Cutting-edge privileged session management
PAM360 comes with built-in mechanism to monitor, record, shadow, and playback remote user sessions, which not only aids in eliminating blind spots and bad actors, but also provides support for forensic investigation via comprehensive audit trails. Learn more.
Advanced user behavior analytics and event correlation
PAM360 provides contextual integration with security information and event management (SIEM) and IT analytics tools to consolidate and correlate privileged event data with other events across the enterprise. This enables IT teams to build basic user behavior patterns to spot and block malicious actors whose behavior deviates from the baseline pattern. Additionally, security teams can leverage this data to eliminate known vulnerabilities and make data-driven security decisions.
All in all, this cyberattack on a public infrastructure’s control systems just highlights that security breaches not only cost organizations their reputations alongside hefty penalties, but could potentially put many innocent lives at risk. Therefore, this is corroborating evidence of why organizations, especially those that serve the public’s interest, need to tighten their security infrastructure using a bottom-up approach. | <urn:uuid:f9f425da-3243-414b-8a28-33d5e3873cfa> | CC-MAIN-2024-38 | https://blogs.manageengine.com/corporate/manageengine/pam360/2021/02/17/cyberattack-on-floridas-water-treatment-plant-what-it-means-to-global-organizations.html | 2024-09-08T22:52:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00833.warc.gz | en | 0.928422 | 1,527 | 2.875 | 3 |
In this, the next article in our ‘Storage Basics’ series, we take a look at a technology called Network Attached Storage (NAS).
The most widely used method of storage on today’s networks is direct attached storage (DAS). In a DAS configuration, storage devices such hard disks, CD-ROM drives and tape devices are attached directly to the system through which they are accessed. These systems are normally a network server running an operating system such as Microsoft Windows, Novell NetWare, Linux or Unix. The term ‘direct’ is used to describe the connection between device and server because it is exactly that. The connection is normally achieved using a storage device interface such as Integrated Drive Electronics, or more commonly in a server environment via the Small Computer Systems Interface (SCSI). DAS devices can be physically inside the server to which they are attached or they can be in external housings that are connected to the server by means of a cable. Of the two device interface standards discussed, only SCSI supports external devices in this way.
DAS is a very mature technology that can be implemented at relatively low cost. It does, however, have it’s drawbacks. First is that the storage devices must be accessed via the server, thereby requiring and using valuable system resources. There is also the issue that, by placing the information on a server system, a licensed connection is needed to access the data and last, and perhaps least, it restricts the amount of disk space that can be used.
The solution to all of these problems is to take the storage devices away from the server and connect them directly to the network media. This is where NAS comes in. Before we continue our discussion, however, we should just point out that you must not confuse NAS with a storage area network (SAN). While both allow storage devices to be moved away from the server, SANs are mini networks dedicated to storage devices, while a NAS device is simply a storage subsystem that is connected to the network media.
NAS devices are very streamlined and dedicated to a single purpose; make data available to all clients in a heterogeneous network. Because NAS devices are dedicated to a single purpose, their hardware components, software and firmware are tightly integrated leading to more reliability than a traditional file server. With NAS devices, application incompatibilities that can cause a system to crash are a thing of the past, and with fewer hardware devices there is less to go wrong on that front as well.
NAS devices operate independently of network servers and communicate directly with the client, this means that in the event of a network server failure, clients will still be able to access files stored on a NAS device. The NAS device maintains its own file system and accommodates industry standard network protocols such as TCP/IP and IPX/SPX to allow clients to communicate with it over the network. To facilitate the actual file access, NAS devices will accommodate one, a couple, or all of the common file access protocols such as SMB, CIFS, NCP, HTTP and NFS.
One of the major considerations for file serving is of course file system security. NAS devices tackle this issue by either providing file system security capabilities of their own, or by allowing user databases on NOS to be used for authentication purposes. By providing multiple authentication measures, the flexibility of NAS solutions is further enhanced.
Apart from those already discussed, NAS also has other benefits. NAS allows devices to be placed close to the users who use them. Not only can this have the effect of reducing overall network traffic, it also allows users to physically access the NAS device if appropriate. Perhaps the best example of such a situation might be a CD Jukebox NAS system. Users could swap CD’s in and out of the jukebox to make them available on the network. Although there may still be security issues that need addressing, such a situation is far safer than giving a user access to the server room to change the contents of a CD jukebox.
Perhaps one of the biggest advantages of NAS is that it offers platform independent access, an increasingly common consideration in today’s heterogeneous networking environments. Because of the fact that many environments now use more than one operating system platform, NAS provides a mechanism that allows users to access data irrespective of what NOS is used to authorize them on the network.
One common misconception about NAS is that it is considerably faster than DAS, which is not the case. In terms of data retrieval from a storage device, the bottleneck is rarely the speed of storage devices or the server to which they are attached. Far more likely is that the speed of the network is the restricting factor. Consider that storage device throughput is generally measured in Megabytes per second whereas network media is measured in Megabits per second and you’ll see what we mean.
So how easy is it to start using NAS? Well, easier than you might think. NAS devices from a number of companies now offer a great way for you to expand your storage infrastructure. Starting from just a few hundred dollars and going up from there, NAS devices are available as free-standing or rack-mounted units that can accommodate storage devices of all types and all capacities. Many NAS devices also incorporate technologies such as RAID and provide UPS capabilities as well. When you need a storage solution that is ‘outside of the box’ but you don’t want to get involved with the technology and expense of a SAN, NAS is the way to go. | <urn:uuid:96ca81bd-9e21-4a2a-bef6-104e3105a47b> | CC-MAIN-2024-38 | https://www.enterprisestorageforum.com/hardware/storage-basics-network-attached-storage/ | 2024-09-10T03:47:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00733.warc.gz | en | 0.958238 | 1,124 | 3.375 | 3 |
A router has learned three possible routes that could be used to reach a destination network. One route is from EIGRP and has a composite metric of 20514560.
Another route is from OSPF with a metric of 782. The last is from RIPv2 and has a metric of 4. Which route or routes will the router install in the routing table?
Click on the arrows to vote for the correct answer
A. B. C. D. E.E
The router will use a process called "route selection" or "route determination" to decide which route(s) to use to reach the destination network. The process works by comparing the metrics of each available route, and selecting the route with the lowest metric.
In this case, there are three routes available, each learned from a different routing protocol. The metrics for each route are:
Since RIPv2 has the lowest metric, it would be the preferred route for the router to use. However, it is important to note that this is only the case if all three routes are equally viable (i.e., have the same administrative distance).
Administrative distance is a value assigned to each routing protocol that indicates how trustworthy it is. For example, a directly connected route has an administrative distance of 0, since it is the most reliable source of information about a network. In this case, we don't have any information about the administrative distances of the three protocols, so we can assume that they are all equal.
If that is the case, then the answer would be option C: the OSPF and RIPv2 routes. OSPF has a lower metric than EIGRP, so it would be preferred over the EIGRP route. However, since RIPv2 has an even lower metric than OSPF, it would be the route that is installed in the routing table.
It is also worth noting that it is possible for multiple routes to be installed in the routing table, if they have the same lowest metric. In this case, both the OSPF and RIPv2 routes would be installed in the routing table. | <urn:uuid:4acf83fc-81bb-456f-a8c3-1a2b22da6ee8> | CC-MAIN-2024-38 | https://www.exam-answer.com/networking-cisco-200-125-router-routing-table | 2024-09-10T04:49:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00733.warc.gz | en | 0.963438 | 435 | 3.28125 | 3 |
In today's fast-paced digital age, businesses are constantly seeking ways to enhance efficiency and productivity. Employee productivity is a key metric that determines the overall success of an organization. It refers to the output produced by an employee or team for a given amount of input, such as time or resources. There are several key metrics that can be used to measure productivity, such as output per hour, sales generated per employee, or the number of tasks completed within a deadline.
A strong relationship exists between employee satisfaction and productivity. Satisfied employees are more engaged in their work, take less sick leave, and are more likely to go the extra mile. In contrast, dissatisfied employees are often disengaged, unproductive, and more likely to leave the company. There are many factors that can contribute to employee satisfaction, such as a positive work environment, competitive compensation and benefits, and opportunities for growth and development.
Technology plays a pivotal role in driving productivity in today's workplace. By leveraging the right tools and resources, organizations can empower employees to work smarter, not harder. For example, communication and collaboration tools can help employees stay connected and share information more easily. Productivity tools can help employees manage their time, automate tasks, and stay organized. And cloud-based technologies can provide employees with access to their work files and applications from anywhere, at any time.
The impact of improved productivity on business performance can be significant. When employees are more productive, they can produce more output in less time. This can lead to increased revenue, profitability, and market share. Additionally, improved productivity can free up resources that can be reinvested in other areas of the business, such as research and development or marketing.
Assessing Your Organization's Productivity
Before implementing any IT-driven productivity initiatives, it is important to first assess your organization's current productivity levels. This can be done through a variety of methods, such as conducting a productivity assessment, identifying productivity bottlenecks and challenges, analyzing employee feedback and needs, and benchmarking against industry standards.
A productivity assessment is a systematic process of evaluating an organization's efficiency and effectiveness. It typically involves collecting data on employee activity levels, output levels, and time spent on different tasks. Once the data is collected, it can be analyzed to identify areas where productivity can be improved.
Productivity bottlenecks are any factors that impede or slow down the progress of work. Common productivity bottlenecks include inefficient processes, inadequate technology, and poor communication. By identifying these bottlenecks, organizations can take steps to address them and improve overall productivity.
Employee feedback is a valuable source of information about productivity challenges. By soliciting feedback from employees, organizations can gain insights into the factors that are hindering their productivity. This feedback can then be used to develop and implement solutions to address these challenges.
Benchmarking is the process of comparing your organization's performance to that of other organizations in your industry. By benchmarking your productivity against industry standards, you can identify areas where you are falling short and areas where you are excelling. This information can be used to set goals for improvement and track your progress over time.
Ultimately, the goal is to create a workplace where employees are empowered to work efficiently, creatively, and collaboratively. By investing in productivity initiatives, organizations can not only boost bottom-line results but also foster a more engaged and satisfied workforce.
24hourtek, Inc is a forward thinking managed service provider that offers ongoing IT support and strategic guidance to businesses. We meet with our clients at least once a month to review strategy, security posture, and provide guidance on future-proofing your IT. | <urn:uuid:9289656e-48a0-41e1-b6a6-86c7bacd80bb> | CC-MAIN-2024-38 | https://www.24hourtek.com/blog/improving-productivity-in-the-workplace-with-technology | 2024-09-11T09:53:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00633.warc.gz | en | 0.942966 | 740 | 2.640625 | 3 |
Healthcare and AI Security Use Case
- Data privacy challenges while implementing AI applications
- Sharing medical data
- Protecting cross-border data transfers of personal data
- Healthcare regulations and compliance (HIPAA, GDPR)
- Data breaches
The Current Status
Multiple hospitals, for example, may need to share MRI data with research institutions. In this case, a "man in the middle" (the hacker) sits between the hospital and the research center, waiting for that data to appear and breach it at the appropriate time.
The use of AI in healthcare is rapidly expanding for the use of medical devices and other technologies. Healthcare is becoming more automated in order to improve efficiency (for both physicians and medical facilities), as medical applications commonly use AI as a diagnostic or treatment advisor to medical practitioners. However, combining healthcare and AI can be a double-edged sword: the more data you need to be accurate, the more vulnerable you are. For cybercriminals, this is as simple as it gets. If there is a data breach, surgeons will be unable to accurately predict MRI results, and patients will suffer greatly.
The solution HUB Security offers is the Secure Compute Platform, built to secure health data in AI-driven applications across healthcare, so doctors can make faster and more accurate diagnoses. HUB Security utilizes a new security paradigm centered on confidential computing to create a secure enclave for AI models and data, that brings a competitive advantage to healthcare providers working with machine learning and AI. By providing secure, isolated environments to protect the integrity and privacy of the AI models and data, this approach also allows multi-party analytics and collaboration. Protect medical data and applications with the Secure Compute Platform | <urn:uuid:fb44229a-7814-4175-87b2-c212f4ff621f> | CC-MAIN-2024-38 | https://blog.hubsecurity.com/blog/healthcare-and-ai-security | 2024-09-12T16:32:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00533.warc.gz | en | 0.934638 | 345 | 2.671875 | 3 |
Redundant Array of Independent Disks (RAID) takes multiple disk drives and creates arrays that are resilient and highly available by mirroring and striping data across them. It also builds in the means to recover from disk failure using parity data.
The different ways that mirroring, striping and parity are used defines the different RAID levels. Processing is required to carry out those actions, and that can take place on the host server’s OS or in the storage array or controller. This is also called Hardware RAID vs Software RAID.
With Hardware RAID, the processing work is done on a controller card located in the system. This avoids added load on the system’s processor or buses and allows for advanced features such a higher levels of RAID configuration and the addition of a spare pool, in case of failure. Hardware RAID, because of the additional controller card, tends to be more expensive then software RAID. Hardware RAID can be found on the BNAS line of systems.
Software RAID uses disks attached directly to the system and the processing power of the motherboard to create the RAID. All you need to do is connect the drives and configure the RAID level you want. Because the processing for the RAID is carried out on the motherboard, this could slow down the RAID calculations and other operations that are carried out on the system.
RAID 0 and RAID 1 place the lowest overhead on software RAID but adding the parity calculations present in other RAID levels is likely to create a bigger impact on performance. Software RAID can be found on the NetSwap line of our products.
Software replication is the act of taking the data off one disk and copying it to another. Unlike RAID, there is no parity or striping and no built-in failure protections found in certain RAID configurations. It is a cost-effective solution with a high degree of flexibility, depending on needs. Because it is software based, once again the load will be placed on the processor and its buses. However, because there are not necessarily calculations involved, it is a much lower impact on the system than RAID. Software replication can be found on our RNAS line of products via our High-Sync software. | <urn:uuid:35469d37-7853-4543-8564-33dc991cdaa7> | CC-MAIN-2024-38 | https://www.high-rely.com/2019/06/24/hardware-raid-software-raid-and-software-replication-the-difference/ | 2024-09-13T21:34:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00433.warc.gz | en | 0.924697 | 439 | 2.90625 | 3 |
Power Line Communication (PLC) is the communication technology that uses the existing public and private wiring for transmission of signals. PLC signals and high-speed data are transmitted over low-voltage power lines. This means that with just power cables running to an electronic device, users can power it up and retrieve data from it.
Over the past few years, signal processing technology saw a rapid rise in the development of modem chips that are able to overcome any transmission difficulties. This includes the sending of communication signals over electrical power lines. In addition, it supports PLC technology to allow Internet access through transmission lines. This technology is referred to as Broadband over Power Line (BPL).
What is Power Line Communication?
Power Line Communication works the same way as any other electric communication system. It carries data using the same existing network of wires from one end to the other end. In other words, PLC uses the same conductor as AC electric power transmission or electric power distribution to customers.
Most PLC technologies and companies try to limit themselves to only one type of wires. These are premise wiring that can be found within a single building. In rare cases, PLC can cross between two levels like premise wiring and distribution network. Generally speaking, there are four types of Power Line Communication:
1. In-House networking: By using In-House power wiring, users are ensured high-speed data transmission.
2. Broadband over Power Line: With outdoor mains power wiring, broadband internet access can be offered to customers.
3. Narrowband In-House applications: Through In-House power mains, low bit rate data services like home automation and intercoms can be controlled and used for communication.
4. Narrowband outdoor applications: For automatic meter reading and remote surveillance or control, the narrowband outdoor application can be used.
How does Power Line Communication works?
Like other communication technologies, PLC also consists of the sender and receiver. The sender modulates the data that is to be sent through a communication medium, while the receiver demodulates the data for further use. Apart from sending signals for communication, PLC allows users to control and monitor all the connected devices to the power line. This is possible because everything is implemented in the same wiring system.
PLC is widely used for transmitting radio programs, control of switching mechanisms by the utility company, transmission line protection, and automatic meter reading. PLC signal comes with low frequency and high frequency. Low frequency is anywhere from 3KHz to 150Khz, meaning data transmission is at low bit rate but longer range. High frequency is from 1MHz to 100MHz and allows transmission in broadband, but reduces the range.
The advantages of PLC are low implementation cost as it requires no new wires and large reach, which enables communication with hard-to-reach nodes like underground structures and metal walls. Disadvantages of PLC are low transmission speed, sensitivity to disturbance, and the high price of capacitors and inductors.
What is Broadband over Power Line?
Broadband over Power Line (BPL) is, in fact, a method of Power Line Communication. The technology allows for high-speed digital data transmission over the public electric power wiring. BPL uses higher frequencies and a wider frequency range to provide high-rate communication. A significant number of components make up the power distribution grid. All of them are aimed at delivering electricity to customers. Recent technological advancements have led to the development and implementation of a new system that made it possible to deliver broadband services. These systems are comprised of:
• Access BPL
• In-House BPL
Access BPL uses electrical transmission lines to deliver broadband to the customer’s home. It relies on injectors, repeaters, and extractors to deliver the high-speed broadband service. In-House BPL is broadband access within a building using the electric lines of the structure to provide the network infrastructure. Sometimes it can be a combination of both.
The biggest difference between the two is that In-House BPL utilizes the electric wiring in a privately owned building, while Access BPL uses electric power lines owned, operated, or controlled by an electricity service provider. As mentioned previously, BPL technology is designed to provide short-distance communication solutions.
How does Broadband over Power Lines work?
At a high level, the BPL network consists of three key segments: the backbone, the middle mile, and the last mile. BPL vendors are primarily seeking to address the last mile segment in order to successfully make way into customers’ homes. From the user perspective, BPL works by sending high-speed data along medium or low voltage power lines into the customer’s home. The signal traverses the network either through the transformers or by using bridges and couplers. The user only needs to plug an electrical cord from BPL modem into any electrical outlet, then plug an Ethernet cable into an Ethernet card on their PC.
Where is the technology applied?
Power Line Communication is mainly used for telecommunication and telemonitoring between electrical substations. Additionally, it is widely used in technologies like Smart Grid, Advanced Metering Infrastructure, and micro-inverters. Medium frequency PLC is used in homes to remote control lighting and appliances without installation of additional wiring. If the technology continues to evolve and is presented to wider audiences, it will have more adaptation and application. It could be used for traffic light control, have industrial application for irrigation control, machine-to-machine applications like vending machines, telemetry applications like offshore oil rigs, transport applications like electronics in cars, trains, and many more.
BPL opens up exciting possibilities for the future and is currently used in all smart homes or smart rooms. These homes, or even hotel rooms, have various appliances that are switched on and off automatically. However, wider usage is still uncommon. Access BPL failed to gain momentum in various countries that tried to implement it on a larger scale. Failure is attributed to limited reach, low bandwidth, usage of traditional broadband, routers and modems, and BPL incompatibility with mobile devices.
Companies at the forefront of this technology
Power Line Communication market is still growing, and major driving factors behind it are cost-effective installation, enabling wide coverage by using existing electricity distribution network, growing deployment of smart grids, and high penetration of Broadband over Power Line communication.
Key market players and companies at the forefront of this technology are:
• Siemens AG from Germany
• NETGEAR from the United States
• ABB from Switzerland
• AMETEK from the United States
• Schneider Electric from France
• General Electric from the United States
• Hubbel Power Systems from the United States
• TP-Link Technologies from China
• D-Link from Taiwan
• Landis+Gyr from Switzerland
• Belkin International from the United States
• Devolo from Germany
• Zyxel Communications from Taiwan | <urn:uuid:0b0d0377-7b7f-47ca-a354-d87f291db895> | CC-MAIN-2024-38 | https://4imag.com/everything-you-need-to-know-about-power-line-communication/ | 2024-09-18T21:36:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00033.warc.gz | en | 0.922602 | 1,429 | 3.546875 | 4 |
SOA Patterns > Service Implementation Patterns > Redundant Implementation
Redundant Implementation (Erl)
How can the reliability and availability of a service be increased?
A service that is being actively reused introduces a potential single point of failure that may jeopardize the reliability of all compositions in which it participates if an unexpected error condition occurs.
Reusable services can be deployed via redundant implementations or with failover support.
The same service implementation is redundantly deployed or supported by infrastructure with redundancy features.
Extra governance effort is required to keep all redundant implementations in synch.
Having redundant implementations of agnostic services provides fail-over protection should any one implementation go down. | <urn:uuid:e5157e57-be7b-4f17-9c77-a4b0bd6e0ac1> | CC-MAIN-2024-38 | https://patterns.arcitura.com/soa-patterns/design_patterns/redundant_implementation | 2024-09-18T21:36:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00033.warc.gz | en | 0.860859 | 137 | 2.546875 | 3 |
This video shows a volume rendering of a test simulation for the Eagle project. The intensity corresponds to the density of cosmic gas, whilst color encodes its temperature. Red roughly corresponds to 100,000 degrees Kelvin (considered ‘warm’ by astronomers), whilst white corresponds to 10-100 million degrees Kelvin.
The core aim of the EAGLE project is to generate a computational model of the evolution of the Universe, starting from the Big Bang and evolving to the present day some 14 billion years later. The model will examine not only the formation and evolution of galaxies, but also their complex interactions with the gas that surrounds them and occupies the vast regions of space between them. In contrast to other simulation projects, prior to its initiation in late 2012, EAGLE was preceeded by an intensive 2 year development & testing phase designed to ensure that, when complete, the simulation will yield an accurate representation of the present-day Universe that we observe around us.
The visualization was created by Rob Crain (Leiden Observatory, NL) and Jim Geach (Herts University, UK) using their Typhoon toolkit. | <urn:uuid:917b36d1-74b7-4a19-a2f0-0ce7dc142856> | CC-MAIN-2024-38 | https://insidehpc.com/2013/08/video-eagle-simulations-show-temperature-of-cosmic-gas/ | 2024-09-09T06:10:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00033.warc.gz | en | 0.923783 | 229 | 3.3125 | 3 |
To make managing and analyzing a group of related data easier, you can turn a range of cells into tables in Excel (previously known as an Excel list).
How to Use Tables in Excel
Using Excel Tables
A table typically contains related data in a series of worksheet rows and columns that have been formatted as a table. By using the table features, you can then manage the data in the table rows and columns independently from the data in other rows and columns on the worksheet.
*Note: Excel tables should not be confused with the data tables (data table: A range of cells that shows the results of substituting different values in one or more formulas. There are two types of data tables: one-input tables and two-input tables.) that are part of a suite of what-if analysis commands.
To Apply a Table:
- Click inside the data range.
- From the Home Tab in the Styles Group click on Format as Table.
- Click on one of the preformatted options.
- Use the Table Tools to make changes to the Table.
Sorting and filtering
Filter drop-down lists are automatically added in the header row of a table. You can sort tables in ascending or descending order or by color, or you can create a custom sort order.
You can filter tables to show only the data that meets the criteria that you specify, or you can filter by color.
(Drop-down list box: A control on a menu, toolbar, or dialog box that displays a list of options when you click the small arrow next to the list box.)
Formatting table data
You can quickly format table data by applying a predefined or custom table style.
You can also choose Table Styles options to display a table with or without a header or a totals row, to apply row or column banding to make a table easier to read, or to distinguish between the first or last columns and other columns in the table.
Remove a Table
If you chose to remove the Table features from a range of data you can do so by:
- Select the table
- From the Design tab in the Tools Group, click on Convert to Range.
Watch the full training
Watch the full training to learn advanced Excel techniques and best practices. You’ll get a quick overview of Excel’s advanced features and common business use cases. No matter what your experience with Excel is, you’ll leave this training with something new. | <urn:uuid:d64486de-20ab-42e5-99a7-d06ad30f43e9> | CC-MAIN-2024-38 | https://aldridge.com/how-to-use-tables-in-excel/ | 2024-09-14T01:37:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00533.warc.gz | en | 0.835187 | 509 | 3.578125 | 4 |
In the digital age, chatbots have become an integral part of our daily lives, helping us with everything from customer service to online shopping. Among these chatbots is ChatGPT, a powerful tool that boasts an impressive array of superpowers. From natural language processing to machine learning, ChatGPT is a chatbot poised to revolutionize how we communicate. However, with the increasing prevalence of cyber threats, it’s essential to consider whether ChatGPT is trustworthy when it comes to cybersecurity. In this blog post, we’ll explore the capabilities of ChatGPT, its superpowers, and trustworthiness in cybersecurity.
ChatGPT and its Superpowers
Are you ready for an adventure into the world of chatbots?
Let’s start with the basics: a chatbot is an artificial intelligence (AI) program designed to simulate human conversation. Enter ChatGPT, the superhero of chatbots. ChatGPT can understand and respond to human language with remarkable accuracy using natural language processing and machine learning.
But how does it work, you ask? Simple – ChatGPT uses deep learning algorithms to analyze the text and context of a conversation. This allows it to generate intelligent responses tailored to the user’s needs. And that’s just the beginning.
ChatGPT’s superpowers don’t stop there – it also has the ability to learn and adapt to new information in real time, making it an ever-evolving chatbot. With its cutting-edge technology, ChatGPT can provide personalized solutions for all your chatbot needs.
The Pros of Using ChatGPT
With the rise of digital communication, people have come to expect fast and efficient service. ChatGPT delivers just that by providing quick and easy access to information. Gone are the days of waiting on hold for hours or searching endless web pages for answers. ChatGPT allows users to receive instant responses to their queries, saving valuable time and effort.
- Quick Access To Information
With its natural language processing and machine learning capabilities, ChatGPT can analyze and understand the context of a conversation, providing accurate responses to user inquiries. This allows users to get the information they need without the frustration of being misunderstood or receiving irrelevant information.
- 24/7 Availability
ChatGPT is available 24/7, making it a convenient option for users across different time zones. With its round-the-clock availability, users can get the information they need anytime. This is particularly important for businesses that operate in multiple regions and need to provide customer service round-the-clock.
The Cons of Using ChatGPT
While ChatGPT may offer a range of benefits, it’s important to consider the potential drawbacks before entrusting it with your cybersecurity.
- Limited scope
One of the most significant limitations of using ChatGPT is its limited scope. As advanced as its technology may be, ChatGPT still needs to improve its understanding and response to complex queries. It may struggle with tasks that require a deeper understanding of context or nuances in language, leading to inaccurate or irrelevant responses.
- Possible errors
Additionally, like any technology, ChatGPT is not immune to errors. Despite its machine-learning capabilities, it is not infallible and may make mistakes in its responses. This is especially true when presented with queries or requests it has not encountered. While these errors may be minor, they can become a severe issue in sensitive areas such as finance or healthcare.
- Privacy concerns
Another concern with using ChatGPT is the issue of privacy. Chatbots such as ChatGPT rely on access to vast amounts of personal information to provide tailored responses to users. While reputable chatbot developers take measures to protect user data, there is always the risk of data breaches or unauthorized access. This can have severe consequences for individuals and businesses, such as identity theft or loss of sensitive data.
Cybersecurity Risks Associated with ChatGPT
One of the biggest concerns when it comes to using chatbots like ChatGPT is the potential cybersecurity risks they pose. Cybersecurity risks refer to the vulnerabilities and threats that can arise when using digital systems and platforms. While chatbots can provide numerous benefits, they can also be vulnerable to cyber-attacks.
- Vulnerable to cyber-attacks
Chatbots rely on a vast amount of personal data to provide tailored responses to users. This data is stored in servers and can be accessed by hackers if the security measures are inadequate. Additionally, chatbots can be vulnerable to attacks such as phishing, where hackers trick users into providing sensitive information or downloading malware.
- Able to generate malware
There is substantial literature documenting the potential for leveraging ChatGPT for nefarious purposes. While the platform itself is not inherently designed to generate malware. But it has been discovered that certain developers have found ways to circumvent the platform’s protocols to generate highly adaptive and dynamic malware. This capability enables hackers to execute their operations with greater efficiency and efficacy, allowing even novice actors to wield advanced cyberattack methodologies with minimal prerequisite knowledge.
- Can compromise privacy
The use of ChatGPT by organizational employees poses a significant risk to confidentiality. Employees may inadvertently divulge sensitive information about ongoing projects or business activities by submitting queries to the platform. For instance, an inquiry regarding strategies for addressing a cyber incident may inadvertently disclose that the organization is grappling with such an incident. Similarly, technical queries may inadvertently reveal the organization’s interests and direction in the realm of business development. Consequently, the use of ChatGPT in organizational contexts demands heightened caution and discretion to safeguard confidential information.
- Able to produce phishing emails
ChatGPT can fabricate phishing emails that are strikingly authentic and devoid of the usual grammatical errors and suspicious characteristics associated with such attacks. Moreover, the platform’s ability to generate diverse variations based on a given prompt enables it to produce a virtually limitless array of unique and convincing phishing emails. Consequently, there exists a heightened risk of business email compromise (BEC) resulting from the employment of ChatGPT in malicious contexts. The proliferation of such attacks is expected to escalate significantly.
Suggested reading: 11 Tips To Keep Your Business Safe From Cyberattacks (acecloudhosting.com)
How to Protect Yourself
Given the potential cybersecurity risks associated with ChatGPT, taking steps to protect yourself and your information is essential. Here are some simple steps you can take to safeguard your information when using chatbots:
- Check the source
Ensure that you’re using a reputable chatbot provider and that they have a track record of implementing robust security measures. Additionally, only use chatbots that reputable organizations have authorized.
- Use complex passwords.
Avoid using easily identifiable personal information such as your name, birth date, or phone number. Instead, use a combination of letters, numbers, and symbols, and change your password regularly.
- Use two-factor authentication.
2FA adds an extra layer of security to your account by requiring a second form of identification, such as a code sent to your phone and your password. This ensures that your account remains secure even if your password is compromised.
Should You Trust ChatGPT with Your Cybersecurity?
As more and more organizations embrace artificial intelligence (AI) to enhance their cybersecurity posture, the potential benefits of AI are becoming increasingly apparent. However, with these benefits come new challenges and risks, particularly when it comes to the trustworthiness of the AI models themselves. Hence, it is important to ask the question – should you trust your cybersecurity with ChatGPT?
OpenAI’s ChatGPT is an AI language model that can produce responses to natural language queries similar to those of a human being. Its adoption in these instances creates substantial trust and accountability issues, even though it has demonstrated significant potential in various areas, including cybersecurity.
On the one hand, ChatGPT can be used to improve cybersecurity in several ways. For example, it can assist in identifying and mitigating threats by analyzing large volumes of data and generating actionable insights. Moreover, it can be used to train cybersecurity professionals and enhance their threat detection and response skills.
However, there are also legitimate concerns about the use of ChatGPT in cybersecurity. For instance, as noted earlier, it can generate compelling phishing emails, thereby increasing the risk of successful social engineering attacks. Additionally, there is a risk that malicious actors could manipulate the platform to provide misleading or false information, leading to erroneous decisions and actions.
In light of these concerns, taking a cautious and nuanced approach is essential when using ChatGPT in cybersecurity. Organizations must thoroughly evaluate the potential benefits and risks of the platform and implement appropriate safeguards and protocols to mitigate these risks.
While ChatGPT can be a valuable tool in generating AI-generated responses for users, its use must be accompanied by careful consideration of the potential risks and safeguards. Ultimately, the trustworthiness of AI models such as ChatGPT in cybersecurity will depend on the transparency, accountability, and ethical practices of the organizations that use them. | <urn:uuid:a24bba9a-f8f5-4ccb-96d4-3146b241def8> | CC-MAIN-2024-38 | https://www.acecloudhosting.com/blog/how-chatgpt-superpowers-in-conversation/ | 2024-09-16T11:49:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00333.warc.gz | en | 0.928088 | 1,849 | 2.6875 | 3 |
The Northern European newsletters deliver quarterly industry insights to help your business adapt, evolve, and respond—as if on intuition
The use of digital twin and simulation modelling has become an increasingly crucial tool in modern supply chain management to respond to ever-increasing complexity and disruptions. More manufacturers and retailers are embracing the pervasive mission to build digital twins. By creating virtual representations of physical assets, systems, and processes, they facilitate operational enhancements, cost reduction, and risk mitigation. Through the integration of real-time data from diverse sources like sensors and machines, digital twins provide comprehensive, unbiased models of how things behave in the real world. This capability empowers organizations to design, monitor, analyze, and optimize assets and operations in real-time, resulting in more accurate decisions and more efficient operations.
Simulation modelling, on the other hand, is a potent tool that permits organizations to construct virtual representations of their operations and simulate various scenarios based on pre-determined parameters. It encompasses various sophisticated techniques such as Monte Carlo simulation, discrete event simulation, and agent-based modelling. These are integral tools in decision-making for companies like General Motors, Proctor and Gamble, Pfizer, Bristol-Myers Squibb, and Eli Lilly. By utilizing mathematical algorithms and computer simulations, organizations mimic real-world processes and analyze their performance. Highly realistic simulation models, fed with both structured and unstructured data and utilizing advanced analytics, can determine precise outcomes to different scenarios.
Consequently, this relationship between digital twins and simulation modelling is interdependent and mutually beneficial, enhancing the effectiveness and efficiency of both methodologies.
In a multi-site distribution model, digital twin technology enhances supply chain resilience by providing insights into bottleneck identification, inventory positioning and management, transportation routes and mapping, distribution strategies, planning and forecasting.
Businesses are empowered to dynamically adjust these parameters within the virtual environment to optimize resilience and respond effectively to unexpected events or changes in market conditions.
Integrating digital twins into supply chain management improves efficiency, agility, decision-making, and risk assessment. Digital twins visualize and optimize end-to-end supply chain operations, quickly finding and fixing inefficiencies, and bottlenecks. This real-time monitoring and analysis enable fast responses, thus boosting overall supply chain performance and profitability. As digital twins can run many simulations to monitor and study any number of processes, it is possible to experiment with endless design iterations in the virtual world without stopping the production line.
This requires a significant reliance on automated modeling techniques that include hyperparameter tuning, self-supervised learning (SSL) techniques, and model searches for predefined recurrent patterns. In this way, an automated proof-of-concept (auto PoC) can be performed with minimal effort to ascertain whether the volume and quality of data are sufficient to generate a baseline model. Since all validation and reporting processes are automated, this baseline model may subsequently be deployed to the production platform without requiring human intervention. On the one hand, this degree of automation guarantees complete compliance with minimal effort. On the other hand, the “auto PoC” step also has the advantage that it is reiterated every time new or more data comes in, to help deciding on model re-training. Increasing twin automation clearly means envisioning a “twin core product” which is more real-world data-centric than based on time consuming mechanistic models that needs subject matter expert (SMEs) and simulated data not reflecting real world variance and noise.
Digital twin technology provides real-time insights into individual store performance, inventory levels, and demand patterns. By simulating store-specific scenarios, retailers optimize inventory allocation, promotional strategies, loyalty programs, and store replenishment processes, even potentially creating digital twins of customers (DToCs), enhancing customer satisfaction, and maximizing sales performance across their retail network.
Digital twin technology and simulation modelling significantly reduce trial-and-error in supply chain management. Instead of relying on physical changes and evaluations, organizations simulate scenarios virtually, making data-driven decisions that enhance efficiency and reduce costs. By simulating inventory scenarios, optimizing stock keeping unit (SKU) management, and predicting demand fluctuations, the digital twin enables proactive measures to mitigate stockouts even creating specific scenarios for sets of products that have specific and unique requirements or dependencies, improve supply chain resilience.
Digital twins and simulation modelling enhance communication, collaboration, and decision-making in supply chain management. They require extensive data collection, integration, validation, and continuous improvement to accurately reflect and optimize physical supply chain operations.
Combining digital twin insights with pricing and promotion strategies lets retailers adjust pricing and promotions based on SKU performance and market conditions.
Simulations show how pricing changes affect demand elasticity, allowing retailers to optimize revenue generation while maintaining a competitive position. Retailers also use real-time market feedback and customer behavior data to improve assortment planning, store layout design, and promotion effectiveness.
Implementing digital twin technology and simulation modelling varies in complexity and duration based on factors such as supply chain size, data availability, and modelling accuracy requirements. This process involves data collection, data integration, model development, validation, integration with other systems, and real-world testing, offering long-term benefits in operational optimization and risk mitigation.
By simulating inventory scenarios, optimizing SKU management, and predicting demand fluctuations, the digital twin enables proactive measures to mitigate stockouts (e.g., for sets requiring specific components with an elevated risk of shortage) and improve supply chain resilience. Digital twin implementations rely heavily on data integration and analytics. The people side of the implementation must not be overlooked: Even clear mechanistic models, that do not technically suffer from the black box aspect of artificial intelligence (AI), can still feel like a black box to employees not properly prepared for a transformation of these characteristics. Actively mitigating people-related risks such as mistrust, misalignment and misinformation will be paramount to the success of the implementation. Therefore, these are key considerations for successful implementation:
Industry leaders have successfully leveraged digital twins to enhance sustainability within their supply chain. As Springler has reported, Nestle utilized digital twin technology to reconfigure distribution networks, resulting in significant cost and emission reductions. Microsoft achieved substantial cost and emission savings by optimizing its supply chain using digital twin solutions, enjoying a 10% savings in cost and carbon footprint. In addition, Amazon Web Services (AWS) found in a recent article that digital twins help to reduce the greenhouse gas emissions and carbon footprint of an existing building by up to 50%, alongside cost savings of up to 35%. By integrating carbon emission (scope 1,2, and 3) calculations directly into the digital twin’s environment, organizations gain actionable insights to make informed decisions that balance environmental sustainability with supply chain efficiency. Similarly, as reported in MIT Sloan Management review, the home furnishings company Ikea enables digital twins to simulate the performance of new materials in sustainable packaging. Likewise, companies such as Levi’s, BASF and Unilever are reinforcing digital twins with lifecycle assessment (LCA) to track and trace the journey of the materials from the production, to reusing and recycling. Digital twins, especially when coupled with sustainability footprint management software like SAP, is capable of converting noise into signals, insights, and finally actions, which significantly reduce carbon emissions along the end-to-end supply chain.
Digital twin implementations rely heavily on data integration and analytics. Despite its transformative potential, implementing digital twins and simulation modelling in supply chain management poses challenges. These include data quality, privacy concerns, complexity in data integration, and organizational readiness for digital transformation.
The collaboration between Cognizant and Aston Martin in Formula One, resulting in the transformation to Aston Martin Aramco Formula One (AMF1), positions us as leaders in the field of digital twins. We ensure a scalable technology, based on qualitative and automatic data flows, leveraging TwinOps, a platform consisting of building blocks for data ingestion from multiple sources and testing model releases in an automated fashion using continuous integration and continuous delivery (CI/CD), validation to enable explainability through automated reporting, monitoring scripts for data and model drifts, and pipelines to score models against predefined business KPIs, transformation and storage. With the TwinOps platform in place, there is a sustainable and scalable link between data on one side and digital twins on the other. TwinOps is important not only for scalability, speed, and compliance, but also to get the most out of digital twins. The goal is to connect models to their production environments. Data feeds the models, which offers insights based on simulations. Initially, these are then fed back to a human operator, but eventually such a smart system can adjust parameters itself and optimize the entire end-to-end supply chain management.
In conclusion, digital twin technology and simulation modelling represent a paradigm shift in supply chain management. Their ability to create virtual replicas of physical assets and processes, coupled with advanced analytics and scenario simulations, empowers organizations to optimize operations, reduce costs, and mitigate risks. As technology continues to evolve, the adoption of digital twins and simulation modelling will become increasingly vital for organizations seeking to thrive in the dynamic and complex landscape of modern supply chains.
By optimizing inventory management, production scheduling, and distribution strategies within the digital twin environment, businesses will ensure sufficient product availability, minimize stockouts, and capitalize on peak sales opportunities.
Beyond immediate operational benefits, digital twin technology supports businesses’ continuous improvement efforts by enabling iterative testing of new strategies or innovations within the virtual environment. Businesses evaluate the impact of introducing sustainable materials on cost, logistics, and environmental outcomes before implementation in the physical supply chain.
Maria Nikolaidou, Senior Sustainability Advisor
Udit Arora, Associate Director Supply Chain Planning
Nishtha Sharma, Strategy Consultant
Ana Giacone, Change Management Consultant | <urn:uuid:63b07b8c-297f-473f-aeba-f36085adc27a> | CC-MAIN-2024-38 | https://www.cognizant.com/nl/en/insights/blog/articles/harnessing-digital-twins-and-simulation-modelling-for-strategic-advantages | 2024-09-16T13:06:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00333.warc.gz | en | 0.902407 | 2,000 | 2.5625 | 3 |
I will be mentioning you about the installation steps of a Watchguard brand Firewall device from scratch, but first of all, I would like to mention what Firewall is and what it does for those who do not know or beginners in this essay.
What is a Firewall?
Firewall systems aim to control all incoming and outgoing Network traffic, pass it through certain filters, and stop harmful actions in a Network traffic. In this way, Network security is provided. It is a security mechanism that protects your devices and computers on internal Networks against attacks that may come from other external Networks (Internet), and controls Network traffic between internal and external Networks (LAN & WAN) according to certain rules.
Basically, the Firewall decides whether the packets coming to it on the Network can go to the addresses they need to reach (with predefined rules). Protection is provided by blocking the traffic that does not comply with rules specified on the firewall. Firewall systems are divided into hardware and software based. Software-based Firewall applications are generally installed on operating systems on Client or Server. Hardware-based Firewall devices are systems operating on special hardware. Firewall devices or software basically aim to protect the Network you are in against malicious traffic and attackers that may come from untrusted external Networks (WANs) such as the Internet.
While providing this protection, they control your Internet traffic by processing special rules specified on it. If Firewall detects Network traffic that is against your security policy, it provides a secure layer to prevent it from accessing your internal Network and blocks this traffic. Firewall devices create a special layer through which only permitted traffic can pass, and they work by controlling the level of communication between external Networks (WAN) and your corporate or home Network (LAN).
I will show the basic Firewall setup process on the Firewall with the brand and model of WatchGuard T15 in this essay.
You can access the product information of the WatchGuard T15 series Firewall device on the WatchGuard Firebox T15 page.
You can get information about all WatchGuard products from All WatchGuard Products page.
Firewall Device Connection Configuration
1- I would like to mention how a Firewall is located on the Network and how the device cable connections should be made at the most basic level before proceeding with the installation process. All Firewall devices, regardless of brand or model, basically have two basic ports, LAN and WAN.
LAN Port is used for your Internal Network; WAN Port is used for your external Network.
According to this;
• Assuming you have a Modem, a telephone cable with RJ11 connector used for Internet is plugged in to the RJ11 Port on the Modem.
• It is connected to the WAN Port of your Firewall device with a CAT cable from any of the RJ45 Ports on the Modem.
• After this connection between the modem and the Firewall is made, it is connected to any ports on the switch with a CAT cable through LAN port of your Firewall. All computers and other Network devices in your Internal Network (LAN) are also connected to this Switch and taken under the Firewall security umbrella.
WatchGuard T15 Firewall Setup
After configuring the active device connections, it is time to configure the initial basic setup processes.
2- Thinking that you have a Modem in your hand, the first thing to do is to switch the Modem to Bridge Mode.
NOTE 1: Since Bridge Mode settings will vary depending on the brands and models of modems you use, you are suggested to make a research from Google on how to get your modem into Bridge Mode.
NOTE 2: At this point, the Internet connection will be lost.
3- A second CAT cable must be removed from the switch and attached to the Network Interface Card-NIC Port of the computer through which we will perform the installation.
4- Watchguard Firewall Default IP address is 10.0.1.1/24. This IP address is also defined as Default in the LAN Port of the Firewall. So we need to set an IP address from 10.0.1.0 Network ID to the Network Interface Card-NIC configuration of the computer. I entered the IP address 10.0.1.10/24 for the installation process.
NOTE 3: I suggest you to test the connection by pinging the IP address of 10.0.1.1 upon completing the IP address identification.
5- After all these physical preparation processes are completed, it is time to download the Watchguard System Manager application. The WatchGuard System Manager application is a tool that provides management and configuration without the need for both WatchGuard Firewall device setup and Firewall management Web Portal.
You can download the Watchguard System Manager application through official website of WatchGuard.
5.1- When we run the installation file with the .exe extension that we downloaded, the Setup Wizard will appear. I am starting the installation by selecting our language settings in the Setup Language step in the first window on the wizard and clicking on the Next button.
5.2- Client Software and its sub-option WatchGuard System Manager are selected by default in the Select Components step. Since I do not need the other options for now, I am proceeding the installation process by clicking on the Next button while these two default options are selected.
5.3- The setup process has started in Setup Status step.
5.4- WatchGuard System Manager installation is completed. I am ending the Wizard by clicking on the Finish button.
6- After downloading and installing the Watchguard System Manager application, I am running the application and clicking on the Quick Setup Wizard from the Tools menu.
7- I am starting the configuration process by clicking on the Next button in the Watchguard Quick Setup Wizard window.
8- I am clicking on the option of Yes, my device is ready to be discovered in order that my device can be found by bening discovered and I am starting the discovery process by clicking the Next button.
9- Since I have more than one Network Interface Card-NIC installed on my computer, I am selecting the Network Interface Card that I defined the address of 10.0.1.10/24 and proceeding the discovery process by clicking on the Next button in the Select an Ethernet interface on yor computer step.
10- My Firewall device is running the discovery process in The wizard is searching for the WatchGuard device... step.
11- Discovery process f my firewall device has been completed. Information such as model, version and serial number of my firewall device can be seen on The wizard found this WatchGuard device window.
12- I am entering information such as a meaningful name, in other words a friendly name, for my Firewall device, the name of the place where it will be located and the responsible person in the Add device information step. If you wish, you can also select the Send device feedback to WatchGuard option and send the information about the device to WatchGuard. I prefer not to select this checkbox.
13- I am specifying how to configure the WAN connection type in the Configure the External interface of your device step. In this step;
• PPPoE (Point-to-Point Protocol over Ethernet): This option is the option we specify the User Name and Password, previously defined in your modem (before configuring it to Bridge Mode) and given to you by your ISP.
• Static IP Addressing: This option is the option where we will enter IP address, Subnetmask and Gateway information to be given to you in case you use Metro Ethernet or Radio Link (Radio Link) Internet service.
In this step, I am selecting the PPPoE option and clicking on the Next button and going to the next step.
14- I am entering the User Name and Password information provided to you by your ISP in this step since I chose the PPPoE option in the previous step of Configure the External interface of your device. There is an important detail for the IP address in this area.
14.1- It will not make any difference whether we use Obtain an IP address automatically or Use a static IP address since we are using static IP. Whether we get the IP address automatically or log in manually, we will be using the same IP address as a result.
15- I am entering an IP address definition for the LAN Port of my Firewall device in the Configure the Internal interface of your device step. I entered the IP address 10.10.10.254/24 here. You can enter any IP address you would like.
16- We need to enter a DNS IP address so that the Firewall device can connect to the Internet in the Configure the DNS information step.
16.1- The DNS IP addresses of our ISP will be defined automatically in the event we select the DNS server information is provided by my ISP option.
16.2- You can manually set a DNS Server IP address of your choice in this DNS server information option. Since I get my Internet service by TTNET, I can manually enter the IP addresses 188.8.131.52 and 184.108.40.206, which are TTNET DNS IP addresses, in the menu. You can also enter a different DNS IP address of your own ISP that you get service and / or GOOGLE DNS IP addresses, 220.127.116.11 and 18.104.22.168.
17- We need to perform the activation process of our Firewall device in the Activate the software for your device step. We need to import the activation file for activation by clicking on the Browse... button.
17.1- You need to perform product activation processes on official website of Watchguard before importing the activation file. After completing your activation process, you will be presented with a content containing license information.
17.2- I am selecting and copy all of the content which includes license information, and paste it into a TXT file and save it in a directory on my computer.
17.3- When I go back to the window that opens upon pressing the Browse... button, I am selecting the license file, by reaching to the directory where the license file is, and clicking on the open button and importing. Upon perfoming this action, I am proceeding to the next step by clicking on the Next button.
18- I am proceeding my installation process by clicking on the Next button without making any settings in the WebBlocker Settings step.
19- I am performing the actions of determining passwords required for logging into WatchGuard System Manager for the system accounts for administrative settings. These accounts are Status and Admin accounts.
• Status: This account is the account used for logging into WatchGuard System Manager for Monitoring operations. After logging into WatchGuard System Manager with this account, you can configure WatchGuard System Manager, but it is not an authorized account for saving configuration operations to the Firewall device!
• Admin: This account is the authorized account that will be used to save the configuration operations we need to the Firewall device after logging in with the Status account. I am proceeding my installation process by clicking on the Next button after completing the process of determining the passwords for the accounts of Status and Admin.
20- A summary of the configuration settings I have made appears in the Confirm the configuration for your device step. I am performing the process of saving the configuration settings to the Firewall device by clicking on the Next button.
21- The process of saving the configuration settings to the Firewall device has been completed. I am ending the wizard by clicking on the Finish button.
2- I am clicking on the Connect to Device button to connect to the device as shown in the picture below after the settings are saved.
23- On the screen that appears;
• IP Address or Name: I am entering the IP address of the LAN Port I specified for my Firewall device in this field. This IP address is also my Network's Default Gateway IP address.
• User Name: I am typinh the username of the Status account in this field.
• Passphrase: I am entering the password of the Status account in this field.
I am clicking on the Login button to connect to the device.
24- The connection has been successful. There is information about the status of our WAN and LAN Interfaces and the IP addresses the they have received under Firebox Status. The fact that the Interfaces are green means that the Interfaces are active.
25- I am clicking on the Policy Manager button marked in the picture below to access the Policy settings.
26- We see the default Policies and open Ports when the Policy Manager is opened. Two Policies are important here. These, required for accessing to the Internet, are the HTTP-Proxy associated with Port 80 and HTTPS-Proxy Policies associated with Port 443, and again, by default, all clients in the Network environment are open to access the Internet. Afterwards, restrictions and other Policy configurations can be done according to the needs.
27- It is possible to access the management panel of the Firewall device with the LAN IP address and default port number 8080 through a browser. NOTE 4: The settings made on the management panel are directly saved in the Firmware of the Firewall device. For this reason, I recommend you to use WatchGuard System Manager instead of the Management panel.
28- We can view our WAN and LAN Interfaces and IP addresses from the NETWORK> Interfaces menu on the left.
29- We can see the Policies and open Ports created by default in the menu of FIREWALL> Firewall Policies.
I tried to mention about how to make device connections and setup from scratch for the WatchGuard Firewall device. You can adjust other polict and configuration settings as needed after the installation is complete.
I hope it benefits...
You may submit your any kind of opinion and suggestion and ask anything you wonder by using the below comment form. | <urn:uuid:5a10a5c5-5de6-4da9-9d69-0146b7ab8dcc> | CC-MAIN-2024-38 | https://www.firatboyan.com/en/watchguard-firewall-installation-and-configuration.aspx | 2024-09-16T11:50:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00333.warc.gz | en | 0.90042 | 2,886 | 3.234375 | 3 |
Before you jump in to develop a Bot, it would be a good idea to spend some time in understanding what the Bot would be and how it would help you and your business. This understanding would bring clarity and efficiency to the Bot development.
Chatbots are artificial intelligence systems that the users interact with via text or voice interface. These interactions can be straightforward, like asking a bot about the weather report, or more complex, like tracing missing entry in your bank account. Further, the interactions can be structured with user choosing options from a list of items presented or unstructured freestyle flow similar to a conversation with involving a human agent.
Whatever be the type of user interaction, good design helps in building an efficient bot. A good design allows for answering most of the user queries, anticipates all the conversation flows and expects the unexpected.
Platform Recommendations: The following steps can be considered while designing a Bot:
- Understanding the user needs to set the scope of the Bot. The business sponsors, business analysts, and product owners play an important role in identifying the user requirement by gathering market requirements and assessing internal needs.
- Setting the chatbot goals helps create a well-defined use case. This would involve converting the above-identified scope to a use case. It would be advisable to involve the Bot developer in this phase.
- Designing a chatbot conversation to define chatbot behavior in every possible scenario in its interaction with the user. Simulating conversations go a long way in identifying every possible scenario.
Once the bot capabilities and ideal use case are well-defined, Bot developer can begin the process of configuring bot tasks, define intents and entities, and build the conversational dialog.
Things to keep in mind while designing a chatbot: Try to answer the following questions (some if not all):
- Who is the target audience? Technical help bots targetted for a tech-savvy customer need a different design when compared help bots for a layperson like, say, a bank customer. Hence assessing the target audience is always important.
- What bot persona will resonate the most with this group? This will help define how the Bot will talk and act in every situation.
- What is the purpose of the bot? The goal i.e. the customer query that the Bot needs to address will define the end point of any conversation.
- What pain points would the bot solve? The purpose and scope of Bots can be set by identifying what the Bot will address and when the human agent needs to take over.
- What benefits would the bot provide for us or our customer? The main benefit of using a Bot would be time-saving. The user need not waste their time waiting for a human agent to be available for answering their query. You, as the business owner, need not worry about not being there to cater to all customer needs.
- What tasks do I want my bot to perform? Simulation of user conversation would help identify the tasks that need to be catered by the Bot.
- What channels would the bot “live” in? This would to some extent drive the way the Bot would be presented, the various options available for the chatbot would be limited by the channel/medium it would be used.
- What languages should my bot speak? When catering to a multi-lingual community the language support would be imperative and building the dictionary simultaneously would be useful. | <urn:uuid:ae504530-4d90-4ebf-9d2b-31f00a922c3b> | CC-MAIN-2024-38 | https://developer.kore.ai/v7-1/docs/bots/bot-builder-tool/bot-creation/design/ | 2024-09-17T17:39:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00233.warc.gz | en | 0.926284 | 698 | 2.859375 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.