Last month Gavin gave us an overview of the “Internet of Things” (also known as the “Internet of Everything”) and what it could mean for everyday life. But how far from the revolution are we? We spoke to some experts to find out.

You’ve probably heard it mentioned a few times: soon, everything will be connected to the internet. Phones, of course, already are, as are some cars and some TVs. It makes you wonder what else could possibly be added to the ever-growing network. Well, as it turns out, a lot. Because when they say “everything”, they really mean everything.

Right now it may be hard to see past today’s familiar internet, connected only to devices with easily recognisable interfaces (e.g. screens), but there are bigger plans. Take your fridge, for example: at the moment it only stores your chilled foods, but connected to the internet it could tell you when your products have expired, whether it needs maintenance, or even which items you’re running low on. It sounds a little fanciful, but such appliances are already in development.

As Gavin pointed out in the last post, in the not-too-distant future your phone will probably have made a series of helpful decisions and analyses before you even wake up (traffic conditions, weather, etc.), and your car will already have planned your route to work. The concept may well ignite a global data eruption, helping to develop a world where every item we use knows what to do and how to help us before we do. The Internet of Everything does have its sceptics, but it’s certainly not science fiction: most tech experts see it not as a question of “if” but of “when”. So how close is it?
Look hard enough around you and you’ll see that it’s pretty much already happening: Google Glass has opened the world up to more “Minority Report” kinds of technology; washing machines are being developed that buy detergent automatically when you run out; and there are already toothbrushes that can monitor how you brush your teeth and send the data to your phone.

What will be next? It’s a little hard to tell for those of us in the middle of it, as the network seems to be growing under our feet; most public transport, for example, is WiFi enabled, and devices like smart watches are on the rise. And after that? Wireless cities: yes, cities with wireless internet available wherever you go, even as you walk the streets. Indeed, this development is already underway.

How will it all work? It might surprise you to learn that Britain is at the forefront of the technological advances required to take on the challenge. ARM, a tech company based in Cambridge, UK, is behind a large proportion of the microchips that enabled the smartphone revolution, and it is similar technology that will run the Internet of Everything. ARM’s CEO Simon Segars suggests that the advance will cover anything and everything, not just sleek phones and luxury cars: even street lamps will be connected, updating their operators as and when parts need replacing.

Ask the experts

So where do we stand? When will these gradual changes amount to a more complete version of the Internet of Everything? To find out, Purple WiFi spoke to Christopher Barnatt of ExplainingTheFuture.com, expert on future technologies and Associate Professor of Strategy and Future Studies at Nottingham University Business School:

Q: So thinking about the “Internet of Everything”, we’ve seen phones and cars going online, but what do you think is the next big change we are likely to see?

A: I would highlight three things.
Firstly, and in the next few years, I think we will see a lot of consumer appliances connected to the internet for the purpose of monitoring their electricity use, and even allowing devices like heaters or refrigerators to purchase their own electricity. Secondly, the Internet of Things will include many devices for monitoring our health – everything from sensors that take temperature and blood pressure, through to devices that extract data from pacemakers. Thirdly, and probably 5 to 10 years from now, we will start to see highly flexible screen technologies integrated into clothing, offering the opportunity for clothing to be part of the network – for example, jackets with Facebook walls on them.

Q: How long do you think it will be until pretty much everything we use is part of one giant online network?

A: I think that within 10 years, most of the items we purchase and interact with regularly will have an online presence. However, they will not have an electronic connection to the internet. Rather, advances in vision recognition in particular, and data mining and Big Data technologies in general, will allow the data shadows of most things in our lives to be accurately monitored. For example, items on most supermarket shelves will not be connected directly to the internet, but in-store cameras and vision recognition technology will track their life cycle in the store, then correlate this with financial, location and other data from the purchaser.

Q: What do you think it will mean for very traditional media? Will things like paper be obsolete in a couple of years?

A: Paper will not become obsolete, as has been wrongly predicted for many decades. That said, paper use is finally falling, and will continue to do so as tablets in particular become more ubiquitous. I think the big change will be the increasing use of video media rather than text-based media.
To get another view, we also caught up with Kevin Parker, expert on the future of technology for business:

Q: What’s next in the Internet of Everything?

A: Every walk of life (and even death) already has an app. In the IoE these apps join forces and assist each other in their tasks. In the kitchen, the refrigerator and the pantry will talk to the doctor’s office and recommend the dinner menu based on what ingredients are available and what the doctor has to say about one’s dietary needs. The oven will pre-heat in time for your arrival home, which it will calculate by tracking your location on your commute. Even politics is at risk. National boundaries, especially in Europe, are almost meaningless today. National governments provided a 19th-century, centralized solution to the needs of a society where communication was slow and the population largely uneducated. In the 21st century communication is instant and all the world’s knowledge is in the palm of our hand. Why do we still insist on these arbitrary lines on maps that do not reflect who we are but who we were?

Q: What does it mean for the future of traditional media?

A: Ereaders and iPads are the paper of the future. Most people get their news by device already today. Look at any commuter train carriage: the people with the newspapers are the older generation; everyone else is locked into their tablet, reading, listening and watching the news. Print media cannot compete with the Internet’s ability to let us see what our friends and colleagues are reading.

Q: How will businesses profit from the Internet of Everything?

A: The new hot job title is going to be CQO, Chief Questioning Officer. This person will be responsible for thinking of the right questions to ask and for creating the technology to answer them. Every business, great and small, will be more successful if it delivers better goods and services with greater margins than its competitors. The IoE makes this possible.
With all that data out there, and with everything connected to every other thing creating more data, new truths are awaiting discovery. Data is the new oil: extract it, refine it and fuel your business with it.

And for a final insight we spoke to top business technology futurist and leading futures consultant Jack Shaw:

Q: What do you think we’ll see next in the Internet of Everything?

A: First off, it might be better to talk about the “Internet of Things” rather than the Internet of Everything – it’s more specific. What could happen next is extremely broad: security, retailing, transportation, manufacturing, healthcare, energy, research – all these sectors have major components that should be online. With micro tech moving to nano tech, we’ll see a mass evolution from a disconnected network of “dumb” things to a connected internet of “smart” things. For example, your tennis racket could have a chip in it that monitors your playing and reports back to your computer, or you could even be playing wearing your Google Glass. Healthcare will be interesting. The Internet of Things will help reduce costs in this sector – things like patient monitoring. Having a centralised NHS will mean that Britain can lead the way in this respect. In general I think we may witness the growth of an Internet of Autonomous Things, a network with intelligent agents that make decisions independently of us.

Q: What do you think it means for the future of business?

A: It will affect everything, just like the original internet did when it first came around. Businesses will have to ask themselves where they can reduce costs and better serve their customers by using the fully connected network of things. For starters, right now businesses don’t know where all their computers or hardware are at any one time – they’re constantly losing this stuff. This won’t be an issue once everything is connected. Office buildings, and office processes in general, will become more efficient.
Digital currency could also be something we see emerging in the network, for example Bitcoin. The advantage of Bitcoin is that it has an unchangeable public record, meaning every transaction can be traced. We could get to a stage where business deals are brokered entirely online by an intelligent agent that’s been told to find the best deal of its own accord. Perhaps it could even lead to entirely autonomous corporations that are themselves independent entities. But of course all this will mean that security risks will become even more important to address carefully.
Our interaction with technology could soon be predominantly voice-based. To ask for something out loud and hear the answer is literally child’s play: just take a look at how effortlessly kids use voice assistants. But new technology always means new threats, and voice control is no exception. Cybersecurity researchers are tirelessly probing devices so that manufacturers can prevent potential threats from becoming real. Today, we’re going to discuss a couple of findings that, although of little practical application right now, should be on today’s security radar.

Smart devices listen and obey

More than a billion voice-activated devices are now used worldwide, according to a voicebot.ai report. Most are smartphones, but other speech-recognition devices are fast gaining popularity. One in five American households, for example, has a smart speaker that responds to verbal commands.

Voice commands can be used to control music playback, order goods online, control vehicle GPS, check the news and weather, set alarms, and so on. Manufacturers are riding the trend and adding voice-control support to a variety of devices. Amazon, for example, recently released a microwave that links to an Echo smart speaker. On hearing the words “Heat up coffee,” the microwave calculates the time required and starts whirring. True, you still have to make the long trek to the kitchen to put the mug inside, so you could easily push a couple of buttons while you’re at it, but why quibble with progress? Smart home systems also offer voice-controlled room lighting and air conditioning, as well as front-door locking.

As you can see, voice assistants are already pretty skilled, and you probably wouldn’t want outsiders to be able to harness these abilities, especially for malicious purposes. In 2017, characters in the animated sitcom South Park carried out a highly original mass attack in their own inimitable style. The victim was Alexa, the voice assistant that lives inside Amazon Echo smart speakers.
Alexa was instructed to add some rather grotesque items to a shopping cart and set the alarm for 7am. Despite the peculiar pronunciation of the cartoon characters, the Echo speakers of owners watching this episode of South Park faithfully executed the commands issued from the TV screen.

Ultrasound: Machines hear things people don’t

We’ve already written about some of the dangers posed by voice-activated gadgets. Today, our focus is on “silent” attacks that force such devices to obey voices you can’t even hear. One way to carry out this type of attack is through ultrasound — sound so high it is inaudible to the human ear. In an article published in 2017, researchers from Zhejiang University presented a technique for taking covert control of voice assistants, named DolphinAttack (so called because dolphins emit ultrasound). The research team converted voice commands into ultrasonic waves, with frequencies too high to be picked up by humans but still recognizable by the microphones in modern devices.

The method works because when the ultrasound is converted into an electrical impulse in the receiving device (for example, a smartphone), the original signal containing the voice command is restored. There is no special function in the device that does this; it is simply a side effect of the conversion process, somewhat similar to the way a voice gets distorted during recording. As a result, the targeted gadget hears and executes the voice command, opening up all kinds of opportunities for attackers. The researchers successfully reproduced the attack on the most popular voice assistants, including Amazon Alexa, Apple Siri, Google Now, Samsung S Voice, and Microsoft Cortana.

A choir of loudspeakers

One of the weaknesses of DolphinAttack (from the attacker’s perspective) is its small radius of operation — only about 1 meter. However, researchers from the University of Illinois at Urbana-Champaign managed to increase this distance.
In their experiment, they divided a converted ultrasound command into several frequency bands, which were then played by different speakers (more than 60 of them). The hidden voice commands issued by this “choir” were picked up at a distance of seven meters, regardless of background noise. In such conditions, DolphinAttack’s chances of success improve considerably.

A voice from the deep

Experts from the University of California, Berkeley took a different approach. They surreptitiously embedded voice commands in other audio snippets to deceive Deep Speech, Mozilla’s speech recognition system. To the human ear, the modified recording barely differs from the original, but the software detects a hidden command in it. Have a listen to the recordings on the research team’s website. In the first example, the phrase “Without the data set the article is useless” contains a hidden command to open a website: “Okay Google, browse to evil.com.” In the second, the researchers added the phrase “Speech can be embedded in music” to an excerpt from a Bach cello suite.

Guarding against inaudible attacks

Manufacturers are already looking at ways to protect voice-activated devices. For example, ultrasound attacks could be stymied by detecting frequency alterations in received signals. It would be a nice idea to train all smart devices to recognize their owner’s voice, although Google, having tested this on its own system, warns that such security can be fooled by a voice recording or a decent impersonation. However, there is still time for researchers and manufacturers to come up with solutions. As we said, controlling voice assistants on the sly is currently doable only in lab conditions: getting an ultrasonic loudspeaker (never mind 60 of them) within range of someone’s smart speaker is a big task, and embedding commands in audio recordings is hardly worth the considerable time and effort involved.
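The conversion effect described above can be made concrete with a minimal pure-Python sketch: a low-frequency “command” tone is amplitude-modulated onto an ultrasonic carrier, and squaring the signal (a crude stand-in for a microphone’s nonlinearity) shifts the command back into the audible band. The carrier frequency, modulation depth, and single-tone “voice” here are illustrative assumptions, not the researchers’ actual parameters.

```python
import math

FS = 192_000         # sample rate, high enough to represent a 25 kHz carrier
CARRIER_HZ = 25_000  # ultrasonic carrier (inaudible; illustrative choice)
TONE_HZ = 1_000      # single tone standing in for a voice command
N = FS // 10         # 100 ms of signal

def am_sample(n: int) -> float:
    """One sample of the command tone amplitude-modulated onto the carrier."""
    t = n / FS
    voice = math.sin(2 * math.pi * TONE_HZ * t)
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return (1.0 + 0.8 * voice) * carrier   # 0.8 = modulation depth (assumed)

def power_at(samples, freq: float) -> float:
    """Signal power at `freq` via the Goertzel single-bin DFT."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / FS)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

tx = [am_sample(n) for n in range(N)]        # what the attacker's speaker emits
demod = [x * x for x in tx]                  # crude model of mic nonlinearity

audible_in_tx = power_at(tx, TONE_HZ)        # near zero: nothing audible on air
audible_in_demod = power_at(demod, TONE_HZ)  # large: the command reappears
```

Comparing the two power readings shows why the attack is inaudible to bystanders yet perfectly clear to the device: the 1 kHz component is essentially absent from the transmitted signal and only emerges after the nonlinear conversion.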
MIT Researcher: 6 Ways Technology Will Make Us Immortal, Telepathic And More

Making The Impossible Possible

Technology has already helped humans achieve some pretty remarkable things -- but that phenomenon is not over yet. That was the message delivered by David Rose, a product designer and researcher at the Massachusetts Institute of Technology (MIT) Media Lab, during his session Monday at XChange Solution Provider 2013. Rose specifically homed in on the emerging "Internet of Things" trend -- the movement by which embedded sensors are allowing everyday objects to "speak" to one another and connect to the internet -- while providing a glimpse into our technology-driven future. Rose touched on six unique capabilities that the "Internet of Things", and technology in general, will give humans -- many of which have only been dreamed of in movies, books and fairy tales. Take a look to see what the future holds.

Humans have a natural thirst for knowledge that MIT's Rose thinks the Internet of Things will help quench more than ever. As an example, he pointed to Ambient Devices, a Cambridge, Mass.-based company that produces internet-connected and seemingly all-knowing gadgets that keep their users up to date about the things going on around them. The Ambient Orb, for instance, is a frosted-glass ball that glows different colors to display real-time information on everything from stock market trends and traffic congestion to pollen forecasts and wind speeds. Ambient Devices also makes Energy Joule, a gadget that displays and communicates real-time changes in energy costs and a home's energy consumption.

Rose said the emerging Internet of Things trend will also enable people to be telepathic, or know what those around them are thinking and doing. The MIT-developed LumiTouch, for instance, is a picture frame that lets users know when their loved ones are thinking about them.
LumiTouch users can squeeze the frame surrounding a photo of a family member, and that family member's framed picture of them (assuming it’s also a LumiTouch) will light up as a result, sending the signal that they're on somebody's mind. Rose also noted other new, Jetsons-like gadgets for the home that arm users with telepathic-like skills. Internet-connected doorbells, for instance, can be programmed to send a specific ring tone to a family member's smartphone when another member is approaching the door.

The advancement of technology is also empowering humans to better protect themselves, Rose said, sharing the half-astounding, half-frightening fact that a Google search for "teddy bear cameras" came back with 1.8 million results. But less obvious -- and perhaps less intrusive -- means of protection are emerging because of technology, Rose said, citing Google Glass as an example. Google's new internet-connected eyewear allows users to record every moment of their day, meaning they can perfectly recall conversations, what was going on around them and who was around them, should they ever need to glimpse back in time for security-related (or any other) reasons.

While the Internet of Things might not help humans live forever (at least not yet), it can certainly help them live longer, Rose said. Take Glow Caps, for instance, a pill bottle cap that alerts users when they forget to take their medication. Made by a company called Vitality, Glow Caps can send a notification to users via their smartphones or, as the name suggests, by lighting up a wirelessly connected night light in their homes, reminding them it's time for their daily dose. So far, Glow Caps has been a success; Rose said a recent study found that 98 percent of users remembered their daily medications, compared with 78 percent of those not using the device.

Unfortunately, technology experts are still working on the whole teleportation thing.
But in the meantime, Rose noted some pretty cool ways they're revolutionizing the travel and transportation industries. At MIT, for example, researchers are working on the City Car Project, an initiative that's yielding a new generation of electric cars that not only save on fuel costs but can physically "fold" themselves to save room when parking. Rose said these little cars can condense their size so much that five of them can fit in a parking spot otherwise meant for a single car. Rose also spoke of ambient bus poles, which can be used at bus stops to indicate how far away the bus is. The device turns different colors based on the bus's location, so passengers can see from a block away whether they need to hustle to the bus stop or have some time to kill.

Among its many applications, technology is used for human expression, Rose said -- and the Internet of Things is set to accelerate that trend. Take the "I/O Brush" developed at MIT, for example. It's a next-generation drawing tool that lets users paint and draw using the textures and patterns found on everyday objects around them. Using a built-in video camera and touch sensors, the I/O Brush can be swiped across pretty much anything -- flowers, M&M's, even a person's physical features -- to "pick up" a pattern and transfer it to its drawing canvas. Users can make whatever kind of special "ink" they want, just by exploring the items around them.
Obscape have been developing, manufacturing and supplying real-time systems for environmental observations for over a decade. Their mission is to make high-quality environmental observations easy. Obscape’s instruments are designed to be easily installed, compact, robust and low maintenance. Their small size and integrated telemetry and solar power make them very easy to use and deploy.

Preventing the Causes of Flooding

A Government Agency responsible for coastal, storm water and catchment management engaged Obscape to assist with monitoring and reporting on catchment management, flood-risk areas, storm drains and drainage ditches with the aid of the RockBLOCK. Blocked, overflowing systems can cause flooding, erosion, turbidity, storm and sanitary sewer system overflow, and infrastructure damage. By combining data received from Rock Seven (now trading as Ground Control)’s Iridium RockBLOCK 9602 modems, which can be housed in Obscape’s Power and Telemetry Modules (PTMs), Obscape’s clients are able to monitor and forecast these events visually from a powerful portal fed by a mixture of time-lapse cameras, water level gauges, rain gauges and weather gauges.

Reliable, Accurate Flood Mapping

With the aid of Ground Control products, Obscape confirms data by managing and monitoring developments in urban river corridors and wetlands as important natural features within the urban landscape, for the purpose of promoting multi-functional, sustainable use of river corridors and drainage systems. Using Ground Control systems, Obscape can report on stressed areas of urban infrastructure in real time and forecast where improvements in water infrastructure are required. Collated data received and converted from Ground Control’s Iridium modules enables Obscape to advise the Agency on the predictive capabilities of flood mapping, pairing historical flood data with real-time and predicted weather and precipitation data.
PTM modules Obscape have installed for the Government Agency

With the help of Ground Control’s location communication systems, Obscape have developed their durable monitoring systems to convey repeatable, accurate, reliable information in the most efficient and cost-effective manner. This includes collecting data using the RockBLOCK to transmit from remote areas, with the information relayed back to the user without the need to repeatedly visit a site. Satellite communication information can be relayed through multiple RockBLOCK satcom devices, with accumulated reporting to one Data Portal.

Examples of devices which can be installed with the RockBLOCK:

- Beach Surveys – Monthly
- Offshore Mapping
- Estuarine Surveys x 12
- Wave Buoys x 4
- ADCPs x 5
- Rain Radar x 1
- Wave Radar x 1
- Tide & Level Gauges x 50
- Rain Gauges x 60
- Weather Stations x 10
- Time Lapse Cameras x 40
- Real Time Water Quality x 17
- AIS Data Logging
- LoraWAN Gateway
- Offshore Weather Station x 1

“Ground Control improves the efficiency and quality of our environmental data gathering. By installing a RockBLOCK in our PTM we can guarantee a great investment and low running costs, reliability, resilience, operator ease of use, installation and operation, data accuracy, quality assurance, quality control and data security.”

Obscape and the RockBLOCK in Action

Wave Buoy Close Up: the solar-powered wave buoys benefit from recent advances in sensor and data technology, and are rugged, lightweight, reliable and affordable.

Power and Telemetry Module Close Up: affordable, solar-powered, robust and completely wireless, the PTM is an ‘all in one’ datalogger with both satellite and cellular connectivity; as standard it acts as a weather station, level gauge, time-lapse camera, rain gauge and CT station.
In today’s highly technological world, home security systems are extremely advanced. From 24-hour video surveillance to electromagnetic lock systems, the ways to improve your home security are almost innumerable. But where did it all start? And better yet, how did it all start?

The history of home security systems dates back to the early 1700s, when the English inventor Tildesley created the first home intrusion “door alarm.” Though simple by design – a set of wind chimes linked to the door handle – it proved an effective tool for deterring home invaders. The latter half of the 1700s also saw the creation and implementation of some of the first door locks, with the lever tumbler lock coming onto the market in 1778.

Seventy-five years later, in 1853 in Boston, Massachusetts, Augustus Russell Pope patented the very first electromagnetic alarm system. Pope’s invention is widely recognized as the foundation for the majority of our modern-day burglar alarm systems. Gaining popularity, Pope’s invention made its way to New York City, where engineer George F. Milliken built on Pope’s design by extending the electromagnetic circuit from the door to the windows.

However, public opinion about electricity in the mid-1800s was characterized by fear. After all, not many systems relied on electricity at the time, and there was ample reason to believe that having electrically powered systems in the home could endanger families. So how did home security systems make their way into nearly every American household? That feat was in large part accomplished by Edward Holmes. Recognizing the need for electric security systems, Holmes campaigned for home security by flooding the public with pamphlets filled with testimonials and endorsements from people who had the systems installed in their homes.
He also created campaigns that addressed public fears about electricity, helping people feel more comfortable with having it in their homes. In the 20th century, electric home security companies partnered with telephone and telegraph companies to link home security system triggers to emergency call centers. And with further technological advances and tweaks in both design and function, the rest is history!
Cyberattacks are on the rise in the United States, and certain industries are more susceptible than others. The financial services sector, in particular, is 300 times more likely to fall victim to cyberattacks than other industries. Within this sector, banks get the most attention from cybercriminals, but insurance companies have also suffered major cyberattacks.

In February 2015, hackers attacked the major health insurance provider Anthem and compromised 78.8 million company records. The breach exposed personally identifiable information (PII) such as names, contact details, medical IDs, and Social Security numbers. According to investigators, the perpetrators sent phishing emails to Anthem employees and tricked them into downloading Trojan malware that steals passwords and other sensitive information. More recently, State Farm reported a data breach that resulted from unauthorized access using stolen login credentials. Even smaller insurance firms have fallen victim to social engineering and malware-based attacks. So what makes the insurance industry such a popular target?

A wealth of sensitive information

Cybercriminals are always looking for organizations that harbor massive amounts of data, and insurance companies fit the bill. Insurers have access to their clients’ medical records and financial information, which are extremely valuable on the black market. Medical insurance records, for instance, sell for up to $1,000 and yield even more profit when used for fraudulent insurance claims and for purchasing prescription drugs for resale. Meanwhile, access to a person’s address, financial information, Social Security number, and employment details gives hackers plenty of options. They can use the information to commit identity theft and make fraudulent purchases until their victim’s account has been emptied. They can even infiltrate email accounts and defraud more people.
Increased susceptibility to attacks

However, it’s not just the massive collection of sensitive data that makes insurance companies attractive targets. As insurance companies adopt cloud and mobile technologies, they inadvertently leave business networks vulnerable. At the same time, hackers are constantly developing elaborate malware and denial-of-service attacks designed to compromise insurance systems and render them inoperable.

Insurance also often lags behind other financial services organizations when it comes to cybersecurity. Banks, for instance, are highly fortified, using state-of-the-art security and encryption systems to defend against sophisticated, financially motivated cyberattacks. By contrast, many insurance companies lack the resources to invest in bleeding-edge security measures. And as banks become more impenetrable, attackers will naturally shift their focus to less secure targets like insurance companies.

What’s more, security training is often an afterthought for most organizations. This means employees are more likely to make data management errors, set weak passwords, and interact with dangerous emails. Such a lax approach to cybersecurity enables hackers to easily circumvent security systems and steal data.

What should insurance companies do?

Data breaches have lasting effects on insurance companies. Beyond the thousands of dollars spent fixing the issue, there are extra costs associated with regulatory fines and lost customer trust. The best way to avoid these risks is to take a multilayered approach to cybersecurity. For starters, companies must deploy up-to-date encryption software, firewalls, intrusion prevention systems, anti-malware, and software updates. These prevent hackers from exploiting system vulnerabilities and gaining access to sensitive data.
According to the National Association of Insurance Commissioners (NAIC), companies must set access restrictions and enable multifactor authentication to be compliant. The former prevents unauthorized users from gaining access to sensitive data, while the latter adds another method of identity verification on top of passwords, such as a fingerprint scan or a one-time authentication code sent via SMS. Organizations must also consider the human element of cybersecurity. This involves educating employees on the latest threats, teaching them to spot the telltale signs of a phishing attack (e.g., suspicious links and attachments), and requiring unique passwords at least 12 characters long. For optimal results, organizations should implement ongoing security training and testing to prepare employees for real-world attacks. There’s a lot that goes into cybersecurity, but insurance companies don’t have to do everything by themselves. INFINIT Consulting provides cutting-edge risk management and cybersecurity solutions that dramatically reduce the chances of a breach. If you run an insurance company in California, schedule a meeting with us today and we’ll formulate a cyber defense strategy to keep your assets safe.
isam.c performs the same function as lowlevel.c, and creates a data file and index that match those created by lowlevel.c. In fact, the data and index files created by one program can be accessed by the other. The primary difference is that isam.c uses the ISAM functions and an ISAM parameter file (which is now considered legacy) instead of low-level functions. In lowlevel.c we established the FairCom DB buffer parameters with a call to InitCTree(), and set up the attributes of the data file and index by successive calls to CreateDataFile() and CreateIndexFile(), or to OpenCtFile(). With two files to create/open, we needed three function calls.

CreateISAM and OpenISAM

With the ISAM functions, we only need one call: CreateISAM() to create the files, or OpenISAM() to open existing files. The only parameter passed to either function is the name of the ISAM parameter file. The parameter file contains all of the information needed to initialize FairCom DB and to create/open the data files. In our example the ISAM parameter file is isam.p, as follows:

10 1 4 1
0 invent.dat 128 4096 1 1 Delflag Buffer
1 invent.idx 25 0 0 0 4096 1 0 32 1 Itemidx
2 25 2

The first record in the ISAM parameter file is the Initialization record. This contains the same information as is used in the InitCTree() call, organized in a slightly different manner. The first value is the number of index file buffers, as before. The third value is the number of node sectors. The difference is that in InitCTree() we had one figure for the number of data files and indexes that will be opened (2 in this example), while in the ISAM parameter file we must specify the number of indexes (1) as the second parameter, and the number of data files (1) as the fourth parameter.

Data Description Record

Each data file must have a data description record, the line following the Initialization record in the ISAM parameter file.
The first parameter is a file number we assign to each file, followed by the file name, the record length, the extension size, the numeric representation of the file mode, and the number of associated index files. You should be able to match the values in isam.p with the values in the CreateDataFile() calls in lowlevel.c, with the exception of the last one. In our example we have prepared the data description record for use with the r-tree Report Generator by assigning symbolic names to the first and last fields in the data record. If you set up your FairCom DB system for use with r-tree, you need two names here, even if you are not going to use r-tree for the time being. The symbolic names allow us to use this same sample program in the r-tree Reference Guide.

Index Description Record

If one or more indexes are associated with a file, there must be an index description record for each. The first parameter is the file number we assign to the index, followed by the index file name, key length, key type, duplicate file value, number of additional index members, file extension size, and the numeric representation of the file mode. You should be able to match these with the parameters used in CreateIndexFile() in lowlevel.c. In addition, we have parameters for the null key flag and empty character (leave these as 0 and 32 respectively until you understand them fully), and the number of key segments. Itemidx is a symbolic name for r-tree, as discussed above.

Key Segment Description Record

Each key for an ISAM index must be composed of elements drawn from the data record. You cannot have an arbitrary value from an outside source as part of the key. Even if this weren’t a requirement of the FairCom DB product it would make good sense, because you cannot easily rebuild a corrupted index if the key contains information that doesn’t exist in the data record.
In addition, keys can be created from multiple portions of the data record, and a number of translations can be performed. This is specified in the Key Segment Description record. In our example, the key is a simple one of one segment. The first value is the offset of the segment in the data record. Counting from zero, the item field starts at byte 2 of the record (bytes 0 and 1 being the delete flag). The second value is the length of the segment, 25 in this case. The third value is the segment mode, which describes how the segment is to be translated. ISAM Functions describes the various segment modes. A value of 0 means no transformation; a value of 2 (as in our example) means that lower-case letters will be translated to upper case.

Error handling is essentially the same as in the low-level sample, except that we look at isam_err instead of uerr_cod, and the variable isam_fil tells us in which file the error occurred. This matters because ISAM functions may manage multiple files within a single call, and we need to know which file caused the problem. Adding data to the application data files is very simple with the ISAM functions. After we have called the getfld() function to capture the data, we simply call AddRecord(). This takes care of all of the work performed by transkey(), AddKey(), NewData(), and WriteData() in the low-level example. Record locking is more automatic as well: before the call to AddRecord() we enable automatic record locking with the LockISAM(ctENABLE) call, and later we release the lock with the LockISAM(ctFREE) call.

GetRecord and DeleteRecord

Deleting a record and index entry is almost as simple. We use GetRecord() to determine if the key exists; this also reads the record, so we have something to display. We then call DeleteRecord() to delete the key and record. Even if we had multiple indexes for this file we would make just those two calls, since DeleteRecord() deletes all keys associated with the record.
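The segment transform described above is easy to picture outside the library. The following Python sketch is purely illustrative (the function name and sample record are ours, not part of FairCom's API): it extracts the 25-byte item field at offset 2 and applies the mode 2 upper-case translation from isam.p.

```python
def transform_segment(record: bytes, offset: int, length: int, mode: int) -> bytes:
    """Extract one key segment from a data record and apply its segment mode."""
    segment = record[offset:offset + length]
    if mode == 2:
        # Mode 2: translate lower-case letters to upper case, as in our example.
        segment = segment.upper()
    return segment  # mode 0 would mean no transformation

# A record shaped like the example: 2-byte delete flag, then a 25-byte item field.
record = b"\x00\x00" + b"widget-17".ljust(25)
key = transform_segment(record, offset=2, length=25, mode=2)
print(key)  # b'WIDGET-17' padded to 25 bytes with spaces
```

A segment mode like this is why the key can always be rebuilt from the data record alone, as the text notes.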
The key passed to GetRecord() must be a properly formatted key, just as in lowlevel.c. We use TransformKey() instead of our own function. TransformKey() takes the information found in the key buffer and reformats it according to the values found in the ISAM parameter file. Closing the data and index files is very simple. A single call to CloseISAM() closes all files referenced in the ISAM parameter file. It also calls StopUser() automatically.
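Since every value in isam.p is positional, the Initialization record can be decoded mechanically. This short Python sketch is a reading aid rather than FairCom tooling — the field names are ours — and simply labels the four values discussed above:

```python
def parse_init_record(line: str) -> dict:
    """Label the positional values of an ISAM parameter file Initialization record."""
    values = [int(field) for field in line.split()]
    names = ["index_buffers", "indexes", "node_sectors", "data_files"]
    return dict(zip(names, values))

print(parse_init_record("10 1 4 1"))
# {'index_buffers': 10, 'indexes': 1, 'node_sectors': 4, 'data_files': 1}
```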
With innovation, more business and production processes have become automated, relying on computer software and digital applications. Certain processes in the manufacturing industry have also shifted into the digital age, which may improve efficiency and lower costs. This gave rise to a new wave of technology that connects machines, such as conveyors and welding equipment, to the internet, and it has led factories to rely more on machines than on skilled workers.

Welding In The Modern Age

The welding industry greatly impacts the world, since many businesses rely on metalwork. Put simply, welding involves joining two metal ends or surfaces together with the help of a filler material. Cars, airplanes, and trains wouldn’t be around without their welded components. Industrial companies weld materials such as steel tubing and base plates to produce hydraulic lifts and conveyor belts. Moreover, the construction industry depends heavily on metal trusses and beams that are fastened securely by welding. Yard fences and gates, which are sometimes used to decorate gardens, are a product of welding steel bars and metal art designs. Some household items and kitchen appliances have welded receptacles and parts. Even some works of art and landmarks would not be possible without this process.

What Is the Internet Of Things?

The internet is a very helpful technology in modern society. When paired with machines, it may create positive impacts such as increasing production and lowering labor costs. The Internet of Things (IoT) is the term used when referring to a single machine or a network of machines connected online. The ‘thing’ in IoT refers to anything powered by electricity. This may include everyday devices such as sports bands, electronic keys, or automated switches. On the other hand, IoT may also be applied in bigger industries such as manufacturing goods, distributing electricity through grids, and tracking water supply.
Data is usually gathered through sensors and is sent over the internet or a specific network, then processed by software fit for the industry. IoT technology applied to the industrial world is most commonly called the Industrial IoT (IIoT).

Benefits Of IoT In The Welding Industry

Incorporating the internet into the welding process may have several advantages. One of the most notable is that IoT may gather and monitor data that can be used to improve efficiency, providing timely insight into what works and what doesn’t. This results in improvements in quality and productivity. In the long run, this may bring in more revenue for the company, since creating quality goods builds a good reputation and attracts more customers. When welding machines are connected to the internet and operated through software, output may improve and material wastage may decrease. Automated welding relies less on the welder and more on artificial intelligence, so many companies are investing more in enhancing their systems. With functional software and quality welding machines, companies may rely less on skilled workers; they may hire more beginners without compromising the quality of their output. Additionally, automated systems may be programmed to react to different factors much as a skilled welder would. Another advantage of IoT in welding is that it makes shifting to different materials or products easy. For example, similar welding parameters may be applied in manufacturing spare parts used in cars and trains, since they belong to the same industry. A change in the production material may be accommodated by adjusting temperature and choosing the welding technique that best fits the process. With sensors and a custom hardware system, gone are the days of staying on the production floor and manually monitoring parameters.
By placing sensors in strategic areas, important data may be recorded and sent to a cloud system for analysis. In turn, results may be displayed in interactive dashboards across different personal devices. Notification systems may be enabled to send alerts before anything reaches a critical point, made possible by the real-time recording of temperature and electrical supply. In the event of an emergency, operators may turn off their welding machines remotely to avoid further damage or serious accidents. Different industries have benefited from complementing welding processes with the Internet of Things. IoT may help increase productivity by closely monitoring the different parameters involved in welding materials. Sensors are usually used to gather data, which is then analyzed to improve processes. Automating the welding process has also decreased labor costs while increasing productivity and output. It has also helped companies stay flexible in producing similar products by making adjustments through the system. Lastly, integrating the internet into production has made working remotely possible.
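A notification pipeline like the one described is, at its core, a threshold check on streamed readings. The sketch below uses hypothetical names and limits (a real system would push readings to a cloud service and fire alerts from there); it flags temperature samples that approach an assumed critical point:

```python
CRITICAL_TEMP_C = 500  # hypothetical critical welding temperature
WARN_MARGIN_C = 50     # alert this far below the critical point

def readings_to_alert(readings):
    """Return (index, value) pairs that crossed the warning threshold."""
    threshold = CRITICAL_TEMP_C - WARN_MARGIN_C
    return [(i, t) for i, t in enumerate(readings) if t >= threshold]

# Degrees Celsius from a hypothetical torch-side sensor.
samples = [310, 405, 470, 455, 515]
for i, temp in readings_to_alert(samples):
    print(f"alert: sample {i} at {temp} C (warning threshold 450 C)")
```

In practice the alert branch would trigger the notification system, or the remote shutdown mentioned above, rather than a print.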
Toshiba Corp is developing a motion processor for recognizing and displaying moving 3D objects on a PC. If you’re wondering what you might do with such a function, Toshiba suggests using it to play the child’s game of rock, paper, scissors against a PC. More practical uses, however, include developing a gesture-based interface that recognizes sign language, so that users can give commands with a hand movement rather than via a keyboard. The prototype processor, which can send a seven-bit, 64×64-pixel image to the PC at 30 or 50 frames per second, is made up of eight light-emitting nodes, a lens, and a CMOS image sensor, as well as the means to synchronize the transmission and reception of a light signal. The software development kit is available now.
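The quoted figures imply a modest raw data rate, which is easy to check. Assuming the full seven bits per pixel are transmitted for every frame (the article does not say whether any compression is applied), the arithmetic is:

```python
PIXELS = 64 * 64        # 4,096 pixels per frame
BITS_PER_PIXEL = 7

bits_per_frame = PIXELS * BITS_PER_PIXEL  # 28,672 bits per frame
for fps in (30, 50):
    rate = bits_per_frame * fps
    print(f"{fps} fps -> {rate:,} bits/s ({rate / 1e6:.2f} Mbit/s)")
```

At 50 frames per second that is roughly 1.4 Mbit/s, comfortably within the PC interfaces of the day.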
Freshwater is increasingly in short supply. The march of industrialization, extensive use of limited groundwater reserves, growing populations, and the impact of climate change mean that access to water is going to become one of the defining struggles of the 21st century.

Data centers' drink problem

"There's gonna be a 40 percent gap in freshwater supply and demand by 2030 according to the UN," Nalco Water VP and GM Heather DuBois told DCD. "It's just staggering, but yet we need to continue to grow." Data center demand shows no sign of letting up, with gigawatts of capacity expected in the years ahead - much of which will be cooled with water. That means a lot of water, at a time when the world needs it the most. That’s simply unsustainable. Take Digital Realty - back in 2018, its data centers used around 1.4 billion gallons of water in a year, an astronomical sum, and that’s even with the majority of its facilities not using water for cooling. If all of that were potable water, it would be a wasteful travesty. "Water is a key element of our environmental, social, and corporate governance journey," Digital's director of sustainability programs Aaron Binkley said. “43 percent of our water use came from reclaimed sources last year - that's more than 660 million gallons of water across the portfolio.” To get to that point took a significant amount of work, the company’s water sustainability lead Walter Leclerc told DCD. Getting to 100 percent water reclamation will take significantly more effort. First, the company partnered with Nalco and parent company Ecolab to assess the water use and water risk at all of its data centers. “We used the Water Risk Monetizer, which is a tool that Nalco, Microsoft and Trucost put together, to place each data center on a water maturity curve,” Leclerc said. “We've done that at the local level, and now we've prioritized our hotspots.
We know as of today which of our sites are high-risk water sites, medium-risk water sites, and low-risk water sites.” This has meant collecting “a tremendous amount of data,” Leclerc said. “It can take three to four months per site to do assessments - Nalco went to every one of our water sites across the globe, they did a design assessment, they took samples. “They first take water quality assessments, and then they do assessments of the watersheds, where those data centers were taking it off of. So it took a long time.”

Quality counts, too

State and local utility providers offer little in the way of data on water supplies, with limited regulatory data offered, along with a mandatory requirement to test for Legionella. “But that didn't help us from a water stewardship perspective,” Leclerc said. “We need to understand the quality of the water, we're talking about the pH, we're talking about the solids in it, and so on, because we need that information to develop the pipeline of projects at a site.” Armed with up-to-date information on water use, availability, and quality, the company was able to take steps at each site to help reduce freshwater usage. “It can include everything from sulfuric acid dosing to reclaim water, to implementing pretreatment on systems, to looking at collecting rainwater.” Harvesting precipitation “takes a tremendous amount of effort,” Leclerc said. “We actually have two major projects that are about to come to a conclusion now, and they took two years to do.” While the data center company likes to standardize designs where possible, the reality is that different locations have different needs and capacities. “Down in our Phoenix, Arizona, site we initially reclaimed water into that system, but then we outstripped the capacity of the city,” Leclerc said. “So they actually told us to go back onto potable water.
Now the city has caught up with us, and they said they can handle our reclaimed water volume, so we're trying to do another project next year to get it back.” In places like Ashburn, where Digital has multiple huge campus developments, “those are opportunities to have a strategic dialogue with the water authority and say, ‘is reclaimed water available next to our campus?’” Digital’s Binkley said. “Or if it's not, we can give them some general guidance on what our water consumption would be and what our demand profile would look like.” Due to the proximity of other companies’ data centers in some areas, it can make sense to collaborate with rivals - as well as non-data center companies - to make a case for reclaimed water availability. “We know Amazon is doing a huge building in Ashburn, so that's going to affect the reclaimed water supply,” Leclerc said. Sometimes the particular location of a data center can open up unique cooling solutions, as Google found with its Hamina data center, which draws on seawater using a cooling system inherited from the paper plant that preceded it. With Digital’s Marseille facility (under the Interxion brand), the team realized that they were relatively near a decommissioned coal mine “that has waste cold water that's coming out of the ground that was traditionally pumped into the river and out to sea,” Binkley said. “That's being piped about 14 kilometers to our Marseille data center campus, and it's being used as a chilled water source to limit cooling energy needs. It's helping to make the facility more efficient by getting free cooling out of what is a wastewater stream from an old coal mine.” While the environmental benefits of such water-saving moves are clear, what makes these projects easier for companies to sign off on is that they can also lead to cost savings.
“We want to do the right thing when it comes to water stewardship,” Leclerc said, “but also the return on investment on some of these projects is like seven months,” with reclaimed water usually less expensive than freshwater. “In the old days, the ROI was 10 years,” he added. “We're just finishing up a two year project at our Santa Clara location, where we're going to be taking 16 million gallons of potable water off that watershed and going to reclaimed water. This is great, but I'm also going to be collecting hundreds of thousands of dollars a year from the cost differential for the operations team. So it is a driver, absolutely.” Still, the financial benefits aren’t as profound as one would expect given water’s precious nature. “Water is cheap, even in warm places where it is a scarce precious resource it's still relatively inexpensive,” Binkley added. Further savings can be made with evaporation credits, or by spending more on longer pipes that reach areas with lower water rates. A number of customers are already demanding at least some action on freshwater usage. “Customers are becoming aware of water concerns and many of the big hyperscalers have made commitments,” Binkley said, although the level of demand for action is currently much lower than that for renewable energy. Here, Digital is trying to work out how to involve the customer in sustainable decision making. For renewable energy, the company has partnerships with solar and wind companies that let its customers decide whether to use renewable energy via power purchase agreements, or just run off the grid. For water, it’s not immediately clear how much control the customer can have. “We just started those conversations in the last month or so,” Leclerc said. “How can we bring Ecolab into our RFP process, and how can we leverage the partnership to do that with our customers? We're just not there yet. “We're kind of hung up on it.
It's easy when it's the customers paying for the source, but if they're just a customer in a site and they're not part of that water matrix perspective, it’s a lot harder.” However that is achieved, it is expected that water sustainability will increasingly feature on customers’ priorities, just as we have seen the rapid expansion of demand for renewable energy sources for data center workloads over the past few years. Water scarcity is only going to get worse, and companies needlessly using limited freshwater resources will rightly face the ire of a desperate populace. “The public is going to hold us to these as we move forward,” Nalco’s DuBois said. “I don't think anybody's going to have a choice as we look to the constraints that the environment is going to have in the coming years.”
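The payback economics Leclerc describes are simple to model. In the sketch below the cost and savings figures are invented for illustration; only the roughly seven-month payback period is taken from the article:

```python
def payback_months(project_cost: float, annual_savings: float) -> float:
    """Months until cumulative savings cover the up-front project cost."""
    return project_cost / (annual_savings / 12)

# Hypothetical reclaimed-water retrofit: $175k up front,
# $300k/year saved on the freshwater bill.
print(payback_months(175_000, 300_000))  # 7.0 months
```

With "hundreds of thousands of dollars a year" in cost differential, even a substantial retrofit pays for itself in well under a year, which is why the ROI has fallen from the ten years Leclerc remembers.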
The Covid pandemic hit just as the transition from carbon-intensive to carbon net zero economics was gaining momentum, and moves to renewable energy sources, ending reliance on fossil fuels, are accelerating. “Australia’s abundant and low-cost coal resources are used to generate three-quarters of domestic electricity and underpin some of the cheapest electricity in the world. At present renewable energy sources account for only modest proportions of Australia’s primary energy consumption (around 5 per cent) and electricity generation (7 per cent), although their use has been increasing strongly in recent years” (source: Australian Government). The plan is that by 2030 a significant shift to a new energy mix will have happened. However, exactly what energy sources will be relied on for primary power, how the operation of a sustainable electricity grid will be delivered, and how availability will be backed up through long-term storage or demand-response generation are, as yet, unanswered questions in Australia. The sheer size of Australia is a factor. As the Federal Government says: “With the exception of hydro energy Australia’s large renewable resource base is widely distributed across the country. Apart from wind energy which is growing rapidly, large-scale utilization of Australia’s renewable resources has been constrained by higher transformation costs relative to other energy sources (except for hydro), immature technologies, and long distances from markets and infrastructure.” At the highest level these are major socio-technical-economic challenges. At the data center operational level, where sustainable power comes from, how it arrives at the facility, and how it can be efficiently put to doing useful work have huge cost and sustainability implications. How extra power capacity or latent power in data centers might be used to feed power back to a grid is also a major technical challenge.
The IEA (International Energy Agency) says: “Australia’s power system faces concerns over reliability, particularly amid extreme weather events, and the need to accommodate the world’s high per capita solar capacity.” If the energy production and distribution sectors are directed to decommission their fossil fuel power stations and create a sustainable grid, then what will replace them? What does any move to more reliance on intermittent renewables such as wind and solar mean for data center power infrastructure? What does it mean for location? What role will natural gas play? Will there be large-scale pipe and network infrastructure investment? At the moment, the answer is ‘no-one knows.’ However, several issues are observed as base-load coal-fired power stations are decommissioned and substituted by renewable sources at lesser kW capacities:

a) Energy production diminishes while base-loads increase;
b) Stability issues are becoming more frequent; and
c) Peak load periods are becoming more difficult to manage without consumer involvement.

This creates the need for solutions such as inertia, FCAS (Frequency Control Ancillary Services) and BESS (Battery Energy Storage Systems). Network operators are now engaging with consumers and their technology partners to participate in these solutions to help manage these issues.

No Nuclear Option

It is accepted that energy sector transition must happen. But the challenges of delivering that transition within a decade while maintaining critical services should not be overlooked or underplayed. How will generation and grid capacity be maintained? How will keeping the ‘lights on’ inside critical infrastructure be achieved as the energy sector decarbonises? Both inside the perimeter and beyond, where are the innovations coming from that will provide reliable, efficient power without sacrificing sustainability? The renewable options available to the Australian energy sector remain the envy of the world.
In geothermal, hydro (already established), wind, solar, ocean and bio energy, Australia’s energy companies have huge opportunities to innovate. Yet a cultural shift is also required: an abundance of cheap fuel has fed complacency and led to a ‘lack of energy’ for innovation. In some places a playbook, box-ticking approach has stunted development. Data centers can be part of the solution to Australia’s energy challenges. On-site storage, demand-response co-generation and bi-directional transportation technologies exist to help reduce reliance on fossil fuels, improve the sustainability of the grid and of the facility itself, and will contribute to the growth of a broader net zero economy. Piller’s electrically coupled new UB-V Series UPS provides a natural solution to integrate with the consumer’s critical power infrastructure and the utility network. In fact, this solution is already operating in certain FCAS arrangements and can be adapted to cover the issues outlined above with superior performance and reliability. While the world strives to decarbonise in all industry sectors, the modern data center continues to scale up in size and power demand. This paradox drives DC operators to look for more sustainable and emissions-reducing solutions in the power system. Having had modernization thrust upon it by the climate crisis, the energy sector in Australia must seek deeper engagement and greater collaboration with large energy users to develop sustainable and scalable data center power. Nothing less is required. The Piller session - Sustainable and scalable data center power - is led by the author of this feature, Jonathan Davis, and is available live on the 13th May or on-demand.
With so many words, phrases, and acronyms used in the industry, learning about cybersecurity can be complicated. To help you better understand some of the more technical jargon, here’s a quick look at some of the most common terms, phrases, and acronyms and what they mean.

Antivirus: A program that monitors a computer or network to detect malicious code and prevent additional malware incidents or breaches.

Black Hat Hacker: A “bad” hacker who works with malicious intent to infiltrate a system to steal or destroy data.

Blue Team: During a penetration test of a business’ security system, the blue team is responsible for establishing the security measures to keep the red team (the attackers) out.

California Consumer Privacy Act (CCPA): A law that details how a business can handle a person’s private information. This includes the person’s right to know which information is being collected and how it is being used, the right to delete personal information, the right to opt out of the sale of information, and the right to non-discrimination for exercising these rights.

Cloud Computing: Storage and processing through the internet from remote computing facilities.

Compliance Officer (CO): A cybersecurity team member who has knowledge of specific industry regulations, how they apply to your business, and what your company needs to do to stay compliant with them.

Data Breach: A data breach occurs when a cybercriminal infiltrates a business and steals information.

Endpoint: Any individual device (phones, tablets, computers, etc.) that is connected to a business’ network.

Endpoint Protection: Devices like computers, tablets, and phones are consistently connected to your network and can be easy access points for cybercriminals. Endpoint protection focuses on securing these individual devices.

Firewall: A firewall filters out potentially malicious traffic from the internet before it can reach your system.
Identity and Access Management (IAM) [Synonym: Access Control]: The power to grant or deny specific requests or attempts to obtain or access information or to physically enter a facility. This is done by developing secure passwords (and secure password storage) and managing who has access to what.

Insider Threat: A cyber risk to businesses where the malicious actor is somebody—employees, ex-employees, third party partners, etc.—who uses their authorized access to wittingly or unwittingly cause harm.

Keylogger: This type of malware tracks keystrokes to gain information like passwords, login credentials, financial information, and more.

Lateral Movement: After a bad actor infiltrates a business’ system, lateral movement refers to their ability to move into different areas of the network.

Malware: Software with malicious intent designed to compromise a system by performing an unauthorized function or process. Malware comes in many different forms, including ransomware, spyware, viruses, adware, and more.

Managed Security Services Provider (MSSP): A team of cybersecurity experts using the latest strategies and technologies to keep businesses safe from modern threats.

Multi-Factor Authentication (MFA): A password security method which uses biometrics, third party applications, or additional devices to give businesses an added layer of login security.

Network Monitoring: A cybersecurity strategy where a business’ network is monitored by security teams to identify threats quickly.

Network Segmentation: This strategy helps businesses stay secure by segmenting and isolating different parts of a business to reduce lateral movement in the event of an attack. Essentially, if someone infiltrates a system, they wouldn’t have access to everything, just what’s stored in that segment.

Phishing: A form of digital scamming designed to deceive individuals into voluntarily providing sensitive information like Social Security numbers, credit card information, login credentials, and more.
Phishing scams are an extremely common attack in which an attacker attempts to trick you into clicking a link sent in an email, text message, or online message that is disguised as coming from a trusted source (a bank, coworker, insurance provider, or family member, for example).

Ransomware: This type of malware steals and encrypts a business' data, keeping them from accessing it until specific terms—typically a monetary payment—are met.

Recovery: Post-incident activities to restore essential services and operations.

Red Team: During a test of a business' cybersecurity system, a red team is made up of ethical hackers who use real techniques to try to identify vulnerabilities.

Security Operations Center (SOC): The home of a cybersecurity team, a SOC contains all the people and technology an MSSP needs to keep businesses protected.

Spyware: This type of malware gains access to a system and, rather than unleashing malicious code, remains hidden, collecting data over time.

Threat Intelligence: To understand how cybercriminals work, threat intelligence continuously gathers data to identify trends, motives, and attack behaviors to learn how to stop them.

Trojan: This type of malware is disguised as a legitimate program to gain access into a system.

Virtual Chief Information Security Officer (vCISO): The main point of contact of your cybersecurity team, a vCISO will provide you with personalized expert advice and consultation to help make crucial cybersecurity decisions. vCISOs are always up to date on the latest trends in the industry and will lead a team of DOT experts in implementing your cybersecurity plan.

VPN: Short for Virtual Private Network, a VPN is a method of connecting computers and devices to a private network, replacing a user's IP address with the VPN's IP address. This allows for anonymity while on the internet.

Web Application Firewall (WAF): A type of firewall designed to protect businesses by filtering traffic between a web application and the internet.
Zero-Day Attack: A cyberattack deployed to exploit an unknown security flaw in a piece of software to gain access into a business' network.
Connected technologies are helping transform vehicles from a mode of transportation into mobile living spaces. Technologies like navigation, GPS, and connected infotainment have become standard features in a modern car. To put this in perspective, as per Deloitte's 2020 report on connected cars, the Indian connected car market is projected to grow at a CAGR of 22.2% and reach $32.5 billion by 2025, from an estimated $9.8 billion in 2019. Connectivity has been opening doors for new technologies and information sharing across industries.

By Alexander Klotz, Head of Technical Center India (TCI), Continental Automotive India

However, as the connectivity quotient increases in the car, cybersecurity becomes a key priority for everyone involved in the automotive industry as we move closer to autonomous vehicles.

Automotive Cybersecurity: The Need

A vehicle today has several million lines of code. These vehicles rely on Over-The-Network (OTN) communication interfaces to function, increasing the risk factor for privacy and security. The various Electronic Control Units (ECUs) in a vehicle are linked through an internal network, and a lack of cybersecurity measures makes them vulnerable. For instance, we need to secure the ECUs of a vehicle's brakes and transmission from a possible hack. Earlier, our concern was to protect the environment from faulty components. Today, it is equally essential to protect the components from hostile environments. Further, a connected vehicle has a network of functions, such as cameras that screen passengers, GPS, and seat belt warnings, all working together to share the necessary information. If even one of these functions were compromised, the whole system could be affected. The vehicles of the future will collect vast amounts of data through the internet, leaving the user open to other data threats, including threats to the consumer's personal information.
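To make the ECU concern above concrete, here is a minimal sketch of authenticated messaging between two ECUs on an in-vehicle network, in the spirit of message authentication schemes such as AUTOSAR SecOC. It is illustrative only: the key handling, tag truncation, and freshness counter are simplified assumptions, not a real automotive implementation.

```python
import hashlib
import hmac
import struct

SHARED_KEY = b"pre-provisioned-ecu-key"  # assumed provisioned at manufacture

def send_frame(payload: bytes, counter: int) -> bytes:
    """Append a freshness counter and a truncated MAC to the payload."""
    msg = payload + struct.pack(">I", counter)
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]
    return msg + tag

def receive_frame(frame: bytes, last_counter: int):
    """Verify the MAC and counter; reject tampered or replayed frames."""
    msg, tag = frame[:-8], frame[-8:]
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered frame")
    payload, counter = msg[:-4], struct.unpack(">I", msg[-4:])[0]
    if counter <= last_counter:
        raise ValueError("replayed frame")
    return payload, counter
```

With this scheme, a frame whose bytes are modified in transit fails the MAC check, and a captured frame re-sent later fails the counter check, which is the kind of protection the brake and transmission ECUs mentioned above would need.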
The result of compromised data would make the car more vulnerable to security threats.

The Need for Cybersecurity for the Automotive Ecosystem

As we move towards connected and electric mobility, with vehicles becoming IoT devices, we could be looking at a future where a faulty ecosystem exposes vehicles to threats and vulnerabilities. For instance, many charging stations use outdated versions of the Open Charge Point Protocol based on HTTP, which does not encrypt data, allowing attackers to break into the Wi-Fi signal and rewire the charging gateway. There is a crucial need to develop means based on artificial intelligence and machine learning to fight cyberattacks and minimize their impact on the Internet-of-Vehicles framework. Automotive cloud security could help stakeholders by giving them a complete picture of the data flows in their surroundings. It makes it easy for users to identify threats to their network and spot deviations early. However, automotive cybersecurity is not limited to securing the vehicle; it has to start much earlier, during the manufacturing process itself. The manufacturing setup requires a robust security system to prevent intrusion by hackers, who could modify component code, leading to faulty behavior.

Automotive Cybersecurity: Beyond Mobility

The move towards connected plants and advanced technologies opens us up to new threats and vulnerabilities. Any information that travels through the internet is susceptible to a cyberattack. For example, when manufacturing data migrates from Operational Technology (OT) systems on the factory floor to interconnected Information Technology (IT) systems in the corporate network, new risks evolve. This data is more vulnerable at this stage. Cybercriminals could potentially gain access to intellectual property, shut down systems, disrupt production timetables, and affect product quality.
The manufacturing setting needs to be considered a fully integrated setting, even if some processes are not connected to the internet. Although many breaches start in IT networks, attackers may jump into other parts of the setting through connected devices. Furthermore, some connected devices may include information about the non-connected process. A secure supply chain ecosystem also requires diligence toward proper vendor management. Any third party that has authorized access to the company's network can become an unwitting avenue of attack. A bad actor who steals a third party's login credentials could potentially gain access to the company's network by pretending to be an authorized user. The solution for automotive cybersecurity needs to be proactive and multilayered. First, individual electronic components/systems must be secured. Following this, the connections between these components/systems need to be made secure. As the next step, the focus should shift towards protecting and securing the external interfaces. Once all external interfacing is secured, as the final step of protection, data processing taking place outside of the car has to be strengthened to prevent data theft and exploitation. Cloud and backend solutions also need to be given importance at this final stage and protected from security breaches. Today, teams are working on securing the systems, memory, communication, and supporting infrastructure. Online trust centers secure the crypto keys, and penetration test labs that continuously look for vulnerabilities and threats have become crucial to ensuring vehicle safety. Cybersecurity can be tackled in three broad critical steps: 1. Prevent – The probability of security encroachment rises in tandem with the degree of networking and the number of in-vehicle interfaces that it necessitates. Hackers are driven by various factors, including data theft, financial gain, and prestige.
Manufacturers need to strengthen all potential attack points and lay down security solutions across multiple levels and departments. This can be made possible by identifying the various attack points, understanding the behavior, and designing safety measures to secure the systems. In other words, make it as hard as possible for hackers to attack. This typically involves hardware-enhanced crypto, embedded security software, secure networks, and secure vehicle architecture. In terms of automotive cybersecurity, another example of a preventive approach would be DevSecOps. The practice ensures that developers are using coding practices that are less vulnerable to attacks. 2. Understand – Know that the system is being hacked, identify the point of entry, exposed vulnerabilities, and other critical information in real-time. This involves live monitoring and tracking of connected vehicles. An example of this would be setting up the Security Operation Centres (SOCs). SOCs are needed to ensure real-time detection of any such breach and tackle it in real-time. The SOCs would also help us identify the gaps and do quick patches to avoid long-term exposure to vulnerabilities in on-road vehicles. 3. Respond – Mitigate the damage and immunize the fleet in hours. This involves software updates over-the-air and patch management. Cybercrime is an asymmetric challenge. Although an organization must monitor hundreds of processes, hackers just need to find a single flaw to gain access. It is like a never-ending race between those who want to secure networks and those who want to break them down. This is why, once a loophole is identified, it is vital to act as quickly as possible. Organizations should implement an Incident Response Management System that provides an extra layer of security that reacts quickly if an attack occurs. Millions of cars will be able to upgrade themselves to new protection standards without needing to visit an auto repair shop. 
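The "Respond" step above hinges on fleet-wide over-the-air updates that a vehicle can trust. Below is a minimal, hypothetical sketch of the two checks an ECU might run before installing an OTA patch: signature verification and rollback protection. Real systems use asymmetric signatures and secure boot; the shared HMAC key and function names here are simplifying assumptions for illustration.

```python
import hashlib
import hmac

VENDOR_KEY = b"vehicle-fleet-signing-key"  # assumed provisioned securely

def sign_update(firmware: bytes, version: int) -> bytes:
    """Vendor side: bind the firmware image to its version number."""
    return hmac.new(VENDOR_KEY, version.to_bytes(4, "big") + firmware,
                    hashlib.sha256).digest()

def install_update(firmware: bytes, version: int,
                   signature: bytes, current_version: int) -> str:
    """Vehicle side: refuse downgrades and tampered images."""
    if version <= current_version:
        return "rejected: rollback attempt"   # block downgrade attacks
    expected = sign_update(firmware, version)
    if not hmac.compare_digest(signature, expected):
        return "rejected: bad signature"      # block tampered images
    return f"installed v{version}"
```

The rollback check matters as much as the signature: without it, an attacker could re-deliver an older, validly signed image that still contains a known vulnerability.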
Real-time patch management through over-the-air updates is an essential requirement for "Vision Zero" – a future without fatalities, injuries, and crashes. Cybersecurity has always been an industry sustained by innovation: as the technology evolves, the industry continuously adapts and responds to the threats, innovating to keep systems secure. The challenge we face today, as an industry, is a lack of standardization. Standardization is crucial in shaping the cybersecurity practices of the future. As an industry, we also need to move towards an information-sharing ecosystem – sharing information and best practices with peers. As they say, one person's detection can become another person's prevention. Another transformative step would be the introduction of AI and predictive modeling, which accelerates detection and response, improves communication of risks to the business, and builds better situational awareness of cybersecurity.

About the Author

Alexander Klotz is the head of Continental's Technical Center India (TCI), the in-house R&D center of Continental Corporation. In this role, Klotz is responsible for the growth of the center into a trusted partner augmenting Continental's global R&D competence. In this regard, he will oversee the growth of competence, innovation potential, and capacity. Prior to this, Klotz was Director R&D within Continental's Interior division, responsible for advanced product development and systems engineering.
Klotz has more than 15 years of automotive engineering experience, in areas spanning vehicle testing, project management, customer and product strategy, simultaneous engineering, and innovation. Klotz also launched and led the Silicon Valley office for Continental in 2013. He studied Mechanical Engineering and Business Administration (German Diplom Wirtschaftsingenieur) at the Technical University Carolo Wilhelmina in Braunschweig, Germany. Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.
Definition: Breakpoint Debugging Breakpoint debugging is a widely used method in software development where execution of a program is halted at specific points, known as breakpoints. This allows developers to inspect the current state of the program, including variable values, the call stack, and system memory. This technique is fundamental in identifying and diagnosing the behavior of software under development or investigating bugs in existing software. Detailed Overview of Breakpoint Debugging Breakpoint debugging serves as a critical tool in a developer’s arsenal, allowing for efficient and focused troubleshooting and code analysis. It’s particularly valuable in complex software systems where identifying the root cause of a problem simply by observing outputs or logs is impractical. How Breakpoint Debugging Works Breakpoint debugging involves several key steps and components that make it effective: - Setting Breakpoints: Developers can set breakpoints in the source code or sometimes in the executable code itself. These breakpoints signal the debugging tool to halt execution when the program reaches these points. - Execution Pause: When a breakpoint is hit during the running of the program, the debugger pauses execution. This pause happens before the line with the breakpoint is executed. - Program Inspection: At this paused state, developers can examine the current state of the program. This includes checking the values of variables, the status of program counters, or the contents of memory. - Step Execution: Beyond simply pausing at breakpoints, debuggers usually allow the execution of code one line or one instruction at a time. This step-by-step execution can help trace how changes to the state of the program lead to issues. - Modification and Continuation: Developers can modify variables or the state of the program to see how different values affect the execution. 
After inspection and modification, developers can resume execution of the program either to the next breakpoint or to the end of the program. Benefits of Breakpoint Debugging Breakpoint debugging offers several advantages: - Precision: Developers can precisely control where the program halts. - Efficiency: This method allows quick identification of problems by isolating the exact location and state leading to a bug. - Flexibility: Breakpoints can be set, removed, or modified without changing the code itself, which allows for dynamic analysis. - Integration: Most Integrated Development Environments (IDEs) and many coding languages natively support breakpoint debugging, making it accessible across different platforms and languages. Common Use Cases for Breakpoint Debugging - Bug Fixing: The primary use of breakpoint debugging is in the identification and rectification of bugs in the code. - Code Understanding: For new developers or when taking over an existing project, breakpoint debugging helps understand how code flows and interacts. - Performance Optimization: By breaking on performance-critical sections of code, developers can analyze and optimize the performance of specific code segments. Frequently Asked Questions Related to Breakpoint Debugging What are the main types of breakpoints used in debugging? There are several types of breakpoints commonly used in debugging, including line breakpoints, conditional breakpoints, and exception breakpoints. How do I set a conditional breakpoint in an IDE? To set a conditional breakpoint in an IDE, right-click on the line where you want to set the breakpoint, select the ‘Breakpoint’ option, and then configure the condition under which the breakpoint should be triggered. What is the advantage of using exception breakpoints? Exception breakpoints are advantageous because they allow developers to catch and inspect exceptions as soon as they are thrown, helping to identify and fix underlying errors more quickly. 
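The mechanics described above can be made concrete with a small, non-interactive sketch of how a line breakpoint works under the hood. Python debuggers such as pdb build on the interpreter's trace hook (`sys.settrace`); the toy tracer below "breaks" on a chosen line by recording a snapshot of the local variables instead of pausing for input. The function names are illustrative, not part of any real debugger API.

```python
import sys

def make_breakpoint_tracer(filename, lineno, hits):
    """A toy 'line breakpoint': when execution reaches filename:lineno,
    record a snapshot of the local variables (a real debugger would
    pause here and hand control to the developer)."""
    def tracer(frame, event, arg):
        if (event == "line" and frame.f_lineno == lineno
                and frame.f_code.co_filename == filename):
            hits.append(dict(frame.f_locals))  # inspect program state
        return tracer
    return tracer

def buggy_sum(values):
    total = 0
    for v in values:
        total += v  # pretend a breakpoint is set on this line
    return total

# "Set" the breakpoint on the `total += v` line (3 lines below `def`).
hits = []
code = buggy_sum.__code__
sys.settrace(make_breakpoint_tracer(code.co_filename,
                                    code.co_firstlineno + 3, hits))
result = buggy_sum([1, 2, 3])
sys.settrace(None)
```

Each entry in `hits` captures the state just before the breakpointed line executes, which is exactly the "execution pause happens before the line is executed" behavior described above: the first snapshot shows `total == 0` before the first increment runs.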
Can breakpoint debugging be used in any programming language? Yes, breakpoint debugging can be used in virtually any programming language, though the specific tools and features available may vary depending on the language and the IDE or debugger being used. What is step execution in debugging? Step execution in debugging refers to the ability to advance the program’s execution one line or one function call at a time, allowing for detailed inspection of each step of the execution process. How can I use remote debugging? Remote debugging involves connecting a debugger to a program that is running on a different machine or server. This is typically set up through network configurations that allow the debugger to communicate with the remote environment. What are log points in debugging? Log points are a feature in some debuggers that allow you to output messages to a log file or console without stopping the execution of the program. They are useful for monitoring the flow of execution and debugging issues that do not necessitate stopping the program. Can breakpoints affect the performance of a program? While breakpoints can slow down the execution of a program during debugging, they do not affect the performance of the program once the debugging session is ended and the breakpoints are removed or disabled. How do I remove or disable breakpoints in an IDE? To remove or disable breakpoints in an IDE, you typically need to go to the breakpoint management panel or simply click on the breakpoint indicator next to the line number in your code editor and select disable or delete.
Parents have always been concerned about the physical safety of their children, especially when they're at school. These days, parents also need to be concerned about keeping their child's identity safe and protecting their personally identifiable information, or PII.

What is included in PII? Name, address, Social Security number — basically, all the information that's requested on the registration, health and emergency contact forms that tend to show up this time of year. Teenagers might be filling out forms for sports, clubs and applications for after-school jobs. If this information falls into the wrong hands, it could be used to commit fraud in your child's name. According to the Federal Trade Commission's 2017 Consumer Sentinel Network Data Book, nearly 20 percent of all identity theft victims are age 29 or younger.

In light of these stats, here are 6 ways you can protect your child's privacy and personally identifiable information:

1. Ask your child's school about its directory information policy. Student directory information can include your child's name, address, date of birth, telephone number, email address and photo. This is a lot of information to divulge in a public publication.

2. Sign up for LibertyID's Family Plan, which covers you, your spouse, parents and children in the event of identity theft. Subscribers call us at the first sign something is amiss and we assign them a personal recovery advocate who will clean up the mess and restore their identity to pre-event status. Sign up for an annual subscription now and rest easy knowing you're covered by LibertyID.

3. Find out the school's policy on surveys and what information they might try to gather directly from your child. The Protection of Pupil Rights Amendment (PPRA) gives you the right to see surveys and instructional materials before they are distributed to students.

4. Find out the school's social media policy. If your child is just starting their school journey and will be attending preschool this fall, it's worth finding out if the school has social media accounts where they post photos of the students. If you feel uneasy about having your child's photo posted on accounts that are likely public with no "friends only" security measures in place, find out if you can opt out.

5. Talk to your child about the importance of being careful with their Social Security number and other private information. Encourage them to be very careful who they give their address and phone number to as well.

6. If something does happen and you feel the school did something to put your child's identity at risk, you can file a complaint. You may file a written complaint with the U.S. Department of Education: contact the Family Policy Compliance Office, U.S. Department of Education, 400 Maryland Ave., SW, Washington, DC 20202-5920, and keep a copy for your records.

If your kids are college aged, they need to be extra careful. College kids generally have clean credit reports and might not watch their accounts and credit reports as closely as other age groups, which makes them more of a target. Read our blog for eight tips for college kids on how to keep their identity safe.

Are you covered for identity theft?
Design for Sustainability: How to Integrate Sustainability into the Product Development Process Today, sustainability is one of the top priorities for technology companies as people worldwide become more aware of how our consumption affects the planet. The need for sustainable solutions is growing, and that's where Design for Sustainability (DfS) comes in. DfS focuses on reducing products' impact on the environment – something that is absolutely essential for the tech industry. According to the Ellen MacArthur Foundation, 80% of a product's environmental impact is identified during the design phase. This fact highlights the importance of making sustainability a core aspect of design best practices. While DfS can be incorporated at any stage of the product development process, it is most effective when considered from the beginning. As per the 2021 UN Global Compact survey, sustainability-integrated companies outperformed peers by 21% in profitability and positive sustainability outcomes. DfS helps reduce the environmental impact of products in several ways, including: 1. Using sustainable materials: DfS enables businesses to actively choose and integrate sustainable materials throughout their product development procedures. By using materials that have a minimal environmental impact, such as recycled or sustainably sourced alternatives, businesses can contribute to a sustainable supply chain and reduce the depletion of resources. 2. Designing for efficiency: DfS enables businesses to optimize resource utilization by embracing innovative design strategies. Through thoughtful design choices, such as streamlining manufacturing processes, reducing energy requirements, and promoting resource-efficient usage patterns, products can achieve higher levels of efficiency throughout their lifecycle. 3. Designing for durability: DfS encourages businesses to prioritize durability in product design, resulting in longer-lasting and robust products. 
By emphasizing quality craftsmanship, resilient materials, and appropriate maintenance guidelines, businesses can reduce the frequency of replacements and extend the product's lifespan, ultimately reducing the environmental impact of manufacturing and disposal. 4. Designing for recyclability: DfS prompts businesses to incorporate recyclability considerations into product design. By implementing design features like modular components, easily separable materials, and clear labeling for recycling instructions, companies can facilitate the recycling process and divert valuable resources away from landfills or incineration facilities. 5. Promoting a circular economy: DfS fosters a transition towards a circular economy model, wherein products are created to minimize waste and maximize resource recovery. By adopting strategies such as product take-back programs, material regeneration initiatives, and remanufacturing practices, businesses can actively contribute to closing the resource loop and reducing the environmental burden associated with production and disposal. How to Integrate Sustainability into the Product Development Process There are various ways to integrate sustainability into the product development process. Some of the most important steps include: 1. Define Sustainability Goals: To embark on a journey towards sustainable software development, starting with clear and measurable sustainability goals is essential. Collaborate with stakeholders, including developers, project managers, and environmental experts, to define objectives that align with the organization's values and broader environmental responsibilities. 2. Apply Agile and Lean Methods: Agile and lean methodologies promote sustainability throughout the software development lifecycle. These approaches emphasize iterative development, continuous feedback, and waste reduction. 
By fostering a culture of adaptability and efficiency, teams can respond promptly to changes, optimize resource usage, and minimize environmental impact. 3. Choose Appropriate Technologies: Selecting the right technologies is critical in sustainable software development. Opt for energy-efficient hardware, cloud solutions with green credentials, and programming languages that prioritize resource optimization. Leverage AI to analyze and predict energy consumption patterns, aiding in selecting technologies that align with sustainability objectives. 4. Implement Green Coding Practices: Green coding practices involve writing code that meets functional requirements and minimizes resource consumption and energy usage. Consider the following strategies: - Optimize Code Efficiency: Design clean and efficient code to reduce processing time and energy consumption. - Minimize Server Requests: Implement strategies to minimize the number of server requests, reducing data transfer and energy usage. - Energy-Aware Algorithms: Develop algorithms that consider energy efficiency as a factor, optimizing computational processes. - Automated Code Reviews: Utilize AI-powered tools for automated code reviews that can identify and suggest improvements for energy-inefficient code snippets. Benefits of Design for Sustainability Integrating sustainability into the product development process offers several benefits for businesses. Let's explore five key advantages: 1. Enhanced brand reputation and customer loyalty: Integrating sustainability into product design enhances brand reputation and fosters customer loyalty. Sustainable products create a sense of purpose, increasing loyalty and positive word-of-mouth recommendations. By prioritizing sustainability, businesses differentiate themselves and attract environmentally conscious customers. This loyal customer base contributes to the long-term sustainability of the company. 2. 
Increased market opportunities: Designing sustainable products creates opportunities in a growing market. Businesses can reach a broader customer base by appealing to environmentally conscious consumers. Accessing new market segments and forging collaborations drive business growth and expansion. Sustainability integration aligns with the goals of companies and organizations, leading to potential partnerships and contracts. 3. Cost savings and operational efficiency: Sustainable design practices offer long-term cost savings by reducing energy consumption and operating expenses. Thanks to economies of scale, sustainable materials can reduce costs over time. Designing for durability reduces warranty claims and product returns, saving businesses money. Optimizing resource utilization and minimizing waste aligns with sustainable practices while benefiting the bottom line. 4. Regulatory compliance and risk mitigation: Integrating sustainability into product development ensures compliance with environmental regulations, providing a competitive advantage. Sustainable practices mitigate risks related to climate change and resource scarcity, enhancing resilience. Adapting to ecological conditions safeguards long-term business viability and prepares for future regulations and market shifts. 5. Innovation and competitive advantage: Integrating sustainability into product development fosters innovation and a competitive edge. It inspires creative problem-solving and drives the development of innovative solutions. Sustainable products differentiate brands and attract eco-conscious consumers, providing a competitive advantage. Staying ahead of sustainability trends positions businesses as industry leaders, capturing market share. DfS offers technology companies a tangible path to transform their products into sustainable powerhouses. 
These companies can carve a more responsible and eco-conscious path forward by adopting sustainable materials, optimizing designs for efficiency, and championing a circular economy. This approach offers enhanced brand reputation, increased market opportunities, and cost savings. Prioritizing sustainability contributes to a sustainable future while ensuring business success. Intelliswift's dedicated team of experts combines their digital product engineering prowess with a deep understanding of sustainable design, ensuring businesses can seamlessly integrate eco-friendly practices into their product development processes. Get in touch with us for further information on how we can assist you in fostering sustainable innovation and delivering outstanding products.
What are NIST Standards?

NIST develops standards, guidelines, and best practices to meet the cybersecurity needs of industry, the public, and federal agencies. The NIST Cybersecurity Framework takes an outcome-based approach, which allows it to be applied in any sector and to a business of any size. There are three basic components of the NIST Cybersecurity Framework, namely:

- Framework Core
- Implementation Tiers
- Profiles

The Framework Core has five major functions: Identify, Protect, Detect, Respond, and Recover.

How to Harden your System?

The best solution for this challenge is to automate the hardening procedure. A good hardening automation tool should generate an impact analysis report automatically, enforce your policies on your production environment, and maintain your servers' compliance posture. A hardening automation tool is essential for minimizing the attack surface and achieving compliance across large and complex infrastructures.
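The compliance-posture idea behind hardening automation can be sketched in a few lines: compare a host's current settings against a baseline policy and report the drift. The setting names and values below are hypothetical, not taken from any real benchmark or tool.

```python
def compliance_report(baseline, current):
    """Return (drift, score): the settings that deviate from the baseline,
    and the fraction of baseline settings the host satisfies."""
    drift = {name: {"current": current.get(name), "required": required}
             for name, required in baseline.items()
             if current.get(name) != required}
    score = 1 - len(drift) / len(baseline)
    return drift, score

# Hypothetical hardening baseline and a host's current configuration.
baseline = {"PasswordMinLength": 14, "SMBv1Enabled": False,
            "FirewallEnabled": True}
current = {"PasswordMinLength": 8, "SMBv1Enabled": False,
           "FirewallEnabled": True}

drift, score = compliance_report(baseline, current)
# drift flags PasswordMinLength (8 vs. required 14); score is 2/3
```

A real hardening tool layers remediation and impact analysis on top of a check like this, but the core loop — baseline in, drift report out — is the same.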
Explore key resources on edge computing and Internet functionality with Azion's comprehensive Learning Center. Bots are automated software programs that perform repetitive tasks on websites and applications. While some bots, like search engine crawlers, are beneficial, malicious bots can cause significant harm to businesses in multiple ways. Serverless computing is transforming the way applications are developed, deployed, and scaled: developers can abstract away the underlying infrastructure and focus on writing code and delivering value to their users. Web Application Security Get to know the processes, practices, and technologies designed to protect web applications from threats and vulnerabilities that could compromise their integrity, availability, or confidentiality.
How Does PGP Encryption Work? PGP uses a combination of symmetric and asymmetric cryptography to safeguard data transfers. In symmetric encryption, a single key serves both encryption and decryption purposes. Asymmetric encryption, on the other hand, uses two distinct keys: a public key and a private key. The public key is widely shared, while the private key remains confidential with the recipient. This combination of encryption methods enables the secure transmission of data, with symmetric encryption offering efficiency and asymmetric encryption providing enhanced security through the use of separate keys. Symmetric encryption relies on a shared “session” key between the sender and receiver for both encryption and decryption, ensuring that only parties holding the key can access the protected information. However, a significant drawback of symmetric encryption is the need to securely transmit the session key to the receiver: since the key is essential for both encryption and decryption, transmitting it in plaintext would compromise the security of the communication. Asymmetric encryption is a secure communication method that uses public and private key pairs. The sender encrypts data using the receiver’s public key, ensuring that only the recipient with the corresponding private key can decrypt it. Conversely, the sender can encrypt data with their private key, which anyone holding the sender’s public key can decrypt; this is the basis of digital signatures. This encryption process requires substantial computational power, making it more resource-intensive than symmetric encryption. Symmetric and asymmetric encryption ensure secure communication by employing distinct mechanisms.
In practice, PGP combines the two. The sender generates a random session key, encrypts the message using this key, and then encrypts the session key with the receiver’s public key. Both the encrypted message and the encrypted session key are sent to the receiver. The receiver uses their private key to decrypt the session key and then uses the recovered session key to decrypt the message. The symmetric session key does the bulk of the work efficiently, while the asymmetric step protects the session key in transit, closing the key-distribution weakness and guarding against unauthorized access and eavesdropping. The full flow:
- Sender generates a session key;
- Sender encrypts the message with the session key;
- Sender encrypts the session key with the intended recipient’s public key;
- Sender sends both the encrypted message and the encrypted session key;
- Receiver decrypts the session key with their private key;
- Receiver decrypts the message with the session key.
3 Common Uses of PGP Encryption
Through its advanced cryptographic algorithms, PGP empowers users to encrypt and decrypt sensitive information ranging from messages, emails, and files to disk partitions. Additionally, PGP enables the authentication of digital certificates, ensuring the validity and trustworthiness of online entities. By verifying the authenticity of messages, PGP detects message alterations, safeguarding against unauthorized modifications. PGP also supports digital signatures, created with a sender’s private key and verifiable with the corresponding public key, to establish message ownership. Moreover, it ensures that messages reach their intended recipients without interception or tampering. By distributing public keys in identity certificates, PGP allows for the detection of any alterations, fostering trust within the network.
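The steps listed above can be illustrated end to end with a deliberately toy sketch. A XOR cipher stands in for the symmetric algorithm and textbook RSA with tiny primes stands in for the asymmetric step; real PGP uses vetted algorithms such as AES and full-size RSA keys, so none of this is secure in practice:

```python
# Toy illustration of PGP's hybrid flow (NOT real cryptography).
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for AES/IDEA; XOR is self-inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Tiny RSA keypair with small primes (illustration only).
p, q = 61, 53
n = p * q                      # modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse)

def rsa_encrypt(m: int) -> int:   # with the recipient's public key (e, n)
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:   # with the recipient's private key (d, n)
    return pow(c, d, n)

# --- Sender side ---
message = b"meet at noon"
session_key = secrets.token_bytes(8)           # 1. generate session key
ciphertext = xor_cipher(message, session_key)  # 2. encrypt message with it
encrypted_key = [rsa_encrypt(b) for b in session_key]  # 3. protect the key

# --- Receiver side (4. both parts arrive together) ---
recovered_key = bytes(rsa_decrypt(c) for c in encrypted_key)  # 5. recover key
plaintext = xor_cipher(ciphertext, recovered_key)             # 6. decrypt message
print(plaintext)  # b'meet at noon'
```

Note how the session key never travels in plaintext: only its RSA-encrypted form is transmitted, which is exactly the weakness of pure symmetric encryption that the hybrid design fixes.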
PGP verifies certificate ownership through its web of trust concept, reinforcing the reliability of digital identities and the secure exchange of information. Below are the three most common use cases for PGP encryption:
Email Encryption
The origins of PGP lie in the desire of activists and journalists to safeguard the sensitive information they exchanged. However, the growing concern over data collection practices by organizations and government agencies has fueled a surge in PGP’s popularity. Individuals increasingly seek ways to protect their privacy and prevent unauthorized access to personal and confidential information, leading to the widespread adoption of PGP as a means to encrypt and protect digital communications.
Digital Signature Verification
In addition to email encryption, PGP also facilitates identity verification through digital signatures. These signatures use cryptographic algorithms to generate a hash of the email message, which is then encrypted using the sender’s private key. Upon receipt, the recipient uses the sender’s public key to decrypt the signature and compares the result against a freshly computed hash of the message. This process allows the recipient to verify whether the message has been altered during transmission. By doing so, digital signatures confirm the authenticity of the sender, detect fraudulent signatures, and identify potential attempts at tampering or hacking. This verification safeguards against identity theft and phishing scams and ensures the integrity of email communications.
File Encryption
The PGP algorithm, often implemented using RSA, is regarded as extremely difficult to break thanks to its advanced cryptographic methods. It is a preferred choice for secure file encryption, safeguarding sensitive data from unauthorized access. Additionally, it plays a crucial role in threat detection and response tools, enabling organizations to identify and mitigate cybersecurity risks.
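The sign-and-verify flow can be sketched in the same toy style: hash the message, encrypt the hash with the sender's private key, and let the recipient decrypt the signature with the sender's public key and compare it against a fresh hash. The RSA parameters here are deliberately tiny, so this is an illustration of the mechanism, not production-grade signing:

```python
# Toy digital-signature illustration (NOT real cryptography).
import hashlib

# Tiny RSA keypair for the SENDER (illustration only).
p, q = 61, 53
n = p * q                       # modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                          # sender's public exponent
d = pow(e, -1, phi)             # sender's private exponent

def sign(message: bytes) -> list:
    # Hash the message, then encrypt each hash byte with the PRIVATE key.
    h = hashlib.sha256(message).digest()
    return [pow(b, d, n) for b in h]

def verify(message: bytes, signature: list) -> bool:
    # Decrypt the signature with the PUBLIC key and compare to a fresh hash.
    h = hashlib.sha256(message).digest()
    return bytes(pow(s, e, n) for s in signature) == h

msg = b"wire the funds on Friday"
sig = sign(msg)
print(verify(msg, sig))                          # authentic, unmodified message
print(verify(b"wire the funds on Monday", sig))  # altered message is rejected
```

Because only the sender's private key could have produced a signature that decrypts to the correct hash, a successful verification establishes both authorship and integrity at once.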
The availability of file encryption software has simplified the process of encrypting and decrypting files, making it accessible to both individuals and businesses seeking robust data protection.
Advantages and Disadvantages of PGP Encryption
PGP encryption offers significant advantages in enhancing communication and system security. It effectively protects user data and resources from cyberattacks by employing encryption algorithms to safeguard sensitive information. This heightened level of security strengthens the overall resilience and stability of a system. However, considerations such as the required level of security and the additional effort involved in sending and receiving messages should be taken into account. The benefits and challenges of PGP encryption can vary depending on the context of its usage, highlighting the need for careful evaluation to determine its appropriateness in different scenarios.
Advantages of PGP Encryption
PGP encryption’s advanced algorithms provide extremely strong security, making it popular for ensuring confidentiality in email and file exchanges. As a leading method for cloud security, PGP effectively safeguards data stored on cloud platforms. It provides a robust defense against unauthorized access, helping prevent hackers, nation-states, and government agencies from intercepting sensitive information. However, it is important to note that certain PGP implementations have experienced security vulnerabilities, such as the EFAIL exploit, highlighting the need for continued vigilance and updates to ensure optimal protection.
Disadvantages of PGP Encryption
PGP encryption presents several drawbacks. Its complexity can hinder accessibility and adoption. Encryption processes can be time-intensive, and organizations often must invest in employee training for effective implementation. Key management is also crucial, as a thorough understanding of security protocols is essential to prevent breaches.
Key loss or corruption can compromise the entire encryption system. Furthermore, PGP lacks anonymity; messages are traceable, revealing the identities of both sender and recipient. Additionally, the subject line of messages remains unencrypted, leaving sensitive information potentially exposed. Lastly, compatibility issues arise due to the requirement that both parties use compatible software versions for successful encryption and decryption.
What is PGP?
PGP is a pioneering software program designed for secure communication. It empowers users to encrypt and decrypt messages, ensuring their confidentiality. Additionally, PGP allows for the authentication of messages through the use of digital signatures, guaranteeing the sender’s identity. It also provides the functionality to encrypt files, protecting sensitive information from unauthorized access. As one of the earliest freely available forms of public-key cryptography software, PGP has played a significant role in advancing the field of secure communication.
How does PGP work?
PGP is a multifaceted encryption system that harnesses cryptography, data compression, and hashing methodologies. At its core, PGP employs both private-key and public-key cryptography. Messages are encrypted with the recipient’s public key, making them accessible only to the intended recipient, who holds the corresponding private key. Additionally, PGP uses symmetric and asymmetric key encryption together: symmetric keys are employed for encrypting the actual data itself, while asymmetric keys are used for encrypting the session keys. These intertwined layers of encryption contribute to the high level of security provided by PGP.
Is PGP the same as GPG?
PGP (Pretty Good Privacy) and GPG (GNU Privacy Guard) are distinct encryption programs. GPG is a rewrite and upgrade of PGP, offering several enhancements. Notably, GPG uses the more secure AES algorithm in place of the IDEA algorithm used in early versions of PGP.
Additionally, GPG’s algorithm data is publicly documented, enhancing transparency. Unlike PGP, which can incur licensing costs, GPG is royalty-free and available for both personal and business use without any financial obligations. How do you get a PGP key? To obtain a PGP key, it is recommended to use a specialized PGP program such as GPG4WIN. Upon launching the program, you can initiate the key generation process typically by selecting a “Generate Key Now” option. As part of this process, you will be prompted to provide your name and email address. Additionally, it is crucial to create a backup of your key and specify a secure storage location. Once generated, you can register your public key to enable the exchange of encrypted messages with others. How safe is PGP? PGP offers exceptional security when employed correctly. Its encryption method uses robust algorithms that are virtually unbreakable. This high level of encryption effectively safeguards data and cloud systems against unauthorized access and malicious intrusions. PGP’s strong encryption capabilities make it a valuable tool for protecting sensitive information and ensuring the integrity of both personal and business-critical data.
Overview On China's New Data Protection Law (PIPL)
Like any other country, China has a system of rules that regulates the actions and safety of its citizens. On August 20, 2021, China's 13th Standing Committee of the National People's Congress passed the Personal Information Protection Law (PIPL), following the trend of many countries passing legislation to set standards for the handling and protection of their citizens’ personal data.
But What is This Law?
It is China's first comprehensive law governing the personal information of its citizens. It is in many ways modeled after other countries’ broad data protection regulations, among them the EU General Data Protection Regulation (GDPR). Following its initial passing on August 20, 2021, the PIPL became effective on November 1, 2021. The law will change how companies with data or business-related functions within the country operate. To manage the added layer of complexity, these companies will have to understand and closely follow China's new security and data laws and regulations.
Who Does the Law Apply To?
The law governs individuals and entities within China that process personal information. It also applies to entities outside China that process the personal data of Chinese citizens, including companies offering goods or services to those in China. An outside entity that evaluates or analyzes the behavior of specific people in China also falls under the law. This helps ensure that Chinese residents can have confidence in those who handle their confidential data.
How Does the Law Define Personal Information?
Personal information is any information that relates to identified or identifiable natural persons, whether recorded electronically or by any other means. The definition does not, however, cover information that has been anonymized.
The processing of this information entails:
- The collection of data
- Public disclosure
- Deletion of individual information
What are the Rights Under the PIPL?
Generally, the Personal Information Protection Law bears some relationship to the GDPR (General Data Protection Regulation), including the data rights it grants individuals. Even so, the law contains little detail on how these rights operate: it does not state where certain exemptions or restrictions may apply, nor does it provide a specific timeline for response. Instead, it relies on processing entities to work out the necessary details. Below are some of the rights that apply under this law:
- Right to access
- Right to withdraw consent
- Right to erasure
- Right to information
- Right to correction or rectification
- Right to complain to the regulator
- Right not to be subjected to automated decision-making
- Right to object to and restrict the processing of personal data
- Right to data portability (subject to conditions set by the Cyberspace Administration of China)
If a natural person within China has a request to exercise these rights rejected, they can take legal action against those who prevented them by bringing the matter before the Chinese courts. Affected individuals may then receive compensation, depending on how their rights were affected. With this provision, everyone in China has recourse to enforce their personal information rights.
What Are the Key Features of The Law?
Establishes Guiding Principles for The Protection of Personal Data
The law emphasizes that any entity that processes personal information should have a clear and reasonable purpose for doing so, meaning that no organization may process personal information without a valid purpose.
Even though organizations will still be able to collect individuals' data, this regulation prevents them from doing so recklessly or without just cause. After the information is collected, guidelines ensure that it is well protected from misuse. To achieve this, there are minimum requirements for Personal Information Processing Entities (PIPEs). Among these are the expectations that PIPEs will:
- Establish procedures and policies for protecting all individuals' information
- Implement technological solutions for enhanced data security
- Ensure there is a risk assessment before engaging in processing activities
- Take a risk-based approach to imposing the minimum requirements in specified high-risk situations
The law insists that PIPEs whose processing exceeds certain thresholds must appoint an officer responsible for personal data protection, who will supervise the processing of personal data. PIPEs that run internet platforms with large numbers of users must also engage an external, independent body to verify that their personal data protection measures are in place. PIPEs will also have to publish regular social responsibility reports focusing on their efforts to protect the data they collect. In addition, there must be extra protection for sensitive personal data, because some forms of personal data might negatively affect the person if released to the public. This category of information includes:
- Special status
- Religious information
- Biometric data
- Medical information
- Location data
- Financial information
Where PIPEs process such sensitive data, they may only do so under strict protective measures.
Creates Legal Rights for Data Subjects
No entity within or outside China is allowed to process individuals' data without a clear legal basis. The law also places limits on how entities may obtain information, which is important in the following situations:
- Handling information for news reporting
- Supervising public opinion
- Monitoring activities relating to the public interest
Processing entities are also responsible for providing a straightforward way for individuals to withdraw their consent. Individuals' information is further protected because organizations may not keep it once the purpose for which it was collected has been fulfilled. Data collectors must also be transparent when collecting data using computer algorithms, and any automated decision-making applied to individuals' data must be fair and must not produce discriminatory outcomes.
The Personal Information Protection Law has 74 articles across eight chapters, covering:
- Legal liabilities
- General provisions
- Rules for cross-border provision of personal data
- Miscellaneous provisions
- Individuals' rights in personal data processing activities
- Personal data processing rules
- Obligations of personal information processors
- Departments performing personal information protection functions
Another notable feature of the PIPL is its “extraterritorial effect”, which allows the Chinese government to apply its legal authority beyond the normal boundaries of its own country and residents. It has this authority in this context to protect its residents’ personal data as it is being processed in any location.
While the majority of the data processing of Chinese residents likely occurs within the country's borders, processing can also occur outside China, but only for the following reasons:
- Providing products or services to those within China
- Analyzing the behavior of individuals within China
- Other circumstances permitted by law and regulation
With such extraterritorial effects, foreign companies that process the data of Chinese residents will have to adhere to the new law just as local organizations do. Companies operating outside of China should have a dedicated entity to carry out this business in China, along with an agent based in China who is responsible for all of their data-collection business within the country. These local agents are expected to identify themselves to the relevant authorities, sharing their names and appropriate contact information before beginning that work.
Penalties For Violation
A violation of the Personal Information Protection Law can draw an administrative fine of up to RMB 50 million. The following penalties can also apply:
- Up to 5% of the processor's turnover in the preceding year
- Confiscation of unlawful gains
- Revocation of business licenses or permits
Individuals directly responsible for the activity within the PRC can also be held liable and fined up to RMB 1 million; this applies as well to those covered by the extraterritorial effects. Such individuals may additionally be barred from serving as:
- Personal data protection officers
- Senior managers
A processor that infringes personal information rights and interests also faces tortious liability; in a civil action for damages, the burden of proof shifts to the processor to show it was not at fault. The processor may face criminal charges or civil claims.
This is especially true when the infringement affects many individuals; such actions may be brought by authorized entities including the CAC, prosecutors, or consumer groups. As stated earlier, the Personal Information Protection Law is now officially in effect, which means data processing companies within China should be evaluating their compliance if they have not already. The new law adopts international principles of personal information privacy protection and reflects other international data privacy regulations, including the GDPR and the California Consumer Privacy Act (CCPA). This law will also create a new dawn in handling personal data within China, introducing measures that keep pace with developing technologies such as AI, facial recognition, and data analysis. Companies subject to the law need to re-examine whether they meet the new requirements.
User authentication is an integral part of cybersecurity. User authentication methods have evolved: from complex passwords to passwordless techniques, scores of different approaches have been introduced. But it appears that even the most stringent security measures may not always be effective. Kevin Mitnick, once known as the FBI's most wanted hacker and now a consultant who helps companies defend themselves, said in an interview with CNBC: “Just by enabling two-factor authentication, you can’t relax…a smart attacker could get access to your account.” “If we can steal the user session cookie, we could become them, and we don’t need their username, their password, or their two-factor.” We have moved from one-time authentication to multi-factor authentication to make sure that we are on the secure side of the connection. However, technology and cybercrime are both advancing. According to CNN Money, an American becomes a victim of identity fraud every two seconds. Therefore, authenticating identities via 2FA, MFA, or any other method just once is not enough; authentication must now be a continuous process.
What is Continuous Authentication?
Let’s take an example: you need to access a business application on your computer, you authenticate your identity via MFA and gain access, and you are confident that your access is secure. Now, you move away from your work desk for a while. Your session could be taken over by a hacker or malware, leaving the system vulnerable to hacks, phishing, and credential stuffing. Continuous Authentication therefore takes place after the session has begun, as an ongoing process. It is a mechanism that constantly checks attributes that change, validating the identity continuously throughout the session and not just at the log-in point.
Considering the example given above: if someone does take over your session, unique user attributes such as how you type, the way you hold the mouse, and the number of pauses you take as you type are examined. For instance, if you normally take a two-second pause after every sentence you type and the system recognizes a five-second pause, it will immediately prompt a quick verification of the user. So, let’s dig deeper and understand…
How does Continuous Authentication work?
Mark Diodati, research vice president at Gartner, said: “The technology works behind the scenes, looking at how users behave: the way they type on the keyboard, how quickly they move between the keys, how long they hold a key, how they swipe on mobile devices, how they move a mouse.” With authentication, you secure your identity from a ‘hacker’, and with Continuous Authentication, you secure your identity from a ‘session imposter.’ There are multiple ways to support Continuous Authentication:
Physical movement: Sensors track the physical movement of the user, such as the way a user positions the device and the speed at which the mouse pointer moves.
Facial recognition: Most devices have this feature for authentication. Continuous Authentication, however, follows the way a user glances at the device and the facial expressions made while unlocking it.
Behavioral attributes: Behavioral patterns or gestures such as the user's typing speed, finger pressure, and mouse movements are noted.
Voice recognition: This method of authentication is particularly helpful for banks, call centers, or services that require the user to interact over a call. Continuous Authentication considers voice pitch, tone, and frequency.
Essentially, Continuous Authentication makes use of attributes that are unique to each user.
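As a hypothetical sketch of one such behavioral signal, a typing-rhythm check might compare live inter-keystroke intervals against a per-user baseline learned at enrollment and flag the session for re-verification when the rhythm drifts too far. The function names, threshold, and interval values below are invented for illustration, not taken from any real product:

```python
# Toy continuous-authentication signal: flag a session when the live
# typing rhythm deviates strongly from the enrolled user's baseline.
from statistics import mean, stdev

def build_baseline(intervals):
    """Learn a user's typing-rhythm profile (mean and spread of
    inter-keystroke intervals, in seconds) from enrollment data."""
    return mean(intervals), stdev(intervals)

def needs_reverification(live_intervals, baseline, z_threshold=3.0):
    """Return True when the live rhythm's mean interval is more than
    z_threshold standard deviations away from the enrolled mean."""
    mu, sigma = baseline
    z = abs(mean(live_intervals) - mu) / sigma
    return z > z_threshold

baseline = build_baseline([0.18, 0.22, 0.20, 0.19, 0.21, 0.20])
print(needs_reverification([0.19, 0.21, 0.20], baseline))  # False: matches profile
print(needs_reverification([0.55, 0.60, 0.58], baseline))  # True: likely a different typist
```

A production system would combine many such signals (mouse dynamics, swipe patterns, voice) and feed them into a risk score rather than acting on a single threshold.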
Even if someone steals your password, the answers to your security questions, or your tokens, it is not feasible to steal your unique physical movements, voice, or behavioral patterns. This method therefore delivers the key benefits of Continuous Authentication: high-level security and the ability to shut out session imposters, bots, and other unauthorized activity. At a macro level, this improves a company's cybersecurity posture. A company with a secure IT infrastructure will also be better able to adhere to security compliance requirements, and a company that is compliant with security standards makes for a good brand.
Choosing Continuous Authentication
The best part of Continuous Authentication is that the user needs to make minimal effort to authenticate. However, not every session requires Continuous Authentication. For instance, if you are reading a news feed on your mobile and move away from your device, it is very unlikely that such an open session would be prone to hacks, phishing, or credential stuffing. But if you are working on a customer database, a document that should be protected, such a session requires Continuous Authentication. So, opting for Continuous Authentication depends on the importance of your work, and whether your session must be protected even when you aren’t around.
The future of Continuous Authentication
In December 2018, Gartner pushed the focus on Continuous Authentication for 2019 at their annual conference on Identity and Access Management in Las Vegas. The main topic of discussion was their self-coined term, CARTA: Continuous Adaptive Risk and Trust Assessment. It was discussed that CARTA as a concept should be applied across all levels of security, as one of the largest areas of impact is around authentication. We have reached the point where assuring ourselves once that our identities are protected does not suffice; we need constant reassurance about the security of our identities, and it looks like Continuous Authentication is the way to go.
The government’s ambitious 100,000 Genomes Project has finally reached its goal of sequencing 100,000 genomes. The government-funded project is run by Genomics England, a body set up by the government for the specific purpose of mapping the genomes of patients in order to advance treatment of cancer and other rare diseases, and was originally due to be completed by the end of 2017. In March 2018, the government reached the halfway mark, having sequenced 50,000 genomes, and set the goal for 100,000 by the end of this year. The project is now complete, having mapped its 100,000th genome. Health secretary Matt Hancock described reaching the target as “a major milestone in the route to the healthcare of the future”. He added: “From Crick and Watson onwards, Britain has led the world in this amazing technology. We do so again today as we map a course to sequencing a million genomes. Understanding the human code on such a scale is part of our mission to provide truly personalised care to help patients live longer, healthier and happier lives. “I am incredibly excited about the potential of this type of technology to unlock the next generation of treatments, diagnose diseases earlier, save lives and enable patients to take greater control of their own health.” The scheme, which is a collaboration between Genomics England, NHS England and the Department of Health and Social Care, was announced by the then health secretary, Jeremy Hunt, in July 2013, and in 2015 it began recruiting patients for the programme. A total of 85,000 NHS patients and 1,500 NHS staff have taken part in the project across 13 NHS genomic medicine centres across the country. The aim of the project is to use big data and genetics to develop personalised medicine, creating the ability to target treatment for individual patients.
According to the government, early analysis “has found genetic changes in more than 60% of cancer patients, which could potentially provide new therapies through clinical trials for some of these patients”. In October 2018, Hancock announced plans to sequence five million genomes over the next five years, significantly expanding the programme. From next year, the government will launch an NHS Genomic Medicine Service, and also aims to kick-start a UK genomics industry, ensuring the country becomes a leader in the field. Genomics England chair John Chisholm said that at the launch of the 100,000 Genomes Project, it was seen as a “bold ambition to corral the UK’s renowned skills in genomic science and combine them with the strengths of a truly national health service in order to propel the UK into a global leadership position in population genomics”. He added: “With this announcement, that ambition has been achieved. The results of this will be felt for many generations to come as the benefits of genomic medicine in the UK unfold.”
An important aspect of human memory is our ability to conjure specific moments from the vast array of experiences that have occurred in any given setting. For example, if asked to recommend a tourist itinerary for a city you have visited many times, your brain somehow enables you to selectively recall and distinguish specific memories from your different trips to provide an answer.

Studies have shown that declarative memory – the kind of memory you can consciously recall, like your home address or your mother's name – relies on healthy medial temporal lobe structures in the brain, including the hippocampus and entorhinal cortex (EC). These regions are also important for spatial cognition, demonstrated by the Nobel-Prize-winning discovery of "place cells" and "grid cells" in these regions: neurons that activate to represent specific locations in the environment during navigation (akin to a GPS). However, it has not been clear if or how this "spatial map" in the brain relates to a person's memory of events at those locations, and how neuronal activity in these regions enables us to target a particular memory for retrieval among related experiences.

A team led by neuroengineers at Columbia Engineering has found the first evidence that individual neurons in the human brain target specific memories during recall. They studied recordings in neurosurgical patients who had electrodes implanted in their brains and examined how the patients' brain signals corresponded to their behavior while performing a virtual-reality (VR) object-location memory task. The researchers identified "memory-trace cells" whose activity was spatially tuned to the location where subjects remembered encountering specific objects. The study is published today in Nature Neuroscience.

Video of example trials of the spatial memory task, showing memory encoding and retrieval.
Credit: Salman Qasim/Columbia Engineering

"We found these memory-trace neurons primarily in the entorhinal cortex (EC), which is one of the first regions of the brain affected by the onset of Alzheimer's disease," says Joshua Jacobs, associate professor of biomedical engineering, who directed the study. "Because the activity of these neurons is closely related to what a person is trying to remember, it is possible that their activity is disrupted by diseases like Alzheimer's, leading to memory deficits. Our findings should open up new lines of investigation into how neural activity in the entorhinal cortex and medial temporal lobe helps us target past events for recall, and more generally how space and memory overlap in the brain."

The team was able to measure the activity of single neurons by taking advantage of a rare opportunity: invasively recording from the brains of 19 neurosurgical patients at several hospitals, including the Columbia University Irving Medical Center. The patients had drug-resistant epilepsy and had already had recording electrodes implanted in their brains for their clinical treatment. The researchers designed the experiments as engaging and immersive VR computer games, and the bedridden patients used laptops and handheld controllers to move through virtual environments.

In performing the task, subjects first navigated through the environment to learn the locations of four unique objects. Then the researchers removed the objects and asked patients to move through the environment and mark the location of one specific object on each trial. The team measured the activity of neurons as the patients moved through the environment and marked their memory targets. Initially, they identified purely spatially tuned neurons, similar to "place cells", that always activated when patients moved through specific locations, regardless of the subjects' memory target.
"These neurons seemed only to care about the person's spatial location, like a pure GPS," says Salman E. Qasim, Jacobs' Ph.D. student and lead author of the study.

But the researchers also noticed that other neurons only activated in locations relevant to the memory the patient was recalling on that trial: whenever patients were instructed to target a different memory for recall, these neurons changed their activity to match the new target's remembered location. What especially excited Jacobs and Qasim is that they could actually decode the specific memory a patient was targeting based on the activity of these neurons.

"Our study demonstrates that neurons in the human brain track the experiences we are willfully recalling, and can change their activity patterns to differentiate between memories. They're just like the pins on your Google map that mark the locations you remember for important events," Qasim says. "This discovery might provide a potential mechanism for our ability to selectively call upon different experiences from the past, and highlights how these memories may influence our brain's spatial map."

Jacobs and Qasim next plan to look for evidence that these neurons represent memories in non-spatial contexts to better characterize their role in memory function. "We know now that neurons care about where our memories occur, and now we want to see if these neurons care about other features of those memories, like when they occurred, what occurred, and so on," Qasim notes.

Short-term memory (STM), also referred to as short-term storage, or primary or active memory, indicates different systems of memory involved in the retention of pieces of information (memory chunks) for a relatively short time (usually up to 30 seconds). In contrast, long-term memory (LTM) may hold an indefinite amount of information. The difference between the two memories, however, is not just in the 'time' variable but is above all functional.
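To make the decoding idea concrete, here is a minimal sketch of how a memory target could be read out from spatially tuned firing-rate maps. This is not the study's actual analysis pipeline: the simulated Gaussian rate maps, the bin counts, and the template-correlation decoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 20           # spatial bins along a linearized track
N_TARGETS = 4         # four remembered object locations
TRIALS_PER_TARGET = 10

# Hypothetical ground truth: each memory target shifts the cell's
# preferred firing location (the "memory-trace" behavior).
preferred_bin = {t: 3 + 4 * t for t in range(N_TARGETS)}

def simulate_rate_map(target):
    """Noisy Gaussian firing-rate map centered on the bin
    associated with the current memory target."""
    bins = np.arange(N_BINS)
    rate = np.exp(-0.5 * ((bins - preferred_bin[target]) / 1.5) ** 2)
    return rate + rng.normal(0, 0.1, N_BINS)

def build_templates(trials):
    """Average rate map per memory target (the decoding templates)."""
    return {t: np.mean([m for tt, m in trials if tt == t], axis=0)
            for t in range(N_TARGETS)}

def decode(rate_map, templates):
    """Pick the target whose template correlates best with this trial."""
    scores = {t: np.corrcoef(rate_map, tmpl)[0, 1]
              for t, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Fit templates on a set of training trials, then decode held-out trials.
trials = [(t, simulate_rate_map(t))
          for t in range(N_TARGETS) for _ in range(TRIALS_PER_TARGET)]
templates = build_templates(trials)

test = [(t, simulate_rate_map(t)) for t in range(N_TARGETS) for _ in range(25)]
accuracy = np.mean([decode(m, templates) == t for t, m in test])
print(f"decoding accuracy: {accuracy:.2f}")  # chance level would be 0.25
```

With well-separated preferred locations and modest noise, the correlation decoder recovers the memory target far above chance, which is the intuition behind "decoding the specific memory a patient was targeting" from neural activity.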
Nevertheless, the two systems are closely related. Practically, STM works as a kind of "scratchpad" for temporary recall of a limited number of items (in the verbal domain, roughly George Miller's 'magical' number of 7 +/- 2 items) that come from the sensory register and are ready to be processed through attention and recognition.

The information collected in LTM storage, on the other hand, consists of memories for the performance of actions or skills (i.e., procedural memories, "knowing how") and memories of facts, rules, concepts, and events (i.e., declarative memories, "knowing that"). Declarative memory includes semantic and episodic memory. The former concerns broad knowledge of facts, rules, concepts, and propositions ('general knowledge'); the latter is related to personal and experienced events and the contexts in which they occurred ('personal recollection').

Although STM is closely related to the concept of 'working memory' (WM), STM and WM represent two distinct entities. STM, indeed, is a set of storage systems, whereas WM indicates the cognitive operations and executive functions associated with the organization and manipulation of stored information. Nevertheless, one often hears the terms STM and WM used interchangeably.

Furthermore, one must distinguish STM from 'sensory memory' (SM), such as the acoustic echoic and iconic visual memories, which are shorter in duration (a fraction of a second) than STM and reflect the original sensation, or perception, of the stimulus. In other words, SM is specific to the stimulus' modality of presentation. This 'raw' sensory information undergoes processing, and by the time it becomes STM it is expressed in a format different from that perceived initially. The famous Atkinson and Shiffrin model (or multi-store model), proposed in the late 1960s, explains the functional correlations between STM, LTM, SM, and WM.
Later on, a considerable number of studies demonstrated the anatomical and functional distinction between memory processes, as well as the neural correlates and functioning of STM and LTM subsystems. In light of these findings, several memory models have been postulated. While certain authors suggested the existence of a single memory system encompassing both short- and long-term storage, after 50 years the Atkinson and Shiffrin model remains a valid approach for explaining memory dynamics. In light of more recent research, however, the model has several problems, mostly concerning the characteristics of STM, the relationship between STM and WM, and the transition from STM to LTM.

Short-term memory: meaning and system(s)

STM is a storage system that includes several subsystems with limited capacity. Rather than being a limitation, this restriction is an evolutionary survival advantage, since it allows paying attention to limited but essential information, excluding confounding factors. It is the classic example of the prey that must focus on the hostile environment to recognize a possible attack by the predator.

Given the functional peculiarities of STM (the collection of sensory information), its subsystems are closely related to the modalities of sensory memory. As a consequence, several sensory-associated subsystems have been postulated, including the visuospatial, phonological (auditory-verbal), tactile, and olfactory domains. These subsystems involve different patterns and functional interconnections with the corresponding cortical and subcortical areas and centers.

The concept of working memory

In 1974, Baddeley and Hitch developed an alternative model of STM, which they termed working memory. The WM model does not exclude the modal model but enriches its contents; conversely, the short-term store can be used to characterize the functioning of the WM.
WM refers more to the entire theoretical framework of structures and processes used for the storage and temporary manipulation of information, of which STM is only a component. In other words, STM is a functional storage element, while WM is a set of processes that also involve storage phases.

WM is the memory that we constantly use, which is always "online" when we have to understand something, solve a problem, or make an argument: the cognitive strategies for achieving short-term goals. The importance of this sort of 'operating system' of memory is shown by the evidence that WM deficits are associated with several developmental disorders of learning, including attention-deficit hyperactivity disorder (ADHD), dyslexia, and specific language impairment (SLI).

Short-term and long-term memory

These types of memory can be classically distinguished based on storage capacity and duration. The capacity of STM, indeed, has limitations in the amount and duration of information it can maintain. In contrast, LTM features a seemingly unlimited capacity that can last years.

The functional distinctions between memory storage systems, and the exact mechanisms by which memories transfer from STM to LTM, remain a controversial issue. Do STM and LTM represent one or more systems with specific subsystems? Although STM probably represents a substructure of LTM (a sort of long-term activated storage), rather than looking for a 'physical' division, it seems appropriate to examine the mechanisms of transition from a memory that is only a passage to a lasting memory. Although the classic multi-modal model proposed that storage of ST memories occurs automatically without manipulation, the matter seems to be more involved. The phenomenon concerns quantitative (number of memories) and qualitative (quality of memory) features.
Regarding the quantitative aspect, although Miller's number of 7 +/- 2 identifies the number of elements held in individual slots, the grouping of memory bits into larger chunks (chunking) allows storing much more information while still staying within the magic number.

The qualitative issue, or memory modulation within processing, is a fascinating phenomenon. It seems that the elements of STM undergo processing, which provides a sort of editing that involves the fragmentation of each element (chunking) and its re-elaboration. This phase of memory processing is called encoding and can condition subsequent processing, including storage and retrieval. The encoding process encompasses automatic (without conscious awareness) and effortful processing (through attention, practice, and thought) and allows us to retrieve information to be used to make decisions, answer questions, and so on.

There are three pathways followed during the encoding step: visual (information represented as a picture), acoustic (information represented as a sound), and semantic encoding (the meaning of the information). The processes interconnect with each other, so that information is broken down into different components. During recovery, the pathway that produced the coding facilitates the recovery of the other components through a singular chain reaction. A particular perfume, for instance, makes us recall a specific episode or image. Of note, the encoding process affects recovery, but the recovery itself undergoes a series of potential changes that can alter the initial content.

In neurofunctional terms, the difference between STM and LTM is the occurrence, in LTM, of a series of events that must fix the engram(s) definitively.
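The chunking idea discussed above can be sketched in a few lines. This is only a toy illustration of the principle, not a cognitive model, and the example digits are arbitrary:

```python
def chunk(sequence, size):
    """Recode a flat sequence into larger fixed-size chunks."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

# Twelve raw digits exceed the 7 +/- 2 span as individual items...
digits = "149217761066"
print(len(digits))   # 12 items

# ...but recoded as familiar year-like chunks they fit comfortably.
chunks = chunk(digits, 4)
print(chunks)        # ['1492', '1776', '1066']
print(len(chunks))   # 3 items, well within Miller's span
```

The chunk size and content are arbitrary here; the point is only that recoding reduces the number of items that must be held simultaneously.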
This effect occurs through the establishment of neural networks and is expressed as neurofunctional phenomena including long-term potentiation (LTP), which is an increase in the strength of neural transmission deriving from the strengthening of synaptic connections. This process requires gene expression and the synthesis of new proteins, and is related to long-lasting structural alterations in the synapses (synaptic consolidation) of the brain areas involved, such as the hippocampus in the case of declarative memories.

The role of the hippocampal network

Of note, hippocampal neurogenesis regulates the maintenance of LTP. However, the hippocampal network, including the parahippocampal gyrus, hippocampus, and neocortical areas, is not the place where memories are stored; rather, it has a crucial role in forming new memories and in their subsequent reactivation. It seems that the hippocampus has a limited capacity and acquires information quickly and automatically without keeping it for long. Over time, the originally available information becomes permanent in other brain structures (in the cortex), independently of the activity of the hippocampus itself.

The crucial mechanism of this transfer is the reactivation ("replay") of the configurations of neural activity. In other words, the hippocampus and the medial temporal structures connected to it are crucial for holding an event as a whole, as they distribute memory traces in an organized way. It is like an operating system that, through different software, can store, organize, process, and recover files on the hardware. This hippocampal-guided reactivation (retrieval) leads to the creation of direct connections between the cortical traces, and then to the formation of an integrated representation in the neocortex, including the visual association cortex for visual memory, the temporal cortex for auditory memory, and the left lateral temporal cortex for knowledge of word meaning.
Moreover, the hippocampus has other specific tasks, for example in spatial memory organization. Other brain areas are involved in memory processes as well; for example, the learning of motor skills is linked to the activation of cerebellar regions and brainstem nuclei. Furthermore, the learning of perceptive activities (improvements in the processing of perceptive stimuli essential in everyday activities, such as understanding spoken and written language) involves the basal ganglia and the sensory and associative cortices, whereas learning cognitive skills (related to problem-solving) initially involves the medial temporal lobes.

More information: Memory retrieval modulates spatial tuning of single neurons in the human entorhinal cortex, Nature Neuroscience (2019). DOI: 10.1038/s41593-019-0523-z, https://nature.com/articles/s41593-019-0523-z

Journal information: Nature Neuroscience

Provided by Columbia University School of Engineering and Applied Science
A botnet is a large group of compromised machines that are used by an individual or organization to carry out a cyberattack. Botnets are typically composed of infected machines that have been commandeered by the threat actor, who uses them to launch attacks against other targets.

A Botnet is a Network of Computers Controlled by a Threat Actor

A botnet is a network of computers controlled by a threat actor. Botnets are often used for cybercrime purposes, such as spamming or launching DDoS attacks.

Types of Bots

A botmaster is a threat actor who controls multiple bots in a botnet. Botnets are used for a variety of purposes, including launching DDoS attacks, spreading malware, and conducting reconnaissance.

Tactics Used by Botnet Operators

Botnet operators use a variety of tactics to control their botnets and make them more efficient. One common tactic is to add new bots to the network regularly in order to keep it active and functioning. Botnet controllers also use bots to send spam emails, launch denial-of-service (DoS) attacks, or steal sensitive data.

How to Defeat Bots?

There is no single term that perfectly describes a threat actor who controls multiple bots in a botnet; in general, the term used for this type of individual is "botnet master". Botnets are collections of infected computers that have been joined together to form a network and share data and commands. Botnets can be used for a variety of purposes, including sending spam emails, launching distributed denial-of-service (DDoS) attacks, or infecting other computers with malware. The size and complexity of botnets have increased over the years, as has the sophistication of the tools used to control them. Today's botnet masters are able to command their bots using a variety of languages and platforms, which allows them to operate in secrecy.
A Botnet is a Collection of Computers Controlled by the Same User

When a hacker controls multiple bots in a botnet, they can use the botnet to launch distributed denial-of-service (DDoS) attacks or steal sensitive information. A botnet typically refers to a network of malware-infected computers used to carry out malicious activities without the knowledge or consent of their owners. Botnets can range in size from just a few dozen machines to tens or hundreds of thousands.

A Threat Actor is Someone Who Uses Bots to Attack Other Websites

A threat actor is someone who uses bots to attack other websites. This can be done for a variety of reasons, such as gaining access to sensitive information, spreading malware, or disrupting services. Botnets are a common tool used by threat actors because they allow them to control large numbers of bots and use them to launch attacks.

A Command and Control Server is Used to Control the Bots in a Botnet

A threat actor controls multiple bots in a botnet by using a command and control server. The command and control server is responsible for issuing commands to the bots, controlling their actions, and communicating with the threat actor.

A Distributed Denial of Service (DDoS) Attack Is When Multiple Bots Flood a Website with Traffic to Cause it to Slow Down or Cease Functioning

A distributed denial-of-service (DDoS) attack is when multiple bots flood a website with traffic to cause it to slow down or cease functioning. DDoS attacks are typically carried out by attackers who use botnets: large networks of infected machines. Botnets are often built by compromising unsecured computers and then using them to send spam or launch DDoS attacks.
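The command-and-control relationship described above can be sketched as a harmless toy data model. All class and method names here are invented for illustration; nothing in this sketch touches a network or resembles real malware — it only shows the fan-out structure that makes a single operator able to direct many machines.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bot:
    """A compromised machine that acts on whatever the C2 sends."""
    bot_id: str
    last_command: Optional[str] = None

    def receive(self, command: str) -> str:
        self.last_command = command
        return f"{self.bot_id}: acknowledged '{command}'"

@dataclass
class CommandAndControl:
    """The botmaster's server: registers bots and fans out commands."""
    bots: List[Bot] = field(default_factory=list)

    def register(self, bot: Bot) -> None:
        self.bots.append(bot)

    def broadcast(self, command: str) -> List[str]:
        # One command from the botmaster reaches every bot at once,
        # which is what makes botnets effective at scale.
        return [bot.receive(command) for bot in self.bots]

c2 = CommandAndControl()
for i in range(3):
    c2.register(Bot(bot_id=f"bot-{i}"))

acks = c2.broadcast("report-status")
print(len(acks))  # 3 acknowledgements, one per bot
```

The one-to-many `broadcast` is the defining feature: detection and takedown efforts often focus on disrupting exactly this command channel rather than the individual bots.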
A threat actor who controls multiple bots in a botnet is generally referred to as a "botmaster." Botmasters use their bots to carry out various attacks, including distributed denial-of-service (DDoS) attacks and spamming, theft of information and intellectual property, and recruitment of other hackers. As botmasters continue to develop new ways to use their bots for malicious purposes, it's important that organizations keep up with the latest trends and tactics employed by these threat actors.
More than two years have passed since the beginning of the COVID-19 pandemic, and 2022 may be the first year with reduced or even no social restrictions. With the Omicron variant and its sub-variant BA.2 now confirmed to cause less aggressive disease than infection with earlier variants, the US and many other countries around the world seem to be preparing for the end of the crisis. However, while preliminary data suggests that Omicron may cause less severe disease, this variant also spreads more quickly than any of the previous strains, which means more and more people will become infected this year.

The long-term effects of COVID-19 have worried scientists and doctors alike ever since the beginning of the crisis in 2019. According to the Mayo Clinic, lasting health problems can include breathing problems for some patients, and heart complications, chronic kidney disease, ischemic stroke, and even Guillain-Barre syndrome for others. With the last being a disease that causes temporary paralysis, the only thing that seems certain when discussing the long-term effects of COVID-19 is that these effects can be severe, even life-threatening. In fact, recent studies suggest that COVID-19 is not a respiratory disease but a vascular disease that can, in time, affect numerous organs.

What is long COVID and why it matters

Long COVID is a condition with so many implications that it has become the subject of multiple concerns, discussions, and studies, including some by the World Health Organization (WHO). For some COVID patients, even after the primary disease and infection are cured, multiple negative effects follow. Some patients come out of the disease with almost all organs affected. The Office for National Statistics (ONS) in the United Kingdom (UK) estimates that as many as 1 in 10 patients with COVID will ultimately develop long-term symptoms of the disease.
Under these circumstances, post-COVID recovery is now becoming a new burden for health systems around the world. Although there is no clear way of establishing exactly what patients with long COVID suffer from, the condition covers a broad range of symptoms such as tiredness, muscle pain, and difficulty concentrating. The BBC also lists among the symptoms of long COVID: extreme tiredness, shortness of breath, heart palpitations, chest pain or tightness, problems with memory and concentration (the so-called "brain fog"), changes to taste and smell, and joint pain. According to the same article, no standard test for long COVID is currently available, and doctors diagnose the condition by ruling out other possible causes for these symptoms.

From long COVID to the risk of mental health disorders

If long COVID has been theorized about since the beginning of the crisis, other negative effects of the pandemic and the virus that caused it are just beginning to show. Among these consequences is the higher risk COVID patients face of developing mental health disorders. According to a new study, suffering from this illness puts patients at a significantly higher risk of confronting new mental health conditions. Symptoms like depression, anxiety, stress, substance use disorders, cognitive problems, and sleep issues also seem to accompany COVID-19. These disorders may add pressure to the already existing crises of suicide and overdoses.

According to the same study, patients with more severe cases of COVID, especially those who were hospitalized, face a higher risk of developing mental health disorders. However, even people with mild or asymptomatic cases of the disease are more likely to encounter these negative consequences of the virus when compared to healthy individuals. The new research also showed that patients who suffer from COVID are more likely to face mental disorders.
And people already suffering from mental health disorders are more likely to become infected with the virus that causes COVID-19. Treating mental health disorders among the general population, and especially among survivors of COVID-19, should probably become a priority in the future.

Facing a post-pandemic future

With the emergence of Omicron and its sub-variants, the healthcare crisis seems to be becoming more manageable as the symptoms of the disease become less severe. However, symptoms of long COVID and the risks of developing mental health disorders are real among COVID-19 survivors, and they are likely to remain with us for some time. Where past efforts went into developing effective vaccines and treatments, future efforts will most likely concentrate on erasing the negative effects of the virus and the disease it provokes. This, however, may prove equally challenging.
The alarming growth of malware attacks in recent years should concern each of us, but more importantly, it should make us AWARE of the risks and consequences. Taking action and preventing these malicious activities operated by cybercriminals has to be a top priority IF we want to stay safe online.

The reality is that cyber attackers now use different strains of malware, much more sophisticated and agile, that prove to be effective and successful, challenging us to build a stronger defense against them. Malware evolves at a rapid pace because advanced malware has mastered the art of evasion. Thus, traditional antivirus engines find it difficult to detect attacks in their first stages. Malware is getting bigger and bigger. It fuels growth and innovation, and encourages malicious actors to easily reach their goals.

In this article, we'll have an in-depth analysis of malware and learn: where it hides, what the most dangerous malware attacks so far have been, and why malware is a profitable business for cybercriminals, and offer actionable security tips to help you better prevent these attacks and keep yourself (and your digital assets) safe.

Why do malware attacks keep happening?

In the context of this ever-changing threat landscape that never ceases to challenge everyone from home users and organizations to security researchers and communities, this question makes a good point. It's simple: malware still works, and humans contribute to helping attackers succeed with their malicious plans.

True fact: through our old habits that seem to die hard (not updating our software frequently, or reusing the same password for various online accounts), we maintain security holes that malicious actors exploit, fueling this growing malware business.
According to a report from the security company Trustwave, 22 percent of security respondents said that "preventing malware, including ransomware, was their biggest security threat and obligation for 2018", while the second biggest pressure was identifying vulnerabilities (17%) and the third (13%) was preventing social engineering and phishing attacks.

Paul Edmunds, Head of Technology at the National Crime Agency's National Cyber Crime Unit (NCCU), states that:

"It's really important to understand the impact that malware has. It's a massive criminal enabler that underlines most cybercrime. It's an infrastructure that's used for compromising devices to conduct most of the prominent attacks that you see."

The evolution of malware

Before we understand its impact, let's take a few steps back and have a look at how malware evolved to become such a serious and threatening business for everyone. The malware market evolved from something that was tested and probably used for fun (with hackers creating programs to see how they could gain access to unauthorized places, and then focusing on money and stealing personal data) into a more targeted attack vector.

Did we ask for malware? No, but there's a big business out there, and we are all responsible in one way or another for keeping it alive and growing. According to the Cisco 2018 Annual Cybersecurity Report, the evolution of malware was "one of the most important developments in the attack landscape in 2017".

"Malware is becoming more vicious. And it's harder to combat. We now face everything from network-based ransomware worms to devastating wiper malware."

This graphic from AV-Test shows the growth of total malware over the last five years. Also, did you know that "in the second half of 2017 on average 795 new malware specimen were discovered per hour i.e. 13 per minute"?
Given the current smartphone landscape, mobile malware is one of the fastest-growing types of malware, targeting more and more Android users. In the first quarter of 2018, the G DATA security experts detected "an average of 9,411 new malware every day for the popular Android operating system". This means a new malware specimen appearing every 10 seconds.

The rise of ransomware attacks

Perhaps the clearest evolution of the malware economy was seen last year with two massive and devastating cyberattacks: WannaCry and (non)Petya. The first was called by Europol an attack of "an unprecedented level" that took down entire networks, caused business disruption across 150 countries, and infected more than 200,000 computers. Not to mention the financial damage caused: many companies and public institutions had their computers and data encrypted, and the only way to get them back was to pay a ransom.

While during the WannaCry attack cybercriminals used the EternalBlue exploit, with the (non)Petya ransomware outbreak (which also spread fast and had self-replicating abilities) they changed the type of malware from ransomware to wiper. How is this different? The purpose of a wiper is to destroy and damage, while ransomware is mainly focused on making money.

In 2018, malware is even more agile, and Gandcrab ransomware is a great example. It is a fast-growing piece of malware that has been used and spread in waves of spam campaigns. While it has already reached version 4, this piece of malware was initially distributed via exploit kits, which abuse software vulnerabilities found in systems.

The newest version 4 of this malware family includes "different encryption algorithms, a new .KRAB extension, new ransom note name, and a new TOR payment site". So far, Gandcrab is one of the most prevalent and biggest ransomware attacks of 2018.
Here's a more in-depth and technical analysis of how Gandcrab ransomware evolved if you want to dive into this topic. If you've been hit by any of these ransomware attacks or others, we strongly advise you NOT to pay the ransom to get your data back. Instead, check out this list of decryption tools to unlock your data for free.

5 key places where malware can hide

Malware authors often look for new techniques to hide their malicious files, which often go unnoticed by antivirus software or threat intelligence analysis. Here are the most common places where malware can hide:

- Email attachments – Most of the security alerts we've written talk about malware being delivered via emails to potentially infect victims' computers. Sadly, many people still download, open, click, and enable malicious attachments to run on their computers. Here is an example of a variant of the Trickbot malware in which cybercriminals lure victims into clicking on a malicious Word document attached to the email.

- Links sent via email – Another common place where malware can hide is a link received via email, which is more tempting for users to simply click than downloading an attachment. This mindless clicking behavior is known and exploited by cybercriminals.

- Traffic redirects – Another place that malicious actors exploit to hide malware is Internet traffic (especially in the browser). As we spend most of our time reading online, browsing blogs, or buying on the Internet, it's easy to become a target. Traffic redirects may be invisible to unskilled users, so they land on sites where malware is hidden in the code of the page or in the ads listed on the site.

- Software updates – Probably the story of the compromised versions of the CCleaner software is the best example here. Hackers spread hidden malware in version 5.33 of the CCleaner software, which was downloaded by more than two million users. Full story here.
- Hidden and infected mobile apps – Given the rise of mobile apps, we're likely to download and install all kinds of apps on our devices without taking any precautions. Here's an example of a malware threat known as a hidden administrator app that targets Android users. It is an infected app that installs itself with administrator privileges and takes control of your mobile device.

If you want to find out more about how and where cybercriminals hide their malicious code in the files, links, and apps we use on a daily basis, read this guide.

Why malware is a profitable business for malicious authors

Just like any other business, the goal of malware authors is to turn their craft into a big and profitable business worth millions (or even billions) of dollars. To do that, it's important for them to know, and ask for, the right price. Making money from malware has proved to be a winning option for cybercriminals. Usually, they choose rich and developed countries and target large and successful organizations, from which they can extort a lot of money and access valuable data. As the number of ransomware attacks continues to grow exponentially, their authors will keep making a lot of money, because most victims choose to pay the ransom. According to the Telstra Security Report, more than half of the businesses that were victims of a ransomware attack paid the ransom, and they would do it again. "Some 60 percent of ransomware victims in New Zealand and 55 percent in Indonesia paid the ransom, making it the highest for Asia. In Europe, 41 percent of respondent ransomware victims paid up." On top of that, research conducted by Cybersecurity Ventures estimates that ransomware damages will cost the world more than $8 billion in 2018 and will reach $11.5 billion annually by 2019. The attackers behind the WannaCry ransomware may have caused global panic among users and organizations, but how much did they actually make?
In total, it has been estimated that they made $143,000 in Bitcoin from this massive attack. The Gandcrab ransomware, which continues to evolve and is quickly spread through waves of spam campaigns, "has infected over 50,000 victims and claimed an estimated $300-600K in ransom payments", according to Check Point Research. In the figure below, you can see the attacks by the geographic location of the targets.

The success of the Bitcoin cryptocurrency, with its price reaching a historic $20K at the end of 2017, influenced the rise of cryptojacking malware attacks. New findings from Check Point research state that "the number of global organizations affected by crypto-mining malware more than doubled from the second half of 2017 to the first six months of this year, with cybercriminals making an estimated $2.5 billion over the past six months." The research also found that hackers are now targeting cloud services, because most businesses store their sensitive data there. And there are more cyber security threats that should concern us and push us to implement solid prevention and security measures.

All the examples above show that the malware business is still growing, switching from a macroeconomic level to a microeconomic one. The malware market, like any other, offers a wide range of products to fit users' diverse needs. You can find APTs, ransomware, banking trojans, cryptojacking, data breaches, online scams, and malware families with as many names as you could possibly wish for, just like a supermarket offers a plethora of vegetables and fruits to choose from. Today's malware is more targeted, but not necessarily more sophisticated. It still exploits software vulnerabilities found in devices, and there is nothing too complicated about that. Today's malicious actors are both agile and creative, and they stick with techniques that still work.
Today's next-gen malware attacks have the ability to evade detection and bypass the antivirus programs users install on their computers to keep their data safe.

Security measures to apply against malware attacks

We might not have asked for a malware market, but we keep feeding it: through unpatched software, by not backing up data, by not learning enough about cyber security, and more. The time to act is NOW! Malware threats are widespread and difficult to combat, so, once again, we emphasize that prevention is the best strategy to stay safe online. Make sure you don't fall victim to malware, and follow these cyber security measures:
- Always keep your software patched and up to date, including the operating system and every application you use on a daily basis;
- Keep a backup of all your important data on external sources such as a hard drive or the cloud (Google Drive, Dropbox, etc.). This guide shows you how to do it;
- Once again, we urge you: do NOT open emails or click on suspicious files/attachments. Be very cautious!
- Remember to set strong and unique passwords with the help of a password management system. This security guide comes in handy.
- Use a reliable antivirus program as basic protection for your device, but also consider a proactive cyber security solution as a second layer of defense for maximum protection.
- Always secure your browsing while navigating the Internet, and visit only websites served over HTTPS;
- Teach yourself (and master the basics of cyber security) to easily spot online threats delivered via email, social engineering attacks, or any other method attackers may use.
- Remember that security is not just about using one solution or another; it's also about improving our online habits and being proactive every day.

Will malware as a business continue to grow?
I think it will, as long as it was, and still is, heavily sustained by the ransoms paid by victims who want immediate access to their valuable data. It will continue to grow as long as we don't apply the basic security measures that can make us less vulnerable to these attacks. This article was initially written by our CEO, Morten Kjaersgaard, in 2015, and was refreshed and improved by Ioana Rijnetu in July 2018.
In an ever-evolving world fraught with scams and fraud, the battle countries face to safeguard financial integrity and trust has never been more critical. Strengthening legislation and regulation emerges as a powerful weapon in the arsenal against these deceptive tactics. Deceitful individuals and organized criminal networks have harnessed the power of the digital age to exploit vulnerabilities, leaving no nation untouched by this growing menace. The impact of scams and fraud extends far beyond financial losses, with consequences reaching deep into the social fabric and eroding trust in institutions. As the tactics of scammers continuously evolve, countries face a pressing need to join forces and collaborate on a global scale to mount an effective defense. This blog, in our Addressing Global Challenges series, will delve into the prevailing scams and frauds, the economic and social impact they inflict, and the collective efforts needed to combat this ever-evolving threat.

$47.8B lost in online scams in 2021, up 15% from the previous year

The Impact of Scams and Fraud

The Pervasive Nature of Scams and Fraud

The pervasive nature of global scams and fraud is a pressing issue. In the digital age, criminals operate beyond physical borders using online platforms, exploiting vulnerabilities through phishing, fake websites, and social engineering. Tracking down perpetrators is challenging due to internet anonymity. This threat targets individuals, businesses, and even governmental institutions, demanding swift responses from governments and law enforcement agencies to protect citizens and economies.

Economic Toll and Loss of Trust

The economic toll and loss of trust caused by scams and fraud are significant global challenges. With billions of dollars lost annually, businesses, individuals, and governments suffer financially, impacting economic growth and increasing protective costs.
Trust between individuals, businesses, and institutions erodes when people fall victim to scams, affecting investment decisions and consumer behavior. To foster a more secure financial landscape, governments and organizations must focus on transparency, accountability, and effective measures to prevent and combat scams and fraud, rebuilding trust in financial systems.

Everyone is at Risk

In today's interconnected world, scams and fraud pose a risk to anyone. Evolving tactics cross demographic boundaries, leaving no one immune. Even tech-savvy individuals and businesses can fall prey to deceitful methods like phishing, social engineering, and investment scams. The vastness of the internet and electronic transactions offer ample opportunities for criminals. As the global nature of scams and fraud continues to expand, it underscores the importance of vigilance and awareness for everyone. Staying informed, maintaining a healthy skepticism, and adopting robust security measures are essential practices to safeguard against the pervasive threat of scams and fraud in our daily lives.

As criminal activities transcend international boundaries, cooperation and collaboration between countries become essential for effective enforcement and prosecution. However, differing legal systems, varying regulations, and jurisdictional complexities can hinder seamless cooperation. Scammers often exploit these loopholes, knowing that pursuing them across borders might be cumbersome for law enforcement agencies. The anonymity offered by the internet and digital communication further complicates matters, making it difficult to trace the origins of fraudulent activities. To overcome these challenges, countries must establish robust international partnerships, share intelligence and information, and streamline legal frameworks to enable swift and efficient cooperation in tackling cross-border criminal networks.
Only through coordinated efforts can nations fortify their defenses and collectively combat the global menace of scams and fraud.

Strengthening Legislation and Regulation

Strengthening legislation and regulation is crucial in combating scams and fraud. Governments must update laws to match evolving tactics, define scams clearly, increase penalties, and close loopholes. Comprehensive regulation is needed to monitor financial transactions, detect suspicious activities, and prevent illicit money flows. Enforcement with adequate resources and training for law enforcement agencies is vital. This creates a strong deterrent, safeguarding citizens and businesses from financial loss and restoring trust in financial systems.

How Netsweeper Stops the Spread of Scams and Fraud

Netsweeper is a vital ally in combating scams and fraud through its advanced filtering and monitoring solutions. By utilizing sophisticated algorithms, it blocks deceptive websites and phishing content, proactively safeguarding users from harm. Its user-friendly interface empowers administrators to customize filtering policies, while its monitoring capabilities detect suspicious activities and potential risks. Our continuous updates and vigilance create a safer online environment, contributing to the collective effort in stopping the spread of scams and fraud on a global scale. Netsweeper works closely with the Global Anti-Scam Alliance and ScamAdviser to stop scams and fraud by leveraging sophisticated algorithms to block access to fraudulent websites and phishing pages, using real-time data and intelligence to stay ahead of deceptive activities. This collaboration enhances our capabilities in creating a safer online environment for users worldwide.
For more related content, check out our podcast, The Rise of Digital Deception: How to Safeguard Yourself in the Digital Era, as we delve into the Global Anti-Scam Alliance's efforts to implement proactive strategies to combat scams and fraud on a global scale.

Next in the Addressing Global Challenges Series

In the final installment of our blog series, "The Growing Challenge of CSAM: Addressing Child Sexual Abuse Material Worldwide," we explore the escalating global crisis of online child sexual exploitation and abuse, examining how technological advancements have driven its proliferation, and offer potential solutions.
The terms information security and cyber security are sometimes used interchangeably, which can cause confusion. While both are tasked with securing data, they are not the same. As more and more organizations move their operations to digital formats, a strong emphasis has been placed on securing digital data. Words like "cyberattack" or "data breach" permeate the marketplace and media. Protecting digital assets is crucial to ensuring the functionality of an enterprise; however, that's just one area of an organization's sensitive information. Whether your enterprise utilizes on-premise IT networks, migrates to the cloud, or prints physical documents of sensitive data, it's important to understand how information security and cyber security go about securing your data.

What is Information Security?

Information security is data security. Also known as InfoSec, information security refers to the practice of implementing safeguards to protect an organization's data, such as business records, personal information, intellectual data, and more. InfoSec also includes policies and procedures that outline how an enterprise protects data. While most sensitive information is stored digitally, information security covers the protection of data in all forms, including physical files. The goal is to prevent unauthorized access, which can disrupt, exploit, modify, record, or destroy sensitive information. Physical files and folders are typically kept safe in locked filing cabinets with restricted access. Whether stored in the cloud, an on-premises network, or a filing cabinet, organizations need to set restrictions to limit access to data. Common ways to protect information include:
- Access controls
- Procedural controls
- Technical controls
- Compliance controls

What is Cyber Security?

Cyber security is a subset of information security. The main goal of cyber security is to protect networks, applications, devices, and the data they hold from cyber attacks or cyber breaches.
Cyber security looks to identify sensitive data and potential threats to its security. It also determines which measures should be implemented to provide the most robust defense possible. Some cybersecurity measures include firewalls, data encryption, antivirus programs, and strong passwords. They fall under the umbrella of the five types of cybersecurity.

Differences Between Information Security and Cyber Security

While information and cyber security are closely related, they aren't the same. Here are five important differences:

Cyber security is primarily tasked with overseeing the security of the network, applications, devices, and the data stored within them. It focuses on handling digital threats, cyber breaches, and cyber attacks, such as ransomware attacks, malware attacks, phishing attempts, brute force attacks, and more, to safeguard data. Information security encompasses a broader approach to data protection. Along with digital data, information security also handles the protection of data beyond cyberspace. It includes protective elements such as physical security on-site. For example, physical documents need to be properly stored and secured to prevent unauthorized users from accessing them. Also, physical components such as hard drives need to be secured to prevent threats such as spooling (the copying of data between different devices).

Cybersecurity elements usually include preventative measures like firewalls, antivirus software, data encryption, password management, and more. Information security also includes many of these cybersecurity elements; however, InfoSec also incorporates physical security features, such as secured file cabinets, restricted access to areas of the office space like departmental offices, and policies and procedures for properly handling, sharing, or disposing of both digital and physical data.
Cybersecurity teams run regular diagnostics on their organization's IT systems: looking for software that needs to be patched or updated, monitoring antivirus software, managing password updates, and more. For instance, an enterprise's cybersecurity policy may be updated to require two-factor authentication (2FA) or multi-factor authentication (MFA) for all end users looking to access devices, software, or data. 2FA and MFA work the same way, with one difference: 2FA requires exactly two forms of authentication, while MFA requires two or more. Accepted authentication factors include a password, biometric markers like a fingerprint, voice ID, or face ID, and verification codes.

Information security teams develop disaster recovery plans to minimize an organization's separation from sensitive data in the event of a data breach, cyber attack, or even a natural disaster. Disaster recovery plans include procedures and steps for regaining data, prioritize the order of data retrieval, and also provide preventative measures to safeguard data, such as storing copies in the cloud. InfoSec specialists test these plans to ensure the procedures work efficiently.

How Cyber Security and Information Security are Related

Cyber security and information security are very closely related because they share the same primary goal: ensuring data security. Cybersecurity falls under the larger umbrella of information security; it is one facet of it. InfoSec casts a wide net by securing all kinds of information, including digital and physical files, while cyber security zeros in on securing digital information from malicious threats and unauthorized access. Information security and cyber security also share the same security practices. Both use the CIA model: confidentiality, integrity, and availability of information. CIA is used to enforce an organization's security procedures and policies.
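The verification codes mentioned above are typically time-based one-time passwords (TOTP, RFC 6238), which an authenticator app derives from a shared secret and the current time. A minimal sketch of the underlying HOTP/TOTP algorithm, for illustration only and not a production implementation:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // interval, digits)

# RFC 4226 test secret; counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))
```

Both the server and the user's authenticator app compute the same code independently; a login succeeds only when the submitted code matches the server's, which is why a stolen password alone is not enough.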
As cybersecurity works to ensure that sensitive digital data can only be accessed by authorized parties, information security makes sure that the data remains reliable. In other words, InfoSec looks to prevent threat actors from modifying the data in any way. With information unspoiled and safeguarded from unauthorized users, the data must also be made readily available for access by the proper users. Whether financial statements, product design information, or something else, organizations need to be able to access information anytime. Cynergy Technology is a leading full-service technology provider that specializes in cloud computing solutions and cybersecurity. With over forty-two years of experience, our team of professionals can assist your organization in finding cybersecurity and information security solutions to safeguard your sensitive data. With the peace of mind and confidence that comes with properly secured data, you can return to doing what you do best—running your business. Contact our team of experts today for a free consultation!
Confusion often accompanies the taxonomy of VoIP technology and IP telephony. Both terms refer to using the same IP network to send voice. The main difference between VoIP and IP telephony is that VoIP connects old-fashioned analog phones to a gateway device that converts the analog voice into digital bits and sends them across the internet, bypassing the expensive PSTN telephone networks. In IP telephony, the phones themselves are digital devices: they encode the user's voice directly into a digital signal and send it across the IP network using communications manager devices that enable the technology to work. IP telephony resides on the IP network and natively uses it for communication.

In this article, we will start with the definition of voice over IP, bearing in mind that it is truly vital in the corporate world these days. Since voice packets are continually streaming across a data infrastructure, many protocols need to be set up to maintain it and keep even a simple call going.

Example of VoIP vs IP telephony topology:

VoIP sends "packetized" voice over an IP network. Because an IP network characteristically also serves as a data network, quality of service problems can result. Fortunately, Cisco offers a wide collection of quality of service (QoS) features, along with many security-related features, to ensure the quality and protection of the voice transmission. Business networks at multiple sites can readily be linked without purchasing dedicated leased lines between the sites or depending on the public switched telephone network (PSTN), thanks to the ability to transmit voice over an IP network. An example is the internet, which does not impose charges for particular call types such as international or long-distance calls.
To further illustrate the differences between IP telephony and VoIP, take a look at the image above. In the top segment of the figure, the endpoints of the VoIP network are an analog phone, which is linked to an analog port on a gateway, and a private branch exchange (PBX), which is linked to a digital port on a different gateway. Since neither of these endpoints natively speaks IP, the topology is considered a VoIP network. The lower segment of the image shows a Cisco IP phone, which does natively communicate in IP. This Cisco IP phone is registered with a Cisco Unified Communications Manager server, which makes the call-routing decisions for the phone. That is why the lower topology in this figure is considered an IP telephony network. These are the differences between VoIP and IP telephony. Be aware that some texts use the terms IP telephony and VoIP interchangeably.

The Need and Importance of VoIP

Originally, one of the primary business drivers for the adoption of VoIP was saving money on long distance calls. However, increased competition in the industry drove down the cost of long distance calls to the point that cost savings alone were insufficient motivation to migrate a PBX-centric telephony solution to a VoIP network. Several other justifications exist for purchasing VoIP technology:
- Low recurring costs: In a typical PBX-centric network, a digital T1 circuit can traditionally carry either 23 or 24 simultaneous voice calls, depending on the type of signaling being used. A T1 typically has 23 or 24 channels available; each channel has a bandwidth of 64 kbps and can handle only one phone call at a time.
VoIP networks, however, often leverage coder/decoders (codecs) to compress the voice. Each voice call can then consume far less than 64 kbps of bandwidth, allowing more concurrent calls than the traditional technology.
- Flexibility/Adaptability: Since VoIP networks send voice traffic over an IP network, the administrator has a high degree of control over that traffic. Different clients can be given access to different voice applications, for example a messaging application or an interactive voice response (IVR) application.
- Advanced features and functionality: VoIP and IP telephony networks can also offer superior features, including the following:
- Call routing: Available routing protocols, such as EIGRP and OSPF, can be used to provide fast failover to a backup link if a primary network link fails. In addition, calls can be routed over different network links based on a link's quality or traffic load at the time.
- Security: If an attacker were to intercept and capture VoIP packets, he could play them back to deliberately eavesdrop on a conversation. For example, a user may enter her personal identification number (PIN) in a bank's IVR system, and the attacker may capture those packets. Attackers may also introduce rogue devices, such as call agent servers and IP phones, into the network. Fortunately, Cisco offers a wide range of technologies to further harden the security of a VoIP network.
- Messaging: A solution such as Cisco Unity can be used to provide a single repository for a wide variety of message types.
For example, a Microsoft Exchange message store can be used to combine the storage of fax, voice mail, and e-mail messages. The user can then, for instance, call into a Cisco Unity system and have her e-mail read to her by text-to-speech conversion.
- Call center solutions: Cisco offers a large variety of solutions for call centers. For example, Cisco's Contact Center and Contact Center Express solutions can intelligently route incoming calls to the appropriate call center agents. And because the call center uses Cisco IP Phones, the agents can be geographically dispersed (for example, call center agents working from home).
- Customer-facing solutions: Some customers may wish to communicate through a chat interface or e-mail rather than talk with a company's customer service representative. Since a VoIP network runs on a data network, data network features such as e-mail and chat can be incorporated into the customer's range of contact options, thus increasing customer satisfaction.
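The bandwidth savings from codec compression described above are easy to quantify. A rough sketch, counting payload bandwidth only and ignoring the extra IP/UDP/RTP header overhead that real VoIP calls add:

```python
# Nominal payload bit rates for common voice codecs, in kbps.
CODEC_KBPS = {"G.711": 64, "G.726": 32, "G.729": 8}

# A T1 carries 24 DS0 channels of 64 kbps each = 1,536 kbps of payload.
T1_PAYLOAD_KBPS = 24 * 64

for codec, kbps in CODEC_KBPS.items():
    calls = T1_PAYLOAD_KBPS // kbps
    print(f"{codec}: {kbps} kbps per call -> {calls} concurrent calls")
```

This is why a compressed codec such as G.729 lets the same circuit carry many times more simultaneous calls than uncompressed G.711, at some cost in voice quality.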
It's official: the Internet of Things (IoT) is really changing our everyday lives. While the technology that powers IoT is still relatively new to the market, it is already having a significant impact on people everywhere. Here are some important things to consider about how exactly the IoT is impacting the world and our lives every single day.

Internet of Things (IoT), Explained

In its most basic form, IoT is just an interconnected system of technology that works to enhance our lives. Devices are made to help us live better, smarter, and harder. It's the IoT that allows various devices to communicate with one another without a person present, so your computer can share data with your smartphone, among other devices. Ultimately, when it comes to IoT, the true power lies in removing the need for a human being to help devices and technology transmit important data.

How Exactly Does It All Work?

Again, it can be complicated to understand the nuances of IoT. In essence, an IoT ecosystem is a collection of web-enabled smart devices that rely on embedded processors and systems to communicate efficiently with one another and transmit data across smart devices.

Perks of IoT

There are significant benefits that come with IoT. Beyond making our everyday lives easier, IoT has other perks that are less obvious: it allows organizations to examine and monitor their overall processes and improve customer experiences, which can benefit businesses in general. At the end of the day, IoT is here to stay. It not only makes people's everyday lives easier and more convenient, but also helps businesses work more efficiently and provide customers with excellent service experiences, which can boost a company's overall success.
En-Net Services Can Help Today Experience a superior method of getting the public sector technology solutions you need through forming a partnership with En-Net Services. Our seasoned team members are familiar with the distinct purchasing and procurement cycles of state and local governments, as well as Federal, K-12 education, and higher education entities. En-Net is a certified Maryland Small Business Reserve with contract vehicles and sub-contracting partnerships to meet all contracting requirements. To find out more about our hardware services, printing, and imaging services, or to hear more about how a dynamic team can help meet your information technology needs, send us an email or give us a call at (301)-846-9901 today!
You may have heard about the deep web. You may have heard about the dark web. Are they one and the same? According to managed IT services experts in Orange County, they are two different things, and knowing the difference is crucial for your technical security.

The Deep Web

You likely access the deep web every day without knowing it. It happens each time you log in to your email or your Facebook account. The deep web refers to any part of the internet that a search engine cannot reach, whether due to a login requirement or a site that restricts search engines from indexing it. If you log in to your bank account, for example, you see information on your balance, purchases, deposits, and payments; search engines cannot see that information. While the information on the deep web is a bit boring, it also contains personal information that you don't want criminals to access.

The Dark Web

According to managed IT services experts in Orange County, accessing the dark web requires more than a standard browser and a password. The dark web requires a specially encrypted web browser that protects your identity as you enter the encrypted network. The dark web contains content that isn't for the casual internet user, and it is not something just anyone will stumble across: it requires technology that obscures the user's identity and location. Most people associate the dark web with criminal activity. In fact, most of the traffic comes from countries with heavy internet censorship, like Turkey and China, as it allows users to browse the rest of the internet without being caught. Criminals do make their home on the dark web. Users can find marketplaces to buy illegal drugs, social security numbers, and illicit pornography. Cryptocurrencies, like Bitcoin, are the currencies of choice because they are almost impossible to trace. While it's difficult to trace criminal activity on the dark web, it is not impossible.
Law enforcement agencies around the world have infiltrated the encrypted networks. They have busted drug dealers, hackers, and identity thieves, among other criminals.

Why Do You Need to Know the Difference?

Protecting your company from illegal activities begins with knowledge. You may have part of the deep web in your own infrastructure. Hopefully, you don’t have part of the dark web there as well. To learn more or get help with your security, contact us at Advanced Networks. We are the experts on managed IT services Orange County companies trust with their technology and security.
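Part of what keeps deep-web content out of search results is simply a publisher's instruction to crawlers. The Python sketch below, using the standard library's `urllib.robotparser`, shows how a crawler that honors a site's robots.txt is steered away from account pages; the domain and paths here are hypothetical examples, not a real site's policy.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a bank that keeps its logged-in account
# area out of search engines, one common way pages end up in the deep web.
robots_txt = """
User-agent: *
Disallow: /accounts/
Allow: /
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A crawler honoring robots.txt may index the public homepage...
print(parser.can_fetch("*", "https://example-bank.com/"))
# ...but not the account area, which stays unreachable to search engines.
print(parser.can_fetch("*", "https://example-bank.com/accounts/balance"))
```

Note that robots.txt is only a politeness convention; the stronger deep-web barrier mentioned above is the login requirement itself.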
What is cybercrime?

May 12, 2021

Extortion, identity theft, international data heists: these are the realities of the cybercriminal underworld. Hiding behind online anonymity, thieves and hackers can extort money from victims on the other side of the planet. What is cybercrime? How is it committed? And is there anything we can do to prevent it?

What is cybercrime?

A cybercrime is a criminal act that targets or utilizes a computer, smartphone, or other connected device. It’s a crime that is committed online. Cybercriminals attack a wide variety of targets using different methods depending on the victim. Some online criminals focus on extorting money from individuals, while others target the databases of businesses and corporate organizations. While most are motivated by wealth, certain hackers also double as political activists, attacking government bodies they deem corrupt. However, a broad definition of cybercrime isn’t particularly helpful when trying to understand the wide array of criminal acts this term encompasses. Vague phrases like “hacking” — bypassing security restrictions to access private data — refer to an almost limitless variety of actions. Let’s focus on the specific tools, tactics, and intentions of the modern cybercriminal.

Malware delivery and infection

Malware is a useful catch-all term for different forms of malicious software, but it doesn't refer to one specific kind of virus or attack. Specific types of malicious software are involved in almost every type of cybercrime. If an attacker exploits a weakness in an operating system, spies on a user’s keystrokes, or remotely hijacks a device, they're probably using malware. To benefit from malware, the attacker must first find a way to install it on the target device. This is often referred to as infection, and there are several popular ways to do this: A website can be used as a malware host, infecting any visitors who view the page.
To this end, perpetrators design their own domains, building a malicious download function directly into the site. To reach more victims, criminals may send page links in phishing emails, or use a similar domain name to a popular website. Malvertising uses online ads, coded to install malware or redirect users to infectious websites. Cybercriminals try to sneak their pop-ups and banner ads onto legitimate sites, and even if people don’t click on them, some can run automatically as soon as the page loads. A victim may not notice they’ve been targeted; the malvertisement can quietly install its malware and users will continue to browse on their devices, unaware of the infection. But infection is often just a prelude to the main act of a cybercrime. Having installed malware onto a device, the next step will likely involve some form of theft, with money or data (or both) as the trophy. Cybercriminals employ a range of techniques to steal, scam, and extort money from their victims. For example, they may use keylogging malware or Wi-Fi spying techniques to secretly view the victim’s browsing traffic and steal their banking credentials when those are inputted on a compromised device. Targeting both individuals and, increasingly, corporations, some criminals use ransomware — a type of malware that locks the user’s access to a device or database. Once access has been restricted, the perpetrator demands a ransom. With companies paying an average of $370,000 per attack, the global cost of ransomware crime is expected to reach $20 billion next year. For criminals who don’t rely on malware, social engineering tactics can still convince people to part with their money online willingly. While the notorious Nigerian Prince scam is relatively well known, there are many similar pretexting frauds that have seen people send huge sums of money to criminals posing as businessmen, long-lost family members, and prospective lovers. 
For an increasing number of cybercriminals, the way to make real money online is through data theft, rather than directly targeting the victim’s bank account with malware or social engineering. When it comes to this type of cybercrime, businesses and corporations are tempting targets. Large-scale data breaches will see a company’s private files hacked and their customer information exposed. User passwords, credit card numbers, and other sensitive data can prove incredibly valuable to an attacker, paving the way for more acts of cybercrime in the future. The average employee in the US has access to around 1,000 sensitive files, and many now work from home, where security protocols may not be properly enforced. By successfully compromising just one employee’s device, a cybercriminal could access a treasure trove of private data, which can then be sold on the dark web or used to facilitate identity theft and further extortion.

Disruption and hacktivism

Not all cybercrime focuses on financial rewards, however. Some criminal acts may be politically motivated, or simply intended to cause disruption. A distributed denial of service (DDoS) attack, for example, is an illegal procedure in which the attacker overwhelms a website or application with traffic until it is unable to serve legitimate users. In practice, that could mean forcing an entire website to go offline, or just disabling specific page features and functions. The rise of hacktivism — politically charged cyberattacks that often target government or corporate bodies — has seen DDoS attacks widely used as a form of protest. Other acts of hacktivism involve defacing official websites with messages and slogans, or exposing government or corporate data through leaks. Governments have also faced accusations of cybercrime, with China coming under particular scrutiny. When nations and military organisations resort to hacking, they stray into the realm of cyberwarfare.

Who investigates cybercrime?
Cybercrime can be investigated by different agencies at various levels, depending on the nature, severity, and location of the incident. Because perpetrators may not be in the same country as their victims, law enforcement agencies like the FBI in the United States work closely with their international counterparts overseas. Inter-governmental organisations like Interpol are particularly effective in tracking and apprehending cybercriminals, because they can draw on resources in multiple nations and jurisdictions. They can also train and educate local authorities in different regions on the nuances of responding to cybercrime. Local police forces may struggle to deal with cybercrime for a number of reasons, including the complexity of the methods used, the difficulty of tracking online perpetrators, and the lack of legal guidance. However, as cybercrime becomes ever more present in the 21st century, this will have to change.

How to prevent cybercrime: 5 simple steps

1. Use antivirus software. While a virus is just one form of malware, antivirus software can be a dynamic and multifunctional tool. A good antivirus firewall detects malicious software and blocks high-risk downloads. Consider using an ad-blocker as well to bolster security and reduce the threat of malvertising.

2. Protect your passwords. Ensure that you're using long, complex passwords without any detectable patterns or words. Combine characters, numbers, and symbols to protect yourself against brute-forcing software. Avoid using the same login credentials across multiple accounts and find a password manager to simplify the process.

3. Be wary of email links. Emails and social media messages may contain infectious links, even if the sender seems trustworthy. The best way to guard against these scams is to exercise caution whenever you're asked to click something online. Before engaging with an email, confirm the sender's authenticity: call the company’s helpline, or search online for news of similar scams. Human caution is a strong defense against phishing tactics.

4. Update your software. Out-of-date software can provide the weak spots that malware takes advantage of. Anything from a browser extension to your operating system could be the target. Individuals and companies should regularly check for new software patches or set their systems to update automatically.

5. Use a VPN. A virtual private network (VPN) encrypts the device's browsing data, limiting the risks of Wi-Fi spying and endpoint data breaches. With just one NordVPN account you can enjoy end-to-end encryption on six separate devices. For businesses with a network of hardware to protect, the NordLayer service offers an effective corporate security solution.

Start enjoying VPN protection today. Read the original blog post on NordVPN.
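The advice above about long passwords with mixed character classes can be quantified. This Python sketch estimates brute-force search space from a password's length and character pool; it is a deliberately naive lower-bound illustration (real strength meters such as zxcvbn also penalize dictionary words and keyboard patterns), and the sample passwords are made up.

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Naive brute-force entropy: length * log2(character-pool size).
    Illustrative only; ignores dictionary words and patterns."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    if pool == 0:
        return 0.0
    return len(password) * math.log2(pool)

# A short lowercase-only password versus a long mixed-character one:
print(round(estimate_entropy_bits("sunshine"), 1))
print(round(estimate_entropy_bits("Tr4ck-l0ng&Unique!Pass"), 1))
```

Even on this crude measure, adding length helps more than swapping a letter for a symbol, which is why the step above pairs complexity advice with a password manager.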
October 27, 2015

The Senate overwhelmingly approved the so-called Cybersecurity Information Sharing Act (CISA) on Tuesday. The measure would allow companies to share consumers’ data with the US government in the event of security breaches or cyber attacks—all in the name of cybersecurity. Edward Snowden, the NSA whistleblower, had declared the measure—which now goes to a conference committee between the House and Senate—a “surveillance bill.” In essence, the measure provides corporate America with legal immunity when sharing data about hacks and digital breaches with the Department of Homeland Security. The DHS can then funnel that information to other agencies, including the NSA and FBI. Senate advocates said there’s nothing in the bill that requires data sharing, and that companies must remove personally identifying information if they know “at the time of sharing” that the data identifies their consumers. They say the legislation would help the government and private enterprise coordinate responses to cyber attacks. But some members of Congress don’t see it that way.
Now that I’ve got your attention, what exactly is a kraken? It’s a legendary sea monster that terrorized ships sailing the North Atlantic Ocean. It was an unknown danger that dwelled in the ocean deep and could attack without warning, resulting in untold mayhem. Whether the kraken legend originates from a giant squid or octopus sighting is debatable, but it terrorized sailors nonetheless, as they never knew if or when the kraken might be encountered. Legends die hard, but there are real dangers that lurk beneath the oceans of the world, and this is precisely where submarine cables live and work. Hundreds of years ago, when the kraken was terrifying sailors crisscrossing the world’s oceans, ships were the only method of sharing information between continents separated by thousands of kilometers of water. This was true until the first reliable transoceanic submarine cable was established over 150 years ago, way back in 1866. This pioneering telegraph cable transmitted at rates that we’d scoff at today, but it was undoubtedly a monumental performance leap when compared to sending handwritten letters back and forth between continents, which could take weeks and even months. Imagine you waited months to receive an important letter, but couldn’t read the sender’s handwriting?! Oh, the horror! Most modern submarine cables are based on coherent optical transmission technology, which enables colossal capacity improvements over the early telegraph cables of yesteryear and can reliably carry multiple terabits of data each second. We’ve come a long way in improving how much data we can cram into these optical fibers that are the size of a human hair, housed in cables the size of a common garden hose, and laid upon the world’s seabeds for thousands of kilometers.
We’ve also come a long way in being utterly and completely dependent upon this critical infrastructure, which now carries $10 trillion – yes, TRILLION – worth of transactions every day and over 95% of all inter-continental traffic, and is experiencing over 40% CAGR growth worldwide. This network infrastructure will become even more critical, if that’s even possible! Why should we care about submarine cables sitting quietly upon the seabeds of the world? Because there’s no Plan B. There’s simply no viable alternative to the world’s critical submarine cable infrastructure. Satellites need not apply, because they cannot compete with the required capacity, performance, availability, security, or cost points of existing high-speed optical networks, overland or undersea. Most people haven’t even heard of submarine cables, but they’re there, silently and often invisibly carrying information over the biggest construction project mankind has ever undertaken – the Internet. No alternative means that we must continually innovate to increase the information-carrying capacity of these jugular veins of intercontinental connectivity, better protect them from inevitable faults to ensure continual availability, and improve the total cost of ownership to maintain pace with ongoing price erosion – often contradicting goals, n’est-ce pas?

Introducing Ciena’s New GeoMesh Extreme

Back in 2013, Ciena launched a ground-breaking (sea-breaking?) network solution that addressed these challenging design goals with the introduction of GeoMesh, which pioneered PoP-to-PoP networking.
By viewing the submarine segment of an end-to-end network similarly to a terrestrial network segment (albeit thousands of kilometers long and sitting on the bottom of the world’s oceans) Ciena erased traditional submarine-terrestrial network demarcation points – relics of historical reasons – in cable landing stations that revolutionized how end-to-end networks, overland and oversea, are designed, deployed, and maintained. Real-world benefits from scalability to availability to simplicity were unleashed the world over. But we didn’t stop there, because the ever-changing requirements placed upon networks never stops, ever. We’ve now taken the ground-breaking (sea-breaking?) GeoMesh architecture to the extreme with the launch of GeoMesh Extreme that improves upon its forerunner by incorporating several enhancements such as WaveLogic Ai, Blue Planet MCP, Blue Planet Analytics, Packet Switching, Protection Switching, and Professional Network Services. This new and open subsea network solution allows submarine cable operators to mix-and-match whatever components they require thus enabling greater choice – the key benefit of Open Cables – which we commercially enabled last November with our new partnership alliance with TE SubCom, an industry pioneer providing undersea communications technology and marine services. Want to know more about GeoMesh Extreme? Check out the video below, which explains what GeoMesh Extreme is and how Ciena will once again change the submarine network seascape, just as its predecessor did back in 2013. Not even the mythical kraken can disrupt submarine networks that are based on GeoMesh Extreme, and that’s a good thing.
Eric Bassier, Senior Director of Products, Quantum, considers the many benefits of using tape storage – in particular its significant contribution to delivering more efficient, affordable and greener IT solutions.

As the growth in digital data storage continues at an unrelenting rate, there is increasing concern over the environmental impact of modern infrastructure. In particular, Environmental, Social and Governance (ESG) considerations have become a major topic for organisations faced with the challenge of making data centres more sustainable without compromising that all-important performance and availability. To balance these increasingly contradictory objectives, IT teams are turning to tape storage. Used for decades as a reliable and affordable solution, its environmental and sustainability credentials now also provide a greener alternative to disk-based solutions. Among the key use cases for tape is long-term storage. As more data is retained for compliance or business reasons, the problem is that this information may not be accessed again for years, if ever. Using hard disk storage solutions geared towards the constant availability of data is far from efficient for such workloads; the challenge is to replace these systems with storage architecture that enables effective management of data over longer life cycles for a wider variety of workloads – at a higher level of sustainability.

A powerful argument

Historically, one of the leading benefits of tape storage has always been its lower acquisition cost: per gigabyte, for instance, tape hardware costs about half as much as disk-based solutions. Add to that the lower power consumption and heat generation of tape storage, and the arguments in favour of switching to tape across a wider range of use cases become even more compelling.
In practical terms, tape libraries consume little to no energy unless data is being read or written, with almost no heat dissipation or cooling requirements as a result. This can deliver significant savings in the electricity used for data centre operation, particularly given the large size of many modern facilities. This kind of efficiency can also have a direct impact on an organisation’s CO2 emission levels, particularly if its infrastructure is powered by fossil fuel electricity generation. The potential power-saving benefits don’t end there. Aside from computing power, data storage is one of the most power-hungry elements in the modern data centre environment. In a modelled production environment, for example, one study found that migrating less active data from disk drives to tape could achieve an 85% reduction in CO2 emissions when running approximately 500 TB of such data on tape versus HDD over a 12-month period. While this number might be at the upper end of potential CO2 reductions, it does underline tape’s significant potential to make a meaningful impact on emissions.

Waste and pollution

In addition to reducing electricity consumption – and by extension, lowering CO2 emissions – tape storage also has a lower overall impact on the environment than disk-based systems, based on the levels of waste created over the lifespan of the technologies. For example, magnetic tape storage has a much longer lifespan than most hard disk drives. Hosted in an environment where humidity and temperature are sensibly controlled, tape will last anywhere between 10 and 20 years, depending on usage levels. In contrast, most hard disk drives in use today will only last three to five years, not least because disks rely on many more moving parts that are liable to wear out and fail over time.
As a result, disk arrays are typically refreshed on a three-to-five-year cycle, as opposed to tape libraries, where the process can easily exceed a decade or more. The environmental benefit is that organisations can achieve up to a two-thirds reduction in disposal requirements by using tape drives instead of disk drives. On top of that, using tape storage also reduces e-waste, such as that derived from circuit boards, which can often contain highly polluting components. For those organisations focused on balancing storage infrastructure performance against their environmental responsibilities, tape can play a significant part in delivering more efficient, affordable and greener IT solutions. By implementing tape as a storage tier in their production environments and replacing spinning HDDs with tape media for less active data, there is scope to save significant sums on energy consumption, while also reducing CO2 emissions and disposal costs. As a technology unrivalled in its longevity, cost to capacity, reliability, portability and security, tape continues to play a crucial role in data protection and archive solutions. Looking to the future, this role seems certain to increase even further as organisations look for a performance and environmental win-win.
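The energy claims above can be sense-checked with simple arithmetic. In the Python sketch below, every figure (drive capacities, wattages, tape duty cycle) is an illustrative assumption rather than a measured or vendor-published number; the point is the shape of the calculation, which shows why mostly-idle tape beats always-spinning disk for cold data.

```python
# Back-of-the-envelope yearly energy comparison for ~500 TB of rarely
# accessed data on always-spinning HDDs versus a small tape library.
# All figures below are illustrative assumptions, not specifications.

TOTAL_TB = 500
HDD_CAPACITY_TB = 20        # assumed per-drive capacity
HDD_IDLE_WATTS = 5.0        # assumed draw of a spinning but idle drive
TAPE_DRIVES = 2             # cartridges draw no power; only drives do
TAPE_DRIVE_WATTS = 30.0     # assumed draw while reading or writing
TAPE_DUTY_CYCLE = 0.2       # assumed fraction of time drives are active

HOURS_PER_YEAR = 24 * 365

# HDDs draw power around the clock even when the data is never read.
hdd_kwh = (TOTAL_TB / HDD_CAPACITY_TB) * HDD_IDLE_WATTS * HOURS_PER_YEAR / 1000

# Tape consumes energy only while the shared drives are in use.
tape_kwh = TAPE_DRIVES * TAPE_DRIVE_WATTS * TAPE_DUTY_CYCLE * HOURS_PER_YEAR / 1000

savings = 1 - tape_kwh / hdd_kwh
print(f"HDD:  {hdd_kwh:.0f} kWh/year")
print(f"Tape: {tape_kwh:.0f} kWh/year")
print(f"Energy reduction: {savings:.0%}")
```

Under these particular assumptions the reduction lands in the same region as the 85% figure cited above, though real results depend heavily on hardware and access patterns.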
Cybersecurity attacks on medical devices can disrupt or deliver inaccurate patient care, as well as negatively impact business operations, resulting in staggering financial impacts due to lost revenue, fines, and penalties. Bruce Schneier, security expert, defines the term “Wicked Problem” in his bestseller, Click Here to Kill Everybody. It’s not evil, just difficult or nearly impossible to solve, because defining the problem and requirements is hard enough, let alone creating an effective and sustainable solution. Many healthcare organizations are employing new solutions and resources for additional support. In May of 2017, the WannaCry ransomware attack indiscriminately encrypted 230,000 systems, demanded payment, looked for the next exploitable network device, and then replicated. It infected up to 70,000 devices at the National Health Service (NHS) in England and Scotland, including blood-storage refrigerators, MRI scanners, and computers. In 2018, SamSam, yet another form of ransomware, was responsible for over 25% of healthcare compromises. This Trojan horse ransomware encrypted the Hancock Health and Cass Regional Electronic Medical Record systems, took down the city of Atlanta, and in 50 minutes infected LabCorp’s 7,000 applications and 1,900 servers. Robert Herjavec forecasts 2019 will experience a five-fold increase in attacks on healthcare organizations through the Internet of Things (IoT), ransomware, and insider threats. The ECRI Institute forecasts “Remote Access” as 2019’s #1 technology and patient safety threat affecting healthcare institutions. Whether medical devices are a random victim of malware or a hacker’s focused target, they are the perfect vector to leverage remote-access attacks, exploit IoT vulnerabilities, and monetize ransomware or, worse, impact critical patient safety within healthcare. Like laptops, IoT devices are endpoints connected to a network, often wirelessly.
Medical devices are a special category of IoT, with an astounding average of 15 per hospital bed. Connected medical devices, designed for remote access (support), do not permit anti-malware on the device and therefore can be easily hacked and manipulated, often more easily than a computer. Connected medical devices simply have more vulnerabilities and fewer security controls. Hackers hunt to find IoT devices. Shodan, commonly used by hackers, is a searchable database of internet-connected devices. Every compromised IoT device can be a point of entry into the hospital network, allowing cybercriminals to monetize Protected Health Information (PHI) or Personally Identifiable Information (PII). Even worse, IoT-actuating sensors have the ability to reach out from the digital world and make changes to our physical world. Examples include altering the dosage of an infusion pump, modifying the frequency and severity of the shocks from implantable pacemakers, and impacting the accuracy of an MRI. The Ponemon Institute surveyed 300 health systems in a 2018 study titled Medical Device Security: An Industry Under Attack and Unprepared to Defend. A staggering 44% of Healthcare Delivery Organizations (HDOs) are aware that patients experienced an adverse event or harm due to an unsecured medical device, while 40% had malicious software installed on a device and 38% admitted to inappropriate treatment as a result. The study found 80% of HDOs reported medical devices are extremely difficult to secure, but despite these figures, only 15% of HDOs are taking significant steps to prevent attacks on medical devices. Healthcare cybersecurity leads all other industries, but in a bad way.
The healthcare industry, per patient record, commands the highest resale value (#1 target), the highest organizational breach cost, the worst overall malware detection and containment timeframes, and ranks first in lost clients due to a breach, yet invests the least in cybersecurity as a percentage of IT budget. Before medical devices can be marketed to HDOs, the Food and Drug Administration (FDA) is required to approve submissions. In 2018, the FDA provided significant leadership through cybersecurity risk management requirements for manufacturers’ connected medical device approval submissions. However, the Inspector General’s September 2018 Report states the FDA needs to take additional steps to more fully integrate cybersecurity into its connected medical device review process. To date, solutions have focused on manufacturers designing cybersecurity into the device, a Software Bill of Materials (SBoM) providing device content, and coordinated industry-wide sharing of known vulnerabilities. These efforts do not address the expected long lifecycle of existing medical devices and the threats they currently pose to patient safety. In 2017, Congress mandated the Healthcare Industry Cybersecurity (HCIC) Task Force to conduct a healthcare industry Cybersecurity Risk Assessment. The report findings concluded the healthcare industry is in critical condition. The Task Force defined six Imperatives that need to be addressed immediately, two of which were Information Governance and Medical Device Security. HCIC’s Information Governance Imperative specifies identifying, valuing, and managing assets and risks, which include medical devices and their PHI.
These should be achieved through establishing controls, processes and procedures, creating incident response plans, and sharing information. Health and Human Services (HHS) suggests using the NIST Cybersecurity Framework (CSF), a risk management tool, in conjunction with the HIPAA Crosswalk, to improve abysmal HIPAA Privacy and Security compliance. Organizations like the Health Information Sharing and Analysis Center (H-ISAC) are challenged to manage known vulnerabilities through a complex ecosystem of coordinated disclosure, discovery, patching, distribution, and deployment. The Medical Device Security Imperative specifically calls out HDOs, manufacturers, and service organizations to address several vulnerabilities. Action items focus on securing legacy systems, upgrading and patching processes, strengthening authentication, implementing superior network segmentation strategies, and mandating an SBoM detailing components within a device. HDOs attempting to pinpoint legacy systems and security weaknesses must first establish an accurate and detailed inventory. Most inventories are conducted manually, are time consuming, and are highly inaccurate. Standard network security tools cannot properly assess medical devices nor provide details like classification, model, operating system, IP/MAC fields, configuration, serial numbers, known vulnerabilities, how a device sits on the network, and whether it stores PHI. Per the HIPAA Security Rule, covered entities (CE) and business associates (BA) are required to make reasonable efforts to secure the PHI that is created, used, disclosed, transmitted, or stored. Most HDOs are at financial risk of HIPAA fines because they cannot produce an accurate device list, much less the device location, its risk level, what PHI is on each device, and which devices are missing.
HHS convened over 150 cyber and healthcare experts from the government and the industry to deliver creative and practical voluntary best practices to address medical device cybersecurity. In December 2018, this Healthcare and Public Health Sector Critical Infrastructure Security Resilience Public-Private Partnership released the four-volume publication providing guidance for the HCIC Report Imperatives. Medical device security remains a “Top 5 Threat” according to this HHS lead publication. HDOs have recently turned to sophisticated security software architects in hopes of tackling this “Wicked Problem.” These advanced solutions offer several benefits. The first benefit is an automated detailed device inventory, in which devices are discovered and grouped based on information gathered from network behavior and device communication traffic patterns allowing for increased security intelligence. These solutions are generally hyper-focused on medical devices and layered within a broader existing security framework, leveraging existing perimeter security investments and reducing costs. This medical device security software can provide an additional level of visibility and control, but integration with the existing IT systems internal and external networks can result in unique and challenging configurations. Another benefit supports the HCIC report’s #1 action item for medical device security, implementing operationally personalized network segmentation. But manually provisioning a network security policy for each device protracts implementation and increases costs to a point of project failure. Intuitively, these tools leverage inventory, behavior, and risk profile information in order to automate security design and policy enforcement, which significantly reduce the time and expense associated with micro-segmentation. The real sophistication occurs when all of these manically complex features are combined. 
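As a rough illustration of the inventory-driven policy automation described here, the Python sketch below turns a device inventory and per-class policy templates into default-deny, allow-list rules. The device names, addresses, and rule syntax are invented for illustration and do not come from any vendor's product.

```python
# Hypothetical device inventory, of the kind an automated discovery
# tool might produce from network traffic.
inventory = [
    {"name": "infusion-pump-12", "ip": "10.20.1.12", "type": "infusion_pump"},
    {"name": "mri-scanner-1",    "ip": "10.20.2.31", "type": "mri"},
]

# Per-device-class allow-list: each class may talk only to its
# (hypothetical) management server on specific ports.
policy_templates = {
    "infusion_pump": {"allowed_dst": ["10.50.0.10"], "allowed_ports": [443]},
    "mri":           {"allowed_dst": ["10.50.0.20"], "allowed_ports": [104, 443]},
}

def build_rules(device):
    """Expand a device record into explicit allow rules plus a default deny."""
    tpl = policy_templates[device["type"]]
    rules = [
        f"allow {device['ip']} -> {dst}:{port}"
        for dst in tpl["allowed_dst"]
        for port in tpl["allowed_ports"]
    ]
    rules.append(f"deny {device['ip']} -> any")  # everything else is blocked
    return rules

for dev in inventory:
    for rule in build_rules(dev):
        print(rule)
```

The design point mirrors the text: once an accurate inventory exists, per-device micro-segmentation policy becomes a mechanical expansion rather than a manual provisioning task.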
The system discovers the entire inventory of medical devices, leverages the device details, integrates known and active vulnerabilities, detects network intrusion activity, and determines anomalous device behavior, simultaneously. All of this previously unavailable and unrelated information is correlated by the most advanced solutions in real time, prioritizing device risk and escalating alerts.

Where to begin a device security plan:

- Create a Medical Device and IoT Security Plan
- Form a multi-stakeholder team by establishing roles
- Review policies and procedures to include Supply Chain Risk Management
- Maintain good cyber hygiene and monitoring
- Conduct an asset inventory (using new automated tools)
- Prioritize devices based on the business mission
- Assess devices, remediate risks, and harden
- Correlate the device risk assessment findings with a HIPAA risk assessment
- Establish effective governance
- Prepare a detailed response plan to contain and eradicate
- Design recovery strategies that leverage forensics and ensure resiliency

Patient safety, financial loss, and new laws holding executives personally responsible are increasing cybersecurity investment at the board level. The Office of Civil Rights (OCR) continues its stringent enforcement of HIPAA violations, looking back six years when any violation is reported. Increased penalties are the trend, where courts will award damages for potential future harm resulting from yesterday’s breach of PHI. Lost future business, legal costs, and downtime are also financially devastating. These losses, due to confidentiality failures, will pale in comparison to the additional penalties resulting from cyberattacks impacting device availability or integrity that result in adverse patient outcomes. These risks will not magically disappear. The extent to which they are reduced will be a result of deliberate, integrated, multi-stakeholder participation.
Therefore, progressive HDOs are increasing collaboration, implementing a device security plan, exploring leading-edge solutions, and leveraging additional resources in an effort to solve a “Wicked Problem.” Ty Greenhalgh is a managing member of Cyber Tygr. He has 30 years of healthcare technology and information management experience. Ty is a contributing member of the Health and Public Health Coordinating Council and the NCHICA Biomedical Taskforce. He can be reached at [email protected].
June 19, 2020 My previous article about information risk awareness introduced some terminology for describing and analyzing information risk, as well as concepts such as information management vulnerabilities that can be exploited (either on purpose or inadvertently) with negative consequences to the organization. Effective governance of data and information risk management requires discipline in specification: identifying and organizing the different types of data vulnerabilities, determining the threats that can exploit those weaknesses, understanding the scope of the consequences, assessing the probability that a threat will take place and, if it does, assessing the probability that there will be consequences. You might think that the best approach would be enumerating the different vulnerabilities and then working from there to consider how those vulnerabilities can be exploited. And, in fact, there are some published guidelines and practices that suggest surveying your organization to assess, describe, and categorize risks as a prelude to developing controls and monitoring for information risk events. Yet, that approach may not be the most practical to take if your information risk management framework is to be aligned with resource allocation and preventative controls. One immediate challenge is that the domain of potential risks is expansive, and one can survey an entire enterprise of data assets and consider the risks long before any substantive vulnerabilities are revealed. Perhaps it might be better to initially focus on the collection of what we might call “relevant” risks: those that have both a high-magnitude negative consequence and a high probability of occurrence. For example, let’s examine the information risk domain of data loss, and look at the vulnerabilities, potential threats and consequences of some use case scenarios. We’ll then consider the probabilities associated with those use cases.
Let’s start with a working definition of “data loss” as a situation in which the information presumed to be encapsulated within a data asset is no longer accessible. We can identify and categorize a number of reasons that data sets are lost, which helps to identify the potential vulnerabilities:
- Accidental data deletion: Data is accidentally deleted, either by a human or by an automated process.
- Accidental loss of key: The key to a deliberately encrypted data asset is lost.
- Mechanical/environmental failure: There was a failure during a process writing data to a storage location (for example, a power failure in the middle of outputting a data set).
- Data exchange failure: There was a failure during data transmission/exchange, such as a loss of internet connectivity in the middle of a transmission.
- Physical asset failure: The physical asset containing the data experienced a failure, such as a hard disk crash.
- Asset damage: The physical asset containing the data is inadvertently damaged, such as an SD card getting bent or cracked.
- Software-based corruption: There is damage to or corruption of electronic data assets due to software failure.
- Misplacement: The physical asset containing the data is misplaced, such as a missing data tape that was not archived properly.
- Loss: The physical asset containing the data is lost, such as a portable hard drive falling out of a box when moved.
- Missing devices: The data asset is stored in a medium that can no longer be read because no devices capable of reading the medium are available.
- Deterioration: The storage medium has deteriorated to the point that it cannot be read.
- Malicious damage: The physical asset containing the data is maliciously damaged.
- Theft: The physical asset containing data is stolen.
- Malicious encryption: A data asset is maliciously encrypted, such as by a cyber-attack via ransomware.
- Malicious deletion: A data asset is maliciously deleted, such as a deletion that occurs when a ransom is not paid.
At the same time, we can consider the different costs associated with data loss, such as:
- Recovery, or the costs associated with identifying the loss and restoring from backup
- Revenue disruption, or the inability to continue taking orders and making sales
- Operational disruption/downtime, or the inability to operate production facilities
- Inherent value of the data, such as the costs of acquisition or intellectual property
- Reputational damage associated with negative perception of the organization’s levels of trust
These cost categories help to identify the scale of potential negative consequences associated with a risk of data loss for any particular data asset. For example, an archive containing corporate documents including intellectual property may rate a higher “inherent value” consequence than an “operational downtime” consequence. On the other hand, loss of customer data may rate a high “revenue disruption” consequence. At this point, one can assemble a risk matrix associated with any specific data domain or data asset, with the vulnerabilities along one axis and the consequences along the other axis. The first step is to inventory the data assets and prioritize them in terms of relevance. For example, customer data is highly relevant, but a copy of a spreadsheet with an expense report might not be. Set a threshold for the level of relevance and, for all data assets above that threshold, estimate the costs associated with loss, the vulnerabilities, and the probabilities that those vulnerabilities can be exploited. In many cases, this categorization and analysis process will not only help reduce the complexity of surveying and assessing risks, but it can also help point you to the preventative measures that can be taken to minimize information risk.
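That matrix idea can be sketched numerically. All probabilities, cost figures, category names, and the relevance threshold below are invented for illustration.

```python
# Sketch of the risk matrix: for each (vulnerability, consequence) cell,
# expected loss = probability of exploitation x cost of the consequence.
# Cells above a threshold are the "relevant" risks worth focusing on first.

probabilities = {              # assumed annual probability of occurrence
    "accidental deletion": 0.10,
    "ransomware":          0.05,
    "hardware failure":    0.08,
}
costs = {                      # assumed cost (in $k) per consequence category
    "recovery":             50,
    "revenue disruption":  400,
    "reputational damage": 200,
}

def risk_matrix(probabilities, costs):
    # One expected-loss value per (vulnerability, consequence) pair.
    return {(vuln, cons): p * cost
            for vuln, p in probabilities.items()
            for cons, cost in costs.items()}

def relevant_risks(matrix, threshold):
    # Keep only cells whose expected loss meets the relevance threshold.
    return {cell: loss for cell, loss in matrix.items() if loss >= threshold}

matrix = risk_matrix(probabilities, costs)
for (vuln, cons), loss in sorted(relevant_risks(matrix, 20).items()):
    print(f"{vuln} / {cons}: {loss:.1f}")
```

Even this toy version shows the filtering effect described above: most cells fall below the threshold, leaving a short list of high-magnitude, high-probability risks to address first.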
Researchers recently announced the presence of a gaping security hole in Spring, a framework widely used by organizations developing Java applications. Designated CVE-2022-22965 and nicknamed SpringShell (also known as Spring4Shell), the substantial chink in the collective Java development community’s armor left many scrambling for a patch before hackers began exploiting the vulnerability. Moreover, the announcement underscored the futility of security strategies that attempt to thwart attackers by individually denying all potentially dangerous functionalities. Instead, the incident suggested that the solution to the growing number of cyberattacks exploiting inherent security weaknesses lies in developing processes for building security into applications in the early stages of software development. Static application security testing (SAST) is a process that gives teams a comprehensive toolset for identifying code vulnerabilities in today’s complex, third-party-software-reliant applications and libraries during development, before deployment. Kiuwan is a high-performance SAST platform that gives developers confidence that their code will be free of preventable security vulnerabilities from the outset. SpringShell caught many security teams off guard, especially the majority that rely on open-source libraries and the popular Java Spring development platform. Exploring how a vulnerability like this can appear overnight helps explain why steering away from the “putting out fires” paradigm toward baked-in code security is the most viable way to help stem the growing epidemic of cyberattacks. What set the SpringShell vulnerability apart was its potential impact on such a large segment of enterprise-level Java applications. To exploit the weakness, attackers must find a target running a combination of Spring, Java 9 or greater, and Tomcat.
Finding that configuration isn’t too difficult, considering Spring is the number one Java framework, and Java 9 has had time to broaden its reach in the five years since its release. In addition, Tomcat, the popular web server for Java applications that is embedded into Spring Boot, increases the number of potential targets available to cybercriminals. Spring relies on a deny list to limit access to ClassLoader. However, beginning with Java 9, developers added modules that offered alternative ways to access ClassLoader, which gives users the potential ability to write data to internal objects. Investigators conducting PoC testing discovered that this vulnerability in Spring allows remote code execution (RCE) by configuring setters and attributes through ClassLoader access. The method involves finding a POST endpoint in Spring and reconfiguring Tomcat to write a JSP or other executable file type containing malicious code to the Tomcat server. The fix for SpringShell is a straightforward action that organizations should have implemented immediately upon disclosure of the vulnerability: upgrade to Spring Framework version 5.3.18 or 5.2.20 (or later). It is safe to assume that a large majority of Spring Web projects running Java 9 and greater were affected by SpringShell, as were many other users of the popular Spring framework. Investigators caution that Tomcat is merely a gadget that hackers can use to initiate the exploitation. Malevolent actors could implement means beyond Tomcat to exploit security holes in Spring. As often occurs, the known vulnerability only opens the door for many possible variations. Resourceful hackers can exploit a known security hole to try out a litany of attacks based on a common weakness. Once researchers or attackers open Pandora’s box by revealing a security weakness, it is too late to mitigate these risks.
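Checking against those fixed versions can be done mechanically. The sketch below is a deliberately naive illustration, not how a real dependency scanner works (real tools also handle version qualifiers such as "-RELEASE", and branches like 4.x that never received a patch are treated as below the fixed releases here).

```python
# Simplified check against the fixed Spring Framework releases named above:
# 5.3.18 for the 5.3 line, 5.2.20 for earlier lines. Version parsing is
# deliberately naive; this is an illustration, not a vulnerability scanner.

def is_vulnerable(version: str) -> bool:
    v = tuple(int(part) for part in version.split("."))
    if v >= (5, 3):
        return v < (5, 3, 18)      # 5.3.x fixed in 5.3.18
    return v < (5, 2, 20)          # everything older fixed in 5.2.20

for ver in ["5.3.17", "5.3.18", "5.2.19", "5.2.20"]:
    print(ver, "vulnerable" if is_vulnerable(ver) else "fixed")
```

Tuple comparison keeps the logic short: `(5, 3, 17) < (5, 3, 18)` compares element by element, which is exactly the ordering semantic versions need once the parts are integers.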
The way to minimize the exponentially increasing attacks that will likely occur is to take a different approach to security: DevSecOps. Development Security Operations promotes code security to prominence in the software supply chain. This shift in mindset is necessary to combat the growing cybersecurity problem. The only workable system starts at the inception of the application and continues as part of the process through the end of the software development lifecycle. The potential ability to write an executable file with user-provided data is hazardous for organizations charged with safeguarding financial transactions and customer data security. It is impossible to fathom all of the imaginable exploitations this access affords to cybercriminals, and merely applying the patch will, in all likelihood, be inadequate protection. Data crime is the interception and theft of private or confidential data, including identity theft. With many financial, banking, and other privacy-centric institutions relying on the Spring framework for their Java application development, the opportunity for criminals is abundant. Simply put, when thieves can access restricted areas and write cloaked executable files, it will invariably lead to sensitive data breaches. A lesser-known exploit for SpringShell is the recruitment of bot armies. Hackers can access servers and enslave them as participants in ill-intended activities like crypto mining and DDoS attack hordes. Security vulnerabilities are a continually lurking nightmare for many institutions. In addition, the drive for faster production and continuous improvement puts pressure on code development teams to rely heavily on open-source and third-party software libraries and functions. Manually checking up to millions of lines of code for adverse software interactions and known security issues is logistically impossible.
However, treating code security as an afterthought to the software supply chain is a mistake many organizations are no longer willing to make. SAST helps development teams identify security issues before they become an inherent part of the code. Kiuwan SAST provides a rigorous approach to detecting software vulnerabilities with multiple language support. As a result, DevOps teams using Kiuwan enjoy the ability to seamlessly manage security within their code as they are writing it. This ability to continually scan and revise code reduces the weak spots for cybercriminals to target and results in more robust security at every application level. Kiuwan is a global company providing a 360° application security platform to help guide code development teams in producing the highest quality, most secure applications possible. With today’s threat landscape, it becomes increasingly difficult to argue that working within the old system of patching and putting out fires is tenable. Schedule an expert demo today with Kiuwan and discover how SAST can benefit the application development process, mitigate problems, and ultimately lead to a smoother development time, resulting in a more secure application.
Routing protocols are used to automatically and dynamically exchange routing information between routers. There are several routing protocols to choose from, each with its own pros and cons, as each routing protocol is designed to be well suited to a particular network implementation scenario. Two of the most popular routing protocols used today are Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP). These are very different in their design, as we shall see. We’ll start with a summarized version of the differences and then explain each protocol separately in more depth. OSPF is an interior gateway protocol (IGP) that routes packets inside a single Autonomous System (AS). Unlike distance-vector IGPs such as RIP, OSPF is a link-state routing protocol. In other words, it relies on link-state information to calculate route paths and make routing decisions. After the protocol starts, each router running OSPF sends link-state advertisements (LSAs) throughout the AS or area, containing information about its connected interfaces and routing metrics. When there is a change on any of the routers, the change is propagated to all the routers in the area. Such an update triggers a rerun of the shortest-path-first algorithm. OSPF splits each AS into smaller sections called areas. All the routers in the same area have identical LSA databases. They also have summarized information about the other areas. There are multiple types of OSPF areas, which will be described later in this article. BGP is a routing protocol primarily used to perform inter-domain routing, and is considered an Exterior Gateway Protocol (EGP). However, BGP can also be used to advertise networks within an AS, and when configured to do so, it functions in a similar manner to IGPs. BGP is used to exchange routing information among routers in the same AS or in different ASs. An AS is a set of routers under a single administrative authority. An AS path is the route to a destination.
It is also a list of the ASs that the route passes through to reach a particular router. Each route has additional information attached in the form of path attributes. The path attributes are used in routing policies to influence how the router routes the traffic. Summary of differences between OSPF and BGP
There are a number of differences between OSPF and BGP. To start with, OSPF is an interior gateway protocol. Therefore, it is confined to a single routing domain (intra-domain). On the other hand, BGP is primarily designed to route between routing domains (inter-domain). OSPF can be successfully deployed in networks with several hundred routers in a single flat area. However, this is in direct relation to the resources available on the routers (read below about the resource requirements). Conversely, BGP is the routing protocol that runs the Internet. Basic configuration of OSPF (say, a single area with no fancy features deployed) is relatively easy. Even the most basic BGP configuration requires more effort than OSPF’s basic configuration (and some advanced routing knowledge). While both OSPF and BGP can get very complex, BGP is far more difficult to use due to the numerous available features that make it suitable for many situations and corner cases. For example, OSPF primarily focuses on the metric to determine the best route. BGP, on the other hand, uses a series of attributes that can be adjusted very granularly to modify routing behavior in multiple ways. OSPF must be deployed hierarchically (we will discuss this in the next section), whereas BGP does not require any hierarchy to scale. In terms of convergence, OSPF reacts faster to network changes than BGP. This makes sense, given that BGP is designed for vast networks where changes happen more often statistically.
You would not want routers on large networks to be constantly recalculating routes. As for resource requirements, because OSPF requires constant calculation, it is considered CPU- and memory-intensive. In contrast, BGP does not react as quickly but becomes CPU- and memory-intensive as the size of the routing table increases. Therefore, routers holding the Internet routing table require powerful CPUs and lots of memory. Regarding the metric used to calculate the best route, OSPF uses cost, derived from interface bandwidth plus the cost advertised by the other routers when the LSA is sent. BGP can use any BGP attributes to select the best route. The inner workings of OSPF and BGP
This section describes the two protocols in more detail. We will focus on the most common options that we will see configured in the next section of the article. As mentioned in the first section, OSPF is a link-state routing protocol. OSPF routers exchange link-state advertisements that describe the networks they know. From these advertisements, each OSPF router builds within its memory a full topology of the network. Before doing this, though, routers need to establish an adjacency with neighboring OSPF routers. Before establishing an adjacency, the two routers must become neighbors. The routers find each other using Hello packets. The following information from the Hello packets sent by the two routers must match:
- They should be in the same area.
- The router ID must be unique.
- The subnet must be the same.
- The Hello and dead timers must be the same.
- The stub flag must match.
- The authentication must match.
It’s important to know that not all neighbor routers become adjacent. Let’s consider the scenario in Figure 1 (a broadcast or a nonbroadcast multiaccess network). The OSPF Hello protocol will elect a designated router (DR) for the network in this particular example. For redundancy purposes, a backup designated router (BDR) will be elected.
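OSPF elects the DR and BDR by highest interface priority, with highest router ID as the tie-breaker. As an illustrative model only (not an OSPF implementation, and omitting nuances such as stickiness, where an existing DR is not preempted):

```python
# Sketch of the DR/BDR election rules: highest priority wins, highest
# router ID breaks ties. Routers with priority 0 are never elected.

def elect_dr_bdr(routers):
    """routers: list of (name, priority, router_id) tuples."""
    def rid_key(router_id):
        # Compare router IDs numerically, octet by octet, not as strings.
        return tuple(int(octet) for octet in router_id.split("."))

    eligible = [r for r in routers if r[1] > 0]
    ranked = sorted(eligible, key=lambda r: (r[1], rid_key(r[2])), reverse=True)
    return ranked[0][0], ranked[1][0]   # (DR, BDR)

routers = [
    ("R1", 1, "1.1.1.1"),
    ("R2", 1, "2.2.2.2"),
    ("R3", 1, "3.3.3.3"),   # highest router ID with equal priorities
]
print(elect_dr_bdr(routers))   # -> ('R3', 'R2')
```

With all priorities left at their defaults, the router ID alone decides the outcome, which is why manually setting the router ID (or the loopback address) is the usual way to control the election.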
Every other router on the segment will become a DROther. This means that a DROther becomes adjacent only with the DR and BDR, and every DROther router will receive LSAs from the DR (or the BDR, in case the DR fails). The purpose of this mechanism is to reduce the amount of routing information traffic exchanged. There are two rules to elect the DR and BDR:
- Priority: Highest priority is preferred.
- Router ID: Highest router ID is preferred.
The router ID is derived using the following options:
- Manually set.
- The highest IP address from a loopback interface.
- The highest IP address from an up physical interface.
Should every OSPF configuration be left at default and all the routers be configured at exactly the same moment, in the above case, R3 will become the DR, R2 the BDR, and R1 the DROther (Figure 2). In this topology, the DROthers send their updates to the multicast IP address 224.0.0.6, on which only the DR and BDR are listening. The DR sends the update to 224.0.0.5, to which all the routers on the segment are listening. It was mentioned that OSPF was designed to be hierarchical to scale, which is achieved by using OSPF areas. Based on the types of LSAs that can be present in an area, OSPF defines several area types (such as standard, stub, totally stubby, and not-so-stubby areas). It is worthwhile to discuss what each LSA type is:
- LSA Type 1 - Router LSA: Generated by every router and describes the router links.
- LSA Type 2 - Network LSA: Generated by the DR and describes the routers connected to the segment.
- LSA Type 3 - Network Summary LSA: Generated by the Area Border Router and sent to another area to represent the destinations outside that area.
- LSA Type 4 - ASBR Summary: Used to describe the router that advertises external routes.
- LSA Type 5 - AS External LSA: Represents the routes external to the AS.
- LSA Type 7 - NSSA External: Represents the external routes from an NSSA area that will be converted to Type 5 LSAs.
Now let’s discuss the router types in OSPF, as shown in Figure 3.
In this particular case, area 0 is the backbone area, the rest of the areas are non-backbone areas, and these are the roles of the routers:
- R1 and R2: Internal routers, because all their interfaces are in the same area.
- R3 and R4: Area Border Routers, because their interfaces are in two different areas. They are also called backbone routers because they have at least one interface in Area 0.
- R5: Autonomous System Border Router, which redistributes external routes (BGP) into OSPF.
As already mentioned, BGP is used to connect ASes or to advertise network reachability information inside an AS. When BGP is configured between routers in the same AS, it is called Internal BGP. When it is configured between routers in different ASes, it is called External BGP (Figure 4). In this example, the BGP session between R1 and R2 is internal, and the session between R2 and R3 is external. Network reachability information is sent through BGP Update messages, which allow routes to be advertised and withdrawn. One of the most critical fields in the Update message is “Path Attributes,” which defines what attributes are attached to the routes. There are four categories of BGP attributes:
- Well-Known Mandatory: These are recognized by all BGP speakers and must be present in all Update messages.
- Well-Known Discretionary: These are recognized by all BGP speakers, but their presence in Update messages is optional.
- Optional Transitive: These may or may not be recognized by BGP speakers, but even so, they are passed on to other BGP peers.
- Optional Non-Transitive: These might be recognized by BGP speakers, but they are not passed on to other BGP peers.
While OSPF uses cost as a metric to determine the best path, BGP uses BGP attributes to determine the best path. Because it is not uncommon to have multiple paths to the same destination, BGP has a best-path selection algorithm to eventually choose the best path (or paths, if BGP multipath is configured).
One thing to remember is that a route will be considered a candidate for the best path only if the next hop to reach that route is reachable. Consider Figure 5. By default, if no additional actions are taken on R2, R1 will not accept the 10.10.10.0/24 route because, in the BGP Update message, the next hop for the route is R3, and R1 does not know how to reach it. There are multiple ways to solve the problem above:
- R2 can set itself as the next hop (next-hop-self) for the BGP update sent to R1, so that R1 has the next-hop IP address within its routing table.
- R2 can advertise the subnet between R2 and R3 in the IGP of AS 1. This, however, is less desirable, since inter-AS communication should be restricted to BGP only; otherwise, unpredictable routing may occur. It is best practice to keep IGPs operating within the boundaries of their ASes.
This section shows how to configure basic OSPF and BGP on Cisco routers. For OSPF, we will use the following diagram of a multi-area OSPF network. In this scenario, R3 is the ABR because it has interfaces in both Area 0 and Area 1. Area 1 is a normal non-backbone area (not a stub, totally stubby, or NSSA area). Following the interface configuration, this is what is required on R1 to configure it as an internal backbone router (the same configuration is done for R2, with the difference of IP addressing). The configuration is similar for R4, except that the interfaces are added in Area 1. R3 has a different configuration. Observe how its interfaces are part of different areas. Because all three routers in Area 0 were configured simultaneously and none of the OSPF parameters were changed, R3 became the DR, R2 became the BDR, and R1 became a DROther. As mentioned before, R3’s router ID was the highest because it has the highest loopback IP address of the three routers in Area 0. If you checked on R3, R1 would be in the state of DROther.
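As a hypothetical illustration of what such listings look like (the process ID, router IDs, and addresses below are invented, not taken from the article's lab), a basic IOS multi-area OSPF configuration for an internal backbone router and for the ABR might be:

```
! R1 - internal backbone router: all interfaces in Area 0
router ospf 1
 router-id 1.1.1.1
 network 10.0.12.0 0.0.0.255 area 0

! R3 - Area Border Router: interfaces in both Area 0 and Area 1
router ospf 1
 router-id 3.3.3.3
 network 10.0.13.0 0.0.0.255 area 0
 network 10.1.34.0 0.0.0.255 area 1
```

The `network` statements use wildcard masks to decide which interfaces participate in OSPF and which area each one belongs to; it is the area assignments on R3 that make it an ABR.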
Checking the routing table of R1, you would see routes from the same area (Area 0) and from Area 1 (with the code IA). This would be a basic configuration of a multi-area OSPF network. This kind of deployment will suffice for most networks. For BGP, we will use the setup in Figure 7. R1 and R2 are in AS 1 (which means they will run internal BGP between them), and R3 is in AS 2, so there will be an external BGP session between R2 and R3. R3 advertises the route for 10.10.10.1/32. The configuration required for R1 is the one below. The configuration for R2 is the one below. And the configuration for R3. There is an additional command required to advertise the network 10.10.10.1/32, as noted below. At this point, R2 should have the network 10.10.10.1/32 in its routing table, but R1 will not have it because the next hop of the route (R3’s address on the R2-R3 link) is not reachable by R1. To solve this, we can configure R2 to set itself as the next hop for the routes it advertises to R1. This will allow R1 to install the route in its routing table. At first sight, the BGP configuration is simple, but this is as basic as it can get. In real life, one would need to play around with BGP attribute manipulation and configure additional features. Some of the attributes that can be manipulated to affect BGP routing behavior include Local Preference, AS_PATH length, and MED, to name a few. The BGP configuration can get very complex, and vendors constantly add new complex features to support more and more use cases.
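The attribute evaluation just mentioned can be sketched as an ordered comparison. This is a simplified model covering only the next-hop check, Local Preference, AS_PATH length, and MED; real BGP implementations evaluate many more steps (weight, origin, eBGP over iBGP, router ID, and so on), and all values here are invented.

```python
# Simplified model of BGP best-path selection: routes with an unreachable
# next hop are not candidates; among the rest, prefer highest LOCAL_PREF,
# then shortest AS_PATH, then lowest MED.

def best_path(candidates):
    eligible = [c for c in candidates if c["next_hop_reachable"]]
    # min() with a composite key: negate local_pref so "highest" sorts first.
    return min(eligible,
               key=lambda c: (-c["local_pref"], len(c["as_path"]), c["med"]))

candidates = [
    {"via": "R2", "local_pref": 100, "as_path": [2],    "med": 0,
     "next_hop_reachable": True},
    {"via": "R4", "local_pref": 100, "as_path": [3, 2], "med": 0,
     "next_hop_reachable": True},
    {"via": "R5", "local_pref": 200, "as_path": [5, 2], "med": 0,
     "next_hop_reachable": False},   # skipped: next hop unreachable
]
print(best_path(candidates)["via"])   # -> R2
```

Note how the R5 path, despite its higher Local Preference, is never considered, which mirrors the next-hop reachability rule discussed with Figure 5.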
OSPF
- Pros:
  - Fast convergence
  - Open standard
  - Scalable due to hierarchical area implementation
- Cons:
  - Uses large amounts of system resources (CPU, memory) to run the algorithm and maintain full network topology information
  - Multiple area types and link-state advertisement types can become extensively complex
BGP
- Pros:
  - Extremely scalable
  - Low resource usage even for large BGP tables
  - Extremely granular routing behavior adjustments
- Cons:
  - Slow to converge
  - Complexity can increase if many BGP attributes are tweaked
BGP and OSPF are complex protocols. Their configuration can sometimes get very difficult, and it is critical to understand how the protocols work and what their core components are. Failing to do so would not only prevent you from having the right configuration, but also put you at risk of not being able to properly troubleshoot any issues with these protocols. It is also worth checking the latest vendor documentation regarding features and their configuration, because the defaults of each feature might change from release to release.
What is sustainability? It is a concept that means we should be responsible for the health and well-being of our planet. It is a philosophy that we are all part of a “world community” and that we all contribute to the earth’s health. Sustainable means that we can continue to live without destroying the planet. In the modern world, it normally refers to the ability of human society and the biosphere to co-exist. The goal of sustainable living is to maintain a quality of life that will ensure that people have access to enough food, water, shelter and other essentials. There are many ways to get started with a sustainable lifestyle. We need to make a conscious effort to become more aware of where our food comes from, where it goes after it is used, how it gets transported to our table, and how much waste is produced. This type of awareness will help us make informed decisions about our food and how we prepare it. We can also become more aware of the amount of water that we are using. Many people are aware that we are taking more than enough water from our rivers and streams, but many are unaware of the many other sources involved. Some examples include the treatment of wastewater, which includes the removal of sediments, solids and chemicals such as pesticides. Also, our wastewater contains so many contaminants that it must be carefully treated before it is disposed of. Sewage treatment plants treat sewage by breaking down the chemical compounds that have been found in it. Some of these compounds can be harmful to human health. Water conservation is another way to become more aware of the world around us. Water conservation means that we are making an effort to conserve water. This includes reducing the amount of water that is wasted in treating and delivering water, reducing the amount of water that is used in heating and cooling, and so on.
We need to be aware of the fact that if the Earth’s natural resources are not used responsibly, it will become necessary to tap ever deeper into those resources to meet demand. Another important way to start living a sustainable lifestyle is to reduce the amount of greenhouse gases emitted into the atmosphere because of our use of energy; energy production is also a major contributor to the world’s water crisis. Carbon dioxide is one of the main causes of climate change and global warming. Cutting back on energy consumption, forgoing wasteful energy production and switching to renewable energy sources can all help to cut carbon emissions. These steps will have a great effect on the environment as a whole and on the quality of our lives. Another way to understand sustainability is to become more aware of the sources of energy that are available to us. Renewable sources such as wind power, solar energy, geothermal and hydroelectricity are all beneficial to the earth’s health. These types of energy are becoming increasingly accessible. As we can see, sustainability is becoming a common theme among those who are committed to living a healthy lifestyle. This is because a sustainable lifestyle means that we are being responsible for the health of our planet. By following sustainability, we are helping to sustain the earth’s natural resources and ensure that it continues to be a safe place for future generations. In conclusion, people who have chosen to live a more sustainable lifestyle are helping to protect the planet and provide it with the best possible future. In fact, if everyone makes a commitment to living a more sustainable lifestyle, they will help ensure that the Earth’s resources are used efficiently for a longer period of time. If you are considering a more sustainable lifestyle, you may be wondering what sustainable living really involves.
In fact, sustainability means different things to different people, but all agree that it involves living a healthier, more eco-friendly life: reducing your impact on the environment and being more environmentally conscious. There are many resources available, many of them online, for those who are planning to live more sustainably; they can help you understand the importance of a healthy lifestyle and the ways you can begin living more sustainably today.

Wanda Rich has been the Editor-in-Chief of Global Banking & Finance Review since 2011, playing a pivotal role in shaping the publication’s content and direction. Under her leadership, the magazine has expanded its global reach and established itself as a trusted source of information and analysis across various financial sectors. She is known for conducting exclusive interviews with industry leaders and oversees the Global Banking & Finance Awards, which recognize innovation and leadership in finance. In addition to Global Banking & Finance Review, Wanda also serves as editor for numerous other platforms, including Asset Digest, Biz Dispatch, Blockchain Tribune, Business Express, Brands Journal, Companies Digest, Economy Standard, Entrepreneur Tribune, Finance Digest, Fintech Herald, Global Islamic Finance Magazine, International Releases, Online World News, Luxury Adviser, Palmbay Herald, Startup Observer, Technology Dispatch, Trading Herald, and Wealth Tribune.
What is this report about, and what did we learn? Additionally we find that various operating system protections work to limit the amount of data the WeChat application may gather, and encourage users to be cautious with sensitive permissions like location. Many new security features in newer Android versions seek to enforce permission boundaries and limit the types of identifiers available to the application.

What is WeChat, and how is it used? WeChat is the most popular messaging and social media platform in China and the third most popular in the world, with over 1.2 billion monthly active users. According to some market research, network traffic from WeChat made up 34% of Chinese mobile traffic in 2018. Many people inside and outside China use WeChat out of necessity: besides individuals in China, diaspora populations, family members, journalists, international activists, diplomats, people who do business in China, and just about anyone with a relationship in China rely on it.

What is Weixin, and what is its relation to WeChat? According to the WeChat Terms of Service, if the user registered using a Chinese phone number (country code +86), they are considered a “Weixin user”. Tencent appears to characterize Weixin and WeChat as two “services” provided within the same “app”, based on the language of both WeChat’s and Weixin’s policies. The two “services” are operated by two separate subsidiaries (WeChat International Pte. Ltd. in Singapore for WeChat, and Shenzhen Tencent Computer Systems Company Limited for Weixin). In the app, the boundary between these two “services” is not clear: there are features operated by Weixin available to WeChat users, from our observation both services mostly use the same set of servers, and users of both services can communicate with each other directly.

What are Mini Programs? Mini Programs are lightweight apps that can be downloaded and launched within the WeChat app. They can also sync and link with users’ WeChat accounts.
The breadth and variety of Mini Programs is essentially the same as in any other app ecosystem, like the Google Play Store or the Apple App Store. Mini Programs cover e-commerce, health, public services, gaming, and any other service an app may possibly be used for. This also means that many popular Mini Programs manage sensitive data: some handle health data or government services, or perform financial transactions on behalf of the user.

How did you conduct this study? To set the stage for this work, we first developed tools to study WeChat network requests. We then used these tools to identify and analyze data flowing from the WeChat client to the server during the usage of various WeChat features.

What type of data is sent to WeChat servers during Mini Program execution? The data collection observed on Mini Programs is likely in place to enable the application monitoring and analytics features provided by WeChat, namely “We分析” or “WeAnalyze”. However, from our analysis, we find that all Mini Programs are automatically enrolled into the WeAnalyze program and its data collection, and there is no reasonable way to opt out. To put this data collection into perspective, it would be an equivalent privacy violation if the Google Play Store automatically injected Google Analytics tracking scripts into all applications available on the platform.

What other types of data are sent to WeChat servers? Generally, WeChat collects device and network metadata on top of whatever other data it needs to implement the app’s functionality. If the location permission is granted to WeChat, WeChat enables the “People Nearby” feature, which collects your location while you are using the application. Certain features of WeChat send more usage and tracking data than others: using Mini Programs or Channels, for instance, collects click/page data and tracks your usage of the app. For a more comprehensive description, check out the full report.

Where are WeChat servers located?
We observed WeChat reporting to servers that are nominally located in Singapore and Hong Kong. The application also has the capability to contact servers in mainland China. Which servers the app uses may be determined by your IP address location if you are logged out, or by your registered phone number if you are logged in.

What happens to the data after WeChat/Tencent collects it?

What are the limitations of this work? This report only looks at the behavior of a recent version of the WeChat mobile Android app. Even though we look at what types of data are sent to WeChat servers, we cannot always definitively say what WeChat servers are doing with that data. Furthermore, we only investigated the application using a U.S. phone number, which limits the scope of our results to the app’s behavior for users who do not have mainland China accounts. We also could not test certain features, such as WeChat Pay. Finally, WeChat is a very large app with many features; although we did our best to be comprehensive, there may be blind spots in our study in which we failed to induce the application conditions necessary for the transmission of certain data.

What are some recommendations for Tencent? WeChat should allow users to opt out of extraneous tracking during usage of “Weixin” services. In particular, WeChat should remove forced enrollment in Mini Program analysis and tracking features and switch to an opt-in model. Currently, both developers and users are automatically enrolled into the WeAnalyze (We分析) data collection program with little notification, and there is no way for either developers or users to opt out. For more recommendations, you can read the WeChat recommendations section of our report.

What are some recommendations for users? For general WeChat users, we can provide a few recommendations:
- Avoid features delineated as “Weixin services” if possible.
Many core “Weixin” services (such as Search and Channels) perform more tracking than core “WeChat” services, and by using “Weixin” services your data is shared with an entity operating in Shenzhen, China.
- Use stricter permissions. In modern versions of Android, it is possible to restrict certain permissions (like location access) to only while the application is open on screen, or to deny them outright.
- Apply regular security and operating system updates. Many new security features in modern versions of Android work to enforce permission boundaries and limit the types of identifiers available to applications. We recommend updating regularly to receive additional security features down the line.

If I am a high-risk user, how can I protect myself? We caution that no amount of adjustment can make the app completely “safe” for certain high-risk threat models. We can recommend alternative encrypted or anonymous messaging systems, but we also recognize that most WeChat users are on WeChat out of necessity. For high-risk users, we recommend talking to a security professional about your particular concerns to see what you can do to limit, manage, or reduce your exposure to risk while using the app.
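The "stricter permissions" advice can also be applied from a computer over adb. The sketch below only prints the relevant `pm revoke` commands rather than executing them, so you can review them first; it assumes the stock WeChat package name (`com.tencent.mm`) and a device with USB debugging enabled.

```shell
# Print (rather than run) the adb commands that would revoke WeChat's
# location permissions on a connected Android device. Review the output,
# then run the lines by hand or pipe them to sh.
print_revoke_cmds() {
  pkg="com.tencent.mm"  # assumed: the stock WeChat/Weixin package name
  for perm in android.permission.ACCESS_FINE_LOCATION \
              android.permission.ACCESS_COARSE_LOCATION; do
    echo "adb shell pm revoke $pkg $perm"
  done
}

print_revoke_cmds
```

For most users the safer route is the settings UI (Settings → Apps → WeChat → Permissions), which achieves the same result without a computer.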
Nanoparticle technology stands on the cusp of a revolution thanks to an innovative assembly technique developed by researchers at the Gwangju Institute of Science and Technology (GIST) in South Korea. Device fabrication today depends on the precise and rapid integration of nanomaterials onto wafer surfaces, and the new “one-shot” self-limiting assembly offers a transformative solution. The method is especially valuable for the uniform application of sub-100 nm particles, aligning with industrial demands for efficiency and scalability, and it promises to elevate the functional capacities of electronic and photonic devices beyond what traditional techniques allow.

Understanding the “One-Shot” Self-Limiting Assembly
The GIST researchers have taken a proton-assisted electrostatic assembly technique to new heights with a nature-inspired process. Reflecting on the underwater adhesion strategies of mussels, they have created a system that guides the rapid and consistent formation of nanoparticle monolayers. This not only simplifies the deposition process but also substantially reduces the time consumed by conventional methods: seconds replace hours without compromising the uniformity and quality of the nanoparticle distribution, effectively bypassing former limitations and establishing a new standard for nanoassembly.

Breakthrough in Rapid and Uniform Deposition
The leap from lab to industry often suffers from a mismatch in scale and speed, but GIST’s process narrows this gap significantly. Coating a 2-inch wafer now takes only 10 seconds, slashing the timescale and vastly improving throughput. This breakthrough is an attractive proposition for industries striving to boost production while maintaining uniformity and performance.
It’s not just a step but a leap forward in achieving high-speed, high-fidelity assembly of nanoparticles, catalyzing a potential paradigm shift across the many technological processes in which these tiny actors play a starring role.

Harnessing Electrostatic Charge for Efficiency
The secret to this efficiency lies in electrostatic charge. GIST’s refined method uses excess protons to remove hydroxyl groups from the wafer surface, creating an electrostatic attraction that draws the nanoparticles into a single, homogeneous layer. The assembled monolayer naturally repels further deposition, which prevents multi-layer buildup and secures a self-limiting pattern. This precise control mechanism ensures dependable product quality, opening a promising avenue for consistently reproducible nanoscale assemblies.

Impacts on Device Fabrication and Optical Devices
The improvements this technique brings to wafer fabrication and optical devices are hard to overstate. The self-limiting and self-aligning properties of the nanoparticle monolayer represent a remarkable step forward for the industry, and the potential to advance full-color reflective metasurfaces through plasmonic engineering opens a wide range of opportunities. From full-color imaging to optical encryption devices, the scope of applications is expansive, reinforcing the technique’s significance for the future of optical technology.

The Future of Nanotechnology in Everyday Life
Researchers at the Gwangju Institute of Science and Technology in South Korea have pioneered a “one-shot” self-limiting assembly technique for nanoparticles that promises to revolutionize device fabrication. Modern electronics and photonics hinge on integrating nanomaterials onto surfaces with precision. This method excels at applying sub-100 nm particles uniformly and meets the industry’s call for efficiency and scalability.
Traditional approaches are quickly becoming obsolete as this new technique revolutionizes the field, optimizing the functionality of devices. As industries continuously chase smaller and more efficient technologies, the GIST team’s innovation is set to become a cornerstone in the manufacturing of advanced devices. Their method not only aligns with the current industrial demand but also provides a scalable approach, ensuring its potential to reshape how we engineer the electronics and photonics of tomorrow.
Prototype ‘Quantum Lock’ May Foreshadow Super-Secure Applications (IEEE Spectrum)

A facial-recognition lock developed by researchers at Stevens Institute of Technology works by simultaneously creating twin particles of energy that remain correlated with one another across distances. “These quantum properties are going to change the internet,” predicts Huang, who directs the university’s Center for Quantum Science & Engineering and works on the quantum lock project with graduate students including Lac Nguyen and Jeeva Ramanathan. “One big way it will do that is in the enabling of security applications like this one, except on much larger scales.”

When people stand in front of the camera attached to the lock, the Stevens setup captures information about each person’s face and sends it over the internet to a server housed in a different part of the university. There, facial-recognition computations and matches are done using open-source software. The data exchanged between the two parties is secured by fundamental laws of physics.

As facial photos are taken by the video camera, lasers in Huang’s physics lab create twin photons — tiny, power-packed particles of energy — by splitting beams of light with special crystals. The twin photons are then separated: one photon is kept in the lab while the other is sent through fiber-optic lines back to the library. Complex, secret “keys” are instantly generated as the photons are detected at each site; this process ensures that the secure information meets up with a trusted partner at the other end of the transaction. The keys serve as what’s known in cryptography as a “one-time pad”: a temporary, uncrackable code between the parties that encrypts the images and communications, preventing any hacker from intercepting them.
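The one-time-pad idea the article mentions can be sketched in a few lines of Python. This illustrates only the classical cryptographic concept (XOR with a truly random, never-reused key), not the Stevens system's actual implementation:

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    # The key must be random, at least as long as the message, and never reused.
    assert len(key) >= len(data), "one-time pad key must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

# Decryption is the same XOR applied again with the same key.
message = b"open the door"
key = secrets.token_bytes(len(message))  # stand-in for a quantum-derived key
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message
```

The security of the scheme rests entirely on the key being secret and used once, which is why distributing such keys (here, via detected photon pairs) is the hard part.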
“The Enabling Middle Mile Broadband Infrastructure Program provides funding for this vital part of our nation's high-speed network. With $1 billion in funding, the program will reduce the cost of bringing high-speed internet to unserved and underserved communities.” — broadbandusa.ntia.doc.gov

The “Middle Mile”: The program defines middle mile infrastructure as broadband infrastructure which “does not connect directly to an end-user location” and can include leased dark fiber, interoffice transport, backhaul, carrier-neutral exchange facilities, undersea cables, and transport connectivity to data centers. It also includes “wired or private wireless broadband infrastructure” such as microwave capacity and radio tower access.

Why has the middle mile been such an issue across much of rural America? There are two primary reasons:
- Large, expansive coverage areas – spanning transport connectivity over extremely large areas to reach remote communities can be cost-prohibitive and exceedingly time-consuming. The remote communities themselves, with low population density, may not have enough subscribers to provide a return on the infrastructure investment. This is especially true of fiber deployments: laying fiber and FTTH is very expensive and can take months, if not years, to complete. Wireless has already eased many of these issues and enabled operators to expand their coverage areas, but as the U.S. coverage maps show, a very high number of households remain unserved or underserved.
- Challenging terrain/geography – much of the rural U.S. consists of mountains, hills, lakes, rivers, and sand, across which it is nearly impossible to dig trenches and lay cables.

Therefore, a specific funding program for the middle mile has been created. Its goal is to extend the Internet backbone across challenging areas and reach closer to underserved communities, where a last mile solution that can finally deliver broadband to the home can be implemented.
Additionally, the program promotes broadband resiliency through the creation of alternative network connection paths designed to prevent single points of failure on a broadband network. While Ceragon has solutions for all kinds of rural connectivity, the middle mile is where we especially shine. Our millimeter-wave and microwave point-to-point solutions provide superior, multi-gigabit backbone connectivity. Our wireless capabilities allow expansive, long-range connections that can augment fiber networks and that can be deployed quickly, affordably, and easily – even in challenging terrain.

It is important to point out that, unlike the BEAD program, which is managed by individual states, the Middle Mile Program will be administered directly by NTIA. All applications will be submitted directly to, evaluated by, and awarded by NTIA. Many types of entities are eligible to apply for this program:
- States or political subdivisions of a State
- Tribal governments
- Technology companies
- Electric utilities and cooperatives
- Public utility districts
- Telecommunications companies and cooperatives
- Nonprofit foundations, corporations, institutions, and associations
- Regional planning councils
- Native entities
- Economic development authorities

Notably, NTIA will give higher scores to partnerships of two or more of the entities described above.
Evaluation of applications: as NTIA considers middle mile applications, several factors will help it evaluate and award funds, including whether the applicant has:
- Adopted fiscally sustainable middle mile strategies
- Committed to offering terrestrial and/or wireless last mile broadband providers connectivity to the middle mile infrastructure
- Identified specific last mile broadband providers that have expressed interest in the middle mile connectivity, or demonstrated business plans or funding sources to connect to the middle mile
- Identified supplemental investments or in-kind support (e.g. waived permitting fees) that will help accelerate the completion of the project
- Demonstrated that the middle mile infrastructure will benefit the national security interests of the US and the Department of Defense

The application window opened on June 21, 2022 and will remain open until September 30, 2022 – a relatively small window. If you are interested in accessing these funds and submitting an application, don’t hesitate to reach out to Ceragon for help at any step – the application, the middle mile network plan, the deployment, the ongoing management and optimization of the network… whatever you need, we are here to support you.

The Act requires the agency to prioritize projects that: (i) leverage existing rights-of-way, assets, and infrastructure; (ii) enable the connection of unserved anchor institutions, including Tribal anchor institutions; (iii) facilitate the development of carrier-neutral interconnection facilities; and (iv) improve redundancy and resilience while reducing regulatory and permitting barriers.
NTIA will also prioritize any project in which the applicant has done two or more of the following:
- Adopted fiscally sustainable middle mile strategies
- Committed to offering non-discriminatory interconnection to wired and wireless last mile broadband providers and any other party making a bona fide request
- Identified in the application specific terrestrial and wireless last mile broadband providers that have expressed written interest in interconnecting and demonstrated sustainable business plans or adequate funding
- Identified supplemental investments or in-kind support (such as waived franchise or permitting fees) that will accelerate the completion of the planned project
- Demonstrated that the middle mile infrastructure will benefit national security interests of the United States and the Department of Defense

Connections to Anchor Institutions
As covered previously in our blog post introducing the IIJA funding, connecting “anchor institutions” such as schools, libraries, and hospitals will help an application, and may be required for approval. Entities that receive middle mile grants must ensure the network can provide 1 Gbps service to anchor institutions and must offer direct interconnection to anchor institutions located within 1,000 feet of the middle mile facilities.

Interconnection and Nondiscrimination
An interesting point made by NTIA for Middle Mile grant recipients is that their backbone networks must “offer interconnection in perpetuity, where technically feasible, without exceeding current or reasonably anticipated capacity limitations, on reasonable rates and terms to be negotiated with requesting parties.” The nature of the interconnection must include the ability to connect to the public Internet, as well as physical interconnection for the exchange of traffic.

Additional Merit Points
NTIA will calculate and weigh scores of all applications.
The points below are a few examples that will earn additional merit points for applicants:
- Reduce end-user prices
- Reduce latency in remote or insular areas
- Benefit unserved areas or Tribal Lands
- Connect unserved anchor institutions
- Demonstrate climate resilience
- Provide other benefits (e.g., redundancy, direct interconnect facilities)
- Meet the community’s needs and complete the project in a two-year period

Matching Funds and Timeline
The amount of a middle mile grant awarded to an eligible entity may not exceed 70% of the total project cost – in other words, the applicant must be able to cover 30% of the project cost. This is where critical infrastructure companies can contribute to a successful application, by providing in-kind assets including rights-of-way, existing fiber capacity, tower space and radio equipment infrastructure to support the middle mile project. The contribution of existing infrastructure will also help the applicant meet statutory deadlines: a winning applicant must complete its buildout of the middle mile infrastructure within five years of the date the grant is made available.

Ceragon is here to help. We can walk you through some of the basics to help determine if the middle mile program is right for you, and help you identify initial preparations, timelines, and network planning. You are not alone. More importantly, our experts can help you create a network deployment plan that best meets your specific needs – challenging geography, remote or sparse subscribers, keeping operating costs low, meeting customer demands, reliability, high capacity, ongoing optimization, and much more. Let Ceragon help you formulate your plans and find the solutions that are best suited for you and your community.

A quick guide to overall funding from NTIA IIJA Broadband Programs: for full details and other requirements, applicants are encouraged to consult the application packet, which will be posted on the NTIA website at https://grants.ntia.gov/.
Modern networks can be attacked in a variety of ways, meaning that companies need different types of protection. This article explains some of the risks involved and provides some easy ways to deal with them.

Consumerisation is a problem facing every IT department. Once upon a time, home and corporate computing were entirely separate. During the eighties, the PC was purely a business tool; during the nineties, it became the primary machine for home use as well; and during the following decade, the Internet took many applications into the cloud. Today, employees use the same computer and browser architectures at home as they do at work. This has blurred the lines between computing at home and at work—and has created some unique security challenges in the process. Dazzled by the Web 2.0 sites that permeate their lives at home, employees want the same comforts in the office. Modern web sites offer far more than the one-way Internet experience so common in 1995, where users simply read the information on web sites.

A new and dangerous web
Instead, today’s web offers a bidirectional, many-to-many experience, in which users are encouraged to participate by submitting their own content. Sites ranging from social networks to online photo sharing services invite users to submit their own information, and even to chat in real time. Facebook, LinkedIn, Wikipedia, Flickr and a panoply of other sites fall into this category. These technologies have brought small and medium-sized businesses the same benefits as their larger counterparts. Online applications, advanced search capabilities, and real-time messaging technologies enable them to build scalable, highly responsive technology infrastructures to support their businesses. Virtual teams of contractors can now be assembled easily with a collection of free instant messenger clients and a cheap account on a collaborative web site, for example. However, these benefits come at a cost.
Many web 2.0 sites have repeatedly been found wanting in terms of security. More functionality breeds more vulnerabilities, and attackers have been quick to exploit them. Malicious software (malware) that infects computers and connections spreads via a variety of channels, including hacked web sites, email, social networks, and instant messenger programs. Even simple search results are being ‘poisoned’ by search engine optimisation experts who want to direct unwitting users to malicious web pages instead of legitimate ones.

The dangers extend to the unintended egress of information. Employees may inadvertently send sensitive data outside the company via several channels. Pasting customer information into an email is one example, although it can also be pasted into web 2.0 sites, or sent via instant messaging programs.

An example of the danger: real-time chat
The encroachment of real-time chat into corporate networks began as long ago as 1996-7, when Mirabilis launched the ICQ chat service, and AOL launched its Instant Messenger program. The software began creeping onto corporate desktops without IT’s permission. That’s the problem with the corporate desktop: it is very difficult to manage effectively. For SMBs especially, who often have a shortage of IT expertise, trying to lock down desktops is a challenging task. Even those organisations with the wherewithal to do it risk irritating employees who want those comforts on the desktop. With instant messaging becoming an important work tool, it could even be deemed counterproductive for companies to ban it from the desktop altogether. AOL Instant Messenger, MSN Messenger, and Skype are all useful for business purposes, as are other programs such as Google Talk. The irony underlying most instant messaging programs is that while legitimate, they act like malicious software.
They are designed to get around network firewalls that might try to block them by ‘port hopping’ - effectively trying different digital ‘doors’ separating a company’s network from the public Internet, until they find one that is unlocked. The problem of real-time chat as a potential attack vector has been exacerbated by the introduction of web-based online chat mechanisms that need no desktop client at all. Facebook’s built-in instant messaging feature is a good example of this.

Defence in depth
SMBs with little resource to spare for complex IT security therefore find themselves battling not only real, external threats, but also their own well-meaning employees. They need simple, turnkey solutions to secure their networks, but as we’ve seen, the threats operate at multiple levels. For this reason, security products for SMBs should provide multi-layered protection (otherwise known as ‘defence in depth’) to protect all of the available channels. Defence in depth goes beyond the traditional firewall, which has historically been the main method used to protect the corporate network. These devices did little more than block specific ports on a network to stop external attackers from using them to attack a company’s computers. They did nothing to analyse the actual content of the traffic passing over the company’s network connection. Unified threat management (UTM) appliances monitor the network for a variety of threats by combining smart firewall technology with email and web content scanning. They can be programmed with rules that stop employees from doing specific things on the Internet at particular times, and can look for suspicious traffic flowing over the network.

Protecting the network
Network security features heavily in UTM systems, which build on traditional firewall systems with a host of new features. Modern UTMs feature ‘stateful’ packet inspection, which not only monitors specific ports, but also watches what traffic passes through them over time.
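The ‘port hopping’ behaviour described earlier is simple to sketch: a client tries a list of ports in turn until one accepts a connection. The Python snippet below is a generic illustration, not code from any real messaging client:

```python
import socket

def first_open_port(host, candidate_ports, timeout=0.5):
    # Try each port in order and return the first one that accepts a TCP
    # connection -- roughly what a port-hopping client does when its
    # default port is blocked by a firewall.
    for port in candidate_ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue
    return None  # every 'door' was locked
```

This is also why port-based blocking alone is weak: as long as one commonly open port (such as 443) gets through, the client succeeds.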
This ability to watch the traffic passing across the network also allows modern network security products to offer intrusion detection and prevention (IDP) capabilities. The security device monitors network traffic activity to look for patterns that could indicate an attack. An example of a malicious pattern might be a single PC in the organisation which suddenly begins rapidly contacting other PCs using a single port, behaviour that could indicate a rapidly spreading piece of malware. The IDP database is constantly updated with new patterns identified by the vendor of the device as new vulnerabilities and attacks appear.

Modern network security devices also feature application firewall capabilities. These use a technique known as deep packet inspection to look inside the small ‘envelopes’ of data that flow over an Internet connection. By examining the content of these packets, a device can determine the type of traffic they carry: video, VoIP, or web traffic directed at a particular application on the company’s network. By analysing the packets, the device can determine whether they are performing legitimate tasks.

Multi-layered devices also monitor the content of those packets for warning signs, enabling them to scan incoming and outgoing emails for suspicious content. This enables an organisation to stop spam messages from reaching recipients, using a mixture of spam signatures updated by the vendor and intelligent heuristic techniques that allow the device to estimate the likelihood that a particular email is spam.

Finally, web security protects users at both the content-filtering and URL-filtering levels. It watches the URLs that users attempt to visit, and can block known malicious sites (such as phishing destinations, or ‘drive-by download’ sites) before the user’s browser has a chance to download malicious or inappropriate content.
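The fan-out pattern mentioned above (one PC suddenly contacting many peers on a single port) can be expressed as a simple detection rule. The threshold and time window below are illustrative choices, not values from any particular IDP product:

```python
from collections import defaultdict

def detect_fanout(events, window=10.0, threshold=50):
    # events: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples.
    # Flags any source that contacts more than `threshold` distinct hosts
    # on one destination port within `window` seconds -- the signature of
    # a rapidly spreading worm.
    recent = defaultdict(list)  # (src, port) -> [(ts, dst), ...]
    alerts = set()
    for ts, src, dst, port in sorted(events):
        key = (src, port)
        recent[key].append((ts, dst))
        # Keep only contacts inside the sliding window.
        recent[key] = [(t, d) for t, d in recent[key] if ts - t <= window]
        if len({d for _, d in recent[key]}) > threshold:
            alerts.add(src)
    return alerts
```

A real appliance would do this over live packet captures and with many more signatures, but the core logic is this kind of windowed counting.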
URL filtering has the added benefit of enabling a company to implement policies controlling social network use. Perhaps managers only want users visiting Facebook during their lunch hour, for example. Web security mechanisms will also scan page content, watching both for inappropriate material such as pornography and for malicious code embedded in a webpage that might compromise a user’s computer.

Covering all your bases

It is easy to see how these functions work in unison. For example, attackers often use email to send malicious URLs to users. These may be spotted by the email protection functions within a unified threat management system or Internet security appliance. However, if they slip through, they will be caught by the web filtering mechanism, making it doubly hard for attackers to compromise users. Anti-virus mechanisms built into the device will also scan for malware separately, providing yet another layer of protection.

Defence in depth is a crucial technique for any modern SMB that wants to protect itself against intrusion. Condensing multi-layered protection into a single device, updated by the vendor, provides the best protection for resource-constrained companies.

Modern Internet security is an exercise in probability. It is impossible to guarantee 100% security: a determined hacker may still be able to gain access to a company’s system. But the more points of protection a company covers, the more likely it is to fend off the majority of generic attacks on the Internet. Can you afford not to cover your bases?
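The closing "exercise in probability" point can be made concrete: if the layers fail independently, an attack succeeds only when every single layer misses it, so the overall miss rate is the product of the per-layer miss rates. A quick sketch with invented, purely illustrative numbers:

```python
def combined_miss_rate(miss_rates):
    """Probability that an attack slips past every layer, assuming the
    layers fail independently of one another."""
    p = 1.0
    for rate in miss_rates:
        p *= rate
    return p

# Illustrative figures only: suppose the spam filter misses 20% of attacks,
# the URL filter misses 30%, and the anti-virus scanner misses 25%.
layers = [0.20, 0.30, 0.25]
print(f"{combined_miss_rate(layers):.1%}")  # three mediocre layers -> 1.5%
```

Even three individually unimpressive layers leave only a 1.5% combined miss rate under this (admittedly idealised) independence assumption, which is the intuition behind defence in depth.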
https://circleid.com/posts/defending_the_network_several_times_over
What is port forwarding and how safe is it?

Port forwarding, or port mapping, allows remote servers and devices on the internet to access devices within your private local-area network (LAN), and vice versa. Without port forwarding, only devices that are part of the internal network can reach each other; with port forwarding, services you designate become reachable from outside. Whether you’re making a Minecraft game accessible to your friends or hosting a small website, port forwarding is a useful way to access software running on your computer remotely.

Essentially, port forwarding maps an external “port” on your internet-facing IP address to a particular computer on your local private network. This allows you (or someone else) to access something on your computer from the internet. Port forwarding solves all kinds of problems, but it can also be dangerous. If you fail to secure a remote desktop connection, for example, someone could log into your computer from afar. In this article, we’ll explore port forwarding in depth: how it works, what it is used for, and how to solve any port forwarding problems that might arise.

How does port forwarding work?

Ports are how computers distinguish between multiple services listening on one machine. Using ports lets a device run many different processes and services at once, with each service listening on its own port. For example, email submission servers usually use port 587, while websites use port 80. In total there are 65,536 ports (numbered 0–65535), but only the first 1,024 or so are reserved for well-known services. Any of the others can be assigned to the applications of your choice, and mapping one of them through your router to a device inside your network is what port forwarding does.
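The idea that ports let one machine host many services side by side can be shown directly in code. In this illustrative Python sketch (the service names and payloads are invented, and the operating system picks the port numbers), two tiny TCP listeners run on the same address and are told apart only by their port:

```python
import queue
import socket
import threading

def serve_once(reply, port_q):
    """Listen on an OS-assigned port, answer one connection, then exit."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
        srv.listen(1)
        port_q.put(srv.getsockname()[1])  # report which port we were given
        conn, _ = srv.accept()
        with conn:
            conn.sendall(reply)

# Two "services" on the same host; only the port number distinguishes them.
ports = {}
for name in (b"web", b"mail"):
    q = queue.Queue()
    threading.Thread(target=serve_once, args=(name, q), daemon=True).start()
    ports[q.get()] = name

for port, expected in ports.items():
    with socket.create_connection(("127.0.0.1", port), timeout=2) as c:
        assert c.recv(16) == expected
        print(port, "->", expected.decode())
```

One IP address, two services: the connecting client selects between them purely by which port it dials, which is exactly the property port forwarding builds on.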
To fully understand it, you should also know that, thanks to NAT (Network Address Translation), all the internal devices share the same external IP address. So, let’s use a little allegory to explain how port forwarding works. You can think of ports like doors to a house: your computer is at 1234 Daisy Lane and it has about 65,000 doors. If port 22, used for the SSH remote access protocol, is listening, imagine that door 22 at 1234 Daisy Lane is unlocked.

The trouble with NAT is that it provides different addresses internally and externally. To continue the house analogy, imagine that the outside world could only send visitors to Daisy Lane, not to specific houses within the neighborhood. If a visitor asks for door 22 on Daisy Lane, the gatekeeper (representing NAT on the router) won’t know which house to send them to. This is where port forwarding comes in. Once you set a few settings on your router (or other default gateway), it will be able to send inbound connections to the right computer within the network.

Types of port forwarding

There are several types of port forwarding, each serving a different purpose. Local, remote, and dynamic forwarding are commonly implemented as SSH tunnels, which use TCP port 22 by default.

- Local port forwarding. This type of port forwarding is used when you want your LAN device to get data from a destination that you don’t have access to, but that an intermediate device in the middle does. It allows data to be pulled from the remote destination to your local device.
- Remote port forwarding. This type of port forwarding allows your device to be visible to other remote devices or on the internet. In this case, data is pushed from your device to the remote destination server, and then back to the source port and your device. With remote forwarding, anyone on the internet or a remote device can get access to your device.
- Dynamic port forwarding. Dynamic port forwarding is essentially an extension of local port forwarding.
The difference is that any program on your LAN device can use the SSH tunnel and access any remote destination port by using only one port on your side. Dynamic port forwarding works by creating a proxy of sorts.

What is port forwarding used for?

From the sound of it, port forwarding might seem like the purview of IT professionals and programmers. While those kinds of people are certainly heavy users of port forwarding, it’s useful to a far wider range of the computer-using population. Here are some of the most common uses for port forwarding:

- Hosting game servers for multiplayer gaming accessible from outside your home network.
- Running remote desktop protocols for accessing your computer remotely.
- Permitting file transfers from your computer to the outside world, or external networks.
- Running a publicly accessible website from your home computer.
- Using torrent applications to quickly download files.
- Hosting your own VPN server that allows you to access your home network from afar.

While many of these tasks can be accomplished without the help of port forwarding, it’s often the easiest solution.

Is port forwarding safe?

Port forwarding inherently gives people outside your network more access to your computer. Opening or exposing unsafe ports can be risky, as threat actors and others with malicious intent can then more easily take control of your device. Port forwarding punches a deliberate hole through Network Address Translation (NAT), the technology that allows multiple devices to share one IP address. NAT also shields your devices from unsolicited external connections, so when you selectively bypass it with port forwarding, you open your chosen device up to direct connections from the wider Internet. If you port forward a remote desktop connection to the Internet, anyone from anywhere in the world can connect to your computer if they know the password or exploit a bug. This can be bad.

Can you get hacked through port forwarding? Yes.
If you take security precautions, such as using a firewall or a VPN for the port forwarding process, is it likely? Not really. More than anything, responsibly using port forwarding requires care and diligence. The following general tips will help you stay safe:

- Use strong passwords. If you’re running a remote access connection, your computer is only as secure as the password you set. Hackers try multiple passwords every second on every machine connected directly to the Internet. If possible, eliminate this weakness altogether by using key-based authentication (supported by protocols such as SSH).
- Update your devices quickly. Vulnerabilities are constantly discovered and fixed in operating systems and other software. If you put off updating your computer, there might be a bug that a hacker can use to defeat your security and gain access to your computer.
- Don’t expose more than you need. Once you learn how to use port forwarding, you might want to use it with all sorts of devices and services. This is a bad idea. The more surface you expose, the greater the odds of a successful cyberattack against your computer.

Dangers of port forwarding

Even though using port forwarding with a VPN greatly reduces the risk of getting hacked, you should still be aware of the possible dangers. Let’s look at some specific hypothetical scenarios where port forwarding can be risky.

1. You port forward access to a video game. For convenience, you don’t set a password, thinking that hackers will never guess your IP address. Your friends can join your game with ease, but so can bad actors.
- Just as hackers test passwords against Internet-accessible services multiple times per second, they also automatically attack open protocols like games on every internet-connected device.
- Set a strong password and keep your device updated to prevent this issue.

2. You secure your game with a password, but don’t update the game or device.
A security issue is discovered in the game, allowing anyone who exploits the bug to hack your computer.
- Apply security updates in a timely manner to avoid this problem.

3. You forward a port to use a torrenting application. Even though you think you’re using an anonymizing solution like a VPN, data is accidentally uploaded through your real IP address. If you’re downloading copyrighted material, you could be in trouble.
- Always verify that your software is configured correctly. Don’t assume that your traffic is anonymous just because you use a VPN or Tor.

How to open ports on a router

Because port forwarding involves changing settings on your router, the exact process will depend on your router model. However, the process usually takes the same form regardless of who made your router. In this guide, I’ll use screenshots from a common Comcast modem/router combination.

Step 1: Find your router’s configuration page

Every router makes its settings accessible through some kind of configuration interface. Apple AirPort routers are somewhat unique in that they require special software (AirPort Utility) to change their settings. For most routers, you can change settings through a website accessed at a special IP address.

Internal networks use private IP addresses of the form 10.X.X.X or 192.168.X.X. The 172.16.X.X–172.31.X.X range is less common but also possible. Your router’s configuration page will likely be at the first IP address in its range. To figure out what this address is, first look in your computer’s networking settings to see which IP prefix you use. Depending on your computer’s operating system, this setting will be in a different place. On my Mac, it helpfully displays both my computer’s internal IP address and the router’s IP address. You can see that the prefix used on my network is 10.X.X.X and that the router is at the very first possible IP address.
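If digging through OS settings is a chore, the same guess can be scripted. The sketch below is only a heuristic: it assumes a /24 network and that the gateway sits at the .1 address, which, as noted above, is common but by no means guaranteed:

```python
import socket

def local_ip():
    """Discover this machine's LAN address by opening a UDP socket toward a
    public-looking address. No packets are actually sent; the OS just picks
    the outgoing interface, whose address we then read back."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 80))  # documentation range, never contacted
        return s.getsockname()[0]

def likely_router_ip(ip):
    """Guess the gateway as the first host address in the /24 (heuristic)."""
    return ".".join(ip.split(".")[:3] + ["1"])

if __name__ == "__main__":
    try:
        ip = local_ip()
    except OSError:          # no network route available
        ip = "192.168.1.23"  # fall back to an example address
    print("this machine:", ip)
    print("try your router at:", likely_router_ip(ip))
```

If the guessed address doesn’t respond, fall back to your operating system’s network settings or your router’s documentation, since not every network uses a /24 or puts the gateway at .1.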
Most routers use the first IP address in their prefix, regardless of which prefix your network uses. Next, go to your router’s IP address in a web browser. On the Xfinity (Comcast) router used in these examples, you’ll be greeted by a login page. If you see something similar, congratulations! You successfully found your configuration page.

Step 2: Log in

Now that you’re at the login page, you might not remember your username and password. If you don’t remember setting one in the first place, it’s probably still set to the default. The helpful site RouterPasswords.com offers a database of default passwords that you can try.

Step 3: Find the port forwarding option

On my router, this feature is hidden in the Advanced menu. If you can’t find it, continue looking through the menus. In the case of this router, it appears that we cannot change port forwarding settings directly from the router’s configuration page. Let’s follow its instructions and visit the other settings website. After logging into Comcast’s website, we can go to See Network, then click on Advanced Settings. Now we’ve found it!

Step 4: Add the port forward

Now that we’ve found the option, it’s time to add the port forward. This screen looks similar on nearly every router. First, we select a device or IP address to use as the destination. This is the device that runs the software we want to forward. Next, choose a common service to forward or manually input a port. If you choose a premade option, your service should work out of the box. Otherwise, you might need to experiment to find the correct port to forward. If you want to forward an entire range of ports or add multiple ports to the forwarding list, you can do that here.

Step 5: Test out your program

To adequately test whether your port forwarding was successful, you’ll need to use a device outside your local network. Follow the instructions listed later in this article to test out your port forward.
We’ll look at common problems and solutions in more detail later on.

How to open ports on a VPN

Many VPN services allow you to open ports on the other end of the tunnel. Instead of remotely connecting to your computer’s actual IP address, you connect to the VPN’s endpoint IP address. That way, no one has direct access to your actual device, and any data sent through the tunnel is encrypted. Using a VPN to forward ports mitigates the risks that forwarding ports on a router exposes your devices and data to, such as hacks, data corruption or theft, and malware infections.

Compared to forwarding ports on a hardware router, doing the same on a VPN is relatively simple. However, not every provider supports port forwarding, so do your research before purchasing a VPN. Also, since the process of forwarding ports differs for every VPN provider, look for specific instructions on their website.

Common problems with port forwarding

While port forwarding works most of the time, it can fail on occasion. Whether the root issue is user error or something in the software, port forwarding issues can be difficult to diagnose. Here are a few of the most common issues:

- “Connection refused” errors, as if you are not using port forwarding at all.
- Slow remote connections that make games and remote desktop unusable.
- Constant invalid password warnings from remote desktop software.

How to test port forwarding

Before you can figure out the cause of any issues, it’s important to have a reliable testing process. To effectively test a port forwarding setup from the comfort of your home, you’ll need the following hardware and software:

- A desktop or laptop computer used to host the application being port-forwarded.
- An additional computer to use as a client, with the client software installed for the application you’re testing.
- A smartphone with tethering or a secondary Internet connection.
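Alongside that hardware, a small script is handy for checking raw TCP reachability before involving the real application. This illustrative Python helper (the address and port below are placeholders from the documentation range) simply reports whether a TCP connection succeeds; it cannot distinguish a closed port from one silently dropped by a firewall:

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

if __name__ == "__main__":
    # Replace these with your real external IP and forwarded port.
    host, port = "203.0.113.10", 25565  # placeholder documentation address
    state = "open" if is_port_open(host, port) else "closed or filtered"
    print(f"{host}:{port} looks {state}")
```

Remember to run the check from a device on a different external IP address (such as the tethered client above), since testing from inside your own LAN can give misleading results.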
Using Minecraft as an example, here’s how to test that your port forwarding worked:

- From the server machine, start the Minecraft server and verify that it is running on the port you selected.
- Connect the client machine to your smartphone or secondary internet connection. This connection must have a different external IP address.
- Open the Minecraft game on your client machine and connect to the first computer’s external IP and port.
- Verify that the connection works and the game loads. Don’t worry about speed; if you’re using cellular Internet on the client, it won’t be fast even if you did everything correctly.

You can also check your port by using an online open-port testing tool.

Troubleshooting connection refused errors

If you continue to see connection refusals, here are some troubleshooting ideas:

- Make sure that you’re connecting to the right IP address. Find the external IP address of the device you want to connect to and use that.
- Try forwarding a different port. Some services, like VNC, use entire ranges of ports, so you might need to forward several.
- Change your firewall settings. If the computer you’re using as a server has a firewall, you might need to allow external connections to the port in question.

Does port forwarding slow down the internet?

Connection slowness can be more challenging to fix. That said, port forwarding itself has nothing to do with your Internet speed. If you’re running a high-bandwidth game or service behind a forwarded port, it might saturate your connection, but that isn’t the fault of the port forwarding setup.

Fixing invalid password warnings

Many kinds of remote desktop software will warn you if someone attempts to log in with an invalid password. Since anything connected to the public Internet will receive dozens of hack attempts per minute, you might see a lot of these.
One easy, effective way to decrease the number of invalid login attempts on your computer is to move to a non-standard port. While this approach doesn’t actually increase your security, it does provide some basic obscurity. From your router’s control panel, change the external port to a high number (above 1024 and no higher than 65535). If the port is not commonly used by other applications, you should see fewer connection attempts. Don’t rely on this approach to make up for a bad password, but certainly use it if you suffer from excessive invalid connection attempts.

Port triggering vs. port forwarding: What’s the difference?

Port triggering serves many of the same functions as port forwarding, but it works differently. Instead of always forwarding a particular port to a certain machine, port triggering works dynamically. Here’s effectively how it works:

- A computer on the internal network connects to an external server on a certain port.
- The router sees this connection and triggers a port forwarding rule to the internal computer.
- Afterwards, traffic that matches the forwarding rule is forwarded to the internal computer for a period of time.

If two machines on the local network need to use the same external-facing port, port triggering can be a great solution. Most of the time, however, it’s clunkier and more difficult to use.

Example of port forwarding

It is one thing to know how things work in theory, and another to have a real-life example. So, let’s say you want to set up a public Minecraft server for you and your friends. Setting up the server itself requires a bit of configuration, but we don’t need to focus on that here. The most important thing to know is that this Minecraft server is local: it can only be accessed from devices that are connected to your LAN, sharing the same external IP address.
If you want to play on your server with your friends, they won’t be able to access it unless they come to your home and connect to your internet. You need to open your server to incoming connections from remote devices, and this is where port forwarding comes into play. In your router configuration, you need to enter the standard Minecraft server port number, which is 25565. That way, your router will know to forward incoming connections from your friends’ devices to your Minecraft server. If you want to try setting up a Minecraft server yourself, check out this guide.

Port forwarding allows you to open up a specific service on your computer to receive inbound traffic from the Internet. From video games to remote desktop, it’s a very useful tool. Port forwarding comes with some security considerations, but they can generally be overcome. Thinking of trying out a VPN service? Read one of our VPN guides or reviews.

Why does port forwarding have a bad reputation?

Port forwarding usually means leaving a gap in your security. This can be dangerous because hackers could use that gap to penetrate your network, and there are documented cases of an opened port being used as an attack vector. That’s why most guides won’t recommend opening ports unless you’re entirely sure of what you’re doing.

Does port forwarding increase internet speed?

Not meaningfully. Port forwarding does not increase your raw bandwidth. At most, a direct forwarded connection can avoid relay servers and NAT traversal overhead, which may shave a few milliseconds of latency off a connection.

Does port forwarding help gaming?

It can. A direct connection can make multiplayer sessions more stable and reduce connection issues, and in some games it can even improve load times.

Does opening ports reduce lag?

Not necessarily. If you’re experiencing lag, it may be due to the hosting server’s connection, which opening ports won’t improve.
https://cybernews.com/what-is-vpn/port-forwarding/
- [AI (Artificial Intelligence)] Human vs AI Agents in Cybersecurity: Who Should Guard Your Data? In the battle against cyber threats, should we trust human experts or AI agents to protect our valuable data? Explore how AI's tireless vigilance, pattern recognition, and rapid adaptation are reshaping cybersecurity.
- [cyber security] AI-Powered Cybersecurity: How Artificial Intelligence is Transforming the OSI Model. Explore the OSI model's 7 layers, their vulnerabilities in the cybersecurity landscape, and how AI is revolutionizing defense strategies for each layer.
- [cyber security] Protecting Business from the Inside Out: A Layered Approach to Cybersecurity. Learn how taking an internal, layered approach to cybersecurity – including training staff, controlling access, monitoring activity, and incident planning – helps protect valuable company data and resources from compromise.
- [authentication] Authentication Systems Decoded: The Science Behind Securing Your Digital Identity. Cybersecurity is a continuous journey, but with solid authentication systems, this trip can be safer for everyone on board.
- [AI (Artificial Intelligence)] AI-Powered Cybersecurity: Fortifying Against Data Breaches. AI: the game-changer in cybersecurity, empowering organizations to defend against data breaches and cyberattacks proactively.
- [cyber security] Building a Career in Cyber Security: The Biggest Lie. TL;DR: Cybersecurity is a complex and challenging field, and it's important to have realistic expectations about what it takes to get started. Don't believe the hype that you can become a cyber security expert overnight.
- [vulnerabilities] Automated Vulnerability Detection: Mitigate Fraud and Strengthen Your Cybersecurity Defense. Don't let cybercriminals exploit your weaknesses. Empower your cybersecurity defense with automated vulnerability detection and mitigate fraud effectively.
- [cyber security] Cyber Insurance - Your Secret Weapon Against Digital Risk. Cyberattacks pose one of the biggest risks to companies today. Yet many still don't understand cyber insurance or how to maximize its benefits.
- [learning] How to Learn Cloud Security and Build a Career to CISO. Lay the groundwork for a successful career as a CISO with a strong understanding of cloud security. Learn how to get started and elevate your cybersecurity expertise!
- [cyber security] The Indispensable Role of Human Expertise in an AI-Driven Cybersecurity World. No matter how advanced AI becomes, it still requires human expertise to build and analyze systems, enabling AI to make better decisions.
- [digital identity] Navigating the Digital Landscape: A Guide for Young People on Managing Their Online Identities. Your online identity is precious, and protecting it should be your top priority. Discover practical strategies to safeguard your personal information and maintain control over your digital presence.
- [loginradius] Delivering World-Class CIAM: The Product Management Journey at LoginRadius. Customer-centric product management at the core of its product development process to drive success.
- [AI (Artificial Intelligence)] How AI Is Shaping the Cybersecurity Landscape — Exploring the Advantages and Limitations. This article discusses the relationship between AI and cybersecurity in the modern digital age, exploring the potential benefits and challenges.
- [ecommerce] E-Commerce Cybersecurity Trends to Watch in 2023. With the e-commerce market experiencing a surge in demand over the past couple of years, specific security threats that require adequate attention have lingered.
- [AI (Artificial Intelligence)] The Impact Of AI On Identity And Access Management. IAM systems that are backed by AI offer several benefits in three major aspects: authentication, identity management and secure access.
- [hashing] Understanding Hashing Algorithms: A Beginner's Guide. Understanding the importance of hashing algorithms in securing your data, different types of hashing algorithms, and their unique features.
- [MFA] Minimizing Credential Theft With MFA. Phishing is a significant business threat and can lead to financial and reputational damages. Minimizing the risk of credential theft through phishing attacks requires a rigorous defense.
- [authentication] Built-in Authentication Security Mechanisms to Reinforce Platform Security. By incorporating built-in platform authentication mechanisms, including MFA and adaptive authentication, you can build the trust of your customers by showing that you care about their safety.
- [cyber security] What is Zero-Day Vulnerability? Zero-day vulnerabilities can be very dangerous because malicious people can use them to access systems and data without being detected.
- [cloud] Public Cloud Risks - Is Your Organization Prepared for Cloud Threats? The rapid adoption of the public and hybrid cloud doesn't necessarily mean that sensitive information stored on remote servers or shared clouds is secure. This blog highlights the risks associated with the public cloud and how businesses can take timely action to avoid them.
- [cyber security] Why is cybersecurity so crucial for Startups? Startups have unique challenges when it comes to cybersecurity. If not taken seriously, they can lose customers and their trust.
- [zero trust] Implementing Zero Trust? Make Sure You're Doing It Correctly. Though zero trust architecture may be potent for reinforcing overall security, the chances of sneaking and security breaches aren't always zero. If not implemented correctly, zero trust could lead to various security and user experience issues and hamper overall business growth.
- [cyber security] Cloud Security: An Overview of Challenges and Best Practices. Cloud systems are frequently shared, and identity management, privacy, and access control are highly critical for cloud security.
- [cloud] Securing the cloud – the peerless role of biometrics. While businesses embark on a digital transformation journey, cloud adoption is undoubtedly the cornerstone for new-age enterprises.
- [zero trust] What Makes Zero Trust Better And Different From Traditional Security. Enterprises have already started to embrace zero trust security over traditional security since it offers improved security while simultaneously improving flexibility and reducing complexity.
https://guptadeepak.com/tag/cyber-security/
Can a smartwatch be used to spy on its owner? Sure, and we already know lots of ways. But here’s another: A spying app installed on a smartphone can send data from the built-in motion sensors (namely, accelerometer and gyroscope) to a remote server, and that data can be used to piece together the wearer’s actions — walking, sitting, typing, and so on. How extensive is the threat in practice, and what data can really be siphoned off? We decided to investigate.

Experiment: Can smartwatch movements reveal a password?

We started with an Android-based smartwatch, wrote a no-frills app to process and transmit accelerometer data, and analyzed what we could get from this data. For more details, see our full report.

The data can indeed be used to work out whether the wearer is walking or sitting. Moreover, it’s possible to dig deeper and figure out if the person is out for a stroll or changing subway trains — the accelerometer patterns differ slightly; that’s also how fitness trackers differentiate between, say, walking and cycling.

It’s also easy to see when a person is typing on a computer. But working out what they are typing is way more complex. Everyone has a specific way of typing: the ten-finger method, the one- or two-digit keyboard stab, or something in between. Basically, different people typing the same phrase can produce very different accelerometer signals — although one person entering a password several times in a row will produce pretty similar graphs.

So, a neural network trained to recognize how a particular individual enters text could make out what that person types. And if this neural network happens to be schooled in your particular way of typing, the accelerometer data from the smartwatch on your wrist could be used to recognize a password based on your hand movements. However, the training process would require the neural network to track you for quite a long time.
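To get a feel for why accelerometer traces separate gross activities so readily, consider the signal’s "energy" (the variance of the acceleration magnitude), which differs by orders of magnitude between sitting still, typing, and walking. The sketch below is a toy illustration only — the data is synthetic and the thresholds are invented; a real attack, as described above, requires a trained neural network:

```python
import math
import random

def window_energy(samples):
    """Mean squared deviation of accelerometer magnitudes in a window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

def classify(samples, typing_thresh=0.002, walking_thresh=0.2):
    """Crude three-way activity guess from a single energy feature."""
    e = window_energy(samples)
    if e < typing_thresh:
        return "sitting"
    return "typing" if e < walking_thresh else "walking"

def synthetic(noise):
    """Synthetic window: gravity on the z axis plus zero-mean jitter
    whose amplitude mimics the activity's intensity."""
    rng = random.Random(0)
    return [(rng.gauss(0, noise), rng.gauss(0, noise), 9.8 + rng.gauss(0, noise))
            for _ in range(100)]

for label, noise in [("sitting", 0.01), ("typing", 0.1), ("walking", 1.5)]:
    print(label, "->", classify(synthetic(noise)))
```

Telling activities apart is the easy part; recovering which keys were pressed requires far finer-grained modeling of an individual’s hand movements, which is exactly why the attack needs a long training period.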
The processors in modern portable gadgets are not powerful enough to run a neural network directly, so the data has to be sent to a server. And therein lies trouble for a would-be spy: the constant upload of accelerometer readings consumes a fair bit of Internet traffic and zaps the smartwatch battery in a matter of hours (six, to be precise, in our case). Both of those telltale signs are easy to spot, alerting the wearer that something is wrong. Both, however, are easily minimized by scooping up data selectively, for example when the target arrives at work, a likely time for password entry.

In short, your smartwatch can be used to identify what you’re typing. But it’s hard, and accurate recovery relies on repeat text entry. In our experiment, we were able to recover a computer password with 96% accuracy and a PIN code entered at an ATM with 87% accuracy.

It could be worse

For cybercriminals, however, such data is not all that useful. To use it, they’d still need access to your computer or credit card. The task of determining a card number or CVC code is way trickier. Here’s why. On returning to the workplace, the first thing the smartwatch owner types is almost certainly a password to unlock their computer. That is, the accelerometer graph indicates first walking, then typing. Based on data obtained just for this brief period, it’s possible to recover the password.

But the person won’t enter a credit card number as soon as they sit down — or get up and walk away immediately after entering that data. What’s more, no one will ever enter this information several times in short succession. To steal data-entry information from a smartwatch, attackers need predictable activity followed by data entered several times. The latter part, incidentally, is yet another reason not to use the same password for different services.

Who should worry about smartwatches?
Our research has shown that data obtained from a smartwatch acceleration sensor can be used to recover information about the wearer: movements, habits, some typed information (for example, a laptop password). Infecting a smartwatch with data-siphoning malware that lets cybercriminals recover this information is quite straightforward. They just need to create an app (say, a trendy clockface or fitness tracker), add a function to read accelerometer data, and upload it to Google Play. In theory, such an app will pass the malware screening, since there is nothing outwardly malicious in what it does. Should you worry about being spied on by someone using this technique? Only if that someone has a strong motivation to spy on you, specifically. The average cybercrook is after easy pickings and won’t have much to gain. But if your computer password or route to the office is of value to someone, a smartwatch is a viable tracking tool. In this case, our advice is: - Take note if your smartwatch is overly traffic-hungry or the battery drains quickly. - Don’t give apps too many permissions. In particular, watch out for apps that want to retrieve account info and geographical coordinates. Without this data, intruders will struggle to ascertain that it’s your smartwatch they’ve infected. - Install a security solution on your smartphone that can help detect spyware before it starts spying.
Two world-changing innovations – Natural Language Processing (NLP) and Generative AI – are making sweeping impacts across cybersecurity, fraud detection, and digital communications. And while both technologies harness the power of artificial intelligence, they serve distinct purposes and operate under different mechanisms.

What is Natural Language Processing (NLP)?

Natural Language Processing, or NLP, is a facet of artificial intelligence focused on the interaction between computers and human language. This technology enables computers to understand, interpret, and generate human language in a meaningful and useful way. NLP is pivotal in performing tasks such as:

- Text Classification, which involves categorizing text into predefined groups based on its content. For example, sorting news articles into categories like sports, politics, or technology.
- Sentiment Analysis, which involves analyzing text to determine the sentiment expressed within it, such as positive, negative, or neutral. It’s commonly used to gauge public opinion on social media platforms or in product reviews.
- Language Translation, which enables the translation of text from one language to another using NLP models, facilitating communication across different language speakers and enhancing the accessibility of information globally.
- Question Answering, where NLP is used to build systems that can automatically answer questions posed by humans in a natural language. This involves understanding the question context and retrieving relevant information from a given data source.
- Chatbot Development, which involves creating conversational agents (chatbots) that can engage with users in human-like interactions, providing customer support, gathering information, or helping with navigation on websites (more on this below).
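To make the sentiment-analysis task above concrete, here is a deliberately minimal, lexicon-based sketch. Production NLP systems use trained models rather than hand-picked word lists; the lists and scoring rule here are assumptions chosen only to show the input/output shape of the task.

```python
# Toy sentiment analyzer: count positive vs. negative words.
# The word sets are illustrative assumptions, not a real sentiment lexicon.
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for a piece of text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

For example, `sentiment("I love this great product")` would be classed as positive, while a complaint full of negative words would not.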
In the context of cybersecurity, NLP is particularly beneficial in fraud detection, where, by analyzing unstructured text like customer emails or transaction descriptions, NLP can extract crucial insights that help in identifying fraudulent activities. For instance, it can detect specific keywords related to fraud, assess sentiment, or identify anomalous communication patterns that mirror known fraudulent cases.

Moreover, NLP plays a crucial role in improving website security and understanding. By analyzing both the content and the underlying code of websites, NLP models can discern the intent of a site, significantly aiding in cybersecurity efforts. Lastly, email security is another area where NLP shines, utilizing techniques to analyze content, verify sender authenticity, and understand the contextual use of elements like QR codes to flag potential phishing attempts.

Generative AI and Its Capabilities

Generative AI, particularly through the use of Large Language Models (LLMs) like GPT-3.5 and GPT-4, represents a gigantic leap forward in how machines generate human-like text. These models are trained on extensive datasets that include diverse sources like Wikipedia, Reddit threads, books, and articles. Their design enables them to predict the next words in a sentence, thereby generating coherent and contextually appropriate responses. In practical applications, generative AI is revolutionizing tasks such as automated customer service, content creation, and even complex legal and technical writing.

Automated Customer Service: Generative AI is used to power chatbots and virtual assistants that handle customer inquiries and problems automatically. These systems can understand and respond to customer requests in real-time, improving efficiency and customer satisfaction while reducing the workload on human staff.
(To clarify, since chatbots were mentioned in both sections: NLP allows chatbots to understand and process human inputs, crucial for interpreting intent and context. Generative AI, on the other hand, focuses on producing contextually appropriate, human-like responses. This combination enables chatbots not only to understand user queries but also to generate dynamic responses, enhancing the interaction quality and making conversations more fluid and engaging.)

Content Creation: Generative AI aids in the generation of diverse types of content, including articles, reports, and marketing copy. It can produce creative and informative text that aligns with specified guidelines and styles, making it a valuable tool for writers and marketers.

Complex Legal and Technical Writing: In the legal and technical domains, Generative AI helps draft detailed documents such as contracts, legal briefs, and technical manuals. By understanding the specific jargon and stringent formatting requirements of these fields, it assists professionals in creating precise and compliant documents efficiently.

In cybersecurity, generative AI is being employed to automate responses in fake app takedown processes. By analyzing the natural language data within emails, these models can draft responses that are indistinguishable from those written by humans, thereby enhancing efficiency and scalability in digital communications. For example, a system can utilize Large Language Models (LLMs) integrated into a software pipeline that not only sends out bulk takedown requests to app stores but also handles incoming responses. When a response is received, the LLM analyzes the email content to determine the next steps—whether to autonomously close the ticket, request further evidence, or, in complex cases requiring expert insight, alert a human analyst. This automated approach enhances the efficiency and scalability of the fake app takedown process.

NLP vs. Generative AI: Understanding the Differences

While both NLP and Generative AI deal with language, their core purposes differ significantly. NLP is more about comprehension and interaction—understanding human language as it is and responding in a way that is perceived as natural by humans. NLP focuses on parsing language, extracting meaning, and applying this understanding to real-world applications like fraud detection or sentiment analysis.

Generative AI, on the other hand, is about creation. It takes the foundation that NLP provides and extends it to generate new content based on learned patterns and contexts. It’s not just about understanding or translating existing information but about creating plausible new content that didn’t previously exist.

Both Natural Language Processing and Generative AI are transformative technologies that offer vast potential across various sectors. While NLP provides the tools to decode and understand human language, Generative AI uses these insights to create new, contextually relevant content. As these technologies continue to evolve, their integration into business processes and everyday applications will only deepen, opening new avenues for innovation and efficiency in an increasingly digital world. Bolster effectively integrates both NLP for deep analysis and understanding of text data in communications and Generative AI to automate and scale its response capabilities in cybersecurity tasks.
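The takedown pipeline described earlier routes each app-store reply to one of three outcomes: close the ticket, supply further evidence, or escalate to a human analyst. In the real system an LLM interprets the reply; the sketch below substitutes a few keyword rules (our assumption, not the actual decision logic of any vendor) purely to show the control flow of that triage step.

```python
# Rule-based stand-in for the LLM triage step in a fake-app takedown
# pipeline. The keyword phrases below are hypothetical examples of what
# an app store's reply might contain.

def triage(reply_text):
    """Map an incoming app-store reply to one of three next actions."""
    text = reply_text.lower()
    if "has been removed" in text or "taken down" in text:
        return "close_ticket"          # success: nothing more to do
    if "additional evidence" in text or "proof of ownership" in text:
        return "send_evidence"         # store wants more documentation
    return "alert_analyst"             # ambiguous case -> human review
```

Swapping the keyword rules for an LLM call is what gives the production pipeline its flexibility, but the three-way branching stays the same.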
Free cooling, which is the use of naturally cool air instead of mechanical refrigeration, can dramatically reduce the power needed for data center cooling. For colocation tenants, that means dramatically reduced costs. But traditional free cooling systems aren’t practical for most colocation data centers. There are three primary reasons why:

- In locations where it gets too hot and/or too humid, traditional free cooling doesn’t work – or it brings cold aisle temperatures to levels that most colocation tenants simply aren’t comfortable with.
- Without conditioning (which can be expensive and reduces the efficiency of the system), outside air can introduce contaminants to the data center and/or make the data center too humid or too dry – all of which can cause an outage.
- Traditional free cooling systems typically aren’t able to meet the cooling needs of high density IT environments.

So there are very few (if any) colocation providers leveraging free cooling at all – they still use the old chiller plant/forced air technology. Their customers, then, are trading energy efficiency and lower costs for location flexibility and reliability. To learn more about the new approach to data center cooling, download this guide on Chiller vs. Free Cooling.
In December 2019, an outbreak of atypical viral pneumonia occurred in Wuhan, People’s Republic of China, later identified as being caused by a new coronavirus. The disease spread rapidly, resulting in significant global morbidity and mortality, ultimately being named COVID-19, caused by the SARS-CoV-2 virus. This novel virus is part of the beta-coronavirus family, which includes previous pandemic-causing viruses like SARS-CoV in 2002 and MERS-CoV in 2012. Though coronaviruses were not initially considered significant pathogens before these outbreaks, their ability to cause severe respiratory syndromes brought attention to their virulence and the broader implications of coronavirus infections. COVID-19’s spread, along with its high admission rates to intensive care units and associated mortality, especially due to severe respiratory failure, prompted widespread concern and led to unprecedented global public health responses. The impact of COVID-19 extends far beyond acute respiratory distress; long-term effects, particularly regarding carcinogenesis, are becoming a point of focus in scientific investigations. Carcinogenesis, or the process of cancer formation, is influenced by a combination of genetic mutations, epigenetic alterations, and environmental factors, and recent studies are exploring how SARS-CoV-2 might contribute to or exacerbate this process. COVID-19 triggers a profound immune response, often involving a “cytokine storm,” which can lead to sustained inflammation, DNA damage, and subsequent cellular abnormalities. Prolonged inflammatory states have long been associated with increased cancer risk, and the chronic inflammation observed in COVID-19 survivors could create conditions favorable to cancer development. Markers of oxidative stress and persistent immune dysregulation found in COVID-19 patients months after recovery add to concerns that the virus may have a long-term impact on cancer incidence. 
| Category | Detail | Mechanisms | Relevant Pathologies |
| --- | --- | --- | --- |
| Virus & Protein Interactions | SARS-CoV-2 Spike Protein (S) | The spike protein facilitates viral entry into host cells by binding to ACE2 receptors. Activation requires cleavage by host proteases (FURIN, TMPRSS2). Two cleavage sites: S1/S2 exposes the receptor-binding domain; S2′ triggers fusion with the host cell membrane. TMPRSS2 and FURIN are involved in activating viral entry and potentially cancer metastasis. | COVID-19 (severe cases), Prostate Cancer, Colorectal Cancer (linked to KRAS, BRAF mutations), Lung Cancer (due to TMPRSS2 overexpression) |
| Immune Response | Cytokine Storm | Overproduction of pro-inflammatory cytokines such as IL-6, TNF-α, and IL-1β. Associated with severe COVID-19 cases and linked to chronic inflammation in survivors. Pro-inflammatory cytokines drive immune dysregulation, oxidative stress, and DNA damage. Key inflammatory markers: IL-6, TNF-α, ferritin, C-reactive protein. | COVID-19 (acute and chronic cases), Pulmonary Fibrosis, Increased Cancer Risk (due to persistent inflammation), Leukemias and Lymphomas (immune dysregulation) |
| ACE2 Receptor & RAAS Pathway | ACE2 Receptor | ACE2 converts AGTII into angiotensin (1-7), mitigating AGTII’s pro-inflammatory effects. SARS-CoV-2 downregulates ACE2 by binding to it, leading to AGTII accumulation. AGTII promotes cellular proliferation, angiogenesis, and inflammation. Loss of ACE2 exacerbates pro-inflammatory pathways, contributing to cancer risk. | Lung Injury, Chronic Inflammation, Increased Cancer Risk (via proangiogenic AGTII activity), Worsening of Pre-existing Conditions |
| Genetic & Epigenetic Alterations | DNA Methylation, Histone Modification | SARS-CoV-2 alters DNA methylation patterns, especially hypermethylation of tumor suppressor genes, and modulates histone modifications, impacting oncogene and tumor suppressor gene expression. Epigenetic changes can lead to malignant transformation by silencing tumor suppressor genes and activating oncogenes. | Increased Risk of Cancer Development, Long-term Genetic and Epigenetic Damage, Potential Role in Post-COVID Carcinogenesis |
| Protease Activity | FURIN & TMPRSS2 Proteases | FURIN and TMPRSS2 are crucial for the activation of viral spike proteins, facilitating SARS-CoV-2 entry into host cells. FURIN is associated with cancer cell proliferation, migration, and invasion, and is involved in several cancers (lung, head and neck, colon). TMPRSS2 is a key player in prostate cancer progression and viral propagation. | Prostate Cancer, Lung Cancer, Colorectal Cancer, Lung Fibrosis |
| COVID-19 Therapeutics & Risks | Corticosteroids, Immunosuppressants | Corticosteroids and immunosuppressive drugs are used to manage severe COVID-19 but may impair immune surveillance over time. Long-term immunosuppression can increase susceptibility to oncogenic viral infections (e.g., EBV, HPV). Suppression of the immune system reduces the ability to eliminate pre-cancerous cells. | Risk of Cancer, Increased Susceptibility to Oncogenic Viral Infections (EBV, HPV), Weakened Immune Response to Cancer Cells |
| Co-infections | EBV, HPV, and Other Viral Co-infections | Co-infection with oncogenic viruses (EBV, HPV) may worsen COVID-19 outcomes. EBV reactivations occur due to immune suppression from COVID-19. Oncogenic potential is amplified in co-infected patients due to compounded immune dysregulation and chronic inflammation. | EBV-induced Cancers (Lymphomas), HPV-related Cancers (Cervical, Oropharyngeal), Increased Risk of Viral-induced Carcinogenesis |
| Cancer Behavior Post-COVID-19 | Changes in Tumor Characteristics | Emerging evidence suggests COVID-19 may influence tumor aggressiveness and behavior, with higher proliferation rates and increased incidence of aggressive cancer subtypes (e.g., triple-negative breast cancer). Chronic inflammation and immune dysregulation post-COVID could drive more aggressive cancer phenotypes. | Breast Cancer, Triple-Negative Breast Cancer, Synchronous/Metachronous Primary Cancers |
| Rare Cell-type Cancers | Small Cell Carcinoma, Angiosarcoma | Evidence suggests a rise in rare cancers like small cell carcinoma and angiosarcoma post-COVID. Immune dysregulation and chronic inflammation post-COVID may promote the development of rare cancer types; prolonged immune suppression and an inflammatory state may contribute. | Small Cell Carcinoma, Angiosarcoma, Rare Aggressive Cancers Post-COVID |

The intersection of COVID-19 and cancer risk is also influenced by the virus’s molecular and cellular effects. Specifically, COVID-19 infection reduces levels of angiotensin-converting enzyme 2 (ACE2) in the body, triggering an inflammatory cascade involving angiotensin II (AGTII) and subsequent immune dysregulation. ACE2 plays a key role in regulating blood pressure and inflammation, and its reduction during COVID-19 could lead to chronic inflammatory states that promote carcinogenesis. Furthermore, the viral spike protein, critical for the virus’s entry into cells, engages host cell proteases like TMPRSS2 and FURIN, which are also implicated in cancer progression. These proteases are involved in the cleavage of viral proteins necessary for infection, but they also play roles in normal cellular processes, including tissue repair and immune response modulation. Altered activity of these proteases in the context of COVID-19 may inadvertently promote oncogenic pathways.

One area of significant concern is the potential for SARS-CoV-2 to induce genetic and epigenetic changes that could lead to cancer. DNA methylation patterns, histone modifications, and other epigenetic mechanisms that regulate gene expression can be altered by viral infections, including COVID-19.
Such changes can silence tumor suppressor genes or activate oncogenes, thereby setting the stage for cancer development. For instance, the hypermethylation of tumor suppressor genes is a well-known mechanism through which viruses can contribute to cancer, and similar patterns have been observed in COVID-19 patients.

Moreover, long-term immune dysregulation caused by COVID-19 may exacerbate pre-existing conditions, including cancer. Studies have found that cancer patients who contract COVID-19 are more likely to experience cancer progression, possibly due to the virus’s effects on the immune system. The elevated levels of cytokines like interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) in severe COVID-19 cases have been linked to worse outcomes in cancer patients, underscoring the complex relationship between the virus, the immune system, and cancer.

Table: Features of post-COVID-19 cancer development.

| Cancer Development Feature | Details | Key Findings |
| --- | --- | --- |
| Increased cancer risk | COVID-19 survivors show an increased risk of cancer development, particularly due to immune dysregulation and chronic inflammation. | Increased IL-6 and TNF-α levels, oxidative stress markers, prolonged inflammation |
| Epigenetic alterations | Alterations in DNA methylation patterns, particularly hypermethylation of tumor suppressor genes, contributing to malignant transformation. | Hyperactive oncogenic pathways (e.g. JAK-STAT, MAPK, NF-κB) identified in post-COVID cases |
| Synchronous and metachronous cancers | Post-COVID-19, there is an increased occurrence of synchronous and metachronous primary cancers, possibly due to immune suppression and chronic inflammation. | Studies show higher incidence of multiple primary tumors and hematological malignancies such as lymphoma and leukemia |
| Rare cell type cancers | Increased incidence of rare cancers such as small cell carcinoma and angiosarcoma, potentially linked to COVID-19-induced immune dysregulation. | Significant rise in rare cell type cancers documented in post-COVID-19 cases |

IL-6: interleukin-6; TNF-α: tumor necrosis factor alpha; JAK-STAT: Janus kinase/signal transducers and activators of transcription; MAPK: mitogen-activated protein kinase; NF-κB: nuclear factor kappa-light-chain-enhancer of activated B cells.

Research also indicates that certain COVID-19 therapeutics, particularly corticosteroids and immunosuppressive drugs, may have unintended long-term consequences, including increased cancer risk. These drugs, while essential in managing severe COVID-19 symptoms, can impair immune surveillance, allowing for the unchecked proliferation of pre-cancerous cells. The balance between managing acute COVID-19 symptoms and mitigating long-term carcinogenic risks remains a critical challenge for healthcare providers.

Co-infections with other oncogenic viruses, such as Epstein-Barr virus (EBV) and human papillomavirus (HPV), further complicate the cancer landscape in COVID-19 patients. These co-infections can lead to more severe disease outcomes and may synergize with COVID-19 to enhance cancer risk. The immune modulation caused by multiple infections can overwhelm the body’s natural defenses against cancer, creating a more favorable environment for tumor growth.

Synchronous and metachronous second primary cancers are another concern in the post-COVID-19 era. Synchronous cancers, which are two or more primary cancers diagnosed simultaneously or within a short period, may become more common as the immune system is compromised by COVID-19. Studies have suggested that the persistent inflammation seen in “long COVID” survivors could create conditions favorable to the development of multiple primary tumors, especially in the context of immune suppression and chronic inflammation.

Interestingly, changes in cancer behavior post-COVID-19 have also been documented.
Some studies report that cancers diagnosed after COVID-19 appear more aggressive, with higher proliferation rates and worse prognoses than those diagnosed pre-pandemic. For example, breast cancer patients diagnosed after COVID-19 are more likely to have triple-negative breast cancer, a subtype that is more difficult to treat and has a poorer outlook. Additionally, the incidence of rare cell-type cancers, such as small cell carcinoma and angiosarcoma, has risen in the post-COVID-19 period. These rare cancers may be linked to COVID-19-induced immune dysregulation and chronic inflammation, though the mechanisms behind this increase are not yet fully understood. Ongoing research is crucial to fully understand the long-term cancer risks associated with COVID-19. Longitudinal studies tracking cancer incidence in COVID-19 survivors, as well as investigations into the molecular mechanisms by which SARS-CoV-2 influences gene expression, are needed to clarify the virus’s role in carcinogenesis. Furthermore, the safety of COVID-19 therapeutics must be continuously evaluated to ensure that their benefits outweigh any potential long-term risks, including cancer development. In conclusion, while the acute effects of COVID-19 on the respiratory system have been well-documented, its long-term impact on cancer risk remains a critical area of investigation. The complex interplay between immune dysregulation, chronic inflammation, and epigenetic changes in COVID-19 patients suggests that the pandemic may have far-reaching implications for cancer incidence in the coming years. Understanding these mechanisms is vital for developing strategies to mitigate the potential increase in cancer cases as a consequence of the COVID-19 pandemic. reference : https://www.cureus.com/articles/289494-covid-19-and-carcinogenesis-exploring-the-hidden-links#!/
Last year, the UK Government urged businesses to better protect themselves after it revealed that at least two thirds of large organisations in the country had suffered a cyber breach or cyber attack in the past year. According to the 2015 Information Security Breaches Survey produced by the Department for Business Innovation & Skills, the average cost of these breaches for a small company was anywhere between £75k – £311k, whilst a large business could lose up to £3.14m; enough to collapse even the most financially robust enterprise.

These figures present a sobering fact: cyber security is fundamental to business continuity and the future prosperity of any company in the UK rests on its digital foundations. The technologies that a company implements underpin these foundations and must be an integral part of a business’s core processes and systems. If a business is to survive in the age of proliferating online threats, then its technology and how it is implemented must be part of the business’ DNA, not just an added extra.

Cyber technologies as part of an organisation’s DNA

The availability of hacking resources continues to increase and cyber criminals are constantly improving their digital arsenals with the latest weaponry available on the dark web. This shadowy underbelly of the internet in which hackers sell their wares is only going to expand and with it, so will the levels of threat that businesses face. Companies need to evolve their defensive tactics in line with the threat landscape. Technologies that provide comprehensive security can be integrated into every aspect of an organisation’s operations, and should be future proofed not only to offer the best line of defence against the growing capabilities of attackers but to also allow companies to stay one-step ahead of the game.
To give one example, new cyber security software is being developed which embeds sensors in lines of an application’s code, allowing the application to check itself for vulnerabilities and protect itself against attacks. This is a highly effective form of defence for companies that have either introduced, or are intending to introduce complex enterprise applications, such as payment processing, human resource management systems or sales force automation. Owing to their complexity, vulnerabilities can often be missed. The security technology becomes a fundamental part of applications across a network and therefore an organisation. Other technologies that create such a dynamic security management posture involve the use of real-time analytics. This kind of data interpretation is becoming increasingly sought after in a wave of industries and cyber security is no different. Such technologies can provide a continuously updated risk profile based on a steady stream of data indicating current activity. Frontline defenders can see and stop a cyber attack at any point during the attack cycle, allowing them to address issues before a system can be compromised. Machine learning technologies use and expand upon similar capabilities. These programmes sit within systems, adapting their behaviour based on what they experience within that infrastructure. The potential this technology has for defending businesses is phenomenal. By studying an organisation’s network the programme can determine what characteristics of the environment are abnormal. Systems using artificial intelligence will gather information about the network and connected devices and subsequently seek out anything that is out of the ordinary. Organisations with a large number of devices on their networks will find machine learning particularly useful. 
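The baseline-and-deviation idea behind these machine learning defences can be sketched in a few lines. This is a generic illustration of how such systems work in principle (an assumption on our part, not any vendor's implementation): learn what "normal" traffic volume looks like for a device, then flag readings that sit far outside that profile.

```python
from statistics import mean, pstdev

class TrafficProfile:
    """Per-device traffic baseline that flags statistical outliers."""

    def __init__(self, baseline_samples, threshold=3.0):
        # Learn the "normal" profile from historical traffic volumes.
        self.mu = mean(baseline_samples)
        self.sigma = pstdev(baseline_samples) or 1.0  # avoid division by zero
        self.threshold = threshold  # flag readings beyond N standard deviations

    def is_anomalous(self, reading):
        """True if a new traffic reading deviates sharply from the baseline."""
        return abs(reading - self.mu) / self.sigma > self.threshold
```

Real systems model many more features than raw volume (ports, timing, peers), but the principle of reacting to deviations from a learned ecosystem profile is the same.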
The technology can monitor incoming and outgoing device traffic to create a profile that determines normal behaviour of the ecosystem and react to the slightest irregularities in a way that traditional security software is unable to do. These emerging technologies are allowing businesses to better defend themselves and mitigate the rapidly increasing number of cyber attacks they are forced to contend with, on what has now become a daily basis. At QinetiQ we are looking at these technologies and how they can be integrated into businesses at a core level, so that they span processes, systems and operations.

According to the 2016 National Cyber Security Strategy, in order to secure networks and data against an evolving technological landscape and threat, businesses must “identify critical systems and regularly assess their vulnerability”. Ultimately, as the strategy states, “they must invest in technology.” The upcoming CyberUK should act as the starting point for this, bringing all aspects of industry, tech developers, cyber experts and OEMs together under the umbrella of the new strategy, ensuring the UK creates a safe cyberspace for everyone.

The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.
Heatmaps are now becoming a popular visualization technology used outside the fence of data science. They can be used, for example, to produce an overlay on a webpage to let web site administrators easily understand which parts of the site are most interesting (application examples can be found at https://heatmap.me). Generally, heatmap colors are arranged in a scale with shades from blue to red to represent concepts such as “low”, “medium” or “high”; however, by following this interpretation it is not possible to convey dynamic behaviors, e.g. whether temperatures in a given region are rising or not.

Our recently published Chartie API works in this direction. It allows a signal (i.e. a numerical array) to be transformed into trend words such as “strong rise” or “moderate fall” representing signal dynamics. Each color in the scale accounts for a specific trend value in the span from the very strong fall to the very strong rise behavior. Hence, with particular reference to real-time data analytics applications, Chartie introduces a paradigm shift from a static value-oriented view towards a dynamic behavior-oriented view. The figure below compares the two stances.

With this perspective in mind, we can now re-interpret heatmap usage in a very dynamic way. Our showcase is a live example of the temperature trends in the Italian Trentino Region. Temperature signals are collected from the open data of about 100 weather stations scattered throughout the region. Raw data are retrieved through a Web call whose URL parameter is the station code. Only temperature data were considered. Data are provided on a daily basis and are returned in XML format with an entry every 15 minutes starting from midnight:

```xml
<temperature_aria UM="°C">
```

On the server side, a cron PHP job saves these entries in a MySQL table to update the temperature signal of each station with the upcoming values.
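Before moving to the client side, it is worth making the signal-to-trend-word idea concrete. Chartie's actual algorithm is not described here, so the sketch below is an assumption built for illustration: fit a least-squares slope to the signal and bucket it into trend words using made-up thresholds.

```python
# Illustrative mapping from a numerical signal to a trend word.
# The slope computation is standard least-squares; the word thresholds
# are hypothetical values chosen only to demonstrate the concept.

def slope(signal):
    """Least-squares slope of a signal sampled at unit intervals (needs >= 2 points)."""
    n = len(signal)
    x_mean = (n - 1) / 2
    y_mean = sum(signal) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(signal))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def trend_word(signal):
    """Bucket the signal's slope into a human-readable trend word."""
    s = slope(signal)
    if s > 1.0:
        return "strong rise"
    if s > 0.2:
        return "moderate rise"
    if s < -1.0:
        return "strong fall"
    if s < -0.2:
        return "moderate fall"
    return "balanced"
```

A steadily warming station would map to a reddish "rise" word and a cooling one to a blue "fall" word, which is exactly the semantic level at which the heatmap legend operates.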
On the client side, an example dashboard has been set up to manage both visualization and processing. At the visualization level, the open source gmaps-heatmap.js library that creates a heatmap layer for Google Maps has been used. At the processing level, when a user submits a query to retrieve the whole regional trend behavior of a particular period of time, one HTTP POST AJAX call to the Chartie API is performed for each station with the data array returned by the user query. Given the same time window, numerical arrays can be of different lengths, since there can be missing entries in some stations; furthermore, the sampling time can vary from one station to another. Nevertheless, thanks to Chartie, signal interpretation is not compromised, since the result (a trend behavior) is brought to a semantic level.

As soon as trend behaviors are returned, the heatmap starts filling in with colored spots according to the legend described previously. Reddish areas account for increasing temperature trends, blue areas indicate decreasing trends and green areas are characterized by a balanced behavior.

The overall architecture is depicted in the previous figure: each quadrant accounts for both a different stage and a different actor playing in the data flow. Browser scripting works as a broker between database and processing resources (the Chartie API). This brokering activity results in a semantically-enhanced output for the end-user, obtained through a reinterpretation of the geo-referenced heatmap widget in terms of dynamic behavior. At the same time, the data processing layer is decoupled from the data publishing layer, thus avoiding additional computational effort on the server side.

Concluding remarks and Applications

In sum, by coupling geo-referenced heatmaps with the Chartie API, it is possible to provide an easy-to-understand view of a complex dynamic behavior such as the temperature trends of a whole region.
A number of possible application contexts can be imagined; we cite just two of them as examples. In the scenario of smart cities: producing panel views to display the real-time dynamic behaviors of many public-utility variables, such as:
- waste production
- air quality
- traffic flows
- house prices
and so on. By substituting the geo-referenced map with a playing field, dynamic heatmaps can also be used in sport settings, for example to identify the increasing or decreasing trend of events on football pitches during live matches.

Marco Calabrese is a co-founder of Holsys Società Cooperativa, an Italian startup devoted to developing new-generation AI applications. After several months of research, Marco and colleagues recently published Chartie, an API that transforms a numerical array into trend words like "strong rise" to lessen the existing semantic gap between data and processing.
<urn:uuid:bdc2b8ca-309a-4453-907d-8df589e78ece>
CC-MAIN-2024-38
https://dataconomy.com/2015/04/14/using-geo-referenced-heatmaps-to-display-real-time-temperature-dynamic-behaviors/
2024-09-16T22:22:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00631.warc.gz
en
0.871555
980
2.734375
3
To understand how an NTP amplification attack works, you need to imagine yourself as a shy 8-year-old child at a family gathering. You’ve slipped away to sit by yourself in a corner reading a comic book so you don’t have to have your cheeks pinched or your hair tousled. Things are going just fine: you’re minding your own business and everyone else is minding theirs. And then your idiot cousin tells your Grandma that you want to hear all about her last hernia operation. You didn’t actually ask for this horrible barrage of information, but because someone requested it on your behalf, it’s coming your way anyway. Combine that with your website going down, a potential loss of revenue and consumer trust, damage to your software or hardware, and stolen consumer information, intellectual property or financial data, and that’s basically an NTP amplification attack. Time for trouble According to internet security firm Incapsula’s DDoS attack glossary, NTP amplification is a variety of DDoS attack in which an attacker uses NTP servers to overwhelm a targeted server with traffic. NTP stands for Network Time Protocol, which is the protocol used to synchronize the clocks of internet-connected machines. It’s obviously an essential protocol. Unfortunately, many older versions of NTP still support a type of monitoring that allows administrators to send a ‘get monlist’ command to an NTP server, which prompts a server to reply with a list of the last 600 hosts that connected to the server in question. In order to make an NTP amplification attack work, an attacker will spoof the targeted server’s IP and then send a ‘get monlist’ command to one or more NTP servers. Because the spoofed IP looks real to the NTP server, it immediately replies to the targeted server with that hefty list of 600 connections. That’s where the amplification part of NTP amplification comes in: the response is much bigger than the initial query. 
In a typical NTP amplification attack, the ratio weighs in anywhere from 20:1 to over 200:1.

The threat to website owners

In the past few years you've undoubtedly heard about DDoS attacks getting bigger and more devastating, and often involving multiple attack types. NTP amplification is valuable to attackers because it can be used to increase the volume of attacks, which is probably why Incapsula saw a shift towards NTP amplification in 2014. One of the highest-profile multi-vector DDoS attacks in 2014 was a five-vector attack against an online gambling site. This attack utilized NTP amplification and peaked at over 100 gigabits per second. If that sounds shocking, then prepare yourself. A DDoS attack against a DDoS protection service used NTP amplification among other attack vectors to reach a staggering 400 gigabits per second.

The problem with protocol

In terms of different types of DDoS attacks, NTP amplification falls into the protocol attack category. Protocol attacks exploit existing internet protocols to target servers or communication equipment like load balancers or firewalls. Protocol attacks can be hard to deal with because internet and network protocols exist for a reason: they help the internet run. They're not like bugs or vulnerabilities that can be patched or eliminated. In the case of NTP amplification, traffic from NTP servers can't simply be blocked because it's usually legitimate. The other problem with protecting against NTP amplification is the sheer volume of this type of attack, which easily overwhelms network infrastructure. Dealing with NTP amplification means a combination of traffic filtering and overprovisioning. The most effective mitigation for volume-based attacks like NTP amplification will intercept and filter attack traffic outside of the client network, so only legitimate traffic ever reaches the client. This requires a powerful scrubbing server, the likes of which only a professional DDoS protection service will have.
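The attack volumes quoted above follow directly from the amplification ratio. A back-of-the-envelope sketch (the 500 Mbps attacker-bandwidth figure and the 200:1 ratio below are illustrative, not taken from any specific incident):

```python
def attack_volume_gbps(attacker_mbps, amplification_ratio):
    """Estimate the traffic hitting the target, in Gbps, from the bandwidth
    the attacker spends on spoofed queries (in Mbps) and the
    response-to-request amplification ratio."""
    return attacker_mbps * amplification_ratio / 1000  # 1000 Mbps per Gbps

# At a 200:1 ratio, a mere 500 Mbps of spoofed 'get monlist' queries
# translates into roughly 100 Gbps of response traffic at the target.
print(attack_volume_gbps(500, 200))  # -> 100.0
```

This is why amplification is so attractive to attackers: the victim-facing volume scales with the ratio, not with the attacker's own bandwidth.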
Whether it’s an 8-year-old hearing about a hernia surgery or a website server being slammed with lists of 600 host connections, no one should be on the receiving end of major information they haven’t actually requested. Not much can be done to help that 8-year-old, but your website can be protected. Look into it before it’s too late.
Good Manufacturing Practices (GMP) are essential regulations that ensure product safety and quality across various industries. This comprehensive guide explains the five key components of GMP: Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training. Quality Assurance (QA) is about ensuring products meet specified requirements consistently, reducing risks, and maintaining high standards. QA activities include inspecting materials, verifying processes, monitoring production conditions, reviewing supplier relationships, and conducting audits. Good Manufacturing Practices encompass manufacturing processes that are consistent, safe, and reliable. They involve proper sanitation, quality control systems, and employee training to meet regulatory standards. Good Documentation Practices (GDPs) ensure accurate and complete records of manufacturing, testing, and distribution processes. GDPs involve document preparation, revision, storage, retrieval, and change management to meet regulations and standards. Quality Control Systems (QCS) monitor and test products to ensure they meet safety and quality standards. QCS includes testing during production, inspections, and audits to detect issues early in the manufacturing process. Training is crucial for educating employees on GMP regulations and best practices. It covers employee onboarding, technology and procedure training, and ongoing compliance education. A glossary provides definitions of key terms like GMP, QA, GDP, QCS, and Training, aiding understanding. The guide emphasizes the importance of following GMP to ensure product safety and quality. Useful resources, such as industry organizations and regulatory agencies like the FDA, provide guidance on GMP compliance. Finally, a FAQ section answers common questions and concerns, assisting businesses in achieving GMP compliance and ensuring the production of safe, high-quality products. 
GMP stands for Good Manufacturing Practices, and it is a rigorous set of regulations that must be followed by companies involved in the production and manufacturing of products. GMP is an essential part of any industry, from food production to pharmaceuticals, as it helps to ensure the safety and health of consumers. GMP also helps to guarantee the quality of the products produced, by ensuring they meet all safety and regulatory requirements. Following GMP regulations ensures that products are safe to use and gives consumers confidence that the product they are using is free from potential hazards. With this comprehensive guide, you will learn the five essential components of GMP so that you can understand and implement this important practice in your own business.

Overview of GMP

Good Manufacturing Practices (GMP) refers to the systems, processes, and rules in place to ensure that products meet safety, quality, and legal standards. The five essential components of GMP are Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training. Quality Assurance ensures that a product meets its desired outcome. Good Manufacturing Practices refer to manufacturing processes that are consistent, safe, and reliable. Good Documentation Practices involve the documentation of processes carried out in an organization to ensure accuracy and compliance with regulations. Quality Control Systems monitor and test products to ensure they meet safety and quality standards. Finally, Training ensures employees are adequately trained and knowledgeable on GMP requirements.

Quality Assurance (QA)

Quality Assurance (QA) is an essential component of Good Manufacturing Practices (GMP) and involves ensuring that a product meets or exceeds the specified requirements. QA often involves establishing and following processes, procedures, and standards that ensure products are produced consistently, safely, and reliably.
This helps to reduce risk by minimizing human error and environmental impact. Examples of Quality Assurance activities include inspecting incoming raw materials, approving and verifying process flows, monitoring product production and storage conditions, reviewing supplier relationships, validating quality testing and control systems, and conducting audits. All of these activities are designed to ensure that manufacturers produce a safe and consistent product that is of the highest quality. QA also ensures that all products meet GMP regulatory requirements and satisfy customer expectations. In addition, Quality Assurance helps to identify, analyze, and address any potential issues before they become a problem. By maintaining high standards of product consistency and quality, manufacturers can ensure that customers receive the best possible product and service.

Good Manufacturing Practices

Good Manufacturing Practices (GMP) are the set of regulations and standards developed by governing bodies to ensure the quality and safety of goods produced. They cover a range of aspects, from the cleanliness and layout of factories to the qualifications and training of employees. GMP is an essential component of any manufacturing environment as it helps to guarantee a consistent quality of production. In terms of GMP, Good Manufacturing Practices refer to the practices that need to be followed in order to meet all of the regulations and standards set out by the relevant governing body. This can include things like proper sanitation and hygiene protocols, the use of quality control systems, and appropriate employee training. All of these need to be implemented in order for a manufacturing operation to be compliant with GMP guidelines. Examples of GMP in action include regular maintenance and testing of equipment, the implementation of clear product labeling and packaging, and the establishment of traceability systems.
The goal is to ensure that each product produced meets the highest quality standards and that the customer receives a product that meets their expectations and is safe to consume. It is important that all manufacturers follow GMP guidelines in order to protect the safety and quality of their products. By using these procedures and protocols, manufacturers can help to guarantee a consistent level of quality across their entire product range.

Good Documentation Practices

Good documentation practices (GDPs) are an important component of GMP. They are a set of guidelines designed to ensure that all documents related to the manufacturing, testing, and distribution of products are accurate and complete. GDPs help ensure consistency across different departments and personnel, and also make sure regulations and standards are being met. GDPs involve the use of visual representations, such as charts and graphs, to illustrate process effectiveness. They also include procedures for generating sustainable records that can be used in audits and other assessments. Lastly, they provide guidance on document control, including when changes need to be made and how to ensure that changes are properly documented. Some examples of GDPs include:
- Establishing procedures for document preparation, revision, and approval
- Establishing a system of document storage and retrieval
- Establishing protocols for making changes to existing documents
- Maintaining records of personnel training
By following the principles of GDPs, companies are able to ensure that all related documents are accurate and up to date. This in turn helps manufacturers meet all applicable regulations, standards, and customer requirements.

Quality Control Systems

Quality Control Systems (QCS) are an essential component of Good Manufacturing Practices (GMP). QCS ensure that products meet standards for safety, quality, and legality.
Quality control systems involve testing processes, inspections, and other measures to ensure that products meet the specified requirements. QCS provide a method to monitor and assess the manufacturing process as it is happening. For example, a QCS may include measures like testing during production, inspecting samples, or examining product data through an audit. By conducting regular quality testing throughout the production process, manufacturers can detect problems early on and make necessary corrections. It is important to have a reliable and effective QCS in place to make sure that all products manufactured are safe, accurate, and up to date. A good QCS should be documented, maintained, and updated to reflect changes in the manufacturing process. In addition, any changes or updates to the QCS should be communicated to all relevant personnel. The main goal of QCS is to identify and minimize risk, improve product reliability, and ensure compliance with industry regulations. With a well-designed and implemented QCS, manufacturers can be confident that their products meet the highest standards of quality and safety.

Training

Training is a key aspect of GMP (Good Manufacturing Practices). It ensures that everyone involved in the manufacturing process has a good understanding of the standards and protocols of GMP. It also helps to protect the safety of products and consumers, as well as creating an efficient, productive and compliant work environment. Examples of training associated with GMP include employee onboarding and training, training on new technologies, equipment and procedures, and ongoing training to ensure continued compliance. Training should also include general health and safety protocols, such as food hygiene and safety standards and effective product handling procedures. It is important to provide regular assessments to ensure that all personnel have a thorough understanding of GMP and are using it correctly.
A training program should also be in place to address any issues or problems that may arise. By establishing an effective GMP training program, organizations are able to maintain safe and quality products, and operate in a compliant manner. GMP regulations are critical for any company that produces and sells products or services. GMP stands for Good Manufacturing Practices and is composed of five essential components – Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training. Collectively, these five components work to ensure that a company’s products and services meet the highest quality standards and protect the safety of their customers. Quality Assurance is the practice of using rigorous processes to check and evaluate product quality. It involves a set of activities and processes to inspect and verify the quality of materials, products, and services. Quality Assurance focuses on preventing any defects or problems in the production process, ensuring that the final product meets the customer’s expectations. Good Manufacturing Practices are processes and procedures to ensure the safety, quality, and efficacy of a product. This includes controlling storage and handling conditions of raw materials, utilizing appropriate manufacturing techniques, inspecting production activities, and regularly evaluating product performance. Good Documentation Practices involve creating detailed records and documents that document a company’s activities and procedures, as well as product information. This helps companies ensure compliance with regulations, track product performance, and improve internal processes. Quality Control Systems are designed to ensure quality control throughout the production process. Quality Control Systems involve checks on the raw materials, intermediate materials, and final product to ensure that all parts meet the company’s standards. Last but not least is Training. 
Training is essential for educating employees on the importance of GMP regulations and how to properly implement them. Employees should understand the purpose of the five components of GMP, their roles in the production process, and the importance of following the guidelines. In summary, GMP includes five essential components – Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training – which work together to create high-quality and safe products and services. Companies must ensure that they adhere to the principles of GMP to maintain customer trust and protect their reputation. When implementing GMP, it is important to be well informed and have the right resources at hand. To help you stay on top of the latest industry updates, you can visit the website of the International Society for Pharmaceutical Engineering (ISPE). Here you can find useful information on GMP guidelines and best practices. Additionally, regulatory agencies such as the Food and Drug Administration (FDA) provide access to detailed guidance documents on GMP that can greatly assist in creating a safe and effective quality control system. Finally, there are several reputable publications including books and journals which cover the topic in depth. With all these resources available, you can easily stay abreast of any changes or updates in the field of GMP. GMP, or Good Manufacturing Practices, is a set of guidelines and regulations that are used in the food and pharmaceutical industry to ensure safety and quality assurance in products. Commonly asked questions about GMP include: What are Good Manufacturing Practices? What do they consist of? Are there any requirements I need to be aware of? These and other frequently asked questions can be found in this Comprehensive Guide to 5 Essential Components of GMP. 
In this guide, we will cover Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training. We will also provide some useful resources and a glossary to explain technical terms. By the end of this guide, you will have a better understanding of the 5 essential components of GMP and how they relate to each other. Let’s dive into the details and explore what GMP is and why it is important! When discussing GMP, it can be easy to get lost in the jargon and technical terms. To help you navigate through unfamiliar terminology, we have provided some of the most common definitions and concepts used in this guide. - GMP: GMP stands for Good Manufacturing Practices. It is an internationally recognized system to ensure production processes, services and products meet a certain level of quality. - Quality Assurance: Quality Assurance is a process that involves monitoring and evaluating the performance of products and services. It is used to ensure that the quality specifications are being met. - Good Manufacturing Practices: Good Manufacturing Practices are industry-specific rules and regulations to help manufacturers produce safe, high-quality products. - Good Documentation Practices: Good Documentation Practices are a set of guidelines to ensure effective documentation and records management. It ensures that data is accurate, secure and regularly updated. - Quality Control Systems: Quality Control Systems are methods of assessing and controlling the quality standards of products and services. This includes inspections, testing and reviews. - Training: Training is the process of providing employees with the skills and knowledge needed to perform their job properly. It is an essential part of any GMP system. It is important to understand the five essential components of GMP in order to have a successful and compliant business. 
By following the knowledge covered in this guide, you will have the tools to create a system with the highest standards of quality assurance, manufacturing practices, documentation practices, control systems, and training. Now that you have the information to make sure your business complies with the GMP standards, it's time to take action! Start putting into practice what you have learned and ensure that your business creates the best quality products and services. Keep the useful resources that we have provided handy for future reference if needed. Have any questions or concerns? We have included a FAQ section to provide clarity on any doubts you may have. Congrats on taking the first step towards GMP compliance success!

1. What is GMP?
Good Manufacturing Practices (GMP) are a set of protocols and guidelines set by regulatory authorities governing the manufacturing process of products for human or animal consumption.

2. What are the 5 Essential Components of GMP?
The 5 essential components of GMP are Quality Assurance, Good Manufacturing Practices, Good Documentation Practices, Quality Control Systems, and Training.

3. What is Quality Assurance?
Quality Assurance (QA) is the process of consistently monitoring and evaluating a product in order to ensure that it meets the desired quality standards.

4. What are Good Manufacturing Practices?
Good Manufacturing Practices (GMP) are the industry regulations that require manufacturers to maintain proper production, packaging, and storage procedures to ensure the safety and quality of their products.

5. What are Good Documentation Practices?
Good Documentation Practices (GDP) are a set of guidelines for documenting manufacturing processes and activities in order to ensure product integrity and compliance with laws and regulations.

6. What are Quality Control Systems?
Quality Control Systems (QCS) are a set of procedures and methods used to measure and control the quality of products throughout the manufacturing process.
These systems are designed to ensure that any errors or non-conformance issues are detected and corrected quickly.

7. What is Training?
Training is the process of teaching team members the skills necessary to carry out their role in the company, and covers both the technical aspects of the job as well as the processes and procedures for following quality standards.
A staggering seven in ten U.S. employees made the switch to remote work during the early days of the novel Coronavirus outbreak. Since then, many employees and businesses have turned to remote working permanently. While remote working presents many advantages for companies and employees, it also has its fair share of challenges. The most common challenge for remote working is a lack of bandwidth. For many employees, their at-home internet connections aren't sufficient to handle their working needs. To guarantee that you or your employees can work from home successfully, it's important to be sure that you have an adequate bandwidth connection. Here's how to determine how much bandwidth you have – and need – for working remotely.

What Is Bandwidth?

Bandwidth is the maximum amount of data that can be transferred across a network or internet connection in a specific amount of time. It is most commonly measured in megabits per second (Mbps). Bandwidth is often confused with internet speed. To understand the difference, imagine your data transfer as a highway. Your bandwidth would reflect how many cars can physically pass down the highway in a given time, measured by the number of lanes; a two-lane highway will allow far fewer cars than a four- or six-lane highway. In contrast, your internet speed would reflect the maximum speed a car driving down the highway could reach. This example also shows the close relationship between bandwidth and internet speed. Though a car can drive 100 mph, it will be forced to slow down if it encounters a traffic jam. In the same way, fast internet speeds will slow if your bandwidth isn't large enough to funnel the amount of data you're transferring. In other words, it's important to have enough bandwidth to allow your internet speed to work at maximum efficiency. This will improve your remote work experience by allowing faster and more seamless data transfers.
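One practical wrinkle in those units: bandwidth is quoted in megabits, while file sizes are usually given in megabytes. A quick sketch of the conversion helps put plan speeds in perspective (the 25 Mbps connection and 100 MB file are just example figures):

```python
def transfer_seconds(file_megabytes, bandwidth_mbps):
    """Ideal (best-case) transfer time: 1 megabyte = 8 megabits, so convert
    the file size to megabits before dividing by the link's Mbps rating."""
    return file_megabytes * 8 / bandwidth_mbps

# A 100 MB presentation over a 25 Mbps connection:
print(transfer_seconds(100, 25))  # -> 32.0 seconds, best case
```

Real transfers will be slower, since this ignores protocol overhead and any other traffic sharing the connection.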
Your Internet Service Provider (ISP) can help you determine your minimum bandwidth speed. There are also countless websites, such as speedtest.net, that can run a fast and easy speed test to reveal what bandwidth your system is running on.

Do You Have Enough Bandwidth to Work Remotely?

As with many tech-related questions, the answer is: it depends. That's because the amount of bandwidth needed is based on several factors, such as the nature of your job and which apps and/or programs you need to work successfully. Basic functions such as email, text chat, and document creation require very little bandwidth to complete. If your job consists of these types of no-frills tasks, your home bandwidth connection may be adequate to meet your needs. By contrast, if your job requires more complex processes such as video calling, you'll need higher bandwidth to achieve them. Another thing to consider is how many users need access to your bandwidth. Thanks to Covid-19, many families have both parents working from home while their children are attending school online. If that's the case, you'll need to upgrade your bandwidth accordingly. If yours is the only device using your connection during working hours, you may get away with a smaller plan.

Resources for Bandwidth Requirements

Browse the links below to view the minimum requirements for the most popular apps used by remote employees:

For dedicated IT support and helpful tips on working remotely, contact us today! From cybersecurity solutions to security and compliance training for remote employees, we can help make your business's transition to working from home as comfortable and productive as possible. Phillip Long – CISSP, CEO of , along with his team of marketing and information technology experts, will walk you through an overview of what your business should be doing to protect your data and plan your digital marketing strategies.
is the technology leader on the Gulf Coast and is comprised of four divisions: Information Technology, Web Design & Digital Marketing, Office Equipment and Business Consulting. Together these divisions help local businesses exceed expectations and allow them to grow to their full potential while minimizing risks. To learn more about , visit bistechnologygroup.com. You may reach out to us at:
All That You Need To Know About ISO 27001:2013 and ISO 27001:2017

ISO 27001:2013 is one of the most popular information security standards worldwide, recognised as the best-practice framework for an Information Security Management System [ISMS].

About the ISO and IEC

ISO is the International Organisation for Standardisation and IEC stands for the International Electrotechnical Commission. The IEC is a non-profit organisation that works independently of any government. Together the ISO and the IEC form a Joint Technical Committee [JTC], where standards in IT and Information and Communications Technology [ICT] are developed and maintained. Since 'International Organization for Standardization' would have different acronyms in different languages (for example, "IOS" in English and "OIN" in French, for Organisation internationale de normalisation), the founders of ISO decided to give it the short form "ISO". ISO is derived from the Greek 'isos', meaning "equal".

The History of ISO 27001

The history of ISO 27001 dates back to the British Standard 7799, which was published in 1995 and was originally written by the Department of Trade and Industry [DTI]. Later, ISO turned it into an internationally recognised, best-practice standard in the ISO 27000 series in order to help organizations secure their information assets. The most current version of the standard is ISO/IEC 27001:2013, which incorporates changes made in 2017.

Benefits of ISO 27001:2013

ISO 27001 helps reduce information security and data protection risks to an organisation. Several ISO 27001 requirements overlap with those of Data Protection Act compliance and the GDPR, besides giving much greater information assurance overall. Implementing ISO 27001 confirms to regulatory authorities that your organisation takes the security of information seriously. It helps identify risks so that, as far as is reasonably possible, they can be addressed.
There has been much scaremongering around the potential fines for GDPR non-compliance, and an Information Security Management System [ISMS] helps in reducing the likelihood of security breaches. It enables organisations to react to breaches more quickly, and to validate all the controls in place, so as to reduce the potential impacts of these security risks. ISO 27001 is an internationally recognised 'best-practice' standard. It makes the people you want to work with feel safe and secure. It assures them that you will look after their valuable assets and information security. This assurance enables you to win new customers and retain their existing business. Why spend more money on information loss for your customers, when it costs a fraction of that to be better prepared for the future anyway? Additionally, customers increasingly seek assurance of data protection capabilities and information security management. Instead of unnecessarily adding to the 'cost of sale' for your organisation, it is better to hold an ISO 27001 certification and minimise the detail you need to provide to give your customers that assurance. Overall, you save time and money. An organisation's reputation is at stake if its systems are hacked and customer data is exposed and exploited. With an ISMS that conforms to the ISO 27001 standard, an organisation can identify breach risks and prevent them before they happen. ISO 27001 helps boost your company's reputation and builds trust in the market.

Achieving ISO 27001:2013/17

To achieve ISO 27001 certification, an organisation must meet all the core ISO 27001 requirements. One of the fundamental core requirements is to identify, assess, evaluate and treat information security risks. A risk management process will help determine which of the ISO 27001 Annex A controls should be applied in the management of those security-oriented risks.
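As a sketch of that core requirement, risk assessment is commonly done with a simple likelihood x impact scoring; note that the 1-5 scales, the threshold value, and the example risks below are common conventions chosen for illustration, not values mandated by the standard:

```python
def assess_risks(risks, treat_threshold=9):
    """Score each (name, likelihood 1-5, impact 1-5) risk and flag those at
    or above the organisation's risk acceptance threshold for treatment."""
    register = []
    for name, likelihood, impact in risks:
        score = likelihood * impact
        action = "treat" if score >= treat_threshold else "accept"
        register.append((name, score, action))
    return register

register = assess_risks([
    ("Laptop theft",      3, 4),  # 12 -> treat
    ("Server room flood", 1, 5),  #  5 -> accept
])
```

The risks flagged "treat" are the ones for which Annex A controls (or other treatment options) would then be selected and documented.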
Some companies may choose not to take their Information Security Management System to certification but simply align with the ISO 27001 standard. This may help meet internal pressures, but it delivers less value to external stakeholders, who increasingly look for assurance certified by accreditation bodies such as the United Kingdom Accreditation Service [UKAS]. Neumetric offers ISO 27001 compliance and certification services for organisations seeking ISO 27001:2013 certification.

What is the difference between ISO 27001:2013 and ISO 27001:2017?

Practically, there is very little difference between the 2013 and 2017 versions of the ISO 27001 standard, beyond a few minor points and a small name change. The 2013 version was not affected by the 2017 publication, and the changes did not introduce any new requirements. The latest published version of the Information Security Management System standard is BS EN ISO/IEC 27001:2017. The changes made in 2017 were introduced to indicate approval by CEN/CENELEC for the EN designation (European Standard). For those seeking a UKAS-accredited ISO 27001 certification, no modifications affect certification status, and no additional transition activities are introduced by this revision.

What are the ISO 27001 requirements?

ISO 27001 is a set of requirements that can be used to manage information security. It covers everything from risk management to security policy, and it is designed to help companies protect their customers' data and intellectual property.
The standard includes requirements for 7 key areas:

- Information security management system (ISMS)
- Risk assessment
- Controls selection, implementation, and maintenance
- Documentation and training
- Information security incident management
- Measurement, analysis and evaluation (MAE) of the ISMS's performance in meeting its objectives
- Continual improvement of the ISMS

What are the three principles of ISO 27001?

The three main principles of ISO 27001 are:

- Identifying the information assets that are important to your organization
- Understanding the information security risks your organization faces
- Preparing an effective plan to manage these risks

Who needs ISO 27001 certification?

- Any company with a large number of employees who work remotely or on mobile devices
- Any company that stores sensitive customer data
- Any company with a high level of public trust, such as government agencies or schools

What are the ISO 27001 controls?

The ISO 27001 controls are a set of requirements that help organizations ensure the security of their data. They are designed to be flexible enough to fit the needs of any organization, from small businesses to large multinational corporations. The controls are divided into management, operational, and technical controls. Management controls include risk assessment and risk management; operational controls include asset management, contingency planning, and security awareness training; and technical controls include access control systems and encryption software. When an organization implements these controls, it gains a set of guidelines for handling data security issues that is scalable and customizable for its particular situation. This means that even if your business changes or grows over time, you can continue using ISO 27001 as your framework for data protection policies without having to start from scratch.
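As a rough illustration of the three control categories described above, they can be modeled as a simple lookup. The control names are taken from the text; the grouping is a simplification, and real Annex A numbering is far more granular.

```python
# Hypothetical grouping of the example controls into the three categories
# described above; not the actual Annex A catalogue.
CONTROLS = {
    "management":  ["risk assessment", "risk management"],
    "operational": ["asset management", "contingency planning",
                    "security awareness training"],
    "technical":   ["access control systems", "encryption software"],
}

def category_of(control: str) -> str:
    """Return the category a named control belongs to, or 'unknown'."""
    for category, controls in CONTROLS.items():
        if control in controls:
            return category
    return "unknown"

assert category_of("asset management") == "operational"
assert category_of("encryption software") == "technical"
```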
Deep packet inspection

What is deep packet inspection?

Deep packet inspection (DPI) is the process of examining data packets traveling through a network. It is used in many security and network management applications, such as those that detect and block access to unofficial websites, spam, or malicious apps. Unlike conventional packet filtering, DPI gives complete visibility into the data entering and leaving an organization: conventional packet filtering reads only the header of data packets (the details of the destination), whereas DPI also reads their content.

How does deep packet inspection work?

All data is broken down into smaller packets before being sent over the internet. As these packets pass through firewalls, they are analyzed for security loopholes; this can include examining encrypted packets traversing the network. Entry of the packets is governed by rules set up by IT admins that specify which packets are allowed or blocked. DPI can be enabled in firewalls, in intrusion detection systems, or on gateway servers to guide data packet entry.

Benefits of deep packet inspection

Deep packet inspection tools are preferred by network admins because they:

Offer better cloud visibility: Deep content inspection, a part of DPI, reads the metadata of content to analyze internet access, upload, and download requests, making DPI one of the key strategies in intrusion detection systems.
Resolve network latency: In addition to enhancing security, the network traffic information collected helps you optimize bandwidth usage. DPI can prioritize packets carrying crucial data so that time-bound requests are served first.

Block peer-to-peer downloads: Peer-to-peer (P2P) downloads may include harmful payloads along with the resources a user wants. For example, free software downloads from P2P torrent sites often include ransomware payloads. DPI-enabled applications can block these risky P2P downloads and ensure the network is not compromised by them.

Censor harmful content: Web apps that contain unsafe content can be blocked by sysadmins to prevent unlawful internet access. This is particularly useful in educational institutions for restricting access to unauthorized resources. Another important use case is identifying and blocking malware packets entering the network.

Facilitate data loss prevention: Since DPI provides the ability to control which websites can be accessed, admins can not only regulate packets entering the network but also stop sensitive content from leaving it.

Admins can leverage DPI-enabled tools for better network utilization and security. However, there are some points to keep in mind when deploying such tools. DPI stops, reads, and then processes network data packets; this extra resource utilization can lead to longer loading times for users. Furthermore, because monitoring extends to packet content, users' search history is effectively monitored, which may violate certain privacy policies. When setting up DPI in your infrastructure, address these points and periodically evaluate your DPI policies and tools to keep them relevant.
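The header-versus-payload distinction can be illustrated with a toy sketch. The packet format, rule sets, and signature below are all hypothetical; real DPI engines parse actual IP/TCP headers and use far richer signature databases.

```python
import struct

# Toy packet layout: a simplified "header" (4-byte source ID, 4-byte
# destination ID, network byte order) followed by an application payload.
HEADER_FMT = "!II"
HEADER_LEN = struct.calcsize(HEADER_FMT)

BLOCKED_DESTS = {9999}                 # conventional rule: block by destination
MALWARE_SIGNATURES = [b"EVILPAYLOAD"]  # DPI rule: block by payload content

def conventional_filter(packet: bytes) -> bool:
    """Reads only the header, like traditional packet filtering."""
    _src, dst = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    return dst not in BLOCKED_DESTS

def deep_packet_inspection(packet: bytes) -> bool:
    """Reads the payload as well as the header, like DPI."""
    if not conventional_filter(packet):
        return False
    payload = packet[HEADER_LEN:]
    return not any(sig in payload for sig in MALWARE_SIGNATURES)

clean = struct.pack(HEADER_FMT, 1, 80) + b"GET /index.html"
infected = struct.pack(HEADER_FMT, 1, 80) + b"...EVILPAYLOAD..."

assert conventional_filter(infected)         # the header looks fine, so it passes
assert not deep_packet_inspection(infected)  # DPI catches the payload signature
assert deep_packet_inspection(clean)
```

The extra work in `deep_packet_inspection` is also why DPI costs more CPU time per packet than header-only filtering, as noted above.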
Connectivity is the lifeblood of our society. With the increasing importance of the internet in our daily lives, businesses, and education, the need for robust and reliable connectivity has never been greater. While much attention is given to expanding last-mile infrastructure, it's equally vital to focus on internet exchange points (IXPs) as a critical component of our digital ecosystem. Expanding IXPs to small cities and towns across America represents a more significant shift than most people realize.

IXPs are physical facilities where networks of all types come together to exchange traffic in a neutral and efficient manner. These facilities play a pivotal role in ensuring the smooth flow of internet traffic. In today's digital landscape, IXPs are the backbone of the internet, enabling seamless data transfer and interconnection between networks, content providers, and cloud services. As of now, approximately 126 IXPs exist in North America. These facilities are primarily located in large metropolitan areas, such as Denver, Kansas City, Atlanta, and New York; the downside is that those IXPs are concentrated in only 57 major cities. This concentration in major urban centers presents several challenges and shortcomings, especially for the 17 states that have no IXP at all. This is an area I've been exploring because of the work I do in my official capacity, which has opened my eyes to the underserved markets that digital infrastructure has yet to touch. When it comes to the current IXP distribution, challenges abound. I've outlined a few of them for you here.

- Rural connectivity — Small cities and towns across America often lack access to nearby IXPs, forcing them to backhaul their internet traffic to distant cities, resulting in higher latency and connectivity costs.
- Lack of preparedness — The U.S.
needs to prepare for the future of the internet and the rise of emerging technologies, like augmented reality, virtual reality, and low-latency applications. Without local IXPs, many regions will remain unprepared for these innovations.
- Digital divide — The lack of IXPs in rural areas exacerbates the digital divide, leaving underdeveloped regions with limited access to high-speed, low-latency internet. This makes it more challenging to compete globally and access critical services, like telehealth and virtual educational resources.

Building new IXPs is critical

By strategically establishing carrier-neutral IXPs in key locations throughout the U.S., we can extend access to millions more Americans. This approach brings a multitude of benefits, the foremost being a substantial reduction in latency. As the demand for low-latency applications continues to surge, local IXPs play a pivotal role in ensuring users enjoy a seamless real-time experience. Moreover, the expansion of IXPs facilitates the consolidation of internet service demands in rural areas, ultimately leading to cost savings, improved connectivity, and enhanced service quality. This, in turn, paves the way for significant economic growth, as IXPs attract businesses and stimulate investment and innovation in technology and infrastructure. Additionally, these points of interconnection serve as catalysts for technological advancement, fostering the deployment of emerging technologies, like edge computing, thereby enriching and evolving the digital landscape. Beyond the obvious benefits of cost reduction and more efficient traffic exchange, it's crucial to recognize that IXPs are the cornerstone of future-ready communities. As we witness the rapid proliferation of IoT devices, the need for efficient traffic routing and seamless local connections with cloud services is becoming more pressing than ever.
For areas without access to carrier-neutral facilities and IXPs, the prospect of falling behind in the digital realm is a stark reality. This widening digital gap will inevitably have adverse consequences for economic progress, educational opportunities, health care access, and virtually every facet of our modern lives. In short, IXPs make the internet faster and more affordable. We have to consider the areas being left in the dust as IXPs go up only in major metropolitan markets. To address these issues, it's crucial to promote the establishment of IXPs in regions that currently lack them. Collaboration between the government, private sector, and regional public universities can be instrumental in achieving this goal. By expanding IXPs to small cities and towns, we can bridge the digital divide, stimulate economic development, and ensure that America is well prepared for the future of the internet. Connected Nation is working to fix this problem, but as with all digital infrastructure, no one can go it alone. Congress has ushered in a transformative era by allocating an unprecedented amount of federal funding to support the expansion of broadband infrastructure. This substantial financial commitment reflects a growing recognition of the vital role that robust internet access plays in today's society. With considerable resources potentially at hand, there's a remarkable opportunity not only to bolster existing networks but also to bridge the digital divide in underserved and remote areas, ensuring that every American has access to the benefits of the digital age. This historic investment in broadband infrastructure sets the stage for a brighter and more connected future, making it imperative to seize this moment and strategically channel these funds to create a more inclusive and technologically advanced nation. The expansion of IXPs to small cities and towns across America is not just an option; it's a necessity for the nation's digital future.
By creating a robust network of IXPs, we can reduce latency, improve connectivity, and empower rural communities to participate fully in the digital age. It is time to recognize the critical role that IXPs play in our digital infrastructure and make the investments necessary to bring them to all corners of the U.S.

Melissa Reali-Elliott has spent over 15 years marketing digital technologies and is a self-professed data center nerd. She holds degrees in marketing, economics, and psychology from the University of Central Florida. Throughout her career, she has supported organizations specializing in gaming software, the IoT, RFID, supply chain, and power distribution to utility markets, as well as critical infrastructure industries. Her background in marketing and communications has accelerated her work in developing strategic messaging and media relations, lending her voice as an advocate for diversity and sustainability initiatives.
The full power of next-generation quantum computing could soon be harnessed by millions of individuals and companies, thanks to a breakthrough by scientists at Oxford University Physics guaranteeing security and privacy. This advance promises to unlock the transformative potential of cloud-based quantum computing and is detailed in a new study published in the influential U.S. scientific journal Physical Review Letters. Quantum computing is developing rapidly, paving the way for new applications that could transform services in many areas, such as healthcare and financial services. It works in a fundamentally different way to conventional computing and is potentially far more powerful. However, it currently requires controlled conditions to remain stable, and there are concerns around data authenticity and the effectiveness of current security and encryption systems. Several leading providers of cloud-based services, like Google, Amazon, and IBM, already separately offer some elements of quantum computing. Safeguarding the privacy and security of customer data is a vital precursor to scaling up and expanding its use, and to the development of new applications as the technology advances. The new study by researchers at Oxford University Physics addresses these challenges. "We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity," said Professor David Lucas, who co-heads the Oxford University Physics research team and is lead scientist at the UK Quantum Computing and Simulation Hub, led from Oxford University Physics. In the new study, the researchers use an approach dubbed "blind quantum computing," which connects two totally separate quantum computing entities – potentially an individual at home or in an office accessing a cloud server – in a completely secure way.
Importantly, their new methods could be scaled up to large quantum computations. "Using blind quantum computing, clients can access remote quantum computers to process confidential data with secret algorithms and even verify the results are correct, without revealing any useful information. Realizing this concept is a big step forward in both quantum computing and keeping our information safe online," said study lead Dr Peter Drmota, of Oxford University Physics. The researchers created a system comprising a fibre network link between a quantum computing server and a simple photon-detecting device at an independent computer remotely accessing its cloud services. This allows so-called blind quantum computing over a network. Every computation incurs a correction that must be applied to all subsequent ones, and applying it requires real-time information to keep in step with the algorithm. The researchers used a unique combination of quantum memory and photons to achieve this. "Never in history have the issues surrounding privacy of data and code been more urgently debated than in the present era of cloud computing and artificial intelligence," said Professor David Lucas. "As quantum computers become more capable, people will seek to use them with complete security and privacy over networks, and our new results mark a step change in capability in this respect." The results could ultimately lead to the commercial development of devices that plug into laptops to safeguard data when people use quantum cloud computing services. Researchers exploring quantum computing and technologies at Oxford University Physics have access to the state-of-the-art Beecroft laboratory facility, specially constructed to create stable and secure conditions, including the elimination of vibration.
Funding for the research came from the UK Quantum Computing and Simulation (QCS) Hub, with scientists from the UK National Quantum Computing Centre, the Paris-Sorbonne University, the University of Edinburgh, and the University of Maryland, collaborating on the work.
In today's digitally reliant world, cyber security has never been more important. A cyber-attack can have disastrous consequences – from outages and downtime to data loss and reputational damage. With 84% of businesses experiencing a malicious attack in the past 12 months, it is no longer a question of 'if' but 'when' an attack will happen. Is your business prepared?

Businesses must plan to protect AND recover

Many businesses take a belt-and-braces approach to mitigating risk, protecting their network with the relevant software, processes, and procedures – yet they fail to prepare fully to recover in the event of a breach, believing that an attack simply will not penetrate their defences. Recovering from a business outage or attack involves much more than being able to identify and invoke defence mechanisms. Given that 92% of cyber-attacks result in data corruption or loss, too many businesses fail to consider their response and recovery strategies, which are critical components in minimising the effect of an attack.

The need for a business-wide cyber security culture

Cyber security is an investment – not only in software and applications within your IT infrastructure, but also in internal processes and culture. While software such as malware detection, backup, and data replication will protect the business technically, it is also critical to implement processes and procedures that account for human error and to ensure your business has a security-centric culture in place: from training colleagues to recognise phishing attacks through to challenging visitors who enter secure offices without credentials. The ability to protect and recover from an attack or outage must be a business-wide consideration.

Cyber security framework: How prepared are you? 5 steps to protect and recover

Businesses need to be prepared and have a continuity strategy outlined in the event of an attack or outage, with equal emphasis placed on recovery as on detection.
The 5-step cyber security framework, developed by NIST(2), outlines the stages that help you prepare for and mitigate attacks when they occur, ensuring you can get your business back up and running as quickly as possible with minimal downtime and disruption.

Identify the risks

Gain an understanding across the business of how cyber security and risk apply to systems, people, assets, data, and capabilities, to identify what processes, policies, software, etc. need to be put in place. For example: identifying all physical and software assets within the business and their level of protection.

Protect against them

Develop and implement appropriate safeguards to ensure critical infrastructure and data can be protected in the event of an attack. For example: implement a cyber security policy that includes risk management and governance, with secure processes and procedures such as access controls, two-factor authentication sign-in, and background checks. Ensure policies are trained out to all.

Detect when they happen

Implement software and processes that identify attacks. For example: install DDoS protection and firewalls to secure your network, and undertake employee awareness training on attacks and the process to invoke when one occurs.

Respond to breaches

Create a plan for disasters and information security incidents. For example: failover systems in place, appropriate Recovery Time Objectives (RTOs) that ensure you can restore systems and applications so your business can get back up and running, and a clear communication plan to the business and any impacted customers.

Recover your data

Implement recovery plans to ensure business continuity. For example: Recovery Point Objectives (RPOs) that ensure your data is backed up or replicated and can be recovered appropriately, and a clear communication plan to the business covering process and accountabilities.
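The Respond and Recover objectives lend themselves to a simple continuity check. The RTO/RPO values and timestamps below are hypothetical, purely to illustrate what each objective constrains.

```python
from datetime import datetime, timedelta

# Hypothetical objectives: restore within 4 hours, lose at most 15 minutes of data.
RTO = timedelta(hours=4)
RPO = timedelta(minutes=15)

def meets_rpo(last_backup: datetime, incident: datetime) -> bool:
    """The data-loss window is the time since the last good backup/replica."""
    return incident - last_backup <= RPO

def meets_rto(incident: datetime, restored: datetime) -> bool:
    """The downtime window is the time until systems are back up."""
    return restored - incident <= RTO

incident = datetime(2024, 1, 1, 12, 0)
assert meets_rpo(datetime(2024, 1, 1, 11, 50), incident)     # 10 min of data at risk
assert not meets_rpo(datetime(2024, 1, 1, 11, 0), incident)  # 60 min exceeds the RPO
assert meets_rto(incident, datetime(2024, 1, 1, 15, 30))     # restored in 3.5 hours
```

In practice these targets drive backup frequency (RPO) and failover design (RTO) rather than being checked after the fact.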
How M247 can help

Disaster Recovery as a Service: Powered by market leader Zerto, our Disaster Recovery as a Service (DRaaS) solution enables you to roll back to a version from before an attack encrypted your files. With industry-leading RPO and RTO times (rolling back to just minutes prior to an attack), you'll be able to restore your business immediately, avoiding ransom demands and ensuring your customer experience and business performance are not affected.

Business-critical security solutions: Whether you are looking for firewalls, managed DDoS protection, or content filtering, our range of security options can be tailored to offer your business powerful protection. They are designed to work together in almost any combination, as single solutions or as a complete package, and all are underpinned by the ISO 27001 security standard. For more information on either of these services, please get in touch.

(1) Business Resilience Readiness Thought Leadership Survey, IDC, May 2019
(2) https://www.nist.gov/
The popularity of social media, online shopping, online gaming, and smart devices, along with the massive growth of the Internet of Things (IoT), is creating tremendous amounts of data—much of which is unstructured. Unstructured data presents many challenges: it is hard to manage, datasets can be extremely large, and it has no pre-defined schema. Traditional tools such as relational database management systems (RDBMS) were not architected to store and retrieve unstructured data. Still, enterprises and service providers who manage to tame and mine unstructured data will have the ability to drive true business transformation based on the new insights it provides. Distributed systems solve many of the challenges related to storing and retrieving unstructured data, including data consistency, maintaining performance, and availability. Distributed systems can be used to extract compelling business benefits, giving organizations the ability to collect, analyze, and gain insights from previously unconnected and unanalyzed data.

Why So Much Data?

Data is doubling in size every 2 years, and the total amount of data will reach 44ZB by 2020, with 80% of that unstructured, according to IDC Research. Sources, types, and volumes of data have changed in ways that we could not have imagined just a few years ago. Twitter, Facebook, online shopping, online gaming, and other everyday personal and business activities are creating a tremendous amount of data—much of it unstructured. We are also now seeing more and more technology based on sensors communicating over the internet to applications, through what is commonly referred to as the Internet of Things (IoT). When many people think of the IoT, they think of personal fitness monitors such as Fitbits, connected refrigerators, and security systems.
However, the variety and proliferation of connected devices include everything from home gas meters to airplane engines, from weather stations to pharmacy shelves, with millions of sensors placed all over the globe and applications creating tens of millions of data points daily. The innovations in IoT, and the related datasets, are expected to grow exponentially over the next few years. Unstructured datasets can be extremely large and may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. This includes geospatial tracking, web clickstreams, social media content, chat logs, and search data, along with any other data that doesn't easily fit into a spreadsheet or relational database schema. The wider the range of unstructured data, the more likely it is that the data may lead to insightful analyses or correlations. More accurate analyses, of course, can lead to more confident decisions, and better decisions can mean greater operational efficiencies, increased productivity, cost reductions, and reduced risk.

Seeing the Light—Reaping the Rewards of Big Data

How do companies benefit from working with unstructured data? Correlation of seemingly disparate data points, driving adjustments to business strategy and execution, can generate sizable gains in productivity and revenues. Take the example of a large retail grocer that, drawing from weather data, was able to determine that certain atmospheric conditions drive people to buy certain things over others. For example, on still days with highs of less than 80 degrees, people respond well to berry ads and specials, buying three times as many as usual. On warm, dry days with high winds, people favor steak, but if the wind dies and the temperature rises, they go for burgers. Aligning beef ads with the shift in weather has increased sales by 18%.
Without the data stored and made readily available through a NoSQL database, these realizations, and the resulting revenue gains, would never have come to light.

Scalability, Performance, and Global Availability

Applications and databases typically start at small scale. However, those working with vast amounts of unstructured data need the ability to scale up, down, out, and in. To scale vertically (or scale up) means to add resources to a single node in a system, typically bigger CPUs or more memory in a single computer. To scale horizontally (or scale out) means to add more nodes to a cluster, such as adding a new computer (typically commodity hardware) to a distributed system. Not all distributed systems, such as NoSQL databases, are alike. Effective NoSQL databases must be able to scale both out and in, as well as up and down, predictably and reliably. One of the important differences and key advantages of NoSQL systems is the concept of relaxed consistency. This relaxed consistency, known as eventual consistency, means that the system will always respond to a request but may not respond with the most up-to-date object. The CAP theorem defines a natural tension, and trade-offs, between three core operational capabilities in distributed systems and database infrastructure: consistency, availability, and partition tolerance. At first reading, the CAP theorem seems to imply that distributed systems cannot achieve perfect consistency, availability, and partition tolerance. However, by relaxing the requirement for perfect consistency to allow for eventual consistency, distributed system parameters can be tuned to meet particular application requirements. Distributed systems extend beyond the write-once, read-many model found in RDBMS methodologies. In the world of relational databases, strong consistency has reigned as a requirement.
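Eventual consistency can be sketched with a toy replicated key-value store (all names hypothetical): a write is acknowledged after reaching one replica, and the other replicas catch up during a later anti-entropy pass, so a read in between may return a stale value.

```python
class EventuallyConsistentKV:
    """Toy store: writes land on one replica and propagate to the rest later."""

    def __init__(self, n: int = 3):
        self.replicas = [dict() for _ in range(n)]
        self.pending = []  # (key, value) writes awaiting propagation

    def write(self, key, value):
        # Acknowledged as soon as one replica has it; availability over consistency.
        self.replicas[0][key] = value
        self.pending.append((key, value))

    def read(self, key, replica: int = 0):
        return self.replicas[replica].get(key)

    def anti_entropy(self):
        """Background sync: apply all pending writes to every replica."""
        for key, value in self.pending:
            for r in self.replicas:
                r[key] = value
        self.pending.clear()

kv = EventuallyConsistentKV()
kv.write("user:42", "alice")
assert kv.read("user:42", replica=0) == "alice"  # fresh at the written replica
assert kv.read("user:42", replica=1) is None     # stale elsewhere, for now
kv.anti_entropy()
assert kv.read("user:42", replica=1) == "alice"  # eventually consistent
```

Real systems replace the explicit `anti_entropy` call with continuous background replication, but the visible behavior is the same trade-off the CAP theorem describes.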
In the new world of distributed systems, users must look clearly at the requirement for strong consistency versus eventual consistency. As data accumulates (builds mass), there is a greater likelihood that additional services and applications will be attracted to the data. Services and applications can have their own gravity, but data is the most massive and dense; therefore, it has the most gravity. Data, if large enough, can be virtually impossible to move. "Data gravity" is a term coined by Dave McCrory, CTO of Basho, to describe how the greater mass of data draws services and applications to it. The closer services and applications are to the data they access, the higher the throughput and the lower the latency. In turn, those applications and services become more reliant on high throughput and low latency, which increases the need for data to be located close to them. This is important because the customer experience matters: when access is slow, application usage drops along with productivity. When end users are closer to data operations, they receive better response times and a better end-user experience. The data locality capabilities of distributed systems enable data operations close to end users, and distributed systems must be designed to ensure low latency and high throughput for a great user experience across the globe.

Managing Unstructured Data Is Critical for Modern Business

Massive increases in data, greatly fueled by the IoT, are pouring into our networks and systems, altering the nature of how we collect and analyze data. Distributed systems solve the challenges posed by enormous amounts of unstructured data. By providing scalability, global availability, fault tolerance, performance, and operational simplicity, distributed systems enable the business benefits that come from storing and retrieving unstructured data.
The IoT and the flood of unstructured data are forcing companies, whole industries, and markets to evolve. This evolution can either be an opportunity for growth or render obsolete the companies that can't keep up.
Data has grown in volume and variety, computational processing has become more powerful, and cloud data storage more accessible and affordable. These developments have made it possible to produce models that can analyze bigger, more complex data and deliver faster, more accurate results. This may explain why artificial intelligence (AI) has gained popularity in recent years. Many consumers will have their first interaction with AI in the built environment through voice commands given to a smart home personal assistant device such as those by Amazon, Google or Apple. What can we expect of AI in the built environment going forward?

Smart cities and smart buildings around the world generate an incredible amount of data on a daily basis. Yet the data has not been systematically collected, stored, analyzed or leveraged to drive efficiencies or to meet sustainability goals. While AI represents the broader concept of machines being able to carry out tasks in an intelligent way, machine learning (ML) is a current application of AI that allows systems to automatically learn and improve from exposure to more data without being explicitly programmed. In other words, ML focuses on the development of computer programs that can access data and use it to learn for themselves. As ML evolves, a class of semi-supervised learning has also come into use; it typically draws on a large amount of input data but only a small amount of corresponding labeled output data. With the right algorithm, critical information, such as where the problems are and what's causing them, can be culled from the data flood and delivered when and where such insights are needed. With such advanced tools, organizations will be better equipped to identify opportunities and resolve problems.

AI and ML: A trust issue?
While AI and ML hold enormous promise, they are still maturing as technologies. Understandably, there are anxieties and concerns about them across economies and societies.

Data quality and quantity are vital considerations. Since ML requires a significant amount of data to train the algorithm, the quality of the data input into the model matters: higher quality data should enable better predictive capabilities. Obtaining sufficient quality labeled data can be a costly proposition and often depends on human experts to perform the labeling.

Another concern is trusting that the machine and the data will make the right decision. As AI and ML become more advanced, they will start to make more sophisticated decisions. Some will question whether automated processes could "learn" patterns that lead to undesired or unintended consequences or biases. If the underlying dataset includes biases, or is collected from a process that structurally includes some form of bias, the ML algorithms will replicate and perpetuate those biases. Hence, business leaders will need to pay careful attention to the history and provenance of their datasets. Many ML algorithms and resulting models are built from combinations of thousands of variables, and their predictions are not easily explainable, which can make it difficult for observers to trust the output of the algorithm.

Will AI/ML take our jobs away?

With AI/ML technologies becoming more ubiquitous, economic effects and impacts on the world of work are inevitable. As automation replaces many administrative and predictable physical tasks, along with data collection and processing, job functions will change. Concerns exist that many jobs as we know them today will be lost altogether or, at a minimum, significantly re-shaped.
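The core supervised-learning idea discussed above, a program learning from labeled data rather than explicit rules, can be sketched with a minimal nearest-centroid classifier. This is a generic illustration, not any vendor's method; the sensor features (temperature, occupancy) and room labels are invented for the example.

```python
def train_centroids(samples, labels):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Hypothetical labeled data: [avg temperature C, occupancy] per room type
samples = [[21.0, 5], [22.0, 6], [18.0, 50], [19.0, 45]]
labels = ["office", "office", "lecture_hall", "lecture_hall"]
centroids = train_centroids(samples, labels)
print(predict(centroids, [21.5, 4]))   # → office
```

Note how the model's behavior is entirely determined by the training data: biased or mislabeled samples shift the centroids and therefore the predictions, which is precisely the data-provenance concern raised above.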
Business leaders and policy makers need to take these concerns seriously and assume some responsibility for enabling and applying "Responsible AI."

Machine learning in action

Next-generation smart buildings are about to become self-conscious, self-healing and occupant-driven. Imagine empowering building occupants, such as employees, visitors, doctors and patients, to interact with their environment for better comfort and productivity. In the built environment, ML is foundational to many systems that rely on biometric recognition to enable physical access. For example, in addition to a physical access control credential (such as a card or PIN), cameras are used to verify users' identity. The technology powering these cameras uses supervised machine learning techniques such as neural networks to identify users. Modern AI algorithms can identify users with very high confidence, and the combination of facial recognition and traditional card access provides a higher level of assurance for secure access while minimizing disruption for users.

ML is also being applied to energy management and predictive energy optimization in commercial settings. Corporate real estate owners and operators can leverage internal and external data to benchmark building performance, monitor building equipment, ensure occupant comfort and forecast operational budgets. With predictive analytics, the technology can analyze load management (for heating, ventilation and air-conditioning (HVAC), lighting, appliances and devices) in addition to fault detection and diagnosis. For instance, AI-based capabilities can predict anomalies in building equipment such as chillers, boilers, cooling towers or lighting, which allows potential issues to be addressed before something serious happens. In addition, predictive analytics can also manage upcoming load, either by preemptively shedding load or by optimizing processes to prepare for upcoming challenges.
For example, the system can intelligently pre-cool spaces or store chilled water in advance of anticipated loads, producing just the right amount of chilled water early in the day before peak utility rates apply. The technology can also compute operational set-points for various components of HVAC systems using feedback on human comfort and energy consumption. With AI, the energy consumption of a whole building, or even a single piece of equipment, can be predicted, which enables peak shaving, avoiding utility penalties, cost-effective energy supply planning and energy demand planning.

Voice control of building functions could be next on the table for commercial buildings. Currently, voice control is not yet big in commercial settings outside of personal devices like the building occupant's phone, laptop or wearables. However, more organizations are beginning to deploy voice controls in communal spaces, such as conference rooms. The take-up will largely depend on trust: the ease of use, how well the technology performs its intended use, and the protection of users' privacy.

Data has become a remarkable resource with truly amazing potential for those who are able to gather it, understand its meanings, and put it to work. A smart strategy for facilities management is a data-driven approach where building systems and equipment are linked to provide efficient, centralized control, powered by AI/ML to mine the data (generated by the linked systems and supplied by external sources) for opportunities to improve efficiency and performance.
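The fault-detection idea described above, flagging anomalous equipment readings before something serious happens, can be illustrated with a much simpler stand-in than the article's AI models: a z-score check over a window of sensor readings. The chiller temperatures below are invented for the example.

```python
from statistics import mean, stdev

def find_anomalies(readings, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > threshold]

# Hourly chiller supply temperatures (deg C); index 5 is a fault spike
temps = [6.1, 6.0, 6.2, 5.9, 6.1, 12.5, 6.0, 6.1]
print(find_anomalies(temps, threshold=2.0))   # → [5]
```

Production systems would use learned models with seasonality and multivariate inputs, but the principle is the same: quantify how far a reading sits from expected behavior and alert before the deviation becomes an outage.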
What are Logs?

Also commonly referred to as log files, logs are records of activities within a computer system. Logs are written in a simple text format, document event details, typically include a timestamp, and can be used for tasks ranging from restoring a database to analyzing website traffic. Nearly all software applications have a logging function, often using a protocol called syslog, which is a standardized method of creating logs.

There are many different types of logs, tailored to specific systems and functions. A transaction log keeps a record of all changes made to a database and is used in recovery efforts. Network event logs provide a history of what happened within the network, such as traffic and failed password attempts. Web servers also create logs that include information such as the IP addresses of visitors, when they were on the site and which pages they visited. These are just a few examples of the types of logs systems can create. Logs are invaluable for monitoring system health and troubleshooting issues.

How NICE can help

NICE is the market leader in providing customers the cloud contact center software they need to deliver consistently exceptional customer experiences. Benefits include:
- Modern ACD providing digital-first omnichannel routing and increased business agility
- Integrated and comprehensive workforce management solutions to engage and empower contact center agents to achieve business goals
- Automation and artificial intelligence (AI) capabilities to enhance the customer experience and automate routine agent tasks
- Omnichannel customer journey management

CXone provides an intelligent, unified suite of applications covering the breadth of contact center management disciplines, simplifying administration and streamlining the user experience.
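The syslog-style records described earlier (timestamp, host, application, message) are simple enough to parse with a regular expression. The sketch below assumes a simplified BSD-syslog line layout; real syslog implementations vary, and robust parsers should follow RFC 3164/5424.

```python
import re

# Simplified BSD-syslog line: "Mon DD HH:MM:SS host app[pid]: message"
LOG_PATTERN = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<app>[\w\-/]+)(?:\[\d+\])?: (?P<msg>.*)$"
)

def parse_line(line):
    """Split a syslog-style line into timestamp, host, app, and message."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

entry = parse_line("Sep 11 02:44:44 web01 sshd[912]: Failed password for root")
print(entry["app"], "-", entry["msg"])
```

Structured fields like these are what make logs usable for monitoring and troubleshooting: once the timestamp, host, and application are extracted, events can be filtered, counted, and correlated.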
We just introduced what we believe is a unique application of real-time, deep learning (DL) algorithms to network prevention. The announcement is hardly our first foray into artificial intelligence (AI) and machine learning (ML). The technologies have long played a pivotal role in augmenting Cato's SASE security and networking capabilities, enabling advanced threat prevention and efficient asset management. Let's take a closer look.

What is Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL)?

Before diving into the details of Cato's approach to AI, ML, and DL, let's provide some context around the technologies. AI is the overarching concept of creating machines that can perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, understanding natural language, and perception. One example of AI applications is in healthcare, where AI-powered systems can assist doctors in diagnosing diseases or recommending personalized treatment plans.

ML is a subset of AI that focuses on developing algorithms that learn from and make predictions based on data. These algorithms identify patterns and relationships within datasets, allowing a system to make data-driven decisions without explicit programming. An example of an ML application is in finance, where algorithms are used for credit scoring, fraud detection, and algorithmic trading to optimize investment strategy and risk management.

Deep learning (DL) is a subset of ML that employs artificial neural networks to process data and mimic the human brain's decision-making capabilities. These networks consist of multiple interconnected layers capable of extracting higher-level features and patterns from vast amounts of data.
A popular use of DL is seen in self-driving vehicles, where complex image recognition algorithms allow the vehicle to detect and respond appropriately to traffic signs, pedestrians, and other obstacles to ensure safe driving.

Overcoming Challenges in Implementing AI/ML for Real-time Network Security Monitoring

Implementing DL and ML for Cato customers presents several challenges. Cato handles and monitors terabytes of customer network traffic daily, and processing that much data requires a tremendous amount of compute capacity. Falsely flagging network activity as an attack could materially impact our customers' operations, so our algorithms must be incredibly accurate. Additionally, we can't interfere with our users' experience, leaving just milliseconds to perform real-time inference.

Cato tackles these challenges by running our DL and ML algorithms on Cato's cloud infrastructure. Running in the cloud enables us to use the cloud's ubiquitous compute and storage capacity. In addition, we've taken advantage of cloud infrastructure advancements, such as AWS SageMaker, a cloud-based platform that provides a comprehensive set of tools and services for building, training, and deploying machine learning models at scale. Finally, Cato's data lake provides a rich data set, converging networking metadata with security information, to better train our algorithms. With these technologies, we have successfully deployed and optimized our ML algorithms, meticulously reducing the risks associated with falsely flagging network activity and ensuring real-time inference. The Cato algorithms monitor network traffic in real time while maintaining low false positive rates and high detection rates.
How Cato Uses Deep Learning to Enhance Threat Detection and Prevention

Using DL techniques, Cato harnesses the power of artificial intelligence to amplify the effectiveness of threat detection and prevention, thereby fortifying network security and safeguarding users against diverse and evolving cyber risks. DL is used in many different ways in the Cato SASE Cloud. For example, we use DL for DNS protection by integrating deep learning models within Cato IPS to detect Command and Control (C2) communication originating from Domain Generation Algorithm (DGA) domains, the essence of our launch today, as well as DNS tunneling. By running these models inline on enormous amounts of network traffic, Cato Networks can effectively identify and mitigate threats associated with malicious communication channels, preventing unauthorized access and data breaches in real time, within milliseconds.

We stop phishing attempts through text and image analysis by detecting flows to known brands with low reputations and newly registered websites associated with phishing attempts. By training models on vast datasets of brand information and visual content, Cato Networks can swiftly identify potential phishing sites, protecting users from falling victim to fraudulent schemes that exploit their trust in reputable brands.

We also prioritize incidents for enhanced security with machine learning. Cato identifies attack patterns using aggregations over customer network activity and the classical Random Forest ML algorithm, enabling security analysts to focus on high-priority incidents based on the model score. The prioritization model considers client group characteristics, time-related metrics, MITRE ATT&CK framework flags, server IP geolocation, and network features.
By evaluating these varied factors, the model boosts incident response efficiency, streamlines the process, and ensures clients' networks remain secure and resilient against emerging threats.

Finally, we leverage ML and clustering for enhanced threat prediction. Cato harnesses the power of collective intelligence to predict the risk and threat type of new incidents. We employ advanced ML techniques, such as clustering and Naive Bayes-like algorithms, on previously handled security incidents. This data-driven approach, using forensics-based distance metrics between events, enables us to identify similarities among incidents. We can then match new incidents with similar networking attributes to predict risk and threat accurately.

How Cato Uses AI and ML in Asset Visibility and Risk Assessment

In addition to using ML for threat detection and prevention, we also tap AI and ML to identify and assess the risk of assets connecting to Cato. Understanding the operating system and device types is critical to that risk assessment, as it allows organizations to gain insight into the asset landscape and enforce tailored security policies based on each asset's unique characteristics and vulnerabilities. Cato assesses the risk of a device by inspecting traffic coming from client device applications and software. This approach operates on all devices connected to the network; by contrast, relying on client-side applications is only effective for known, supported devices. By leveraging powerful AI/ML algorithms, Cato continuously monitors device behavior and identifies potential vulnerabilities associated with outdated software versions and risky applications. For OS type detection, Cato's AI/ML capabilities accurately identify the operating system type of agentless devices connected to the network.
This information provides valuable insights into the security posture of individual devices and enables organizations to enforce appropriate security policies tailored to different operating systems, strengthening overall network security.

Cato Will Continue to Expand its ML/AI Usage

Cato will continue looking at ways of tapping ML and AI to simplify security and improve its effectiveness. Keep an eye on this blog as we publish new findings.
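As an aside on the DGA detection mentioned earlier: Cato's production approach uses deep learning models, but a far simpler heuristic illustrates why algorithmically generated domains are detectable at all. Machine-generated labels tend to have near-random character distributions, so their Shannon entropy is high. The threshold and sample domains below are illustrative only, and a real detector would use much richer features.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_dga(domain, entropy_threshold=3.5):
    """Crude heuristic: long, high-entropy first labels resemble
    the output of a domain generation algorithm."""
    label = domain.split(".")[0]
    return len(label) >= 10 and shannon_entropy(label) > entropy_threshold

print(looks_like_dga("google.com"))            # → False
print(looks_like_dga("xj4k9qpl2mzv7wrt.net"))  # → True
```

This heuristic alone would misclassify legitimate high-entropy names (CDN hostnames, hashes), which is exactly why production systems layer learned models and reputation data on top of such signals.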
Dissemination of Information refers to the distribution of a company's or customer-specific information to the public, whether through printed or electronic documents or other forms of media. "Dissemination of information" does not include intra-company use of information or responding to requests for access to information from a company's employees.

What should SMB Owners be concerned with on Information Dissemination?

From an information security perspective, information that needs dissemination should be accurate (have Integrity), should be available when you need it (Availability), and should only be given out to those with a need to know (Confidentiality). These are the CIA best practices of information handling. Business owners should have an Information Handling Policy in place that sets requirements for employees to follow based on the type of information they have access to. This policy should outline the types of data your business has, which fall into categories such as Public (General), Restricted (Sensitive), and Confidential (Critical). It should also spell out how that information is to be protected throughout its life, in motion and at rest. However, there is more to information dissemination than just information security. The video below covers some leadership best practices around how to disseminate information, where to share it, when to share it, and in what manner.
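A policy like the one described above ultimately reduces to a simple ordering check: information in a given category may only be released to audiences cleared for that category. The sketch below is a hypothetical illustration of that rule using the three categories named above, not a complete policy engine.

```python
# Ordered classification levels, lowest to highest sensitivity
CLEARANCE = {"public": 0, "restricted": 1, "confidential": 2}

def may_disseminate(doc_classification, audience_clearance):
    """Release information only to audiences whose clearance level
    meets or exceeds the document's classification level."""
    return CLEARANCE[audience_clearance] >= CLEARANCE[doc_classification]

print(may_disseminate("restricted", "confidential"))  # → True
print(may_disseminate("confidential", "public"))      # → False
```

Real policies add handling rules per level (encryption in transit and at rest, retention, need-to-know within a level), but the ordered-levels check is the core of the Confidentiality requirement.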
The number of cyberattacks affecting businesses large and small continues to grow each year. It's estimated that, by 2025, cybercrime will cost companies worldwide $10.5 trillion, up from $3 trillion in 2015.

Cyberthreats grow at the speed of technology. As businesses develop longer and more complex supply chains, increase IoT device usage, and rely on technology for remote work, IT security professionals are forced to accomplish more with less. In the same way cybersecurity professionals leverage technology to detect and respond to threats, cybercriminals develop more ways to spread and use illegal services and products. Various dark web markets exist for cybercriminals to sell leaked information, buy illegal products, and even invest in as-a-service cybercrime. At the same time, the cybersecurity talent shortage has entered its sixth year with no signs of relief.

In an industry already spread thin, the global pandemic introduced a deluge of new problems. While businesses closed their doors due to restrictions and adopted remote work structures to stay afloat, hackers increased their efforts to exploit business network vulnerabilities. Cybercriminals recognized the opportunities provided by remote work, phishing attacks that capitalized on pandemic fears, and vulnerabilities presented by new technology. Security professionals were forced to take on more tasks to keep up and faced increasingly stressful situations on a daily basis.

By nature, cybersecurity is a stressful profession. IT security teams are lean, and a heavy workload falls on each individual. For companies in every industry, cyberattacks are no longer a question of if, but when. Security professionals face hundreds of false alerts daily and always have more work than can be completed. As a result, prioritizing risk is a pivotal part of the job. With the number of breaches soaring yearly, there is more pressure than ever before on security teams to keep businesses secure.
Simply put, the job conditions of the cybersecurity industry are a pressure pot with all the ingredients that lead to burnout. Under perfect conditions, it's a profession that must be carefully balanced to avoid burnout, and modern cybersecurity conditions are far from perfect. More than a third of cybersecurity professionals are considering quitting their jobs in the next six months due to burnout caused by high stress levels and heavy workloads. The effects of such a loss will not only further burden the industry, but affect all the individuals who depend on these professionals to protect sensitive data and keep modern enterprises running in an era fraught with cybercrime.

The Dangers of Burnout

Burnout is defined by the World Health Organization (WHO) as a syndrome resulting from chronic workplace stress that has not been successfully managed. It is characterized by three core dimensions:
- Feelings of energy depletion or exhaustion
- Increased mental distance from, and feelings of negativity toward, one's job
- Reduced professional efficacy

So, what do these symptoms look like in the world of cybersecurity? Job stress keeps 51% of cybersecurity professionals up at night. Most cybersecurity professionals work over 41 hours a week, and some work up to 90. 65% agree that the pandemic made security processes more difficult.

The Effects of Burnout Have a Critical Impact on Performance

Human error is one of the biggest causes of data breaches in organizations. The risk of falling victim to such an attack is heightened considerably when employees are facing stress and fatigue. Cognitive function and memory are directly impacted by stress and fatigue, making it difficult to focus on the core aspects of cybersecurity tasks. Under this tremendous level of exhaustion, the adoption of new technologies adds to the load by creating ever-changing and unclear job expectations.
For professionals pushing forward under these intense conditions, it's impossible to function at top level. In a deluge of false alarms and log information, it becomes harder to spot the subtle deviations that signal a genuine threat. When burnout reaches its inevitable climax, security professionals are too exhausted and indifferent to recognize the real threats facing their organizations. They no longer see the importance of their role in preventing cybercrime. Perhaps most worrisome, they no longer want to work in the industry at all.

Burnout Leads to Turnover in an Already Overworked Industry

There are currently about 435,000 cybersecurity job openings in the US, and the unemployment rate in the industry is 0%. Among professionals currently working in the industry, 51% experienced extreme stress or burnout in 2021, and 65% considered leaving their job because of job stress. Only 33% would recommend such a career to others, and the same number would likely discourage people from entering the industry.

As burnout leads to turnover, the remaining security professionals get more work added to an already full schedule. For those managing work stress, burnout becomes inevitable, creating a cycle that leaves the industry bare. While this is a critical concern for cybersecurity professionals, it's also a problem for the many individuals and organizations depending on them. Ransomware is expected to affect more than 8 million users, or 10% of all internet users, in 2022. With the talent shortage at current levels, 67% of security professionals say they don't have enough talent on their team, and 17% say it feels like each person is doing the workload of three. If turnover in the industry increases to match the levels of burnout, organizations worldwide will have little protection against a constantly growing cybercrime wave.
In the same way cybercriminals had a clear understanding of the vulnerabilities presented by the pandemic, they're aware of the burnout currently affecting the security industry. This knowledge puts criminals in a position to launch more attacks in an effort to reach their objectives. If the number of cybersecurity professionals dwindles, cybercrime will increase in response, and more organizations will suffer catastrophic attacks.

The Causes of Cybersecurity Burnout

The cybersecurity profession naturally includes all the factors that can lead to occupational burnout. Long hours, a high-stress environment, work overload, and critical responsibilities simply go along with the job. As technology advances faster than ever before and work environments have been upended by the pandemic, demands on security professionals are only increasing. Consider how these recent changes have fueled burnout in the cybersecurity industry.

Launching a cyberattack has never been easier. From technology growth to global internet communication and cybercrime products and services packaged and sold by professionals, it's easy for inexperienced would-be hackers to launch a successful attack. The cybercrime economy is the 15th largest economy in the world. Ransomware attacks in 2021 saw a 130% increase over those in 2020, and cryptojacking increased 400% in the same period. Recent major attacks like those launched on SolarWinds, Colonial Pipeline, and JBS Foods make it clear how damaging a catastrophic infrastructure attack can be. As cybercrime grows, businesses have faced pandemic losses and crippling inflation, forcing them to tighten spending. Instead of increasing cybersecurity spending, lean teams are forced to take on more responsibilities with fewer employees and tools.

Expanding Attack Surface

New technology comes with new vulnerabilities.
IoT growth, the increase in remote work, and growing supply chain dependence mean that cybersecurity professionals must protect a much larger network surface to prevent attacks. It's estimated there will be three times more networked devices on Earth than humans by 2023. A recent report revealed there are around 35.82 billion IoT devices installed worldwide, a figure expected to reach 75.44 billion by 2025. IoT devices often lack the security features of other devices, making them a desirable target for threat actors.

While companies are reopening to in-office work, full-time on-site work is not expected to be the norm for many. 59% of employees say that a hybrid work model is their preferred arrangement for the future, and 53% anticipate this will actually happen. Security controls and practices are typically weaker when employees work remotely. This can range from the use of personal devices to cutting corners on security protocols to increase production. Whether employees use their own devices or company devices, networks typically expand with remote work, adding to the attack surface security experts must protect.

Lack of Appreciation

Cybersecurity is a necessary component of daily business for modern enterprises. Yet the requirements of maintaining a secure network can affect production and make everyday tasks more cumbersome. As a result, IT security teams are pressured to limit security protocols and restrictions. Since security restrictions affect end users, 80% of IT teams experience pushback when enforcing an organization's security policy. Even worse, 80% of IT teams said that IT security has become a thankless task, and 91% felt pressure to compromise security for business continuity. Such attitudes add stress to the profession and breed apathy when it comes to organizational protection.

Long Hours and Heavy Workloads

Cybersecurity is a 24/7 industry. Yet humans aren't built to work 24/7.
When attacks occur, cybersecurity professionals are placed in high-stress situations and forced to work long hours until the problem is resolved. Pulling an all-nighter isn't uncommon, and is often even applauded in the industry. Unfortunately, the working environment in these situations is so demanding that the mental health of those involved can suffer for months after the incident.

Balancing Cybersecurity Burnout with Managed Detection and Response

There is no immediate solution to the talent shortage in the cybersecurity industry. Even if thousands of industry hopefuls were to rush toward the industry today, training these individuals would take significant time and effort. With the current burden having such a severe effect on seasoned industry professionals, recruits will need extensive care during onboarding to avoid turnover. When it comes to addressing cybersecurity burnout, the best solution is to lighten the load on your IT security team. While it's true that many organizations can't source the talent to increase in-house cybersecurity and IT teams, there are other ways to increase headcount and technology to protect your network. For many companies, managed detection and response (MDR) could be the answer to helping your team win the battle against cybersecurity burnout.

MDR is a group of services provided by a remotely delivered, modern security operations center with functions that allow organizations to rapidly detect, analyze, investigate, and actively respond to cybersecurity threats. To be classified as MDR, services must include both professional expertise and security tools, provided in a fast-to-deploy turnkey service. In an industry plagued with thousands of disjointed tools and services, this might seem like just another cybersecurity tool that adds more requirements to an already overbalanced workload. However, MDR has some important distinctions. It's important to note that MDR isn't a tool.
It's a collection of services tailored to your organization and installed by your provider. Furthermore, MDR has the crucial requirement of including ongoing assistance through routine and emergency communication with off-site security professionals. MDR stands out in a sea of tools and services as a single solution to cybersecurity burnout with these features.

Remote SOC to Address Staff Limitations

Heavy workloads and long hours are a chronic hazard for IT security professionals. Long before the pandemic, the industry was plagued with understaffed teams facing a constantly growing workload. With the right tools on hand, automation plays an important part in detecting and mitigating threats. However, humans are a critical part of cybersecurity; if they weren't, cybersecurity professionals could kick back and watch machines take on all types of cybersecurity threats. When it comes to addressing the talent shortage, the off-site SOC is one of the most important pieces of the MDR puzzle. The team works as an extension of your existing security personnel to create around-the-clock supervision for the entire organizational network.

Expert analysts who work as part of your MDR solution act as an extension of your team. They can provide a variety of services based on your needs, which may include:
- Installation and optimization of software and tools
- Testing to reduce false alarms
- Application of updates and patches
- Emergency response for real threats and attacks
- 24/7 monitoring of your network to identify threats while your team sleeps, eats, and takes vacations

In other words, the off-site SOC that works as part of your MDR services can instantly increase your IT security headcount to address the talent shortage within your organization.

A Turnkey Solution Prevents Increasing Workloads

There is no shortage of tools available in the cybersecurity industry.
In fact, there are likely thousands of different types of tools and software that can be used to address a variety of security issues for specific network components. Unfortunately, more tools can lead to more work for IT security specialists.

When multiple tools are deployed to address various threats or network components, these tools must be overseen by IT professionals. Without integration capabilities, tools can perform redundant tasks or even inhibit network performance. Furthermore, many cybersecurity tools require precise optimization to complete the intended task without generating a deluge of false alerts.

MDR is required to be a turnkey solution that can be deployed quickly and provide rapid time to value. This means your MDR provider will use a predefined technology stack designed by the company or curated from existing solutions. Pieced-together security solutions require your security experts to compare tools, make purchases, and optimize new software for the best results. MDR avoids adding new burdens to overworked professionals with a turnkey solution that includes installing and deploying specific tools designed to provide a complete cybersecurity solution for your organization.

Elimination of Repetitive Manual Tasks with SIEM Automation

It's no secret that complete visibility into your network is key to providing adequate protection. However, thorough log collection generates thousands of entries each day. Even with the use of automated tools to provide alerts for suspicious activities, analysts must still pore over a significant amount of data each day.

When your MDR solution includes security information and event management (SIEM) log management that automatically categorizes and applies context to huge amounts of data, the manual task load for your internal IT team is lightened considerably. At BitLyft, we use some of the top SIEM tools designed to combat advanced threats with an analytics-based approach for the modern hybrid enterprise.
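As a rough illustration of what "categorizes and applies context" means in practice, the sketch below joins raw log events with an identity directory and ranks the results. This is a generic, simplified example, not BitLyft's actual SIEM pipeline; every field name and scoring rule in it is invented.

```python
# Toy SIEM-style enrichment: raw events are joined with an identity
# directory so alerts can be prioritized by privilege and action type.
# Directory contents and scoring rules are invented for illustration.

user_directory = {
    "alice": {"role": "domain-admin", "privileged": True},
    "bob":   {"role": "contractor",   "privileged": False},
}

def enrich(event: dict) -> dict:
    """Attach user context (identity, access privileges) to a raw log event."""
    identity = user_directory.get(event["user"], {"role": "unknown", "privileged": False})
    return {**event, **identity}

def priority(event: dict) -> int:
    """Score events so analysts see the most dangerous alerts first."""
    score = 1
    if event["privileged"]:
        score += 2                    # privileged accounts can do more damage
    if event["action"] == "failed_login":
        score += 1                    # possible credential attack
    return score

raw_events = [
    {"user": "bob",   "action": "file_read"},
    {"user": "alice", "action": "failed_login"},
]
alerts = sorted((enrich(e) for e in raw_events), key=priority, reverse=True)
print(alerts[0]["user"])  # → alice: the privileged failed login surfaces first
```

A real SIEM applies the same join-and-score idea across millions of events from many sources, which is why automating it removes so much manual triage.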
The system includes data enrichment that adds context to the data provided by log collection. This contextual information can provide essential details like user identity and access privileges to automatically prioritize the most dangerous alerts. The information collected by your SIEM is made visible to your internal IT team and external SOC with user-friendly dashboards that provide real-time visibility into your network.

When your SIEM software is installed and optimized by your MDR provider, the process of adding new technology is streamlined for your IT staff. With rapid time to value and technology to reduce the manual labor of data analysis, you can finally lighten the workload for your internal IT professionals.

Endpoint Detection to Tackle the Expanding Attack Surface

Pandemic restrictions fast-tracked the implementation of remote work into a variety of industries that would otherwise have taken years, or even decades, to adopt the technology. When forced to adapt, remote work proved to be more effective than many professionals thought possible. More importantly, many employees thrived with the new level of work/life balance offered by the arrangement. As employees face going back to the office, many are determined to maintain a hybrid schedule where remote work will remain part of the new normal.

Alongside the remote devices required by remote work is the growing number of IoT devices that increase convenience and productivity in every industry. Remote devices and increased cloud migration mean that networks are growing exponentially and require new protections to avoid endpoint attacks. Every device that connects to your network presents a risk, and MDR addresses those risks with endpoint detection. Since modern sophisticated attacks are designed to exploit low-level devices and move discreetly through networks, endpoint security is critical.
Endpoint detection and response (EDR) deployed by your MDR provider is integrated with your SIEM to provide complete visibility into your entire network.

UEBA to Proactively Address Increased Attacks

User and entity behavior analytics (UEBA) is a type of artificial intelligence that learns the typical behavior that occurs within your network in order to automatically detect abnormal (suspicious) behavior. By building a complete profile of every entity and user in your network, UEBA can recognize discreet attacks that are in process and provide proactive responses to the threat. The result is a reduction in false alerts and the ability to detect insider threats that are typically logged as authorized activity.

24/7 Response to Reduce Work Hours

No individual can be expected to work 24/7. Yet the stressful nature of cybersecurity means that professionals are often in emergency mode at all times. This "always-on" mentality is a direct contributor to burnout. The technology stack provided by your MDR can finally offer effective relief for an IT team that feels it can never sleep at night.

Security orchestration, automation, and response (SOAR) brings together the alerts provided through log collection and the actions that need to be taken to immediately protect your network. This means that when attacks occur during off-hours, alerts launch a series of events that work to define the severity of the threat, contain active threats, and conduct actions to mitigate and repair damage. These automated responses reduce dwell time and minimize the damage that can be done by sophisticated threats launched during off-hours.

MDR from BitLyft Offers the Most Comprehensive Solution to Cybersecurity Burnout

MDR from BitLyft is a single turnkey solution for managed detection and response that goes above and beyond traditional MDR services.
With fully integrated features like EDR and UEBA included in the SIEM system, your IT security team gets a streamlined solution without extra moving parts and added tasks to perform. By giving you direct access to the dedicated cybersecurity professionals in your off-site SOC who know your environment and unique organizational goals, you get a true extension of your internal team and a way to increase cybersecurity headcount in a competitive hiring market. To learn more about the benefits of MDR provided by BitLyft, download our MDR buyers guide.
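As a brief aside on the UEBA feature described earlier: the core intuition (learn each user's normal activity level, then flag sharp deviations from that baseline) can be sketched in a few lines. Real UEBA systems model many behavioral features with machine learning; the single-feature z-score, the threshold, and the data below are invented for illustration only.

```python
# Minimal sketch of UEBA-style baselining: model "normal" per-user
# activity, then flag observations far outside that baseline.
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn a user's typical daily event count from past observations."""
    return mean(history), stdev(history)

def is_anomalous(observed: float, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# Invented example: a user who normally downloads ~20 files a day
history = [18, 22, 19, 21, 20, 23, 17]
baseline = build_baseline(history)
print(is_anomalous(21, baseline))   # a typical day → False
print(is_anomalous(400, baseline))  # sudden mass download → True
```

Because the 400-file day is judged against this user's own history rather than a fixed rule, the same approach can catch insider activity that would otherwise be logged as authorized.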
Artificial Intelligence (AI) has been a buzzword across sectors for the last decade, leading to significant advancements in technology and operational efficiencies. However, as we delve deeper into the AI landscape, we must acknowledge and understand its distinct forms. Among the emerging trends, generative AI, a subset of AI, has shown immense potential in reshaping industries. But how does it differ from traditional AI? Let's unpack this question in the spirit of Bernard Marr's distinctive, reader-friendly style.

Traditional AI: A Brief Overview

Traditional AI, often called Narrow or Weak AI, focuses on performing a specific task intelligently. It refers to systems designed to respond to a particular set of inputs. These systems have the capability to learn from data and make decisions or predictions based on that data.

Imagine you're playing computer chess. The computer knows all the rules; it can predict your moves and make its own based on a pre-defined strategy. It's not inventing new ways to play chess but selecting from strategies it was programmed with. That's traditional AI - it's like a master strategist who can make smart decisions within a specific set of rules. Other examples of traditional AIs are voice assistants like Siri or Alexa, recommendation engines on Netflix or Amazon, or Google's search algorithm. These AIs have been trained to follow specific rules, do a particular job, and do it well, but they don't create anything new.

Generative AI: The Next Frontier

Generative AI, on the other hand, can be thought of as the next generation of artificial intelligence. It's a form of AI that can create something new. Suppose you have a friend who loves telling stories. But instead of a human friend, you have an AI. You give this AI a starting line, say, 'Once upon a time, in a galaxy far away...'. The AI takes that line and generates a whole space adventure story, complete with characters, plot twists, and a thrilling conclusion.
The AI creates something new from the piece of information you gave it. This is a basic example of generative AI. It's like an imaginative friend who can come up with original, creative content. What's more, today's generative AI can create not only text outputs, but also images, music and even computer code.

Generative AI models are trained on a set of data and learn the underlying patterns to generate new data that mirrors the training set. Consider GPT-4, OpenAI's language prediction model, a prime example of generative AI. Trained on vast swathes of the internet, it can produce human-like text that is almost indistinguishable from text written by a person.

The Key Difference

The main difference between traditional AI and generative AI lies in their capabilities and application. Traditional AI systems are primarily used to analyze data and make predictions, while generative AI goes a step further by creating new data similar to its training data. In other words, traditional AI excels at pattern recognition, while generative AI excels at pattern creation. Traditional AI can analyze data and tell you what it sees, but generative AI can use that same data to create something entirely new.

The implications of generative AI are wide-ranging, providing new avenues for creativity and innovation. In design, generative AI can help create countless prototypes in minutes, reducing the time required for the ideation process. In the entertainment industry, it can help produce new music, write scripts, or even create deepfakes. In journalism, it could write articles or reports. Generative AI has the potential to revolutionize any field where creation and innovation are key.

On the other hand, traditional AI continues to excel in task-specific applications. It powers our chatbots, recommendation systems, predictive analytics, and much more. It is the engine behind most of the current AI applications that are optimizing efficiencies across industries.
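One way to see the recognition-versus-creation distinction concretely is in code: a traditional-style model maps an input to one of a few predefined answers, while a generative-style model produces new sequences shaped like its training data. The toy bigram generator below is only an illustration of that idea; it is vastly simpler than models like GPT-4, and the corpus and rules are invented.

```python
import random

corpus = "the cat sat on the mat the cat ran on the rug".split()

# "Traditional" style: recognize a pattern and pick from fixed answers.
def classify_length(sentence: str) -> str:
    return "long" if len(sentence.split()) > 4 else "short"

# "Generative" style: learn word-to-word transitions, then create new text.
transitions: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start: str, length: int = 6) -> str:
    random.seed(0)  # deterministic for the example
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(classify_length("the cat sat"))  # → short
print(generate("the"))                 # a sentence the corpus never contained
```

The classifier can only ever answer "long" or "short"; the generator can emit word sequences that never appeared in its training data, which is the essence of pattern creation.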
The Future of AI

While traditional AI and generative AI have distinct functionalities, they are not mutually exclusive. Generative AI could work in tandem with traditional AI to provide even more powerful solutions. For instance, a traditional AI could analyze user behavior data, and a generative AI could use this analysis to create personalized content.

As we continue to explore the immense potential of AI, understanding these differences is crucial. Both generative AI and traditional AI have significant roles to play in shaping our future, each unlocking unique possibilities. Embracing these advanced technologies will be key for businesses and individuals looking to stay ahead of the curve in our rapidly evolving digital landscape.

We have only just started on the journey of AI innovation. Recognizing the unique capabilities of these different forms of AI allows us to harness their full potential as we continue on this exciting journey.
Using Test TCP (TTCP) to Test Throughput

You can use the Test TCP utility (TTCP) to measure TCP throughput through an IP path. In order to use it, start the receiver on one side of the path, then start the transmitter on the other side. The transmitting side sends a specified number of TCP packets to the receiving side. At the end of the test, the two sides display the number of bytes transmitted and the time elapsed for the packets to pass from one end to the other. You can then use these figures to calculate the actual throughput on the link.

Since it is most common to evaluate connect speeds in kbps (kilobits per second, or 1000 bits per second) rather than KBps (kilobytes per second, or 1024 bytes per second), we must use the information from TTCP to calculate the bit rate (in kbps). Use the number of bytes received and the transfer time to calculate the actual bit rate for the connection. Calculate the bit rate by converting the number of bytes into bits and then dividing this by the time for the transfer. For example, if the host received 409600 bytes in 84.94 seconds, you can calculate the bit rate to be (409600 bytes * 8 bits per byte) divided by 84.94 seconds = 38577 bps, or 38.577 kbps.
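The arithmetic above is easy to script. The helper below is a small sketch, not part of TTCP itself; it simply repeats the byte-to-bit conversion from the worked example.

```python
def ttcp_bit_rate_kbps(bytes_received: int, elapsed_seconds: float) -> float:
    """Convert TTCP results into a bit rate in kbps (1 kbps = 1000 bits/s)."""
    bits = bytes_received * 8          # 8 bits per byte
    return bits / elapsed_seconds / 1000.0

# The example from the text: 409600 bytes received in 84.94 seconds
print(f"{ttcp_bit_rate_kbps(409600, 84.94):.3f} kbps")  # ≈ 38.578 kbps
```

Keeping the division in floating point avoids the truncation you would get from integer math when the elapsed time is not a whole number of seconds.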
Do you know the best language learning tips? My 13-year-old cousin watches all the movies in the original English language. "I want to study English later and travel a lot." You can see the motivation in her shining eyes. "It makes me learn faster and better. School lessons don't necessarily help."

Her story made me sit up and take notice. Despite the belief that young people and children are no longer interested in anything today, there is evidence to suggest otherwise. "Many of my friends and I already speak English quite well. But we didn't get that from school."

Index cards and vocabulary lists are out. We asked students why: "Because it's boring," "Because it doesn't do any good anyway," "Because I don't feel like looking at the pile of cards anymore."

Parents who notice a gap in their child's knowledge of a foreign language usually urge them to "finally learn the words." They are not entirely wrong about this, since vocabulary is the foundation for understanding a language. But the big difference lies in how you learn the words. According to current psychological understanding, institutional language learning should be fundamentally and radically changed. Why this is not happening, we can only guess, which is why we prefer to offer you a practical solution: the Birkenbihl Approach.

Best Language Learning: The Birkenbihl Approach

For those who learn a new language as a child, teenager or adult in their natural environment (e.g. bilingually or in a foreign language school/kindergarten), the brain processes the new language in a similar way to their mother tongue. Based on this insight, management trainer and bestselling author Vera F. Birkenbihl developed the Birkenbihl Approach for language learning. The Birkenbihl Approach, based on decoding (word-for-word translation), allows you to learn the meaning of the words of the foreign language at the same time as the grammar. By decoding, you can progress much faster.
Let's do an exercise: Read the above German sentence and English decoding several times. Now cover the decoding and try to decode the German words yourself. How many words have you translated correctly after such a short time? It's so easy that you usually get over 90 % right the first time you try it. With a little practice, you quickly reach 100 %. Voilà! You have already mastered a whole meaningful sentence in a foreign language. Isn't that great?

We recommend writing approx. 2 to 3 sentences of the foreign-language text on a sheet of paper. An A3 sheet is best ― as there is enough space on it for comments, drawings and anything else that makes it easier to remember. Of course, decoding also works digitally, e.g. with a text tool like Word. All details and step-by-step instructions for decoding are in our blog post: Easy Language Learning by Vera F. Birkenbihl ― The Decoding Method.

The Birkenbihl method is originally a 4-step method:

1. Decode: Word-for-word translation of a foreign-language text into the native language.
2. Karaoke listening: Listen to the recording of the text and read the word-for-word translation until you understand the meaning of each word.
3. Background listening: Passive listening to the previously actively heard text as you pursue other things in everyday life. No active concentration is necessary.
4. Activities: Speak the text yourself, have conversations in everyday life, and practise dialogues.

We now know that the method can be even simpler. The steps don't have to follow strictly one after the other. You can, for example, start with background listening and tune into a foreign language (like during a stay abroad, where you are surrounded by it). There are also different ways you can do the word-for-word translation, but more about that later.

The significant advantages of the Birkenbihl Approach are apparent: With this method, your brain learns a new language in a very natural way ― without ever memorizing words or grammar rules.
Instead, you simulate learning your mother tongue, using it, so to speak, as a learning turbo. The method is best suited for students because you can divide the steps optimally between classroom instruction and learning at home. Activities mainly take place in school ― speaking, reading, grammar exercises, deepening knowledge. At home, students can thoroughly prepare for the lesson, learn ahead, relearn and repeat. All they need is foreign language texts. Texts from textbooks that are used later in class are ideally suited for this purpose. In addition to the texts, there are usually audio recordings on a CD or as an mp3 download. Foreign language teaching is, therefore, an optimal prerequisite for the use of the Birkenbihl Approach!

Today we want to show you how pupils use the Birkenbihl Approach most effectively ― for good pupils who want to learn even more (like my cousin) and for pupils who have some catching up to do and have a hard time in school.

How Pupils Make Optimum Use of the Birkenbihl Approach

If you use the Birkenbihl Approach parallel to your school lessons, it is essential to work in advance. For this, you must use the audio material of the workbook. Go through phases 1 to 3 before the lesson in class!

Phase 1: Decoding

Translate the words of a lesson you already know. If you learn with a friend, you can also compare your translation with theirs and add more translations to your text. For the translation of unknown words, use the workbook or (online) dictionaries. In the beginning, you have to translate all the words because you have little previous knowledge. It only takes a little time, and then you only decode new words. If a translation into another foreign language is easier for you than into your mother tongue, or if you want to include links and mnemonics, you can also decode some words into other languages. Drawings of the meanings of words are, of course, also allowed.

Phase 2: Karaoke Listening

To do this, use the listening material for the course.
The order of the learning process in schools is usually unsuitable, as teachers often require speaking a foreign language from the very first lesson. But how do you know how to pronounce words if you've never heard them before? Children first hear their language for months before they make their first attempts at speaking. In the beginning, you make mistakes, which you usually correct yourself later. So listen to the lessons several times and read the decoding until you know the meaning of the words and the pronunciation.

Phase 3: Background Listening

By repeatedly listening to the foreign language in the background, you acquire a perfect pronunciation, preparing you for speaking the language in class ― and later in everyday life.

At school: strengthening and practicing

With such excellent preparation, the lessons in school become an activity. Even if grammar rules and other theoretical things like that are part of the lessons, you understand the content now, as you have already acquired some knowledge. Grammar exercises become fun for you.

How do pupils who have problems at school use the Birkenbihl Approach?

If we are honest with ourselves, we prefer to do things we are competent in, things we get good feedback for and tasks that give us pleasure. We don't like to face things we feel overwhelmed by and which lack any fun. Therefore, the first hurdle of working with this new strategy can be a significant burden. Your child may resist. But one thing is sure: After a few days, your child will also recognize the benefits, notice that something is going on, and consequently develop self-motivation. For the time being, the aim should not be to achieve a better mark immediately in the next exam but to increase motivation and gain a certain familiarity with the foreign language. So, dear parents: Hang in there.

The student then goes back 3 lessons and translates the words of the previous lessons word by word.
Doing this, the pupil quickly catches up with the subject matter. Just 10 minutes a day helps enormously! Once the lessons dealt with at school have been made up, it is advisable to "discover" one or two lessons in advance. With this preparation, students are prepared for the lessons and have more fun in the classroom. The lesson then becomes an exercise ― to train pronunciation, to truly understand grammar and to be able to put it into practice. It is fun, and it motivates them to continue and learn. Save yourself expensive tuition!

Additional exercise: speak in chorus. Listen to a known text and talk with the audio "in chorus." This step helps you to strengthen your knowledge of word meanings and grammar and refines your pronunciation. The effect: things that you practice well can then be performed more confidently, i.e. speaking becomes easier after this exercise. This exercise also works if you practise speaking in your mind, i.e. only speak in thought. Also use feelings and pictures as a guide: put yourself in the speaker's place and visualize the pictures of the story in front of you.

Best Language Learning Tips: Practice with Listening Comprehensions, Films, and Series

Watching a series is fun for all students. In connection with the foreign language, this can be an excellent incentive to continue learning. Experience has shown that there is little resistance from young learners here. A movie or series can be an excellent introduction to learning: watch a 20-minute episode, then decode a textbook text. The brain prepares and dives into the world of the foreign language. Or you work entirely with the text of a series or a film. The scripts for the films are usually on the Internet. They can first be decoded independently, à la Birkenbihl. Then you have the reward of watching the film or series.

The biggest problem when learning a language through watching movies and television series is the speed of the audio, especially at the beginning.
It's simply too fast; the pupils can't follow and don't understand very much, if anything at all. Brain-Friendly offers a solution for this: Brain-Friendly.com has developed Birkenbihl language courses based on a funny series. You watch the show, and a two-line text (similar to a subtitle) appears at the bottom of the screen. This text is the decoding: above, the foreign language; below, the word-for-word translation into English. In parallel with the audio, each word pair lights up ― as in karaoke singing. So the student can read the text very well. The speed can be adjusted: use a slower speed at the beginning, then increase it to the original speed.

Radio and Your Favourite Song

The same works with songs, by the way. The lyrics can be decoded and then listened to in an endless loop. It's lots of fun, and you find out what the songs are really about. In many cases, they want to say something completely different than one would initially assume. (More tips on learning with music are here: Learning is Fun ― Learn a Language by Listening to Music.)

Following a program from a foreign radio station can also help to improve listening comprehension considerably. For example, you can recommend your child to choose a French, Spanish or German language channel suited to their musical taste. Thanks to Internet radio, the selection is so vast that almost every young person should find a suitable program.

The 3 Best Reasons to (Pre-)Learn as a Student with the Birkenbihl Approach

1. Brain-Friendly Learning Steps

First, immerse yourself in the foreign language of choice, then learn the rules. Those who do not learn grammar do not know the corresponding grammatical terms. However, those who learn at school or for a particular type of certification, where these terms are important, won't learn them with the Birkenbihl Approach. The good news is that the lessons are full of grammar rules.
Students, therefore, use the Birkenbihl Approach at home as preparation and to repeat and strengthen. In class, when discussing grammar, students can do very well and truly understand the rules. Now that the content of an example sentence is understood, the rules make sense.

2. Experience of Success in Classrooms

If you traditionally learn vocabulary, for example, Dog=Hund, then you have the feeling that you have learned this word. (It would go beyond the scope of this article to explain why this is a deceptive feeling.) Vocabulary trainer apps additionally support this positive feeling by giving points for each correctly translated term. But just because you understand a single word doesn't mean you win anything. And this is what happens to pupils in class: children get tired of single words and struggle to understand the teacher's instructions. If you use the Birkenbihl Approach, it works differently. The feeling of success comes when you understand a whole sentence or text. The child who uses the Birkenbihl Method can answer intuitively and automatically. It learns brain-friendly and holistically. The difference is that the brain does not "depend" on individual terms, but learns, understands and reacts holistically.

Practice makes perfect: The more often the child uses the Birkenbihl Approach at home, the easier it is to use the language in class. The lessons become games and exercises ― to strengthen and expand their skills.

3. Thinking in the Foreign Language

If you learn a vocabulary pair, such as "Messer-knife," the brain must always translate first when using the foreign language. Only then can you decipher the meaning of the whole sentence, and only then can you react. You'll notice, it just takes too long. The Birkenbihl Approach avoids this problem. By learning in whole sentences, you always have a whole sentence "ready." You don't have to think about what "knife" means in German but use the term intuitively. You think in the foreign language right from the start.
This level is challenging to attain with traditional school teaching. Pupils engage far too little with the language (this is, of course, due to the limited time they spend in class, but also to the lack of motivation to do more at home). Usually, only those who study a foreign language at university or use a foreign language daily at work begin to think in the language and thus use it automatically. The Birkenbihl Approach shortens this process. Pupils, too, can immerse themselves in the foreign language within a short time and learn it sustainably.
IoT devices provide the data and big data analytics allows for extracting insights. However, a monumental challenge arises: Where will all this data be processed and stored?

The Internet of Things (IoT) has been a hot area in the last few years. The number of connected devices has been growing steadily, with Gartner forecasting that IoT devices will outnumber the world's population in 2017: 8.4 billion connected things in 2017 and 50 billion in 2020. These connected devices generate massive amounts of data. Today, devices and appliances that were not previously connected (fridges, cars, watches, etc.) are equipped with sensors and peripherals that generate data.

Alongside IoT, enterprises are betting hard on big data. Data is the most precious resource of our digital economy. Many enterprises are applying big data analytics to harness this vast amount of data and take advantage of the insights it provides: identifying trends and patterns to deliver improved services and experiences to their customers, helping companies monitor and streamline their operations, or performing preventive maintenance of machinery and infrastructure.

The business process is similar across many applications. IoT devices provide the data and big data analytics allows for extracting insights. However, a monumental challenge arises: Where will all this data be processed and stored?

The rapid growth of computing devices is not the only driver for the explosion of data challenging the central cloud computing model. Another important trend has caused a shift in the production and consumption of data: user-generated content at the edge of the network. Mobile internet and social media have empowered ordinary people to become producers of data. Today, nearly 500 million photos are uploaded to Facebook and Instagram and roughly 500 thousand hours of video are uploaded to YouTube daily. Indeed, more video is uploaded to YouTube in one month than the three major US networks created in over 60 years.
These figures give a sense of the astonishing amount of data that users generate on a regular basis. In machine applications, there is a similar trend. Edge devices have many embedded sensors or even cameras generating massive amounts of data. Transporting all the data generated at the edge to the central cloud, processing and analyzing it on servers in remote data centres, and then transporting it back to edge devices (whether a smartphone, a fridge, a car, or a robot) is neither feasible nor scalable.

Centralized cloud computing has two big limitations when it comes to meeting the demands of a connected world: bandwidth and latency. Using the central cloud, bandwidth will be the bottleneck for the growth of IoT. Even if network capacity were miraculously increased to cope with the data, the laws of physics inhibit remote processing of data in the central cloud due to the large latencies of long-haul data transmission. It is clear that we need a new computing model to cope with the hyper-connected world.

Decentralization: The Future of Computing

Computing started with a centralized architecture of mainframes, which then evolved to a distributed computing model in the 1980s as personal computers came into play. The Internet era initially began with a centralized client-server architecture that later became the current central cloud computing model. The question is, where are we going next?

We clearly need a paradigm shift to transform tens of billions of devices from a challenge to an opportunity, unleashing the power of computing devices at the edge. A pragmatic solution is to build a fully decentralized architecture where every computing device is a cloud server. Edge devices can process data locally, can communicate with other devices directly and can share resources with other edge devices to unburden central cloud computing resources. This architecture is faster, more efficient and more scalable. Also, there are significant social and economic implications.
A decentralized architecture is more private by nature, since it minimizes central trust entities, and more cost-efficient, since it leverages unused computing resources at the edge. Does this mean central cloud computing is dead? I believe not. Edge cloud will not replace central cloud; some applications may be better suited to centralized resources. However, the central cloud (servers in data centers) should be considered as computing nodes working alongside all the edge devices to build a distributed edge cloud architecture. Is your business ready to harness computing resources at the edge to achieve better efficiency and privacy, and to create opportunities for new applications?
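The latency half of the bandwidth-and-latency argument above can be made concrete with a rough back-of-envelope sketch. The distances and the fibre propagation factor below are illustrative assumptions, and real round trips add routing and processing delays on top of this physical floor:

```python
# Back-of-envelope estimate of the physical lower bound on round-trip
# latency to a remote data centre, versus a nearby edge node.
# Distances and the fibre propagation factor are illustrative assumptions.

SPEED_OF_LIGHT_M_S = 299_792_458
FIBRE_FACTOR = 0.67  # light travels at roughly 2/3 c in optical fibre

def min_round_trip_ms(distance_km: float) -> float:
    """Physical minimum round-trip time over fibre, ignoring all processing."""
    one_way_s = (distance_km * 1000) / (SPEED_OF_LIGHT_M_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

central_cloud = min_round_trip_ms(2000)  # a data centre ~2000 km away
edge_node = min_round_trip_ms(10)        # an edge server ~10 km away

print(f"central cloud: {central_cloud:.1f} ms, edge: {edge_node:.3f} ms")
```

Even under these generous assumptions, the distant data centre cannot beat roughly 20 ms per round trip, while a nearby edge node sits well under a millisecond; no amount of added bandwidth changes that floor.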
How often do you think of hardware in relation to your business's cybersecurity? If you answered "not often," then you're not alone; many companies would give the same answer. When it comes to cybersecurity, individuals and organizations typically think about the use of antivirus programs, firewalls, and intrusion prevention systems (IPS). But few realize that hardware also plays an important role in security.

The case of Spectre and Meltdown

In 2018, two critical architectural flaws in Intel CPUs were disclosed. Called Spectre and Meltdown, the two hardware vulnerabilities allowed programs to steal data being processed in a computer. By exploiting them, malicious actors were able to bypass system security protections to steal passwords, personal photos, emails, and other sensitive information. Spectre and Meltdown affected every computer chip manufactured in the last 20 years. They threatened not just computers, but also servers, smartphones, and Internet of Things (IoT) devices like routers and smart TVs. Since the vulnerabilities existed at the hardware level, patches could not be deployed without causing a performance hit.

What is hardware security?

The Spectre and Meltdown attacks are examples of a typical hardware attack. Hardware attacks were once too difficult or expensive to execute, but they are becoming much easier to carry out by taking advantage of vulnerabilities in hardware manufacturing supply chains. This means that any hardware component could be compromised to behave maliciously. The complexity of integrated circuits and microelectronics also makes hardware vulnerabilities difficult to detect: a single physical modification to one circuit can be hidden among many valid components and may remain undetected for an extended period. Hardware breaches are carried out by targeting software vulnerabilities, as well as by carrying out web application attacks and strategic compromises.
These threats put employees and customers at risk, cause reputational damage, and impact revenue performance. Businesses need to do more to protect themselves from such attacks. According to a study by Dell EMC, almost two-thirds of organizations suffered at least one data breach in the last 12 months as a result of an exploited hardware vulnerability, and almost half of the respondents experienced two hardware-level attacks. The study further notes the lack of a consistent hardware-level security approach: nearly two-thirds of the respondents have a moderate to extremely high level of vulnerability to hardware supply chain threats, yet only 59% have implemented a hardware security plan.

Mitigating hardware threats

With the rise of hardware-level breaches, hardware should also be considered an important aspect of any business's cybersecurity. Once a system is infiltrated, the consequences can be catastrophic for your data and business: data loss, lower financial revenues, diminished competitive advantage, and damaged credibility. To mitigate the risk of hardware threats, businesses have to maintain an accurate threat model. For instance, some businesses may still have threat models that were designed at a time when attack hardware cost significant money to develop. Now that card skimmers that compromise credit cards are cheaply sold on the black market, organizations have to update their threat models accordingly. Lastly, invest in supply chain validation initiatives to lessen the chances of future hardware breaches. This should involve buying directly from authorized vendors, verifying the hardware, and conducting in-depth inspections. Businesses can also design systems that can detect and contain hardware-level attacks. Now that you understand the effects of hardware vulnerabilities on cybersecurity, it's time to take the next step.
Fidelis offers Managed Security services that include managed endpoint protection software, firewall management, backup systems monitoring, and network risk assessment, among many others. No matter your cybersecurity problem, we can solve it for you. Download our FREE managed services eBook today to learn how you can benefit from working with us.
Ransomware victims in the US who pay a ransom to cybercriminals for the decryption key may be required to disclose their payment within 48 hours of making it. The Ransom Disclosure Act, introduced by US Senator Elizabeth Warren and Representative Deborah Ross, would force organizations that are victims of ransomware attacks and pay the ransom to disclose the payment details. The amount of the ransom sought and paid, the kind of currency used to pay the ransom, and any known information about the attackers requesting the ransom would all have to be reported. The information would have to be provided to the Department of Homeland Security (DHS) within 48 hours of the payment being completed. The bill's objective is to provide DHS with more information on ransomware attacks so it can better combat the threat they pose to companies and other organizations throughout the country. Ransomware attacks are on the rise, yet authorities lack the vital information needed to pursue cybercriminals. The law would also mandate disclosure when ransoms are paid, uncovering how much money cybercriminals are siphoning from American companies to fund criminal operations. Ransomware attacks become increasingly prevalent every year, posing a danger to national security, the economy, and vital infrastructure. Because victims aren't compelled to disclose attacks or payments to federal authorities, the government lacks the information required to understand cybercriminal operations and defend against them. The information provided under this law would help ensure that both the federal government and the business sector are prepared to tackle the risks cybercriminals pose to the country. The Ransom Disclosure Act is still only a proposal: before President Biden can sign it into law, it must be passed by the House of Representatives and the Senate.
In network security today, a firewall may be software or hardware that creates a barrier between our internal network and an untrusted external network. You can look at a firewall as a set of related programs that enforce an access control policy between two or more networks. The name "firewall" has an unusual origin: it was first used to describe the partition that separated the engine compartment from the interior of an automobile. In the networking world, the firewall is the first line of defense and the technology that allows us to segment the network into physically separate subnetworks. In this way it helps us limit the risk of compromising the entire network in case of a security attack, much like how original firewalls worked to limit the spread of a fire. A firewall relies on two basic mechanisms:
- One mechanism blocks traffic.
- The second mechanism permits traffic.
A firewall is a set of programs located at a network gateway that protects the resources of a private network from users on other networks. These are basic firewall services:
- Static packet filtering
- Circuit-level firewalls
- Proxy server
- Application server
A firewall works like a guard, either blocking traffic or permitting it based on the Layer 4 port number. Modern firewall designs are much more complex and are developing the ability to block or permit traffic by reading Application-layer data. If you are hosting a service for use over the network, firewalls can manage public access to private network resources; they can also log all attempts to enter the private network, and some can trigger alarms. Firewalls filter packets based on a variety of parameters, such as their source or destination address and port number. Network traffic can also be filtered based on the protocol used (HTTP, FTP, or Telnet). The result is that the traffic is either forwarded or rejected. Firewalls can also use packet attributes or state to filter traffic.
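A static packet filter of the kind described above can be sketched in a few lines. The rule fields, first-match-wins ordering, and default-deny policy below are illustrative assumptions rather than any particular product's behaviour:

```python
# Minimal sketch of static packet filtering: each rule matches on protocol,
# destination port, and source-address prefix, and either permits or blocks.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    src_addr: str
    dst_port: int
    protocol: str  # e.g. "tcp" or "udp"

@dataclass
class Rule:
    action: str                     # "permit" or "block"
    protocol: str = "any"
    dst_port: Optional[int] = None  # None matches any port
    src_prefix: str = ""            # "" matches any source address

    def matches(self, pkt: Packet) -> bool:
        return (self.protocol in ("any", pkt.protocol)
                and (self.dst_port is None or self.dst_port == pkt.dst_port)
                and pkt.src_addr.startswith(self.src_prefix))

def filter_packet(rules: List[Rule], pkt: Packet) -> str:
    """First matching rule wins; unmatched traffic is blocked by default."""
    for rule in rules:
        if rule.matches(pkt):
            return rule.action
    return "block"

rules = [
    Rule("block", src_prefix="10.0.0."),           # drop a suspect subnet first
    Rule("permit", protocol="tcp", dst_port=80),   # allow HTTP
    Rule("permit", protocol="tcp", dst_port=443),  # allow HTTPS
]

print(filter_packet(rules, Packet("203.0.113.5", 443, "tcp")))  # permit
print(filter_packet(rules, Packet("10.0.0.9", 443, "tcp")))     # block
print(filter_packet(rules, Packet("203.0.113.5", 23, "tcp")))   # block
```

Note that rule order matters: the subnet block is listed before the port permits, so the filter drops traffic from the bad subnet even when it targets an otherwise-allowed port.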
What is Open Source Intelligence (OSINT)?

Open Source Intelligence is a term originally coined by intelligence agencies. OSINT uses freely available, open sources such as print media, TV or the Internet to collect information and gain intelligence. In addition to government organizations, private sector companies also use various OSINT techniques and tools. In this century, information is abundant and readily available through various digital channels. Open Source Intelligence (OSINT) has emerged as a crucial discipline in the field of intelligence gathering, leveraging publicly available information to analyze and derive insights for various purposes. OSINT involves collecting, analyzing, and interpreting data from openly accessible sources such as social media, websites, news articles, public records, and more. It plays a vital role in understanding current events, assessing threats, making informed decisions, and conducting research across a wide range of domains.
- What is Open Source Intelligence (OSINT)?
- Clarifying the Concept of OSINT and its Scope
- Differentiating OSINT from Other Intelligence Gathering Methods
- Sources of Open Source Intelligence
- The Role of OSINT in Cybersecurity
- Strengthening Cybersecurity Strategies with OSINT Insights
- OSINT Tools and Techniques
- Ethical Considerations in OSINT
- Challenges and Limitations of OSINT
- Strategies to Mitigate Limitations and Ensure Reliable Results
- Future Trends in OSINT
- Using OSINT for Business and Decision-Making
- Incorporating OSINT Data in Strategic Planning and Decision-Making Processes
- Personal Safety and OSINT Awareness
- Frequently Asked Questions
- 1. What exactly is Open Source Intelligence (OSINT)?
- 2. How is OSINT different from other intelligence gathering methods?
- 3. What are the main sources of OSINT data?
- 4. How does OSINT contribute to cybersecurity efforts?
- 5. Can OSINT be used for competitive analysis in business?
- 6. What are some popular tools and techniques for conducting OSINT?
- 7. Are there any ethical concerns related to OSINT practices?
- 8. What challenges can arise when using OSINT data?
- 9. How is OSINT likely to evolve in the coming years?
- 10. How can individuals protect their privacy while dealing with OSINT?

What is Open Source Intelligence (OSINT)?

Open Source Intelligence refers to the collection, analysis, and interpretation of information from publicly available sources. These sources can include online platforms, media outlets, government publications, academic research, and other public resources. The key characteristic of OSINT is that it relies on information that is accessible to anyone without requiring specialized access or privileges.

Clarifying the Concept of OSINT and its Scope

OSINT encompasses a wide range of activities, including:
- Data Collection: Gathering information from sources such as social media platforms (Twitter, Facebook, etc.), online forums, blogs, news websites, public databases, satellite imagery, and more.
- Data Analysis: Scrutinizing and evaluating collected data to identify patterns, trends, correlations, and potential implications.
- Information Fusion: Integrating OSINT with other types of intelligence (such as HUMINT – human intelligence, SIGINT – signals intelligence, etc.) to create a more comprehensive understanding of a situation.
- Threat Assessment: Identifying potential risks, vulnerabilities, or threats based on analyzed information.
- Situational Awareness: Monitoring ongoing events, activities, and developments to stay informed about changes that might impact various domains, including security, business, and geopolitics.
- Decision Support: Providing decision-makers with timely and relevant information to assist in making informed choices.
- Research: Conducting studies and investigations across diverse fields, including academia, journalism, cybersecurity, and law enforcement.
Differentiating OSINT from Other Intelligence Gathering Methods

OSINT differs from other intelligence gathering methods in several key ways:
- Source of Information: OSINT relies on publicly available information, whereas other methods might involve classified or restricted data.
- Accessibility: OSINT is accessible to anyone with internet access, making it a valuable tool for researchers, analysts, and the general public. Other methods may require specialized clearance or permissions.
- Invasiveness: OSINT is non-intrusive and does not involve direct interactions with subjects. In contrast, other methods like HUMINT involve direct engagement with individuals or sources.
- Speed and Timeliness: OSINT can provide real-time or near-real-time insights due to the rapid dissemination of information through digital platforms.
- Comprehensiveness: OSINT provides a broad overview by leveraging a diverse range of sources, while other methods might focus on specific aspects or channels of information.
OSINT has become increasingly significant in the digital age due to the explosion of publicly available information and the need for timely, relevant insights across various sectors. Its scope encompasses data collection, analysis, and fusion, making it a valuable tool for understanding complex situations, informing decisions, and enhancing situational awareness.

Sources of Open Source Intelligence

Open Source Intelligence (OSINT) draws from a wide array of sources that provide publicly accessible information. These sources collectively contribute to a comprehensive understanding of various subjects. Some prominent sources of OSINT include:
- Online Platforms: Websites, blogs, and other digital platforms where individuals and organizations share information publicly.
- Social Media: Platforms like Twitter, Facebook, Instagram, LinkedIn, and others provide a wealth of data in the form of posts, comments, photos, and videos.
- News Outlets: Online news articles, broadcasts, and press releases from reputable media sources offer up-to-date information on events, trends, and developments.
- Forums and Discussion Boards: Online communities where people discuss topics of interest, often providing valuable insights and opinions.
- Government Publications: Reports, policies, legislation, and official statements released by government agencies.
- Academic Research: Scholarly articles, research papers, and studies published by universities and research institutions.
- Public Records: Official records such as court documents, property records, business registrations, and more.
- Satellite Imagery: Images captured from satellites, providing visual data for geospatial analysis.
- Financial Data: Stock market data, financial reports, and economic indicators can offer insights into business activities and market trends.
- Geospatial Data: Maps, geographic information systems (GIS), and spatial datasets that aid in understanding physical locations and their attributes.
- Dark Web Monitoring: Tracking activities on parts of the internet not indexed by traditional search engines, which can reveal illicit or underground activities.
- Weather Data: Meteorological information can be useful in various contexts, from disaster response to understanding localized conditions.

The Role of OSINT in Cybersecurity

OSINT helps detect potential cyber threats by monitoring online forums, social media, and other sources for discussions or indications of malicious activities, such as data breaches, hacking attempts, or phishing campaigns. By analyzing public information about software, hardware, and networks, OSINT aids in identifying vulnerabilities that attackers might exploit. OSINT can assist in tracing the origin of phishing emails, understanding the tactics used, and uncovering the infrastructure behind these attacks.
Information collected through OSINT can provide insights into malware samples, their distribution methods, and the command and control infrastructure.

Digital Footprint Analysis

OSINT contributes to assessing an organization's online presence and potential weak points that attackers could target.

Threat Actor Profiling

OSINT enables the profiling of threat actors, understanding their motivations, techniques, and potential targets. During a cybersecurity incident, OSINT can help gather real-time information to support incident response efforts and inform decision-making. OSINT also aids in understanding competitors' digital activities, potential security weaknesses, and strategies.

Strengthening Cybersecurity Strategies with OSINT Insights

Incorporating OSINT into cybersecurity strategies enhances an organization's ability to:
- Proactively Identify Threats: By monitoring online activities, organizations can detect potential threats early and take preventive measures.
- Enhance Situational Awareness: OSINT provides a broader perspective on the cybersecurity landscape, helping organizations stay informed about emerging risks.
- Optimize Incident Response: Real-time OSINT data assists in rapidly responding to incidents and making informed decisions to contain and mitigate damage.
- Improve Risk Management: By analyzing OSINT data, organizations can identify and prioritize potential risks and vulnerabilities.
- Support Decision-Making: OSINT insights guide informed decision-making in deploying security measures and allocating resources effectively.

OSINT Tools and Techniques

Open Source Intelligence (OSINT) relies on a variety of tools and techniques to collect, analyze, and interpret publicly available information. These tools empower researchers, analysts, and investigators to efficiently gather insights from online sources.
- Search Engines: Utilizing advanced search operators on search engines like Google, Bing, and DuckDuckGo to narrow down results and find specific information.
- Social Media Monitoring Tools: Platforms like Hootsuite, TweetDeck, and Brandwatch help monitor and analyze social media conversations, mentions, and trends.
- Web Scraping: Tools like Beautiful Soup and Scrapy enable the automated extraction of data from websites.
- Data Mining: Employing specialized data mining tools to extract patterns, trends, and relationships from large datasets.
- Domain and IP Analysis: Tools like WHOIS and DNS lookup services provide information about domain ownership and IP addresses.
- Metadata Analysis: Extracting metadata from files (e.g., photos, documents) to gather information about their origins.
- Geolocation Tools: Leveraging tools like Shodan and Censys to discover devices connected to the internet and gather information about their locations.
- Social Network Analysis (SNA): Using SNA tools like Gephi to visualize and analyze relationships and connections within social networks.
- Image and Video Analysis: Tools like reverse image search engines and video analysis software can provide insights into the origin and context of media.
- OSINT Frameworks: Utilizing comprehensive OSINT frameworks like Recon-ng and Maltego to streamline data collection and analysis.
- Satellite Imagery Analysis: Platforms like Google Earth and Sentinel Hub offer access to satellite imagery for geospatial analysis.
- Online OSINT Communities: Engaging with online communities and forums like Reddit, GitHub, and Stack Overflow to gather insights from discussions and shared knowledge.

Ethical Considerations in OSINT

- Respect for Privacy: OSINT practitioners should avoid invading individuals' privacy and should not gather personal or sensitive information without proper authorization.
- Informed Consent: Obtain consent when sharing or using information from publicly available sources, especially when the source is a private individual.
- Legal Boundaries: Adhere to laws and regulations governing data collection and use. Different jurisdictions have varying rules about what information can be collected and how it can be used.
- Data Verification: Ensure the accuracy of information before using or sharing it to prevent spreading false or misleading content.
- Minimization of Harm: Avoid actions that could cause harm, harassment, or unintended consequences to individuals or organizations.
- Attribution and Credit: Give credit to the original creators of content when sharing or using their work.
- Consideration of Context: Take into account the broader context of information to avoid misinterpretation or misrepresentation.
- Transparency: Be transparent about your intentions and the sources of your information when presenting findings or insights.
While OSINT tools and techniques offer valuable insights, ethical considerations are essential to ensure responsible and respectful use of publicly available information. Practitioners must navigate legal boundaries, respect data privacy, and uphold ethical standards to maintain the integrity of their OSINT activities.

Challenges and Limitations of OSINT

While Open Source Intelligence (OSINT) offers numerous advantages, it also comes with several challenges and limitations:
- Data Accuracy: Information gathered from publicly available sources may not always be accurate or up-to-date. False or misleading data can lead to incorrect analyses and decisions.
- Misinformation: OSINT can inadvertently propagate misinformation if unreliable sources are not properly vetted or if false information is widely shared.
- Data Overload: The sheer volume of available information can overwhelm analysts, making it difficult to identify relevant and actionable insights.
- Bias: Biases in data sources or the methods used for collection and analysis can impact the objectivity and accuracy of OSINT results.
- Lack of Context: Information gathered from disparate sources may lack context, making it challenging to fully understand complex situations.
- Legal and Ethical Concerns: Collecting certain types of data, especially personal or sensitive information, may raise legal and ethical issues.
- Language and Cultural Barriers: Analyzing data from diverse sources in different languages and cultural contexts can be challenging and may result in misunderstandings.

Strategies to Mitigate Limitations and Ensure Reliable Results

- Source Verification: Thoroughly verify the credibility and reliability of data sources before using them for analysis.
- Cross-Referencing: Compare information from multiple sources to identify inconsistencies and verify accuracy.
- Contextual Analysis: Seek to understand the broader context of information to avoid misinterpretation.
- Data Filtering: Use filtering and prioritization techniques to manage data overload and focus on relevant information.
- Critical Thinking: Apply critical thinking skills to assess the credibility and potential biases of sources.
- Continuous Learning: Stay updated on the latest tools, techniques, and best practices in OSINT to enhance skills and knowledge.
- Ethical Guidelines: Adhere to ethical principles and legal regulations while collecting and using information.

Future Trends in OSINT

- Integration of AI and Machine Learning: AI-powered tools will assist in automating data collection, analysis, and pattern recognition, enabling faster and more accurate insights.
- Natural Language Processing: NLP technology will enhance the ability to analyze and interpret unstructured text data from various sources.
- Predictive Analysis: Advanced analytics will enable the prediction of future events and trends based on historical data and patterns.
- Deep Web and Dark Web Analysis: OSINT will increasingly focus on monitoring and understanding activities in hidden parts of the internet.
- Visualization Tools: Enhanced data visualization techniques will make it easier to interpret complex information and identify patterns.
- Geospatial Intelligence: The use of geospatial data and technologies will continue to play a significant role in OSINT, aiding in location-based analysis.
- Global Collaboration: OSINT practitioners will collaborate across borders to gather insights and share knowledge for more comprehensive analyses.
- Focus on Cyber Threat Intelligence: OSINT will play a crucial role in identifying and responding to cyber threats, given the growing importance of cybersecurity.
- Privacy Protection: As concerns about data privacy increase, OSINT practices will need to adapt to ensure the responsible use of information.

Using OSINT for Business and Decision-Making

Organizations can leverage Open Source Intelligence (OSINT) to gather valuable insights that inform their business strategies and decision-making processes:
- Market Analysis: OSINT enables businesses to monitor market trends, competitor activities, and consumer sentiments. By analyzing social media conversations, reviews, and forums, companies can identify emerging trends and adapt their products or services accordingly.
- Brand Monitoring: Organizations can track online mentions and discussions related to their brand to assess their reputation and respond effectively to customer feedback.
- Customer Insights: Analyzing online discussions and feedback can provide valuable insights into customer preferences, pain points, and expectations, helping companies tailor their offerings.
- Competitor Analysis: OSINT allows businesses to gather information about competitors' strategies, product launches, and customer interactions, enabling them to stay competitive and make informed decisions.
- Risk Assessment: By monitoring OSINT sources, companies can identify potential risks, such as emerging security threats or regulatory changes, and proactively develop mitigation strategies.
- Strategic Planning: OSINT data can contribute to strategic planning by providing a holistic view of market dynamics, customer behavior, and industry trends.
- Crisis Management: OSINT can assist in detecting and responding to potential PR crises, enabling organizations to address issues before they escalate.

Incorporating OSINT Data in Strategic Planning and Decision-Making Processes

To effectively integrate OSINT data into strategic planning and decision-making, organizations should:
- Define Objectives: Clearly outline the goals and objectives for using OSINT data, ensuring alignment with the organization's overall strategy.
- Identify Relevant Sources: Determine which OSINT sources are most relevant to the organization's industry, market, and goals.
- Data Collection and Analysis: Establish processes for collecting, filtering, and analyzing OSINT data to extract meaningful insights.
- Cross-Referencing: Verify OSINT data through cross-referencing with multiple sources to enhance accuracy and reliability.
- Real-Time Monitoring: Implement real-time monitoring to stay updated on evolving trends, threats, and opportunities.
- Integration with Existing Data: Integrate OSINT data with internal data sources for a comprehensive view of the business landscape.
- Scenario Planning: Use OSINT insights to develop various scenarios and assess their potential impact on the business.
- Regular Review: Continuously review and update OSINT-driven strategies to ensure they remain relevant and effective.
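The cross-referencing step described above (verifying OSINT data against multiple sources) can be sketched as a simple corroboration count. The source names and claims below are invented purely for illustration:

```python
# Sketch of cross-referencing: count how many independent sources back
# each claim, and separate corroborated claims from single-source ones.
from collections import defaultdict

def cross_reference(reports, min_sources=2):
    """reports maps source name -> set of claims it makes.
    Returns (corroborated, unverified) claim sets."""
    support = defaultdict(set)
    for source, claims in reports.items():
        for claim in claims:
            support[claim].add(source)
    corroborated = {c for c, srcs in support.items() if len(srcs) >= min_sources}
    return corroborated, set(support) - corroborated

# Made-up example data: three sources, partially overlapping claims.
reports = {
    "news_site":   {"breach at example.com", "credential theft reported"},
    "forum_post":  {"breach at example.com"},
    "vendor_blog": {"credential theft reported", "ransom demanded"},
}
corroborated, unverified = cross_reference(reports)
print(sorted(corroborated))  # claims backed by at least two sources
print(sorted(unverified))    # single-source claims needing more checking
```

Real pipelines would of course also weight source credibility and fuzzy-match near-duplicate claims, but the corroboration count is the core of the idea.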
Personal Safety and OSINT Awareness

Individuals can take steps to protect their personal information and minimize their digital footprint in the context of OSINT:
- Social Media Privacy Settings: Adjust privacy settings on social media platforms to control who can see your posts and personal information.
- Be Mindful of Sharing: Avoid oversharing personal details, location information, and sensitive data online.
- Use Strong Passwords: Utilize strong, unique passwords for online accounts and consider using a password manager.
- Limit Public Profiles: Consider using pseudonyms or limiting the personal information you share on public profiles.
- Regularly Review Accounts: Periodically review and update privacy settings, permissions, and connected apps for online accounts.
- Secure Wi-Fi Networks: Use secure, encrypted Wi-Fi networks and avoid connecting to public Wi-Fi without a VPN.
- Educate Yourself: Stay informed about common OSINT techniques and how personal information can be exploited.
- Monitor Online Presence: Regularly search for your own name and review the information available about you online.
- Think Before You Click: Be cautious when clicking on links or downloading files from unknown sources.
- Consider a VPN: Use a Virtual Private Network (VPN) to encrypt your internet connection and enhance online privacy.

Frequently Asked Questions

1. What exactly is Open Source Intelligence (OSINT)?

Open Source Intelligence (OSINT) refers to the collection, analysis, and interpretation of publicly available information from various sources, such as websites, social media, news articles, and government publications. OSINT is used to gather insights, assess risks, and make informed decisions across a range of domains.

2. How is OSINT different from other intelligence gathering methods?
OSINT relies on publicly accessible information, while other methods like HUMINT (human intelligence) involve direct interactions with individuals, and SIGINT (signals intelligence) involves intercepting and analyzing communication signals. OSINT is non-intrusive and does not require specialized access.

3. What are the main sources of OSINT data?

OSINT data comes from sources like online platforms (websites, social media), news outlets, forums, government publications, academic research, satellite imagery, public records, and more.

4. How does OSINT contribute to cybersecurity efforts?

OSINT aids in cybersecurity by detecting and preventing threats through monitoring online discussions, identifying vulnerabilities, analyzing malware, and providing real-time insights during incidents.

5. Can OSINT be used for competitive analysis in business?

Yes, OSINT is valuable for competitive analysis. It helps businesses monitor competitors, track market trends, analyze consumer sentiment, and gain insights into industry developments.

6. What are some popular tools and techniques for conducting OSINT?

Popular OSINT tools and techniques include advanced search operators on search engines, social media monitoring platforms, web scraping, data mining, domain and IP analysis, geolocation tools, image analysis, and more.

7. Are there any ethical concerns related to OSINT practices?

Yes, ethical concerns include privacy invasion, misinformation propagation, bias, and legal boundaries. OSINT practitioners should respect privacy, verify sources, avoid harm, and adhere to laws and ethical guidelines.

8. What challenges can arise when using OSINT data?

Challenges include data accuracy, misinformation, data overload, biases, lack of context, legal and ethical considerations, and language barriers.

9. How is OSINT likely to evolve in the coming years?

OSINT is expected to evolve by integrating AI, machine learning, and automation for more efficient data collection and analysis.
It will likely play a larger role in cybersecurity, predictive analysis, and global collaboration.

10. How can individuals protect their privacy while dealing with OSINT?
Individuals can protect their privacy by adjusting social media privacy settings, avoiding oversharing, using strong passwords, limiting public profiles, securing Wi-Fi networks, staying informed about OSINT techniques, and being cautious online.

In a world overflowing with information, Open Source Intelligence (OSINT) emerges as a powerful tool. It serves as both a sentinel for cybersecurity and a wellspring of insights for decision-makers. While OSINT offers immense possibilities, ethical considerations and awareness of its limitations are vital. As we gaze toward the future, the integration of AI and the responsible use of OSINT promise to shape a world where knowledge is not just accessible, but harnessed for greater good while safeguarding privacy and ethical boundaries.
California dairy cows are giving the state's power grid a blast of renewable energy, but the real money maker in bovine biogas may lie elsewhere. Pacific Gas and Electric Company announced Tuesday it will receive up to three billion cubic feet of renewable natural gas a year from BioEnergy Solutions, a waste-to-energy firm. That's enough biomethane to meet the electricity needs of approximately 50,000 PG&E residential customers, the utility company says. PG&E and other utilities are motivated to incorporate renewable energy sources, since California regulators have directed them to make renewable energy at least 20% of their electricity supplies by 2010. Qualifying renewable sources include solar, wind, biomass, geothermal, and small hydroelectric. The technology used to convert grass and grain-fueled bovine emissions into electric energy is itself relatively simple. Manure is collected in a holding area (poo lagoon?), which is sealed in plastic to create an oxygen-free atmosphere that speeds bacterial growth. Methane gas is produced by the natural digestive process of the bacteria. The gas is then collected and "scrubbed" before being sent to the power plant. The trick is not in extracting gas from cows (as anyone who has ever visited a farm can attest), but in converting the gas to methane and then getting it to the power plant. Herd size and proximity to an existing natural gas pipeline are two key factors in a biogas economy. So how many cattle have to be enlisted to fulfill the BioEnergy Solutions contract with PG&E? If a 2,500-head dairy can provide enough waste to power more than 1,000 homes, as the company says, then it will take the efforts of 125,000 cows to light up 50,000 homes. California has about 2,000 dairy farms, with an average of 850 cows on each one. BioEnergy Solutions contracts with dairy farmers to collect methane gas produced by cows. The company designs, installs, and maintains the equipment necessary to collect the methane.
Then it splits the revenue from the sale of the gas and carbon credits with the farmers. Compared with other renewable sources of energy, the cost of electricity derived from cattle gases breaks down this way, according to the California Energy Commission: biogas costs an estimated 13 cents a kilowatt hour; Class 5 wind energy costs around 7 cents a kilowatt hour; photovoltaic solar costs about 46 cents a kilowatt hour. Making energy from manure may seem fairly lucrative, but the real money in the biogas game is situated at the other end of the cow -- up front. Professor Jan Bertilsson of the Swedish University of Agricultural Sciences says, "95% of the gas comes from the nose. Only 5% comes from the back of the cow." Bertilsson recently received a half-million-dollar grant to study belching cows. Ruminate on that.
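The herd arithmetic in the article scales linearly and is easy to verify. A quick sketch, using only the figures quoted above (2,500 cows powering roughly 1,000 homes):

```python
# Linear scaling from the article's figures: a 2,500-head dairy
# can power more than 1,000 homes.
COWS_PER_DAIRY = 2500
HOMES_PER_DAIRY = 1000

def cows_needed(homes):
    """Cows required to power a given number of homes, assuming linear scaling."""
    return homes * COWS_PER_DAIRY // HOMES_PER_DAIRY

print(cows_needed(50_000))  # 125000
```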
How To Run a Successful Network Topology Discovery

Traverse is able to determine the layer-2/layer-3 network topology (which device is connected to which other devices) automatically when performing a network discovery. The discovery engine in Traverse uses SNMP to collect information from routers and switches in the network, including ARP/MAC address tables, routing tables, interface numbering, etc., to build the topology. The more information Traverse is able to collect, the more accurate and complete the dependency information will be. Therefore it is essential that the routers, switches, firewalls, wireless access points as well as other devices are configured to support SNMP queries. There are a number of prerequisites that will ensure an effective network discovery session:

Before starting the network discovery, ensure that SNMP-enabled devices are configured to allow queries from the DGE. Some devices limit SNMP queries to a fixed number of IP addresses (sometimes categorized as a "management station"). Other devices may implement an access-list/firewall rule that prevents SNMP queries on UDP port 161. If necessary, consult with network/security administrators to authorize such access.

On the same note, verify that the devices in the specified subnet are reachable by ICMP ping. In the first stage of the discovery session, Traverse will perform a (staggered) "ping sweep" to determine which IP addresses are active. If there is a firewall/router between the DGE and the target network, ensure that ICMP ping (echo) requests will be allowed from the DGE.

Provide all the SNMP community strings used within your network in the initial step of a discovery session. Each community string should be specified on a line by itself. The discovery engine will automatically determine which community string is applicable to each device.

Taking these extra steps will ensure that Traverse is able to determine the physical and logical topology of various nodes.
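The first stage of discovery — enumerating candidate addresses for the ping sweep — can be sketched with Python's standard `ipaddress` module. This is a simplified illustration, not Traverse's actual engine; the `ping` helper shells out to the system `ping` command using Linux-style flags:

```python
import ipaddress
import subprocess

def hosts_in_subnet(cidr):
    """Return the usable host addresses in a subnet, e.g. '192.0.2.0/29'."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def ping(ip, timeout_s=1):
    """Send a single ICMP echo request; True if the host answered.

    Uses the Linux ping flags -c (count) and -W (timeout in seconds).
    """
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# A /29 has 6 usable host addresses between the network and broadcast addresses.
print(len(hosts_in_subnet("192.0.2.0/29")))  # 6
```

A real discovery engine staggers these probes and follows up each responsive address with SNMP queries for ARP/MAC and routing tables, as described above.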
All versions of Traverse
A USDA loan is a mortgage for homes in rural or suburban counties, and you don’t need any money for a down payment. A USDA loan is designed for individuals with low-to-moderate incomes who are purchasing a home in rural or suburban areas of the United States. These mortgage loans do not require any down payment, although a minimum credit score of 640 is typically required. Additionally, homes located in rural areas or counties with populations of 20,000 or fewer residents may be eligible for a USDA loan. This guide will explain what a USDA loan is and the types of USDA mortgage, who is eligible for a USDA loan in the United States, and the pros and cons of a USDA loan.

What is a USDA loan?

A USDA loan is a type of mortgage supported by the United States Department of Agriculture, aimed at people with low-to-moderate income levels who are buying homes in rural or suburban regions. In most cases, these people are buying homes for the first time.

United States Department of Agriculture (USDA)

The United States Department of Agriculture (USDA) is a federal executive department of the U.S. government. It is responsible for various aspects of agriculture, rural development, and food safety. Established in 1862, the USDA’s mission encompasses promoting agricultural trade, ensuring food security, conserving natural resources, supporting rural communities, and fostering the overall well-being of American citizens.

There are two primary types of USDA home loans:
- Guaranteed: This type is supported by the USDA, and the application process is done through a participating lender.
- Direct: The USDA directly provides the loan, requiring direct application with the USDA. USDA Direct loans target lower-income borrowers and have more stringent criteria.
Typically, when people mention a USDA loan, they are referring to a guaranteed loan, specifically the USDA Rural Development Guaranteed Housing Loan Program — and this is the type of USDA loan that we are exploring in this article. The advantage of a USDA loan is the ability to purchase a home without making a down payment. However, it’s important to note that only fixed-rate mortgages are available; adjustable rates are not an option for USDA loans.

Types of USDA Mortgage

Below is how a USDA loan differs from other types of mortgages. Broadly, there are two primary categories of mortgages: conventional loans and government-backed loans. A conventional loan is not backed by the government. It is granted by private lenders like banks or credit unions without government insurance. However, a conventional mortgage may be backed by government-sponsored enterprises such as Fannie Mae or Freddie Mac. These mortgages require a credit score of at least 620, a debt-to-income ratio of 36%, and a down payment ranging from 3% to 10%. On the other hand, a government-backed loan is backed by a federal agency. If you fail to meet your mortgage obligations, the agency covers the lender on your behalf. When a lender extends a government-guaranteed mortgage, it’s akin to obtaining insurance on your loan. It’s generally easier to qualify for a government-backed mortgage compared to a conventional one. Among government-backed loans, the USDA Rural Development Guaranteed Housing Loan stands out due to more lenient eligibility criteria.

What are the 3 types of government-backed mortgage?

There are three main types of government-backed mortgages: FHA, VA, and USDA loans, each with distinct characteristics:
- FHA loan: A Federal Housing Administration mortgage is not restricted to a specific group. You may qualify with a down payment as low as 3.5%, a debt-to-income ratio of 43%, and a credit score of 580.
- VA loan: A Veterans Affairs mortgage targets active or retired military members. While many lenders require a credit score of 660 and a DTI of 41%, no down payment is necessary.
- USDA loan: This loan type is designed for low-to-moderate income borrowers purchasing homes in rural or suburban areas of the US. While a credit score of at least 640 and a DTI of 41% are recommended, no down payment is required.

Who is eligible for a USDA loan in the United States?

When checking your eligibility for a USDA loan, a lender carefully examines two factors, namely the type of property you intend to buy and your financial profile. They also take into account various aspects of your income, credit history, and debt obligations. Let’s discuss them one by one.

1. Property eligibility

If you are in the process of purchasing a home situated in a rural or suburban locale, you could potentially meet the eligibility criteria for a USDA loan. These eligibility parameters are contingent on population thresholds, which stipulate that certain counties have a population limit of 20,000 while others can extend up to 35,000. Should you already possess the address of the intended property you wish to acquire, you have the option to input this information into the USDA Property Eligibility Site. Within this interface, you will be prompted to designate the specific category of USDA loan that aligns with your interest. For instance, if you are considering a guaranteed USDA loan, you will select the “Single Family Housing Guaranteed” option.

2. Borrower eligibility

In order to meet the requirements for a USDA loan, several criteria must be met:
- Citizenship or Residency: You are required to be a United States citizen or a permanent resident.
- Income Level: Your household’s income should fall within the low-to-moderate range.
The specific maximum income threshold is contingent upon your geographical location, which can be ascertained by referring to the income limit designated for your county.
- Stable Income: Demonstrating a consistent and stable income for a minimum of the last two years is necessary.
- Credit History: A favorable credit history is essential. While many lenders stipulate a credit score of 640 or higher, exceptions might apply.
- Affordability Ratio: Your monthly mortgage payments, encompassing loan principal, interest, insurance, taxes, and homeowner’s association dues, should not surpass 29% of your monthly income.
- Debt-to-Income Ratio: Your additional debt payments should equate to 41% or less of your monthly income. Notably, a higher debt-to-income ratio may still allow qualification if you possess a notably high or excellent credit score.
- Maximum Borrowing Limit: Unlike conventional mortgages, there is no predefined maximum borrowing limit for USDA loans. Lenders will assess your financial profile and approve a borrowing amount accordingly.

The pros and cons of a USDA loan

A USDA loan may suit your needs, but it’s important to understand its advantages and disadvantages. Here’s a breakdown of the pros and cons of choosing this type of mortgage.

Pros of USDA loan

Here are some positive aspects of choosing a USDA loan:
- Low and Favorable Interest Rates: Typically, USDA loans offer lower interest rates compared to conventional, FHA, or VA mortgages. If you have excellent credit, a low debt-to-income ratio, or contribute a down payment, you might secure an even more attractive rate.
- Zero Down Payment: Except for VA loans, which are exclusive to military-related borrowers, USDA loans are unique in not requiring any upfront payment.
This makes it a feasible option for individuals with limited savings.
- Affordable Insurance Costs: While USDA loans involve mortgage insurance expenses, these costs are relatively lower than those associated with other mortgage types. You’ll pay 1% of the principal at closing, along with an annual premium of 0.35% of the remaining principal. In contrast, FHA loans necessitate a 1.75% mortgage insurance premium at closing, coupled with an annual premium ranging from 0.45% to 1.05% of the mortgage amount. Conventional loans often require private mortgage insurance until achieving 20% to 22% equity, which could be time-consuming and costly without a substantial down payment.
- Refinancing Options: If you later decide to pursue refinancing for reduced monthly payments or a more favorable interest rate, you can opt for another USDA loan as part of the process.

These benefits can make a USDA loan an attractive choice for eligible borrowers.

Cons of USDA loan

Here are some important considerations to keep in mind when it comes to USDA loans:
- Location Restrictions: USDA loans are specifically designed for individuals purchasing homes in rural and suburban areas of the US. Properties in urban or densely populated areas with more than 35,000 residents may not meet the eligibility criteria for a USDA loan.
- Income Limits: To qualify for a USDA loan, your household income should fall within the low-to-moderate income range, which varies based on your county of residence.
- Fixed-Rate Only: USDA loans exclusively offer fixed-rate mortgages. Adjustable-rate loans are not available under this program.
- Single-Family Homes: USDA loans are intended for the purchase of single-family homes. If you’re interested in acquiring a multi-family property, an FHA loan might be a more suitable option.
- No Cash-Out Refinances: While you can refinance a USDA loan, it’s important to note that cash-out refinances, which allow you to access cash by leveraging your home equity, are not an option with this type of loan.

Understanding these limitations will help you determine if a USDA loan aligns with your specific homeownership goals and financial situation.
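The two affordability tests described in the borrower-eligibility section reduce to simple arithmetic. A sketch using the 29% and 41% thresholds from this article — illustrative only, since lenders apply their own underwriting rules:

```python
# Front-end ratio: housing payment vs. income (<= 29% per the USDA guideline above).
# Back-end ratio: housing plus other debt payments vs. income (<= 41%).

def usda_ratios(monthly_income, housing_payment, other_debt):
    """Return (passes_front_end, passes_back_end) for the two affordability tests."""
    front = housing_payment / monthly_income
    back = (housing_payment + other_debt) / monthly_income
    return front <= 0.29, back <= 0.41

# $5,000/month income, $1,400 housing payment, $600 in other monthly debt payments:
print(usda_ratios(5000, 1400, 600))  # (True, True)
```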
If you’ve ever had a device crash, taking all your data with it, you know how painful the experience can be. When that happens on a large scale, the effects can be devastating. Data loss costs the average business $586,000 a year, and that doesn't address the personal, emotional and cultural costs of losing everything from financial data to great works of art. The worst part? It can easily be avoided. Here are four real-world examples of data that was lost — or nearly lost — but could have been saved with a simple, foolproof backup plan. 1. NASA & the Apollo 11 Moon Landing It’s not just digital data that can be lost. Even analog tapes are subject to loss, theft or accidental destruction through any number of natural or human causes. In one dramatic example, the original tapes depicting the first lunar landing were accidentally erased along with a batch of 200,000 other tapes set to be reused to save money. Luckily, using broadcast footage and other sources, NASA was able to recreate the original tape. Backing up analog materials in multiple digital formats is an important, but often overlooked, step. 2. Pixar & "Toy Story 2" In 1998, a huge chunk of the film "Toy Story 2" was nearly lost forever because of an erroneous command that started deleting the animations. Luckily, a designer noticed that files were being deleted and “pulled the plug,” so to speak. After intense efforts, Pixar employees were able to restore the lost files. This story, like the movie, has a happy ending, but that's not the case with every backup disaster. 3. The Library of Congress & Silent Films Seventy-five percent of the roughly 11,000 silent films ever produced are lost forever. The Library of Congress released these findings late last year, demonstrating that data loss is an issue that has plagued society since we were able to record data in a meaningful way. Today, backup products and services can prevent these types of disasters — but only if we take the time to protect our data. 4. 
T-Mobile & Sidekick

Finally, a more recent example: T-Mobile was in the news in 2009 when Danger, the manufacturer of T-Mobile's Sidekick phone, experienced a major server crash. As a result, a significant amount of Sidekick users' personal data that was stored in the cloud disappeared from their phones for good, including contacts, photos, calendars and to-do lists. This embarrassing snafu was a great reminder for T-Mobile and all observers of the importance of backing up all data. What are some of the worst data disasters you’ve heard about or witnessed?
About Swap

Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to a preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available. Swapping is necessary for two important reasons. First, when the system requires more memory than is physically available, the kernel swaps out less used pages and gives memory to the current application (process) that needs the memory immediately. Second, a significant number of the pages used by an application during its startup phase may only be used for initialization and then never used again. The system can swap out those pages and free the memory for other applications or even for the disk cache.

Adding Swap: File Method

I am using this method over a drive partition simply because I didn't create a partition to use.

1 – Locate an area on disk to place the swap file. In my Raspberry Pi setup I am going to use /root.

2 – The following dd command example creates a 512 MB swap file with the name "swap" under the /root directory:

# dd if=/dev/zero of=/root/swap bs=1M count=512

3 – Change the permissions of the swap file so that only root can access it:

# chmod 600 /root/swap

4 – Set this file up as a swap file using the mkswap command:

# mkswap /root/swap

5 – Enable the newly created swap file:

# swapon /root/swap

You're done. But wait! I don't want to turn the swap on each time I reboot, that's just silly. To make this swap file available after a reboot, add the following line to the /etc/fstab file:

/root/swap swap swap defaults 0 0

Now hopefully my Raspberry Pi will be a little less prone to locking up due to being out of memory.
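Before running dd, you need to pick a swap size. One common rule of thumb is to match installed RAM up to a cap; a small Python sketch of that sizing rule (this is a convention, not an official kernel recommendation, and the helper name is mine):

```python
# Rule of thumb: swap equal to RAM for small-memory systems, capped at 2 GB.
def suggested_swap_mb(mem_mb):
    """Suggested swap-file size in MB for a machine with mem_mb of RAM."""
    return mem_mb if mem_mb < 2048 else 2048

# A 512 MB Raspberry Pi would get the 512 MB swap file created above
# (dd with bs=1M count=512 writes exactly that many megabytes).
print(suggested_swap_mb(512))   # 512
print(suggested_swap_mb(8192))  # 2048
```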
Can Machine Learning Save Us from Us?

Among the top headlines in Google News's Technology section today was criminal hackers' use of AI (Artificial Intelligence) and its subset, ML (Machine Learning).1 Opening the article, I found a synopsis of a Tech Republic report, "Cybersecurity: Let's Get Tactical," in which the authors give ten ways cybercriminals are attacking with AI,2 including:
- phishing attacks, in which, upon gaining credentialed access, automatic scripts can wreak havoc, including draining bank accounts
- credential stuffing and brute force attacks, in which AI systems try passwords — and password possibilities — on many websites
- bulletproof hosting services that use automation to hide the tracks of malicious websites, so they can't be stopped by law enforcement, or often flagged by network scanning tools

The fact is, it's an arms race. Both malware and criminal sites would be pretty quickly and easily identified on a network by the nature of their activity. So the criminals try to disguise their malware in benign code and their sites in bulletproof hosting schemes. The way they keep the ruse going is through machine learning adapting to changing circumstances.

The Good Side

The most dangerous cyber threats to organizations and individuals hide within everyday network traffic, cleverly disguised to avoid detection. Faced with a near constant stream of potential threat warnings, actual infections, and information on network activity, organizations of all sizes may struggle to successfully uncover threats. The heart of the issue here is that humans are incapable of handling such an enormous amount of data and data analysis. That's where we have to rely on the strength of our machines, which have the processing power to comb through and analyze all of the noise, then identify the items that truly need attention.
Advanced machine learning can be used to classify the enormous volume of available data that is overwhelming threat researchers and traditional defenses; it can reduce the false positives/negatives, as well as the workload for human analysts, thereby enabling an organization's staff to focus exclusively on the actual threats themselves.4 Webroot's AI phishing solution works in real time, so that if an employee clicks a phishing link, the AI system sees that this is not normal behavior, signaling that this could be malicious activity. The AI opens the URL, decides on the quality of the content and has the ability to block it. It provides an intelligent response to unusual behavior. So in this case AI prevents the damage of malicious access, triggered malware, lost credentials or giving criminals access to a network.

Spam Wars, Chapter 42: Nothing but Spam Wars

A long time ago there was the original spam wars,5 a who's-got-the-better-weapons battle of good guys versus bad guys. Today it's really only the tools that are more sophisticated. The good guys' AI implementations can solve some of the problems (Webroot claims to stop 98 percent of spam,6 which still means 2% of malicious emails get through). But the bad guys are better financed: in Dr. Michael McGuire's April 2018 study, "Into the Web of Profit," middle-earning cybercriminals make more than $75,000 a month, and the top earners make more than $166,000 per month.7 Because it's an ever-escalating war, training for all employees to be able to recognize bad emails and websites is still the number one defense. 90% of cyberattacks are deployed through human error.8 And the training needs to be repeated periodically, because the tactics keep evolving. Per the Tech Republic report: even if your organization has implemented the latest and greatest security, it won't matter if your employees are uninformed.
Cover these topics in your training:
- How to recognize fraudulent emails
- How to know you can click a link with complete certainty
- What to do if you question the authenticity of an email
- What to do if you click a malicious link
- What to do when an email or link from an email asks for your credentials

Of course let's continue to move forward with technologies to protect us. But just as football helmets have been shown to give players a false sense that they can lead with their head, don't be lulled by the thought that AI is going to keep your business safe from email attacks. AI can't fix the decision-making of the humans.

1 Artificial Intelligence means incorporating human-like reason and learning into machines: a computer draws conclusions from data. Machine Learning means programming into computers the ability to learn: machines use the provided and accumulating data to make accurate predictions.
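The real-time URL check described earlier relies on trained models over many features. As a toy illustration only — the keywords, weights, and thresholds here are invented for the example and bear no resemblance to Webroot's actual system — a few hand-picked lexical features can already flag obviously suspicious links:

```python
import re

# Invented keyword list for this toy example.
SUSPICIOUS_WORDS = {"login", "verify", "update", "secure", "account"}

def url_risk_score(url):
    """Crude lexical score: higher means more phishing-like."""
    score = 0
    host = re.sub(r"^https?://", "", url).split("/")[0]
    # Keywords that phishing pages commonly use to mimic legitimate flows.
    score += sum(1 for w in SUSPICIOUS_WORDS if w in url.lower())
    # Long chains of subdomains often disguise the true registered domain.
    score += host.count(".") - 1
    if re.match(r"^\d+\.\d+\.\d+\.\d+$", host):
        score += 3  # raw IP instead of a domain name
    if "@" in url:
        score += 3  # '@' in a URL hides the real destination host
    return score

print(url_risk_score("http://paypal-login.verify-account.example.com/update"))  # 6
print(url_risk_score("https://www.example.com/about"))                          # 1
```

A production system replaces these hand-tuned rules with a model trained on labeled URLs — which is exactly the arms race the article describes, since attackers adapt their lexical patterns in response.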
Content Copyright © 2018 Bloor. All Rights Reserved. Also posted on: IT Infrastructure

What with the EU Code of Conduct, Green Grid and PUEs hovering around 1.1 for the most efficient data centres, you might be forgiven for thinking that there weren't many more energy efficiencies to be squeezed out. Obviously, there are plenty of older data centres that are not particularly energy efficient, but even in new data centres there is still more that can be achieved. I don't want to get into a big debate about the importance, or otherwise, of PUEs. They are a useful indicator of relative energy efficiency, but they are by no means the only measure that needs to be used. More can be done to reduce the power consumption of the servers themselves, and the use of IoT sensors and machine learning can help drive further efficiencies in even the most modern data centre. One of the key guiding principles of the Open Compute Project (OCP) is to drive greater data centre energy efficiency. The OCP server and rack designs reduce energy consumption by 29% according to a study by CERN. They also allow the aisle behind servers to run much hotter, because all access is from the front, which offers a very efficient means of heating a local housing grid. But not everyone can switch quickly to OCP designs for servers and the data centres themselves. Not everyone can place their data centres in Iceland or the Nordics to take advantage of lower cooling requirements and cheaper sources of renewable energy. New cooling technologies help. Data Centre Infrastructure Management (DCIM) solutions and energy modelling tools capture a lot of data and help in the design and layout of new facilities, but crucially they don't provide the ability to predict server loads and manage cooling and power utilisation proactively. Internet of Things (IoT) sensors are low cost and simple to install.
They can be used on their own, or to supplement existing monitoring devices in data centre equipment that doesn’t perhaps capture the granular, targeted data needed. The trick then is to correlate this sensor data, across all data centre equipment, with data on server loads in near real-time and feed that into machine learning algorithms. A loop back process and simple management console then ensures that this constantly updated and refined information can be used to predict power and cooling requirements against server loads, again in near real-time. One of the by-products of such a predictive approach is that mechanical data centre equipment can be used much more effectively. This reduces energy usage, but also results in a reduction in the running time of the equipment by as much as 35%. This means there is less wear and tear on the equipment, lengthening its potential life, thereby reducing capital expenditure. For hyperscale data centre operators, large cloud service providers and enterprises whose value propositions are based largely on electronic, rather than physical infrastructures, even small reductions in energy usage and capex will have a significant impact on costs and margins. Many enterprises that have a more traditional mix of business models and reliance on technology often have less efficient facilities. Using IoT and machine learning solutions will generate substantial savings and may be a simpler, more cost-effective way of reducing energy costs than investing in expensive DCIM and modelling solutions, or worse, investing in new data centre mechanical equipment that may not have been necessary. I’ll be returning to this topic later this year to review the market and emerging vendors.
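The PUE figure referenced above is simply total facility power divided by the power delivered to IT equipment; a one-line sketch:

```python
# Power Usage Effectiveness: 1.0 would mean every watt goes to the IT load;
# the most efficient data centres mentioned above hover around 1.1.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,100 kW to deliver 1,000 kW of IT load:
print(round(pue(1100, 1000), 2))  # 1.1
```

As the article notes, PUE is a useful relative indicator but says nothing about the IT load's own efficiency, which is why server-level savings and predictive control still matter.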
If you’re buying a new data storage device, you’ll need to do some research. This is particularly true if you’re buying a solid-state drive (SSD). While SSDs provide better performance than hard disk drives (HDDs), different models have different features and capabilities — and different levels of reliability. On SSD specification sheets, the number of maximum write cycles (also referred to as program-erase cycles and expressed as total bytes written) is one of the primary metrics used to evaluate SSD lifespans. If you’re looking for a reliable device, write cycles are important, but not necessarily the most important factor to consider. Here’s what consumers should know.

1. All SSDs have a limited number of write cycles.

SSDs are a type of flash memory. NAND flash devices use electricity to store data (our article on SSD garbage collection explains flash storage techniques in more detail). As you write and erase data to your SSD, the transistors that hold electrical charges become less stable. Eventually, they aren’t reliable enough to store data. Modern SSDs use wear-leveling algorithms to ensure that all of the transistors receive a roughly equal share of the work, but even with advanced wear-leveling, every SSD will eventually become unreliable. SSD manufacturers are certainly aware of this issue, and most hardware data sheets contain an estimated average of the number of write cycles that an SSD can sustain before it becomes unreliable. This is often expressed as total terabytes written (TBW). For example, a 1TB SSD may have a rating of 560 TBW — in theory, you could completely erase and rewrite the drive 560 times, on average, before data loss occurs.

2. SSD write cycle metrics are not absolute.

Your SSD probably won’t fail at the precise point that it reaches a specified number of write cycles. TBW is an average, but many SSDs last much longer — albeit without the protection of the product warranties.
That’s also true of Mean Time Between Failures (MTBF), a metric that estimates the operating time (typically in hours) before an SSD becomes non-functional. Modern SSDs may have an MTBF of 1.8 million hours, but your drive could fail much sooner or last much longer, depending on dozens of factors.

3. Other SSD performance benchmarks may be more important for buyers.

If you’re purchasing an SSD for high-performance applications that require a tremendous number of write cycles, TBW is an important metric. For example, boot drives (drives that contain your operating system) write and erase data nearly constantly when operating. If you’re buying a boot drive, it makes sense to consider the drive’s endurance. But most consumers will reach capacity limits and upgrade their SSDs long before the hardware begins to wear out. Performance metrics (such as sequential read/write speed) and total capacity are generally more important factors to consider when making your purchase.

4. Every data storage device will eventually fail.

While consumers often focus on endurance metrics, it’s important to remember that every data storage device fails. Most SSD manufacturers estimate an average lifespan of around 5 years, which is similar to the estimated lifespan of hard drives. Currently, there’s no “perfect” option for data storage. Even if you carefully control your operating conditions and minimize write cycles, you’ll need to back up your data to keep it protected. If you’ve lost data due to an SSD failure, we’re here to help. Contact Datarecovery.com at 1-800-237-4200 or click here to submit a case online.
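A TBW rating translates into an expected lifespan once you know how much you write per day. The sketch below shows the arithmetic; the 50 GB/day workload is an illustrative assumption, not a figure from any specification sheet.

```python
# Rough SSD lifetime estimate from a TBW rating.
# The daily write volume is an assumed, illustrative workload.

def years_until_tbw(tbw_tb: float, daily_writes_gb: float) -> float:
    """Years until the drive's rated total bytes written is reached."""
    daily_writes_tb = daily_writes_gb / 1024
    return tbw_tb / daily_writes_tb / 365

# A 1 TB drive rated for 560 TBW, written at 50 GB/day:
print(round(years_until_tbw(560, 50), 1))  # about 31.4 years
```

As the result suggests, a typical consumer workload is unlikely to exhaust a modern drive's endurance rating before the drive is replaced for other reasons.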
Green technology refers to technology used to reduce and reverse the impact of human activities on the environment. More specifically, green technology helps reduce the carbon footprint of human activities and is therefore crucial for the efficient and sustainable use of resources. The construction sector accounts for a large share of energy and materials consumption and global emissions, which underlines the importance of adopting green technology. Green or sustainable construction, i.e., the use of green technology in construction, refers to the use of energy-efficient and environmentally responsible processes that can make construction energy-efficient and buildings sustainable.

The benefits of green construction are many. It increases the market value of buildings, improves tenant retention, reduces long-term costs for owners, offers health benefits to occupants, and, above all, goes the extra mile in protecting the environment by conserving energy and lowering the carbon footprint. So, what green technologies are being used to make green buildings? Here are the 5 most innovative green technologies employed in the construction sector:

1. Solar Energy

In construction, solar power is used as active solar power and passive solar power. Active solar power is used for both heating and electricity generation with the help of functional solar systems that can absorb the sun's radiation. Passive solar power, on the other hand, utilizes the sun's rays to warm homes. This is done by using heat-absorbing surfaces and also by the planned, calculated placement of windows. Solar power is an important component of net zero or zero energy buildings, which consume zero net energy annually and do not generate carbon emissions. This is known as the net zero concept.

2. Cool Roofs

Roofs, or more precisely roof insulation, can have a huge impact on energy consumption.
Cool roofs are green design technologies that help maintain room temperatures within buildings by reflecting heat and sunlight. This is done with the help of special tiles and reflective paints that reflect a large part of solar radiation and absorb much less heat. This, in turn, reduces the use of air conditioning and thus lowers energy consumption.

3. Green Insulation

Green insulation refers to adequate thermal insulation that keeps heat inside and ensures energy-efficient heating. The basic materials used for this purpose include slag slabs, natural fibre insulation materials, gypsum board, vermiculite, perlite insulation materials, wool insulation materials, and porotherm bricks.

4. Electrochromic Smart Glass

Also known as smart glass or electronically switchable glass, electrochromic smart glass is an innovative green technology for modern buildings. The light transmission properties of electrochromic glass change when the voltage, light, or heat applied to it is altered. Put simply, electrical signals charge the windows slightly to alter the amount of heat or solar radiation they reflect. This means that the user can choose how much light or heat is allowed to pass through the glass, which can save a lot of the energy required for heating and air conditioning.

5. Water Efficiency Technologies

A discussion of green technology in construction cannot be complete without the technologies employed to ensure the efficient use and conservation of water. Water conservation is all about the reuse and recycling of water and the use of efficient water supply systems. Water efficiency technologies mainly encompass rainwater harvesting, greywater re-use, dual plumbing, and water-efficient fixtures such as shower heads, taps, and toilets.
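The insulation benefit described above can be quantified with the standard steady-state heat-loss relation Q = U × A × ΔT. The U-values and building dimensions below are illustrative assumptions, not figures for any specific product:

```python
# Steady-state heat loss through a wall: Q = U * A * dT (in watts).
# U-values here are rough illustrative numbers, not product data.

def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    return u_value * area_m2 * delta_t

wall_area = 100.0   # m^2 of external wall
delta_t = 20.0      # indoor/outdoor temperature difference, degrees C

uninsulated = heat_loss_watts(2.0, wall_area, delta_t)  # assumed U ~ 2.0 W/m^2K
insulated = heat_loss_watts(0.3, wall_area, delta_t)    # assumed U ~ 0.3 W/m^2K

print(f"{uninsulated:.0f} W vs {insulated:.0f} W")  # 4000 W vs 600 W
```

Even with made-up but plausible U-values, the calculation shows why lowering the U-value through insulation dominates a building's heating demand.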
AI systems have become increasingly popular in recent years due to their ability to quickly and accurately process large amounts of data. However, along with their benefits, they also face various types of attacks, such as model extraction and evasion attacks. Developers must be aware of potential attacks and take measures to protect their AI systems, like implementing security protocols and monitoring for suspicious activity. This is where AIShield helps keep your models secure.

| Attack type | Description | Example |
| --- | --- | --- |
| Model extraction attacks | Attacker gains information about the model internals through analysis of input, output, and other external information. | Pedestrian detection |
| Evasion attacks | Attacker induces an incorrect output from the model by making a very small change to the digital representation of the targeted input. | Misclassification of input, malicious output execution |
| Poisoning attacks | Attacker corrupts the data used to train the model by adding malicious data or changing data. | Compromised correctness of AI inference, inaccurate and poor decisions |
| Inference attacks | Attacker infers sensitive data from ML model outputs by querying and analyzing responses, enabling reconstruction of sensitive or training data. | Face reconstruction from outputs |
| Sponge attacks | Attacker increases an ML model's energy consumption during inference, causing delays and potential harm, such as collisions in autonomous vehicles. | Increased latency and delayed operations |
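To make the evasion row concrete, here is a toy sketch of the idea on a hand-made linear classifier: each input feature is nudged a small amount in the direction that lowers the model's score, the same principle behind gradient-sign attacks on neural networks. The weights and inputs are invented for illustration.

```python
# Toy evasion attack on a linear classifier. All numbers are made up;
# this only illustrates the principle of small, targeted perturbations.

def predict(w, b, x):
    """Linear model score; positive score = positive class."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, b, x, eps):
    """Perturb each feature by at most eps to push the score down."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.8, -0.5, 1.2], -1.0
x = [1.0, 0.2, 0.9]                # originally classified as positive
x_adv = evade(w, b, x, eps=0.35)   # small change to every feature

print(predict(w, b, x) > 0, predict(w, b, x_adv) > 0)  # True False
```

A perturbation of 0.35 per feature is enough to flip this model's decision, even though the adversarial input is close to the original.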
People dialing 911 trust that emergency services are only one call away. However, IT professionals and providers of unified communications and collaboration systems know that the process is complicated. To ensure technologies comply with current government regulations, IT administrators must know how 911 works and should know how modern legislation has shaped the emergency response system.

Established in 1967 as the first national emergency telephone number, 911 connects the caller to the local public safety access point or public safety answering point (PSAP). The Federal Communications Commission (FCC) worked with AT&T to develop 911 and define PSAPs around specific geographical areas where an emergency call must be routed. The FCC lists nearly 9,000 PSAPs across the nation in its database, a number that's subject to change as PSAPs are combined, separated into smaller regions, or designated as primary or secondary.

Once the 911 call reaches the PSAP, the caller's location information determines which police, fire, or dispatch station can respond the fastest. Thanks to the local phone company's data, landlines have specific addresses associated with the phone number, generating an automatic location identification (ALI) by referencing the caller's automatic number identification (ANI). If the connection to the PSAP fails, the 911 call reroutes to general emergency services, where the operator relies on the caller for location information and then forwards the call to the correct PSAP within a matter of seconds.

But landlines are, for the most part, a thing of the past. In this time of mobile phones and voice over internet protocol (VoIP) devices, enhanced 911 (E911) service provides more detailed location information, such as what floor a caller is on or even what conference room on that floor.
Cellular devices use a specific service called the radio resource location services protocol (RRLP), which finds the caller's location through either cell tower triangulation (called radiolocation) or GPS coordinates, then sends this information to the correct PSAP for emergency dispatch. Softphones on laptop or desktop computers work in a similar way, but only after the user's company IT administrator sets up the device with the user's physical address or has the user enter the location information before receiving a direct inward dial (DID) number.

Two specific and relatively recent regulations, Kari's Law and RAY BAUM'S Act, have simplified how 911 and E911 calls can be made and expanded how much information they carry. These laws followed real-life tragedies: one caused by the requirement to dial a prefix before dialing 911, and the other by the difficulty of precisely locating someone in a multistory office building.

Kari's Law, passed in 2018, requires that all multiline telephone systems (MLTS) pass 911 calls through without dialing an extension, ensuring any fixed phone or softphone in a building can reach emergency services even if a standard prefix, such as "9," is normally used to dial an outside line. Additionally, the owner of the phone system must be notified that a 911 call has been placed, and location information must be provided to the phone system administrator. Meanwhile, RAY BAUM'S Act requires that the 911 caller's exact location information be passed through to the PSAP, telling first responders not only the street address but also the building floor, corner, and office number.

These laws reveal why it is so important that a regulating body, like the FCC, works with incumbent local exchange carriers (ILECs), the private companies that manage the PSAP database, to put standards of communication in place.
In fact, some of these ILECs have contracted with 911 services to put a backup 911 capability in place, essentially becoming a secondary PSAP location. If the primary PSAP is ever unable to receive calls for any number of reasons, the contracted service will pick up the call, know which PSAP it should normally have reached, and then act as the 911 operator, gathering the same information that would usually be gathered locally and sending it to the correct authority within the correct municipality.

While understanding 911 regulations and compliance can seem overwhelming, IT administrators have help. Working with a reputable and experienced unified communications and collaboration vendor will make it much easier to find the right solution to comply with Kari's Law and RAY BAUM'S Act, not to mention the many other critical aspects of 911 regulations and compliance.
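The dispatchable-location requirement boils down to maintaining an accurate mapping from each DID number to a detailed physical location. The sketch below is a hypothetical, minimal version of such a registry; the field names and phone number are invented, not part of any E911 standard.

```python
# Hypothetical sketch of a DID-to-location registry an IT administrator
# might maintain for E911 provisioning. Field names are illustrative.

e911_registry = {
    "+1-555-0142": {
        "street": "100 Example Ave",
        "floor": "3",
        "room": "Conference Room 3B",
    },
}

def dispatchable_location(did: str) -> dict:
    """Return the detailed location a PSAP would receive for this DID."""
    loc = e911_registry.get(did)
    if loc is None:
        raise LookupError(f"No E911 location registered for {did}")
    return loc

print(dispatchable_location("+1-555-0142")["room"])
```

Raising an error for an unregistered number mirrors the compliance point above: a softphone should not receive a DID until its location information has been recorded.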
Manufacturing products is still a major part of western economies. Like other businesses, manufacturers are using information technology to fuel and manage their supply chains and business processes. We'll take a short look at the IT manufacturers use and how it helps them advance their business.

The Production Labyrinth

The process of creating products can be quite the maze. If you make the right decisions, operations can go smoothly, but if you take the wrong turns, you could be facing a no-win situation. Used well, IT can help the modern manufacturer navigate to a successful end, but they need to know where to start. For the modern manufacturer, IT begins as the supply chain starts, during the process of procurement. In order to produce the product to be sold, you need to procure the resources needed to make that product. Since these resources tend to come from separate places, and are often made by other manufacturers, getting the resources you need to keep production moving consistently is important for the effectiveness of the operation.

The most cost-intensive part of running any manufacturer is the actual production end, largely because of the capital costs (purchasing the machinery needed to manufacture goods) coupled with the operational costs (payroll, downtime caused by machinery malfunction, and the subsequent maintenance required). As a result, most manufacturers look to avoid wasting capital by instituting some type of IT. With IT comes automation, and enhancements in automation make it possible for businesses to cut their production costs, make them more predictable, and create a state of efficiency. Distribution of the finished product is the final step for a manufacturer.
If costs in this part of the business get too high, it can put a definite squeeze on the potential of the business and create major problems in its ability to offer products at a low enough price point that retail businesses and other customers will continue to purchase them.

Where IT Fits

Fortunately for small to medium-sized manufacturers, there are now problem-solving technology solutions that can reduce downtime, enhance efficiency, and promote revenue growth. A few of these technologies include:
- Asset Tracking – Using sensors, every product and resource can be tracked to provide efficiency.
- Customer Relationship Management – This software helps a company streamline their customer service. It's used to manage leads, opportunities, and customers.
- Inventory Management – Manages stock, standardizes processes, and allows for automation in the act of replenishment.
- Supply Chain Management – This software helps a company control their entire supply chain from procurement to distribution.

Enterprise Resource Planning

Each of these solutions can be implemented on its own and be of great benefit to the modern manufacturer. If you are looking for an all-in-one management solution, there is a class of software called Enterprise Resource Planning (ERP). An ERP solution allows each division of a manufacturer to be managed by one single piece of software that not only works to automate parts of the business, but also allows administrators from different departments to know exactly what to expect. Outfitting your organization with an ERP promotes overall business efficiency, getting your products to market faster, creating better revenue generation, and enhancing customer satisfaction. If you are searching for a way to make your manufacturing business more effective at getting products to market, CTN Solutions has some options for you.
Call our professional consultants today at (610) 828- 5500 to learn more about how an ERP solution can improve your business.
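The automated replenishment mentioned under inventory management is typically driven by a reorder point: restock when on-hand inventory falls to expected demand during the supplier's lead time plus a safety buffer. The numbers below are illustrative, not drawn from any particular system.

```python
# Classic reorder-point trigger used by automated inventory systems.
# All quantities here are illustrative example values.

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Stock level at which a replenishment order should be placed."""
    return daily_demand * lead_time_days + safety_stock

def needs_replenishment(on_hand: float, rop: float) -> bool:
    return on_hand <= rop

rop = reorder_point(daily_demand=40, lead_time_days=5, safety_stock=60)
print(rop, needs_replenishment(250, rop))  # 260 True
```

An ERP or inventory module evaluates a rule like this continuously, which is what turns replenishment from a manual judgment call into an automated process.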
Despite the numerous debates on how dependent we are on technology, we cannot deny the fact that this is the machine age. Machines dominate the world, and they run on the power of their software. Hence, running a quality check on software to ensure controlled and well-defined functioning is very important. Software testing is the process of identifying the errors and bugs in a piece of software so that it meets the specified requirements of stakeholders. A thorough investigation is carried out as part of the testing process. Although software testing can determine the correctness of software under the assumption of some specific hypotheses, testing cannot identify all the defects within the software. Not every error is related to coding; many defects are the result of mistakes made by the programmer elsewhere in the process. These defects affect the functioning of the machine and obstruct its efficiency. Therefore, there are many approaches to software testing that identify the various defects of a software system.

Manual Testing

As the term suggests, manual testing is a process carried out by hand to gather more information about the software and analyze the reasons for its malfunction. Manual test plans vary from fully scripted test cases, giving testers detailed steps and expected results, through to high-level guides that steer exploratory testing sessions. This includes verification, a static method of checking files and documents; the main activities involved are walkthroughs, reviews, and inspections. While static testing involves verification, dynamic testing is related to validation, the process of testing the real product. To ensure the product that is created meets the requirements, dynamic testing is carried out while the program runs.

White Box Testing

The process that verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end user, is called white box testing. It is also called glass box testing, clear box testing, and structural testing.
This process is usually carried out at the unit level.

Black Box Testing

Also known as functional testing, black box testing examines the functionality of the software without seeing the source code or internal structure, treating it as a black box. The biggest advantage of black box testing is that no knowledge of programming is required.

Grey Box Testing

As the name suggests, this is a combination of white and black box testing. The tester needs access to design documents, which helps to create better tests in this process. It can include reverse engineering to determine the errors.

These are just a few of the approaches; there are numerous other testing approaches that help make software applications efficient. Quality, security, cost efficiency, and customer satisfaction all depend on testing. Many companies have identified this need and have innovated new methods of software testing, making the testing industry a sustainable one. The software industry is vast and requires continuous improvement to meet increasing demand, since the software testing process is an iterative one. Many times, when one bug is fixed, more bugs can emerge. It might be time-consuming and tedious; however, it is the most important process in the software industry.
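A small sketch makes the black-box idea concrete: the test below exercises a function purely through its inputs and outputs, with no knowledge of how it is implemented. The function under test is a hypothetical example, not taken from any real codebase.

```python
# Black-box testing in miniature: only inputs and expected outputs are
# used; the test never inspects the implementation.

def slugify(title: str) -> str:
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_black_box():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Software   Testing ") == "software-testing"

test_slugify_black_box()
print("black-box tests passed")
```

A white-box test of the same function would instead be designed from the source, for example choosing inputs that exercise each branch or the whitespace-collapsing behaviour of `split()`.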
The images shown in this tutorial are based on Ubuntu 16.04. We will be using the GUI installer to configure software RAID during setup; setting up RAID after installation is a different process. The setup screens should be similar for both the Ubuntu and Debian server installers. For this tutorial, I have set up a virtual machine with 4 HDDs.

1. Most of the time we will do manual partitioning when we configure software RAID, so select "Manual" here.

2. Here, I have 4 HDDs and we'll need to initialize them first.

3. Once that's done, you should see the following layout on the partitioning screen.

4. Now we will create a separate /boot partition on one of the drives. Most of the time booting does work when the /boot partition is on the RAID drive, but depending on the configuration there is a chance that it won't. To ensure that the operating system will boot properly, we will create a separate /boot partition. Usually you will want to allocate around 1 to 2 GB for the boot partition.

5. Before we set up software RAID, we'll create and label our partition layout. In this case, I'll only be creating a root partition per drive. This also ensures we don't get confused later on when selecting which partitions should be arrayed.

6. Once you've gone through all of the drives and replicated the partition layout that you want for each drive, we can proceed to configuring software RAID. Go to "Configure software RAID" now. You will be asked to save the changes made to the partition layout. Click Yes to continue.

7. You will now be presented with the software RAID configuration screen. Go to "Create MD device".

8. You will be presented with the various RAID levels that you can configure. If you don't yet have a clear picture of RAID concepts, you can look for our tutorial on RAID concepts in our knowledgebase. For the purposes of this tutorial, we will configure RAID 0, known as striping.

9.
Now you will be asked to specify which partitions you want to use in the RAID array. In this case, I've selected the 4 root partitions I created earlier. You need to do this for every custom partition you created earlier in the partition screen. After this you will need to save the changes made.

10. A new RAID device has now been created and should show up in the partitioning screen.

11. Now we need to provision the partition or partitions on the new RAID drive to our desired layout. In this case, I only need to re-create the root partition on the new RAID drive. By default, the RAID partition is set to "Do not use"; we will need to change that so we can configure it as a root partition. Once that's done, we can finish the partitioning and save the changes to the disks. Double-check your partitioning screen to see that you have the desired partition layout.

12. Now click on "Finish partitioning and write changes to disk".

13. You may be presented with this screen. This will happen if you did not configure any swap partitions. Selecting "Yes" will return you to the partitioning screen. Since my virtual machine's RAM is sufficient for our purposes, we'll just select "No". After this, you will be asked to confirm saving changes to disk. Select "Yes".

14. The boot loader needs to be installed to a hard drive. We made a boot partition earlier on the first drive, so we will select /dev/sda in this case.

15. After this, the installation will continue as usual through the other steps. That's it for setting up software RAID inside the installation GUI!
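For readers new to the RAID 0 level chosen in step 8, this toy simulation shows what "striping" means: consecutive data blocks are distributed round-robin across the member drives, spreading reads and writes over all of them. It is a conceptual sketch only, not how the kernel's md driver is implemented.

```python
# RAID 0 ("striping") in miniature: blocks go round-robin across drives.
# Note there is no redundancy -- losing any one drive loses the array.

def stripe(blocks, n_drives):
    drives = [[] for _ in range(n_drives)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)
    return drives

data = [f"block{i}" for i in range(8)]
for d, contents in enumerate(stripe(data, 4)):
    print(f"drive {d}: {contents}")
```

With 4 drives, blocks 0, 4 land on drive 0, blocks 1, 5 on drive 1, and so on, which is why RAID 0 improves throughput but offers no fault tolerance.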
Augusta Ada King-Noel, Countess of Lovelace, was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine, which served as a precursor to the modern computer. Ada Lovelace Day was founded in 2009 by Suw Charman-Anderson and is now held every year on the second Tuesday of October.

The second Tuesday of every October marks Ada Lovelace Day to celebrate the achievements of women in STEM careers (science, tech, engineering and maths), and was created in memory of Ada Lovelace, the first computer programmer! #AdaLovelaceDay https://t.co/kquxf5BWkK — Jacqueline de Rojas CBE (@JdR_Tech) October 9, 2018

Why is this day so important?

Today is not just a day for celebration: it's a day for addressing the severe gender imbalance that persists in STEM industries. According to a recent study, women are choosing lower-paid apprenticeships in health, public services and care (35%), in business administration and law (28%), and in retail and commercial enterprise (23%). At the same time, more men are taking higher-paid apprenticeships in engineering and manufacturing technologies (53%).

The future of being a woman in technology

On the bright side, the latest ONS figures reveal that a record number of women are joining the tech sector, with an increase of more than 20% in those pursuing careers. According to Dominic Harvey, Director at CWJobs: "While this indicates progress, there is still much more left to do to ensure the UK is at the forefront of the technology industry's rapid evolution." "The Government must seize this opportunity to implement a long-term solution without delay. Establishing an educational platform where IT and tech skills are engrained at a grassroots level is vital to ensuring key technology skills become part of our national curriculum.
This will encourage more women to pursue STEM careers and feel confident using the tech skills they have honed from a younger age."

What is holding women back?

When you look at the tech sector in general, it's hard not to see how heavily the industry skews towards men; arguably, this works against women, in that the corporate culture does not make them feel like they fit in. The lack of female role models within the industry is not helping either. However, it has been noted by some that the imbalance is partly due to wider societal stereotypes and biases. According to a recent study from Nominet, the UK's domain name registry, parents are gender biased when it comes to the career aspirations they have for their children. According to the research, the top five careers for girls picked by parents were: doctor (24%), teacher (20%), lawyer (17%) / scientist (17%), nurse/paramedic (14%), and business manager (11%). For boys they were: engineer (21%), scientist (17%), doctor (16%), tech entrepreneur (13%) / game developer (13%), and architect (12%).

Eleanor Bradley, COO of Nominet, said: "Unfortunately, though, as witnessed this summer with the latest GCSE results showing a large disparity between the number of boys and girls taking Computing or ICT, there's still a perception that STEM careers are largely for men." "To engage more young women with STEM subjects including technology, we need to combat any unconscious bias that unwittingly steers girls away from STEM-based career paths. Teachers also need to be equipped with the knowledge and confidence to emphasise the huge range of career paths available to girls with STEM backgrounds, as well as role models like Ada Lovelace to inspire them. The future is digital, and we need a diverse workforce to thrive.
Doing all we can now is not only essential to ensure more women enter into and prosper within STEM careers, but it is vital for the ongoing health and strength of our digital economy."

How to become more inclusive to women

It has been proven that businesses that champion diversity succeed more. Taking into account the digital skills crisis facing every organisation in the tech sector, common sense states that looking to only one gender for talent will harm business. With this in mind, organisations need to address how they engage with women. Training programmes can help people understand conscious and unconscious bias, helping organisations change the way they think and reduce unfair behaviour. At the same time, getting female talent into the industry is only half of the story; making sure they rise up the ranks is also key, with the support of women-in-leadership training programmes.
Kaspersky Lab, the Russian cybersecurity and antivirus solutions firm, has discovered cyber-espionage malware that attacks and infects victims through compromised network routers. The malware, dubbed "Slingshot," was used in cyber-espionage attacks in the Middle East and Africa from at least 2012 until last month, according to a prepared statement.

How Does Slingshot Work?

Slingshot frequently is used to compromise routers, Kaspersky indicated. The malware first places a malicious dynamic-link library inside a router. Then, when an administrator logs in to configure the router, the device's management software downloads and runs the malicious components on the administrator's computer. After an administrator's router is infected, Slingshot loads Cahnadr, Gollum, and other app modules onto the device, Kaspersky stated. These modules are connected to one another and perform information gathering and data exfiltration.

Slingshot works as a passive backdoor and does not have a hardcoded command-and-control (C&C) address, Kaspersky pointed out. Instead, the malware obtains a C&C address from an administrator by intercepting network packets in kernel mode and checking to see if there are two hardcoded magic constants in the header. Once Slingshot obtains the C&C address, it establishes an encrypted communication channel to the address and uses it to transmit administrator data for exfiltration.

Why Are Hackers Using Slingshot?

Slingshot's main purpose appears to be cyber-espionage, according to Kaspersky. The malware is used to collect a wide range of administrator data, including:
- Clipboard data.
- Keyboard data.
- Network data.
- USB connections.

In addition, Slingshot can run in kernel mode to "steal whatever it wants," Kaspersky said. Slingshot hides its traffic in marked data packets that it can intercept without trace from everyday communications. Slingshot also uses advanced techniques to evade detection by anti-malware software and other cybersecurity solutions.
These techniques include:
- Calling system services directly to bypass security product hooks.
- Encrypting all strings in its modules.
- Selecting an injection process based on the security solution that has been installed and its processes.
- Using anti-debugging techniques.

Kaspersky researchers identified roughly 100 Slingshot victims located in the Middle East and Africa. Most Slingshot victims appear to be targeted individuals, Kaspersky noted, and some government organizations and institutions have been targeted as well. Furthermore, the initial Slingshot samples were marked as "version 6.x." This suggests Slingshot has existed "for a considerable length of time," Kaspersky stated.

How Can Organizations Combat Slingshot Attacks?

Kaspersky offered the following recommendations to help organizations detect and block Slingshot attacks:
- Use a corporate-grade security solution in combination with anti-targeted attack technologies and threat intelligence.
- Provide security staff with access to the latest threat intelligence data.
- Leverage managed protection services to proactively detect advanced threats and speed up incident response.

Slingshot is a "sophisticated threat," Kaspersky Lead Malware Analyst Alexey Shulmin said. As such, organizations must understand the cybersecurity landscape and deploy effective security measures to combat Slingshot and other advanced cyber threats.
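To illustrate the "two hardcoded magic constants in the header" mechanism described above, here is a toy packet check: a passive component scans ordinary traffic for a header carrying two fixed constants. The constants and header layout below are invented for illustration; they are not Slingshot's actual values.

```python
# Illustration of picking "marked" packets out of ordinary traffic by
# checking a header for two hardcoded magic constants. The constants and
# layout are invented, not the real Slingshot values.
import struct

MAGIC_A = 0xDEADBEEF
MAGIC_B = 0xCAFEBABE

def is_marked_packet(header: bytes) -> bool:
    if len(header) < 8:
        return False
    a, b = struct.unpack(">II", header[:8])  # two big-endian 32-bit fields
    return a == MAGIC_A and b == MAGIC_B

marked = struct.pack(">II", MAGIC_A, MAGIC_B) + b"10.0.0.5"
print(is_marked_packet(marked), is_marked_packet(b"ordinary traffic!"))  # True False
```

Because the check is entirely passive, such a component generates no outbound lookups of its own, which is part of what makes a backdoor of this kind hard to spot on the wire.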
Hyperconvergence, as its name suggests, combines, or converges, different IT resources, like computing, storage, and networking, into a single integrated system. Hyperconvergence is critical to building a simplified IT infrastructure ecosystem. With hyperconverged infrastructure (HCI), a single system manages data with high performance and can be easily scaled to suit your needs. In this blog, we'll review what hyperconvergence is, the key users and benefits of this infrastructure, and how it differs from converged infrastructure.

Table of Contents

- What is Hyperconverged Infrastructure?
- How Hyperconverged Infrastructure Differs from Converged Infrastructure
- Uses for Hyperconverged Infrastructure
- Benefits of Hyperconverged Infrastructure
- Drawbacks of Hyperconverged Infrastructure
- Hyperconvergence vs. Cloud Computing
- How Does Hyperconverged Infrastructure Work?
- Hyperconverged Infrastructure with SysGen
- In Summary
- Frequently Asked Questions
  - How does hyperconverged infrastructure (HCI) enhance IT team efficiency?
  - Can HCI be integrated with our current IT environment?
  - Is hyperconverged infrastructure scalable to meet future growth needs?
  - How does HCI simplify storage management for IT teams?
  - What types of applications are best suited for hyperconverged infrastructure?
  - What kind of hardware is typically used in hyperconverged infrastructure environments?

What is Hyperconverged Infrastructure?

Hyperconverged infrastructure (also known as HCI) consolidates virtualization, servers, networking, and storage into one seamless solution. These solutions aim to simplify data center management and improve scalability and performance. HCI combines computing, storage, and networking resources into a unified machine and leverages virtualization technology to create a software-based solution that minimizes hardware costs and requirements while providing a flexible and scalable IT environment.
Hyperconvergence also uses virtualization to turn traditional hardware-based resources into software running on virtual machines.

How Hyperconverged Infrastructure Differs from Converged Infrastructure

Both hyperconverged infrastructure and converged infrastructure work to integrate and consolidate IT resources. However, there are key differences between the two, relevant to scalability needs, management preferences, and the level of customization required. HCI systems are more streamlined than converged infrastructure because they integrate computing, storage, and networking resources into a single unit. In contrast, converged infrastructure combines preconfigured units from different vendors into a single system.

In terms of scalability, HCI offers a greater ability to scale as needed, allowing for incremental growth. Conversely, converged infrastructure requires adding hardware like server racks or additional components to scale.

In terms of management, converged infrastructure is managed through separate interfaces. HCI converges all components into one machine, providing a streamlined, centrally managed interface.

In terms of customization and flexibility, converged infrastructure allows for greater customization because you can select components from specific vendors to suit your needs. HCI offers less hardware customization due to its use of pre-integrated software.

Uses for Hyperconverged Infrastructure

Hyperconverged infrastructure solutions are highly beneficial to small and medium-sized businesses, as they simplify the management of your IT environment while improving scalability and reducing overall costs. The applications of hyperconverged infrastructure are varied, but each creates an opportunity for you to have a more efficient, secure and productive IT environment.
Virtualization

With virtualization in HCI, you can consolidate your workloads on virtual machines to achieve more efficient resource utilization, easier scalability, and improved performance for applications and services.

Backup and Disaster Recovery

HCI has redundancy built into the system, coupled with distributed storage, making it an additional option for a robust backup and disaster recovery solution. This can provide data protection, minimize downtime, and provide quick recovery options in case of a system failure or data loss.

Benefits of Hyperconverged Infrastructure

Increased IT Efficiency

HCI simplifies IT management by consolidating computing, storage, and networking components into a single integrated system. With this, your IT can be managed through a centralized interface, with the ability to easily provision, monitor, and manage resources from a unified platform, saving time and effort. As a result, you'll experience increased overall system performance, as data can travel faster with reduced bottlenecks and improved latency.

Better Storage at a Lower Cost

Cost savings are the main driver of making the switch to hyperconverged infrastructure solutions. When consolidating resources using HCI, we can reduce hardware costs for networking equipment. Moreover, because of the highly scalable nature of HCI, when your IT needs grow, the cost to expand with them is significantly less than with a traditional infrastructure that requires capital expenditure on hardware.

Greater Ability to Scale

HCI grows and adapts to your business's IT needs. You can start small and incrementally expand by adding nodes to the system to expand your resources. This flexibility prevents overprovisioning of resources, avoids unnecessary costs and ensures that you'll always have enough resources at your disposal so your business can continue to achieve its goals.

Drawbacks of Hyperconverged Infrastructure

Hyperconverged infrastructures offer many benefits.
However, like any technology, there are also potential drawbacks. Let's look at some downsides of hyperconverged infrastructure and ways to mitigate these issues:

- Vendor Lock-in: HCI typically relies on proprietary hardware and software solutions that are vendor-specific, for example, Nutanix HCI solutions. Because of this, there can be barriers to switching to alternative solutions in the future. To mitigate this drawback, it is crucial to evaluate the available alternatives to ensure that you choose a reliable and secure solution, considering factors like compatibility, available support, and the ability to integrate with other systems.
- Limited Customization: While HCI permits less customization than converged infrastructure, this can be mitigated. Through consultation and consideration of the standardized offering, we can confirm whether the solution fits your organization's unique needs or consider opting for a more complex and customizable solution.
- Data Protection and Disaster Recovery: While HCI systems have built-in redundancy measures, they cannot be used as a substitute for a proper backup solution. In addition to this infrastructure, your business should have a comprehensive data recovery plan in place.

Hyperconvergence vs. Cloud Computing

Hyperconvergence and cloud computing are often viewed as related technologies. While both use virtualization, they are separate, distinct concepts. The key difference between the two is that hyperconvergence focuses on consolidating infrastructure components into a single system, whereas cloud computing is a way to provide access to IT resources through the internet without any on-premises physical infrastructure. Cloud computing is about resource delivery methodology, and hyperconvergence is about reducing the resources necessary to run your IT network.

How Does Hyperconverged Infrastructure Work?

Hyperconverged infrastructure tightly integrates computing, storage, and networking resources into a unified system.
HCI uses physical hardware, like servers, as the foundation of the infrastructure; these are known as nodes. The hardware resources are virtualized using virtualization technology and integrated with software layers. This forms the hypervisor, which manages and allocates the resources across the HCI.

Hyperconverged Infrastructure with SysGen

SysGen's hyperconvergence services can simplify your business's IT needs. We'll consolidate your IT resources into a single system with centralized management so that you can capture all the benefits of hyperconverged infrastructure. With HCI, your business will be able to:

- Eliminate Complexities: A single system can synthesize your computing, storage, and networking resources.
- Take Advantage of Simplified Scalability: You'll be able to quickly expand and contract services on an ad-hoc basis, adding nodes when necessary without deploying extra hardware.
- Experience Cost Savings: Consolidating resources means reducing the cost of running and maintaining your current systems.
- Boast Better Performance: With HCI's load-balancing mechanisms, you'll rarely need to worry about system failure or a lack of resources.

Hyperconverged infrastructure is a powerful technology that, when leveraged properly, can transform your business's IT environment. From increased productivity and efficiency to cost savings, the possibilities with hyperconvergence are varied. Looking to simplify your IT environment?

Frequently Asked Questions

How does hyperconverged infrastructure (HCI) enhance IT team efficiency?

HCI systems help your team improve efficiency in three ways:

- Firstly, through a simplified management system: because HCI consolidates resources into a centralized platform, management is more efficient, and the complexities of IT management are reduced.
- Secondly, efficiency is improved through resource optimization.
With HCI, because the system is designed to provision resources so as to avoid failures, you won't be bogged down by system downtime.

- Thirdly, efficiency is achieved through streamlined support and troubleshooting. Due to the tight integration between hardware and software in HCI, there are fewer technical issues for your team to encounter, thus minimizing downtime and ensuring that technology aids your team in their work.

Can HCI be integrated with our current IT environment?

To determine whether HCI can be integrated with your current system, you'll need to assess a few factors: vendor compatibility, access compatibility, and a proof-of-concept assessment. This should be reviewed by an IT expert who knows your current system and understands how to leverage HCI within it.

- Access compatibility relates to checking whether the existing hardware and software components you use are compatible with HCI solutions.
- Looking at vendor compatibility means diving into the available offerings to see if they align with your business needs.
- A proof of concept validates the theoretical integration with your current IT environment.

These steps are critical to see if your IT environment is compatible with a hyperconverged infrastructure solution.

Is hyperconverged infrastructure scalable to meet future growth needs?

Yes, hyperconverged infrastructure is designed to be highly scalable, making it well-suited to meet future growth needs. HCI allows you to scale granularly, meaning you can scale resources based on your specific needs. This flexibility and customization enable you to maximize resource utilization across diverse workloads. Further, HCI is easily scalable in its architecture; adding more nodes to an existing system improves the capacity and performance of the infrastructure without heavy amounts of effort.

How does HCI simplify storage management for IT teams?

HCI simplifies storage management by creating a unified management platform.
HCI consolidates computing, storage and networking resources into a single system, and the ability to control and monitor these resources is vested in a centralized platform. IT teams can then manage storage resources, configurations, and policies through a single platform, simplifying administration. HCI also includes automated storage provisioning, reducing the workload of IT teams; since storage is automatically allocated based on predefined policies, the need for manual intervention is eliminated.

What types of applications are best suited for hyperconverged infrastructure?

Hyperconverged infrastructure is well-suited for various applications, particularly virtualized workloads, cloud infrastructure and data protection backups. Hyperconverged cloud infrastructure is ideal for building private clouds. Using virtualization and HCI, SysGen can deliver self-service provisioning, resource pooling, and automation, allowing IT teams to provide cloud-like services to internal users while maintaining control over the infrastructure.

What kind of hardware is typically used in hyperconverged infrastructure environments?

HCI requires servers, storage resources, networking infrastructure, hypervisors, and management software.

- HCI environments often have multiple servers, which are the nodes of the environment; the specification of the servers depends on your business's needs.
- Storage devices are integrated into the server nodes and make up the overall storage capacity of the HCI.
- Networking infrastructure refers to the connections between the nodes, which involve cables, adapters and Ethernet switches.
- The hypervisor virtualizes the software and compute resources. It runs on each server node and manages the resources.

Need help implementing a hyperconverged infrastructure for your business?
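The scale-out model described above, where capacity grows by adding identical nodes rather than resizing individual servers, can be sketched as a toy calculation. All node specifications and overhead figures below are invented for illustration.

```python
# Toy capacity model for scale-out HCI. A fraction of each node is reserved
# for the hypervisor/management layer and replication overhead; the rest is
# usable. Figures are illustrative only.

NODE = {"cpu_cores": 32, "ram_gb": 512, "storage_tb": 20}

def cluster_capacity(node_count: int, overhead: float = 0.15) -> dict:
    """Usable capacity of an HCI cluster of identical nodes."""
    usable = 1.0 - overhead
    return {k: round(v * node_count * usable, 1) for k, v in NODE.items()}

def nodes_needed(storage_tb_required: float, overhead: float = 0.15) -> int:
    """Smallest node count whose usable storage covers the requirement."""
    per_node = NODE["storage_tb"] * (1.0 - overhead)
    n = 1
    while n * per_node < storage_tb_required:
        n += 1
    return n
```

The point of the sketch is the incremental-growth property: meeting a new storage requirement is a matter of adding whole nodes, which also add compute and memory, rather than re-architecting the cluster.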
Technology changes rapidly and devices seem to become obsolete as soon as you get them; as a result, electronic waste (e-waste) is now the fastest growing waste stream. Unfortunately, only about 20% of it is being recycled in some shape or form.

The size of the challenge

According to the World Economic Forum, 50 million tons of e-waste is produced each year, a figure expected to more than double to 120 million tons by 2050. E-waste contains toxic chemicals, including cadmium, lead, nickel, mercury, and chromium, and is estimated to account for 70% of the toxic waste lying in landfills.

The opportunity that lies within

But that's not all. Electronics also contain a number of rare and valuable resources such as precious metals and minerals. By safely extracting these essential resources, we can reuse them to meet the future needs of electronic products. This also generates substantially less greenhouse gas (CO2) emissions compared to mining the earth for these minerals. According to the World Economic Forum, there is 100 times more gold in a ton of mobile phones than in a ton of gold ore.

Responsible recycling is a key part of the solution

Helping our customers ensure their e-recycling is handled responsibly and sustainably is the mission 4THBIN was founded on. We are proud to partner with our customers to reduce their carbon emissions and overall environmental footprint. Together we are making a big impact!

- 10,354k lbs. recycled
- 1,790k lbs. reused
- 14,432k lbs. greenhouse gas reduced
- 298k lbs. toxic metals diverted
Is E-Waste Making Our Landfills Even More Toxic?

Even as we become more environmentally conscious and more people make a point of recycling their paper and plastic waste, a new pollution threat appears to be emerging. With electronic devices becoming more disposable—either wearing out or becoming outdated in a few years' time—people are simply throwing their phones in the trash, leaving old TVs by the curb and so on. It may seem harmless enough, but it's not. Experts now suggest e-waste, as it's called, could be making our landfills even more toxic than before.

What makes e-waste so dangerous?

From TVs and clock radios to laptops and cellphones, nearly all electronic products contain toxic chemicals and metals like mercury, cadmium and lead. These elements are harmless inside your device, but when you dispose of it improperly, that device eventually makes its way to your local or regional landfill. Over time, these harmful elements make their way into the soil, and eventually into the groundwater, making it unsafe for humans, plants and animals. If most people realized the mercury in their cell phone could eventually wind up back in their water glass, they might think differently about how they throw these products away.

A Look at the Numbers

As more companies are "going green" and replacing their paper documents with digital ones, we're disposing of less waste overall, which is good news—but it doesn't tell the whole story. According to DoSomething.org, electronic waste only accounts for 2 percent of the waste in our landfills, but it also accounts for 70 percent of our overall toxic waste. In the U.S. alone, the amount of e-waste we create amounts to 44 pounds per person, per year! Globally, humans now dispose of 20 to 50 million metric tons of e-waste every year—all of it potentially toxic to the environment. And of this waste, only 15-20 percent is being recycled.
In other words, while we're reducing our paper and plastic waste, we're actually increasing our toxic waste through our disposable devices. By these numbers, the nuclear power plant in the next county isn't nearly as much of a threat to the environment as that old iPhone you and 10,000 of your neighbors just threw in the trash.

What Is Being Done

Thanks to increased awareness of the dangers of e-waste, many state and local governments are taking steps to manage the problem. Pennsylvania, New York and other states have laws banning the general disposal of electronic devices. In Philadelphia, if you leave your TV by the curb, it stays by the curb—the trash collectors are forbidden to take it. While Massachusetts and Boston don't have specific laws addressing e-waste, they do ban the disposal of "ferrous and non-ferrous metals," which are common in most electronics. Unfortunately, these laws are still difficult to enforce when it's so easy for a small phone or tablet to be tossed undetected into a trash receptacle. As of yet, most cities do not have standardized e-cycling programs to encourage businesses or individuals to keep their devices out of the landfills.

What Your Business Can Do

The good news is that despite the lack of centralized e-waste control, there are a number of ways businesses can take responsibility for making sure their old electronics get recycled, repurposed and reused rather than adding to the toxic waste in nearby landfills. 4THBIN works with companies strategically to create an IT product management cycle, making e-cycling a standard part of their electronics life cycle with zero negative impact on the environment. Give us a call at 855-329-2531 to learn more!
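The per-capita figure cited above can be sanity-checked with quick arithmetic. The population figure used here is an assumption, not from the article.

```python
# Rough sanity check on the "44 pounds per person, per year" figure.
# Assumption (not from the article): a US population of about 330 million.
LBS_PER_PERSON = 44
US_POPULATION = 330_000_000
KG_PER_LB = 0.4536

total_lbs = LBS_PER_PERSON * US_POPULATION
total_metric_tons = total_lbs * KG_PER_LB / 1000

# Roughly 6.6 million metric tons for the US alone, which is consistent
# in scale with the 20-50 million metric ton global estimate cited above.
print(f"US e-waste per year: {total_metric_tons:,.0f} metric tons")
```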
According to a report by The World Economic Forum, existing approaches to cybersecurity are becoming less effective as cybercrime becomes more sophisticated. As cyber threats evolve and businesses grow, they need frameworks for taking action when cybersecurity is, or might become, compromised. This is where incident response comes in. Learn about this concept and its two major components — plans and policies — to enhance cybersecurity at your organization.

What Is Incident Response (IR)?

Incident response (IR) is the processes, policies, and technologies businesses leverage to identify, respond to, and mitigate cyber-attacks. The goal is to prevent these incidents from occurring and to work quickly to minimize their impact if they do happen. IR is needed in any situation where sensitive corporate data or information systems are vulnerable to loss, breach, or other damage. The following are common examples of threats to cybersecurity that may warrant IR:

- Social engineering
- Supply chain attacks
- DDoS attacks

In some cases, incidents originate from inside the business. An employee or partner may deliberately jeopardize information security or leave an opportunity for hackers to get in by not following cybersecurity best practices.

Why Is Incident Response Important?

Cybersecurity threats have a monumental impact. TechTarget cites a study projecting the cost of cybercrime to reach $8 trillion in 2023 and rise to $10.5 trillion by 2025. Companies can expect to encounter some type of cyber threat, and those who have experienced this already know the trouble that can ensue following these attacks. Incident response is not a guarantee against security challenges, but it does provide a means for taking action. Without IR, your business may be blindsided when computer security incidents occur, potentially costing you more time and money to fix than if you had been prepared.
It can also help you assess your current practices to identify weak points, enabling continuous improvement of information security.

What Is Incident Response Planning?

The incident response plan is the formal, written version of the incident response. The document outlines how IT teams should react before, during, and after a computer security incident is confirmed or strongly suspected to have happened. In addition to specific instructions for IR, the plan will detail the processes and technologies needed to contain and eliminate the threat. The National Institute of Standards and Technology (NIST) lists four components that every IR plan needs to be effective. They include:

Preparation

The first aspect of creating an incident response plan is determining which personnel make up the IR team. All staff involved must understand their roles and thoroughly know your company's IR approach to react quickly during events. Preparation also includes devising and implementing strategies to prevent incidents. NIST offers a Computer Security Incident Handling Guide with many items to consider before problems arise.

Detection and Analysis

In some cases, you might detect a security incident that is about to happen and respond, but in others, you won't know cybersecurity has been compromised until after the fact. Detection is simply the process of realizing a security event occurred; analysis is verifying what the incident was to ensure you employ the right response. Notification is integral at this phase. Depending on the data affected, you may need to reach out to the various parties that have a stake in your business, including customers, suppliers, and partners. You might also need to report the situation to law enforcement or government agencies.

Containment, Eradication, and Recovery

This is the most actionable stage of the IR plan.
You'll evaluate the strategies you intend to use to contain and eradicate the threat, considering the time and resources needed to employ the solution and other factors. Once you remove the threat, you can begin recovery. You may reflect on weaknesses in your cybersecurity structure that led to the incident and make updates accordingly. You'll also want to train relevant personnel in new approaches to security.

Post-Incident Activity

The final phase allows time to debrief from the incident. You'll evaluate the event's damage and contemplate how to prevent similar problems from happening again. It also encourages you to revisit your existing incident response plan and tweak it to account for what you learned from the incident.

An incident response plan is not only beneficial for facilitating a more intentional approach to security incidents. It can also save your employees from making costly mistakes and help you avoid fines or legal action. Moreover, businesses in industries beholden to certain compliance frameworks such as HIPAA, CIS, NIST-CSF, ISO 27001, and others may be in violation without an IR plan.

What Are Incident Response Policies?

What's the difference between IR plans and IR policies? Unlike an incident response plan that details what to do when an incident occurs, the IR policy is a higher-level governance document that outlines such things as:

- The requirement for the organization to have a plan in place
- The major components of what the plan should contain
- The timeframe and requirements for reviewing the plan to ensure it remains current

While there will inevitably be some crossover between the IR policy and plan, both are needed to ensure a business is covered from both a corporate policy/governance standpoint (i.e., the IR policy) and knowing how to execute a specific response when an event occurs (i.e., the IR plan).

Enhance Cybersecurity with M.A. Polce

As important as incident response is for your business, having strong cybersecurity measures in place can reduce the likelihood of events occurring in the first place. For this reason, it can be advantageous to work with a managed services provider (MSP) and managed security services provider (MSSP) like M.A. Polce. At M.A. Polce, we offer a range of IT and cybersecurity solutions to small and medium-sized businesses. In addition to a full suite of cybersecurity services, our experienced team also assists with assessment and compliance. Contact us today to learn more about boosting information security at your business.
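The NIST incident response phases discussed above can be sketched as a minimal lifecycle state machine. The structure is illustrative only: a real IR plan is a document and a set of runbooks, not code, but modeling the legal phase transitions makes the lifecycle's loops explicit.

```python
from enum import Enum, auto

class Phase(Enum):
    PREPARATION = auto()
    DETECTION_AND_ANALYSIS = auto()
    CONTAINMENT_ERADICATION_RECOVERY = auto()
    POST_INCIDENT_ACTIVITY = auto()

# Legal transitions in the NIST SP 800-61 lifecycle: containment can loop
# back to analysis as new indicators surface, and lessons learned in the
# post-incident phase feed back into preparation.
TRANSITIONS = {
    Phase.PREPARATION: {Phase.DETECTION_AND_ANALYSIS},
    Phase.DETECTION_AND_ANALYSIS: {Phase.DETECTION_AND_ANALYSIS,
                                   Phase.CONTAINMENT_ERADICATION_RECOVERY},
    Phase.CONTAINMENT_ERADICATION_RECOVERY: {Phase.DETECTION_AND_ANALYSIS,
                                             Phase.POST_INCIDENT_ACTIVITY},
    Phase.POST_INCIDENT_ACTIVITY: {Phase.PREPARATION},
}

def advance(current: Phase, nxt: Phase) -> Phase:
    """Move to the next phase, rejecting transitions the lifecycle forbids."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```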
That an AI can unintentionally swallow a poison pill of its own making and induce model collapse is the conclusion of a new research paper that raises new concerns about AI training and long-term accuracy while highlighting the need for an image classification system that may save AI from itself. Once the poison is consumed, the AI never fully heals.

The research paper, called "Nepotistically Trained Generative-AI Model Collapse," is authored by Matyas Bohacek of Stanford University and Hany Farid of the University of California, Berkeley. The key conclusion is that when an AI is trained on even small amounts of data of its own creation, generative AI produces distorted images. Given that generative text-to-image models are trained on data scraped from the internet, future data scraping will likely involve material created by the AI itself. When this happens, the researchers discovered, it's as if the AI has consumed a poison pill.

The researchers found that when an AI is trained on its own images, the result, while initially yielding a small improvement in quality, quickly turns into highly distorted pictures, a process they call "model poisoning." The model collapse persisted even when the self-generated images the AI inadvertently "retrained" on made up as little as 3 percent of the data. The researchers found that "the popular open-source model Stable Diffusion (SD) is highly vulnerable to data poisoning." The researchers used a mixture of real and AI-generated images, but regardless of the percentages in the mix, model collapse occurred by the fifth iteration, visible as highly distorted versions of the original image. The pair also noted that while model poisoning can occur unintentionally, it can come from an adversarial attack where websites are intentionally populated with poisoned data.
“Even more aggressive adversarial attacks can be launched by manipulating the image data and text prompt on as little as 0.01% to 0.00001% of the dataset.” Once the poison is consumed, the AI can partially heal itself by retraining on new “real” images but artifacts are visible even after many iterations, as if the AI retained some scars from the initial experience. Efforts to retrain the AI using techniques like color matching and the replacement of low-quality images with high quality images yielded no delay in ultimate model collapse. A side effect also was a lack of diversity in terms of appearance in the generated image for an “older Spanish man.” All the faces were similar across latter iterations. Some open questions remain. Key among them is whether data poisoning can generalize across synthetic engines. For example, will SD images retrained on DALL-E or Midjourney images exhibit the same type of model collapse? Another is whether AI can be trained to be resistant to this type of data poisoning. And while it can be circumvented by the determined, some type of labeling for real versus AI-generated images would be very helpful. That’s an ongoing issue as it becomes increasingly difficult to tell the difference between the two. AI detection software has a poor record thus far, but image watermarking innovations like Google’s SynthID initiative may help if they scale up effectively. It’s clear, though, that resolving the difference between real and AI-created images may be for AI’s own good in the long term.
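The collapse dynamic the paper reports can be illustrated with a deliberately simplified toy model (not the paper's method): here the "model" is just a fitted Gaussian that is retrained each generation on its own samples. Over many generations its spread shrinks, a crude analogue of the loss of diversity the researchers observed in generated faces.

```python
import random
import statistics

# Toy illustration of model collapse: a "model" that is only a Gaussian fit,
# retrained each generation exclusively on its own generated samples.
random.seed(0)

mu, sigma = 0.0, 1.0      # generation 0: stands in for the real data distribution
n = 20                    # samples drawn per generation

for generation in range(300):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)        # refit on self-generated data
    sigma = statistics.stdev(samples)

# Each refit slightly underestimates the spread on average, and the errors
# compound: the fitted sigma drifts toward zero, so later generations
# reproduce only a narrow slice of the original diversity.
print(f"final sigma = {sigma:.4f}")
```

The same compounding-bias intuition is why even a small fraction of self-generated training data can matter: the feedback loop amplifies its own estimation errors generation after generation.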
A security researcher has found a gap in the way Adobe Systems has fortified its Flash Player for better security, which could result in data being stolen and sent to a remote server. Billy Rios, a researcher who is a security engineer for Microsoft, published on his personal blog a way to get around Flash Player's local-with-filesystem sandbox. A local file is described as one that can be referenced using the "file:" protocol or a Universal Naming Convention path, Rios wrote. But Rios found that the sandbox restrictions are actually not quite so strict. He found he could bypass the sandbox by reformatting the request, such as making a "file://" request to a remote server. Adobe, however, limits those requests to local IP (Internet protocol) addresses and hostnames, Rios wrote. Adobe also blacklists some protocol handlers but not all, a method that Rios considers dangerous. "If we can find a protocol handler that hasn't been blacklisted by Adobe and allows for network communication, we win," Rios wrote. Flash does not blacklist the "mhtml" protocol handler, which is part of Microsoft's Outlook Express application and installed on Windows systems. So a SWF file could export data by using a command, which Rios detailed in his blog. Rios said the method is particularly effective since even if the request fails, the data will still be transmitted to the attacker's server without the victim knowing. Rios wrote that there are two lessons to be learned: first, running untrusted SWF code is dangerous, and second, protocol handler blacklists "are bad." An Adobe spokeswoman said the company has reviewed Rios' blog post and logged a bug, classifying it as a "moderate" risk according to its Adobe Severity Rating System. "An attacker would first need to gain access to the user's system to place a malicious SWF file in a directory on the local machine before being able to trick the user into launching an application that can run the SWF file natively," she wrote in an e-mail.
“In the majority of use scenarios, the malicious SWF file could not simply be launched by double-clicking on it; the user would have to manually open the file from within the application itself.” Adobe and Google worked together on the security improvements in Flash. Last month, the two companies released to developers the first version of Flash that uses a sandbox. It works on Google’s Chrome browser on the Windows XP, Vista and 7 operating systems. The release is a continuation of a broad program by Adobe to improve the security of its products, which includes the introduction of a regular patching cycle timed with Microsoft’s Patch Tuesday releases. Adobe also uses a sandbox in its Reader X product, which was released in November. Reader’s sandbox seals the application off from attacks designed to tamper with, for example, a computer’s file system or registry. The sandbox interacts with the file system, but those communications go through a broker, which limits particular actions.
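Rios did not publish runnable exploit code, but the design flaw he points to, blacklisting known-bad protocol handlers instead of allowlisting known-good ones, is easy to sketch. The Python snippet below is purely illustrative: the handler sets are invented for the example and are not Adobe's actual lists. It shows why a blacklist "fails open" for an overlooked handler like mhtml while an allowlist "fails closed":

```python
# Illustrative sketch only: the handler sets below are invented, not
# Adobe's real lists. A blacklist permits anything it has not heard of;
# an allowlist rejects anything it has not explicitly approved.
BLACKLIST = {"http", "https", "ftp"}   # hypothetical blocked handlers
ALLOWLIST = {"file"}                   # only local access is intended

def blacklist_allows(url: str) -> bool:
    scheme = url.split("://", 1)[0].lower()
    return scheme not in BLACKLIST     # unknown handlers slip through

def allowlist_allows(url: str) -> bool:
    scheme = url.split("://", 1)[0].lower()
    return scheme in ALLOWLIST         # unknown handlers are rejected

# "mhtml" is missing from the blacklist, so the blacklist lets it pass:
print(blacklist_allows("mhtml://attacker.example/steal"))  # True (bypass)
print(allowlist_allows("mhtml://attacker.example/steal"))  # False (blocked)
```

This is the general lesson behind Rios' "blacklists are bad" conclusion: a deny-list must enumerate every dangerous handler, while an allow-list only has to enumerate the intended ones.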
Whether you work or live in a professionally managed building, you normally don't give much thought to how secure it is beyond good locks, security guards, and fire alarms and sprinklers. But our residences and workplaces need to address cybersecurity as well, because the IT systems managing environmental and electrical systems are susceptible to attack.

Building Management Systems (BMS), also called Building Automation Systems (BAS), have been around for years, but recently these solutions have been connected to the Internet for easier management and remote support. Unfortunately, most of these systems aren't designed with robust security controls, and those that do have some authentication and authorization may be installed with default user IDs and passwords, or with weak, guessable passwords. To complicate the situation, many manufacturers rely on sensors and other components that are difficult to update and patch, yet still require Internet connectivity to function. Some systems have direct Internet connections while others are connected to the corporate network. Many companies are entirely unaware that their BMS is connected to the Internet, and those that are aware may not understand the implications.

As more devices and appliances are connected to the Internet for management and support, the Internet of Things (IoT) universe expands, and with it the opportunity for abuse and exploits. What are the implications of a BMS being accessed by unauthorized people?
- Anything controlled by a BMS: changing lighting, shutting down electrical power, manipulating the physical access control system (opening or closing secured doors, monitoring or shutting down security cameras and alarms), shutting down heat or air conditioning or changing building temperatures, controlling elevators, disabling fire suppression systems
- Using the BMS to access other components of the corporate network it is connected to

Losing control of a BMS can have serious consequences, adversely affecting security, availability, comfort, and productivity for corporate and residential tenants and owners, and a compromised BMS can serve as an entry point to any corporate network resources it can reach.

How does this happen? BMS and their devices can be detected via scans of wired and wireless networks. Instructions for logging in, along with default IDs and passwords, are easily found on the Internet; it doesn't take technical expertise to break into a system. Web sites like Shodan (https://www.shodan.io/) scan for and catalog devices across the IoT universe and can be a starting point for finding sites with a BMS. Most break-ins use guessed or stolen credentials, or default passwords. Some real-world examples:

- Target: millions of customers' credit card numbers were stolen; the point of entry was credentials for a heating and ventilation system.
- In 2012, hackers illegally accessed the Internet-connected controls of a New Jersey-based company's internal heating and air-conditioning system by exploiting a backdoor in the software.
- In 2013, researchers gained access to Google Australia's BMS using a default password.
- In 2013, hackers broke into an unnamed state government facility and made it "unusually warm".
- In 2016, IBM researchers hacked into an unnamed business through its BMS.
- In 2016, a security researcher took control of a company's physical security using its Internet-connected BMS.

What Can Be Done? The following are suggestions to protect a corporate BMS from being exploited.
- Companies should inventory what they currently have in place for their BMS, including a physical inventory to determine whether a standalone Digital Subscriber Line (DSL) or cable connection is attached to BMS-controlled systems. Determine whether the BMS is connected to the corporate network.
- If the company has a cybersecurity staff or function, get them involved with the evaluation and ongoing security of the BMS.
- Add cybersecurity controls to the facility budget.
- Change all default user IDs and passwords.
- Shared user IDs and passwords should not be used; every person requiring access should have their own account.
- Network access to the BMS should be behind a corporate firewall.
- Remote access should require a Virtual Private Network (VPN).
- The BMS should be isolated from the internal corporate network on its own Virtual Local Area Network (VLAN), behind a firewall.
- Choose vendors carefully, and be aware of exactly which BMS functions are accessible via online portals.
- If possible, limit access to the BMS to specific networks. If the BMS vendor requires remote access, limit access to that vendor's network.
- Watch for patches for the BMS and its sensors, and apply them promptly.

Appendix of Real-World BMS Attacks
- Intruders hack industrial heating system using backdoor posted online
- Tomorrow's Buildings: Help! My building has been hacked
- Building automation systems are so bad IBM hacked one for free
- Hacking the Doors Off: I Took Control of a Security Alarm System From 5,000 Miles Away
- Researchers Hack Building Control System at Google Australia Office
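The inventory and credential-hygiene suggestions above can be turned into a simple internal audit script. The Python sketch below is purely illustrative: the device-record format, field names, and default-credential list are assumptions invented for the example, not tied to any real BMS vendor or tool.

```python
# Hypothetical audit helper for the inventory step described above.
# The record format and the default-credential list are illustrative
# assumptions, not drawn from any real vendor documentation.
DEFAULT_CREDS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def audit_inventory(devices):
    """devices: list of dicts with 'host', 'user', 'password' keys,
    plus an optional 'reachable_from_corp_lan' flag from a network scan."""
    findings = []
    for d in devices:
        # Flag devices still using factory-default credentials.
        if (d["user"], d["password"]) in DEFAULT_CREDS:
            findings.append(f"{d['host']}: factory-default credentials")
        # Flag devices that are not segregated onto their own VLAN.
        if d.get("reachable_from_corp_lan"):
            findings.append(f"{d['host']}: not isolated from corporate VLAN")
    return findings
```

A real audit would pull the inventory from a scan (or a CMDB) rather than a hand-built list, but even a minimal check like this catches the two failure modes behind most of the incidents listed above: default passwords and a flat network.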
It's hard to ignore the news articles painting a magnificent picture of a future built on innovative technology, with Artificial Intelligence (AI) at its center. New predictions describe how AI will reshape the entire landscape of technology development. Setting the media hype aside, we need to look at the numbers and trends that tell us how AI could affect the overall global economy.

Currently, many people already use AI at home or on their smartphones, mostly in the form of voice assistants. Comparing 10-K filings from 2011 to 2016, mentions of Artificial Intelligence rose from 50 to 475. Enterprises are investing heavily in AI to improve their products and services, but the biggest impact will be on social factors around the world, and there has been a lot of debate about how to steer AI sustainably towards the common good of the planet.

In 2015, all 193 member countries of the United Nations ratified the 2030 Sustainable Development Goals (SDGs), a call for all member nations to come together around three major goals: end poverty, protect the planet, and ensure peace and prosperity. The first target is eradicating poverty everywhere by 2030, with people living on less than $1.25 a day counted as living in poverty. The UN emphasized that Science, Technology, and Innovation (STI) will be imperative if we want to achieve these ambitious targets. The rapid technological advances of the past decade bring many of these problems far closer to being solved, and Artificial Intelligence and Machine Learning are considered transformative technologies that can deliver the required resources and assist with the changes the current economy needs.
A recent study by the McKinsey Global Institute estimated that AI could add close to 16 percent to global output by 2030, almost $13 trillion. McKinsey calculates that the annual increase in productivity growth from Artificial Intelligence could surpass that of earlier transformative technologies, with applications that fundamentally transform every region of the world. AI and ML are not yet transformative in the real-life applications we see today, but we can increasingly observe applications that point towards next-generation solutions, and the technology can help eradicate many of the problems humanity currently faces. A survey examining the Sustainable Development Goals concluded that at least 12 of the 17 can be addressed with AI and ML applications. Below we explore three SDG scenarios that AI applications can effectively address.

1. Financial Inclusion

Access to basic financial services, including different types of bank accounts, lets a person receive and make payments. Financial inclusion also means a person can obtain credit and insurance, a prerequisite to alleviating poverty. Close to two billion people in the world have limited or no access to financial services, leaving them dependent on the informal sector. AI and ML are increasingly helping financial institutions serve people who have no financial touch point. For example, many banks now use ML to assess the creditworthiness of an individual, drawing on data about the person's occupation and financial status. Many developing economies are deploying AI systems in regional languages that can interact with local people who have no financial touch points and teach them about the facilities that come with banking; it works much like a virtual assistant that solves various banking problems.
Machine learning algorithms will be able to analyze demographic and financial data, providing insights into the financial support a region needs. AI and ML will also keep the cost of such services down: with automated data processing and chatbots handling customer service issues, financial management can be brought to many more people.

2. Healthcare Facilities

The inequality between rural and urban populations in terms of healthcare services is plain to see in developing countries. Rural and remote areas suffer from a severe shortage of even basic medical equipment and facilities. Smartphones and portable health devices with biometric sensors can bring the tools of diagnosis closer to the patients who need them. AI will be able to assist doctors by handling common health issues and suggesting the precautionary steps needed to stay healthy. One AI model estimated that a 24/7 facility could reduce the infant mortality rate by almost 10 percent across rural regions. Most rural community centers lack the medical staff required to provide basic care; AI technology can help by delivering timely and accurate diagnoses and by coordinating the distribution of the medical supplies a healthcare facility requires. The technology can even predict whether a region is facing an epidemic by analyzing strings of symptoms across patients.

3. Transportation

Most rural regions lack adequate connectivity with urban regions, putting many job opportunities out of reach, and the transportation industry is one of the areas where enterprises are investing most heavily in AI applications. Unprecedented levels of urbanization will present major challenges for modern cities: according to the UN, 2.5 billion people will move to cities by 2050, 93 percent of them in low- and middle-income countries, creating staggering logistical challenges.
Public transportation, rather than private vehicles, will be at the heart of AI-based solutions. Major cities like Hong Kong and Singapore already analyze millions of data points per day to allocate the best transportation options and schedules. Artificial Intelligence and Machine Learning can have a profound impact on the lives of millions of people. They represent humanity's ability to create autonomous systems that make their own decisions and solve social problems, weighing all the potential solutions and finding relationships that could not have been found before. To know more about Artificial Intelligence, you can download our whitepapers.
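The kind of data-driven schedule allocation described for cities like Hong Kong and Singapore can be illustrated with a toy model. Everything here is invented for illustration (the trip data, the proportional allocation rule, the function name); real transit systems use far richer data and optimization:

```python
# Toy sketch of data-driven schedule allocation: given observed trip
# times, assign a fleet proportionally to hourly demand. All numbers
# and the allocation rule are illustrative assumptions.
from collections import Counter

def allocate_fleet(trip_hours, fleet_size):
    """trip_hours: list of hours (0-23) at which trips were observed.
    Returns a dict mapping hour -> number of vehicles to assign."""
    demand = Counter(trip_hours)          # riders observed per hour
    total = sum(demand.values())
    return {h: round(fleet_size * n / total) for h, n in demand.items()}

# e.g. 6 of 10 observed trips at 8am, 4 at 6pm, 10 buses available:
print(allocate_fleet([8] * 6 + [18] * 4, 10))  # {8: 6, 18: 4}
```

Even this crude proportional rule captures the core idea in the article: measure demand from trip data, then concentrate capacity where and when it is actually needed.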
SQL injection, an infamous term in the world of cybersecurity, often sends shivers down the spine of IT professionals. It is a common and potent threat that can have serious repercussions for businesses and individuals alike. This article aims to demystify the term, understand its consequences, cite examples, and explore preventive measures.

What is SQL injection

SQL injection is a code injection technique. Cyber attackers employ it to manipulate Structured Query Language (SQL) statements, which are used to interact with databases. By exploiting vulnerabilities in an application's database layer, attackers can gain unauthorized access to sensitive data, manipulate data, or even execute administrative operations.

How does an SQL injection attack work

An SQL injection attack occurs when an attacker inserts malicious SQL code into a query. The process generally begins with the attacker locating an application input field whose contents are used directly in SQL queries without proper sanitization. They then input data containing SQL commands. If these commands are executed unchecked, the attacker can manipulate the query to achieve various nefarious outcomes, ranging from viewing sensitive data they should not have access to, through modifying or deleting that data, to executing administrative tasks on the database, giving them a high level of control over the application's data.

Consequences of SQL injection attack

The consequences of an SQL injection attack can be severe and far-reaching, including:

- Data Breach: SQL injection can lead to unauthorized access to confidential data like user names, passwords, and credit card information.
- Data Loss or Corruption: Attackers can manipulate the data, modify it, or even delete it, causing loss or corruption of valuable data.
- Unauthorized Access: SQL injection can provide attackers with administrative rights to the system, allowing them to alter database structures or application features.
- Loss of System Integrity: Attackers can use SQL injection to deploy malicious code or scripts, compromising the integrity of the system.
- Reputation Damage: An SQL injection attack can harm a company's reputation, leading to loss of customer trust and potentially serious financial implications.
- Legal Consequences: If sensitive customer data is compromised due to an SQL injection attack, the affected organization could face legal action or fines for failing to comply with data protection laws.

How to prevent SQL injection

Preventing SQL injection attacks requires a comprehensive approach designed to address their unique challenges. The following strategies can significantly enhance an application's resilience against such attacks:

- Parameterized Queries: Also known as prepared statements, they let the database engine distinguish between SQL code and data, regardless of what user input is supplied.
- Stored Procedures: Stored procedures can encapsulate SQL statements, reducing the surface area for potential SQL injection attacks.
- Input Validation: Validate user input rigorously to ensure it conforms to expected patterns, thereby preventing the inclusion of malicious SQL code.
- Least Privilege Principle: Limit the permissions of accounts that interact with the database. Each account should have only the minimum privileges necessary to perform its task, mitigating the potential damage from an SQL injection attack.
- Database Firewall: Deploying a web application firewall (WAF) can help identify and block SQL injection attacks. WAFs can be programmed to recognize SQL injection tactics and halt suspicious activities.
- Regular Updates and Patching: Keeping the database management system (DBMS) up to date is crucial.
Regular updates and patches often include fixes for known vulnerabilities that could be exploited through SQL injection.

By implementing these strategies, organizations can significantly decrease their vulnerability to SQL injection attacks.

Understanding the threat of SQL injection

SQL injection is a serious cybersecurity threat with potentially significant consequences. However, it is possible to mitigate this risk with a robust security framework and adherence to best practices. Remember, prevention is always better than cure, especially regarding data security. Stay informed, stay vigilant, and keep your systems secure.
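The difference between string-built SQL and a parameterized query can be demonstrated in a few lines with Python's built-in sqlite3 module. This is a minimal sketch with an invented table and data; any database driver that supports bound parameters behaves the same way:

```python
# Demonstration: string-built SQL is injectable; a parameterized query
# treats the same attacker input as plain data. Table and values are
# invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text itself, so the
# OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()

# Safe: the driver passes the input as a bound parameter, never as SQL,
# so it is compared literally against the name column.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # [('s3cret',)] - the injected OR clause leaked the row
print(safe)        # [] - no user is literally named "nobody' OR '1'='1"
```

This is why parameterized queries head the prevention list: the query shape is fixed before any user input arrives, so there is nothing for the attacker's quotes and keywords to break out of.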