It’s time to have the talk with your child. You peeked at their browser history and, well, it’s time. It’s going to be awkward and uncomfortable for both of you, and things have changed a lot since you were that age. But better they hear it from a parent than learn it from a stranger or God-knows-who online. No, not that talk. It’s time to discuss online privacy with your kids.

Won’t the internet and government regulate this for me?

Haha. Good one. Because we all know how honest people are when asked their age before entering a website. And everyone with the ability to make a website or app has a thorough understanding of ethics and regulations when it comes to collecting data and serving advertisements to minors. With great power comes great responsibility, right?

COPPA has been heavily criticized for being ineffective and even counterproductive in protecting kids online. Children often resort to less age-appropriate content instead of waiting around for a parent’s approval. It doesn’t stop kids from accessing pornography or from being advertised to. Websites that might otherwise provide content appropriate for kids often ban children altogether because of the compliance burden and potential fines for violating COPPA.

The UK has been more proactive in spreading online privacy awareness among British youth through the UKCCIS and its “Click clever, click safe” mantra. However, this is the same organization that in 2013 attempted to filter websites deemed unsafe or inappropriate for children, but inadvertently blocked the websites of LGBT rights groups and of charities meant to educate children about drugs, health, and sex. So no, you can’t depend on the internet regulating itself, or on governments (which can only create regulations for their own countries, anyway) to step in on your behalf.

Is children’s privacy really an issue?

You bet it is! Three out of four children in the US have access to a smartphone. In the UK, 43 percent of nine- to 12-year-olds have a social media profile, according to the Library of Congress. One in three are on Facebook despite the 13-year-old age limit. A quarter of those kids on Facebook never touch the privacy restrictions on their profile, and a fifth of them publicly display their address and/or phone number. Facebook claims it is powerless to stop children from lying about their age and creating accounts.

And that’s just Facebook, which isn’t even cool anymore. Snapchat, Tumblr, Vine, Instagram, and Kik are all popular among teens and pre-teens. Who knows what will come next?

Social media and games pose the biggest threat to children’s privacy because they request a significant amount of information upon registration. Profile info is used by the social network to serve targeted ads and recommend content. That info can also be used by scammers and predators to target kids. To be fair, it happens to adults, too. But kids are far more susceptible than adults.

The ramifications of ignoring a child’s online activity can be both immediate and long-term. You’ve probably heard horror stories of a kid unknowingly spending thousands of dollars on in-app purchases in a mobile game. Drug dealers and sex offenders target kids online, as do identity thieves. In fact, Carnegie Mellon’s CyLab says children are over 50 times as likely as adults to have their social security numbers used by another person.
One in 40 families has a child who is a victim of identity theft, according to the Identity Theft Assistance Center and the Javelin Strategy & Research Group, and that figure is on the rise. Kids make great targets for identity theft because they have clean slates with no blemishes on their credit reports. Identity fraud can go on for years without notice, because kids have no need for credit until they are old enough to buy a car, rent an apartment, or take out loans for college. When that day comes, however, these young victims are in for a rude awakening.

See also: Best identity theft protection

Enough of your fear-mongering! What can I do about it?

As a parent, there’s a fine line between protecting your kids’ privacy and invading it yourself. But there are a few simple precautions you can take that will allow them freedom while safeguarding their interests.

Follow and friend your kids

Worried about what your kid is posting on Snapchat? That’s easy. Install it, make an account, and follow them. Now you can monitor their public account activity from a reasonable distance, and they’ll likewise be more conscious about what they post. You can view their friends list on Facebook to see if there’s anyone shady. No, you won’t be able to screen what’s being said on private channels, but kids are allowed to have secrets. Do the same for every social media account. Log into Minecraft to terrorize Junior’s village. Not only will it help keep your child safe, you’ll also get to know them and the world they live in better. It’s a win-win for all parties.

Don’t start making rules that seem arbitrary to your kid. Without being condescending, explain to them the risks and dangers of failing to protect their online privacy. Toss out some of those stats from above as proof. Don’t go behind their back and spy on your kids, either. This will only breed distrust and could leave them more exposed. When you take a measure that requires some oversight, be transparent about it.

Kids and adults alike get sucked into playing quizzes and taking surveys online, especially on Facebook. Many of these sites ask that the user log in with their social media profile before the results can be posted for friends to see. Tell your kid to avoid those games and quizzes: many of them mine data from your child’s profile and their friends’ profiles, which is used by the company and third parties to target advertisements and who knows what else. Unless you recognize and trust the company that owns the website, don’t use your social media profiles to authenticate or authorize apps.

Adjusting kids’ privacy settings

Almost every social media app has a tab full of privacy settings. Learn them. Read the privacy policies. Now that you have the same apps as your kid, sit down with them and disable what needs to be disabled. Remove the accounts from search results so strangers can’t send friend requests. Remove as much public profile info as possible – address, school, phone number, email address, etc. For the most part, tightening privacy settings won’t affect how a social media app functions, so your child shouldn’t put up much of a fight. Protecting your child’s privacy is really just an extension of protecting your own privacy, and you can perform many of these tasks together. We won’t cover every single app that your child may or may not have installed, but we’ll touch on a few of the big ones.

First off, on all devices, location services have become the norm.
This allows Apple, Google, Microsoft, and app makers to monitor the location of the user. For obvious reasons, it’s best to turn these off. Tell your kids not to geo-tag their photos on social networks – at least not until they’ve left that particular location and don’t plan to return. In newer versions of iOS and Android, you can disable the location permission on an app-by-app basis, or disable location tracking entirely in the settings.

Front-facing cameras are also nearly universal on phones, tablets, and laptops nowadays. There’s no shortage of news stories about both hackers and law enforcement remotely enabling cameras unbeknownst to the user, snapping photos, and learning their whereabouts. Place a sticker or piece of electrical tape over these cameras. Always set a swipe pattern, PIN, or password on your devices to keep both strangers and ill-willed acquaintances out of your kid’s business.

Facebook privacy settings

It’s likely that no social network on the internet knows more about us than Facebook, and the privacy settings of the world’s largest social network can unfortunately be a bit tricky to navigate. Start by going to the top-right corner of the home page and clicking the lock icon. Click to drop down “Who can see my stuff?” and switch it to “Just friends.” This should keep your child out of view of passing strangers. In the next drop-down, “Who can contact me?”, you can set who is allowed to send your child friend requests. There’s no longer an option to make a profile un-searchable; instead, the most private option available is to only allow Friends of Friends to send friend requests. Your kid can still send friend requests to whomever he or she pleases, so it won’t limit who they can be friends with. In the last section, block scammers, cyberbullies, and anyone else you don’t want your kid communicating with.

We’re not done yet. At the very bottom of this tab, click “See more settings.” Here you can prevent people from finding your kid’s account by searching for their phone number or email address. Do so.

Click the “Timeline and Tagging” tab on the left sidebar of this page. Set all of these settings to “Friends” where available to keep strangers at bay. Here you may also want to enable the option to review photos and statuses in which your child is tagged. This prevents inappropriate photos and cyberbullying from showing up on their account, which could otherwise come back to haunt them later. Facebook now lets users share statuses and photos while exempting specific people from seeing them. Let your kid know that they shouldn’t block you in this way: anything they don’t feel comfortable sharing with you shouldn’t be shared with the rest of the world.

Next up is the Followers tab. A follower is basically someone who can view your profile and posts but isn’t personally friends with you. Switch this from Everybody to Friends as another barrier to strangers.

Next, head to the Apps tab. After you’ve removed all the unnecessary permissions from these apps (make sure to click Show All at the bottom), scroll down a bit further to the three panels below the app list. Under Apps Others Use, click Edit. This is a list of information that apps used by your friends can see on your profile. Tricky, right? Even after disabling all those app permissions, the apps used by your friends can still access your information. Uncheck everything and stick it to Big Brother.

Okay, last step. Go to the Security tab at the top of the left sidebar.
The privacy-related bits we’re most concerned with here are Login Approvals, App Passwords, Your Browsers and Apps, and Where You’re Logged In.

- Login Approvals is basically two-step authentication. Whenever you log in from a new device, a code is sent to your phone as an extra layer of security. You will have to add a phone number if you haven’t already.
- App Passwords lets you set a separate password for apps that let you log in with your Facebook account, such as Spotify and Skype. It’s a good idea to have a different password for each app when possible. Learn more about creating and memorizing strong passwords here.
- If your kid gets a new phone or logs in on someone else’s device, the Your Browsers and Apps setting is important. It’s a log of devices that don’t require identity confirmation and don’t trigger notifications when someone logs in from them. Remove any that aren’t among your current devices or that you don’t recognize.
- Where You’re Logged In is similar to the above setting, but for active logins. Again, remove any you don’t recognize or that aren’t yours.

Snapchat privacy settings

If sifting through all of Facebook’s privacy settings made you weary, you’ll be happy to know Snapchat is much simpler. Launch the app and tap the ghost icon at the top of the screen, then the settings cog at the top right. Scroll down to the “Who can…” section. Set both Contact Me and View My Story to My Friends. “Who can view my story” can also be customized to a specific list of people. This is also where to block certain individuals. Back on the settings page, tap Login Verification to set up two-step authentication. This can be done with a phone number and SMS, or by using an authentication app like Google Authenticator or Authy. Login verification makes logging into Snapchat from a new device a two-step process, which is more secure.

Instagram privacy settings

Rounding out the three most-used apps among teens is Instagram. To find Instagram’s privacy settings, tap the head-and-shoulders icon on the bottom right, then the three dots on the top right. You can elect to switch to a Private Account, but most would agree this sort of defeats the point of Instagram. Other than that, there’s not much to make private. Instead, privacy and safety on Instagram is more about how the app is used. When posting a photo, don’t add a location until after your child has left said location, and only if they don’t plan to return anytime soon. Otherwise, strangers can determine where your kid hangs out, or where they are as soon as the photo is posted. Not only could this mean a predator could find your kid, it also means a burglar could figure out whether a family is home or not.

Twitter privacy settings

Similar to Instagram, there’s not much to hide on Twitter. Don’t add any personal details to tweets or the profile blurb, and you’ll be fine.

Tumblr privacy settings

Tumblr isn’t quite as popular among teens as other apps, but among the art and poetry lies a haven for porn, smut, and vulgarity. Tumblr doesn’t require a real name upon registration, so there’s no need to use one. Privacy settings can be accessed through the app or on the website. Here you can disable messaging so strangers can’t contact your children. If your child has his or her own Tumblr blog, it’s probably a good idea to disable comments and replies to posts. Blogs can be made private, but this makes them password protected; whether that’s best is between you and your child.
As with everything else, don’t post personal details, and be smart about geo-tagging photos.

Parental controls

Parental controls can be enabled either through a built-in mechanism or with a third-party tool on Android, iOS, and most modern web browsers. These controls not only protect your child from inappropriate content, but can also prevent them from inadvertently divulging personal details about themselves or your family. They are mainly aimed at younger children; your 16-year-old won’t appreciate the level of micromanagement these tools offer.

Android lacks dedicated parental controls, but some phones come with the ability to create multiple user accounts. In the settings, check for a “Users” section, where you can add a restricted profile. A restricted profile allows you to toggle which apps the user can access. This is especially useful if you allow a young child without a phone of his or her own to play with your tablet or phone. The account switches depending on the PIN or password entered on the lock screen. If you’re worried about invasive apps or games that are likely to run up a bill, parents can require that their Google account password be entered before downloading an app or making in-app purchases. Apps can be filtered by low, medium, or high maturity levels.

Several apps make it easy to monitor and manage what children do with their phones. Norton Family Premier costs a whopping $49.99, but it comes with a slew of useful features, including location tracking, the ability to block individual apps, and web filtering. Parents can see and limit when and how much screen time their kids get. It also works on multiple devices for families with multiple smartphone-toting rascals. Qustodio, Net Nanny, and PhoneSheriff are other solid premium options. For free alternatives, check out Funamo, Lock2Learn, MM Guardian, and AppLock.

iPhones and iPads, unlike Android devices, have some parental controls built in. In the General settings of iOS, just tap Restrictions and create a passcode. Here you can disable installed apps and certain features. Safari, the App Store, FaceTime, music apps, Siri, and in-app purchases can all be turned off or filtered. Social media and location services can be restricted as well. If parents want to monitor and manage iPhone use with more granularity, there’s an app for that: Netsanity, Qustodio, OurPact, and Kidslox all include features like curfews, timers, site blockers, and app hiders.

In Chrome settings on the desktop browser, scroll down to the People section. Uncheck “Let anyone add a person” so your child can’t easily circumvent the restrictions, then click “Add person.” You can choose to create a desktop shortcut especially for them. Select an icon for them and check “Control and view the websites this person visits from [your account].” Navigate to the supervised users dashboard at https://myaccount.google.com/people-and-sharing. Choose the new profile, then click Manage at the top right of the Permissions frame. Here you can enter specific websites to block, or allow only certain websites to be accessed. Remember to enable SafeSearch as a general filter for kids. If you block a site that your tiny surfer wants access to, he or she can request it without even having to ask you face to face. For more granular controls, a handful of extensions in the Chrome Web Store should fulfill your needs. WebFilter Pro and Blocksi Web Filter offer features like time management, YouTube filtering, web filtering, whitelists, and blacklists.
Firefox doesn’t come with any built-in parental control measures, so you’ll have to rely on third-party plug-ins. FoxFilter is probably the most widely used. Its sensitivity can be set to block keywords in the body text or just in the metadata, such as the page title and URL. Specific keywords and websites can be blacklisted and whitelisted, and many keywords are included upon installation.

Microsoft introduced dedicated children’s accounts starting with Windows 8. On Windows 10, click on the Start menu and go to Settings. Head to Accounts > Family and Other Users, and hit “Add a family member.” On the following screen, choose “Add a child.” You may need to create an email account for them. Enter a phone number used to reset the password. Windows will then ask whether you want to let Microsoft target your kids with ads or send them promotional offers. Turn these off, as they are counterproductive to the whole privacy stance we’re trying to take. Now that you have a child account set up, you can receive weekly reports on their activity and manage the settings online. You can choose to block inappropriate websites, add your own sites to the whitelist and blacklist, limit apps and games by rating, and set when and for how long the computer can be used.

You might find this process a bit off-putting, since your child has to register with an email address. If that’s the case, third-party applications are also available. You might recognize the names from our Android and iOS lists: Qustodio, Norton Family, and SocialShield are all solid options. SocialShield is particularly useful for monitoring social media, alerting parents to posts containing content about sex and drugs, suspicious friend requests, and messages that could lead to a real-world interaction.

To turn on parental controls in OS X, head to System Preferences in the Apple menu. Click Parental Controls, and add a new user with parental controls enabled. Then, back on your administrator account, enable parental controls for the new user. If you spoiled your kid with his or her own MacBook, you can also manage parental controls from another computer. To set restrictions, click through the tabs along the top. Apps lets you specify a permitted rating and which apps your kid can access. Web lets you filter access to websites. People restricts a child’s interaction with others through the Mail, iMessage, and Game Center apps. Time Limits is for time management. Other can be used to censor language, block the built-in camera, and prevent password changes. Qustodio and Norton Family are also available as third-party parental control software for Macs.

ID theft protection

ID theft can happen to anyone, and children are often targeted because few people think to check their kids’ credit reports. Be one of the few who do. In the US, everyone is entitled to one free credit report per year from each of the three national credit reporting bureaus, which you can get from AnnualCreditReport.com. Order a copy and check it for any unauthorized or suspicious activity. UK citizens don’t get the same courtesy, but a few credit reporting agencies offer free trials through which you may obtain your kids’ credit reports. Credit reporting begins as soon as a child has an account opened in their name for which a credit check is required. From that point on, they have a credit score.
Teach your kids good habits early on, such as thoroughly checking each purchase on a credit card statement if they have one, and regularly monitoring bank accounts for any activity they didn’t authorize. Let them know the importance of safeguarding their social security numbers (national insurance numbers if you’re in the UK), as well as the other ID numbers on driver’s licenses and medical insurance cards. These can all be used to commit fraud under your kid’s name and damage their credit for years to come.

As an added layer of protection, you might consider investing in an identity theft protection service. These agencies monitor your personal information, bank accounts, credit cards, and public records for misuse. They offer assistance should discrepancies or fraud crop up, along with large insurance plans to compensate for any losses that occur as a result of identity theft. If your child has previously been a victim of identity theft, they are more at risk, so these services are especially useful. TrustedID is the only ID theft protection service we’ve reviewed with a true family plan, but other agencies usually have options to enroll kids. Check out all of our ID theft protection reviews to find the one that best suits your family.

Parental control software

Parental control software can give parents broader and more granular tools to manage their kids’ online behavior. Many programs allow you to monitor which websites your kids visit and which apps they use. You can block specific sites or apps, or enable blocking on websites that contain certain keywords or fall under a certain category. You can even specify when kids can use their devices and for how long.

Because every website request begins with a DNS lookup, it is also possible to use content filtering services that operate in the cloud. These services intercept web traffic between the browser and the web servers that host websites, so the parental control software doesn’t need to be installed on any of your devices. You just have to alter the DNS setting either on your computer or on your router. With this change, the filtering service can intervene and block inappropriate sites. For an example of this kind of service, check out the free parental control service offered by CleanBrowsing. It also checks requested sites for malicious content, and it can identify the fake sites used by phishers. The service is additionally available as a paid firewall edition for businesses, and there is a tailored version for schools.

Dozens of parental control programs are available, so finding the best fit at the right price can be a challenge. Fortunately, we’ve taken care of that for you by extensively testing several of the top parental control programs on the market. Check out our parental control software reviews here.

Use a VPN

When inputting private information on registration pages and online shopping sites, make sure the site uses a verified SSL certificate. This is usually indicated by a lock icon and a URL that begins with HTTPS. HTTPS encrypts communication between the browser and the server. Install the HTTPS Everywhere extension on your browser to use HTTPS by default where available. HTTPS isn’t available on most websites, however, so any information transmitted between your computer and those sites is unencrypted and viewable by anyone who cares to look. To better protect yourself and your children, invest in a VPN service.
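If you’re curious whether a site really presents a valid certificate, you can check with a few lines of code rather than trusting the lock icon alone. Here is a minimal sketch using only the Python standard library; the hostname is just a placeholder:

```python
import socket
import ssl

def check_certificate(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the server's validated certificate."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Validation already happened during the handshake; a bad or
            # mismatched certificate raises ssl.SSLError before we get here.
            return tls.getpeercert()

cert = check_certificate("example.com")  # placeholder hostname
print(cert["subject"], cert["notAfter"])  # who the cert names, and its expiry
```

If the call returns without raising an exception, the site presented a certificate that chains to a trusted authority and matches the hostname, which is exactly what the browser’s lock icon asserts.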
A VPN encrypts all your incoming and outgoing traffic and routes it through a server in a location of your choosing. This hides the content of your connection while masking your IP address and true location, making your internet activity effectively anonymous. Switching on the VPN before surfing the web or doing anything else online is a good habit for both parents and kids to get into.

When it comes to ease of use – even for a young child – it’s tough to beat ExpressVPN. It’s one of the fastest VPNs we’ve tested and is designed with novices in mind. On the downside, it doesn’t offer family plans, and the individual plans are relatively expensive. Read our review of ExpressVPN. For a cheaper option that works for an entire household of devices, we recommend PureVPN or Private Internet Access. Read our reviews of PureVPN and PIA. Note that if you also intend to use your VPN for streaming, not all VPNs work with Netflix, BBC iPlayer, Amazon Prime Video, Sky Go, and many other popular streaming services.

Making your child anonymous

In combination with a VPN, the following precautions can make your child invisible – or at least more of an enigma – to the internet at large.

Fake personal information

You probably spent a fair amount of time teaching your kids how to spell their names, when their birthday is, and the address where they live. Now teach them to lie about it to strangers when asked. Use a fake birthday on Facebook, if you use one at all. A first and middle name is preferable to a first and last. If you’re not expecting mail, use a phony address.

Enable ad blockers

Online advertisements aren’t just for advertising; they are also used to mine data from the person viewing them. Some are harmless, but others are downright malicious. Ad blockers and anti-tracking extensions can prevent ad companies from snooping on your kids. We recommend Adblock Plus and Disconnect. Disconnect even offers an educational kids’ version, Disconnect Kids. Some antivirus products also include ad blockers.

Use a private search engine

Whether for homework or out of curiosity, children will need to use search engines. Google and others collect information on every user to create a profile around them, which is used to make recommendations and target ads. On both mobile devices and desktop browsers, you can set the default search engine to something more anonymous. DuckDuckGo, StartPage, and Ixquick don’t log IP addresses, use tracking cookies, or monitor which results you click on. StartPage and Ixquick actually scrub your personal details before submitting the search query to Google or another major search engine on your behalf, so you get the same results without giving up any information.

See also: Best private search engines

Untag photos of your child

This can be more difficult than it sounds. Depending on your child’s age, you may want to untag all photos of your kid that appear online. On Facebook, as mentioned above, you can opt to review any photo that your child is tagged in before the tag is made public to friends. But not all social networks have such granular controls. If your child is on a sports team or in a club, this can get tricky. Discuss tagging kids in photos, and making web pages and Facebook groups private, with adult leaders and coaches. Lay ground rules with babysitters and fellow parents about posting photos online. Likewise, tell your child to be respectful and not tag anyone in a status or photo without their permission.
Collecting kids’ info

If you run a website, app, or even just a Facebook group that involves kids, it’s important to know what information to gather, how to collect and secure it, and who to share it with. We’ve got a separate guide just for that: Fair practices for collecting information from children.

Kids’ privacy isn’t just about protecting them from predators and fraudsters, although that’s certainly reason enough. There’s also a societal impact on kids who are bombarded by algorithm-triggered advertising and marketing. Kids are impressionable, and their minds can be shaped by what they see online. What they see on the internet is defined by what Google, Facebook, Microsoft, and the other corporations that rely on advertising and mass dragnet data collection want them to see.

Likewise, in an age when nothing is forgotten, children are also shaped by what they leave behind. In a Wall Street Journal article, Julia Angwin sums it up best: “They won’t have the freedom I had as a child to transform myself. In junior high school, for example, I wore only pink and turquoise. But when I moved across town for high school, I changed my wardrobe entirely and wore only preppy clothes with penny loafers. Nobody knew about my transformation because I left no trail, except a few dusty photographs in a shoebox in my parents’ closet. Try that in the age of Facebook.”

The internet can open up more of the world to your child than any generation before them has known. But we shouldn’t allow faceless for-profit corporations to mold them into a class of consumers limited to the online personas they unknowingly helped to create.
The Intelligent Cloud: Artificial Intelligence (AI) Meets Cloud Computing

If you thought that mobile communications and the internet have drastically changed the world, just wait. The coming years will prove to be even more disruptive and mind-blowing. Over the last few years, cloud computing has been lauded as the next big disruption in technology, and true to form it has become a mainstream element of modern software solutions, just as common as databases or websites. But is there a next phase for cloud computing? Is it an intelligent cloud?

Artificial intelligence (AI) is the type of technology with the capacity not only to enhance current cloud platform incumbents but also to power an entirely new generation of cloud computing technologies. AI is moving beyond simple chat applications like scheduling support and customer service to impact the enterprise in more profound ways, as automation and intelligent systems develop further to serve critical enterprise functions. AI is bound to become ubiquitous in every industry where decision-making is being fundamentally transformed by “thinking machines.” The need for smarter and faster decision-making and the management of big data is the driving factor behind the trend.

Remember Moore’s Law? In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on integrated circuits had doubled each year since their invention. For the next 50 years, Moore’s Law held. In the process, sectors like robotics and biotechnology saw remarkable innovation, because the machines and computing power they relied on became faster and smaller over time as the transistors on integrated circuits became more efficient.

Now, something even more extraordinary is happening. Accelerating technologies such as big data and artificial intelligence are converging to trigger the next major wave of change. This “digital transformation” will reshape every aspect of the enterprise, including cloud computing.

Artificial intelligence is expected to burgeon in the enterprise in 2017. Several IT players, including today’s top IT companies, have invested heavily in the space, with plans to increase their efforts in the foreseeable future. Although AI has been around since the 1960s, advances in networking and graphics processing units, along with demand for big data, have put it back at the forefront of many companies’ minds and strategies. Given the recent explosion of data from the Internet of Things (IoT) and applications, and the necessity for quicker, real-time decision-making, AI is well on its way to becoming a key differentiator and requirement for major cloud providers. In a market that has long been dominated by four major companies – IBM, Amazon, Microsoft, and Google – an AI-first approach has the potential to disrupt the current dynamic.

“I think we will evolve in computing from a mobile-first to an AI-first world.” – Sundar Pichai, chief executive of Google

The consumer world is not new to AI-based systems; products like Siri, Cortana, and Alexa have been making our lives easier for a while now. However, the enterprise applications for AI are completely different. An AI-first enterprise approach should be designed to allow business leaders and data professionals to organize, collect, secure, and govern data efficiently so they can gain the insights they require to become a cognitive business.
In order to maintain a competitive advantage, businesses today have to get insights from data; however, acquiring those insights is complex and requires work from skilled data scientists. The ability to make predictions for strategic and tactical purposes has evaded enterprises due to prohibitive resource requirements. Cloud computing solves the two largest hurdles for AI in the enterprise: abundant, low-cost computing and a means to leverage large volumes of data. Today, this new breed of platform as a service (AI-as-a-service, or AIaaS) can be applied to all the data that enterprises have been collecting.

Major cloud providers are making AI more accessible “as a service” via open source platforms. For enterprises with an array of complex issues to solve, the need for disparate platforms working together can’t be ignored. This is why making machine learning and other variations of AI applications and technology available via open source is critical to the enterprise. By leveraging AI-as-a-service, businesses can innovate solutions to a practically infinite range of problems.

As machine learning becomes more popular as a service, organizations will have to decide the level at which they want to be involved. While the power of cognitive intelligence is undeniably high, wanting to use it and being able to use it are two completely different things. For this reason, most companies will opt to have a PaaS vendor manage their entire data intelligence cycle rather than attempt it in-house, allowing them to focus on powering and developing their applications.

When looking for an AI provider, you have to ask the right questions. The ideal vendor should be in a position to explain both how they handle data and how they intend to solve your specific enterprise problem. Multiple digital trends have the potential to be disruptive; the only way to guarantee smarter business processes, more agility, and increased productivity is to plan ahead for the change and impact that is coming. The main differentiating factor between competing vendors in this space will be how the technology is applied to improve business processes and strategies.

Author: Gabriel Lando
Description

This indicates an attempt to push a blob to a Docker registry. Docker is an application that allows users to virtualize applications, such as a web server or web application, in units known as containers. Containers are created from “images,” files which, like virtual machine images, specify their contents. One major difference between containers and virtual machines is that containers are more lightweight.

Impact

Network bandwidth consumption
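For context, “pushing a blob” refers to the Docker Registry HTTP API V2 upload flow: the client first initiates an upload session, then transfers the layer bytes. Below is a rough sketch of the monolithic variant of that flow; the registry host and repository name are hypothetical, the third-party requests library is assumed, and authentication (which real registries require) is omitted:

```python
import hashlib
import requests

REGISTRY = "https://registry.example.com"  # hypothetical registry host
REPO = "library/myapp"                     # hypothetical repository name

blob = b"example layer contents"           # normally a compressed image layer
digest = "sha256:" + hashlib.sha256(blob).hexdigest()

# Step 1: initiate an upload session. The registry answers 202 Accepted
# with a Location header pointing at the upload URL (assumed absolute here).
resp = requests.post(f"{REGISTRY}/v2/{REPO}/blobs/uploads/")
resp.raise_for_status()
upload_url = resp.headers["Location"]

# Step 2: complete the upload in one shot by PUTting the bytes along with
# their digest. This transfer is the traffic a signature like the one
# described above would flag.
sep = "&" if "?" in upload_url else "?"
resp = requests.put(
    f"{upload_url}{sep}digest={digest}",
    data=blob,
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.status_code)  # 201 Created on success
```

Since image layers can be hundreds of megabytes, it is this blob transfer, rather than the small manifest requests around it, that accounts for the bandwidth consumption noted above.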
Internet of Things: The Next Big Bang of Technology

The tsunami of interconnectivity among objects – from door locks to toll booths, refrigerators to smartphones, coffeemakers, cars, and lamps to almost anything – has led to the Internet of Things (IoT). It is a world in which objects, animals, or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), and the internet. It’s built on cloud computing and networks of data-gathering sensors; it’s mobile, virtual, and instantaneous connection; and it’s going to make everything in our lives, from streetlights to seaports, “smart.”

When we talk about making machines “smart,” we’re not referring strictly to M2M. We’re talking about sensors. We now have sensors monitoring and tracking all sorts of data, and cloud-based apps translating that data into useful intelligence and transmitting it to machines on the ground, enabling mobile, real-time responses. Thus bridges become smart bridges, and cars smart cars. Maruti Suzuki has already become the first carmaker in India to introduce the Apple CarPlay infotainment system, for premium buyers of the Baleno.

It’s all about connections. All the gadgets and technologies showcased recently at CES (the Consumer Electronics Show) had one thing in common: they are all connected to the internet, and potentially to each other. And it’s the connections between these connected things that we call the Internet of Things. Gartner forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and that 5.5 million new things will get connected every day.

IoT is leading to a convergence of electronics and software in which the electronics is getting smarter and the software is getting more pervasive, driving what can be done. Users may only enjoy the full benefits a few years in, after months and months of testing, but the momentum is certainly there. Starting in earnest in 2016, in areas with dense urban populations and advanced 3G, 4G, and 5G network coverage, more and more devices will be connected. The real growth, however, will come from industry.

Although the idea seems quite simple, it can be very advantageous for a company to utilize the IoT to ensure quality service is given to its customers. Another advantage of IoT is the ability to track individual consumers and target them based on the information supplied by their devices. Devices can also make decisions and adapt without human guidance to reduce their energy usage.
I can remember like it was yesterday when I got infected with ransomware for the first time. It was a widespread attack back in 2012 that affected multiple countries. Basically, my screen froze, displaying an image claiming to be from the police and saying they were going to arrest me if I didn’t pay the $300 fine. I was so scared, and I believed it was legit because it had all the badges and the language the police would use. My heart started racing, and my head was about to explode. But then I realized it was fake, because their claims were false. The next thing I did… I started using an antivirus.

Can antivirus detect ransomware?

A premium antivirus can detect ransomware – provided it has real-time protection, anti-exploit technology, and dedicated anti-ransomware technology that blocks any attempt to hold files hostage. Such antivirus software will protect you against all of the major ransomware attacks. But watch out for the “major” part, which implies that there are still attacks that might get through to you. And so, apart from installing an antivirus that can detect ransomware, there are plenty of other things you’ll have to do yourself. “Never go to war without knowing your enemy” couldn’t apply better to fighting ransomware. Here’s what you should know about this potentially catastrophic type of malware.

What is the difference between a virus and ransomware?

Both viruses and ransomware are types of malware, and both are quite commonly encountered. But they work differently. A virus becomes part of an infected program: it inserts a copy of itself into that program, then multiplies and spreads from one computer to another. As it propagates, the virus leaves a path of destruction. Despite its “viral” characteristic, the effects of an infection can be mere mild disturbances – or, if a company falls victim, severe damage, with data or software loss and operational complications.

Ransomware has a single, specific goal. It works by cryptoviral extortion: it encrypts files, making them inaccessible, and the victim has to make a ransom payment to regain access to the locked files. Ransomware attacks usually set a deadline for the payment. Should the victim miss it, the data will be deleted; otherwise, the hacker is supposed to send a decryption key following the payment. But that’s not always what happens. Whether they pay or not, victims – especially large organizations – often end up spending even more (as in millions of dollars) to recover their data or rebuild the lost work. The fear is so big that many of them purchase cyber insurance specifically for this type of attack.

How can you get one or the other?

This is yet another aspect that sets viruses and ransomware apart. A virus cannot exist on its own. It needs a host, which is an executable file, and this is also how it spreads. When the victim runs the host file, the viral code is executed along with it; that’s when it becomes active and starts replicating and spreading itself. You can catch a virus when you transfer an infected file. Whether via email attachment, file sharing, drives, or network transfers, you’ll activate it unknowingly if the file contains the virus.

Ransomware, on the other hand, can spread through malvertising (hacking legitimate advertising and using it to spread ransomware), phishing emails, or advanced exploit kits. Much like a worm, ransomware can easily infect many different devices, as long as the victim takes the bait.
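To see why victims can’t simply recover their files once they’ve been encrypted, consider this toy sketch. It assumes the third-party cryptography package (any authenticated symmetric cipher would make the same point): without the exact key the attacker withholds, decryption simply fails.

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # the secret the extortionist withholds
ciphertext = Fernet(key).encrypt(b"family photos, tax records, homework")

# With the key, recovery is trivial.
print(Fernet(key).decrypt(ciphertext))

# Without it, there is no shortcut: a wrong key fails outright, and
# brute-forcing a 128-bit key is computationally infeasible.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: data stays locked")
```

This is exactly why prevention and offline backups matter so much more than cleanup: once strong encryption has run over your files, no antivirus can undo it.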
Ransomware will either trick someone into installing it or exploit a security hole in some vulnerable software. Because the purpose is to extort money, it will most likely target moderately high-profile victims: small and medium-sized businesses and public institutions that can’t easily afford to lose their data, but that also can’t really afford ransomware insurance.

What makes ransomware so successful?

As long as organizations and individuals keep paying ransoms, hackers will keep developing and spreading ransomware. The reason many fall victim in the first place is that they fail to address the critical security flaws of the networks they work with. Organizations and enterprises tend to rely too heavily on cloud and online backups, which leaves them vulnerable when those backups get encrypted too. There’s also the fact that anonymous money transfer services make it very easy for the bad guys to collect their payments without being caught. And to give you an idea of the size of the ransomware business, know that any cybercriminal now has the option to purchase ransomware-as-a-service!

Why should you never pay the ransom?

First of all, because you have no guarantee that you’ll get your encrypted files back after you make the payment. According to a report published by the CyberEdge Group, only 19% of ransomware victims who paid the ransom also got their data back. Second of all, because not paying discourages hackers from repeatedly launching such attacks. As mentioned above, one thing that makes ransomware so successful is that it works: many people pay, and hackers are encouraged to keep launching these attacks because they’re profitable. Do not pay the ransom is what any cybersecurity expert will tell you! If you’ve been scammed once, know that you only have a 19% chance of getting your valuable data back. And remember that by not giving the hacker what he wants, you’re reducing your odds of going through this again in the future!

How to prevent ransomware?

Ransomware prevention plays an even more significant role than detection in avoiding the worst. Here’s what I suggest you do to be as protected as possible:

- Use reputable, top-tier antivirus software;
- Keep your security software up to date;
- Keep all systems and software up to date;
- Use content scanning and filtering on all your mail servers, to prevent spam email with malware-infected attachments or malware links from reaching your inbox;
- Make sure that all inbound emails are scanned for threats, and that suspicious attachments are blocked;
- Instruct all employees:
  - Not to share personal information when receiving unsolicited emails, phone calls, or instant or text messages asking for it – EVEN IF the sender claims to be from IT;
  - To double-check any such request by directly contacting your IT department;
  - To always notify the IT department before traveling, if they plan to access work documents remotely;
  - To always use a VPN when connecting to public Wi-Fi;
- Always be prepared for an attack:
  - Include preventive network segregation and segmentation to minimize data loss should one segment be compromised;
  - Rely on offline backups – ideally, you should have three copies stored in two independent places;
  - Work with your security department to set up a risk management plan.

Which antivirus is best for ransomware?
Judging by the protection layers they come with and the scores they received in testing, you should consider the following as the best antivirus options for ransomware:

BullGuard

This one comes with more than one ransomware protection layer. It can eliminate known ransomware and work to recover whatever data was damaged before it managed to stop the attack. It also continually scans for ransomware-specific activity and behaviors, ready to prevent any unauthorized file modification. On top of this, BullGuard will not be heavy on your system. This is one of my favorite parts, and it’s what I use and what I recommend. Click here to find out more about their antivirus.

Kaspersky Security Cloud

This free option will protect you against file-encryption ransomware and the various disk-encryption varieties roaming around. It comes with a built-in module that works on breaking a couple of lesser-known screen-lock ransomware strains. And you can pair it with Kaspersky’s dedicated Anti-Ransomware Tool, which is also free. It features cloud-assisted behavior detection, and it is ready to scan for and quickly block ransomware and crypto-malware!

AVG

With AVG, you’re getting an effective ransomware shield and a dedicated anti-malware app. It has the benefit of a user-friendly interface that you can easily tweak, and it gives you lots of configuration options. Protection targets not only downloadable threats but also fishy links. Plus, you really need to check out the option to remotely scan a PC from your mobile!

There’s no better way to protect yourself from ransomware than preventing it from happening in the first place. Knowing that there is antivirus software that can detect ransomware should bring you only a bit of relief. You’re still supposed to be extremely careful about how you access online resources. Show as much caution as you can. And never stop informing yourself and learning about the new ways that hackers are spreading ransomware.
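The “ransomware-specific activity and behaviors” these products watch for often boil down to one thing: a single process modifying many files in a short window. The standard-library sketch below illustrates that idea on one directory; the path, threshold, and interval are arbitrary choices, and a real product correlates far more signals than this.

```python
import os
import time

WATCH_DIR = "/home/kid/Documents"  # example path to watch
THRESHOLD = 20                     # files changed per interval before alerting
INTERVAL = 5                       # seconds between scans

def snapshot(root):
    """Map each file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished mid-scan; skip it
    return mtimes

before = snapshot(WATCH_DIR)
while True:  # runs until interrupted; a demo loop, not a daemon
    time.sleep(INTERVAL)
    after = snapshot(WATCH_DIR)
    changed = [p for p, m in after.items() if before.get(p) != m]
    if len(changed) >= THRESHOLD:
        print(f"ALERT: {len(changed)} files modified in {INTERVAL}s "
              "- possible mass encryption in progress")
    before = after
```

A human editing documents touches a handful of files per minute; ransomware touches hundreds. That asymmetry is what makes behavior-based blocking practical even against strains no signature database has seen.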
Due to the continued growth of remote work, threat intelligence teams around the world have been tracking a significant increase in phishing and social engineering attacks. These attacks coincide with a temporary drop in more traditional cyberattacks, indicating that attackers, like workers, are modifying their efforts to accommodate changes in how we work. In fact, our recent Global Threat Landscape Report details this and more.

Attackers are attempting to capitalize on the current business environment via social engineering attacks. For example, during the early stages of the COVID-19 pandemic, they impersonated legitimate organizations, such as the Centers for Disease Control and the World Health Organization, offering fake informational updates, discounted masks and other supplies, and even promises of accelerated access to vaccines. Similar attacks targeted healthcare workers, political movements, and even the recently unemployed using the same sorts of tactics. As time goes on, attackers continue to use disruption to their advantage – and you’re most likely to experience the following attacks:

- Baiting, where attackers use a victim’s greed or curiosity to transmit malware. A well-known example is mailing a key employee a mysterious USB drive, which spreads malware when the employee inserts it into their PC.
- Scareware, where attackers trick victims into believing their system is under threat, and the “solution” is often a malware-infected tool.
- Phishing, where attackers pretend to be an authority and create a sense of urgency or curiosity in the victim, asking them to follow a malicious link.
- Pretexting, where an attacker establishes trust by impersonating someone important or authoritative, then uses that authority to collect sensitive information.

The reason that social engineering – an attack strategy that uses psychology to target victims – is so prevalent is that it works. According to Verizon’s 2019 Data Breach Investigations Report (DBIR), nearly one-third of all data breaches involved phishing in one way or another. Cybercriminals are opportunistic, and they constantly prey on the only vulnerability that cannot be patched: humans. It is a perpetual bombardment, every minute of the day, 24x7x365. And the odds are in the attacker’s favor, because they only need one unsuspecting person to click on a malicious link or attachment to open the gates into the corporate network.

The truth is, nobody is immune – from entry-level employees, contractors, and interns at one end, up to the C-suite at the other. Business partners can also be indirect targets, mined for information used to soften up the primary target. And for those of us now connecting to the office through our home networks, even our children are potential targets. Even seasoned security professionals get caught off guard, in part because attack tactics have become more sophisticated. The goal, of course, is to gain access to our networks and sensitive information, either to steal it, corrupt it, or hold it for ransom. Most often, however, the spear phishing email is just the tip of the attack, and it can easily go unnoticed by a victim who has been compromised.

Cybersecurity awareness has grown – up to 95% of employees now receive phishing training so they can learn to spot suspicious emails. This is important progress, as most breaches start with a phishing email followed by an unsuspecting employee who opens a malicious file or clicks on a bad link.
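One mechanical check that training teaches, and that mail gateways automate, is comparing a link’s visible text against its actual destination. Here is a minimal sketch of that idea using only the Python standard library; the sample email body is invented for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag anchors whose visible text names a different domain than the
    href actually points to - a classic phishing tell."""

    def __init__(self):
        super().__init__()
        self.href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self.href and "." in data:  # visible text resembles a URL
            text = data.strip().removeprefix("https://").removeprefix("http://")
            shown = urlparse("//" + text).hostname
            actual = urlparse(self.href).hostname
            if shown and actual and shown != actual:
                self.suspicious.append((data.strip(), self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

body = '<p>Reset here: <a href="http://evil.example.net/login">www.mybank.com</a></p>'
auditor = LinkAuditor()
auditor.feed(body)
print(auditor.suspicious)  # [('www.mybank.com', 'http://evil.example.net/login')]
```

Real gateways go much further (reputation feeds, lookalike-domain detection, sandboxed click-through), but this single heuristic already catches one of the most common lures.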
Despite this training push, however, the number of employees who can tell the difference between a legitimate email and a malicious one remains frighteningly low. That’s because cybercriminals are experts at the art of masquerading, manipulating, influencing, and devising lures to trick targets into divulging sensitive data and/or giving them access to networks and facilities.

To prevent social engineering attacks, organizations can try the following tactics:

- Help employees feel like they are part of the security team. Helping them understand the repercussions of a security event, and how it can affect them personally, is a good place to start. Seeing the connection between safe cybersecurity practices and the positive impact they make when everyone is engaged and responsible should lead to direct improvements in how people behave when confronted with suspicious cyber behavior or a questionable email or website.
- Give employees the tools they need to succeed. For example, in most organizations there is typically no easy way for employees to manage a multiplicity of complex passwords. If they choose to use a password management program – one that generates and manages complex passwords – it is only on their own initiative.
- Eliminate sources of risk. Organizations need to update email security gateways with sandboxing and content disarm and reconstruction (CDR) tools to eliminate malicious attachments and links. They need to use web application firewalls to secure access to websites and to identify and disable malicious links or embedded code, and to deploy cloud-based solutions and endpoint detection and response (EDR) tools so users are protected both on- and off-premises. They also need to add proactive access controls to ensure that connections originating from compromised home networks and personal devices can’t be used as a conduit for an attack.

There are two challenges at play here: employees are not taking cybersecurity seriously, and cyberattacks are getting ever more sophisticated. For example, there are still far too many employees who never change their passwords, and two-thirds who still do not use a password management tool. At the same time, years of training people to identify phishing emails, avoid clicking on suspicious links, and follow best practices with their passwords have not panned out the way infosec professionals would have liked. The thing is, people know they need to use complex passwords, but they still pick obvious choices that hackers can easily guess or discover by simply browsing a target’s social media pages, such as a pet’s name, the name or birthday of a child, or the year they graduated from college. The problem is not awareness – it is rooted in human behavior. Safe password practices – using long passwords with nonsensical characters and numbers, for example – take extra effort to implement. And when it comes right down to it, employees have shown that, for whatever reason, the extra effort is not worth their time and energy.
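Part of “giving employees the tools they need” can be as simple as generating passwords for them instead of asking them to invent one. Python’s secrets module does this safely in a few lines; the length and alphabet below are arbitrary choices:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run, e.g. 'b@7Qr!x(...'
```

A 20-character password drawn from roughly 94 symbols carries far more entropy than any pet-name-plus-birthday scheme, and because a password manager remembers it, the extra effort the previous paragraph describes disappears.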
The most important key to improving an organization's risk profile is getting employees involved, one way or another, in accepting and fulfilling their security responsibilities. With training, the right tools, and effective processes, including support from top-tier company leaders, security teams can help everyone take cybersecurity seriously — and take a serious bite out of cyber crime.
A comprehensive research paper documents the security concerns within decentralized ledger technologies, prompting questions about the security of cryptocurrency transactions.

New research challenges the security of the ledger technology that blockchain software runs on, raising concerns about its uses, from cryptocurrency spending and trading to electronic voting. Commissioned by the Defense Advanced Research Projects Agency (DARPA), the researchers reviewed the features and vulnerabilities of distributed ledger technologies to gauge whether the software is truly decentralized, or free from external control.

Distributed ledger technologies refer to software that stores information on a secure, decentralized network where users need specific cryptographic keys to decrypt and access data. It is the central technology behind cryptocurrency transactions. Commonly known as blockchain, a distributed ledger is supposed to be decentralized to prevent any single actor from tampering with the information stored across its network.

“The report demonstrates the continued need for careful review when assessing new technologies, such as blockchains, as they proliferate in our society and economy,” said Joshua Baron, the DARPA program manager overseeing the study. “We should not take any promise of security on face value, and anyone using blockchains for matters of high importance must think through the associated vulnerabilities.”

Authored by cybersecurity consulting firm Trail of Bits, the report found that some blockchain technologies can be mutable and susceptible to change, which threatens the data stored within a proof-of-work blockchain. This conclusion stems from the increasing centralization of the ledgers associated with popular cryptocurrencies, namely Bitcoin and Ethereum. “This report gives examples of how that immutability can be broken not by exploiting cryptographic vulnerabilities but instead by subverting the properties of a blockchain’s implementations, networking, and consensus protocol,” the report begins. “The data—and, more importantly, the code—deployed to a blockchain are not necessarily semantically immutable.”

Several factors contribute to vulnerabilities within blockchain systems. One critical component of a secure and decentralized blockchain ledger is its system of nodes, the participating computers in the network. Should just one of these nodes lack proper security protocols, or simply be run by a dishonest actor, the data passing through the blockchain is susceptible to hacking or alteration. This finding erodes the longstanding notion of blockchain’s inherent security and threatens the information stored within its blocks. Additionally, security protocol inconsistencies among the nodes in a blockchain network or mining pool pose a threat to the safety of every included node.

The report also notes that all Bitcoin protocol traffic is unencrypted. This does not in itself pose a threat to data passing between nodes, but should a third party along the network route between nodes be compromised, external actors could potentially disrupt transactions on the ledger.

Concerns over the software underpinning cryptocurrency transactions come as the emerging technology corners a larger part of the market and continues to be volatile. With an executive order and numerous bills, the federal government is seeking a regulatory grip on the cryptocurrency arena, to better understand the new asset class and how it will impact the broader economy.
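The “immutability” at issue here is, mechanically, a chain of hashes: each block commits to the previous block’s hash, so changing any historical record invalidates everything after it. A toy illustration in Python follows; it sketches the general idea, not Bitcoin’s actual data structures.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Each block's identity commits to its predecessor and its contents."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a three-block chain.
chain = [("genesis", block_hash("0" * 64, "genesis"))]
for record in ("alice pays bob 5", "bob pays carol 2"):
    chain.append((record, block_hash(chain[-1][1], record)))

# Tamper with one transaction while keeping the recorded hashes, then verify.
tampered = [("genesis", chain[0][1]),
            ("alice pays bob 500", chain[1][1]),  # altered record
            chain[2]]
prev = "0" * 64
for data, recorded in tampered:
    ok = block_hash(prev, data) == recorded
    print(f"{data!r}: {'ok' if ok else 'BROKEN'}")
    prev = recorded
```

The altered block fails verification immediately. The catch the report points to is that this guarantee is only as strong as the network’s decentralization: an actor controlling enough nodes or mining power can simply recompute and republish every later block, making the “immutable” history whatever they choose.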
Primer: Network Segmentation's Role In Cybersecurity Explained

Whether we are talking about the network in your home or the network in your office, it's important that these networks be properly segmented. Why? To keep sensitive traffic on the network separated from more public traffic and, more importantly, from potential threats that could infiltrate your network.

So what exactly is network segmentation? When you create a network, all of the devices on that network can "see," or communicate with, each other. In the past, this was fine, because networks consisted of trusted devices that performed their work, and that was really it. Then came the internet, and devices connected to the network could also communicate across the internet. Still, this was mainly fine. Then came wireless networks and more devices connected to the network, either by cable or wirelessly. Then came guest networks, so visitors to your office could connect wirelessly without necessarily being connected to the "main" network. This is where network segmentation comes into play.

Network segmentation allows you to create separate networks across the same wired or wireless infrastructure. The most common way to do this is with something called a VLAN. In effect, when you set up a wireless network and a guest wireless network, you may already be using VLAN technology.

What is a VLAN? A VLAN is a Virtual Local Area Network. One way to think of it is as a network within a network. In other words, on a wired network, a VLAN travels across the same physical wire but keeps traffic separated. VLANs allow you to segment your network into groups of devices with rules that govern which other devices or locations on the network they can see. On a wireless network, a VLAN does the same thing, but across the same wireless access points that broadcast the wireless network.

So, let's say you have accounting, manufacturing, marketing, sales, customer service, and administration departments, and a single hard-wired network that connects all these departments to your network and the internet. If you segment this network, you could create one VLAN for your marketing, sales, and administration departments, one VLAN for your manufacturing department, and one VLAN for your accounting department. You can then decide what the devices on each VLAN are allowed to do. The computers on the accounting VLAN may be able to see the computers on the others and access the internet. The VLANs in the manufacturing, sales, marketing, and administration departments may be able to see one another, but not the accounting department. The computers on the manufacturing VLAN may not be able to reach the internet at all. This is network segmentation: creating network rules that keep things safer and further secure your IT infrastructure.

Smart Homes, Internet of Things and Cybersecurity

With the proliferation of smart devices, like smart thermostats, TVs, speakers, lights and the like, a VLAN is the ideal way to isolate these predominantly wireless devices from accessing anything else on the network. You would still want these devices to reach the internet for updates and centralized control; a minimal configuration along these lines is sketched below.
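For illustration only, here is roughly what isolating an IoT VLAN might look like on a Linux-based router with 802.1Q VLAN support. The interface names, VLAN ID, and subnets are hypothetical, and the equivalent configuration on a commercial router or managed switch will look different:

# Create VLAN 30 for IoT devices on top of the physical LAN interface
ip link add link eth0 name eth0.30 type vlan id 30
ip addr add 192.168.30.1/24 dev eth0.30
ip link set eth0.30 up

# Block the IoT VLAN from the main LAN, but let it reach the internet
iptables -A FORWARD -i eth0.30 -d 192.168.1.0/24 -j DROP
iptables -A FORWARD -i eth0.30 -o wan0 -j ACCEPT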
As these types of devices are considered relatively insecure, you would definitely want to isolate them from every other part of your network, so that if they were compromised, they would not be able to compromise other, more important devices on your network. Network segmentation is key to keeping your data and IT assets secure. If you're not sure whether your network is properly segmented, ask your IT department or partner to review your network and make sure it is properly configured for maximum security.

MJ Shoer is founder & principal consultant at MJ Shoer LLC, which offers consulting services for MSPs and channel organizations. He previously launched, built and sold one of New England's most successful MSPs.
A few years ago, enterprises that wanted top-notch CPU performance had to choose between high-end x86 processors and costly PowerPC chips. At the time, the industry's reliance on x86 chips seemed to be increasing, and when Apple announced nearly fifteen years ago that it intended to use x86 processors for its Macs, many observers concluded that the era of non-x86 processors was over.

This paradigm shifted in the 2000s as more users began embracing tablets and smartphones, which demanded smaller and more energy-efficient processors. ARM's 32-bit chips emerged as the dominant processors in the tablet and smartphone landscape, achieving speeds of between 1 GHz and 2 GHz.

An ARM64 processor is an evolution of the ARM architecture whose reach now includes servers, desktop PCs, and the internet of things (IoT). ARM64 processors help address the increased processing demands of newer technologies such as high-resolution displays, realistic 3D gaming, and voice recognition. But why should you use ARM64? Discover why ARM64 could be the future of instruction set architecture (ISA) in this blog post.

What Is ARM64? A Brief Overview

An ARM-based CPU belongs to a family of processors built on the reduced instruction set computer (RISC) architecture. Arm Holdings Ltd, a U.K.-based company, designs the architecture and licenses it to other vendors, who, in turn, develop their own processors based on those designs.

The ARM design has undergone several iterations. The first architecture, commonly known as ARM1, used 32-bit registers with a 26-bit address space, which limited the architecture's main memory to 64 MB. Arm Holdings later released the ARMv3 series, which enhanced processor performance. Subsequent iterations up to the ARMv7 series remained 32-bit, with each processor having 15 general-purpose registers.

Arm Holdings unveiled ARM64, also called ARMv8-A, in 2011 to extend support for 64-bit computing. Unlike ARM32, which has 15 general-purpose registers, the ARM64 architecture uses 31 registers, each 64 bits wide. As such, its registers can process larger numbers and hold more memory addresses. The ARM64 architecture also provides user-space compatibility with ARM32, which means you can execute 32-bit applications on 64-bit operating systems (OSs) and place a 32-bit OS under the control of a 64-bit hypervisor.

The company has also released a series of additional instruction sets for various roles. For example, the "Thumb" extension adds both 16-bit and 32-bit instructions for enhanced code density, while "Jazelle" incorporates instructions that directly handle Java bytecodes. Other changes to the ISA include simultaneous multithreading (SMT), which improves fault tolerance and performance.

Why ARM64 Could Be the Future of ISA

The original intention of RISC processors was to allow the system to process a smaller number of instructions. Unlike complex instruction set computing (CISC) processors such as x86, which have more instructions to process, RISC processors strip out unneeded instructions to optimize pathways. This helps them operate at higher speeds, achieving more millions of instructions per second (MIPS) than their CISC counterparts. Because of their reduced instruction sets, RISC processors require fewer transistors, which means smaller integrated circuitry (IC) and lower power consumption than CISC equivalents.
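As a quick aside, if you want to check which of these architectures a given Linux machine is running, the standard uname utility reports it; aarch64 indicates an ARM64 processor:

uname -m
# Typical outputs: aarch64 (ARM64), armv7l (32-bit ARM), x86_64 (64-bit x86)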
Because of the characteristics described above, the original ARM processors (ARM32) are best suited for increasingly miniaturized devices such as smartphones and embedded systems. With the increasing need to minimize energy consumption and enhance computational capabilities, there is a slow but steady transition to the ARM64 architecture.

Today, 64-bit ARM processors are reasonably pervasive, with mobile devices such as smartphones and tablets dominating the market. The same applies to set-top boxes and single-board computing devices like the Raspberry Pi, ROCK Pi, Asus Tinker Board, and Odroid.

While desktop computers and servers running ARM64 processors are still rare, the breakthrough may be nearing. In November 2020, Apple Inc. unveiled the first Macs with ARM64-based M1 chips, debuting new MacBook Air, MacBook Pro, and Mac mini models, and announced plans to fully transition away from Intel to in-house chips over the next couple of years. In June 2021, Microsoft Corporation announced that it is building native and interoperable applications for Windows 11 on ARM64 processors. Both companies are leaning on the low power consumption and enhanced computational efficiency of ARM64 chips to reverse a steady decline in PC and Mac sales.

In 2018, Amazon Web Services (AWS) unveiled Graviton processors, 64-bit ARM chips, to power its Linux-based Amazon Elastic Compute Cloud (Amazon EC2) instances. AWS Graviton processors support popular Linux operating systems such as Amazon Linux 2, Red Hat, and Ubuntu, and AWS has since enhanced their capabilities to deliver optimal performance, particularly in EC2 instances. The growth of Graviton processing in AWS has been driven by both significant cost reduction and strong processing power, characteristics that are good for business. Since AWS is a critical part of IT infrastructure for many companies these days, the chances are high that ARM64 chips could become the future of ISA.

What Are the Benefits of ARM64 Processors to Enterprises?

ARM64 processors are beneficial to enterprises because they can help:

- Reduce carbon footprint. A 64-bit core can complete certain operations faster than ARM32, meaning the task gets done quickly and the chip then powers down. When used in data centers, ARM64 processors can enable organizations to reduce their carbon footprint significantly.
- Provision more demanding applications. IT administrators can deploy compute-intensive applications, such as rich graphics and grid computing, to users by leveraging ARM64 processors.
- Save on costs. ARM64 processors can help accelerate the adoption of bring your own device (BYOD) policies in enterprises. Because personal devices are powerful enough to handle the higher-end, resource-intensive applications employees need to perform their jobs, organizations can save on costs.
- Enhance the experience for end users. ARM-based devices, such as M1-based Macs in your fleet, give employees more choice when selecting their devices, which enhances their experience and overall productivity.

Simplify the Management of ARM-Based Devices with JumpCloud

The debate surrounding ARM versus x86 processors is not going away anytime soon. To succeed, organizations must create an environment where the two architectures can coexist. Many enterprises are compelled to use siloed management practices because of the architectural differences between ARM and x86 processors, or differences in operating systems.
While these practices have worked to some extent in managing heterogeneous devices, problems such as duplication of effort, increased management costs, and a lack of visibility still exist. This does not need to be the case. Instead, you can leverage a cloud directory platform like JumpCloud to simplify the management of heterogeneous endpoints, regardless of whether those endpoints differ in processor, OS (Windows, Mac, and Linux), location (remote, in-office, or a hybrid of both), ownership (corporate-owned or BYOD), and more. The JumpCloud Directory Platform allows IT administrators to securely manage Windows, Mac, or Linux devices from a consolidated admin console. The platform also gives IT administrators near-real-time reporting and granular security controls for every managed identity and endpoint, which means businesses of any size can implement a robust Zero Trust architecture and achieve compliance standards.
Ephemeral storage is the volatile temporary storage attached to your instances, which is only present during the running lifetime of the instance. If the instance is stopped or terminated, or the underlying hardware fails, any data stored on ephemeral storage is lost. This storage is part of the disk attached to the instance. It is a fast-performing but non-persistent storage option when compared to EBS-backed volumes.

Ephemeral storage is ideally used for temporary data such as caches, buffers, session data, swap volumes, and so on. Beyond that, multiple ephemeral volumes can be combined into RAID volumes, and they suit specific Hadoop jobs where high performance and multiple nodes sharing the same data are desired.

Ephemeral storage is a non-billable resource included in the cost of the instance. However, it is limited to certain instance types, and the number and sizes of ephemeral drives differ between instance types. It is available as either magnetic disk or SSD (again depending on the instance type you are using), and it is not part of recent instance families such as M4 or C4. Learn more here.

There are a variety of use cases where multiple ephemeral volumes in RAID configurations store crucial processing data shared across nodes. These ephemeral volumes are attached to the same instance, so an instance failure can lead to total collapse. To achieve persistence, it is advisable to replicate your ephemeral storage to EBS volumes so that data loss can be avoided if an instance fails.

This article will focus on how you can mirror your ephemeral storage to an EBS volume, ensuring that the EBS volume is used solely for write operations and the ephemeral storage solely for read operations. This way, you can avoid data loss if the instance is lost, stopped, or terminated due to a hardware failure or an accidental event.

Mirror Ephemeral Storage to EBS Volume

To mirror ephemeral storage to an EBS volume, RAID 1 can be set up across both volumes. To achieve this, follow the steps below.

1. Launch an EC2 instance with ephemeral storage and an EBS volume. It is important to note three points while selecting an instance type:
- Ephemeral storage is supported only for selected instance types.
- The size of the ephemeral storage supported by each instance type is different. Consider this when sizing for your requirements.
- The number of ephemeral volumes supported by each instance type varies.
For more details on the above points, you can refer to this AWS documentation link. For demonstration purposes, we have chosen an m3.large instance, which offers a 32 GB ephemeral storage volume, with an attached 32 GB EBS volume.

2. Access your ephemeral storage and EBS volume. In most cases, ephemeral storage comes already formatted and mounted. On Windows, EBS volumes are also formatted to NTFS if they are attached at launch time.

Linux: Your EBS volume can be formatted and mounted using the commands below. The device name and mount point here are examples; replace them with your actual EBS device and directory.

Command to format the EBS volume:
# mkfs.ext4 /dev/xvdf

Command to mount the EBS volume:
# mount /dev/xvdf /mnt/ebs

3. Configure software RAID between the ephemeral storage and the EBS volume with the --write-mostly option.
Linux: Typical RAID configurations distribute read and write operations across the member volumes; setting the --write-mostly option on the EBS volume ensures it is used solely for writes, while read operations are served by the ephemeral storage.

Note: Unmount the volumes if they are already mounted, and make sure their entries are removed from fstab too.

# sudo mdadm --create --verbose /dev/md0 --level=1 --name=<RAID_NAME> --raid-devices=<number_of_devices> <device-1> --write-mostly <device-2>

Once the RAID volume has been created, format it:

# sudo mkfs.ext4 /dev/md0

Once the volume is formatted, mount it using the mount command:

# sudo mount /dev/md0 <mount-point>

To make sure the mount persists across reboots, add an fstab entry (note the space between the device and the mount point):

echo "/dev/md0 /backup ext4 defaults 0 0" >> /etc/fstab

That's it. The RAID volume is now ready to use. Any writes will automatically go to the EBS volume, while read operations will be served by the ephemeral volume. This ensures high read performance and minimizes the risk of data loss.

Windows: To create a software RAID 1 in Windows, navigate to Disk Management under Computer Management, select one of your volumes, and right-click on it to create a mirrored volume. When selecting New Mirrored Volume, select the volumes that should be part of the mirror. Clicking Next will ask you to assign a drive letter, and you will then be prompted to format the volumes. Once done, the volume will be formatted and automatically mounted. You have now established mirroring between the ephemeral storage and the EBS volume, which means all data will be mirrored between both disks.

Ephemeral storage volumes are a great option for holding data where persistence is not a concern, allowing you to achieve high performance levels. The right combination of ephemeral storage and EBS volumes can provide an ideal balance of performance and data persistence.

N2WS Backup & Recovery is an enterprise-class backup, recovery, and disaster recovery solution for AWS EC2. It is available as a service model that allows users to manage multiple AWS accounts and configure policies and schedules for automated snapshot backups. It also features a Windows agent to consistently back up Windows applications. N2WS allows you to recover a volume from a snapshot, increase its size, and switch it with an existing attached volume in a single step. Try N2WS Backup & Recovery for FREE!
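Circling back to the Linux mirror configured above, you can verify that the array is healthy and that the EBS member carries the write-mostly flag. This is a quick sketch; the device name follows the /dev/md0 example above:

# Show the array state; write-mostly members are marked with (W)
cat /proc/mdstat

# Show a detailed view of the mirror, including member state and sync progress
sudo mdadm --detail /dev/md0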
The Budapest Convention, a New Legal Standard

The Council of Europe's Convention on Cybercrime, also known as the Budapest Convention, is the first international treaty specifically geared toward both preventing and fighting the threats of cybercrime and cyberterrorism on an international scale. The treaty was signed on November 23, 2001, and the purpose of the Budapest Convention is to "better combat cybercrime by harmonizing national laws, improving investigative abilities, and boosting international cooperation". The U.S. ratified the convention in 2006, and 65 nations around the world have signed the treaty as of 2021. While there have been many supporters and detractors of the convention, the treaty nevertheless serves as a deterrent to individuals or organizations who wish to engage in cybercrime, as it mandates that signatory countries take certain measures to aid in the fight against such cyber threats.

What are the requirements of countries that have signed the Budapest Convention?

The 65 countries that have signed the Budapest Convention as of 2021 agree to fulfill the following requirements at all times:

- Define criminal offenses and sanctions under their domestic laws for four categories of computer-related crimes. Under the Budapest Convention, signatory countries agree to define criminal offenses in the context of four specific categories of cybercrime: copyright infringement, fraud and forgery, child pornography, and security breaches, including system interference, hacking, and illegal data interception. Moreover, the terms of the treaty mandate that countries "enact laws establishing jurisdiction over such offenses committed on their territories, registered ships or aircraft, or by their nationals abroad".
- Establish domestic procedures for detecting, investigating, and prosecuting computer crimes, and collecting electronic evidence of any criminal offense. These procedures include expediting the preservation of electronic communications and computer-stored data, seizures and system searches, and the interception of data or information in real time. Furthermore, signatory countries are also responsible for guaranteeing "the conditions and safeguards necessary to protect human rights and the principle of proportionality".
- Establish a rapid and effective system for international cooperation. Under the Budapest Convention, cybercrimes are considered an extraditable offense. To this end, the treaty permits law enforcement agencies from the respective countries to collaborate with one another by collecting computer-based information and evidence. Additionally, the treaty calls for "establishing a 24-hour, seven-days-a-week contact network to provide immediate assistance with cross-border investigations".

What are the benefits and risks associated with the Budapest Convention?

Proponents of the Budapest Convention argue that the provision of the treaty enabling law enforcement agencies from different jurisdictions and countries to work together to tackle cybercrime represents a significant step forward in reducing cyberattacks around the world.
As many of the countries that have not signed the treaty do not consider cybercrimes to be an extraditable offense, proponents argue that this effectively allows cybercriminals to act with impunity: they can conceivably commit a cybercrime in one country and then flee to another with little fear of resistance or punishment. In the U.S., many members of the information technology (IT) community support this stance, as cybercrime differs from virtually every other form of criminality in that there is often no physical evidence associated with such crimes.

Alternatively, skeptics of the treaty have argued that the nations that have ratified it as of 2021 are not the "problem countries" when it comes to the international threat of cybercrime. Many of the signatories are developed, first-world nations such as the U.S. and U.K. that already have significant legislation in place designed to protect citizens from the threat of cyberattacks. What's more, many civil liberties groups from around the world have argued that the provisions of the Budapest Convention undermine the individual privacy rights of citizens in the ratifying countries, as the treaty's requirements can be viewed as an expansion of government surveillance power that extends too far. Such civil liberties groups often cite legislation such as the U.S. Patriot Act as an example of a law that was designed to protect citizens but has instead led to privacy concerns.

As the internet has connected the various nations around the globe in a manner never seen before, treaties such as the Budapest Convention are all but inevitable. Through such treaties, major nations can use more traditional forms of law enforcement to fight the threat of cybercrime, as catching cybercriminals can be extremely difficult for a multitude of reasons. However, as many countries have also passed data protection and privacy laws, such as the EU's GDPR and the California Privacy Rights Act (CPRA), future treaties similar in nature to the Budapest Convention will need to be drafted in accordance with the provisions set forth by these laws, as expanding government power always carries the potential for invasions of privacy.
Microsoft Hyper-V is a collection of multiple technologies working together to form the basis of a business private cloud. It spans hardware, software, management processes, and business process integration; Hyper-V is the platform on which your entire virtual infrastructure resides. Hyper-V is the virtualization platform, and the hypervisor is the software on which multiple virtual machines run; it controls the hardware and allocates resources to each VM's operating system.

The key to good virtualization is the hardware on which the virtual infrastructure runs. Constructing a virtual environment requires a white-boxing approach: if reliable hardware is not chosen for the virtual infrastructure, the infrastructure will collapse. Optimized hardware is necessary, with all the pieces connected together properly.

There are four core resource areas in Hyper-V: processing, storage, memory, and networking. These four combine to produce private clouds under Hyper-V. Apart from these core concepts, there is another side to Hyper-V: the management processes that run as the core business element on the virtual infrastructure.
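To make those resource areas concrete, here is a minimal sketch of creating a VM with Hyper-V's built-in PowerShell module, touching each of the four areas (processing, storage, memory, and networking). The VM name, sizes, paths, and switch name are hypothetical:

# Create a Generation 2 VM with 4 GB of startup memory and a new 60 GB virtual disk,
# attached to an existing virtual switch named "ExternalSwitch"
New-VM -Name "AppServer01" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\VMs\AppServer01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "ExternalSwitch"

# Assign two virtual processors, then start the VM
Set-VMProcessor -VMName "AppServer01" -Count 2
Start-VM -Name "AppServer01"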
Any time a ranking of a technology is put together, that ranking is called into question as to whether it is representative of reality. Rankings such as the Top 500 list of the most powerful supercomputers in the world have been the subject of such debate with regard to the Linpack Fortran benchmark used to create the rankings and its relevance to the performance of actual workloads. When it comes to networking, the changes in the list in recent years are likely a better reflection of what is going on in high performance computing in its most general sense.

Over the past several years, the Linpack test has been run by various cloud and hyperscale companies to show off the performance of their clusters, and most of these systems do not use InfiniBand or proprietary interconnects to lash server nodes together, but rather Ethernet, which is by far the standard interconnect used by enterprises, telecommunications companies, cloud builders, and hyperscalers. The Linpack test is basically irrelevant to these customers (unless they are using their clusters for simulation and modeling as well as for other parallel workloads), but the clusters are absolutely real, and the same issues that traditional HPC shops wrestle with, applying bandwidth and low latency across adapters and switches to squeeze more performance out of clusters, are driving networking choices at these shops outside of traditional HPC.

So, ironically, the political and corporate motives that have driven companies that previously ignored the Top 500 test to seek rankings for their clusters are probably making the list a better reflection of the upper echelon of cluster computing, whether or not those systems are being used for what we traditionally think of as HPC. The rankings continue to demonstrate that InfiniBand and other interconnects have a place in distributed computing, and they set the stage for some hyperscalers and cloud builders to switch to InfiniBand, and possibly other technologies, as they create clusters with special needs in terms of bandwidth and latency.

The trend lines in the Top 500 list over the past decade reflect this change. There has been a resurgence of Ethernet, as you can see in the chart below, but not so much because traditional HPC shops are moving away from InfiniBand and other interconnects like the "Aries" interconnect from Cray and the Omni-Path interconnect from Intel; rather, it is because non-traditional companies that deploy Ethernet are submitting test results to the Top 500 administrators so they can brag about the performance of their clusters. Because of this, Ethernet has once again surpassed InfiniBand as the most popular interconnect on the list with the June 2016 rankings.

Drilling down into the Top 500 data by sector and user class is illustrative, and it shows the marked contrast in interconnect usage patterns among the research, academic, and industry sectors, as well as between traditional HPC systems and other clusters that run the Linpack test for bragging rights. As an aside, it is important to realize that the Top 500 list is voluntary; it does not include some of the largest traditional HPC systems in the world, nor many data analytics clusters in use by governments and hyperscalers that, had they run the Linpack test, would no doubt be ranked.
For instance, the Facebook clusters based on its "Big Sur" CPU-GPU systems have a collective 40 petaflops of raw single-precision computing, but because the Tesla M40 GPUs they employ only support single-precision math, they can't run the Linpack test, even though they clearly constitute a distributed, capacity-class supercomputer. Similarly, Google has well above 1 million servers, with many of its clusters having 10,000 nodes and some having as many as 50,000 nodes. If Google decided to run Linpack on these systems in succession, all of them would rank in the top ten of the list, and some would rank as high as number two, though probably not beating the 93 petaflops Sunway TaihuLight system that China just fired up.

But here is the thing: Google could put somewhere between 50 and 100 petascale-class machines on the list if it were so inclined. This would skew the demographics of the Top 500 list, since Google only uses Ethernet for its interconnects. Now think about what happens if Amazon, Microsoft, Facebook, Baidu, Tencent, and Alibaba all did the same thing. Only the most powerful HPC systems, using InfiniBand, Omni-Path, Aries, and high-speed Ethernet, would stay on the list. This, in our estimation, would probably be a more accurate ranking of "high performance distributed computing," and we think maybe there ought to be a Top 2000 list that ranks machines in some other way besides Linpack (but still including Linpack). Maybe rank by cluster size alone, in terms of peak theoretical integer and floating point performance, with a node count floor of 1,000 or 2,000 machines. You could then group machines by whether they are running MPI, Hadoop, Spark, Mesos, or some other distributed computing framework. That is an issue for another day; we are just making the point that this comes down to definitions.

What is clear from the most recent Top 500 list is that interconnect choices depend on what kind of organization is running the system and what kind of workloads it actually runs in production. If you carve out the machines used by telcos, hyperscalers, cloud builders, and others, and look at traditional HPC machines only, then InfiniBand is by far the most popular interconnect on the list, appearing on 197 of the 280 such machines. Proprietary interconnects (including IBM's BlueGene, Fujitsu's Tofu, and SGI's NUMAlink switching) account for 31 machines, and several generations of Cray interconnects (mostly its "Gemini" XT and Aries XC networks) comprise the other 83 machines on the June Top 500 list. BlueGene systems will eventually fall off the list, since IBM has stopped investing in BlueGene, and over a much longer haul, Cray will adopt Omni-Path 2 as its core interconnect, and Gemini and Aries systems will be upgraded to it.

The distribution of networking types by industry segment is particularly interesting and no doubt correlates with what most of us expect. On the June 2016 list, InfiniBand dominates in the research and academic sectors, with Cray interconnects coming in a strong second and Ethernet basically being non-existent. If we jump over to industrial users on the Top 500 list, InfiniBand is on only 39 of the 243 systems in this subset, and Ethernet, with 83 percent share, utterly dominates. In effect, we already have two very different lists encapsulated inside the Top 500 list.
In general, whether you look at traditional HPC systems or the overall list, the larger the machine, the more likely it is to use InfiniBand, Cray, or another proprietary interconnect, and the less likely it is to use Ethernet. Take a look at this scatter bar chart, which shows the number of machines running traditional HPC jobs as you expand from the Top 100 machines in increments of one hundred out to the full Top 500 list.

As you can see, the prevalence of InfiniBand among the traditional HPC clusters ranked by Linpack increases as you move from the most powerful systems at the top of the list to the wider list. This is another way of saying that other interconnects tend to sit at the top of the list and have not been able to penetrate the lower performance levels. We think this has more to do with the nature of the applications and deep-seated networking preferences on the wider part of the list.

We also believe that, over time, Intel will be able to get traction with its Omni-Path networking, particularly once it has Omni-Path controllers embedded in its current "Knights Landing" Xeon Phi processors and future "Skylake" Xeon E5 v5 processors. The question is whether Omni-Path will displace Ethernet or InfiniBand on the list. (There are those who would argue that Omni-Path is essentially a modified variant of InfiniBand.) What we can say for sure is that Intel wants to radically increase its share of networking in the datacenter, and it is starting with HPC shops and expanding out to machine learning and other distributed workloads.

It is a reasonable guess that Intel will target existing InfiniBand shops, including its own True Scale InfiniBand users as well as those employing Switch-IB gear from Mellanox Technologies. Oddly enough, it may be easier for Intel to sell against Ethernet than against InfiniBand, and the Ethernet target is certainly juicier at the bottom two-thirds of the list, where Omni-Path could meet less resistance if organizations can get over their preference for Ethernet. Again, it comes down to applications and how easy it will be to port from Ethernet to Omni-Path. The easier and more transparent this is for applications, the more Ethernet can be displaced within large-scale clusters.

We think that Mellanox will have a very aggressive technology roadmap for InfiniBand and will prove to be a tough contender as Intel pushes harder into HPC. Cray will continue to get its share with Aries, and SGI with NUMAlink, too. We know another thing for sure: all of this advancement of technology and competition will be good for customers, allowing them to scale out their clusters faster, and more affordably, than they might have otherwise.
- Access control lists
- Permission inheritance
- Mandatory access control or integrity levels
- icacls syntax
- Getting help
- Displaying the ACL
- Setting permissions
- Advanced permissions
- Inheriting permissions
- Removing permissions
- Denying permissions
- Resetting the ACL
- Setting ownership
- Exporting and importing an ACL
- Determining user rights
- Managing Windows integrity control

The most common task for an admin is to modify the permissions of various objects. The file explorer's Security tab works fine for adjusting a few permissions, but changing a lot of permissions through the file explorer is monotonous and eventually becomes tedious if you do it on a regular basis. What if you could use a built-in command line tool to do that job for you? The icacls utility is built into Windows for exactly this. In this article, you will learn how to manage file and folder permissions with the help of icacls. Before diving into the icacls command directly, you should be aware of certain things related to permissions and security in Windows.

Access control lists

In computer security, ACL stands for "access control list." An ACL is essentially a list of permission rules associated with an object or resource. Each permission rule in an ACL is known as an access control entry (ACE), which controls access to an object by a specified trustee, such as a person, group, or session. These types of access control lists are called discretionary access control lists (DACLs). In a DACL, permissions are generally set by the administrator or owner of the object. The NTFS permissions in Windows are an example of a DACL.

Windows supports the following types of permissions in a DACL:

Basic permissions:
- Full access (F)
- Modify access (M)
- Read and execute access (RX)
- Read-only access (R)
- Write-only access (W)

Advanced permissions:
- Delete (D)
- Read control (RC)
- Write DAC (WDAC)
- Write owner (WO)
- Synchronize (S)
- Access system security (AS)
- Maximum allowed (MA)
- Generic read (GR)
- Generic write (GW)
- Generic execute (GE)
- Generic all (GA)
- Read data/list directory (RD)
- Write data/add file (WD)
- Append data/add subdirectory (AD)
- Read extended attributes (REA)
- Write extended attributes (WEA)
- Execute/traverse (X)
- Delete child (DC)
- Read attributes (RA)
- Write attributes (WA)

The letters in parentheses indicate the short notation you will use with the icacls command when setting a particular permission. For example, to grant test.user a write permission on file1.txt, you will use icacls as shown below:

icacls file1.txt /grant test.user:W

Don't worry about the command if you don't understand it yet; at this point, I just wanted to show what the letters in parentheses really mean. To grant full access, you would write test.user:F instead of test.user:W. Since you will see the terms ACL and ACE a lot throughout this guide, the following image will help you clearly understand and distinguish them:

Permission inheritance

Permissions can either be explicitly defined on an object or be inherited from a parent container. Windows supports the following types of inherited permissions:

- Inherit (I)—The ACE is inherited from the parent directory.
- Object Inherit (OI)—The objects in the current directory inherit the specified ACE; applicable only to directories.
- Container Inherit (CI)—The subdirectories in the current parent directory inherit the specified ACE; applicable only to directories.
- Inherit Only (IO)—The ACE is inherited from the parent directory but does not apply to the object itself; applicable to directories only.
- Not Propagate (NP)—The ACE is inherited by directories and objects from the parent directory but does not propagate to nested subdirectories; applicable to directories only.

Again, the letters in parentheses indicate the short notation you will use with the icacls command when setting permissions with inheritance. You can see that most inheritance attributes apply only to directories. You will learn more about permission types and how inheritance works later in this guide.

Mandatory access control or integrity levels

In mandatory access control (MAC), permissions are defined by policy-based fixed rules and generally cannot be overridden by users. Starting with Windows Vista and Server 2008, Microsoft introduced mandatory integrity control (MIC), a form of MAC, to add an integrity level (IL) to most objects in Windows. It is also referred to as Windows integrity control (WIC) or Windows integrity level (WIL), but we will call it IL throughout this guide. The integrity level is used to determine the level of trustworthiness, or protection, of an object (or process) from the perspective of Windows.

There are six integrity levels in Windows:

- Untrusted—The lowest level of trustworthiness. Processes that are logged on anonymously are automatically allocated an untrusted IL by Windows.
- Low—Processes that directly interact with the internet are allocated a low IL by default. Such processes have very limited access to files and directories.
- Medium—Processes started by standard, non-admin users are allocated a medium IL by default. This is the default and implicit IL in Windows; objects lacking an IL are treated as medium by default.
- High—The high IL is allocated to processes running with an elevated security token (processes launched using the Run as Administrator option).
- System—The system IL is allocated to the core operating system processes and services.
- Trusted installer—The trusted installer IL denotes the highest level of trustworthiness.

In a nutshell, you could say that MIC and ILs are a more restrictive defense mechanism used by Windows that overrides the NTFS permissions (DACL) and evaluates access to an object before the DACL does. Therefore, a process with a lower IL cannot write to an object with a higher IL, even if there are full NTFS permissions on that object. Windows uses the concept of ILs to protect core files and processes, so even if you've got full control of a core system file, you will still get an Access is denied error when you try to delete that file.

To view the IL of a process in Windows, you can use the Process Explorer tool from Sysinternals. The following screenshot shows that most core Windows processes run with System integrity, the user processes run with Medium integrity, and the processes launched with elevated tokens (e.g., powershell and procexp64) run with High integrity.

The icacls command is primarily used to manage DACLs in Windows, but it can also be used to manage ILs, with certain limitations. The terms MAC, WIC, WIL, IL, etc., used throughout this guide essentially mean the same thing. Later in this guide, we will see how to use icacls to view and modify ILs. Now let's get started.
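Before moving on to the icacls syntax, here is a quick way to see the IL of your own logon session from a command prompt; it simply filters the output of the whoami command discussed later in this guide:

:: On a standard session, this typically shows "Medium Mandatory Level";
:: in an elevated prompt, it shows "High Mandatory Level"
whoami /groups | findstr /i "Mandatory"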
icacls syntax

The following syntax shows how to use icacls with a file object:

icacls <filename> [/grant[:r] <sid>:<perm>[...]] [/deny <sid>:<perm>[...]] [/remove[:g|:d]] <sid>[...]] [/t] [/c] [/l] [/q] [/setintegritylevel <Level>:<policy>[...]]

The following syntax shows how to use icacls with a directory object:

icacls <directory> [/substitute <sidold> <sidnew> [...]] [/restore <aclfile> [/c] [/l] [/q]]

Don't worry if the syntax looks a little complicated. It will become clear as we progress through this guide.

Getting help

First, let's take a look at the Help section. To view the help, just run the icacls command without any parameters. The help section displays all the parameters supported by the icacls command, along with a few examples. I will try to cover as much as possible with the help of examples.

Displaying the ACL

To display the current ACL of an object, run the icacls command with the name of the object (file or directory). The following command shows the ACL of a directory object:

icacls RnD

where RnD is a directory on my C: drive. When you run the icacls command on a file object, the output is slightly different. You can see that the ACL of the directory contains values such as (OI) or (CI), but you cannot see these in the file's ACL. Instead, you will see an (I), which means the ACE is inherited from its parent container (the RnD directory, in this case). In the Permission inheritance section, we mentioned that (OI), (CI), (IO), and (NP) are inheritance rights and are applicable only to directories (a.k.a. containers).

Let me briefly explain the ACL output returned by this command:

- The first part of the output, NT AUTHORITY\SYSTEM, is the username. This can be a user, a group, or a special identity, such as Everyone, Authenticated Users, or Network Service.
- The second part, (OI), indicates that the other (new or existing) objects in this container will inherit this ACE. You will see this only in the case of a directory.
- The third part, (CI), indicates that other containers (or directories) in this parent container will inherit this ACE. Again, you will see this only in the case of a directory.
- The last part, (F), indicates that the user specified in the first part has Full control.

I hope it has now started making sense to you. This will become clearer in the upcoming sections. Let's keep going.

Setting permissions

In the last example, we saw that the RnD directory was accessible to SYSTEM, Administrators, and Users only. Anyone else who tries to access this directory will be denied access, since implicit deny is the default behavior of an ACL. If you want to add the special identity Everyone to this ACL and then grant it a Read permission recursively, you can use the icacls command as shown below:

icacls RnD /grant everyone:R /t /c

- /grant parameter—Adds a new ACE to the ACL, granting read access to the special identity Everyone. You can also combine multiple rights. For example, to grant read and write permissions, use Everyone:RW.
- /t parameter—Specifies a recursive operation, which means the permissions will be updated on all the files and subdirectories in the specified directory (RnD, in our case).
- /c parameter—Specifies a continued operation despite any errors. This option is particularly useful during bulk permission changes in a script, when you don't want the script to stop executing even if there are errors.
- /q parameter—Suppresses success messages.
By default, the command displays a success message for each operation. Note that using special identities, such as Everyone, Authenticated Users, Network Service, etc., with the icacls command only works if the system language is set to English. If you're working on a non-English system, use the SID format to specify such special identities. So, on a non-English system, the above command needs to be used as shown below:

icacls RnD /grant *S-1-1-0:R /t /c

The SID should be prefixed with an asterisk (*); S-1-1-0 is the well-known SID for the Everyone identity. To learn the well-known SIDs for all special identities, see this article.

Now that we've run the above command, let's take a look at the ACL of the RnD directory:

icacls RnD /t

where the /t parameter is used to recursively list the ACLs of all the child objects. The Everyone identity is now added to every file and subdirectory inside the RnD parent directory because of the /t parameter.

Advanced permissions

To grant or deny advanced permissions, the syntax of the icacls command is slightly different. For instance, if you want to give the Auditors group the ability to write NTFS permissions, you need to give that group the Write DAC (WDAC) permission. To do that, use the following command:

icacls RnD /grant:r Auditors:(WDAC) /t /c

Notice that the advanced permissions need to be enclosed in parentheses. You can specify multiple permissions as a comma-separated string in parentheses. For example, to specify the Read Extended Attributes (REA) permission along with (WDAC), write it as follows:

icacls RnD /grant:r Auditors:(WDAC,REA) /t /c

Inheriting permissions

With the previous command, we assigned the special identity Everyone a Read permission recursively on all the child objects in our RnD directory. If we take a closer look at the ACL of the dir1 subdirectory, which is inside the RnD directory, we can see that the ACL shows Everyone with just an (R), indicating the expected read permission. But if we create a new subdirectory, dir2, and then view its ACL, we can see that there is no ACE for the Everyone identity.

This happened because we had not yet set the RnD parent directory with inheritable permissions. The /t option is only useful for setting permissions on objects that already exist. But what about objects, such as files or directories, that will be created in the future? The permissions for such objects are handled by inheritance.

By default, when an ACE is set with the OI permission, it is applied to the files in the directory but not to the subdirectories. In the same way, an ACE set with the CI permission is applied to the subdirectories, but not to the files. Therefore, to obtain a combined result, we need to use the OI and CI permissions together. Take a look at the following command:

icacls RnD /grant:r Everyone:(OI)(CI)W /t

The /grant:r parameter causes the Read permission for Everyone in the existing ACE to be replaced with Write. If you do not add :r to the /grant parameter, a new ACE will be added instead of replacing the existing one. Now let's create another subdirectory, dir3, inside the RnD parent directory and view its ACL. Notice that the new directory, dir3, inherited the ACE from the RnD parent directory. This is how inheritance works.

Removing permissions

To remove a permission from a user (or group), you just have to remove the corresponding ACE from the object's ACL. Don't forget to disable inheritance on that object beforehand (if the target is a directory).
For instance, to remove the Everyone identity from the dir3 directory, we will use the icacls command as shown below:

icacls RnD\dir3 /inheritance:d /t /c
icacls RnD\dir3 /remove:g Everyone /t /c

- In the first command, the /inheritance:d parameter disables inheritance on the directory while copying the inherited ACEs. To disable inheritance and remove the inherited ACEs without copying them, you could use /inheritance:r instead; however, this operation is a bit risky, as you might lock yourself out by removing your own access. To re-enable inheritance, use the /inheritance:e parameter.
- In the second command, the /remove:g parameter removes the grant permissions for the Everyone identity. To remove deny permissions, use the /remove:d parameter.

Denying permissions

Normally, there is no need to define a deny permission explicitly, since implicit deny is there by default. Every experienced admin will suggest that you avoid explicit deny, since it can cause unexpected results. For example, suppose a user is a member of two groups, and you add both groups to the ACL of a directory. One group has a grant ACE and the other has a deny ACE; guess what will happen? The deny ACE wins, and the user is denied access. This can cause a lot of headaches if you manage a lot of groups. The best approach is to define grant ACEs for whatever groups you want; the remaining users and groups will be denied access implicitly.

However, if you still want to define a deny permission explicitly, icacls allows you to do that, too. For example, to deny Full Control to the Developers group on the HR directory containing the important records of all the employees, use the following command:

icacls D:\FileShare\HR /deny Developers:(OI)(CI)F /t /c

Note that an explicit deny permission overrides any permission explicitly granted to the same user or group. To remove the deny permission, use the following command:

icacls D:\FileShare\HR /remove:d Developers /t /c

Notice the use of the /remove:d parameter in this command. It will not work with the /remove:g parameter, since we are removing a deny permission here. To remove a grant permission, use the /remove:g parameter.

Resetting the ACL

There are situations in which you might want to reset permissions to the default. For example, a junior admin messed up the permissions on a program's directory, which broke its functionality, or a malware attack corrupted the ACL of an important directory. In such cases, you can use icacls with the /reset parameter to reset the permissions to the default. The following command shows how:

icacls RnD /reset /t /c

The /reset parameter is equivalent to the Replace all child permission entries with inheritable permissions from this object option in the GUI.

Setting ownership

You can use the icacls command to set ownership on directories and files. The following command recursively sets Surender as the owner of the RnD directory:

icacls RnD /setowner Domain\Surender /t /c /q

- /setowner parameter—Makes the specified user or group the new owner. You can also specify the username in UPN format (i.e., email@example.com).
- /t switch—Performs the set-ownership operation recursively.
- /c switch—Indicates a continued operation, even if errors occur.
- /q switch—Performs the operation quietly and suppresses success messages.

Unfortunately, the icacls command does not offer any way to view the owner of an object, but you can use the dir /q command as shown in the screenshot below.
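If you prefer PowerShell, the built-in Get-Acl cmdlet can also display the owner directly. This is just an alternative sketch, using the directory from the example above:

# Show the owner of the directory itself
Get-Acl C:\RnD | Select-Object Owner

# List the owner of every child object recursively
Get-ChildItem C:\RnD -Recurse | Get-Acl | Select-Object Path, Owner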
You can see that the owner has now been changed recursively on the RnD directory and all its child objects.

Exporting and importing an ACL

One of the coolest features of the icacls command is its ability to export the ACL of an object to a file and then use that backup file to import the ACL back to restore the permissions. This feature is loved by most admins, since it makes the monotonous task of setting permissions very easy. Whenever you have to make bulk permission changes on huge directories, it is recommended to back up the existing permissions with the help of the icacls command, so that if something goes wrong, you can restore them.

To export an ACL, use the icacls command with the /save parameter, as shown below:

icacls RnD /save rnd_acl_backup /t

This command saves the ACL of the RnD directory to the rnd_acl_backup file in the current working directory, as shown in the following screenshot. Now, I will modify some permissions on this directory and then restore them using the backup file we created. Let's take a look at the directory permissions for a moment. The screenshot shows that test.user has a deny write permission, the Everyone identity has full control, and so on.

To restore permissions from the backup file, use the following command:

icacls C:\ /restore rnd_acl_backup

You need to provide the path of the parent directory for the /restore parameter to work properly. If you try to use the command as shown below, you will get an error:

icacls C:\RnD /restore rnd_acl_backup

If you take a closer look, the error itself indicates that icacls is looking for a C:\RnD\RnD directory, which doesn't exist; hence the error stating, 'The system cannot find the file specified.' If you open the ACL backup file in a text editor, you will notice that it contains references to the relative path of the RnD directory itself. Therefore, you need to type the directory path carefully when using the /restore parameter. To fix this error, you just need to provide the path of the directory in which the RnD directory actually exists.

Moreover, it really depends on how you backed up the ACL with the /save parameter. Some people prefer doing it this way:

icacls RnD\* /save rnd_acl_backup_2 /t

This command does not save the ACL of the parent directory (RnD, in our case) itself. If you save the ACL backup file this way, you will notice that there is no reference to the RnD parent directory. To restore this backup ACL file, you can use the previous command that gave you an error, like this:

icacls C:\RnD /restore rnd_acl_backup_2

Don't make changes to an ACL backup file by opening it in a text editor. While doing so might sound intriguing to some people, it could render the backup file unusable, so it is never recommended. Furthermore, the target directory where you restore the ACL does not necessarily need to be the same. With icacls, you can save the ACL of one container and then restore it to a different container. The following screenshot shows how to do this.

Another important feature you get while restoring an ACL with the icacls command is the /substitute parameter. As the name suggests, you can use this parameter to replace a user (group or SID) with another user. Let's understand this with the help of an example. Suppose you have a backup of an ACL for a really big file server share, and you are going to import the permissions back using the /restore parameter.
Another important feature available when restoring an ACL with the icacls command is the /substitute parameter. As the name suggests, you can use this parameter to replace one user (group or SID) with another. Let's understand this with the help of an example. Suppose you have a backup of an ACL for a really big file server share, and you are going to import the permissions back using the /restore parameter. The problem is that the backup file is slightly old: it has a grant ACE for an old admin user, John, who no longer works in the organization and has been replaced by a new admin user, Mike. The good news is that you can use /restore along with the /substitute parameter to replace John with Mike on the fly while restoring the permissions. Since file shares can be really big, you won't have to spend extra time replacing the outdated user after the ACL is restored. The following command shows how to do this:

icacls D:\ /restore file_share_acl /substitute John Mike /t /c /q

where file_share_acl is the ACL backup filename supplied to the /restore parameter, and John and Mike are the old and new users supplied to the /substitute parameter. This command recursively restores the permissions and replaces the old user John with the new user Mike while preserving the rights.

Determining user rights ^

There are situations when you, as an admin, might want to determine which user has what permissions. Considering the previous example, where I restored the ACL on a file share and replaced an old user with a new one, you might want to determine whether there are any files or directories on the D: drive of the file server to which the old user, John, still has access. For this purpose, icacls offers the /findsid parameter. The following command shows the files and directories with the user John listed in their ACL:

icacls d:\ /findsid John /t

You can see that the user John is listed on two main directories, D:\DRV and D:\SQL, and their child objects. Once you determine that, you can go ahead and replace the user with a new one or just remove that user from the ACL using the /remove parameter, as discussed above.

Managing Windows integrity control ^

As promised earlier, it's now time to learn how to manage MAC or IL using the icacls command. While there are six ILs in Windows, the primary limitation of icacls is that it only lets you work with the low, medium, and high ILs. It doesn't allow the use of the untrusted, system, and trusted installer ILs. Keeping this in mind, let's first understand how to view the IL of an object. The icacls command displays the IL as a Mandatory Label (or Mandatory Level). The following example shows how to view the IL of a directory:

icacls RnD

where RnD is the name of the directory. If you're following this guide, you probably won't see a Mandatory Label in the output. That is because a newly created object gets a medium IL by default, and the default label does not show up in the icacls output. Remember, the medium IL is the default and is implicit in Windows. To be able to see the Mandatory Label, you need to explicitly set the IL on the object using icacls, which we will do in a moment.

In the output of the above command, Low Mandatory Level indicates the low IL, and (NW) indicates the no write up integrity policy, which restricts write access to an object from a lower-IL process. Let's understand this with the help of an example. I will run an elevated command prompt, which gives my user account and the cmd.exe process a high IL. Like other objects, a user's logon session also gets an IL. To see the IL of a user, just run the whoami /groups command and look for the Mandatory Label field. The following screenshot shows the output of this command from a non-elevated command prompt:

Notice that the user account gets a medium IL (or Mandatory Label) by default.
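To check just the integrity level of your current session without scrolling through the full group list, you can filter the output. This one-liner assumes only the built-in findstr tool:

whoami /groups | findstr /i "Mandatory"

Run it once in a normal command prompt and once in an elevated one to watch the label change.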
If you run the same command in an elevated command prompt, you will see a high IL. Now, in the elevated command prompt, I will create a directory, testDir, and then use the icacls command to set a high IL on it:

icacls testDir /setintegritylevel h

The /setintegritylevel parameter accepts only l (for low), m (for medium), and h (for high). If you try to set the system or untrusted IL, as shown in the following screenshot, you will get an error: The parameter is incorrect. You get this error because the icacls command doesn't let you work with the system, untrusted, or trusted installer ILs.

The most important thing to remember is that you cannot set an IL higher than that of your own user account. For example, if my user account has a low IL, I cannot assign a medium or high IL to any object. When my user account has a high IL (for an elevated process), I can assign an object a high, medium, or low IL. In short, the IL I can set is equal to or less than the IL of my own user account, as shown in the following screenshot. Here, you can see the high mandatory level assigned to testDir.

Now, you might be wondering how this is helpful for admins. Well, if someone with a low or medium IL tries to write to the testDir directory, he will get an Access is denied error, even if he has Full Control NTFS permission in the ACL. The following screenshot will help you better understand this: you can see that test.user had Full Control on the testDir we created earlier, but he still couldn't write to that directory, thanks to the high IL. Admins can use this trick to prevent standard users (or their processes) from writing to important directories or files.

However, does this prevent those users from reading the contents of the directory or file? No, and this is a limitation of icacls. Recall the NW policy explained earlier: it restricts write access to an object from a lower IL, but it doesn't restrict read access. Windows also defines no read up (NR) and no execute up (NX) integrity policies. The NR policy prevents low-integrity processes from reading high-integrity objects; similarly, the NX policy prevents low-integrity processes from executing high-integrity objects. Unfortunately, we cannot use icacls to set the NR and NX integrity policies. Windows processes, by default, get an NR integrity policy to prevent low-integrity processes from reading their address space. If we could somehow set the NR integrity policy on a directory or file, it would prevent other users from reading the content, but there is no such tool built into Windows.

However, there is a third-party tool named chml, developed by Mark Minasi back in the days of Windows Vista. If you want to give it a try, you can do so at your own risk. This free tool allows setting the untrusted or system IL on objects, and you can even set the NR or NX integrity policies. The following screenshot shows how to use chml to set the system IL on testDir along with the NR, NW, and NX integrity policies. The chml tool supports an -fs (force system) switch, but it sometimes does not work as expected in modern versions of Windows. By the way, if you get stuck in a similar situation, where you cannot open or delete a directory, you can use psexec with the -s switch, as described in the How to use PsExec guide, to launch cmd with system account privileges and then use chml to set a lower IL on that directory. You will then be able to delete the directory successfully.
That is all for this guide. I know I haven't covered everything related to the icacls utility, but this should be enough to get you started. If you get stuck somewhere, don't forget to take a look at the command's built-in help.
Do you think that you are protected from hackers? Of course – you are using a wireless access point with encryption. You are wrong! Hackers can pull your password right out of the air. Here are four things that hackers hope you won't find out.

1. WEP encryption is unworkable for protecting your wireless network. WEP (Wired Equivalent Privacy, one of the security algorithms for wireless networks) just gives users a false sense of security, as it can be cracked in a minute. Even a novice hacker can break a WEP password within minutes, which makes it a useless piece of protection. If you have an old router and have never changed your encryption from WEP to the more advanced and far stronger WPA2 (Wi-Fi Protected Access II, a security protocol and certification program that secures wireless computer networks), you are in danger. Switching your router to WPA2 is a very simple process; more details about how to do this can be found on your router manufacturer's website.

2. Using the router's MAC filter to keep unknown devices from joining your network is inefficient and easy to bypass. Every piece of IP-based hardware has its own hard-coded MAC address in the network interface. Most routers give you the option of permitting or denying network access based on a device's MAC address: the router inspects the MAC address of any device requesting access and matches it against a list of permitted or denied MACs. This appears to be a great security hurdle, but the problem is that hackers can spoof a MAC address that has been pre-approved.

3. Switching off your wireless router's remote administration feature is a very efficient way to keep your data safe from hackers. Most routers have a setting that lets you administer the router over a wireless connection, meaning you can access all of the router's settings without using a computer plugged into the router with an Internet cable. As convenient as this is for a user, it is just as convenient for a hacker. We recommend you turn off this remote access and change the security settings so that a physical, hard-wired connection to the network is required.

4. If you use public hotspots, you are a perfect victim for hackers and hijacking attacks. Hackers can use programs like Firesheep and AirJack to perform "man-in-the-middle" attacks, inserting themselves into the wireless conversation between sender and receiver. Once they are in the middle of the communication, they can see your account passwords, access your e-mails, and more. We recommend you read our article HOW TO USE WI-FI FOR FREE AND KEEP YOUR DATA PRIVATE.

So how do you keep your PC safe?

- First of all, create a strong password and change it regularly. Use a sufficiently long password (at least 12 characters) that includes numbers, symbols, capital letters, and lower-case letters.
- Ask someone to hack your site or device so you can identify weaknesses.
- Regularly update your software, because new versions come with stronger security.
- Choose a strong anti-malware program for your PC so it can protect you from malware and data breaches. GridinSoft Anti-Malware will protect you and keep your devices and information safe.
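One more practical check: on Windows, the built-in netsh tool shows which security protocol your active Wi-Fi connection uses. This is a minimal sketch, and the exact output fields vary by Windows version:

:: Shows the active wireless connection, including an Authentication
:: field -- you want to see WPA2-Personal (or WPA3), not WEP or Open
netsh wlan show interfaces

If the Authentication line reports WEP or an open network, change the encryption in your router's settings right away.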
Dip Before the Plunge

The stuff of science fiction in the mid-20th century, virtual reality (VR) simulates or replicates physical, three-dimensional occupancy in a computer-generated milieu. This environment can be either imaginary or based on real events or places. All of the user's senses – sight, hearing, smell, taste and touch – "perceive the digital environment" of the computer technology to achieve the full sensation of total immersion. Common applications include gaming, military and medical uses.

Note that augmented reality (AR) and virtual reality are two different concepts. AR, according to wareable.com's Sophie Charara, "overlays graphics onto your view of the real world," whereas VR strives to create a "life size, 3D virtual environment without the boundaries… usually (associated) with TV or computer screens."

VR technology has evolved from the primitive behemoth "The Sword of Damocles," created in 1968 as the first head-mounted display (HMD) system, to the recent, much-ballyhooed virtual reality headset (VRH) Oculus Rift.

HTC executive director of marketing Jeff Gattis wears the HTC RE Vive. Image source: Maurizio Pesce

Taking the Plunge

Three basic prerequisites are generally required to operate VRHs:

- An app or game driver such as a console, PC or smartphone
- A headset to run the game or app's display before the user's eyes
- Input hardware such as head and hand tracking (see below), controllers, voice commands, trackpads or on-device keys.

A few key concepts should be mentioned when discussing VR. They include:

Latency – the time lapse between a VRH head movement and the corrected view of the display. This interval can cause motion sickness in some users.

Field of View (FOV) – the scope of the perceivable world observed at any given time.

Head Tracking – the repositioning of the image in front of the user as she looks down, up, sideways, etc. VRH components such as gyroscopes, accelerometers and laser positioning – collectively providing 6DoF, short for six degrees of freedom – track head movement around three axes: roll (shoulder to shoulder), pitch (forwards and backwards) and yaw (side to side).

Hand Tracking – per John Marco of virtualrealitytimes.com: "Hand tracking, in general, is a complex and abstract aspect of artificial intelligence that makes use of numerous algorithms and the principles of mathematics and physical sciences to bring real-time interpretation of hand movements, gathered as data and processed into tangible user input."

Persistence – a subjective measure of motion blur. In reality, humans move their heads while keeping their eyes fixed on one point. In VR, there is perceptible blur when the user moves his head yet, for example, keeps his eyes glued to the dials in the cockpit of a jet airplane. Toleration levels of persistence and latency, measured in milliseconds, vary between individuals; users sensitive to them may feel motion sickness. The Simple Law of Persistence, as postulated by Blur Busters, is: "1 ms of persistence = 1 pixel of motion blur during 1000 pixel/second motion."
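Taken at face value, the Blur Busters rule makes perceived blur a simple product of persistence and on-screen motion speed. This short Python sketch just applies that rule; the persistence and speed values are illustrative, not measurements:

# Blur Busters rule: blur (px) = persistence (ms) * speed (px/s) / 1000
def motion_blur_px(persistence_ms: float, speed_px_per_s: float) -> float:
    return persistence_ms * speed_px_per_s / 1000.0

print(motion_blur_px(3.0, 1000.0))  # 3 ms panel at 1000 px/s -> 3.0 px of blur
print(motion_blur_px(1.0, 1000.0))  # 1 ms low-persistence panel -> 1.0 px

At typical head-turn speeds, higher persistence smears each pixel across several of its neighbors, which is why low persistence figures so prominently in the presence checklist below.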
Presence – shortened from "telepresence," which Wikipedia defines as "a phenomenon enabling people to interact with and feel connected to the world outside their physical bodies via technology." Valve Software's R&D team names the following prerequisites for establishing presence:

- A wide field of view (80 degrees or better)
- Adequate resolution (1080p or better)
- Low pixel persistence (3 ms or less)
- A high enough refresh rate (>60 Hz; 95 Hz is enough, but less may be adequate)
- A global display, where all pixels are illuminated simultaneously (a rolling display may work with eye tracking)
- Optics (at most two lenses per eye with trade-offs; ideal optics are not practical using current technology)
- Optical calibration
- Rock-solid tracking – translation with millimeter accuracy or better, orientation with quarter-degree accuracy or better, and a volume of 1.5 meters or more on a side
- Low latency (20 ms motion to last photon; 25 ms may be good enough)

Below is Top 5 Best VR Headsets 2016, a YouTube video from Austria's TechMagnet. Note that the VRHs in this video all use smartphone apps and thus differ from PC-based systems like the Oculus Rift or HTC Vive.

Prolonged exposure to VR can cause unwanted side effects. Motion sickness, as noted above, is a malady commonly cited by users. Reportedly, contagious conditions such as conjunctivitis can be contracted from a VRH worn by an infected user. Some observers also raise worries about virtual reality addiction, a condition akin to video game addiction. As a rule, most VR systems caution consumers against protracted use.

But there are deeper worries. A number of writers have raised philosophical doubts about the social impact of VR technology. For example, Mychilo Cline – author of Power, Madness and Immortality: The Future of Virtual Reality – contends that as VR becomes more a part of everyday life, it will cause many significant changes in human behavior and activity. Will users attempt to reach Nirvana via VR? More importantly, will there be a steady "migration to virtual space," affecting the culture, commerce and perspective of society? Such questions are likely to make critics wonder whether VR has the potential to transform the world as did, for instance, the Industrial Revolution or the automobile. If so, what would the inevitable repercussions be?

For those interested in plunging headlong, so to speak, into the immersive world of VR, below are "the plungers," or the major players. The best VRHs according to wareable are:

- HTC Vive, made by mobile technology company HTC and video game developer Valve Software, is (of course) compatible with Valve's massive gaming platform. It includes 70 sensors supporting 360-degree head tracking and a 90 Hz refresh rate. Latency – and the motion sickness it can bring – is supposedly less of an issue than with other VRHs. Another selling point is the "Lighthouse" room tracking, which allows users mobility while wearing the headset. However, this mobility means a sizable space is needed: HTC claims 2 x 1.5 meters is sufficient, while a reviewer suggests 3 x 3 meters is more appropriate.
- Oculus Rift, recently acquired by Facebook, connects to a computer's USB and DVI ports. The latest version boasts a 2160 x 1200 resolution with a 90 Hz refresh rate, operating at 233 million pixels per second.
Described as a "big black box with a strap," it also features 360-degree tracking, Head-Related Transfer Function (HRTF) audio – think 3D sound – and Touch controllers. It comes bundled with an Xbox One controller.

- Samsung Gear VR, almost identical to the Rift since the two share much of the same technology, is a case that uses a Samsung Galaxy smartphone for its display and processor. The catch – and it's a big one – is that only a Samsung Galaxy phone can be used with the case. (See the video above for how smartphones work with VRHs.) It is great value compared with the Vive and the Rift, costing hundreds of dollars less, and has a repository of video content from Milk VR plus a ton of games.
- Not available until October 2016, the Sony PlayStation VR (formerly known as Project Morpheus) has been eagerly anticipated by gamers looking for a superlative VR experience on a familiar platform. According to insiders, the system features low persistence and a high refresh rate; reputed issues such as latency and tracking accuracy have apparently been resolved.

There are also "starter" VRHs at a nominal cost; perhaps the best known is Google Cardboard, which currently sells for US$15.

As Kyle Orland of arstechnica.com puts it, "Virtual reality (has) been a pipe dream concept, well ahead of the technology needed to realize it." It's certainly not the holodeck from Star Trek: The Next Generation. Although within reach, a truly realistic and totally immersive VR experience is still well into the future.
Computer software is becoming increasingly complex, which makes systems and organizations vulnerable to hacker attacks. That's why companies are turning to outside experts to track down the errors in their software. One of the methods is participation in bug bounty programs: companies offer rewards to ethical hackers who discover bugs or security weaknesses. These programs are often run by big software publishers so that they can fix issues before they are discovered and exploited by criminals.

Google's Android, Chrome, and Play platforms continue to be vulnerability-rich environments. In 2021, Google paid a record $8.7 million in rewards to 696 third-party bug hunters from 62 countries who discovered and reported thousands of vulnerabilities in the company's technologies – a nearly 30% increase from the $6.7 million paid in 2020.

Companies often hire a team to test the security of their website or system before deployment. But what happens when new features or updates are pushed? What about the bugs or weaknesses that these teams miss? That is why it makes sense to sign up for a bug bounty program, which ensures that the system gets tested by a vast range of freelance security experts, not just one team. Bug bounty programs also ensure that the system is tested continuously, not just at one point in time. For a mid-size company, this can also save money: an in-house team of cybersecurity experts may simply be too expensive, and in bug bounty programs experts are rewarded only when they discover a new bug, so the time they spend searching costs the company nothing.

There are two popular variants of bug bounty programs: ethical hackers either work directly with the company or through an intermediary platform. The intermediary can verify the researcher's work before notifying the company. Typically, a hacker receives a monetary reward for a successful submission; for less critical vulnerabilities, they may get branded company merchandise instead. The prize offered should be commensurate with the severity of the vulnerability discovered and the effort the ethical hacker has made. If the compensation offered is unfair, the company can expect negative backlash. In 2013, Yahoo had to change its bug bounty policies after it offered t-shirts to bug hunters for finding critical vulnerabilities, and the program's reputation was damaged as a result. Compensation is still often criticized by the community as unfair, since the wages paid for standard penetration testing are much higher and do not depend on the number of reported findings.

Some bug bounty ecosystems introduce reputation points and associated leaderboards to reward successful submissions. These reputation points are often the criteria for admission to private programs. While direct programs are often public, allowing submissions from anyone, in private programs only selected security researchers can see the program details and participate. Private programs allow some organizations to test their procedures before going public; some remain private for a significant amount of time or permanently. Consequently, these programs avoid some of the issues prevalent in public operations.

A bug bounty is a side activity for many security researchers, but there is also a group of people who have made bug bounty hunting a way of life. A 30-year-old hacker from Romania took two years to earn his first million in those programs.
Such a result is certainly impressive, but it is worth remembering that bug bounty programs do not mean high revenues for everyone. Companies take different approaches to when the prize is paid out: some pay when the reported bug is accepted, others only when it is fixed, which can take many months. Very often, there is also a dispute about how to classify the severity of the vulnerability. Most companies are friendly to the bug hunters cooperating with them; unfortunately, this is not a universal standard. The rules of the game are determined by the company, and in the event of a disagreement, some researchers break the rules and – giving up the prize – publicly disclose the details of the vulnerability. This, in turn, can lead to legal issues and costs on both sides of the dispute.

At first glance, a bug bounty program looks like an ideal solution: it enables constant testing of system security and does not ruin the company's budget. The reality is not so rosy. A significant issue in bug bounty programs is the high volume of low-quality submissions. Poor-quality reports are the result of racing to submit a vulnerability first: many ethical hackers look to maximize the number of submissions rather than focusing on specific vulnerabilities, for the simple reason that it is a more profitable tactic.

One of the key factors influencing the effectiveness of bug hunters is an "arms race" in asset discovery. Companies do not always disclose every subdomain or subpage within the scope of the program, so it is common to run tools that search for additional targets. The methodologies differ – spidering, brute-forcing, dictionary attacks – and they are run simultaneously with the fastest available tools and cloud systems. For example, the Axiom tool can divide the work across hundreds of machines in the cloud, which are deleted a second after the work is finished.

There is also a problem with duplicate submissions. The race to submit first often leads to reports lacking essential details, so the company or platform asks the ethical hacker for further information. In the meantime, another hacker may submit a significantly more detailed report for the same vulnerability. The second report, although possibly more beneficial to the organization, is by the rules a duplicate. The treatment of duplicates varies. Synack addresses this issue by setting a 48-hour window in which all submissions are accepted; after two days, duplicates are grouped together, and the most detailed report gets the bounty. Some platforms do not monetarily reward duplicates at all, a mechanism that discourages detailed submissions.

Another disturbing trend within bug bounty programs follows from the economics of bug discovery: the average bounty per program scales super-linearly, while the probability of discovering further bugs in the same program decays rapidly. After some time, switching to another program becomes more profitable than making an in-depth analysis of the old one. The resulting incomplete coverage can lead to a false perception of security.

There is also a lot of controversy in cases where a security researcher finds and reports a bug to a company that does not have an official program. This creates potential legal issues; bug hunters could be seen as extorting the target rather than acting for good. Above all, companies and ethical hackers don't have binding contractual relationships.
There is always the risk that a bug hunter could choose to sell the vulnerabilities they discover on the black market, or even double-cross their client by collecting the payment and selling the information on the dark web anyway. Cybersecurity expert Troy Hunt describes the related phenomenon of the so-called beg bounty. In this scenario, a company receives unexpected information from a researcher about a supposedly very serious vulnerability; the details will be disclosed in a moment, but first the amount of the payment must be agreed. Often this "particularly important" vulnerability is something completely irrelevant from a security point of view: an unrealistic clickjacking scenario, a missing HTTP header, or a loose SPF record configuration.

Companies don't have to choose between bug bounty programs and a team of experts to test their security in depth. The best model is a combination of the two: third-party penetration testing performed annually or after a major system update, plus a well-organized bug bounty program to complement the existing vulnerability management process. In-depth tests are an excellent tool to find and fix security weaknesses, while bug bounty programs help secure companies in the gaps between penetration tests.
What is a Cyber Vulnerability?

What is a cyber vulnerability, and how vulnerable am I? These are questions we often get from customers, and it's important to know the answers. Cyber vulnerabilities are weaknesses within your network that attackers can exploit to gain access to your systems, install malware, steal data, or perform other types of cyber-attacks.

Now, how vulnerable are you? That question is answered on a case-by-case basis. CRE (commercial real estate) companies have thousands of devices spread across their networks. If most devices run outdated software, you may be incredibly vulnerable. However, if you keep device software up to date, perform regular vulnerability scans, and incorporate mitigation practices into your cyber strategy, you're probably doing all right. Let's take a look at the many different types of cyber vulnerabilities and the consequences of letting them go unchecked.

The cyber attacks that can arise from vulnerabilities in your network are not to be taken lightly: in 2021, the average data breach cost $4.24 million, according to a report by IBM. There are many different types of cyber vulnerabilities, and it's important to be aware of each. A vulnerability is any weak spot within your security strategy. Vulnerabilities can include a missing lock on your company's front door or server room, outdated firewalls, out-of-date software, an unlocked laptop, or even an unsecured keycard. Most of these fall under the category of faulty defenses – measures that make your company feel secure while actually exposing your organization to massive cyber threats because of improper implementation.

Poor process management is another common vulnerability, although it's not one specific thing. Everyone has processes to follow, but when shortcuts are taken to "just get it done," there is a tendency to leave the process broken, potentially leaving a hole in your cyber defenses.

Unsecure connections can also lead to massive cyber risks. Securing the circulation of data is the number one way to prevent vulnerabilities such as open redirects, cross-site scripting, or SQL injection.

If you aren't sure where to get started with mitigating your cyber risk and identifying vulnerabilities, reach out to us at email@example.com. We have several products and services dedicated to keeping you cybersecure so you can rest easy knowing your data is safe.
What Kind of Cabling Is Used for Data Centres?

The cabling that keeps a data centre operational consists of power cables, ground cables and data transmission cables. Data centre cabling is of two types –

- Copper cables and
- Fibre optic cables.

If you cannot decide which type of cabling to choose for your data centre, read the following sections.

What Is Data Centre Cabling?

Data centre cabling, in simple words, is the network of cables – power cables, ground wires and data transfer cables – used in a typical data centre. Data centre cabling is carried out in two formats –

- Structured data centre cabling and
- Unstructured data centre cabling.

Definitions of the above terms are as follows –

- Structured data centre cabling – involves the use of an MDA (Main Distribution Area) that acts as the interface through which all connections used in the data centre are run.
- Unstructured data centre cabling – is messy compared to the structured approach. In unstructured data centre cabling, data links are established directly between devices; no central panel is installed.

What Type of Data Centre Cabling Is Best – Copper Coaxial or Fibre Optics?

According to the collective opinion of cabling engineers associated with reputed structured cabling companies in Mumbai, it would be unwise to state flatly that copper cabling is superior to optical fibre cabling for a data centre, or vice versa. The trade-off is simple – the data transmission rate of fibre optic cabling is higher than that of copper cabling, but copper cabling is cheaper, as it needs minimal upkeep and low upfront capital expenditure.

Overview of Copper Cables

Copper cables used in data centres are known as copper coaxial cables. This type of cable consists of a primary copper conductor that transmits data, surrounded by a thick insulating material, which is further wrapped in a metal shield to keep interfering signals at bay.

Benefit of Using Coaxial Copper Cables Instead of Fibre Optics in Data Centres

Data centres that primarily use copper coaxial cables instead of fibre optic cables can save more than 100 kilowatts of energy. Furthermore, the operational life of coaxial copper cables exceeds that of fibre optic cables, standing at roughly fifty million hours!

Overview of Fibre Optics

A fibre optic cable looks much like an electrical cable on the outside, but at its core, instead of a copper wire, it contains optical fibres that carry data in the form of light pulses instead of electricity.

Benefit of Using Fibre Optic Cables Instead of Copper Cables in Data Centres

Fibre optic cables have a high data transmission rate compared to copper cables. The reason is simple – instead of using electrons to move data from one point to another, fibre optic cables transmit data using light pulses. Furthermore, the data upload and download speeds of fibre optic cables are equal, which is why one of the USPs of fibre optic cables is "symmetric data speed."

Reasons Why Copper Cables and Fibre Optics Are Better Together in Data Centres

Fibre data centre cables offer more speed and bandwidth than copper data centre cables, but they come with hefty upfront costs and need regular upkeep. Copper data centre cables, on the other hand, have limited bandwidth but are more reliable than fibre optic cables, so regular upkeep is not required.
At the same time, they come with affordable upfront costs. Fibre optic cables can only transmit data, whereas copper data centre cables can transmit data at speeds of up to 10 or 40 Gb/s and, at the same time, supply up to 100 watts of DC power to devices used in a typical data centre. Hence, copper cables and fibre optics are best used together to keep a data centre efficient, easy to maintain and affordable to set up.

Why Is Proper Data Centre Cabling Management Important?

According to a network engineer associated with Network Techlab – one of the leading structured cabling companies in India – proper data centre cabling management should not be an afterthought. With an efficient cabling management procedure in place, a data centre becomes –

- More efficient,
- Cheaper to maintain,
- Cheaper to run day to day, and
- Far less prone to downtime.

Whichever variant of data centre cabling you want to use at your place of business, it is always a good idea to hire a reputed structured cabling company in your vicinity. If you are looking for a structured cabling company that can meet your requirements at a reasonable tariff, contact Network Techlab. It is one of the leading IT service providers in India and has been offering IT solutions to its pan-India clientele for more than 25 years. It is an ISO 27001:2013 certified company, so hiring them for all your data centre cabling requirements is a sound idea. For more details, please call +91-8879004536 or send an email to email@example.com.
3. Consider more than one point of view. Everyone has their own opinions and motivations – even highly intelligent people making reasonable-sounding arguments have personal biases that shape their thinking. So, when someone presents you with information, consider whether there are other sides to the story.

4. Practice active listening. Listen carefully to what others are telling you, and try to build a clear picture of their perspective. Empathy is a really useful skill here, since putting yourself in another person's shoes can help you understand where they're coming from and what they might want. Try to listen without judgment – remember, critical thinking is about keeping an open mind.

5. Gather additional information where needed. Whenever you identify gaps in the information or data, do your own research to fill those gaps. The next few steps will help you do this objectively…

6. Ask lots of open-ended questions. Curiosity is a key trait of critical thinkers, so channel your inner child and ask lots of "who," "what," and "why" questions.

7. Find your own reputable sources of information, such as established news sites, nonprofit organizations, and educational institutions. Try to avoid anonymous sources or sources with an ax to grind or a product to sell. Also, be sure to check when the information was published. An older source may be unintentionally offering up wrong information just because events have moved on since it was published; corroborate the information with a more recent source.

8. Try not to get your news from social media. And if you do see something on social media that grabs your interest, check the accuracy of the story (via reputable sources of information, as above) before you share it.

9. Learn to spot fake news. It's not always easy to spot false or misleading content, but a good rule of thumb is to look at the language, emotion, and tone of the piece. Is it using emotionally charged language, for instance, and trying to get you to feel a certain way? Also, look at the sources of facts, figures, images, and quotes. A legitimate news story will clearly state its sources.

10. Learn to spot biased information. Like fake news, biased information may appeal more to your emotions than to logic and/or present a limited view of the topic. So ask yourself, "Is there more to this topic than what's being presented here?" Do your own reading around the topic to establish the full picture.

11. Question your own biases, too. Everyone has biases, and there's no point pretending otherwise. The trick is to think objectively about your likes and dislikes, preferences, and beliefs, and consider how these might affect your thinking.

12. Form your own opinions. Remember, critical thinking is about thinking independently. So once you've assessed all the information, form your own conclusions about it.

13. Continue to work on your critical thinking skills. I recommend looking at online learning platforms such as Udemy and Coursera for courses on general critical thinking skills, as well as courses on specific subjects like cognitive biases.
Sir Tim Berners-Lee
Inventor, World Wide Web

Sir Tim Berners-Lee invented the World Wide Web in 1989 while working as a software engineer at CERN, the large particle physics laboratory near Geneva, Switzerland. Sir Tim understood the unrealized potential of millions of computers connected together through the Internet and envisioned the Web as a global information-sharing space. He outlined what was to become the World Wide Web in a proposal specifying a set of technologies that would make the Internet truly accessible and useful to the world.

Despite initial setbacks, and with perseverance, by October 1990 he had specified the three fundamental technologies that remain the foundation of today's Web: HTML, URL, and HTTP. He also wrote the first Web page editor/browser ("WorldWideWeb") and the first Web server ("httpd"). By the end of 1990, the first Web page was available. By 1991, people outside CERN had joined the new Web community, and in April 1993, with much encouragement from Sir Tim and his colleagues, CERN announced that the World Wide Web technology would be available for anyone to use on a royalty-free basis.

Since that time, the Web has changed the world, arguably becoming the most powerful communication medium the world has ever known. Although only just over half of the people on the planet currently use the Web, it has fundamentally altered the way we teach and learn, buy and sell, inform and are informed, agree and disagree, share and collaborate, meet and love, and tackle problems ranging from putting food on our tables to curing diseases.

In 2009, Sir Tim recognized that the Web's potential to empower people to bring about positive change remained unrealized for billions around the world. Announcing the formation of the World Wide Web Foundation, he once again confirmed his commitment to ensuring an open, free Web accessible to all, where people can share knowledge, access services, conduct commerce, participate in good governance and communicate in creative ways.

In 2012, Sir Tim co-founded the Open Data Institute with Sir Nigel Shadbolt; the institute seeks to show the value of open data and to advocate for its innovative use to effect positive change across the globe.

A graduate of Oxford University, Sir Tim is a professor at the Massachusetts Institute of Technology in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and in the Computer Science Department at Oxford University.
Is Software Testing as Old as Coding?
- August 22, 2013

Is software testing as old as coding? In the early days of software development, the developer was considered the person responsible for ensuring the quality of the software and for answering all questions related to it. That, at least, is what IT professionals and experts believed in the 1970s. Software testing was merely an activity that took place after coding to make sure the program ran all right and could meet expectations.

Concept of Testing in 1979

In 1979, Myers defined testing in his book, The Art of Software Testing, as the process of executing a program or system with the intent of finding errors. Testing was thus perceived as an activity that takes place right after coding, to check the software for errors. Later, in the 1980s, however, testing came to be described as an activity performed to evaluate the product and to measure and improve the quality of the software. For the first time, software testing was regarded as a process of product evaluation and quality assurance.

Testing Is a Concurrent Process

In 2002, two authors, Craig and Jaskiel, came up with a more refined definition of software testing. According to them, testing is not an activity that takes place after development but a more pragmatic one: a concurrent process that should start together with development. Their definition makes three major points. First, testing should not start only after the completion of development; it runs concurrently with development and coding. Second, ensuring traceability by checking the application against a given checklist is critical in testing. Third, testing is about taking proactive action and removing bugs to improve the overall quality of the software.

Testing as a Three-Step Activity

Today, testing has become a field of software engineering in its own right. With new inventions and developments in the testing field, QA experts now define testing as a three-step activity:

- A process of verification and validation of the program;
- Checking that the program meets the technical and business requirements that guided its design and development; and
- Checking that it meets the expectations of the user.

QA people and software testing service providers believe that the purpose of testing is not limited to verification, validation, and finding bugs. It is now about meeting quality standards and achieving the TQM model in operations and processes.
When changes occur in a network topology because of the failure or restoration of a link or a network device, IP Fast Reroute enables rapid network convergence by moving traffic to precomputed backup paths until regular convergence mechanisms move traffic to a newly computed best path, also known as a post-convergence path. This convergence may cause short microloops between two directly or indirectly connected devices in the topology. Microloops arise when different nodes in the network calculate alternate paths at different times and independently of each other. For instance, if a node converges and sends traffic to a neighbor node that has not converged yet, traffic may loop between the two nodes.

Microloops may or may not result in traffic loss. If the duration of a microloop is short – that is, the network converges quickly – packets may loop for a short time but are eventually forwarded to the destination before their TTL expires. If the duration of the microloop is long – that is, one of the routers in the network is slow to converge – the TTL of looping packets may expire, or the looping traffic may exceed the link bandwidth, and packets may be dropped.

Microloops that form between a failed device and its neighbors are called local uloops, whereas microloops that form between devices that are multiple hops away are called remote uloops. The ISIS Local Microloop Protection feature helps networks avoid local uloops. Local uloops are usually seen when no local loop-free alternate (LFA) path is available, especially in ring or square topologies. In such topologies, remote LFAs provide backup paths for the network. However, the fast-convergence benefit of the remote LFA is put at risk by the high probability of uloop creation. The ISIS Local Microloop Protection feature can be used to avoid microloops, or local uloops, in such topologies.
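As a rough illustration, enabling local microloop protection on an IOS XE device is generally a one-line addition under the IS-IS router process, alongside the LFA fast-reroute configuration. Treat the following as a sketch only – command availability and keywords vary by release and platform, so verify them against the command reference for your software:

router isis 1
 ! Assumed: prefix-level fast reroute is already enabled, e.g.:
 fast-reroute per-prefix level-2 all
 ! Enable microloop avoidance for FRR-protected prefixes; the optional
 ! rib-update-delay (in milliseconds) controls how long the device
 ! delays its RIB update to let the rest of the network converge
 microloop avoidance protected
 microloop avoidance rib-update-delay 5000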
Created over 30 years ago, IPv4 has a 32-bit addressing scheme and can support approximately 4.3 billion devices connected directly to the Internet. Well aware that IPv4 addresses would eventually run out, the IETF created IPv6 as an upgrade. IPv6 features a 128-bit addressing scheme, supports a mind-numbing number of devices and delivers much-needed security and performance improvements. While the IPv6 protocol has been around for a long time, forklift upgrades to IPv6 were (rightly) seen as expensive and time consuming without much practical benefit. However, with the pool of IPv4 addresses completely exhausted, IPv6 is a trend whose time has come.

It is well known that most security incidents are caused by human error, either as the result of a programming error or through misconfiguration, so it comes as no surprise that research by Tufin Technologies revealed that misconfigurations are the greatest source of firewall-related risk and inefficiency. The lack of experience and training among IT professionals dealing with IPv6 will only make mistakes more likely, and IPv6 address complexity will exacerbate this, because the addresses are extremely difficult to read and do not lend themselves to memorization. Compare a typical 32-bit IPv4 address, 192.0.2.31, with a 128-bit IPv6 address, 2001:db8:31:1:20a:95ff:fef5:246e. Now do you get it?

Knowing that IPv6 migration will be a fact of life, here are some measures you can take to ensure migration efforts will not impede firewall management:

Understand what IPv6 means to your network, people, and vendor partners: Although many potential issues can be avoided by testing IPv6 conditions in a lab or by running pilots, as with any IT deployment there are scenarios that even the most savvy IT people would not have known to anticipate. The only way your team will learn what the issues are is by experience. For example, network devices or firewalls could become overwhelmed and fail when used in an IPv6 environment, allowing traffic to pass without full inspection or resulting in an outage. Talk to your firewall and network infrastructure vendors to see where they are with IPv6 and what resources they can provide to aid with migration. If you outsource firewall management, get educated on what your MSSP or service provider is doing about IPv6.

Avoid having to manually type IPv6 addresses: Because writing IP addresses manually is a highly error-prone endeavor, you should minimize it. If you have to write an address, do it once and, whenever possible, assign a human-readable name to it and use the name everywhere else (firewall rules, policies, ACLs, etc.). To minimize the duplication of address definitions, you need consolidated management systems, so that IPv6 addresses are stored in a central repository and can be sourced as needed – for example, host naming should be consolidated across firewalls and routers, even those from different vendors. For those organizations running next-generation firewalls, integrate your firewalls with Active Directory to avoid having to manually enter user addresses.

Things will go wrong. Be prepared: IPv6 increases complexity, which is already beyond manual control in most enterprise firewall policies. But if you plan ahead, when something does happen, you will be in a good position to troubleshoot. From a process and operations perspective, the simpler the better.
Make sure changes are properly and clearly documented so that anyone can understand what the actual change was, why it was made, who made it and when.

Deploy network management tools that understand IPv6: Most organizations will run dual IPv4 and IPv6 networks, known as dual stacks, as they transition. IPv4 and IPv6 cannot communicate with each other, so they will need to be deployed in tandem until the transition is complete. That means that, for the period during which you offer both IPv4 and IPv6, you have to do everything twice, which, among other things, will significantly increase the number of firewall changes in a given change window. In addition to having more changes to deal with, IPv6 changes will be more complex. If you have a multi-vendor, multi-type firewall environment, the business case (i.e., time and cost savings) for automating firewall management should be extremely compelling. Look for tools that can analyze IPv6 addresses, objects, rules and ACLs across networks and security devices. Additionally, look for network management tools that can provide a reverse lookup from any IPv6 address to its human-readable name. Do not be the person who gets stuck manually troubleshooting mistyped IPv6 addresses across multiple firewalls.

When upgrading or automating, leverage internal and external domain expertise: Chances are the external people working with you on your IPv6 migration are working with others as well. Any tips or best practices, whether specific to IPv6 migration or general to the systems and products they work with, should be welcomed to ensure that systems are optimized for future needs. The processes you automate are likely to stick around for quite some time – take the time to set things up in a way that is aligned with the strengths of the products you're deploying, your standard operating procedures and the culture of your company and team.

While it may not be of consequence to end users, IPv6 migration will be a big deal for enterprise IT, particularly network and network security managers. Although IPv6 has been around for many years, it has been deployed on relatively few networks; because people are less familiar with it, they are less likely to spot mistakes. With IPv6, security practitioners have a chance to get ahead of the game and bake best practices into IPv6 processes and operations instead of bolting them on after the fact. Lessons learned and best practices will come from trial and error, information sharing, and support for industry initiatives. Let's not waste the opportunity to do things right.
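One cheap way to act on the "never hand-type IPv6 addresses" advice is to validate and normalize every address before it enters a rule base or inventory. This minimal Python sketch uses only the standard library; the sample address is the illustrative one from this article:

import ipaddress
import socket

def normalize_v6(raw: str) -> str:
    """Validate an IPv6 address and return its canonical compressed form."""
    addr = ipaddress.ip_address(raw.strip())
    if addr.version != 6:
        raise ValueError(f"{raw} is not an IPv6 address")
    return addr.compressed

# Mixed-case, zero-padded input normalizes to one canonical spelling
print(normalize_v6("2001:0DB8:0031:0001:020a:95ff:fef5:246e"))
# -> 2001:db8:31:1:20a:95ff:fef5:246e

# Reverse lookup can map an address back to a human-readable name
# (only works where PTR records actually exist)
host, _ = socket.getnameinfo(("2001:db8:31:1:20a:95ff:fef5:246e", 0), 0)
print(host)

Feeding every address through a normalizer like this means duplicates with different spellings collapse to one entry, and a typo that produces an invalid address fails loudly instead of silently creating a dead firewall rule.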
Published Wednesday, Mar 02, 2022, by Karim Husami

The combination of edge computing and artificial intelligence (AI) technology is essential for innovating the Internet of Things (IoT). Integrating these two growing concepts raises many issues, such as data storage structure, model generation algorithms, and cloud-edge collaboration mechanisms. Without AI, edge computing can deliver only basic network functions related to Quality of Experience (QoE), such as passive computation offloading and content caching.

Edge computing can be considered one of the candidate technologies for addressing these problems. It is a distributed computing paradigm that brings computation and data storage as close as possible to the relevant data sources, and it has been proposed to satisfy low-latency requirements and reduce bandwidth consumption. In 2018, it was named one of the technologies expected to lead businesses in the near future, especially for companies working with IoT devices and networks, where a fast response is one of the main goals of the application.

IoT-generated data can be differentiated into user-private data preserved locally on IoT devices, edge-private data isolated on the edge, and public data uploaded to the cloud. A cloud-scale machine learning model can therefore be generated, followed by privacy-preserving transfer learning running on each edge node, which also holds more frequently updated data that enables incremental learning of the model. In more detail, model distribution is achieved through lightweight deployment pipelines consisting of compression in the cloud and reconstruction at the edge. Conversely, some critical issues of edge computing, such as computation offloading and content caching, achieve a better solution using localized AI. A prototype-based evaluation indicates that this intelligent cooperative edge (ICE) computing architecture enables a benign combination of AI and edge computing.

The Internet of Things is playing an increasingly important role in human digital life. Its applications have expanded into more areas, such as intelligent homes, health monitoring, and vehicle networking. This has pushed IoT toward two critical trends: low-latency computing and intelligent services. The former gave birth to edge computing, while the latter promoted the application of artificial intelligence in the IoT. Therefore, the combination of edge computing and AI in IoT is imperative and has immense potential.
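To make the three data tiers concrete, here is a small, purely illustrative Python sketch of how an edge node might route incoming IoT data. The tier names follow the article; the classification rule, field names, and function are invented for the example:

from enum import Enum

class Tier(Enum):
    DEVICE = "user-private: keep on the IoT device"
    EDGE = "edge-private: keep on the edge node"
    CLOUD = "public: upload to the cloud"

def route(record: dict) -> Tier:
    # Invented policy: personally identifying data never leaves the device,
    # operationally sensitive data stays on the edge, the rest goes to cloud
    if record.get("contains_pii"):
        return Tier.DEVICE
    if record.get("site_sensitive"):
        return Tier.EDGE
    return Tier.CLOUD

sample = {"sensor": "thermostat", "contains_pii": False, "site_sensitive": True}
print(route(sample))  # Tier.EDGE

Splitting data this way is what lets the cloud train a large shared model on public data while each edge node fine-tunes it, via transfer learning, on data that never leaves its own boundary.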
There was a time when the average computer user had to enter login credentials only once per day - likely to simply gain access to the Windows environment. But in this cloud-computing/mobile-device age, unless you've activated the "stay signed in" option (assuming it's available), you'd probably have to log in to 5 or even 20 different Web sites or online applications per day. Many people already find it hard to remember a single username/password pair, let alone five or more. To work around this difficulty, some individuals use the same login credentials for all the sites or applications they need access to. But in doing so, they create a serious vulnerability. If a malicious individual gets hold of those credentials, he could potentially gain access to multiple applications. That's why some organizations have started implementing SSO, or Single Sign-On.

Note: Many of the applications we use these days are on the Web, so I'll be using the terms "web sites", "sites", and "applications" interchangeably throughout the article.

SSO in a nutshell

SSO is an advanced authentication, authorization and access control method that's built for environments where users normally access multiple applications every day. Adopted by companies like Google, Facebook, Yahoo, PayPal, and Microsoft, who run online sites that serve millions of users worldwide, SSO is designed to simplify access to multiple applications and address insecure login practices. Its most basic function is to enable users to log in just once in order to gain entry into several applications. For example, after you log in to Gmail, you're automatically granted access to your accounts in Google-owned applications like Drive, YouTube, Google+, Maps, Blogger, Play, and others.

Ever seen a "Login with Facebook" button? You probably have if you frequent certain social networking sites like Pinterest, FourSquare, and StumbleUpon. This is another example of SSO. With it, you can use your Facebook login credentials to access sites that support Facebook logins. While SSO is often associated with popular Web 2.0 sites, it's also ideal for, and certainly applicable to, all kinds of organizations.

Why organizations implement Single Sign-On

SSO offers significant benefits. Here are some of them:

SSO reduces password fatigue

People are now subjected to so many passwords that many of them eventually suffer from password fatigue. Password fatigue, which is caused by having to remember too many passwords, can force users to adopt insecure password habits - such as using the same password over and over again. But isn't using a single password what SSO is all about? Yes, but it's done in a more secure way. I'll explain this later.

SSO increases user productivity

When users have to log in to multiple applications every day, their productivity can suffer. Why? Because logging in takes time. Logging in to a single application would probably only take a second. But that only holds true if you have to open one application and one application alone, and hence have to remember only a single password. If you have to access multiple applications, and hence keep multiple passwords, you'd have to open your list of passwords, scan through it, and type in the one that matches the application in front of you. That, I can assure you, takes time. What if you forget or lose your list? That can be a big problem. You'll have to request a password reset and wait until you're issued a new password.
What makes things worse is that lost passwords not only affect user productivity. They can also bring down the productivity of the people tasked with resolving these types of issues.

SSO increases IT's productivity

Someone has to be in charge of generating, managing, and resetting passwords. That person or group of people would normally be your IT guy or IT department. Sometimes, your IT department also has to manage other authentication credentials like digital certificates, tokens, and keys. Managing security credentials for a large organization with thousands of users and numerous applications can be a headache. And yes, it can be a waste of time. Think of all IT's missed opportunities to innovate and engage in more productive endeavors just because they have to deal with password resets and other security credential management tasks. SSO relieves IT of all these hassles because most of the tasks, especially password management, are now going to be handled by the SSO identity provider.

SSO encourages usage of Web-facing applications

Because of the considerable convenience it offers end users, SSO can potentially drive up usage of any service you provide through the Web. Let's say, for example, you're providing users secure file transfer services and you decide to adopt the same SSO used by Google Apps. Once your users have logged into Google Apps, they would be automatically granted access to your secure file transfer service. They would be able to easily move from one application to the next (including yours) without having to log in each time. This convenience can encourage them to use your service more.

Lastly, and perhaps surprisingly, SSO is actually secure. But before I elaborate on that, let me give you a basic explanation of how SSO works. It will help you understand its security advantages compared to other methods.

A simplified explanation of how SSO works

Although there are different implementations of SSO, the general flow as well as the main players are almost the same. There are typically three main players in SSO:

✔ Service provider - This is an entity (usually a server) providing a service (e.g. file transfers, video streaming, cloud-based word processing, etc.);

✔ User - A person who wants to use the service. The User connects with a Service Provider through a client application (e.g. an app running on a mobile device or a Web application running on a browser);

✔ Identity provider - A server where user identities and credentials are stored. This server is responsible for all user authentication tasks.

Here's a simplified version of a typical SSO flow:

1. User connects to the service provider;
2. Service Provider transmits an authentication request to the Identity Provider;
3. User is redirected to the Identity Provider for authentication;
4. User submits login credentials to the Identity Provider, who in turn authenticates the User;
5. User is redirected back to the Service Provider, accompanied by a token confirming positive authentication and bearing User information and access rights;
6. User starts using the service.

Notice that the Service Provider doesn't, in any instance, perform user authentication. All of that is done at the Identity Provider. That's because all of the user's login credentials are stored in the Identity Provider. You'll see very shortly how this enhances security.
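To make step 5 concrete, here is a minimal, hypothetical sketch of the token check a service provider might perform. Real SSO deployments use standardized protocols such as SAML or OpenID Connect, typically with asymmetric signatures; the shared-secret HMAC below only illustrates the core idea that the service provider verifies a signed assertion instead of ever seeing a password.

import hashlib, hmac, json, time

SHARED_SECRET = b"secret-provisioned-between-idp-and-sp"  # hypothetical

def issue_token(user, rights):
    """Identity Provider side: sign the assertion sent back in step 5."""
    body = json.dumps({"user": user, "rights": rights, "iat": time.time()}).encode()
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return body, sig

def verify_token(body, sig):
    """Service Provider side: admit the user only if the signature checks out.
    Note that the SP never handles the user's actual credentials."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

A production implementation would also check the token's age (the "iat" field) and intended audience, and would rotate keys; those details are omitted here for brevity.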
Is Single Sign-On secure?

Let's now talk about what is perhaps the most common misconception about SSO. Many people think that, because SSO provides wholesale access to multiple applications, it poses a huge risk. Presumably, if a malicious individual acquires a user's SSO credentials, all applications protected by it can fall. While that last sentence may be true, it's really an oversimplification of the whole story. We're usually so tempted to jump straight to the part where a cybercrook has already acquired SSO credentials that we forget to analyze his chances of succeeding. Compared to current login practices, SSO can actually make login credentials less susceptible to unauthorized acquisition. Let me explain by giving two common scenarios.

Scenario #1: A user uses the same login credentials for all online applications

In this non-SSO scenario, the user employs the same username and password for all sites he logs into. This is actually the worst-case scenario. Many users know nothing about the security of most of the sites they sign up for, and yet they sign up anyway. Some of these sites may be secure, some may not (in which case, they can be easily hacked). Some may even be rogue sites, built to steal confidential information - like usernames and passwords. Both insecure sites and rogue sites can cause login credentials to easily fall into the wrong hands. Once those login credentials are stolen, all the lucky crook would have to do is find out which sites could be accessed with them. After that, hacking into the user's accounts on all those sites (including the secure ones) using the same username and password would be a walk in the park.

Scenario #2: A user uses different login credentials

Theoretically, this can be the most secure method. Unfortunately, it can also be the most tedious and time-consuming. One thing I'd like to point out - because many people don't realize this - is that even when a user employs different passwords for each application but registers the same email address every time he signs up, he's still giving attackers an opening. Once the email account gets hacked, it would be easy for the hacker to request password resets for all online applications linked to that email.

Why SSO can be more secure

When you implement SSO, all authentication processes and elements are handled by the identity provider. Many of these providers (e.g. Google, Yahoo!, AOL, Salesforce) are large and reputable organizations that have the means and motivation to establish really strong security. Thus, it would be extremely difficult for a cybercrook to acquire your login credentials from there.

SSO is better than Scenario #1 because even if the user attempts to connect to an insecure site or a rogue site, he won't be submitting his login credentials to them. Rather, he will be sending those credentials straight to the SSO identity provider. As long as the user always makes sure he is logging into his identity provider, his login credentials will be safe. You can learn more about SSO login best practices (particularly for the OpenID protocol) on this page.

Scenario #2 is theoretically good. In fact, it is probably the best, especially if 1) the email service provider is really secure, 2) the email account is protected with a strong password, and 3) the user maintains a secret list of passwords. Sounds good? Not really. In every security endeavor, you need to consider the human factor. Let's face it. Not all users are going to maintain a secret list of passwords. There will always be users who would want an easier way.
If they find a security policy too inconvenient (like having to use different passwords), they'll circumvent it. It always happens. SSO is better because it relies on a highly secure system that makes policies relatively easier to adhere to. SSO allows users to remember just one set of login credentials. All they have to do is keep those credentials secret. If they're able to do that, the security of the system will hold.

If you're looking for a secure file transfer server, we recommend one that supports SSO. Not only will Single Sign-On make it more convenient for your users to access your secure file transfer system (and encourage them to actually use it), it will also keep their login credentials safe. The latest version of JSCAPE MFT Server, which comes with a FREE fully-functional evaluation edition, already supports SSO.
<urn:uuid:820079af-4580-4624-854a-1fbae6418377>
CC-MAIN-2022-40
https://www.jscape.com/blog/sso-single-sign-on-simplified
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00073.warc.gz
en
0.942928
2,325
3.171875
3
What is Challenge-Response Authentication?

Challenge-response authentication is a group or family of protocols in which one party, typically the server, presents a challenge to another party, typically the client. The second party must respond with the appropriate answer to be authenticated.

A simple example of challenge-response authentication is security questions. The server challenges the user with a question, and the user must respond with the correct answer in order to be served.

The following DualShield authentication products support challenge-response authentication:
- Security Questions
- GridID Cards
- MobileID App
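Security questions are the example given above, but the same challenge/response shape appears in cryptographic protocols too. Below is a generic, minimal sketch (not how DualShield implements it): the server sends a random nonce as the challenge, and the client proves it holds a shared secret by returning an HMAC over that nonce, so the secret itself never travels over the wire.

import hashlib, hmac, os

SHARED_KEY = b"secret-provisioned-out-of-band"  # hypothetical shared secret

def server_challenge():
    """Server side: an unpredictable nonce, fresh for every attempt."""
    return os.urandom(16)

def client_response(challenge):
    """Client side: prove possession of the key without revealing it."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge, response):
    """Server side: recompute the expected answer and compare in constant time."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

Because each challenge is random, a captured response cannot be replayed against a future challenge, which is the main advantage over sending a static password.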
<urn:uuid:4381ba73-78c7-4a2b-abb5-f780d01ebbd5>
CC-MAIN-2022-40
https://deepnetsecurity.com/challenge-response/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00073.warc.gz
en
0.888784
120
2.65625
3
The Gramm-Leach-Bliley Act of 1999 (GLBA)

How to comply when disposing of computer equipment

What Is the Gramm-Leach-Bliley Act of 1999 (GLBA)?

The Gramm-Leach-Bliley Act of 1999 (GLBA) was a bipartisan regulation under President Bill Clinton, passed by Congress on November 12, 1999. The GLBA was an attempt to update and modernize the financial industry. It is best known for repealing the Glass-Steagall Act of 1933, which barred commercial banks from offering financial services, like investments and insurance-related services, as part of normal operations.

Many businesses collect personal information from customers. This might include names, addresses, and phone numbers; bank and credit card account numbers; income and credit histories; and Social Security numbers. The Gramm-Leach-Bliley Act requires companies defined under the law as "financial institutions" to ensure the security and confidentiality of personal information. As part of its implementation of the GLBA, the Federal Trade Commission (FTC) issued the Safeguards Rule, which requires financial institutions under FTC jurisdiction to have measures in place to keep customer information secure. Safeguarding customer information is the law, and disregarding that law can have major consequences for you, your customers, and your business.

The FTC, the federal banking agencies, other federal regulatory authorities, and state insurance authorities enforce the GLB Act. Each agency has issued substantially similar rules implementing GLB's privacy provisions. The states are responsible for issuing regulations and enforcing the law with respect to insurance providers. The FTC has jurisdiction over any financial institution or other person not regulated by other government agencies, and it may bring actions to enforce the Privacy Rule in federal district court, where it may seek the full scope of injunctive and ancillary equitable relief. The FTC also has authority under Section 5 of the FTC Act to examine privacy policies and practices for deception and unfairness.

Who must comply with the Gramm-Leach-Bliley Act?

All "financial institutions", of any size, must comply with the GLBA. A "financial institution" may include businesses that are "significantly engaged" in providing financial products or services. While this refers to banks and investment houses, it also includes check-cashing businesses, payday lenders, mortgage brokers, nonbank lenders, personal property or real estate appraisers, professional tax preparers, and courier services. The Safeguards Rule also applies to companies like credit reporting agencies and ATM operators that receive information about the customers of other financial institutions. Financial institutions must maintain their own safeguards, but they are also responsible for taking steps to ensure that their vendors, affiliates, downstream partners, and service providers safeguard customer information in their care as well.

How to comply:

The Safeguards Rule requires companies to develop a written information security plan that outlines their customer information protection program. The plan must be appropriate to the company's size and complexity, the nature and scope of its activities, and the sensitivity of the customer information it handles.
The customer information protection program has multiple sections, but each company must:
- Designate one or more employees to coordinate its information security program;
- Identify and assess the risks to customer information in each relevant area of the company's operation, and evaluate the effectiveness of the current safeguards for controlling these risks;
- Design and implement a safeguards program, and regularly monitor and test it;
- Select service providers that can maintain appropriate safeguards, make sure your contract requires them to maintain safeguards, and oversee their handling of customer information; and
- Evaluate and adjust the program in light of relevant circumstances, including changes in the firm's business or operations, or the results of security testing and monitoring.

Securing customer information is critical to compliance with GLBA. This effort must be made in all areas of operation, including the appropriate use and protection of laptops, PDAs, cell phones, and other mobile devices. Those devices must be secure while in use, but also after disposal.

As it pertains to the disposal of computer equipment, financial institutions must:
- Maintain a careful inventory of the company's computers and any other equipment on which customer information may be stored;
- Dispose of customer information in a secure way; and
- Destroy or erase data when disposing of computers, disks, CDs, magnetic tapes, hard drives, laptops, PDAs, cell phones, or any other electronic media or hardware containing customer information.

GLBA data privacy compliance requires that any third-party service provider that comes in contact with your clients' sensitive information must sign a "GLB Security Agreement". This agreement requires the vendor to provide the same level of data care and protection that your organization provides.

Over 25 years of experience with more than 1,000 clients

Back Thru The Future® specializes in providing secure onsite data destruction services to the financial industry in the Northeast business corridor. Our clients include some of the largest international banking enterprises as well as the Federal Reserve. In the State of NJ we service nearly 70% of the entire community banking industry. This specific experience, along with our unique credentials as a Federal EPA permitted universal waste destination facility electronic recycler and NAID AAA certified secure data destruction provider, meets the OCC requirements for a qualified third-party service provider.

100% of our client quality control surveys rate both our pre-project and post-project communications as "Excellent", and 92% of our new client quality control surveys have been returned marked "exceeded expectations".

Our mission is protecting our clients from environmental and data security liabilities with secure, auditable and compliant recycling and data destruction services.
<urn:uuid:397fb60a-beeb-407b-90f3-e03d84b9028f>
CC-MAIN-2022-40
https://www.backthruthefuture.com/the-gramm-leach-bliley-act/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00073.warc.gz
en
0.928326
1,251
3.046875
3
A Certificate Authority, or CA, is an entity or institution that is authorized and responsible for the distribution of digital certificates. The issuance of certificates is one of the important building blocks of securing interactions that take place over the Internet. Why do these certificates help in recognizing someone's identity over the Internet? Because a certificate cryptographically ties an identity to a public key. The primary use of certificates is in SSL encryption, in order to authenticate devices as well as people and to validate code and documents.

One of the defining features of a trusted certificate authority is ubiquity. It needs to be compatible with as many versions of Internet browsers and operating systems as possible, and it must provide uninterrupted certificate validation to users across any kind of service or device. In layman's terms, what do certificate authorities do? They ensure everyone on the Internet is who they claim to be!

The Types of Certificate Authority

There are two basic kinds of certificate authorities: the public certificate authority and the private certificate authority. The discussion below differentiates public vs private certificate authorities, so let us find out more about each.

A public CA is a certificate authority rendering services to the general public; any organization offering CA features and services to you that you are not associated with is basically a public CA. Most public CAs are companies that have earned the trust of the public at large. Some public CAs are operated by governments as well.

On the other hand, private CAs, also termed local CAs, are a form of self-hosted certificate authority meant only for internal usage. Private CAs are intentionally given limited scope and are usually employed within a single organization, a very large company, or even a university. A private CA is only "trusted" by users inside that organization, and it rarely interfaces with outside networks.

As you are often required to pay for each certificate issued, public CAs are the favorite option when you only have to release a limited number of certificates. On the other hand, if you forecast a high volume of certificates, whether because the organization is massive or because certs need to be reissued frequently, it can be cheaper to operate your own CA instead of paying a public CA for every release.

Another aspect that differs between the two CA types is the nature of communication: a public CA is the go-to solution in any case where the situation demands transparent communication on the Internet. For any public-facing service or product, you will require a public CA. The private CA approach is more controlled; having control over certificate expiration periods is an important factor for organizations with time-sensitive or critical needs.

Let's understand which one is more secure. The public certificate authority is a widely used utility over the Internet; most kinds of privacy or security on the public Internet involve a public CA in one way or another. This is not the case with private CAs - they are significantly more secure in comparison with their public counterparts.
While public CAs hand out certificates to anyone who pays, private CAs limit their certificates to specific devices or people, generally those inside the organization. Typical uses of a public certificate authority include implementing SSL, encrypting email, and signing digital documents. A private certificate authority, by contrast, is a vital part of building a secure and robust intranet (i.e. internal network); a minimal sketch of standing up a private root CA follows below.

Public vs Private Certificate Authority – Marking the Difference

The comparison above weighs various aspects of the two certificate forms, from utility to security features, to give the best possible understanding of both public and private CAs. Hopefully it has answered many of your questions.
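To make the private-CA idea tangible, here is a minimal sketch that creates a self-signed root certificate for internal use with Python's widely used cryptography package (a recent version is assumed). It is illustrative only: a production private CA also needs protected key storage (ideally an HSM), revocation via CRL/OCSP, and clear issuance policies.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the root key pair (keep the private key offline in practice).
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal Root CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# This PEM is what you would distribute to the machines that should trust
# your internal CA.
pem = cert.public_bytes(serialization.Encoding.PEM)

Certificates for internal servers would then be issued by signing their certificate signing requests with this root key (or, better, with an intermediate CA key).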
<urn:uuid:6d0f0425-0eb7-4268-b971-669ebcec2407>
CC-MAIN-2022-40
https://ipwithease.com/public-vs-private-certificate-authority/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00073.warc.gz
en
0.957642
822
3.125
3
According to Google researchers, numerous studies have shown that data center computers rarely operate at full utilization. This processor.com report takes an in-depth look at Google’s exploration and research on how to build energy-efficient data center networks that use power proportional to the data that gets transmitted throughout the network. “Several Google researchers recently authored a paper, ‘Energy Proportional Datacenter Networks,’ in which they propose several new methods of data center network designs. The methods pertain to the large clusters of 10,000 or more servers used by companies like Google and could result in significant energy and cost savings. In their paper, the research team demonstrated an 85% power reduction in a simulated energy proportional network. “‘Basically, what Google is talking about is taking the application load and translating it through sophisticated math models into capacity requirements and then adjusting the load to those requirements,’ says Clemens Pfeiffer, chief technology officer at Power Assure (www.powerassure.com), which makes software that uses algorithms to automatically adjust network capacity.”
<urn:uuid:75f4c610-07e3-47d3-b859-739923582706>
CC-MAIN-2022-40
https://www.enterprisenetworkingplanet.com/news/google-suggests-a-better-data-center-network/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00073.warc.gz
en
0.920914
225
2.875
3
Distributed Denial of Service (DDoS) attacks jumped into the mainstream consciousness last year after several high-profile cases - one of the largest and most widely reported being the Dyn takedown in Fall 2016, an interesting example as it used poorly secured IoT devices to coordinate the attack. DDoS is not necessarily a new threat; such attacks have been around since the late '90s. But when you consider that Gartner predicts there will be 20 billion connected devices by 2020 as part of the growing Internet of Things, the need to implement the right network procedures and tools to properly secure all these devices is only going to grow.

The New Battleground – Rent-a-bots on the Rise

Put simply, DDoS attacks occur when an attacker attempts to make a network resource unavailable to legitimate users by flooding the targeted network with superfluous traffic until it simply overwhelms the servers and knocks the service offline. Thousands and thousands of these attacks happen every year, and they are increasing both in number and in scale. According to some reports, 2016 saw a 138 percent year-over-year increase in the total number of attacks greater than 100Gbps.

The Dyn attack used the Mirai botnet, which exploits poorly secured, IP-enabled "smart things" to swell its ranks of infected devices. It is programmed to scan for IoT devices that are still only protected by factory-set defaults or hard-coded usernames and passwords. Once infected, the device becomes a member of a botnet of tens of thousands of IoT devices, which can then bombard a selected target with malicious traffic. This botnet and others are available for hire online from enterprising cybercriminals; and as their functionalities and capabilities are expanded and refined, more and more connected devices will be at risk. So what steps can businesses take to protect themselves now and in the future?

First: Contain the Threat

With the rise of IoT at the heart of digital business transformation, and its power as an agent for leveraging some of the most important technological advances - such as big data, automation, machine learning and enterprise-wide visibility - new ways of managing networks and their web of connected devices are rushing to keep pace. A key development is IoT containment. This is a method of creating virtual isolated environments using network virtualization techniques. The idea is to group connected devices with a specific functional purpose, and the respective authorized users, into a unique IoT container. You still have all users and devices in a corporation physically connected to a single converged network infrastructure, but they are logically isolated by these containers.

Say, for example, the security team has 10 IP-surveillance cameras at a facility. By creating an IoT container for the security team's network, IT staff can create a virtual, isolated network which cannot be accessed by unauthorized personnel - or be seen by other devices outside the virtual environment. If any part of the network outside of this environment is compromised, the compromise will not spread to the surveillance network. This can be replicated for payroll systems, R&D or any other team within the business.

By creating a virtual IoT environment you can also ensure the right conditions for a group of devices to operate properly. Within a container, quality of service (QoS) rules can be enforced, and it is possible to reserve or limit bandwidth, prioritize mission-critical traffic and block undesired applications.
For instance, the surveillance cameras that run a continuous feed may require a reserved amount of bandwidth, whereas critical-care machines in hospital units must get the highest priority. This QoS enforcement can be better accomplished by using switches enabled with deep-packet inspection, which see the packets traversing the network as well as what applications are in use - so you know if someone is accessing the CRM system, security feeds or simply watching Netflix.

Second: Protection at the Switch

Businesses should ensure that switch vendors are taking the threat seriously and putting in place procedures to maximize hardware protection. A good approach can be summed up in a three-pronged strategy.

- A second pair of eyes - make sure the switch operating system is verified by third-party security experts. Some companies may shy away from sharing source code to be verified by industry specialists, but it is important to look at manufacturers that have ongoing relationships with leading industry security experts.
- Scrambled code means one switch can't compromise the whole network. The use of open source code as part of operating systems is common in the industry, which does come with some risk as the code is "common knowledge". By scrambling object code within the switch's memory, even if a hacker could locate sections of open source code in one switch, each switch would be scrambled uniquely, so the same attack would not work on multiple switches.
- How is the switch operating system delivered? The IT industry has a global supply chain, with component manufacturing, assembly, shipping and distribution having a worldwide footprint. This introduces the risk of the switch being tampered with before it gets to the end customer. The network installation team should always download the official operating system to the switch directly from the vendor's secure servers before installation.

Third: Do the Simple Things to Secure Your Smart Things

As well as establishing a more secure core network, there are precautions you can take right now to enhance device protection. It is amazing how many businesses miss these simple steps.

- Change the default password - One very simple and often overlooked procedure is changing the default password. In the Dyn case, the virus searched for the default settings of IP devices to take control.
- Update the software - As the battle between cybercriminals and security experts continues, the need to stay up to the minute with the latest updates and security patches becomes more important. Pay attention to the latest updates and make staying on top of them part of the routine.
- Prevent remote management - Disable remote management protocols, such as telnet or http, that provide control from another location. The recommended secure remote management protocols are SSH or https. (A small sketch for auditing this on your own devices appears at the end of this article.)

Evolve Your Network

The Internet of Things has great transformative potential for businesses in all industries, from manufacturing and healthcare to transportation and education. But with any new wave of technical innovation comes new challenges. We are at the beginning of the IoT era, which is why it's important to get the fundamental network requirements in place to support not only the increase in data traversing our networks, but also enforcing QoS rules and minimizing risk from cyberattacks.
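As promised above, here is a minimal, defensive sketch for checking whether devices on your own network still expose insecure remote-management ports such as telnet (23) or plain http (80). It only attempts TCP connections; the port numbers and the notion of "risky" here are our illustrative assumptions, and you should only scan networks you administer.

import socket

RISKY_PORTS = (23, 80)   # telnet and plain http; SSH (22) and https (443) preferred

def exposed_mgmt_ports(host, ports=RISKY_PORTS, timeout=1.0):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: audit a camera at a hypothetical address on your own network.
# exposed_mgmt_ports("192.168.1.50") -> [23] would mean telnet is still enabled.

A real audit would also verify that default credentials have been changed and that firmware is current; this sketch covers only the "prevent remote management" item.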
<urn:uuid:bb157a97-8192-49b1-ab55-47615f630dc8>
CC-MAIN-2022-40
https://www.datacenterknowledge.com/industry-perspectives/checklist-getting-grip-ddos-attacks-and-botnet-army
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00273.warc.gz
en
0.93109
1,399
2.53125
3
What Is Data Discovery and Why Should I Use It?

The brain is an amazing organ. It weighs about 3 ½ pounds and consists of two hemispheres, 100 billion neurons, and 100 trillion synapses: the command and control center for each of us. The brain controls autonomous functions like heartbeats and brainwaves, but it also allows us to learn through its unequaled capacity for identifying and analyzing patterns. Intelligence is essentially the ability to store patterns in our memory. Recognizing and analyzing patterns makes us uniquely human. That is, until computers can do it better. When IBM developed machines to recognize and analyze chessboard patterns, the machines could "learn" to defeat grandmasters. Today, IBM's Watson is starting to diagnose diseases; Google and Tesla vehicles are learning to drive autonomously. This is all possible through data discovery.

At its core, data discovery is the "process of extracting actionable patterns from data." It starts with aggregated data, identifies outliers, and results in extracted data to be leveraged in specific circumstances. Data discovery helps you develop real-world solutions based on specific data provided by established patterns.

That Doesn't Sound Too Hard…

And, it isn't - if you have the right tool. The problem is the sheer amount of data. Estimates put data generation at 2.5 billion gigabytes per day, and 90% of the world's collective data has been produced in just the last two years. In a 2016 study, Veritas found that of the information stored and processed globally, 52% is "dark" (no one really knows what it holds) and another 33% is redundant, obsolete or trivial (ROT) and useless. More telling is the fact that employees spend up to 30% of their workday (2.5 hours) searching for useful data.

Can any company tame the chaos of information overload? Is there a way to locate all files and to evaluate each one's importance, if any, to the business? And is it even possible to create an ongoing, sustainable system to manage files today and in the future? Yes! Information management, also known as information governance or IG, is not only possible but accessible. The right tool can help you comprehend and protect the data you have, get rid of data you don't need, and provide data access to the right people at the right time.

How Can FileFacets Help?

With FileFacets, your business can find, analyze, and categorize data. The result: you spend less time searching for data and solve problems more quickly and effectively. FileFacets scans content within file-sharing environments, enterprise content management systems, Microsoft Exchange servers, and individual desktops. The tool analyzes and recognizes data patterns, then categorizes files into tailored headings (e.g., contracts, billing, PII, etc.). It can also flag anomalies (outliers) to determine whether files or data are ROT (redundant, obsolete, and trivial); this helps optimize your data for action. Then it helps you manage your data for easier searches and analysis. FileFacets allows you to locate and process all content across your enterprise.
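FileFacets' own classification methods are proprietary, but one small slice of ROT detection, finding byte-identical redundant files, is easy to illustrate. The sketch below hashes every file under a directory tree and reports groups of exact duplicates; real data discovery tools go much further (near-duplicates, content categories, PII detection).

import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Map content hash -> list of paths, keeping only hashes seen twice or more."""
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)   # stream in 1 MB chunks to bound memory
            except OSError:
                continue                  # unreadable file: skip rather than fail
            by_hash[h.hexdigest()].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

# for digest, paths in find_duplicates("/data/shared").items():
#     print(digest[:12], "->", paths)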
<urn:uuid:0d4567f4-be22-4a5c-92ca-bb3f4bdbfc21>
CC-MAIN-2022-40
https://data443.com/blog/what-is-data-discovery-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00473.warc.gz
en
0.886901
676
2.515625
3
IoT devices and blockchain innovations have a lot more in common than you may realize. First, let's examine blockchain. Basically, blockchain applies time-stamps and visibility to online transactions. A blockchain continues to build upon itself, so with each blockchain transaction, security expands.

In terms of supply lines, blockchain tech in conjunction with IoT can do a great job of reducing operational cost. Supply chains generate quite a bit of paperwork. As a matter of fact, it's estimated that about 20% of supply chain cost is caused by paperwork. Blockchain technology in combination with IoT devices helps make information visible, but not alterable, to those with the requisite access permissions. And combining blockchain with IoT technology can help substantially reduce supply chain costs. Additionally, IoT supported by blockchain technology provides:
- Workflow Improvement
- Infrastructural Management
- Reduction of Security Threats
- Asset Life Cycle Management
- Reduction of out-of-stock incidents
- Reduction of data redundancy

Workflow and Infrastructure

The supply chain is complex, and IoT devices can help improve operational workflow and supply chain management. Case studies in the shipping industry have identified opportunities for cost reduction and process improvements through paperwork reduction and real-time workflow management. The improved supply chain transparency will reduce fraud and processing errors and improve inventory infrastructure management. And when the paper trail is replaced by an automated record tracking system, more, and more accurate, data can be collected and used to manage the process and the operational infrastructure.

In addition, with real-time data collection and storage in the cloud, inventory in the entire supply chain can be better controlled. Inventory levels and usage rates at key stations are visible to the entire supply chain. The inventory consumption rate and replenishment rate at these key stations can then be used to tightly control inventory levels. A more tightly controlled inventory level in the supply chain, while preventing stock-out and overstock situations, results in significant cost savings. Improved supply chain transparency is also beneficial when a product is selling particularly well. The higher sales numbers can be transferred to manufacturing facilities in real time, allowing them to increase production to replenish a supply line before the inventory is exhausted.

Security and Assets

Almost anything can be recorded in a blockchain. A blockchain is a distributed shared ledger that is tamper-resistant. It allows participating companies to store, view and share digital information in a secure environment. Blockchains are designed to be secure, and because of this feature companies are exploring ways to use blockchains for fraud prevention. Fraud is an issue in supply chains because they are complex and offer lots of opportunities for fraud to be committed and go undetected. Blockchain can help to reduce fraud in the supply chain with greater transparency and improved traceability of inventory. Once an item is digitized on a blockchain, it can easily be traced back to its origin. With blockchain ledgers supported by IoT devices, security threats to assets in a supply chain can be identified and reduced.

Reduction of Redundancies

Something else that is considerable here is the reduction of data redundancies.
In terms of backup, redundancy is good, but in a supply chain, too much paperwork can "clog the works". With blockchain-enabled IoT, everyone can more easily get on "the same page", errors can be identified, and bureaucratic data creation can be managed so that the organizational weight of the supply chain is reduced.

Optimizing Your Supply Chain

IoT devices that incorporate blockchain protocols can help optimize supply chains in terms of security, efficiency, and ultimately, profitability. An added bonus is increased competitiveness through decreased operational losses and a subsequent increase in profit. Accordingly, introducing this technology has a high chance of paying for itself.
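To illustrate the tamper-evidence property discussed above, here is a toy hash-chained ledger in Python. Each block stores the hash of its predecessor, so altering any historical record changes its hash and breaks every later link. This shows only the chaining idea, none of the consensus, replication, or permissioning that a real supply-chain blockchain platform adds.

import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle records with a timestamp and the previous block's hash."""
    block = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and check every link; False means tampering."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        serialized = json.dumps(body, sort_keys=True).encode()
        if block["prev_hash"] != prev["hash"]:
            return False
        if hashlib.sha256(serialized).hexdigest() != block["hash"]:
            return False
    return True

# genesis = make_block(["shipment 17 left warehouse A"], prev_hash="0" * 64)
# block2 = make_block(["shipment 17 arrived at port B"], prev_hash=genesis["hash"])
# verify_chain([genesis, block2]) -> True; edit any record and it becomes False.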
<urn:uuid:10930ba9-92c1-4ea0-b985-7b1d927c4dd9>
CC-MAIN-2022-40
https://iotmktg.com/iot-devices-blockchain-technology-fine-match/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00473.warc.gz
en
0.927482
779
2.609375
3
The beautiful first photos from the James Webb Space Telescope provide the deepest and clearest look yet into outer space, Lisa Grossman reported in "Postcards from a new space telescope" (SN: 8/13/22, p. 30). JWST observes space using infrared, a form of light not visible to the human eye. To visualize the images, scientists colorize them. Reader John Dohrmann wondered how that colorizing is done.

JWST's images are colorized by senior data imaging developer Joseph DePasquale and science visuals developer Alyssa Pagan, both of the Space Telescope Science Institute in Baltimore, Grossman says. Their basic rule of thumb is to color the images using wavelengths of light as a guide. The light emitted at the longest wavelength in an image is assigned the color red, and the shortest blue, she says. Wavelengths in between are assigned a spectrum of greens and yellows (SN: 3/17/18, p. 4). But there are also other considerations, such as data on the chemical compositions of the material in the image. How to colorize those elements can be more of an art than a science, Grossman says. "There's a subjective artistry to it too."

Reader Stu Kantor asked why some stars in the JWST images appear to have eight spikes - six large ones and two smaller ones (see "Out of this world," below). These are called diffraction spikes, Grossman says, and they're an artifact of the telescope's optical setup. JWST has two mirrors: a primary hexagonal mirror and a smaller secondary mirror that sits in front of the primary mirror and is held up by three support beams. When it hits the telescope, light bends at the two edges of each of the secondary mirror's supports, producing six diffraction spikes. The six edges of the primary mirror also create six spikes. Scientists designed the telescope so that four of the spikes from the secondary supports overlap with four of the primary mirror's spikes, Grossman says, so though there are 12 spikes, we see only eight.

Diffraction spikes are not unique to JWST. "Images from the Hubble Space Telescope have these too, but they only have four," Grossman says. "The eight points are a special feature of JWST, like an artist's signature."

On the nose

Scientists discovered a neural link in the dog brain that connects the olfactory system to vision, which may help explain why humankind's best friend is such a good sniffer, Laura Sanders reported in "New nose-to-brain link ID'd in dogs" (SN: 8/13/22, p. 9). The story inspired several readers to reflect on the behavior of their own furry friends.

"I now know why my German shepherd couldn't play the simplest version of the shell game," Ed Hughes wrote. "Using a small piece of pet food and two Dixie cups … one shift in the location of the cup hiding the pet food completely confused her. I could watch her eyes track the cup, but she never picked the cup with the pet food. She had prelocated it with her nose, and anything her eyes detected was completely ignored."

Reader Roy R. Ferguson shared his fascination with dogs' sniffing abilities, having worked with the animals in search and rescue efforts for the last 20 years with his wife. "We have learned to allow the K-9s to do their work with as little supervision as possible," Ferguson wrote.

"They always make decisions that seem unusual at the time but make sense once the full story is known."

"Our K-9s have located drops of blood in light rain and human decomposition in various vehicles. Live finds include one man who wandered over 10 miles after a head wound and a 6-year-old who had been out all night …. The child find was notable because of the large amount of scent contamination in the area," Ferguson added.

"We don't know how these amazing creatures do such marvelous feats. They work their hearts out for nothing more than praise and a toy reward," Ferguson wrote. "It has occurred to [us] that we're there to provide them support, drive and work the radio. In return, they make us look as if we know what we're doing."
<urn:uuid:84495d7c-2b92-4a5f-b966-70498860400a>
CC-MAIN-2022-40
https://dimkts.com/readers-discuss-colors-and-spikes-in-the-james-webb-space-telescopes-images-and-more/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00473.warc.gz
en
0.947922
994
3.328125
3
Now it's Python's time to boast. On average, it has had the lowest number of high-severity security vulnerabilities over the past 5 years. In 2018, security vulnerabilities in the language decreased, and they have been decreasing overall since 2015.

Why is Python a secure language?

Python is designed as a user's language. It gives developers all of the tools that they need in order to build solid applications that won't fall prey to common exploits inherent in more complicated programs. … Python makes it easy to ensure your data remains secure.

Is Python more secure than Java?

Security. Python and Java are both termed secure languages, yet Java is more secure than Python. Java has advanced authentication and access control functionalities which keep the web application secure.

Which programming language is most secure?

According to our knowledge base, C has the highest number of vulnerabilities out of all seven languages, with 50% of all reported vulnerabilities in the past 10 years.

How safe is Python?

By and large, the official third-party library repositories for languages run as open source projects, like Python, are safe. But malicious versions of a library can spread quickly if unchecked.

Is Python more secure than C?

What is the best programming language?

7 Best programming languages for beginners to learn in 2021
- Python. The ever-growing importance of data in business has resulted in a quick rise in popularity and demand for Python. …
- Go. …
- Java. …
- Kotlin. …
- PHP. …

Is Python good for cyber security?

Is Python good for cybersecurity? Python is an extremely useful programming language for cybersecurity professionals because it can perform a multitude of cybersecurity functions, including malware analysis, scanning, and penetration testing tasks.

Is Java safe in 2021?

YES. Java is one of the most secure languages in the market. Java's security features are far superior to other leading programming languages.

Which programming language do hackers use?

Access Hardware: Hackers use C programming to access and manipulate system resources and hardware components such as the RAM. Security professionals mostly use C when they are required to manipulate system resources and hardware. C also helps penetration testers write programming scripts.

Which one is better, Java or Python?

Java and Python are the two most popular programming languages. Both are high-level, general-purpose, widely used programming languages. Java vs. Python:

| Aspect | Java | Python |
| Learning curve | Difficult to learn | Easy to learn |

Is Python more secure than PHP?

Python is more secure than PHP. It has many security features that you can leverage to build complex applications with high-end functionality and clear goals. In fact, this March 2019 report shows that Python is one of the most secure programming languages.

How do I protect a Python code?

The best solution to this vulnerability is to encrypt Python source code. Encrypting Python source code is a method of "Python obfuscation," which has the purpose of storing the original source code in a form that is unreadable to humans.

Is it safe to download Python?

So the answer to your question is: yes, it is safe. Go ahead and install it from the official source/website.

Does Python have security issues?

Python is increasingly becoming one of the most popular programming languages among developers. The relatively low number of Python security issues and its user-friendliness give it an edge over other languages.
<urn:uuid:6ac0333b-93e0-46b6-b0d9-fc70802ed567>
CC-MAIN-2022-40
https://bestmalwareremovaltools.com/physical/frequent-question-is-python-a-secure-language.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00473.warc.gz
en
0.909609
850
2.765625
3
What is Network Functions Virtualization (NFV)?

The telecom industry sometimes has a way of making even the smartest of us feel lost, confused, and occasionally even a little dumb. Just when you've figured out the latest hot buttons in the industry buzz - cloud computing, OpenFlow and software defined networking (SDN) - along comes another new concept for you to get your head around. The latest new concept is called Network Functions Virtualization, or NFV, and it has rightfully taken its place in the industry conversation as another step towards creating more agile, lower-cost network infrastructure. But what exactly is NFV, and how does it fit into the current industry drive towards enabling more open, consolidated, packet-based networks? Let's start with the basics.

So what exactly is Network Functions Virtualization?

Network Functions Virtualization, or NFV, is a way to reduce cost and accelerate service deployment for network operators by decoupling functions like a firewall or encryption from dedicated hardware and moving them to virtual servers. Instead of installing expensive proprietary hardware, service providers can purchase inexpensive switches, storage and servers to run virtual machines that perform network functions. This collapses multiple functions into a single physical server, reducing costs and minimizing truck rolls. If a customer wants to add a new network function, the service provider can simply spin up a new virtual machine to perform that function. For example, instead of deploying a new hardware appliance across the network to enable network encryption, encryption software can be deployed on a standardized server or switch already in the network. This virtualization of network functions reduces dependency on dedicated hardware appliances for network operators and allows for improved scalability and customization across the entire network. Different from a virtualized network, NFV seeks to offload network functions only, rather than the entire network.

What are the advantages of NFV?

NFV reduces the need for dedicated hardware to deploy and manage networks by offloading network functions into software that can run on industry-standard hardware and can be managed from anywhere within the operator's network. Separating network functions from hardware yields numerous benefits for the network operator, which include:
- Reduced space needed for network hardware
- Reduced network power consumption
- Reduced network maintenance costs
- Easier network upgrades
- Longer life cycles for network hardware
- Reduced maintenance and hardware costs

Is NFV an open standard?

The concept and benefits of NFV are simple enough, but implementing NFV gets more complicated. That's because in order to realize the full benefit of NFV, some level of cooperation and interaction between various network solution providers and network operators is needed. That's where industry groups like ETSI come into play. Over 130 of the world's leading network operators have joined together to form an ETSI Industry Specification Group (ISG) for NFV. While the ETSI NFV ISG has garnered a lot of interest in defining the framework for NFV, it is only one player amongst many in this now burgeoning area of industry development. Literally dozens of groups, some open-source, others more traditional standards organizations, are creating pieces of the (large) puzzle needed to make NFV a reality.
All while operators large and small kick the tires, engage in proofs-of-concept exercises and evaluate the business case for what is surely the industry’s largest transformation in decades. Want to dig deeper into NFV? Here are some additional resources for you: - Ciena's SDN and NFV resource hub - White Paper: Quantifying the business benefits of NFV-based managed enterprise services - SDN and NFV are changing the game with Blue Planet - Webinar Archive: The transformation of your network starts with software - Chalk Talk video: What is SDN? - Ciena Blue Planet NFV Service Orchestration This post has been updated from its original content to reflect the latest definitions and resources related to NFV. The last update to this page was July 13, 2016.
<urn:uuid:08721a47-ab87-44f9-b6a8-bd27cbc55d02>
CC-MAIN-2022-40
https://www.blueplanet.com/resources/What-is-NFV-prx.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00473.warc.gz
en
0.908195
842
2.546875
3
Cybercrimes cover a broad spectrum, from email scams to downloading copyrighted works for distribution, and are fueled by a desire to profit from another person's intellectual property or private information.

Computer forensics, or digital forensics, is a fairly new field. It is the art and science of applying computer science to aid the legal process. The goal of computer forensics is to perform a structured investigation while maintaining a documented chain of evidence to find out exactly what happened on a computing device and who was responsible for it. Forensic investigators typically follow a standard set of procedures. Computer forensics requires specialized expertise and tools that go beyond the usual methods of collecting and storing data available to end users or technical support personnel. It involves techniques and principles similar to data recovery, but with additional guidelines and practices designed to create a legal audit trail.

Computer forensics will play a greater role in exposing the malicious acts of people as it continues to advance. It will make it more difficult for people to hide their wrongful acts and easier to hold them responsible. The issues facing computer forensics examiners can be broken down into three broad categories: technical, legal and administrative.

Forensics is critically important to the incident response process and is useful for both routine and timely response. For example, in an incident where a company is dealing with a successful phishing attack, forensic processes can be used to establish facts such as who clicked on the link, who was successfully phished/compromised, and what information was actually accessed or taken. Computer forensics has become its own area of scientific expertise, with accompanying coursework and certification.

What knowledge and skills should a digital examiner have?

Mobile Forensics

There are a lot of mobile devices around. Luckily for digital examiners, the war between developers of mobile device operating systems has ended: 99% of mobile devices now run iOS or Android. Knowledge of the forensic artifacts of just two mobile operating systems allows a digital examiner to explore a vast number of mobile devices. Mobile devices store a lot of private data about their owners.
Some mobile devices are also vulnerable to virus attacks despite the actions taken by developers, which can lead to the theft of private data by hackers. There are several good tools for extracting and analyzing data from mobile devices, but manual analysis will uncover more forensic artifacts on the analyzed device. Cloud Forensics. The cloud concept is very convenient for users. You can access your private information or working documents from anywhere in the world, without worrying that a hard drive in a laptop or desktop may fail, or that priceless family photos will become inaccessible in a broken smartphone. Many cloud services allow users to copy all of their information and files to a local PC or laptop for free. Exploring the artifacts of cloud services on the owner's devices lets you understand what files were uploaded to or downloaded from a cloud, along with other information about the use of cloud services and the data stored in them. Drone Forensics. Drones are used more and more in everyday life, and investigating information extracted from them will soon become routine work for digital examiners. We already see encryption used to protect data in the memory of drones, and cloud services used to store the information a drone needs to function. Windows Forensics. The vast majority of PCs and laptops run Windows, and companies often use servers running Windows as well. Researchers constantly report the discovery of new artifacts that can be used in forensic analysis. Knowledge of Windows forensics is therefore fundamental for any digital examiner. Mac Forensics. The number of Mac owners varies from country to country, but the general trend is that the number of Macs arriving in digital forensics laboratories is increasing. Knowledge of Mac forensics allows a digital examiner to explore these devices successfully. File Systems Forensics. There are not many basic file systems: EXT, FAT, NTFS and HFS+. Knowledge of the artifacts that remain in these file systems is needed in Windows forensics, incident response, data recovery and mobile forensics. Incident Response. There are many tools on the internet for hackers and penetration testers, and these tools automate the routine work of attackers. As a result, the number of incidents involving the theft of money and of private or financial information is constantly increasing, and the demand for digital examiners with incident response skills is growing with it. Memory Forensics. This is a specific area of knowledge that a digital examiner will not use day to day. However, memory forensics allows significantly faster incident response, detection of malware, and decryption of drives and partitions, and the examiner can retrieve other data and files stored in the RAM of the device under examination. Network Forensics. This allows detection of anomalies in the operation of computer networks and detection of intruders; it is also used in dynamic analysis of malware. Cyber Threat Intelligence. Hackers and pentesters can use a huge number of methods to penetrate a targeted computer or network. Knowledge of cyber threat intelligence allows the examiner to narrow a whole variety of methods down to the few most likely ones. This reduces response time to an incident and helps identify all compromised computers and other devices (for example, routers).
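Threat intelligence is often consumed as machine-readable indicators of compromise (IOCs), such as file hashes. The sketch below shows triage by hash matching; the directory and the hash value are placeholders, since real indicators would come from an intelligence feed.

```python
import hashlib
from pathlib import Path

# Dummy IOC set for illustration only; real hashes come from a threat-intel feed.
KNOWN_BAD_SHA256 = {"0" * 64}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large evidence files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def triage(directory: str) -> None:
    """Flag files whose hashes match known-bad indicators."""
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"IOC match: {path}")

if __name__ == "__main__":
    triage("suspect_share")  # hypothetical directory under examination
```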
Malware Forensics. Of course, a digital examiner does not have the same skills as a malware analyst. However, a digital examiner's knowledge should suffice to understand which viruses participated in an incident (a compromised system usually contains several) and how the attack on the compromised system was carried out. For example, a typical attack on a computer looks like this: an email with a malicious document arrives at the address of the computer's owner; when someone tries to open the document, it runs a PowerShell script that downloads an executable file (a virus). Understanding how such an incident happened and what took place on the compromised computer requires knowledge of malware forensics. This article has examined the basic knowledge and skills that should help a digital examiner work effectively. Of course, the list is not exhaustive: developments in the computer industry require a digital examiner to keep learning new skills and knowledge. About the authors: Oleg Skulkin, GCFA, MCFE, ACE, is a DFIR enthusional (enthusiast + professional) and co-author of Windows Forensics Cookbook and Practical Mobile Forensics. Igor Mikhaylov, MCFE, EnCE, ACE, OSFCE, is a digital forensic examiner with more than 20 years of experience and author of Mobile Forensics Cookbook.
Cyber attacks date back as far as the Internet itself, but the first major cyber attack took place in 1988. Since then, malicious software has continued to evolve and multiply, so much so that the risk has become real for companies of all sizes (very small businesses are often the most targeted). Today, 57% of cyber attacks have negative consequences, including production disruptions, website unavailability, loss of revenue or even business interruption. While cyber attacks often share similarities, it has become necessary to understand them in order to fight them effectively: a major challenge for the cybersecurity of companies. Cyberattacks: phishing, spear phishing and botnets. Phishing and spear phishing are among the most common cyberattacks. Indeed, this threat affected 79% of hacked companies in 2019, according to a CESIN study. According to the French agency ANSSI, "phishing aims to obtain from the recipient of a legitimate-looking email that he transmits his bank details or login credentials to financial services, in order to steal money". While phishing is an attack addressed to a global mass of recipients, spear phishing is a targeted attack. To protect yourself, it is advisable to be critical and to analyze your e-mails. The easiest way to do this is to move the mouse cursor over a link before clicking, to check its authenticity. Similarly, it is recommended to check that the URL starts with "https" on online shopping sites. Adding anti-spam filters to your inbox and anti-phishing filters to your web browser can also be effective against phishing. Finally, it is possible to use a sandbox environment that simulates the opening of a potentially dangerous attachment or link. The president scam, or fake money order scam (FOVI), affected 47% of companies attacked in 2019. It consists of impersonating an executive (CEO, CTO, supervisor, supplier and so on) to obtain a wire transfer. In five years, 2,300 complaints of this type have been recorded by the French Ministry of the Interior. In March 2018, the Pathé group lost more than 19 million euros to this fraud. To avoid falling into this trap, it is recommended to raise awareness among your employees. A botnet (or zombie army) refers to a network of computers infected with malware. Botnets are used to send viruses, steal data or perform DoS attacks. To do this, attackers use drive-by downloads and e-mail. The most famous botnets are Kelihos, Conficker, Zeus, Waledac and Mariposa. To counter them, it is recommended to apply regular security updates or to use RFC 3704 filtering. Cyber attacks: malware, rootkits and DoS. Malware is both vicious and plentiful. It is capable of recording data without a user's knowledge (spyware), holding files hostage (ransomware) or hiding in legitimate-looking software (a Trojan horse). Among the most famous malware is Stuxnet, used in 2010 against centrifuges at the Natanz uranium enrichment site in Iran. There is also Triton, malware directed against the Petro Rabigh company in Saudi Arabia, which could have had disastrous environmental consequences had it not been stopped in time. Rootkits and exploit kits allow hackers to access the administrator account of a machine, making it easy to gain administrator privileges. This sneaky method can also be used to hide other malware on a device. Once in control of the operating system, the hacker can use its functions remotely. Some rootkits can even alter the security settings of a machine, which makes them even more difficult to detect.
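Some of the phishing hygiene described above (checking link targets, preferring https) can be partly automated. The sketch below is a toy only; the heuristics and suspicious-TLD list are assumptions, not a substitute for curated blocklists or real mail filtering.

```python
from urllib.parse import urlparse

# Illustrative heuristics; a real filter would use reputation services.
SUSPICIOUS_TLDS = {".zip", ".xyz"}

def looks_risky(url: str) -> list[str]:
    """Return human-readable reasons a link deserves a second look."""
    reasons = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        reasons.append("not served over https")
    if any(parsed.netloc.endswith(tld) for tld in SUSPICIOUS_TLDS):
        reasons.append("suspicious top-level domain")
    if "@" in parsed.netloc:
        reasons.append("userinfo trick in the host part")
    return reasons

for link in ["http://paypa1-login.xyz/verify", "https://www.example.com"]:
    print(link, "->", looks_risky(link) or "no obvious red flags")
```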
A Distributed Denial of Service (DDoS) attack can paralyze a website. The principle is simple: the attacker floods the site's traffic until it malfunctions. The service then becomes unavailable, which can cause severe financial losses and damage the company's image. This kind of cyber attack is public, and can therefore be noticed by the company's suppliers, customers, partners and prospects. It is advisable to be careful and to protect yourself with a firewall and complex passwords. To limit the risk of cyber attacks, it has become essential to be more vigilant with regard to one's computer network, USB keys, the web, applications, wifi, connected objects, and suppliers and partners. Each of these can, in its own way, be the vector of a cyber attack. An effective cybersecurity method consists of equipping oneself with detection probes. Gatewatcher now offers its Trackwatch solution, capable of analyzing network flows to detect advanced threats.
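As a crude illustration of the flood pattern described above, the following sketch counts requests per client IP in a web access log and flags unusually chatty sources. The log format and threshold are assumptions, and real DDoS mitigation happens upstream, at the firewall or CDN.

```python
from collections import Counter

THRESHOLD = 1000  # illustrative; tune to baseline traffic

def flag_floods(log_path: str) -> None:
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # Assume common log format: the client IP is the first field.
            ip = line.split(" ", 1)[0]
            hits[ip] += 1
    for ip, count in hits.most_common():
        if count < THRESHOLD:
            break
        print(f"{ip} made {count} requests - possible flood participant")

if __name__ == "__main__":
    flag_floods("access.log")  # hypothetical log file
```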
All You Need to Know About Ransomware Attacks. Fremont, CA: Cybercriminals use ransomware as a form of malware. When ransomware infects a device or network, it either disables access to the machine or encrypts its data, and the cybercriminals demand ransom money from their victims in exchange for freeing it. A close eye and protection tools are advisable to protect against ransomware infection. After being hit, victims have three options: pay the ransom, attempt to remove the malware, or restore the system. Extortion Trojans commonly use the Remote Desktop Protocol, phishing emails, and software vulnerabilities as attack vectors, so a ransomware attack can affect both individuals and businesses. Two types of ransomware: Locker ransomware. Malware of this kind disables basic machine functions. Users may, for example, be refused access to the desktop while the mouse and keyboard are partially disabled, leaving them able to interact only with the ransom demand window in order to pay. Aside from that, the machine is unusable. Crypto ransomware. Crypto ransomware aims to encrypt crucial data such as documents, photos, and videos without disrupting the computer's essential functions. This causes panic, because users can see their files but cannot access them. The developers of crypto ransomware typically threaten that if the ransom is not paid by the deadline, all the files will be deleted. Crypto ransomware can be catastrophic because many users are unaware of the need for backups in the cloud or on external physical storage devices; as a result, many victims pay the ransom merely to regain access to their files. As previously stated, ransomware threatens people from all walks of life. The ransom requested is usually between 100 and 200 dollars. On the other hand, some corporate attacks demand much more, particularly if the attacker knows that blocking the data would result in a substantial financial loss for the business attacked. As a result, cybercriminals can make a lot of money using these techniques. In the two examples below, the identity of the victim matters more than the type of ransomware used. WordPress ransomware. WordPress ransomware encrypts files on WordPress websites, as the name implies, and the attacker then extorts the victim for ransom money. The more popular a WordPress site is, the more likely it is to be targeted by cybercriminals using ransomware. The Wolverine case. In September 2018, the healthcare supplier Wolverine Solutions Group was the target of a ransomware attack. The malware encrypted many of the company's files, rendering them inaccessible to many employees. Fortunately, on October 3, forensics experts were able to decrypt and restore the files. However, the attack exposed a large amount of patient information.
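The recurring lesson above is that offline or cloud backups blunt crypto ransomware. Below is a minimal sketch of a timestamped snapshot copy; the paths are illustrative assumptions, and real setups should keep versioned copies off the machine being protected.

```python
import shutil
import time
from pathlib import Path

SOURCE = Path("important_documents")   # hypothetical data to protect
BACKUP_ROOT = Path("D:/backups")       # e.g. an external drive

def snapshot() -> Path:
    """Copy the source tree to a fresh, timestamped backup directory."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    destination = BACKUP_ROOT / f"documents-{stamp}"
    shutil.copytree(SOURCE, destination)
    return destination

if __name__ == "__main__":
    print("Backup written to", snapshot())
```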
Containers have become the new hit of the tech industry, as big a buzzword as cloud, IoT or big data, and increasing in adoption just as quickly. The technology isn't new, but questions remain about how it sits in the same ecosystem as virtualisation. Virtualisation, which to some extent can be considered last year's fashion, remains incredibly useful and still serves a purpose; it just doesn't have the wow factor it used to, unlike containers. As containerisation grew in popularity, so did the questions as to whether container tech and virtualisation could live in the same world together. They can. Containerisation and virtualisation typically work together through container-based virtualisation, or application containerisation, which is an operating-system-level virtualisation method for deploying and running distributed applications without launching an entire virtual machine for each application. Containers include the components necessary to run the desired software, such as files, environment variables and libraries, and they can be created much faster than hypervisor-based instances. Virtualisation, in comparison, is the creation of a virtual version of something such as an operating system, a server, a storage device or network resources. Operating system virtualisation is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The market is full of different flavours of containerisation and virtualisation, with each vendor pushing its wares as the best. CBR lists three of the top container vendors and three of the top virtualisation vendors to help tackle that confusion. Docker is an open source project that automates the deployment of applications inside software containers. By using Docker containers, users can deploy, replicate, move and back up a workload more quickly and easily than can be achieved with virtual machines, bringing cloud-like flexibility to any infrastructure capable of running containers. Although Docker provides containers, it can also be seen as another form of virtualisation, as Docker containers virtualise the OS into compartments in which container applications run. A Docker container has its own file system, storage, CPU, RAM and so on, but the key difference between a container and a VM is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel. One of the recent Docker updates is Docker Datacenter, which is designed to deliver different options for container deployment, such as on-premises or virtual private cloud. This enables Docker to be deployed to virtual private cloud environments while retaining portability, so that the user keeps control of where it can be used.
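To make that deploy-and-replicate workflow concrete, here is a hedged sketch using the Docker SDK for Python (the third-party `docker` package). It assumes a local Docker daemon is available; image names are illustrative.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Run a throwaway container; Docker pulls the image if it is not present,
# and remove=True cleans the container up once the command exits.
output = client.containers.run(
    "alpine:latest", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())

# List whatever is currently running on this host.
for container in client.containers.list():
    print(container.short_id, container.image.tags, container.status)
```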
Microsoft, meanwhile, has long provided an end-to-end suite of virtualisation products and technologies which together form a centralised management system. Its server hardware virtualisation uses software to create a virtual machine (VM) that behaves like a physical computer, producing a separate OS environment that is logically isolated from the host server. By providing several VMs at once, this approach enables multiple operating systems to run simultaneously on a single physical machine. Windows Server 2008 Hyper-V was Microsoft's first tool to provide everything needed to support server virtualisation as an integral feature of the operating system. The benefits of its hardware virtualisation include consolidating multiple under-utilised physical servers onto a single host, reducing workforce, space and power requirements through server consolidation and agility, and lowering costs. Microsoft also offers desktop, application and management virtualisation. Its Virtual Machine Manager enables centralised management of physical and virtual IT infrastructure, increased server utilisation and dynamic resource optimisation across multiple virtualisation platforms.
Watering hole attacks have increased significantly in recent years, following a worrying trend. The technique is based on infecting a website's visitors: attackers typically compromise legitimate websites with a "drive-by" exploit. The watering hole technique has been observed since 2009, when civil society organizations were attacked with this method and used as a channel to deliver zero-day exploits to specific targets. The technique is ideal for compromising selected targets, individuals or small communities who seek out the specific content offered by the website used to deliver the malicious code. The efficiency of watering hole attacks increases when attackers use zero-day exploits affecting the victims' software; in that case victims have no way to protect their systems from the malware. Once a victim visits the page on the compromised website, a backdoor trojan is installed on his computer. The watering hole method is very common in cyber espionage operations and state-sponsored attacks. Governments are the primary buyers of zero-day exploits, which are used to exploit victims' machines while remaining undetected for long periods; the ability to remain silent over time is decisive for the success of the attack. A recent post published by Dancho Danchev revealed that a compromised Indian government Web site leads to the Black Hole Exploit Kit. Researchers at Webroot detected the infection on the web site of the Ministry of Micro and Medium Enterprises (MSME DI Jaipur). The researchers profiled the campaign and discovered that the Black Hole Exploit Kit serving URL had been used in previous client-side exploit serving campaigns; in 2012 the same IP was also seen in a malvertising campaign. The researchers provided in their post the list of malicious domain names used for the attack and sample compromised URLs; the details of the investigation follow. Sample compromised URLs: hxxp://sisijaipur.gov.in/cluster_developement.html hxxp://msmedijaipur.gov.in/cluster_developement.html Malicious domain names/redirectors reconnaissance: 888-move-stuff.com – 18.104.22.168 – Email: email@example.com 888movestuff.com – 22.214.171.124 – Email: firstname.lastname@example.org jobbelts.com (redirector/C&C) – 126.96.36.199 – Email: email@example.com More malicious domains are known to have been responding to the same IP in the past (188.8.131.52): adventure-holiday-specials.com appraisingla.com arc-res.com a-to-z-of-barbados.com bookmarkingdemonx.com ceointerns.com charityairsupport.org csepros.com dominateseowithwordpress.com enum365.com jobbelts.com karenbrowntx.com rankbuilder2.net seopressors.org stopchasingmoney.com thefamily4life.org ventergy.com To gauge the efficiency of the malware used by the attackers, known as Trojan:JS/BlacoleRef.W / Trojan-Downloader.JS.Iframe.czf (MD5: 44a8c0b8d281f17b7218a0fe09840ce9), consider its detection rate: 24 out of 27 antivirus engines. Although the Black Hole Exploit Kit redirecting URL that compromised the Indian government Web site is currently not accepting any connections, the security experts at Webroot noted that it was active on 2012-07-03 08:04:36, delivering malicious content.
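Given a blocklist of known-bad domains like the one above, a site owner can do a first-pass scan of their own pages for injected references. The sketch below is a toy under stated assumptions (a simple regex over raw HTML, a hand-picked blocklist); real monitoring would parse the DOM properly and follow redirect chains.

```python
import re
import urllib.request

# A few entries from the blocklist published above, for illustration.
BAD_DOMAINS = {"jobbelts.com", "rankbuilder2.net", "seopressors.org"}

def scan_page(url: str) -> None:
    """Flag href/src references on a page that point at known-bad hosts."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    referenced = set(re.findall(r'(?:href|src)=["\']https?://([^/"\']+)', html))
    for host in sorted(referenced):
        if host.lower().removeprefix("www.") in BAD_DOMAINS:
            print(f"{url} references known-bad host: {host}")

if __name__ == "__main__":
    scan_page("http://example.com/")  # hypothetical page to audit
```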
The sample redirection chain discovered by the researchers shows that once the client application on the victim's machine is exploited, Trojan-Ransom.Win32.Birele.vjr, aka PWS:Win32/Fareit.gen!C, is dropped, and additional malware is then downloaded from further malicious domains. Attacks like this one are becoming very popular. In early 2013, Solutionary's Security Engineering Research Team published an interesting study revealing the rise of exploit kits, mainly originating in Russia. BlackHole 2.0 is considered the most popular and pervasive exploit kit, despite exploiting fewer vulnerabilities than other kits do. Over 18% of the malware instances detected were directly attributed to the BlackHole exploit kit, a web application that exploits known vulnerabilities in popular applications, frameworks and browsers such as Adobe Reader, Adobe Flash and Java. Watering hole is much more efficient than a spear phishing attack, in which the success of the operation depends on the recipient clicking a link or opening an attachment. There is a high probability that the victim discards the malicious email, even if the malware is able to elude antivirus detection thanks to a zero-day exploit. Watering hole overcomes this difficulty by compromising and infecting a website the victim is likely to visit. What should we expect from the future? Security experts have no doubts: the number of watering hole attacks is destined to grow in the coming months due to the wide availability of exploit kits on the black market, even though compromising a target website is more difficult than other methods of attack. (Security Affairs – Watering Hole attack)
The US National Vulnerability Database was hacked and infected with malware on 8 March 2013. As of today (15 May 2013), the database, the same place where both black hats and white hats get information about existing software vulnerabilities, is still offline. No official report has yet been released explaining how the hackers managed to break into NIST's catalogue of software vulnerabilities and infect it with malware, though from an email sent by Gail Porter (excerpt below) it seems the malicious users exploited a known software vulnerability. It all started when Kim Halavakoski noticed that the NIST vulnerability database was offline. He got in touch with NIST to find out what happened, and Gail Porter from the NIST inquiries office replied, stating in an email that: "On Friday March 8, a NIST firewall detected suspicious activity and took steps to block unusual traffic from reaching the Internet. NIST began investigating the cause of the unusual activity and the servers were taken offline. Malware was discovered on two NIST Web servers and was then traced to a software vulnerability. Currently there is no evidence that NVD or any other NIST public pages contained or were used to deliver malware to users of these NIST Web sites." A complete transcript of the email sent from Gail Porter (NIST) to Kim Halavakoski is available. I wonder what motivated the hackers to break into such a website and infect it with malware. This is not a normal commercial or government website: it is extremely popular with web application security experts, and both the black hat and white hat communities benefit from the information it contains. Could it be the start of a new wave of survival of the fittest in the underground world? The web security industry has a lot to learn from this hacking incident. To start with, as NIST is currently doing, it is of utmost importance to contain the incident by temporarily restricting network connectivity to the infected web application. This reduces the chances of the malware infection spreading to other web servers on the network or infecting visitors' computers. Plan of action to keep hackers at bay. Below are four web application security guidelines which, if followed, should help you avoid your business ending up hacked and infected with malware. - Frequently scan your website for web vulnerabilities; today's web applications are dynamic, and every day they become more sophisticated as they provide more functionality. The more functionality and features are added to a web application, the bigger the attack surface. It is therefore imperative to frequently audit your web applications and scan them for web vulnerabilities with a reliable web vulnerability scanner such as Acunetix WVS. - Backup your web applications; if you identify the security hole of a hacked website, it is easier and more efficient to restore a clean backup and close the security hole than to try to remove the malware infection. By restoring a website backup you ensure that your website is not tampered with. If instead you fix a tampered website, there is no guarantee that you have removed all the applications and backdoors the hacker managed to install, or that you will be able to restore all the data to its original state.
- Monitor your website files and scan for malware; even if your web application does not have any vulnerabilities, it is still good practice to implement a website watchdog and scan your website for malware and file changes (file integrity checks; see the sketch after this list). If a hacker manages to break into your website through another route, such as the hosting provider's network, you are still alerted about the intrusion and can act as early as possible to remediate the hacker's wrongdoing. - WAF integration; as seen from this incident, it was the firewall that triggered the alarm. As web security experts and PCI DSS recommend, if the budget permits you should both perform web application vulnerability scans and implement a web application firewall. If you have a web application firewall, you should ensure that the findings of your web vulnerability scanner of choice can be imported into your web application firewall configuration to mitigate such attacks.
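As a concrete illustration of the file integrity checks recommended above, the following sketch records a hash baseline of a web root and reports differences on later runs. The web root path is an illustrative assumption, and a production watchdog would also guard the baseline itself against tampering.

```python
import hashlib
import json
from pathlib import Path

BASELINE = Path("integrity-baseline.json")
WEBROOT = Path("/var/www/html")  # hypothetical web root

def hash_tree(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def check() -> None:
    current = hash_tree(WEBROOT)
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
        return
    baseline = json.loads(BASELINE.read_text())
    for name in sorted(set(baseline) | set(current)):
        if baseline.get(name) != current.get(name):
            print(f"Changed, new or removed: {name}")

if __name__ == "__main__":
    check()
```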
One of the great challenges presented by the Internet of Things (IoT) is linking the billions, and potentially trillions, of sensors and endpoints together. (If trillions of devices seems like a stretch, consider that implementations include such things as tracking individual cows as their herds migrate.) In addition to sheer numbers, the IoT must go everywhere. This includes the gaps between coverage in rural areas, unsettled areas, and over the seven oceans (and assorted seas, rivers and lakes) of the world. A key platform for filling this tall order is the development of low-power, long-range wide-area networks (LoRaWANs). We saw two pieces of news this week on this front. Today, Multi-Tech Systems said that its MultiConnect mDot LoRaWAN device, which it says is "915 MHz-ready," was certified by the LoRa Alliance. The certification was through the LoRaWAN North American 915 MHz process conducted at the AT4 Wireless laboratory in Malaga, Spain. The bottom line is that this element is now available to vendors for inclusion in LoRaWAN platforms in Europe and North America. A less technical announcement was made yesterday by Inmarsat, which said that its LoRaWAN, developed in partnership with Actility, is serving customers. The network, the press release says, is already in use in asset tracking, agribusiness and oil and gas applications. Inmarsat says that the network, which uses Actility's ThingPark low-power wide area (LPWA) platform, is the first that can be used anywhere in the world. Actility is now a member of Inmarsat's Certified Applications Provider Programme, the press release says. Inmarsat is not the only company developing LoRaWAN networks. The LoRa Alliance defines LoRaWAN as a low-power WAN that supports inexpensive, mobile, secure and bidirectional communications for the IoT, machine-to-machine (M2M), smart city and industrial applications. It is, the alliance says, optimized to support many millions of devices, and offers "redundant operation, geolocation, low-cost, and low-power." In some cases, devices can run by harvesting energy from their environment, making batteries unnecessary. The value of the IoT will rise in proportion to the size of its footprint. If it can truly circle the globe and leave no dead spots, its chances increase that it becomes the revolutionary technology its proponents predict. LoRaWANs will play a key role in making that possible. Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at [email protected] and via Twitter at @DailyMusicBrk.
We have all heard the term "network". But what does it actually mean? A computer network is defined as a set of computers connected together for the purpose of sharing resources. The most common resource shared today is a connection to the Internet; other shared resources can include a printer or a file server. Here is a list of the different parts and considerations of a business IT network. Servers. Servers are more than expensive boxes you put in your server closet and replace every 4-5 years. The server is where you store your company files, and it is where you establish rules and manage your end user devices (aka workstations/computers). How old is your server? Do you know what operating system it is running? Do you have a plan in place for when you are going to replace the server, or are you going to deal with it whenever it starts to fail? (Waiting is a business risk. Don't be reactive; be proactive and have a plan.) Applications. (Application = software.) How are your business applications stored? Are they in the cloud? Are they on the server? If they are in the cloud, how fast is your internet connection to reach the application? If your line-of-business software is hosted on your server, how many software licenses do you need so that your staff can work? Do you need additional licenses for tablets? Printers. Your printers can be shared throughout the network. How many printers do you have in the office? Is there one that is shared by everyone, or do you have multiple types sprinkled throughout the office? End User Devices. First and foremost: a "device" is the technical word for a computer, laptop, or tablet. A network server allows you to monitor and manage these devices. How many devices do you have in the office? Do you know when the warranty is up on each device? What operating system is on each device? Are they being updated regularly? How are your computers connected? Do they go to the server, or are they free agents? (Free agents = a business risk, by the way.) Security. The basics of a secure business network include the following: a firewall for your network; antivirus on each device (including your server); a spam blocker for your email; group policies on every device; and scheduled updates for your operating systems and applications. These are what make a network secure. Backups. A backup is a digital copy and archive of a computer so that it may be used to restore the original in the event of data loss. Do you have a backup? Do you know what it covers? Does it back up your files on the server? What about your email? How does the backup occur? Is it redundant? Is it in the cloud? Is it onsite? Do you have a copy saved at home? How current is your most recent backup? How long would it take to restore your network if the server crashed? How would that affect your bottom line, your team's morale, and productivity? Policies and Procedures. These are standards which apply to end users' behavior and habits while using the devices. Do you have an internet protocol agreement for each new employee? What are your policies regarding how and where documents are stored? Have you warned your team about suspicious emails with suspicious zip files or links? All in all, your Managed Service Provider (MSP) should be able to provide you with this information via a network assessment. Evaluate your MSP's network assessment. Here are some questions to help you. - How clearly is the network assessment presented?
Do they have a dashboard or a point-form summary of critical issues? - Is the content crafted specifically for you and your office? - Does the MSP share the details of the assessment for each device? Understand that a network assessment is a way for you and your MSP to get on the same page regarding the health of your network. Assessments help you, as the business owner, manage your capital expenditure budget. Hardware is an investment in the business. If your network is fresh, you will reduce the number of IT headaches and hits to office morale. Little things matter. We take technology for granted, and downtime is expensive. By working with your MSP, however, you can reduce the risk of devices crashing and make educated decisions on how to invest in your business' productivity. Reach out to us to check the health of your network.
The General Data Protection Regulation (GDPR) came into effect in the European Union in May 2018. It replaced the existing data protection rules, which were woefully out of date with modern technology and inadequate to deal with major cybersecurity risks. The creators of GDPR hoped that the regulation would reduce the risk of data theft to a minimum by requiring that a number of safeguards be in place to protect data at all times. By overhauling and reforming existing practices, it is hoped that GDPR will ensure the protection and integrity of confidential information. GDPR covers all types of organisations, including public agencies, governments, and companies of various sizes. In addition to introducing new standards in data protection, GDPR has given EU residents new rights and freedoms over their own data. Before GDPR, ordinary citizens had very little say in how their data could be used or collected. Now, any company which collects data from people within the EU must be GDPR compliant, regardless of the physical location of its headquarters. It is important to note that EU citizens who have their data collected outside of the EU (for example, while they are holidaying in the US) are not protected by GDPR. GDPR has introduced new laws about how organisations must respond to data breaches. In order to remain GDPR compliant, organisations are required to disclose certain data breaches within 72 hours of their discovery. Before GDPR, reporting data breaches was not compulsory for many organisations. Kroll, a corporate investigations and risk consulting organisation, launched a study to investigate the effect of GDPR's introduction on the number of reported data breaches in the EU. According to the results, there has been a spike in the number of data breaches reported by companies in Europe. The data was obtained through the Freedom of Information Act or was publicly available on the Information Commissioner's website. The numbers varied from country to country, but in general there was a huge increase in the number of data breaches reported. For example, the number of breaches reported to the UK supervisory authority, the Information Commissioner's Office (ICO), increased by 75% in the past year. The Kroll study showed the ICO had received more than 2,000 data breach reports in the past 12 months that could be attributed to human error, compared to just 292 attributed to cyberattacks. The most commonly reported breaches were emails sent to incorrect recipients (447 incidents), misdirected letters and faxes containing personal information (441 incidents) and loss or theft of physical records (438 incidents). Of the deliberate cyber incidents reported, the specific circumstances logged include unauthorised access (102), malware (53), phishing attacks (51) and ransomware (33). The healthcare industry reported the majority of the breaches, accounting for 1,214 of the 2,000 reported incidents. The general business sector filed 362 reports, followed by education and childcare (354) and local government (328). According to its website, Kroll suggests that the increased number of breach reports may be due to organisations "gearing up for a new era of transparency around data breaches under GDPR", and it expects the number of reports to increase further during the first full year under GDPR.
Kroll also suggests that there is likely to be a substantial increase in the penalties issued for preventable data breaches. Before GDPR, the maximum possible fine in the UK was £500,000. GDPR allows for much greater fines to be levied against organisations, with the maximum penalty being €20 million (£17,845,000) or 4% of global annual turnover, whichever is the greater. It is hoped that such a hefty fine will act as a deterrent to organisations that may be a little slack about reforming their business practices. The risk of a substantial fine, on top of the cost of dealing with a breach and repairing reputational damage, is likely to see companies pay much more attention to data security and invest more heavily in data protection solutions. One of the new rights that GDPR has granted to EU citizens is the ability to submit complaints to a data protection authority if they suspect that their personal data is being misused by an organisation or has not been secured with adequate protection. The Kroll report also investigated the effect of GDPR on the number of privacy and data security complaints made by consumers. The report shows that these numbers have also increased, and ICO figures show that GDPR is likely to be a major cause of this increase. In the first three months since GDPR came into force, the number of data protection complaints has doubled. Prior to the introduction of GDPR in May, the ICO had received 2,310 complaints, but that figure jumped to 3,098 complaints in June and 4,214 complaints in July. There have also been significant increases in complaints in other countries in Europe. The supervisory authority in France received 37% more complaints between May 25 and July 31, 2018 compared to the same period the previous year, and in Ireland there has been a 65% increase in data protection complaints since GDPR came into effect.
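The penalty rule above ("whichever is the greater") is easy to misread, so a tiny worked example helps. The turnover figure is hypothetical.

```python
# GDPR maximum penalty: the greater of EUR 20 million
# or 4% of global annual turnover.
GDPR_FIXED_CAP_EUR = 20_000_000

def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    return max(GDPR_FIXED_CAP_EUR, 0.04 * global_annual_turnover_eur)

# A company turning over EUR 1bn faces up to EUR 40m, not EUR 20m.
print(f"EUR {max_gdpr_fine(1_000_000_000):,.0f}")
```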
Passwordless authentication is a method for authenticating users in a service without requiring a password. Instead, the user may be authorised by an alternative factor such as a registered device or biometrics; in Single Sign-On systems, the user is already authenticated by a third party, and a token is passed to the service to allow the user access without being required to enter a password. We'll take a look at how passwordless authentication works, as well as the technology and protocols used to implement it. We'll also look at why many enterprises are embracing passwordless solutions, the security challenges posed by the proliferation of cloud apps, and how they can be overcome. How does passwordless authentication work? Passwordless authentication operates on a three-way relationship between three parties: - The user: an individual wishing to access a service such as a cloud app - A Service Provider (SP): the service the user is requesting to access - An Identity Provider (IdP): a third party which authenticates the user Instead of the normal procedure of the user entering their credentials directly into the SP's app, the user is verified by the IdP, which then confirms the user's identity to the SP via a protocol. The result is much more secure, as well as quicker and easier for the user. Identity Providers (IdPs). IdPs are third-party systems which authenticate users in order to allow them access to applications without the use of a password. The IdP can authenticate the user in a variety of ways; one such method is to integrate with the existing corporate directory (e.g. Active Directory), enabling the user to be automatically authenticated with the IdP once on the corporate directory. The IdP effectively has the process of authenticating users outsourced to it by the apps the user wishes to access. This one identity is used across multiple apps, forming a federated identity that permits Single Sign-On. The Passwordless Login Process. The end result of the three-way communication between user, IdP and SP is a simple process: - A user identifies themselves to the Identity Provider, which authenticates them by use of a password, physical device, biometrics or some other means. - The user then requests access to a service provider, such as by launching a cloud app. - The IdP provides a security token to the service provider via the web browser. - The service provider authorises the user, permitting them access appropriate to their identity. While this process can sound complex, the actual user experience is far simpler and smoother than a password-based system. From the perspective of an employee at an enterprise using a passwordless system, they are simply able to launch and close apps at will, without having to enter their credentials once they are on the Active Directory. SAML, OIDC and OAuth. In order to communicate securely with one another, the three parties use an identity protocol to exchange the data. By far the most common of these are SAML and OIDC, the latter of which is built on the OAuth protocol. While the process is the same for both protocols, there are some key differences between them. What is SAML? SAML is one of the most common protocols used in passwordless authentication, and allows the user, IdP and SP to communicate securely. A SAML assertion is transmitted in XML from the IdP to the SP to verify the user's identity and allow them access to the service.
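To make the "security token" step concrete, here is a deliberately simplified, toy sketch of what an SP-side check of a signed, JWT-like token might look like. It uses a shared HMAC secret and invented claim names purely for illustration; real SAML and OIDC deployments rely on vetted libraries and asymmetric signatures, never hand-rolled code like this.

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"demo-secret-shared-by-idp-and-sp"  # assumption for the demo

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def sign_token(claims: dict) -> str:
    """What the IdP side does: sign header.payload with the shared secret."""
    header, payload = b64url(json.dumps({"alg": "HS256"}).encode()), b64url(json.dumps(claims).encode())
    sig = hmac.new(SHARED_SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_token(token: str, expected_audience: str) -> dict:
    """What the SP side does: check the signature, audience and expiry."""
    header_b64, payload_b64, signature_b64 = token.split(".")
    expected = hmac.new(SHARED_SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(signature_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("aud") != expected_audience or claims.get("exp", 0) < time.time():
        raise ValueError("wrong audience or expired")
    return claims  # the SP can now authorise the user from these claims

token = sign_token({"sub": "alice", "aud": "cloud-app", "exp": time.time() + 300})
print(verify_token(token, "cloud-app"))
```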
SAML also supports IdP-initiated sign-in, where instead of initiating the process by launching the SP app, the user first launches the IdP portal and gains access to the app from there. What is OIDC? Similar to SAML, OIDC is a common protocol used for passwordless authentication. OIDC is an authentication layer built on top of the OAuth framework. The process is mostly the same as SAML, but there are some key differences: the user data, which is transmitted in JSON format rather than XML, is known as a claim, while the SP is instead referred to as the 'Relying Party' (RP). What is OAuth? OAuth is an open standard for authorisation, most commonly used to grant federated access between different websites and cloud apps. Access tokens are issued to third-party apps to allow a federated identity, where one user can access multiple services from the same identity. What makes passwordless authentication more secure? Why passwords cause businesses problems. The primary reason that enterprises move to passwordless solutions is that they are much more secure. Every password provides a potential point of attack for hacking and phishing, and simply eliminating them greatly reduces the possibility of a data breach. Data breaches cost businesses an average of £2.71m per breach in 2020, resulting from lost business, reputational damage, system downtime and a loss of new and existing customers. Worse, the trend has been steadily increasing year on year, both in average cost and in the number of breaches. In addition, the average time to detect and contain a data breach came in at a worrying 280 days. Hackers frequently focus on passwords as an attack vector: 80% of hacking-related data breaches are caused by weak, reused or otherwise compromised passwords. Phishing is a particular source of concern, with 90% of compromised passwords involving some kind of phishing or social engineering. Despite growing awareness of the threat, over 50% of employees still click on phishing links. Why passwordless authentication prevents malicious attacks. The secure-by-design nature of passwordless authentication means that many of the techniques used in malicious attacks are rendered ineffective. Brute force attacks. This involves simply forcing through as many random guesses as possible at a rapid rate. Without a password, this technique can't be used. Credential stuffing. Similarly to brute force, this method attempts many guesses, but uses a vast database of previously compromised combinations of usernames and passwords. Again, with no password, this is not an option for malicious attackers. Keylogging. Malware can record keystrokes to discover username and password combinations. With a SAML or OIDC token replacing usernames and passwords, the identity of the user cannot be compromised this way. Shoulder surfing. The simple act of physically watching a person enter a password can still pose a serious security risk. Passwordless authentication removes this threat, since signing into a cloud app involves simply clicking through to the app. Phishing. A user is contacted, usually by email, and directed to a fake website where they would normally input their credentials. Since in a passwordless Single Sign-On system the user does not enter a password, and does not know the password even if there is one, employees cannot compromise the system and allow a data breach. Social engineering. This involves a variety of techniques where a password is predicted from a user's public data, or the user is convinced to reveal the password themselves.
Even in cases where an SP is not compatible with security protocols such as SAML, some IdPs can still provide a passwordless experience of Single Sign-On where the password is randomly generated and unknown to the user, meaning this method is also ineffective. Finally, there are additional benefits besides security for enterprises. Filling out credentials takes time, and so does resetting them when things go wrong. According to Gartner, 20-50% of all helpdesk calls are for password resets, with each one costing an average of £50. The wasted hours and resources quickly add up, leading to thousands of lost hours of productivity each year for larger businesses. 5 Reasons businesses move to passwordless solutions. 1. Increasing security of systems and data. By far the biggest benefit of passwordless authentication for enterprises is mitigating data breaches. As we've seen, many techniques used in malicious attacks are ineffective against a system with a passwordless solution. By removing one of the biggest attack vectors, the chance of suffering a data breach is reduced enormously. 2. Keeping control over authorisation: shadow IT. Integrating a passwordless, Single Sign-On system has the added benefit of helping businesses keep track of which apps are being used. The number of apps actually in use has been shown to be up to 20 times higher than CIO estimates, leading to security risks which are not immediately apparent to IT departments and CIOs/CISOs. By being able to keep track of these apps and add or remove them from the passwordless system, enterprises gain much more control over their systems. 3. Easier to respond to breaches. A significant reason why data breaches can become so costly is the time taken to identify and respond to them: an average of 280 days, according to an IBM report. With a passwordless, Single Sign-On system, it becomes much easier to track unusual behaviour and to manage the access of employees centrally. 4. Time saved for employees. The time spent entering, changing and resetting passwords adds up for large enterprises, with 73% of companies saying their employees spend over one hour per day navigating between apps. As well as providing a much smoother user experience, a passwordless system also increases workforce efficiency. 5. Time and resources saved for IT. IT departments spend enormous amounts of time on password resets, with the majority of helpdesk calls relating solely to this issue. With a passwordless authentication solution in place that eliminates the problem of forgotten passwords, organisations save the time and money otherwise spent managing password resets. Implementing passwordless authentication. Use of IdPs. Businesses wishing to implement passwordless authentication for applications will first need an IdP to handle the management and authentication of users. Identity and Access Management (IAM) and Identity-as-a-Service (IDaaS) providers fulfil this IdP function. How IdPs create passwordless solutions. An IdP will authenticate all users in the directory to any third-party apps they wish to use, through a passwordless, Single Sign-On solution. To choose the best IdP, businesses will have to consider which of their most commonly used apps are compatible with Single Sign-On protocols such as SAML and OIDC. If apps are not compatible, some IdPs may offer workaround solutions which still provide a passwordless experience.
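One such workaround hinges on a credential no human ever sees. Below is a minimal sketch of generating such a credential with a cryptographically secure source; the length and character set are illustrative assumptions.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_unknowable_password(length: int = 32) -> str:
    """Build a strong random password that the IdP stores and the user never learns."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_unknowable_password())
```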
Moving to a passwordless future. When moving to a passwordless system, existing users can be migrated by uploading metadata to compatible SPs, while each app will have to be configured to accept SAML/OIDC requests. For new users, many IdPs provide 'just-in-time provisioning', where the user is registered as a new account automatically as soon as they click to access the service. How My1Login enables organisations to implement passwordless authentication. My1Login acts as the IdP to provide passwordless authentication for enterprises through its access management and Single Sign-On solution. As well as comprehensive integration with cloud apps, My1Login also provides Single Sign-On for legacy Windows desktop apps. My1Login's passwordless authentication solution does not require the organisation to change its existing directory (e.g. Active Directory). Instead, My1Login complements it, integrating with the existing corporate directory to provide a seamless experience for users. The user simply authenticates with their corporate directory, and My1Login delivers seamless authentication into all application types. My1Login utilises SAML and OIDC to replace passwords with token-based authentication, enabling organisations to move away from passwords and towards passwordless authentication. Where the SP does not yet support passwordless authentication protocols, My1Login's Secure Web Authentication can be used to provide a passwordless experience, even for applications which have non-standard login pages. My1Login leverages token-based authentication where available from the service provider and, where not yet available, performs Secure Web Authentication to remove the need for end users to know, manage or enter application passwords. On an organisational level, My1Login eliminates the most common sources of data breaches, while on the user level, a smoother working experience increases both efficiency and productivity.
When we talk about the Internet of Things (IoT), we are talking about connecting everyday objects to the internet via low-power signals. Kevin Ashton is credited with coining the phrase and being one of the first to write about the concept. He discusses how humans are great at doing many things, but capturing clean and accurate data about events happening in real time in the real world is not one of them. This ability to capture large amounts of data about the everyday usage of everyday things is what makes the IoT such a powerful innovation. It means that big data no longer applies only to presidential elections and the Centers for Disease Control: with the Internet of Things, big data applies to parking meters, microwaves, and taxi-cabs. Left to computers, big data has until now been all about Google searches, Amazon queues, and Netflix recommendations. Picture the Internet of Things as a car dashboard for every appliance you own. Do you want to see how much energy your washing machine is burning versus how much you actually need to wash those towels? Does a manufacturer benefit from getting real-time analytics on the usage of their products rather than depending on focus groups? The answer is "yes," and IoT makes this possible. One of the issues surrounding the Internet of Things is the positive impact it can have on the environment. Innovation opens doors previously hidden behind cement walls. For example, your refrigerator could run on less power depending on the number of times it's opened per day; however, there isn't currently a mechanism or control panel to make this happen. What if your shower head could control water output based on a price you set for a monthly water bill? It's easy to see ways in which this technology can have a positive impact on greenhouse gas emissions and the usage of natural resources. There was a time when the only people to own computers were the companies trading on the S&P 500. The barrier to entry (price) dropped significantly, and soon enough every home in the US had a personal computer. Think of cloud computing too: Dropbox and Google Drive make it possible for individuals and businesses big and small to store huge amounts of data in the cloud. The Internet of Things will become available to consumers and enter homes the same way. Previously, supply chain management for large companies used expensive RFID tagging systems and inventory control, but these systems were not available to smaller companies. With the growth and coming accessibility of the IoT, small businesses can have inventory management systems mimicking those of large corporations and manufacturers. With the ability to tag products without having to purchase software costing hundreds of thousands, if not millions, of dollars, businesses will use a simple dashboard, most likely connected to the cloud. It will improve efficiency across the board. The IoT is growing quickly, and the Android OS is evolving right alongside it. Read more about the relationship between Android and the Internet of Things on our main site.
Attackers are always evolving in order to evade traditional security controls, and in recent years, fileless threats have become one of the most popular attacker strategies. Fileless threats have been around for many years, but have recently made a resurgence in the wild. Unlike traditional malware, fileless threats don't exist as a file that resides on a system's disk. Instead, the malicious code may only exist in memory, be run as remote scripts, or run in areas outside of the disk and beyond the view of traditional security tools. Why Go Fileless? This strategy can make the threat much harder for security and forensics tools to detect and analyze. For example, if the threat doesn't exist on a device's disk, the threat can avoid some of the traditional file-based analysis of an antivirus tool. Likewise, if the threat only manifests itself at runtime, security teams have a very limited window in which to analyze the threat. Attackers can also avoid using traditional files by leveraging valid administration tools such as PowerShell and WMI, which can be used to remotely run scripts or other malicious code on a victim device. As it turns out, fileless threats also apply to the often overlooked firmware layer of a device. And since the whole purpose of going fileless in the first place is to avoid traditional security controls, the firmware layer is a natural location for attackers to hide their code. This underscores an important and fundamental aspect of security: organizations need to develop the same level of visibility and security at the firmware level that they have traditionally had at the operating system layer and for files that reside on hard drives. With this in mind, let's take a look at the intersection between fileless threats and firmware. The Fileless Attack Surface While the definition of a fileless threat is open to some debate, Microsoft provides a very helpful framework for understanding and classifying these threats. A diagram from Microsoft's site breaks down fileless threats in two important ways:
- How Malicious Code Is Run: This includes code run from hardware, via exploits, and via code injection. All of these options allow attackers to execute code without necessarily relying on traditional files.
- How the Threat Relies on Files: While fileless threats only run in memory, they may still depend on files that reside on disk. For example, some threats can be completely fileless, while others may run in memory but only with help from more traditional files that reside on the disk.
This framework provides two very important takeaways. First, hardware components represent a significant part of the logical attack surface of fileless threats. This includes firmware sources such as system BIOS/UEFI, CPUs, PCI devices, and USB. These are all examples where malicious code can reside outside of the system disk and typically beyond the view of the operating system. In reality, this is an abbreviated list of the hardware attack surface, and we encourage you to refer to our Know Your Own Device resource to learn more about the many threats and vulnerabilities affecting other components in typical devices. Secondly, the truly fileless category of attacks (Type I in the Microsoft model) is heavily tied to the hardware attack surface. In other words, fileless attacks from firmware sources typically don't require support from other files. This means they leave the fewest traces and can be the hardest to detect by traditional means.
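Neither Microsoft's taxonomy nor this article prescribes a specific detection method, but one coarse, OS-level heuristic defenders sometimes use illustrates the problem: a running process whose executable no longer exists on disk. A minimal sketch (Linux-only, using the /proc filesystem; the reporting format is an illustrative assumption, not part of any product):

```python
import os

def find_processes_without_backing_file():
    """Flag running processes whose executable has been deleted from disk.

    On Linux, /proc/<pid>/exe is a symlink to the binary that was executed.
    If that binary has since been unlinked, the kernel appends ' (deleted)'
    to the link target -- a common (though not conclusive) sign that code
    is running purely from memory.
    """
    suspects = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            target = os.readlink(f"/proc/{pid}/exe")
        except (FileNotFoundError, PermissionError):
            continue  # kernel threads, exited processes, or insufficient rights
        if target.endswith(" (deleted)"):
            suspects.append((int(pid), target))
    return suspects

if __name__ == "__main__":
    for pid, path in find_processes_without_backing_file():
        print(f"PID {pid}: backing file gone -> {path}")
```

Note that this only catches one narrow pattern; truly fileless (Type I) threats in firmware never touch the filesystem at all and would be invisible to a check like this, which is exactly the article's point.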
Unfortunately, this style of attack against firmware is becoming more common. Broad-based malware campaigns targeting the firmware layer of devices have been observed in the wild, and earlier this month outdated firmware was used in a denial of service attack against the US power grid. Likewise, our own recent research demonstrated vulnerabilities that could allow unsophisticated attackers to gain control over the BMC firmware of enterprise servers. This means that the bar for attackers is lowering precisely in the area that is the most ripe for truly fileless threats. Firmware and Living Off the Land Fileless threats are also often defined as threats that take advantage of valid tools on the system to perform malicious actions. This is often referred to as "living off the land," since the threat uses native tools on the system instead of bringing its own malware. This includes tools such as PowerShell, WMI, PsExec and other SysInternals tools. The LOLBAS (living off the land binaries and scripts) project has compiled a list of the many tools used by attackers to live off the land. Most organizations rely on these tools in order to efficiently manage their systems. Unfortunately, the power of these tools is equally valuable to attackers, who can abuse the functionality to run malicious scripts or install malicious code. And while WMI can be used to install malicious files that reside on the disk, those files are stored in a shared repository, making it almost impossible to delete them without damaging valid data. BMCs and IPMI The firmware layer contains tools that can play a somewhat similar role for attackers. Within modern servers, the combination of BMCs and IPMI provides administrators with complete remote control over a server. The BMC essentially acts as a parallel and independent computer within a server solely for the purpose of remote management. It contains its own firmware, networking capabilities, and even its own power in order to provide management even if the server itself is powered off. In recent years, BMCs have also been one of the most common sources of vulnerabilities. Vulnerabilities within the BMC can allow attackers to install their own BMC firmware containing malicious code and gain virtually unlimited power to do damage. As with all firmware threats, such code could be used to achieve attacker persistence, to steal data, or to disable devices or components completely. Intel AMT and ME These hardware-based management channels are not limited to servers. Intel Active Management Technology (AMT) and the Management Engine (ME) provide similar out-of-band management capabilities for traditional laptops. These components likewise have their own communication channels and have been used by attackers to communicate without the operating system's knowledge. These tools can be used to deliver code to low-level components and control the behavior of the operating system itself. Much like the BMC of a server, Intel AMT can provide the plumbing for an attacker to deliver malicious code that hides beneath the operating system without touching the filesystem. And these are not the only examples. LoJack functionality, which resides in a device's firmware, is designed to help track and remotely wipe a device in case of theft. This functionality has been compromised by attackers and used as a backdoor and command-and-control channel. Similarly, our research has shown that the very kernel drivers used to manage firmware can be used by attackers as a vector to infect the firmware.
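The article doesn't tie defenders to a specific tool, but a first practical step toward BMC visibility is simply inventorying firmware versions. A hedged sketch that shells out to the open-source ipmitool utility (assumed to be installed, with IPMI drivers loaded and root privileges; the exact output fields vary by vendor):

```python
import re
import subprocess

def bmc_firmware_revision():
    """Query the local BMC via ipmitool and extract its firmware revision.

    'ipmitool mc info' prints management-controller details, including a
    'Firmware Revision' line on most platforms. Output formatting differs
    slightly between vendors, so the regex is deliberately loose.
    """
    out = subprocess.run(
        ["ipmitool", "mc", "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Firmware Revision\s*:\s*(\S+)", out)
    return match.group(1) if match else None

if __name__ == "__main__":
    rev = bmc_firmware_revision()
    print(f"BMC firmware revision: {rev or 'not reported'}")
```

Comparing the reported revision against the vendor's latest release is a crude but useful check, given how often BMC vulnerabilities are patched in firmware updates.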
Such drivers are often used to update the firmware, set firmware-specific options, or diagnose problems. But in the wrong hands they can provide a natural vector to deliver malicious code that will never touch the disk of an affected system. This once again shows how low-level tools built into a device's firmware can wreak havoc if not properly secured. Address Your Blindspots These types of threats highlight the need to extend security to the many layers where traditional security can't see. It is important to remember that the whole reason attackers go fileless is to avoid the prying eyes of security solutions. As such, it should be no surprise that firmware and fileless threats overlap a great deal. Going forward, organizations need to establish visibility into the hardware and firmware layer to detect vulnerabilities, weaknesses, and threats therein. And since fileless attacks are rapidly evolving, it is important to recognize threats that are new and may be unknown to the industry. This means that not only do we need to be monitoring the hardware and firmware of our devices, we also need to be monitoring the behavior of these components to recognize unknown or zero-day threats. At Eclypsium we specialize in the unique vulnerabilities and threats affecting this layer, and provide an approach to defending against fileless threats that security tools at the operating system level simply can't reach. To learn more about Eclypsium and how we can extend your security strategy to the firmware layer, please contact us at firstname.lastname@example.org.
Technology is omnipresent in our daily lives and has a great impact on them. Unfortunately, according to tech addiction statistics, more than 210 million people demonstrate some kind of addictive behavior when it comes to internet use. Social media and various apps help us organize our schedules, remember important events, stay informed on topics that interest us, and connect with our friends and family. But where is the fine line between the normal use of technology and the point when it becomes a problem or even an addiction? Most often, that line is not visible to the addicted person, at least not at the beginning. The definition of technology addiction describes this phenomenon as technology-related behavior that is frequent, obsessive, and escalating despite its negative consequences for the user. Like other forms of addiction, this one has a powerful impact on the lifestyle of a person, making them less functional and present in their everyday life. Statistics also point to many psychological and social changes in the addicted person's behavior. If you're on the lookout for interesting facts and statistics regarding this particular type of addiction, keep reading and find out some compelling data. Tech Addiction Facts (Editor's Choice)
- Among the adult population in the United States, 81% claim to go online daily
- Almost 50% of people aged 18–19 are online almost constantly
- In 2020, 48% of children and teens spent more than six hours online a day
- Over 40% of teenagers admit to spending too much time on social media
- Over 50% of the US population aged between 18 and 38 claims that they are addicted to social media
- Up to 30% of children who use smartphones feel great discomfort without their phones
- 85% of parents are worried about the amount of time their kids are online
- By 2023, there will be 7.33 billion people with mobile phones
Technology Addiction Statistics 2019 Technology has been around for quite a while now, but its usage has increased rapidly over the last few years. And the trend seems to carry on. 1. The total number of internet users worldwide in 2019 was 4.39 billion. (Digital in the Round, DataReportal) Two years later, in 2021, that number grew to 4.88 billion users. All in all, the number of internet users increases by over 600,000 every day, at an annual rate of nearly 5%. Moreover, statistics on technology use point out that the average internet user globally now spends nearly seven hours online per day, but were things any different in 2019? Read the following statistic and find out. 2. The average user spent approximately 6 hours and 42 minutes online every day in 2019. Social media are the platforms users spend the most time on, with an average of approximately 3 hours. In addition, 12.4% of internet users stated that they often spend much more time online than they initially planned, according to dependence-on-technology statistics. And this is the average time spent online, meaning that many users exceed it and spend the majority of their day interacting via apps and platforms. 3. In the US, 81% of people claim to go online every day. According to a Pew Research Center survey conducted in early 2019, 28% go online almost constantly, and 45% are online several times a day. On the other hand, 9% of surveyed people go online just once a day, and 8% are online several times a week or less; also, 9% of people claimed they don't use the internet.
4. Tech addiction statistics show that 48% of younger adults aged 18–19 go online almost constantly. 46% in this age group claim to go online several times a day. The lowest percentage (only 7%) of users who claim to go online almost constantly is among the population over 65, of which 35% go online several times a day. The highest percentage (34%) of constantly online users have a household income of $75K or more a year. As per statistics on technology use, people who live in urban and suburban areas stated they are online almost constantly more often than those living in rural areas. 5. According to 2019 data, 3.4% of high school students had a severe form of internet addiction. A lower level of internet addiction was present in 39% of high school students, while 32% had moderate-level addiction. This suggests that performance and focus on education decrease due to tech usage in more than 70% of people at a very young age. A different study found that 65.5% of the slightly younger junior high school students are in the internet addiction-risk group, while 6.5% are in the severe internet addiction group. Technology Addiction Statistics 2020 We could probably all agree that 2020 was an unusual year, to say the least. The global pandemic and social distancing measures drove an increase in the use of technology among all generations. Partially for fun and partially out of necessity, online options were sometimes the only way to get things done, to work, or to continue education when schools were closed. There is no doubt that the pandemic changed our habits and the way we perceive things. Nevertheless, the following statistics on technology use are worrying, especially concerning children. 6. The number of internet users worldwide in 2020 was 4.66 billion. This makes approximately 59% of the world's population, and an increase of around 270 million people compared to 2019. In addition, 92.6% of all internet users use smartphones to access the global network. The countries with the biggest numbers of internet users are China, India, and the US, although India, with approximately 560 million internet users, and China, with over 854 million, still have large parts of their populations offline. 7. As can be seen from statistics about technology use, 92.6% of people using the global network are mobile internet users. More than 4.32 billion people use their smartphones to access the global network. Out of 4.14 billion active social media users, 4.09 billion use mobile social media. Considering the availability of different content and the convenience of smartphone use, this comes as no surprise. In 2020, the Google Play store had an astonishing 3.14 million available apps for Android users, followed by the Apple App Store with 2.09 million apps for iOS users. 8. In 2020, 48% of children and teens spent over six hours online every day. Compared to 2019, statistics on technology usage show that this is an astonishing increase of 500%. On top of that, 26% of children were spending more than eight hours online daily. 78.21% of children mostly used YouTube. Besides YouTube, the platforms that kids used the most in 2020 were Netflix (49.64%) and TikTok (33.41%). 9. 85% of parents expressed their concerns about the excessive amount of time their children were spending online. This data shows that most parents are aware of the negative consequences too much screen time can cause, especially at a young age. And screen time is not the only concern here.
Children's online safety is also a significant issue that we must consider and address appropriately. Keep in mind that, according to statistics on technology use, 30% of children spend over four hours online unsupervised, which increases the risk of falling victim to online predators or being exposed to cyberbullying. 10. 93% of parents agreed on the need to expand the protection of the Children's Online Privacy Protection Act and include children aged 13–17. With children spending excessive time on screens, statistics revealed that parents were more aware of the negative sides of the internet. Most concerns were about cyberbullying, sexual predators, etc. Only 14% of surveyed parents consider that Big Tech companies are doing enough to protect children online. Yet 85% think that the US Congress should pass legislation that would protect kids from sexual predators, deceptive advertising, and other forms of danger in the virtual world. Social Media Addiction Statistics Social media platforms such as Facebook, Instagram, TikTok, YouTube, and others are highly addictive and often create a virtual space that users choose over the real world. And some users find themselves unable to stop scrolling. 11. 37% of 18–24-year-old users find being unable to check social media unpleasant. In the same age group, 21% feel restless when they cannot check social media for messages. According to overuse-of-technology statistics, being consistently present and up to date on social media can potentially lead to narcissism or lower self-esteem, especially in the younger population. Besides, exposure to content that others can easily manipulate, or to people hiding behind fake identities, increases users' vulnerability. 12. 41% of teenagers are aware of and admit to spending too much time on social media. Tech addiction facts show that time spent on social media and other online activities dominates young people's days; they spend less and less time socializing in person with their friends and family. And not just the young: this applies to internet users of all ages. Time spent on social media surpasses the time spent on sleeping, shopping, house chores, eating, and drinking. 13. A whopping 52% of US residents between 23 and 38 years old admit to being addicted to social media. 45% of users aged 18–22 and 35% in the age group 39–54 also claim that they are addicted. According to overuse-of-technology statistics, the lowest percentage (22%) admitting their addiction is among the population aged between 55 and 64. Very often, people are not aware of their addictive behavior, so we may conclude that many are facing this problem without realizing it. 14. With 2.74 billion users, Facebook is one of the most popular social media platforms. Facebook is the first social network with more than one billion registered accounts. Currently, according to social media addiction statistics, it has over 2.74 billion monthly active users. Also, Facebook owns four of the biggest social media platforms, each of them with over a billion monthly active users. Besides Facebook as a core platform, the company also owns WhatsApp, Facebook Messenger, and Instagram. 15. The region with the biggest number of Facebook users in the world is Asia. The number of users across this region in 2020 was 1.03 billion, and it's estimated that by 2025 it will reach 1.35 billion. On the other hand, Instagram has the biggest numbers of followers in the US, India, and Brazil.
The US and India both have around 140 million people using Instagram, while Brazil accounts for 99 million. Smartphone Addiction Statistics 2020 The convenience of always having access to your favorite app, game, or social network makes smartphones one of the most practical devices we can use to go online. However, convenience comes with a price. Statistics on cybercrime show that most attacks go through smartphones, so it's crucial to be well informed and cautious while using our phones, not only because of the risk of addiction but also regarding privacy protection. 16. Predictions say that by 2023, there will be 7.33 billion people with mobile phones. (Digital in the Round) The current number of smartphone users surpasses three billion. The US, China, and India are the leading countries in the number of people using smartphones, with over 100 million users each, according to statistics on technology use. Some earlier statistics show that more people in the world own smartphones than have access to basic sanitary necessities, such as toilets. 17. Between 10% and 30% of children use a smartphone or other device in such a manner that not having the device around causes them significant discomfort. Children get easily stimulated by the overwhelming and colorful content displayed on the screen. Notably, some top tech leaders, like Bill Gates and Steve Jobs, admitted to limiting their children's screen time. The age at which children are introduced to technology gets lower by the year. Statistics about technology use show that 22% of children get a service plan at the age of ten. 18. In 2020, the number of hours spent on smartphones daily increased by 39%. Due to social distancing measures in 2020, habits changed when it came to smartphone usage. People were relying more on online resources for shopping and staying in touch with friends and family. Most of the surveyed people (37%) were texting more. Stats on technology addiction show a 36% increase in social media usage and a 32% increase in video calls, while 23% of people used shopping apps more. 19. In the US, the average time spent using smartphones in 2020 was 3.82 hours per day. Time spent on smartphones was over three hours in previous years, and it is slightly increasing every year. At the current rate, it is just a little over the time that the average person in the US spends in front of the TV. Most surveyed people used their smartphones mostly for communication, reading the news, and shopping. Teenage Cell Phone Addiction Statistics 20. 95% of teenagers in the US have access to smartphones, and 50% are online almost constantly. (Digital in the Round) Social media is one of the dominant activities of smartphone use when it comes to teenagers. Facebook was once the dominant platform, but recent statistics show that its use dropped to 51%. With the increase in social networks, the young now focus more on other options such as Instagram and TikTok. Worrying statistics show how important smartphones are to teenagers, with more than half (56%) of them reporting feeling anxious, lonely, or upset without their phones. 21. When with friends, 52% of teenagers still spend a long time on their phones. As can be seen from this and similar facts about technology addiction, the device designed for communication keeps us apart at times, even when we're close to each other. Smartphone use has infiltrated normal activities to the point that many teenagers use their phones regularly during classes without considering it rude.
Smartphone usage doesn't stop while they're doing their homework, either. 22. The average time spent using technology among teens is 7.5 hours a day, not including schoolwork. This shows that they spend the majority of their waking hours in front of a screen. Also, technology addiction facts state that most teens are driven by the Fear of Missing Out (FoMO) and frequently check their phones to stay updated. A different statistic has shown that the largest share of teenagers (22%) spend 30–60 minutes on their phones before sleeping, while 11% spend more than three hours. 23. Teenagers who spend five or more hours a day using their phones are 71% more likely to develop one of the suicide risk factors than teenage users who spend an hour daily on the device. Comparing the same two groups also shows that teens who spend over five hours a day using smartphones are 51% more likely to not get enough sleep. Addiction-to-technology facts show that the amount of time spent using electronic devices makes a big difference in its effects on the individual. The statistics above give us a glance at how different habits in technology use may influence users differently. 24. 77% of teenagers have fought with their parents over the use of smartphones. Disagreement between parents and teenagers is a pretty common thing. However, parents trying to change their children's behavior regarding technology use may face an even more significant challenge, considering they overuse technology too. Statistics on technology usage point out that most parents face the same problem concerning tech addiction. And their children are aware of it. 25. 39% of teens reported feeling "addicted" to their smartphones. (The Washington Post) This is an improvement compared to a similar study conducted in 2016, when 50% felt addicted. As in the 2016 study, most parents (68%) feel their children spend too much time using smartphones, and 61% consider their children addicted. When it comes to parents, more than 50% are experiencing addiction to mobile devices themselves. According to the same statistics about technology addiction, 38% of teens stated that their parents were addicted to smartphones. Video Game Addiction Statistics 26. It's estimated that more than 2.5 billion people worldwide play video games. In the US, spending on video games reached a record $11.6 billion in 2020, and in 2021, the global gaming industry revenue reached $180 billion. The most popular games with teenagers are Fortnite, Minecraft, Animal Crossing, and Fall Guys. 27. Addiction-to-technology statistics reveal that 10% of young gamers display pathological behavior that worsens over the years. As they grow older and become young adults, they express higher anxiety, depression, aggression, and problematic cell phone use. In 18% of adolescents, there were moderate symptoms that stayed the same over time. Also, 72% of adolescents experienced relatively low symptoms that didn't deteriorate over time. These results came from the most extensive technology addiction study involving video games, which followed and observed 386 adolescents over six years. 28. In 2019, the WHO listed "gaming disorder" as a behavioral addiction in the International Classification of Diseases. Many people spend a lot of time playing games, but for someone to be considered addicted, the number of hours spent playing games isn't the criterion taken into consideration.
Instead, if gaming interferes with areas of an individual's life and they still aren't able to stop playing, they are considered addicted. Typically, this situation would have to continue for at least a year, according to tech addiction facts. 29. 7.8% of the US population spends more than 20 hours a week playing games. The biggest share of respondents, 18.2%, reported playing games between four and seven hours a week. 13.6% of the respondents claim they play less than one hour a week, while 7.8% of them play over 20 hours a week. We already mentioned that the number of playing hours isn't one of the official criteria used to determine whether someone is addicted. However, the more hours spent playing video games, the greater the effects on gamers' lives. 30. As per technology dependence statistics, 38% of gamers in the US are between 18 and 34 years old. Gaming is a more popular hobby with the younger population. 21% of gamers are under 18, and 26% fall into the 34–54 age group. Also, 6% of players are older than 65, according to the above statistics. When it comes to gender distribution, in 2020, 41% of total US gamers were women, which indicates a slight increase in women gamers over the past few years. Technology Addiction Is Altering Our Daily Habits Overconsumption of technology and the broad range of activities that people enjoy online bring new challenges. According to tech addiction statistics, the time spent online is increasing, especially among children, teenagers, and young adults, who seem to be most prone to technology overuse. Hopefully, the statistics listed above helped you expand your knowledge and understand the potential threats that came with the increase in online activities. Considering the effects that addictive behavior of any kind, including technology, has on people's lives, their ability to perform everyday activities, and their general health, we must address this condition with the attention it deserves. People Also Ask Are we getting addicted to technology? It seems so. With technology penetrating all segments of our private and social lives, we must be aware of its addictive potential. Also, we must be extra careful about our data and privacy on the internet to avoid unpleasant and dangerous situations. With more people engaging in online activities and organizing their lives around apps and social media, we may conclude that caution and moderation are necessary. It seems like we need to stop and think about whether too much connection makes us disconnected. What percent of adults are addicted to their phones? Around 75% of adults in the US consider themselves addicted to their phones. Phone use goes as far as 64.2% of people texting a person in the same room. Perhaps some of the most revealing statistics are the ones that show 32.7% spend more time on their phones than with their partners. Also, 66% of the world population suffers from nomophobia, a term that describes the fear of being without a phone. How does technology make us addicted? The internet and technology make us addicted by exploiting human beings' basic needs for stimulation, acceptance, and interaction. By making everything so much more approachable and just a click away, technology has us "hooked" on constantly checking our devices for new information, photos, or messages. The technology addiction mechanism is often compared to substance abuse addiction because similar chemicals are released in the brain.
For example, when we get a "like" or move on to the next level in a game, the brain releases dopamine and other chemicals that make us feel good. Similar processes happen in people who are addicted to drugs or alcohol. Is addiction to technology a serious problem? Yes. Every form of addictive behavior can be potentially hazardous and harmful to the person experiencing it and can influence the people around them. The possible physical and psychological consequences show just how serious the problem of addiction to technology can become. This mainly applies to the younger population, whose brains are still developing and forming patterns. The lack of real-life interaction at an early age may cause problems in social and emotional development. How do I stop my digital addiction? The first step would be acknowledging it. If you feel like technology is affecting your daily activities, there are a few things you could try to prevent this condition from escalating:
- Delete some of your accounts or turn off unnecessary notifications; choose to keep only the ones you find most beneficial or enjoyable.
- Limit your time online. One technology addiction study found that Facebook users whose phones were set to vibrate every five seconds once a time limit was exceeded spent over 20% less time on the app.
- Get an alarm clock and keep your phone away from your bedroom. Also, try not to use it for at least 1–2 hours before you sleep.
If needed, professional help is also available. What are the consequences of technology addiction? People with technology addiction issues can experience anxiety, depression, mood changes, loss of time orientation, an inability to prioritize appropriately, and low school and work performance. The list, of course, isn't exhaustive. Depending on various factors, some people may experience more severe symptoms and even become entirely isolated, avoiding their friends, family, and obligations, including school and work. Some of the most common physical consequences of this condition may include:
- Lack of sleep and various sleep disorders
- Poor nutrition
- Extreme obesity
- Back and neck pain
- Lack of personal hygiene
- General neglect of basic needs
To name a few. Considering all the tech addiction statistics listed above, it seems we must learn how to control our use of technology. Otherwise, it will control us.
German authorities said that a ransomware attack on the IT systems of a Düsseldorf hospital may have led to the death of a patient. With systems down, patient data inaccessible, and operations postponed, the patient had to be sent to a different hospital an additional 32 kilometers (20 miles) away, delaying potentially life-saving treatment. A matter of time It is believed that the hackers had planned to attack Heinrich Heine University, but accidentally brought down the systems of the affiliated Düsseldorf University Clinic. The attack encrypted 30 servers at the hospital, one of which included a ransom note addressed to the university. When Düsseldorf police told the hackers that they were attacking a hospital, they withdrew the extortion attempt and provided a digital key to decrypt the data. They are no longer reachable. Prosecutors have launched an investigation against the hackers on suspicion of negligent manslaughter. Should an investigation conclusively show that the woman would likely have survived had the hospital not been under attack, the case may be treated as a homicide. Hundreds of other patient visits and appointments were delayed or routed elsewhere. Cybersecurity professionals have long warned that hospitals are at risk of cyberattack, and that, with more and more medical equipment connected to the Internet, such hacks can grind healthcare facilities to a halt. Back in 2017, the WannaCry attack brought down much of the UK's National Health Service, disrupting countless procedures. Numerous hospitals around the world have been hit by ransomware attacks. Even after an attack, there can be lasting damage. Last year, a study published in Health Services Research found that in the three years following a hospital data breach, patients with heart attacks were likely to be treated more slowly, and to be at greater risk of death. This is because breach remediation efforts took primacy, and other aspects of hospital quality suffered as a result.
The Host is a PC where the software resides and is used to manage the system. The Host communicates directly with the controllers over the network. Controllers make all access control decisions after being programmed from the Host, and the controllers are connected to devices such as readers, sensors, locks and sounders. Readers are typically hardwired using Wiegand or OSDP. Sensors are hardwired inputs that a controller uses to detect when a door is open or closed. Push-button sensors can be used to remotely unlock a door in a request-to-exit scenario. All sensors are wired to the controller's input ports and reported back to the Host. Door locks are wired to the output relay ports of the controller. The controller's relays can be toggled from the Host software to lock/unlock doors and to toggle auxiliary outputs such as sounders and/or lights. Rule-based logic can be programmed from the Host to make the system interact based on specific conditions being met. This rule-based logic is downloaded to the controller(s) and gives the user a way to customize the system's behavior and how it interacts with other systems. Cards/tokens are assigned to users of the system from the Host software, which sends the information down to the controller's database. All of the access control intelligence resides at the controller level, without any dependency on the Host.
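The page doesn't show what this rule-based logic looks like in any particular product, but a product-agnostic sketch of the condition-action pattern it describes might look like the following (the device names, input states, and 30-second held-open policy are illustrative assumptions):

```python
import time

# Illustrative controller state: input sensors and output relays by name.
inputs = {"door_1_contact": "open", "door_1_rex_button": "released"}
outputs = {"door_1_lock": "locked", "sounder": "off"}

HELD_OPEN_LIMIT_SECS = 30  # assumed policy: alarm if a door stays open too long
door_opened_at = time.monotonic()

def evaluate_rules():
    """One pass of a simple condition-action rule engine.

    Mirrors the description above: rules downloaded to the controller fire
    local actions (toggling relays) when input conditions are met, with no
    dependency on the Host.
    """
    # Rule 1: a request-to-exit button press unlocks the door.
    if inputs["door_1_rex_button"] == "pressed":
        outputs["door_1_lock"] = "unlocked"

    # Rule 2: a door held open beyond the limit triggers the sounder.
    held_open = time.monotonic() - door_opened_at
    if inputs["door_1_contact"] == "open" and held_open > HELD_OPEN_LIMIT_SECS:
        outputs["sounder"] = "on"

evaluate_rules()
print(outputs)
```

The key design point the page makes is where this loop runs: at the controller, so doors keep working even if the Host or the network is down.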
Specifying User Defined Choices
This topic describes how to enter user defined choices for the following form control types:
- list box and combo box
- radio button group
- tree-control and drop-down tree control
The choices displayed by the control are defined on the Choices tab of the control's Properties dialog box. In the case of a tree-control, a period is used to delimit the different levels in the tree. For example, if you want to display a list of cities in a list box or combo box, you would specify the choices as follows (one choice per line):
Boston
New York
Atlanta
Los Angeles
San Francisco
Johannesburg
These images show a list box in regular and combo box formats. There are several additional optional parameters that you can specify for each choice. The parameters are all delimited with the vertical bar character. For each choice, you can specify the following parameters:
Label |[ Value ]|[ Bitmap_Type | Bitmap_Name |[ Pressed_Bitmap_Type | Pressed_Bitmap_Name ]]
Label - The label that appears in the list box. This parameter is required.
Value - Optional. Default = Label. The value that is stored by Alpha Anywhere in the field when the user selects the corresponding label.
Bitmap_Type - Optional. The type of the bitmap to use. Alpha Anywhere will automatically create all of the bitmap parameters for you if you click the Define bitmaps ... button. However, if you know the name of the bitmap that you want for each choice, entering the parameters directly into the text box is quicker. The options are: "E" = Embedded bitmap (i.e., the bitmap is stored inside the form), "F" = The bitmap is a file, "I" = The bitmap is selected from the bitmap library, or from Alpha Anywhere's built-in bitmaps.
Bitmap_Name - Optional. The name of the bitmap to display.
Pressed_Bitmap_Type - Optional. In the case of a multi-state button, when the button is pressed you can specify a different bitmap to display.
Pressed_Bitmap_Name - Optional. The name of the pressed bitmap.
Assume you have defined a list box or combo box to display product names. You could specify the choices as follows:
Alpha Anywhere|100
Alpha Anywhere Runtime|101
Alpha Anywhere Application Server|102
The list box or combo box will display the names of the products. When the user makes a selection, the corresponding value (100, 101, etc.) will be stored in the field. Assume you have defined a radio button group. Also, assume that for each radio button you want to display text and an image, which you do by selecting "Bitmap followed by text" as the "Display" option. You could specify the choices as follows:
Print|Print|I|$a5_print
Preview|Preview|I|$a5_preview
Save|Save|I|
This will display a radio button group with two radio buttons, "Print" and "Preview". For each button, there will be a corresponding bitmap selected from Alpha Anywhere's built-in list of bitmaps. When the user selects a radio button, the value stored in the field is either "Print" or "Preview". Assume you have defined a tree control and you want the tree control to display the following tree:
Massachusetts
  Boston
  Cambridge
  Weston
New York
  Albany
  Ithaca
  New York City
You would specify the choices as follows:
Massachusetts.Boston
Massachusetts.Cambridge
Massachusetts.Weston
New York.Albany
New York.Ithaca
New York.New York City
These images show a tree control in regular and drop-down formats. If you also want to control the stored value, you would use the standard syntax to specify the stored value by including |stored value after each entry. For example, assume you want a tree control to display the following hierarchy of states, cities, and addresses:
MA
  Boston
    23 Main St.
    2 Circle Drive
  Cambridge
    1 Memorial Drive
NY
  New York
    1 Madison Ave.
    2 Lexington Ave.
  Ithaca
    34 Coddington St.
The data for this tree control could be entered as follows:
MA^Boston^23 Main St.
MA^Boston^2 Circle Drive
MA^Cambridge^1 Memorial Drive
NY^New York^1 Madison Ave.
NY^New York^2 Lexington Ave.
NY^Ithaca^34 Coddington St.
In the above example, the ^ character is the delimiter between levels on the tree control, but you can specify any delimiter character that you want. You must choose a delimiter that does not appear in your data. If you are populating the tree control automatically with values from a table, then you must specify an expression that returns values from the table in the correct format. In the above example, the hierarchy is State, City, Address. Assume that the table from which you were populating data had fields called "State", "City" and "Address". Also assume that the delimiter you had specified was the ^ character. The expression that you would need to specify would be:
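State + "^" + City + "^" + Address
(assuming the three fields are character fields; you may want to wrap each field in alltrim() to remove trailing spaces, and numeric fields would need a conversion function such as str())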
Reverse Engineering in Malware Analysis Written by: Malware Analysis Team, Ensign Labs Malware intrusion is the world's leading type of cyberattack on systems and computers. Malicious software such as viruses, spyware, and adware has evolved over the years. With increased levels of sophistication, malicious software can inflict greater damage, as well as disable and disrupt the operations of an organisation. Malware detection must therefore be done as early as possible to prevent any potential damage that could be costly for organisations. The dramatic rise in malware attacks in recent years has resulted in organisations spending more resources to analyse these types of software. The objective is to understand malware's impact on the organisation's IT assets. The act of analysing malicious software is called malware analysis. What is Malware Analysis? Malware Analysis is the process by which a suspicious, potentially malicious file is dissected so that incident responders can better understand its behaviour and capability. This helps the incident responders mitigate possible threats. There are various reasons why Malware Analysis is performed:
- Incident Management (Investigation & Response): Understand how the malware works, in order to triage and react accordingly
- Better Malware Detection: Uncover Indicators of Compromise (IoCs), which are especially useful when dealing with threats that were never seen before
- Malware Research: Understand the malware's modus operandi to better detect and counter them
While the most obvious use of the output from Malware Analysis is to support Incident Response and Triaging, it also uncovers IoCs that security analysts can use in threat hunting. This translates to improving the efficacy of alerts and notifications from security tools, which could potentially deter future threats. What is Reverse Engineering and how is it conducted? Reverse Engineering (RE) is the process where the malware file is taken apart using mainly Static and Dynamic Analysis techniques. Static Analysis means extraction of key malware attributes and features, based on official file format specifications, using disassemblers like IDA Pro and Windows Portable Executable (PE) format viewers like CFF Explorer. Dynamic Analysis is the detonation of a malware sample in a controlled environment to observe its behaviour. This is usually performed in a sandbox, using debuggers like x64dbg and network protocol analyzers like Wireshark. Ensign's Approach for Static Analysis Before spending the time to delve deeper into the malware's code, it is wise to first ascertain the objective of and resources committed to the task, followed by a preliminary assessment of the malware under investigation. For instance, from a filetype like PE32+ executable (console) x86-64 (stripped to external PDB), for MS Windows, it is likely that the malware file is a 64-bit Windows executable binary. This helps frame the context for analysts to know what they are dealing with. This way, analysts can mentally prepare the relevant knowledge, since CPU registers, calling conventions, and Operating System (OS) internals differ by architecture and platform. The next check is to try determining the Programming Language/Compiler. During the analysis process, analysts should also look out for any sort of Obfuscator/Protector/Packer being employed, as this could greatly hinder the overall analysis outcome.
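As a concrete illustration of this kind of preliminary triage (not a tool the article itself prescribes), the open-source pefile library can pull the architecture, compilation timestamp, and section names straight out of a PE header. A minimal sketch; the sample path is a placeholder:

```python
import datetime
import pefile  # pip install pefile

def triage(path):
    """Print basic PE header facts used during preliminary static triage."""
    pe = pefile.PE(path, fast_load=True)

    machine = pe.FILE_HEADER.Machine  # 0x8664 -> x86-64, 0x14c -> x86
    compiled = datetime.datetime.utcfromtimestamp(pe.FILE_HEADER.TimeDateStamp)

    print(f"64-bit: {machine == 0x8664}")
    print(f"Compilation timestamp (UTC): {compiled}")
    # Unusual section names (e.g. UPX0/UPX1) often hint at a packer.
    for section in pe.sections:
        print(section.Name.rstrip(b"\x00").decode(errors="replace"))

triage("sample.exe")  # hypothetical sample path
```

Checks like these answer the framing questions above (architecture, recency, likely packer) in seconds, before any disassembler is opened.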
A rules-based identification tool that may be useful is 'Detect It Easy' (see: https://github.com/horsicq/Detect-It-Easy). Example output from Detect It Easy:
- Packer: UPX 3.96
- Compiler: MinGW
As a follow-up, it is advisable to quickly search online for existing solutions to speed up the analysis process. In the example above, the malware used the UPX packer. The official UPX tool features a command-line switch to unpack the file directly, although sometimes it might turn out to be unreliable or inaccurate. It is thus important to know how to manually unpack it by hand to find the Original Entry Point (OEP) as a fallback, especially because RE is typically done manually. Additionally, if the malware is not customer-sensitive, meaning it can be submitted online or it is already public information, we can submit it to VirusTotal (VT) and check its classification by the various Anti-Virus (AV) vendors. The label assigned might reveal the nature of the malware, and will dictate the steps to follow in the analysis. A label such as HEUR:Trojan-Ransom.Win32.Generic probably means it is ransomware, a type of malware that has gained popularity in recent years. The strings embedded in the malware, its original filename, and its compilation timestamp (and how recently it was observed) are also useful parameters to keep in mind. Static Analysis provides hints on the code's behaviour that analysts can focus on to figure out answers to valuable, malware-related questions that support Incident Response operations. For instance, analysts can establish whether it is possible to decrypt the locked documents and files after a ransomware attack. Ensign's Approach for Dynamic Analysis Dynamic Analysis complements Static Analysis in gaining a more holistic picture of the nature of the malware sample. Dynamic analysis is often preferred over static analysis, especially when important parts of the code are obfuscated (e.g., packed), making it extremely tedious to understand what the malware is doing, or when an additional download of the next payload is required. Dynamic Analysis is achieved by detonating (or executing) the binary file within a sandbox, which is an isolated, controlled environment (such as within a Virtual Machine). This environment comes equipped with instrumentation to monitor various key indicators (e.g., file, network, and process activity). Some examples of Dynamic Analysis techniques include:
- Function call monitoring
- Information (control/data) flow tracking
1. Function call monitoring
A function is a reusable, self-contained block of code that accomplishes a specific task. Functions accept input, process it, and produce (return) a result. Functions can be invoked ("called") by other functions, and can provide a level of abstraction to facilitate the understanding of the entire programme. During execution, malware will exhibit function-calling behaviour that is unique to its family. Monitoring the type and sequence of such function calls can help us associate malware with the correct family. It also helps in better understanding what the malware may be doing on the system. Function call monitoring is achieved by intercepting calls between functions, with the aim of identifying the critical parts of the programme to focus the analysis on, and which "junk code" to ignore. The process of capturing the input arguments and return result(s) of function calls is known as hooking.
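Real sandboxes implement hooking at the API or kernel level, but the underlying idea can be sketched in a few lines by wrapping a function so that its arguments and return value are logged as control passes through. A conceptual illustration only, not how production instrumentation is built; the XOR "decrypt" routine is a stand-in for a crypto API:

```python
import functools

def hook(func):
    """Wrap a function so every call logs its arguments and return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"[hook] {func.__name__}(args={args}, kwargs={kwargs}) -> {result!r}")
        return result
    return wrapper

# Hooking a decryption routine captures the key and plaintext automatically,
# the way a sandbox intercepts crypto APIs during detonation.
@hook
def xor_decrypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

xor_decrypt(b"\x2a\x27\x2e\x2e\x2d", 0x42)  # logs key 0x42 and result b'hello'
```

This is exactly the payoff described above: the analyst sees the key and decoded data without manually tracing individual instructions.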
Typical candidates for hooking are standard Windows Application Programming Interface (API) functions or system calls. Hooking can reveal cryptographic keys and/or decoded/decrypted data automatically, without going through the manual process of tracing individual instructions. 2. Information (control/data) flow tracking (Source: "DroidEcho: an in-depth dissection of malicious behaviors in Android applications") Information flow tracking is used to monitor how a programme processes its data. We make use of this to extract decoded/decrypted data from malware. This is especially useful when dealing with ransomware or obfuscated malware. During analysis, specific key data of interest are "tainted", and their propagation throughout the rest of the code is then observed. Subsequently, the recorded execution trace is examined to infer logical properties of the data relationships between various states of the programme execution lifecycle. This allows the analyst to discover data dependencies and unravel the algorithm used to decode or decrypt the data. Malware is constantly employing novel ways to evade detection. It also evolves its defence mechanisms against analysis. Organisations need to stay ahead of attackers by being proactive in detecting and identifying malware. This is where malware analysis comes into play. We hope this article gives you a better understanding of what malware analysis is, and why it is necessary. While this article has presented a whirlwind overview of Reverse Engineering in Malware Analysis, through the lens of Static and Dynamic Analysis, it is by no means exhaustive. We did not discuss other techniques such as Program Analysis and Hybrid Analysis, which we hope to cover in future articles.
Internet Message Access Protocol (IMAP) is a standard email protocol, first widely deployed in the 1980s, that stores email messages on a mail server but allows the end user to view and manipulate the messages as though they were stored locally on the end user's computing device. This helps employees at a business organize messages into specific folders, lets mail clients show which emails have been read or flagged for urgency or follow-up, and saves draft messages on the server. Additional Reading: Where does IMAP Security Fall Short? What should an SMB Owner do with IMAP? Many businesses have left IMAP for webmail services such as GSuite and O365, which are generally far more secure than IMAP. They provide encrypted access, two-factor authentication, and robust features that far exceed the capabilities of IMAP. If your business is still relying on IMAP, get off of it today!
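One way to gauge the risk concretely while you plan a migration: Python's standard-library imaplib can check whether a legacy IMAP server at least offers encrypted access. A small sketch; the hostname is a placeholder:

```python
import imaplib

def imap_tls_posture(host: str) -> str:
    """Report whether an IMAP server offers encrypted access.

    Tries implicit TLS on port 993 first, then falls back to checking
    for STARTTLS support on the plaintext port 143.
    """
    try:
        imaplib.IMAP4_SSL(host, 993).logout()
        return "implicit TLS on 993"
    except OSError:
        pass
    conn = imaplib.IMAP4(host, 143)
    try:
        typ, caps = conn.capability()
        if b"STARTTLS" in caps[0].upper():
            return "STARTTLS available on 143"
        return "plaintext only -- migrate immediately"
    finally:
        conn.logout()

print(imap_tls_posture("mail.example.com"))  # placeholder host
```

Even a server that passes this check still lacks the two-factor authentication and account-protection features the webmail services above provide.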
Ransomware is big business today and getting bigger all the time. It is so profitable that organized crime and state actors have gotten into it in a big way. It is easy for any criminal, terrorist organization or state sponsor to buy the latest variations of ransomware on the dark web. The experts say the best defense against a ransomware attack is a good backup, replica or snapshot (referred to collectively as backup). The criminal coders running ransomware know that, too, and have been pouring their profits back into research and development to defeat those backup defenses. As a result, the latest generation of ransomware attacks has included backup data as a target. To understand this threat further requires a brief explanation of the various stages of ransomware. Ransomware is a type of malware that encrypts all of the data on the system upon which it resides and demands a ransom for the decryption key. It then ransoms access to the data back to the system owner(s). The ransomware perpetrators threaten to destroy the key if they are not paid in a set amount of time, and commonly demand payments in stages based on set time limits. If the ransom is not paid, the ransomware then destroys the key, and with it access to all of the data. There are five stages to ransomware: infection, detonation, gestation, dormancy and destruction (or release). Infection occurs when a tainted file, picture or website is connected to the system. Up-to-date anti-virus software can stop infections with known signatures or blacklisted sources, but not necessarily all. Whitelist-based anti-virus solutions can also miss signatures that mimic well-known applications. No front-door prevention is leak-proof, however, so ransomware infections will occur, and the statistics prove it. Approximately 71% of all businesses targeted in 2017 were infected with ransomware. Detonation is when the ransomware encrypts the data on the infected system. Early generations of ransomware detonated as soon as they infected the system. Unknown to the users of the system, the malware encrypted data immediately and transparently in the background. It takes time to encrypt all of the data, but once complete, the ransomware deletes the key on the now-detonated system, then holds the data up for ransom. If the ransom is not paid in full within a set period of time, it randomly deletes files to raise data owner anxiety. This creates a sense of urgency to pay the ransom: if the ransom is never paid, the malicious actor destroys their side of the key, making the data forever inaccessible. The latest generation of ransomware today does not detonate and encrypt immediately. It has a gestation period designed to maximize revenues and overcome the backup defense. The ransomware's Phase One attack during the gestation period is to spread as far as it can from one system to another using the permissions of the systems it has infected. When it cannot spread any further, it goes to Phase Two of its attack by deleting or encrypting the backups it is able to locate. Backup files have a known signature, and backup software of all kinds has published APIs that can be used to delete older backups no longer needed. Ransomware uses those APIs to do just that: upon detonation the user discovers that their backups (snapshots and replicas, too) have been deleted. This data is very tough to recover when there are no backups. This evolved ransomware neuters the No. 1 defense against ransomware, forcing the ransom to be paid or the data is lost forever.
After spreading as far as it can, the latest variation of ransomware lies dormant rather than immediately deleting or encrypting the backup files. It can lie dormant for one, two, four, six or "n" months before finally detonating. This is analogous to a virus in humans that can lie dormant for months or years before it makes an appearance. The problem with dormant ransomware is that it will be backed up along with the legitimate data the entire time it is dormant. Any recoveries from infected backups will detonate all over again. This is called an attack-loop. Destruction or release The final stage is when the ransomware destroys files. As previously discussed, if a valid encryption key is not entered within the specified time, hostage files may be randomly deleted and the ransom price for the encryption key will be increased. The malicious actor's version of the key is destroyed if no ransom is paid. The destruction of the encryption key effectively destroys the data, but paying the ransom is a poor choice. It may seem expedient, but it identifies the organization as a target that pays, and it will be hit again. It's also no guarantee that the encryption key will be released. There have been several documented cases where the encryption key was destroyed even though the ransom was paid. How backup vendors are responding to ransomware Backup and data protection vendors have responded to the increasingly sophisticated and disastrous ransomware attacks in three ways: do nothing (a.k.a. denial), detect and react to ransomware detonations, or prevent backups from being deleted or encrypted. Doing nothing is ignoring the changing reality of the ransomware era. It's analogous to treating an antibiotic-resistant infection with the same antibiotics the infection is resistant to. The backup defense is compromised while the backup vendor is refusing to acknowledge it. This is an ineffective response. Reacting vs. preventing ransomware detonations This approach leverages the backup software's incremental or changed-block-tracking mechanism. After the first backup, the amount of data being incrementally backed up is typically very small. When ransomware detonates and encrypts the data, the backup software sees the encrypted data as all new and is forced to back up all the data. That's going to stick out like a sore thumb, and the backup will take considerably longer. This gives the backup software an alerting mechanism: the software enables user- or software-determined, policy-based triggering thresholds to detect a likely ransomware detonation, notify the administrator and suggest recovery responses. Some can start the recovery process immediately. The problem with this increasingly popular approach to ransomware recovery is that it's reacting to a detonation, not preventing one. It assumes the ransomware infection has not made its way into the backups, enabling recoveries from the most recent backup to resolve the detonation. This is a dangerous supposition. Even assuming the backup software has an effective way of preventing the ransomware from encrypting or deleting the backup data as previously discussed, reacting to detonations does nothing to prevent the nefarious attack-loop. Detecting and reacting to ransomware detonations is an ineffective response.
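To make the change-rate trigger concrete (the article argues it is insufficient on its own, but it is the mechanism such products expose), here is a minimal sketch of the kind of threshold check a backup job might run. The 25% threshold, the 10x baseline multiplier and the job statistics are illustrative assumptions, not any vendor's defaults:

```python
def likely_detonation(changed_bytes: int, total_bytes: int,
                      baseline_ratio: float, threshold: float = 0.25) -> bool:
    """Flag a backup job whose incremental change rate spikes abnormally.

    Healthy incrementals change a small, stable fraction of the data set;
    a mass-encryption event rewrites nearly everything, so the changed
    fraction jumps far above the historical baseline.
    """
    ratio = changed_bytes / total_bytes
    return ratio > max(threshold, baseline_ratio * 10)

# Typical incremental: ~2% of 1 TB changed -> no alert.
print(likely_detonation(20 * 10**9, 10**12, baseline_ratio=0.02))   # False
# Post-detonation: ~95% changed -> alert and suggest a recovery point.
print(likely_detonation(950 * 10**9, 10**12, baseline_ratio=0.02))  # True
```

Note that by the time this check fires, the detonation has already happened, which is exactly why the next section argues for prevention instead.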
Prevent backups from being encrypted or deleted

Successful prevention of a ransomware attack-loop requires a cybersecurity capability that detects ransomware infections in the backup stream itself. The technology isolates infected files, prevents them from being backed up, and notifies the backup and security administrators, who can then identify the infected files and remove them from their origin before they detonate, stopping ransomware in its tracks. A backup solution with this capability also flags infected files that may have been captured in previous generations of backup data, giving administrators the option to recover or not and thereby ensuring a clean recovery.

As part of a preventative rather than reactive strategy, look for solutions that make backup data difficult to locate in the first place through variable repository naming. This makes it much harder for the more intelligent ransomware strains to identify backup data containing important customer records, personally identifiable information, financial data or valuable operational data. Experts also recommend going further and demanding two-factor authentication (2FA), so that data cannot be deleted with a single mouse-click or API call (a minimal sketch of such a guard follows at the end of this article).

While backups should be a critical component of every company's data protection plan, simply having backup infrastructure in place is not enough. Backup technology has evolved, and it is now possible to all but guarantee that backup data will be safe by using a cybersecurity-enabled backup/recovery solution, giving organizations the best chance of defeating the extortion attempts of malicious ransomware coders.

• Eran Farajun is the executive vice president of Asigra and an expert in the area of cloud-based data protection with more than 20 years in the industry. This article originally appeared on Security Boulevard.
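As an illustration of the two-factor deletion guard recommended above, here is a minimal sketch. The repository object and function names are hypothetical, and the one-time-password check is a bare-bones RFC 6238 (TOTP) implementation; a real product would integrate with its existing authentication stack.

```python
# Hypothetical sketch: destructive backup operations require a second factor,
# so a single API call (the pattern ransomware abuses) cannot delete backups.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def delete_backup(repo, backup_id: str, otp: str, secret: bytes) -> bool:
    """Honor a delete request only when a valid second factor accompanies it."""
    if not hmac.compare_digest(otp, totp(secret)):
        print(f"REFUSED: deletion of {backup_id} lacked a valid second factor")
        return False
    repo.remove(backup_id)  # 'repo' and 'remove' stand in for a real repository API
    return True
```

A ransomware process that has stolen API credentials can still call delete_backup, but without the live one-time code the call is refused, which is the point of the recommendation.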
The auto industry is on the verge of a revolution. Cars are an integral component of multi-modal, on-demand transportation systems. Under the influence of new technologies, the old view of the family car – or cars – as a necessity is disappearing.
- Daimler launched 'car2go' in partnership with Europcar Autovermietung GmbH in 2011. Daimler's 'car2go' models incorporate advanced telematics, and serve more than 1 million users across 60 cities in eight countries.
- The BMW DriveNow car-sharing service is based on a "pick up anywhere, drop off anywhere" model, and electric cars are included in the DriveNow fleet. Passengers in San Francisco and select cities of Europe can locate cars using an app or find one on the road, use a chip in the driving license as the key, and leave the car anywhere. Users are billed based on the duration of travel, which includes fuel and parking charges.
- American ride-sharing services Uber and Lyft, founded respectively in 2009 and 2012, are both multi-billion-dollar companies today. Uber is experimenting with autonomous vehicle technology that will further revolutionize the transportation sector.

These developments mean that fewer and fewer people need to own a car. Private car ownership among millennials in particular is declining, or at best being postponed. More than half of adults between the ages of 22 and 37 say a car is not worth the money spent on maintenance. Even more of a threat to the old view of car ownership is the decline in the number of people who go to the trouble of obtaining a driver's license. In the 1980s, 80% of 18-year-olds in the U.S. had one; now that figure is only 60%. There is also ample anecdotal evidence, for anyone who lives in an urban area, that more and more people have settled on the bicycle as their vehicle of choice.

New attitudes toward mobility and technology are now interacting to increase the pace of change. It is not yet known whether travelers in developing countries will follow suit. In these countries, regulators are implementing policies to restrict private vehicles even though car sales are robust, and rising income levels encourage both first-time and multiple car ownership.

The Pace of Change Accelerates

Whatever the global distribution of new driving habits may be, what is certain is that changes in mobility patterns have enormous technology implications for the automotive sector and beyond. Car makers now must deal with a whole new layer of technology that extends beyond the car. This includes networking requirements that relate to fleet management, e.g., the ability to locate a vehicle that is near someone who wants a ride (a simple illustration follows below); IoT technology that may include the need to interact with other vehicles; and the ability to integrate with "smart city" systems designed to manage the flow of traffic and eliminate congestion. Another area of technology that will become important revolves around payment systems that might be integrated into vehicle technology.

The transportation technology boom will clearly extend beyond vehicles themselves. Every business associated with travel, from gas stations to fast food outlets to lodging establishments, will view the intelligence and networking capabilities of smart vehicles as a marketing opportunity, alerting drivers and passengers to special offers based on geolocation in the same way that stores connect with customers in malls through their smart phones today.
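As a small illustration of the fleet-management capability mentioned above, the sketch below finds the available vehicle nearest to a rider. The data structures and coordinates are invented for the example; production fleet systems would of course use spatial indexes and live telemetry rather than a linear scan.

```python
# Illustrative fleet-management primitive: find the nearest available vehicle.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearest_available(rider_lat, rider_lon, fleet):
    """Return the closest vehicle whose status is 'available', or None."""
    available = (v for v in fleet if v["status"] == "available")
    return min(available,
               key=lambda v: haversine_km(rider_lat, rider_lon, v["lat"], v["lon"]),
               default=None)

fleet = [
    {"id": "car-17", "lat": 37.774, "lon": -122.419, "status": "available"},
    {"id": "car-42", "lat": 37.802, "lon": -122.405, "status": "in_use"},
]
print(nearest_available(37.780, -122.410, fleet))  # -> the record for car-17
```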
It is important to note that there are other forces besides driving habits that make technology more important than ever to the automotive sector. Environmental issues are driving the trend toward electrification, while safety concerns are encouraging the adoption of radar, lidar, cameras, and sensors fitted inside and outside of vehicles to ensure the safety of passengers as well as pedestrians. With the arrival of 5G (and telcos all over the world jumping through hoops to announce how many cities they will enable with 5G, and on what launch dates), a completely new era of extremely high-bandwidth, low-latency communication opens up. With this comes the possibility of real-time sensing and a hyper-connected mesh with high network intelligence, which can enable real autonomous decision making. Taken together, the technologies now available promise a transformation in the automotive sector that will rival the first assembly line.
Wikipedia is always a good source of definitions for technology-related issues. It defines hacktivism as "the use of computers and computer networks to promote political ends, chiefly free speech, human rights, and information ethics". As with any technology, "hacking" – and therefore hacktivism – can be a force for good or evil.

As websites become ever more secure, so those "hacking" them become more sophisticated in their methods. Over the years, many of the more sophisticated hacks have been carried out by groups of hackers or nation states, rather than individuals. Two of the most widely known groups are Anonymous and Lulz Security (more commonly abbreviated to LulzSec). However, in the case of LulzSec, the group has (allegedly) disbanded and some of its members have been arrested. Nevertheless, given the disparate structure of these organisations and the transient nature of their members, it is unlikely that all the members have been caught.

The range of targets for these organisations has been wide. One of Anonymous's earliest targets was the Church of Scientology. The initial attack consisted of making prank telephone calls to the organisation and sending black sheets of paper by facsimile transmission. This tactic was then supplemented by its internet equivalent – a denial-of-service attack. This involves sending multiple simultaneous requests for information to the target website, causing it to crash. While some regard a denial of service as relatively simplistic and, indeed, itself a denial of freedom of speech, it is nevertheless effective.

Is hacking worse than a physical attack?

Whether hacking is worse than a physical protest, such as sending large numbers of useless facsimiles or holding a mass demonstration outside the buildings of the Church of Scientology, depends on your point of view. At first sight it might seem so, since those protesting electronically invariably do so anonymously. However, some of those protesting physically do so wearing hoods or masks. Of course, as with many protests, innocent bystanders can be hurt. During the campaign against Scientology, a secondary school in the Dutch municipality of Deventer and a 59-year-old man from Stockton, California were incorrectly included as targets.

Unintended consequences can follow hacktivist attacks. In 2011, LulzSec attacked the internet porn site www.pron.com and published 26,000 email addresses and associated passwords, in an apparent attempt to embarrass users. These appeared to include two Malaysian government officials and three members of the US military. This triggered an unexpected response from Facebook, which prevented users with the same email addresses from accessing their Facebook accounts, automatically assuming that those users might have used the same passwords.

Many targets of hacktivist groups are of a more overtly political nature. LulzSec, in its short "career", attacked InfraGard, a partnership between businesses and the Federal Bureau of Investigation in the US, and successfully attacked the US Senate and Central Intelligence Agency websites. It defaced the InfraGard website, damaged the Senate by releasing some "secure" information, and hit the CIA by taking its site down for over two hours.
It also attacked the UK National Health Service, but in this case it performed a public service, merely sending the NHS an email informing it of the security vulnerability it had found.

Other countries have also suffered from hacking attacks. In Portugal, for example, the websites of the Bank of Portugal, the Portuguese parliament and the Ministry of Economy, Innovation and Development have all been attacked, apparently in response to police brutality at public protests against austerity measures held on 24 November 2011. But, as with many such attacks, it is not always possible to identify the causes conclusively.

Hacktivism and the Arab Spring

Not all hacktivists work in secret. In 2011, at the start of the Arab Spring, the Egyptian government tried to shut down the internet. This provoked a response from Google, Twitter and SayNow, which collaborated to produce, in a very short time, a "Speak2Tweet" service allowing anyone, inside or outside Egypt, to leave a message on certain telephone numbers. The messages were then immediately placed on Twitter. The stated motive was: "We hope this will go some way to helping people in Egypt stay connected at this very difficult time."

There are other examples of hacktivism against states. When, in 2009, Iranians protested unsuccessfully against perceived widespread election fraud, Anonymous set up an information exchange website called Anonymous Iran. More recently, the Turkish government has taken an increasingly sharp swing towards authoritarianism. This prompted what, to many people, is an example of "good" hacktivism by the Turkish hacktivist group Redhack.

Giving protesters a voice

Redhack suggested that protesters alleged to have sent illegal messages by Twitter should say their accounts had been hacked into by Redhack; it said it would "take the blame [for Twitter users targeted by the state] with pleasure". Redhack also advised activists to use Twitter rather than Facebook or Skype, because the latter two services confirmed the identities of their users to the authorities, whereas Twitter does not. Redhack's previous targets have included the Turkish Council of Higher Education, the country's police force and army, Türk Telekom and the National Intelligence Organisation. After it offered to assist those targeted by the authorities, Redhack's Twitter account grew to more than 600,000 followers.

Hacktivism in Africa

A recent example of hacktivism concerns the activities of the group Anonymous Africa. During the 2013 Zimbabwean election, it attacked and closed down 50 websites, including those associated with the ruling Zanu PF party as well as those of the regime newspaper The Herald. Some justified this by pointing out that president Robert Mugabe's regime was allowed plenty of airtime on state TV to support its own message, while giving none to the opposition. Harder to justify was the attack on the website of South Africa-based Independent Newspapers, targeted following a pro-Mugabe opinion piece in one edition. Some say the action, an unsophisticated denial-of-service attack, was an unjustified erosion of freedom of speech.
Others equate Mugabe – who, in a judgment by the Council of the European Union on 26 January 2009, was said to be "responsible for activities that seriously undermine democracy, respect for human rights and the rule of law" – with Hitler, and applaud the attack.

Hacktivism is sometimes state-sponsored. One large-scale state-sponsored instance, called Titan Rain, occurred over a three-year period commencing in 2006. The attacks seemed to be targeted at US defence contractors' websites and were widely alleged to be the work of the Chinese military. While the stories of "Unit 61398" of the Chinese Army are numerous, a larger and, in many respects, more insidious example of state-sponsored hacktivism is that undertaken by Russia.

In 2007, in a row between Estonia and Russia over the relocation of a statue in the Estonian capital, Tallinn, another massive cyber attack took place. Given the complexity of this attack, it is widely believed to have been sponsored by the Russian state: this allegation was made by at least two Estonian ministers of state. The attack caused considerable interruptions to many state-related entities in Estonia, including Estonian financial institutions.

Russian attacks against Georgia

Stronger evidence pointing the blame at Russia emerged during the conflict with Georgia in 2008, during which Russia re-established its earlier "annexation" of Abkhazia and South Ossetia. Georgian targets of the cyber attack included the websites of the Parliament and the Ministry of Foreign Affairs. A subsequent study by network security firm Greylogic in March 2009 concluded: "The available evidence supports a strong likelihood of GRU/FSB planning and direction at a high level while relying on Nashi intermediaries and the phenomenon of crowd-sourcing to obfuscate their involvement and implement their strategy."

In March 2014, during the Russian invasion of Crimea, Ukraine's Security and Defence Council stated: "There was a massive DoS [denial of service] attack on communication channels of the National Security and Defence Council of Ukraine, which was apparently aimed at hindering a response to the challenges faced by our state." The Ukrainian state-run news agency, Ukrinform, has suffered a similar attack. In the same way that the physical presence of the Russian army was not immediately obvious, because many soldiers did not wear uniforms, so too did Russia's cyber attacks take place surreptitiously.

Chinese military hacking units

Another example of state-sponsored hacktivism is an attack on a number of US companies and federal agencies. The internet security company Mandiant published detailed evidence showing the Chinese Army's Unit 61398 to be the source of this hacking.

Many of the world's conflict zones are also associated with political hacktivism. One that is often reported is the Israeli-Palestinian conflict, but others include India-Pakistan (which began in May 1998, when Pakistan-based hackers attacked the Indian Atomic Weapons Research Establishment in Mumbai) and China's attacks on pro-Tibetan independence websites, as well as on Taiwan. China has also been accused of attacking Japanese sites in its continuing dispute over sovereignty of the Senkaku/Diaoyu islands.
China-based hacking has also been suggested as the cause of the demise of the once-huge Canadian company Nortel, which lost a large number of its corporate secrets through hacking emanating from China. In a recent UK-related incident, the firm Dattatec, based in Santa Fe, Argentina, launched an arcade-style shooting game in April 2013 in which police on the Malvinas (Falklands) fought British "terrorists". The Argentine company was then forced to face another battle: a denial-of-service attack from the equivalent of 5,000 computers at once. This attack may have been the work of a lone individual.

Stuxnet and Iran

A game-changing event was the development and release of the Stuxnet virus. The virus was uncovered in June 2010, but not until it had caused the centrifuges in Iran's uranium enrichment programme to spin out of control. It specifically targeted the Siemens control systems for the centrifuges. While many in the West may applaud the motives behind this attack on Iran's nuclear ambitions, it undoubtedly changed the rules by causing real physical damage. While there has never been any formal acknowledgment that Israel and the US were behind the Stuxnet virus, Eugene Kaspersky, co-founder of the Kaspersky anti-virus company, has estimated that the development cost of Stuxnet was of the order of £10m. It is therefore unlikely that anyone would have had the means to create such an entity without the backing of a nation state.

Iran launches Shamoon

It did not take the Iranians long to retaliate. In August 2012, the Saudi national oil and gas company, Saudi Aramco, had 30,000 of its computers infected with the Shamoon virus, which renders hard drives unusable by writing spurious data over the files stored on them. A previously unknown hacktivist group, Cutting Sword of Justice, claimed responsibility, but the Iranian state is widely believed to have been behind this highly sophisticated attack. The Saudis have long been allies of the Israelis in trying to thwart Iran's nuclear ambitions.

So, is hacktivism good or evil? That depends on your perspective. Like most weapons, hacking can be used for good or bad, to defend freedom or attack it. Perhaps only time will tell whether hacktivism earns a reputation for net detriment or net benefit.

Dai Davis is a chartered engineer and solicitor. He has Masters degrees in physics and computer science. Previously national head of IT law at Eversheds, he is now a partner in his own law firm. He can be contacted at [email protected]
Nov 20 2018

Historically, high-performance computing (HPC) was used for a very narrow set of problems: fluid dynamics (especially for nuclear weapon simulation and design verification), weather prediction and modeling, aerodynamic simulation, and particle physics modeling. Over the last decade, the problem space that HPC can successfully and economically address has grown considerably, fueled primarily by increases in the capabilities of both servers and GPGPUs. However, as data sets have grown, the new bottleneck has become the movement of data from storage systems and storage devices to the servers and CPUs for processing. As we stated in our last blog, this is an issue that computational storage can address for specific HPC problems.

Generally speaking, the problems that computational storage can address in HPC are those that utilize petabyte-scale data sets, are read-intensive, and involve parallel operations on the data sets. Even better suited are applications that perform significant searching of the data sets to find vector similarities. Problems that computational storage is not well positioned for are those that are highly scalar in nature or primarily write-intensive (complex data transformations fall into this category). Finally, some computational storage solutions provide acceleration for specific problem sets such as artificial intelligence (AI) and encryption/decryption.

For the NGD Systems Newport and Catalina-2 computational storage platforms, workloads that match the attributes above include TensorFlow-based HPC applications such as the Facebook Artificial Intelligence Similarity Search (FAISS), biological workloads such as BLAST, unstructured databases like Apache HBase, Redis and Aerospike, and applications like Hadoop MapReduce (a minimal similarity-search sketch follows at the end of this post). For these and other similar applications, computational storage can significantly reduce the amount of data moved between storage and CPU DRAM, increasing performance and the number of jobs that can be run on a given hardware footprint.

Find out how NGD Systems computational storage can help your HPC workloads at www.ngdsystems.com.
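For readers who have not seen vector-similarity search in practice, here is a minimal, self-contained FAISS example with synthetic data. It simply demonstrates the read-intensive, highly parallel lookup pattern described above and is not tied to any particular storage platform (it assumes the faiss and numpy packages are installed).

```python
# Minimal FAISS similarity-search example on synthetic vectors.
import numpy as np
import faiss  # Facebook AI Similarity Search

d = 128  # vector dimensionality
db_vectors = np.random.random((100_000, d)).astype("float32")  # the "database"
queries = np.random.random((5, d)).astype("float32")           # search queries

index = faiss.IndexFlatL2(d)  # exact (brute-force) L2 nearest-neighbor index
index.add(db_vectors)         # load the database vectors into the index

distances, ids = index.search(queries, 4)  # 4 nearest neighbors per query
print(ids)        # row i: ids of the vectors closest to query i
print(distances)  # matching squared-L2 distances
```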
You handle them every day, but how much do you know about what's inside? Here we delve into the key internal components of common open standards-based transceivers and highlight the ways in which not all transceivers are created equal.

Inside the SFP

The job of an optical transceiver is to convert the electrical signal from a switch or router to an optical signal that can be transmitted and received over fiber optic cable. The optical portion of SFP transceivers built to MSA (multi-source agreement) specifications comprises the following components:

Fiber Stub. A strand of fiber along which the optical signal enters the transceiver. A small fiber stub is optimal to minimize signal attenuation.

Isolator. Shields the transmitted signal from the received signal by reducing EMI (electromagnetic interference) within the transceiver. This improves signal strength in a highly compact form factor.

Focusing Lens. Refocuses the incoming (or outgoing) light to maximize signal strength.

TOSA and ROSA. The Transmitter and Receiver Optical Sub-Assemblies, or TOSA and ROSA, house all the components that enable data transmission and reception over fiber optic cable. This is where lower-quality manufacturers are most likely to cut corners: inferior lasers lower the cost to build but burn out much faster, increasing your overall operational cost.

Inside the QSFP28

TOSA MUX and ROSA DEMUX. Unlike 1G and 10G transceivers, 100G transceivers like the QSFP28 use internal WDM (wavelength-division multiplexing) technology to achieve higher data rate transmission. The data is channelized into four wavelength "lanes" of 25G each within the transceiver.

Tx and Rx CDR. Higher data rate transceivers are equipped with Clock Data Recovery (CDR) to ensure the transmitted and received signals are synchronized for optimal transmission.

Tx EML. The EML (electro-absorption modulated laser) is the point at which the electrical signal actuates the laser to generate the optical signal.

Micro Controller. The heart of the transceiver, an electrical component akin to a computer's CPU that controls all transceiver functions. This is another component that can be skimped on in lower-quality transceivers.

TEC Control. A thermoelectric cooler that prevents the transceiver from overheating.

Want to learn more about transceivers? Contact us today.
The Six Best Programming Languages to Learn in 2020

Did you know that the average salary for a computer programmer in the UK is £44k, according to Adzuna? Programmers have never been more popular than they are at the moment. Websites, mobile apps and even home appliances are all dependent on code, and this demand is only set to grow over time!

If you are looking to change careers or improve your job prospects, learning a programming language is a fantastic option. However, there are a lot of different languages out there, and you may be wondering which one will be best to learn in order to help your career grow. We've selected the six best programming languages that will help to supercharge your job prospects and open the door to an exciting new career.

1. Python

Ideal for: People who want to take the first step on the programming ladder

Python was released in 1991 and, although you may think it was named after the snake, it is actually named after the famous 1970s comedy troupe Monty Python! If you are new to coding and don't have much experience, Python is a great place to start. Not only is it easy to learn, but it is used across a wide range of systems and platforms. Machine learning, data science and artificial intelligence are all new technologies that extensively use Python. A wide range of companies use Python, including NASA, Netflix and Google. In fact, Guido van Rossum, the creator of Python, worked at Google for several years!

2. Java

Ideal for: People who want to develop an awesome mobile app!

Java is currently extremely popular in the development of Android apps, as well as desktop applications for businesses. Difficulty-wise, Java is somewhere in the middle. It's a little harder to learn than Python, but there are trickier programming languages out there.

3. C and C++

Ideal for: People who want to be the next big video game developer

C was devised in the 1970s, making it one of the oldest programming languages still in widespread use today. C++ was launched in the 1980s as an enhanced version, and powers software including Adobe applications and Firefox. These two programming languages are commonly used to develop computer games, making them the perfect languages to learn if you are looking to create the next big gaming sensation! C and C++ are challenging languages to learn, but programmers are in extremely high demand, with the best developers earning on average £72k, according to Adzuna.

4. Ruby

Ideal for: People who want to launch a new online business quickly and efficiently

Created in Japan in the 1990s, Ruby is a popular programming language to learn as it is easy to understand, very intuitive and has a supportive online community behind it. Ruby underpins 'Ruby on Rails', a web application framework used to structure web pages and databases. As it is easy to pick up and run with, Ruby is popular with a lot of start-ups, with companies like Airbnb and Shopify using Ruby to build their websites.

5. PHP

Ideal for: People who want to learn a simple language used across a wide range of systems

PHP is short for 'PHP: Hypertext Preprocessor' and was invented in 1994. It's used by some of the world's largest websites, including Facebook, Wikipedia and WordPress. It's seen as an easy programming language to learn and operate, but it is also great for more experienced programmers as it offers a lot of more advanced features too. Like Java, PHP is used to power the back end of applications and content management systems worldwide. In fact, 79.0% of all websites whose server-side language is known are using PHP!
6. Go

Ideal for: People who want to learn the next big programming language

The youngest language on this list, Go (or GoLang) was created in 2009 by developers at Google. Influenced by C, it's seen as an easy language to learn, especially if you have prior experience of C or C++. It's used by a variety of companies including Apple, the BBC and the New York Times.

In summary: Which programming language is right for me?

We hope that this short guide has given you an insight into which language is right for your current needs and future career. Although there are a lot of languages out there at the moment, it is best to learn one that has staying power. All the languages on this list have been around for at least ten years and are still in wide use today. After all, you don't want to spend time and money learning a language, only for it to become obsolete as soon as you have your accreditation and are ready to start coding!

The great thing about learning a programming language is that it is a lot like learning a musical instrument: once you have learned one, it will be a lot easier to pick up another if you want to! All programming languages can be learned online, making them the ideal choice if you are not able to attend a college or university, or want to learn how to code around your current job and circumstances.

Want to find out more about learning a programming language?

If, after reading this, you are interested in learning a brand-new programming language, we are here to help. We offer a wide range of online courses. No matter your level of experience or the hours you have available to learn, we have the perfect course for your needs. Contact us today to find out which language is right for your needs and the learning options that are available.
Sensors and 4G LTE wireless networks revolutionize public sector cost-efficiency

The influx of wireless connectivity, Machine-to-Machine (M2M) communications and the Internet of Things (IoT) has had dramatic effects on individuals' lifestyles and the day-to-day operations of companies. For the public sector, connectivity is improving the well-being of the communities being served.

Appliances and devices on wireless networks make it possible to start a coffee machine, washer, dryer, heater, cooler, outdoor lighting, indoor lighting, music system, and a million other systems, all while in bed, on the road, or even out of town. It's a gateway to levels of efficiency and data that many of us never even dreamed of. It's that type of efficiency and data that has cities, counties, states, and more imagining a smarter world. In many ways, it's already happening.

Beyond 'smart cities'

What are the features of a smart city? Gartner Research defines a smart city as "an urbanized area where multiple sectors cooperate to achieve sustainable outcomes through the analysis of contextual real-time information sharing among sector-specific information and operational technology." The rise of the smart city has been enabled by consistent innovations in wireless technology, as well as by the proliferation of reliable, affordable, and secure connectivity. Unlike in the home, where IoT applications are confined to relatively small spaces, there are thousands upon thousands of ways that wireless technology can make cities and states run smarter, faster, and cheaper. The potential is virtually limitless.

Wireless connectivity improves communities in a variety of ways

Given tight budgets and pressure to get the most out of every taxpayer dime, public-sector IT departments are searching for new ways to do more with less. Examples abound of how municipal departments, school districts, water districts, emergency services, law enforcement, education, and many other agencies are maximizing the proliferation of sensors and 4G LTE connectivity:

- WiFi-connected laptops and tablets enable officers to do more of their work out in the field, giving them more hours each day to focus on keeping communities safe.
- First responders use 4G LTE for mission-critical communications, consolidating multiple agencies' frequencies on one device.
- Connected school buses enable educators to foster in-vehicle learning during field trips, trips to sporting events, and more.

Sensors everywhere, for everything

Cities, counties, and states are using sensors to acquire more information and make better resource decisions for their constituents:

- Smart apparel with embedded sensors monitors firefighters' location, body position, heart and respiratory rates, and body temperature.
- Public-sector administrators avoid emergencies, reduce emissions, and save money by monitoring the structural integrity of buildings, bridges, and dams.
- Cities use sensors to track which streets have been plowed after snowstorms.
- Environmental departments access real-time readings of pollution levels, wildlife counts, and water levels.

Remote controls for streamlined processes

Remote management saves valuable time and ensures that key data leads to improved cost-effectiveness:

- Real-time updates regarding power, heating, and cooling usage give organizations opportunities to regulate their in-office controls as needed.
- Water managers use SCADA (supervisory control and data acquisition: coded signals sent over communication channels to remote equipment) to remotely collect and analyze water samples, predict usage patterns and challenges, control valves, and more.
- Entities that place sensors in streets and traffic signals use the data to guide traffic patterns in ways that benefit local commuters.

Surveillance for improved public safety

Wireless technology enables self-contained surveillance cameras that gather important information:

- Law enforcement agencies use dashboard and body cameras to monitor and record encounters between officers and the public.
- Cameras enable dispatchers to remotely examine incident scenes in real time so they can accurately determine the right number of officers to deploy.
- Police use cameras to remotely spot stolen vehicles, theft, illegal dumping, and suspicious activities.

Data on the move

For years people have been buzzing about the concept of "smart roads," an infrastructure that could eventually lead to driverless cars. We're not there yet, but the surfaces we drive on are becoming a lot less passive. Sensors embedded in streets and traffic signals capture data that leads to decisions affecting congestion and energy use. While still in the trial stage, solar-powered roads could transform transportation. Paved with durable solar cells, the average American highway would be able to capture and store solar energy, which could then be used to operate digital traffic signage and charge electric vehicles as they pass by.

"Smart" applications are making our cities and states more and more efficient, but this is only possible with constant, secure wireless connectivity. With 4G LTE networks and best-in-breed routing solutions from Cradlepoint, keeping stationary and mobile locations connected is easier than ever, enabling the list of applications and improved efficiencies to continue to grow.

Click here to learn more about how wireless connectivity enables smart cities.
Bringing fabrication to the desktop

When your master's thesis consists of building soccer-playing robots, what do you do for a Ph.D.? For Evan Malone, a graduate student in mechanical engineering at Cornell University, the answer is clear: you design a three-dimensional fabrication machine capable of building complete robots (power supply, electronics, body, and motors) that can walk right out of the machine when complete. When Malone can do that, he'll walk out of Cornell with a doctorate, but in the process of working on his robot fabrication technology, he became interested in a different challenge: bringing fabrication to the desktop.

Three-dimensional fabrication tools, or "fabbers," have been common in industry for years. They can be used to make rapid models of car parts, gears, or other bits of industrial machinery in a matter of minutes. Most rely on a slow process of deposition in which various kinds of plastic are built up in layers to form the desired shape. Unfortunately for those who want to put this incredible technology to use in the home, the machines run $20,000 and up. Way up. Many of the machines cost well above $100,000.

Malone's goal was to build something cheap and reliable, something that hobbyists could use to kickstart a "home fabbing revolution" analogous to the personal computer revolution that hobbyists helped to launch in the early 1970s. The result was Fab@Home, an open-source project that provides drivers, applications software, and detailed design plans for assembling a three-dimensional desktop fabricator. Total cost: under $2,400. Malone's machine puts fabbing within hobbyist budgets for the first time.

Since the first Model 1 Fabber began life in the summer of 2006, Malone has launched a wiki and built a community of enthusiastic tinkerers, all in his spare time. The project has already attracted worldwide attention; Malone has taken his device to South Africa at the request of the government there, and one of the first Model 1 machines has already been requested for an exhibit at the Science Museum in London. Early machines are still primitive, but they work reliably. A Model 2 revision is already in the works.

Fab@Home is about more than making small plastic objects in your living room, however. Malone and his mentor, Dr. Hod Lipson, believe that such devices can change the world.

Digitizing the real world

"A machine that could make a huge variety of reasonably complicated objects, and yet was attainable by ordinary people, would transform human society to a degree that few creations ever have." So wrote Lipson in a 2005 article in the IEEE Spectrum, a piece in which he laid out his vision of building a machine that builds other, working machines. He's Malone's dissertation advisor, and the two of them have high hopes for the potential of home fabbers.

Most forms of information (text, light, and sound) were digitized long ago. In each case, the move to digital made words and movies and music both inexpensive and simple to shuttle around the globe. Sometimes, they made it a bit too simple, and copyright owners cried foul. Home fabrication tools may bring the same digital revolution to the world of objects, with stores one day selling not items but data files to be printed at home as easily as pages of text now spit from our printers. Fab@Home is the first small step on that journey. Lipson designed the software application that controls the fabber, while Malone actually built the machine and the interface to the computer.
When he talks about the Model 1, Malone often compares it to the Altair 8800 kit that made it possible for hobbyists to experiment at home with computers. Both projects, in fact, cost roughly the same amount in today's dollars.

[Image: Wheels for a Lego car]

This is still a machine for tinkerers to play with; it's not yet suited to serious commercial work, though Malone says that he has used a Model 1 for 100 hours without a breakdown. It accepts a wide range of deposition materials, anything from silicone to chocolate, and it can already be used to build replacement Lego wheels, houses made from easy-spray cheese, and chocolate letters.

[Image: Mmm... cheese house]

For those curious about how a Model 1 looks in action, Malone has produced videos of his machine building a watch band and a silicone bottle (a lower-resolution YouTube video is also available). The video clips are accelerated in order to stave off boredom; this is not yet a quick machine, and it can take hours to produce a complicated object.

[Image: Everyone loves chocolate]

Building such a machine is not for the faint of heart, but Malone has designed it to be relatively easy to assemble. If you have a hobbyist's bent, some Allen keys, screwdrivers, scissors, pliers, and a soldering iron, along with 18 to 24 hours of free time, a home fabber is now within your reach.
Two-factor authentication (2FA) is one of the best ways to prevent criminals from breaching your accounts. For platforms like Google, you are often required to enter the 2FA code when signing into a new device. Tap or click here to see how your texts could be hijacked.

This has been optional for the most part, but Google is now automatically enrolling users to make their accounts safer. Automatic enrollment can be turned off, but the rollout has already begun. Cybercriminals, though, are constantly finding ways to rip people off, and a new, sophisticated trick not only gives hackers access to your account but also bypasses the 2FA process. Keep reading for details.

Here's the backstory

Two-factor authentication adds a layer of security to your accounts: you need to enter your credentials to log in, but also a code that only you have access to. A shocking new trend on cybercrime forums, however, has been raising the eyebrows of security experts. An investigation by Motherboard revealed that hackers have employed bots to help bypass the 2FA process.

The hack requires a victim to willingly hand over their 2FA or authentication codes, and it's much easier than it sounds. Motherboard received a call, supposedly from PayPal, warning that an attempted purchase of $58.82 had been made from their account. "In order to secure your account, please enter the code we have sent your mobile device now," the bot explained. When the code was entered, a swift response said: "Thank you, your account has been secured and this request has been blocked."

The problem is it wasn't PayPal at all. It was a bot delivering the message on behalf of cybercriminals. If you hand over the authentication code sent to your phone, the criminal can use it to get into your account.

A crook needs your phone number, email address and log-in credentials for an account to successfully pull off this scam. They can get much of this information through data breaches or leaks that result in your details being posted on the Dark Web. It's an elaborate scheme, but it works.

What you can do about it

Two-factor authentication remains a reliable way to protect your assets. And since this hack requires your input, you can fight back. Here are some ways to stay safe:

- If you receive a call about an attempted purchase or breach, hang up the phone. Then call the company directly on an official number for more details.
- Never give out personal information over the phone. Criminals often rely on you to panic and make mistakes. Relax and think about what's happening logically.
- Never click on a link in an unsolicited text message or email. And don't download or open attachments.
Babies born with high levels of bad cholesterol and a certain type of fat may face a heightened risk for social and psychological problems in childhood, according to new scientific findings. In a study involving 1,369 children tracked from birth to 5 years of age, psychological scientists found that the results of a standard blood test taken at birth could predict how teachers rated the children on emotion regulation, self-awareness, and interpersonal behavior 5 years later. The results are published in Psychological Science.

Researchers Erika M. Manczak of the University of Denver and Ian Gotlib of Stanford University were specifically interested in looking at the long-term implications of infants' lipid profiles: a measurement of the amount of cholesterol and triglycerides in the blood. Triglycerides are fats that, at high levels, can increase the risk of stroke and heart disease.

Manczak and Gotlib used data from an ongoing study involving children born in the town of Bradford in the United Kingdom between March 2007 and December 2010. They looked at data involving 1,369 children from birth to 5 years of age. The babies were born to mothers of various ethnic backgrounds. When the children reached age 3, the mothers were asked to rate their child's health. And when the children were 4 to 5 years old, their teachers rated each of them on their psychological development, including self-confidence, emotional control, and interpersonal relationships. The teachers were asked to classify each child as below, at, or exceeding developmental expectations.

Manczak and Gotlib found that newborns whose cord blood showed high levels of high-density lipoprotein (HDL), known as the "good cholesterol" because it removes fat from artery walls, were significantly more likely to later receive higher ratings on psychological development from their teachers. In contrast, newborns whose cord blood tested high for triglycerides and very-low-density lipoprotein, known as the bad cholesterol, were more likely to receive low teacher ratings on social and emotional development. The results were consistent across ethnic groups and gender.

Manczak and Gotlib acknowledge that their findings are correlational and don't conclusively show that lipids in cord blood lead to psychological problems over time. But the results do introduce the possibility that lipids may be a new mechanism to consider when trying to understand the causes of mental health problems, they say.

"If this is replicated in other studies, it would suggest that lipid profiles at birth could play a role in identifying children who might be at heightened risk for psychological problems later, allowing health care providers to intervene early," Manczak says.
“It also introduces the possibility that lipids may be a new mechanism to consider when trying to understand what causes mental health problems.” Mental health disorders (MHD) are very common in childhood and they include emotional–obsessive-compulsive disorder (OCD), anxiety, depression, disruptive (oppositional defiance disorder (ODD), conduct disorder (CD), attention deficit hyperactive disorder (ADHD) or developmental (speech/language delay, intellectual disability) disorders or pervasive (autistic spectrum) disorders. Emotional and behavioural problems (EBP) or disorders (EBD) can also be classified as either “internalizing” (emotional disorders such as depression and anxiety) or “externalizing” (disruptive behaviours such as ADHD and CD). The terminologies of “problems” and “disorders” are interchangeably used throughout this article. While low-intensity naughty, defiant and impulsive behaviour from time to time, losing one’s temper, destruction of property, and deceitfulness/stealing in the preschool children are regarded as normal, extremely difficult and challenging behaviours outside the norm for the age and level of development, such as unpredictable, prolonged, and/or destructive tantrums and severe outbursts of temper loss are recognized as behaviour disorders. Community studies have identified that more than 80% of pre-schoolers have mild tantrums sometimes but a smaller proportion, less than 10% will have daily tantrums, regarded as normative misbehaviours at this age[2,3]. Challenging behaviours and emotional difficulties are more likely to be recognized as “problems” rather than “disorders” during the first 2 years of life. Emotional problems, such as anxiety, depression and post-traumatic stress disorder (PTSD) tend to occur in later childhood. They are often difficult to be recognised early by the parents or other carers as many children have not developed appropriate vocabulary and comprehension to express their emotions intelligibly. Many clinicians and carers also find it difficult to distinguish between developmentally normal emotions (e.g., fears, crying) from the severe and prolonged emotional distresses that should be regarded as disorders. Emotional problems including disordered eating behaviour and low self-image are often associated with chronic medical disorders such as atopic dermatitis, obesity, diabetes and asthma, which lead to poor quality of life[7–9]. Identification and management of mental health problems in primary care settings such as routine Paediatric clinic or Family Medicine/General Practitioner surgery are cost-effective because of their several desirable characteristics that make it acceptable to children and young people (CYP) (e.g., no stigma, in local setting, and familiar providers). Several models to improve the delivery of mental health services in the Paediatric/Primary care settings have been recommended and evaluated recently, including coordination with external specialists, joint consultations, improved Mental Health training and more integrated on-site intervention with specialist collaboration[10,11]. A review of relevant published literature was conducted, including published meta-analyses and national guidelines. We searched for articles indexed by Ovid, PubMed, PubMed Medical Central, CINAHL, the Cochrane Database of Systematic reviews and other online sources. The searches were conducted using a combination of search expressions including “childhood”, “behaviour”, “disorders” or “problems”. 
CLINICAL PRESENTATIONS OF CHILDHOOD BEHAVIOURAL AND EMOTIONAL DISORDERS

Various definitions for a wide range of childhood behavioural disorders are in use. The DSM-5 offers the most widely accepted standard criteria for the classification of mental and behaviour disorders; the ICD-10 is the alternative classification standard.

Any abnormal pattern of behaviour which is above the expected norm for age and level of development can be described as "challenging behaviour". It has been defined as: "Culturally abnormal behaviour(s) of such an intensity, frequency or duration that the physical safety of the person or others is likely to be placed in serious jeopardy, or behaviour which is likely to seriously limit or deny access to and use of ordinary community facilities". Challenging behaviours can include self-injury, physical or verbal aggression, non-compliance, disruption of the environment, inappropriate vocalizations, and various stereotypies. These behaviours can impede learning, restrict access to normal activities and social opportunities, and require a considerable amount of both manpower and financial resources to manage effectively.

Many instances of challenging behaviour can be interpreted as ineffective coping strategies of a young person, with or without learning disability (LD) or impaired social and communication skills, trying to control what is going on around them. Young people with a diverse range of disabilities, including LD, autism, and acquired neuro-behavioural disorders such as brain damage and post-infectious phenomena, may also use challenging behaviour for specific purposes: for example, for sensory stimulation, gaining the attention of carers, avoiding demands, or compensating for limited communication skills.

People with neurodevelopmental disorders are more likely to develop challenging behaviours. Some environmental factors have been identified that increase the risk of challenging behaviour, including settings offering limited opportunities for making choices, social interaction or meaningful occupation. Other adverse environments are characterized by limited sensory input or excessive noise, unresponsive or unpredictable carers, predisposition to neglect and abuse, and failure to identify physical health needs and pain promptly. The rates of challenging behaviour among teenagers and people in their early 20s, for example, are 30%-40% in hospital settings, compared with 5%-15% among children attending schools for those with severe LD.

Aggression is a common, yet complex, challenging behaviour, and a frequent reason for referral to child and adolescent psychiatrists. It commonly begins in childhood, with more than 58% of preschool children demonstrating some aggressive behaviour. Aggression has been linked to several risk factors, including individual temperament; the effects of disturbed family dynamics; poor parenting practices; exposure to violence; and the influence of attachment disorders. No single factor is sufficient to explain the development of aggressive behaviour. Aggression is commonly diagnosed in association with other mental health problems, including ADHD, CD, ODD, depression, head injury, mental retardation, autism, bipolar disorder, PTSD and dyslexia.

Disruptive behaviour problems

Disruptive behaviour problems (DBP) include attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD) and conduct disorder (CD). They constitute the commonest EBPs among CYP.
Recent evidence suggests that DBPs should be regarded as a multidimensional phenotype rather than as comprising distinct subgroups. ADHD is the commonest neuro-behavioural disorder in children and adolescents, with prevalence ranging between 5% and 12% in developed countries. ADHD is characterized by levels of hyperactivity, impulsivity and inattention that are disproportionately excessive for the child's age and development. The ICD-10 does not use the term "ADHD" but "hyperkinetic disorder", which is equivalent to severe ADHD. DSM-5 distinguishes between three subtypes of the disorder: predominantly hyperactive/impulsive, predominantly inattentive and combined types (Table 1).

Table 1. Subtypes of attention deficit hyperactivity disorder (based on DSM-5)

| | Predominantly inattentive (ADD) | Predominantly hyperactive/impulsive | Combined ADHD |
| --- | --- | --- | --- |
| Criteria | 6 of 9 inattentive symptoms | 6 of 9 hyperactivity/impulsivity symptoms | Both criteria for (1) and (2) |
| Symptoms | Fails to pay close attention to details or makes careless mistakes; has difficulty sustaining attention; does not appear to listen; struggles to follow through on instructions; has difficulty with organization; avoids or dislikes tasks requiring a lot of thinking; loses things; is easily distracted | Squirms and fidgets; can't stay seated; runs/climbs excessively; can't play/work quietly; "on the go"/"driven by a motor"; blurts out answers; is unable to wait for his turn; intrudes/interrupts others | Symptoms from both lists |
| Other criteria (all subtypes) | Onset before age of 12, lasting more than 6 mo; symptoms pervasive in 2 or more settings; causing significant impairment of daily functioning or development | | |

ADHD: Attention deficit hyperactivity disorder.

CD refers to severe behaviour problems (Table 2), characterized by repetitive and persistent manifestations of serious aggressive or non-aggressive behaviours against people, animals or property, such as being defiant, belligerent, destructive, threatening, physically cruel, deceitful, disobedient or dishonest; excessive fighting or bullying; fire-setting; stealing; repeated lying; intentional injury; forced sexual activity; and frequent school truancy[13,22]. Children with CD often have trouble understanding how other people think, sometimes described as being callous-unemotional. They may falsely misinterpret the intentions of other people as being mean. They may have immature language skills and lack the appropriate social skills to establish and maintain friendships, which aggravates their feelings of sadness, frustration and anger.
Table 2. DSM-5 definitions of oppositional defiant disorder and conduct disorder

Oppositional defiant disorder:
- A pattern of angry/irritable mood, argumentative/defiant behavior, or vindictiveness lasting at least 6 mo, as evidenced by at least four of the following 8 symptoms, exhibited during interaction with at least one individual who is not a sibling.
- Angry/irritable mood: (1) often loses temper; (2) is often touchy or easily annoyed; (3) is often angry and resentful.
- Argumentative/defiant behavior: (4) often argues with authority figures or, for children and adolescents, with adults; (5) often actively defies or refuses to comply with requests from authority figures or with rules; (6) often deliberately annoys others; (7) often blames others for his or her mistakes or misbehavior.
- Vindictiveness: (8) has been spiteful or vindictive at least twice within the past 6 mo.
- Note: the persistence and frequency of these behaviors should be used to distinguish a behavior that is within normal limits from one that is symptomatic; the behavior should occur at least once per week for at least 6 mo.
- The disturbance in behavior is associated with distress in the individual or others in his or her immediate social context (e.g., family, peer group, work colleagues), or it impacts negatively on social, educational, occupational, or other important areas of functioning.
- The behaviors do not occur exclusively during the course of a psychotic, substance use, depressive, or bipolar disorder, and the criteria are not met for disruptive mood dysregulation disorder.
- Specify current severity: mild, moderate or severe, based on the number of settings in which symptoms are shown.

Conduct disorder:
- A repetitive and persistent pattern of behavior in which the basic rights of others or major age-appropriate societal norms or rules are violated, as manifested by at least three of the following 15 criteria in the past 12 mo, with at least one criterion present in the past 6 mo.
- Aggression to people and animals: (1) often bullies, threatens, or intimidates others; (2) often initiates physical fights; (3) has used a weapon that can cause serious physical harm to others (e.g., a bat, brick, broken bottle, knife, gun); (4) has been physically cruel to people; (5) has been physically cruel to animals; (6) has stolen while confronting a victim (e.g., mugging, purse snatching, extortion, armed robbery); (7) has forced someone into sexual activity.
- Destruction of property: (8) has deliberately engaged in fire setting with the intention of causing serious damage; (9) has deliberately destroyed others' property (other than by fire setting).
- Deceitfulness or theft: (10) has broken into someone else's house, building, or car; (11) often lies to obtain goods or favors or to avoid obligations (i.e., "cons" others); (12) has stolen items of nontrivial value without confronting a victim (e.g., shoplifting, but without breaking and entering; forgery).
- Serious violations of rules: (13) often stays out at night despite parental prohibitions, beginning before age 13 yr; (14) has run away from home overnight at least twice while living in the parental or parental surrogate home, or once without returning for a lengthy period; (15) is often truant from school, beginning before age 13 yr.
- The disturbance in behavior causes clinically significant impairment in social, academic, or occupational functioning.
- If the individual is age 18 yr or older, criteria are not met for antisocial personality disorder.
- Specify whether: childhood-onset type (prior to age 10 yr), adolescent-onset type, or unspecified onset.
- Specify if with limited prosocial emotions: lack of remorse or guilt; callous lack of empathy; unconcerned about performance; shallow or deficient affect.
- Specify current severity: mild, moderate or severe.

ICD-10: also requires the presence of three symptoms from the list of 15 above, with a duration of at least 6 mo. It recognises four divisions of conduct disorder: socialised conduct disorder, unsocialised conduct disorder, conduct disorders confined to the family context, and oppositional defiant disorder.

CD is the commonest reason for referral of CYP for psychological and psychiatric treatment. Roughly 50% of all CYP with a MHD have a CD. About 30%-75% of children with CD also have ADHD, and 50% will also meet criteria for at least one other disorder, including mood, anxiety, PTSD, substance abuse, learning, or thought disorders[24,25]. The majority of boys have an onset of CD before the age of 10 years, while girls tend to present mainly between 14 and 16 years of age. Most CYP with CD grow out of the disorder, but a minority become more dissocial or aggressive and develop antisocial personality disorder as adults.

ODD is considered the mildest and commonest of the DBPs, with prevalence estimates of 6%-9% for pre-schoolers and boys outnumbering girls by at least two to one. CYP with ODD are typically openly hostile, negativistic, defiant, uncooperative, and irritable. They lose their tempers easily and are mean and spiteful towards others (Table 2). They are mostly defiant towards authority figures, but may also be hostile to siblings or peers. This pattern of adversarial behaviour has a significant negative impact on their lives at home, at school, and in wider society, and seriously impairs their relationships.

Emotional problems in later childhood include panic disorder, generalized anxiety disorder (GAD), separation anxiety, social phobia, specific phobias, OCD and depression. Mild to moderate anxiety is a normal emotional response to many stressful life situations. Anxiety is regarded as a disorder when it is disproportionately excessive in severity relative to the gravity of the triggering circumstances, leading to abnormal disruption of daily routines. Panic disorder is characterized by panic attacks untriggered by external stimuli. GAD is characterized by generalized worry across multiple life domains. Separation anxiety disorder is characterized by fear related to actual or anticipated separation from a caregiver. Social anxiety disorder (also called social phobia) is characterized by fear of social situations in which peers may negatively evaluate the person. Common manifestations of anxiety disorders include physical symptoms such as increased heart rate, shortness of breath, sweating, trembling, shaking, chest pain, abdominal discomfort and nausea. Other symptoms include worrying about things before they happen; constant concerns about family, school, friends, or activities; repetitive, unwanted thoughts (obsessions) or actions (compulsions); fears of embarrassment or making mistakes; low self-esteem; and lack of self-confidence. Depression often occurs in children under stress, experiencing loss, or having attentional, learning, conduct or anxiety disorders or other chronic physical ailments. It also tends to run in families[7–9,31].
Symptoms of depression are diverse and protean, often mimicking other physical and neurodevelopmental problems. They include low mood; frequent sadness, tearfulness or crying; decreased interest or pleasure in almost all activities, or inability to enjoy previously favourite activities; hopelessness; persistent boredom; low energy; social isolation; poor communication; low self-esteem and guilt; feelings of worthlessness; extreme sensitivity to rejection or failure; increased irritability, agitation, anger, or hostility; difficulty with relationships; frequent complaints of physical illnesses such as headaches and stomach aches; frequent absences from school or poor performance in school; poor concentration; a major change in eating and/or sleeping patterns; weight loss or gain when not dieting; talk of, or efforts to, run away from home; and thoughts or expressions of suicide or self-destructive behaviour.

Disruptive mood dysregulation disorder (DMDD), recently added to DSM-5, is a childhood disorder characterized by a pervasively irritable or angry mood. The symptoms include frequent episodes of severe temper tantrums or aggression (more than three episodes a week), in combination with a persistently negative mood between episodes, lasting for more than 12 mo in multiple settings and beginning after 6 years of age but before the child is 10 years old.

Autistic spectrum and pervasive developmental disorders

The definition of autism has evolved and broadened over time. DSM-IV-TR and the ICD-10 defined the diagnostic category of pervasive developmental disorders (PDD) as the umbrella terminology for a group of five disorders characterized by pervasive "qualitative abnormalities in reciprocal social interactions and in patterns of communication, and by a restricted, stereotyped, repetitive repertoire of interests and activities" affecting "the individual's functioning in all situations". These included autism, Asperger syndrome, childhood disintegrative disorder (CDD), pervasive developmental disorder not otherwise specified (PDD-NOS) and Rett syndrome. Autism and Asperger syndrome are the most widely recognised and clinically diagnosed of this group. CDD describes children who have a period of normal development for the first 2-3 years before a relatively acute onset of regression and emergence of autistic symptoms. PDD-NOS was used, particularly in the United States, to describe individuals who have autistic symptoms but do not meet the full criteria for autism or Asperger syndrome, to denote a milder version of autism, or to describe atypical autism symptoms emerging after 30 mo of age and autistic individuals with other co-morbid disorders.

The category of PDD has been removed from DSM-5 and replaced with autism spectrum disorder (ASD). ASD (Table 3) is diagnosed primarily by clinical judgment, usually by a multidisciplinary team, with minimal support from diagnostic instruments. Most individuals who received a diagnosis based on DSM-IV should maintain their diagnosis under DSM-5: some studies have confirmed that 91% to 100% of children with DSM-IV PDD diagnoses retained their diagnosis under the DSM-5 ASD category[35,36], while a systematic review found a slight decrease in the rate of ASD under DSM-5.
Table 3. DSM-5 criteria for autism spectrum disorder

A. Persistent deficits in social communication and social interaction across multiple contexts, as manifested by all 3 of the following, currently or by history:
- Deficits in social-emotional reciprocity, ranging, for example, from abnormal social approach and failure of normal back-and-forth conversation; to reduced sharing of interests, emotions, or affect; to failure to initiate or respond to social interactions.
- Deficits in nonverbal communicative behaviours used for social interaction, ranging, for example, from poorly integrated verbal and nonverbal communication; to abnormalities in eye contact and body language or deficits in understanding and use of gestures; to a total lack of facial expressions and nonverbal communication.
- Deficits in developing, maintaining, and understanding relationships, ranging, for example, from difficulties adjusting behavior to suit various social contexts; to difficulties in sharing imaginative play or in making friends; to absence of interest in peers.

B. Restricted, repetitive patterns of behavior, interests, or activities, as manifested by at least 2 of the following 4, currently or by history:
- Stereotyped or repetitive motor movements, use of objects, or speech (e.g., simple motor stereotypies, lining up toys or flipping objects, echolalia, idiosyncratic phrases).
- Insistence on sameness, inflexible adherence to routines, or ritualized patterns of verbal or nonverbal behavior (e.g., extreme distress at small changes, difficulties with transitions, rigid thinking patterns, greeting rituals, need to take the same route or eat the same food every day).
- Highly restricted, fixated interests that are abnormal in intensity or focus (e.g., strong attachment to or preoccupation with unusual objects, excessively circumscribed or perseverative interests).
- Hyper- or hyporeactivity to sensory input or unusual interest in sensory aspects of the environment (e.g., apparent indifference to pain/temperature, adverse response to specific sounds or textures, excessive smelling or touching of objects, visual fascination with lights or movement).

C. Symptoms must be present in the early developmental period (but may not become fully manifest until social demands exceed limited capacities, or may be masked by learned strategies in later life).

D. Symptoms cause clinically significant impairment in social, occupational, or other important areas of current functioning.

Specify: with or without accompanying intellectual impairment; with or without accompanying language impairment; whether associated with a known medical or genetic condition or environmental factor. Specify current severity based on social communication impairments and restricted, repetitive patterns of behavior.

There are many intervention approaches and strategies, used alone or in combination, for supporting individuals with ASD. These interventions need to be individualized and closely tailored to the person's level of social and linguistic ability, cultural background, family resources, learning style and degree of communication skill. Various communication enhancement strategies have been designed for managing ASD, including augmentative and alternative communication (AAC), facilitated communication, computer-based instruction and video-based instruction (Table 4). Several behavioural and psychological interventions (Table 5) have also been used successfully in managing children with ASD, including applied behaviour analysis (ABA) and functional communication training (FCT).
Table 4. Summary of common social communication enhancement strategies

- Augmentative and alternative communication: supplements/replaces natural speech and/or writing with aided (e.g., Picture Exchange Communication System, line drawings, Blissymbols, speech generating devices, and tangible objects) and/or unaided (e.g., manual signs, gestures, and finger spelling) symbols. Effective in decreasing maladaptive or challenging behaviour such as aggression, self-injury and tantrums; promotes cognitive development and improves social communication[39,129–131].
- Activity schedules/visual supports: using photographs, drawings, or written words that act as cues or prompts to help individuals complete a sequence of tasks/activities or behave appropriately in various settings. Scripts are often used to promote social interaction and to initiate or sustain interaction.
- Computer-/video-based instruction: use of computer technology or video recordings for teaching language skills, social skills, social understanding, and social problem solving.

Table 5. Summary of common behavioural modification strategies for the management of childhood emotional and behavioural disorders

- ABA: uses principles of learning theory to bring about meaningful and positive change in behaviour, to help individuals build a variety of skills (e.g., communication, social skills, self-control, and self-monitoring) and to help generalize these skills to other situations[122,123].
- Discrete trial training: a one-to-one instructional approach based on ABA that teaches skills in small, incremental steps in a systematic, controlled fashion, documenting a clearly identified antecedent and consequence (e.g., reinforcement in the form of praise or tangible rewards) for each desired behaviour.
- Functional communication training: combines ABA procedures with the communicative functions of maladaptive behaviour to teach alternative responses and eliminate problem behaviours.
- Pivotal response treatment: a play-based, child-initiated behavioural treatment designed to teach language, decrease disruptive behaviours, and increase social, communication and academic skills, building on a child's initiative and interests.
- Positive behaviour support: uses ABA principles with person-centred values to foster skills that replace challenging behaviours, with positive reinforcement of appropriate words and actions. PBS can be used to support children and adults with autism and problem behaviours.
- Self-management: uses interventions to help individuals learn to independently regulate, monitor and record their behaviours in a variety of contexts, and reward themselves for using appropriate behaviours. It's been found effective for ADHD and ASD children.
- Time delay: gradually decreases the use of prompts during instruction over time. It can be used with individuals regardless of cognitive level or expressive communication abilities.
- Incidental teaching: utilizes naturally occurring teaching opportunities to reinforce desirable communication behaviour.
- Anger management: various strategies can be used to teach children how to recognise the signs of their growing frustration and learn a range of coping skills designed to defuse their anger and aggressive behaviour, teaching them alternative ways to express anger, including relaxation techniques and stress management skills.

ABA: Applied behaviour analysis; ADHD: Attention deficit hyperactivity disorder; ASD: Autistic spectrum disorder.
Social (pragmatic) communication disorder

Social (pragmatic) communication disorder (SCD) is a new diagnosis included under communication disorders in the neurodevelopmental disorders section of DSM-5. It is characterized by persistent difficulties with using verbal and nonverbal communication for social purposes, in the absence of restricted and repetitive interests and behaviours, which can interfere with interpersonal relationships, academic achievement and occupational performance (Table 6). Some authors consider that CYP with SCD present with similar but less severe restricted and repetitive interests and behaviours (RRIBs) than those characteristic of children on the autistic spectrum. SCD is thought to occur more frequently in family members of individuals with autism.

Table 6. DSM-5 criteria for social (pragmatic) communication disorder

A. Persistent difficulties in the social use of verbal and nonverbal communication, as manifested by all of the following:
- Deficits in using communication for social purposes, such as greeting and sharing information, in a manner that is appropriate for the social context.
- Impairment in the ability to change communication to match context or the needs of the listener, such as speaking differently in a classroom than on a playground, talking differently to a child than to an adult, and avoiding use of overly formal language.
- Difficulties following rules for conversation and storytelling, such as taking turns in conversation, rephrasing when misunderstood, and knowing how to use verbal and nonverbal signals to regulate interaction.
- Difficulties understanding what is not explicitly stated (e.g., making inferences) and nonliteral or ambiguous meanings of language (e.g., idioms, humor, metaphors, multiple meanings that depend on the context for interpretation).

B. The deficits result in functional limitations in effective communication, social participation, social relationships, academic achievement, or occupational performance, individually or in combination.

C. The onset of the symptoms is in the early developmental period (but deficits may not become fully manifest until social communication demands exceed limited capacities).

D. The symptoms are not attributable to another medical or neurological condition or to low abilities in the domains of word structure and grammar, and are not better explained by autism spectrum disorder, intellectual disability (intellectual developmental disorder), global developmental delay, or another mental disorder.

The term "pragmatic" has previously been used to describe the communication skills needed in normal social intercourse and the rules that govern routine interpersonal interactions, including the ability to pay at least some attention to the other person in a conversation, to take turns, to avoid interrupting the other speaker unless there is a very good reason, and to match language and volume to the situation and the listener. Social and pragmatic deficits are also known to occur in diverse clinical populations, including ADHD, Williams syndrome, CD, closed head injury and spina bifida/hydrocephalus. Treatment modalities used to support children with SCD are similar to those that have been used for several years in children with ASD (Tables 4 and 5). The first randomized controlled trial of social communication interventions designed primarily for children with SCD was reported in 2012.
The Social Communication Intervention Project (http://www.psych-sci.manchester.ac.uk/scip/) targets development in social understanding and interaction, verbal and non-verbal pragmatic skills, and language processing among children with SCD.

Pathological demand avoidance or Newson's syndrome

Pathological demand avoidance (PDA), or Newson's syndrome, is increasingly being accepted as part of the autism spectrum. The term was first used in 2003 to describe some CYP with autistic symptoms who showed particular challenging behaviours. It is characterized by exceptional levels of avoidance of the demands made by others, driven by high anxiety when the individual feels they are losing control. Avoidance strategies can range from simple refusal, distraction, giving excuses, delaying, arguing, suggesting alternatives and withdrawing into fantasy, to becoming physically incapacitated (with an explanation such as "my legs don't work") or selectively mute in many situations. If pressed to comply, they may become verbally or physically aggressive in what is best described as a "panic attack", apparently intended to shock. They tend to resort to "socially manipulative" behaviours. The outrageous acts and lack of concern about their behaviour appear to draw parallels with conduct problems (CP) and callous-unemotional traits (CUT), but reward-based techniques, effective in CP and CUT, seem not to work in people with PDA. PDA is currently part of neither DSM-5 nor ICD-10.

Though demand avoidance is a common characteristic of CYP with ASD, it becomes pathological when its levels are disproportionately excessive and normal daily activities and relationships are negatively affected. Unlike typically autistic children, people with PDA tend to have much better social communication and interaction skills, and are consequently able to use these abilities to their advantage. They often show highly developed social mimicry and role play, sometimes becoming different characters or personas. People with PDA appear to retain a keen awareness of how to "push people's buttons", suggesting a level of social insight greater than that of CYP with autism. On the other hand, children with PDA exhibit higher levels of emotional symptoms than those with ASD or CD. They also often experience excessive mood swings and impulsivity. While the prevalence of ASD in boys is more than four times that in girls, the risk of developing PDA appears to be the same for boys and girls.

O'Nions et al have recently reported the development and preliminary validation of the Extreme Demand Avoidance Questionnaire (EDA-Q), designed to quantify PDA traits based on parent-reported information, with good sensitivity (0.80) and specificity (0.85). The EDA-Q is available online (https://www.pdasociety.org.uk/resources/extreme-demand-avoidance-questionnaire).
As we all know, multimode fiber is usually divided into OM1, OM2, OM3 and OM4. Then what about single mode fiber cable? In general, single mode fiber cable is categorized into OS1 and OS2 fiber. OS1 and OS2 are cabled single mode optical fiber specifications, and there are many differences between them. This text will compare OS1 and OS2 and then offer a guide on how to choose the right fiber optic cable for your applications.

OS1 single mode fibers are compliant with the ITU-T G.652A or ITU-T G.652B standards. The low-water-peak fibers defined by ITU-T G.652C and G.652D also come under OS1 single mode fibers. That is to say, OS1 is compliant with the specifications of ITU-T G.652. However, OS2 single mode fibers are compliant only with the ITU-T G.652C or ITU-T G.652D standards, which means OS2 applies explicitly to low-water-peak fibers. These low-water-peak fibers are usually used for CWDM (Coarse Wavelength Division Multiplexing) applications.

Besides the standards, the main difference between OS1 and OS2 single mode fiber is the cable construction. Typically, OS1 cabling is of tight-buffered construction, which is usually used for indoor applications such as campuses or data centers. OS2 cabling is of loose-tube design, a construction appropriate for outdoor cases such as street, underground and burial installations. For this reason, OS1 indoor fiber has greater loss per kilometer than OS2 outdoor fiber: in general, the maximum attenuation for OS1 is 1.0 dB/km and for OS2 is 0.4 dB/km. As a result, the maximum transmission distance of OS1 single mode fiber is 2 km, while that of OS2 single mode fiber reaches 5 km and can be up to 10 km. For all these reasons, OS1 is much cheaper than OS2. One point worth noting is that both OS1 and OS2 single mode fibers support speeds of 1 to 10 gigabit Ethernet over their rated distances.

The differences between OS1 and OS2 discussed above are listed in the table below, from which you can get a clear overview.

| |OS1|OS2|
|---|---|---|
|Standards|ITU-T G.652A/B/C/D|ITU-T G.652C/D|
|Construction|Tight buffered|Loose tube|
|Maximum attenuation|1.0 dB/km|0.4 dB/km|
|Distance|2 km|10 km|

Having learned the differences between OS1 and OS2 single mode fiber cable, which cable should you choose? First, if you want to use it for an indoor application, OS1 is better for you; for outdoor applications, you should choose OS2. Second, there is no benefit to be gained in using OS2 cable for runs under 2 km; OS2 is best for distances over 2 km. Finally, note that OS1 is much cheaper than OS2; to save cost, if OS1 is sufficient for your application there is no need to use OS2. Fiberstore offers OS1 and OS2 single mode fiber cable as well as all kinds of multimode fiber cable. It is your optimal selection.
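These attenuation and distance figures follow from a simple loss-budget calculation, sketched below in Python. The per-kilometer attenuation values come from the text above; the 10 dB power budget is an assumed figure chosen for illustration only, not a vendor specification:

```python
# Loss-budget sketch: fiber loss (attenuation x distance) must fit within the
# link's optical power budget. Attenuation values are the maxima quoted above.
ATTENUATION_DB_PER_KM = {"OS1": 1.0, "OS2": 0.4}

def fiber_loss_db(fiber_type: str, distance_km: float) -> float:
    """Attenuation accumulated over the given run length, in dB."""
    return ATTENUATION_DB_PER_KM[fiber_type] * distance_km

def max_reach_km(fiber_type: str, power_budget_db: float) -> float:
    """Longest run whose fiber loss alone stays within the power budget."""
    return power_budget_db / ATTENUATION_DB_PER_KM[fiber_type]

if __name__ == "__main__":
    budget_db = 10.0  # assumed power budget, for illustration only
    for fiber in ("OS1", "OS2"):
        print(f"{fiber}: a 2 km run loses {fiber_loss_db(fiber, 2.0):.1f} dB; "
              f"~{max_reach_km(fiber, budget_db):.0f} km max on a {budget_db:.0f} dB budget")
```

On a 10 dB budget the sketch gives roughly 10 km for OS1 and 25 km for OS2; real links reach less because connectors, splices and safety margins also consume part of the budget, which is consistent with the 2 km and 10 km ratings above.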
MQTT and CoAP: Security and Privacy Issues in IoT and IIoT Communication Protocols

Machine-to-machine (M2M) communication protocols, which enable machines to "talk" with one another so that commands are communicated and data is transmitted, are indispensable to applications and systems that make use of the internet of things (IoT) and the industrial internet of things (IIoT). Message Queuing Telemetry Transport (MQTT) is a communication protocol widely used in both IoT and IIoT deployments. MQTT is a publish-subscribe protocol that facilitates one-to-many communication mediated by brokers. Clients can publish messages to a broker and/or subscribe to a broker to receive certain messages. Messages are organized by topics, which are essentially "labels" that act as a system for dispatching messages to subscribers. Constrained Application Protocol (CoAP), on the other hand, is a client-server protocol that, unlike MQTT, is not yet standardized. With CoAP, a client node can command another node by sending a CoAP packet. The CoAP server interprets it, extracts the payload, and decides what to do depending on its logic. The server does not necessarily have to acknowledge the request.

MQTT is preferred over CoAP for mission-critical communications because it can enforce quality of service and ensure message delivery. CoAP, for its part, is preferred for gathering telemetry data transmitted from transient, low-power nodes like tiny field sensors. Despite fulfilling different needs, both protocols are fundamental in IoT and IIoT deployments, where fast and flexible data exchange is a basic operational requirement.

Unsecure protocols and exposed records

An internet-wide scan of exposed MQTT endpoints conducted by IOActive's Lucas Lundgren between 2016 and 2017 revealed a clear deployment problem among tens of thousands of unsecure MQTT hosts. Smart-home-centric MQTT research was also released by Avast in 2018, highlighting the lack of secure configurations and the likelihood of misconfigurations in home devices that use MQTT. We decided to look into the same problem, include CoAP in the picture, and see whether there is now more awareness surrounding it. What we found was striking: hundreds of thousands of MQTT and CoAP hosts combined are reachable via public-facing IP addresses. Overall, this provides attackers with millions of exposed records. Finding exposed endpoints in virtually every country is feasible because of the inherent openness of the protocols and publicly searchable deployments.

We also outlined design issues and implementation vulnerabilities that can contribute to the number of unsecure deployments we found. One design issue we discovered (designated CVE-2017-7653 for Mosquitto, the most popular broker), for instance, can allow a malicious client to supply invalid data. By using the message-retain option and modifying the quality of service (QoS), an attacker can cause clients to be flooded with the same (retained) message over and over. Unsecure endpoints, moreover, can expose records and leak information, some of which we found to be related to critical sectors, for any casual attacker to see. Vulnerable endpoints also run the risk of denial-of-service (DoS) attacks or can even be taken advantage of to gain full control.

This is an excerpt from an article published originally here. For in-depth analyses and insights, read the research "The Fragility of Industrial IoT's Data Backbone.
Security and Privacy Issues in MQTT and CoAP Protocols", written by Trend Micro Research with EURECOM and Politecnico di Milano (POLIMI).
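To make the broker-mediated publish-subscribe model described above concrete, here is a minimal sketch using the widely used paho-mqtt Python client (v1.x callback API); the broker hostname and topic are placeholders for illustration, not endpoints from the research:

```python
# Minimal MQTT publish-subscribe sketch (paho-mqtt v1.x API).
# Broker address and topic are illustrative placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"        # placeholder broker hostname
TOPIC = "plant/line1/temperature"    # topics act as labels for dispatching messages

def on_connect(client, userdata, flags, rc):
    # Subscribe once the broker acknowledges the connection.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # The broker dispatches every message published to the topic to all subscribers.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

# NOTE: connecting without TLS or credentials, as below, reproduces the kind of
# exposure the research warns about; a hardened broker would require
# client.tls_set() and client.username_pw_set() before connect().
client.connect(BROKER, 1883, 60)

# QoS 1 asks the broker to confirm delivery -- one reason MQTT suits
# mission-critical traffic better than CoAP.
client.publish(TOPIC, "21.5", qos=1)
client.loop_forever()
```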
How much of this page will you read? How much will you remember? And does it make a difference when you’re reading, or where? Those are the sorts of questions that a University of Chicago neuroscientist asks in an innovative new study – one that examines brain scans to uncover how attention is sustained over time, and when it might fluctuate. “Maybe in general, we’re pretty good at paying attention, or maybe we struggle – but it’s not the same all the time,” said lead author Monica Rosenberg, an assistant professor in UChicago’s Department of Psychology. “We wanted to build a model that could predict a person’s attentional state based on what we see in their brain scans.” Published in the Proceedings of the National Academy of Sciences, the study relies on functional MRI data collected for this study as well as data from previous research, combining the results of 107 individuals from five different data sets. By using what Rosenberg calls “green science” – replicating results in data collected for other purposes – the study expands its pool of participants beyond what is usually found in a single lab. The research examines functional MRI scans of people who performed a computerized task multiple times in one day – watching a stream of images and pressing a button in response to some of them – as well as those who performed the same task on different days. It also examines brain scans of those who have been administered anesthesia, as well as 30 scans of a single individual over the course of 10 months. The participants’ ages ranged from 18 to 56. “If we want to build brain-based models that are applicable in clinical or translational settings, they have to be able to generalize across data sets,” said Rosenberg, an expert on attention. “It has to be the case that models don’t just predict behavior from data collected on a single hospital scanner from a single group of individuals. “If a model can’t predict something about people across different sites and populations, it’s less practically useful.” Prior research has found that every person has a unique pattern of functional brain connectivity – a sort of fingerprint that can predict their cognitive and attentional abilities. Rosenberg and her co-authors – including scholars from Yale University and the University of Florida – tested whether those patterns could extend to predict how a person’s attention changes from moment to moment, or day to day. They found that patterns of functional brain connectivity reliably predicted when people were more and less focused on the computer task. These predictions were highly accurate when averaged across many scan sessions. However, the patterns still predicted attentional state even when measured in a short window of time, such as 30 seconds of an fMRI session. Previous studies have historically used single data sets, due in part to the high cost of fMRI. “It’s only in the past couple of years that sharing data sets has become much more common,” Rosenberg said. “That’s what gives us access to a wider variety of samples, which allow us to ask how general our models are.” Rosenberg hopes further research can provide insights into how attention changes over longer periods of time, like development and aging. She is also in the process of testing whether predictive models can translate to settings outside the lab.
For example, her lab is asking whether patterns of functional brain connectivity can predict attention fluctuations as we listen to a story or watch a movie. “When we collect brain data in an MRI scanner,” she said, “we often give people psychological tasks that involve seeing pictures and pressing buttons. That’s really not how we navigate the world.” Our ability to stay focused is limited: prolonged performance of a task typically results in mental fatigue and decrements in performance over time. This so-called vigilance decrement has been attributed to depletion of attentional resources, though other factors such as reductions in motivation likely also play a role. In this study, we examined three electroencephalography (EEG) markers of attentional control, to elucidate which stage of attentional processing is most affected by time-on-task and motivation. To elicit the vigilance decrement, participants performed a sustained attention task for 80 min without breaks. After 60 min, participants were motivated by an unexpected monetary incentive to increase performance in the final 20 min. We found that task performance and self-reported motivation declined rapidly, reaching stable levels well before the motivation manipulation was introduced. Thereafter, motivation increased back up to the initial level, and remained there for the final 20 min. While task performance also increased, it did not return to the initial level, and fell to the lowest level overall during the final 10 min. This pattern of performance changes was mirrored by the trial-to-trial consistency of the phase of theta (3–7 Hz) oscillations, an index of the variability in timing of the neural response to the stimulus. As task performance decreased, temporal variability increased, suggesting that attentional stability is crucial for sustained attention performance. The effects of attention on our two other EEG measures – early P1/N1 event-related potentials (ERPs) and pre-stimulus alpha (9–14 Hz) power – did not change with time-on-task or motivation. In sum, these findings show that the vigilance decrement is accompanied by a decline in only some facets of attentional control, which cannot be fully brought back online by increases in motivation. The vigilance decrement might thus not occur due to a single cause, but is likely multifactorial in origin. University of Chicago
In 1940, Frank Knox contacted Ed Booz. As the newly appointed Secretary of the Navy, he had a vision for a two-ocean navy, one that could win campaigns in both the Atlantic and the Pacific. But the Navy was under-equipped. It did not have enough ships. Its headquarters were in a temporary building left over from World War I, and its telephone, internal mail, and intelligence systems were out of date. “What’s our job?” Knox asked his admirals. “To double the Navy,” they replied. They estimated that it would take about four years. “We have only half that time,” Knox said. “It must be done by 1942.”
Hurricane Sandy, which struck the U.S.’ Eastern seaboard last October, exemplified the use of social networks in disaster communications. Millions of residents were affected. For many, real-time information was provided through independent, citizen-generated Facebook pages like Jersey Shore Hurricane News, and through Twitter. “Harnessing the power of Facebook and Twitter, Jersey Shore Hurricane News was able to keep people informed in real-time — as events were unfolding,” Justin Auciello, who started Jersey Shore Hurricane News, told TechNewsWorld shortly afterward. His bulletin-board page had received 191,000 Likes at the time. TechNewsWorld featured some of the 21st-century citizen-created news sources like those JSHN utilized during Sandy in “When the Lights Go Out, Social Nets Can Be More Than Friends.”

The Credibility of Crowdsourcing

This reliance on social media is new. Governments historically have been in charge of distributing information in a disaster through established emergency management systems, utilizing classic media like television. How are they adapting to the two-way social media channels now complementing conventional platforms like television, radio and the Web? “Social media provides an excellent opportunity not only to disseminate urgent emergency information in real time, but also to crowdsource situational information from affected locals on the ground,” said Axel Bruns, Ph.D., associate professor of media and communication at Queensland University of Technology. Parts of Queensland saw major flooding in 2011, and Twitter was used extensively by police, Bruns found while compiling a report on the use of social media during the disaster. Emergency services and media organizations were among the most visible on Twitter, partly through retweets, note Bruns and coauthor Jean Burgess in the report. Leading Twitter accounts, including those from police media bureaus, received about 25 retweets for each message. Those messages primarily focused on situational information and advice that was crucial for public safety purposes. A police media bureau Twitter account, tagged @QPSMedia, also tackled rumors and misinformation via Twitter — and those tweets were widely retweeted too. Twitter became a source for mainstream media, according to the report, partly because social network users in the field included photographs and videos in their tweets. Bruns found that one in every five #qldfloods links was to an image. “Emergency services around the world are now actively developing their social media crisis communication strategies,” Bruns added.

Everyone’s a Reporter

It wasn’t easy for local governments, which traditionally have controlled the dissemination of disaster information, to allow communications to become two-way, as happens with social networking. “Those working in government are concerned about relinquishing control of the distribution of information,” explained Anthony S. Mangeri, a professor at the American Public University System. Mangeri, who specializes in emergency management initiatives, was operations chief for the New Jersey Emergency Operations Center during 9/11. “Protection of one’s brand reputation is an essential element of any good crisis management strategy,” he told TechNewsWorld. “However, many in government and private sector public affairs are aware that the world has changed, and the methods for collecting and distributing information have also changed,” Mangeri acknowledged.
“Everyone with a cellphone is now a reporter and a videographer, capable of sharing information with the world through various social media channels,” he said. Local and state government crisis communications plans are not complete unless they include a strategy to monitor and react to social media, said Mangeri.

Coordination Is Critical

Virtual Operations Support Teams are one answer, he suggested. Members of the team keep an eye on social media channels, looking for trends and patterns of information. They then provide analysis for the emergency operations center and respond to posts. “An emerging value of social media in time of crisis is the ability of emergency operations centers to monitor posted information as an incident unfolds. Virtual Operations Specialists can search for trends and patterns of posted information to analyze the impact of a crisis on the community,” explained Mangeri. “If interpreted properly, two-way communication via social media provides potential insights into serious unmet human needs, adding a depth to the damage assessment process that was unavailable before,” he said. Emergency organizations should engage with and respond to public messages received via social network accounts, advises Bruns in his report on the Queensland flooding. An established presence on Twitter is important, as is an understanding of user practices like specific hashtags — for example, #qldfloods. Training is important. In addition, coordination must take place between different emergency and government services, as well as media, to avoid conflicting messages, he says. Organizations need to take the time to develop written protocols, policies and procedures when implementing a social media disaster plan, said Mangeri. “Organizations and corporations that do not currently have social media as part of their crisis communications plan should begin now.”
Two decades ago, popular films like “The Matrix”, starring Keanu Reeves, and Sandra Bullock’s “The Net” tried to portray what the emerging digital reality would look like. They depicted how humans and sentient machines could live in a simulated digital reality, including the dangers and risks associated with the digital world. While most of these science fiction predictions have not turned into reality, people live digital lifestyles, and “digital realities” form an essential part of their everyday lives. A great deal of our commercial transactions are digital, as are our interactions with friends and family. Likewise, we use a great number of apps for leisure and infotainment. During any given day, a person with a digital lifestyle will take advantage of many applications, devices and appliances that boost his/her interactions and transactions in the digital world. Prominent examples can be found across many different areas.

All these applications keep track of our activities and behavior throughout the day, and are able to create and maintain our digital profile in the respective application areas. Apart from such applications, people are everyday users of popular social media platforms such as Facebook, Twitter, LinkedIn and Instagram, which they use for their personal and professional interactions. With over two billion users, Facebook is probably the most highly populated community on the planet. It is worth noting that over 1 billion Facebook users are active, logging in to the platform on a daily basis. Facebook and the other social media platforms are also able to develop and maintain a person’s digital profile, which in most cases comprises more detailed information than our application-specific profiles.

Our rich set of activities in the digital world enables a parallel digital reality, which is stored and processed as part of different applications, platforms and ecosystems. This digital reality intersects with our physical reality, as our digital transactions and interactions are reflected in artifacts in the real world. Likewise, digital information is used to drive our real-world activities, such as healthcare or transport decisions. At the same time, several of our activities in the physical world (e.g., our face-to-face meetings or a dinner at a restaurant) are also recorded and reflected in the digital world (e.g., through calendar apps or check-ins on social media platforms). Overall, there are cases where activities in the real world have counterpart representations in the digital world and vice versa. This gives rise to the idea of modelling and maintaining a digital model of our activities, representing a “mini-world” associated with our life. This digital model could remain synchronized with the real world: changes to our real-world status would be reflected in the digital model and vice versa. The idea of such a synchronized representation has its roots in the fourth industrial revolution (Industry 4.0), where all real-world items (e.g., machines, sensors, devices, robots) and their interactions have a faithful, synchronized digital representation. Similar to Industry 4.0, the synchronization of our digital reality with our real-life activities opens up opportunities for managing our lives digitally, bringing the benefits of fast and accurate processing to our decisions and actions.
For instance, this would permit the development of “DigitalMe”, a digital shadow that could continually provide us with useful information and advice, taking into account our context and preferences. Towards this vision, there is certainly a need to consolidate digital information, which is currently fragmented across many different digital platforms, social media platforms and related ecosystems. This is to some extent attempted by several platform providers, which unify personal information that flows through the apps managed in their ecosystems. Google, Microsoft and Apple are prominent examples of large ecosystems that provide us with consolidated information about our interactions with them. For example, Google consolidates information from Gmail, Google+, YouTube and the other applications it provides in its ecosystem. To a lesser extent, it is nowadays possible to consolidate information spanning multiple, diverse ecosystems. Some early efforts can be found among social media platforms, based on applications that integrate information from multiple accounts on different platforms. As an example, Feedient enables the consolidation of different social media feeds (including Twitter, Facebook, Instagram, YouTube, and Tumblr) into a single feed, which it presents in a scrollable dashboard. Similarly, the Fuse mobile application is able to aggregate a user’s social networking activity into a single “fused” feed, which can be processed in a unified fashion, i.e., as if it were coming from a single platform. These applications demonstrate the unification concept (see the sketch below), yet they are far from providing a consolidated digital reality for a human user. The latter requires much more sophisticated processing over the integrated data, such as the social network analysis performed by tools like socilab. While we are still far from developing the synchronized digital reality of a human being, the underlying technologies (e.g., IoT, Big Data, AI) are already here and developing at a rapid pace. Nevertheless, one also has to consider a number of non-technological barriers. Sooner or later, the era of “digital” human beings will materialize. However, the exact form this digitalization will take remains to be seen.
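The feed unification performed by tools like Feedient and Fuse can be illustrated with a short sketch: given per-platform feeds of timestamped items, produce a single chronologically ordered stream. The platform names, field names and sample items below are assumptions chosen for illustration, not the actual data models of those services:

```python
# Sketch of feed unification: merge per-platform feeds into one timeline.
# Field names and sample items are illustrative only.
from heapq import merge
from operator import itemgetter

facebook = [{"ts": 1100, "platform": "facebook", "text": "Checked in at a restaurant"}]
twitter = [{"ts": 1050, "platform": "twitter", "text": "Posted about a conference"},
           {"ts": 1200, "platform": "twitter", "text": "Shared an article"}]
calendar = [{"ts": 1150, "platform": "calendar", "text": "Dinner with friends"}]

def unified_feed(*feeds):
    """Merge per-platform feeds into one stream ordered by timestamp."""
    sorted_feeds = (sorted(f, key=itemgetter("ts")) for f in feeds)
    return list(merge(*sorted_feeds, key=itemgetter("ts")))

for item in unified_feed(facebook, twitter, calendar):
    print(item["platform"], item["text"])
```

A consolidated “DigitalMe” would then run profiling and analysis over such a unified stream rather than over each silo separately.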
Every data scientist without a degree loves the labor shortage! But why?

Data scientist without a degree: non-techs battle data science labor shortages

Data science continues to evolve as one of the most promising and in-demand career paths for skilled professionals, offering huge opportunities for advancement in the future. To uncover useful insights for their organizations, data scientists must master the entire data science lifecycle and possess the flexibility and understanding needed to maximize returns at every phase of the process. A data scientist is an analytical expert who uses technology and social science skills to find trends and manage data. The term “data scientist” was coined in 2008, when companies realized the need for data professionals who could organize and analyze massive amounts of data. The work includes preparing data for analysis: cleaning, aggregating and manipulating data in order to perform advanced analysis.

The data science shortage is not simply a matter of there being too few educated people to become data scientists.

The reasons for the shortage of data scientists: The main reason for the shortage of data scientists in the industry is a lack of skills: organizations are not able to find the required data science skills among data science aspirants, and the growing demand for analytics in business has resulted in an exponential shortage of data scientists. Many organizations make do with a software engineer or analyst who can analyze data to keep the business running without a data scientist. At the same time, organizations typically require a master’s degree plus a few years of experience, while newcomers have no data science experience to offer; since companies treat experience as necessary for the position, this forms a dead end.

Data scientists without a degree in the field of data science: Data scientists require many skills, both technical and non-technical; whether or not they hold a degree in data science is not essential. Many organizations have specific data-related requirements that call for in-depth knowledge of data science, not a data science degree. A portfolio of real-life projects can help you get noticed, build your credentials as an aspiring data scientist, and demonstrate how you apply your data skills and improve them at a much faster rate. Thus, anyone can pursue a rewarding career in data science with a non-technical degree.

Perfecting all the skills required by data science would mean spending several lifetimes on the subject. You are never going to stop learning, and you should always keep the spirit of intellectual curiosity that brought you to data science in the first place. Finding a data science mentor, along with good critical thinking and analytical skills, can help you move up the ladder to becoming a data scientist. No degree can stop you from achieving your dream career in data science.
When speaking about cybersecurity (and physical security), the phrase “employees are your weakest link” is often used. In truth, they can be your strongest line of defense in preventing cyberattacks when they are armed with social engineering training. With proper training and guidance as part of a consistent, ongoing process, your employees will be able to make the right decisions when they encounter a social engineering attempt. On the flip side, not having clearly defined employee policies leaves the entire organization vulnerable to cyberattacks from social engineering ploys such as business email compromise, invoice fraud, social media attacks, and various types of phishing.

What is Social Engineering?

Social engineering is the art of exploiting human psychology, rather than technical hacking techniques, to gain access to buildings, systems, or data. It could look like an email sent by the CEO of your company asking you to join an online meeting immediately, or a request from a “vendor” requiring a wire transfer or invoice payment. Cyber attackers use social engineering to personalize attacks and manipulate people into performing unsafe actions. Cybercriminals also take advantage of human emotions or negligence more often than they target system vulnerabilities. Even with traditional security in place (antivirus and firewalls), cyber attackers can penetrate the organization by getting an employee to provide access to the systems that house sensitive data (often without the employee’s knowledge).

Most cyberattacks start with a social engineering component, such as a phishing email, leading to credential harvesting, direction to a malicious website, or a user sending sensitive information or making a financial payment to an attacker. There is a misconception that email spam protection systems block all malicious activity. For example, bad actors can launch a social engineering attack from a legitimate email account that has been compromised; spam protection will not automatically filter this type of email out. Attackers don’t just use email; they use a variety of techniques: phishing, smishing, vishing, social media, company websites, chats, and other methods. Without proper training, employers and employees can fall victim to wire fraud, gift card scams, and other takeovers or compromises. In addition, poor home security habits, home internet security lapses, cloud apps, and shadow IT open substantial security risks for organizations. Not having clearly defined policies leaves employees many opportunities to fall victim to social engineering attacks accidentally. They are simply unaware that activities such as connecting personal devices to the network, checking personal email on a work computer, or failing to report a suspicious email properly can increase the likelihood of a successful cyberattack.

Employees are Your Biggest Ally

The whole mindset that employees are the weakest link in cybersecurity needs to change. Employees can be your biggest ally when you set clear expectations and policies and deploy dynamic training, turning them into determined cyber defenders. Think of it this way: employees are the defensive middle linebackers of an organization. The linebacker reads the opposing offense (the cyber attacker) and calls the plays (responding to a possible malicious attack). The linebacker is the defense’s heart, mind, and soul, much like an employee is to an organization.

What should social engineering training for employees include?
Social engineering training needs to be part of your comprehensive cybersecurity program. There must be buy-in and participation from everyone in the organization, from the top down: the CEO, IT, sales, technicians, and interns. Expectations have to be clear and communicated through written policies (especially on tech and data use) as well as simulations, videos, awareness posters, and other media that capture and keep employees' attention and reinforce best practices.

Frequency: at least monthly, with ongoing, short reinforcements

Social engineering training should happen at least monthly, with ongoing, short reinforcements. It has to be more frequent than quarterly or annual sessions, and engaging enough that the information "sticks" and employees retain the knowledge needed to reduce the likelihood of a cyberattack. Training also needs to be dynamic, with specific examples and clear expectations of next steps and how employees should react to various situations.

Repetition retains knowledge

Neuroscientists have shown that repetition is key to internalizing information and retaining knowledge, an approach known as spaced learning or spaced repetition. Spaced repetition is based on the way the mind works: while we can pick up facts in no time, real learning is best understood as a longer-term process that occurs over time through repetition. Employees need space and the passage of time to let information marinate, to review and refresh their knowledge, and to have the opportunity to apply it in a real-world situation (a minimal scheduling sketch of this spacing appears at the end of this article).

Here's what to include in your social engineering training:

- Classroom training videos: at the beginning of employment and annually for the entire organization.
- Awareness posters: constant visual reminders of key topics placed around the business or shared digitally.
- Phishing simulations: to build up muscle memory for spotting a phishing email. Their purpose is to get realistic attacks in front of employees before attackers do.
- Employee awareness training: monthly, relevant, and engaging short training videos.
- Tech and data use policy: set clear expectations and provide detailed guidance.

Want to find out how an expert implemented a successful employee security awareness program? Watch our webinar Implementing a Successful Employee Security Awareness Program.

Why Is Social Engineering Training for Employees Important?

Social engineering training is an important part of your cybersecurity education and policies. Even though cybersecurity is critical in everyone's lives today, most people have never studied how to protect themselves from a cyberattack in school or elsewhere. Attackers are getting more sophisticated and are banking on an instant response, so you constantly have to train for new threat vectors. Your policy should set specific expectations that account for how dependent we are on technology. It should give employees a way to verify requests before they respond, which is especially critical in the absence of face-to-face contact and for work-from-home employees. Social engineering training should be part of the onboarding process for new employees, interns, and temporary staff. Consistent, targeted, and evolving social engineering training decreases the chances of a socially engineered cyberattack. Combined with policies, assessments, testing, and detection and response, your employees will be ready at your defense.
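To make the spaced-repetition idea concrete, here is a minimal scheduling sketch in Python. It is purely illustrative: the doubling rule and the starting gap of seven days are assumptions, not a validated training curriculum.

from datetime import date, timedelta

def review_dates(start, sessions, first_gap_days=7):
    """Yield reinforcement dates, doubling the gap after each session."""
    gap = first_gap_days
    current = start
    for _ in range(sessions):
        current += timedelta(days=gap)
        yield current
        gap *= 2  # space reviews further apart as knowledge consolidates

for session_date in review_dates(date(2022, 1, 3), 5):
    print(session_date.isoformat())
# Gaps between sessions: 7, 14, 28, 56, 112 days

The exact intervals matter less than the principle: short, frequent touches early on, then progressively wider spacing as the material sticks.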
Ever since the invention of machines and computers, their ability to perform difficult tasks has grown significantly. The recent growth in computer and machine algorithms has changed the face of engineering and science. The growth is so tangible that it is significantly reshaping relationships among people, organizations, intelligent behavior, and engineered systems. Computer systems have been developed to increase speed and reduce the time tasks take. Human curiosity has pushed us to wonder: "Can a machine think and behave like a human?" That curiosity led to the development of modern machine learning systems and the field of Artificial Intelligence. Before we dig deep into the subject, let's first understand what Artificial Intelligence is.

What is Artificial Intelligence?

In 1956, John McCarthy coined the term Artificial Intelligence (AI). The commonly held notion that "machines will think and act like humans, more accurately, in the near future" is the concept behind Artificial Intelligence. In other words, AI can be defined as "the engineering and science of making intelligent machines, especially computer programs". AI amounts to making a computer or software that thinks intelligently, in the same manner as intelligent humans. Deep Learning, in turn, is a subset of Artificial Intelligence that emulates the functioning of the human brain in processing data for use in making decisions, detecting objects, recognizing speech, and translating languages. Deep learning methods can work without human supervision.

Why is AI Gaining Massive Popularity?

The digital era has brought about an explosion of data in varied forms and from different regions of the world. This data, known as big data, is drawn from search engines, social media, e-commerce platforms, and other digital sources. Much of it is unstructured data that would take humans decades to understand and extract relevant information from. Many companies realize the potential of AI techniques such as deep learning to turn this wealth of data into an easy-to-understand format. There is a positive correlation between the popularity of AI and its growing range of advantageous applications across many fields. Now, let's dive into the applications that are driving AI's massive popularity. If you're seeking to enhance your skills and career prospects, then nothing can be better than learning Artificial Intelligence. Extensive and exclusive courses on Artificial Intelligence are offered by SkillXs.

Applications of Artificial Intelligence

Evolutionary computation is the term for computational strategies modeled on natural evolution, mimicking natural selection and survival of the fittest. Artificial Intelligence has been effectively helping computers locate tumors in medical images, and AI is likewise used to help diagnose various types of tumors and congenital heart defects.

2. Accounting Databases

AI is known to mitigate the problems of accounting databases. Existing accounting database systems have several issues: the needs of decision-makers are not fulfilled by accounting information, humans struggle to comprehend computerized databases, and accounting systems are not easy to use.
Integrating AI systems with these databases can help in the appraisal of huge volumes of information, with or without a decision-maker's support.

3. Gaming World

Playing games is a favorite pastime for many people. Computer games have come a long way, from modest text-based games to 3D graphics games with complex universes, and today AI is a crucial part of any computer game; playing a game without artificial intelligence would be tedious. AI provides complex and new features to the gaming world: pathfinding, learning, audio processing, resource allocation, situation analysis, navigation, and flocking are some of the many ways AI contributes to modern computer games.

4. Traffic Sign Recognition

Various devices can perceive, recognize, and track traffic signs from moving vehicles, a task that conventional algorithms and systems struggle with. Locating and detecting signs is done by AI methods based on color segmentation using different shape models; 3D technology is also often used, and other machine learning algorithms are applied for detection and classification. With continued research, Artificial Intelligence could make it possible to manage road traffic, and applying different AI algorithms could reduce the number of accidents that happen on the road.

Though AI has not yet touched most people's lives directly, it is being applied in areas like neural networks, medicine, industry, space, the military, and geology. With extensive research and advancement, we can expect a bright future for AI, moving from the concept of human-like machines to machinery that can understand and act like intelligent humans. That will be the era in which AI-based robots serve as doctors in hospitals, drivers on buses, cooks in restaurants, and professors in classrooms.

Experts in Artificial Intelligence predicted 2.3 million jobs in the field of AI by 2020. It is also predicted that AI technology will displace around 1.7 million jobs worldwide while still creating about half a million new jobs all over the world. Additionally, AI offers various feasible and unique career possibilities. AI is used in fields ranging from entertainment to transportation, yet there is a dearth of skilled and qualified professionals. Hence, a career in AI technology can take a person to great heights in terms of growth, stability, and money.

There is extensive research going on in the fields of Artificial Intelligence and Deep Learning. The domain of AI gives machines the ability to think intelligently and analytically, and AI techniques contribute to a huge range of areas. AI will play a vital role in many fields in the coming years: AI techniques make computer games more engaging, help manage traffic signals, and can reduce the number of accidents on the road. Given its vast applications, the field of AI will flourish in the times to come and will attract the attention of aspirants, researchers, and the general public alike. Mastering Artificial Intelligence and other digital courses becomes easy with SkillXs. Click the link below to learn more about the courses.
To join the course, click here: https://www.skillxs.com/course/115/artificial-intelligence-and-deep-learning
If you regularly access databases on the command line, or if you want to run SQL commands from a script, then you might want to set up passwordless access. This article looks at two ways to configure this: you can either store the password in a user's ~/.my.cnf file or you can use the unix_socket plugin.

Database users need privileges on database tables, and they need to log in in order to run database queries. The most common way to configure this is by using the IDENTIFIED BY clause followed by the user's password. For instance, here I create the user example@localhost and grant it all privileges on any table in a database named testdb:

MariaDB [(none)]> CREATE USER 'example'@'localhost' IDENTIFIED BY '6FGGpT3SusKxf_v7HKfD3-ezvq6SqQf2tZVy';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON testdb.* TO 'example'@'localhost' IDENTIFIED BY '6FGGpT3SusKxf_v7HKfD3-ezvq6SqQf2tZVy';
MariaDB [(none)]> FLUSH PRIVILEGES;

The user example can now connect to the testdb database. Note that the -p option causes MariaDB to prompt the user for its password:

$ mysql -u example -p testdb
Enter password:
MariaDB [testdb]>

If you don't want to enter the password on every login, you can store the password in a .my.cnf file in your home directory (on Windows the file is named my.ini). The .my.cnf file is the default configuration file for MySQL and MariaDB. A .my.cnf file in a user's home directory only sets options for that user (i.e. its scope is limited to the user). The contents of the file should be as follows (obviously you need to set the correct password):

$ cat ~/.my.cnf
[client]
password = 6FGGpT3SusKxf_v7HKfD3-ezvq6SqQf2tZVy

You also want to make sure that the file has 600 permissions. This is particularly important on older Ubuntu systems, as the default permissions let users peek inside each other's home directories (RHEL-based servers never allow this):

$ chmod 600 ~/.my.cnf

And that's it – the user example now has passwordless access:

$ mysql testdb
MariaDB [testdb]>

When I created the user example I used the IDENTIFIED BY clause to set the user's password. An alternative is the IDENTIFIED VIA clause. With that option the user can authenticate using a plugin rather than a password. There are quite a few different authentication plugins; a commonly used one is Unix Socket. This plugin authenticates a user using their operating system credentials. When the user logs in it tells MariaDB: "I am a user on this system and I own this database, so don't you dare bother me with password prompts." MariaDB then checks if what the user says is true, and if so authenticates the user.

The plugin is installed by default from MariaDB version 10.4.3 onwards. RHEL8-based servers ship an older version, so you may need to install the plugin. You can check if the plugin is already installed using this command:

MariaDB [(none)]> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'unix_socket';
Empty set (0.001 sec)

Here, the plugin isn't installed. To fix that you can run the command below. Note that the name of the plugin is auth_socket rather than unix_socket:

MariaDB [(none)]> INSTALL SONAME 'auth_socket';
MariaDB [(none)]> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'unix_socket' \G
*************************** 1. row ***************************
           PLUGIN_NAME: unix_socket
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUTHENTICATION
   PLUGIN_TYPE_VERSION: 2.1
        PLUGIN_LIBRARY: auth_socket.so
PLUGIN_LIBRARY_VERSION: 1.13
         PLUGIN_AUTHOR: Sergei Golubchik
    PLUGIN_DESCRIPTION: Unix Socket based authentication
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
       PLUGIN_MATURITY: Stable
   PLUGIN_AUTH_VERSION: 1.0
1 row in set (0.002 sec)

Next, you can create the user and double-check that the unix_socket plugin is enabled for the user:

MariaDB [(none)]> CREATE USER example@localhost IDENTIFIED VIA unix_socket;
MariaDB [(none)]> SELECT user, host, plugin FROM mysql.user WHERE user = 'example';
+---------+-----------+-------------+
| user    | host      | plugin      |
+---------+-----------+-------------+
| example | localhost | unix_socket |
+---------+-----------+-------------+

And after that you grant the user one or more privileges:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON testdb.*
    -> TO 'example'@'localhost'
    -> IDENTIFIED VIA unix_socket;

Et voilà! Our user now has passwordless access to the testdb database:

$ mysql testdb
MariaDB [testdb]>
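The same ~/.my.cnf option file also works from scripts, which is handy for cron jobs. As a sketch, assuming the PyMySQL package is installed (the client library and the testdb database name are assumptions here, not part of the original setup), a Python script can pick up the stored credentials without embedding a password:

import os
import pymysql  # assumption: installed with `pip install pymysql`

conn = pymysql.connect(
    read_default_file=os.path.expanduser("~/.my.cnf"),  # reads the [client] password
    database="testdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT CURRENT_USER()")
    print(cur.fetchone())
conn.close()

Because the password lives in a single 600-permission file rather than in the script, the script itself can be committed to version control without leaking credentials.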
Data centers use two percent of the world's electricity. Although this number does not seem very large, demand is expected to grow eightfold by the year 2030 – faster than in any other industrial sector. Much of that electricity is used to cool the facility and keep the servers running.

A typical data center averages 500 electric motors in its HVAC system, which is a large number of motors to maintain as they cool servers and other critical equipment. For data center operators, not only is reliable performance a must, but so is managing the cost of electricity to run all of those motors. Choosing the right motors will help minimize operating expenses and ensure energy costs are kept to a minimum.

Using only the energy needed

Did you know that most HVAC systems operate at 80 percent load or less, and they do so more than 99 percent of the time? Traditionally, dampers, valves, and other mechanical means are used to regulate a motor's power or speed to operate fans, pumps, and compressors in HVAC systems. To reduce electricity consumption and optimize energy efficiency, operators should ensure variable frequency drives (VFDs) are integrated into the motor operation, as doing so can save 20 to 60 percent in energy costs. VFDs help match energy consumption to actual energy needs, eliminating wasted energy. Not all motors can be run with a drive, however; selecting an inverter-duty motor that includes a form of motor bearing protection is also important. An ultra-premium (IE5+) efficient motor reduces energy loss by as much as 40 percent compared to standard induction motors (NEMA Premium efficiency) that operate directly across the line (DOL).

Mitigating shaft current issues

While using a VFD to control your motor has many benefits, it can also present some challenges. Without proper wiring and grounding techniques, shaft currents induced by VFDs will find the path of least resistance, typically through the motor bearings to ground. The damage caused by this electrical discharge can lead to catastrophic motor failures. However, there are many ways to mitigate shaft current issues. Data centers cannot afford any motor downtime; therefore, insulated bearings as well as shaft grounding devices that direct current away from the bearings are becoming standard practice when specifying motors. Insulated ceramic bearings are non-conductive and prevent shaft currents from flowing through the motor bearings altogether. Since no electrical current flows through the motor bearings, there is little chance of current-induced wear. Highly efficient components and insulated bearings provide a combination of performance and protection.

Smaller size, better efficiency

A trend in industrial engineering has been toward the use of more, smaller motors, each optimized for a specific task. Matching the output of a motor to the maximum power required for a task is already a major step toward greater energy efficiency. Large, belted fan applications are being replaced by arrays of smaller fans to improve system efficiency. Although this improves overall fan performance, it adds complexity to the system. Multi-motor-plus-drive configurations are an ideal setup because of the reduced maintenance cost compared with replacing direct-drive fan applications or motors and drives together. Replacing belt-drive setups with direct-drive systems reduces complexity and maintenance costs while increasing efficiency.
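To see why variable-speed operation saves so much energy, consider the fan affinity laws: for fans and pumps, power draw scales roughly with the cube of shaft speed. The short Python sketch below is illustrative arithmetic only; real savings depend on the specific motor, drive, and load profile.

def relative_power(speed_fraction):
    """Fan/pump power as a fraction of full-load power (cube law)."""
    return speed_fraction ** 3

for pct in (100, 90, 80, 70, 60):
    print(f"{pct}% speed -> {relative_power(pct / 100):.0%} of full power")
# 80% speed -> 51% of full power

Since most HVAC systems spend almost all their time at 80 percent load or less, running the fan at reduced speed rather than throttling it with dampers cuts power draw roughly in half, consistent with the 20 to 60 percent savings cited above.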
Longer lasting means more sustainable

Modern, high-efficiency motors, paired with variable speed drives, are designed to be flexible and reliable. Above all, they are extremely efficient, offering significant reductions in power consumption compared with older systems. Motors that run reliably for longer do more than reduce electricity consumption and lifetime cost; they are also more sustainable than motors that need constant maintenance or frequent replacement. Motors designed for reliability help extend the life of the driven load and improve data center sustainability.
Security is a top priority for every business. As technology has grown more advanced, security measures have also become more sophisticated and intricate. Many organizations are embracing the benefits of newer technologies while they struggle to protect their data and infrastructure. Since most operations now run over the internet, businesses have become vulnerable to malicious attacks and data breaches, which can mean losing important customer information and harm to a business's reputation and bottom line.

Cloud security and network security are both very important to the safety of your business, but they serve different purposes. In this blog, we'll look at the major differences between the two and when you'll need data security services to protect you.

What is Cloud Security?

When multiple users access a cloud computing server, the data becomes vulnerable to attack. Cloud security is the term for protecting information online: it refers to protecting data, applications, and infrastructure in cloud computing environments. Cloud security is particularly important because it deals with sensitive data that could be compromised if not protected properly. It protects a cloud computing environment from internal and external threats, including the data, applications, and other resources hosted by the cloud provider. Cloud security is implemented through various strategies, including authentication, access control, encryption, patch management, and data loss prevention. It works by keeping your data safe while it is in transit or at rest on a remote server: cloud encryption protects your data while it is being transmitted over the internet and encrypts it so that only authorized users can access it.

What is Network Security?

Data travels through networks to reach its destination, so there needs to be a way to keep it safe while it is being transmitted. Network security relates to the protection of a network from external threats. It is a common term for the security measures taken over a network, such as a local area network (LAN) or wide area network (WAN). Network security focuses on protecting the computers themselves from unauthorized access by hackers who may try to break into them remotely through the internet or other networks such as Wi-Fi or Bluetooth. It also aims to prevent intruders from accessing sensitive information such as credit card numbers or social security numbers. Network security is implemented through firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), virtual private networks (VPNs), antivirus software, and other tools.

Cloud Security vs. Network Security: How Different are They?

Type of Protection

Cloud security and network security are distinct concepts that are often used interchangeably, but they mean different things. Cloud security means the security measures taken within a cloud environment; network security refers to the measures taken on a network, whether it is in a cloud or not. So while cloud security is a broad term, network security can be a part of it.

Level of Protection

Cloud security is a system that protects data stored in the cloud, including email, documents, photos, and other files. This type of security involves encrypting data before it leaves your computer or device and then decrypting it after it reaches its destination. Network security protects data on a network using firewalls and antivirus software, with the aim of preventing hackers from accessing your computers or networks.
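As a small illustration of the "encrypt before it leaves your device" idea, here is a sketch using the Python cryptography package's Fernet recipe. The package choice and the file contents are assumptions made for illustration; real cloud services typically layer their own encryption on top of this:

from cryptography.fernet import Fernet  # assumption: `pip install cryptography`

key = Fernet.generate_key()  # in practice, held in a key management service
f = Fernet(key)

ciphertext = f.encrypt(b"customer records to upload")
# ciphertext can now be sent to cloud storage; it is unreadable without the key
assert f.decrypt(ciphertext) == b"customer records to upload"

Anyone who intercepts the ciphertext in transit, or reads it at rest, sees only scrambled bytes unless they also hold the key.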
Method of Protection

Cloud security is a set of tools and practices used to protect data in the cloud. It includes encryption, segmentation, and other methods of keeping your data safe as you move it around. Network security protects your network from outside threats with firewalls, antivirus software, and other tools that prevent unauthorized access to your internal network.

Enforcing Maximum Security for Your IT Infrastructure

In the past decade, the cloud has been a major disruptor for the IT industry. With the emergence of cloud services such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), many businesses have adopted this technology, which offers many benefits over traditional on-premise solutions. Cloud computing is becoming increasingly popular among organizations as a way to reduce costs and improve efficiency and agility while increasing their ability to react quickly to business opportunities and challenges.

However, an organization's security posture must be strong to protect its data and applications from internal and external threats to its networks – and you'll also need strong disaster recovery measures in place. This is especially true when moving data into the cloud, where it can be accessed by external users or hosted on a third-party server outside the direct control of an organization's IT department. This is when you'll need the help of an IT support or IT consulting service to enforce maximum security and protect your data at all costs.

Want to hire an expert IT consulting firm to help enforce data security? Then reach out to our team in Corpus Christi, which provides data security services. We offer various levels of protection and security measures to keep your entire IT infrastructure safe. Our IT support specialists can help you develop a solid plan to secure your IT operations.
Reverse-engineered irises created to fool eye-scanners

Scientists are now experimenting with techniques to deceive biometric security. Academics have created reverse-engineered irises that are able to fool eye-scanners. The research was presented at Black Hat security conferences in Spain and the United States. For the first time, the academics are able to closely match the eye images of real subjects, which can trick iris-recognition systems and match digital iris codes stored in the databases used to identify people. This could defeat security controls, allowing a person to gain entry at border crossings and secure facilities that use biometric security solutions.

Javier Galbally, with colleagues at the Biometric Recognition Group-ATVS at the Universidad Autonoma de Madrid and researchers at West Virginia University, conducted the research. He said: "The idea is to generate the iris image, and once you have the image you can actually print it and show it to the recognition system, and it will say 'okay, this is the right guy.'"

Irises are scanned to create iris codes, binary representations of the image. The iris code, which consists of about 5,000 bits of data, is stored in a database for matching. Using a genetic algorithm, the researchers needed between 100 and 200 iterations to produce an iris image "sufficiently similar" to the one they were trying to reproduce. Galbally said: "At each iteration it uses the synthetic images of the previous iteration to produce a new set of synthetic iris images that have an iris code which is more similar (than the synthetic images of the previous iteration) to the iris code being reconstructed." He added that it takes about five to ten minutes to produce an iris image that matches an iris code. His team tested the generated images against a commercial iris recognition system, the VeriEye system made by Neurotechnology (http://www.neurotechnology.com/), and were able to trick it.

The study assumes that it is possible to obtain iris codes, either by hacking into a database that stores them, such as the one B12 Technologies maintains for the FBI, or by tricking someone into having their iris scanned. B12 Technologies, however, states on its website that it employs biometric templates that "cannot be reconstructed, decrypted, reverse-engineered or otherwise manipulated to reveal a person's identity. In short, biometrics can be thought of as a very secure key: Unless a biometric gate is unlocked by using the right key, no one can gain access to a person's identity."

Do you believe that more scientists should be attempting to test biometric security systems?
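To make the genetic-algorithm idea concrete, here is a toy Python sketch that evolves random bit strings toward a target "iris code" by maximizing bit agreement. It is purely illustrative: the real attack evolves synthetic iris images scored by a recognition system, not raw codes, and the population and mutation parameters here are arbitrary assumptions.

import random

TARGET = [random.randint(0, 1) for _ in range(256)]  # stand-in for a stolen iris code

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(256)] for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(f"best candidate matches {fitness(population[0])}/{len(TARGET)} bits",
      f"after {generation} generations")

The attack works for the same reason this toy converges: each generation keeps whatever got closer to the target code, so similarity ratchets upward until the matcher accepts the synthetic input.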
Popular Usable Frequencies in the UK

The frequencies most commonly used for wireless networking (2.4GHz and 5GHz) are generally licence free. However, there are three bands within 5GHz – A, B, and C – and Band C is a licensed frequency! Band A is typically used for indoor wireless, whereas Bands B and C are used outdoors. Band C means you can transmit with up to 4W of output power, so when using a high-gain antenna you can send data over longer distances with higher throughput (Band A allows you to transmit at up to 200mW; Band B allows up to 1W).

All wireless equipment sold in the UK will ask you to enter a country code as part of the configuration; this will ensure that you are compliant. Ofcom will periodically assess what is going on in the airspace, and if you are exceeding these limits, you will be contacted and asked why you are doing so, potentially under caution!

If you were looking to connect two buildings using a wireless bridge capable of 1Gbps, you would most likely use the 80GHz frequency. This carries a licence fee of around £50 per year, so the costs are not always particularly large. As with most things in wireless, there are limitations to this frequency: you would only really be able to achieve these throughputs over distances of up to around 5km.

When looking to send over a longer distance with good availability, the 7GHz frequency is a good choice for quality, but the licence fee can run to thousands of pounds a year. An alternative is the 13GHz frequency: you may not achieve 99.999% availability, but you would still be able to get throughput of up to 200Mbps over distances of around 15km with a much lower annual fee.
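Since most wireless equipment is configured in dBm rather than watts, it helps to convert the band limits. A quick Python sketch (the milliwatt figures are the limits quoted above):

import math

def mw_to_dbm(milliwatts):
    return 10 * math.log10(milliwatts)

for band, mw in [("Band A", 200), ("Band B", 1000), ("Band C", 4000)]:
    print(f"{band}: {mw} mW = {mw_to_dbm(mw):.0f} dBm")
# Band A: 23 dBm, Band B: 30 dBm, Band C: 36 dBm

So moving from Band A to Band C buys you 13 dB of transmit power, which, all else being equal, is what lets a high-gain link cover the longer distances described above.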
CCENT 100-101 ICND1 Exam Topics

- Determine the technology and media access control method for Ethernet networks.

Ethernet has continued to evolve from the 10BASE2 flavor, capable of speeds up to 10Mbps, to the newest 10GigE (10 Gigabit Ethernet), capable of speeds up to 10Gbps. Since 1985, the IEEE has continued to upgrade the 802.3 standards to provide faster speeds without changing the underlying frame structure. This feature, among others, has made Ethernet the choice for LAN implementations worldwide. Today we review Ethernet technologies and operation at both the data link and physical layers.

802.3 is the IEEE standard for Ethernet, and the two terms are commonly used interchangeably. The terms Ethernet and 802.3 both refer to a family of standards that together define the physical and data link layers of the definitive LAN technology. Figure 29-1 shows a comparison of Ethernet standards to the OSI model.

Figure 29-1 Ethernet Standards and the OSI Model

Ethernet separates the functions of the data link layer into two distinct sublayers:

- Logical Link Control (LLC) sublayer: Defined in the 802.2 standard
- Media Access Control (MAC) sublayer: Defined in the 802.3 standard

The LLC sublayer handles communication between the network layer and the MAC sublayer. In general, LLC provides a way to identify the protocol that is passed from the data link layer to the network layer. In this way, the fields of the MAC sublayer are not populated with protocol type information, as was the case in earlier Ethernet implementations.

The MAC sublayer has two primary responsibilities:

- Data encapsulation: Includes frame assembly before transmission, frame parsing upon reception of a frame, data link layer MAC addressing, and error detection.
- Media access control: Because Ethernet is a shared medium and all devices can transmit at any time, media access is controlled by a method called Carrier Sense Multiple Access with Collision Detection (CSMA/CD) when operating in half-duplex mode.

At the physical layer, Ethernet specifies and implements encoding and decoding schemes that enable frame bits to be carried as signals across both unshielded twisted-pair (UTP) copper cables and optical fiber cables. In early implementations, Ethernet used coaxial cabling.
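To illustrate the CSMA/CD method named above, here is a sketch of its truncated binary exponential backoff, the rule a half-duplex station uses to reschedule a transmission after a collision. This is illustrative Python, not a full MAC implementation:

import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet

def backoff_delay(collision_count):
    """Random delay, in microseconds, after the Nth consecutive collision."""
    k = min(collision_count, 10)           # the exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)  # choose a slot uniformly at random
    return slots * SLOT_TIME_US

for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_delay(n):.1f} us")

Each successive collision doubles the range of possible delays, so competing stations spread out their retries and the shared medium clears.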
What happens when confidential emails are accidentally sent to the wrong recipient? This is an easy mistake to make; however, once you've sent the email, there's no getting it back. If the recipient publicizes the information, your business could be in deep trouble in terms of liability, loss of customer confidence, and reputational damage.

So You've Sent a Confidential Email to the Wrong Recipient – How Can You Fix the Situation Now?

While the sending of confidential information may have been unintentional, the information was still wrongfully disclosed. Mistakes happen; however, this is a mistake that must be fixed immediately. If the email contained highly sensitive information, legal advice may be required to resolve the issue.

How Can You Prevent This Type of Information Disclosure in the Future?

Whether you've experienced this type of information disclosure before or you're looking to prevent the sending of confidential information in the future, email encryption is absolutely mandatory to protect your customers, employees, and business reputation. While email encryption will protect you in the event of sending information to the wrong recipient, it will also protect you from the most dedicated hackers trying to intercept your communications. When you send an unencrypted email, the information travels across the web as plain text, which allows hackers or unintended recipients to access it without hassle. An encrypted email, on the other hand, is sent as scrambled text to ensure unauthorized individuals cannot read it without the decryption key.

How Does Encryption Work to Ensure Unauthorized Individuals Cannot Read the Information?

Email encryption uses a pair of keys, one public and one private. Each recipient publishes a public key, which anyone can use to encrypt messages to them, while the matching private key remains confidential to the recipient. The two keys work together to protect information: if you encrypt an email with the recipient's public key, only the holder of the corresponding private key can decode it. If the email is sent to the wrong recipient, the information remains scrambled, and the wrong recipient cannot decrypt it because they do not hold the intended recipient's private key.

Ultimately, email encryption is critical for various reasons, from stopping cybercriminals with malicious intent to complying with regulatory requirements. If you're sending confidential emails on a regular basis, email encryption is necessary to protect sensitive information. To learn more about email encryption, give us a call at (613) 828-1280 or send us an email at firstname.lastname@example.org. Fuelled Networks can help you select and implement the right email encryption solution for your unique needs.

Published On: 21st April 2014 by Ernie Sherman.
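For readers who want to see the two-key mechanics in code, here is a minimal sketch using the Python cryptography library's RSA primitives. This shows only the raw public-key operation; real email encryption standards such as S/MIME and PGP layer message formats and key distribution on top:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half is shared with the world

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential invoice details", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)  # only the key holder can do this
assert plaintext == b"confidential invoice details"

Encrypting with the recipient's public key is what makes a misdirected message stay scrambled: the accidental recipient does not hold the matching private key.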
And how to mitigate them

Cloud security is a big topic. The Cloud is not just one thing, but a concept that comes in many different forms, each with its own security intricacies that need to be dealt with. Whichever aspect you look at, Cloud computing has been steadily increasing in popularity for years. Most recently, the global lockdowns in 2020 brought its business rewards and security risks into even sharper focus. Cloud-based services like Microsoft 365 and G Suite are now mainstays of British business.

The big question often asked of the Cloud is whether it is more or less secure than on-premise computing. The only viable answer is, "it depends." Cloud computing has the potential to be more secure if managed and implemented well, and to be a disaster if it is poorly managed or implemented.

Security responsibilities in Cloud Computing

One of the key differences between Cloud and on-premise computing is that businesses cannot control many parts of the Cloud. To understand Cloud computing security, we first need to understand who is responsible for what. We can break Cloud offerings into seven areas:

- Governance: how is the Cloud offering managed to remain consistent with corporate policy?
- Data: the core asset managed by the Cloud.
- Application: the tool with which the user interacts to access and manipulate the data.
- Platform or operating system: Windows, Linux, or proprietary developer tools.
- Communications: access to the application and data, i.e. through a wide area network as opposed to local infrastructure.
- Infrastructure: network switches, routers, firewalls.
- Physical: the hardware on which it all runs.

The table below, taken from our whitepaper on enabling Secure Information Exchange in Cloud environments, shows where responsibilities lie for these seven areas across the three main Cloud service models: IaaS, PaaS, and SaaS. In short, you, as the data owner, remain accountable for the governance and security of your data in every scenario. The Cloud Service Provider (CSP) may take some responsibility for providing the technical controls, but you are always responsible for the processes that ensure these controls mitigate your risks, and that the controls are configured and managed correctly.

The NCSC's Cloud Security Principles

Because the responsibility for governance always lies with the enterprise and not the CSP, it is essential to be certain that the Cloud technology used to manage your data is trustworthy. The NCSC's 14 Cloud Security Principles outline considerations for CSPs that will help keep their solutions secure. Enterprises can use these principles to check whether potential CSPs offer a trustworthy Cloud solution. The principles are listed below in brief, but you can see definitions and a more complete breakdown in our dedicated blog post.

- Data in transit protection
- Asset protection and resilience
- Separation between users
- Governance framework
- Operational security
- Personnel security
- Secure development
- Supply chain security
- Secure user management
- Identity and authentication
- External interface protection
- Secure service administration
- Audit information for users
- Secure use of service

As a business looking to remain secure when using Cloud technology, the principles provide a checklist of what you should be looking for in your services. You are responsible for your data security, so you need to be certain that your CSP is implementing their controls to the level that you require.
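One practical way to work through that checklist is to track each principle in a simple assessment table. The Python sketch below scaffolds such a table as a CSV to fill in; the file name and column set are illustrative assumptions (the next section describes the headings Nexor recommends):

import csv

PRINCIPLES = [
    "Data in transit protection", "Asset protection and resilience",
    "Separation between users", "Governance framework", "Operational security",
    "Personnel security", "Secure development", "Supply chain security",
    "Secure user management", "Identity and authentication",
    "External interface protection", "Secure service administration",
    "Audit information for users", "Secure use of service",
]

with open("csp_assessment.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Principle", "Risks to mitigate", "Your responsibility",
                     "CSP responsibility", "Evidence of compliance"])
    for principle in PRINCIPLES:
        writer.writerow([principle, "", "", "", ""])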
Using these principles

When assessing the suitability of CSPs, Nexor recommends that you draw up a table for each principle with the following headings:

- Risks you need to mitigate;
- Your responsibility;
- The CSP's responsibility;
- How you will evidence that the CSP is meeting its responsibility (governance).

These principles do not absolve the end user of all responsibility. In fact, the 14th principle makes it clear that Cloud services can only be expected to be secure when used properly. CSPs who are aligned with the NCSC's principles should provide audit records that allow you to monitor Cloud access and usage, but it is your responsibility to use that data to spot potential issues. To mitigate the risks of Cloud computing, businesses using the technology must take steps like the assessment shown here to ensure that their suppliers are secure and trustworthy and that their own employees are using the technology responsibly. Read our blog post for more practical information.

The Data Lifecycle model in Cloud Computing

To understand the security risks of Cloud computing, we must acknowledge that the technology is different from on-premise solutions, and successful management and mitigation of its risks requires a different way of thinking. We have found the Data Lifecycle model a very useful tool for understanding and thinking through the new risks that Cloud computing brings – for example, the question of what happens to your data when you no longer require the Cloud service: how can you be sure the data is deleted? (The answer, by the way, depends on the Cloud type and service model.)

- Create: Data can be created both inside and outside of the Cloud, by both humans and machines. The key challenge is to attest the integrity of the data before 'accepting' it.
- Store: Where and how is the data being stored? Check the controls that are applied to ensure that only authorised users can access it later in the lifecycle.
- Use: Human users and machine analytics may require access to data in the Cloud. Core controls should ensure only authorised access.
- Share: Again, core controls should ensure only authorised access when and if data is shared outside of the specific Cloud environment.
- Archive: Effectively sharing the data with a long-term storage solution, sometimes offline.
- Destroy: Consider upfront how to keep track of where data is stored and how to erase it.

Although the diagram illustrates the lifecycle as a linear progression, it is normal for data to bounce around the different stages, or to miss some of them out altogether. For example, not all data is archived.

Risks in the Data Lifecycle

The data lifecycle exposes certain points of risk for data stored in the Cloud. These are encompassed in the NCSC's 14 principles, but are useful to extract when looking at the data model:

- Data at rest. Data needs to be protected where it is stored, by access control and encryption.
- Data in motion. Data needs to be protected when it is moved between, into, and out of Cloud systems, and systems need to be protected from rogue data. Solutions include encryption, data or transport layer considerations, data transformation, and data validation. Nexor's SIXA, based on NCSC architectural models, is concerned with this element of risk.
- Data in use. Data being used by humans or machines also needs to be protected. Access control and identity management are crucial for maintaining security, and there is overlap with the data-in-motion controls.

Who is responsible for mitigating the risks?
Once you know the risks, you can consider mitigations and, most importantly, who is responsible for them. If you are responsible for a mitigation, it should be business as usual: manage it with the same processes you would use for an on-premise system (as part of your ISO 27001-compliant ISMS). Note the focus on processes – the technical controls to mitigate the risk may be different, but the security management process stays the same.

If the CSP is responsible for mitigating a risk, you need to determine how you will assess their effectiveness at providing the control so that you can hold them accountable. The NCSC's principles provide a starting point, but you ultimately need to ensure consistency with your corporate policy, as mentioned in our brief discussion of governance above. To summarise, the simple principles/responsibility matrix mentioned in our section on the NCSC's Cloud Security Principles will clarify what the risks are, who is dealing with them, and how you can be sure they are managed.

Is Cloud computing less secure than on premise?

The answer depends on the business processes that the CSP and you, as the customer, have put in place. If these work together effectively, the Cloud can be more secure. If either party gets these processes wrong, you should have a business contingency plan for being the next data breach headline!

Nexor offers Cyber Resilience as a Managed Service (CRaaMS) to help businesses respond to the new threats they may face from the adoption of Cloud technology. It is process-focused, taking into account business objectives, threat identification, mitigation, and recovery in accordance with our underlying CyberShield Secure® methodology. To speak to our consultants about any of our services, including remote delivery, get in touch today.
Data mirroring is a common approach to protecting data. It involves copying data in real time from one location to another, such as another local server or device, or a remote storage medium as part of disaster recovery (DR). The mirrored data is an exact copy, so if one set of data is lost, the other is available.

Mirroring is often used to lessen risk. If an organization keeps all of its data in one location, it is exposed: in the event of a power loss, cyberattack, or disaster, having all data copied to another location can be a lifesaver. This approach avoids single points of failure within the infrastructure. Mirroring is also sometimes used in place of backups: because everything is automatically copied elsewhere in real time, some consider backups unnecessary. Smart organizations, however, employ both; backups and mirroring should go hand in hand.

Here are the five key trends in data mirroring you should be aware of in your cybersecurity practices. See more: Cloud Disaster Recovery Best Practices

1. Data Sharing

Data mirroring has many uses – DR, avoiding single points of failure, and retaining a second copy of data in real time – but a new use called data sharing is emerging. "Despite being originally crafted to sidestep single points of failure in the storage infrastructure, data mirroring is now being applied in various data sharing scenarios," said Augie Gonzalez, director of technical marketing at DataCore Software. He offered the example of mirroring at the intersection of transaction processing and business intelligence (BI), where real-time online transaction processing (OLTP) data is mirrored to analytics tools. Those analytics tools can then delve into the mirrored data set to derive insight without interfering with the primary data set.

2. Data Proximity

The closer data and applications are to end users, the lower the latency. If the primary site is on the other side of the country or half a world away, it can take a while for data to travel across the network to where it is needed. Data mirroring is one way to reduce such latency. "Data mirroring is gaining favor as a means of bringing data closer to the user in metropolitan area networks," said Gonzalez. "Each site in a stretch cluster operates directly on the nearest mirrored copy rather than incur the delay of traversing LANs to reach the source image."

3. Cloud Databases

Databases used to be retained solely on-premises, but cloud databases are beginning to dominate. With databases in the cloud, mirroring provides much-needed flexibility, low latency, and agility. "With cloud deployments, data mirroring takes on greater importance, particularly when core business applications, like analytics and databases, move to the cloud," said Kirill Shoikhet, CTO of Excelero. "Databases must have mirroring across availability zones. Technologies that can synchronously mirror while minimizing the number of round trips have an advantage, as crossing availability zones and regions introduces some basic latency due to distance and physics."

See more: Cloud Database Trends

4. Ransomware Protection

Ransomware attacks are happening with higher frequency, and they are rapidly growing in complexity. It is getting tougher for many organizations to prevent them, or in some cases even to detect them.
Senior executives are therefore urging their teams to keep data highly available and secured against ransomware attacks, so the business keeps running 24/7. This is doubly important in the remote model of operations that has emerged. "To protect against ransomware, businesses will continue to replicate data into multiple immutable copies across both on-premises and multicloud systems with built-in air gap mechanisms, while the primary storage system provides data snapshot and versioning features to recover earlier versions from attacks," said Param Kumarasamy, VP of product management at Commvault. "Due to these trends, business leaders will be focusing less on on-premises data replication and instead ramping up the push for their application owners to adopt a multicloud strategy for high data availability for virtualization, enterprise, and cloud applications, by replicating the data across multiple cloud vendors."

See more: Fortifying Your Backups from Ransomware

5. Serverless Replication

Replication used to be done between one server and another, one network-attached storage (NAS) box and another, or one appliance and another. That has changed as the cloud has become more pervasive. A trend noted by Kumarasamy is serverless replication of data within and across clouds. New-era cloud applications have separated the data and compute layers, so each resource can be scaled independently. "The data of these cloud applications will be replicated in multiple cloud zones within or across regions for high availability," Kumarasamy said. "Data availability is essential for most business-critical applications."
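Underlying all five trends is the same write-through idea: a write is not acknowledged until it has reached both copies. The toy Python sketch below illustrates that invariant; real mirroring operates at the block or database layer over a network, with failure handling this sketch omits:

class MirroredStore:
    """Toy synchronous mirror: every write lands on both copies before ack."""

    def __init__(self):
        self.primary = {}
        self.mirror = {}  # in reality, a remote replica over the network

    def write(self, key, value):
        self.primary[key] = value
        self.mirror[key] = value
        return "ack"  # acknowledged only after both writes succeed

    def read(self, key):
        return self.primary.get(key, self.mirror.get(key))

store = MirroredStore()
store.write("order-42", {"status": "paid"})
assert store.primary == store.mirror  # the copies never diverge

Because the copies never diverge, losing either one – to a failed disk, power loss, or site outage – leaves an exact, current replica to fail over to. It is also why mirroring complements rather than replaces backups: corruption is mirrored just as faithfully as good data.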
In a prior blog post, I talked about what virtual memory is, the difference between swapping and paging, and why it matters. (TL;DR: swapping is moving an entire process out to disk; paging is moving just specific pages out to disk, not an entire process. Running programs that require more memory than the system has will mean pages (or processes) are moved between disk and memory in order to free enough physical memory to run – and system performance will suck.) Now I'll talk about how to monitor virtual memory, on Linux (where it's easy) and, next time, on Solaris (where most people and systems do it incorrectly).

Assessing Linux Memory Usage

There are three things that may spring to mind when you think of measuring your memory system:

- how much physical memory is in use
- how much virtual memory is in use
- the paging rate between the two

For physical memory usage, you can run top or free from a shell:

[demo1.dc7:~]$ free -g
             total  used  free  shared  buffers  cached
Mem:            47    45     1       0        0       21

or you can use your handy monitoring system to view this over time.

Some people may see the output of free, or a graph of it, and react with "1G on my 48G system is all that's free? I'm out of memory!" This is a natural reaction – but wrong. Think of 'free memory' in Linux as 'wasted memory' (or better, 'memory the operating system has not yet been able to take advantage of'). Almost half the memory on this system is in use – but by the file cache. The file cache does what it sounds like: it caches recently accessed files in memory, meaning that if a program requests a file that is in the file cache, no disk access is required. Linux uses all physical memory that is not needed by running programs as a file cache, for efficiency. But if programs need that physical memory, the kernel will reallocate the file cache memory to the programs. So memory used by the file cache is free (from the point of view of being available for allocation to programs) but serves a useful purpose until it is needed by a program.

Even if all Linux memory is used, and very little is free or in use as a file cache, that can be OK. It's better to have some file cache in most situations – but not if you are running a large Java program and want to maximize the heap, or a database, where you want the database to manage disk caching, not the OS (as the database has more knowledge about the utilization of the data). In any event, so long as there is free virtual memory and no active swapping, you will be OK with highly utilized physical memory.

Virtual Memory in Use

To see the amount of swap memory in use, you can also use top or the free command:

[demo1.dc7:~]$ free -t
             total      used      free  shared  buffers    cached
Mem:      49376156  48027256   1348900       0   279292  22996652
-/+ buffers/cache:  24751312  24624844
Swap:      4194296         0   4194296
Total:    53570452  48027256   5543196

Or check your monitoring. Looking at the output above, we can see that the system has used zero swap space. So even though 90% of the total virtual memory space is in use (counting both swap and physical), there has never been a time when the system ran so low on physical memory that it couldn't free some from the file cache and had to put pages on swap.

If your swap usage is high, that can be dangerous: it means the system is in danger of exhausting all memory, and then if a program needs more and is unable to get it, Bad Things happen.
(Amongst others, the OOM (out of memory) killer will start to kill processes based on, among other criteria, the amount of memory they initially requested – which means that the server process that is the whole point of the server is likely to be one of the first to die.) It should be noted, however, that a low to moderate level of swap usage, where the swap is not actively being used, is no cause for concern whatsoever. It just means the system has shifted pages not actively in use from physical memory to disk, to free memory for more active pages. This is a Good Thing. The key is to know whether the swap is being actively used, which brings us to the next section.

Virtual Memory Paging Rate

To see the rate at which memory pages are being moved between physical memory and disk, use vmstat and examine the columns si and so (which stand for pages swapped in and swapped out), e.g. on a system low on memory:

[dev1.lax6:~]$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b    swpd   free  buff  cache   si   so    bi    bo    in    cs us sy id wa st
 8 17 2422376 122428  2520  24436  952  676  1796   904 10360  4421 41  0 33 26  0
 9 17 2423820 123372  2524  24316  732 1716   752  1792 12259  4592 43  0 25 32  0
 8 17 2425844 120408  2524  25044  416 2204  1616  2264 14675  4514 43  0 36 21  0
 7 19 2427004 120532  2568  25640  608 1280   764  1308 12592  4383 44  0 36 20  0
 8 24 2428816 121712  2572  25688  328 1880   500  1888 13289  4339 43  0 32 25  0

or see the same thing visually with your SaaS-based server monitoring.

This is the main indicator of memory issues: when your system is low on memory, it will swap out a lot of blocks, and if this happens at a high rate it becomes a performance bottleneck. Even worse, when your system needs to run code that is now on disk instead of in physical memory, it has to swap that code back in – which means your code now runs subject to the access time of disks (slow) compared to memory (fast).

The two metrics you need to care about are the rate of swapping (if it's in the hundreds of blocks per second for more than a few minutes, you are out of memory and your system performance will suffer) and a high level of swap usage (> 75% of swap space, not of total virtual memory). So long as you have a monitoring system alerting you to these two attributes, you are in good shape. And if you don't – you're playing with fire.
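If you want a quick scripted check of those two attributes, the Python sketch below reads the standard Linux /proc interfaces. The 75% threshold mirrors the rule of thumb above; note that the /proc/vmstat counters are cumulative since boot, so a real monitor would sample twice and compute a rate, as vmstat does:

def read_proc(path):
    values = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            values[parts[0].rstrip(":")] = int(parts[1])
    return values

mem = read_proc("/proc/meminfo")  # kB values: SwapTotal, SwapFree, ...
swap_pct = 100 * (1 - mem["SwapFree"] / mem["SwapTotal"]) if mem["SwapTotal"] else 0

vm = read_proc("/proc/vmstat")    # cumulative counters: pswpin, pswpout
print(f"swap used: {swap_pct:.1f}%")
print(f"pages swapped in/out since boot: {vm['pswpin']}/{vm['pswpout']}")

if swap_pct > 75:
    print("WARNING: swap usage above 75% - investigate memory pressure")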
The fourth version of the Internet Protocol, used to transfer data between devices online. It defines the IP address format and the structure of packets, the standard blocks of information. IPv4 addresses are written as four values between 0 and 255, separated by periods. This version of the protocol is limited to approximately 4.3 billion unique IP addresses. That is no longer sufficient to identify all devices connected to the Internet. The sixth version of the protocol (IPv6) overcomes this issue.
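To make the size of the IPv4 address space concrete, here is a short Python sketch using only the standard library's ipaddress module; the address chosen is from the documentation range and is purely illustrative:

```python
import ipaddress

# IPv4 addresses are 32-bit numbers, so the total address space is 2^32.
print(2 ** 32)  # 4294967296, i.e. ~4.3 billion unique addresses

# A dotted-quad string is just a human-friendly rendering of that number.
addr = ipaddress.IPv4Address("192.0.2.1")
print(int(addr))                           # 3221225985: the 32-bit value
print(ipaddress.IPv4Address(3221225985))   # round-trips back to 192.0.2.1
```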
What is spear phishing?

Spear phishing is the fraudulent practice of sending emails ostensibly from a known or trusted sender to induce targeted individuals to reveal confidential information. As the name suggests, spear phishing is a type of phishing attack aimed at a small group or an individual. Whereas phishing attacks are broad and apply to many people, spear phishing emails are focused on a particular individual following in-depth research into the target.

A spear phishing email may be designed to mimic a supplier's unpaid invoice so that the cyber attacker can gain financially. By sending an email that looks the same as a genuine supplier invoice email, the hacker hopes to fool the recipient into paying funds into their account rather than to the legitimate supplier.

Spear phishing threats

Spear phishing attacks are increasing year on year. Not only are they becoming extremely common, they are also becoming much more sophisticated. According to the FBI, Business Email Compromise (BEC) schemes, which are a form of spear phishing, continued to be the costliest for businesses in 2020: 19,369 complaints with an adjusted loss of approximately US$1.8 billion. Broader phishing scams were also prominent: 241,342 complaints, with adjusted losses of over US$54 million. The number of ransomware incidents also continues to rise, with 2,474 incidents reported in 2020. The data for 2021 has yet to be released, but experts say the phenomenon has been on an upward trend for at least the past few years.

Damage caused by spear phishing

The damage caused by spear phishing attacks is immense. Phishing attacks are commonly used because of their simplicity and overall effectiveness. The underlying principle behind a phishing attack is to trick a human into doing the cyberattacker's job for them. This is cost-effective and drastically reduces the time the hacker spends on the task. The alternative, gaining access and deploying malware by exploiting a vulnerability in an organisation's cybersecurity defences, is costly, complicated and time-consuming.

Verizon's 2021 Data Breach Investigations Report (DBIR) states that phishing is involved in 36% of data breaches. In line with the FBI's findings, it also found that BEC and phishing attacks are the costliest causes of data breaches, and that phishing emails are one of the most common delivery vectors for malware.

Sadly, many employees are unable to detect sophisticated spear phishing attacks. Educating staff and protecting the organisation against spear phishing threats requires sophisticated security solutions that can identify and block phishing attacks before they reach workers' inboxes.

What helps protect from spear phishing?

Spear phishing attacks are bespoke and targeted specifically at the recipient, which makes them much harder for employees to detect than standard phishing campaigns. But there are many actions your organisation can take to protect itself against sophisticated spear phishing campaigns. Here is a selection of the best practices to follow:

- Educate staff: Training staff to spot the warning signs of phishing emails is key to managing spear phishing threats.
- Email scanning: Spear phishing emails use an array of techniques to appear legitimate, the most common being spoofed sender addresses. Security teams can halt these attacks by scanning emails for indicators of phishing and blocking them.
- Relationship monitoring: By creating a relationship graph and identifying anomalous messages, an anti-phishing solution can flag emails that are likely to be part of spear phishing attacks.
- Malicious URL detection: Spear phishing emails more often than not contain malicious URLs that direct recipients to pages designed to install malware or steal login credentials. Security teams should be able to block emails containing links to suspect URLs.
- Use MFA when you can: Using multi-factor authentication (MFA) is key to bolstering security in any organisation.
- Sandboxed attachment analysis: Phishing emails often include malicious attachments disguised as legitimate files. With sandboxed attachment analysis, malicious files can be detected and deleted before they reach an inbox.

Spear phishing protection

Spear phishing attacks are becoming more and more sophisticated, and more difficult to detect and block. Phishing attacks are a huge threat to corporate cybersecurity: not only do they allow hackers to steal user credentials, they can also be used to steal money and plant malware on company systems.

So, what helps protect from spear phishing? Companies like RiskXchange are key to helping organisations around the world improve their cybersecurity defences. Not only can we help protect your organisation against a range of phishing threats, we can also protect your company against sophisticated spear phishing campaigns and strengthen your overall cybersecurity defences. Get in touch with RiskXchange to find out more about phishing emails and what helps protect from spear phishing.
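As a toy illustration of the email-scanning idea above, the Python sketch below (standard library only) flags sender domains that merely resemble a trusted one, a common spoofing trick in spear phishing. The trusted-domain list, threshold and example address are invented for the demonstration; a production scanner would also check SPF/DKIM/DMARC results and many other signals:

```python
from difflib import SequenceMatcher

# Hypothetical domains this organisation legitimately receives mail from.
TRUSTED_DOMAINS = ["acme-supplies.com", "bigbank-payments.com"]

def similarity(a, b):
    """String similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

def flag_sender(sender, threshold=0.85):
    """Warn when a sender's domain resembles a trusted domain without
    matching it exactly, which suggests a lookalike spoof."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return None                      # exact match: treated as legitimate
    for trusted in TRUSTED_DOMAINS:
        if similarity(domain, trusted) >= threshold:
            return f"'{domain}' looks suspiciously like '{trusted}'"
    return None

# 'rn' mimics 'm' -- a classic lookalike registration.
print(flag_sender("invoices@acrne-supplies.com"))
```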
The state of cyber resilience at the close of 2021

13th January 2022

The pandemic forced UK organisations across several industries to adopt remote working in order to enforce social distancing measures, pushing many organisations to accelerate their digital transformation. At the same time, financial and political instability created a volatile environment that spurred cybercriminals on to become bolder in their cyberattacks.

How has cybersecurity fared during the past few years?

Cyberattacks surged over fivefold as a result of the pandemic. The UK's largest organisations suffered, on average, 885 cyberattacks in a single year, more than triple the global average of 270. Furthermore, the cost of cybercrime has grown to over £1.3 million a year, £350,000 more than the global average.

With companies forced to make an overnight transition to digital working, company systems are not as stable as they should be. A successful data breach also does more damage and puts data at significant risk, with over half of UK organisations reporting that they lost over 100,000 customer records in a year.

Business executives have, however, taken action to curtail the rising number of cyberattacks. UK executives have increased cybersecurity budgets by at least 10% to tackle growing threats and protect their data by bolstering cyber defences in the face of bolder, more aggressive cybercriminals. Thanks to larger cybersecurity budgets, UK organisations have reduced the number of successful breaches from 30 to 17 a year. Organisations have also improved cyber incident response rates, with over 90% of organisations taking less than 30 days to remediate an attack.

What security concerns do organisations face in the future?

Despite the progress made last year, there are concerns that must be addressed. Over 80% of organisations say the cost of staying ahead of cybercriminals is unsustainable, up from a fifth of businesses the previous year. If organisations are to make cyber defence sustainable, they need to devise more cost-effective means of staying ahead of cybercriminals.

Businesses must also extend their cyber defences to a wider ecosystem and secure their data, because indirect cyberattacks through the supply chain accounted for 64% of cyber breaches in recent years. The pressure to stay ahead of cyberattacks cost-effectively and to monitor the supply chain more closely underlines the importance of better cybersecurity and supply chain security technology in reducing cyberattacks and data breaches.
Over the previous decade, cyber attacks have been escalating. Owing to increased internet exposure and the inclusion of businesses in the digital economy, threats have become more common. Although the information security industry has been making strides, breaches still cause damages of over USD 6 trillion every year globally. To survive the labyrinth that is cyberspace, defending your digital presence is non-negotiable.

The pandemic also played its part in creating a hospitable environment for cyber criminals. As more people were forced to move away from their offices, cyber attacks became more common, with approximately 47% of surveyed individuals working from home having fallen for some form of phishing attack. It is important to be aware of and prepared for threats, especially at a time when global tech adoption rates are skyrocketing owing to COVID-19. In this article, we discuss how enterprises of all scales can strategize their cyber defense practices and mitigate the chances of becoming a victim of cybercrime.

A fault in the design

Even though campaigns and improved access to information have helped spread awareness of the importance of cyber security, attacks are on the rise. Unsurprisingly, a vast majority of these attacks are a direct consequence of human error. The larger issue here is indifference and complacency in the development of an action plan. Even though a report by Accenture suggests that more than 68% of surveyed business leaders believe their cyber security risks are rising, most companies still rely on rudimentary measures to protect their data. Such steps, although necessary, are simply not enough to provide the protection that is required.

Another important consideration is the inadequacy and insufficiency of security measures in a system. The lack of proper solutions for core defenses creates multiple voids in a strategy. These gaps are opportunities for attackers to exploit systems, and should instead be opportunities for businesses to improve them. While there can be innumerable reasons for failed security, some stand out. Here are some reasons a cyber defense strategy can fail:

It is common for businesses to isolate their data in order to prevent it from being breached. However, contrary to popular belief, information silos do not necessarily improve cyber security. Since the data sits in a single location, the silo reduces the complexity an attacker faces, making the data easier to breach.

Businesses often use different platforms to fulfill their cyber security needs. This can help cover multiple vulnerabilities, but it is not cohesive. The lack of integrated functionality creates gaps in the system and increases the chances of a breach.

The evolution of technology inevitably trickles down into the development of new ways to attack a system. Unfortunately, most cyber security measures are not future-proof, which leaves those defenses redundant against the latest threats.

Several parameters define the success or failure of a strategy. Ranging from pure statistics to the quality of actions, these factors can help determine the efficacy of an approach. Here are some factors to help decide the right course of action.

Mean time to detect (MTTD) and mean time to respond (MTTR): The average time a strategy takes to identify and contain a threat is critical to system integrity. These metrics help determine promptness. A faster MTTD and MTTR can prevent a threat from spreading.
It is also an important performance indicator for an information security officer to gauge the long-term effectiveness of a strategy.

Accuracy in identifying, diagnosing, and resolving threats is another important factor. For a strategy to be effective, it must be able to identify the maximum number of threats and neutralize them.

Degree of efficiency in response: The strategy's response relative to the severity of an attack is an important indicator. A threat response system that can counter complicated cyber attacks is preferable to an unsophisticated one.

Incident Response Plan (IRP): The results a strategy yields while responding to incidents are critical to the integrity of a system in the long run. Efficient incident response can help reduce the extent of the damage caused and allow the system to recover quickly.

Cost of doing nothing

One of the most important factors to consider while assessing a strategy is the cost of inaction. The damages sustained by an enterprise as a result of a failed attempt to counter threats are a clear indicator of a non-viable action plan. While several factors require careful analysis to assess the success of a strategy, cost can be determined in simple and absolute terms.

To put things into perspective, the Kaseya VSA (Virtual System Administrator) ransomware attack in July of 2021, which was made possible through an authentication bypass vulnerability, caused over 1,000 companies to face significant downtime. A renowned retail chain in Scandinavia had to keep its stores closed for a whole week, and several businesses had to rebuild their systems from the ground up. Not only did the attack cause various companies to lose data, it also halted operations, resulting in lost revenue, unexpected expenditure and damage to reputation. Groups like REvil (which was responsible for the Kaseya attack) are always on the lookout for opportunities to strike, and doing nothing should not even be considered a remote possibility.

Suggested reading: Mapping the ROI of a good cyber defense strategy

What you can do

In order to prepare an air-tight cyber security blueprint, it is important to understand the flaws and evaluate the need for a better platform. While preparing a strategy, businesses need to keep in mind the impact of a failed strategy. It is just as important to be able to control the damage as it is to prevent attacks. While preparing an action plan it is easy to get confused: with a myriad of options and solutions on the market, choosing the right one can get tricky. Therefore, here is an introductory roadmap to developing a cyber defense strategy that keeps your data secure.

Acknowledging the status quo

One of the first steps in approaching your cyber security is to assess individual needs. Evaluate the maturity of your defenses and their likelihood of failing. Consider multiple aspects of your system, such as previous incidents, scale, exposure, and risk. Understanding the needs of the system can help you determine the best course of action for securing your information. Apart from the strategy itself, it is equally important to gauge the threat landscape. Before settling on a strategy, knowing the environment and its associated hazards is essential.

Testing and tweaking

Once identified, deploy your strategy to test the waters. A test run will help you tweak the strategy to manage vulnerabilities and risk.
Simulating possible threats to check whether the strategy can counter them can prepare both the system and your team for possible incidents. Such examination will yield metrics that can then be used to improve security. A result-oriented approach can help businesses achieve their desired security goals.

Deploying, monitoring, and adapting

After several rounds of testing and corrections, a strategy is ready to be deployed. However, just because a strategy is prepared for action does not mean the work is complete. One of the most vital components of keeping a strategy viable is continuous improvement. Once a strategy is in place, it requires constant monitoring to validate its success. A strategy, like the threats it counters, should be dynamic: it must be updated constantly to adapt to new dangers and be equipped to counter possible future hazards.

The comfort of the framework

Such a strategy is scalable and can be employed by enterprises of all sizes. Large businesses that are under constant threat of attack need to understand the importance of such a strategy. While small and medium-sized businesses are usually not at risk of large-scale collaborative attacks, it is advisable to always be prepared for every possible threat. Since the use of multiple systems often leaves gaps in cybersecurity, businesses should also consider unifying their processes. It is recommended to use a platform such as BluSapphire Elite that can perform a variety of functions and share information across multiple processes. Such a solution is relevant for businesses of all sizes and can adapt to both the smallest and largest requirements. The strategy targets the major pain points of cyber security: present considerations, scrutiny, and engagement. These aspects are key determinants of the success of a cyber defense strategy.

The way forward

As the digital work environment becomes more complicated by the day, businesses need solutions that can keep up with changing trends. The increasing penetration of technology will inevitably create more room for cybercrime. To counter such present and future threats, a sound cyber defense strategy is imperative. As more businesses take up digital residence, more opportunities are created for miscreants to experiment. To avoid subjecting your business to the perils of failed defenses, a strategy must be developed expeditiously.

There is no way of knowing what the future holds; however, statistics can be interpreted to predict possible threats. Since the number of cyberattacks (and attackers) has been on the rise, it is only reasonable to assume that the threats will become worse. Moreover, the severity of these hazards will be coupled with a rise in their volume. As it becomes harder to protect data, only a well-thought-out strategy can safeguard a business's information.
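As a brief illustration of the MTTD/MTTR metrics discussed above, this Python sketch computes both from a handful of made-up incident timestamps; a real calculation would pull these records from a ticketing or SOAR system:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the attack began, when it was
# detected, and when it was fully remediated.
incidents = [
    {"start": "2021-07-02 09:14", "detected": "2021-07-02 11:40", "resolved": "2021-07-03 08:00"},
    {"start": "2021-08-19 22:05", "detected": "2021-08-20 01:15", "resolved": "2021-08-20 06:30"},
]

def _t(s):
    """Parse the timestamp strings used above."""
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

# MTTD: average gap between compromise and detection, in hours.
mttd = mean((_t(i["detected"]) - _t(i["start"])).total_seconds() for i in incidents) / 3600
# MTTR: average gap between detection and full remediation, in hours.
mttr = mean((_t(i["resolved"]) - _t(i["detected"])).total_seconds() for i in incidents) / 3600

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```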
SIEM stands for Security Information and Event Management, a software solution designed to collect, collate and analyze activity from a variety of active sources (servers, domain controllers, security systems and devices, and networked devices, to name a few) that span your company's IT infrastructure. By analyzing all this stored data, SIEM helps detect threats and uncover trends, enabling your organization's cybersecurity personnel to look into any red flags that turn up.

SIEM is a combination of first-generation Security Information Management (SIM) systems, which keep extensive logs and store massive amounts of data, and second-generation Security Event Management (SEM) systems, which add correlation of events, notifications, console views, and real-time monitoring.

In a nutshell, SIEM is a data aggregator that puts together, stores, and categorizes enormous amounts of data, and makes it accessible for your team to delve into and analyze security breaches in minute detail. It applies statistical techniques and correlation rules to extract useful, actionable information from data spanning multiple events and numerous log entries. The system consolidates logs into separate categories, for example to differentiate between successful and failed logins. It detects malware presence and sweeps port scans for attempted cybercriminal activity. SIEM provides visual aids such as dashboards to maximize the visibility of an organization's security posture and sends alerts to flag potential issues in real time. It applies boolean logic rules to raw data to make sense of it.

Why do you need SIEM?

SIEM helps your incident response team by:

- Generating alerts based on analytics that line up with a pre-determined rule set indicative of a security breach or issue
- Performing forensics and reporting on security incidents

Whereas the primary and most critical capabilities of SIEM are threat detection, threat investigation, and rapid response, it also performs other functions such as data normalization, threat assessment, and response workflow.

Central log data aggregation and visibility

Today, with digitization, the number of log-generating sources has increased exponentially. Irrespective of the size of the organization, it is essential to aggregate and maintain the logs generated from multiple sources. Here are quick examples of log-generating sources within an organization:

- Desktops, laptops, servers, virtual machines, thin clients or any other compute device
- Existing security infrastructure (e.g., AV solution, firewall, web proxy, intrusion detection system)
- Network infrastructure (e.g., switches, routers, access points, load balancers)
- Cloud/SaaS applications (e.g., Office 365)
- Cloud infrastructure (e.g., AWS, Azure, DigitalOcean, Google Cloud)
- Application logs (e.g., SAP, Oracle, custom apps)
- Configuration management systems (e.g., SolarWinds)
- Active Directory
- Infrastructure management (e.g., Nutanix)

Each component you should have visibility into generates a high volume of unstructured data, and it is humanly impossible to derive value from such volume manually. A SIEM will normalize the data from its unstructured format and enrich the structured data with real-time cyber threat intelligence.
The enriched log is processed further to check for potential cyber attacks via correlation rules and presented fluidly on a dashboard, enabling your security analysts to detect cyber threats faster thanks to visibility across multiple data sources. At the same time, it also supports analysts from a forensic analysis standpoint.

Enabling the organization's compliance and regulatory requirements

With data being the new oil, data privacy and sovereignty have become major concerns, bringing in a plethora of compliance and regulatory requirements and practices. Practically every business vertical and horizontal today, irrespective of size and revenue, needs to meet some form of compliance or regulatory requirement, and the majority of these requirements overlap from an IT and data security standpoint. Here are quick examples of data insights that are usually part of audits:

- Storage of log data from multiple log sources (firewalls, compute devices, etc.)
- Authentication audit data
- Access insights
- Reports on user/access audit insights
- Modification of objects
- Database access authentication
- Configuration insights
- Server audit/authentication insights

SIEM by design collects data insights that act as a baseline for providing the above. Since the logs received into the SIEM are stored in a normalized and structured format, real-time reporting and meeting compliance needs become easier, streamlining compliance- and regulation-based reporting. BluSIEM, specifically, also offers varied capabilities for building custom reports, enabling custom and internal audits.

Enhanced forensics and efficiency

Today's cyber attacks are often only discovered once the damage has already been done. One of the major reasons is an organization's tendency to secure only the crown jewels and ingest limited data insights, primarily to save OPEX and avoid the complexities of data management. A next-generation SIEM with big data capability and horizontal scalability, built cloud-native, will free your organization from the shackles of OPEX cost and complex data management, letting you view security log data from many different sources in your organization through a single interface. This enables your security team to go back through days, months or years of data swiftly and perform detailed forensics with ease. Not only does this improve visibility, it also enhances the incident management process in multiple ways. Security teams that today are completely lost finding answers will be enabled and equipped to fight future threats.

Key benefits of SIEM

Threat detection, including insider threat identification using a branch of analytics known as UEBA (user and entity behavior analytics). For example, it can flag suspicious activity such as an employee changing permissions without authorization, or repeated attempts to log in somewhere. It can also pinpoint malware-compromised user accounts within an organization, continually surveil network traffic, and perform threat assessment functions.

Companies are subject to several rules and regulations they are required by law to comply with. The regulatory bodies they are answerable to depend on the nature of the enterprise; for example, a health insurer would need to be HIPAA-compliant to protect patient data and uphold privacy laws. SIEM security solutions can monitor traffic across the network, identify attackers, flag vulnerabilities and detect malware.
The data SIEM stores can be useful for audits and for generating reports as needed. SIEM can identify new critical systems, monitor access to files, record changes to credentials, verify authentication info, and monitor changes to data policies.

IoT-relevant security features

SIEM provides advanced security capabilities relevant to cyber attacks, data exfiltration, IoT, and connected device security, and sends out alerts so suspected incidents can be investigated. It applies threat intelligence and analysis of past incidents to seek out newer attacks. Where the target is not known, SIEM scans network traffic to find large data transfers and the system performing them. This capability extends to anomalies indicating data exfiltration over mobile or other compromised smart devices.

Zero-day threat detection

Zero-day threats pertain to undetected or unaddressed flaws in hardware or software. Once detected, it is a race against time to patch the flaw before hackers can exploit the vulnerability in a zero-day attack. SIEM can detect and analyze the behavior associated with a zero-day attack. For instance, an attack via PDF can cause Adobe Reader to crash; at the back end, a process is generated that links the attack through outbound or inbound connections, and SIEM can be configured to pick up traces of such activity.

Data for operations and capacity management

Data is of great value in capacity planning. SIEM-aggregated data can help companies track their bandwidth and data accumulation over time to factor into budget and expansion plans, helping organizations avoid unnecessary capital expenditure.

Limitations of SIEM applications

Sometimes it is difficult to differentiate between actual critical data theft and more benign activity, even with SIEM. This is because SIEM flags what it sees as threats without providing context: it is up to the team to sort out exactly what is happening. SIEM also falls short in pinpointing relevant information in unstructured data. For example, a SIEM system may flag rising or unusual network activity at a particular IP address, but it will not reveal who the user behind it is, or exactly which files the user accessed. Similarly, SIEM cannot differentiate between authorized file activity and suspicious activity. This generates a lot of work for the security team to diagnose what is going on, and the number of false alarms and wild goose chases can be high enough to desensitize them, letting an actual exploit go unnoticed.

However, BluSapphire's next-gen SIEM platform is built to mitigate some of these general issues with SIEM. Please take a look at the BluSapphire Basic solution page to learn more.
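To make the normalization-plus-correlation pipeline concrete, here is a small Python sketch. The log format, regular expression and threshold are invented for the example and are far simpler than what a product like BluSIEM ships, but the two steps, turning unstructured lines into named fields and then applying a rule across events, are the same in spirit:

```python
import re
from collections import defaultdict

# Illustrative raw logs in a made-up sshd-like format.
RAW_LOGS = [
    "2022-09-30T10:00:01 sshd failed password for admin from 203.0.113.7",
    "2022-09-30T10:00:03 sshd failed password for admin from 203.0.113.7",
    "2022-09-30T10:00:04 sshd failed password for admin from 203.0.113.7",
    "2022-09-30T10:00:09 sshd accepted password for alice from 198.51.100.2",
]

PATTERN = re.compile(
    r"(?P<ts>\S+) sshd (?P<outcome>failed|accepted) password "
    r"for (?P<user>\S+) from (?P<src>\S+)"
)

def normalize(line):
    """Turn an unstructured line into a dict of named fields."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

# Correlation rule: N or more failed logins from one source -> alert.
THRESHOLD = 3
failures = defaultdict(int)

for line in RAW_LOGS:
    event = normalize(line)
    if event and event["outcome"] == "failed":
        failures[event["src"]] += 1
        if failures[event["src"]] == THRESHOLD:
            print(f"ALERT: {THRESHOLD} failed logins from {event['src']}")
```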
People need to regain control of their own data, instead of it being in the hands of the tech giants, according to Belgian professor of computer science Ruben Verborgh, who is working with World Wide Web inventor Tim Berners-Lee on Solid, a set of standards for personal "digital data safes".

Companies such as Facebook and Netflix, but also Pinterest and Google, are keen to collect as much data as possible about users in order to steer their behaviour accordingly, with the ultimate goal of being as interesting as possible to advertisers.

Recently, a documentary called The Social Dilemma became available on Netflix. In it, a number of people prominent in the early days of today's tech giants talk about their concerns. Verborgh, professor of semantic web technology at Ghent University and researcher at Imec, has seen the documentary but was disappointed by it. "It was 10 years late," he said. "Although it may still contain shocking information for the general public, this knowledge has been there for a long time. But the documentary shows very well that we have now lost control of our personal and public data. We can no longer decide for ourselves what to do with it and how we want to obtain information."

Verborgh added: "Tim freed up information with the web. No matter where you are or what device you're using, you can access the information on the web. We want to do the same with Solid."

When Berners-Lee announced the Solid project, he said on his website that the current web had become "a driver of inequality and division". He does not like the fact that his invention is now ruled by a handful of tech giants who demand personal information from users in exchange for their services. In his view, Solid needs to "redirect" the web to his original vision of a democratic and equal network for the exchange of information.

Verborgh believes companies' "data collection frenzy" hinders innovation. "It also encourages unfair competition," he told Computer Weekly. "After all, companies that do have an innovative idea often don't make progress because they lack data. Large companies such as Facebook collect all this data, but do not use it for innovation."

Writing in current affairs magazine Knack, Verborgh called this socially and legally undesirable competition. "Here is a simple example that illustrates this: can you name one innovation that Facebook or Twitter has implemented over the past five years?" he wrote. "Perhaps not. These companies don't innovate significantly because they already have so much data."

And it is that collection frenzy that can be contained with Solid, he added. "Because then, companies can make use of data and information, but they don't have to collect and store it all themselves."

People often simply do not know that their digital identity has value, or they don't really care. Laziness and inattention also cause people to hand over valuable information about themselves.
Not only do we run the risk of becoming victims of cyber crime, but we also gradually lose our own will. "Very few people know that the data they post on social media often becomes the property of the platform," said Jelle Wieringa, security advocate at KnowBe4, an organisation that trains people in security awareness. "Such a social media platform can do whatever it wants with it, in accordance with the conditions that are often blindly accepted by the user. Many people don't think it will happen to them anyway, but because of the huge amounts of information they receive every day, they have become complacent and inattentive. They simply accept terms and conditions that they don't even read, and post information on social media that can be used against them. The average web user does not realise that they, too, can become a victim, and when they do realise it, the question is whether they have enough knowledge to recognise attacks and respond to them safely."

Virtual data vault

That is why it is important to regain control over your data, said Verborgh. "And control doesn't mean that everyone constantly has to decide for themselves whether to pay for a particular service with money or their data," he said. "Compare it to investing. There are people who like to invest themselves, but there are also people who leave that to their bank. The fact that you have a choice – that's what it's all about."

At the beginning of this year, Innoviris, the Brussels Institute for Research and Innovation, invested €500,000 in Digita, a Belgian startup working on a virtual data vault based on Solid. "We are working on a technology that makes it possible to set up a worldwide personal data web," said Tom Haegemans, a founder of Digita and professor of policy informatics at KU Leuven. "In such a personal data web, everyone owns a virtual data vault in which you get a uniform overview of all your data, even if it is actually stored at different companies. With such a virtual data vault, you can easily retrieve your data, manage access to it and keep it up to date."

Solid is intended to correct the distorted balance of power in the digital world, so that users once again determine what happens to their personal information and where that information is stored. According to Verborgh, the project offers benefits for both users and companies. "Innovation often requires data, and collecting all that relevant data is a challenge for many organisations," he said. "With Solid, this becomes a lot easier, because you no longer have to waste energy on collecting – because in the end you never have enough data."

A pleasant side effect is that privacy becomes much less of an issue, although according to Verborgh, privacy is not at the heart of Solid. "Privacy does not really have anything to do with it," he said. "It is collateral damage from the wrong business model. There is no need to focus on privacy, because you cannot fix privacy as such – it is one-sided. By giving people control over their own data, privacy improves by itself. It is a consequence of a better business model."

The Flemish government is now working on a data utility company based on Solid. Among other things, the company is working on a new way of dealing with citizens' data. Berners-Lee applauded the Flemish government's efforts.
"You understand," he told Flemish business paper De Tijd. "The government understands the concept, and Ghent University and the Imec research centre are at the top."

Verborgh added: "We need to innovate in a different way. Instead of collecting as much data as possible, we need to learn to work with the data that citizens make available to us."
Pop quiz: when did people first start shooting movies? You might be tempted to answer "the 1910s" or "the 1920s," but you'd be wrong. As it turns out, the Lumière brothers released their earliest films during the 1890s, and some simple footage goes back even further, to the 1870s.

But of all the silent films released by the Lumière brothers, "The Arrival of a Train at La Ciotat Station" is one of the most important. Not only does it give you a perfect window into daily life in the 1890s, it allegedly tricked terrified audiences of the time into thinking a real train was coming out of the screen! Tap or click to see more historical films like this for free at the Library of Congress.

Of course, today's audiences know better than to fear moving pictures. But that hasn't stopped "The Arrival of a Train" from becoming a classic in its own right. And now, more than 100 years later, clever tech users are upgrading the clip with the help of a neural network. The results are astonishing.

A piece of film history gets a digital facelift

When the Lumière brothers released their landmark silent film "The Arrival of a Train at La Ciotat Station," audiences were dazzled. They had never seen moving pictures like it, and a popular anecdote tells of people running away from the oncoming train out of fear of being crushed. But nobody was in danger. And even today, the power of this film is obvious at a glance. It's only 46 seconds long, but it paints an accurate picture of life in the late 1800s.

Despite the film's historical value, it's still very much of its time. The graininess and staccato nature of the film reel make sense for a 100-year-old clip. But a YouTuber named Denis Shiryaev wanted to use this clip to study the effects of a neural network on old films. The results are nothing short of remarkable. As you can see, it no longer looks like old footage. The addition of smooth motion and increased clarity makes the film look more timely than ever, and hammers home the point that these were real people immortalized in this piece of film history.

How did they do it?

This modern restoration of a classic film reel is only possible thanks to the magic of neural networks. A neural network is made up of many simple processing units working in tandem, loosely modeled on the neurons of an actual brain or nervous system. Each unit refines the signal passed along by the others, and when a network trained for video enhancement crunches a clip like the one above, it can infer the extra animation frames needed for smooth motion, sharpen detail and even add color.

With the help of a neural network, another YouTube user was able to colorize the footage to make it look even more modern. Now you can take an even clearer trip into the past with this footage, and see what the world was like through the eyes of a person from the 19th century.

As video technology gets even more powerful, we expect to see neural networks fulfilling an even more critical role in the future. After all, they're already helping software developers generate life-like human faces based on nothing but data. Tap or click here to see how computers are creating fake faces and more.
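For the curious, here is a deliberately naive Python/NumPy sketch of what "inserting extra animation frames" means: it synthesises an in-between frame by averaging two neighbours. The neural networks used on the actual footage learn motion rather than averaging pixels, so treat this only as an illustration of filling temporal gaps:

```python
import numpy as np

def midpoint_frame(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Return a synthetic frame halfway between two 8-bit frames."""
    blended = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return blended.round().astype(np.uint8)

# Two tiny dummy 4x4 greyscale "frames" standing in for real video.
a = np.zeros((4, 4), dtype=np.uint8)        # an all-black frame
b = np.full((4, 4), 200, dtype=np.uint8)    # a brighter frame
print(midpoint_frame(a, b))                 # every pixel is 100: halfway
```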
Unlike application-layer distributed denial of service (DDoS) attacks and volumetric DDoS attacks, protocol DDoS attacks rely on weaknesses in internet communications protocols. Because many of these protocols are in global use, changing how they work is complicated and very slow to roll out. Moreover, for many protocols, their inherent complexity means that even when they are reengineered to fix existing flaws, new weaknesses are often introduced, allowing for new types of protocol and network attacks. Detecting protocol DDoS attacks requires in-depth monitoring of communication streams and analysis of deviations from expected standards.

Border Gateway Protocol (BGP) hijacking is a good example of a protocol weakness that can become the basis of a DDoS attack. BGP is used by network operators to announce to other networks how their address space is configured. If a bad actor manages to send a BGP update that is presumed to be authentic, then traffic intended for one network can be routed to a different network, and the spurious traffic can cause resource depletion and congestion. Because BGP is used by tens of thousands of network operators around the world, an upgrade to a more secure version of the protocol would be both complicated and very expensive to deploy. Other protocol attack examples include the SYN flood and the Ping of Death.

Unfortunately, protocol attacks often aren't large enough to make the news, so finding good examples is hard. In addition, protocol DDoS attacks aren't deemed successful based on their size but rather on the frequency and persistence of the attack. One of the rare public examples of a protocol DDoS attack occurred in 2018, when hackers used BGP hijacking to redirect traffic intended for MyEtherWallet, a service that manages Ethereum cryptocurrency accounts, to Russian servers that presented a fake version of the site. The attack lasted roughly two hours and acted as cover for stealing the contents of cryptocurrency wallets. The Verge reported:

Connecting to the service, users were faced with an unsigned SSL certificate, a broken link in the site's verification. It was unusual, but it's the kind of thing web users routinely click through without thinking. But anyone who clicked through this certificate warning was redirected to a server in Russia, which proceeded to empty the user's wallet. Judging by wallet activity, the attackers appear to have taken at least $13,000 in Ethereum during two hours before the attack was shut down. The attackers' wallet already contains more than $17 million in Ethereum.

A10 Networks Thunder® Threat Protection System (TPS®) provides network-wide protection against all types of DDoS attacks, with high availability to ensure application performance. Designed for deployments at enterprise and service provider scale, A10's DDoS mitigation solutions provide 10 to 100 times lower cost per subscriber compared to traditional network vendors and are available in both hardware and software form factors. Learn about the latest developments in the world of DDoS that can help you improve your security posture and protect your resources against devastating DDoS attacks.
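As a toy example of spotting one classic protocol attack, the SYN flood, here is a Python sketch that counts SYN packets per source over a sliding window. The packet records, window and threshold are all invented for illustration; real mitigation gear such as Thunder TPS works from live traffic at vastly higher rates:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10
SYN_THRESHOLD = 100   # SYNs tolerated per source within the window

windows = defaultdict(deque)  # src_ip -> timestamps of recent SYNs

def observe_syn(src_ip: str, ts: float) -> bool:
    """Record a SYN; return True if src_ip now looks like a flooder."""
    q = windows[src_ip]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:
        q.popleft()              # drop SYNs that fell out of the window
    return len(q) > SYN_THRESHOLD

# Simulate a burst: one source sends 150 SYNs in about two seconds.
for i in range(150):
    if observe_syn("203.0.113.9", i * 0.013):
        print(f"ALERT after {i + 1} SYNs: possible SYN flood")
        break
```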
To fulfill this purpose, this technology focuses on the development of software, technological solutions and artificial intelligence techniques (essentially deep learning and machine learning). Within this specialty, for example, communication devices are designed for ALS (amyotrophic lateral sclerosis) patients: as the disease develops, the patient can reach an advanced stage called locked-in syndrome, in which they require an artificial interface in order to communicate. Deep learning and machine learning algorithms are used to design these devices, which allow the decoding of the information behind the signals of the central nervous system. The aim is for patients to use the devices for things like moving a wheelchair, moving a cursor on a screen, or using a word processor. Ultimately, it is about generating innovative technological solutions that improve patients' quality of life.

Deep learning and machine learning

Recall that machine learning (ML) is a critical part of artificial intelligence that enables systems to learn for themselves, identifying complex patterns within huge data streams without human intervention. ML algorithms can also predict future behaviors.

Deep learning is a specific ML technique that uses computational models which work in a way loosely analogous to the human brain, giving rise to artificial neural networks that analyze data and extract patterns in order to solve problems. This technique is applied in computer vision, face and voice recognition, and natural language processing, and also improves intelligent translators and semantic interpretation. Deep learning is known for approaching human-like perceptual ability, much as the human nervous system does. Computer vision is one of the specialties where it offers a strong improvement over the results of conventional algorithms.

Additionally, Gartner expects that by 2023, artificial intelligence (AI) and deep learning techniques will be the most common approaches in new applications of data science.
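To ground the idea of an artificial neural network, here is a minimal Python/NumPy sketch of its basic building block, a single "neuron". The weights are arbitrary illustration values; in deep learning, many layers of such units are stacked and the weights are learned from data:

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then activation."""
    return sigmoid(np.dot(inputs, weights) + bias)

x = np.array([0.5, 0.8, 0.1])    # e.g. three measured signal intensities
w = np.array([0.4, -0.6, 2.0])   # learned in practice, fixed here
print(neuron(x, w, bias=0.1))    # a value between 0 and 1, ~0.505
```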
Emerging technologies allow the Army to quickly map assets around the world.

The famous 19th century graph by Charles Joseph Minard showing Napoleon's troops during the invasion of Russia is an intersection of data, illustration and storytelling. The graph depicts three distinct and major variables that factored into the eventual French defeat: troop levels, temperature and movement.

Even though the process of cartography is starkly different in the 21st century, the use of pictures and diagrams as visual data is still prevalent, at least for the U.S. Army. Data visualization is helping the Army cut through red tape by displaying information such as equipment counts in places all around the world, said Chuck Driessnack, vice president of missile defense at SAIC. For example, Driessnack said that having a comprehensive view of the estimated $36 billion worth of equipment in Afghanistan is crucial as the U.S. continues to draw down its presence and bring equipment back home.

[Figure: Charles Joseph Minard, a French civil engineer, drew this map depicting Napoleon's 1812 advance into and retreat from Russia. According to scimaps.org, it "may be the best statistical graphic ever drawn."]

"We have all this equipment that has accumulated over all those operations, and they're sitting over in Afghanistan and we're coming out," said Driessnack, who was speaking at the Tableau Customer Conference in National Harbor, Md., on Monday. Tableau Software specializes in making data digestible through visualization systems, and works with both the private and public sectors. An example of SAIC's visualization work is a map that shows how many Army ambulances are at locations around the globe.

"So what's common in these organizations is they have the data but they can't get their arms around it," he said.

The other major benefit of implementing visualization systems is ensuring that personnel at every level receive consistent information through dashboards. This allows for better information sharing, from data analysts all the way up to the upper echelons of Army leadership.

"I'm talking about from the four-star general all the way down to the analyst, and they're seeing it all at the same time," Driessnack said.
When the General Data Protection Regulation (GDPR) becomes law across the European Union, it will also affect non-EU countries. This is because GDPR applies to all businesses and organisations involved in processing the personal data of individuals who are within the EU, no matter where that business or organisation is located. This does not just apply to EU citizens, but to citizens of any country who are within the EU when data is collected from them and processed. It is also worth noting that GDPR does not apply to EU citizens whose personal data is collected and processed outside of the EU.

The need for GDPR compliance can be difficult for non-EU countries, such as the US, as their attitude to the protection of personal data is very different from that of the EU.

The EU attitude to data protection

The ethos behind GDPR is that every individual should be entitled to privacy as a basic human right. This is why the new regulation sets out to harmonise the way personal data is processed throughout the EU. The stipulations help to ensure that personal data is dealt with securely, in order to protect the privacy of individuals.

The US attitude to data protection

There is no overall expectation of privacy in the US. Instead, personal data tends to be regulated by subject matter. Examples are HIPAA, which regulates health data, and GLBA, which regulates financial data. What all of this means is that some information which is protected by GDPR requirements may not be protected under US law. Therefore, once GDPR becomes law, processing the personal data of EU citizens will carry different rules than processing the personal data of non-EU citizens.

How does this affect US companies?

Dealing with two different attitudes towards data protection is likely to be too complicated for many US businesses and organisations. It will be too onerous to maintain separate systems for different groups of customers depending on where they are located. It is also worth noting that one individual could be subject to two different sets of rules. For example, a man could purchase a TV from a US company while at home in Texas; the data processed would be subject to US rules. He could then go on vacation to France and order more equipment from the same supplier while he is away; that data would be subject to GDPR rules, as the man is within the EU at the time of processing.

You can see how complicated the situation can get. This is why the most appropriate approach would be to treat data protection as an all-encompassing requirement in all aspects of data processing. This is less complicated in the long term and helps to ensure that businesses and organisations are compliant with GDPR. It remains to be seen how many US businesses and organisations adopt this approach.
In the first blog post of this series, we introduced the topic of machine learning and discussed why there is a lot of excitement around it. In this blog post, we explore different types of machine learning.

Let's start with a simple example that everyone can relate to. You want to teach a three-year-old the basic discipline of keeping their toys in the right place. The room is full of interlocking blocks and soft toys. There are two boxes: one for blocks and another, larger one for soft toys. You want to train the kid to put the right toy in the right box. You start by showing a block and then placing the block in the block box; similarly, you pick up a soft toy and then place it in the toy box. After a couple of iterations, the child learns what a 'block' is and which box it should go into, and what a 'soft toy' is and its designated box.

This type of learning is called supervised learning. It is a type of machine learning where one guides the system by tagging the output. For example, a supervised machine learning system that can learn which emails are 'spam' and which are 'not spam' will have its input data tagged with this classification, to help the system learn the characteristics or parameters of 'spam' emails and distinguish them from 'not spam' emails. Just as the three-year-old learns the difference between a 'block' and a 'soft toy', the supervised machine learning system learns which email is 'spam' and which is 'not spam'. Techniques such as linear or logistic regression and decision tree classification fall under this category of learning.

Now let's say that you want to test how smart the three-year-old is, and you ask them to sort the blocks into different piles. The child has not got any 'clue' from you, but they recognize the different shapes of the blocks. They pick up all the square blocks and create a pile, put all the rectangular blocks into another pile, and so on. The child could also sort the blocks based on color, or even a combination of shape and color. We call this type of 'unaided' learning unsupervised learning.

Unsupervised learning is a somewhat harder form of machine learning. In this type of learning the input data is not 'tagged', requiring the system to infer the naturally occurring boundaries or classifications. A good example is when you have a large amount of survey data and you want the learning system to determine segments of consumers based on their socio-demographic or behavioral characteristics. Techniques such as clustering or dimension reduction are unsupervised learning techniques that can take raw data and form groups based on certain characteristics of the data.

Now, instead of telling the child which toy to put in which box, you reward the child with a 'big hug' when they make the right choice and make a 'sad face' when they take the wrong action (e.g., block in the soft toy box or soft toy in the block box). Very quickly, after a few iterations, the child learns which toys need to go into which box. This is called reinforcement learning. Dynamic systems that can take an action in the real world and measure the outcome to correct their future behavior often exhibit this type of learning. Control-theoretic techniques and Markov decision processes are types of reinforcement learning.

Based on your problem domain and the availability of data, do you know which type of machine learning system you want to build?
What are some of the challenges you face in implementing each of these types of learning systems?
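As a compact sketch of the first two learning types, here is a hypothetical example using scikit-learn; the tiny data set (two made-up features per email, counts of exclamation marks and of known contacts) is invented purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row describes an email by two made-up features:
# [number of exclamation marks, number of recipients who are known contacts]
X = np.array([[8, 1], [7, 0], [1, 5], [0, 6], [9, 0], [2, 7]])

# --- Supervised: every input is tagged with the desired output.
y = np.array([1, 1, 0, 0, 1, 0])          # 1 = 'spam', 0 = 'not spam'
clf = LogisticRegression().fit(X, y)
print("spam?", clf.predict([[6, 1]]))      # learned from the tags

# --- Unsupervised: same data, no tags; the algorithm infers groupings.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("clusters:", km.labels_)             # groups found without guidance
```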
This article demonstrates what NAT loopback is. It's useful when you run a server inside your own network: after properly configuring the open port, port redirection or DMZ host, you can use the public IP address (or domain name) to access the server both from your home or office network and from the Internet. For example, an internal web server can be accessed by a user on the local network using the public IP (184.108.40.206) or domain name (example.com), just as it can from outside.
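To illustrate the address rewriting involved, here is a conceptual Python sketch of what the router does for a loopback connection: a DNAT step steering the packet to the internal server, plus an SNAT step so the server's reply returns via the router instead of short-circuiting straight back to the client. All addresses and the forwarding rule are illustrative, and a real router does this in firmware, not Python:

```python
PUBLIC_IP = "203.0.113.10"
ROUTER_LAN_IP = "192.168.1.1"
FORWARD = {(PUBLIC_IP, 80): ("192.168.1.10", 80)}   # port-forwarding rule

def hairpin(src, dst):
    """Rewrite a LAN client's packet aimed at the public address."""
    if dst in FORWARD:
        new_dst = FORWARD[dst]               # DNAT: steer to internal server
        new_src = (ROUTER_LAN_IP, src[1])    # SNAT: replies return via router
        return new_src, new_dst
    return src, dst

# A LAN client (192.168.1.50) browses to the public IP on port 80:
print(hairpin(("192.168.1.50", 51514), ("203.0.113.10", 80)))
# -> (('192.168.1.1', 51514), ('192.168.1.10', 80))
```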
Despite its modern implications and future-forward technologies, the data centre is much older than it lets on. Data centres date back to the 1940s, when the first computer-designated rooms became home to large military machines that were set to work on specific data tasks.

With the 1960s came the mainframe computer. Remember the episode in Mad Men when the ad office loses its lunchroom to the colossal computer? Well, this happened all over the world, with IBM leading the charge, filling dedicated mainframe rooms in large organisations and government agencies. Indeed, in some cases, these increasingly powerful and expansive machines needed their own free-standing buildings, which were to become the first data centres.

Then, in the 1970s, as Unix and corporate IT became more prominent, dedicated rooms full of equipment and networking would popularise the name "data centre". In the 1980s, PCs were introduced, typically connected to remote servers so they could access large data files. And by the time the internet became ubiquitous in the 1990s, internet exchange (IX) buildings had sprung up in key cities to serve the needs of the World Wide Web. These IX buildings were the most important data centres of their time, serving most people's needs.

Since then, the need for data storage has grown in lockstep with storage innovation, which has become a critical factor. Storage devices were manufactured in many form factors to fit the needs of the data centre and ultimately helped to power its incredible growth over the decades.

The start of storage

To understand the role of storage in the data centre, a brief glance back in time is the place to start. After their inception in 1956, hard disk drives became the preferred non-volatile storage device for computing, and they still were when AOL created the first modern data centre in 1997, at the start of the dotcom bubble. This kickstarted a boom in data centres, with companies using remote servers to get their websites online quickly.

However, as more data was created and captured, CPU speeds soared, churning through the information ever faster. The industry was galvanised into action to accelerate storage to the speed of compute. With no blueprint for how to address this challenge, storage took on a variety of new forms. Over the years, experiments with semiconductors led to the adoption of SSDs in the enterprise sector; then came the evolution from SATA connections to PCIe and the emergence of M.2 slots. Today, five major form factors are used in data centres, a marked expansion from the one that started it all.

Looking back through the decades of data centre evolution, some common themes run through storage development: demands for speed, constraints on capacity, and a willingness to try anything to limit bottlenecking. While CPU development remained iterative, storage had to shape its own path. Much like clay in the hands of a sculptor, storage has been moulded over the years into every conceivable shape and size. From the spinning disk to slotted memory and beyond, these innovations have been the cornerstone of flexibility in the data centre. That flexibility is also the data centre's primary strength: HDDs and SSDs coexist while serving different purposes and, with access to both, data hubs can find a balance between cost and speed.
Today, the major players in the enterprise sector, the behemoths among cloud service providers, are looking to build data centres using custom components, and engineers are working hard to meet these use-case-specific requests. Combined with the unrelenting rate of data creation in the world, this means that storage continues to evolve in step with the digital world.

The pace of change

Change is moving faster than ever before. To keep up, engineers are tinkering with 13 new form factors, more than double the five currently in use. How many will make it to mass production? To avoid bottlenecks, it is essential that storage speeds keep pace with the speed of computation. The new E1.S and E1.L drives are the front runners, suited to hyperscale data centres and high-capacity use cases, respectively. But emergent use cases may make one of the other 11 form factors a better contender for mass production. It's anyone's guess.

While technology enables new solutions, the key driver is the exponential growth of data. Our increasingly automated, digitised lives leave us entrusting all our files to the ether. Yet maintaining the data centres that hold it all is anything but simple. Along with building, running, and chilling these massive facilities, the physical media on which the data is stored requires constant upkeep. And with extra storage capacity being added all the time, tending to it all is an increasing burden for cloud computing providers.

Demand for cloud applications has surged during the coronavirus pandemic. According to the property company Knight Frank, take-up of data centre capacity almost doubled in cities such as Madrid, Warsaw, and Milan compared with 2019. Data hub mergers and acquisitions totalled almost $35bn globally in 2020, more than five times the volume of deals in 2019 and $10bn ahead of the previous annual record, set in 2017.

Data storage needs to continue to shapeshift safely, intuitively, and cost-effectively to best support the data it serves. Without the versatility of its storage components, the data centre today would look radically different, proving the importance, and persistence, of storage. In the end, data must be stored. In this industry, it's the one element that's here forever.
Interest in nanochemistry research and energy storage led 18-year-old Eesha Khare, a senior at Lynbrook High School in San Jose, to develop a supercapacitor that could potentially be used in flexible displays and fabrics. Her effort won her first prize at the Intel Science Fair and the Project of the Year award in the senior division of the 2013 California State Science Fair.

Khare created a nanorod-electrode supercapacitor with increased energy density that retained a supercapacitor's high power density and long cycle life. "I wanted to see if I could apply my research to a commercial idea," Khare told TechNewsWorld. There hasn't been as much research done on supercapacitors as there has been on batteries and conventional capacitors, so she decided to focus on them.

There has been speculation that the supercapacitor might be able to recharge a cell phone battery rapidly, but "this advancement is not really about charging mobile solutions, it's about power storage," said Jim McGregor, principal analyst at Tirias Research. "Perhaps instead of two batteries or cells, you might have a single battery or cell with something like this capacitor to recharge the battery."

A Thumbnail Sketch of Khare's Project

Khare designed, synthesized, and characterized a core-shell nanorod electrode with a hydrogenated titanium dioxide core and a polyaniline shell. The titanium dioxide acts as an electrostatic double-layer core: in electric double-layer capacitors, the electrical charge stored at the interface of a metal and an electrolyte is used to construct a storage device. The good conductivity of the hydrogenated titanium dioxide, combined with the high pseudocapacitance of the polyaniline, results in heightened overall capacitance and energy density while retaining power density and cycle life. Pseudocapacitance can increase the capacitance value of a supercapacitor by an order of magnitude over the double layer's capacitance.

Researchers often work with one form of titanium dioxide, anatase, but Khare grew rutile titanium dioxide crystals instead because "rutile offered a direct electron transport pathway," even though it was harder to grow. She put down an initial seed layer of rutile on a flexible substrate and used hydrothermal growth, in which the rutile aggregated on top of itself.

The new electrode was fabricated into a flexible solid-state device that lit an LED as a test of a practical application. It demonstrated a capacitance of 238.5 farads per gram (F/g), compared with the 80 F/g of the next-best supercapacitor in previous research. This resulted in an excellent energy density of 20.1 Wh/kg, comparable to batteries, while maintaining a high power density of 20,540 W/kg. It also demonstrated a much higher cycle life than batteries.

Khare used laboratory equipment at the University of California Santa Cruz under the supervision of professor Yat Li, who agreed to work with her after she had reached out to "30 or 40 professors at many universities in my area" without success. Google has reached out to Khare, but "I haven't followed through with them yet," she said. Google did not respond to our request to comment for this story.

Comments on Khare's Work

Supercapacitors "are probably not very useful in consumer electronics where sustained power is needed," Harold Kung, a professor in Northwestern University's chemical and biological engineering department, told TechNewsWorld. "Supercapacitors are used where you need a surge of power rapidly for a very short time.
Batteries discharge power slowly, which is why you can power your cell phone for a long time."

Whether a supercapacitor could be used in flexible fabric is open to question because of its rapid discharge, Kung said.

Recharging a cell phone battery, or any lithium-ion (Li-ion) battery, in a matter of minutes or even seconds is easy, Tirias Research's McGregor told TechNewsWorld. "It's keeping the battery from catching fire and exploding that is the challenge." Many companies are working to improve the algorithms that manage charging in Li-ion batteries, McGregor noted. Qualcomm, for one, offers Quick Charge 2.0 technology, which should cut battery charging time by about 50 percent.
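As a rough consistency check on the reported figures, the standard capacitor energy relation E = ½CV² connects the specific capacitance to the energy density. The cell voltage below is inferred from the article's numbers, not stated in it, and real device-level figures depend on which mass (electrode material versus whole device) they refer to.

```python
# Back-of-the-envelope check, assuming E = 1/2 * C * V^2 applies directly
# to the reported per-mass figures. The ~0.78 V cell voltage is inferred.
C_per_kg = 238.5 * 1000      # 238.5 F/g -> F/kg
E_target = 20.1 * 3600       # 20.1 Wh/kg -> J/kg

V = (2 * E_target / C_per_kg) ** 0.5
print(f"implied cell voltage: {V:.2f} V")      # ~0.78 V, plausible for an
                                               # aqueous solid-state device

E_check = 0.5 * C_per_kg * V**2 / 3600         # back to Wh/kg
print(f"energy density: {E_check:.1f} Wh/kg")  # ~20.1 Wh/kg, as reported
```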
Researchers from the Johns Hopkins University Applied Physics Laboratory have tested a 32-year-old scientific approach for use in quantum computing, a data processing approach that encodes information in quantum bits for faster processing. Physicist Greg Quiroz leads a team that demonstrated the use of the "simultaneous perturbation stochastic approximation" (SPSA) algorithm to boost the accuracy of quantum computing, JHU APL said Wednesday. Using modern processors, the team tested the method in a numerical simulation.

"Here, we present a method that utilizes feedback from hardware to determine the correct control protocol to perform the correct quantum operation and mitigate noise; thus, improving the computational accuracy of the device," Quiroz said.

James Spall, then a statistician at JHU APL, developed the algorithm in 1987 for use with a wide variety of hardware systems available during that era. Spall is now a member of JHU APL's principal professional staff. Dave Clader, experimental and computational physics group supervisor at JHU APL, said the effort demonstrates how concepts tailored for microelectronics and robotics also hold applications in quantum science.
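For readers curious what the algorithm looks like, below is a minimal sketch of SPSA in Python. Its key property is that it estimates a gradient from just two noisy function evaluations per iteration, regardless of the number of parameters, which is what makes it attractive for hardware-in-the-loop tuning. The quadratic objective and gain schedules here are illustrative stand-ins, not JHU APL's actual setup.

```python
# Minimal SPSA sketch (after Spall, 1987). The objective is a made-up
# stand-in for "error of a quantum operation as a function of control knobs".
import numpy as np

rng = np.random.default_rng(42)

def noisy_objective(theta):
    # Hypothetical noisy cost: distance from some unknown optimal controls.
    target = np.array([0.7, -0.3])
    return np.sum((theta - target) ** 2) + rng.normal(scale=0.01)

theta = np.zeros(2)
for k in range(1, 201):
    a_k = 0.1 / k ** 0.602      # step-size schedule (common SPSA choice)
    c_k = 0.1 / k ** 0.101      # perturbation-size schedule
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 directions
    # Two evaluations per step, no matter how many parameters:
    g_hat = (noisy_objective(theta + c_k * delta)
             - noisy_objective(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat
print("recovered controls:", theta)  # should approach [0.7, -0.3]
```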
Upstream: How Much Speed Do You Need?

In the middle of a global pandemic, with people working and playing on their various devices at home, internet usage is surging, whether because of virtual meetings, streaming entertainment, or mindless scrolling through apps. And it's not just the heavily used downstream channel that is seeing more traffic; upstream usage is rising too.

What Is Upstream?

Upstream is when data flows from the user to the network. When we play an online multiplayer video game or join a web conference call, we're using the upstream channel. According to the NCTA's COVID-19 dashboard, upstream internet traffic through late July was elevated, up 22.1 percent compared with pre-pandemic levels.

Cable networks have ably handled this increased traffic, aided by the fact that popular upstream-dependent applications require relatively modest bandwidth. A web audio conference call requires a modest 0.03 to 0.15 Mbps of bandwidth, whereas a video call may require up to 3 Mbps. Given that nearly all U.S. households passed by cable networks have currently available upstream speeds of at least 20 Mbps, there is sufficient capacity to meet today's demands. Your cable broadband internet connection can handle it today, and we continue to advance cable network technology to ensure we're also ready for tomorrow.
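To put those numbers together, here is a quick back-of-the-envelope calculation of how many concurrent calls a 20 Mbps upstream channel can carry. The per-call figures are the article's upper bounds; treating the full 20 Mbps as available to calls is an assumption made for illustration.

```python
# Simple capacity arithmetic using the article's figures.
UPSTREAM_MBPS = 20.0
VIDEO_CALL_MBPS = 3.0     # upper bound for a video call, per the article
AUDIO_CALL_MBPS = 0.15    # upper bound for a web audio call

print("max concurrent video calls:", int(UPSTREAM_MBPS // VIDEO_CALL_MBPS))  # 6
print("max concurrent audio calls:", int(UPSTREAM_MBPS // AUDIO_CALL_MBPS))  # 133
```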