Humble leaders produce results and develop successful teams because they inspire close teamwork and rapid learning. They understand their own weaknesses, recognize the strengths of others, and focus beyond themselves. They share credit with their teams, which increases collaboration and loyalty. Companies are beginning to actively screen for humble leaders because they drive success. But if humility is a character trait, is it valued equally in men and women?

Humility is a quality found more often in women leaders than in their male counterparts. Women build collaboration by communicating their vision and empowering others to contribute. Part of the difference is that women and men collaborate differently: women focus on helping others accomplish the team's goals, while men are task oriented, concerned with playing their own role well. Women devote more time and effort to team achievement, and may end up shouldering a more significant portion of the work, while men focus on their deliverables and the importance of their contribution. The way women and men message their achievements and view their worth reflects this difference: women see achievement as team success, while men tend to look at their personal deliverables.

The term "servant leader" refers to leaders with similar characteristics: developing their team, listening actively, and showing they care. This philosophy provides a recipe leaders can use to create collaboration and teamwork; it is a process for building humility. We need inclusive leadership now more than ever to handle disruptive change, the rapid rate of technological advancement, and the need for innovation.

If humble leaders produce results, why are so many famous leaders arrogant? Unfortunately, we base our view of good leadership on perception rather than results. We admire leaders who take charge and provide answers. We see them as winners.
Flamboyant leaders with big egos are revered for being bold – they make the headlines. But arrogance and overconfidence are often confused with leadership skill. These leaders can ruin companies: they take on too much responsibility, fail to ask for advice, and have a false sense of their own ability, all of which can lead to riskier decisions that put their companies in jeopardy. Men are more likely to overestimate their abilities; studies of the Dunning–Kruger effect show that, on average, men rate their performance as about 30 percent better than it actually is.

Who makes better decisions in stressful situations? Studies have found that under pressure, men are more apt to take risks: when their heart rates and cortisol levels are high, they become focused on rewards. In similar situations women react differently; they weigh contingencies and are more interested in smaller, guaranteed rewards. Studies have also found that as stressful situations drew closer, women's decision making improved while men's got worse. This reaction under stress is one reason women are often brought into organizations to lead in times of crisis.

I was a humble leader. I shared credit, inspired collaboration, delivered results, and then waited to be recognized. But I used the pronoun "we" too often when I should have said "me" and made my achievements known. You see, I did not recognize the importance of taking credit for my accomplishments and contributions. Leaders, even humble leaders, need to be vocal, claim credit, and ask for what they want for themselves and their teams. I learned too late that if you don't share your story, someone else will take the credit.

Women need to be aware of the effect of unconscious bias and command the respect they deserve. Research shows that when a team solves a problem, both men and women typically assume that a man was the leader. People expect men to make the visible, critical decisions, even when women make all the behind-the-scenes decisions.
It is essential to distinguish between leaders who exude confidence and those who demonstrate competence. For women, it is critical that your managers recognize and appreciate the value you bring to the table. Be proud of your ambition and your accomplishments, and be visible! Use your empathy, compassion, and team-building to be successful. Women's leadership style is different from men's: women are more willing to reinvent the rules and to create and sell their visions, and they have the determination to turn challenges into opportunities while listening to the advice of others. Humble leaders are in demand, and women bring these qualities to the table – capitalize on them to be a better leader.
Source: https://www.cio.com/article/222490/does-leadership-style-matter.html
To understand humans and their personality, scholars follow an entirely different study known as physiognomy: the study of human facial features and expressions. Right from the time of Aristotle, people have used this approach to predict human behavior. Physiognomy is derived from Greek words that, broken down, mean judging the nature of humans. Personality prediction is quite a challenging job, requiring a lot of study and research, and the results derived from such study may or may not be accurate. That is the gap which AI promises to fill!

Knowing your Personality using Artificial Intelligence

We humans can be quite temperamental and complicated to figure out. Take, for example, the teacher-student relationship. Providing the best education is always the aim of educational institutions, and educating and guiding students for a better tomorrow has been an educator's priority since time immemorial. But not all students have similar interests: some students might be good at lab work, some at sports, and others at mathematics. Some students open up to teachers sooner than others. Understanding each student on a personal level is not an easy undertaking for any teacher. Now, what if teachers were given an AI tool that helps them understand a student's personality?

Then there is the example of lunch hours at offices. While some employees stick to the same spot every day to have their food, others experiment, sitting with different people and in different spots inside the cafeteria. These small choices we make say big things about our personality type.

Blinking, Winking, and Moving your Eyes to Know your Personality Trait

People say that eyes are the window to the soul.
Our eyes say a lot about what we feel, think, and perceive in a given situation. On careful examination, we find that our pupils expand and contract in response to every occurrence around us. So, just by looking at a person's eyes, we can get a sense of what they are thinking or feeling. Researchers at the University of South Australia and the University of Stuttgart found a close connection between an individual's eye movements and personality traits. The study identified that people with similar traits tend to move their eyes in a similar fashion. […]
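At its core, the approach the researchers describe – mapping eye-movement patterns to personality traits – is a classification problem. The toy sketch below illustrates the idea with a nearest-centroid classifier; the features, numbers, and trait labels are entirely invented for illustration, and the actual study used far richer features and models.

```python
import math

# Toy centroids: (mean fixation duration in ms, blinks per minute).
# These values are made up purely for illustration.
TRAIT_CENTROIDS = {
    "curious":       (180.0, 12.0),
    "conscientious": (260.0,  8.0),
}

def predict_trait(fixation_ms: float, blinks_per_min: float) -> str:
    """Assign the trait whose centroid is nearest in feature space."""
    point = (fixation_ms, blinks_per_min)
    return min(
        TRAIT_CENTROIDS,
        key=lambda trait: math.dist(point, TRAIT_CENTROIDS[trait]),
    )
```

A real system would learn the centroids (or a more expressive model) from labeled eye-tracking data rather than hard-coding them.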
Source: https://swisscognitive.ch/2019/02/14/artificial-intelligence-and-personality-tests/
Big and small, municipalities are under siege from cybercriminals. It feels like at least once a week there is a headline about the latest city government breach. You would have thought the Atlanta breach would be a wake-up call for all cities, but the evidence indicates there is still a long way to go. We live in a world where the question is no longer "if" a breach will occur, but "when".

Not surprisingly, one of the key entry points for many attacks is phishing – the fraudulent practice of sending emails purporting to be from reputable companies in order to induce individuals to reveal personal information, such as passwords and credit card numbers. Some reports claim phishing is the entry point for over 90% of breaches. But why is phishing so successful? To start with, it's the easiest method for an enterprising phisherman to execute targeted attacks. And with the plethora of information most people make available about themselves on social media, it's not hard to collect enough information to sound legitimate. Think I'm joking? Search YouTube for "Dave the Psychic" and be amazed.

Combine the above with multitasking government workers who check their email at all times, day and night, from multiple devices. Getting them to click a bad link, open a seemingly benign attachment, or provide a nugget of personal information is child's play when there is always "just one more" email to read. Once "hooked", the unsuspecting victim can be exploited to download ransomware or transfer funds via business email compromise (BEC).

In the face of unrelenting attacks and overwhelmed security teams, according to a recent article in the Wall Street Journal, cities like Houston, Fort Worth, and many others are purchasing millions of dollars of cybersecurity insurance policies, with annual premiums up to $500,000. What's more, the scale of these attacks is unprecedented: the mayor of Atlanta has estimated that her city faced more than $20 million in costs following its attack.
Why is this so prevalent now in city governments?

- "Easy" targets. Compare the cybersecurity budget for a typical city government with that of a reasonably sized financial services institution and it's no wonder that city governments are targets. IT teams have always lamented the lack of people, time, and money, but being as lean and mean as a junkyard dog doesn't work against today's cybercriminals: too much to do, not enough resources, no way to stay ahead of cybercriminal activity. This is not a finger-pointing exercise; it's just reality. This exposure to threats makes municipalities enticing targets.

- Media attention is valuable. If an attacker shuts down servers in, say, Atlanta, GA, you've got thousands of civilians without services, public welfare at risk, and a horde of angry media on the city hall steps. Regardless of whether the criminal group is ever identified, the city has a public relations nightmare that must be dealt with quickly. Whereas a private corporation may be able to ride out the storm, city governments need to get services up and running fast. Not surprisingly, ransomware (again, typically activated via a phishing email) is a common attack: encrypt thousands of endpoints and servers, and cities will readily pay the ransom.

- Migration to Office 365. Microsoft Office 365 moves email and other critical applications to the cloud for a defined monthly fee with no three-year upgrade cycles – a CFO's financial dream. Municipalities want to take advantage of both the fiscal prudence their constituents love and the improved efficiencies their IT teams need. Office 365 provides "free" email security, but it falls into the "good enough" category for most organizations. Indeed, industry analysts state that 35 percent of Office 365 users are looking to augment the built-in email security, so something is amiss. Gateway email security is vital, but it's only one part of the equation, and Office 365's email security is no different.
- Tasty clickables. Let's face it, there is no shortage of "tasty clickables". Whether it's the latest smiling-cat video or a past-due invoice from a vendor, things to click, open, view, and listen to are coming at us fast and furious. And with an increasingly mobile workforce, it's becoming harder to differentiate work from personal as everything melds together on our phones and tablets. Our fingers and thumbs are itching, nay twitching, to click on stuff. But some of these things aren't good: phishing URLs are published at a rate of up to 1.5 million a month, just to fool us into thinking an email really originates from our payroll provider, bank, Facebook page, insurance claim form, and so on. With so much click-bait available, how is this ever-more-distracted workforce to know good from bad?

- IT to the rescue (?). For many organizations, the challenge of phishing is "solved" by having users forward suspicious email to the internal security team. And why not? These are trained professionals whose entire raison d'être is to protect the organization from everything – if only they had the time. Or the experience. Or the proper tools. Or the money. Suffice to say, there is a reason many users choose NOT to send suspicious email to their security team.

- Not enough information security pros. As larger companies compete for top IT talent, municipalities are under tremendous pressure to hire and retain expert staff. In the aforementioned Wall Street Journal article, one insurance executive who is helping write new municipal cybersecurity insurance policies stated: "There aren't enough of these men and women around for the Fortune 500, much less for all the towns and cities and states that need these talents."

So now what? Whether you work in a city service department or are the CISO of New York City, there are things you can do to improve your security readiness for today's advanced email-borne threats.

1) Don't assume that your email security gateway is all you need.
The fundamental technology for these gateways is vital, but decades old. While they repel many threats and spam invasions, they are challenged to block targeted, socially engineered attacks like spear phishing. And that goes double for anyone assuming that Microsoft Office 365's built-in security is good enough.

2) Don't assume that your IT staff and employees can fend it off on their own.

Your IT staff does a lot of things. While they may know a lot about email threats, they are usually not email security experts, nor do they have the time to review all the suspect emails your employees receive. And no matter how much you train your government workers about the dangers of email threats, it isn't enough (see the section on tasty clickables above).

3) Consider that these new threats require a new approach.

This means not only a modern email security gateway that filters email pre-delivery, before it reaches users' inboxes, but also a new layer of security that protects users post-delivery, once email is in the inbox. And, lest we forget, the all-important email incident response for when malicious email is detected in the inbox. There are now solutions that combine machine learning with expert human analysis to help stop, block, and remediate advanced phishing attacks, taking the burden off your employees and IT department. You can consider it a bipartisan vote for a more secure email future.
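Layered email defenses like those described above typically start with cheap heuristics before any machine learning or human review. As a purely illustrative sketch – the patterns, weights, and threshold here are invented, not any vendor's actual detection logic – a first-pass scorer might look like this:

```python
import re

# Illustrative heuristics only. Real products combine many more signals:
# sender reputation, attachment analysis, ML models, and analyst review.
SUSPICIOUS_PATTERNS = [
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),   # raw IP address instead of a hostname
    (r"(verify|suspend|urgent|invoice)", 1),  # common pressure/lure keywords
    (r"https?://[^\s]*@", 2),                 # '@' in a URL can hide the real host
]

def phishing_score(email_text: str) -> int:
    """Sum the weights of every suspicious pattern found in the message."""
    score = 0
    for pattern, weight in SUSPICIOUS_PATTERNS:
        if re.search(pattern, email_text, re.IGNORECASE):
            score += weight
    return score

def is_suspicious(email_text: str, threshold: int = 3) -> bool:
    """Flag the message for closer inspection once the score crosses a threshold."""
    return phishing_score(email_text) >= threshold
```

A scorer like this only flags candidates for review pre-delivery; the article's point stands that it must be paired with post-delivery detection and incident response.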
Source: https://informationsecuritybuzz.com/articles/cities-under-siege-is-your-city-next/
The long-lived and usually reliable mouse could soon be put out to pasture, as Microsoft unveiled this week a new hand-gesture sensor that could allow users to point with their fingers rather than a cursor. The new Digits prototype is part of an effort to create a mobile device that could transform interaction with a computer interface. This wrist-worn sensor would allow the wearer to control a range of equipment via hand gestures, taking the interface beyond the computer screen.

Microsoft has been looking at motion control, already adapted in its Kinect for the Xbox 360 game console, as an alternative to pressing buttons on a device. Digits, which was developed at Microsoft's computer science laboratory at the University of Cambridge with help from researchers at Newcastle University and the University of Crete, could open the door for virtual controllers.

"Digits represents a step forward as a blended reality trend," said James Canton, Ph.D., of the Institute for Global Futures. "This is the fusion of virtual and physical things and the ability to interact between them." Microsoft did not respond to our request for further details.

Motion control as part of an interface device received a significant boost with the 2006 release of the Nintendo Wii video game console. In 2010, Microsoft – as well as rival Sony – followed up with its own motion control system. While Kinect is primarily a video game controller, Digits could take motion control technology in a new direction.

"Microsoft's Digits is an interesting example of how technology tends to become generationally smaller and more sophisticated as time passes," said Charles King, principal analyst for Pund-IT.
"In many ways, Digits is similar to what Nintendo accomplished with the Wii controller – offering a simple, intuitive device for interfacing with various kinds of content." But the sort of content Digits will interface with, and the applications the device will inspire, are yet to be seen.

"It's more likely that Digits will replace devices like TV remote controls than computer mice, mainly because mouse functions will be superseded as touchscreen technology becomes as common in PCs and laptops as it is on tablets and smartphones," King told TechNewsWorld. "That said, technologies like Digits could find a home in a host of personal computing devices."

Content Will Be Crucial

As with many small leaps forward, Digits could be – at least for now – a solution in search of a problem. That could change, and likely will, as developers find uses for motion control and motion-control-supported interfaces. There are still challenges to overcome, however. "This comes back to the 3D problem," said Canton. "It is great if you are going to the movie and watching it. But the challenge is that we don't have 3D on many of our devices."

Digits could change the way people interact with devices such as keyboards. "That interaction today is actually fairly primitive," Canton told TechNewsWorld. "The next step in this evolution is telepresence for a more immersive experience," Canton added. "This is the future of communications, the future of entertainment, and it is the next generation of collaboration. But you still need to have content that is there to interact with, and the Digits interface is only as good as the content that interacts with it."

This chicken-and-egg dilemma will no doubt be resolved as content arrives and users embrace the technology. Already there is other motion control technology on the horizon; Digits is just one of many concepts suggesting that gestures are just a finger snap away.
"At the Intel Developer Forum a few weeks ago, the company introduced its Perceptual Computing SDK, which is designed to take advantage of new features supported in next-generation Intel Core CPUs, including stereoscopic webcams that enable gesture controls via 3D webcams," King explained. "The initial functions – playing a version of Whack-a-Mole with your bare hands, for example – were extremely basic," he continued, "but Intel noted that functionality would become more sophisticated and powerful over time. In addition, the new Core chips can use the webcams to enable true 3D."

Microsoft's Digits is also likely just one step toward greater immersion in the digital experience, and it isn't hard to imagine how this technology could itself interface – or be the interface – for Microsoft's recently announced patent filing for "holograph" technology. "This could very well be integrated with that," Canton noted. "The really interesting evolution is going to be when it starts to morph and change – it becomes more dynamic and pervasive. It will allow us to enter virtual worlds. These devices get us closer to the possibilities."

So maybe not tomorrow, but soon, users could be reading articles like this and, instead of using the mouse button, scrolling with gestures. "We'll look back in a few years," Canton suggested, "and say, 'Wasn't that primitive?'"
Source: https://www.ecommercetimes.com/story/microsofts-digits-could-turn-us-all-into-hand-dancers-76345.html
FTPS vs SFTP

While FTPS adds a security layer to the FTP protocol, SFTP is a different protocol entirely, based on SSH (Secure Shell). Both are used to transfer data files. Let's look at both concepts in detail: what each is, its benefits, and a comparison of their features and capabilities.

What is FTPS?

FTPS, also known as FTP-SSL, is the secured form of FTP: basically FTP with security added to the data transfer. The cryptographic protocols TLS (Transport Layer Security) and SSL (Secure Sockets Layer) encrypt the data to protect your information, including your username and password. FTPS is to FTP much as HTTPS is to HTTP: an added layer of security that leaves the original protocol relatively unchanged.

How does FTPS work?

FTPS uses two connections: (1) a control channel and (2) a data channel. Either both channels are encrypted, or only the data channel is. FTPS authenticates your connection using a username and password, a trusted certificate, or both. The FTPS client first checks whether the server's certificate is trusted: either the certificate was signed by a known certificate authority (CA), or the certificate was self-signed by your partner and you have a copy of the public certificate in your trusted key store. If your certificate isn't signed by a third-party CA, your partner may allow you to self-sign it. Username authentication can be used with any combination of certificate and/or password authentication.

Pros of FTPS:
- Widely known, commonly understood, and easy to implement.
- Communication is readable by humans.
- Provides services for server-to-server file transfer based on SSL/TLS.
- SSL/TLS has good authentication mechanisms (X.509 certificate features).
- FTP and SSL/TLS support is built into many internet communications frameworks.
- The connection is encrypted.
- Easily supported on mobile devices.
- Works in operating systems that have FTP support but no SSH/SFTP client.
- Built-in support in the .NET Framework.

Cons of FTPS:
- Not all FTP servers support SSL/TLS.
- No standard way to get and change file or directory attributes.
- Can't perform file system operations.
- Uses multiple ports, making firewall configuration more complicated.

Related – TFTP vs FTP

What is SFTP?

SFTP, also known as SSH FTP, encrypts both commands and data during transmission: all data and credentials are encrypted as they pass over the internet. SSH is a protocol that allows you to connect remotely to other systems and execute commands from the command line, and SFTP was created to transfer files through that secure channel.

How SFTP works

Like FTPS, SFTP provides two methods of authenticating connections: username/password, and SSH public/private key pairs. When a request reaches the SFTP server, the client software transmits its public key to the server for authentication. If the public key matches the client's private key, and the username and password are correct, authentication succeeds.

Key points of SSH key authentication:
- The system generates a public–private key pair.
- You link your public key to your account on the SFTP server.
- When a connection is established with the server, the client produces a signature with the private key that the server can verify against the already-stored public key.
- If the keys match up, the connection is established.

Pros of SFTP:
- The connection is always secured.
- SFTP requires only one connection.
- Uses only one port, so firewall rules are simple to configure.
- Supported by Linux and Unix operating systems by default.
- Can perform file system operations: file locking, permission and attribute manipulation, and symbolic link creation.
Cons of SFTP:
- SSH public and private keys are harder to manage and validate.
- Not human readable, because the protocol is binary.
- The protocol does not offer recursive directory removal or server-to-server copy operations.
- No integrated SSH/SFTP support in VCL or the .NET Framework.

Comparison between FTPS and SFTP:
- FTPS establishes its connection via SSL/TLS, while SFTP establishes its connection via an SSH channel.
- For both protocols, the server authenticates with a public key and the client authenticates with a username and password.
- The FTPS control channel uses port 990/TCP and the FTPS data channel uses 989/TCP (in implicit mode). The SFTP port number is 22.
- Both use asymmetric, symmetric, and key-exchange algorithms.
- FTPS authenticates via X.509 certificates; SFTP authenticates via SSH keys.
- FTPS allows you to create custom commands; SFTP has better control of file permissions, ownership, and properties.
- FTPS allows the use of trusted X.509 certificates, whereas an SFTP server requires only a single port to be open on the firewall.
- FTPS supports EBCDIC transfers; SFTP allows the creation of symbolic links.
- SFTP can be slower than FTPS because more steps are involved in securing the transfer.
- FTPS uses separate connections for commands and file data, which is not the case for SFTP.
- Encrypted command and file data connections are supported by both.
- Key-based authentication is supported in SFTP; third-party-signed certificates are used in FTPS.
- Both support host identity verification.

Comparison table: FTPS vs SFTP

| | FTPS | SFTP |
|---|---|---|
| Connection security | Via SSL/TLS | Via SSH channel |
| Port | 990/TCP for the control channel, 989/TCP for the data channel (implicit mode) | 22 |
| Security | Server authentication via a public key infrastructure; client authentication via username/password or client certificate | Server authentication via the server's securely distributed public key; client authentication via username/password or public key |
| Connections required | At least 2 ports: one for commands, plus a separate data port for each directory listing or file transfer | Only 1 port (commands and data use the same connection) |
| Algorithms | Asymmetric, symmetric, and key exchange | Asymmetric, symmetric, and key exchange |
| Authentication | X.509 certificates | SSH keys |
| Server requirements | Requires a server X.509 certificate and private key | Most SSH server installations include SFTP support (or OpenSSH can be used) |
| Compatibility | Its many functions can lead to client/server interoperability issues | Compatible with most modern devices and systems (Linux and Unix), but not with legacy environments such as VCL and the .NET Framework |
| Configuration | More complex configuration can cause firewall/transmission issues | Streamlined connections mean fewer firewall issues |
| Performance | Offers high secure-transfer speeds | A robust and flexible protocol, though often slower |
| File/directory manipulation | Commands are limited and not standardized, requiring additional administrative configuration | Many standardized commands for directory manipulation, permission locking, etc. |
| Server-to-server communication | Supported | Not well supported |
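The FTPS flow above can be made concrete with Python's standard-library ftplib. Note that ftplib implements explicit FTPS on the default FTP port 21 (the client upgrades the control channel with AUTH TLS, then issues PROT P to encrypt the data channel); the 990/989 ports in the table correspond to the older implicit mode. The host, credentials, and paths below are placeholders. SFTP has no stdlib client in Python; the third-party paramiko library is a common choice there.

```python
import ftplib

def download_over_ftps(host, user, password, remote_path, local_path):
    """Fetch one file over explicit FTPS (FTP upgraded with TLS)."""
    ftps = ftplib.FTP_TLS(host)     # control channel, port 21 by default
    ftps.login(user, password)      # AUTH TLS is negotiated before credentials are sent
    ftps.prot_p()                   # 'PROT P': encrypt the data channel too
    with open(local_path, "wb") as f:
        # A separate data connection is opened for the transfer --
        # the "two channels" the comparison above refers to.
        ftps.retrbinary(f"RETR {remote_path}", f.write)
    ftps.quit()
```

Because `FTP_TLS` subclasses `FTP`, the rest of the FTP command set (directory listings, uploads) works the same way once the channels are secured.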
FTPS was created as an enhanced version of FTP with a security framework added, while SFTP is an extension of SSH that adds easy file transfer capabilities to an already-secure protocol. FTPS uses two channels, one for control communications and one for data transfer, while SFTP uses only one. FTPS transfers data in a human-readable format, while SFTP transfers data in binary form. And while FTPS is more widely known globally, SFTP has the advantage of a simpler, single-connection design that is easier to secure.
Source: https://ipwithease.com/ftps-vs-sftp-know-the-difference/
As part of your overall personal data security, you should make sure you are using a secure Wi-Fi connection, both for the privacy of your transmissions and to control who (or what devices) can use your Wi-Fi connection. An unprotected wireless network can expose you to risk: hackers can use the connection to access any data you send over it (basically anything you access or input on the internet). It can even give them access to the files on computers or mobile devices connected to the network.

Usually, the security on Wi-Fi equipment, like your wireless router, is disabled when it's first taken out of the box. These are the default settings and should be changed as you set up your home wireless connection.

How to Secure Your Network
- Log in to the router settings following the directions provided on the router or in the packaging.
- Change the username and password that control the configuration settings.
- Change the network name (SSID) of your connection from the default name (if possible).
- Enable the WPA2-PSK with AES encryption protocol and make sure you enter a passphrase (usually at least 10 characters).

Additional Security Measures
- Turn off your wireless router when you're not at home or not using it.
- Change your settings so your wireless network's name (SSID) is private and not broadcast for anyone to see.
- Employ MAC (media access control) address filtering, which lets you set which devices can connect to your Wi-Fi connection. Each device with wireless capability has a MAC address, so if MAC address filtering is enabled and another computer tries to access your Wi-Fi connection, it won't be able to connect even if it has the SSID and password.
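MAC address filtering as described above amounts to an allow-list check by the router. A minimal sketch of that logic (the addresses here are invented examples):

```python
# Allow-list of MAC addresses permitted to join the Wi-Fi network.
# These addresses are made-up examples.
ALLOWED_MACS = {
    "a4:5e:60:d1:22:19",  # laptop
    "f0:18:98:3b:7c:01",  # phone
}

def normalize_mac(mac: str) -> str:
    """Lower-case and use ':' separators so different formats compare equal."""
    return mac.strip().lower().replace("-", ":")

def may_connect(mac: str) -> bool:
    """True only if the device's MAC is on the allow-list --
    knowing the SSID and password is not enough."""
    return normalize_mac(mac) in ALLOWED_MACS
```

Worth noting: MAC filtering is a deterrent rather than strong security on its own, since MAC addresses can be spoofed, which is why it belongs alongside WPA2 encryption rather than in place of it.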
And you can also make sure that your devices and data are protected by taking these steps:
- Utilize comprehensive security, like McAfee LiveSafe™ service, on all your computers, smartphones, and tablets.
- Make sure your mobile devices are not set to automatically find and connect to Wi-Fi networks.
- Use mobile security like McAfee Mobile Security (included with McAfee LiveSafe, or available separately if you already have computer protection) that will warn you and automatically disconnect you from a Wi-Fi network if it sees that the connection is being compromised.

Using wireless networks is convenient in our always-on culture, but it's critical that we remember to protect ourselves and our data. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Source: https://www.mcafee.com/blogs/consumer/family-safety/secure-home-wifi/
A Business Continuity Plan (BCP) is a document that outlines how a business will continue to operate if any disruption arises that impacts the services it provides. Such a plan goes beyond a basic disaster recovery or contingency plan because it covers every single aspect of the business. Many business leaders don't even know what business continuity plans are, which is unfortunate, considering how vital they are. In this guide, we'll explore what a business continuity plan is in more depth: its purpose, the areas of business it covers, and what a typical business continuity plan entails.

Business continuity planning (BCP) is the process of developing a framework for preventing and recovering from potential risks to a corporation, such as natural catastrophes or cyber-attacks. In the event of a crisis, the plan ensures that workers and assets are protected and that operations can resume rapidly. BCPs should be tested to guarantee that any flaws that may be found can be sufficiently fixed.

A checklist of supplies and equipment, data backups, and backup site locations is usually included in plans. Plans can also include contact information for emergency responders, essential individuals, and backup site suppliers, as well as plan administrators. Specific ways of maintaining business operations during both short- and long-term disruptions may be included. A disaster recovery plan, which includes techniques for dealing with IT disruptions to networks, servers, personal computers, and mobile devices, is an important part of a business continuity strategy.
The strategy should include how to reestablish office productivity and enterprise software in order to meet critical company needs. The plan should include manual workarounds so that operations can continue until computer systems can be restored.

A well-designed business continuity strategy for critical apps and processes has three major components. First and foremost, a business continuity plan must be robust: critical company functions are maintained in the event of a calamity. The business continuity team conducts a risk assessment of each function to identify weaknesses and vulnerabilities and then implements countermeasures. This helps to keep risk management policies in place. Second, stakeholders rank functions and determine which should be restored first. The sooner functions can return to a working state after a disaster, the less likely the organization is to experience long-term damage. IT stakeholders must build an actionable disaster recovery plan and set realistic recovery time goals. After mission-critical functions have been restored, team members work their way down the priority list, enlisting third-party assistance as needed to implement recovery procedures. Third, companies must have a contingency plan with branching paths that outlines the chain of command, stakeholder duties, and any technical skills required for emergency management in pre-determined disaster scenarios. Finally, an optimized business continuity plan contains a recovery time objective (RTO) to determine how quickly business activities must be restored, as well as a business impact analysis (BIA) to measure the success of recovery efforts. A disaster report, in turn, demonstrates to stakeholders how the disaster recovery planning process might be improved in the future.
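The prioritization step above amounts to sorting business functions by how quickly they must come back. A minimal sketch, with made-up function names and recovery time objectives (RTOs), not a prescription for any real plan:

```python
# Sketch: rank business functions for recovery by their RTO
# (recovery time objective, in hours). Names and values are
# hypothetical examples.
functions = [
    {"name": "payroll",         "rto_hours": 72},
    {"name": "customer_portal", "rto_hours": 4},
    {"name": "order_intake",    "rto_hours": 1},
    {"name": "reporting",       "rto_hours": 168},
]

def recovery_order(funcs):
    """Mission-critical functions (smallest RTO) are restored first."""
    return [f["name"] for f in sorted(funcs, key=lambda f: f["rto_hours"])]

print(recovery_order(functions))
# ['order_intake', 'customer_portal', 'payroll', 'reporting']
```

After a disaster, the team would walk this list from the top, exactly as the text describes working down the priority list.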
An organization can withstand crises, assess damage rapidly, and recover as swiftly as possible if these three pieces are in place. A business continuity plan must also be understood as a living document that is updated regularly as the organization adopts new technology and processes. Organizations create new solutions and infrastructures as they scale up; these must be factored into the plan, or disaster recovery issues may be exacerbated by unforeseen bottlenecks.

It's critical to have a business continuity plan in place to identify and solve business process, application, and IT infrastructure resiliency issues. A failure of infrastructure can easily cost a corporation hundreds of thousands of dollars each hour, with some companies losing millions. To survive and thrive in the face of these various threats, businesses have understood that they must do more than develop a sound infrastructure that allows expansion and protects data. Companies are increasingly building comprehensive business continuity plans that can keep a firm up and running, secure data, protect the brand, retain consumers, and, in the long run, save money on total operating costs. With a business continuity strategy in place, you can reduce downtime and improve business continuity, IT disaster recovery, corporate crisis management capabilities, and regulatory compliance over time.

However, because systems are increasingly interlinked and deployed across hybrid IT environments, creating potential weak points, constructing a complete business continuity plan has grown more complex. Business continuity planning, along with disaster recovery, overall resiliency and prevention, regulatory compliance, and security, gets more complicated as more vital systems are linked together to meet rising expectations. When one link in this fragile chain breaks or is attacked by an outside threat, the ramifications can be felt throughout the company.
If a company fails to remain resilient while adapting and responding to threats and opportunities, it risks losing revenue and customer trust. Creating a good BCP involves multiple steps. Among other things, companies may find it useful to create a checklist that includes crucial facts such as emergency contact information, a list of resources the continuity team may require, the location of backup data and other required information, and other relevant employees. The company should test both the continuity team and the BCP itself. The plan should be tested multiple times to guarantee that it can be applied in a variety of risk circumstances. This will help identify any flaws in the plan, which can then be addressed and corrected.
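The checklist idea above can be made concrete with a simple completeness check; the required items below are examples drawn from the text, not an authoritative list:

```python
# Sketch: verify a draft BCP checklist covers the crucial items
# mentioned above. Item names are illustrative placeholders.
REQUIRED_ITEMS = {
    "emergency_contacts",
    "continuity_team_resources",
    "backup_data_location",
}

def missing_items(checklist: set) -> list:
    """Return the required items the draft checklist does not yet cover."""
    return sorted(REQUIRED_ITEMS - checklist)

draft = {"emergency_contacts", "backup_data_location"}
print(missing_items(draft))  # ['continuity_team_resources']
```

Running a check like this before each BCP test cycle would surface gaps in the document itself, separate from gaps in the team's execution.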
Last updated on May 12, 2022.

DNS poisoning and DNS cache poisoning use security gaps in the Domain Name System (DNS) protocol to redirect internet traffic to malicious websites. DNS poisoning attacks exploit vulnerabilities built into DNS from the beginning. Without getting into the details of the DNS protocol, suffice it to say that DNS was built with scalability, not security, in mind. This post will cover how DNS poisoning and its cousin, DNS cache poisoning, work. It will then examine ways to prevent both. And it will touch on how the BlueCat platform makes it easy to manage your DNS to keep poisoning at bay.

How DNS poisoning works

A user types example.com into a web browser, and the client device first tries to find the IP address locally on the device. When your browser or an application then goes out to the internet, it asks a local DNS server to find the address for a name (such as bluecatnetworks.com). The local DNS server queries the root servers, which refer it to the servers for that top-level domain, and then asks that domain's authoritative name server for the address.

DNS poisoning happens when a malicious actor intervenes in that process and supplies the wrong answer. These types of man-in-the-middle attacks are often called DNS spoofing attacks. The malicious actor is, in essence, tricking the DNS server into thinking that it has found the authoritative name server when, in fact, it hasn't. Once it has tricked the browser or application into thinking that it received the right answer to its query, the malicious actor can divert traffic and feed whatever fake website it wants back to the host device. These are usually pages that look like the desired website. In reality, they are phishing websites, attempting to collect valuable information like passwords or account numbers.

How DNS cache poisoning works

Standard-issue DNS poisoning can also turn into DNS cache poisoning.
When that happens, the attack becomes even more difficult to deal with. Most DNS resolvers are caching resolvers. Caching both reduces the load on remote DNS servers and returns answers more quickly. Caching resolvers only make requests to remote servers the first time a domain name is asked for, and again when the cache entry expires. In the meantime, a resolver might serve thousands of requests with the cached value.

So, once a malicious actor intercepts and "answers" a DNS query, the DNS resolver stores that answer in a cache for future use. In this case, caching makes the attack worse by continuing to supply the wrong answer. Even if your filters and firewalls identify the IP address as a malicious site and block it, browsers and applications will still try to go there as long as the cache holds the wrong answer. How long those DNS entries remain in your cache depends on the time to live (TTL). This is a DNS server setting that tells the cache how long to store DNS records before refreshing the search for a legitimate server.

How to prevent DNS poisoning

Thankfully, there is an antidote: the Domain Name System Security Extensions (DNSSEC). This protocol was developed specifically to counter DNS poisoning. Implementation of DNSSEC is a recognized best practice used by most large enterprises. ICANN recommends DNSSEC for everyone, and it is also part of many industry standards such as NIST 800-53. (Note that DNSSEC is different from DNS security.) DNSSEC uses public-key cryptography to verify that an authoritative nameserver is providing the correct information back to the requesting device. In reality, it's a lot more complicated than that, so BlueCat has put together this handy resource on how DNSSEC works.

DNSSEC can be simple with the right solution

Unfortunately, DNSSEC implementation isn't as widespread as it should be. The decentralized configuration of default DNS solutions such as Microsoft DNS or BIND is primarily to blame.
For both of these, the configuration of DNSSEC settings is a manual and complex server-by-server process. And any update to those settings or DNS architectures requires another round of configurations. The advantage of a centralized, automated DNS solution like BlueCat is that protecting your network against DNS poisoning through DNSSEC is simple. Setup is straightforward, with configurations and updates happening automatically on the back end across the network.

DNS response data also helps

Another way to prevent poisoning is to pay attention to DNS responses. Even without the protocol-level cryptography of DNSSEC, you can simply compare the DNS request and the DNS response data to see if they match. BlueCat's platform makes it easy to do that with comprehensive DNS logging.

How to prevent DNS cache poisoning

DNSSEC also lowers the threat of DNS cache poisoning attacks against your domain name server. But there are still more things you can do to protect your network further. Adjusting the TTL of your DNS caching servers will certainly help with any DNS cache poisoning issues. Lower TTLs naturally decrease the number of DNS queries that could be led to the wrong address. How low that TTL setting needs to go is ultimately up to your network team; there's a balance between security and performance for TTL values that will probably need tuning over time.

Since it sits as the first hop in any network query, BlueCat manages the caching function of every DNS server on your network. With BlueCat DNS infrastructure in place, you can automatically adjust the TTL on every query to help prevent DNS poisoning attacks. BlueCat experts tend to set the TTL somewhere between five and 30 seconds, but that's something you can adjust if performance becomes an issue. The challenges from spoofed DNS are significant, but closing the technical loophole that allows them to happen is probably easier than you think.
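The TTL behaviour described above can be sketched with a toy caching resolver. This is a simplified illustration (the lookup function and addresses are fake), not any vendor's implementation:

```python
import time

# Toy TTL-bound DNS cache: an answer is reused until its time-to-live
# expires; only then does the resolver perform a fresh lookup. A
# poisoned entry would likewise be served until its TTL runs out,
# which is why lower TTLs limit the damage window.
class CachingResolver:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.cache = {}  # name -> (address, expiry timestamp)

    def resolve(self, name, lookup):
        entry = self.cache.get(name)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]                      # fresh: serve from cache
        address = lookup(name)                   # stale or absent: re-query
        self.cache[name] = (address, time.monotonic() + self.ttl)
        return address

lookups = []
def fake_lookup(name):
    lookups.append(name)
    return "203.0.113.7"  # reserved documentation address, illustrative only

resolver = CachingResolver(ttl_seconds=30)
resolver.resolve("example.com", fake_lookup)
resolver.resolve("example.com", fake_lookup)  # second call hits the cache
print(len(lookups))  # 1
```

With a five-second TTL instead of thirty, the second call made one second later would still be cached, but any wrong answer would be purged far sooner.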
Implementing a comprehensive DNS, DHCP, and IP address management (together known as DDI) strategy is the best way to deal with these kinds of security vulnerabilities. At the same time, you can improve the performance and reliability of your core infrastructure.
Undoubtedly, the IoT has become deeply embedded in our lives and in everything we do. From smart speakers to smart homes, wearables to self-driving cars, we directly or indirectly depend on IoT devices to reduce our workload. For instance, take the banking sector: users employ different devices for executing their transactions, and the science behind it is IoT. What actually happens? Information about the users is obtained from various devices, and taking this data into consideration, the banks offer a variety of services to help users make wise and effective decisions. This is a simple glimpse of internet-connected devices. In the scope of this blog, you will observe how IoT is used in financial and banking services and produces more value for users.

IoT builds integrated communication between interconnected devices and platforms through the internet and brings the physical and virtual worlds together. The transmission of any kind of data becomes a daily routine with the advent of remote and digital IoT systems. These devices can send signals to the server and to other devices to gather large amounts of data, and these datasets can provide key insights into a company's various processes. With these advantages, IoT is gaining huge attention in businesses across the world. Different industries, like healthcare, retailing, and financial services, are exploring paths to grasp the potential of IoT in order to generate strong returns from their business undertakings. However, you can learn more about examples of IoT from here.

IoT and Financial Services

"Everywhere where more data can help you make a decision – that is where IoT adds value to financial services," - Varun Mittal, Fintech leader, EY

With the swift advent of digitalization and mobilization in banking and financial services, the industries are searching for chances to use IoT in finance to leverage data and reduce risks.
Also, customers expect easily available and user-friendly technologies; thus, banking services are utilizing the latest technologies, embracing IoT with open arms, and delivering accessible financial resources. Because the industry manages highly sensitive data, it also grapples with fears of compliance or security gaps.

IoT has the potential to transform financial services: it collects data and analyzes it to understand customers' behaviors and preferences. It permits institutions to improve their services and gain better control over their operations and strategies. IoT assists banks in multiple ways to reduce cost, improve risk management, and raise general efficiency. The communication IoT provides between customers and banks is dynamic; banks can gather data and classify the needs of customers to provide them with the best offers. This information helps banks deliver more value-based services. Banks provide many services related to IoT, such as ATM services, electronic banking, internet banking, online banking, mobile banking, tablet banking, etc. (Most related: What is Fintech?)

Role of IoT in Banks

IoT is getting in touch with every industry today and comes up with great innovations that have an impact on business. Banks take many parameters into account, such as the number of assets, frequency of customer visits, employees, etc., to get a complete dataset. Banks require actionable and ingenious insights that can enhance business efficiency and prevent unwanted incidents. IoT can help finance in terms of rising working costs, addressing concerns for safety and security, prevention of crime and burglary, monitoring of energy/power in real time, better customer experience, etc.
Illustration of the internet of things (IoT) in banks

Let's understand these cases more deeply. Banks always have ample scope for innovation; here is a small picture of the role IoT plays in banks:

- Planning and Controlling of Products: Banks can utilize mobile data to launch better and more targeted services. By analyzing past service data, they can also answer questions like what and when to launch products, who the key targets are, and what other services to begin.
- Smart Marketing: Customers demand personalized designs and solutions, so offerings need to be tailored to their changing requirements, present economic conditions, purchasing behavior, and personal needs. This only becomes possible after successfully implementing IoT in banking services or BFSI; it helps keep track of all customer activities and deliver solutions to personal requirements.
- Proactive and Dynamic Services: There might be service faults, modifications in products, an underlying concern with a product, etc., which can be easily handled with IoT in the banking and financial industries. Customers' past data also helps service providers contribute better solutions.

(Most related: Introduction to Personal Finance)

Areas of Benefit from IoT in Finance

1. Optimization with Management Capacity

Banks are constantly managing and expanding existing services for maximum performance efficiency.
- To determine the optimal number of counters at each branch, IoT can be used to keep track of the number of customers per day and hence the average time in a queue.
- This can further help in deciding where to open new branches. The same procedure can also be used to optimize the number and location of ATMs.

2. Autonomous Payment Systems

IoT can let you make payments from everywhere; it creates a finance ecosystem for rapid acceleration of payment processes in the finance industry.
Wherever you are, you can make a payment through the nearest internet-connected device.

3. In Fraud Prevention

IoT and Artificial Intelligence together make interaction with customers possible; they not only enhance efficiency but also improve the methods and strategies for fighting cybercrime. (Now that I mentioned Artificial Intelligence, you can also check out our blog on Banking on Artificial Intelligence) IoT systems can collect user data and interpret the activity. The data can then be transferred to the cloud, where it is compared against users' typical behaviour patterns to confirm that it is consistent. In case suspicious activity is detected, the account is temporarily disabled while the user is instantly alerted. Some popular names that have been executing innovative fraud detection systems are the American bank Citibank and the UK-based Standard Chartered, both of which have invested heavily in AI- and IoT-based cybersecurity solutions.

Advantageous model of the Internet of Things (IoT) in the financial services industry

4. Easy and Transparent Payment Methods

IoT is an integrated system of sensors and software that aids transparent payment methods. Developments in smart payment hardware, like smartwatches, voice-recognition devices, and special RFID sensors in Uber cars, make automatic payments possible even without using mobile phones.

5. Independent Wearables Functioning

Wearable technology has the capacity to transform the way financial service providers allow users to pay bills, speeding up transactions and escalating their quality as well as security. Wearables like VR devices and hi-tech clothes could replace Google- or Apple-based transaction apps and dominate as a preferred method for carrying out transactions. Contactless wallets will allow bank clients to instantly check their account balance or the state of a loan.
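The fraud-prevention flow described above (collect activity, compare it against the user's typical pattern, disable the account and alert on anomalies) can be sketched as follows. The threshold and data are made up for illustration; real systems use far richer features (location, device, merchant, timing):

```python
from statistics import mean, pstdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations above the user's mean."""
    avg = mean(history)
    spread = pstdev(history) or 1.0   # avoid division by zero for flat histories
    return (amount - avg) / spread > threshold

def handle_transaction(account, history, amount):
    # Mirror the flow in the text: on anomaly, temporarily disable and alert.
    if is_suspicious(history, amount):
        account["enabled"] = False
        return "account disabled, user alerted"
    return "approved"

history = [20, 35, 25, 40, 30]        # hypothetical past transaction amounts
account = {"enabled": True}
print(handle_transaction(account, history, 32))    # approved
print(handle_transaction(account, history, 5000))  # account disabled, user alerted
```

The z-score test here is just the simplest stand-in for "pairing activity against traditional behaviour patterns"; production systems would use trained models rather than a fixed cutoff.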
Major Concerns While Using IoT in the Finance Industry

One of our previous blogs has clearly explained the two sides of a single coin when dealing with market research and data science. The two-sided face of IoT likewise presents obstacles for every industry, with some pitfalls that IoT brings to humanity. When you talk about finances, high accuracy and security are in demand. The benefits addressed above also bring some problems related to the security of customers' personal data. We have seen how vigorously the financial field is using internet-equipped devices for better efficiency and performance, but we should also be aware of the weak points of IoT in finance:

1. Privacy and Security: This is the main threat that comes first when handling sensitive personal data at a high rate over IoT. The risk of data being hacked increases when personal information is transmitted through IoT networks. So, the privacy and security of data should be taken into account when dealing with the protection of financial matters.

2. No Mutual Parameters: IoT is a combined system of different devices, and these devices require different maintenance. Unluckily, IoT hasn't set any mutual parameters for maintaining IoT devices: hardware devices are manufactured by different suppliers and come together within a single IoT system, so it is hard to define common parameters for all of them. Even if a single manufacturer were to begin designing all types of equipment, that could create a dangerous economic situation worldwide, so the lack of mutual parameters is a great failing in IoT functions.

3. Manifold Finance Ecosystem: As explained in the point above, the bigger an IoT system, the higher its probability of system failure. As we are talking about the financial IoT system, any kind of failure could lead to huge losses and malfunctioning across the whole IoT system.
IoT is an interconnected network; any breakage can bring down the complete system and cause losses at a high rate. So all the hardware and software must be built to high quality for greater safety and a better, failure-free experience. (Also read: Introduction to Financial Analysis)

"The IoT is removing mundane repetitive tasks or creating things that just weren't possible before, enabling more people to do more rewarding tasks and leaving the machines to do the repetitive jobs." — Grant Notman, Head of Sales and Marketing, Wood & Douglas

The increase in device usage has led to an expansion of the IoT data generated by customers. IoT has acted as a revolutionary step in transforming individuals' lives; everything is connected, with wires or without, and so are finance products. (Recommended blog: Introduction to Investment Banking) To yield important information, IoT data needs to be extracted and analyzed. This will increase the market value of businesses and provide a much better experience to customers.
We've all dived into the world of the Matrix with Thomas Anderson and strolled through the X-Men Danger Room, the fictional training facility that was a virtual reincarnation of the X-Mansion. Little did we reckon that the technology in these sci-fi movies might not be as far-fetched as we'd expected.

We all experience the world via our senses and perception systems, that is, our five basic senses: smell, touch, sight, hearing, and taste. Yet you could say these are only the mainstream sense organs; we possess various other senses that ensure we receive a rich flow of information from our surroundings into our minds. Everything that we've learned about our reality, we've learned through our senses. Basically, our entire experience of reality is merely a merging of sensory data and our brains comprehending that information. So following this logic, our perception of reality will alter according to the information fed to our senses. This means that our senses can be shown a version of reality that isn't actually real but could be perceived as such. And this is basically what we term Virtual Reality.

What is Virtual Reality (VR)?

The definition of virtual reality originates from the definitions of both 'virtual' and 'reality'. 'Virtual' implies near, and reality implies the conditions experienced by human beings. Hence the term 'virtual reality' basically implies 'near-reality'. Virtual Reality (VR) is the employment of computer technology to develop an artificial environment. Virtual Reality situates the user within an experience, contrary to conventional user interfaces: rather than just glimpsing a screen in front of them, users are engaged and allowed to interact with 3D worlds. By replicating senses like vision, touch, smell, and hearing, the computer is molded to serve as a doorway into an artificial world.

How does virtual reality (VR) work?
"The incredible thing about the technology is that you feel like you're actually present in another place with other people. People who try it say it's different from anything they've ever experienced in their lives." - Mark Zuckerberg

Virtual reality technology allows the creation of virtual experiences intended to replicate real-life scenarios, or to develop virtual worlds of non-realistic elements, as in games, providing participation in various situations with risk-free choices. Executing such situations in real life is often too expensive compared with the anticipated ROI, yet with the aid of Virtual Reality (VR) and Augmented Reality (AR) technologies, the cost of employing these situations for varied purposes, such as training employees, trialing products, examining locations, and trialing projects, can be cut down through the personalized use of these technologies. Now that we've mentioned both Virtual and Augmented Reality, you can also take a look at our blog on Extended Reality.

Alongside the AR and VR applications benefitting various business fields, some examples of VR visualization equipment that can be used to experience virtual reality are Google Cardboard, Samsung Gear VR, and Oculus Rift.

Applications of Virtual Reality (VR)

"There are as many applications for VR as you can think of; it's restricted by your imagination." - John Goddard, HTC Vive

From the military to sports, to mental health, to our daily lives, Virtual Reality is seeping its way through every sector. Check out some of the applications of this technology:

Areas where Virtual Reality is being used

1. VR in Military

Both the UK and US militaries have employed virtual reality in their training, as it enables them to take up a wide range of simulations.
Virtual Reality is utilized for all departments of service, from the navy, the army, the air force, and the marines to the coast guard. Virtual Reality can effectively transport a trainee into a variety of scenarios, locations, and environments for the purpose of training. VR is employed for military purposes like simulations of flights, vehicles, and the battlefield, medic training, creating a virtual boot camp, and so on. The technology is an entirely engaging experience, complemented with visuals and sound, that can securely simulate risky training scenarios to prepare and train soldiers while avoiding putting them at risk until they are ready for combat. Alongside this, the technology can also be employed for teaching soldiers skills like interacting with local civilians, or with international correspondents, while on the field.

Yet another VR adoption is for treating Post-Traumatic Stress Disorder (PTSD), which soldiers returning from combat often face, needing assistance to adapt to normal life. This treatment is termed Virtual Reality Exposure Therapy (VRET). One of the main advantages of employing VR has been the curtailing of training costs.

2. VR in Education

VR is also deployed in the education sector for teaching and learning scenarios. It aids students in conversing together within a 3D environment. Students can also be taken on virtual field trips, such as visiting museums, touring the solar system, or traveling back in time to various eras. Virtual reality can prove especially advantageous for students with special needs. Research has found that VR can be a motivating platform to safely train children in social skills, including children with autism disorders.
For instance, the technology company Floreo has built virtual reality scenarios that enable children to learn and practice skills like making eye contact, pointing, and developing social connections.

3. VR in Sports

Virtual Reality has been steadily shifting the sports industry for all its participants. The technology can be employed by coaches and players to train effectively across various sports, letting them view and experience particular scenarios repeatedly and enhance their performance each time. VR is also adopted as a training aid for assessing athletic performance and examining technique. It has also been known to enhance the cognitive capabilities of injured athletes by allowing them to virtually experience gameplay situations.

Likewise, the technology is being adopted to improve the viewer's experience of watching a sporting event. Various broadcasters have begun streaming live games through VR and are planning to sell virtual tickets for live sports events, which will allow people anywhere in the world to be part of any sports event. This also enables people who may not be able to afford to attend live sports games to feel included, as they enjoy a similar experience from their own locations at no cost or for a reduced expense. Speaking of sports, you can also take a look at our blog on Big Data in the Sports Industry.

4. VR in Mental Health

VR technology, as mentioned before, is being adopted to treat PTSD. By employing Virtual Reality Exposure Therapy (VRET), a person is placed in a recreation of a traumatic event with the aim of helping them come to terms with the event and start recovering. Alongside this, VR is also being employed for treating anxiety, depression, and phobias.
For instance, various patients with anxiety have found VR-based meditation a useful approach to reducing stress sensitivity and strengthening coping mechanisms. VR can provide a safe setting in which patients face the things they fear while remaining in a guarded, secure environment. 5. VR in Medical Training VR is also being used by medical and dental students to practice surgeries and procedures. Its interactive nature provides a safe environment, free from dire consequences, minimizing the risk of harm or blunders compared with practicing on actual patients. Virtual patients allow trainees to build skills they can later apply in the real world. VR technology is not only improving the quality of medical training but also holds the potential to reduce its cost. 6. VR in Fashion VR's application in the fashion industry is one aspect that has received much less attention. For instance, virtual replicas of store environments can be highly effective for retailers experimenting with signage and product displays without having to commit to a physical build; likewise, time and resources can be allocated more sensibly when developing store layouts. A few renowned brands that have started deploying VR in their business are Tommy Hilfiger, Coach, and Gap. These brands use VR to offer 360-degree experiences of fashion events and to let consumers virtually try on clothes. You might also be interested in checking out our blog on IoT in the fashion industry. 7. VR in Marketing Marketing is a process that requires constant refinement of its interactive techniques, with the focus on persuading consumers. Traditional techniques are slowly becoming history as virtual reality becomes the new trend.
(Recommended - Marketing Analytics) Customized VR approaches help improve marketing performance, which is why the technology is adopted by many content creation organizations to increase engagement with their content. 360-degree VR promotional videos are a valuable interactive tool for persuading consumers to buy products and services, offering a virtual experience of how those items can enrich their lives and meet their present and future needs. For instance, back in 2016, Oreo launched a VR marketing campaign developed by the digital agency 360i, in which the user is taken on a fun journey through the "wonder vault". The video marked the cookie maker's first step into the world of Virtual Reality. 8. VR in Architecture VR applications give architects an advantage when presenting ideas and designs to clients at 1:1 scale, allowing clients to explore a project in depth before approving the designs and starting construction. Residential, commercial, and other construction projects all benefit from being visualized in a virtual environment, which helps stakeholders understand every aspect of the project, from safety precautions to discrepancies in the finalized design. The technology is also being deployed to improve interior design, letting customers virtually walk through various interior options for their business or home, replacing the standard design drawings. The applications above are just a handful of examples of how Virtual Reality technology is being adopted; the technology's potential is vast.
Alongside these applications, the technology is also being deployed in fields like cinema and entertainment, research, health and safety, heritage and archaeology, fine arts, marketing, and music and concerts. It remains to be seen how this technology will revolutionize industries across the world in the future.
The continuously growing IT infrastructure is not only providing numerous opportunities to businesses but is also opening the door to increased cybercrime. Data flows through every device we use, and if it is not encrypted to safeguard it from hackers, the consequences can be severe. The growing reach of IoT and computing machinery has made human lives much more comfortable than before. These devices exchange streams of information to communicate and execute tasks; however, the leak or misuse of such data can yield lucrative gains for hackers, which is one reason cybersecurity has emerged as one of the most important prerequisites for IT infrastructure. Researchers are continuously working to strengthen the IT landscape so that it becomes immune to external threats that cause breaches of information. One of the most successful strategies companies have adopted is Big Data Analytics (BDA). Big Data Analytics comes in handy when it comes to tackling data threats; it involves an automated approach to examining vast and varied sets of data spread across servers on different computers. It hunts for patterns and trends in characteristic data and then analyzes them for probable misfits that might cause a system disturbance. Organizations nowadays also use BDA to explain and predict customer preferences to drive larger sales. BDA kills two birds with one stone: it can help reshape the framework of a business target, and it enables technicians to analyze, detect, and terminate probable cybercrime threats. The latter feat is achieved through minor reprogramming in the system software itself while keeping other operations intact.
Having said that, let's explore some of the primary ways BDA can help boost cybersecurity: Identifying Unusual Behavior Big data analytics examines large chunks of data through an automated, continuous process; the same task, given to a person, would take practically forever with no guarantee of accuracy. Due to the sheer volume of data generated every second across the world, analysis becomes time-consuming even for machines when the data arrives all at once. BDA takes on separate small bits of big data and works through the entire dataset gradually, separating valid data from threats; this not only makes the process less tedious but also decreases the chance of errors. Cybersecurity experts often find it challenging to spot abnormalities accurately because of the varied spectrum of data, and because people jump across different networks, manual data analysis is very difficult. BDA distinguishes normal behavior from abnormal behavior very quickly and proposes recommendations for improving data flow. The more complex data analysis it performs, the better its models become at tackling abnormalities; through increasingly smart detection, it can quickly detect malware without false alarms. Tackling Malware Attacks Cybersecurity is not ensured by detection of malware alone; proper treatment of malware is also required for ensuring safety. Big data analytics can be customized to detect and respond to malware and other information threats automatically. At the hour of need, BDA can prevent an information breach by automatically cutting off the flow of information to the device that has supposedly originated the suspicious threat. It can additionally send automatic warnings to devices that engage in possibly suspicious activities. BDA can also send a detailed report of suspicious activity to both the user and the service provider.
These quick actions ensure that potential threats are blocked and confidential data stays secure. Preparing Systems for the Future Even tackling malware is not enough: experts say it is better to prevent data breaches than to cure them. With BDA's smart analytics, engineers can build frameworks that detect future disturbances and stop them at their very emergence. For this purpose, BDA performs network monitoring alongside its continuous analysis of big data; it identifies probable threat scenarios and prepares systems to guard against them in advance. Customer information is one of the main concerns of big companies, and breaches can lead to severe consequences; the recent leak of information from Facebook is the latest example of this sort. With the help of big data analytics, systems can easily track and remove the sources of cybercrime. The ever-growing flow of information in the public domain will keep attracting hackers looking to steal or infect data; it is therefore advisable that organizations employ big data analytics, which offers other benefits besides securing data systems from hackers.
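To make the "identifying unusual behavior" idea concrete, here is a minimal, illustrative sketch of statistical anomaly detection over event counts. The function name, the sample data, and the 2.5-standard-deviation threshold are all hypothetical choices for illustration, not part of any specific BDA product:

```python
import statistics

def find_anomalies(event_counts, threshold=2.5):
    """Return indices of buckets whose count deviates more than
    `threshold` standard deviations from the mean."""
    mean = statistics.mean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:  # perfectly uniform traffic: nothing to flag
        return []
    return [i for i, c in enumerate(event_counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login counts; hour 5 shows a suspicious spike.
counts = [120, 115, 130, 118, 122, 900, 125, 119]
print(find_anomalies(counts))  # → [5]
```

Real systems use far richer features (users, hosts, sequences) and learned baselines, but the principle is the same: model "normal", then flag deviations.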
If you're planning to take the Security+, SSCP, or CISSP exam, you should know about many of the attack types, such as the smurf attack. As an example, objective "3.2 Analyze and differentiate among types of attacks" for the CompTIA Security+ exam lists several common types of attacks, including the smurf attack. A smurf attack spoofs the source address of a broadcast ping packet to flood a victim with ping replies. That's a complex sentence, so it's worth breaking it down. A Ping is Normally Unicast A ping is normally a unicast message sent from one computer to another. It sends ICMP echo requests to one computer, and the receiving computer responds with ICMP echo replies. Figure 1 shows how this works: computer 1 sends a unicast ping to computer 3, and computer 3 responds with ICMP replies. If you receive the responses, you know the other computer is operational. Note: Because ICMP is used in many types of attacks, many firewalls block ICMP echo requests. If you don't receive ping responses back, it doesn't necessarily mean the other computer is not operational; the ping could be blocked by a firewall. On Windows systems, ping sends out four ICMP requests and gets back four replies; on some other operating systems, ping continues until stopped. You can add the -t switch to ping on Windows systems to make the requests continue until stopped. A Smurf Attack Sends the Ping Out as a Broadcast Instead of using a unicast message, a smurf attack sends the ping request as a broadcast. In a broadcast, one computer sends the packet to all other computers in the subnet. These computers then reply to the single computer that sent the broadcast ping, as shown in Figure 2: computer 1 sends a broadcast ping to all the computers on the subnet, and each of them responds, flooding the sender with ping replies. If computer 1 is the attacker, the result shown in Figure 2 isn't very beneficial.
If something isn't changed, the attacker gets attacked. The Smurf Attack Spoofs the Source IP If the source IP address isn't changed, the computer sending the broadcast ping gets flooded with the ICMP replies. Instead, the smurf attack substitutes the victim's IP address as the source IP, and the victim gets flooded with these ICMP replies. Figure 3 shows how computer 1 can launch the smurf attack using computer 2's IP address as the source IP address; all the computers on the subnet then flood computer 2 with ICMP replies. Smurf Attacks Use Amplifying Networks A smurf amplifier is a computer network used in a smurf attack. This is easily prevented by blocking the IP directed broadcasts that smurf attacks use. However, if a router or firewall isn't configured to protect the network, it can become part of the attack. Figure 4 shows how this works: the attacker (computer 1) sends a broadcast ping into the amplifying network with a spoofed source IP address of computer 6, and each computer in the amplifying network receives the broadcast and responds by flooding the victim (computer 6) with ping replies. Not Blue Packets The rumor that a smurf attack is one where attackers send out little blue packets that report back to Papa Smurf is simply not true. Ensure you understand the basics of a smurf attack when taking any security-based exam such as the Security+, SSCP, or CISSP exams. A smurf attack spoofs the source address of a broadcast ping packet to flood a victim with ping replies. Smurf attacks rely on amplifying networks, but administrators commonly block them with rules on a router or firewall.
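For readers who want to see what an ICMP echo request actually looks like on the wire, here is an illustrative Python sketch that builds one by hand using only the standard library. It constructs the ICMP header (per RFC 792) and the RFC 1071 one's-complement checksum; it deliberately does not send anything, since transmitting spoofed broadcast pings requires raw sockets and root privileges, and doing so against networks you don't own is illegal. The function names are invented for this example:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident=1, seq=1, payload=b"ping") -> bytes:
    """ICMP type 8 (echo request), code 0, with checksum filled in."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

pkt = build_echo_request()
# A packet with a correct checksum re-checksums to zero.
assert icmp_checksum(pkt) == 0
print(pkt[0])  # → 8 (ICMP echo request type)
```

In a smurf attack, the attacker wraps a packet like this in an IP header whose source field holds the victim's address and whose destination is the subnet broadcast address; the amplification comes entirely from that IP-layer spoofing, not from the ICMP payload itself.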
Master Security+ Performance Based Questions Video Other Security+ Study Resources - Security+ blogs organized by categories - Security+ blogs with free practice test questions - Security+ blogs on new performance-based questions - Mobile Apps: Apps for mobile devices running iOS or Android - Audio Files: Learn by listening with over 6 hours of audio on Security+ topics - Flashcards: 494 Security+ glossary flashcards, 222 Security+ acronyms flashcards and 223 Remember This slides - Quality Practice Test Questions: Over 300 quality Security+ practice test questions with full explanations - Full Security+ Study Packages: Quality practice test questions, audio, and Flashcards
High availability (HA) solutions are perhaps most commonly used to ensure that data is accessible during failures and outages. Savvy IT teams implement this software in preparation for complete disaster scenarios—hoping that they never have to use it—but they also realize that HA can help avoid downtime and provide business continuity in many other scenarios. 1. Availability After Disaster The main focus of high availability projects for many organizations is to ensure application and data availability if and when the production server becomes inaccessible, even after a significant failure. HA solutions accomplish this by replicating changing data across great distances in order to keep production data and the replicated backup data separate and safe, ideally on a server in a different geographic region. High availability solutions for Power Systems technology have been in use for over 30 years and continue to improve as the speed of the Power server increases along with the speed of communications. In IBM i environments, teams use this software to replicate all libraries and all IFS directories that are critical to the process of providing application resiliency on a role swap, which should be performed at least once a year, possibly once a quarter, depending on your business continuity goals and risk of disaster. 2. Data Propagation Data propagation describes the movement of data from one or more data sources to one or more local access databases according to propagation rules. It feeds data warehouses and makes data more accessible to users. Some organizations take advantage of the remote journaling feature within high availability solutions to transmit data in real time across the network. As data changes, it is immediately sent and then applied to a consolidated database. HA solutions can also keep objects or data files in sync across a network of IBM i servers.
For example, if you need to update a rate table or inventory file across a network of IBM i servers, an HA solution could take one or more objects and propagate them across multiple servers. You could also do the reverse with several smaller tables on smaller IBM i servers and use the HA solution to transmit copies of the data to a consolidated server. For example, a bank with hundreds of branches, each with their own IBM i server, might use many-to-one data replication within their HA solution to consolidate multiple P05 servers into a larger P30 server to make reporting easier. 3. Business Intelligence Business intelligence (BI) can often cause performance issues on production servers. End users build queries over data and choke the system with poorly-written query rules. Through no fault of their own, end users have a tool, but don’t really understand how the data is efficiently queried—it’s just a fact of life for many in the BI world. In an ideal world, however, it would be best to offload these queries to another system to lessen the burden on the production system and keep business applications performing optimally while ensuring that the data remains current. Believe it or not, this is possible. For years, many organizations have used high availability solutions to replicate the data and objects from production to a backup server to run their queries. Of course, this doesn’t prevent poorly-written queries from degrading system performance on the backup server, so it’s still prudent to keep an eye on things with a performance and application monitoring tool, but your HA solution will keep the data fresh and confine performance-gobbling queries to the backup server so you can do business and service your customers even while someone is running a killer query. 4. Backups on Secondary Many organizations have found that high availability solutions offer new options for backups. 
Data and objects—including the IFS—can be replicated to a target server, which you would back up periodically. For a truly safe and clean backup, you would stop the replication process. Your other option is to use the save-while-active process. Save-while-active (SAVACT) was established in the IBM i operating system well over a decade ago to help eliminate downtime due to backups. It is an IBM i save command attribute that can be used to back up data and objects while they are in use. With remote journaling, data is constantly being applied to the target server as it is brought over from the source, so your objects are changing while you execute a backup. With the save-while-active checkpoint option, you would end remote journaling, execute your save until you get a clean checkpoint, then restart remote journaling and continue with your save operation. This often reduces downtime to a five-minute window, even though the actual backup may run for a few hours. By using an HA solution to run these backups on a secondary system, you no longer have any backup-related downtime on your production servers and negligible to no downtime on the secondary system. 5. Hardware or Software Maintenance Performing maintenance on your production server usually results in some downtime for adding resources, modifying major software application levels, or replacing a failing component, such as one of your drives that is being mirrored. To avoid downtime for hardware and software maintenance, many organizations use their high availability solution to do a planned role swap from source to target server, replace the hardware or upgrade the software on the original source server by shutting it down, and then restart journaling. The original source becomes the new target server until they do another role swap. The key here is that you must be able to successfully execute a role swap, which shouldn’t be a problem since your HA software should also make it possible to test role swaps regularly. 6.
System Migrations Things can get tricky when the time comes to move data from a server sporting an older operating system to one with IBM i 7.1 or higher. You don’t want any downtime for your business-critical applications running on the system. Some organizations solve this problem by, you guessed it, using their high availability solution. By installing the software on the production or source system and then replicating the data in real time across servers, the data is fresh and you’re ready to recompile the application code on the newer OS. Once that is set, all that remains is to role swap from the old OS over to the new OS and your system upgrade is complete. By using an HA solution for system upgrades, you can move the data to a newer system level without going through multiple steps and you can do it at your own pace—just leave this HA setup as is until you’re comfortable with the health of the application at the new OS level. 7. Data Conversion Another factor in system upgrades is converting data from an older operating system to a newer system while the data is active on production. Here again, organizations have found that high availability software is very good at moving the raw data and objects from IBM i to IBM i. An HA solution allows you to define new rules that move data while it changes on the old production server to the new target server in another data center until you’re ready to run production from the new data center. Once you do the full move to the data center, you simply turn off replication. At that point, you might decide to create new rules to take the newly converted data back to your full-time target server so that you have replication going again for this new workload. Taking a data replication approach during these daunting data migration and data movement challenges has proven to be an asset at many organizations, but it does shake up the traditional thought process.
Data replication allows you to move the data over time unlike the save-and-restore approach, where you’d have to shut down for a weekend to travel the data across the country and then do a restore. 8. Regulatory Compliance Compliance regulations are not driven by technology, but many industries are required to have a proper backup and business continuity plan in place for IT emergencies. Whether it’s SOX, PCI, or GBLA, they all require you to prove the effectiveness of any HA/DR solution you may have in place. Organizations should be able to turn to their high availability solution to provide audit, setup, and history reports (i.e., dashboards) that help them pass those pesky audits with ease. Most HA solution rules are database-driven and a simple query over the data should provide any auditor with proper information about what you’re replicating. Additionally, you must be able to prove that you have tested your role swaps—just another reason to practice them! Ideally, your HA solution is able to track this activity automatically or put information messages into the system log on the server. If not, you may just have to keep track of this manually in a document. No matter which way is easier for you, auditors need proof that you are doing what you say you are. These are just a few of the most common uses for high availability solutions and data replication beyond disaster recovery. Some companies have been known to use HA solutions to build real-time test data on a development partition, which is one reason to use software-based replication instead of or alongside hardware-based replication, since it is more flexible and can be used to face many IT challenges. 24/7 business demands 24/7 system and application availability. When you’re ready to avoid downtime—be it planned or unplanned—Robot HA is the fastest, easiest, most affordable way to establish high availability at your organization.
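The replication and role-swap patterns that recur throughout these use cases can be sketched in miniature. The following Python toy model is purely illustrative (the class and function names are invented, and real HA products work at the journal and object level on IBM i, not on Python dicts), but it shows the essential idea: drain the change journal to the target, then reverse the roles:

```python
class Node:
    """A toy stand-in for a server with a change journal."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.journal = []  # pending changes (remote-journaling analogue)

    def write(self, key, value):
        self.data[key] = value
        self.journal.append((key, value))

def replicate(source: Node, target: Node):
    """Apply the source's unreplicated journal entries to the target."""
    for key, value in source.journal:
        target.data[key] = value
    source.journal.clear()

def role_swap(source: Node, target: Node):
    """Drain the journal so the target catches up, then swap roles."""
    replicate(source, target)
    return target, source  # target becomes the new production node

prod, backup = Node("A"), Node("B")
prod.write("rate", 1.25)
prod, backup = role_swap(prod, backup)
print(prod.name, prod.data)  # → B {'rate': 1.25}
```

The reason a planned role swap can be near-zero-downtime is visible even in this sketch: because replication runs continuously, the "catch up" step at swap time only has to drain whatever is left in the journal.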
What is SIEM? GRIDINSOFT TEAM Security Information and Event Management (SIEM) is a set of tools and techniques for analyzing events in the environment where the system is deployed. SIEM combines two technologies that grew up separately from each other: Security Event Management (SEM) and Security Information Management (SIM). The former is responsible for monitoring and alerting on incoming events in the network (new connections, possible issues, and dubious behavior); SIM, on the other hand, is about analyzing logs and delivering the corresponding conclusions to the analyst team. SIEM joins both approaches, offering simultaneous logging of all events in the system together with their analysis. This technique allows cybersecurity teams to monitor what is happening, detect threatening elements, and react to them. The reaction usually requires skills and software that make it possible to secure certain areas of the network. For a more effective and convenient response, companies opt for EDR/XDR solutions that let them apply the needed actions automatically across all units. What are the SIEM components? The SIEM process consists of four logical steps. The software that provides them is unitary, but the procedures are easily separable: data collection, data consolidation, event notification, and policies of actions. All of them run on a repeated, continual basis. Let's look at each one, top to bottom. - Data collection is the stage where the system gathers all available information about events in the network. The sources of this information are workstations, endpoints, and network facilities (e.g. firewalls, network switches). Additionally, SIEM can gather information from cybersecurity software if it is present and configured to share data with the SIEM system. At this stage, the dataset needed for further analysis is formed.
- Data consolidation is the stage where the gathered information is analyzed. To make it more convenient for systems and specialists, SIEMs automatically sort the incoming information; the data is then grouped by category, and correlations between events are determined. This information is enough to draw conclusions and trace cause-and-effect relations, which is critical when investigating cybersecurity incidents. - Event notification is the simplest procedure: it defines how notifications are issued for predefined events. The manner and the cases in which these notifications are sent are configured during SIEM deployment in the protected network. These alerts serve as markers telling the cybersecurity team where to pay attention. - Policies of actions are a list of predefined states of the controlled area in certain circumstances and the alarms and notifications that correspond to them. Analysts create so-called "profiles" that describe how the environment looks and behaves in a normal situation and during routine cybersecurity incidents. Policies are needed so the SIEM can determine whether the system is functioning normally or is exposed to a hazard. How does a SIEM work? As the paragraphs above show, the key function of SIEM is to gather and group information about what happens in the environment. It can also provide advice on how to deal with potentially dangerous events, but these tips are a supplement rather than a prescription: the cybersecurity team must make the decisions, and the SIEM itself can make only minor adjustments. The standard algorithm of SIEM work consists of the four steps above, with deviations according to the current situation. When nothing happens, the system idles, passively monitoring events.
However, when something unusual happens - for example, a new user connects to the network, or a workstation reboots at an unusual time - the SIEM starts working. It focuses on the suspect events, analyzes the information, and, if something noteworthy is spotted, reports it to the cybersecurity team. Security information and event management is not an autonomous cybersecurity process. It cannot protect your network on its own, as its general function is to journal and process information. Most often, cybersecurity in corporations is provided by EDR/XDR solutions, and SIEM is a complementary system that eases the tracking of events. Types of SIEM By form of implementation, SIEM falls into three categories: cloud, in-house, and managed. Most other differences that could be used for classification are minor and vary from one vendor to another. Cloud SIEM, as the name suggests, is a security information and event management system built on cloud computing. This variant became popular after the global spread of cloud computing solutions in corporations. It delivers SIEM functionality with far less deployment effort, although it is somewhat less adjustable to the needs of each individual corporation. Another problem is the increased attack surface: the data collected and processed by the SIEM may leak not only from the client company but also from the cloud service provider, or be intercepted in transit. However, flexible pricing and the absence of hardware to purchase and maintain make it quite attractive, especially for medium-sized businesses. In-house SIEM means implementing the system on your own hardware and software. That provides maximum integration, since the in-house model usually involves extensive adaptation of the software to the needs of the particular company.
Usually, such integration is chosen by companies that use all-encompassing protection measures forming a full-scale security center. This approach requires far more skilled specialists and costs much more money, but it is as effective as possible: the security center provides peak performance for all security solutions applied in the environment, and in particular for the security information and event management process. Managed SIEM is a type of system that may be based on either the in-house or the cloud form, but with the help of an outsourced analyst team. The technology provider offers both the computing power and the qualified staff who make decisions, either constantly or on demand. This service is attractive for companies that lack the corresponding staff or do not want to hire additional personnel. Cybersecurity researchers define three stages of SIEM evolution over time. - In the first stage, security information and event management solutions were just a combination of SIM and SEM. Due to the absence of proper logging in most contemporary programs and operating systems, first-generation SIEMs were quite restricted in their coverage. - The second generation of SIEM software featured enhanced mechanisms for working with large arrays of data. The increased number of applications and environment elements covered by the solution boosted the data flow exponentially. Additionally, second-generation SIEMs paid more attention to historical data: they became able to compare current events with past logs. - The latest, third generation, which appeared as a concept in 2017, features UBA/UEBA functionality and SOAR in addition to classic SIEM functions. User behavior analytics and security information and event management systems are similar in purpose but were designed to track different areas of internal activity.
Such joint action increases the solution's efficiency and improves coverage. Security orchestration, automation, and response (SOAR), in turn, is software that makes responding to any kind of threat faster and more convenient; in conjunction with SIEM/UBA notifications, SOAR makes it possible to deal with threats in a very short period. Why do we need a SIEM tool? SIEM is a great complement to the anti-malware protection already present in a corporate network. Although it can present the whole pack of information needed to monitor the network and understand threats, it does not provide any facilities for automated malware detection and removal. Moreover, without XDR, SOAR, and UEBA it is quite hard to mount any response at all: manual management of the security settings of a whole network is complicated even for a professional. Using SIEM in cooperation with other security mechanisms will drastically increase overall protection, as it raises awareness among cybersecurity personnel. Contrary to UBA, it is effective even in small companies, as it does not concentrate solely on user behavior. If you want your cybersecurity team to be aware of every little thing that goes beyond routine events, SIEM is what you need. Frequently Asked Questions There are three major problems in any cybersecurity system that lacks SIEM: absence of proper content, information overload, and time to react. A security information and event management system offers a solution for all of them. Absence of proper content is when the cybersecurity team finds it difficult to act because no relevant, easy-to-process information is available; SIEM finds what is needed and makes it much easier to process. Information overload is the opposite situation: when too much data floats around with no grouping, anyone will struggle to make a decision.
The sorting and grouping functionality of SIEM heavily mitigates this issue. Time to react is the problem that follows from the previous two. With a massive information flow, or no flow at all, you will likely fail to react within the proper time frame. For cyberattacks, the window may last for just a few minutes. Solving the problems of structuring the data and raising the proper alarms sharply decreases reaction time and increases the chances of deflecting the attack.
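To make the correlation idea concrete, here is a minimal sketch in Python of the kind of rule a SIEM correlation engine evaluates: group raw events by user and raise an alert when failures cluster inside a time window. The event format, field names, and thresholds here are invented for illustration; real SIEMs normalize logs from many sources and apply far richer rule sets.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy event records; a real SIEM would normalize logs from many sources.
EVENTS = [
    {"time": datetime(2022, 10, 1, 9, 0, s), "host": "web01",
     "type": "login_failure", "user": "admin"}
    for s in range(0, 50, 5)  # ten failures in under a minute
] + [
    {"time": datetime(2022, 10, 1, 9, 5, 0), "host": "web01",
     "type": "login_success", "user": "admin"},
]

def correlate(events, threshold=5, window=timedelta(minutes=1)):
    """Flag users with >= threshold failures inside the window --
    the kind of rule a SIEM correlation engine evaluates."""
    alerts = []
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] == "login_failure":
            by_user[e["user"]].append(e["time"])
            recent = [t for t in by_user[e["user"]] if e["time"] - t <= window]
            if len(recent) >= threshold:
                alerts.append((e["user"], e["time"]))
    return alerts

alerts = correlate(EVENTS)
print(f"{len(alerts)} alert(s); first for user {alerts[0][0]!r}")
```

Grouping the raw failures into a single per-user pattern is exactly what turns an unmanageable event stream into something an analyst can react to in time.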
Brand protection is the process of preventing copycats, counterfeiters, and other bad actors from undermining your brand's legitimacy by using your brand name, intellectual property (IP), and brand identity, and by violating your trademarks, copyrights, designs, and other types of IP. Most companies use various approaches to protect their revenue and reputation. Whether you work for a big organization or are a small business owner, you must understand the importance of brand protection. One of the most useful methods to secure your online business is using proxy servers while surfing the internet. Datacenter proxies provide internet privacy to companies when they extract data for business use or access geo-restricted sites. Organizations can purchase these proxies to keep their brand competitive and secure. Datacenter proxies are remote servers that mask your business's information on the internet by concealing your IP address. Brand abuse is a concept in the brand protection industry that refers to the abuse of a company's intellectual property by a counterfeiting party. Whether their objective is personal gain or some other malicious intent, these counterfeiters use approaches such as:
- Counterfeits and replica products: a product designed to look exactly like an existing product made by a third-party brand, displaying the brand's logos, symbols, and trademarked names without any permission or authorization.
- Copyright infringements: using an authentic brand's copyrighted photos, text, or other media to sell counterfeit product listings on e-commerce stores and marketplaces.
- Design-infringing products: products that copy distinctive components of an existing brand while avoiding any protected trademarks that would expose the product as fake.
- Rogue websites: the site looks the same, but the address distinguishes it from the brand's site.
For instance, if you mistype Facebook as 'facbook', you might see a site that looks identical to the original if someone has purchased that domain and created a website there.
- Copycats: a product that looks and feels like an existing product, but with no direct violation of a third party's trademark.
- Brand impersonation: a third party claims to be a representative of, or affiliated with, the original brand, using the same intellectual property to convince people it is the authentic brand.
The modern business environment needs new business techniques that will expand your business over time by safeguarding its IP. Before deciding on the tools and technology you will require, you must develop an effective strategy. So, what exactly should that strategy include? Here are four essential elements of an effective brand protection strategy: Brand protection starts with involving the government from the outset. You need to ensure that your business actually holds whatever protection the law offers your intellectual property, such as copyrights, patents, and trademarks. Imagine spending millions of dollars on a brand's marketing and growth only to find out that you never registered the brand name. Facebook, now Meta, paid around $60 million for rights to the Meta name to avoid a long and expensive legal battle. It can be crucial for your company to conduct comprehensive internet advertising monitoring as part of its strategy. This can help you identify where your advertising is being displayed in unfavorable regions and provide the information you need to rectify it. Moreover, using modern proxy technology, your company can quickly discover brand abuse in online marketplaces worldwide.
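The rogue-website problem described above ('facbook' versus 'facebook') can be screened for automatically. Below is a small, hypothetical Python sketch that flags domain labels lying within a small edit distance of a protected brand name. Commercial brand-protection services use far more sophisticated matching (homoglyphs, keyboard layouts, TLD swaps), so treat this only as an illustration of the idea; the brand list is invented.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

BRANDS = ["facebook", "paypal", "microsoft"]  # names you want to protect

def lookalikes(domain: str, max_distance: int = 2):
    """Return protected brand names the registered label closely imitates."""
    label = domain.split(".")[0].lower()
    return [b for b in BRANDS
            if 0 < edit_distance(label, b) <= max_distance]

print(lookalikes("facbook.com"))   # 'facbook' is one deletion from 'facebook'
```

Exact matches (distance zero) are excluded on purpose, since the brand's own domain is not a lookalike.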
Monitoring social media is also crucial, and it frequently reveals networks of bad merchants who use major social media outlets to promote their products and services under your brand. Recently, a few cases were reported in which the discovery of a single Instagram account promoting counterfeit goods led to the disclosure of a network of several social media accounts. It's essential to monitor social media platforms full-time. A datacenter proxy server helps you access multiple accounts from the same physical location. You can also explore building a network of 'offline' partners (such as law enforcement or government customs agencies) with whom you can track down the most persistent counterfeiters. This kind of action is likely to lead to intellectual property law enforcement and brand protection litigation for specific brands, but not all. For those brands that pursue brand protection, this is consistently the final step in the process. Most successful companies confront brand abuse, and they employ an appropriate strategy to tackle counterfeiting parties. Nowadays, building an adequate brand protection strategy for online or offline business growth is essential, as infringers and copycats will go to great lengths to harm and disrupt your brand value, reputation, and growth.
Accessibility features improve the usability of your device for users with impaired vision or hearing, cognitive difficulty, or reduced dexterity. In this tutorial, you will learn how to:
• Access accessibility features
• Turn on/off Narrator
Narrator is a screen reader that describes what's on your screen so you can use that information to navigate your device. It can be controlled by keyboard, touch, or mouse. To activate Narrator, select the Narrator tab, then select the Use Narrator switch. Note: Review the Narrator keyboard changes prompt, then double-tap OK.
On Wednesday, South Korea’s government said a malicious code from unknown hackers caused “massive” computer network failures at several banks, the police and TV stations. ATMs ceased to function. The South Koreans seemed fairly quick to blame it all on the nasty people from the North. This morning I woke up to the news that the attacks originated from an IP address in China; “apparently” it’s a favourite tactic of the North Koreans to work indirectly through Chinese IP addresses to cover their tracks. The whole story is starting to pong. Facts are scarce, but the suspicion is that this malware was distributed by email in the traditional manner, using files called ‘KBS.EXE’ and ‘MBC.EXE’ (page in Korean, but you can get Google to translate). This doesn’t sound like a targeted attack on critical infrastructure; it sounds like a standard malware delivery to PCs. It’s claimed that the malware activated on Wednesday and wiped the hard disks, displayed skulls and so on. It’s possible, but another explanation is that malware often attempts to install itself on the boot partition and sometimes goes wrong, leading the luser to believe the disk has been maliciously wiped when in fact it’s just been made inaccessible accidentally, and it won’t boot. The synchronised timing could be accounted for by a botnet software upgrade that didn’t work as expected. Now let’s consider the “plot”: to knock out critical South Korean infrastructure. If you wished to disrupt the Internet, that’s what you’d have to attack, not the endpoint PCs. Attacking PCs simply inconveniences individual users rather than taking down an organisation. The suggestion that an email virus could take down the ATM network is, frankly, ridiculous. How do you kill an ATM by emailing it? Or the bank’s mainframe?
If there was ATM disruption, it could have been a side-effect of botnet traffic gone wild, but to say it was targeting the ATM network needs evidence to back it up before I’d take it remotely seriously. A DDoS attack may be possible if the network is not isolated from the Internet, but if that were true they were being very lax about things, and reports are talking about PC malware, NOT a DDoS attack. And what of the attacking IP address traced back to China? No surprise there. China is botnet central. To be blunt, a lot of the software used on private computers in China is bootleg, which means it’s either supplied with botnet software pre-loaded, or isn’t able to receive security updates from Microsoft, making it easy prey. It’s no coincidence that the incidence of zombie computers is higher in countries where intellectual property rights are less vigorously enforced, and that part of the world is a case in point. So, whilst it’s true that North Koreans would use botnets based in China, it’s also a meaningless statement. Everyone uses botnets based in China and the Far East. Reports could be wrong, of course. This could be a DDoS attack against the South Korean Internet in general, and specific high-profile targets. However, this does not square with the malware reports of computers not booting and “skulls appearing on screens”. The whole thing pongs. Here’s my theory: social engineering emails were used to distribute malware in South Korea. Because the criminals were using emails in Korean, only Korea was affected. Either maliciously, or more likely through incompetence, the malware tried to install some botnet software and broke a number of PCs. The news media in Korea has been quick to blame this on a sinister North Korean plot, and the world’s media has picked this up as a story without enough people sanity-checking the whole scenario.
Attacking the Platform
These are attacks on computer systems and networks based on exploiting hardware design or manufacturing bugs, or "not playing by the rules" in dealing with the hardware. The idea of violating the "rules" by freezing the semiconductors or overwriting Ethernet firmware data seems analogous to the very common software vulnerabilities caused by not fully validating user input. Well, maybe not just analogous; maybe we should consider frigid liquids or Firewire signals or Ethernet signals as user-supplied input, just like packet contents or form data submitted to web servers. What makes these different is that we don't generally have control of the hardware design and manufacturing. Yes, you could choose to buy an Ethernet card or CPU or motherboard from a different manufacturer, but you have to choose from the existing market. What if you can't trust the hardware? Then you can't trust anything! The paper "Stealthy Dopant-Level Hardware Trojans: Extended Version" in the Journal of Cryptographic Engineering looked at undetectable backdoors built into chips. Journal of Cryptographic Engineering link (paywall) Paper provided by authors While there are some interesting open-source hardware projects, they are the exception and do not generally provide the features and performance needed. Enthusiasts must not forget that corporations and government agencies require well-known and trusted hardware manufacturers.
Speculative Execution CPU Design Flaws
In early January 2018 we learned that effectively all Intel CPUs and many AMD and ARM CPUs since 1995 have hardware design flaws. They do not really enforce isolation between user applications and the operating system. The kernel memory leak flaw allows a user process to read memory pages of other processes and of the kernel itself. This can expose cryptographic keys and other sensitive data.
When a CPU reaches a conditional branch in the code, something like if (condition) ..., it tries to predict which branch will be taken before the decision is known. If its prediction was correct, it will have already executed a block of code before the decision is known. If not, execution rolls back and follows the correct branch instead. Modern processors are good at this "speculative execution" because they store details about previous branch decisions in the BHB or Branch History Buffer. While this often leads to the correct decision, the conditional statement may be security-critical. The roll-back at incorrect predictions doesn't include the cache and the Branch History Buffer. That's because the speculative execution is done for performance reasons, and rolling back cache and BHB would hurt performance. However, speculative execution, by its very nature, is not controllable and possibly dangerous.
Finding a CPU Design Bug in the Xbox 360
See the great write-up on the discovery of a hardware design bug in the Xbox 360 CPU, a three-core PowerPC processor made by IBM. It had the usual dcbt instruction for prefetching data into the cache. But because it was a video game console processor and performance was therefore all-important, an xdcbt or extended prefetch instruction was added. They found that calls to a memory-copy routine using xdcbt caused crashes. So, they started compiling the code so as to not use xdcbt. However, the crashes continued. They eventually figured out that the branch predictor would sometimes speculatively execute the xdcbt instruction. If it appeared anywhere in the application code, even where it was used in an extra-cautious way, it might be speculatively executed where it shouldn't be. The author of that piece mentions that the DEC Alpha processor had a similar problem. Fixing a design flaw requires redesigning the system.
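The asymmetry running through both stories above, where architectural state rolls back on a misprediction but the cache does not, can be shown with a toy model. This Python sketch is purely illustrative (real speculation happens in hardware, not in anything you can script), but it captures why a speculatively executed load leaves observable evidence behind even when the branch turns out not to be taken:

```python
class ToyCPU:
    """Toy model: speculation rolls back registers, but not the cache --
    the asymmetry behind Spectre and the xdcbt bug described above."""
    def __init__(self):
        self.cache = set()        # addresses brought into cache
        self.registers = {}       # architectural state

    def load(self, addr):
        self.cache.add(addr)      # every load leaves a cache footprint
        return f"data@{addr}"

    def branch(self, predicted_taken, actually_taken, spec_addr):
        if predicted_taken:
            # Speculatively execute the taken path before the condition
            # is resolved: the load happens, the cache is touched.
            snapshot = dict(self.registers)
            self.registers["r0"] = self.load(spec_addr)
            if not actually_taken:
                # Misprediction: architectural state rolls back...
                self.registers = snapshot
                # ...but self.cache is deliberately NOT rolled back.

cpu = ToyCPU()
cpu.branch(predicted_taken=True, actually_taken=False, spec_addr=0x41)
print("r0" in cpu.registers, 0x41 in cpu.cache)
```

The register write vanished, but the cache footprint survived; in the real attacks, an attacker times memory accesses to read that footprint out.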
People have looked at this; see this paper for a discussion of designing a new instruction-set architecture that would be immune to Spectre and similar attacks while still being almost as fast as a CPU using what we now recognize to be dangerous speculative execution. Speculative execution vulnerabilities aren't restricted to the x86 architecture. The ARM-based M1 CPU used in Macs is susceptible to the "PACMAN" flaw. An attacker can bypass the ARM Pointer Authentication, which is intended to defend against pointer corruption attacks. MIT paper on the flaw doi: 10.1145/3470496.3527429 The Register Macworld Apple on arm64e pointer authentication
Kernel Page Table CPU Design Flaws
These mechanisms are abused in the "Spectre" and "Meltdown" attacks on speculative execution.
Meltdown and Spectre attacks on 1995-2017 Intel, AMD, and ARM CPUs
Very early public report The Register initial report The Spectre attack paper describes two exploits: (1) reading kernel memory from user space when running on bare metal, and (2) reading host kernel or hypervisor memory from kernel space on a VM. The Meltdown attack paper describes an additional exploit, also reading kernel memory from user space when running on bare metal. Other variants appeared in the following months.
See the research paper from researchers at Princeton University and NVIDIA: "MeltdownPrime and SpectrePrime: Automatically-Synthesized Attacks Exploiting Invalidation-Based Coherence Protocols" Also see the paper by researchers at the College of William and Mary, Carnegie Mellon University, University of California Riverside, and Binghamton University: "BranchScope: A New Side-Channel Attack on Directional Branch Predictor" Also see CVE-2018-3639, CVE-2018-3640, CVE-2018-3693, updates from Oracle's Director of Security Assurance, and a paper from MIT's Computer Science and Artificial Intelligence Lab: NVD: CVE-2018-3639 NVD: CVE-2018-3640 NVD: CVE-2018-3693 Alert TA18-141A: Side-Channel Vulnerability Variants 3a and 4 Oracle's Eric Maurice on CVE-2018-3640 ("Spectre v3a") and CVE-2018-3639 ("Spectre v4") Oracle's Eric Maurice on CVE-2018-3693 or Bounds Check Bypass Store "Speculative Buffer Overflows: Attacks and Defenses" The SWAPGS side-channel attack appeared in August 2019. US-CERT on SWAPGS Threatpost on SWAPGS Phoronix on SWAPGS mitigation impact The post "x86 is a high-level language" from 2015 explains why these exploits are possible. The 1995 paper "The Intel 80x86 Processor Architecture: Pitfalls for Secure Systems" listed, among other security flaws, "Prefetching may fetch otherwise inaccessible instructions in virtual 8086 mode."
RSB Speculative Execution Attacks
The RSB or Return Stack Buffer is another specific target in speculative execution attacks. In July 2018, researchers at the University of California, Riverside announced what they called SpectreRSB, a "Spectre-class" attack on the RSB. The attack allows modification of the return address pointer, and recovery of data from other processes running on the same CPU. It can expose data outside an SGX or Intel Software Guard eXtensions enclave. Intel says that this is related to CVE-2017-5715, the Branch Target Injection vulnerability.
Research paper CVE-2017-5715
Side-Channel CPU Information Leakage
Hacking the Micro-Op Cache
Intel, AMD, and ARM processors translate complex instructions into simpler internal micro-operations that are cached in a dedicated on-chip structure called the micro-op cache. This paper describes exploiting it as a timing channel to leak supposedly secret information. I See Dead μops: Leaking Secrets via Intel/AMD Micro-Op Caches
The Hertzbleed Attack
Power side-channel attacks on Intel and AMD x86 CPUs can be turned into timing attacks that can be done remotely through the Hertzbleed attack. As the paper's authors say: In this paper, we show that on modern Intel (and AMD) x86 CPUs, power side-channel attacks can be turned into timing attacks that can be mounted without access to any power measurement interface. Our discovery is enabled by dynamic voltage and frequency scaling (DVFS). We find that, under certain circumstances, DVFS-induced variations in CPU frequency depend on the current power consumption (and hence, data) at the granularity of milliseconds. Making matters worse, these variations can be observed by a remote attacker, since frequency differences translate to wall time differences! They have demonstrated a chosen-plaintext attack against constant-time implementations of SIKE, a post-quantum key encapsulation mechanism, allowing full key extraction via remote timing. Hertzbleed paper Hertzbleed web site
Other CPU Design Flaws
Intel x86 Considered Harmful
The paper Intel x86 Considered Harmful is an excellent discussion of an architecture that is overly complex for what it provides. The complexity provides too many opportunities for vulnerabilities. The paper has a great description of the firmware, the peripherals, networking, USB, graphics, disk and storage controllers, audio devices, video devices, and other potential points of attack and defense. Can these be audited? That isn't practical.
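The essence of Hertzbleed and the other side channels above is that the work a computation performs depends on secret data. The following Python sketch makes that visible deterministically by counting comparisons instead of measuring time or frequency; it is a toy stand-in, not the Hertzbleed attack itself, which requires measuring DVFS-induced frequency shifts on real hardware. The secret string is of course invented.

```python
SECRET = "s3cr3t"

comparisons = 0

def leaky_equals(guess: str, secret: str) -> bool:
    """Early-exit comparison: the work done depends on how many leading
    characters match -- the data-dependent behavior side channels exploit."""
    global comparisons
    for g, s in zip(guess, secret):
        comparisons += 1
        if g != s:
            return False
    return len(guess) == len(secret)

def cost(guess: str) -> int:
    """Operation count stands in for the attacker's timing measurement."""
    global comparisons
    comparisons = 0
    leaky_equals(guess, SECRET)
    return comparisons

# An attacker who can observe the "cost" recovers the secret prefix,
# one character at a time:
print(cost("x3cr3t"), cost("sxcr3t"), cost("s3crxt"))
```

The standard software defense is a constant-time comparison such as Python's hmac.compare_digest, which does the same amount of work regardless of where the first mismatch occurs; Hertzbleed's point is that even "constant-time" code can leak through frequency scaling.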
AMD Zen Architecture CPU Bugs
In March 2018 CTS Labs announced what they called a "Severe Security Advisory" about vulnerabilities in AMD's Ryzen and EPYC product lines. They describe the problems as four classes of attack. Given administrative-level privileges on the system, an attacker could plant malware that would persist not only through reboots but also through reinstallations of the operating system. They only shared the information with AMD one day before releasing it, versus the typical disclosure window of at least a few months. And their paper has very little technical detail. On top of that, their website contains a disclaimer that CTS Labs may have "an economic interest in the performance of the securities of the companies whose products are the subject of our reports." The attacks, to the extent they're described, exploit vulnerabilities in AMD's Secure Processor and in a peripheral controller chipset sold by ASMedia. The debugging backdoors are built into the hardware, meaning they can't be fixed, only replaced. Security Advisory Website CTS Labs whitepaper
There were some interesting short articles about Intel Core 2 bugs, see here and here for the articles, and also see the background on Intel's quiet patch release. Affected CPUs were the Core 2 Duo E4000/E6000, Core 2 Quad Q6600, Core 2 Xtreme QX6800, QX6700, and QX6800. Remember the Pentium CPUs that were bad at floating-point division? For some Pentium CPUs, a block of machine code 0xF00F will just plain halt the processor.
Intel x86 Processor Rosenbridge Backdoor
Christopher Domas demonstrated an x86 processor backdoor at the 2018 Black Hat USA and DEF CON meetings. It allows Ring 3 (user space) code to circumvent processor restrictions and read and write Ring 0 (kernel) memory. The backdoor is enabled by default on some processors. This "Rosenbridge" backdoor is a small CPU core embedded alongside the main x86 core.
It is controlled by one of over 1,300 undocumented model-specific registers (or MSRs) built into the processor. This secondary core has its own independent architecture, but shares parts of the instruction pipeline with the main core. The vulnerable processors are used in various older desktop, laptop, and embedded systems. Researcher's explanation and code repository Dark Reading story
Lazy FP State Restore CPU Bug (CVE-2018-3665)
This bug allows floating-point register content to be leaked from another process. Unfortunately, those registers are also used for cryptographic operations. The bug is present on Intel Core based microprocessors from the Sandy Bridge architecture up to the bug announcement of June 2018. Register content can be saved and restored when switching from one process to another, but this takes time. This can be done "lazily" (that is, when needed) as a performance optimization. This bug was fixed in the Linux kernel at release 4.9 (Dec 2016) by defaulting to "eager" (and safe) floating-point restores on all x86-64 processors from around 2012 and later. Windows 10 and Server 2016 are believed to be safe, although Server 2008 needs a patch. Current OpenBSD and DragonFly BSD are safe, and FreeBSD has a fix. The FreeBSD kernel discussion (see the thread "SSE in libthr") shows how this type of problem arises. Someone finds a clever hack that improves performance by a few percent in a specific situation. That goes into place despite it not helping the majority of cases. Then someone finally notices that the change can be abused. Red Hat back-ports fixes into the Linux kernel and applications. Especially with the kernel, the release number doesn't mean what you would assume it means. It would take a lot of work to figure out what all they have changed. RHEL 7 already had the fixes in place despite apparently having a kernel that describes itself as much older than 4.9.
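The lazy-restore optimization and its leak window can be modeled in a few lines. This toy Python model is a deliberate simplification (the real CVE-2018-3665 leak required speculative execution to read the stale registers before the deferred-restore trap fired), but it shows the core hazard: a context switch defers the register swap, so the previous process's floating-point values remain physically present until the next process touches the FPU.

```python
class LazyFPU:
    """Toy model of lazy state restore: FPU registers are only swapped
    when the incoming process first uses them, so stale values from the
    previous process linger in hardware."""
    def __init__(self):
        self.hw_regs = [0.0] * 4   # physical FPU registers
        self.saved = {}            # per-process saved state
        self.owner = None          # whose state is live in hw_regs
        self.current = None        # which process the scheduler picked

    def switch_to(self, pid):
        # Lazy: do NOT save/restore yet, just note the context switch.
        self.current = pid

    def fpu_use(self, pid, values=None):
        # First FPU touch after a switch triggers the deferred swap.
        if self.owner != pid:
            if self.owner is not None:
                self.saved[self.owner] = list(self.hw_regs)
            self.hw_regs = self.saved.get(pid, [0.0] * 4)
            self.owner = pid
        if values:
            self.hw_regs = list(values)
        return list(self.hw_regs)

fpu = LazyFPU()
fpu.fpu_use("crypto", [3.14, 2.71, 1.41, 1.61])   # secret-bearing values
fpu.switch_to("attacker")
# Between the switch and the attacker's first legitimate FPU access,
# the crypto process's values are still physically in hw_regs -- the
# window that speculative reads exploited.
print(fpu.hw_regs[0])
```

Eager restore closes the window by performing the swap at every context switch, which is exactly the fix the kernels adopted.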
NVD: CVE-2018-3665 Intel advisory INTEL-SA-00145 US-CERT alert ZDNet story
Other Design Flaws
Cisco Trust Anchor Module and Thangrycat
Cisco Trust Anchor module (or TAm) is a proprietary hardware security module used in a wide range of products, including enterprise routers, switches, and firewalls. TAm is the root of trust that underpins all other Cisco security and trustworthy computing mechanisms. The Thangrycat attack can make persistent modifications to the Trust Anchor module through FPGA bitstream modification. This defeats the secure boot process and invalidates Cisco's chain of trust at its root. While the flaws are in hardware, they can be exploited remotely.
Solid-State Drives Failing to Encrypt
Academic researchers published a draft paper reporting that a wide range of solid-state drives fail to encrypt data in any meaningful way. Microsoft's BitLocker turns off software file-system encryption in the operating system if the drive claims to handle it. So, many users who had enabled BitLocker had nothing but a false sense of security. Within a few days Microsoft had published a workaround and US-CERT had issued a warning. "Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)" Microsoft: ADV180028 | Guidance for configuring BitLocker to enforce software encryption US-CERT: Self-Encrypting Solid-State Drive Vulnerabilities
IPMI or Intelligent Platform Management Interface is a protocol to communicate with a server via its BMC or baseboard management controller. The concept is called out-of-band management, and it's dangerously powerful. Serious security problems come from the Intel specification itself, from how the protocol and BMC are implemented by the vendor, and from how customers use it. For example, the IPMI standard requires passwords to be stored in plaintext, or to be recoverable on demand. If that isn't bad enough, vendors stuff in more attractively powerful features that open further security holes.
See Dan Farmer's research for many details. IPMI was developed by Intel, Dell, HP, and others; server and firmware vendors add features and change the name: Dell calls their version iDRAC, HP iLO, IBM IMM, and so on. Brand names include:
- Integrated Lights Out Manager or ILOM (Oracle)
- Integrated Lights Out or iLO (HP)
- Integrated Dell Remote Access Card or iDRAC (Dell)
- Integrated Management Module or IMM2 (IBM)
- IMM (Lenovo)
- Integrated Remote Management Controller or iRMC (Fujitsu)
- SuperMicro Intelligent Management (SuperMicro)
You can monitor physical health and status (temperature, fan speeds, memory and disk errors, etc.), and you can reboot the server, change firmware settings, attach optical media images, and thereby do a rescue boot or even install a new operating system. An embedded server called the BMC or Baseboard Management Controller is installed on server motherboards. The BMC typically runs Linux on its own small CPU with its own memory and storage, and runs independently of the operating system or hypervisor you think of as being installed directly on the system. IPMI and the BMC provide networked access to the hardware even when the system is powered down. The BMC controls the server at a very low level, mostly invisible to the operating system, and it can support HTTP/HTTPS and other application protocols, operating even when the main power supply is turned off.
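The out-of-band capabilities above are typically exercised with the standard ipmitool utility. This small Python sketch just builds the command lines rather than running them; the hostname and credentials are hypothetical, and note that passing the password with -P exposes it in the process listing, which echoes the article's point about IPMI's careless handling of credentials.

```python
import shlex

def ipmi_command(host: str, user: str, password: str, *action: str):
    """Build an ipmitool invocation for out-of-band access over the
    network ('lanplus' is the RMCP+ interface from IPMI v2.0)."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *action]

# Query the chassis power state and sensor readings of a hypothetical BMC.
status_cmd = ipmi_command("bmc.example.com", "admin", "secret",
                          "chassis", "status")
sensor_cmd = ipmi_command("bmc.example.com", "admin", "secret", "sensor")
print(shlex.join(status_cmd))
```

In practice you would hand such a list to subprocess.run; the point is that anyone on the management network with these credentials can read sensors, cycle power, or attach boot media, with the host operating system none the wiser.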
Papers and tools on IPMI security problems Bruce Schneier: "The Eavesdropping System in Your Computer" Internet Storm Center/SANS report: "IPMI: Hacking servers that are turned 'off'" Ars Technica: "'Bloodsucking leech' puts 100,000 servers at risk of potent attacks" USENIX report: "Illuminating the Security Issues Surrounding Lights-Out Server Management" Rapid7: "A Penetration Tester's Guide to IPMI and BMCs" ITworld: IPMI: The most dangerous protocol you've never heard of There are several articles by Dan Farmer: IPMI security IPMI++ security best practices "Sold Down the River" on the state of IPMI vulnerability exposures "IPMI: Freight Train to Hell" on flaws in IPMI and BMC
Intel has added something similar to their Core processors with Active Management Technology or AMT, which gives you keyboard-video-mouse remote access to a system regardless of the computer's state. The default password is either [...]. HowToGeek on AMT Intel on AMT
Infineon Technologies TPMs Generating Weak RSA Keys
If the hardware won't even do what it's supposed to, there are big problems! Infineon Technologies AG builds secure hardware chips, and built their RSA library 1.02.013 into TPM chips starting in 2012. Unfortunately, that library, and thus the chips, generate RSA keys that can be factored in practical attacks. The vulnerable hardware had NIST FIPS 140-2 and CC EAL 5+ certification. The vulnerability was discovered in early 2017 and disclosed to the vendor, then announced 8 months later as CVE-2017-15361. Also see the research group's discovery report and their paper with the details: The Return of Coppersmith's Attack: Practical Factorization of Widely Used RSA Moduli. They looked at the amount of CPU time required to factor the faulty keys, and converted that to US dollar cost on the Amazon AWS c4 computation platform. A 512-bit RSA key would cost just $0.06 to break, 1024-bit RSA keys $40-80, and 2048-bit keys $20,000-40,000.
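The broader lesson, that RSA keys are only as strong as the prime generation behind them, can be demonstrated with a classic weak-key failure. This sketch is not the ROCA/Coppersmith attack from the paper above (that exploits the specific structure of Infineon's primes); it shows the simpler shared-prime case, where a flawed generator that reuses a prime across keys lets a pairwise GCD recover a factor instantly. The tiny primes are for illustration only.

```python
from math import gcd

# Three toy "RSA moduli" built from a tiny prime pool; if a flawed
# generator reuses a prime across keys, a pairwise GCD instantly
# recovers it -- no heavy factoring needed.
p, q, r, s = 10007, 10009, 10037, 10039
n1, n2, n3 = p * q, p * r, q * s    # p shared by n1/n2, q by n1/n3

def shared_factor(a: int, b: int):
    """Return a nontrivial common factor of two moduli, if one exists."""
    g = gcd(a, b)
    return g if 1 < g < min(a, b) else None

print(shared_factor(n1, n2), shared_factor(n1, n3), shared_factor(n2, n3))
```

Researchers have run exactly this batch-GCD scan across millions of real-world keys harvested from the internet; GCD is so cheap that key size offers no protection once primes are shared.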
The chips have been built into products sold by a wide range of vendors. Two weeks later, Daniel J. Bernstein and Tanja Lange announced even faster attacks. Ars Technica on initial announcement Estonia announces national ID card vulnerability Estonia disables 760,000 vulnerable ID cards Cybernetica Case Study: Solving the Estonian ID-card Case Infineon advisory Google advisory Microsoft advisory Faster attacks
Subverted Hardware and Firmware
If your hardware or firmware has been replaced or modified, there is no reason to expect it to behave as you would expect, or hope.
NSA ANT Attacks on Hardware
An article in Der Spiegel describes a 50-page internal "product catalog" from an NSA division called ANT, listing hardware and software (called "implants" in NSA terminology) which can penetrate systems to monitor, modify, and extract information. These include modified cables allowing "TAO personnel to see what is displayed on the targeted monitor", USB plugs and cables that covertly communicate over radio links (such as the COTTONMOUTH device), replacement USB and Ethernet ports with covert data capture and communications built in, replacement chips and daughter cards that exploit the motherboard BIOS and use System Management Mode to reload themselves at every boot, up to active GSM base stations that mimic legitimate mobile phone towers and thereby monitor and even control nearby mobile phones. ANT also attacks the firmware in disk drives manufactured by Western Digital, Seagate, Maxtor, and Samsung, and modifies hardware and/or firmware in Cisco, Juniper, and Huawei routers and firewalls. Cryptome has the full NSA ANT catalog available for download. A Wired article also discusses the catalog. The router and firewall backdoors work by subverting the hardware's boot ROM, re-installing themselves every time the system starts and running below the operating system itself. HEADWATER is a persistent backdoor software implant for selected Huawei routers.
SCHOOLMONTANA, SIERRAMONTANA, and STUCCOMONTANA are persistent backdoor software implants for all modern versions of JUNOS, a version of FreeBSD customized by Juniper. They are for J-Series, M-Series, and T-Series routers, respectively. FEEDTROUGH is a persistence technique for two software implants, BANANAGLEE and ZESTYLEAK, used against Juniper Netscreen firewalls. GOURMETTROUGH and SOUFFLETROUGH are used against other Juniper firewalls including the SSG 300 and SSG 500. JETPLOW is a similar product for Cisco PIX and ASA firewalls; HALLUXWATER is for Huawei Eudemon firewalls. HOWLERMONKEY variants are RF transceivers to exfiltrate data from air-gapped systems. Other ANT products are miniaturized digital cores packaged in multi-chip modules, basically miniaturized computers running full operating systems like the Raspberry Pi but concealed beneath a chip on the motherboard.
Chinese Implants? Supposedly
On 4 October 2018 Bloomberg Businessweek published The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies. The article quickly came under heavy criticism, with little effective support from Bloomberg. The article said that Amazon and Apple had found in 2015 that motherboards from Supermicro, one of the leading suppliers, included "a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design." This chip then sent data to servers in China. Bloomberg cited "six current and former national security officials" backing up the story. The article was accompanied by "artist's conception" artwork that many misinterpreted as actual photographs. Or that Bloomberg wanted us to misinterpret as photographs. There is some obvious nonsense in the Bloomberg story. Their example of how it might work speculates about "the Linux operating system" and confuses the kernel, the operating system itself, with user-space programs that check user passwords.
The article has been made friendly for non-technical readers, with "artist's conceptions" of what might have been. The result is that you don't know whether they got the technical parts wrong while simplifying them for non-technical readers, or whether they were simply wrong to start with. Amazon, Apple, and Supermicro published detailed rebuttals of the Bloomberg story. The National Cyber Security Centre, a unit of GCHQ, soon agreed with Apple and Amazon that Bloomberg was wrong. Officials from the U.S. Department of Homeland Security, the FBI, and the Office of the Director of National Intelligence either discounted the report's validity or said they had no knowledge of the attack as Bloomberg described it. The NSA issued public statements saying they were "befuddled" by Bloomberg's report. Here is a piece from SecuringHardware.com: Do I Have a Hardware Implant? Here's a good summary from the Grugq. See this excellent piece on counterfeit electronics (from 2011, when there had just been a scare about the U.S. DoD supply chain). It explains the many forms of counterfeit, remarked, refurbished, and simply re-used parts. You don't need elaborate sinister conspiracies to explain hardware mysteries. Before long, The Atlantic had an article about how even if the story is wrong, at least we can learn a lesson about supply chain trust. IEEE published an article about automated hardware inspection. As several pointed out, Bloomberg has a track record of sensational but incorrect stories about hacking and cyberwar. The same reporters claimed in 2014 that the US government knew about the Heartbleed bug, and then refused to correct the story, follow up, or even comment. Bloomberg published a sensational cyberwar story about a Turkish pipeline explosion in 2014 that technical experts and the intelligence community said was wrong, but Bloomberg never retracted it.
Bloomberg News published a new story in February 2021. Now they said that 14 former law enforcement and intelligence officials vouched for the story that China's exploitation of Supermicro products "had been under federal scrutiny for much of the past decade."

Bloomberg News Feb 2021 story
Bruce Schneier's overview
Atlantic Council report on the insecure software supply chain

There were still significant oddities and clumsiness in the 2021 article. They quote someone as describing US military laptops that "had a chip encrypted on the motherboard." Clearly they should have said "embedded", so why did Bloomberg either misquote them or not realize the need to add "[sic]" to the (mis)quote? Later they take pains to explain what BIOS is, when UEFI firmware almost completely replaced BIOS around 2010–2012. They offer as background support conversations they had with a former US Navy SEAL who co-founded a venture capital firm, maybe of interest to their readers but not a specialist in the fields of microelectronic manufacturing or cybersecurity. Plus, they published their report just a month after a group of QAnon believers had stormed the US Capitol, at the end of four years in which an increasing number of people who believe the QAnon nonsense and other conspiracy theories were put into dangerously influential positions in the US Government. At that point, "recently retired US government officials" wasn't a terribly impressive or convincing source of background. NSA said that it still stood by its comments about being "befuddled" by Bloomberg's 2018 report. Maybe that was the literal truth, maybe that was part of a cover story. Who knows? Supply chain insecurity is a very important problem. I don't know how much this has to do with it. Modify the main board firmware (BIOS, UEFI): An Intel security advisory describes a vulnerability in Intel's SPI Flash memory, a component of several Intel CPU series.
An attacker could use this vulnerability to block BIOS/UEFI updates or selectively erase parts of the firmware.

CVE-2017-5703
INTEL-SA-00087 Security Advisory

There was concern back in 2006 about an ACPI/BIOS based attack. The Rakshasa software collection can reflash firmware: "Black Hat: Researcher Demonstrates Hardware Backdoor", Dark Reading July 2012. "Researcher creates proof-of-concept malware that infects BIOS, network cards". "New Malware Can Bypass BIOS Security", which can fool a host's Trusted Platform Module into thinking that the BIOS firmware is clean when it isn't, Dark Reading May 2013. "Research Into BIOS Attacks Underscores Their Danger", Dark Reading Nov 2013. The Computrace / Absolute Track / LoJack system seems like it's either a rootkit or a backdoor. Read the CoreSecurity review and see the related Black Hat presentation. Other UEFI hacking resources include:

Hacking the Extensible Firmware Interface (UEFI)
Thunderstrike: EFI bootkits for Apple MacBooks
Attacks on UEFI security
Speed Racer: Exploiting an Intel Flash Protection Race Condition
Attacking UEFI Boot Script
Attacking UEFI Boot Script Table Vulnerability
How your Mac firmware security is completely broken

badBIOS — Real or Not?

The badBIOS story appeared in October 2013. Dragos Ruiu described very advanced malware that infected both Mac and PC hardware, reflashing the BIOS, UEFI, or EFI firmware, spreading via ultrasound or signals from software-defined radios, and traveling in USB memory sticks that were merely plugged in but never mounted.

Compilation of Ruiu's observations
Ars Technica 31 Oct 2013 "Meet 'badBIOS,' the mysterious Mac and PC malware that jumps air gaps"
InfoWorld 1 Nov 2013 "BadBIOS: Next-gen malware or digital myth?"
Ars Technica 5 Nov 2013 "Researcher skepticism grows over badBIOS malware claims"
InfoWorld 12 Nov 2013 "4 reasons BadBIOS isn't real" includes this analysis:

People following this story fall into a few different camps. Many believe everything he says — or at least most of it — is true. Others think he's perpetrating a huge social engineering experiment, to see what he can get the world and the media to swallow. A third camp believes he's well-intentioned, but misguided due to security paranoia nurtured through the years. A few even think we're witnessing the public mental breakdown of a beloved figure. They point out that paranoid schizophrenics often claim to be targeted by hidden communication no one else can hear. To be honest, I've found myself in all these camps since the story broke, though I'm leaning toward those who think Ruiu is well-intentioned, but perhaps seeing too much of what he wants to see.

Is Your Hardware Really What You Think It Is?

There have been stories of counterfeit hardware, from Cisco modules down to integrated circuits, for some time. The first thing I noticed explaining just how these parts get into the parts supply stream was this article: "How counterfeit, defective computer components from China are getting into U.S. warplanes." Given the horror stories it contains of entirely unmonitored suppliers chosen for U.S. military parts based largely if not entirely on their status as "disadvantaged", "woman and minority owned", and so on, I can see why the government didn't explain the details immediately....

Even If You Have AMD Hardware, Is It Really What You Thought It Was?

All AMD processors made from 2000 to 2010 included a secret debugging feature well outside the standard x86 architecture definition. All processors starting with the Athlon XP have a firmware-controlled feature that can put the CPU into debugging mode.
See the article in The Register for an overview, the announcement by the discoverer for far more details, and this list of undocumented Machine Specific Registers in AMD processors.

Modify the processing hardware: University of Illinois researchers exploited a system by modifying its processing hardware. With Linux running on a programmable LEON processor, based on Sun's Sparc design, they changed 1,341 of the over 1 million logic gates. A carefully crafted network packet injected the malicious firmware, and the attacker could then log in as a legitimate user. Note that this would require a processor programmed with malicious hooks — that seems far-fetched, but the US DOD warned of this very attack in February 2005, because a shift toward overseas integrated circuit manufacturing could present a security problem. This was reported at the Usenix Workshop on Large-Scale Exploits and Emergent Threats in April 2008, and described in this IDG News article. See "Stealthy Dopant-Level Hardware Trojans", a paper discussing how to tamper with logic gates by changing the doping of one transistor. This sabotage would be undetectable by optical inspection or functional testing.

Hardware / Firmware Exploits

NSA Hardware and Firmware Guidance

The NSA maintains an official GitHub account. Among other things, they provide guidance on these classes of vulnerabilities.

NSA Hardware and Firmware Security Guidance

MoonBounce UEFI-Based Rootkit

Kaspersky reported on the MoonBounce UEFI firmware-level compromise in January 2022. It's placed in SPI flash memory on the motherboard instead of touching the disk, making it harder to detect and capable of persisting through disk replacement. They attributed it to APT41, a threat actor that has been widely reported as Chinese-speaking.
Kaspersky analysis of MoonBounce

The First UEFI-Based Rootkit

Researchers at ESET announced in late December 2018 that they had discovered the first example of a UEFI-based rootkit seen in the wild. The group known variously as APT28, Sofacy, Fancy Bear, and Sednit had recently started using this. The researchers call the rootkit LoJax, as it's based on Absolute Software's LoJack recovery software. It takes advantage of firmware vendors allowing remote re-flashing of firmware.

Detailed write-up
Brief Threatpost story

Supermicro Firmware Vulnerabilities

Research on modifying firmware, providing ultimate access:
Firmware Vulnerabilities in Supermicro Systems

Reverse Engineering Firmware with Radare

Radare is a portable reverse-engineering framework and tool set that runs on Linux, OSX, Android, Windows, Solaris, and Haiku. See this presentation for details on using the radare2 reverse engineering framework and toolset. Firmware attacks can take advantage of CPU microcode, Intel and AMD controller chipsets, PCIe chips, Intel Ethernet controllers, magnetic and solid-state disk controllers, controllers in mice and keyboards and touchpads, controllers in webcams, and PCI/PCIe option ROMs.

Subvert the Intel Management Engine

The Intel Management Engine is a microcontroller that handles data transfer between the processor and peripherals.

EFF on Intel Management Engine
Background on IME

In May 2017 researchers discovered a remote code execution vulnerability (tracked as CVE-2017-5689) in this controller. "A critical RCE flaw in Intel Management Engine affects Intel enterprise PCs, dates back 9 years" CVE-2017-5689 In August 2017 they discovered an undocumented setting to disable the controller. "Experts found an undocumented Kill Switch in Intel Management Engine" Later in 2017 we learned that the Management Engine runs MINIX with a full network stack and a web server. This is down at Ring –3. That is, "minus 3", three levels below what your kernel can see.
Google presentation
Dmitry Sklyarov analysis
Network World story
Techdirt story
Intel's advisory

Attack the Intel SGX Enclave: Intel's SGX or Software Guard Extensions was intended to provide a secure enclave protected even from the Management Engine. Another term for this is a Trusted Execution Environment. Well, it wasn't secure after all. Intel refers to the vulnerability as L1TF or Level 1 Terminal Fault. See the Foreshadow speculative execution attack. The original version extracts data from the SGX enclave. The next-generation version can extract any data in the L1 cache, including memory in use by virtual machines, hypervisors, the operating system kernel, and System Management Mode (or SMM) memory.

Foreshadow
Original Foreshadow paper
Foreshadow-NG technical report
CVE-2018-3615 (vs SGX)
CVE-2018-3620 (vs OS kernel & SMM)
CVE-2018-3646 (vs VMs)
Intel on L1TF
US-CERT on L1TF

The SGX attacks keep appearing:

Foreshadow
Crosstalk description
Crosstalk paper
Intel on SGX attacks

Attack the Apple T2 security chip: Apple's T2 security chip was launched in 2017. Within two to three years it was being exploited by the same method used to jailbreak older iPhones. This could disable macOS security features including System Integrity Protection and Secure Boot. It might also be exploited to obtain FileVault encryption keys and decrypt user data.

Ars Technica on the flaw
Pangu Team vulnerability disclosure

Modify or Replace the Volume Boot Record: FireEye identified malware modifying the Volume Boot Record, hijacking the system boot process. FireEye calls the group "FIN1", which they speculate is located in Russia, or at least largely speaks Russian. FireEye calls the specific software "BOOTRASH"; it's part of an overall system that its developers call "Nemesis." FIN1 is targeting payment card data for financial gain. Ars Technica also covered the story with comparisons to other so-called bootkits.
Update Intel and AMD Processor Microcode: Ben Hawkes' Notes on Intel Microcode Updates discusses how updateable CPU microcode exists to work around hardware and firmware bugs. There is speculation that malicious changes might be able to move sensitive data to a known location, but Hawkes' work shows that updates carry RSA digital signatures, using SHA-1 on older processor models and SHA-2-256 on newer processors.

Modify USB firmware: The firmware is proprietary, so we have no "known good" for comparison. Plus, as the U.S. Government has demonstrated, they have no interest in closing this particular security hole. BadUSB is an attack on the firmware controllers in typical USB devices. Malware on the system can subvert an attached USB device, and a hostile USB device can attack a system into which it is plugged. Practical BadUSB attack software is available.

Modify RAM contents while running: The Rowhammer vulnerability in DRAM devices is based on repeatedly accessing a row in high-density DRAM devices and flipping bits in adjacent rows. A Google team has demonstrated and documented using this to gain kernel privileges. The good news is that it's a much larger challenge to flip the bits in a constructive way that provides access for the attacker.

Turn off the NX bit while running: The NX bit, also called the XD bit, is used by CPUs to enforce memory segregation into instructions versus data. Intel calls it XD for eXecute Disable; AMD calls it Enhanced Virus Protection; ARM processors call it XN for eXecute Never. This feature is enabled as a BIOS setting, and so it would appear to be down in the hardware where neither applications nor the operating system can reach it. But... The NX bit is simply a hardware feature that may or may not be available. Even if available, the operating system may not use it. For example, in Windows, Data Execution Prevention or DEP is Microsoft's name for support of this technology in the operating system.
This page explains how to turn it on for specific programs or for all programs. See the Wikipedia page on the NX bit for detailed descriptions of the technology and its support on various combinations of operating systems and processors.

Modify the TPM (Trusted Platform Module) chip: In February 2010, Christopher Tarnovsky announced a successful hardware exploit of an Infineon TPM chip.

Background: What is TPM?
Short overview at hackaday.com
Associated Press story
More technical article at Dark Reading, with links to more detail

Freeze the memory: Princeton researchers reported cold boot attacks — literally cold boot. The problem — sensitive information such as passwords used for file system encryption, and some file contents themselves, may remain in RAM for surprising amounts of time, especially if the RAM is chilled. See the original report from Princeton and discussion and news coverage:

Original report from Princeton
New York Times 22 Feb 2008
Bruce Schneier's blog
Wired magazine, February 2008

Break in through the Firewire port: Winlockpwn is a tool in which the attacker connects a Linux machine to the Firewire port. The attacker gets full read-write access to memory, and the tool deactivates Windows' password protection residing in local memory. Steal passwords, drop malware on the system, and so on. Similar hacks have been demonstrated against Linux and macOS. See the Dark Reading story.

Break in through the network interface hardware: There's been some work on attacking the firmware on network interface cards, some of which focuses on permanently damaging the card. But more interesting work looks at attacking the NICs on a firewall so they do PCI-to-PCI data transfers, moving information down at a hardware level where firewalls don't look. There is speculation this might allow reading the disk device through its PCI-based controller. See this discussion, referencing an excerpt from the Robust Open Source mailing list.
Kristian Kielhofner's Packets of Death describes vulnerabilities in Intel's 82574L Ethernet controller: "death packets" containing the correct pattern can shut down an interface. The interface turns off — the link lights on the card and the switch go out, and only a power cycle can turn it back on. The "kill code" data pattern can be in the application layer payload, so a hostile HTTP server could put the pattern in an HTTP 200 response and shut down client machines behind a firewall.

The network interface may have its own processor powerful enough to run an SSH server. See these examples:

What if you can't trust your network card? (paper)
Can you still trust your network card? (presentation)
CVE-2010-0104 Network Interface Card SSH Rootkit
Project Maux Mk.II
Closer to Metal: Reverse engineering the Broadcom NetExtreme's firmware

Exploit the disk controllers: Storage devices, both rotating magnetic disk and solid-state drives, have their own controllers. That is, the SATA or SCSI or whatever interface has its own processor with firmware, but the storage device itself also has one. The SATA/SCSI/etc. interface is on the motherboard or on a drive controller card, while the storage device's own controller is inside the small box containing the drive itself. Most of these are ARM and MIPS controllers. Some of the firmware is stored in an embedded flash chip; the rest is on hidden sectors of the disk.

Seagate: Exploring the impact of a hard drive backdoor
Western Digital: Hard disk hacking

Exploit the SD/MMC memory cards: They include 8051 and H8 processors.
The Exploration and Exploitation of an SD Memory Card (video)
The Exploration and Exploitation of an SD Memory Card (slides)

Exploit the processor and firmware in the mouse and keyboard: The Logitech G600 mouse has an AVR-architecture ATmega32u2 controller.

Mouse Trap: Exploiting Firmware Updates in USB Peripherals (paper and slides)

The KBT Poker II keyboard has an ARM Cortex-M0 CPU with reflashable firmware:

Manufacturer's announcement of firmware release
Finding the executable code in the firmware
Firmware at GitHub

One small utility dumps the flash RAM of a laptop's Embedded/Environmental Controller or EC, typically an 8-bit or 16-bit processor.

Table of EC details, links to much more

Synaptics TouchPads use an AVR or PIC architecture.

Synaptics TouchPad Interfacing Guide
Synaptics RM13 Interfacing Guide
Synaptics RM14 Specification
Synaptics PS/2 TouchPad Interfacing Guide

Exploit the webcam: Reverse engineer and modify the firmware run by the processor in the webcam.

Modify Intel AMT firmware: Intel Active Management Technology or AMT is part of the Intel Management Engine, built into systems with Intel vPro technology. It's intended for remote out-of-band management.

Video explaining the exploit: Persistent, Stealthy, Remote-controlled Dedicated Hardware Malware
Igor Skochinsky's presentation: Intel ME Secrets: Hidden code in your chipset and how to discover what exactly it does

Exploit PCI / PCIe Option ROMs: PCI / PCIe expansion cards can have their own firmware, and it might be exploited:

BIOS Disassembly Ninjutsu Uncovered
Building a "Kernel" in PCI Expansion ROM
Option ROMs: A Hidden (But Privileged) World

Broadcom Wi-Fi Firmware Vulnerability is a heap overflow on Broadcom Wi-Fi chips, triggered by a packet with a WME (Quality-of-Service) information element with a malformed length.
Analysis
Proof of concept
Detailed description

Intel Atom "System on Chip" (or SoC) Bugs

In early 2017 Cisco issued an advisory warning about the failures of a clock signal component manufactured by one of their suppliers. They said:

Although the Cisco products with this component are currently performing normally, we expect product failures to increase over the years, beginning after the unit has been in operation for approximately 18 months. Although the issue may begin to occur around 18 months in operation, we don't expect a noticeable increase in failures until year three of runtime. Once the component has failed, the system will stop functioning, will not boot, and is not recoverable.

Others reported the problem on Synology storage devices and various products from Dell, HP, NEC, NetGear, SuperMicro, and other manufacturers. The problem seems to come from the Intel Atom C2000 SoC, as indicated by Intel's Specification Update, which stated that its clock may stop functioning and the SoC will no longer boot. The parts started shipping in 2013.

Cisco advisory
The Register
Router Jockey
Intel Specification Update
Synology community forum
ClockGate 2017 on cantechit

Virtualization / Emulation Bugs

VirtualBox is a virtualization product that originally came from Innotek, which was purchased by Sun, which was purchased by Oracle. See this message from the OpenBSD project leader reporting that CPU registers become corrupted under VirtualBox: "We don't know how other operating system products continue running when the userland ecx register gets clobbered on a return from a page fault, but at least people should be aware that there is likely some security risk from running that VM. That VM does not emulate the x86 correctly." See my page on Violating Virtualization Security for more information on Type 1 and Type 2 virtualization vulnerabilities, VM escape, the use of malicious hypervisors, and more.
How Not to Respond to Intrusions

The IT security department of the Economic Development Administration within the Department of Commerce ridiculously overreacted, spending over $2.7 million to destroy $170,000 worth of hardware including desktop computers, cameras, printers, keyboards, and even mice after they were informed of a potential malware infection. They would have destroyed even more hardware, but they ran out of money to continue the idiotic operation. See the Network World overview and the official Office of the Inspector General report for further details.
Malware – and all of its various forms, including ransomware – has grown increasingly stealthy and sophisticated in recent years. Also on the rise: its ability to fly under cybersecurity software's radar. One of the primary reasons detecting and stamping out malware is so difficult is the rise of an attack method called living off the land (LotL). Despite conjuring up idyllic images of urban farming or sustainability, the term refers to a group of techniques that typically execute in shell code or scripts running in memory. Attackers who "live off the land" make use of a system's own tools and utilities to conduct malicious activity. With these attacks, which don't use easily detectable malicious files, an attacker can lurk within a computer or network and avoid discovery by security tools. Even if an attack is discovered, the binaries used are exceptionally difficult to eradicate. As a result, a LotL attack is particularly risky for victims.

Living Off the Land: A Brief History

The concept of using fileless malware, or malware that relies on legitimate programs to attack, first appeared around the start of the current century. Early examples of this approach include malware with names like Frodo, Code Red, and SQL Slammer Worm. However, these payloads were more of a nuisance than a real threat. Then, in 2012, a banking Trojan named Lurk appeared. Although it wasn't terribly sophisticated, it demonstrated LotL's potential. In 2013, security researchers Christopher Campbell and Matt Graeber coined the LotL term to describe malware that hides within a system and exploits legitimate tools and utilities to cause damage. Over the past few years, the scope and sophistication of these attacks has grown. In fact, as security firms have become better at identifying and blacklisting malicious files, fileless attacks have moved into the mainstream.

How Does Living Off the Land Work?
In a LotL attack, adversaries take advantage of legitimate tools and utilities within a system. These might include PowerShell scripts, Visual Basic scripts, WMI, PSExec, and Mimikatz. The attack exploits the functionality of the system and hijacks it for nefarious purposes. It may include tactics like DLL hijacking, hiding payloads, process dumping, downloading files, bypassing UAC, keylogging, code compiling, log evasion, code execution, and persistence. Cybercriminals use different methods and unleash different types of malware that fall into the general category of LotL. In many cases, they tap tools such as Poshspy, Powruner, and Astaroth that take advantage of LOLBins and fileless techniques to evade detection. Most attacks involve Windows binaries that mask malicious activities; however, LotL attacks can also affect macOS, Linux, Android, and cloud services. The reason this approach works so well is that resources such as PowerShell and Windows Scripting Host (WScript.exe) offer capabilities that far exceed the needs of most organizations — and many of these features aren't switched off or removed when they're not required by an organization. Overall, more than 100 Windows binary tools represent a serious risk, according to the LOLBAS project on GitHub.

What Do LotL Attacks Look Like?

Once attackers have subverted legitimate tools, such as PowerShell, they're able to tap other legitimate processes and code, including built-in scripting languages such as Perl, Python, and C++. For example, an attacker might create a script that includes a list of targeted machines and, together with a PSExec account with executive privileges, copy and execute malware onto peer machines. Another possible method of attack is leveraging a logon and logoff script via a Group Policy Object (GPO) or abusing the Windows Management Interface (WMI) to mass-distribute ransomware inside the network.
A similar approach uses malware to inject malicious code into a trusted running process like SVCHOST.EXE, or uses the Windows RUNDLL32.EXE application. This makes it possible to encrypt documents from a trusted process, cybersecurity firm Sophos reports. This tactic can evade some anti-ransomware programs that do not monitor, or are configured to ignore, encryption activity by default Windows applications. Ransomware may also run from an NTFS Alternate Data Stream (ADS) to hide from both victim users and endpoint protection software, cybersecurity firm Malwarebytes Labs points out. Oftentimes, the entire attack takes place within a few hours or during the night, when staff pay less attention to IT systems. Once the malware has encrypted files, the victim winds up with a locked screen and a ransom note. These attacks often appear to come out of nowhere because the actual file encryption is performed within a trusted Powershell.exe component. As a result, endpoint protection software may not detect the process because it appears to be legitimate, according to Sophos. One of the most widely publicized LotL attacks occurred in 2017, when the so-called Petya malware appeared. It initially infected a software accounting program in Ukraine and then spread across companies. More recently, the SolarWinds attack, a.k.a. SUNBURST, used LotL and other methods to plant malware in one of the firm's software updates.

Reducing Risk Is Critical

There's no simple way to avoid the risk of a LotL attack. It's also difficult to determine who is initiating the attack because of the stealthy nature of the malware. In general, the best defense is to ensure that unneeded components are switched off or removed from systems.
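Defenders often hunt for the abuses described above by inspecting process command lines for known LOLBin-plus-argument combinations. Below is a minimal sketch of that idea in Python; the binaries and regex patterns are illustrative assumptions for the example, not a vetted detection ruleset.

```python
import re

# Hypothetical heuristic: flag command lines that pair a common LOLBin
# with an argument pattern often associated with living-off-the-land abuse.
SUSPICIOUS_PATTERNS = {
    "powershell.exe": [r"-enc(odedcommand)?\b", r"downloadstring", r"-w(indowstyle)?\s+hidden"],
    "rundll32.exe":   [r"javascript:", r"\.dll,\s*\w+"],
    "wmic.exe":       [r"process\s+call\s+create"],
    "regsvr32.exe":   [r"/i:https?://", r"scrobj\.dll"],
}

def flag_command_line(cmdline: str) -> list[str]:
    """Return a list of '<binary>: <pattern>' strings matched by this command line."""
    lowered = cmdline.lower()
    hits = []
    for binary, patterns in SUSPICIOUS_PATTERNS.items():
        if binary in lowered:
            for pat in patterns:
                if re.search(pat, lowered):
                    hits.append(f"{binary}: {pat}")
    return hits
```

A real deployment would pull command lines from process-creation telemetry (e.g. event logs) rather than strings, and would need far richer rules, but the matching step itself is this simple.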
Other strategies include setting up application whitelisting where possible, tapping behavioral analytics software, patching and updating components regularly, using multifactor authentication, and continuing to educate users about the risks associated with clicking email links and opening attachments.
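Application whitelisting, at its simplest, reduces to comparing a binary's cryptographic hash against a pre-approved set before allowing it to run. A minimal sketch, assuming the allowlist of SHA-256 hashes is maintained out of band:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: str, allowlist: set[str]) -> bool:
    """True only if the file's hash appears in the pre-approved set."""
    return sha256_of(path) in allowlist
```

Note that hash-based allowlisting is exactly what LotL attacks sidestep — the abused binaries are legitimate and allowlisted — which is why it must be paired with the behavioral controls mentioned above.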
What is Diameter?

Diameter is one of several defined Authentication, Authorization, and Accounting (AAA) protocols. What does that really mean? An AAA protocol refers to the activities used by a data network to control access and services. This allows the service provider to restrict access and to ultimately bill the subscriber for services like bandwidth. As an example, people would dial into their Internet Service Provider (ISP) by providing an ID and password to an access server, which then authenticated the user before granting internet access. One of the earlier AAA protocols was Remote Access Dial-In User Service (RADIUS). It was defined by the Internet Engineering Task Force (IETF) and was designed to provide a simple yet efficient way to deliver such AAA capability. RADIUS worked well for what it was designed for – small-scale configurations like dial-up access to the internet. By the late 1990s things were changing, and RADIUS was not well-suited for larger-scale and higher-speed access. AAA protocols such as RADIUS were initially deployed to provide dial-up Point-to-Point Protocol (PPP) and terminal server access. Over time, with the growth of the internet and the introduction of new access technologies, including wireless, Digital Subscriber Line (DSL), Mobile IP and Ethernet, routers and Network Access Servers (NAS) had increased in complexity and density, putting new demands on AAA protocols. Some of these demands were for a more reliable transport, a need for agent support, and a need for server-initiated messages. So out of that need for an updated AAA protocol, Diameter was born. Diameter addressed the shortcomings of RADIUS. It uses either Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) as its transport layer; unlike RADIUS, Diameter supports agents (Relay, Proxy, Redirect, and Translation); it supports client-initiated messages; and it supports capability negotiation, to name a few improvements.
Diameter is a peer-to-peer, binary-coded protocol rather than a client/server text-based protocol like Session Initiation Protocol (SIP). With Diameter any peer can send a request to another peer. Either a client or a server can send and receive requests and responses. In the case of Diameter, a client is an entity that performs access control and a server is an entity that performs authentication and authorization. Diameter messages are either requests or responses (answers). Normally all Diameter requests are answered so the sender knows the status of the request right away. All data delivered by the protocol is in the form of an Attribute Value Pair (AVP). Some of these AVP values are used by the Diameter protocol itself, while others deliver data associated with particular applications that use Diameter. Diameter Protocol Stack Diameter is a base protocol that contains a base functionality independent of any application. Applications are extensions and are tailored for a particular usage in a particular environment. The picture below shows four such applications. Applications can be developed as needed without affecting the Diameter Base Protocol. Diameter is defined by the IETF in Request for Comments (RFC) 6733, and each application is defined in its own separate RFC. Diameter is defined in terms of an AAA base protocol and a set of applications. The base protocol provides basic mechanisms for reliable transport, message delivery and error handling. It must be used along with a Diameter application. A Diameter application uses the services of base protocol in order to support a specific type of network access. Diameter is a rather rote protocol – a Request gets sent and the reply comes back in the form of an Answer. Diameter and 3GPP Yes, yes that is all fine but where does LTE and 3GPP fit into all of this? How does an IETF-written AAA protocol find its way into the mobile telecom world of 3GPP? It really started with 3GPP Release 5 around 2002. 
In that release, 3GPP defined an optional architecture called the IP Multimedia Subsystem (IMS). This architecture was for delivering Internet Protocol (IP) and multimedia services over a mobile network. The primary protocol in the IMS is Session Initiation Protocol (SIP), but Diameter, with its Request/Answer format, was quite well suited for some communications. When Diameter is used in 3GPP, the interface it rides on becomes the application. In IMS these interfaces are the Cx / Dx and Sh interfaces. So now if we look at the Diameter protocol stack, it reflects the Base Protocol, six applications defined by the IETF and two applications defined by 3GPP, as shown below:

And that certainly was not the end of 3GPP’s involvement in Diameter. In Release 8, the architecture and protocols made a radical change. This new architecture is called the Evolved Packet System (EPS); however, the more marketing-friendly acronym “LTE” is what most people are familiar with. In previous releases and architectures, functions like managing the mobile’s location, handling subscriber data, authentication, fault recovery and checking the Mobile Equipment’s (ME) identity were all handled using SS7. Starting with this new architecture, those functions are handled by Diameter on interfaces called S6a, S6d, S13 and S13’. In addition, LTE networks allow an optional architecture called SMS in MME – and yes, that also uses Diameter, on the S6c and SGd interfaces. So now our Diameter application tree looks like this:

So that pretty well sums it up. Diameter has found its way into 3GPP in the optional IMS architecture and in LTE. But there is more: it is also used to deliver policy control. Policy control allows a service provider to better control services and revenues. It includes things like Guaranteed Bit Rate (GBR) and Quality of Service (QoS). To deliver that policy control information, once again Diameter is used.
Part of policy today deals with online charging (prepay) and offline charging (postpay). 3GPP lumps all of these together and calls it Policy and Charging Control (PCC). And if you are one step ahead of me and guessed that PCC is handled by Diameter, you would be correct. So let’s update our Diameter application tree:

As a recap, Diameter is an IETF-defined AAA protocol. It is in a Request/Answer format. It delivers parameters called Attribute Value Pairs (AVPs). Along with the base protocol, the IETF wrote several applications for Diameter. Additionally, it is used in 3GPP networks in the IMS, in LTE for mobility management and for PCC. Though we won’t redraw our 3GPP application tree, 3GPP also uses Diameter for other interfaces. These include:

- Generic Authentication Architecture (GAA)
- 3GPP to Wireless LAN (WLAN) Interworking
- Location Services (LCS)
- EPS AAA Interfaces

However, the primary ones are used in IMS, LTE and PCC.
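The AVP layout mentioned in the recap is concrete enough to sketch. Below is a minimal Python encoder for a single AVP following the RFC 6733 on-the-wire layout; AVP code 263 is Session-Id from the base protocol, but the payload string here is made up purely for illustration:

```python
import struct

def encode_avp(code, data, mandatory=True, vendor_id=None):
    """Encode a single Diameter AVP per the RFC 6733 layout:
    a 32-bit AVP Code, 8 flag bits (V, M, P), a 24-bit length
    covering header plus data, an optional 32-bit Vendor-ID,
    then the data padded to a 4-byte boundary."""
    flags = 0
    vendor = b""
    if vendor_id is not None:
        flags |= 0x80                      # V (vendor-specific) bit
        vendor = struct.pack("!I", vendor_id)
    if mandatory:
        flags |= 0x40                      # M (mandatory) bit
    length = 8 + len(vendor) + len(data)   # length excludes padding
    # pack the code, then the flags byte and 24-bit length as one word
    avp = struct.pack("!II", code, (flags << 24) | length) + vendor + data
    avp += b"\x00" * (-len(avp) % 4)       # pad to a 32-bit boundary
    return avp

# AVP 263 (Session-Id) with a 12-byte illustrative payload
avp = encode_avp(263, b"host;1;2;abc")
```

Decoding is the mirror image: read the code and the flags/length word, then slice out the data and skip the padding.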
Ideally, despite locks, your database system will allow a lot of users at once, and each transaction will get in, make the single change needed, and get out again; but locks inevitably mean blocking, and when transactions need to do multiple operations, this locking can even lead to deadlocks. Your application users may report that the application has “deadlocked” whenever they are kept waiting, but blocking alone does not mean an actual deadlock has occurred.

When a deadlock has been detected, the Database Engine terminates one of the threads, resolving the deadlock. The terminated thread gets a 1205 error, which conveniently suggests how to resolve it:

Error 1205: Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Indeed, rerunning the transaction is often the best course of action here, and hopefully your application or even your stored procedure will have caught the error, recognized that it is a 1205, and tried the transaction again. Let’s consider how a deadlock occurs, though.

How a deadlock occurs

It’s quite straightforward really — one transaction locks a resource and then tries to acquire a lock on another resource but is blocked by another transaction. It won’t be able to finish its transaction until such time as this second transaction completes and therefore releases its locks. However, if the second transaction does something that needs to wait for the first transaction, they’ll end up waiting forever. Luckily this is detected by the Database Engine, and one of the processes is terminated.

Diagnosing problems with deadlocks

When diagnosing these kinds of problems, it’s worth considering that there are useful trace events such as Lock:Deadlock and Deadlock graph events. These enable you to see which combination of resources was being requested, and hopefully track down the cause.
In most cases, the best option is to help the system get the quickest access to the resources that need updating. The quicker a transaction can release its resources, the less likely it is to cause a deadlock. However, another option is to lock up additional resources so that no two transactions are likely to overlap. Depending on the situation, a hint to lock an entire table can sometimes help by not letting another transaction acquire locks on parts of the table, although this can also cause blocking that results in transactions overlapping, so your mileage may vary.
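The catch-and-retry approach for error 1205 mentioned earlier can be sketched generically. `DeadlockError` and `run_with_retry` below are illustrative names, not any particular driver's API; most SQL Server drivers expose the error number on their exception objects in a broadly similar way:

```python
class DeadlockError(Exception):
    """Stand-in for a driver exception carrying SQL Server's error number."""
    number = 1205

def run_with_retry(transaction, max_attempts=3):
    """Re-run a transaction that was chosen as a deadlock victim.
    `transaction` is any callable that either commits its work and
    returns, or raises an exception exposing a `number` attribute."""
    for attempt in range(1, max_attempts + 1):
        try:
            return transaction()
        except DeadlockError as err:
            if err.number != 1205 or attempt == max_attempts:
                raise              # not a deadlock, or out of retries
            # optionally back off briefly here before retrying

# Simulated transaction: deadlocked on the first try, then succeeds.
attempts = []
def flaky_transaction():
    attempts.append(1)
    if len(attempts) < 2:
        raise DeadlockError("chosen as the deadlock victim")
    return "committed"

result = run_with_retry(flaky_transaction)   # retried once, then committed
```

A short randomized back-off between attempts helps when two victims would otherwise collide again on the retry.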
Black and White Lists

Black and white lists are a common security mechanism to segregate traffic and then take a specific action on that segregated traffic. A blacklist consists of malicious traffic that is specifically targeted for removal or blocking. For example, if a service provider was trying to block objectionable websites, they would put the offending sites on their blacklist and deny access, while allowing all other traffic to pass. A whitelist is just the opposite; it is a list of known “good” sites, and if a site is not on the whitelist it is presumed to be bad and is thus blocked. Black and white lists accomplish the same task — block objectionable traffic — but with a completely different approach.

An ANIC SmartNIC can be used to maintain a black or white list and apply specific actions when a match is found, such as dropping the traffic, redirecting it to a specific port (e.g. for further examination) or providing only certain aspects (e.g. header only) of the packet. To learn more about how an ANIC SmartNIC can help with your security needs, please contact us at email@example.com.
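The two policies can be captured in a few lines. This is only an illustration of the semantics, not of how an ANIC SmartNIC implements them in hardware:

```python
def make_filter(listed, mode):
    """Return a verdict function implementing blacklist or whitelist
    semantics over a set of sites. Blacklist: block only what is
    listed. Whitelist: block everything that is not listed."""
    if mode == "blacklist":
        return lambda site: "block" if site in listed else "allow"
    if mode == "whitelist":
        return lambda site: "allow" if site in listed else "block"
    raise ValueError("mode must be 'blacklist' or 'whitelist'")

# Hypothetical site names, for illustration only.
block_bad = make_filter({"bad.example"}, "blacklist")
allow_good = make_filter({"good.example"}, "whitelist")
```

Note the asymmetry: the blacklist fails open for anything unknown, while the whitelist fails closed, which is why whitelists are preferred where unknown traffic must be treated as hostile.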
Perry Holdsworth, our Sales & Marketing intern at Fornetix, gives us a primer on the recent WannaCry ransomware attack that has wreaked havoc on global networks.

What is it?

On May 12, 2017, malware known as WannaCry viciously attacked large organizations and networks around the world, invading computers and holding hostage the data they contain. The ransomware strategically infected computers by using a previously unseen vulnerability recently exposed in NSA document leaks.

What’s the process?

Ransomware like WannaCry starts by encrypting files on a person’s computer and then asking for payment to decrypt the files and release the information. In the case of WannaCry, $300 worth of bitcoin must be paid within three days or the amount will double. Worse, if no payment is received after seven days, the ransomware will delete all files with no hope for recovery.

Who was targeted?

WannaCry targeted Windows PCs in over 150 countries across the world and infected more than 200,000 systems. The most notable victim in this attack was Britain’s National Health Service, forcing hospitals to alter their plans and transport emergency patients to different hospitals.

What’s next and what have we learned?

Even though the WannaCry attack has passed, it’s still not safe to say we are in the clear from ransomware. In fact, many cybersecurity firms are recommending users find strategies to help repel or defend themselves against attacks in the future. Some of those strategies include having a talented IT team, a great defensive program against threats, and superior network infrastructure.

One major lesson learned from this event is that ransomware attacks are easy to start. Many people can start them — even your employees. Consequently, firms must train and educate their people to help prevent these attacks from happening again in the future. Another lesson from this event is to always stay up to date on patches.
WannaCry was able to infect Windows computers because they were not up to date on patches. This allowed WannaCry to infiltrate companies’ networks with minimal effort. Always make sure you are current on patches so that ransomware cannot infect your networks or computers. WannaCry is just the beginning of ransomware. Inevitably, something more powerful will take its place, and we must take the right steps to ensure we are fully prepared when the time comes.
Entrepreneurs should take a cautious look beyond the hype surrounding blockchain and assess the true capabilities of the innovation at this early stage in its development. In theory, blockchain has enormous potential for business applications. However, once you see beyond the holy-grail hype of blockchain theory, you may experience a sobering epiphany.

Since Satoshi Nakamoto’s 2008 unveiling of the technology, researchers have supported blockchain as a viable security solution in a world where humans are struggling to keep up with technology. All the numbers add up on a whiteboard, but so far applying the technology as a security solution has proven much more challenging than conceptualizing various use cases.

A blockchain-powered future

Hypothetically, blockchain could disrupt information management across a range of industries. The technology can enable enterprises to manage data with heightened efficiency and transparency. What blockchain currently does well is shift trust from people to technology. However, it will never eliminate the need for human intervention: any practical application of blockchain must work with some variant of traditional information management.

The decentralized nature of blockchain considerably enhances data security. Also, because blockchain does not incorporate identifying information, it’s inherently anonymous. Moreover, there’s a built-in trust component with blockchain because of the decentralized network of computers needed to resolve its complex algorithms.

In logistics, for instance, supply chain companies could use blockchain to track the movements of goods from manufacturers to the end user. The technology would also simplify the transfer of shipments. Alternatively, manufacturers could use blockchain to ensure quality assurance and conduct investigations with ease. Blockchain could also prove useful for accounting because it protects data from tampering: it produces an audit trail that personnel can trace easily.
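The tamper-evident audit trail comes from the way blocks chain their hashes together. Here is a minimal sketch of that mechanism, far simpler than any production blockchain (no consensus, no networking, hypothetical shipment records):

```python
import hashlib, json

def add_block(chain, record):
    """Append a record to a minimal hash chain. Each block stores the
    hash of its predecessor, so altering any earlier record changes
    every hash after it, which is what makes the trail tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; return False if any block was altered."""
    prev_hash = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": prev_hash},
                          sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, "shipment 1 left factory")
add_block(ledger, "shipment 1 arrived at port")
```

Changing any earlier record makes `verify` fail, since the stored hashes no longer match what the chained contents produce.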
In a recent Deloitte survey, 39 percent of responding executives expressed their intent to invest at least $5 million into blockchain development. But is that too much faith in blockchain?

According to Alan Amling, blockchain expert and teaching fellow in the University of Tennessee’s supply chain management program, Industry 4.0 technologies, including blockchain, are redefining business processes in the digital economy. Whether $1 million, $5 million, or $10+ million is the right amount of investment depends on the size of the company and the issues and opportunities they need to tackle. “I think blockchain will be one of those ‘Wait… wait… wait… wow, that was fast’ innovations and companies need to be ready. As standards for public blockchains become implemented, a rising tide will lift all boats. When that happens, you want to make sure you have a boat,” he says.

Pay no attention to the man behind the curtain

For supply chain and logistics managers specifically, how much trust should they put in blockchain? Amling suggests that companies should trust but verify. “Many companies are taking the pragmatic steps of conducting pilots to understand the technology and the highest payback applications. Now we’re beginning to see positive ROI use cases appear, especially in private blockchains. However, there are still many blockchain applications that could be better served by existing technology. The highest payback applications tend to be those that include sharing data with a vast network of parties, where data provenance and audit trails are highly valued, and all parties can agree on the rules for sharing access.”

Technology issues aside, there’s another problem regarding the commercial application of blockchain technology. Initial coin offerings (ICOs) are a way that businesses can use bitcoin to raise capital. Blockchain isn’t yet a mainstream technology; however, there are plenty of snake-oil dealers who use it to make billions through fraudulent ICOs.
As a result, you may want to think twice before staking a corporate claim on the bitcoin landscape. So far, the blockchain vertical isn’t delivering as desired. Of 43 major blockchain ICOs, not one has successfully created a product, and according to the first annual Cryptocurrency Anti-Money Laundering Report, cybercrimes executed using digital coin exchanges surged from $125 million to $356 million in just six months in 2019. In theory, blockchain is a phenomenal innovation. In practice, however, the technology isn’t entirely living up to its reputation—at least not yet.

Going forth with blockchain

The blockchain industry is still in its beginning stages, so enterprises have yet to develop ample best-use cases for the technology. However, companies that experiment with blockchain without a quantifiable definition of success will fail to reap a return on investment, according to a report published by the business consultancy McKinsey & Company. Accordingly, decision-makers must determine why they are investing in blockchain development before beginning research.

For some enterprises, the studies conducted by their industry competitors provide a foundation to begin with. For answers, stakeholders can examine the limited research and development information of their industry peers. While there is limited data available, some early adopters have at least established the relevancy of the technology for their vertical.

Also, it’s not necessary for blockchain to replace human mediators to generate value. A blockchain-powered future will most likely always require human mediators—albeit in a diminished capacity. For now, blockchain provides value in that it reduces costs, rather than completely transforming enterprise business models. Furthermore, analysts forecast that the deployment of blockchain at scale is still years away.
To date, companies have found it a considerable challenge to scale up to a full-size live environment where humans still have sufficient control over transactions. In this matter, there’s a valid case for blockchain research and development at most enterprises.

Blockchain deployment by industry

Around the world, business leaders want to know how they can leverage blockchain to gain a competitive advantage. Despite obstacles, blockchain technologists are making remarkable research and development breakthroughs. Companies such as American Express, Visa, MasterCard, and Goldman Sachs have all invested in blockchain R&D.

In the automotive industry, manufacturers are researching blockchain deployments for autonomous vehicles, electric-powered mobility, and other applications. The Toyota Research Institute, for instance, is leading research into using blockchain to decentralize the trade of autonomous vehicle data. A breakthrough in this area would make it impossible for dealers to roll back odometer mileage.

In aviation, Airbus—as part of the Hyperledger Consortium—has researched blockchain for jet plane part tracking, and Air France is conducting blockchain research to track maintenance supply chains and workflows. Maersk is researching blockchain for logistics monitoring, and in telecom, industry leaders are researching ways to deploy blockchain to manage the rapid proliferation of Internet of Things (IoT) devices. The IoT universe is expanding rapidly, but security for the network of connected devices lags far behind. Meanwhile, Cisco is experimenting with blockchain as a way to verify the identity and trustworthiness of IoT devices.

Recent years have seen a fast upshift in blockchain research and development activities, and analysts forecast this trend will continue. Rather than conduct pure experimentation, however, enterprises now want to learn how to extract strategic value from the technology.
As more studies conclude, blockchain will most likely go mainstream as researchers produce clear and tangible results that they can relay to non-technical executive managers. As the race to leverage blockchain wages on, more organizations with aligned interests will join forces to figure out how to make use of the technology. Very soon, the bodies of work resulting from blockchain R&D will enable enterprises to address critical pain points and compete on yet another tier of technological excellence.
The internet, as we know it, has been around for decades at this point, and it has evolved a lot. One particular aspect of being online that is slowly going away is our ability to be anonymous. Remember the days of being on the message boards with an alias that only you knew the reason behind, not having to worry about being discovered? Those days are, for the most part, gone. Now real names, IP addresses, data breaches, social media, and other publicly accessible databases are flowing like rivers of information, exposing us on the world wide web. As our anonymity goes away, so does our ability to stay safe. Today, “doxing” is becoming the de facto method of cyber-revenge against others putting themselves out there. In this guide, we’ll uncover what doxing is and how you can help prevent an attack against you.

What is doxing?

Doxing is when your personal identity is compromised, and it can happen with or without your knowledge. The term comes from the combination of the words “document” and “tracing.” It refers to the collection of documents on an individual or organization in order to learn more about them. Doxing takes place over the internet and includes hunting for details on a person. This internet-based practice of researching and broadcasting personally identifiable data can draw on openly available databases, social media websites, hacking, and social engineering. Doxxers analyze file metadata, public Wi-Fi packet sniffing, IP loggers, and the previously mentioned data.

A dox typically includes your real name, phone number, address, Social Security number, personal photos, social network profiles, credit card and banking information, online accounts and email addresses. Doxing can be used for a variety of reasons, including general people searching on Google, aiding law enforcement agencies, business analysis, and risk analytics.
It can also be used for immoral purposes such as extortion, coercion, infliction of harm, harassment, online shaming, and vigilante justice. These immoral doxing campaigns are usually conducted by anonymous bad actors who lack fear of retaliation or consequence as they unleash their inner bully. Anonymity has become a tremendous factor in cyberbullying, as it’s much easier to be harmful if you don’t have to face the hurt of your victims. Destroying an individual’s anonymity has become one of the most powerful online weapons available, and a way to hurt someone from many miles away.

How to protect yourself from doxing

No one seems to be safe from doxing. Public figures including celebrities, politicians, and even YouTubers have been the victims of these sorts of attacks. Even though resistance might seem futile, there are things you can do to protect yourself. Here are some ideas:

- Use a fake name instead of your real name, where only your friends and family know that it’s you
- Use trusted proxies or a VPN to visit websites as a means to keep your real IP address anonymous
- Create multiple usernames and email addresses instead of keeping everything uniform with your primary personal email. You can help keep safe by diversifying the emails you use to sign up for various digital platforms, and you don’t have to associate sensitive information with these extra email addresses
- Invest in WHOIS protection, which can be obtained either at an added cost or for free from services such as Google Domains or DreamHost.com when registering a website domain
- Use strong passwords for emails and online accounts. Here are some new ways to come up with secure passwords.
- Use multi-factor authorization for critical services like Google Drive, PayPal, and other services with the ability to make purchases. Two-factor authentication should be enabled whenever it is available.
- Increase social network privacy settings and edit your profile so you’re only sharing with friends or people you actually know. Change these critical Facebook settings now.
- Don’t use the “Login with [social media platform]” buttons on websites that ask you to register using your social media account. Instead, create an account using an email address not associated with your social media platform. This ensures your information isn’t shared with social media
- Make sure Google doesn’t have any personal information about you. Even though that’s nearly impossible, you can delete everything you’ve ever searched for through Google. Click here to find out how

Why you should be careful

Your risk of being a victim of doxing increases as more of your personally identifiable information becomes available online. Hackers dox individuals for revenge, malicious intent, protest, and simply to have control over someone else on the web. Being doxed can result in not only stolen information but harassment, identity theft, humiliation, loss of a job or career, and rejection by your friends and family.

Another practice frequently paired with doxing is swatting. Swatting is the act of prank-calling the police or SWAT units and sending them to another person’s address. Typically, a victim getting doxed can also lead to swatting: bad actors obtain someone’s address and make false bomb threats or other serious disturbances, and the police show up at the unsuspecting victim’s home.

Digital bullies and trolls can be very inventive in how they dox you. Starting from a single clue, they can follow it up until they gradually disclose your online persona and expose your identity. With the angry mob of people on the internet who use doxing as a method to win arguments, you have to police what you say and use common sense in public comments.
Simply put, be careful, follow the steps above to help yourself avoid getting doxxed, and stay anonymous. If you’d like to get yourself removed from creepy public-records websites such as Spokeo, Whitepages, and Intelius, you can use a service called DeleteMe. If cost is an issue, it has a DIY guide to help remove your public information.
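Picking up the "strong passwords" tip from the list above, here is a quick sketch of generating one programmatically with Python's standard `secrets` module; the length and character set are arbitrary choices, not a specific site's policy:

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password from letters, digits and punctuation
    using the `secrets` module, which draws from the operating system's
    cryptographically secure random source (unlike `random`)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
```

A password manager makes this practical at scale, since a unique generated password per site is useless if you cannot remember which is which.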
Application delivery is a mechanism to deliver application functionality quickly and efficiently to users. Traditionally, hardware application delivery controllers (ADCs) were used to deliver applications, but cloud-native application delivery architectures (such as microservices) require a new application delivery solution — the software ADC. These solutions provide application delivery optimization by allowing enterprises to create a highly scalable application delivery model which makes application services available when required. It does this by automating the deployment of new ADCs when required.

What Is Application Delivery?

Application delivery refers to the pool of services that combine to provide application functionality, usually web-based software applications, from the data centers or cloud environments where the required data processing and computing are executed to the application clients or end users over the internet. The services for delivering applications on a network infrastructure aim to provide a reliable user experience through load balancing, security, latency and TCP optimizations, which combine to provide application content seamlessly. Business IT teams’ role in application delivery focuses on how applications are architected and managed within the data center and cloud hosting services.

What Is An Application Delivery Network?

An application delivery network (ADN) provides application availability, security, visibility and acceleration. The technologies are deployed together in a combination of WAN optimization controllers (WOCs) and application delivery controllers (ADCs). The application delivery controller distributes traffic among many servers, while the WAN optimization controller uses caching and compression to reduce the number of bits that flow over a network. ADNs typically assist in the acceleration of content delivery, especially immediate and dynamic content such as online gaming and trading.
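As a toy illustration of the traffic distribution an ADC performs, here is a round-robin balancer; real ADCs support many more policies (least-connections, weighted, hash-based), and the IP addresses below are placeholders:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of backend servers
    in strict rotation, the simplest of the traffic-distribution
    policies an ADC applies."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        """Return the next backend to receive a request."""
        return next(self._pool)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [lb.pick() for _ in range(6)]   # each backend chosen twice, in order
```

Round-robin assumes the backends are interchangeable; policies like least-connections improve on it when request costs vary widely.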
What Is Application Delivery Management?

Application delivery management is the discipline of achieving fast, predictable and secure access to applications. It provides delivery solutions by ensuring vital enterprise applications are available and responsive for users. This requires application delivery optimization for throughput, connections per second, security, troubleshooting, and analytics.

Benefits of Modern Application Delivery Systems

A cloud-native application delivery system offers the following IT benefits:

• Simplified infrastructure: Replaces hardware-based application servers with a public cloud service that is better equipped to scale globally without compromising delivery quality.
• Reduced costs: Companies spend less on customer support when the user experience improves with application performance. A cloud-native application delivery process also saves on hardware acquisition and maintenance costs.
• Increased productivity: Efficiency is optimized when employees can quickly access information and services from any device, anywhere. A cloud-native application delivery process makes it possible for applications to perform faster.
• Improved end-user experience: Customers will increasingly use and prefer the high-performance applications made possible by an efficient cloud-based application delivery process.

What Does An Application Delivery Manager Do?

An application delivery manager is responsible for the availability and responsiveness of an organization’s applications. They can use agile project management techniques to deliver products under an efficient application delivery model. The application delivery manager leads the planning process and prioritizes resources based on team capacity to create application delivery solutions.

Does Avi Networks Offer Application Delivery?

Yes. Avi Networks delivers Intent-Based Application Services by automating intelligence and elasticity across any cloud.
Software-defined application services from Avi Networks provide a flexible solution that goes beyond load balancing to deliver pinpoint analytics, predictive autoscaling, multi-cloud traffic management and a cloud security solution. For more information on application delivery, see the following resources:
Artificial intelligence technology has been advancing over the past couple of years and is being implemented in many industries, including data centers. Google, for instance, is using artificial intelligence in order to make its data center framework more efficient. In fact, it is incorporating the artificial intelligence technology from its self-driving cars and smart assistants into its data centers. It is known that data centers burn massive amounts of energy, but the use of artificial intelligence is helping data centers be more efficient in this regard. Improving energy efficiency will also improve a data center’s environmental impact.

Artificial intelligence systems are now being used to analyze how Google’s data centers are working in real time. These AI neural networks use algorithms to analyze the system, diagnose the problem, and then make changes immediately after to solve the problem. Not only do these artificial intelligence systems work straight away, but they also learn from the problem to solve it even faster the next time, or stop it from occurring again. This is called deep learning.

Deep learning is the term that describes how artificial intelligence systems actually learn from situations they encounter. Through learning from its experiences, deep learning helps the AI system with future problem-solving. This software attempts to simulate the way a human brain works: it learns by identifying patterns in data, including images and sounds. This helps the AI system decide the best way for a data center to operate.

Although artificial intelligence systems within data centers have helped on some level, many industry executives still feel that AI technology is not quite where it needs to be. According to Erich Sanchack of Digital Realty: “Implementation of AI in the data center will move us well beyond current DCIM systems and their limitations.
Using AI we can create an environment in which not only are all of power and facilities decisions and processes completely optimized but that our resource planning and even advanced functions like dynamic bandwidth and server allocation are fully automated as well.”

According to Jack Pouchet of Vertiv: “True artificial intelligence is a long way from reality, and the cons are all too easy to define. Just think HAL 9000 and Skynet. But we need not even look that far as there are numerous real-world examples almost on a weekly basis of automation going awry.”

Presently, one of the primary ways that artificial intelligence is being used within data centers is for energy efficiency. Google’s use of AI has been able to cut down its energy use by 40 percent. For a company as large as Google, a 40 percent savings could save millions of dollars. Another way that artificial intelligence is being used within data centers is for server optimization: because of deep learning software, AI can use predictive analysis to help data centers distribute the workload.

Maybe one of the most important ways that artificial intelligence is being used is for data center security purposes. Data centers always have to be ready for cyber threats. Unlike humans, artificial intelligence systems can be on the lookout 24/7 and stay on top of all data threats. Monitoring these threats can take a lot of time and human resources, but AI systems are changing the way data centers are dealing with the issue of security.

Currently, many data centers and colocation providers use an infrastructure management solution that monitors many aspects of their system. Some of the things that a DCIM solution can manage are security, temperature and cooling, fire hazards, and ventilation. Managing all of these things is difficult even for a DCIM, so the idea of having a combined AI and DCIM system seems very promising. Data centers occupy much space, and managing it can be a difficult task.
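As a toy illustration of the real-time monitoring described above (not Google's actual system, which uses neural networks), here is a simple statistical anomaly detector over hypothetical temperature readings:

```python
from statistics import mean, stdev

def find_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.
    A reading is anomalous when it sits more than `threshold` standard
    deviations from the mean of the previous `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady server-inlet temperatures with one sudden spike at index 8.
temps = [22.1, 22.3, 22.0, 22.2, 22.4, 22.1, 22.3, 22.2, 31.5, 22.2]
```

A production system would learn normal behavior rather than assume a fixed window and threshold, but the shape of the task, detect the deviation, then act on it, is the same.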
LitBit is introducing Dac, the first AI-powered data center operator, which could address all of these problems. The manpower cost alone of running a data center can reach several hundred thousand dollars, and Dac is intended to lower that cost significantly. Dac will use an Internet of Things-style system to help identify anything wrong with data center operations; it will be able to recognize loose electrical wires, water leaks in the cooling system, and even possible power failures.

One of the negative aspects of new technology is that certain jobs become obsolete. If an AI-powered data center operator can do all of these jobs, the need for some data center workers may disappear. Since AI-managed data centers seem inevitable, the loss of certain data center jobs seems unavoidable as well. Although on-site jobs will continue to be lost to AI systems, there will be an increase in jobs in the artificial intelligence industry.

There are many pros to having an AI-managed data center, including an overall improvement in data management and data storage, and we are getting closer and closer to that reality. Data center and colocation companies would save money, though in time they may lose some of the human touch. Currently, machine learning does a good job of managing the systems it can manage, but the industry believes artificial intelligence will eventually be able to manage a data center completely. Overall, an AI-managed data center seems to be a positive step forward for the data center industry. Through artificial intelligence, companies will save money on manpower, become more energy efficient, and even reduce their use of fossil fuels.
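Dac's internals are not public, but the fault detection described — flagging leaks, wiring faults, and power anomalies from IoT sensors — can be sketched as threshold monitoring. The sensor names and limits below are invented for illustration:

```python
# Generic IoT threshold monitor (hypothetical sensors and limits).
# Each sensor maps to a (low, high) band; None means no bound on that side.
THRESHOLDS = {
    "coolant_flow_lpm": (40.0, None),    # alert if flow drops below 40 L/min
    "voltage_v":        (210.0, 240.0),  # alert outside the 210-240 V band
}

def check(sensor, value):
    low, high = THRESHOLDS[sensor]
    if low is not None and value < low:
        return f"ALERT: {sensor} low ({value})"
    if high is not None and value > high:
        return f"ALERT: {sensor} high ({value})"
    return "ok"

print(check("coolant_flow_lpm", 35.0))  # low flow could indicate a leak
print(check("voltage_v", 230.0))        # within the normal band
```

An AI operator would learn these bands from historical data instead of hard-coding them, and correlate multiple sensors before raising an alert.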
NFC is an amazing technology. Mark my words: it's going to be huge, whether Apple adopts it next week or not. The reason: it ties together the physical world with the networked world. When scanning for Wi-Fi networks or Bluetooth devices, it's really hard to know which names on your screen map to which devices in the real world. So Wi-Fi requires typing long and complex network passwords, and Bluetooth uses an annoying pairing procedure. NFC, on the other hand, really is plug and play. Without the "plug" part.

NFC (near field communication) is based on RFID (radio frequency identification). The simplest RFID chips only hold a serial number. More advanced ones can do some processing and store a little information. The interesting thing is that they don't have batteries. Instead, RFID readers emit an electromagnetic field that transfers enough power to the chip to wake it up and have it talk to the reader. RFID chips easily fit inside a credit card or even a paper transit ticket, and they cost as little as a few cents. NFC is an extension to RFID that allows communication between more powerful devices, such as cell phones.

So what's the big deal? Can't you do the same thing with barcodes, QR codes, or magnetic stripes? This is true for simple RFID chips, which provide nothing more than a serial number that anyone with an RFID reader can read. Presumably, copying such an RFID chip is also easy, just like copying a barcode or magstripe is trivial. But more advanced RFID chips have on-board authentication mechanisms, and will only talk to authenticated readers.

In the short term, the killer app for NFC is going to be payment systems. I've been reading a lot on American websites about how such a system won't go anywhere because it doesn't provide any merchant or consumer benefits. I don't agree. A payment system that uses NFC-equipped cellphones can be simultaneously easier to use and more secure.
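The on-board authentication just mentioned is typically a challenge-response protocol: the reader sends a random challenge, and the chip proves it knows a shared secret without ever transmitting it. Real cards use chip-specific schemes (3DES- or AES-based, for example); the sketch below only illustrates the principle using an HMAC, and the key is a placeholder:

```python
import hashlib
import hmac
import secrets

# Hypothetical per-card secret, provisioned into the chip at manufacture.
SHARED_KEY = b"per-card-secret"

def reader_challenge():
    """The reader sends a fresh random nonce so responses can't be replayed."""
    return secrets.token_bytes(16)

def chip_response(key, challenge):
    """The chip proves knowledge of the key without ever revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(key, challenge, response):
    # compare_digest avoids timing side channels when comparing MACs.
    return hmac.compare_digest(chip_response(key, challenge), response)

challenge = reader_challenge()
response = chip_response(SHARED_KEY, challenge)
print(reader_verify(SHARED_KEY, challenge, response))  # True
```

An eavesdropper who records the exchange learns nothing reusable, because the next transaction uses a different challenge — this is what barcodes and magstripes fundamentally cannot do.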
The chip+PIN system that's now used in Europe is also quite safe, because all the sensitive data is stored inside the chip and can't easily be copied. However, inserting the card and typing the PIN is still a bit of a hassle, and it's still possible for the PIN to be intercepted so the card can be charged if it's stolen or the magstripe is copied. Here in the Netherlands, two of the three large banks are now rolling out debit cards with an RFID chip to allow contactless payments. This seems less secure at first blush because, in theory, someone could read (and charge) the card while it's in your pocket. However, RFID/NFC only works over very short distances (less than 10 cm) and online authorization from the bank is required, so in practice this is not going to be a fruitful attack vector. Interestingly, small payments can be made without entering a PIN. Only for payments over 25 euros (or once small payments add up to 50 euros) must the PIN be entered. So paying small amounts with a contactless card will be much faster and more convenient than with a regular card.

But how does the addition of a phone make any of this better? I'm glad you ask. There are actually RFID stickers that you can stick on your phone so you can "use your phone" for contactless payments. However, if Apple adds NFC to the iPhone 6, this means that rather than entering your PIN on a terminal—which may or may not be outfitted with a PIN-spying camera—you could enter the PIN on your iPhone. Or better yet, use the Touch ID sensor. I imagine that for small amounts, simply having the phone unlocked would be sufficient. For larger payments, you would have to authenticate using the fingerprint sensor. I also think that this will be the extent of Apple's involvement in the payment process.¹ Sure, Apple has hundreds of millions of credit card numbers on file.
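The no-PIN thresholds described above amount to a simple rule. A sketch, using the euro limits quoted in the text (the function and variable names are mine):

```python
# Contactless PIN rules as described: no PIN below 25 euros per payment,
# until cumulative no-PIN spending would exceed 50 euros.
PIN_LIMIT = 25.00
CUMULATIVE_LIMIT = 50.00

def needs_pin(amount, no_pin_total):
    """no_pin_total is the running sum of recent payments made without a PIN."""
    return amount > PIN_LIMIT or no_pin_total + amount > CUMULATIVE_LIMIT

print(needs_pin(4.50, 0.0))    # False: small payment, tap and go
print(needs_pin(30.00, 0.0))   # True: over the per-payment limit
print(needs_pin(10.00, 45.0))  # True: cumulative limit reached
```

Presumably the bank resets the running total after each successful PIN entry, which is how the card limits exposure if it is stolen.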
But being a payment processor entails a lot of customer service and surprisingly thin margins, so I'd be very surprised if Apple had any interest in that business. However, the card issuers are going to love the strong fingerprint authentication, which will surely reduce fraud. This will give Apple significant leverage for negotiating better rates for iTunes / App Store credit card payments. Remember that Apple sells more than half of its iPhones outside North America, so even if none of this materializes in the US, that doesn't necessarily mean the whole thing was a failure. In fact, there is some speculation that NFC payments may be a must-have feature in some regions.

Of course all of this is just the camel's nose. Once tens of millions of popular devices have NFC capability, and that capability can be used in third-party apps using APIs (we can only hope!), it won't be long before new and innovative uses of NFC develop. Remember the Bump app for exchanging business cards by bumping phones together? With NFC this would be much easier. There are already Bluetooth devices that can be paired through NFC. There are RFID stickers you can scan with an NFC phone to change the phone's settings quickly. Setting up a Wi-Fi network, unlocking a computer (or your front door!) without a password, charging up your Oyster card or OV-chipkaart—or replacing that card. I'm sure tying the networked world and the physical world together quickly, easily and securely will lead to lots of innovations that we haven't even thought of.

¹ I wonder how the connection between a bank account or credit card and the iPhone will be made, though.
Many statistical techniques are useful for both prediction and explanation in the world of advanced analytics. However, certain techniques are necessarily better suited to one than the other. For example, techniques such as mixed linear models are primarily used for explanation. In predictive techniques there are two 'streams' of innovation, though the distinction between them is becoming less clear over time as practitioners of each borrow from the other. The two streams are statistics and machine learning. The online chapter of 'Advanced Analytics Methodologies: Driving Business Value with Analytics,' by Michele Chambers and Thomas W. Dinsmore, discusses each of these two 'streams' in turn, offering examples of each; the two have very different legacies.

Statistical methods, for example linear regression, use known properties to estimate the parameters of mathematical models. These models have the advantage of being generalizable: if you can demonstrate that the historical data conform to a known distribution, then this information can be used to predict behavior for new cases. The analogy is that just as you can predict the landing spot of an artillery shell given its starting position, velocity and acceleration, so too can you predict the response to a marketing campaign based on information about a customer's past shopping habits, demographic characteristics and so on. The only requirement is the ability to show that the data follow a known statistical distribution. This is also the major disadvantage, since all too often real-world behavior does not conform to standard statistical distributions.

Machine learning methods are fundamentally different in one major way: they do not start from a particular hypothesis, but seek to learn and describe the relationship between historical data and the target behavior as closely as possible.
Since they are not constrained by any specific statistical distribution, machine learning methods are often more accurate than their statistical counterparts. On the other hand, machine learning models can 'overlearn,' meaning they can learn relationships from their training data that do not generalize to the wider world. This requires built-in techniques to control and limit the phenomenon, such as cross-validation or pruning on an independent sample.

Some techniques, such as linear regression in the 'statistical stream,' are well understood, widely used and broadly available, whereas other methods, like deep learning in the 'machine learning stream,' are relatively new. Scientists are still in the process of understanding the technique's limits, and software implementations are rare. For business, it is not necessary to understand the technical details of each technique; instead, focus on two principles. First, recognize that experimentation with a broad spectrum of techniques is often required. Second, while theoretical limits are academically interesting, actual performance in application should be the sole measure of any model.

Big Data and related technologies – from data warehousing to analytics and business intelligence (BI) – are transforming the business world. Big Data is not simply big: Gartner defines it as "high-volume, high-velocity and high-variety information assets." Managing these assets to generate the fourth "V" – value – is a challenge. Many excellent solutions are on the market, but they must be matched to specific needs. At GRT Corporation our focus is on providing value to the business customer.
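The 'overlearning' problem and why it is caught with a held-out sample can be illustrated with a toy comparison: a least-squares line (the statistical stream) versus a caricatured learner that simply memorizes its training data. The data here are synthetic:

```python
# Train/test contrast: a parametric model vs. a pure memorizer.
train = [(x, 2 * x + 1) for x in range(10)]
test = [(x, 2 * x + 1) for x in range(10, 15)]  # held-out sample

# "Statistical stream": least-squares fit of y = a*x + b.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# "Machine learning" caricature: memorize the training pairs exactly.
memory = dict(train)

def mse(pairs, predict):
    return sum((predict(x) - y) ** 2 for x, y in pairs) / len(pairs)

print(mse(test, lambda x: a * x + b))          # 0.0: the fitted line generalizes
print(mse(test, lambda x: memory.get(x, 0)))   # 633.0: the memorizer fails off its data
```

The memorizer scores perfectly on its training set yet is useless on new cases — exactly the failure mode that cross-validation and pruning are designed to detect and prevent.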
Risk Assessment Defined

Risk assessment is the identification and analysis of relevant risks to achieving objectives, forming a basis for determining how each risk should be managed (accept, reject, share, reduce, etc.). Every entity faces a variety of risks from both external and internal sources, and each of these risks must be assessed. Risks affect each entity's ability to survive; successfully compete within its industry; maintain its financial strength and positive public image; and maintain the overall quality of its products, services and people. Businesses should perform a risk assessment before introducing new processes or activities, before introducing changes to existing processes or activities (such as changing equipment), or when the company identifies a new risk.

As a precondition to risk assessment, an organization must establish objectives that are linked at different levels and internally consistent. An organization must not only understand and deal with the risks it faces but also set objectives and integrate sales, production, marketing, financial and other activities so that it operates correctly and efficiently. At that point, the entity must establish mechanisms to identify, analyze and manage the related risks. There is no practical way to reduce risk to zero; therefore, management must determine how much risk can prudently be accepted and strive to maintain risk within those levels. This accepted level is referred to as risk appetite.

There are several types of risks that an organization may face. While specific industries have their own sets of risks, businesses of all kinds may face idiosyncratic risks that are unique to their organization. In general, however, organizational risks tend to fall into one of four categories, as follows:

The Four Categories of Risk:
- Strategic Risk: As an example, a competitor enters the market, posing a risk to a business's market share of a product or service.
- Compliance and Regulatory Risk: This may include the introduction of new rules or legislation that prevents a business from carrying out its activities, thus compromising its revenue.
- Financial Risk: As an example, a rise in interest rates increases the cost of debt, or a currency rate change affects the business's ability to import or export goods and services.
- Operational Risk: This includes a breakdown or theft of key equipment, or the protection of data.

The goal of a risk assessment plan will vary across industries, but overall, it should help organizations prepare for and combat risk. Other tasks contributing to this goal include:
- Providing an analysis of possible threats
- Meeting legal requirements
- Developing awareness regarding hazards and risk
- Preventing injuries or illnesses
- Creating an accurate inventory of available assets
- Managing currency, interest rate and foreign exchange rate risk
- Assessing concentration risk (customer and supply chain)
- Avoiding key-man risk
- Formulating a budget to remediate risks
- Justifying the costs of managing risks
- Understanding the return on investment

The Risk Assessment Steps

The steps used in risk assessment form an integral part of an organization's risk management plan and ensure that the organization is prepared to handle any risk. The three main steps of risk assessment should be carried out in an organized, systematized and logical way. They are as follows:

Step 1: Objective Setting

Objective setting is a key part of the management process. Objectives must be set before management can identify risks to their achievement and take the necessary actions to manage those risks. Objectives are considered a prerequisite to and enabler of internal controls — not an internal controls component. Thus, they should be explicitly stated rather than merely implied by a past level of performance.
At the entity level, objectives are often represented by the entity's mission and value statements, while an analysis of the entity's strengths, weaknesses, opportunities and threats (SWOT) can lead to an overall strategy. Entity-level objectives are linked and integrated with more specific objectives established for defined "activities" — sales, production and engineering — making sure they are consistent. These subobjectives, including established goals, may deal with product-line, market, financing and profit objectives. Critical success factors exist not only for the entity but for business units, functions, departments and individuals. Thus, objective setting enables management to identify measurement criteria for performance, with a focus on critical success factors. Categories of objectives include:
- Operations: This relates to the achievement of an entity's basic mission — the fundamental reason for its existence.
- Reporting: This pertains to internal and external reporting, both financial and nonfinancial, and may encompass reliability, timeliness, transparency or other terms set forth by regulators, recognized standard-setters or the entity's policies.
- Compliance: The entity must conduct its activities, and often take specific actions, in accordance with applicable laws and regulations.

An organization should not look at these objectives in silos, because an objective in one category may overlap with or support an objective in another. Furthermore, the category in which an objective falls can sometimes depend on circumstances.

Step 2: Risk Identification

An entity's performance can be at risk due to internal or external factors. Regardless of whether an objective is stated or implied, an entity's risk assessment process should consider the potential risks. Risk identification should be comprehensive and should consider all significant interactions between an entity and relevant external parties.
Risk identification is an interactive process and is often integrated with the planning process; thus it's useful to approach risk from a "clean sheet of paper" perspective rather than merely relating it to the previous review.

Step 3: Risk Analysis

Once a risk has been identified, it must be analyzed by estimating its significance and the likelihood (or frequency) of its occurrence. How the risk should be managed is then considered, with an assessment of what actions should occur (accept, reject, share, reduce). As there are numerous methods for estimating the cost of a loss from an identified risk, management should be aware of them and apply them as appropriate. Actions that can be taken to reduce the significance or likelihood of a risk include a myriad of decisions management may make every day. Management must recognize that some level of residual risk will always exist, because resources are limited and other limitations are inherent in every internal control system. Finally, organizations should be open to and expect change in order to mitigate and manage risk.

Tools in the Risk Assessment Process

There are several risk assessment tools available to organizations. One tool that helps organizations provide timely and accurate risk identification and assessment is a risk and control self-assessment (RCSA). The organizational resources required to complete an RCSA and effectively apply the results in a timely manner may, nevertheless, make it considerably complex, arduous or costly for many organizations to implement and/or utilize. Furthermore, leadership often finds it difficult to define roles and carve out the necessary time for this intricate and comprehensive process. They may find that RCSA workshops are unproductive while processes, controls and technology continuously change and documentation becomes outdated.
Yet there are ways to take on these challenges by identifying practical and effective changes that an organization can implement, with minimal cost and disruption, allowing the RCSA program to successfully do what it was intended to do. This can be accomplished in a period of six to 12 months and with favorable results. Examples include:
- Rationalizing and optimizing controls
- Improving coverage and the integration of regulatory compliance and technology risk
- Simplifying taxonomies
- Incorporating relevant data points

Beyond these relatively simple changes, organizations should consider embracing new technology, such as data analytics tools, predictive capabilities, chatbots, artificial intelligence and automated assistants, to deliver more timely, actionable and forward-looking results. Competitive advantage will be on the side of organizations capable of using risk and control data, particularly RCSA results, to make risk-informed, faster and smarter decisions.

Here are some risk assessment tools you can download on KnowledgeLeader:
- A risk assessment questionnaire with instructions for completion, information and reference materials, a risk model, rating guidance and risk definitions. The questionnaire collects responses for risk assessment as preparation for annual budgeting and business planning efforts.
It includes functional goals, the top three to five risks in functional areas, the companywide top three to five risks, quantitative risk ratings and an internal audit section.
- Policy and procedure samples, such as an IT Assessment Policy, which provides a standardized approach and operating instructions for the execution of a company's IT risk assessment
- A Risk Assessment Audit Report, including two sample audit reports, that outlines the steps an audit department should take when conducting a risk assessment, and a guide used by auditors to understand their risk assessment processes

In addition to the above examples, KnowledgeLeader provides risk assessment templates, articles, booklets and discussion points that are applicable, timely and insightful. It's never too early to start or update a risk assessment within an organization. The process may seem daunting at first, but with the right tools and resources, an organization will be well on its way to getting there.
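As a small illustration of the significance-and-likelihood analysis in Step 3, risk teams often score each risk as likelihood times impact on simple ordinal scales. The scales, thresholds, and example risks below are illustrative only, not a standard:

```python
# Qualitative 1-5 scales; score = likelihood x impact (a common convention).
def risk_score(likelihood, impact):
    return likelihood * impact

def risk_level(score):
    """Map a 1-25 score to a response band (thresholds are illustrative)."""
    if score >= 15:
        return "high"    # e.g., reduce or share the risk
    if score >= 8:
        return "medium"  # e.g., monitor and mitigate
    return "low"         # e.g., accept within the risk appetite

register = [
    ("competitor enters market", 4, 4),  # strategic
    ("key equipment failure",    2, 5),  # operational
    ("interest rate rise",       3, 2),  # financial
]
for name, likelihood, impact in register:
    print(name, "->", risk_level(risk_score(likelihood, impact)))
```

The banding is where management's risk appetite shows up: a more risk-tolerant organization would simply raise the thresholds.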
- % Mod (the remainder after dividing)
- ** Exponentiation (note that ^ does not do this operation, as you might have seen in other languages)
- // Divides and rounds down to the nearest integer

The usual order of mathematical operations holds in Python, which you can review in this Math Forum page if needed. Bitwise operators are special operators in Python that you can learn more about here if you'd like.

Arithmetic Operators: Practice Questions

Quiz: Average Electricity Bill
It's time to try a calculation in Python! My electricity bills for the last three months have been $23, $32 and $64. What is the average monthly electricity bill over the three-month period?

In this quiz you're going to do some calculations for a tiler. Two parts of a floor need tiling. One part is 9 tiles wide by 7 tiles long, the other is 5 tiles wide by 7 tiles long. Tiles come in packages of 6.
- How many tiles are needed?
- You buy 17 packages of tiles containing 6 tiles each. How many tiles will be left over?
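If you want to check your answers afterwards, here is one possible solution using the operators above:

```python
# Quiz 1: average monthly electricity bill over three months
print((23 + 32 + 64) / 3)  # roughly 39.67

# Quiz 2: tiles needed for both parts of the floor
tiles_needed = 9 * 7 + 5 * 7
print(tiles_needed)  # 98

# Tiles left over after buying 17 packages of 6
print(17 * 6 - tiles_needed)  # 4
```

Note that `/` always produces a float in Python 3, which is why the average prints with a decimal point even though the bills are whole dollars.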
Advice Without Explanation Is Not Very Intelligent

The weekend edition of my local newspaper writes that the sommelier at restaurants will be replaced with an intelligent algorithm. On the one hand, the article was meant to celebrate the progress we are making in computer science; on the other hand, it was to warn the workforce of a jobless future. The renewed interest in AI and related technology is making its way to the general public. The driving force is major software firms that are selling us intelligence — IBM's Watson, Microsoft's Cortana, and Google's DeepMind. Knowing I have a background in AI, friends, colleagues, and customers have asked me about these intelligent modules:
- Are they a rule engine?
- How intelligent are they?
- Will they replace our rules?

Given the countless variety of wines, the infinite number of recipes, and the yearly growing reservoir of wines, it is practically impossible to create a set of rules that gives good wine advice in the general case. This answers the first question: the new AI modules are NOT a simple rule engine.

The new AI technologies use a combination of techniques. There will be a large dataset used to train a (neural) network on the wine advice of recognized sommeliers. There will be heuristics or rules for clear-cut cases or to prepare the dataset. And there may be some learning process that fine-tunes the results by setting weighting values in the network. The result is probably reasonable wine advice in most cases. You will think: yes, indeed, good idea to drink a Sauternes with foie gras.

But what if the advice does not ring a bell? In fact, you believe the advice is very odd. The bot's advice may be good, but you want to understand why. A sommelier will try to convince you of his recommendation with enthusiasm and may also suggest an alternative based on some of the hints you gave during the selection process. In the case of the sommelier bot, we don't know whether the odd advice is a flaw in the algorithm or a bug in the software.
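The difference can be made concrete: a rule-based adviser can return its reasoning alongside its recommendation, which a black-box model cannot. A minimal sketch (the pairing rules here are illustrative, not expert advice):

```python
# Each rule pairs a matching condition with (recommendation, reason).
RULES = [
    (lambda dish: "foie gras" in dish,
     ("Sauternes", "a sweet wine balances the richness of foie gras")),
    (lambda dish: "oyster" in dish,
     ("Chablis", "high acidity complements briny shellfish")),
]

def advise(dish):
    """Return advice together with the rule that produced it."""
    for condition, (wine, reason) in RULES:
        if condition(dish):
            return f"{wine}: {reason}"
    return "no rule matched; ask a human sommelier"

print(advise("seared foie gras"))
```

A trained network would instead output only a wine label (perhaps with a confidence score), leaving the "why" to be reverse-engineered afterwards — which is exactly the gap the article is pointing at.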
Even if it turns out to be a matching wine for the dish, you would not feel confident in the advice, because you don't know the reasoning and there is no proof that the advice was the best or at least reasonable. Bottom line: advice without explanation is not very intelligent.

The intelligent modules that we are discussing are a black box for the end-user. Opening that black box is an option: methods based on entropy or covariance analysis may generate decision trees from trained statistical models, but you need the help of the supplier to open the black box. That being said, it is highly inefficient to replace rule-based systems with intelligent modules for tasks that involve legal obligations, equality of rights, or decisions that need to be explained.

We answered the questions and concluded that intelligent modules are not a rule engine (although they may use rules), are not very intelligent, and will not replace our rules. I hope these observations help you to see the value of intelligent modules and to use them wisely in situations where rules fall short.
The world of “Big Data,” “The Internet of Things,” or simply… “Cyberspace.” Whatever we choose to call it, never in human history has something so profoundly consequential for so many people’s daily lives been unleashed in such a short period of time. Certainly, the printing press, the telegraph, radio, the television, were all extraordinary. But what is going on now is truly unprecedented in its sudden, dramatic impact. In the span of a few short years, billions of citizens the world over are immersing themselves in an entirely new communications environment — one that is changing not only how we think and behave but, more profoundly, how society as a whole is fundamentally structured. Information that previously was stored in our office drawers, in locked closets, in our diaries, even in our minds, we are now transmitting to thousands of private companies and, by extension, to government agencies. This world of Big Data is a supernova of billions of human interactions, habits, movements, thoughts, and desires, ripe to be harvested, analyzed, and then fed back to us, in turn, to predict and shape us. It should come as no surprise, given the rate at which this transformation is occurring, that there will be unintended — and possibly even seriously detrimental — consequences for privacy, liberty, and security. Evidence of these consequences is now beginning to accumulate. First, there are privacy issues. Data breaches that expose the email and password credentials of tens of millions of people have become so routine that researchers are now describing them as “megabreaches.” Our research at the Citizen Lab has shown how numerous popular mobile applications used by hundreds of millions of people routinely leak sensitive user information, including, in some cases, the geolocation of the user, device ID and serial number information, and lists of nearby wifi networks. 
We have discovered that some applications were so poorly secured that anyone with control of a network to which these applications connect (e.g., a WiFi hotspot) could easily spoof a software update to install spyware onto an unwitting user's device. Poorly designed mobile applications, such as those we have examined, are a goldmine for criminals and spies, and yet we surround ourselves with them. Disclosures by former National Security Agency (NSA) contractor Edward Snowden have shown that state intelligence agencies routinely vacuum up information leaked by applications in this way, and use the data for mass surveillance. And what they don't acquire from leaky applications, they get directly from the companies through lawful requests. The confluence of interests around commercial and state surveillance is where Big Data meets Big Brother. Beyond privacy issues are those of security. For example, researchers have demonstrated how they could use remote WiFi connections to take over the controls of a smart car or even an airline's cockpit systems. Others have shown proof-of-concept attacks against "smart home" systems that remotely cracked door lock codes, disabled vacation mode, and induced a fake fire alarm. Of course, what happens in the lab is but an omen of what's to come in the real world. Several years ago, a computer virus called "Stuxnet," reportedly developed by the US and Israel, was used to sabotage Iranian nuclear enrichment plants. Dozens of countries are reportedly researching and stockpiling their own Stuxnet-like cyber weapons, which in turn is generating a huge commercial market for such hidden software flaws. Perversely adding to the insecurities (as the FBI-Apple controversy showed us), some government agencies are, in fact, pressuring companies to weaken their systems by design to aid law enforcement and intelligence agencies.
As such insecurities mount, and as more and more of our critical infrastructure is networked, the Big Data environment in which we live may turn out to be a digital house of cards. This past week, the Citizen Lab and our partners, Open Effect, produced several outputs and activities that related to concerns around privacy and security in the world of Big Data, including some that we hope can help mitigate some of these unintended consequences. First, the Citizen Lab and Open Effect released a revamped version of the Access My Info tool, which allows Canadians to exercise their legal rights to ask companies about the data they collect on them, what they do with it, and with whom they share it. I wrote an op-ed for the CBC about the tool, and there were several other media reports, including an interview by the CBC's Metro Morning host Matt Galloway with Andrew Hilts of Citizen Lab and Open Effect. Also, yesterday CBC Ideas broadcast a special radio show on "Big Data Meets Big Brother," in which I participated alongside Ann Cavoukian and Neil Desai, with Munk School director Stephen Toope moderating. We discussed the balance between national security and privacy, and focused in on the limited oversight mechanisms that exist in Canada around security agencies, especially the Communications Security Establishment (CSE). Finally, Citizen Lab and Open Effect, as part of our Telecommunications Transparency Project, released a DIY Transparency Reporting Tool. The tool is actually a software template that provides companies with a guide for developing transparency reports. To give some context for the tool, companies are increasingly encouraged to release public reports on the length of time client data is retained, how the data is used, and how often—and under what lawful authority—the data is shared with government agencies.
The DIY Transparency Reporting Tool is the flipside of the Access My Info project: whereas the latter encourages consumers to ask companies and governments what they do with our data, the Transparency Reporting Tool provides companies with an easy-to-use template to take the initiative and report that information to us.

The world of Big Data has come upon us like a hurricane, with most consumers bewildered by what is happening to the data they routinely give away. Meanwhile, companies are reaping a harvest of highly personalized information to generate enormous profits, with very little public accountability around their conduct or the design choices they make. It's time we encouraged consumers to "lift the lid" on the Big Data ecosystem, right down to the algorithms that sort us and structure our choices, while simultaneously pressing companies to be more responsible stewards of our data. Tools like Access My Info and the DIY Transparency Toolkit are a good first step.
How Latency, Packet Loss, and Distance Kill Application Performance

Latency, packet loss, distance, and application performance: what do all these terms have to do with each other? If you manage IT networks for a global enterprise, it's important to step back and look at the big picture, so you can more clearly see how they all impact one another. This may sound like "Networking 101" to some of you, but it's critical to understand the relationships between these terms and their combined impact on application performance.

- (Network) Latency is an expression of how much time it takes for a packet of data to get from one designated point to another.
- Packet loss is the failure of one or more transmitted packets (data, voice, or video) to arrive at their destination.
- Distance is the intervening space between two points or, for the sake of enterprise networks, two offices.
- TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a network conversation via which application programs can exchange data.

The big picture: when there is distance between the origin server and the user accessing that server, the user needs a reliable network connection to complete a task. This network may be private, like a point-to-point link or MPLS, or public, typically over the Internet. If the network has packet loss, the overall throughput between the server and the user decreases significantly with increasing distance. This means that the further away the user is from the origin server, the more unusable the network becomes.

Why is that? The main culprit is TCP, the protocol that provides reliable, ordered, and error-checked delivery of data between servers and users across a network. TCP is a good guy and helps with data quality.
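The combined effect of loss and round-trip time on TCP throughput is often approximated with the Mathis et al. model: throughput ceiling ≈ MSS / (RTT × √p), where p is the packet loss rate. The short sketch below (illustrative numbers, not figures from this article) shows how the same 1% loss rate caps throughput far more severely as round-trip time grows with distance:

```python
import math

def max_tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate the TCP throughput ceiling (Mathis et al. model):
    throughput <= (MSS / RTT) * (1 / sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    bits_per_second = (mss_bytes * 8 / rtt_s) * (1 / math.sqrt(loss_rate))
    return bits_per_second / 1e6

# A 1460-byte MSS and 1% loss, at three illustrative round-trip times.
for label, rtt in [("same city", 10), ("cross-country", 80), ("intercontinental", 250)]:
    print(f"{label:>16}: {max_tcp_throughput_mbps(1460, rtt, 0.01):6.2f} Mbps ceiling")
```

Even before congestion or link bandwidth enter the picture, a 25x longer round trip cuts the achievable ceiling by the same factor, which is why buying raw bandwidth alone rarely fixes long-haul performance.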
It's also a connection-oriented protocol, meaning you must first establish a connection with a remote host or server before any data can be sent. The next step after establishing a TCP connection is to establish flow control: how fast the sender can send data and how reliably the receiver can receive it. Depending on the quality of the network, the flow is determined by window sizes negotiated from both ends. The ends may disagree if the client and the server view the network's characteristics differently. This has a major impact on application performance!

Certain applications, like FTP, use a single flow and scale to the maximum available window size to complete the operation. However, Windows-based applications tend to be more "chatty" and need multiple round trips to get their operations completed. The simplistic model to consider: distance + packet loss + high latency = poor application performance for TCP applications. In fact, looking at the graphic on the maximum throughput one can achieve (Maximum TCP Throughput with Increasing Network Distance), you wonder how organizations get any collaboration across long distances at all.

Voice and video perform poorly when there is packet loss, especially over long-distance Internet links. Even minimal packet loss combined with latency and jitter will make a network unusable for real-time traffic. Why? Because these applications run over UDP (User Datagram Protocol). Unlike TCP, the good guy who polices all interaction, UDP couldn't care less. UDP is connectionless, with no handshaking prior to an operation, and exposes any unreliability of the underlying network to the user. There is no guarantee of delivery.

Here is the path most organizations with a global user base and growing application performance issues tend to take:

- Focus on Internet links. Buy more bandwidth. Throughput typically increases somewhat, but not enough to fix the issue.
- Upgrade to MPLS links. Wait 6-9 months for deployment. Realize that the problem has not been solved for long-distance connections.
- Consume more and more bandwidth. Deploy QoS to deal with congestion and its impact on real-time traffic. Voice and video do okay, assuming enough bandwidth is configured.
- Realize that you can't afford to keep buying more bandwidth at this alarming rate.
- Add WAN optimization appliances. With TCP optimization, data compression, and application proxies, these do address the throughput issues.
- See prices skyrocket to manage and maintain WAN optimization hardware, then experience sticker shock when it's time to refresh those appliances.
- Consider your options. Cloud services? Mobility?
- Revisit your entire enterprise network design. Vow to transform that network. Plan for the cloud and for mobility. Account for Big Data and your growing needs. Accommodate acquisitions and business changes.

And how would you do that? If you know that the status quo is broken, you also know that the traditional hardware vendors are trying to squeeze every last red cent out of those boxes before their business model becomes completely outdated. Aryaka is the world's first and only global, private, optimized, secure, and managed SD-WAN as a service that delivers simplicity and agility to address all enterprise connectivity and application performance needs. Aryaka eliminates the need for WAN optimization appliances, MPLS, and CDNs, delivering optimized connectivity and application acceleration as a fully managed service with a lower TCO and a quick deployment model.
- A machine learning approach developed by researchers at MIT's Koch Institute and Massachusetts General Hospital (MGH) may aid in diagnosing cancers of unknown primary by examining gene expression programs associated with early cell development and differentiation.
- The scientists focused the model on indicators of disrupted developmental pathways in cancer cells, striking a balance between reducing the number of features and still capturing the most essential information.
- The researchers subsequently created the Developmental Multilayer Perceptron (D-MLP), a machine-learning model that rates a tumor for its developmental components and forecasts its origin.
- After training, the D-MLP was applied to 52 fresh samples of especially difficult malignancies of unknown origin that could not be classified using existing techniques.
- Furthermore, comprehensive comparisons of tumor and embryonic cells in the study offered promising and sometimes surprising insights into the gene expression patterns of different tumor types.

The first stage in deciding the best therapy for a cancer patient is identifying their exact form of cancer, which includes pinpointing the primary site: the organ or part of the body where the disease develops. Even with rigorous testing, the origin of a cancer cannot always be established. Although these cancers of unknown origin are often aggressive, oncologists must treat them with non-targeted medicines, which typically have severe side effects and result in low survival rates.

Using machine learning for cancer diagnosis

A new machine learning methodology developed by researchers at MIT's Koch Institute for Integrative Cancer Research and Massachusetts General Hospital (MGH) may aid in classifying cancers of unknown primary by examining gene expression programs associated with early cell development and differentiation. Salil Garg, a pathologist at MGH and a Charles W. (1955) and Jennifer C.
Johnson Clinical Investigator at the Koch Institute, stated: "Sometimes you can apply all the tools that pathologists have to offer, and you are still left without an answer. Machine learning tools like this one could empower oncologists to choose more effective treatments and give more guidance to their patients." Garg is the senior author of new research published on August 30 in Cancer Discovery; the lead author is MIT postdoc Enrico Moiso.

The artificial intelligence technique has high sensitivity and accuracy in recognizing cancer types. Parsing the changes in gene expression across various cancers from an unknown source is a great challenge for machine learning to handle. Cancer cells appear and function quite differently than normal cells, due in part to substantial changes in how their genes are expressed. Advances in single-cell profiling and efforts to classify distinct cell expression patterns in cell atlases have produced a wealth of data, sometimes overwhelming, including clues to how and where different malignancies started.

Building a machine learning model that folds distinctions between healthy and cancerous cells, as well as between different types of cancer, into a diagnostic tool is a balancing act. If a very sophisticated model accounts for too many aspects of cancer gene expression, it may appear to learn the training data flawlessly yet stumble when confronted with fresh data. However, by simplifying the model and reducing the number of features, the model may lose information that might lead to correct cancer classifications. The scientists focused the model on indicators of disrupted developmental pathways in cancer cells, striking a balance between reducing the number of features and still capturing the most essential information.
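This balancing act is the classic overfitting trade-off. A toy illustration (hypothetical data, unrelated to the study's actual features): a model with many free parameters can fit its training points almost perfectly yet generalize worse than a simpler one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of a smooth function; held-out points to test generalization.
x_train = np.linspace(-1, 1, 15)
y_train = np.sin(2 * x_train) + rng.normal(scale=0.15, size=x_train.size)
x_test = np.linspace(-0.95, 0.95, 50)
y_test = np.sin(2 * x_test)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_err = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_err, test_err

for d in (3, 14):
    tr, te = fit_and_score(d)
    print(f"degree {d:2d}: train MSE {tr:.5f}, test MSE {te:.5f}")
```

The degree-14 polynomial drives the training error toward zero by chasing the noise, while its error on fresh points grows; the same logic motivates compressing thousands of genes into a smaller set of developmental components before classification.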
Many pathways direct how cells reproduce, expand, change their form, and move as an embryo grows, as undifferentiated cells specialize into distinct organs. Cancer cells lose many of their specific characteristics as the tumor grows. At the same time, as they obtain the ability to multiply, change, and metastasize to other tissues, they begin to resemble embryonic cells in certain aspects. Many gene expression programs that control embryogenesis are reactivated or dysregulated in cancer cells. Creating an ML algorithm that can diagnose cancer The researchers contrasted the Cancer Genome Atlas (TCGA), which provides gene expression data for 33 tumor types, with the Mouse Organogenesis Cell Atlas (MOCA), which examines 56 distinct trajectories of embryonic cells as they grow and differentiate. Moiso explains, “Single-cell resolution tools have dramatically changed how we study the biology of cancer, but how we make this revolution impactful for patients is another question. With the emergence of developmental cell atlases, especially ones that focus on early phases of organogenesis such as MOCA, we can expand our tools beyond histological and genomic information and open doors to new ways of profiling and identifying tumors and developing new treatments.” The map of correlations between developmental gene expression patterns in tumors and embryonic cells was then used to train a machine learning algorithm. The researchers divided the gene expression of TCGA tumor samples into discrete components corresponding to a certain moment in a developmental trajectory and assigned a mathematical value to each component. The researchers subsequently created the Developmental Multilayer Perceptron (D-MLP), a machine-learning model that rates a tumor for its developmental components and forecasts its origin. Following training, the D-MLP was applied to 52 fresh samples of especially difficult malignancies of unknown origin that could not be identified using existing methods. 
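The paper does not spell out the D-MLP's exact architecture here, so the sketch below is only an illustrative stand-in for the general shape of such a model: a small multilayer perceptron that takes per-trajectory developmental scores as input and outputs a probability over candidate primary sites. All names, dimensions, and weights are assumptions, not the published D-MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each tumor sample is scored against N_COMPONENTS
# developmental trajectories, and the network predicts one of N_SITES sites.
N_COMPONENTS, N_HIDDEN, N_SITES = 56, 32, 4

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(0, 0.1, (N_COMPONENTS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_SITES))
b2 = np.zeros(N_SITES)

def predict_proba(x):
    """Forward pass: ReLU hidden layer, softmax output over candidate sites."""
    h = np.maximum(0, x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

sample = rng.random(N_COMPONENTS)  # one tumor's developmental-component scores
probs = predict_proba(sample)
print("predicted site index:", int(probs.argmax()))
```

In the real system the weights would be learned from TCGA tumors labeled with known primaries, and the argmax over the output probabilities would be the predicted tissue of origin.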
These were the most difficult cases seen at MGH in a four-year period commencing in 2017. Excitingly, the model classified the tumors into four groups and produced forecasts and other data that might aid in diagnosing and treating these patients. One sample, for example, came from a woman who had a history of breast cancer and showed evidence of an aggressive tumor in the fluid spaces surrounding the abdomen. Using the available methods, oncologists could not locate a tumor mass or categorize the cancer cells. However, the D-MLP strongly predicted ovarian cancer. Six months after the patient first presented, a mass in the ovary was discovered to be the source of the malignancy.

Furthermore, the study's comprehensive comparisons of tumor and embryonic cells offered promising and sometimes surprising insights into the gene expression profiles of different tumor types. For example, during the early stages of embryonic development, a rudimentary gut tube emerges, with the foregut producing the lungs and other adjacent organs and the mid- and hindgut constituting much of the digestive tract. The study found that lung-derived tumor cells had substantial parallels not just to the foregut but also to the mid- and hindgut-derived developmental trajectories. These findings imply that variations in developmental programs may one day be used in the same manner that genetic mutations are frequently used to develop tailored or targeted cancer therapies.

While the work provides a robust technique for tumor classification, it does have certain limitations. In the future, the researchers intend to improve the predictive value of their model by including more types of data, namely information from radiography, microscopy, and other forms of tumor imaging. Garg stated: "Developmental gene expression represents only one small slice of all the factors that could be used to diagnose and treat cancers.
Integrating radiology, pathology, and gene expression information together is the true next step in personalized medicine for cancer patients.”
Congress may well pass IoT legislation this year, and the two bills under consideration take different approaches.

Continuing our look at bills Congress may pass this year leads us to an issue area that has received attention but no legislative action: the Internet of Things (IoT). Many Members are aware of and concerned about the lax or nonexistent security standards for many such devices, which leave them open to attack or to being used as part of a larger botnet to attack other internet-connected devices. There are two bills with significant odds of being enacted, one better than the other, for it is a more modest bill and is attached to the Senate's FY 2021 National Defense Authorization Act. However, the other bill is finally coming to the House floor today, which may shake loose its companion bill in the Senate.

As the United States (U.S.) Departments of Commerce and Homeland Security explained in "A Report to the President on Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and Other Automated, Distributed Threats," insecure IoT poses huge threats to the rest of the connected world:

The Distributed Denial of Service (DDoS) attacks launched from the Mirai botnet in the fall of 2016, for example, reached a level of sustained traffic that overwhelmed many common DDoS mitigation tools and services, and even disrupted a Domain Name System (DNS) service that was a commonly used component in many DDoS mitigation strategies. This attack also highlighted the growing insecurities in—and threats from—consumer-grade IoT devices. As a new technology, IoT devices are often built and deployed without important security features and practices in place.
While the original Mirai variant was relatively simple, exploiting weak device passwords, more sophisticated botnets have followed; for example, the Reaper botnet uses known code vulnerabilities to exploit a long list of devices, and one of the largest DDoS attacks seen to date recently exploited a newly discovered vulnerability in the relatively obscure memcached software.

Later in the report, as part of one of the proposed goals, the departments asserted:

When market incentives encourage manufacturers to feature security innovations as a balanced complement to functionality and performance, it increases adoption of tools and processes that result in more secure products. As these security features become more popular, increased demand will drive further research.

However, I would argue there are no such market incentives at this point, for most people looking to buy and use IoT are not even thinking about security except in the most superficial ways. Moreover, manufacturers and developers of IoT have not experienced the sort of financial liability or regulatory action that might change the incentive structure. In May, the Federal Trade Commission (FTC) reached "a settlement with a Canadian company related to allegations it falsely claimed that its Internet-connected smart locks were designed to be 'unbreakable' and that it took reasonable steps to secure the data it collected from users."

As mentioned, one of the two major IoT bills stands a better chance of enactment. The "Developing Innovation and Growing the Internet of Things Act" (DIGIT Act) (S. 1611) would establish the beginnings of a statutory regime for the regulation of IoT at the federal level. The bill is sponsored by Senators Deb Fischer (R-NE), Cory Gardner (R-CO), Brian Schatz (D-HI), and Cory Booker (D-NJ) and is substantially similar to legislation (S. 88) the Senate passed unanimously in the last Congress but the House never took up.
In January, the Senate passed the bill by unanimous consent, but the House has yet to take it up. S. 1611 was then added as an amendment to the "National Defense Authorization Act for Fiscal Year 2021" (S. 4049) in July. Its inclusion in an NDAA passed by a chamber of Congress dramatically increases the chances of enactment. However, it is possible the stakeholders in the House that have stopped this bill from advancing may yet succeed in stripping it out of a final NDAA.

Under this bill, the Secretary of Commerce must "convene a working group of Federal stakeholders for the purpose of providing recommendations and a report to Congress relating to the aspects of the Internet of Things," including:

- identify any Federal regulations, statutes, grant practices, budgetary or jurisdictional challenges, and other sector-specific policies that are inhibiting, or could inhibit, the development or deployment of the Internet of Things;
- consider policies or programs that encourage and improve coordination among Federal agencies that have responsibilities that are relevant to the objectives of this Act;
- consider any findings or recommendations made by the steering committee and, where appropriate, act to implement those recommendations;
- how Federal agencies can benefit from utilizing the Internet of Things;
- the use of Internet of Things technology by Federal agencies as of the date on which the working group performs the examination;
- the preparedness and ability of Federal agencies to adopt Internet of Things technology as of the date on which the working group performs the examination and in the future; and
- any additional security measures that Federal agencies may need to take to—
- safely and securely use the Internet of Things, including measures that ensure the security of critical infrastructure; and
- enhance the resiliency of Federal systems against cyber threats to the Internet of Things.

S. 1611 requires this working group to have representatives from
specified agencies such as the National Telecommunications and Information Administration, the National Institute of Standards and Technology, the Department of Homeland Security, the Office of Management and Budget, the Federal Trade Commission, and others. Nongovernmental stakeholders would also be represented on this body. Moreover, a steering committee would be established inside the Department of Commerce to advise this working group on a range of legal, policy, and technical issues. Within 18 months of enactment of S. 1611, the working group would need to submit its recommendations to Congress, which would then presumably inform additional legislation regulating IoT. Finally, the Federal Communications Commission (FCC) would report to Congress on "future spectrum needs to enable better connectivity relating to the Internet of Things" after soliciting input from interested parties.

As noted, there is another IoT bill in Congress that may make it to the White House. In June 2019, the Senate and House committees of jurisdiction marked up their versions of the "Internet of Things (IoT) Cybersecurity Improvement Act of 2019" (H.R. 1668/S. 734), legislation that would tighten the federal government's standards for buying and using IoT. In what may augur enactment of this legislation, the House will take up its version today. However, new language in the amended bill coming to the floor, making clear that the IoT standards for the federal government would not apply to "national security systems" (i.e., most of the Department of Defense, Intelligence Community, and other such systems), suggests the roadblock that may have stalled this legislation for 15 months. It is reasonable to deduce that the aforementioned agencies made their case to the bill's sponsors or allies in Congress that these IoT standards would somehow harm national security if made applicable to defense IoT.
The bill text as released in March was identical for both bills, signaling agreement between the two chambers' sponsors, but the process of marking up the bills has resulted in different versions, requiring negotiation on a final bill. The House Oversight and Reform Committee marked up and reported out H.R. 1668 after adopting an amendment in the nature of a substitute that narrowed the scope of the bill and is more directive than the bill initially introduced in March. The Senate Homeland Security and Governmental Affairs Committee marked up S. 734 a week later, making its own changes to the March bill. The March version of the legislation unified two similar bills of the same title from the 115th Congress: the "Internet of Things (IoT) Cybersecurity Improvement Act of 2017" (S. 1691) and the "Internet of Things (IoT) Federal Cybersecurity Improvement Act of 2018" (H.R. 7283).

Per the Committee Report for S. 734, the purpose of the bill is to proactively mitigate the risks posed by inadequately secured IoT devices through the establishment of minimum security standards for IoT devices purchased by the Federal Government. The bill codifies the ongoing work of the National Institute of Standards and Technology (NIST) to develop standards and guidelines, including minimum security requirements, for the use of IoT devices by Federal agencies. The bill also directs the Office of Management and Budget (OMB), in consultation with the Department of Homeland Security (DHS), to issue the policies and principles necessary to implement the NIST standards and guidelines on IoT security and management. Additionally, the bill requires NIST, in consultation with cybersecurity researchers and industry experts, to publish guidelines for the reporting, coordinating, publishing, and receiving of information about Federal agencies' security vulnerabilities and the coordinated resolution of the reported vulnerabilities.
OMB will provide the policies and principles, and DHS will develop and issue the procedures necessary to implement NIST's guidelines on coordinated vulnerability disclosure for Federal agencies. The bill includes a provision allowing Federal agency heads to waive the IoT use and management requirements issued by OMB for national security, functionality, alternative means, or economic reasons.

In general, this bill seeks to leverage the federal government's ability to set standards through acquisition processes to ideally drive the development of more secure IoT across the U.S. The legislation would require the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) to work together to institute standards for IoT owned or controlled by most federal agencies. As mentioned, the latest version of this bill explicitly excludes "national security systems." These standards would need to focus on secure development, identity management, patching, and configuration management and would be made part of the Federal Acquisition Regulation (FAR), making them part of the federal government's approach to buying and utilizing IoT. Thereafter, civilian federal agencies and contractors would need to buy and use IoT that meets the new security standards. Moreover, NIST would need to create and implement a process for the reporting of vulnerabilities in information systems owned or operated by agencies, naturally including IoT. However, the bill would seem to make contractors and subcontractors providing IoT responsible for sharing vulnerabilities upon discovery and then distributing fixes and patches when developed.
And yet, this would seem to overlap with the recently announced Trump Administration vulnerabilities disclosure process (see here for more analysis) and language in the bill could be read as enshrining in statute the basis for the recently launched initiative even though future Administrations would have flexibility to modify or revamp as necessary. © Michael Kans, Michael Kans Blog and michaelkans.blog, 2019-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Michael Kans, Michael Kans Blog, and michaelkans.blog with appropriate and specific direction to the original content.
Characterized by smart manufacturing and interconnected devices, the fourth industrial revolution is here. But just like any other interconnected system, modern industrial control systems are potential targets for cyberattacks. Cybersecurity breaches can have far-reaching consequences for industrial organizations. According to The State of Industrial Cybersecurity 2019, produced by ARC Advisory Group on behalf of Kaspersky, reputational and environmental damage, injury or even death are serious risks if systems fail. And today, the stakes are becoming higher, simply because more systems are connected and dependent on one another.

According to Gartner (Magic Quadrant for Industrial IoT Platforms, Eric Goodness et al., 25 June 2019, report available to Gartner subscribers): By 2023, 30 percent of industrial enterprises will have full, on-premises deployments of IIoT platforms, up from 15 percent in 2019. Magic Quadrant for Industrial IoT Platforms, 2019

This also means there's a lot more that can go wrong, and there are many more attack vectors for cybercriminals to exploit. Many threats originate in third parties, such as suppliers, contracting firms and technology companies. And no modern industrial operation is free from risk.

Why are industrial operations a target?

Cyberwarfare is no longer the stuff of dystopian science fiction. It's a very real problem now that society has grown so dependent on modern technology. Attacks may be carried out by state actors, hacktivists (internet activists) – or even competitors bent on industrial espionage. With so many new avenues for attackers to exploit, such as the industrial internet of things (IIoT), it's never been more important for cybersecurity to take center stage in any industrial digital transformation project. Given that a nation's ability to function depends heavily on the availability of its critical infrastructure, it's easy to see why major industrial operations are popular targets.
It's a highly efficient way to cripple a rival state. But it's not just state actors that the industrial sector needs to worry about. Hacktivists, who are often not associated with any particular state, may also target certain industries for political or ethical reasons. Fortunately, there's an increased focus on OT/ICS (operational technology in industrial control systems) throughout the globe to help keep industrial businesses, and the people that depend on their services, better protected.

The state of industrial cybersecurity today

The latest annual report, The State of Industrial Cybersecurity 2019, explores the worldwide status quo and future development of industrial cybersecurity. The report uncovers what nearly 300 industrial companies and organizations think about the landscape for industrial cybersecurity today, and what measures and processes are needed to prevent cyber-incidents in critical infrastructures and industrial enterprises. Here are some of the report's key findings:

We've got a people problem

Despite automation, the human factor can still put industrial processes at risk: employee errors or unintentional actions were behind one in two incidents. The growing complexity of industrial infrastructures demands more advanced protection and skills. But organizations are experiencing a shortage of professionals to handle new threats and low awareness among employees. They're worried that their OT/ICS network operators are not fully aware of the behavior that can cause cybersecurity breaches, which could explain why employee errors cause half of all ICS incidents – such as malware infections – and also more serious targeted attacks.

Companies are seeking to improve protection for industrial networks. But this can only be achieved if they address the risks related to the lack of qualified staff and employee errors.
Taking a comprehensive, multi-layered approach – which combines technical protection with regular training of IT security specialists and industrial network operators – will keep networks protected from threats and ensure skills stay up to date.

Brand Manager, Kaspersky Industrial Cybersecurity

Protecting Industrial IoT and digitization

In addition to a technical and awareness boost for industrial cybersecurity, organizations need to consider specific protection for Industrial IoT, which can become highly connected externally: four in ten companies are ready to connect their OT/ICS network to the cloud, using preventive maintenance or digital twins.

The growing interconnection between IIoT edge devices and cloud services continues to stand as a security challenge. It was a major driver for the creation of the IIC Industrial Internet of Things Security Framework, as well as the subsequent best practices documents and recent IoT Security Maturity Model.

Dr. Jesus Molina, Chair, IIC Security Working Group, and Director of Business Development, Waterfall Security Solutions

Digitization of industrial networks and adoption of Industry 4.0 standards are in the pipeline for many industrial companies. Four out of five organizations see operational network digitalization as an important or very important task for the year ahead.

OT/ICS is high priority

The good news? OT/ICS cybersecurity is becoming a top priority for industrial companies (87 percent). But to achieve the right protection, they need to invest in dedicated measures and highly qualified professionals to make them work effectively. Despite stating it as a priority, only just over half of companies (57 percent) have the budget they need for industrial cybersecurity.

But what are the real chances of a cyberattack?

Four in ten of the businesses surveyed hadn't experienced any kind of cyberattack in the last year. But worryingly for the rest, nearly one in three haven't implemented an incident response program.
Rather than keeping everything crossed, isn’t it time to invest in improving your business’s cyber-defenses? Just a thought…
Published Sunday, Jun 19, 2022. By Daryn Kara Ali

For the police, the use of facial recognition technology in a technologically accelerated world is simply a means to optimize internal operations: delivering a safer environment, conducting faster investigations, and catching criminals. But for the public, police use of facial recognition is a cause of concern: a potential threat of misguided arrests, surveillance, and prominent human rights breaches and violations. Yet the fact remains that using facial recognition in law enforcement is a slippery slope. It can be deemed an added value to law enforcement – when properly adopted and used – while simultaneously increasing the risks of identity theft, stalking, and harassment from the public’s perspective.

In the early 2000s, the first adoptions of facial recognition technology for law enforcement emerged in the U.S. This was bound to happen in a world where technology is woven into our daily lives. Early deployments of facial recognition by state and local police were notably unreliable, as implementations lacked sophistication in both camera systems and the algorithms behind their Artificial Intelligence (AI). Now, these systems can capture people and their features in the street and identify them in real time. Some law enforcement offices and departments have been equipped with devices that have facial recognition systems embedded in them, which can be used to run individuals against law enforcement databases. The identification process has become as simple as a button click, with various police departments armed with the technological means to detect any person committing a crime through these intelligent systems – as long as the crime is caught on a camera in the street.
Ever since governments worldwide equipped their federal and state police with facial recognition systems, budgets for governmental facial biometrics markets have skyrocketed, and their value is set to reach a colossal $8.5 billion by 2025, up from 2018’s $13.9 million. Commanded by intelligent technology such as AI, these systems enable law enforcement officers to analyze photos of individuals – either taken in the field or extracted from saved images and videos – and compare them with governmental databases such as mugshots, driver’s licenses, and more. Integrating law enforcement and facial recognition is accelerating the adoption of digital security approaches.

In recent years, facial recognition has proved to be one of the few biometric techniques that deliver highly accurate – if intrusive – results. By comparing various photos of people’s faces in a computer program, the technology can help law enforcement officers identify a person of interest from digital images or simply a video frame extracted from a particular source. Officers can authenticate an individual by comparing their facial features against faces in law enforcement databases. In criminal investigations, these AI-based systems have been adopted by some of the most prominent federal agencies, such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA), and even some Big Tech giants like Facebook – now Meta – Apple, and the Taiwanese manufacturer ASUS.

By inspecting and pre-processing the input image, face recognition in criminal investigations can deliver an accurate response when analyzing the image of a person of interest within the system. Once the image is examined, it is cataloged and arranged based on specific identification features, such as the distance between the eyes, the length of the jaw, and the shape of the nose and mouth.
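The comparison step described above can be sketched in miniature. The feature vectors, record names, and threshold below are purely illustrative – real systems use learned embeddings with hundreds of dimensions – but the nearest-match logic is the same:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the database entry most similar to the probe, or None if
    nothing clears the threshold."""
    best_id, best_score = None, threshold
    for person_id, features in database.items():
        score = cosine_similarity(probe, features)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy feature vectors standing in for measurements such as eye distance,
# jaw length, and nose shape (record names and values are illustrative only).
db = {"mugshot_1042": [0.62, 0.30, 0.81], "license_2210": [0.10, 0.95, 0.22]}
print(best_match([0.60, 0.32, 0.80], db))  # → mugshot_1042
```

A threshold set too low produces the kind of false-positive matches the article goes on to discuss; real deployments have to tune this trade-off carefully.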
The facial recognition system strictly focuses on executing this process for criminal identification purposes. As the AI-driven facial recognition market proliferates – in terms of both accessibility for federal and state police offices and popularity within these governmental establishments – it is vital to tackle the rising privacy concerns of such implementations. Three core controversies accompany this accelerated adoption.

There is no denying the significance of using facial recognition technologies. Storing people’s data in the cloud is often presented as the most secure and reliable approach to preventing a data breach, but if the data gathered for facial recognition is not properly secured and encrypted, a cloud-based system is no different from any other. To ensure the highest level of protection and avoid exposing data to privacy violations, encryption is the key.
The Massachusetts Institute of Technology Center for Collective Intelligence’s Climate CoLab seeks to find a climate change solution for the United States and an effective way to evaluate it. The Climate CoLab uses collective intelligence to leverage the experience of an online community of 75,000 people and 200 experts to develop project proposals, select finalists, provide feedback, and determine the best climate-related projects. “We think that by tapping into collective intelligence…we’ll be able to come up with better solutions that we’ve been able to submit so far,” said Laur Fisher, project manager of the Climate CoLab. In 2016, the Climate CoLab held 17 contests, which evaluated 579 submissions. The grand prize winner, the Vancouver-based company Climate Smart, won $10,000 to continue the work on its Business Energy and Emissions Profile (BEEP) Dashboard. The dashboard is a carbon mapping tool that links cities and companies working to cut emissions by helping cities understand their energy usage according to the industry. “Cities are the front lines in the climate challenge, yet often have limited resources,” said Elizabeth Sheehan, chief executive officer and president of Climate Smart. “Our software helps them get the most bang for their buck, so they can be more effective in their work.” Winners don’t have to be companies, according to Fisher. “Anyone can apply to be a member of the Climate CoLab,” Fisher said. “We have a diversity of people who are members.” Individuals don’t have to register to view the proposals and learn about the collective intelligence methods because all of the proposals are online for anyone to view. The CoLab includes professors, engineers, designers, technological professionals, and others who are interested in mitigating climate change. The members are evenly split in age, 86 percent have graduated from college, and 20 percent are students. 
The projects that received an honorable mention this year include a device that decreases methane emissions during rice farming, deployment of microgrids to the developing world using open-sourced hardware, and an app that monitors an individual’s energy footprint. “The most important [projects] are solutions or proposals that solve multiple problems at once,” Fisher said. The 2015 winner created a SunSaluter, which follows the sun throughout the day to create solar energy, while also filtering drinking water. The best projects consider issues like economic equality and racial justice along with climate change, according to Fisher. The 2013 winner took aerial infrared photos to track heat loss in buildings. The technology can determine whether certain houses in the same neighborhood are losing more heat than others so that homeowners can determine where the problem is occurring and how to fix it. The Climate CoLab also includes the Impact Assessment Fellows, a group that built rules on how to evaluate greenhouse gas mitigation. In the future, the CoLab wants to take on a combination of problems to find out how collective intelligence affects other topics, such as transportation and renewable energy. This method of thinking will lead to the most actionable and effective climate solution for the nation, according to Fisher. “How can they build connections between proposals and create a more complex solution to a more complex problem?” Fisher said.
Why Every Web Site is a Target for Cyber Crime From Club Nintendo to a Personal Blog, No Web Site is Immune From Cyber Crime All too commonly, a discussion around risk will land on the topic of how a target is selected for cyber crime. Huge financial gain? Billions of users? High profile? While all of these make great motives for attack, sometimes the goal is much simpler: Breach as many places as you can to collect personal information and credentials. The recent attack against the video game-related "Club Nintendo" web site involved over 15 million brute-force attempts against their user's accounts, resulting in nearly 24,000 compromised accounts. While this level of account compromise represents less than one percent of their entire reported user base, that statistic won’t comfort the people whose accounts were compromised, suffering a huge blow to their privacy and online safety. Even non-financial information helps identity thieves Once an account is compromised the information it holds presents a serious threat of identity theft and cyber crime, even if the account didn’t hold financial information. Information like a full name, home address, email address, and password can be a foothold that an attacker can use to break into other accounts and escalate their level of knowledge about a victim. When people use the same passwords for multiple accounts (an all too common practice) the risk of additional compromise is even higher. Reusing passwords leaves you vulnerable Most users don't think twice about typing in the same password for many sites. With so many online accounts, it's easy to forget the passwords you've used, where you’ve repeated them, and how often you change them. 
While an increasing number of end users are utilizing technologies such as LastPass and 1Password to generate complex and unique passwords, the majority of end users still memorize a small set of passwords that they use broadly across Internet sites, corporate accounts, and personal computers. While many online breaches allow an attacker to try to crack the password hashes of a site after the fact, in this case the attackers already knew what the passwords were, since they successfully logged into the site with the guessed credentials. Once the password has been determined, attackers can steal information from the site and start trying to log in to other sites that the victim may be using.

In the case of Club Nintendo, it's concerning that they didn't have mechanisms in place to alert their staff about the attack. After all, they were being brute-forced for close to three weeks straight. Judging by the success of the attack, it's also likely that they didn't require their users to create particularly strong passwords. It's also likely that they didn't have an account lock-out feature in place, since it would have tripped for multiple accounts during the brute-force attempts, giving them an earlier warning of the attack.

The more sites you use on the Internet, the more times you reuse a password, and the weaker your passwords, the more likely you are to end up receiving a notice that your account was compromised. Until organizations put more focus into strong security controls like two-factor authentication, we're all a target for attackers and cyber crime.
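The account lock-out control mentioned above is straightforward to implement. The sketch below is illustrative – the threshold and window are arbitrary choices, and a production system would persist state, notify staff, and rate-limit by source IP as well:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # failed logins allowed before the account locks
LOCKOUT_SECONDS = 900   # look-back window: 15 minutes

_failures = defaultdict(list)  # account name -> timestamps of recent failures

def record_failure(account, now=None):
    """Call this whenever a login attempt for `account` fails."""
    _failures[account].append(time.time() if now is None else now)

def is_locked(account, now=None):
    """True if the account has accumulated too many recent failures."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[account] if now - t < LOCKOUT_SECONDS]
    _failures[account] = recent  # forget failures outside the window
    return len(recent) >= MAX_ATTEMPTS
```

A brute-force run like the Club Nintendo attack would trip this check on many accounts at once, which is itself a strong early-warning signal worth alerting on.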
By Ellen Friedman

Unlike Las Vegas, what happens at the edge doesn’t stay at the edge. And that’s half the challenge. People commonly think of edge computing as a glorified form of data acquisition or a local digital process control. In reality, edge is a lot more than both of those. It’s true that edge involves many data sources, usually at geo-distributed locations. But keep in mind, it’s the aggregate of that data that holds the key to value and insights. Analysis of the combined data is carried out at core data centers, and actions guided by the resulting insights often need to be carried out at edge locations. Therefore, a surprising challenge of edge systems is efficient traffic not only from edge to core but also back again. Scale is an edge issue as well. Incoming data at edge sources is often huge, and putting that together from a large number of edge locations can create truly enormous amounts of data. A classic example of this is found in the automotive manufacturing industry with its autonomous car development. Car manufacturers need ready access to global data, working with many petabytes of data per day. They must also meet critical key performance indicators (KPIs), including measuring how long it takes to collect data from test cars, how long to process, and how long to deliver insights. Of course, not all edge systems involve this extreme scale of data, but most edge situations do involve too much data to transfer it all from edge to core verbatim. This means the data must be processed for data reduction at the edge before sending it to the core. All this – data analysis, modeling, and data movement – need to be efficiently coordinated at scale.
To better understand the challenges of edge systems and how they can be addressed, let’s dig into what happens at the edge, at the core, and in between.

What happens at the edge

Edge computing generally involves systems in multiple locations, each doing data ingestion, temporary data storage, and running multiple applications for data reduction prior to transport to core data centers. These tasks are shown in the left half of Figure 1. Analytics applications are used for pre-processing and data reduction. AI and machine learning models are also employed for data reduction, such as making decisions about what data is important and should be conveyed to core data centers. In addition, models allow intelligent action to take place at the edge. Another typical edge requirement is to inventory what steps have taken place and what data files have been created. All this must happen at many locations, none of which will have a lot of on-site administration, so edge hardware and software must be reliable and remotely managed. With these needs, self-healing software offers a huge advantage.

What happens at the core

The activities that happen at the core, seen on the right side of Figure 1, resemble edge processes but with a global perspective, using collective data from many edge locations. Here analytics can be more in-depth. This is where deep historical data is used to train AI models. As at edge locations, the core contains an inventory of actions taken and data created. The core is also where the connection is made to the high-level business objectives that ultimately underlie the goal of edge systems. Data infrastructure at the core is especially challenging since data from all of the edge systems converges there. Data from the edge (or data resulting from processing and modeling at the core) can be massive or can consist of a huge number of files. The infrastructure must be robust in handling large scale, both in terms of the number of objects and the quantity of data.
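As a toy illustration of the edge-side data reduction described above, the sketch below downsamples a sensor stream before transport but always forwards anomalous readings, so the core still sees everything important. The sampling interval and spike threshold are arbitrary, illustrative choices:

```python
def reduce_for_transport(readings, keep_every=10, spike_threshold=3.0):
    """Downsample a stream of sensor readings at the edge, keeping every
    Nth sample plus any value that deviates sharply from the batch mean."""
    if not readings:
        return []
    mean = sum(readings) / len(readings)
    variance = sum((x - mean) ** 2 for x in readings) / len(readings)
    std = variance ** 0.5
    reduced = []
    for i, x in enumerate(readings):
        is_spike = std > 0 and abs(x - mean) > spike_threshold * std
        if i % keep_every == 0 or is_spike:
            reduced.append((i, x))  # keep the index so the core can reorder
    return reduced

# 100 steady readings with one spike: ~10 samples plus the anomaly survive.
stream = [20.0] * 100
stream[57] = 95.0
print(len(reduce_for_transport(stream)))  # 11 points instead of 100
```

Real edge pipelines apply the same idea with model-driven importance scoring rather than a simple standard-deviation test, but the transport saving is the point: only a fraction of the raw data ever crosses the edge-to-core link.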
Of course, analysis and model development workflows are iterative. As an organization learns from the global aggregate of edge data, new AI models are produced and updated, and analytics applications are developed that must be deployed at the edge. That brings us to the next topic: what needs to happen between edge and core.

Traffic between edge and core

Just as Figure 1 lists the key activities at the edge or in the core, it also shows the key interaction between the two: the movement of data or code. Obviously, the system needs to move ingested and reduced data from edge locations to the core for final analysis. But people sometimes overlook an unexpected journey: moving new AI and machine learning models or updated analytics programs that have been developed by teams at the core back to the edge. In addition, analysts, developers, and especially data scientists sometimes need to inspect raw data at one or more edge locations. Having direct access from the core to raw data at edge locations is very helpful. Almost all large-scale data motion should be done using the data infrastructure, but it can be useful to have direct access to services running at the edge or in the core. Secure service meshes are useful for this, particularly if they use modern zero-trust, workload authentication methods such as the SPIFFE protocol.

Now that we’ve identified what happens at edge, core, and in between, let’s look at what data infrastructure needs to do to make this possible.

HPE Ezmeral Data Fabric: from edge to core and back

HPE is known for its excellent hardware, including the Edgeline series (specifically designed for use at the edge). Yet HPE also makes the hardware-agnostic HPE Ezmeral Data Fabric software, designed to stretch from edge to core, whether on-premises or in the cloud. HPE Ezmeral Data Fabric lets you simplify system architectures and optimize resource usage and performance.
Figure 2 shows how the capabilities of the data fabric are used to meet the challenges of edge computing. Computation can be managed at edge or core using Kubernetes to manage containerized applications. HPE Ezmeral Data Fabric provides the data layer for such applications. And thanks to a global namespace for HPE Ezmeral Data Fabric, teams working at the data center can remotely access data that is still at the edge. Ellen Friedman is a principal technologist at HPE focused on large-scale data analytics and machine learning. Ellen worked at MapR Technologies for seven years prior to her current role at HPE, where she was a committer for the Apache Drill and Apache Mahout open source projects. She is a co-author of multiple books published by O’Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.
- Duration: 2 days
- Price: $1,595
- Test Level: 0
- Certifications: JNCIS-ENT
- Exams: N/A

This two-day course provides students with intermediate routing knowledge and configuration examples. The course includes an overview of protocol-independent routing features, load balancing and filter-based forwarding, OSPF, BGP, IP tunneling, and high-availability (HA) features. Through demonstrations and hands-on labs, students will gain experience in configuring and monitoring the Junos OS and monitoring device operations. This course uses Juniper Networks SRX Series Services Gateways for the hands-on component, but the lab environment does not preclude the course from being applicable to other Juniper hardware platforms running the Junos OS.

Network engineers, technical support personnel, reseller support engineers, and others responsible for implementing and/or maintaining the Juniper Networks products covered in this course.

- Describe typical uses of static, aggregate, and generated routes.
- Configure and monitor static, aggregate, and generated routes.
- Explain the purpose of Martian routes and add new entries to the default list.
- Describe typical uses of routing instances.
- Configure and share routes between routing instances.
- Describe load-balancing concepts and operations.
- Implement and monitor Layer 3 load balancing.
- Illustrate benefits of filter-based forwarding.
- Configure and monitor filter-based forwarding.
- Explain the operations of OSPF.
- Describe the role of the designated router.
- List and describe OSPF area types.
- Configure, monitor, and troubleshoot OSPF.
- Describe BGP and its basic operations.
- Name and describe common BGP attributes.
- List the steps in the BGP route selection algorithm.
- Describe BGP peering options and the default route advertisement rules.
- Configure and monitor BGP.
- Describe IP tunneling concepts and applications.
- Explain the basic operations of generic routing encapsulation (GRE) and IP over IP (IP-IP) tunnels.
- Configure and monitor GRE and IP-IP tunnels.
- Describe various high availability features supported by the Junos OS.
- Configure and monitor some of the highlighted high availability features.

Chapter 1: Course Introduction

Chapter 2: Protocol-Independent Routing
- Static Routes
- Aggregated Routes
- Generated Routes
- Martian Addresses
- Routing Instances

Chapter 3: Load Balancing and Filter-Based Forwarding
- Overview of Load Balancing
- Configuring and Monitoring Load Balancing
- Overview of Filter-Based Forwarding
- Configuring and Monitoring Filter-Based Forwarding

Chapter 4: Open Shortest Path First
- Overview of OSPF
- Adjacency Formation and the Designated Router Election
- OSPF Scalability
- Configuring and Monitoring OSPF
- Basic OSPF Troubleshooting

Chapter 5: Border Gateway Protocol
- Overview of BGP
- BGP Attributes
- IBGP Versus EBGP
- Configuring and Monitoring BGP

Chapter 6: IP Tunneling
- Overview of IP Tunneling
- GRE and IP-IP Tunnels
- Implementing GRE and IP-IP Tunnels

Chapter 7: High Availability
- Overview of High Availability Networks
- Graceful RE Switchover
- Nonstop Active Routing

Appendix A: IPv6

Appendix B: IS-IS

Students should have basic networking knowledge and an understanding of the OSI model and the TCP/IP protocol suite.
Reopening schools while maintaining health and safety

The recent COVID-19 pandemic forced millions of children to transition to a remote learning environment – far too quickly for many districts to make adequate preparations. While some schools may have been less than fully prepared for this particular pandemic, advance crisis planning helped other schools manage the necessary transition more smoothly. Going forward, the reopening of schools need not come as a surprise: careful planning and execution can protect student and staff health and safety throughout the reopening process.

On a positive note, the government is currently enacting a pandemic relief package that includes a significant funding allocation for K-12 schools nationwide in the coming months. If the proposal is implemented, this funding could supply up to $128 billion for all school districts – nearly $2,500 per student nationwide – and can be used to improve school health and safety systems, not only for present reopening needs but also for longer-term improvement projects.

In the sections below, we discuss the three distinct phases of reopening and provide suggestions on how to maximize the effectiveness of each phase.
City leaders in Portland, Oregon, yesterday adopted the most sweeping ban on facial recognition technology passed anywhere in the United States so far. The Portland City Council voted on two ordinances related to facial recognition: one prohibiting use by public entities, including the police, and the other limiting its use by private entities. Both measures passed unanimously, according to local NPR and PBS affiliate Oregon Public Broadcasting.

The first ordinance (PDF) bans the "acquisition and use" of facial recognition technologies by any bureau of the city of Portland. The second (PDF) prohibits private entities from using facial recognition technologies "in places of public accommodation" in the city. Both ordinances hold that facial recognition technology has a disparate impact on underprivileged communities, particularly people of color and people with disabilities, and that those disproportionate effects fall afoul of the city's commitment to "human rights principles such as privacy and freedom of expression." Any framework for city use of facial recognition and other technologies must include "impacted communities and transparent decision-making authority" to ensure that the city does not "harm civil rights and civil liberties."

The city also explicitly recognizes a degree of privacy as one of those rights. "Portland residents and visitors should enjoy access to public spaces with a reasonable assumption of anonymity and personal privacy," the second ordinance reads. "This is true particularly for those who have been historically over-surveilled and experience surveillance technologies differently." The technologies "have been documented to have an unacceptable gender and racial bias," the ordinance continues.
Since the city does not have the infrastructure at this time to evaluate, rate, and certify every possible technology in-depth for bias, Portland "needs to take precautionary actions until these technologies are certified and safe to use and civil liberties issues are resolved." The ordinance governing city agencies is effective immediately; the ban on private use will go into effect on January 1. As facial recognition technology has proliferated, so, too, have concerns about its use and impact. Repeated studies have indeed shown that facial recognition algorithms currently in use do not work equally well on all populations. By and large, software tends to work reasonably well on men, younger people, and white people but performs significantly worse on women, older people, and people who are Black, Native American, or Asian and Pacific Islanders. The addition of masks to the mix does not improve the accuracy rate of the software. Portland is the fifth US city to pass some kind of restriction on facial recognition software. The city of San Francisco banned police and government use of facial recognition software in 2019. Nearby Oakland, California, followed later that year. In between San Francisco and Oakland, Somerville, Massachusetts, a Boston suburb, passed a similar ban, and the city of Boston proper joined the club this summer. Even agencies that do rely on facial recognition technologies may not exactly be getting their money's worth out of it. Following a complaint by the ACLU that Detroit police arrested the wrong man based on a software-generated false-positive match, Detroit's police chief admitted the software his department uses would fail "96 percent of the time."
Edge infrastructure is the foundation of the platforms, applications, and media that create a perception of immediacy on the devices we use in our daily lives, such as televisions, tablets, and phones. These platforms find in this infrastructure their best ally for improving the end user's experience, given the distance from where content is produced and the goal of distributing it in a shorter period of time (latency). A user may well ask what a few milliseconds of latency have to do with them when they still struggle with the simple fact of staying connected.

Why is video streaming so slow?

This happens because of the distance the content must travel. Some providers fetch content from the other side of the world, while other users access content just around the corner. The user's perception is that the service provider has left them stranded when a page fails to load and, eventually, that they have no internet connection at all. Meanwhile, a user located close to the distributor accesses the same site without any trouble.

From the customer's point of view, today's networks have multiple connection paths. Although we do not control how our devices route traffic, a large share of internet users believe the internet is unidirectional, with only a single possible way to access information – and therefore, if I cannot access it immediately, I will have to wait until it is my turn.

What does an Edge Datacenter offer?

Since the Internet is a universe in which the sources of information are distributed across different infrastructures and locations, the path data takes can be the difference between watching a paused or an uninterrupted video. This is where Edge infrastructures make a big difference. The main characteristic of these topologies is that they allow service providers to access content in a more efficient and direct way.
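The distance argument can be made concrete with a back-of-the-envelope calculation. Signals in optical fiber travel at roughly two-thirds the speed of light – about 200 km per millisecond – which puts a hard floor under round-trip latency regardless of bandwidth. The distances below are illustrative:

```python
# Rough lower bound on round-trip latency imposed by distance alone.
# Light in optical fiber travels at roughly 2/3 the speed of light in vacuum.
FIBER_SPEED_KM_PER_MS = 200  # ~200,000 km/s, i.e. ~200 km per millisecond

def min_rtt_ms(distance_km):
    """Best-case round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Content served from an edge node in the same city vs. across the world:
print(min_rtt_ms(50))     # edge node nearby: ~0.5 ms
print(min_rtt_ms(15000))  # content on the other side of the world: ~150 ms
```

Real paths are longer than the straight-line distance and add queuing and processing delay on top, so the gap between an edge node and a distant origin is even larger in practice – which is exactly the gap an Edge Datacenter closes.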
Edge Datacenters located in the sectors with the highest flow of information in each city allow all providers to connect and access the content hosted on Edge servers, avoiding unnecessary network segments and making it possible to access the required service with relative ease. From a theoretical point of view, the Edge function solves the end user's perception-of-quality problem. Now, the real challenge is to include this type of technology in new applications and telecommunications infrastructure developments.

What are the main challenges of the Internet of Things (IoT) in Latin America?

Adoption faces great barriers rooted in the Latin American idiosyncrasy. Although the process was accelerated as a direct effect of the global COVID-19 pandemic, it has not consolidated enough to overcome the obstacles that put edge infrastructure development at risk. The main challenges are:

- Business technology culture
- Understanding of interconnection in Latin America
- Integral development of a solution

The technological culture at the corporate level is a barrier that not only discourages investment in edge infrastructure but also makes markets behave statically. Despite the continuous progress of communication and technology, new markets and products seem to be slowed down or paused before being offered in Latin America, and what sustains this corporate practice is that users are already accustomed to price over quality.

The tendency of the Latin American user is to treat internet service like electric power service – a fixed supply without variation, whose quality is measured only by continuity of service. But this perspective must change: users should be more demanding about parameters such as guaranteed transfer rates regardless of the time of day, network coverage, and flexible contracts oriented to quality rather than quantity.
An edge infrastructure can certainly deliver those guarantees, and an end user can tell the difference between a service connected to an edge node and one connected to a common node by its speed and the availability of content. A second factor in this cultural transformation is knowledge of the communications infrastructure. To the vast majority of customers it is completely transparent: whether the infrastructure sits nearby or somewhere in the cloud, the service looks the same. That assumption is a big mistake, because each arrangement carries its own risks and therefore different levels of service, which again end up decided by the price war rather than the customer's real need.

What is the digital divide in Latin America?

Service providers must start evaluating what they can offer beyond a simple internet connection, whether content distributed through the cloud or content generated by the user. At this point the panorama becomes even more challenging: in a young, barely developing market such as Latin America's, the new edge infrastructure that is needed, both physical and virtual, is far from being realized.

Network interconnection also plays a significant role in the development of edge technologies, because in Latin American countries deploying interstate fiber or high-traffic-density infrastructure is highly complex given the characteristics of each territory. Even in population centers the problem persists: the geography in which most Latin American cities sit works against any deployment and makes telecommunications infrastructure barely viable.
As a result, the places that do offer good installation conditions keep being saturated with network components, putting the resilience of the network at risk. Two examples are the deployment of antennas and the siting of datacenters, which tend to cluster where power and space are readily available. This concentration limits the expansion of connectivity capacity for the population around it, creating communication islands that remain permanently constrained by the availability of connectivity.

How can the digital gaps in the region be reduced?

These digital and infrastructure gaps create the need for a comprehensive solution. Approaching the problem solely from the perspective of a datacenter that manages and supplies environments loses sight of connectivity, so efforts to promote edge infrastructure fall short. Likewise, a vision focused on connecting networks without considering the content, where it should be hosted, and the point from which it needs to be distributed completely limits the solution. Companies must abandon the position that the customer arrives with a problem and the provider simply delivers a solution. This one-way model should be resisted, because in practice neither service providers nor customers fully understand the real need: the requirement is often alien to the customer's operational core, so the result is overestimated or underestimated, and there is no choice but to learn from the mistake.

What strategies can contribute to digital transformation in Latin America?
For some companies, proposing this type of infrastructure deployment means proposing new paradigms and methods of service delivery. Since its inception, this has been the operational flag of EdgeUno: a full focus on quality, connectivity, user integration with content, and easy access within the technological environment. It also means breaking with the traditional scheme in which large infrastructures were considered the only solution, in favor of a decentralized, customized one, and abandoning the commercial notion that high speed to any content can simply be promised; that promise is not entirely accurate, since speed depends directly on the physical infrastructure and network agreements. The service focuses on the customer's needs and sets aside the widespread marketing exercise of contracting hundreds of megabits of connectivity that are then shared among many users, creating the feeling that you always need more, when what you actually need is the same capacity at better quality. EdgeUno has created dedicated network segments that guarantee quality of service and fully meet the service expectations of our users. Based on our experience, the first step to change the current trend in the edge market is an exercise in adaptation that brings together, includes, and integrates network players, content owners, internet providers, and end users as an ecosystem rather than as isolated players.
Data is arguably the most important asset that your organization has. Your database is a key part of your organization's infrastructure, and if it goes down, it can create major problems. Data is key for handling daily activities and for making short- and long-term decisions. For an organization to run effectively and efficiently, it needs a reliable database. Given the essential role that data plays, many organizations avoid any sort of data-related innovation or change out of fear of creating problems. Ironically, this can itself lead to issues with the database, making it difficult for those organizations to keep their database competitive. The good news, however, is that adequate database reliability ensures that organizational data is protected, that engineers are free to innovate, and that necessary work is done efficiently.

What is Database Reliability?

Database reliability is defined broadly to mean that the database performs consistently without causing problems. More specifically, it means that data is accurate and consistent. For data to be considered reliable, there must be:
- Data integrity, which means that all data in the database is accurate and consistent. Data consistency is defined broadly to include the type and amount of data.
- Data safety, which means that only authorized individuals can access the database. Data safety also includes preventing any type of data corruption and ensuring that data is always accessible, even in the event of unforeseen circumstances such as emergencies or natural disasters.
- Data recoverability, which means there are effective procedures in place to recover any lost data. This is key to database reliability, ensuring that even if other safety measures fail, there is a system for recovering data.
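To make the data-integrity point concrete, one simple technique is to store a checksum alongside each record and verify it on read. The sketch below is purely illustrative (the record layout is invented for the example; real databases enforce integrity with constraints and page-level checksums):

```python
import hashlib
import json

def with_checksum(record: dict) -> dict:
    """Attach a SHA-256 checksum computed over the record's canonical JSON."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {"data": record, "checksum": hashlib.sha256(payload).hexdigest()}

def verify(stored: dict) -> bool:
    """Recompute the checksum and compare; False means silent corruption."""
    payload = json.dumps(stored["data"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == stored["checksum"]

row = with_checksum({"customer_id": 42, "balance": 100.0})
assert verify(row)

row["data"]["balance"] = 999.0   # simulate corruption after the fact
assert not verify(row)
```

Any change to the stored data, accidental or malicious, makes the recomputed checksum disagree with the stored one, so corruption is detected instead of silently propagating into decisions.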
The Importance of Database Reliability

Organizational databases store a broad range of information, including customer information, sales information, financial transactions, vendor information, and employee records. This information is essential for maintaining the health of organizations and plays a central role in everything from competitive strategy to daily logistics. In many ways, data works as the eyes and ears of the organization, and without it, organizations lack the necessary information to make informed decisions. It’s the database that makes this information accessible and usable. If an organization’s database is not reliable, consistent, or accurate, it can lead to making bad or misinformed decisions. Further, as the database is a central part of organizational infrastructure, if it goes down, it can lead to substantial issues throughout the organization. This means that database reliability is and must remain a central concern for businesses. Yet, in the current environment, data problems are increasingly complex, making it continually difficult to create, manage, and manipulate databases. Given the importance of database reliability and the increased demands that come with database management, it’s important that organizations have advanced and innovative approaches to ensuring database reliability. A couple of such approaches that organizations should consider implementing are database reliability engineering and the use of effective database management systems.

Database Reliability Engineering

Database reliability engineering is an effective way for organizations to ensure database reliability and to ensure that organizations are able to effectively use data. Database reliability engineering is generally driven by the database reliability engineer, a data administrator that works to ensure that data is protected and accessible.
Among other things, this increased reliability provides the safety and support needed to enable innovation and facilitate work. It’s worth noting that the phrase “database reliability engineering” was first coined by Laine Campbell and Charity Majors in their book, Database Reliability Engineering. This book can serve as a good resource for organizations looking to strengthen their database or wanting to implement new systems for promoting database reliability.

The Role of the Database Reliability Engineer (DBRE)

First and foremost, the Database Reliability Engineer, or DBRE, is an enabler who allows other data and software engineers to work efficiently without causing problems. The DBRE lets engineers work with shared data while ensuring that all data is protected, reliable, and accessible. In addition to this central role as an enabler, database reliability engineers:
- Utilize automation. A big part of database reliability engineering is automating tasks, particularly safety operations such as failovers, backups, and back-pressure mechanisms. It’s this critical automation that lets engineers work quickly and efficiently without having to worry about losing or corrupting data. These measures help to protect data and to encourage innovation among engineers.
- Conduct risk analysis. Whenever considering automation, database management, or new tools, it’s important to conduct a thorough risk analysis. It’s the DBRE’s role to consider potential risks and then make informed decisions.
- Make decisions about scaling. It’s the role of the DBRE to anticipate capacity needs and make timely decisions about scaling. Doing so helps maintain database reliability, ensuring that the database meets organizational needs.
- Educate other engineers. Part of the DBRE role is knowledge sharing and educating other data and software engineers on everything from the database to the organization’s domain to best practices.
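The automation point can be sketched in miniature. The routine below is an illustrative example, not a production tool (the file layout and retention policy are invented): it takes a timestamped copy of a SQLite database using the standard library's online backup API and prunes old copies, the same pattern a DBRE automates at much larger scale for backups and failovers.

```python
import sqlite3
import tempfile
from datetime import datetime
from pathlib import Path

def backup_database(db_path: str, backup_dir: str, keep: int = 7) -> Path:
    """Copy the database with a timestamped name and prune old backups."""
    src = Path(db_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"

    # sqlite3's online backup API copies safely even while the DB is in use.
    with sqlite3.connect(src) as source, sqlite3.connect(dest) as target:
        source.backup(target)
    source.close()
    target.close()

    # Retention: keep only the `keep` most recent backups.
    backups = sorted(dest_dir.glob(f"{src.stem}-*{src.suffix}"))
    for old in backups[:-keep]:
        old.unlink()
    return dest

# Demo on a throwaway database:
workdir = Path(tempfile.mkdtemp())
db = workdir / "app.db"
with sqlite3.connect(db) as con:
    con.execute("CREATE TABLE orders (id INTEGER)")
    con.execute("INSERT INTO orders VALUES (1)")
con.close()

latest = backup_database(str(db), str(workdir / "backups"))
with sqlite3.connect(latest) as check:
    rows = check.execute("SELECT id FROM orders").fetchall()
check.close()
print(rows)   # [(1,)]
```

Scheduling this kind of routine (via cron, a job runner, or the database platform itself) is what frees engineers from worrying about losing data while they work.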
Database Management Systems

An additional way to promote database reliability is through the use of an effective database management system: software used to create and manage databases, and to retrieve, manipulate, and define data. Utilizing an effective database management system can help ensure that your database is always accessible, that it is safe from corruption, and that your data is accurate and consistent. When deciding whether to use a database management system and which one to use, some key features to look for are high availability, corruption prevention, debugging support, clustering, and a type-safe API.

Causes of Database Failures Can Be the Result of Many Factors

As you work to ensure that your organization’s database is reliable, it’s important to consider some of the issues that lead to database problems. Keep in mind that it’s often organizational infrastructure or hardware that causes trouble, so when something goes wrong, it can be helpful to look there first. Some common sources of problems are issues with disks, RAM, or the motherboard. Regardless of the specific issue, remember that problems are not always caused by software; they can also be caused by infrastructure or hardware.

Data is at the heart of everything your organization does. It helps you keep up with customers, financial transactions, employee information, vendor information, sales, and supply chain information. In addition to being central to short- and long-term goals, data enables leaders to make high-level decisions that significantly impact how organizations grow and thrive. As a result, it’s critical that your organization’s data is reliable.
As data becomes increasingly complex, more and more organizations are changing their approach to database reliability to keep up with the changing environment. Database management systems and database reliability engineering are two effective ways to meet this need.
Cookie-based Targeting uses pieces of data, or “cookies,” to target small audience groups based on their web browser behavior. Cookie-based Targeting allows companies to display ads throughout a user’s browsing experience once the user has expressed interest on the company’s website. The guy that will not stop calling because you told him you liked his shirt one time.

Cookie-based targeting is also commonly referred to as tracking cookies. Tracking cookies are advertising tools used by marketers to identify you as you search across the web. When you visit a website, cookies are saved to your web browser. This allows marketers to target you with advertisements across different social and web properties.
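Mechanically, a tracking cookie is just a small name-value pair that a server asks the browser to store and send back on later requests. A minimal sketch using Python's standard http.cookies module (the cookie name and attributes are invented for illustration):

```python
from http.cookies import SimpleCookie

# Server side: set an identifier the browser will echo back on every visit.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persist for a year
cookie["visitor_id"]["path"] = "/"

# The Set-Cookie header the server sends (attribute order may vary):
print(cookie.output())

# On a later request the browser sends "Cookie: visitor_id=abc123",
# which the ad server parses to recognize the returning visitor.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)   # abc123
```

The long max-age is what makes the cookie "persistent": the identifier survives across sessions, letting an ad network recognize the same browser on every site that embeds its code.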
A workstation can be critically important to someone doing serious work, so making sure it is secured should be a priority when setting it up. There are several practices that contribute to workstation security, including managing account availability, hardening the desktop, and enforcing password complexity. The following are ways to help secure a workstation.

Usernames and passwords ensure that the resources on the computer cannot be accessed by everyone on the network. It is very important that passwords are strong and contain a mix of character types. A single dictionary word should never be used, since an attacker can simply run through a dictionary and find it. Nor should a password be something easy to guess, such as the name of a pet or a spouse. Mixing upper-case and lower-case letters makes a password much harder to guess than lower case alone. Tacking on a single common special character, such as "^", adds little strength, and simple substitutions do not help much either: replacing the letter o with the number 0, or the letter t with the number 7, is a trick every attacker already knows.
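Attackers automate exactly these substitutions. Here is a tiny illustrative sketch of how a cracking tool can expand one dictionary word into its common "leet" variants (the substitution map is invented but typical):

```python
from itertools import product

# Common substitutions an attacker tries automatically.
LEET = {"o": "o0", "t": "t7", "e": "e3", "a": "a@4", "i": "i1!"}

def leet_variants(word: str) -> list[str]:
    """All variants of `word` with common character substitutions applied."""
    choices = [LEET.get(ch, ch) for ch in word.lower()]
    return ["".join(combo) for combo in product(*choices)]

variants = leet_variants("photo")
print(len(variants))        # 8  (two choices each for o, t, o)
print("ph07o" in variants)  # True
```

Eight variants instead of one is nothing to software that tries millions of guesses per second, which is why swapping o for 0 adds almost no real strength.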
When attackers run brute-force or dictionary attacks, they already try those substitutions, replacing every o with a 0 and so on. If the website or system allows it, a passphrase is a better choice: a string of several words makes brute force far less effective. Alternatively, taking the first letter of each word in a memorable phrase produces a strong password that is still easy to remember.

If a computer does not ask for a username and password at login, that is a real problem: the information on it is unprotected, so a logon password should always be required.

Access to the computer should also be limited. Not everyone should be an administrator, because administrators have access to far more files. Rights and permissions should be assigned only to the specific people who need them, and only for as long as they need them. If an organization has a history of handing out administrative access to many people, tightening this will be difficult, but it is worth doing.

Permissions and rights can also be managed through groups. People can be grouped together by department (the supply chain department in one group, the marketing department in another), and permissions assigned per group, so that one group has access that another group does not.
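The group-based model described above can be sketched in a few lines. This is an illustrative toy (group, user, and permission names are invented), but it shows why revoking access is as simple as removing a group membership:

```python
# Illustrative group-based permission check.
GROUP_PERMISSIONS = {
    "supply_chain": {"read_inventory", "update_orders"},
    "marketing":    {"read_customers", "edit_campaigns"},
}

USER_GROUPS = {
    "alice": {"supply_chain"},
    "bob":   {"marketing"},
}

def permissions_for(user: str) -> set[str]:
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group in USER_GROUPS.get(user, set()):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

def has_permission(user: str, permission: str) -> bool:
    return permission in permissions_for(user)

assert has_permission("alice", "update_orders")
assert not has_permission("alice", "edit_campaigns")

# Removing someone from a group instantly revokes that group's access:
USER_GROUPS["alice"].discard("supply_chain")
assert not has_permission("alice", "update_orders")
```

Auditing becomes equally simple: to check what anyone can do, you inspect a handful of groups instead of hundreds of per-user grants.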
When someone is removed from a group, they immediately lose access to that group's information. This is much easier to manage than per-user permissions, and it also simplifies auditing: it becomes straightforward to verify that people have exactly the rights and permissions their jobs require.

Whatever operating system is in use, certain user accounts are created when the OS is installed, whether or not they are ever used, such as a mail account, a root account, or a guest account. Not all of them are needed, so unused accounts should be disabled, especially those that will never be used again, and default usernames that are not required should be changed. Many applications rely on the default usernames built into the OS, and attackers know those names, which tells them exactly which accounts to target. Simply renaming the administrative account means that a brute-force attack against the old name is trying passwords for credentials that no longer exist, so it can never succeed. A default username can also signal that the owner is not particularly security-conscious, while a changed one suggests someone is paying attention, which by itself may discourage an attacker from trying.
Changing usernames periodically also creates confusion for anyone trying to break in, while the administrator who sets them up stays in control of security.

The guest account is an account with limited access that many operating systems create during setup. If it is actually used, it should at least be protected with a password; if it has been created but no one uses it, it should be removed immediately. The whole point of passwords is protection, and one unprotected account undermines every protected one. It is like a glass with two holes where only one is covered: the juice can still be drawn out through the other hole. An unused guest account is that other hole, so deleting it closes the gap and ensures the maximum security of the operating system.

On the desktop itself, a password-protected screen saver helps secure data. A screen saver blanks the screen when the user is away from the computer; with a password set, anyone who comes back and moves the mouse gets not the desktop but an authentication prompt asking for the screen saver password. This helps a lot, because nothing appears on screen while the user is away.
When the screen saver is on, the computer is locked again and protected by that password, and for administrators, enabling this protection is often a single check box.

Older versions of Windows, such as Windows XP and Vista, included a capability called AutoRun. With AutoRun, inserting a piece of media into the computer caused the program on it to start running without asking for any permission. That sounds bad, and it was. Attackers figured out that they could put malicious code on a USB stick or a CD; the moment it was inserted, AutoRun never asked for permission and the code infected the whole computer. This was threatening, because no matter how many passwords were set, an attacker with physical access could simply destroy the user's work. Microsoft therefore provided a way to disable AutoRun through the operating system's registry, which greatly reduced the fear of catching a virus merely by attaching a device. Even better, from Windows 7 onward AutoRun no longer executes programs automatically: a window pops up asking for permission instead.
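The registry setting in question is the NoDriveTypeAutoRun value, a bitmask in which each bit disables AutoRun for one drive type; setting it to 0xFF disables AutoRun everywhere. The sketch below shows how that bitmask is composed (the bit positions follow the Windows DRIVE_* drive-type constants; the commented winreg lines are illustrative only and would require administrator rights on a real Windows machine):

```python
# Bits of the NoDriveTypeAutoRun registry value (one bit per drive type,
# bit position = the Windows DRIVE_* constant for that type).
DRIVE_TYPE_BITS = {
    "unknown":      0x01,
    "no_root_dir":  0x02,
    "removable":    0x04,   # USB sticks, floppies
    "fixed":        0x08,   # internal hard disks
    "remote":       0x10,   # network drives
    "cdrom":        0x20,
    "ramdisk":      0x40,
    "reserved":     0x80,
}

def no_drive_type_autorun(*types: str) -> int:
    """Compose the bitmask that disables AutoRun for the given drive types."""
    mask = 0
    for t in types:
        mask |= DRIVE_TYPE_BITS[t]
    return mask

# Disabling AutoRun on every drive type gives the well-known value 0xFF:
assert no_drive_type_autorun(*DRIVE_TYPE_BITS) == 0xFF

# Writing it (Windows only, admin rights) would look roughly like:
#   import winreg
#   key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
#       r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer")
#   winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```

Composing the mask from named bits makes the intent auditable: 0x24, for example, reads as "removable media and CD-ROMs only" rather than a magic number.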
It is a good thing that Windows 7 does not run programs automatically, since we may open many downloaded programs at once, and if they ran without permission they could easily bring in a virus. Some people liked AutoRun, because clicking Allow on every file is tedious when running many of them, but the prompt exists for the user's own security and greatly reduces the risk of virus attacks.

In short, there are many measures that can protect one's precious data, but they require some in-depth knowledge and stamina. It is also important to run an antivirus, because, as mentioned above, if an administrator forgets to delete an unwanted account that has no password, that account becomes a penetration point for a virus. Having the other tools in place matters just as much when it comes to protecting the operating system.
The recent announcement of the vulnerabilities found in Intel, ARM, and AMD processors has sparked a new phishing campaign, and not the good kind of fishing with bait and largemouth bass, although these hackers are using a particular kind of bait. While Meltdown and Spectre require access to your system to be exploited, hackers have various ways to gain that access. Already, attackers are using phishing emails to trick users into letting them in: they send out an email claiming to contain a patch for Meltdown or Spectre that instead installs malware on your system. This malware gives the hacker access, allowing them to exploit the bugs and take the unprotected data. Be wary of social engineering scams like phishing emails. Hackers are all too eager to take advantage of problems like this, and unfortunately, some people are so eager to fix the problem that they might not realize that the “patch” they just clicked on is now allowing a hacker to steal all their data.

WHAT IS PHISHING?

Phishing is a hacking technique that “fishes” for victims by sending them deceptive emails. The “ph” replaces the “f” in homage to the first hackers, the “phone phreaks” of the 1960s and 1970s. Virtually anyone on the internet has seen a phishing attack. Phishing attacks are mass emails that request confidential information or credentials under false pretenses, link to malicious websites, or include malware as an attachment. Many phishing sites look just like the sites they impersonate; often, the only difference is a slight, easily missed change in the URL. Visitors can easily be manipulated into disclosing confidential information or credentials if they can be induced to click the link. Even blacklisted phishing sites can often get past standard filters through the technique of time-bombing the URL: the link initially leads to an innocent page to get past the filters, then later redirects to a malicious site.
Although malware is harder to get past filters, recently discovered and zero-day malware stands an excellent chance of getting through standard filters and being clicked on, especially if the malware hides in a non-executable file such as a PDF or Office document. This is how many of the recent ransomware attacks were pulled off. Now, with Meltdown and Spectre looming over us, the average person is more susceptible to “quick fixes” and supposed solutions. Despite the lack of personalization, an astonishing 20% of recipients will click on basically anything that makes it to their inbox.

Spear phishing is an enhanced version of phishing that aims at specific employees of the targeted organization. The goal is usually to gain unauthorized access to networks, data, and applications. In contrast to the mass emailing of a phishing attack, which might see hundreds of attack messages sent out to random recipients within the space of a couple of hours, spear phishing is methodical and focused on a single recipient. Often the initial email contains no URL or attachment at all; instead, it simply tries to convince the recipient that the sender is who they say they are. Only later will the hacker request confidential credentials or information, or send a booby-trapped URL or attachment. The additional customization and targeting of a spear phishing email, along with the lack of easily recognized blacklisted URLs or known malware, results in click rates of more than 50%!
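Because spoofed domains usually differ from the real one by only a character or two, one rough defensive check is to compare a link's hostname against a list of trusted domains and flag near-misses. A sketch using only the Python standard library (the trusted list and similarity threshold are invented for illustration; real anti-phishing systems use far more signals):

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse
from typing import Optional

TRUSTED = {"paypal.com", "google.com", "microsoft.com"}

def lookalike_of(url: str, threshold: float = 0.8) -> Optional[str]:
    """Return the trusted domain this URL imitates, or None if no near-miss."""
    domain = urlparse(url).hostname or ""
    if domain in TRUSTED:
        return None                       # exact match: genuinely trusted
    for real in TRUSTED:
        if SequenceMatcher(None, domain, real).ratio() >= threshold:
            return real                   # close but not equal: suspicious
    return None

print(lookalike_of("https://paypa1.com/login"))   # paypal.com  (digit 1 vs letter l)
print(lookalike_of("https://paypal.com/login"))   # None
```

A hostname that is almost, but not exactly, a trusted domain is precisely the pattern a spoofed phishing link exploits, so "very similar but not equal" is treated as more suspicious than "completely unrelated."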
What Is Advanced Analytics?

Advanced analytics uses sophisticated techniques such as multivariate statistics, data mining, machine learning, visualization, simulation, text mining, graph (network) analytics, forecasting, and optimization to uncover insights, identify patterns, predict outcomes, and generate recommendations.

Why Is Advanced Analytics Important?

To accelerate innovation and outflank the competition, companies use advanced analytics to generate predictive insights and make better, more informed decisions faster. Advanced analytics is used to optimize and improve business operations, reduce risk, and personalize customer experiences. It can solve problems that BI reporting can’t, and it applies to many different cases: monitoring and evaluating social media, predicting machine failures, forecasting supply and demand, dynamically adjusting prices, detecting fraud, predicting customer attrition, and many more.

Advanced Analytics Techniques

The techniques used in advanced analytics dive deeper than BI or descriptive analytics. While BI focuses on historical, structured data from various sources, advanced analytics tackles both structured and unstructured data from disparate sources. BI usually yields a summary of past performance, while advanced analytics looks to the future to help optimize and innovate in the present. To do so, advanced analytics employs, as the name implies, more advanced techniques.

How Advanced Analytics Works

Advanced analytics is applicable to every industry and can be used across every business function within an organization. Examples can be seen in the graphic below. In a fast-paced world, businesses need to be able to react quickly.
With advanced analytics, a company can make decisions based on accurate predictions, which can improve performance and productivity and increase revenue. Advanced analytics can harness HR data for good by helping reduce the costs of recruiting and hiring, decrease turnover, and maintain or increase overall employee satisfaction.

Manufacturing and Inventory

Demand, preferences, and cost are constantly changing, which impacts what products get made, sold, and distributed, and how. Advanced analytics can help to prevent machine failure, reduce irrelevant stock, expedite ordering, and lower distribution costs.

Understanding customers is key to predicting how they’ll behave in the future. Advanced analytics can help create personalized marketing experiences and identify sales opportunities. Managing large datasets in real time can help detect fraud, monitor customer reputation, and reduce future risk.

Getting Started With Advanced Analytics

The Alteryx Analytic Process Automation (APA) Platform™ offers an accessible platform featuring both no-code, low-code building blocks and an easy-to-understand visual platform. The APA Platform integrates advanced analytics into data preparation, blending, analysis, and enrichment using:
- A/B testing
- Computer vision
- Clustering and segmentation
- Decision trees and random forests
- Demographic and behavior analysis
- Machine learning
- Multivariate statistics
- Optimization and simulation
- Forecasting and time-series
- Network analytics
- Neural networks
- Predictive and prescriptive analytics
- Spatial analytics
- Supervised predictive models
- Text mining
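As a minimal taste of one item on that list, forecasting and time-series, here is an illustrative moving-average forecast in plain Python (the sales figures are invented; real deployments use dedicated libraries and far richer models):

```python
def moving_average_forecast(series: list[float], window: int = 3) -> float:
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

# Six months of (invented) sales; forecast month seven:
monthly_sales = [100, 120, 110, 130, 125, 135]
print(moving_average_forecast(monthly_sales))   # 130.0 = (130 + 125 + 135) / 3
```

Even this toy model illustrates the shift from BI to advanced analytics: instead of summarizing the past six months, it produces a concrete number about the future that a planner can act on.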
To prevent both intentional and unintended errors, building an effective data security policy is more necessary than ever. Not having a security policy makes it hard to coordinate and regulate a security program in an organization. It also makes it difficult for third parties to understand the security measures. Read on to see why having one for your business is essential.

What is a security policy?

Security policies are the clear, comprehensive, and well-defined policies, regulations, and practices that govern access to an organization's systems and the data inside them. A security policy is a set of rules and guidelines created by your organization's employees. Its goal is to tackle security risks, adopt strategies to limit vulnerabilities present in the system, and define how to respond to attacks and recover any lost data in the event of a system invasion. The business's needs in various areas, such as objectives, goals, privacy protection, and so on, change with time. Along with those changes, the security policy must change and develop accordingly.

The Importance of having a security policy

A security policy is essential as it minimizes the risk of data loss and provides protection against malware. It also provides a protocol with all the rules and regulations to follow, ensuring compliance. A few reasons why having a security policy is important are mentioned below:
- Defined threats: Each type of organization has its own risks, so it'll be easier for everyone to understand the associated threats when they are explained accurately in the policy.
- A solution to risks and threats: When a threat occurs, the relevant staff must adhere to the regulations to counteract it.
- Limits: A policy helps employees understand what is authorized and what's off-limits.

Things to consider when making a security policy

A security policy can be as comprehensive or as precise as you would like it to be.
An efficient one should cover security across the whole organization, be realistic, and allow for adjustments and updates. It should also be easy to understand and focused on the aims and goals of your organization. Listed below are the factors to take into consideration when making a security policy:
- Plan: The first thing to note when making a policy is its overall purpose. This should include information regarding the types of security breaches that can occur, along with your organization's ethical and legal obligations.
- Employees: The next step is to define to whom it applies.
- Objectives: The most crucial step is to create clear-cut goals that define the strategy.
- Authority and Classification: The data should be classified into what's confidential, public, sensitive, etc., to specify the level of authority and access one has to the data.

Security policies address issues that may arise inside a company. All threats are thoroughly examined, and the best answers are provided. A policy also identifies the team that will work on a specific threat. As a result, having a security policy makes it easier for a company to safeguard itself.
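The "Authority and Classification" factor above can be sketched in code. The sketch below is hypothetical — the classification labels and clearance ranks are illustrative, not from any particular policy framework — but it shows the basic idea: map each classification to a minimum clearance, then check a user's clearance before granting access.

```python
# Hypothetical sketch of classification-based access control:
# each data classification maps to the minimum clearance rank required.
# Labels and ranks are illustrative examples, not a standard.

CLEARANCE_RANK = {"public": 0, "internal": 1, "sensitive": 2, "confidential": 3}

def can_access(user_clearance: str, data_classification: str) -> bool:
    """Grant access only if the user's clearance meets or exceeds
    the rank required by the data's classification."""
    return CLEARANCE_RANK[user_clearance] >= CLEARANCE_RANK[data_classification]

print(can_access("internal", "public"))        # True
print(can_access("internal", "confidential"))  # False
```

A real policy would also log each decision and tie clearances to authenticated identities, but the core check is this simple comparison.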
This chapter introduced the router. The main purpose of a router is to connect multiple networks and forward packets from one network to the next. This means that a router typically has multiple interfaces. Each interface is a member or host on a different IP network. Cisco IOS uses what is known as the administrative distance (AD) to determine the route to install into the IP routing table. The routing table is a list of networks known by the router. The routing table includes network addresses for its own interfaces, which are the directly connected networks, as well as network addresses for remote networks. A remote network is a network that can only be reached by forwarding the packet to another router. Remote networks are added to the routing table in one of two ways: either by the network administrator manually configuring static routes or by implementing a dynamic routing protocol. Static routes do not have as much overhead as dynamic routing protocols; however, static routes can require more maintenance if the topology is constantly changing or is unstable. Dynamic routing protocols automatically adjust to changes without any intervention from the network administrator. Dynamic routing protocols require more CPU processing and also use a certain amount of link capacity for routing updates and messages. In many cases, a routing table will contain both static and dynamic routes. Routers make their primary forwarding decision at Layer 3, the network layer. However, router interfaces participate in Layers 1, 2, and 3. Layer 3 IP packets are encapsulated into a Layer 2 data link frame and encoded into bits at Layer 1. Router interfaces participate in Layer 2 processes associated with their encapsulation. For example, an Ethernet interface on a router participates in the ARP process like other hosts on that LAN. The Cisco IP routing table is not a flat database. 
The routing table is actually a hierarchical structure that is used to speed up the lookup process when locating routes and forwarding packets. Components of the IPv6 routing table are very similar to the IPv4 routing table. For instance, it is populated using directly connected interfaces, static routes, and dynamically learned routes.
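Two of the ideas summarized above — longest-prefix matching against the routing table and administrative distance (AD) as a trust ranking between route sources — can be sketched outside of Cisco IOS. The following is an illustrative Python model, not IOS behavior; the routes and next hops are made-up examples, though the AD values shown match common defaults (0 for connected, 1 for static, 110 for OSPF).

```python
# Illustrative model of routing-table lookup: the longest matching prefix
# wins, and administrative distance breaks ties between equal-length
# prefixes learned from different sources. Example routes are hypothetical.

import ipaddress

# (network, administrative distance, next hop / source)
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), 0, "directly connected"),
    (ipaddress.ip_network("10.0.0.0/8"), 110, "via 192.168.1.2 (OSPF)"),
    (ipaddress.ip_network("10.1.0.0/16"), 1, "via 192.168.1.3 (static)"),
]

def lookup(dest: str):
    """Return the best route for a destination IP, or None if no match."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [r for r in routes if dest_ip in r[0]]
    if not matches:
        return None
    # Prefer the longest prefix; lower AD wins among equal-length prefixes.
    return min(matches, key=lambda r: (-r[0].prefixlen, r[1]))

print(lookup("10.1.2.3"))  # /16 static route beats the /8 OSPF route
print(lookup("10.9.9.9"))  # only the /8 OSPF route matches
```

A real router performs this lookup in optimized hierarchical data structures rather than a linear scan, which is exactly the point the chapter makes about the table not being a flat database.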
Since December 2015, anyone playing popular sandbox game Minecraft has been able to build their worlds on the actual map of Sweden. Lantmäteriet, the Swedish National Land survey, launched the country's maps as Minecraft-friendly downloads to increase interest in geospatial information and open data, particularly among younger citizens. "We were going to launch some maps as open data and I thought it would be great to do it on Minecraft, and our managers liked the idea," Bobo Tideström, business developer at the Lantmäteriet, told Computer Weekly. Tideström introduced the idea of a Minecraft Sweden in August 2015, and the complete map of Sweden and individual maps of each of its 290 municipalities were released to the public four months later. "For a governmental department, we completed the project very fast," said Tideström. Lantmäteriet had a small internal team working on the project while the map data was converted to Minecraft by outside consultants using FME mapping tools. The maps have gathered over 19,000 downloads to date, but Tideström believes their reach is far wider through the visibility of the project and the use of the maps in various other projects, such as a competition for schools to design a future city in the municipality of Kiruna. "We were surprised that municipalities and organisations have started to use Minecraft as an actual planning tool for city development and have a dialogue with citizens," said Tideström. "It is an easy way to translate maps into 3D, which makes it far easier for people to see how their city will look." The project, which cost an estimated kr400,000 (£36,000), has also received an accolade from the IT community, winning Digital Project of the Year at the Swedish CIO Awards. Sweden is not the first country to recreate itself in Minecraft.
Denmark and Norway have previously had similar projects, but Tideström said Lantmäteriet has gone a step further with the granular data the maps offer, from roads and lakes to forests and grasslands. Lantmäteriet used the earlier project in Denmark for benchmarking, namely in opting for downloadable maps instead of a server-based approach. "In Denmark, they had an open server so people could log in and play," said Tideström. "They had big problems with houses being torn down by players."

The Swedish maps are available in 8x8 metre resolution (each Minecraft block is equivalent to eight metres). While this means small file sizes for downloading, the maps are more suitable for roaming the landscape than building detailed houses. To address this, Lantmäteriet has so far launched four municipalities in a higher (1x1) resolution to enable more creativity. "In some areas, schoolkids have built the whole centre of a town so it looks like real life, with the right textures and colours," said Tideström.

Tideström said the Minecraft project hasn't faced any major technical issues, but it has had an impact on Lantmäteriet's approach to IT projects. The agency is now encouraging more experiments and fast deployments in addition to traditional large-scale projects.
“We realised if we would have taken this project through our normal process of driving things, we would have released it in 2018 or 2019,” he said. “We are now looking into how we can change this prioritisation and act faster with the deployment of ideas.”
How Does HPC Work?

HPC spreads workloads that are too big for one computer across several resources. These computing systems can be made up of tens, hundreds or even thousands of components, each coming together to give a huge total amount of processing power and completing tasks much more rapidly than single computers.

HPC Components: Computing Resource, Network, Storage

In every HPC solution, there are three major components:
- Computing resource: The processing hardware, like servers
- Network: Connections sending data at high speed throughout the system
- Storage: Where the data is kept for analysis

The high-performance computing definition is a system that brings these components together to work as one. For top performance, each component must match the capabilities of the others. Slow networking, for example, can bottleneck data flow to the computing resources.

High-Performance Computing Clusters

A fully connected system of components is known as a cluster. Within an HPC cluster, each server is known as a node. High-performance computing architecture connects these nodes together to form a cluster. A cluster's activities can be run on various operating systems (Linux and Windows tend to be the most popular). Many users also manage their high-performance computing tasks through open-source frameworks like Apache Hadoop. HPC workloads are typically split into embarrassingly parallel workloads or tightly coupled workloads.

Embarrassingly Parallel Workloads

Embarrassingly parallel workloads are HPC tasks that can run at the same time, entirely independently of each other. Often, these workloads are made up of millions or hundreds of millions of individual tasks completed in a short space of time. An example might be a 3D simulation sent to nodes as individual pixels for parallel processing. With an HPC cluster, these tasks can be completed almost simultaneously, whereas a single resource would queue each pixel and deal with them back-to-back.
Tightly Coupled Workloads

Tightly coupled workloads, on the other hand, are tasks that need to stay in communication and depend on each other. As the data processing takes place, the nodes within the cluster communicate their results, leading to a final outcome. An example of a tightly coupled workload could be a weather forecasting task consisting of millions of calculations per second.

Why is HPC Important?

HPC gives us vastly higher processing speeds than would be possible with one machine. In academia, it gives researchers the ability to collect and analyse huge datasets quicker than ever before. It's also crucial for scientists and engineers, helping them solve intricate questions much more quickly than standard computers. High processing speeds have also become critical in many real-time applications and systems dealing with vast amounts of data. With large clusters, high-performance computing applications can help advance knowledge in fields like:
- Artificial Intelligence (AI) and Machine Learning: Complex AI models like neural networks process massive amounts of data. With HPC, users can build and train deep AI models in a fraction of the time.
- Internet of Things (IoT): Millions of IoT devices monitor and collect data in many industries. Using HPC, this data can be gathered and analysed much more quickly.
- Simulations: The rise of big data and high-performance computing is not only applicable to data analysis. HPC can use data to generate 3D images, simulations and models, saving time and advancing knowledge.

As HPC gives organisations much more analytical power, it comes with many possible use cases.

HPC Use Cases

Various case studies show the current and potential power of HPC. Some existing high-performance computing examples include:
- Financial services fraud detection: High-performance computing in finance can help systems monitor fraudulent activity in real time.
Financial institutions like Visa handle almost 2,000 transactions per second, requiring high-powered systems to constantly monitor each transaction.
- Weather forecasting: Advanced weather prediction models use millions of calculations per second, reacting to the latest data. With HPC, models can account for more data points, giving more granular, exacting and accurate results through their deep learning.
- Healthcare services: HPC allows healthcare services to access patient information, analyse new viruses and diseases, and model new treatments much more quickly than before.
- Scientific research: Likewise, the vast processing power of HPC can help scientists understand incredibly complex problems like genomics and DNA sequencing. Recent HPC supercomputers decreased the time taken to analyse a genome from 150 hours to just 6 hours.
- Engineering: HPC allows engineers to develop models in a fraction of the time. This helps train and test new concepts in fields like autonomous vehicles, logistics, and oil and gas discovery.

Overall, HPC can be used not only to develop innovative solutions, but also to speed up current workload lifecycles and analysis.

Benefits of HPC

HPC brings many benefits over a traditional computer. A standard computer processor can carry out two to four billion cycles per second. This is enough for normal, day-to-day users, but is not suitable throughput for massive apps, algorithms and datasets. A cluster or supercomputer in a high-performance computing facility can achieve speeds into the quadrillions of calculations per second, especially if designed with advanced central processing units (CPUs), graphics processing units (GPUs), high-speed memory and low-latency networking. These speeds can make even the largest tasks manageable. These faster speeds mean that users can solve problems more quickly.
While high-performance computing can be a significant upfront expense, it can save money many times over through rapid insights, discoveries and innovations.

Scalability, Flexibility and Efficiency

High-performance computing infrastructure can be changed and optimised for unique workloads. Tuned to their specific tasks, HPC systems transform how organisations manage projects, whether that means streamlining repetitive tasks, using automation, or testing new processes quicker than before. As the world's dependency on data grows, organisations using HPC will get ahead of the competition. In business, high-performance computing companies might generate insights or deliver services faster than rivals. In research, HPC will help teams innovate more rapidly. Whatever the field, HPC gives the user a competitive edge.

Challenges of HPC

Modern high-performance computing systems and practices can help organisations innovate and thrive but, in some circumstances, there can also be challenges.

Data Transfer Limitations

Data transfer speeds and bandwidth can be challenging for companies first employing HPC applications. On-premises HPC infrastructure is often an obstacle. Networks might not be designed for the ultra-fast data transfer speeds that HPC needs, while uploading data to HPC systems in the first place can also be time-consuming.

Costly to Purchase

Similarly, the cost of purchasing equipment to deploy high-performance computing solutions can cause issues. Depending on the HPC workload in question, an organisation may need to purchase several computing resources at once, proving a barrier to entry for many who can't budget for the initial payment to own their HPC infrastructure.

Data Privacy

Data privacy is essential for all companies, especially those in highly regulated fields like finance and healthcare. In these fields, personal data must be held securely and comply with many requirements.
High-performance computing storage can be spread across multiple solutions, each of which must guarantee data privacy.

High-Performance Cloud Computing: What & Why is it?

High-performance cloud computing is a model that hosts, deploys and delivers high-performance solutions in the cloud. As servers, networks and storage devices are hosted in data centres designed for scale and rapid performance, organisations are freed from many of the challenges of deploying on-premises private HPC solutions.

How Cloud Computing is Useful in High-Performance Computing

For that reason, many organisations choose to host their HPC infrastructure in the cloud. This is where the hybrid cloud really shines. Combining private cloud services hosted in HPC-ready data centres with public cloud computing solutions, like Microsoft Azure and Amazon Web Services, opens up almost endless computing possibilities. You can also benefit from increased:
- Cost savings: With the public cloud, companies save upfront costs on purchasing equipment, instead only paying for the resources they use. They also benefit from the scale, expertise and advanced infrastructure of hosting private cloud resources in HPC-ready data centres.
- Agility: Cloud HPC resources can be configured, deployed and scaled rapidly, letting users focus on results instead of management.
- Choice: Companies using the cloud can choose only the resources that match their workloads.

Future of High-Performance Computing

So, what are the future high-performance computing trends? While big data analytics continues to revolutionise almost every industry, HPC systems will help more companies create new products and analyse their resources for deeper insight. Using the cloud, high-performance computing as a service will also give more users access to HPC. High-performance edge computing, faster processors and AI will also enhance the speeds at which data can be processed.
This will increase the chances of HPC and big data coming together to give organisations rapid, accurate and innovative data-driven solutions.

How Interxion can Help with your High-Performance Computing Needs

High-performance computing creates a cluster of nodes to process data at scale. Companies can learn more from their data, make processes quicker and build more robust models, giving them an edge over the competition. At Interxion, we can help you host your ideal hybrid cloud HPC solution. Located in the heart of London, Hanbury Street – LON 3 is just one of our high-performance computing UK data centres with the infrastructure, reach and connectivity to bring your HPC project to life. Check our other data centre locations to pick your ideal spot, or contact us today for more information.
How is it that cybercriminals can hack your friends' Facebook, Google+, Twitter or LinkedIn accounts and send you innocent-looking posts that contain links to evil malware? The answer is weak passwords.

People have a tough time remembering all the passwords that today's technology-driven lifestyles demand. You need to know usernames and passwords for email and networks at work, your bank account, your mobile phone account, your webmail, your social media accounts, and more. It's hard enough for techies to keep up with these demands, but for the average person it's next to impossible. The solution? People pick something easy that they can remember and use the same username and password wherever they can.

Did you know that the most common password created across the internet is "password"? It's really easy for cybercriminals to find these common passwords with something as simple as a Google search. With the sophisticated hacking tools they have, they can make short (I mean really short) work of most people's passwords. Next thing you know, your best friend is posting on your wall telling you about how she lost weight or how to get a free iPad.

Here's a list compiled by SplashData of the 25 worst passwords. If you see yours on this list then stay tuned. I've got some great advice on how to create a strong password. Do any of those look familiar?
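To see why common passwords fall so quickly, here is a hedged Python sketch: reject any candidate found on a common-password list, then roughly estimate the brute-force search space of the rest. The word list below is a tiny illustrative sample, not SplashData's actual ranking, and entropy estimated this way overstates the strength of human-chosen passwords.

```python
# Rough password audit: check against a (sample) common-password list,
# then estimate brute-force entropy from character classes and length.
# The COMMON set is a small made-up sample for illustration only.

import math
import string

COMMON = {"password", "123456", "qwerty", "abc123", "letmein", "monkey"}

def audit(pw: str) -> str:
    if pw.lower() in COMMON:
        return "rejected: on the common-password list"
    pool = 0  # size of the character set an attacker must search
    if any(c in string.ascii_lowercase for c in pw): pool += 26
    if any(c in string.ascii_uppercase for c in pw): pool += 26
    if any(c in string.digits for c in pw): pool += 10
    if any(c in string.punctuation for c in pw): pool += len(string.punctuation)
    bits = len(pw) * math.log2(pool) if pool else 0
    return f"approx. {bits:.0f} bits of entropy"

print(audit("password"))     # rejected instantly, no guessing needed
print(audit("Tr0ub4dor&3"))  # longer, mixed-class password fares far better
```

A dictionary hit costs an attacker essentially zero guesses, which is why being on a worst-passwords list matters far more than any entropy estimate.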
Internet of things (IoT) and industrial control system (ICS) devices are especially vulnerable to cyber-attacks due to weak security controls and vulnerabilities in the manufacturing supply chain. For example, automotive electronic control units (ECUs), the devices that control in-vehicle safety, drive train, and infotainment systems, are often manufactured in a sprawling supply chain that cannot be completely trusted. Automobile manufacturers are left with the challenging task of managing ECU security once the devices are installed and on the road. Other critical infrastructure devices such as smart building control systems, fire and safety systems, traffic control systems, smart lighting, telematics devices, industrial controllers, medical devices, and smart meters are also vulnerable to compromise during the manufacturing process. With literally thousands of original equipment manufacturers (OEMs), contract manufacturers, and software developers working together, ensuring that the devices themselves can be trusted can be extraordinarily difficult. Product security weaknesses make it difficult for OEMs, device operators, and end-users to protect the safety, reliability, data privacy, communications, and the integrity of firmware over-the-air (FOTA) updates. The solution? Taking a zero trust approach.

The 10 Best Practices for Zero Trust Manufacturing

Zero trust manufacturing is an approach to manufacturing trustworthy safety-critical devices along a supply chain that inherently cannot be trusted. Going beyond zero trust networking, this approach addresses supply chain weaknesses during the design, manufacturing, testing, and delivery of products. If you are manufacturing or using IoT or ICS products, you need to start focusing on securing your supply chain.

#1 Root of Trust (RoT)

An RoT is a foundation upon which all secure computing operations are based.
Installed on a device, an RoT contains the keys used for cryptographic functions and enables a secure boot process. Roots of trust can be implemented in hardware, which can offer very strong protection against malware attacks. An RoT can also be implemented as a security module within a processor or a system on a chip (SoC).

#2 Hardware-based Secure Element

When possible, use a hardware-based secure element to create a root of trust (RoT). Leverage a tamper-resistant secure element such as a TPM, or a network- or cloud-based hardware security module (HSM). TPMs provide secure key generation and storage along with hardware-based, security-related functions.

#3 Generate on-device keys

The private and public keys used to identify the device should be generated and stored securely on the device so that the device can attest to its own identity. These keys can be used for public-key cryptography, encryption, and code signing.

#4 Cryptographic software libraries

Integrate strong cryptographic libraries without known CVE vulnerabilities to handle crypto-operations such as encryption, TPM operations, and authentication.

#5 Enable mutual M2M authentication

The best way to establish trust between IoT endpoints is mutual machine-to-machine (M2M) authentication, where both the client and server are authenticated. Implementing client-side certificate authentication, whereby the IoT device itself owns the private key and only the public key is shared with the other party, is critical to ensuring the integrity and trustworthiness of the device. Avoid using pre-shared keys (PSKs), which are highly vulnerable to theft.

#6 Automate the PKI Management Lifecycle

Managing the key and certificate lifecycle, including PKI, is the most complicated part of implementing and managing device security. It is also the most important aspect of ensuring trusted devices.
Ensure that you can automate:
- Secure key and certificate generation
- On-device certificate signing request (CSR) generation
- Key and certificate management changes
- Root signing ceremonies
- Transfer of ownership

#7 Centralize code signing and secure boot

Code signing is the process of digitally signing software executables and scripts to confirm the author and the integrity of the software. It is important to ensure that firmware updates are signed by the developer and authenticated by the device before being installed. Replace the initial bootstrap certificate with an updated certificate to ensure the device boots up with the intended firmware.

#8 Root Certificate Authority (CA)

Implement an on-premise CA, or a third-party CA with certificates signed by a trusted root CA, to provide a high level of trustworthiness along the chain of trust of digital certificates.

#9 Integrate Device Management

PKI lifecycle management tools should be integrated with the device management system (DMS) so that generating key pairs and updating the PKI becomes a seamless process.

#10 Secure communication with end-to-end encryption

Implement encrypted SSL/TLS or IP VPN communications to ensure data privacy, leveraging a secure and automated PKI lifecycle management approach.

It is difficult to manually integrate these PKI capabilities across a global supply chain while ensuring that digital identities can be issued, updated, and managed. Keyfactor Control provides a turnkey solution for automating the management of the IoT security lifecycle in complex manufacturing supply chains.
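The verify-before-install step from "#7 Centralize code signing and secure boot" can be sketched as follows. This is a hedged illustration, not production firmware signing: real code signing uses asymmetric signatures (e.g. ECDSA or RSA) with the verification key anchored in a hardware root of trust, whereas an HMAC stands in here so the example runs with only the Python standard library. The key and firmware blob are made up.

```python
# Sketch of signed-firmware verification before install. An HMAC stands in
# for a real asymmetric code-signing scheme; key and image are examples.

import hashlib
import hmac

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical shared secret

def sign_firmware(image: bytes) -> bytes:
    """Producer side: compute a MAC over the firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_before_install(image: bytes, signature: bytes) -> bool:
    """Device side: refuse to install unless the signature checks out."""
    expected = sign_firmware(image)
    return hmac.compare_digest(expected, signature)  # constant-time compare

firmware = b"\x7fELF...new-firmware-build"  # made-up firmware blob
sig = sign_firmware(firmware)
print(verify_before_install(firmware, sig))               # True: install
print(verify_before_install(firmware + b"tamper", sig))   # False: reject
```

With real asymmetric signing, only the public key would live on the device, so a compromised device could verify updates but never forge them — the property that makes FOTA updates trustworthy.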
According to the National Center for Women & Information Technology (NCWIT), 47% of all employed adults in the U.S. are women, but women hold only 25% of computing roles. Racial, ethnic and economic disparities also present a significant gap across STEM fields. Of the 25% of women working in tech, just 5% are Asian, while Black and Hispanic women account for 3% and 1%, respectively. Diverse thinking in the workforce drives innovation by drawing from new perspectives and experiences. With fewer women pursuing degrees and careers in STEM, there is a critical need for greater equity in the industry. Part of this inequity starts early in life, with young girls who can't see themselves in STEM roles. Research shows that when asked to describe a typical scientist, engineer, mathematician or computer programmer, 30% of girls say they envision a man in these roles. Making this change starts in schools, with accessible STEM programs, meaningful mentorship and access to technology so students can build their skills.

Nurturing STEM Skills

Creating a diverse STEM ecosystem starts in the classroom with programs that make technology accessible and fun for girls at a young age. This can and should be a unified effort, with programs supported by the technology community that take place in school, removing as many barriers as possible. Girls Who Game, an extra-curricular program for students in fourth to eighth grade that's led by Dell Technologies, Microsoft and Intel, is one example of how to make technology enriching, engaging and exciting. The program provides an opportunity for young girls and underserved students across North America to learn more about gaming and the use of Minecraft as a learning tool. It goes beyond tech to also build global competencies, such as communication, collaboration, critical thinking and creativity. Engaging the right individuals is also important.
Programs like this can provide a personalized, safe and supportive community, with women in STEM acting as coaches, mentors and role models. Students walk away with a greater self-awareness of their skills, and are empowered to continue growing in STEM.

With cases of COVID-19 rapidly increasing this fall, many schools have decided to continue a remote or hybrid learning environment instead of returning fully to the classroom. K–12 and higher education institutions must be ready to quickly pivot between remote learning and classroom or on-campus education, or provide a combination of both. It's easy to jump in quickly and select technology based on what seems to make sense in the moment, but the infrastructure and other technologies schools invest in will make a significant impact long term. Schools should start by creating a strategy that addresses these questions: What do we want teaching and learning to look like? Where are the resources? How do we make a model of this? This involves a combination of devices, infrastructure and professional learning.

We're supporting higher education and K–12 by helping them unify their technological infrastructure to support daily teaching, whether remote or in the classroom. This includes servers, data storage, network capacity, data management software and data security. Additionally, we are supporting technology transitions to ensure IT teams are up to date on new equipment and can support any backend or user issues if they arise.

How are you guiding them? What issues are schools facing with their IT?

Since the pandemic, we have seen a considerable increase in the number of devices that have come online off the school or university network. This creates two challenges for these institutions: make sure the devices are safe and secure when they come back on the network, and make sure the network is ready for the additional device load.
The threat to education institutions is real, and the FBI has now issued alerts around increasing ransomware attacks. With an increase in endpoint users and network access points, and generally stretched IT teams, schools can find it difficult to respond properly once an attack occurs. We always recommend that security is baked in, not bolted on, so security is integral to the technology itself. Because most schools have moved to virtual learning environments in response to COVID-19, what are the likely long-term outcomes of this? A: A likely outcome is that schools will realize that virtual learning should be a component of every student's learning journey, but fully online will not work for most. In the rush to move online, many educators are learning that what they had to do in 14 days should really take months. The K–12 school systems that had already solved for access and moved toward blended learning had a much easier time shifting. As a result, we will likely see a strong push for access and blended learning going into next school year. School systems and higher education institutions will build for the future with blended environments as a core component of design, and this will allow educators and students to have a smooth transition into fully online learning whenever they choose. Also, moving forward, the technology leader will be seen as an essential part of the leadership team, if they haven't been already. Administrators are realizing that learning simply can't happen without the support of IT and, therefore, we should anticipate technology leaders in education will have a voice to support all decisions that impact the vision and the day-to-day work. These leaders will need to look beyond just the devices and think about the infrastructure needed to support learning anytime, anywhere. Will more schools embrace distance learning once we're beyond the pandemic? If so, what will that look like?
Will some educational entities move beyond physical classrooms altogether? This is a question that came up on one of the CIO chats we host, and the answer is maybe. I don't think that it will be embraced as it is being designed right now, because most school systems and institutions are rushing to get something created to support their learners and likely would do things differently with more time. But I think we will see collaborative work happen across the education spectrum to create courses and curriculum that can be implemented in ways that take advantage of face-to-face and online learning. This will allow schools and universities to redefine how they use physical space and tailor it more toward the actual learning. For example, students working in a collaborative group on a project might need a smaller space in the library with a whiteboard, laptops, an internet connection, and a screen to share, while other students are in a lecture hall getting new information via a Socratic seminar. Also, we might rethink how we use projects and playlists to support personalized learning that defines mastery through application of learning, so all learners have an opportunity to show learning in unique ways. There will likely always be an element of classroom learning at a physical school; however, that will likely look very different in coming years as pedagogy and technology continue to evolve in new ways to empower learners. In-classroom learning remains essential until we can solve the issue of equity. We still have students and teachers who do not have the correct devices or broadband access for virtual learning. We're seeing schools grappling with how to conduct special education or help ESL students with a balance of synchronous and asynchronous virtual learning. Additionally, in-classroom learning provides additional social and societal benefits including school lunches, after-school programs and a safe space for children in less ideal home situations.
It also remains essential because learners are social, and the physical building creates opportunities for collaboration and learning that wouldn’t be possible if we were all working in remote locations. In essence, what is the future of classroom-based learning and the technology that plays a role in providing instruction? I am not sure that the vision for the future has changed; I just think we have a new sense of urgency. School systems and institutions are still moving toward a definition of personalized learning that gives students some voice and choice in the learning process. This requires access to technology and the internet at home. If we can solve the inequities that exist today for our learners, then we will be able to shift to environments that provide true blended learning and remove time and space as the barriers. Learners will be involved in competency-based models that allow them to learn at their own pace. The university will become a hub for life-long learning and students will move in and out based on short and long term goals that they set with an advisor. In the end, we will utilize technology as the platform to enable great innovation and shift the model of learning to meet the needs of all learners. By Randy Lack, safety, security and computer vision manager for the Americas, Dell Technologies. Many colleges and universities are working to take advantage of Internet of Things (IoT) technologies to build “smart campuses” that promise new peace of mind for students and their families and a better overall experience for all who set foot on campus. Schools are the largest market for video security systems in the U.S., with an estimated $450 million spent in 2018. Adoption will continue to increase as IoT-enabled security solutions come onto the scene—empowering colleges and universities to do more than monitor security cameras and investigate after-event footage. 
New kinds of devices and powerful analytics, including artificial intelligence (AI) and machine learning, are transforming cameras and sensors from passive data collectors into intelligent observers with the ability to recognize and alert security to potential problems, provide real-time insight during unfolding events, and help identify patterns to proactively deter and prevent problems. Smart IP cameras with "computer vision" can learn over time to recognize patterns and behaviors in order to zero in on suspicious activity and better predict the likelihood of events. These cameras, combined with sensors that can detect sound, temperature, vibration, chemicals and more, form a system that can alert security to potential problems by relying on insights delivered from analytics-driven interconnected IoT devices. As a result, security teams can help improve response, share critical information with first responders, make better use of available resources, and help prevent situations from escalating or, in some cases, help prevent them from occurring in the first place. The following are just some of the innovative secure-campus applications being deployed today:
- Real-time integrated dispatch solutions that enable live video streams and location mapping to be shared with community police, fire, and other first responders, for faster, more coordinated response
- Sensor, floorplan and GPS data that combine with incident monitoring, push notifications, and the ability to pinpoint the location of anyone with a cell phone. This allows first responders to isolate events and send in the right kind of help to where it's needed more quickly.
- The open visibility of 24/7 IoT technologies such as security cameras across campus serves as a deterrent, helping to prevent theft, assault and vandalism
- Compact, solar-powered, Wi-Fi / 4G / RF-connected devices help cover "blind spots" without the expense of permitting an infrastructure investment to bring power to them
- Smart lighting follows people across dark campuses
- "Escort drones" accompany students and staff from one location to another
The need for a holistic, integrated approach
To take advantage of these applications, it's important to understand that security is no longer confined to self-contained, standalone systems and departments. With IoT, campus safety becomes a widely distributed, networked, and data-driven solution, with new requirements for shared campus policies and IT modernization across infrastructure, security, data management, analytics, operations, software development, and more. Indeed, many HiEd safety solutions require integration with security and IT organizations beyond the physical campus. For example, a large urban campus in southern California and the surrounding city government are working together to tie together data from campus, municipal systems and even the shuttle buses that transport students to and from the city for cultural and sporting events. The solution being developed also enables city and campus police to log in to each other's systems when coordinated efforts are needed. A university CIO has myriad responsibilities related to improving and maintaining technology and services in support of institutional goals. Still, to do that effectively, the job goes far beyond what many typically consider part of the role. Hiring engineers and IT specialists? That's part of your requirements, in addition to protecting personal information of students and faculty, ensuring there is a high-performance infrastructure, as well as providing effective systems and IT services to meet institutional requirements.
A CIO needs to have a variety of skills to succeed, including being capable of managing people and change while also considering financials, managing a budget, balancing technology responsibilities and keeping cybersecurity top of mind. Having served as a CIO at prominent four-year universities in the United States, I learned that in addition to the responsibilities outlined above, the role of a CIO is an ever-changing position that requires constant evolution and adaptation to meet the needs of a heavily technology-driven community. Some of the most important lessons I learned include: 1) Relationships are as important as technology I quickly learned that building relationships with executive decision-makers was crucial to the success of institutional initiatives. Building bonds with business unit leaders from facilities management to public safety to athletics can be as essential as the relationships with the provost, deans and academic department chairs. That is, the CIO should cultivate and maintain healthy relationships at all levels of the university, which can lead to allies in digital transformation efforts. Being connected with students is equally important. I found having a student technology advisory committee was an excellent way to listen to student needs, gain insights on how to improve IT services and build trust with the student community. Building a strong IT leadership team also enables CIOs to form better relationships on campus that will assist in implementing new academic and administrative initiatives. 2) Enforcing shared governance is a must One common CIO mistake is dictating change without receiving input from others on campus. This is why shared governance, placing the responsibility, authority and accountability for decisions on those who will use the technology, should be a top priority. Shared governance with the academic community is essential to being successful.
Higher education CIOs should be shifting responsibilities from operating technology to more strategic governance responsibilities. Students and faculty are the primary constituents that require technology and services from a campus IT organization, so naturally, CIOs should consider their requirements when assessing and implementing new solutions. For example, before purchasing new classroom instructional technology, it is crucial to consult faculty on those matters; and include faculty in pilot projects and testing. This approach often leads to better decisions that are made collaboratively, rather than having IT simply dictate decisions from a technical standpoint.
This year, on International Women's Day, governments, organizations, and individuals worldwide are being asked to help envision and create a gender-equal world. A world free of bias, stereotypes, and discrimination. A world that is diverse, equitable, and inclusive. A world where difference is valued and celebrated. That is this year's theme: #BreakTheBias. One of the industries struggling with significant bias and gender stereotypes is cybersecurity. This field plays an increasingly crucial role in our digital world and, as a result, offers many fulfilling career paths and opportunities. However, there are still significant barriers and misperceptions driving the belief that a career in cybersecurity is not for women. While women have been disproportionately impacted by pandemic-driven unemployment (for example, one in four women reported job loss due to a lack of childcare—twice the rate of men), the technology sector was less affected. This was mainly due to their being better prepared to pivot to remote work and flexible work models. As a result, according to a report by Deloitte Global, large global technology firms still managed to achieve "nearly 33% overall female representation in their workforces in 2022, up slightly more than two percentage points from 2019." While such progress is good, the technology sector still has a long way to go compared to other industries. Outside of the high-tech sector, women account for 47.7% of the global workforce. And they also make up 50.2% of the college-educated workforce. And the gender gap is even wider within the cybersecurity industry where, according to the (ISC)² Cybersecurity Workforce Study, women only make up 25% of the global cybersecurity workforce. This gap is certainly not because there aren't any jobs. According to that same study, the cybersecurity industry urgently needs 2.72 million more professionals. 
And while 700,000 cybersecurity professionals entered the workforce in the past year, the global workforce gap was only reduced by 400,000, indicating that global demand continues to outpace supply. Women are just generally not applying for or being recruited to fill these positions. This lack of gender equity has also directly contributed to the low percentage of women who hold cybersecurity leadership roles. In 2021, for example, only 17% of Fortune 500 CISO positions were held by women, with only one female CISO in the top ten US companies. There are three main reasons why women continue to be underrepresented in the cybersecurity industry: Many women don't consider cybersecurity a career path because it's primarily seen as a male profession. This image is reinforced by popular media, such as Eliot Alderson in the Mr. Robot TV series, where cyber activities are performed by young geeks in hoodies working late at night in a dark room lit only by their computer screen. While it may make for compelling TV, this stereotype is inaccurate and off-putting for many women, inadvertently contributing to gender disparity in the workforce. While cybersecurity certainly has its technical aspects, it is not just a technical industry. Like any growing industry, there are a wide variety of job opportunities that require human skills. These include analytical, communication, management, and interpersonal skills that are equally important to the organization's success and positively impact the industry. One reason why so few women apply for cybersecurity positions is they are less represented in STEM-based programs. But there is no reason why the technical aspects of a career in cybersecurity should be off-putting for women. The fact is, standardized math tests for fourth, eighth, and 12th graders show little gap in the scores between female and male students. 
But according to MIT WIM (Women in Mathematics), one of the drivers of the gender gap in technology fields is not ability but "stereotype threat." This happens when an individual worries about confirming negative stereotypes, leading women to conform to gender expectations by performing worse on assessments and decreasing their interest and persistence in STEM fields. Pervasive gender biases, few female role models, mistaken beliefs about technology being a male-oriented industry, and, sadly, teachers and parents who steer girls away from technology studies have combined to break the confidence of many young women otherwise suited to pursue a STEM-related degree. This is a global issue, with women generally earning less than 20% of all STEM degrees. According to Yale University, US women only earned 18.7% of computer science degrees. In the UK and across 35 European countries, fewer than 1 in 5 computer science graduates are women. And women hold only 18.5 percent of STEM positions in South and West Asia and 23.4 percent in East Asia and the Pacific. This bias starts early in their college careers. 49.2% of women intending to major in science and engineering switch to a non-STEM major during their first year. We cannot cure the lack of women in STEM overnight. So, organizations need to think differently about the composition of their cybersecurity staff. Many hiring managers—and HR—view individuals with backgrounds in computer science, engineering, and other STEM fields as the most qualified cybersecurity candidates, often ignoring those with degrees in other areas. But if they want to build successful cybersecurity teams, they need to broaden the scope of backgrounds they consider when looking for new employees. But the challenge goes beyond hiring. The reality is that women in cybersecurity roles also tend to be promoted more slowly than men—something known as the "first rung" problem. 
According to Fortinet CISO Renee Tarun, "Men are four times more likely to hold executive roles than their female counterparts, they're nine times more likely to have managerial roles than women, and [on average] they're paid 6% more than women." In addition, women tend to leave the field at twice the rate of men, citing gender bias, discrimination, and harassment as their reasons for leaving. In addition to the primary objectives of the UN's Sustainable Development Goals that call for equality and equity for women (goals four and five), organizations need to seriously consider how to merge their DEI (Diversity, Equity, and Inclusion) objectives into their equally important digital innovation strategies. Because the evidence is clear: businesses that employ gender equality practices across their organization report increased profitability and productivity. Given the rate at which digital innovation is transforming organizations (and the efforts of cybercriminals to exploit those digital acceleration efforts), now is the time to break our cybersecurity stereotypes. We must work together to remove the bias that cybersecurity is a gender-specific field and change the perception that it is purely a computer science discipline. In cybersecurity, technology is only one of the silver bullets required to eliminate cyberattacks. The three critical elements of an effective cybersecurity strategy are People, Products, and Processes. But when we continue to recruit the same people—same gender, same educational background, same perspective—we are unlikely to develop strategies that allow us to get out ahead of our cyber adversaries. For example, it is not a stretch to say that the failure to rethink security strategies—starting with who makes up cybersecurity teams—played a part in the nearly 1100% increase in ransomware attacks organizations worldwide experienced last year. 
To change this perception and get out ahead of the cybercrime crisis we all face, we must bring more voices, perspectives, and diversity to our cybersecurity teams. Here are five basic principles we need to adopt as we work to refine our cybersecurity teams and strategies: Cybersecurity plays an essential role in our modern society. However, a variety of skills and experiences must come together to guarantee the cyber industry's success. And as with any other industry, diversity is crucial. By bringing greater awareness to the diverse skills and backgrounds cybersecurity requires, we can help shrink both the gender and skills gaps while making strides in our battle with our cyber adversaries. Cybersecurity offers many fulfilling career paths and opportunities for women. Because technology—and cyberthreats—continue to accelerate, it is an industry in constant evolution, making the field of cybersecurity very stimulating intellectually. And because there are so many open jobs to fill, this sector is also attractive financially. But joining the cybersecurity industry also means having a significant impact on society. We live in a digital world where protecting data and individual privacy has become a critical sustainability issue. And as always, women play a vital role in making this possible. Find out more about how Fortinet’s Training Advancement Agenda (TAA) and NSE Training Institute programs, including the Certification Program, Security Academy Program and Veterans Program, are helping to solve the cyber skills gap and prepare the cybersecurity workforce of tomorrow. Learn more about Fortinet's efforts in closing the cybersecurity skills gap: Skills Gap Perspectives
Researchers have discovered a security vulnerability in crypto chips produced by Infineon Technologies AG that generate RSA public keys. If you're thinking to yourself, "What…?" or, "Okay, just tell me what to do, please," then you should keep reading. If you're looking for more technical details, the authors' disclosure is available here. You can also check out affected products and keys on GitHub. Although this is a major flaw, I want to tell you that it only affects a very small subset of RSA keys, probably not even 1% of RSA keys. RSA is used by most websites to secure traffic, but not a single website has been found vulnerable. Where are the chips and what you should do If you don't have a chip then you don't have a problem. These chips show up in many different devices, such as smartcards, security tokens, laptops, and other devices using these cryptography chips, which may or may not be Infineon-branded. If you have an affected device it is most likely a computer with a TPM chip or a hardware authentication device. The flawed software library allows for a practical factorization attack, which means an attacker can compute the private part of an RSA key. TPM stands for "Trusted Platform Module". It is a specialized chip that may be found in your laptop, or other endpoint device, that makes sure someone has not fiddled with the laptop's hardware and software. These TPMs are used in various devices to generate RSA key pairs for secured crypto processes, and are part of the flaw allowing adversaries to potentially gain control and decipher data secured through these integrated keys. If you have a TPM chip on your laptop (and even if you don't) you need to make sure you are getting updates. Microsoft already released an update for this on October 10th. Other vendors have done the same. Check with your computer manufacturer for details. Hardware Authentication Devices There are various hardware devices that provide authentication, or sign you into things.
This includes certain ID cards with embedded chips and U2F devices (like Yubikey). If you have a device that does this, you should consult with the vendor who provided it. They will likely have information posted on their website if they're affected. Even if you do have one of these devices you may still be safe. The vulnerability is only in how RSA private keys are generated on the device. So, if crypto keys were created on a computer and transferred to the device then you are not affected. Also, Elliptic-Curve Cryptography (ECC) keys would not be affected, even if they were created on the device. If you can get your public key off the device, you can then check it with various tools made available by the authors here. If you are vulnerable then you should back up your private key, create a new private key on your computer and transfer it to your device. This allows you to continue using the device without suffering from the vulnerability. Details on how to do this should be found on the manufacturer's website. We won't know all the details of this vulnerability until the authors present their findings on November 2 at the ACM CCS conference. The authors did drop some hints though, such as the title of their paper and the test tools they published. So, because of these hints, it might be possible for a malicious cryptographer to use them and start their attack early. The authors tell us that on a modern processor it takes 97 CPU days to crack a 1024-bit RSA key. This time can, of course, be reduced with more computational power. Yet again, we have another race between the "good guys" patching and the "bad guys" exploiting. Even though the nature of this vulnerability may give the "good guys" a bit of a head start, it is no excuse to sleep. Rotate your affected private keys as soon as you can.
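The published detection tools work because affected moduli carry a structural fingerprint: a vulnerable modulus N is congruent to a power of 65537 modulo each of a set of small primes used by the flawed generator. The sketch below illustrates that idea in simplified form; the prime list here is an illustrative subset chosen for this example, not the official detector's configuration, so use the authors' published tools to test real keys.

```python
# Simplified sketch of the ROCA-style fingerprint idea (illustration only).
# Vulnerable moduli satisfy N ≡ 65537^k (mod p) for each small prime p the
# generator uses, so we test membership in the subgroup generated by 65537.

SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97]  # hypothetical subset

def subgroup_of_65537(p):
    """Return the multiplicative subgroup of (Z/pZ)* generated by 65537."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % p
    return seen

def looks_fingerprinted(n):
    """True if n matches the simplified fingerprint for every small prime."""
    return all(n % p in subgroup_of_65537(p) for p in SMALL_PRIMES)
```

A literal power of 65537 passes every check, while a number off that structure almost always fails at least one prime; the real tool uses a larger, carefully chosen prime set to drive the false-positive rate effectively to zero.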
From citizenship applications, employment status and tax filings to census records – all provide massive amounts of data to the Government. A torrent of data in the form of medical files, satellite images, financial records and social media information. To a Big Data junkie, this "plethora of data" offers limitless opportunities to drive quantum leaps from information to insights, but the question is: Do we need to read the massive amount of data Governments gather and derive data insights? What is the significance? Well, the answer is why not! Big data analytics can be applied to a wide range of Government responsibilities such as crime prevention, threat prediction and prevention, tax compliance, transportation, defence, national security, revenue management, environmental stewardship and social services. Algorithms can be designed to find suspicious transactions occurring in every function of government in real time by combining local, national and social data. To name a few examples: Las Vegas has implemented a crime management system which successfully detects and predicts high-probability crime scenes, optimising the coverage of their police force and reducing response times. Prediction of violent crimes and malicious activities is subsequently possible using the crime history data points. The Dutch government prefills the tax forms of citizens by collecting relevant information from citizens' employers and banks, and also analyses the payment behaviour of every citizen to minimise the tax gap, improve efficiency, and save taxpayers' money. The Insurance Association of Malawi, in association with the World Bank and Opportunity International, continuously analyses massive weather and rainfall data to provide weather-indexed insurance to farmers for securing their loans while reducing risk. Big data analytics also has the potential to bring advancement in areas like disease surveillance, student curricula, microcredit, traffic control etc.
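As a toy illustration of the kind of suspicious-transaction screening described above, a simple statistical outlier check can flag amounts that deviate sharply from an account's history. The thresholds and data below are invented for the example; real systems combine many more signals and models.

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount lies more than z_threshold
    standard deviations from the account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Constant history: anything different is anomalous
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# Hypothetical account history (currency units)
history = [120, 95, 130, 110, 105, 125, 98, 115]
print(flag_suspicious(history, 118))   # typical amount -> False
print(flag_suspicious(history, 5000))  # extreme outlier -> True
```

In practice the same z-score idea is one building block among many; production fraud systems layer on graph analysis, peer-group comparisons and learned models.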
In Nairobi, an engineered social system uses geo-coded mobile phone transaction data to monitor and model the growth of slums and unauthorized settlements. A group of researchers analysed location data from 138,000 cell phones to understand the magnitude and pattern of the cholera outbreak in Haiti. Now, let's take a country like India – one among the top 10 countries in the world with a high crime rate, scoring 36/100 for corruption (where 0 is highly corrupt). Can it gain the benefits of big data and its applications? Can Big Data help transform a nation like India by using data to achieve things that previously have been unattainable? Yes, of course. India's introduction of the "Aadhaar" card, or national identity card, provides an ocean of profiles and a possibility of moving in the right direction. Information gathered and analysed has the potential to help build a robust employment system, augment tax information and help reduce fraud. As governments slowly open up their data, e.g. data.gov, we'll see big data platforms driving the way countries are governed. The only roadblock to this is "imagination" and how one can apply all the data available to us effectively. If Governments leave behind conventional methods and embrace innovation with data, a systematic integration of big data into policy and strategies will create new and significant change in everything.
Dinosaur teeth bird Yale University researchers have discovered that a toothed bird which inspired the likes of Charles Darwin bore the very first beak. Scientists said the dinosaur-toothed bird, Ichthyornis, lived 100–66 million years ago and plays a key role in the evolutionary journey from dinosaurs to modern-day birds. While Ichthyornis dispar fossils were first uncovered in the 1870s, newly found specimens from chalk deposits in Kansas and Alabama, including a beautifully preserved skull, reveal significantly more about the species than was previously known. These birds evolved from small feathered dinosaurs. According to the scientists, Ichthyornis was a strong flier; its body was streamlined, simplified and adapted for flight like that of modern birds. Its primitive characteristics were largely in its skull. Despite the advancement of its body and wings, it retained very nearly a full complement of dinosaur-like teeth, and it had a strong bite with large, dinosaur-like jaw muscles. Even so, the study shows it looked and thought like a bird, with a bird's large eyes and enlarged brain. Ichthyornis was the size of a tern, with a 60cm wingspan, and likely ate fish and shellfish. It shared the skies with flying reptiles called pterosaurs while dinosaurs dominated the land. Toothed birds vanished alongside the dinosaurs and many other species after the asteroid impact 66 million years ago. Researchers said Ichthyornis shows the ways in which evolution is both complex and elegant, permitting individual changes as well as massive integrated transformations.
What is an epoch in machine learning? An epoch in machine learning refers to one full pass of the training dataset through the algorithm whenever you wish to train a model with some data. As a result, it is a learning algorithm hyperparameter. With the rise of the digital age, many people have started looking for information on this rapidly evolving topic of machine learning. According to Acumen Research and Consulting, the global deep learning market will reach 415 billion USD by 2030. Are you wondering about machine learning benefits for your business, but do these terms confuse you? Don't worry; we have already explained what an epoch in machine learning is for you. Table of Contents What is an epoch in machine learning? A complete cycle through the entire training dataset can be considered an epoch in machine learning, and the epoch count reflects how many passes the algorithm has made throughout training. The number of epochs in training algorithms can reach thousands, and the procedure is designed to go on until the model error is sufficiently reduced. Examples and tutorials frequently use 10, 100, 1000, or even greater numbers. Advanced algorithms are used in machine learning to evaluate data, learn from it, and apply these learning points to find interesting patterns. Machine learning models are developed over many epochs. Because what the model learns depends on the dataset, some human involvement is necessary in the early stages. There are two different categories of machine learning models: supervised learning models and unsupervised learning models. Specific datasets are needed for these models to build their learning ability, and these training datasets must be planned in accordance with the desired result and the task(s) that the agent will need to complete.
To fully define an epoch, which is best thought of as one cycle through the entire training dataset, it helps to understand the fundamental concepts that make it up: an epoch is ultimately the aggregate of data batches and iterations. Datasets are organized into batches (especially when the data is very large). Running one batch through the model is one iteration. Epoch and iteration are sometimes used synonymously, but that is only correct when the batch size equals the entire training dataset, in which case the number of epochs equals the number of iterations. For practical reasons, this is generally not the case, and multiple epochs are typically used while creating models.

If the dataset size is d, the number of epochs is e, the number of iterations is i, and the batch size is b, the general relationship is d × e = i × b.

For instance, if we define the "task" as getting from point A to point B, each complete trip from A to B would be an "epoch," while the precise route information, such as stops and turns, would be the "iterations." Confused? Let's explore the terms separately.

What is a batch size in machine learning?

The number of training samples processed in one iteration is referred to as the "batch size" in machine learning. There are three common settings:
- Batch mode: The iteration and epoch counts are equal, since the batch size equals the complete dataset.
- Mini-batch mode: The batch size is greater than one but smaller than the full dataset, usually a number that divides evenly into the dataset size.
- Stochastic mode: The batch size is one, so the gradient and the neural network parameters are updated after each sample.

Batch size vs epoch in machine learning

- The batch size is the number of samples processed before the model is updated.
- The number of epochs is the number of complete passes through the training dataset.
- A batch must have a size of at least one and at most the number of samples in the training dataset.
- The number of epochs can be any integer value from one upward. Training can even run indefinitely and be stopped by criteria other than a fixed epoch count, such as a change (or lack thereof) in model error over time.
- Both are integer-valued hyperparameters of the learning algorithm, i.e., parameters of the learning process rather than internal model parameters discovered by it.
- You must provide both the batch size and the number of epochs to a learning algorithm. There are no secret formulas for configuring them; you must test several values to determine which ones work best for your problem.

What is an iteration in machine learning?

An iteration denotes one update of an algorithm's parameters. What this involves depends on the context, but a single iteration of training a neural network typically includes:
- Processing one batch of the training dataset.
- Calculating the cost function.
- Backpropagation and modification of all weighting factors.

Epoch vs iteration in machine learning

An iteration entails the processing of one batch; an epoch processes all of the data once. For instance, with 1000 images and a batch size of 10, each iteration processes 10 images, and it takes 100 iterations to finish one epoch.

How to choose the number of epochs?

The weights are changed after each iteration, and the model moves along the curve from underfitting to optimal to overfitting. The number of epochs is a hyperparameter that must be decided before training starts, and there is no one-size-fits-all formula for choosing it. Does increasing epochs increase accuracy?
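The epoch/iteration/batch arithmetic above can be checked in a few lines, using the article's own example numbers (1000 images, batch size 10) plus an assumed epoch count of 5:

```python
dataset_size = 1000   # d
batch_size = 10       # b
epochs = 5            # e (chosen just for illustration)

iterations_per_epoch = dataset_size // batch_size
total_iterations = iterations_per_epoch * epochs   # i

# Sanity check of the relationship d * e = i * b
assert dataset_size * epochs == total_iterations * batch_size

print(iterations_per_epoch, total_iterations)  # 100 500
```

So one epoch takes 100 iterations, and five epochs take 500, while the identity d × e = i × b holds throughout.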
Whether you are working with neural networks or determining geologic timescales, more isn't always better; you should find the best number for each case.

Why is the epoch important in machine learning?

The epoch is crucial in machine learning modeling because it helps identify the model that most accurately represents the data. The neural network must be trained with a chosen epoch count and batch size. Since there are no established guidelines for choosing either value, specifying them is more of an art than a science; in reality, data analysts must test a variety of values before settling on one that solves a particular issue best. One method of determining the appropriate number of epochs is to monitor learning performance by plotting the model's error over training in what is known as a learning curve. These curves are highly helpful when determining whether a model is overfitting, underfitting, or properly trained.

How many epochs to train?

There is no single ideal number of epochs; something on the order of ten is a common starting point, but the right value depends on the dataset. It may seem odd that we must repeatedly run the full dataset through the same machine learning or neural network method, but remember that we optimize learning with gradient descent, an iterative process. Updating the weights with just one pass, or epoch, is rarely sufficient, while too many epochs can cause the model to overfit.

Learning rate in machine learning

The learning rate is a tuning parameter of an optimization method, used in machine learning and statistics, that sets the step size at each iteration while aiming to minimize a loss function. It figuratively depicts the rate at which a machine learning model "learns," because it determines how much newly obtained information supersedes previous knowledge. In the literature on adaptive control, the learning rate is frequently called the "gain."
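One practical way to pick the epoch count, in the spirit of the learning-curve discussion above, is early stopping: keep training while validation error improves and stop once it stalls. A minimal sketch with made-up validation losses (the numbers are invented for illustration):

```python
# Hypothetical validation loss after each epoch; real values would come
# from evaluating the model on held-out data during training.
val_losses = [0.90, 0.55, 0.40, 0.33, 0.30, 0.29, 0.295, 0.31, 0.34]

patience = 2          # stop after 2 epochs with no improvement
best = float("inf")   # best validation loss seen so far
best_epoch = 0
waited = 0

for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best, best_epoch, waited = loss, epoch, 0
    else:
        waited += 1
        if waited >= patience:
            break   # training would stop here

print(best_epoch, best)  # 6 0.29
```

With these numbers the loss bottoms out at epoch 6, and training halts two epochs later rather than continuing into the overfitting region.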
Epoch is a term used in machine learning to describe the number of complete passes the training data makes through the algorithm. Because of the richness and variety of data in real-world applications, reaching a decent level of accuracy on test data may require hundreds to thousands of epochs.
Within the B2B industry, there has long been a standard for doing business with other organizations, one that remains today and will for much longer into the future. That standard is EDI. So, what does EDI stand for?

What is EDI?

EDI stands for Electronic Data Interchange. EDI definition: "EDI is a method for electronically exchanging standardized business information between trading partners." The EDI order process allows the computer-to-computer exchange of business documents from one company to another, thereby enabling automated, paperless communication. Businesses that communicate electronically are called EDI trading partners. Think of EDI as a common electronic business language that allows trading partners to quickly and easily communicate with each other.

What is the Difference Between a Paper-Based and an EDI-Based Business?

Before EDI, businesses relied on moving paper-based documents, which took far too long and did not provide the flexibility of an electronic exchange. There are hundreds of specific EDI transaction codes representing different business documents, but two documents familiar to any business are purchase orders and invoices. In a traditional invoice exchange, a company creates an invoice in a computer system, prints a copy on paper, and sends that invoice via mail to its customer. With email or fax, the transmission might occur slightly faster, but the core of the process remains the same. Once the customer receives the invoice, markups are often required, and the customer must also enter the invoice contents into its own back-end computer system. This traditional invoice cycle is essentially a manual transfer of information from the seller's back-end system to the customer's: from one ERP to another, or from one accounting solution to another.
By nature, EDI software replaces postal mail, fax, and email, directly connecting trading partner systems and eliminating the manual steps of a traditional invoice transfer.

The Function of EDI

So, how does EDI work? In a traditional purchase order exchange, the entire cycle could take anywhere from days to weeks. A typical manual document exchange for a purchase order looks like this:
1. The buyer creates a purchase order.
2. The buyer prints the purchase order and sends it to the supplier via mail, fax, or email.
3. The supplier receives the purchase order and manually enters it into its own order management system (NetSuite, Salesforce, QuickBooks, SAP, etc.).
4. The supplier creates an invoice.
5. The supplier prints the invoice and sends it to the buyer via mail, fax, or email.
6. The buyer receives the invoice and manually enters its contents into its own back-end systems.

Oftentimes there are far more steps than the ones outlined above, including acknowledging receipt of the purchase order or requesting changes to an invoice. In contrast, EDI order processing can take far less time (hours, if not minutes):
1. The buyer chooses to purchase a good or product.
2. The buyer's EDI system automatically creates an EDI version of the purchase order and sends it directly to the supplier.
3. The supplier's order management solution receives the purchase order and automatically ingests it, updating its own system.
4. The supplier's order management solution automatically creates an acknowledgment and an invoice and transmits both documents to the buyer, confirming receipt of the purchase order and providing a receipt for the order.
Here's how the exchange would play out using actual EDI transactions, in order:
1. You receive/accept an EDI 850 Purchase Order.
2. You send an EDI 855 Purchase Order Acknowledgment to your buyer to confirm successful receipt of the EDI 850.
3. You send an EDI 856 Advance Shipping Notice (ASN) to describe to your buyer the contents of each shipment and how the items were packed.
4. You send an EDI 810 Invoice to your buyer.

You can match the EDI documents listed above to the Cleo Integration Cloud EDI interface screenshots below. First, an EDI 850 purchase order is received from a trading partner (Rick's Sporting Goods). Second, an EDI 855 Purchase Order Acknowledgment is sent to Rick's Sporting Goods to acknowledge receipt of the EDI 850 purchase order. Third, an EDI 856 is sent to Rick's Sporting Goods, giving information on the contents of the shipment. Fourth, an EDI 810 invoice is automatically generated and sent to Rick's Sporting Goods.

6 Benefits of EDI

EDI allows companies to save time and money and makes life a whole lot easier. For supply chain companies in particular, the value of an efficient EDI provider can be the difference between business success and failure.

EDI creates cost savings: Through an EDI service, companies can execute workflows that reduce costs. Previous paper expenses, from printing and reproduction to storage and postage, are gone. A streamlined documentation process helps companies comply with EDI service standards, which helps avoid fines due to SLA violations, delays, and other performance gaps.
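As a rough illustration of what these documents look like on the wire, here is a sketch of splitting an X12-style EDI payload into segments and elements, using the conventional "~" segment terminator and "*" element separator (in real interchanges the separators are declared in the ISA envelope). The fragment below is invented and heavily abbreviated, not a complete or valid 850 transaction set.

```python
# Made-up, truncated 850-style fragment: ST (transaction set header),
# BEG (beginning of PO), PO1 (line item), SE (transaction set trailer).
raw = "ST*850*0001~BEG*00*SA*PO123**20221001~PO1*1*10*EA*9.95~SE*4*0001~"

# Split into segments, then each segment into its elements.
segments = [seg.split("*") for seg in raw.strip("~").split("~")]

# The first element of each segment is its ID.
for seg in segments:
    print(seg[0], seg[1:])

# e.g. pull the purchase-order number out of the BEG segment
beg = next(seg for seg in segments if seg[0] == "BEG")
po_number = beg[3]
print(po_number)  # PO123
```

Production EDI translators do far more (envelope validation, element dictionaries, acknowledgments), but the segment/element structure above is the core shape of the 850/855/856/810 documents discussed here.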
According to Infoworld Magazine's survey of an enterprise company, the cost of processing a paper-based purchase order is almost $70, whereas the same transaction performed through EDI costs less than $1.[1]

EDI enhances speed: EDI allows enterprises to cut processing time remarkably through automation, speeding up business cycles. Order-to-shipment cycles in particular can be cut by 50 to 60 percent. There is a tremendous difference between a transaction exchanged in minutes and one that takes days or weeks, as is common with many forms of manual transfer.

EDI ensures order accuracy: Nobody wants to make errors, and that's exactly what EDI helps organizations avoid. When employees must manually enter data into enterprise resource planning (ERP) software or order systems, odds are that a mistake will happen eventually. EDI solutions are designed to automate that cumbersome process of moving EDI data into an ERP, eliminating human-caused errors. EDI mapping leaves lost orders and incorrectly entered phone orders behind, saving employee time spent handling data disputes or finding errors.

EDI drives business efficiency: EDI is proven to be fast and accurate, which is why EDI integration is such a popular automation method for enterprises everywhere. Companies are no longer expected to perform hands-on processing, so customer relationships improve, errors happen less often, and the delivery of goods and services is expedited.

EDI establishes security: Companies can feel safe and secure with an EDI model in place. EDI-capable solutions are designed to ensure security, allow access only to authorized users, and are usually equipped with archive tracking and audit trail capabilities. Companies can also share data securely across many communication protocols and security standards, ensuring EDI compliance with mandates in global business.
EDI elevates strategic decisions: The right EDI solution allows businesses to onboard trading partners faster, quickly resolve errors, improve productivity, and provide real-time EDI visibility into every single transaction. When configured correctly, EDI creates a seamless flow of data that lets companies shift their focus from processing transactions to the ultimate goal of growing their business. A look at EDI by the numbers further illustrates the benefits of the EDI supply chain.

The Importance of EDI

How many languages does your business speak? Perhaps a strange question, but if the entire business world could be given an official language, it would be EDI. Without EDI, efficient communication between organizations wouldn't exist. Instead, retailers, manufacturers, healthcare providers, governments, and many other industries would have to rely on manual, outdated procedures to facilitate business-to-business (B2B) communication. Think of EDI alternatives like fax, email, and manual data re-entry. These manual methods create a jumble of unreliable, insecure, non-scaling, and non-programmatic systems for communicating business-to-business information, the most important data to the business at large. Furthermore, inconsistent messaging formats result in higher error rates and confusion among supply chain and logistics partners.

And that's why EDI is so important. The use of a standardized communication format creates an international language across every industry in the world. EDI has evolved over the decades to become the most globally recognized business communication standard, allowing enterprises to conduct business electronically across business networks and geographic borders. In this way, EDI has become the dominant language of business.
3 Reasons EDI Will Elevate Your B2B Workflows

EDI has always been a great technology, one that automates business processes and simplifies once-cumbersome data workflows to enable faster transactions. But there has been a resurgence in EDI use cases of late. The data movement and transformation technology driving EDI workflows has evolved to better enable the governance, visibility, and partner relationship management critical to doing business today. Cloud EDI integration and application integration capabilities, for instance, arm enterprises with the tools to support emerging connectivity use cases that enable more flexible ways to interact with new customers. Additionally, modern EDI solutions can be deployed in a way that fits an organization's current and future needs: in the cloud, on-premise (or private cloud), or in a managed services model. Here are three reasons why electronic data interchange implementations continue to benefit businesses across the supply chain:

EDI has standards

EDI's rigid standardization, which technologists often complain about, is also a huge reason for its staying power. It is difficult to maintain and scale B2B data exchanges without standardization, and the absence of standards to dictate a universal format is why APIs haven't been able to replace EDI in the B2B world. An API is infinitely customizable to meet the requirements of the business for which it was produced, so connecting is in the hands of the developer.

Taking the manual effort out of data exchange processes means businesses can automate more of the work to be more efficient, eliminate labor-intensive activities, and put precious IT resources on more important projects. Companies may have spent a lot of time and resources getting EDI up and running over the years (the technology has admittedly evolved into a less massive undertaking), but the result was a communication solution that works.
It's still the most widely accepted communication tool out there, and the largest organizations in the world won't do business with you unless you support EDI. For these reasons, the market isn't explicitly pushing for a technology change, but rather is clamoring for better ways to do EDI. These include supporting more advanced protocols to move EDI data, offering EDI-as-a-service products, and providing fully managed EDI solutions with flexible licensing options and predictable costs.

EDI technology is no longer the standalone data beast it used to be. Its original purpose was to move order and invoice information among external trading partners. More often now, companies are seeking to integrate other applications into that flow and require consolidated integration for ERP, CRM, WMS, and other cloud and on-premise applications. Modern integration solutions enable this holistic connectivity and business-wide visibility at greater speeds so companies can achieve faster time to revenue. Vendors like Cleo have engineered platforms that address EDI integration needs holistically, across the full range of application and process integration workflows within the modern enterprise. (Learn how Cleo can seamlessly integrate EDI documents into the most popular business applications and cloud services, such as Salesforce, Expensify, and PayPal.)

A comprehensive view of how Cleo does EDI integration looks like this:
- Starting on the left-hand side, a traditional EDI order document set is automatically processed between trading partners.
- Next, those EDI documents are converted into the necessary internal format.
- Finally, the information previously contained in those EDI documents, now in the correct internal format, is pushed into the ERP systems on the right-hand side using an API connection.

To fully understand the importance of EDI, we have to go back in time to its history. And it all started in 1948 in Berlin.
The History of EDI

Advent of EDI

The year 1948 was one of the most tumultuous in postwar history. Ed Guilbert was a U.S. Army master sergeant with a problem: how was he supposed to deliver supplies to troops in West Berlin after it had been cut off from West Germany? The result was a marvel of logistics, the Berlin Airlift, which transported over 2.3 million tons of goods into West Berlin over the following year. So how did Guilbert do it? He designed a standardized shipping manifest that organized the entire process, tracking what was contained in each shipment and which pilots were delivering the cargo.

Years later, in the 1960s, Guilbert took his idea to the next level by developing an electronic messaging format for sending shipping information about cargo. The transportation industry was the first to really take advantage of this new process and recognized the endless potential of EDI. The Holland-America steamship line sent shipping manifests across the Atlantic in 1965 using telex messages, which allowed it to send a full page of data in roughly two minutes. EDI really gained steam in 1968 with the creation of the Transportation Data Coordinating Committee (TDCC), which began creating electronic standards within the transportation industry. Companies across other verticals began to adopt EDI as well and were soon able to pass documents electronically through radioteletype (RTTY), telex messages, and telephone.

In 1973, the File Transfer Protocol (FTP) was published to enable file transfer between internet sites. FTP was later standardized in RFC 959, which outlines the ports used, the commands FTP accepts, and the allowed transfer parameters and modes. In 1975, the first national EDI specification was published, with Guilbert as a big contributor.
The first VAN (value-added network), Telenet, was also established in 1975 and was the first commercial packet-switching network to add more than just links between basic computer systems. Two years later, in 1977, a group of grocery companies and partners drafted an EDI project, and in 1978 the TDCC became the Electronic Data Interchange Association (EDIA). The EDIA was soon chartered by the American National Standards Institute and became the ANSI X12 committee, responsible for publishing EDI standards. ANSI X12 published its standards for the first time in 1981, covering the transportation, food, drug, warehouse, and banking industries. Soon after, major companies in the automotive industry, including Ford and General Motors, as well as retailers such as Sears and Kmart, mandated EDI for their suppliers. The EDIFACT EDI standard was created by the UN in 1985 to aid the global reach of the technology. Interestingly enough, EDIFACT was adopted by the automotive industry, but other industries insisted on remaining with ANSI X12.

The 1990s and 2000s

By 1991, as many as 12,000 companies were regularly using EDI. The Uniform Code Council (UCC) began EDI over the Internet (EDIINT) to standardize communication of EDI data over the Internet. By 2001, the AS2 communication standard had been published by the UCC to enable encrypted transmission of data over the Internet using the HTTP protocol. Walmart adopted the AS2 standard in 2004 to communicate better with its suppliers, and other major retailers followed, although many remained on VAN communication.

EDI is Everywhere Today

Whether you purchase a jug of orange juice at the grocery store, order new shoes from Amazon, buy medicine at the drugstore, or sip a nice wine at your favorite restaurant, EDI plays a critical role in ensuring a dependable, repeatable experience. Sure, it would be possible for these things to happen in a non-standard way.
Each manufacturer, retailer, or other business could manually fax paper orders and invoices, or email shipping data and confirmation documents. It would just cost a lot of time, money, and simplicity to do so. EDI enables the standardization and automation required to expeditiously execute, and track, these processes. Some of the biggest companies in the world, Walmart, Target, and Home Depot to name a few, mandate sending EDI via AS2 and prioritize securely connecting with suppliers, vendors, and trading partners to keep doing billions of dollars in business every day. While the process itself isn't all that exciting, it's cool to know that EDI drives much of the global economy and has a hand in much of the commerce we transact daily.

The Criticality of Modern EDI

In 2021, the importance of a modernized EDI solution has never been greater or more obvious. Modernizing your EDI software positions your company to take on new business and respond quickly to customer and trading partner requests. The last thing you need is to disrupt your data flow and business processes because your environment isn't up to date. Cleo Integration Cloud elevates your EDI integration processes and streamlines your B2B communications. It helps automate your EDI solutions to connect, transform, and route your EDI and non-EDI transactions through your ecosystem without piling on custom code. Discover the benefits of using Cleo Integration Cloud for EDI integration by watching this short demo video.

[1] Millman, Howard. "A Brief History of EDI." InfoWorld, p. 83.
How a Popular Company Could've Prevented a Phishing Attack

The first known mention of the word "phishing" was in the America Online (AOL) user group appropriately named "AOHell." Phishing has raised hell ever since. As technology has evolved, so has the sophistication of targeted phishing attacks. In this report, we walk through a real-world case study of how a socially engineered phishing attack worked on a popular company, and we show you steps by which it could have been prevented. The report guides you through some big questions and answers about phishing, including:
- What is social engineering?
- What is spear-phishing?
- What happened when a popular company was breached?
- How was the breach successful?
- How can social engineering, targeted phishing, and lateral movement lead to a security breach?
- Authentication best practices to prevent sophisticated phishing attacks

Phishing is low effort yet very effective at allowing hackers to steal credentials. In most instances, phishing and socially engineered attacks can be prevented with multi-factor authentication (MFA or 2FA). MFA, a security measure recommended by the Department of Homeland Security, requires multiple factors such as your device, biometrics, location, and more to prove you are who you say you are before granting access. It is the first layer of a zero trust cybersecurity framework. We live in a distracted, multitasking world that makes it easy to click accidentally without checking a message thoroughly. Security Boulevard reports that 2020 saw an 85% overall increase in all categories of cybercrime, including a more than 600% increase in phishing attacks. Learn how trusted devices, zero trust, adaptive user policies, and more can thwart phishing. Download Anatomy of a Modern Phishing Attack today, learn how to implement preemptive, prescriptive cyber protection for your cloud and on-prem applications and network, and say farewell to phishing threats.
Try Duo for Free

Sign up for our free 30-day trial and see how easy it is to get started with Duo and secure your workforce, from anywhere and on any device.
There are a bunch of different terms here, all with slightly different meanings:
- Command Injection
- Remote Code Execution
- Remote Command Execution
- RCE

These subtle differences have caused confusion enough times in my life now for me to write a blog post clearly defining the differences.

Command Injection is a type of vulnerability that allows an attacker to inject operating system commands directly into an application and have them executed (the kind of commands one would enter into a Bash or PowerShell terminal).

Remote Code Execution is the impact of a vulnerability that allows an attacker to execute code remotely; it is not the actual vulnerability itself. The vulnerability does not necessarily need to be a Code Injection vulnerability; it could be something else, such as an arbitrary file upload that allows an attacker to upload a web shell.

Remote Command Execution is the impact of a vulnerability that allows an attacker to execute commands remotely; again, it is not the vulnerability itself. The underlying vulnerability does not necessarily need to be Command Injection; it could be any vulnerability that results in an attacker being able to execute commands. It should be noted that, typically, if a vulnerability allows remote code execution, it will also allow remote command execution, and vice versa.

RCE is a generic term that can refer to either Remote Code Execution or Remote Command Execution. In other words, RCE is the impact of a vulnerability that allows an attacker to execute code and/or commands remotely.

TL;DR: Injection is a type of vulnerability; execution is a type of impact. A command is a shell command, while code is some type of server-side code other than shell commands, such as PHP. RCE is used interchangeably to mean remote (code|command) execution.

Attack surface monitoring has become increasingly important and popular in recent years as the internet footprint of organizations has increased.
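To make the Command Injection distinction concrete, here is a small sketch (my own illustration, not from the post) contrasting an injection-prone pattern with a safe argument-list alternative. No command is actually executed; we only build the command strings.

```python
def vulnerable_cmd(filename):
    # String interpolation into a shell string: the classic Command
    # Injection pattern (think passing this to a shell with shell=True).
    return f"cat {filename}"

def safe_argv(filename):
    # An argument vector passed without a shell: the input stays a
    # single argument no matter what characters it contains.
    return ["cat", filename]

payload = "notes.txt; rm -rf /"  # attacker-controlled input

# In the vulnerable version, the ";" smuggles in a second command:
print(vulnerable_cmd(payload))   # cat notes.txt; rm -rf /

# In the safe version, the payload is just one strange filename:
print(safe_argv(payload))        # ['cat', 'notes.txt; rm -rf /']
```

The vulnerability here is the injection (string concatenation into a shell command); the impact, if the string were handed to a shell, would be remote command execution.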
Hackers are utilizing advanced recon methods to discover and monitor the internet-facing assets of an organisation. As changes occur in the attack surface, it is beneficial for hackers to be notified so they can immediately check whether these changes have introduced security issues. Of course, this makes it equally important for organisations to monitor their own attack surface, so that they have at least the same visibility as their attackers.

Today there are a lot of tools available to help automate the process of monitoring an attack surface. Many of them are extremely expensive and designed for an enterprise setting. Thankfully for individual users, OSINT hobbyists, and bug bounty hunters, there are some great free, open source alternatives too. Today I'll be diving into one of them: SpiderFoot.

The open source version of SpiderFoot is pretty amazing, and totally free. It has been worked on for almost a decade now, making it very stable and feature rich. If you want a full range of attack surface monitoring capabilities, you would need SpiderFoot HX, the premium, cloud-hosted offering. If you don't want to do this stuff from the cloud, or are operating on a tight budget, I'm going to cover some simple things you can do using the open source version plus some other tools and scripts to get basic attack surface monitoring capabilities. This will enable you to:
- Be notified when new hosts appear in certificate transparency, Shodan, and the other sources SpiderFoot hooks into to identify new hosts
- Capture screenshots of new hosts as they are identified
- The rest is up to your imagination; if SpiderFoot detects it, you can get alerted about it

First of all, let's choose the SpiderFoot HX features we need to mimic to get this functionality.
Of the main SpiderFoot HX features beyond the open source version, those highlighted look like good candidates because they offer a lot of value and also seem achievable:
- Hosted, pre-installed and pre-configured
- Better performance (5x-10x)
- Multiple targets per scan
- Monitoring with change detection and notifications
- Investigations through a graph-based UI
- Built-in TOR integration
- Feed data to Splunk, ElasticSearch and REST endpoints

Perhaps surprisingly, the REST endpoint is very easy to implement. The open source SpiderFoot uses SQLite3 as its backend database. If you have run a scan, you can view the raw results easily by simply opening the spiderfoot.db file in the root directory of the SpiderFoot installation.

There is an excellent open source project called Datasette which ingests any SQLite file and turns it into a browsable web interface, along with a full JSON API. It also has some other handy features, like the ability to run raw SQL queries and export data in CSV format. A word of warning: Datasette does not have any authentication and allows anyone who visits the page to run arbitrary SQL commands and view all of your data, so be sure not to expose it beyond localhost!

To set this up we can simply install Datasette with pip (requires Python 3.6 or higher):

pip install datasette

If you're using a Mac, you may also use Homebrew:

brew install datasette

Then we can start Datasette with the following command (you may need to change the location of the database, depending on where it is stored on your system):

datasette serve ./spiderfoot.db

The scan results are stored in the table called "tbl_scan_results"; navigating to this table shows the data within the web UI. You may notice a "json" link next to the table; clicking it takes you to the JSON endpoint for that table.
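As a sketch of scripting against that JSON endpoint, here is a tiny helper that builds filtered query URLs. The host, port (Datasette's default of 8001), and database name in BASE are assumptions based on a default local setup; adjust them to match your own instance.

```python
from urllib.parse import urlencode

# Assumed defaults: Datasette serving locally on its standard port, with
# the database named after the spiderfoot.db file.
BASE = "http://127.0.0.1:8001/spiderfoot/tbl_scan_results.json"

def results_url(**filters):
    """Build a JSON API URL filtered by column=value pairs."""
    query = urlencode(filters)
    return BASE + ("?" + query if query else "")

# e.g. only results produced by the sfp_whois module
print(results_url(module="sfp_whois"))
```

You could fetch the resulting URL with curl, or with urllib/requests in a larger script, and iterate over the returned rows.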
To filter the data, you can use HTTP GET parameters. For example, to only view results from the SpiderFoot sfp_whois module, we can navigate to the following URL. Note the &module=sfp_whois at the end. Just for convenience, I'm going to save it into a bash file for easy execution later. To achieve this, I just copied the command into gethosts.sh and added execute permissions. Now head over to the Aquatone releases page to grab the latest version for your operating system, then download and unzip it. Inside you will find a binary file. The screenshot functionality utilises headless Chromium or Chrome. The Aquatone docs say that Chrome is sometimes unstable, so they recommend installing Chromium, which you can find here. Once it's installed, all we need to do is pipe the output of gethosts.sh into Aquatone like this:

~/gethosts.sh | aquatone

I'd recommend doing this in an empty directory, because it will create a bunch of files and folders. The output should look similar to the following: When Aquatone finishes, it will have created a number of files in your current directory, as shown below. If you take a look in the screenshots directory, you can view the raw screenshots. Alternatively, you can open the aquatone_report.html file to see a nice UI overview of the scanned hosts including screenshots (grouped by similarity), raw responses and HTTP headers. See below for a sneak peek.

Monitoring With Change Detection and Notifications

As is, the open source version of SpiderFoot provides single scans, but no means of continuously monitoring a target by scanning at regular intervals. This is something only offered by SpiderFoot HX. Ideally, if we're hacking a target or defending our own organisation, we would want scans to be performed at least once per day, and any changes should be sent to us as a notification. SpiderFoot is quite a comprehensive application that pulls many different data types.
Alerting on all of these data types may lead to a lot of notifications, so for the purpose of this blog post we are just going to monitor for newly discovered subdomain names. This would be very useful to a bug bounty hunter monitoring a large scope, or to a security team monitoring their own systems. Note that you could use any data gathered by SpiderFoot with similar methods.

Setting Up Continuous Scanning Using Cronjobs

First let's set up regular scans by utilising a cronjob! To start, simply run:

crontab -e

This will open up the file which contains all of your cronjobs in vim. If you know how to use vim, simply enter the following line, then save and quit. You will need to edit the location of sf.py based on your setup, and also change "yourtarget.com" to whatever your scan target will be. The "0 2 * * *" tells cron to run the command every day at 2am. To better understand how cron scheduling works, or to create your own schedules, check out crontab.guru. That's it! Now that you've edited your crontab, the scan will run every day at 2am. As I stated earlier, for the scope of this blog we're only interested in sending notifications for new subdomains, so we can reuse the gethosts.sh script that we created earlier. Firstly, let's create another bash script to append new hosts to a file. Save this script to a file called appendhosts.sh:

for line in `~/gethosts.sh`; do grep -qxF "$line" ~/hosts.txt || echo "$line" >> ~/hosts.txt; done

Be sure to give it execute permissions with:

chmod +x ~/appendhosts.sh

Next, run crontab -e and add this new line to make the script run every hour:

0 * * * * ~/appendhosts.sh

Now, any time a new subdomain is discovered, it will be added to the end of the ~/hosts.txt file. Just one step left: setting up notifications!

Setting up Notifications

The last step is setting up notifications for the changes we detected in the previous step. Again, we can do this with some bash magic and a cronjob. I'm going to be using a Discord webhook for the notifications.
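The notification itself is just an HTTP POST to the webhook URL. Here's a hedged sketch in Python (standard library only, so it's fully self-contained) of what such a notification can look like. The webhook URL is a placeholder you'd replace with your own, and the message wording is just an example – Discord webhooks accept a JSON body with a "content" field.

```python
import json
import urllib.request

WEBHOOK_URL = "https://discord.com/api/webhooks/XXX/YYY"  # placeholder - use your own

def build_payload(host: str) -> bytes:
    # Discord webhooks accept a JSON body with a "content" field
    return json.dumps({"content": f"New subdomain found: {host}"}).encode()

def notify(host: str) -> None:
    # Fires the actual Discord message (needs a real webhook URL and network access)
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_payload(host),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    # Dry run: just show the payload that would be POSTed
    print(build_payload("dev.example.com").decode())
```

You could call notify() for each new line that appears in ~/hosts.txt; the bash-and-curl approach below achieves the same thing with less code.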
Essentially, we can send a message using curl to our own Discord webhook, and it will come through as a Discord message. To set up your Discord webhook URL, follow the instructions here. Save the webhook URL for later. Now copy the following script into ~/sendnotification.sh and edit the example webhook to be your own. This script will continuously monitor ~/hosts.txt for changes. Whenever a new subdomain is appended, it will send that subdomain as a Discord message. Once again, be sure to make the script executable with:

chmod +x ~/sendnotification.sh

As this script will need to run continuously, it may be best to run it within tmux or screen on a VPS, so that if your SSH connection drops it will continue to work. Here's a screenshot of the outcome: The point of this article is to show that even the free version of SpiderFoot is an extremely powerful tool that can easily be extended to provide some basic attack surface monitoring capabilities. By implementing some simple scripts around it, a few of the key features of SpiderFoot HX can be mimicked free of charge. This might be ideal for individual users, bug bounty hunters and OSINT hobbyists. If you do OSINT a lot, or you're using this as an organisation, you may be better off paying for SpiderFoot HX for the additional speed, support, hosting, multiple targets, correlations, etc.

The whole JBS Meatworks ransomware attack caused some inner conflict for me. Firstly, I'm an ethical hacker, and I don't believe that ransomware attacks are ethical. I've spent a lot of my time defending organisations against these types of attacks. Secondly, I'm a vegan who is against slaughtering animals for human consumption. What happens when ransomware halts animals from being killed for human consumption? How do I feel about that? ¯\_(ツ)_/¯ I won't lie, my initial gut reaction to the news of the JBS Meatworks ransomware attack was joy.
It warms my heart that the power of hacking can give an individual the ability to cause significant positive change in the world, especially by disrupting an operation that slaughters tens of thousands of animals every single day. Reading further, I wasn't so happy about it. Legality aside, this was not an ethical attack. It was clearly financially motivated: many of the employees of JBS Meatworks did not get paid on time, there were probably casual staff who lost income, and the animal slaughter will probably continue in a week or so anyway. This got me thinking though… what if the attack's sole purpose was to halt the slaughter of animals, not to make money? Would I consider that to be ethical? Or to abstract it further, and eliminate my own personal beliefs: It's a tough question, right? Firstly, the word "ethical" is purely subjective. Secondly, there are varying degrees of ethicalness. So while this question is good to think about, there is no definitive answer. What really matters is people's opinions on these topics, because those beliefs are what end up being translated into actions in the real world. So how do we measure people's opinions?

Getting some answers

If you want to gather opinions, Twitter polls are a terrible idea. But that's exactly where I turned. I love Twitter polls, but they're not exactly the epitome of scholarly research. Most of the Twitter polls I'm about to show you only allow binary answers. Due to this, they don't allow the opportunity to fully explore the complexities of the topics. Regardless, the results are super interesting, and they do give an insight into the initial gut instincts of my Twitter followers (who are primarily hackers). First of all, I asked if it is ethical to launch a ransomware attack against an organisation that primarily makes money from something unethical. I was surprised to see that 56% of voters said yes.
Next, I asked a very similar question, but this time I changed it to a DDoS attack instead of a ransomware attack. The main difference is that the attacker would gain nothing from a DDoS attack. It is an attack designed purely to disrupt operations, unlike a ransomware attack, which is more likely to be financially motivated. About 61% of voters feel that this type of attack is ethical, provided that the organisation they are attacking is (subjectively) unethical. Relating this back to the JBS Meatworks attack, I asked my followers whether killing thousands of animals per day for human consumption is ethical: about 53% of respondents said that it is ethical, leaving 47% of respondents believing that killing animals for human consumption is unethical. Then I asked whether the respondent would actively attack an organisation that consistently partakes in actions that they feel are unethical. In this poll, I also added a "see results" button, because I only wanted people to respond if they had a particularly strong opinion one way or the other. The results are staggering. 31% of respondents felt strongly enough about this question to respond "yes". In other words, almost one third of respondents would actively attack an organisation that consistently partakes in actions that they feel are unethical. When you combine these two outcomes…

- ~31% or more of the respondents would personally, actively attack an organisation that they feel is unethical
- ~47% of the respondents feel that killing animals for human consumption is unethical

…it is easy to see that organisations who supply meat are likely to be attacked, because there is quite a large cross-section of hackers who would be willing to disrupt the resources of these plants, whether there is money to be gained or not. This doesn't stop at animal agriculture though; nearly a third of these hackers are willing to attack any organisation that they deem to be unethical.
That's a pretty crazy thought, and it begs the question: And the answer is…. maybe? The polls seem to suggest that this is the case, but it's hard to say. There's a big difference between answering a couple of polls on Twitter and actually attacking an organisation. And just to make things even more grey – JBS recently released a plant-based meat alternative range, and also bought Vivera, a company that sells plant-based protein. So yeah… I dunno… *confused stare* It's interesting to think about, though. For a while now, the whole "hacktivism" scene seems to have been pretty stagnant. The "Anonymous" movement has mostly fizzled out, although it did pop its head up briefly in support of the BLM movement. Other than that, there really has not been much going on. The responses to these polls tell me that there is still an underlying thirst within hackers to drive (subjectively) positive change in the world, and they certainly have the power to do so. It seems that it is only a matter of time before a new group of vigilante hackers joins forces again to wreak havoc against organisations that they feel are unethical. In these cases, the behaviour of the attackers is far less predictable than your run-of-the-mill ransomware attack, because the motivation runs deeper than money and is far more complex. Whether you're for or against it, it's something worth thinking about, especially if you are involved with an organisation that partakes in activities that are ethically questionable.

Something about titling the blog "Why I Quit My Job at Bugcrowd" might have you thinking that I'm about to explode into a dramatic display of anger and resentment towards Bugcrowd, scaaaalding them with mighty words. In fact, I absolutely loved working there. I'd recommend it to anyone. It's a great organisation. My pay was great, the people were great and I got to work on a lot of purposeful projects.
This isn't so much a blog about why I left Bugcrowd as it is about leaving a job in general. Many would say I am crazy for leaving. Maybe I am! At this stage, I'm not even sure I have made the right decision myself. In this blog I want to explain why I left, and what I'm doing next.

Reasons for leaving

For me, true wealth is the ability to earn enough money to live comfortably without having to work. I don't want to achieve this when I'm 65, I want to achieve it as soon as possible. Why do I want wealth? Sometime around late 2019, my wife and I were looking to buy our first home. It quickly became apparent that we would not be able to afford the house of our dreams. At this point I realised that I needed to start paying more attention to money. I'd been working for a decade – why couldn't I buy the house that I wanted? How can one obtain wealth? I started reading books about how to become wealthy. I devoured all the classics: "Rich Dad Poor Dad", "Think and Grow Rich", "Secrets of the Millionaire Mind", etc. They all basically say the same thing: don't trade time for money. Robert Kiyosaki puts it well in his books; he segments income types into four different categories that he calls the "cashflow quadrant". The four main types of income are:

- Employee – you are employed by someone else and paid for your time.
- Self employed – you are employed by yourself, but still paid for your time.
- Business Owner – you own a system that makes you money.
- Investor – your money makes you money.

In order to be "wealthy", Robert Kiyosaki says that you should prioritise earning money from income streams as far down that list as possible. Notice that the further down the list you go, the more scalable the income streams become and the more opportunity you have to free up your time. At Bugcrowd, I was 100% employee. My plan is to get further down that list by starting a business. I found it hard to do this when I spent the better part of my time and brainpower working as an employee.
Now I'll be refocusing all of that brainpower and time into generating income as a business owner. Most money that I earn above my living expenses will be invested. The other reason that I quit my job is for personal freedom.

What is freedom? 🤷♀️

Personal freedom comes in many forms. To name a few:

- Freedom of time
- Freedom of location
- Freedom of expression
- Freedom of choice

The disconnect between freedom and employment 💔

No matter how good the culture at your company is or how much you love your job, you will still be required to forgo some amount of freedom when you are an employee. That's why you get paid. You must work during specific times (sacrifice time freedom). You can't say anything on social media that would negatively affect your employer (freedom of expression). If your boss asks you to do something, you have to do it (freedom of choice). If an employer decides that they don't want to pay you money anymore, they can sack you (financial freedom). This isn't a dig at any company, it's just how employment works. Employees get paid to forgo their freedom. This thought has been eating away at me for a long time, and it has contributed greatly to my decision to take this risk. I am trying to reconfigure my life to look more like this:

- I decide when I work.
- I decide how hard I work.
- I decide what I work on.
- I decide where I work from.
- I decide who I work with.
- I express myself freely.

What am I doing next? 🚀

There are a few ways that I'm planning to make money.

Starting My Own Cybersecurity Consultancy 👨💼

I've started my own cybersecurity consultancy, Haksec. This is my first public mention of it! Haksec provides virtual CISO (vCISO) and penetration testing services. I want to focus more on the vCISO side of things, because my experience as a penetration tester has taught me that a lot of businesses need general guidance more than a pentest. If you know anyone who may be interested, please send them my way, it would mean the world.
Bug Bounties 👾

I cut back on bug bounty hunting a lot after I started at Bugcrowd back in March 2020. I just haven't felt overly motivated to do it, because after a full day working full-time at Bugcrowd I was all bugged out. I am really looking forward to having more time to sink into this again – I can feel my motivation bubbling back already, and I've landed a few good bugs in the last couple of weeks!

Content Creation 👨🎨

I'm going to be creating a lot more content. Firstly, I'll be creating content on my personal channels (YouTube, Twitter, Instagram, TikTok and my blog). I will be fully transparent about my bug bounty hunting journey, including what bugs I find and how much I'm earning. I also want to make general life videos. I will also be creating cybersecurity-related content on behalf of other organisations. I've already started doing a bit more of this. If you want any type of cybersecurity-related content created for you, feel free to get in touch.

I am scared 😬

This is one of the biggest decisions I've ever made, and it's a huge risk. Even more so with a family to provide for. The truth is, I don't know if it will work out, and if it doesn't, I hope that I will come back to the workforce in 6+ months with a whole new appreciation for the safety and security of employment. If you'd like to support me on my journey, there are a bunch of things you can do: Refer people to Haksec if they are looking for cybersecurity services or advice.

Yes, I made a logo for my tool. It's a wolf with a moon on its head. It has nothing to do with the tool, but if you like wolves then you will probably enjoy it. I am quite talented at graphic design: I changed the text to "haktrails" all by myself. The wolf bit was a free Canva template.

Quick Ad Break

Full disclosure – SecurityTrails has sponsored me to write this tool and create some content because they're running Bug Bounty Hunting Month.
As part of that, they've released a plan that is catered directly to bug bounty hunters. If you're a bug bounty hunter, you should buy this. I know it doesn't quite mean as much when I'm being sponsored, but I would legitimately recommend this product even if I wasn't. They're offering the plan for $50 per month. If you sign up after April 15th, you'll be paying double that. I've used the features included in this plan for ages, but I paid a lot more for them! If you actively use it, even at $99 per month, the ROI is insanely good, and now you'll have the perfect companion tool to make full use of it! Click here to check out the details. Okay, I'll stop harassing you now. Building a huge distributed recon system is great and all, but at some point it becomes more cost/time effective to just pay for access to recon data that someone else has gathered. Working with APIs can be a bit awkward though. Wouldn't it be lovely if there was a nifty little tool that did all of the API calls for you, and integrated nicely with your existing tools? 🤔 Yes. Yes it would! That's exactly what haktrails does:

- Stdin input for easy tool chaining
- "JSON" or "list" output options for easy tool chaining
- Associated root domain discovery
- Associated IP discovery
- Historical DNS data
- Historical whois data
- Company discovery (discover the owner of a domain)
- Whois (returns JSON whois data for a given domain)
- Ping (check that your current SecurityTrails configuration/key is working)
- Usage (check your current SecurityTrails usage)

How to Use It

Setting Up the Config File

Before you do anything, you need to create a config file. The default location for the config file is ~/.config/haktools/haktrails-config.yml. The config file should look like this:

key: <your api key>

You are all hackers so I know I don't need to say this, but make sure you replace "<your api key>" with your actual SecurityTrails API key.
Installing the Tool

First, install golang on your computer, then run the following command:

go get github.com/hakluke/haktrails

You should now have the haktrails binary at ~/go/bin/haktrails. If you haven't already, I'd recommend adding ~/go/bin/ to your $PATH so that you can just type haktrails instead of ~/go/bin/haktrails.

Using the Tool

Note: In these examples, domains.txt is a list of root domains that you wish to gather data on. For example:

The output type can be specified with -o json or -o list. List is the default. List is only compatible with subdomains, associated domains and associated IPs; all the other endpoints will return JSON regardless. The number of threads can be set using -t <number>. This will determine how many domains can be processed at the same time. It's worth noting that the API has rate-limiting, so setting a really high thread count here will actually slow you down. The config file location can be set with -c <file path>. The default location is ~/.config/haktools/haktrails-config.yml. A sample config file can be seen below. The lookup type for historical DNS lookups can be set with -type <type>; available options are a, aaaa, mx, txt, ns and soa. Warning: with this tool, it's very easy to burn through a lot of API credits. For example, if you have 10,000 domains in domains.txt, running cat domains.txt | haktrails subdomains will use 10,000 credits. It's also worth noting that some functions (such as associated domains) will use multiple API requests; for example, echo "yahoo.com" | haktrails associateddomains would use about 20 API requests, because the data is paginated and yahoo.com has a lot of associated domains. This will gather all subdomains of all the domains listed within domains.txt:
cat domains.txt | haktrails subdomains

Of course, a single domain can also be specified like this:

echo "yahoo.com" | haktrails subdomains

Gathering associated domains

"Associated domains" is a loose term, but it generally just means domains that are owned by the same company. This will gather all associated domains for every domain in domains.txt:

cat domains.txt | haktrails associateddomains

Gathering associated IPs

Again, associated IPs is a loose term, but it generally refers to IP addresses that are owned by the same organisation.

cat domains.txt | haktrails associatedips

Getting historical DNS data

Returns historical DNS data for a domain.

cat domains.txt | haktrails historicaldns

Getting historical whois data

Returns historical whois data for a domain.

cat domains.txt | haktrails historicalwhois

Getting company details

Returns the company that is associated with the provided domain(s).

cat domains.txt | haktrails company

Getting domain details

Returns all details of a domain including DNS records, Alexa ranking and last seen time.

cat domains.txt | haktrails details

Getting whois data

Returns whois data in JSON format.

cat domains.txt | haktrails whois

Getting domain tags

Returns "tags" of a specific domain.

cat domains.txt | haktrails tags

Getting API Usage Data

Returns data about API usage on your SecurityTrails account.

Checking Your API Key

Pings SecurityTrails to check if your API key is working properly.

Showing Some Average ASCII Art

~$ haktrails banner
 _       _   _           _ _
| |_ ___| |_| |_ ___ ___|_| |___
|   | .'| '_|  _|  _| .'| | |_ -|
|_|_|__,|_,_|_| |_| |__,|_|_|___|

Made with <3 by hakluke
Sponsored by SecurityTrails

Every time I watch space documentaries or look up at the stars at night, or think about things on a universal scale, my troubles melt away. Perspective is a very powerful tool for overcoming the stresses of everyday life. In this video, I aim to put everything into perspective by pondering the scale of the universe, and the stuff you're made of.
@stokfredrik (STÖK) is an inspirational, motivational hacker, bug bounty hunter, entrepreneur, vegan and content creator. In this interview we chat about mental health, hacking, content creation, sunglasses, haircare, COVID-19, veganism and entrepreneurship!
US scientists test mobile sign language technology

The MobileASL tool compresses video messages without affecting video clarity around the face and hands, to make low-bandwidth video calls for the deaf a reality.

Researchers at the University of Washington (UW) in the US are testing technology that could allow people with hearing difficulties to communicate using sign language over compressed video calls on their mobile phones. While video conferencing is offered by several current smartphones, such as the iPhone 4 through the FaceTime function, the bandwidth and data required make it very much a premium feature, with several US mobile networks having already barred video conferencing from all but the priciest tariffs. However, scientists at the university have developed MobileASL, a tool which compresses American Sign Language video messages sufficiently to be sent over standard 3G networks while still maintaining optimised video clarity around the face and hands. According to the researchers, the overall data rate has been reduced to 30Kbps without the clarity of the sign language being compromised. The tool is currently being tested by a group of 11 deaf and hard-of-hearing students at UW. "This is the first study of how deaf people in the United States use mobile video phones," said project leader Eve Riskin, a UW professor of electrical engineering. In addition to the compression technology, MobileASL is also able to detect when either party is actively signing, switching off the battery-heavy broadcast mode when not in use. While the study's primary aim is to test the technology's practical value to the deaf and hard-of-hearing communities, the scientists behind it haven't ruled out other applications, particularly given the technology's ability to make video calling a possibility on slower networks or devices. "We know these phones work in a lab setting, but conditions are different in people's everyday lives," Riskin added.
"The field study is an important step toward putting this technology into practice."
In a digital world where personal information is so easily collected and may easily be used for malicious intent, concerns about privacy and the protection of personal information have become an important topic. Globally, governments have implemented or are implementing privacy laws to reflect today's realities. The European Union's General Data Protection Regulation (GDPR), which regulates personally identifiable data, was one of the first modernized privacy legislations to be adopted. Other governments have followed suit or have tabled reforms to current privacy laws that they are looking to adopt. In Canada, on the federal level, there is the Personal Information Protection and Electronic Documents Act (PIPEDA). As recently as 2020, legislation was tabled to maintain, modernize, and extend existing rules and to impose new rules on private sector organizations for the protection of personal information. Ultimately, Bill C-11 did not become law, but the government is working on tabling a new bill that would reform the current protection of personal information legislation. BC is looking at reforming its Personal Information Protection Act (PIPA), and Ontario has tabled a reform document as well. Most recently, Quebec passed into law Bill 64 which, experts say, will be a template for the Canadian government as well as other provincial governments for reforming protection of personal information legislation. A few of the areas of focus of the privacy reforms include:

- Privacy by Default

An approach to systems development that requires data protection to be taken into account throughout the system development process when collecting, retaining, using, accessing, sharing or otherwise managing a person's personal information. This includes deactivating profiling, tracking or identification technology, and giving individuals the opportunity to expressly opt for such features in accordance with their preferences.
- Greater control of personal information

The person whose personal information is being requested has more control over the information they provide. This includes:

- Transparency and accountability when it comes to consent, use, access, retention, and with whom and when it may be shared
- The right to de-indexation (requesting that personal information ceases to be disseminated)
- The anonymization (so that a person cannot be identified) of personal information once the purpose for which it was collected has been achieved
- Data portability, which gives a person the right to access their personal information as well as the right to ask that the information be communicated or transferred to themselves or a third party
- Development, implementation and publication of detailed privacy policies and practices by businesses and organizations
- Reporting and notification of breaches affecting personal information
- Stringent enforcement mechanisms and heavy administrative penalties and fines by the privacy commission of the jurisdiction for violations and offences, in addition to the ability for private action against violators

What does this mean for businesses?

The reforms that have been tabled or already enacted by various government bodies in Canada and globally include more stringent requirements and enforcement mechanisms. Additionally, penalties and fines for non-compliance are much more impactful. If not already done, organizations will have to develop, implement, communicate internally, and make public their privacy policies and practices. In order to do this, they will need to have someone within the organization responsible for privacy. That person will need to understand: how they collect personal information; what information they are collecting; the purpose of the information; who will have access; how, when, where and with whom it will be shared; the interaction between the data and different applications and tools; how and where it will be stored; the retention period; how the data will be protected; what the breach protocols will be; and what the incident response plan will be. IT Service Providers, like MicroAge, can help with the security, storage, backup, and recovery of the data. To understand the impact of the privacy laws on your organization, we recommend engaging a legal expert with specific expertise in privacy laws.
Number_of_Characters
The number of characters to read.

Returns a string containing the data read from the file.

Read bytes from file into string. The .READ() method retrieves a specified Number_of_Characters from a file, starting at the current file offset. The .READ() method returns a string of the input characters that includes any CR and LF codes.

This script reads 4 groups of 4 characters each from the defaults file:

dim data[4] as C
file_pointer = file.open("c:\a5\defaults.txt", FILE_RW_SHARED)
for i = 1 to 4
    data[i] = file_pointer.read(4)
    trace.writeln(data[i])
next i
file_pointer.close()
DNS is a widely used phonebook system on the Internet. At its simplest, it is used to query the IP address associated with a human-readable, memorable name. But it is a lot more than that, as this article explains. If you have not yet read our previous article, do so here: DNS and its Security Implications. In this blog we will talk about DNS from an email security perspective.

Examining email domains

An email address is something we are all familiar with, the most notable part being the @ character. The part after the @ is the domain name, and we IT/security professionals can easily examine that domain:

$ dig -t mx gmail.com

;; QUESTION SECTION:
;gmail.com.			IN	MX

;; ANSWER SECTION:
gmail.com.	3454	IN	MX	20 alt2.gmail-smtp-in.l.google.com.
gmail.com.	3454	IN	MX	5 gmail-smtp-in.l.google.com.
gmail.com.	3454	IN	MX	40 alt4.gmail-smtp-in.l.google.com.
gmail.com.	3454	IN	MX	10 alt1.gmail-smtp-in.l.google.com.
gmail.com.	3454	IN	MX	30 alt3.gmail-smtp-in.l.google.com.

;; Query time: 35 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon Sep 14 17:33:47 IST 2020
;; MSG SIZE  rcvd: 161

Here we can see that the answer section contains the domain names of the mail servers, or MTA hosts, that can be used for sending emails to a Gmail address. The numbers 20, 5, 40, etc. against the answers are called weights or priorities; the lower the number, the higher the priority. Most common mail domains don't have as many answer lines as you see above. These are called MX records, as they contain mail exchanger information. The good news is that most emails today are sent along encrypted channels, although the emails themselves are not encrypted or protected. What this means is that the email body is something the email servers along the way can read; only those who use GnuPG or S/MIME have messages that remain undecipherable end to end. Making emails travel across an encrypted tunnel does not put an end to email-related attacks, though. But we started off talking about DNS, didn't we?
This article, however, focuses purely on the email-related aspects of DNS-based security systems. What are they?

DNS and email security

First of all, as you can see from the above, email uses DNS at two levels: once for the domain in the email address itself, and again for the MX record. Beyond that, there are several more places where DNS comes into play. The IP address blocks allowed to send mail for a domain are often published in SPF (Sender Policy Framework) records, which are stored as DNS TXT records. This is one method of ensuring that emails are not fraudulently sent by unauthorized mail servers. But that alone is not enough: over time many more problems were found, and SPF could not fix them all. DKIM and DMARC come to mind. DKIM is DomainKeys Identified Mail; DMARC is Domain-based Message Authentication, Reporting and Conformance. Both have "domain" in their names, so they are clearly DNS-based measures.

Although the Internet works with domain names at the human level, internally only IP addresses are used to send packets, and all traffic, whether voice, email, or video, travels as IP packets between IP addresses. So how does DNS help? Like this: every email travels between two MTA machines, or mail servers, and each mail server has its own domain name, which, as we saw at the beginning, can differ from the domain in the email address. Using cryptographic primitives such as shared secrets and public/private key pairs, senders sign emails and publish verification keys in DNS records so that others can verify who originated the signature. This protects against tampering with the data in transit and against forged emails. There are also DNSSEC and DANE, DNS-based Authentication of Named Entities. The popular email systems of the world support most of the measures mentioned above. Spam filtering and phishing protection are additional measures employed by email security folks.
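Like the MX data, the SPF, DKIM, and DMARC policies described above live in ordinary DNS records, mostly TXT records, so the same dig tool can inspect them. In the sketch below the live lookups are shown as comments, since they need network access, and the SPF record value is a typical sample for illustration only, not any provider's actual policy:

```shell
# Live lookups (network access required):
#   dig +short -t txt gmail.com          # SPF lives at the domain apex
#   dig +short -t txt _dmarc.gmail.com   # DMARC lives at the _dmarc label
#   dig +short -t txt <selector>._domainkey.<domain>   # DKIM public key

# A typical SPF record value (sample for illustration):
spf="v=spf1 include:_spf.google.com ~all"

# Split it into its mechanisms: the version tag, an include that
# delegates to another domain's sender list, and a soft-fail catch-all.
echo "$spf" | tr ' ' '\n'
```

A receiving mail server evaluates these mechanisms left to right to decide whether the connecting IP is an authorized sender for the domain.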
The DNS system as a whole works through what are known as authoritative and recursive name servers, and at the top of the hierarchy sit the 13 root server identities, which are huge, complex, busy systems. Registrars allot domain names, many records are associated with each domain name, and each domain can have aliases and subdomains. In a trusted environment, both email and DNS would be secure as they are. But on today's Internet, where enormous sums of money change hands, the security of email and DNS is both relevant and crucial, and as we saw in this article, the two interact to a great extent. In a future blog, we shall explore the specifics of the above measures to understand more about how Internet security works in the popular, commonplace software we often take for granted. We shall cover more as time goes on. Did you enjoy this content? Follow our LinkedIn page!
Once data has been put on an IP network, disparate types of devices -- such as siloed first responder radios -- can be linked almost anywhere in the world. The need for interoperable communications among first responders has been recognized for years and was highlighted by the events of Sept. 11, 2001, when lives were lost because of the inability of police, fire and rescue personnel to talk with each other during the terrorist attacks in New York. Enabling communications between agencies is a challenge, however, because most are using homegrown systems operating in different frequency bands and using different protocols. Such separate systems have the advantage of providing privacy and protecting against cross-channel interference, but the drawbacks of such siloed systems now are seen as outweighing the advantages. Efforts are under way to create a unified system to allow better communications, including Project 25, an effort to develop a standardized digital radio protocol for law enforcement and emergency services in North America, and plans for a national wireless network operating in a dedicated radio band. Project 25 has dragged along for 23 years, however, with standards slow to emerge, and the complexity and expense of creating and operating a national network also hindering progress. Moreover, both programs would require the retooling of radio systems now in use by many departments, an expense many could not immediately bear. One solution is to use Internet Protocol as a bridge between different types of systems. Once data is converted to IP packets, multiple types of traffic can be handled more or less the same and delivered to almost any destination. This enables not only interoperability between different types of radio and telephone systems, but it allows the exchange of video, images and other data types as well. 
One such solution, available on GSA Schedule contracts, is from Mutualink, which provides peer-to-peer connectivity using IP gateways on traditional radio systems. Mutualink is used by crews in New Jersey's populous Bergen County and was employed during Hurricane Irene and Superstorm Sandy. Once the incoming radio signal is converted to IP, it can be routed to other gateways, where the packets are converted back to the proper radio or telephony format. Mutualink offers a virtual private network interconnect service to a Mutualink point of presence to transport the traffic, but most customers use a commercial Internet provider, company CTO Joe Boucher said. The gateway usually is placed behind an Internet router to create a "walled garden" for the traffic. Connections to the gateway can be made by the end user through a dedicated radio channel, such as a pre-defined mutual-aid channel used by participating agencies. In most cases the connection is handled by an administrator or a dispatcher, who uses a terminal to link channels upon request through an "incident box" established for specific events. Agencies with gateways participating in that incident are able to create links with each other on the fly, without prior coordination. Cryptographic keys are generated when a link is established, and traffic between gateways is protected using 256-bit AES encryption. One of the advantages of IP is the ability to share video, images and data between agencies as well as voice. "We have recently seen more growth in the multimedia space," Boucher said, growth fueled by the use of smartphones in the field.
'Red Teaming' was originally a military concept, first used to challenge the biases and prejudices inherent in formulating defense security processes. Military services started hiring independent personnel skilled in critical thinking to call out blatant flaws in these processes that otherwise appeared natural to the internal staff. This model was then replicated in government and political houses to highlight flaws in supposedly fine processes. Soon, red teaming found its way into various other domains: IT and computing, cybersecurity, control systems, physical security, counter-intelligence, and so on. Stakeholders were keen to test their systems from an outsider's standpoint, and in other cases they were required to do so by various compliance mandates. Today, we will discuss what red teaming means in the cybersecurity context, how it is performed, how it differs from penetration testing, and which one to choose for your organization.

What is red teaming in cybersecurity?

In the context of cybersecurity, red teaming refers to the process of assessing and exploiting a target system the way a hacker would. Red teaming appears to have similar principles and objectives as penetration testing, though I assure you they are not the same. I'll explain this in a bit.

Red Teaming vs Pentesting

Although similar in theory, red teaming is, compared to pentesting, a more complex testing exercise that takes more time and resources and gives a thorough picture not only of the vulnerabilities in a security system but also of the security measures and responses deployed against them. The primary goal of red teaming is to test an organization's assets for vulnerabilities and loopholes. It also tests the prevention measures, the incident response mechanism, and the inbuilt security biases and group-think in security processes. A red team typically works in stealth against the blue team, which is responsible for upholding the security of the organization.
You can think of it as a cop-and-thief situation, where the red team plays the thief and the blue team plays the cop. Needless to say, the red team wants to steal without getting noticed by the cop, here the blue team. The job of the blue team is to save the organization from any act of security trespassing by detecting and thwarting it in time.

Difference between red teaming & penetration testing

As we said, red teaming and penetration testing are not equal. There are some fundamental differences between the two processes, which is why they carry different names. Let us look at some of the important differences:

| Red Teaming | Penetration Testing |
| --- | --- |
| An adversary-based assessment of defense capabilities. | A methodology-based assessment of the system and network. |
| Involves critical thinking and challenges security biases. | Highlights hidden vulnerabilities in the system; it does not deal with the biases with which the system was constructed. |
| A secret process, done to identify weak points in assets, people, and protocols. | Almost all concerned members (including the blue team) are informed about the test beforehand. |
| Unique to the organization; one model does not fit all. | The methodology can be replicated across applications and networks. |
| Usually involves going against the norm; the scope is holistic, covering processes, people, protocols, and systems. | Does not look beyond the scope of assessing a system. |
| Explores alternatives in plans, operations, concepts, organizations, and capabilities. | Does not offer an alternative perspective on constructing the security framework; it is limited to providing fixes for vulnerabilities. |
| Costs more. | Cost-effective. |

Red Teaming or Penetration Testing: Which should you conduct?

We have covered the differences between red teaming and penetration testing. Now, which should you choose? In our experience, it is mostly bigger, more complex organizations that undergo a red team test, while most small and medium-sized organizations go with penetration testing. Here is how to decide whether you need red teaming:

- Check if you have complicated systems & processes in your organization
- Check if a security incident would have a massive impact on your organization
- Check if you have a well-defined environment for all operations

If you answered yes to all of these, you should definitely go for red teaming.

Red team penetration testing methodology & tools

Although a red team test is tailored to each organization, the methodology generally includes:

- System modeling
- Attack tree development
- Planned deception
- Authorized espionage
- Vulnerability assessments

Common tools used during red teaming include:

- OSINT framework
- Phishery, and other tools as mentioned here.

Red teaming & penetration testing with Astra Security

Astra's offering comes with:

- A completely managed vulnerability dashboard
- An automated scanner (detects over 2500 vulnerabilities; scans remotely as well as behind logins)
- Steps-to-reproduce for each vulnerability (including PoCs, selenium scripts, etc.)
- Detailed steps-to-fix, and expert assistance
- The monetary loss value associated with a vulnerability
- An intelligently calculated risk score for each vulnerability
- A grading system to rank the security of your assets
- A publicly verifiable certificate

We also help you with red teaming engagements via our vetted security partners. Hidden biases and group-think can sabotage the defense mechanism in giant organizations. It is extremely important to pinpoint those biases and the vulnerabilities they cause.
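Of the methodology steps listed above, attack tree development lends itself to a quick sketch. The toy tree below is purely illustrative (the capability names are invented): each node is either an OR branch, where any child path suffices, or an AND branch, where all children are needed, and we test whether the goal is reachable from a given set of attacker capabilities:

```python
# A toy attack tree: each internal node is ("OR", [...]) or ("AND", [...]);
# leaves are capability names the attacker may or may not have.
TREE = ("OR", [
    ("AND", ["phish_credentials", "bypass_mfa"]),   # path 1: account takeover
    ("AND", ["exploit_vpn_cve", "lateral_move"]),   # path 2: perimeter breach
])

def reachable(node, capabilities):
    """Return True if the (sub)goal at `node` is achievable."""
    if isinstance(node, str):              # leaf: a single capability
        return node in capabilities
    op, children = node
    results = (reachable(c, capabilities) for c in children)
    return any(results) if op == "OR" else all(results)

print(reachable(TREE, {"phish_credentials", "bypass_mfa"}))  # True
print(reachable(TREE, {"exploit_vpn_cve"}))                  # False
```

In practice, red teams annotate such trees with costs, difficulty, or detection likelihood, but the reachability logic stays the same.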
Red teaming critically analyzes the security of an organization to suggest alternative ways to build security frameworks. While red teaming and penetration testing appear similar in concept, they are poles apart in practice. Your organization will need red teaming if it has more complex and well-defined processes. If not, penetration testing is an equally good option for you. If you're looking for red teaming or penetration testing services, we can surely help you 🙂

1. Is red teaming the same as penetration testing?

No, red teaming and penetration testing are different procedures with some overlapping features.

2. What is red teaming and penetration testing?

Penetration testing refers to the process of evaluating a system's security posture by finding and exploiting the vulnerabilities present in that system. In red teaming, a group of security experts tries to break into a system using hacker-style methodologies.
Web design and development are job roles often seen and spoken about; however, there can be confusion about the distinction between the two, particularly when considering a career in either field. In order to clear up the confusion, we need to explore the responsibilities of each role. Web design is a career that allows the professional to show their creative side, even though there are limitations set by their clients' needs and wants. Many web designers have the skills needed to design sites competently and to the specifications required by their organisation. Due to the very nature and expanse of the Internet, it is impossible to give credit to a single individual or organisation for its creation. It is the amalgamation of almost a century of ideas, theory, engineering and development. In the early 1900s, Nikola Tesla first considered the notion of a "world wireless system", which can be seen as the first conceptual idea of the Internet. There is a lot of effort involved in becoming a web designer, the most important part of which is creating a portfolio that displays your knowledge and skills, ensuring that you shine as a web designer and have the best possible chance of being hired. But there are many other factors involved when looking to start a new job in the web design field. To compete successfully, businesses require the services of web design specialists. One benefit of pursuing a career in web design is its versatility: you will have the opportunity to take on an array of job positions in the IT field. Web design is perceived as mostly being about building websites, but web design professionals have numerous skills and knowledge that make it possible for them to fit into other IT-related jobs.
Become a web designer and you will be in high demand, given that companies today need a noticeable online presence. Having a well-organised, easy-to-use website can make a huge difference in the amount of time visitors spend on a site, while cluttered and vague websites can be an instant turnoff for potential clients. In order to become a website designer, it is beneficial to have a set of skills and personal attributes, along with the relevant qualifications and business flair, which will assist you in manipulating the visual elements of a website whilst ensuring it is user friendly.
Organizations using SAP as their business application or ERP system often store their most critical assets, including intellectual property, within SAP. This data must be protected against unauthorized access originating from both outside and within the organization, so SAP systems require extensive protection and security monitoring.

What is SAP Security?

SAP (Systems, Applications and Products) security is a means to protect your company's data and systems by monitoring and controlling access both internally and externally. SAP systems are a type of ERP software used widely by all kinds of businesses across a variety of industries. There are various aspects to SAP security, such as infrastructure security, network security, operating system security, and database security. Another layer involves secure code, which includes maintaining SAP code and security in custom code. A secure setup of SAP servers is essential to keep your business's private information safe and out of the hands of cyber attackers. It covers the secure configuration of a server, enablement of security logging, security of system communication, and data security. Users and authorizations must also be critically monitored and tracked.

Elements of SAP Security

Given the complicated and interconnected nature of SAP systems, there is a lot that goes into maintaining their security. Here is an overview of the different aspects involved:

- Infrastructure security
- Network security
- Operating system security
- Database security
- Secure code
- Configuration of a server
- Enablement of security logging
- System communication

When these are handled effectively, it becomes much easier to maintain system compliance with the help of continuous monitoring, audits, and the establishment of emergency concepts.

What is SAP Security used for and why is it important?

SAP security is often siloed or a blind spot within the centralized cybersecurity monitoring of a business.
And with 66% of business executives feeling that cyberattacks are increasing in frequency around the world, it is a serious concern. As a countermeasure to these attacks, SAP security is designed to help protect the business-critical systems that organizations rely on to run their operations effectively.

The Most Common Uses of SAP Security Are:

- Avoiding exploitation and fraud
- Ensuring data integrity
- Identifying unauthorized access
- Continuous and automated audits
- Detecting data leaks
- Centralizing security monitoring

An attack on SAP systems can have a devastating impact on the operations of the business, leading to financial losses, supply-chain issues, and long-term reputation damage. To prevent that kind of headache, these systems need to be protected against internal and external cyber threats, so that your company can continue to maintain confidentiality, availability, and integrity. Despite this, many organizations keep them out of scope for security teams or rely on the ERP vendor's tools alone. As you might expect, this dramatically increases the risk of attacks and makes ERP systems, such as SAP, a prime target for adversaries.

How does SAP Security work?

Because SAP systems connect different departments and programs together to help you run your business smoothly, they are incredibly complicated, and since they are so complex and unique by nature, it is harder to develop proper cybersecurity measures for them. And with cyberattackers attempting to attack systems every 39 seconds, according to a study from the University of Maryland, protecting them is vital. Within SAP security, there are several steps you can take to prevent attacks.

Roles and Authorizations

First, your SAP systems deliver the necessary authorizations as standard. Customer-specific authorization concepts are set up in SAP, allowing essential permissions to be assigned. The assignment of authorization combinations (Segregation of Duties, SoD) is critical.
The assignment of critical combinations of authorizations should be avoided, and used only in exceptional cases, such as with so-called firefighter accounts. A further complication in SAP security is that authorizations and roles can be manipulated in SAP by SAP standard means. Therefore, examining assigned authorizations and authorization combinations is crucial, and it presents companies with significant challenges. It is also crucial to conduct continuous, automated reviews of SAP authorizations. You can easily do these checks using a test catalog; creating one from scratch requires effort, and it is relevant not only for authorizations in the SAP Basis area but also for business processes. If four- or six-eye principles are undermined by the assignment of critical permissions or combinations of permissions, there is a risk of exploitation or fraud. SoD checks are ideally carried out not only per SAP role but per user, since a user may violate a so-called SoD conflict through the assignment of several roles. In addition to evaluating users, you should know which roles, in combination, ultimately trigger the conflict. The SAP transaction SUIM and its API allow checks for combinations of critical authorizations.

SAP is increasingly affected by security breaches, and threats currently dealt with in traditional cybersecurity are valid for SAP systems too. SAP continuously publishes so-called SAP Security Notes; the challenge for organizations is to keep their SAP systems up to date and apply the patches continuously. Unfortunately, that is not always possible, so many SAP systems remain unpatched for a long time and end up with serious security gaps. To make matters worse, with the release of new patches, information is released about where the vulnerabilities are and how they can be exploited. So not only is patching essential, but also the detection of exploited vulnerabilities, so-called zero-day exploits.
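The user-level SoD check described above (evaluating the combined authorizations of all roles assigned to a user, rather than each role in isolation) can be sketched in a few lines. This is an illustrative sketch only: the role and authorization names are invented, and in a real system the data would come from SAP itself, for example via transaction SUIM or its API, not from hard-coded dictionaries:

```python
# Hypothetical role -> authorization mapping, as it might be exported
# from an SAP system; all names are invented for illustration.
ROLE_AUTHS = {
    "Z_CREATE_VENDOR": {"F_LFA1_APP"},   # maintain vendor master data
    "Z_POST_PAYMENT":  {"F_REGU_BUK"},   # run the payment program
    "Z_DISPLAY_ONLY":  {"S_TCODE_DISP"},
}

# Pairs of authorization sets that violate segregation of duties when
# held by the same user (e.g. creating a vendor AND paying it).
SOD_CONFLICTS = [({"F_LFA1_APP"}, {"F_REGU_BUK"})]

def sod_violations(user_roles):
    """Return the conflicting authorization pairs a user holds,
    considering ALL roles assigned to the user together."""
    held = set()
    for role in user_roles:
        held |= ROLE_AUTHS.get(role, set())
    return [(a, b) for a, b in SOD_CONFLICTS
            if a <= held and b <= held]

# Each role is harmless on its own, but conflicting in combination:
print(sod_violations(["Z_CREATE_VENDOR", "Z_POST_PAYMENT"]))  # one conflict
print(sod_violations(["Z_DISPLAY_ONLY"]))                     # []
```

The point of the sketch is the aggregation step: a per-role review would report no conflict here, while the per-user view exposes one.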
SAP also offers a large number of critical transactions and function modules, some of which are even available remotely. That means it is possible to create accounts via the SAP system's API, equip them with authorizations, and then use them remotely. Other building blocks and function modules can then load or manipulate data from the SAP system. Once again, the assignment of authorizations plays a role here, as it restricts the use of these transactions. It is therefore vital to monitor the execution of transactions, RFC modules, and SAP reports continuously and in real time. Access to SAP systems from outside via the interfaces of an SAP system, for example the RFC interface, needs to be monitored too.

SAP Code Security

Next up is code security, an essential part of your SAP security. In SAP systems, it is often left to the developers to ensure the security of the ABAP code. Code is bundled into transports and moved from the development systems to the production systems, but often without sufficient examination of the code. Worse yet, SAP offers attackers options for code injection, as code can even be generated and executed at runtime. The manipulation of important and urgent transports is just one way of moving malicious programs into an SAP system completely undetected. Luckily, SAP provides a code inspector, with modules like the Code Vulnerability Analyzer, to check the code.

Your system settings are the basis of SAP security, and there are numerous settings options in SAP systems. Settings are made at the database level, by SAP transactions, or via so-called SAP profile parameters, which are stored in files. The rollout of an SAP system must comply with a set of rules for system settings, which can be found in an SAP Basis operating manual. This determines how the security settings are assigned in an SAP system, how access is granted or denied, and which communication of an SAP system is allowed.
The operating system, database, and application layers are all relevant here, and each of these layers requires proper configuration of its security settings. Unfortunately, the defaults are often insufficient in a standard SAP system; for instance, in many companies only 5% of folders are properly protected. The RFC Gateway can be described as the SAP-internal firewall and needs to be configured precisely (via the reginfo and secinfo rules) to avoid unauthorized remote access from systems and applications. SAP best-practice guidelines, or guidelines from SAP user groups such as the DSAG, contain practice-tested, security-oriented settings and test catalogs.

SAP security and Read Access Logs

SAP security also covers a row of security logs, which need to be switched on and then monitored. The most critical is the SAP Security Audit Log (transaction SM20), which contains a set of security- and audit-relevant events. Change logs of database tables (SCU3) are available, as are the so-called change documents of users and business objects (SCDO). There are also the logs of the RFC Gateway (SMGW), of the SAP Internet Communication Manager, and of the Web Dispatcher. The SAP Read Access Log stores read and write access to specific fields of transactions, reports, or programs, thereby providing an essential component for meeting the obligations under the EU Data Protection Regulation (GDPR or DS-GVO): the logging of access to personal data. The configuration of the SAP Read Access Log and its evaluation is an essential element of SAP security monitoring, not least in times of GDPR. With this log's help, access in SAP can be monitored, extracted, centrally collected, and, at best, automatically checked with appropriate rules. The SAP Read Access Log is maintained via the transaction SRALMANAGER.

SAP Security Best Practices

With so much at risk and so much to organize, it can be overwhelming to get a plan in motion.
So, here is a quick and easy checklist to help you get started if you are looking to improve your SAP security. To keep your data safe, you need to conduct a number of different assessments:

- Internal assessment of access control
- Change & transport procedure assessment
- Network settings & landscape architecture assessment
- OS security assessment
- DBMS security assessment
- SAP NetWeaver security assessment
- Assessment of various SAP components (like SAP Gateway, SAP Message Server, SAP Portal, SAP Router, SAP GUI)
- Assessment of compliance with SAP, ISACA, DSAG, and OWASP standards

After doing these assessments, there are still some other steps you will need to take, but with a plan in place you will be far ahead of most companies, and of cyberattackers. Here is an easy four-step process to get you started and monitor your SAP security:

- Align Your Settings: Make sure your settings are set up to align with your organizational structure. You should also educate your teams and double-check that all security measures in place are being followed.
- Create Emergency Procedures: In the event of an emergency, you should have a plan in place to address it quickly and effectively. For one, you should be sure your network administrators can easily revoke access and privileges as needed.
- Conduct Housekeeping and Review: You should always be monitoring your SAP systems. Also make sure the list of permissions is updated regularly, especially when you have new hires or staff change roles.
- Use Security Tools: Lastly, it is crucial to have the right security tools in place to keep tabs on what is happening and catch any suspicious activity. That way, you can more easily prevent a cyberattack or data breach.

SAP Security Solutions and Tools

Looking for the right SAP security software? It is hard to know where to look and whom to trust, especially with something so important.
While the vendor does technically provide an SAP security solution, it often fails to integrate with the rest of the organization's cybersecurity monitoring. This creates a blind spot for the security team and increases the threat from both internal and external actors. That is why integrating your SAP security monitoring into a centralized SIEM can significantly add value to your cybersecurity, IT operations, system compliance, and business analytics. Ideally, these platforms use technologies such as UEBA (User and Entity Behavior Analytics) to gain behavioral insights in addition to rule-based monitoring. SAP security needs to be monitored continuously and automatically in a SIEM solution, at a central point in the company, integrated into IT security, and ideally managed by a Security Operations Center (SOC), to identify threats and respond immediately.
Metformin, the most widely used medication for diabetes, has also been shown to help treat dementia and some cancers. New research from the Perelman School of Medicine at the University of Pennsylvania and Johns Hopkins Medicine suggests that smoking cessation may be added to that list. In a new study in the Proceedings of the National Academy of Sciences, the research team found that mice given metformin displayed reduced symptoms when going through nicotine withdrawal. "Although we are just beginning to characterize this new role for metformin, our study suggests that the protein it acts on could be a new target for smoking cessation treatment," said senior author Julie Blendy, PhD, a professor of Systems Pharmacology and Translational Therapeutics at Penn. Cigarette smoking is the leading cause of preventable disease and death in the United States, with more people dying from nicotine addiction than from any other preventable cause. Even though quitting smoking brings many health benefits, the abstinence rate remains low with current medications, likely because of an array of undesirable withdrawal symptoms. Metformin has a variety of targets, one of which is a protein called AMPK. This study showed that the AMPK pathway in the hippocampus is activated following long-term nicotine use, but this heightened AMPK activity is rapidly reversed during nicotine withdrawal, which is associated with negative symptoms such as anxiety. Increasing AMPK activity using metformin decreased anxiety following nicotine withdrawal in the mice. Anxiety was measured in two behavioral tasks designed to pit the mice's tendency to explore or engage in social investigation against the anxiety-producing properties of novel objects in the cage (the marble burying test) or of an open, brightly lit space (a novelty-induced decrease in eating test).
This study provides evidence of a direct effect of AMPK on nicotine withdrawal symptoms and suggests that activating AMPK in the brain could be a therapeutic target for smoking cessation. The authors say that the well-established safety profile of metformin for diabetes should encourage clinicians to translate these findings into clinical trials to improve sustained abstinence rates in ex-smokers. As part of a collaborative effort, clinical researchers Rebecca Ashare, PhD, an assistant professor of Psychology in Psychiatry, and Robert Schnoll, PhD, an associate professor of Psychology in Psychiatry and director of the Center for Interdisciplinary Research on Nicotine Addiction, are studying the effects of metformin on smokers to see if it attenuates the negative mood and cognitive deficits during withdrawal, symptoms known to be associated with the ability to quit. If the current trial suggests withdrawal symptoms can be reduced, a larger study would evaluate metformin's effects on smoking cessation. At present, little is known about the molecular targets of the AMPK pathway following chronic nicotine use and withdrawal. Future studies aim to use novel molecular approaches to selectively delete AMPK in specific brain regions associated with nicotine dependence, to better understand the functional role of this protein in addiction. Julia K. Brynildsen, Bridgin G. Lee, Isaac J. Perron, Sunghee Jin, and Sangwon F. Kim are coauthors. Funding: This work was supported by the National Institutes of Health (T32-GM008076, R01 DA041180, DK084336) and a National Center for Advancing Translational Sciences Award (TL1TR000138). The clinical project is supported by a grant from the Commonwealth of Pennsylvania.
Source: Karen Kreeger – University of Pennsylvania
Image Source: Julie Blendy, Ph.D., Perelman School of Medicine, University of Pennsylvania; PNAS.
Original Research: Open access research for “Activation of AMPK by metformin improves withdrawal signs precipitated by nicotine withdrawal” by Julia K. Brynildsen, Bridgin G. Lee, Isaac J. Perron, Sunghee Jin, Sangwon F. Kim and Julie A. Blendy in PNAS. Published April 2, 2018.
For centuries, the concept of healthcare access has been dominated by a visit-based paradigm. The rise of the digital age has changed that. Between wearable devices, mobile applications, and telehealth, doctors and patients are thinking differently about how various health and medical services should be provided and received. Many no longer feel that it’s necessary for every patient-doctor interaction to take place within the confines of an actual doctor’s office. To address these technological and cultural shifts, Booz Allen Chief Medical Officer, Dr. Kevin Vigilante, and Dr. Mohsin Khan, a Booz Allen life science expert, propose the concept of “dose-related” connected access to healthcare. As they explain in an article recently published in the Journal of Ambulatory Care Management, the connected access concept moves healthcare away from a binary model—in which patients either schedule a face-to-face visit or not—and towards a continuum of access styled to match the needs of each patient. In this way, connected access is administered similarly to a pharmaceutical intervention. Just as pharmacies choose the right drug and administer it through the right channels, at the right dose, and the right intervals, connected access to healthcare would be administered by the right kind of provider, through the right communication modality, with appropriate frequency and timing for the needs of the patient. Providing alternatives to the visit-based paradigm is especially important in an era of increasing chronic illness, the authors explain.
Passing a Variable to a VB Script
In conjunction with Automation Anywhere, running VB (Visual Basic) scripts can be very powerful. Sometimes, people prefer to run VB scripts to perform a task or process, and these scripts can be automated using the Run Script command.
Sample use cases:
- Obtain data from a file on a website
- Calculate dates and times
Use the Run Script command to pass values in a variable to the Parameter field. You can then obtain the script's output from the Return Value field. Two commands are required to pass values to a VB script and obtain the results:
1. Read values passed into the VB script: WScript.Arguments.Item(0)
2. Return values from the VB script: WScript.StdOut.WriteLine "Variable" (when returning a variable rather than a string literal, the double quotes are not required; separate multiple values with a space)
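Putting the two commands together, a minimal script might look like the following. This is an illustrative sketch: the file name and the returned text are hypothetical, and only the two WScript calls above come from the documentation.

```vbscript
' echo.vbs - illustrative sketch for Automation Anywhere's Run Script command.
' Reads the value supplied in the command's Parameter field and writes a
' result line that the Return Value field captures.

Dim inputValue
inputValue = WScript.Arguments.Item(0)  ' first value from the Parameter field

' Returned via standard output; the quotes here are needed only because
' this expression starts with a string literal, not a bare variable.
WScript.StdOut.WriteLine "Processed: " & inputValue
```

Outside Automation Anywhere, the same script can be exercised from a Windows command line with cscript //nologo echo.vbs "some value".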
The El Niño climate pattern, when it occurs, presents as warmer-than-usual water in the equatorial region of the Pacific Ocean. The changes in temperature and rainfall that ensue from this warmer water can affect natural disasters, crop yields and even the proliferation of disease — but El Niño’s variations can be hard to predict. Recently, researchers at the University of Texas at Austin applied supercomputing power from the Texas Advanced Computing Center (TACC) to study these variations. “Much of the world’s temperature and rainfall is influenced by what happens in the tropical Pacific Ocean where El Niño starts,” explained lead author Allison Lawman, who obtained her PhD at UT Austin and is now a postdoctoral researcher at the University of Colorado, Boulder. “The difference in rainfall between greater or fewer strong El Niño events is going to be a critical question for infrastructure and resource planners.” The researchers sought answers through an array of climate simulations performed on TACC’s Lonestar5 system, a 1.2-peak petaflops Cray XC40-based supercomputer that launched in 2016 and ended production in 2021. Lonestar5 has since been succeeded by Lonestar6, which was unveiled in late 2021 and which delivers around three peak petaflops. The researchers were specifically interested in whether anthropogenic climate change would cause significant changes in El Niño. They studied this by simulating El Niño over a prehistoric 9,000 year period where the main influences on the Earth’s climate were just the Milankovitch cycles, long-term shifts in the Earth’s axis, obliquity and precession. They found that over this period, El Niño did intensify, but that its inherent variability easily drowned out these intensifications. “It’s like trying to listen to soft music next to a jackhammer,” said coauthor Jud Partin, a research scientist at the University of Texas Institute for Geophysics. 
Now, the researchers are looking to gaze further back in the record — back to ice ages where the climate changes were more extreme — to see if those stronger climatic shifts induced stronger shifts in El Niño, as well. “Scientists need to keep pushing the limits of models and look at geological intervals deeper in time that could offer clues on how sensitive El Niño is to changes in climate,” said co-author Pedro DiNezio, an associate professor at University of Colorado, Boulder. “Because if there’s another big El Niño it’s going to be very hard to attribute it to a warming climate or to El Niño’s own internal variations.” To learn more about this research, read the reporting from the University of Texas at Austin here.
Privacy is a human right, and online privacy should be no exception. Yet, as the US considers new laws to protect individuals’ online data, at least two proposals—one statewide law that can still be amended and one federal draft bill that has yet to be introduced—include an unwelcome bargain: exchanging money for privacy. This framework, sometimes called “pay-for-privacy,” is plain wrong. It casts privacy as a commodity that individuals with the means can easily purchase. But a move in this direction could further deepen the separation between socioeconomic classes. The “haves” can operate online free from prying eyes. But the “have nots” must forfeit that right. Though this framework has been used by at least one major telecommunications company before, and there are no laws preventing its practice today, those in cybersecurity and the broader technology industry must put a stop to it. Before pay-for-privacy becomes law, privacy as a right should become industry practice.
Data privacy laws prove popular, but flawed
Last year, the European Union put into effect one of the most sweeping sets of data privacy laws in the world. The General Data Protection Regulation, or GDPR, regulates how companies collect, store, share, and use EU citizens’ data. The law has inspired countries everywhere to follow suit, with Italy (an EU member) issuing regulatory fines against Facebook, Brazil passing a new data-protective bill, and Chile amending its constitution to include data protection rights. The US is no exception to this ripple effect. In the past year, Senators Ron Wyden of Oregon, Marco Rubio of Florida, Amy Klobuchar of Minnesota, and Brian Schatz of Hawaii, joined by 14 other senators as co-sponsors, proposed separate federal bills to regulate how companies collect, use, and protect Americans’ data. Sen. Rubio’s bill asks the Federal Trade Commission to write its own set of rules, which Congress would then vote on two years later. Sen.
Klobuchar’s bill would require companies to write clear terms of service agreements and to send users notifications about privacy violations within 72 hours. Sen. Schatz’s bill introduces the idea that companies have a “duty to care” for consumers’ data by providing a “reasonable” level of security. But it is Sen. Wyden’s bill, the Consumer Data Protection Act, that stands out, and not for good reason. Hidden among several privacy-forward provisions, like stronger enforcement authority for the FTC and mandatory privacy reports for companies of a certain size, is a dangerous pay-for-privacy stipulation. According to the Consumer Data Protection Act, companies that require user consent for their services could charge users a fee if those users have opted out of online tracking. If passed, here’s how the Consumer Data Protection Act would work: Say a user, Alice, no longer feels comfortable having companies collect, share, and sell her personal information to third parties for the purpose of targeted ads and increased corporate revenue. First, Alice would register with the Federal Trade Commission’s “Do Not Track” website, where she would choose to opt-out of online tracking. Then, online companies with which Alice interacts would be required to check Alice’s “Do Not Track” status. If a company sees that Alice has opted out of online tracking, that company is barred from sharing her information with third parties and from following her online to build and sell a profile of her Internet activity. Companies that are run almost entirely on user data—including Facebook, Amazon, Google, Uber, Fitbit, Spotify, and Tinder—would need to heed users’ individual decisions. However, those same companies could present Alice with a difficult choice: She can continue to use their services, free of online tracking, so long as she pays a price. This represents a literal price for privacy. 
Electronic Frontier Foundation Senior Staff Attorney Adam Schwartz said his organization strongly opposes pay-for-privacy systems. “People should be able to not just opt out, but not be opted in, to corporate surveillance,” Schwartz said. “Also, when they choose to maintain their privacy, they shouldn’t have to pay a higher price.” Pay-for-privacy schemes can come in two varieties: individuals can be asked to pay more for more privacy, or they can pay a lower (discounted) amount and be given less privacy. Both options, Schwartz said, incentivize people not to exercise their privacy rights, either because the cost is too high or because the monetary gain is too appealing. Both options also harm low-income communities, Schwartz said. “Poor people are more likely to be coerced into giving up their privacy because they need the money,” Schwartz said. “We could be heading into a world of the ‘privacy-haves’ and ‘have-nots’ that conforms to current economic statuses. It’s hard enough for low-income individuals to live in California with its high cost-of-living. This would only further aggravate the quality of life.” Unfortunately, a pay-for-privacy provision is also included in the California Consumer Privacy Act, which the state passed last year. Though the law includes a “non-discrimination” clause meant to prevent just this type of practice, it also includes an exemption that allows companies to provide users with “incentives” to still collect and sell personal information. In a larger blog about ways to improve the law, which was then a bill, Schwartz and other EFF attorneys wrote: “For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent.”
Real-world applications
The alarm for pay-for-privacy isn’t theoretical—it has been implemented in the past, and there is no law stopping companies from doing it again.
In 2015, AT&T offered broadband service for a $30-a-month discount if users agreed to have their Internet activity tracked. According to AT&T’s own words, that Internet activity included the “webpages you visit, the time you spend on each, the links or ads you see and follow, and the search terms you enter.” Most of the time, paying for privacy isn’t always so obvious, with real dollars coming out or going into a user’s wallet or checking account. Instead, it happens behind the scenes, and it isn’t the user getting richer—it’s the companies. Powered by mountains of user data for targeted ads, Google-parent Alphabet recorded $32.6 billion in advertising revenue in the last quarter of 2018 alone. In the same quarter, Twitter recorded $791 million in ad revenue. And, notable for its CEO’s insistence that the company does not sell user data, Facebook’s prior plans to do just that were revealed in documents posted this week. Signing up for these services may be “free,” but that’s only because the product isn’t the platform—it’s the user. A handful of companies currently reject this approach, though, refusing to sell or monetize users’ private information. As for Google’s very first product—online search— the clearest privacy alternative is DuckDuckGo. The privacy-focused service does not track users’ searches, and it does not build individualized profiles of its users to deliver unique results. Even without monetizing users’ data, DuckDuckGo has been profitable since 2014, said community manager Daniel Davis. “At DuckDuckGo, we've been able to do this with ads based on context (individual search queries) rather than personalization.” Davis said that DuckDuckGo’s decisions are steered by a long-held belief that privacy is a fundamental right. “When it comes to the online world,” Davis said, “things should be no different, and privacy by default should be the norm.” It is time other companies follow suit, Davis said. 
“Control of one's own data should not come at a price, so it's essential that [the] industry works harder to develop business models that don't make privacy a luxury,” Davis said. “We're proof this is possible.” Hopefully, other companies are listening, because it shouldn’t matter whether pay-for-privacy is codified into law—it should never be accepted as an industry practice.
The Vodafone Foundation and researchers at Imperial College London introduced an app designed to use slumbering smartphones in the fight against cancer. Named DreamLab, the app aims to harness the collective processing power of a network of smartphones to analyse small chunks of broader datasets to identify connections which could ultimately deliver more effective combinations of existing cancer-treating drugs. This analysis will be conducted overnight, when smartphones are typically not being used and so more processing power is available. In a statement, Vodafone explained the cloud-based processing approach could drastically reduce the time taken to analyse the vast amount of data available. Data analysis is handled by an algorithm developed by the college’s Department of Surgery and Cancer. Vodafone noted while cancer treatments are traditionally decided based on patients’ specific form of the diseases, “this research aims to use genetic profiles to find the best cancer treatment for individuals.” Months versus years The operator stated a network of 100,000 smartphones running for six hours a night could analyse the “vast amount of data that exists” in three months compared with the 300 years it would take a desktop PC equipped with an eight-core processor running 24 hours a day. Dr Kirill Veselkov of the Surgery and Cancer department said today only a fraction of the “huge volumes of health data” generated globally is being put to use: “By harnessing the processing power of thousands of smartphones, we can tap into this invaluable resource and look for clues in the datasets.” Ultimately, the approach “could help us to make better use of existing drugs and find more effective combinations” tailored to patients, he added. DreamLab is free to download and use for Vodafone customers. Users on other networks can also participate, choosing how much of their data allowance they wish to donate towards the app or connecting via Wi-Fi. 
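The months-versus-years comparison above can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes "three months" means roughly 90 nights; the implied per-phone speed is our inference, not a figure from Vodafone.

```python
# Compare total compute-hours in the two scenarios described above.
phones = 100_000
phone_hours = phones * 6 * 90        # 6 hours/night for ~90 nights
desktop_hours = 300 * 365 * 24       # one 8-core desktop, 24/7 for 300 years

# Implied speed of one phone relative to the desktop, if both scenarios
# perform the same total amount of work.
relative_speed = desktop_hours / phone_hours
print(phone_hours, desktop_hours, round(relative_speed, 3))
```

The numbers work out to 54 million phone-hours against roughly 2.6 million desktop-hours, so the claim implicitly rates each phone at about 5% of the desktop's throughput, which is plausible for overnight background processing.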
Vodafone Foundation supports projects focused on delivering public benefit through the use of mobile technology.
In this post we will learn how to install Python 2.7 on your Windows machine and then run a myriad of Python scripts without much hassle.
Step 1: Before installing anything, go to CMD on your Windows machine and type python there. If there isn't any Python on your system you will get the error "python is not recognized".
Step 2: Browse to the URL https://www.python.org/downloads/.
Step 3: Download Python 2.7.14 from that URL.
Step 4: Once the Python 2.7.14 .exe file has downloaded, install it on your machine.
Step 5: Once installed, open CMD again and type python, then press Enter. If you still get the error "python is not recognized", go to Step 6.
Step 6: Right-click on the "My Computer" icon and the window below opens:
Step 7: Click on "advanced system settings".
Step 8: Click on "environment variables".
Step 9: Scroll to the 'Path' variable as below and click 'edit'.
Step 10: At the end of the Path variable, append the value "C:\Python27", separated from the existing entries by a semicolon.
Step 11: Click OK, open CMD again, and type python. You should see a window as below, which means the Python shell now works on your PC.
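After Step 11, the install can be double-checked from a fresh Command Prompt. The session below is illustrative and assumes the 2.7.14 installer from the steps above:

```
C:\> python --version
Python 2.7.14

C:\> python -c "print 'PATH is set correctly'"
PATH is set correctly
```

If the first command still reports "python is not recognized", the new PATH entry has not taken effect yet; close and reopen the Command Prompt so it picks up the updated environment variables.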
Cybersecurity, noun, is defined as:
- precautions taken to guard against crime that involves the internet, especially unauthorized access to computer systems and data connected to the internet.
- (computing) the state of being safe from electronic crime and the measures taken to achieve this.
- the state of being protected against such crime.
What does that really mean? Well, in its most simple form, it is doing what it takes to protect your internet-accessible technical environment and company-owned assets from being compromised by external attackers. This encompasses a multitude of ways to protect your infrastructure. To aid with this, there are several governing bodies that define processes for handling cybersecurity in your organization. One such body is the National Institute of Standards and Technology (NIST). Last month, NIST published a new set of guidelines, Special Publication 800-172, to help organizations better secure their systems against state-sponsored hackers, commonly referred to as advanced persistent threats (APTs). In the publication, NIST outlines how IT administrators can design their networks to better protect company assets and which security best practices to follow to provide additional layers of protection. While this publication focuses heavily on governmental agencies, the guidance can help any organization better withstand attacks from APTs. Does your organization have a cybersecurity policy? We're here to help. Article provided by Darryl Brauss, vCIO
Major Technological Trends in Edtech
Edtech can aid workplace onboarding and training, online learning, and extracurricular group learning like language or art classes. FREMONT, CA: Edtech helps teachers and instructors teach, and it helps kids study in effective ways. Some pupils learn best by hearing an idea explained, while others learn by doing. Both learning styles can be accommodated when learning how to build a birdhouse (one student can listen to an instructor explain, while the other can try it), but that is not always possible. Technology can help students with sensory or cognitive limitations: students who have trouble reading can listen to textbooks online or read lecture captions. With virtual reality, kids can undertake everything from perilous science experiments to exploring Mount Everest or space.
VR and AR
VR and AR offer new learning opportunities. They let students experience things in 3D instead of reading about them or watching a video. Students can digitally visit museums and landmarks, and medical students can learn how to interact with patients and make appropriate diagnoses. VR and AR for learning will continue to grow in 2022.
Gamification
Gamification is a terrific approach to engaging students in material they might not otherwise enjoy. Teachers may have let students play Jeopardy in teams to study history facts, or rewarded top spelling exam scores. Competition and awards make learning exciting and rewarding. Edtech provides online learning games and online courses with awards and certificates, which can help kids learn. It helps school-aged children learn math, reading, strategy, and other abilities through online games. Most learning websites with games have premium paid options.
Data analysis
Higher education currently tracks which students engage with the material. This lets teachers help struggling students individually. Online course instructors can also look for engagement trends and tweak low-engagement content.
Without these data points, teachers wouldn't know what wasn't working. Data analysis pays off. As more education moves online, more teachers will access engagement data to aid disengaged students and enhance curricula. Once teachers have data on students' internet-based learning practices, they can determine how each student learns best. Personalization gives pupils individualized, self-paced learning paths. Self-paced online courses already do this, but it will become more common as more learning is digital. Post-pandemic college students prefer online courses, so professors will keep offering them. Online learning provides data and customization.
What is a mail server?
We can say that an email server (or mail server) is your digital postal service. It's a machine or application responsible for handling messages; its function is to receive and deliver emails. So, when you send an email, your message usually goes through a series of email servers until it reaches the recipient. The process is so fast and efficient that it looks simple, but there is a great deal of complexity behind sending and receiving emails. To avoid confusion, it is important to be clear that the term email server can have different meanings depending on the context. Sometimes an email server can mean a computer or machine that runs a complete system, including different services or applications. At other times, the term email server is used as a synonym for one of these services or applications.
Types of email servers: outgoing and incoming servers
When we use the term email server in the sense of services or applications, we can separate email servers into two main categories: outgoing email servers and incoming email servers.
SMTP, POP3, and IMAP
Outgoing email servers are called SMTP (Simple Mail Transfer Protocol) servers. Incoming email servers are known by the acronyms POP3 (Post Office Protocol) and IMAP (Internet Message Access Protocol). Before you ask about the difference between IMAP and POP3, here is the answer: with IMAP, messages are stored on the server itself, while with POP3, messages are usually kept on the device, that is, on your computer or cell phone. In general, IMAP is more complex and flexible than POP3.
How the process of sending emails works, in 4 steps
To facilitate understanding, we have created a basic step-by-step description of sending an email. It is a very simplified version, but it allows you to understand how an email is sent and delivered.
Step 1: Connecting to the SMTP server
When you send an email, your email service or provider, such as Gmail, Exchange, Office 365, or Zimbra, will connect to the SMTP server. That SMTP server is connected to your domain and has a specific address, such as smtp.gatefy.com or smtp.example.com. At this stage, your email service will provide the SMTP server with some important information, such as your email address, the message body, and the recipient's email address.
Step 2: Processing the recipient's email domain
The SMTP server will now identify and process the recipient's email address. If you are sending an email to someone else in your company, that is, to the same domain, the message will be directed straight to the IMAP or POP3 server. Otherwise, if you are sending the message to another company, for example, the SMTP server will need to communicate with that company's email server.
Step 3: Identifying the recipient's IP
At this stage, your SMTP server will need to connect to the DNS (Domain Name System) to find the recipient's server. The DNS works like a translation system: it helps convert the recipient's domain into an IP address. An IP address is a unique number that identifies a machine or server connected to the internet. SMTP needs the IP address to perform its function correctly and direct the message to the recipient's server.
Step 4: Delivering the email
But not everything is as simple as it seems. Generally, your email will go through several unrelated SMTP servers until it reaches the recipient's SMTP server. When the recipient's server receives the email, the SMTP server checks the message and then directs it to the IMAP or POP3 server. The email then enters a queue and is processed until it is available for the recipient to access. There, now the email can be read. And you know the basics about incoming and outgoing mail servers.
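To make step 1 concrete, here is a rough sketch of what an email client does when it hands a message to its outgoing server, using Python's standard library. The host smtp.example.com, the port, and both addresses are placeholder assumptions, and the network handoff lives in a helper that is never called, so nothing is actually sent:

```python
import smtplib
from email.message import EmailMessage

def build_message():
    # The information handed to the SMTP server in step 1:
    # sender address, recipient address, and the message body.
    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello"
    msg.set_content("A message on its way through SMTP.")
    return msg

def hand_off(msg):
    # Connect to the sender's outgoing server (placeholder address).
    # That server then resolves example.org via DNS (steps 2-3) and
    # relays the message toward the recipient's server (step 4).
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.send_message(msg)

message = build_message()
print(message["From"], "->", message["To"])
```

In practice the client never talks to the recipient's server directly; it only hands the envelope and body to its own SMTP server, which takes care of the rest of the journey described above.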
But to conclude, we still need to talk about email protection and security. Here we show a simplified process. Sending and receiving emails involves different and complex processes and protocols, which, unfortunately, are usually forged or falsified. In fact, email is the main vector for cyber attacks, the most used way by criminals and attackers to commit scams and fraud. This means that if you have a business and want to keep it free from threats, you need to be careful with email protection. Your company’s email security needs to take into account several aspects, from creating internal policies for the use of email to adopting protection solutions. If you are interested in this subject, Gatefy can help you. Get in touch with us to know more.
Cyber self-defense is key to avoiding cyber threats to U.S. national security, according to the Senate Select Committee on Intelligence. The committee conducted a six-month study on cybersecurity and found that basic measures such as installing software updates and running up-to-date virus programs can defeat most threats, but are often neglected by computer users. According to a CNN.com opinion piece written by Sens. Olympia Snowe, R-Maine, Barbara Mikulski, D-Maryland, and Sheldon Whitehouse, D-Rhode Island, such simple measures protect the resilience of the nation's information networks and therefore are a part of U.S. national and economic security. As the United States becomes more technologically interconnected, networks are more vulnerable to attack, they added. About 20 percent of global cyber threats come from computers in the United States. Many users do not realize their computers are infected by malware, which can arrive in the form of spam emails or suspect downloads. The committee members also called for a public-awareness campaign to educate Americans on how to protect themselves against identity theft and other cyber crimes. If computer users do their part by performing routine virus checks and maintenance on their machines, cybersecurity experts in the Intelligence Community can more easily focus on sophisticated attacks that threaten national security, they added.
Turbidity is one of the most important elements that represent drinkability. Particulates, including bacteria and micro-plastics, cause turbidity to increase. THE.WAVE.TALK developed an IoT water sensor that can test the drinkability of everyday water more cost-effectively by checking its turbidity. "WaTalk" is a portable water sensor that any non-expert can use to test tap water. For less than 150 USD, anyone can check water quality with the same level of accuracy as 6,000 USD products. Our ultimate goal is to provide water quality forecasts, so that consumers can take action before the water is contaminated.
Ransomware has been around since 2013, but we've seen an exponential increase in ransomware attacks in the past couple of years. It is estimated that ransomware cost organizations globally a total of $20 billion in 2021 alone, almost double the figure for 2020. As businesses suddenly transitioned into working remotely, cybercriminals took this as an opportunity to exploit vulnerabilities arising from employees' lack of security knowledge. Over the past few years, phishing has become the most common delivery mechanism cybercriminals use to spread ransomware. In 2021, Statista reported that 54% of ransomware attacks were delivered through phishing. Although businesses are now adopting advanced cybersecurity technologies to fight the war against ransomware, statistics show that in 2021 a business was hit by ransomware every 11 seconds. It's almost inevitable that every business will experience a ransomware attack sooner or later.
The nature of ransomware has evolved
With traditional ransomware threats, cybercriminals infiltrate and encrypt the victim's data and demand a ransom for its safe restoration. Due to the massive rise in ransomware attacks, businesses have found new ways to adapt and safeguard their data against this threat. However, cybercriminals have also adapted their ransomware model to add additional layers of extortion in case a victim refuses to pay the ransom. The most common of these modern ransomware threats are double extortion and triple extortion attacks.
What are double extortion ransomware attacks?
Double extortion attacks are becoming an increasingly common form of ransomware. In this form of attack, cybercriminals breach a victim's data, encrypt it, and threaten to expose or publish the data online unless a ransom payment is made. The data could be anything from personal customer data to the victim's intellectual property, which gives cybercriminals leverage over the victim's business.
There has been a significant rise in the use of this strategy over the last year, with a reported 935% increase in the damages caused by double extortion attacks. What are triple extortion ransomware attacks? Triple extortion attacks are a relatively new ransomware strategy that came into the spotlight in the past couple of years. In this method, cybercriminals demand a ransom from the initially compromised victim as well as from those affected by the exposure of that company's data. Their targets include businesses that hold highly sensitive data, such as hospitals. Additionally, triple extortion attacks may involve further attacks if the victim refuses to pay the ransom. How Pulseway can help protect businesses against modern ransomware threats While it is almost impossible to be completely safe from cyberthreats, Pulseway can help businesses reduce the risk of falling victim to ransomware and help them get back up and running quickly in the unfortunate event of an attack. Monitoring: Pulseway's RMM enables MSPs and IT administrators to keep all their systems monitored at all times, ensuring that any signs of suspicious activity are picked up immediately. If there is an issue, Pulseway's remote monitoring solution allows administrators to identify and rectify it immediately without having to be on-site. Patch management: Software vendors periodically release updates or patches to fix security vulnerabilities and other bugs in their existing software. When a vulnerability is discovered, cybercriminals try to exploit it, which makes patching before an attack happens a race against time. However, it can be very time-consuming to keep track of the patching schedule and ensure that all appropriate device and OS patches are applied on time.
With out-of-the-box support for over two hundred software titles, Pulseway's patch management solution automates the process of installing and updating your software, helping businesses avoid major cybersecurity risks from patchable vulnerabilities. Email security: Traditional email security solutions are designed to spot malware. Unfortunately, phishing emails can look innocent, and many of them go undetected by these legacy solutions. Pulseway's email security solution analyses communication patterns to identify suspicious emails. Specifically designed to detect phishing attacks, the most likely cause of ransomware, this solution helps keep employees' email safe by scanning messages and flagging suspicious ones. Cloud backup: The vulnerabilities in a cloud-based environment are harder to exploit than those of physical backups. Pulseway's cloud backup solution not only lets you store data in the cloud but also lets you restore previous versions of your files, allowing you to revert to an unencrypted version in the unfortunate event of a ransomware attack.
https://www.anoopcnair.com/modern-day-ransomware-threats-you-must-pulseway/
By Jeff Stein, Information Security Architect, Reputation.com Once known as electronic mail and used for simple but near-instantaneous communication between computers, email has evolved to serve a variety of important business purposes. They range from massive marketing campaigns to closing business deals, as well as the ability to use the address associated with an email account as an almost universal identity that you can leverage across multiple sites and accounts. The varied modern functions of email have made it an integral part of our daily lives and the modern work environment. As with any core technology, as its popularity has grown, so have the attack vectors that malicious actors target, such as compromising a system, social-engineering a user, or spoofing the domains of reputable organizations. The regularity with which domains are spoofed by malicious senders illustrates the issue and the need for message integrity in email. At a basic level, the challenge of message integrity arises because of a fundamental lack of security in the design of SMTP, the underlying message protocol used to send email. When reviewing the SMTP protocol from a security perspective using the CIA triad as a barometer, you will observe that the protocol lacks provisions for both authentication and encryption. Encryption is important for adding confidentiality to the emails you send, while authentication is important for ensuring the integrity of a message. Where the original SMTP standard is lacking from a security design standpoint, standards are now available to complement SMTP and provide a more secure messaging experience. Communication can be sent over TLS to provide encryption, and therefore confidentiality, of email during transmission. From a message integrity standpoint, a combination of three email authentication standards, SPF, DKIM, and DMARC, provides for a secure implementation of email.
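As a concrete illustration of the TLS point above, Python's standard smtplib can submit mail over a STARTTLS-upgraded connection. This is a minimal sketch, not the author's method: the host name and addresses are hypothetical placeholders, and SMTP authentication is omitted for brevity.

```python
import smtplib
import ssl
from email.message import EmailMessage

def send_over_tls(host: str, sender: str, recipient: str, body: str) -> None:
    """Submit a message over a STARTTLS-upgraded SMTP session, so it is
    encrypted in transit (confidentiality; integrity still needs SPF/DKIM/DMARC)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Example message"
    msg.set_content(body)

    context = ssl.create_default_context()   # verifies the server certificate
    with smtplib.SMTP(host, 587) as server:  # 587 is the standard submission port
        server.starttls(context=context)     # upgrade the cleartext session to TLS
        server.send_message(msg)

# Usage with a hypothetical relay:
# send_over_tls("smtp.example.com", "alice@example.com", "bob@example.com", "hello")
```

Note that TLS here protects only the hop to the relay; end-to-end message integrity is exactly what the SPF, DKIM, and DMARC standards discussed below address.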
SPF acts as a whitelist for your domain, providing the ability for domain owners to define which IP addresses are allowed to send mail on behalf of the email domain. While leveraging a whitelist may seem sufficient for message integrity, one limitation of SPF to be aware of is that evaluating a record may trigger at most 10 DNS lookups (for mechanisms such as include, a, and mx). Depending upon how many authentic parties send on behalf of a domain, this may be quite limiting. Additionally, to increase trust in the source of the message, DKIM enables message integrity by adding a digital signature to the message. By validating the digital signature, a recipient of the message can identify whether the message is valid or whether it has been altered or forged. To improve the proper handling of the two standards highlighted above, DMARC is a framework that allows a domain owner to instruct the message recipient on how to handle any messages received from the domain that do not pass a combination of SPF and DKIM authentication. In essence, the DMARC framework overlays the protections of both SPF and DKIM and gives domain owners a vector for specific guidance to mail relays on how to handle the message, whether that is to reject the message outright, quarantine it, or take no action at all, merely reporting on infractions. The important takeaway from these authentication standards is that while SPF and DKIM can be used independently without DMARC, the overall framework provided by DMARC will yield a more holistic message integrity posture, combining the benefits of all three standards. Leveraging a DMARC strategy will put your business ahead of the curve when it comes to message integrity. A recent study on Global DMARC Adoption by 250ok, an email intelligence platform, found that nearly 80% of all domains do not employ a DMARC policy.
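To make the three standards concrete, here is what a minimal set of DNS TXT records might look like for a hypothetical domain. All names, addresses, and the truncated DKIM public key below are illustrative only:

```
; SPF: authorize one IP and one third-party sender, fail everything else
example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 include:_spf.mailprovider.example -all"

; DKIM: public key published under the "mail" selector (key truncated)
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."

; DMARC: reject failing mail and send aggregate reports
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```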
Even when DMARC is deployed, March 2019 data from Microsoft revealed that of the Fortune 500 companies that did leverage DMARC as part of their message integrity strategy, a full third had the framework configured to report-only, providing no technical controls over the fate and enforcement of outbound emails sent on behalf of their domain. As highlighted in the studies above, to get the most value out of a message authentication strategy you will want to leverage SPF, DKIM, and DMARC together and configure DMARC to reject messages that do not originate from sources contained in your SPF record or carry proper DKIM signing. By choosing to reject messages from unauthorized sources rather than quarantining them or simply gathering reports on who is sending on your behalf, you can prevent those messages from ever reaching recipients. This protection-focused stance will improve your message integrity posture by reducing the number of spoofed and phishing messages sent under your domain. Reducing the amount of malicious mail associated with your domain will also help improve the overall reputation of your domain and business brand with mail recipients. As an Information Security Architect with Reputation.com, an industry leader in online reputation management providing customers with a full range of solutions to handle their presence online, I look to address our own online posture from a security perspective. Message integrity plays a key part in that strategy, and the reputation of the emails sent on behalf of our domain is very important to the reputation of the business. By focusing on message integrity and fully leveraging SPF, DKIM, and DMARC, I have been able to gain visibility into spoofed mail representing the Reputation.com domain, as well as a framework of technical controls to prevent unauthorized mail from reaching potential customers, customers, and business associates.
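A receiving system (or an auditing script) decides what to do with failing mail by reading the p= tag of the published DMARC record. The following is a small sketch of that parsing step in Python; the record string is a hypothetical example, not a real domain's policy:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split only at the first '='
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # → reject  (the disposition applied to failing mail)
```

A p= value of "reject", "quarantine", or "none" corresponds to the three dispositions described above: refuse the message, hold it, or report only.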
This has provided a boost to our security posture and has helped reinforce the trust our customers place in us and in our SaaS platform. As a technology medium that is a target for malicious exploits such as email spoofing, the integrity of email is important to ensuring its secure use and the trust associated with it. While the underlying email protocol may be lacking in security, a combination of the SPF, DKIM, and DMARC standards provides the integrity not built into the SMTP protocol by default. Using DMARC with both SPF and DKIM, set with a reject disposition, will not only give you valuable visibility into where your domains are being spoofed but also the ability to take proactive measures on how spoofed messages representing your domain and business are handled by recipients. Taking these steps to protect the integrity of your domain will lead to a higher level of trust from your mail recipients and reduce any negative impact on your business brand from spoofed mail. About the Author Jeff Stein is currently the Information Security Architect at Reputation.com, an industry leader in online reputation management. His prior experience includes the FinTech space and both the United States House of Representatives and the United States Senate. In addition to holding numerous security and IT certifications, including his CISSP, he received a Master of Science in Information Security and Assurance from Western Governors University. Jeff can be found online on his blog, https://www.securityinobscurity.com, and reached at firstname.lastname@example.org or on Twitter at @secureobscure, and at our company website https://www.reputation.com and on Twitter at @Reputation_Com.
https://www.cyberdefensemagazine.com/effectively-using-email/
Disk cloning software rose to prominence as an alternate way to back up and restore hard disk drives (HDDs) or solid state drives (SSDs). If the drive crashes or fails, the clone gives you fast access to everything on it, as it is a duplicate of the original. It includes the data, applications, system software, and related files. Instead of having to reinstall everything from scratch, it is all provided rapidly by the clone. Key Disk Cloning Use Cases There are various approaches and use cases for cloning. Sometimes the clone resides within the same PC, server, or laptop. A safer way is to have the clone reside on an external device or on another server. That offers peace of mind in the event of a disaster, theft of a device, or other event. Clones serve a valuable role, too, when you are moving from one device to another. Shifting from a PC to a laptop (or onto a new laptop) is simplified by just migrating the clone onto the new hardware platform. Similarly, when a user is moving from HDDs to SSDs within the same laptop, cloning makes the changeover swift and relatively painless. The main use cases for cloning are: - Reactivation and recovery of drives - Preparation for a new computer installation - System backup and recovery - Hard drive or SSD upgrade In the last use case, a common reason is that free space is running out on a hard drive. Clones come in handy when adding a new, larger hard drive and transferring all data, applications, and the operating system. How to Select Cloning Software There is a wide range of disk cloning tools out there. They each create an exact, uncompressed replica of a drive. It’s a smart idea to keep an updated clone available for data security protection as part of a full backup and disaster recovery (DR) plan. Tools range from consumer, individual-device products to business-class products that can address servers and multiple users. Here are some key points to consider in product evaluation: - Consumer vs.
Enterprise: Consumer tools may work fine for one laptop or one PC. The fact that it is a consumer tool does not necessarily mean that it lacks features; some are quite sophisticated. But anyone working in a business setting with multiple users is advised to pick a platform that works well for many users and can be centrally managed. - Sector-level cloning: This feature copies everything sector by sector, after which the new drive is identical to the old one down to the sector level. This is important in those cases where complete disk integrity is called for, i.e., all sectors are copied exactly, not just the files. - Platform support: Businesses should pay attention to the type of servers supported. For Windows users, check which versions of Windows Server are supported. For other platforms such as Red Hat, make sure the versions fit with what you are running. - Package capabilities: Disk cloning on its own may be enough. But many vendors package it with backup software, disk cleanup tools, DR, imaging, and a variety of other features. Choose the option that best fits your needs. Top Disk Cloning Tools Enterprise Storage Forum evaluated various tools based on feature set and overall capabilities. Here are our top choices, in no particular order: - Acronis Disk Director - Kace Systems Deployment Appliance - Macrium Reflect - Clonezilla - EaseUS Disk Copy - Paragon Hard Disk Manager - Symantec Ghost Solution Suite - AOMEI Backupper Acronis Disk Director is available in Home or Business versions. Disk Director is also part of Acronis True Image 2021, which provides added features. The company prefers to focus on True Image, as most buyers choose the entire suite. When compared to regular backup software, the biggest benefit of Disk Director is that it provides a complete image of a computer at a single point in time. Acronis Disk Director makes it easy to create hard disk partitions and to resize, move, or merge partitions without risk of data loss.
It also complements Acronis data backup solutions. - Can use either the cloning feature under Acronis bootable media or active cloning to clone disks even from within the OS (without stopping the operating system and restarting it) - Windows and macOS - Cloning exclusions (if you don’t need to migrate some files/folders) - Cloning of disks and partitions - Manual resizing of partitions, selected from the list of partitions on the destination disk Kace Systems Deployment Appliance Kace Systems Deployment Appliance from Quest provides automated systems and disk imaging/cloning. It can automate the deployment of configuration files, user states, drivers, and applications, as well as provision onsite or remote endpoints. It offers a way to execute large-scale systems deployment across multiple remote sites. - Centralized management to streamline large-scale system imaging operations and application deployments to remote sites from any location - Build and schedule complex deployments - The task engine controls the order of deployment tasks, handles reboots, and ensures that the system is updated in real time - Automated configuration of RAID and BIOS, installation of applications, and deployment of scripts - Recovers systems using native Windows and Mac tools, including native disk imaging software Macrium provides a range of home and business products, including disk cloning. Macrium Reflect includes Rapid Delta Clone (RDC), Rapid Delta Restore, backup, and more. RDC works as follows: the NTFS file system resident on the clone source is compared with the file system on the target disk. The two file systems are first verified to have originated from the same format command, and then the target NTFS file system structures are analyzed for differences. All the NTFS file system structures are copied to the target disk, and for any that do not exist on the target or have been modified, the data records for each NTFS file or object are copied as well.
- Rapid Delta Restore isn’t dependent on VSS, so a delta restore can be performed to any disk that has a previous copy of the imaged file system, no matter its current state - Reflect 8, the latest major update, includes Automatic Partition Sizing to restore to a disk of a different size - Free, home, standalone business, server, and site versions are available - Macrium Reflect Server Plus includes backup of Microsoft Exchange email and SQL databases - Instant virtual booting of backup images, and the ability to instantly create, start, and manage Microsoft Hyper-V virtual machines - All current Windows Server platforms supported - Protect backups from ransomware with Macrium Image Guardian Clonezilla is a free, open-source partition and disk imaging/cloning application that provides system deployment and bare metal backup and recovery. Available in single-machine and multiple-machine versions, it saves and restores only used blocks on the hard disk to increase cloning efficiency. - Can clone up to 40 computers simultaneously - Available in individual, small server, and live server editions - Supports many file systems, including FAT, NTFS, Mac, VMware file systems, and most open source file systems - Disk partitioning - Supports image restoration to multiple local devices - Images can be encrypted using AES-256 encryption EaseUS Disk Copy software can clone hard drives regardless of operating system, file system, and partition scheme. It can be used for copying, cloning, or upgrading a small hard drive to a new, larger drive. It copies everything from the old hard drive, including deleted, lost files and inaccessible data. It is used for disk cloning, keeping a backup copy near to hand, and upgrading to SSDs.
- The sector-by-sector method assures the copy is 100% identical to the original - Easy to use, suitable for beginners - Cloning and backup are available with EaseUS Todo Backup - Various home and business versions are available - Schedule automated backup tasks by time or event – set it and forget it - For unexpected system failure, users can create a WinPE/Linux bootable disk to avoid disruptions and restart the system from the bootable backup drive Paragon offers home and business versions of Hard Disk Manager. The business version is designed for multiple users, facilitating IT management from deployment to disposal of the hardware. It offers backup, cloning, and business continuity and works across heterogeneous systems in hybrid environments. - Clone or back up an entire system, volumes and files, scheduled backup, incremental and differential imaging, backup encryption and compression, backup data excludes, pre-/post-backup scripts, pVHD, VHD, VHDX, VMDK container support - Restore an entire system or individual volumes - Pre-mounted network connection capabilities during setup help to prepare bootable (uEFI- and BIOS-compatible) Windows PE or Linux USB sticks or ISO images to use the product utilities on bare metal machines or when the OS is down - Create, format, delete/undelete, hide/unhide, set active/inactive, assign/remove drive letter, change volume label, file system conversion (FAT to NTFS, NTFS to FAT, HFS to NTFS, NTFS to HFS), and file system integrity check - Advanced partitioning to split, merge, expand, redistribute free space, change cluster size, convert to logical/primary, edit sectors, convert to MBR/GPT, change primary slots and serial number, and test surface - Data wiping Broadcom picked up this and many other enterprise security and data protection products through its acquisition of Symantec in 2019. The Symantec Ghost Solution Suite (GSS) is a way to deploy and manage desktops, laptops, tablets, and servers from one console.
It enables IT to migrate operating systems, inventory machines, deploy software, and perform custom configurations across multiple hardware platforms and OS types, including Windows, Mac, and Linux. - GSS provides a single solution for deploying desktops, laptops, tablets, servers, and thin clients - Works across heterogeneous operating systems, including Windows, Mac, and Linux - Simplifies image management and cloning, and reduces the number of images needed for deployment - Inventory and software delivery functions - Ability to group machines based on application versions AOMEI Backupper backup and recovery software spans backing up of files, folders, hard disk drives, partitions, dynamic volumes, applications, and system drives, as well as restoration in the event of data loss. It includes a disk imaging/cloning tool which can be used to create an exact image of your entire hard disk drive and operating system, and to migrate to another hard drive if desired, for Windows PCs and Windows Server. - Versions ranging from free to professional, server, and site - Clones disks, dynamic disk volumes, system cloning and migration, command line and partition cloning - Integrated backup, restore, and cloning - Designed for Windows 10, 8, 8.1, 7, XP, Vista, Windows Server 2003, 2008 (R2), 2012 (R2), 2016, 2019 and Windows Small Business Server 2011.
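At its core, the sector-level cloning these products perform is a block-by-block, byte-exact copy followed by verification. The following is a rough sketch in Python, operating on image files for safety; on real hardware the source and destination would be raw block devices, and the commercial tools above add scheduling, delta copying, and bootable-media support that this sketch does not attempt:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # copy in 64 KiB chunks

def clone_image(src_path: str, dst_path: str) -> bool:
    """Copy src to dst block by block, then verify the clone is byte-exact
    by comparing SHA-256 checksums of the source and the finished copy."""
    src_hash = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while block := src.read(BLOCK_SIZE):
            src_hash.update(block)
            dst.write(block)

    # Re-read the destination to confirm what actually landed on "disk"
    dst_hash = hashlib.sha256()
    with open(dst_path, "rb") as dst:
        while block := dst.read(BLOCK_SIZE):
            dst_hash.update(block)

    return src_hash.hexdigest() == dst_hash.hexdigest()
```

For example, `clone_image("disk.img", "clone.img")` returns True when the verification pass confirms the copy is identical to the original.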
https://www.enterprisestorageforum.com/backup/best-disk-cloning-software/
The quest to predict the unpredictable and control the uncontrollable is part of an ongoing process. In the field of science and technology, it has sometimes led to discoveries and useful innovations too. AI can be seen as the product of such human endeavors. In the current trends in AI, it is interesting to see that some discussions are now entering a phase where ideas of self-consciousness, free will, and unpredictability are referred to as ‘features’ of the systems, while reliability, efficiency, and predictability are increasingly promoted as ‘labels’ for what is expected of the human users. “The thought of every age is reflected in its technique,” wrote mathematician Norbert Wiener in his classic work ‘Cybernetics: or Control and Communication in the Animal and the Machine.’ Wiener’s findings played a significant part in the foundation of autonomous systems, communication, and AI. But that generation of scientists was not so hopeful about the future use of their inventions. Wiener highlighted those aspects too, when he wrote: “Those of us who have contributed to the new science of cybernetics thus stand in a moral position, which is, to say the least, not very comfortable. We have contributed to the initiation of a new science which embraces technical developments with great possibilities for good and for evil.” And “we can only hand it over to the world that exists about us…and this is the world of Hiroshima and Belsen.” September 2021 was an interesting month! It saw the launch of two notable AI prediction books from two different parts of the world. Kai-Fu Lee, CEO of Sinovation Ventures and former president of Google China, together with science fiction writer Chen Qiufan, published “AI 2041: Ten Visions for Our Future,” and coincidentally, during the same period, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher’s book “The Age of AI and Our Human Future” was launched too.
To put it a bit sarcastically, the ‘changing nature of jobs’ is one of the most talked-about AI topics, and an example of that can be seen here as well. One can see in the above case how the job of scientists, mathematicians, and technologists has now been taken over by fiction writers and ex-diplomats, who often prefer to present their views in an entertaining fashion! In November, UNESCO’s 193 member states adopted an agreement on the ethics of artificial intelligence that defines the common values and principles needed to ensure the “healthy development of AI.” And in the same month, NATO defense ministers agreed on their first-ever strategy for artificial intelligence. In the last few years, AI has produced a good number of futurists too. And if the ‘Singularity’ has been achieved, or is about to be, as some of them claim, meaning AI will become superior to ‘Human Intelligence,’ then can’t we expect AI algorithms to regulate, restrain, or discipline themselves? That is something their supposedly lowly human competitors are good at doing; one hopes this concern will be addressed in the next set of AI predictions! Some Silent Developments Some real-world developments in this domain, with more practical objectives and implications, need to be looked at as well. This year, in October, a senior Pentagon official resigned because he thought “the US could not compete with China on AI.” His move drew everyone’s attention to the emerging patterns of the AI arms race that were otherwise unfolding silently (in typical AI Cold War manner).
Earlier this month, the US Department of Defense released its annual report on Chinese military power. The report highlights China’s growing focus on developing next-generation warfare capabilities, including the use of more “mechanized, informatized, and intelligentized” systems built on specific emerging technologies such as AI and advanced robotics; the use of AI for social media analysis and propaganda is mentioned in the report as well. On the other side, the US Innovation and Competition Act 2021 was passed by the US Senate in June this year, which is seen as a counter to China’s growing tech aspirations. There are several other overt and covert moves going on on both sides, but what is more interesting is that a new kind of mutual understanding between the two, on the question of the division of AI’s strategic pie, is taking shape as well. As Kai-Fu Lee highlights in his book, “China and the United States have already jumped out to an enormous lead over all other countries in artificial intelligence, setting the stage for a new kind of bipolar world order,” and “as AI companies in the United States and China accumulate more data and talent, the virtuous cycle of data-driven improvements is widening their lead to a point where it will become insurmountable.” In a nutshell, both sides have not just big tech and big data but big ambitions as well, and their maneuvers in this domain can certainly take a more ‘uncertain’ turn in the years to come! From science to technology to politics to geopolitics, the pace of change in the world of AI is quite impressive. The ‘Human’ perspective If the mind can be decoded, then it can be reprogrammed; if it can be reprogrammed, then it can be controlled. These ideas have fascinated researchers for decades.
And it is important to note here that progress in the field of artificial intelligence is not the product of computers and information technology alone; it draws on physiology, neurology, and psychological experiments as well. How AI is going to shape and influence the way humans live is a question that has been answered in many different and innovative ways. But ironically, the ‘Human’ perspective in AI is yet to be introduced. And AI cannot be blamed for this; it is the fault of our human systems, which cannot distinguish between a user and an instrument. For technology and its improvisations to remain significant, its core interest must shift with the changing times. AI was created to make humans more productive and to reduce their labor. A fair brainstorming on how optimally humans can use AI is still due on our part. The time to shift the focus from AI to the ‘Human’ angle has come!
https://www.dailyhostnews.com/new-dimensions-of-ai-and-the-human-perspective
This year, National Cybersecurity Awareness Month (NCSAM) entered its 18th year. The initiative was started by the US Department of Homeland Security and the National Cyber Security Alliance to ensure that organizations and users take the necessary steps to enhance cybersecurity. It also ensures that people have all the resources they need to be safer and more secure online. Cybersecurity was already an issue, and with more and more people using the internet, hackers started getting into areas where they don’t belong, thus multiplying security risks. This is why, since its emergence in 2004, the cybersecurity month has included within its ambit several sectors, including small and medium-sized businesses, corporations, and education. NCSAM was formed to ensure that every individual stays safe and secure online. Ever since the pandemic changed our lives, billions of users have taken to the internet to get their work done. People have become more dependent on the internet as events, employment, and momentous occasions moved to virtual platforms. It’s a risky affair for people who do not know how to protect their networks. Each day, thousands of networks get compromised and people’s information is stolen because they do not know how to secure their web-equipped devices correctly. Desktop computers, laptops, tablets, and phones are all susceptible because they contain information that can be acquired by the wrong people. This called for enhanced security in the online digital world. According to the Federal Bureau of Investigation (FBI), 2020 saw a loss of more than USD 4 billion to cybercrime activities, 20% more than in 2019. After tracking consumers’ cybersecurity activities and behaviors since 2015, an international survey concluded that cyberthreats have reached new heights, pushing users to take essential steps to protect themselves.
The share of users who feel the internet is unsafe and have changed their online habits has risen from 58% in 2015 to 65% in 2021. Meanwhile, 59% said that data privacy issues have led them to change their habits, up from 52% in 2015. People using the web need to learn ways to make their online experience safe and enjoyable. The NCSA is doing its bit through pamphlets, seminars, and awareness months that help people understand rising cyberthreats. Theme: ‘Do your part. #BeCyberSmart’ This year, the slogan for National Cybersecurity Month is “Do your part. #BeCyberSmart.” It conveys the message that every individual and organization is equally responsible for internet safety while maintaining personal accountability. We need to take active steps to keep upgrading cybersecurity policies in the digitally growing environment. Throughout the month, the National Cyber Security Alliance (NCSA) and its partners have designed programs to raise awareness of specific aspects of cybersecurity. Each week of the month marks a different theme. - October 4 (Week 1): Be Cyber Smart Human lives have become dependent on technology. All the important personal and business data is stored on internet-connected platforms, turning it into a data treasure for attackers. October’s first week of Cybersecurity Month focused on best security practices and overall cyber hygiene to keep data safe. Utilizing multi-factor authentication, creating strong passwords, maintaining a backup of the data, and updating software on time are a few initial steps to start with ‘Do Your Part. #BeCyberSmart.’ - October 11 (Week 2): Fight the Phish! Since the pandemic, phishing attacks and scams have increased multifold. According to one report, phishing attacks have contributed to more than 80% of reported security incidents. Thus, the second week of Cybersecurity Month underscores the importance of remaining alert and more aware of emails, text messages, or chat boxes from unwanted sources.
It pushes users to verify suspicious emails, links, or attachments before clicking on them and to report any malicious activity.

- October 18 (Week 3): Career Awareness Week: Explore. Experience. Share
The third week of Cybersecurity Awareness Month highlighted Cybersecurity Career Awareness Week, led by the National Initiative for Cybersecurity Education (NICE). As the cybersecurity field is growing dynamically, this week-long campaign inspires everyone to explore cybersecurity careers and get a fair chance at them.

- October 25 (Week 4): Cybersecurity First
The fourth week is about keeping security a priority. For businesses, that means building security into products and processes. The week stresses the importance of making cybersecurity training part of employee onboarding and equipping staff with the tools required to keep the organization safe. At an individual level, it is essential to keep cybersecurity at the forefront of daily connected life. Research a product thoroughly before purchasing it, review security and privacy settings when setting up a new device or app, and change default passwords. In the end, cybersecurity should never be an afterthought.

Tips on how to handle cybersecurity threats

Cybersecurity Awareness Month aims to educate people about the concrete steps everyone can take to protect their devices, data, and identity. To that end, OCABR has listed simple proactive tips that can help organizations enhance cybersecurity.

- Protect it while you connect it – Computers, tablets, smartphones, and even washing machines are connected via a network, which opens the door to potential security breaches. Using up-to-date versions of the browser, security software, operating system, and antivirus can help protect devices against cyber threats.

- Set up a strong password – Users should avoid using the same password for more than one account.
The password should be a combination of lowercase letters, uppercase letters, numbers, and special characters so that it is difficult for hackers to guess.

- Be alert while posting on social media – Cybercriminals are on the lookout for user details commonly used as answers to security questions, such as a pet's name, a first school teacher, or a favorite place. Users should disable location sharing and limit their profile’s reach by applying strong security settings.

- Apply Multi-Factor Authentication (MFA) – MFA ensures that only the owner is authorized to access the protected account. It can be enabled using a trusted mobile device such as a smartphone, an authenticator app, or a security token.

- Verify apps – Most connected appliances, toys, and devices work through mobile applications. Mobile devices are full of apps, many of which run in the background and keep collecting information. Keep track of app permissions and apply the rule of least privilege, removing access to data an app does not need.

- Be cautious on public networks – When connecting to any public wireless hotspot, such as in hotels, restaurants, or airports, verify the provider’s exact name and login procedure to avoid connecting to a fraudulent look-alike network.

Cybersecurity Awareness Month is especially crucial for the US Government, as the country has suffered a series of cyberattacks and ransomware attacks that have strained its cybersecurity infrastructure. To maintain national security and resilience, US President Joe Biden has urged people, businesses, and institutions to focus more on cybersecurity and protection against cyberthreats. “Our Nation is under a constant and ever-increasing threat from malicious cyber actors.
Ransomware attacks have disrupted hospitals, schools, police departments, fuel pipelines, food suppliers, and small businesses, delaying essential services and putting the lives and livelihoods of Americans at risk. During Cybersecurity Awareness Month, I ask everyone to Do Your Part. Be Cyber Smart. All Americans can help increase awareness on cybersecurity best practices to reduce cyber risks,” Biden said. To learn more about cybersecurity and other related technology, visit our latest whitepapers on security here.
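The password guidance above can be sketched as a simple composition check. This is an illustrative example only; the 12-character minimum is our assumption, since the article does not specify a length:

```python
import string

def meets_password_guidance(password: str, min_length: int = 12) -> bool:
    """Check a password against the guidance above: a combination of
    lowercase letters, uppercase letters, numbers, and special characters."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_password_guidance("Tr0ub4dor&Xyz!"))  # True: all character classes present
print(meets_password_guidance("password123"))     # False: no uppercase or special character
```

A real deployment would add checks against breached-password lists, but the composition rule above captures the article's tip.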
GPAI said Monday the report developed by Climate Change AI and the Centre for AI & Climate classifies the recommendations into three categories: supporting the responsible use of AI for climate change mitigation and adaptation; reducing the negative impacts of AI on the climate; and building implementation, evaluation and governance capabilities. For the first category, themes include data and digital infrastructure, research and innovation funding and deployment and systems integration. The study offered several recommendations to advance the use of AI in mitigating climate change, including establishing data task forces in climate-critical sectors; facilitating data creation and open data standards in climate-critical industries; supporting cloud compute resources; and overseeing the development of data collection systems and digital twins for transport, energy and other physical infrastructure. “This report is urgently needed to help governments unlock the potential for AI in fighting climate change, in areas like energy, land use, and disaster response,” said Yoshua Bengio, co-lead of the GPAI Responsible AI Working Group. The report also highlighted several initiatives where AI is being used to help drive climate action: the UN Satellite Centre, National Grid ESO and Climate TRACE. GPAI said the document outlines bottlenecks that hamper the adoption of AI to address climate challenges and offers recommendations for governments on how to deal with such challenges. These include improving data ecosystems and increasing support for innovation, research and deployment. GovCon Wire will hold its Climate Resilience: Reducing Risk and Creating Opportunities Fireside Chat on Nov. 18. Sign up for this virtual event to hear from experts as they discuss success stories of climate resilience and adaptation efforts in the U.S. and abroad and describe future initiatives to enable communities to bring together diverse stakeholders to address vital issues.
Digital process automation (DPA) is the use of advanced technologies to execute complex business processes from beginning to end, with and without human participation. Researchers studying digital process automation have developed competing terms for this emerging field. Digital process automation technology—and the language used to describe it—has simply moved too fast to have established conventions, at least so far. For this reason, you may see DPA referred to by several other names.

DPA streamlines business processes using advanced technology, including Robotic Process Automation (RPA), Business Process Management (BPM), and Artificial Intelligence (AI). AI technologies involved include (but are not limited to) machine learning, computer vision, predictive analytics, and Natural Language Processing (NLP). This combination of technologies allows for automation of many use cases that were not previously possible; it also greatly reduces the amount of exception handling required. The goal of DPA is to free up human workers so they can complete higher-value tasks. DPA optimizes human contributions to core business goals while orchestrating people and automated systems into a seamless whole, with proven results (see examples below).

Sign up for a free, personalized demo of the Nividous DPA platform.

To understand how DPA platforms accomplish these goals, it helps to understand the technologies and how they work together to automate complex processes from beginning to end. Robotic process automation involves the use of “bots”—software trained to automate rules-based tasks through existing software applications. These tasks are often very repetitive and time-consuming, and would typically be completed by a human. Bots work much like people do, accessing user interfaces in a variety of digital systems—and thereby avoiding code-heavy custom integrations.
For instance, a bot may go through your email, identify invoices, copy relevant data from those invoices, and paste that data into accounting software and an ERP—all without accessing the back-end code of any of those systems. Having bots do these tasks frees up time for the people who were doing them. RPA bots automate individual activities within a process, not entire processes themselves.

Most business processes consist of strings of individual tasks, often including human participation or review. Business process management (BPM) is the practice of automating and orchestrating these complex end-to-end processes. You might use a BPM system to fully automate a multi-step business process, whether that’s tracking shipments or approving loans. Tasks can be assigned to an individual role or queue and tracked and reported on from end to end. A BPM system—like the Control Center on the Nividous platform—can also be used to orchestrate the workflow for a complete enterprise process that consists of tasks performed by humans and tasks performed by RPA bots, along with a range of other orchestration and reporting tasks. While BPM is related to DPA, DPA does not require the same level of human intervention BPM does.

Long story short: DPA leverages a combination of RPA, BPM, and AI. The latter continues to evolve, allowing DPA to help turn more forms of information into structured data easily. With DPA, organizations can address problems including high error rates, low customer satisfaction, poor employee engagement, and high turnover by improving efficiencies organization-wide. DPA has countless use cases—let’s take a closer look at three in particular.

Using human workers to manually isolate, capture, and transfer data between systems is unnecessarily time- and labor-intensive. With DPA, businesses of all kinds can automate data extraction to increase productivity and conserve resources. A good example of this is one specialty healthcare organization’s process automation.
Before the automation, ten team members were consistently occupied with manually extracting and reviewing patient data. This data was unstructured, contained in multiple formats, and high in volume. Manual data management invited errors and resulted in delayed claim filings. Using Nividous Bots with cognitive capabilities to automate data extraction, review, and claim submission end-to-end, the provider reduced manual work by 80% with DPA, resulting in a 65% increase in productivity and a 45% decrease in operational costs.

The accounts payable (AP) process involves multiple steps that are highly adaptable to automation. DPA can be used to automate a range of manual AP tasks, including touchless invoice processing, invoice data entry, two-way and three-way purchase order matching, and invoice/AP approval processes. In fact, manually processing one invoice can cost as much as $23. With automation, the process cost can be reduced to $4—a savings of over 80%. Small to mid-sized companies across industries (from consumer goods to media) can leverage DPA to significantly reduce manual work and enhance business performance. Case in point: when a premier accounting advisory firm implemented Nividous RPA Bots with AI capabilities, BPM, and embedded analytics, it improved its client invoice validation process turnaround time by 85% and reduced errors by 100%.

The underwriting process is most often characterized by rigorous due diligence. From financial and legal validations to operations checks, underwriting performed by human workers requires substantial time and energy. Using DPA, financial services companies and insurance providers can streamline and improve the process dramatically. Consider an insurance company that, prior to deploying intelligent automation, relied on 25 full-time workers to perform hundreds of validations manually in an error-prone and tedious process requiring several business days.
With RPA Bots and Smart Bots (the latter with Computer Vision-based Optical Character Recognition capabilities), Nividous DPA saved the company 6,000 hours of staff time per month and raised throughput by 75%, dramatically increasing business value.

By streamlining work and freeing staff from rote tasks, DPA has shown powerful benefits. The Nividous DPA platform can bring these benefits to your operation, whether the goal is to simply eliminate manual data entry or fully automate your accounts payable (or other) processes. Talk to us to see if DPA offers a solution for your workflow bottlenecks—or schedule a demo to see the Nividous DPA platform in action. Once you determine that DPA is right for you, you’ll probably need to prove the business value to fellow decision-makers. The Nividous Quick Start program is the ideal next step. Our team works with you to deploy a customized RPA bot in just three or four weeks for a fixed price. That first RPA bot proves the value of full digital process automation in terms that everyone can understand—an easily discernible return on investment.
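The rules-based extraction step described above—a bot pulling invoice fields out of unstructured text and routing failures for human review—can be sketched in a few lines. This is an illustrative example only; the field names and patterns are our assumptions, not the Nividous platform's actual API:

```python
import re

# Hypothetical patterns for two invoice fields found in free-form email text.
INVOICE_NO = re.compile(r"Invoice\s*#?\s*(\w[\w-]*)", re.IGNORECASE)
TOTAL = re.compile(r"Total\s*(?:Due)?:?\s*\$?([\d,]+\.\d{2})", re.IGNORECASE)

def extract_invoice_fields(email_body: str) -> dict:
    """Return a structured record, or raise so the item lands in a
    human-review queue (the 'exception handling' the article mentions)."""
    number = INVOICE_NO.search(email_body)
    total = TOTAL.search(email_body)
    if not (number and total):
        raise ValueError("Could not extract fields; route to human review")
    return {"invoice_no": number.group(1),
            "total": float(total.group(1).replace(",", ""))}

record = extract_invoice_fields(
    "Please find attached Invoice #INV-2041. Total due: $1,370.50."
)
print(record)  # {'invoice_no': 'INV-2041', 'total': 1370.5}
```

Real RPA platforms drive the extraction through trained models and UI automation rather than hand-written regexes, but the shape of the task—unstructured input in, structured record out, exceptions to a human queue—is the same.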
IBM researchers have created prototype computing chips that mirror the human brain, enabling them to not only collect and analyze information, but essentially learn from their mistakes, understand the data they’re seeing and react accordingly. The “cognitive computing” chips are able to recognize patterns and make predictions based on data, learn through experiences, find correlations among the information and remember outcomes, according to IBM officials. The chips represent a significant departure from how computers are traditionally programmed and operated, and open opportunities in a wide range of fields, they said. “Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture,” Dharmendra Modha, project leader for IBM Research, said in a statement. “These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.” IBM has been pushing efforts to drive more intelligence into an increasingly wider range of devices, and to create ways to more quickly and intelligently collect, analyze, process and respond to data. Those efforts were on public display in January when IBM’s “Watson” supercomputer beat human contestants on the game show “Jeopardy.” Watson, like many projects at IBM Research Labs, is focused on analytics, or the ability to process and analyze data to arrive at the most optimal decision. Watson was a revelation because of its ability to think in a humanlike fashion and answer questions posed in natural language (with puns, riddles and nuances, etc.) by quickly running through its vast database of information, making the necessary connections and returning not with a list of possible correct answers, but the correct answer itself. The cognitive computing chips echo those efforts.
IBM officials are calling the prototypes the company’s first neurosynaptic computing chips, which they said work in a fashion similar to the brain’s neurons and synapses. It’s done through advanced algorithms and silicon circuitry, they said. It’s through this mimicking of the brain’s functionality that the chips are expected to understand, learn, predict and find correlations, according to IBM. Digital silicon circuits create what IBM is calling the chips’ neurosynaptic cores, which include integrated memory (replicating synapses), computation (replicating neurons) and communication (replicating axons). With those capabilities, computing can move away from the current if-then programming scenario and toward one where computers dynamically react, learn and problem-solve on the go. The two working prototypes offer 45-nanometer SOI-CMOS cores that contain 256 neurons. One core contains 262,144 programmable synapses while the other holds 65,536 learning synapses. The chips are undergoing testing and have worked with simple applications such as navigation, machine vision, pattern recognition, associative memory and classification. The effort is getting $21 million in new funding through DARPA (the Defense Advanced Research Projects Agency) for phase 2 of what IBM is calling the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project. The project’s goal is to create a computing system that not only collects and analyzes complex information gathered simultaneously from multiple sensors, but can dynamically rewire itself as it goes, and to do this in a compact, energy-efficient form factor. IBM officials see countless applications for cognitive computing systems. In one, a system used to monitor the world’s water supply (collecting and analyzing such data as temperature, pressure, wave height, acoustics and ocean tides) could determine the threat of a tsunami and decide to issue a warning based on its findings.
Another cognitive system could monitor sights, smells, texture and temperatures to warn grocers of bad or contaminated produce. “Imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens or imagine cognitive coprocessors that turn servers, laptops, tablets and phones into machines that can interact better with their environments,” IBM’s Modha said.
While technology has created boundless new opportunities for learning, it has also produced ever-growing data security challenges. In years past, cyber defensive strategies were confined to internal environments. The traditional perimeter was secured using firewalls, network rules, endpoint detection, intrusion detection and prevention, and education of the user base. Fast forward to today, and that’s no longer the case. Learning has extended well beyond internal classroom walls. The internet and proliferation of devices with access to it have opened the door to a world of anywhere, anytime learning for students, teachers, and staff. And through that open door, these cloud-based tools, laptops, smartphones, and countless other devices have wiped away the traditional perimeter wall and the trusted security it once provided.

Because the learning ecosystem now extends well into the proverbial cloud, the number of potential targets for cybercriminals is huge— according to Microsoft Security Intelligence, Education is by far the #1 targeted industry with a staggering 80% of all reported enterprise malware encounters in the last 30 days. Securing these digital learning environments requires K-12 to fundamentally change its approach to cybersecurity. Without the traditional perimeter to rely on, K-12 districts must shift to a zero trust approach that assumes all users, endpoints, and resources are untrusted and require verification, and the key to this approach is digital identity management.

What is Zero Trust?

While the technology landscape is ever-changing, a user’s digital identity is the one factor that remains constant. Each user in your district’s community is represented by a digital identity. From principals, to kindergarten students, and even parents, these identities are at the core of your network and resources. Regardless of user type, all are dependent on access to assets and tools within your digital ecosystem.
“Zero trust,” a term originally coined by Forrester, describes a security model in which no one is assumed to be trusted. “Times have changed. You can't think about trusted and untrusted users," explained John Kindervag, who was a Forrester analyst at the time the model was developed. The zero trust model doesn’t distinguish between internal and external users or devices. Instead of “trust, but verify,” the zero trust approach says, “Verify everything, trust nothing.” This is accomplished by only delivering applications and data to authenticated and authorized users and devices. Thus, implementing zero trust in K-12 requires authentication across the entire digital ecosystem that strikes the right balance between security and productivity— without inhibiting fast access to learning and administrative tools.

Why IAM is the Key to Implementing Zero Trust in K12

While multi-factor authentication (MFA) is often associated with zero trust, MFA alone is not enough to achieve this needed cybersecurity posture— you have to start at the authentication source: a user’s digital identity. And that’s where Identity and Access Management (IAM) comes in: it’s the foundation that spans the entire digital ecosystem and enables the granularity required to address diverse access and security needs. Unlike the typical corporate structure where employees are interviewed and selected, schools receive students and work to embrace their strengths and adapt to their challenges. Therefore, IAM in Education must address the uniqueness of a K-12 environment that includes non-tech savvy personnel, highly technical experts, administrators, teachers, substitutes, special needs students, parents, and more. While each of these users requires some level of access to school resources and tools, not every situation requires the same level of authentication. For example, a kindergarten student accesses less-sensitive information than someone in faculty, administration, or IT.
These younger students also might not have the same abilities as older students and adults. For these students, pictograph authentication combined with a QR code badge might be an ideal and secure alternative to other authentication methods. On the same note, while a mobile app authenticator is a cost-effective option for organizations, school districts must consider students who lack access to smartphones or teachers who push back on downloading an app on their personal device. Hardware-driven methods are another option, but their associated cost likely makes them prohibitive at the scale of a K-12 user population. The bottom line is, in Education, the mandate is to educate students. If security protocols interrupt that mandate, then adoption cannot be enforced. Therefore, in K-12, MFA cannot be applied with a “one size fits all” approach that broadly applies across all users. Authentication must be flexible enough to tailor the login experience to each user’s unique needs.

Further, cybersecurity should never be a hurdle to learning that inhibits productivity or access to educational resources. An education-centric IAM platform overcomes this challenge with identity-driven policy enforcement that enables flexible authentication. Granular policies can be defined to tailor authentication based on individual needs, abilities, and risk-level and then enforced across the entire digital ecosystem. By using IAM as a foundation for enforcing zero trust, not only is security enhanced, but the platform serves as an enabler for educators that reduces the chances of lost learning time, delays, account lock-out, and data leakage.

Ready to Employ Zero Trust in Your Cybersecurity Strategy?

The foundation for zero trust starts with putting education-centric IAM at the core of district security.
A zero trust approach ensures security is consistently enforced across the digital ecosystem with the IAM platform acting as the new security perimeter for school digital environments— bringing together users, their devices, the network, and the applications relied upon each day— and authorizing each instance of access based on customizable policy. Leveraging an identity-centered approach to zero trust ensures users have streamlined access to learning and administrative resources, while striking the right balance between productivity and security.
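The identity-driven, role-tailored policy enforcement described above can be caricatured in a few lines. This is a conceptual sketch only; the roles, method names, and rules are invented for illustration and do not reflect any specific IAM product's configuration:

```python
# Hypothetical per-role authentication policies: younger students get
# age-appropriate methods, while higher-risk roles get stronger factors.
AUTH_POLICIES = {
    "k5_student": {"pictograph", "qr_badge"},
    "student":    {"password", "qr_badge"},
    "teacher":    {"password", "authenticator_app"},
    "it_admin":   {"password", "authenticator_app", "security_token"},
}

def allowed_methods(role: str, accessing_sensitive_data: bool) -> set:
    """Return the authentication methods this identity may use right now."""
    methods = AUTH_POLICIES.get(role, set())  # unknown identity: trust nothing
    if accessing_sensitive_data:
        # Zero trust: sensitive resources demand a stronger factor.
        methods = methods & {"authenticator_app", "security_token"}
    return methods

print(sorted(allowed_methods("k5_student", False)))  # ['pictograph', 'qr_badge']
print(sorted(allowed_methods("teacher", True)))      # ['authenticator_app']
```

The point of the sketch is the shape of the decision: the digital identity, not the network location, drives which authentication experience each user sees.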
Multilayer switches can be used for a number of different tasks. We can use them for switching, routing or a combination of both. Cisco switches use the TCAM (Ternary Content Addressable Memory) to store layer 2 and 3 information for fast lookups. If you have no idea what TCAM is about, you might want to read my lesson about CEF before you continue.

SDM (Switching Database Manager) is used on Cisco Catalyst switches to manage the memory usage of the TCAM. For example, a switch that is only used for switching won’t require any memory to store IPv4 routing information. On the other hand, a switch that is only used as a router won’t need much memory to store MAC addresses. SDM offers a number of templates that we can use on our switch. Here’s an example of a Cisco Catalyst 3560 switch:

SW1#show sdm prefer
 The current template is "desktop default" template.
 The selected template optimizes the resources in the switch
 to support this level of features for 8 routed interfaces
 and 1024 VLANs.

  number of unicast mac addresses:                  6K
  number of IPv4 IGMP groups + multicast routes:    1K
  number of IPv4 unicast routes:                    8K
    number of directly-connected IPv4 hosts:        6K
    number of indirect IPv4 routes:                 2K
  number of IPv4 policy based routing aces:         0
  number of IPv4/MAC qos aces:                      0.5K
  number of IPv4/MAC security aces:                 1K

Above you can see that the current template is “desktop default” and you can see how much memory it reserves for the different items. Here’s an example of the other templates:

SW1#show sdm prefer ?
  access              Access bias
  default             Default bias
  dual-ipv4-and-ipv6  Support both IPv4 and IPv6
  ipe                 IPe bias
  routing             Unicast bias
  vlan                VLAN bias
  |                   Output modifiers
  <cr>

Here are the SDM templates for this switch. We can change the template with the sdm prefer command:
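For example, switching the TCAM allocation to the routing-optimized template might look like this (an illustrative sketch; exact prompts and messages vary by platform and IOS version, and note that the new template only takes effect after a reload):

```
SW1(config)#sdm prefer routing
Changes to the running SDM preferences have been stored, but cannot take effect
until the next reload. Use 'show sdm prefer' to see what SDM preference is
currently active.
SW1(config)#end
SW1#reload
```

After the reload, show sdm prefer should report the routing template as active.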
A well-known study from Gartner Group estimates that 43 percent of businesses who experience a major disaster fail within five years and 29 percent fail within the first two to four months. As a direct result of this, business continuance represents one of the top IT data center initiatives for companies of all sizes, and especially for large enterprises. Every business continuance plan is based on the creation of one or more secondary data centers, where storage and applications are replicated in such a way that if a disaster occurs, the equipment hosted in the secondary site will be able to take over and keep the business up and running.

Common sense suggests that the secondary data center should be located far enough away from the primary one to reduce the likelihood of all IT operations being affected by the same disaster. But longer distances imply higher latency, which is not well tolerated by some applications. In fact, this is particularly true for synchronous replication, while it becomes less relevant in the case of asynchronous replication. Therefore, companies who need synchronous replication to guarantee that primary and remote data centers constantly have the same information will have to deploy their secondary site within a typical radius of about 100km from the primary site and use synchronous replication between them. At these distances it is possible to choose amongst many different transport technologies, including of course IP, but also CWDM (Coarse Wavelength Division Multiplexing) or DWDM (Dense WDM), which can be more expensive, but provide very high performance and excellent reliability.

Companies who do not necessarily require synchronous replication can locate their secondary data center farther away (more than 100km from the primary site) and will have to consider transport technologies that are capable of spanning those distances, such as IP.
The technology that leverages IP for the interconnection of remote sites is called FCIP (Fibre Channel over IP) and allows Fibre Channel fabrics to be transparently interconnected over any IP infrastructure.

Fibre Channel Over IP

FCIP is a protocol specification developed by the Internet Engineering Task Force (IETF) that allows a device to transparently tunnel Fibre Channel frames over an IP network. The operation of the FCIP protocol is very similar to any other tunneling mechanism. There are two edge devices that interface between the local Fibre Channel SAN in each of the data centers and the IP network. Each of these devices takes Fibre Channel frames from the SAN and encapsulates them within IP packets that can be transferred over an IP network in a reliable manner by using TCP (Transmission Control Protocol) as the transport layer protocol. At the remote site, another FCIP device receives incoming FCIP traffic, strips off the additional headers and places the original Fibre Channel frames back onto the SAN.

This mode of operation represents both the strength and the weakness of the protocol, as it makes FCIP completely transparent to the Fibre Channel SANs that are being interconnected. This implies that most, if not all, of the management procedures that a SAN administrator is used to performing on a Fibre Channel SAN within a single data center are easily extended to the interconnected environments, but it also means that the two Fibre Channel SANs have effectively been “bridged,” thus creating one large, geographically dispersed SAN. There are two problems associated with bridging Fibre Channel SANs across distance. The first one is the fact that the stability of the extended SAN is now dependent on the stability of the link that connects the two environments.
The second issue is related to having one unified SAN across the two (or more) data centers, which means that any fault on either site gets propagated to the other and may cause disruption to the entire environment. The main purpose of building a secondary data center is to protect the data and the applications that are hosted at the primary site, but the probability of having a service interruption increases as soon as any connection between the primary and secondary locations is established. The solution to this paradox exists, and it’s based on the combined use of FCIP and two advanced Fibre Channel features called Virtual SANs (VSANs) and InterVSAN Routing (IVR).

The Role of VSANs

Very much like Ethernet, Fibre Channel is a layer-2 protocol without any hierarchical network domain concept that would serve to isolate and localise control protocols and messaging within a given region of the network. Instead, like Ethernet, Fibre Channel maintains a set of control protocols that are fabric-wide in scope, such as zoning or state change notifications. Local control protocol events can potentially result in disruptions that span the full extent of the fabric. Obviously this is as true for SANs that are fully confined within a data center as it is for extended SANs, built using any kind of transport. In the case of an extended SAN, the consequence of a disruption on either side of the link equally affects both sides.

In Ethernet, the problem of segmenting large physical domains into multiple logical infrastructures is solved by VLANs (Virtual LANs), whose key attribute is that any disruptive fault on one VLAN does not affect any of the others; this is achieved by having a separate control plane per VLAN. In Fibre Channel the equivalent of VLANs is represented by VSANs. Now part of the ANSI T.11 standard, VSANs behave in a very similar way to VLANs and provide exactly the same benefits in terms of security, scalability and fault isolation.
When multiple VSANs need to be carried over one ISL (Inter-Switch Link), each frame is tagged with explicit VSAN membership information so that the receiving switch can take the appropriate forwarding decision, taking the VSAN tag into account. Of course, this VSAN tag is never exposed to end devices such as HBAs (Host Bus Adapters) or storage array interfaces. A switch-to-switch link that supports VSAN tagging is called an EISL (Enhanced ISL). It goes without saying that ISLs and EISLs can also be extended over long distances, possibly using FCIP. VSANs alone do not yet solve the problem of interconnecting two data centers while keeping them isolated from a control plane perspective. VSANs make it possible to isolate two or more data centers by using different sets of VSANs, but this also prevents data traffic from flowing between the sites. In the Ethernet world, the problem of enabling nodes belonging to different VLANs to communicate with each other is solved by IP. By leveraging a hierarchical addressing scheme and a set of routing techniques, IP, a layer-3 protocol, lets data traffic cross the boundaries of VLANs without merging their control planes. Unfortunately, there is no IP in SANs, nor any kind of layer-3 protocol at all. The common upper-layer protocol for storage networking is SCSI (Small Computer Systems Interface), which was designed on the basis of completely different assumptions than IP. When SCSI was first conceived, nobody could have imagined that the distances spanned by the protocol would ever be as long as those of an FCIP link. SCSI was originally meant to be a bus protocol for connecting peripherals, including storage devices, to a computer. As such, SCSI's architects never thought about including a proper layer-3 protocol, but limited themselves to designing a local I/O (Input/Output) protocol.
Since nobody would even dream of changing SCSI today, the solution to SAN internetworking has been built within the fabric and goes under the name of Inter-VSAN Routing (IVR). Using IVR, a set of policies can be configured on fabric devices to selectively allow nodes belonging to different VSANs to talk to each other. In this way, IVR achieves what IP does in the IP/Ethernet world. By combining VSANs with IVR, it becomes possible to join primary and secondary data centers and let relevant traffic flow between the two sites, while still preserving the control plane isolation needed to guarantee that a fault at either site, or along the connecting link, will not adversely affect the operation of the entire IT infrastructure. Building a reliable, secure and scalable business continuance solution over IP is possible by astutely combining FCIP, VSANs and IVR. This solution is architecturally very powerful, as there is no restriction on the transport technology that can be used for the long-distance connection. IVR relies strictly on basic Fibre Channel services and can therefore leverage any transport option available to Fibre Channel, such as Fibre Channel itself, CWDM or DWDM, SDH/SONET and, of course, IP.
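The interplay between VSAN membership and IVR policy described above can be illustrated with a toy Python model. Everything here is invented for the sketch: the tag representation, the policy structure and the WWN names are stand-ins, and real IVR is configured as zones of port WWNs on the fabric switches, not as Python tuples.

```python
# Toy model: each device lives in a VSAN, and an IVR policy decides which
# cross-VSAN flows are permitted. All names are illustrative.

# IVR policy: one host at the primary site (VSAN 10) may reach one
# replication target at the secondary site (VSAN 20). Nothing else crosses.
IVR_POLICY = {(("host-pwwn-1", 10), ("array-pwwn-9", 20))}

def forwarding_allowed(src, dst) -> bool:
    """Same-VSAN traffic always flows; cross-VSAN traffic needs an IVR entry."""
    if src[1] == dst[1]:                      # same VSAN: ordinary fabric
        return True
    return (src, dst) in IVR_POLICY or (dst, src) in IVR_POLICY

# Replication traffic crosses the VSAN boundary...
assert forwarding_allowed(("host-pwwn-1", 10), ("array-pwwn-9", 20))
# ...ordinary same-VSAN traffic is unaffected...
assert forwarding_allowed(("host-pwwn-2", 10), ("array-pwwn-3", 10))
# ...and everything else stays isolated, so control-plane faults in one
# VSAN have no path into the other.
assert not forwarding_allowed(("host-pwwn-2", 10), ("array-pwwn-9", 20))
```

The last assertion is the point of the whole design: data that matters flows between the sites, while the control planes, and their failures, stay separate.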
<urn:uuid:345fa9c3-da61-427c-8780-ab047fc48a44>
CC-MAIN-2022-40
https://it-observer.com/building-true-business-continuance-solutions-over-ip.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00287.warc.gz
en
0.945198
1,786
2.671875
3
Community Wi-Fi – A Primer

When walking through a typical residential neighborhood today, almost all Wi-Fi access points in range are locked, preventing access to anyone but the owner. Although subscribers pay for a certain high-speed broadband connection, bandwidth caps are not reached most of the time. An opportunity exists to optimize bandwidth resources for the benefit of the greater community, especially since cellular data networks are overloaded by an ever-increasing number of users and resource-intensive applications such as Google Maps, Facebook, etc. Now imagine walking through the same residential neighborhood and having ubiquitous Wi-Fi connectivity from your neighbors' access points. Instead of all access points being locked to visitors and passers-by, everybody is able to connect. Phones (or any other Wi-Fi clients) connect seamlessly from the coverage area of one house to another, while neighbors still maintain the security and privacy of their locked access points. This scenario depicts a Community Wi-Fi network, and it's more real than you might imagine.

What is Community Wi-Fi?

Community Wi-Fi networks allow service providers to leverage unused capacity on existing Wi-Fi infrastructure to offer network access to visitors and passers-by. An operator can also use this excess capacity to offer services to retail and roaming-partner operators' subscribers. Residential subscribers accessing the network from inside their homes have prioritized access to the Wi-Fi resources. The residential Wi-Fi infrastructure is configured to provide a secure and independent access channel that retains service quality, safety, and privacy for both residential and visitor customers. Roaming users are only allowed to use the Wi-Fi network capacity that is not currently used by the subscriber at home.
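The capacity-sharing and routing rules just described can be sketched in a few lines of Python. The SSID names and route labels below are invented for illustration; real deployments use vendor-specific mechanisms (for example, tunnels back to the operator core) rather than these toy strings.

```python
# Toy model of the dual-SSID access point. The home subscriber's traffic
# takes the direct path; community traffic is tunneled to the operator
# core, keeping the two networks separated. All names are illustrative.
ROUTES = {
    "HomeNet-Private": "direct:subscriber-uplink",
    "Operator-Community": "tunnel:operator-core",
}

def route_for(ssid: str) -> str:
    return ROUTES[ssid]

def community_share(link_mbps: float, private_demand_mbps: float) -> float:
    """Visitors may only use capacity the subscriber at home is not using."""
    return max(0.0, link_mbps - private_demand_mbps)

assert route_for("Operator-Community") == "tunnel:operator-core"
assert community_share(100.0, 30.0) == 70.0   # idle capacity is shared
assert community_share(100.0, 100.0) == 0.0   # home traffic always wins
```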
Basically, the wireless Access Point (AP) in the home provides two networks: a private one for the homeowner/subscriber, and a community network for on-the-go subscribers passing through the neighborhood. While the user is at home, all of their Wi-Fi devices (smartphone, tablet, etc.) should automatically connect to the private network. When the user travels outside their own AP's coverage area and passes in range of another AP operated by the same service provider, their client devices will be able to connect to the public network.

How does this benefit me?

A subscriber's access point is made available to other on-the-go subscribers, adding to the number of access points within a Community Wi-Fi network. In return, the subscriber is able to tap into other shared access points within the Community Wi-Fi network. This gives access to the high bandwidth and speeds offered by cable Wi-Fi networks while on the go, instead of having to use more costly cellular data networks. The subscriber becomes part of a community of shared Wi-Fi networks.

What about privacy and security concerns?

Traffic on the Public (Community Wi-Fi) network routes differently from that on the Private (subscriber home) network. At no point does a user on the Public network have access to any other device on the Public or Private network. The user of the Private network similarly does not have access to the traffic or devices on the Public network. All traffic on the Public network is sent through a secure tunnel to the core network before it is routed to the Internet, ensuring that traffic is separated between the two networks.

How far in the future is Community Wi-Fi?

It is actually already happening. A number of operators are planning or actively deploying community Wi-Fi networks. By year-end, Comcast Cable's Xfinity WiFi network is expected to reach eight million hotspots. Community Wi-Fi deployments in the US alone are expected to reach a million APs this year.
Many European operators already have large active community Wi-Fi deployments and are planning to expand.

Great! Where can I get more information?

CableLabs is working with our member companies and vendors to help solve challenges associated with the implementation of Community Wi-Fi. We are also working with the Wireless Broadband Alliance (WBA) on a white paper on Community Wi-Fi to be published in fall 2014.

By Vivek Ganti
<urn:uuid:4aa0bd93-9424-458d-b9bf-1fe1718237da>
CC-MAIN-2022-40
https://www.cablelabs.com/blog/community-wi-fi-a-primer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00287.warc.gz
en
0.942391
870
2.59375
3
The Raspberry Pi is hardware, a single-board computer with an ARM-compatible CPU, while Kubernetes is software for running and managing containers. They're both popular: the Raspberry Pi hit the 30-million-units-shipped milestone near the end of 2019, and apparently has seen a new sales surge this year. Kubernetes adoption is growing by leaps and bounds, too. The nature of their popularity differs, though. The Raspberry Pi has become commonly associated with computer science education, especially in K-12 settings, since its relative affordability makes these devices accessible. In general, the Raspberry Pi lends itself to the kinds of tinkering and project-based work that cultivate the curiosity and learning that draw many people to technology in the first place. [ Want Raspberry Pi projects to try with kids? Check out these five cool project ideas. ] Kubernetes, meanwhile, has become synonymous with containerization. In particular, it's a top choice for container orchestration: how teams automate deployments, scheduling, scaling, and other critical work involved in running containers. (You probably won't see Kubernetes become a go-to platform for teaching children computing principles, though.) At first blush, pairing these two technologies together – low-cost personal computing hardware and a powerful open source container orchestration platform – might seem unusual. But the Raspberry Pi's suitability for learning actually extends well beyond the classroom or home.

Raspberry Pi + Kubernetes = Easy experimentation

So how do these technologies go together? The crux of their relationship is experimentation – and in particular low-cost experimentation, in which failure is just fine as someone travels their individual learning curve and develops experience. "Raspberry Pis can unlock a lot of knowledge and creativity around hardware, and the membrane of software to the hardware," says Ravi Lachhman, evangelist at Harness.
"Dabbling in items that could potentially roast or brick your expensive laptop can be tried on a Raspberry Pi device without worry, since they are relatively inexpensive." In particular, Lachhman notes that a group of Raspberry Pi devices can help someone stretch their distributed-systems wings, and can be used to simulate the nodes (or machines) that host containerized workloads in a Kubernetes cluster. While a tool like Minikube is also great for learning and other purposes, it runs a single-node cluster on your laptop. [ Related read: What's the difference between a pod, a cluster, and a container? ] "Each Raspberry Pi is effectively its own node with its own operating system," Lachhman says. "Because Raspberry Pis come with connectivity that allows them to cluster together, you could create a Beowulf cluster representing several nodes at your disposal. If you just had your laptop, you would have to run multiple VMs to achieve the same thing, which can tax your machine." Indeed, this means you can build experience with cloud computing, hybrid cloud, and Kubernetes from home (or anywhere, really).

How to build a Kubernetes cluster with Raspberry Pi devices

As Red Hat SRE Chris Collins wrote recently over at our sibling site, Opensource.com: "Nothing says 'cloud' quite like Kubernetes, and nothing screams 'cluster me!' quite like Raspberry Pis. Running a local Kubernetes cluster on cheap Raspberry Pi hardware is a great way to gain experience managing and developing on a true cloud technology giant." Be sure to check out Collins' step-by-step guide to installing a Kubernetes cluster on three or more Raspberry Pi machines. For the imaginative and ambitious, Lachhman notes that once you're up and running, you could even start to scale your hardware by adding new Raspberry Pis as additional worker nodes in your cluster.
"You might want to dedicate a node to act as a load balancer if you start to get really large clusters or network-heavy workloads you are placing on your set of Raspberry Pis," Lachhman says. Whether you run a cluster of several or dozens of machines, though, the goal is the same: Kubernetes experimentation and learning. It's not just about getting your hands dirty with Kubernetes (and how a cluster runs across distributed hardware), though. "When it comes to deciding what you do with your own little supercomputer with several or several dozen Raspberry Pis, the world is your oyster," Lachhman says. [ Kubernetes 101: An introduction to containers, Kubernetes, and OpenShift: Watch the on-demand Kubernetes 101 webinar. ]

What can a Raspberry Pi cluster be used for? Think testing and simulation, for starters

Both Lachhman and Raghu Kishore Vempati, director of technology, research, and innovation at Altran, point to the simulation and testing possibilities of running a Kubernetes cluster on a fleet of several or more Raspberry Pis. You can test for resiliency, performance, and scalability under various conditions, for example. Lachhman likens the setup to "the art of the possible": "Testing workloads [when] you are unsure of scalability are great to place on your Raspberry Pi-powered Kubernetes cluster," Lachhman says. "Does adding more replicas impact application performance?" Similarly, if you want to field-test latency, Lachhman notes that you can try running everything over your home Wi-Fi setup instead of wired connectivity in an office. And the possibilities continue from there. "You can answer some old compute/mathematical problems such as the Infinite Monkey Theorem, where you can generate any text with random characters/numbers," Lachhman says.
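The Infinite Monkey-style experiment Lachhman mentions is easy to express as a self-contained workload; each pod replica in the cluster could, for instance, run a differently seeded copy of a random-text search like this sketch (the function and parameters are invented for illustration):

```python
import random
import string

def monkey_search(target: str, seed: int = 0, max_attempts: int = 2_000_000):
    """Randomly generate strings until one matches `target`.

    Each Kubernetes replica could run this with a different seed, so more
    replicas (more Raspberry Pi nodes) search more of the space in parallel.
    Returns the attempt number on success, or None if the budget runs out.
    """
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    for attempt in range(1, max_attempts + 1):
        guess = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if guess == target:
            return attempt
    return None

# A 2-letter target is found quickly; each added letter multiplies the
# expected work by 26 -- a natural scalability experiment for a cluster.
assert monkey_search("pi") is not None
```

Watching how the time-to-solution changes as you add replicas (and nodes) is exactly the kind of low-stakes scalability question such a cluster is good for.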
"Don't be afraid to experiment – even pull Raspberry Pis from your cluster to see how Kubernetes handles failure." Vempati concurs that the fundamental value of running Kubernetes on Raspberry Pis is the learning experience, including the infrastructure setup (meaning setting up your Raspberry Pi devices and installing a cluster on them). That latter part isn't something you necessarily get from other learning tools. "For practitioners who would like to get hands-on experience to experiment with the setup of a Kubernetes cluster from scratch on hardware without incurring significant cost, Raspberry Pis are a viable solution," Vempati says. From the simulation and testing standpoint, Vempati notes there are lots of possibilities for IoT scenarios and for how containerized applications could run on IoT devices. "Kubernetes has been a proven container orchestration solution for containerized applications hosted on physical and virtualized platforms, both on-premises and cloud," Vempati says. "It is possible to mirror the same capability on a cluster of IoT devices simulated using Raspberry Pis. An array of Raspberry Pi devices can be provisioned and Kubernetes can be installed on them to run containerized applications." The same principle of simulating IoT environments and testing for things like performance and latency applies here. Vempati also notes another possible category of testing: simulating the use of ARM processors in a data center. "There is an increasing interest across the industry in having ARM processors in data centers along with the x64 processors, with a view to reduce infrastructure costs," Vempati says. "The 'performance-per-watt' of ARM-based CPUs is relatively lower compared to legacy processor architectures." Because Kubernetes can run containerized applications anywhere, including in your own private cloud or data center, this offers another simulation scenario.
"Given Raspberry Pis have ARM-compatible CPUs, they can be used as a first step by users to verify scalability and performance of containerized applications on ARM-based processors," Vempati says.
<urn:uuid:814d7878-e043-4467-a7b6-3f8a73fbafb4>
CC-MAIN-2022-40
https://enterprisersproject.com/article/2020/9/how-raspberry-pi-and-kubernetes-go-together
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00487.warc.gz
en
0.908969
1,732
2.578125
3
As industrial applications drive the next major growth phase of the Internet of Things (IoT), there are growing concerns about how the data that flows through its ecosystems is created, validated, protected, transported, shared, and analyzed. Cryptography is the foundation for addressing these issues, but many vendors concentrate on building market share by paring costs rather than implementing security. As a result, many IoT devices are inadequately protected from hacking, which threatens the security of the IoT ecosystem and of other networks to which it connects. Internet security, privacy, and authentication are not new issues, but IoT presents unique security challenges. First, many IoT devices have limited processing power and memory, yet robust cryptography involves substantial computational power and needs memory to store temporary or permanent encryption keys. One solution is to give every IoT device a unique and unclonable identifier by deriving it from the microscopic physical differences between silicon chips caused by manufacturing process variations across a wafer. Such an identifier can substitute for stored encryption keys, saving memory. IoT devices with unique identifiers can communicate securely with cloud-based servers that carry out data analysis and decision-making within IoT ecosystems. However, it is critical that devices and servers can authenticate that they are communicating with legitimate members of their ecosystem. This is usually handled using digital signatures and public key infrastructure.

[Figure: A central server protects the IoT network. Credit: Dr. Charles Grover.]

Digital signatures can also protect against denial-of-service (DoS) attacks, in which malicious actors prevent devices from working properly by creating a fake server to intercept signals sent by a device, or by overloading the real server with requests issued from fake devices. Since IoT networks contain many devices, they are vulnerable to DoS attacks.
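Returning to the unclonable-identifier idea above: conceptually, a device derives its key on demand from a silicon-specific response instead of storing it. The sketch below is a toy; real responses of this kind are noisy and need error correction before they can feed a key derivation function, and every value here is invented.

```python
import hashlib

def derive_device_key(chip_response: bytes, context: bytes = b"iot-device-id") -> bytes:
    """Derive a per-device key from a chip-unique response.

    A real device would read `chip_response` from its silicon each time it
    needs the key, so no key material ever sits in persistent storage.
    """
    return hashlib.sha256(context + chip_response).digest()

# Two chips with microscopically different silicon give different responses...
key_a = derive_device_key(b"\x01\x07\x3f\x22")
key_b = derive_device_key(b"\x01\x07\x3f\x23")
assert key_a != key_b                                    # ...so unique identities,
assert key_a == derive_device_key(b"\x01\x07\x3f\x22")   # reproducible on-device.
```

The memory saving the article mentions comes from the first property: the key exists only transiently, re-derived whenever it is needed.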
IoT devices may be widely distributed and vulnerable to physical attacks. These can include side-channel attacks, which try to analyze how a security algorithm is performed in order to learn secret information about encryption keys. For example, a timing attack may exploit the fact that a key-generation algorithm can take varying amounts of time to run depending on the key value generated. Perhaps it takes longer to write a 1 into memory than a 0, so analyzing how long it takes to store a key provides insight into the relative population of 0s and 1s within it. Power analysis is another option: if it takes more energy to write a 1 into memory than a 0, this too may yield secret information.

IoT fragmentation makes securing an ecosystem tougher. The silicon vendor building the chips must have access to and manage the information embedded in each chip so it can find and access the intended IoT network. The device maker that uses these chips must ensure that it is properly implementing cryptographic tasks. IoT hub manufacturers and integrators must provide software for managing, collating, and parsing the data obtained by devices, and these providers may also be responsible for managing authentication. Many parties have access to an IoT ecosystem and the data flowing through it, but none takes overall responsibility for security. Hiring third-party specialists to do so will not work if the scope of the task they are given isn't clearly defined.

Much current cryptographic work is driven by concerns that upcoming quantum computers may undermine or invalidate today's approaches. The resulting post-quantum cryptography (PQC) strategies may also have valuable properties for enabling IoT security. In the quantum computing era, it is a challenge to handle encryption keys and digital signatures that are long enough to offer good security on devices with limited memory, power, and communications resources. Digital signatures are vital for authentication between devices and servers.
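The side-channel idea described above can be made concrete with the classic example of comparing a secret against an attacker-supplied guess. This generic Python illustration is not from the article; `hmac.compare_digest` is the standard-library constant-time helper:

```python
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: running time grows with the number of matching
    prefix bytes, which is exactly the signal a timing attack measures to
    recover the secret one byte at a time."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """Examines every byte regardless of mismatches, leaking no timing signal."""
    return hmac.compare_digest(secret, guess)

secret = b"k3y-material"
assert leaky_compare(secret, b"k3y-material")
assert not leaky_compare(secret, b"k3y-materiaX")
assert constant_time_compare(secret, secret)
assert not constant_time_compare(secret, b"x" * len(secret))
```

Both functions return the same answers; the difference an attacker cares about is *how long* each one takes on a near-miss, which is why constant-time primitives matter on exposed IoT hardware.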
America's National Institute of Standards and Technology (NIST) is exploring technologies to replace today's approaches to digital signatures. A comparison with ECDSA, a standard approach used today, reveals the issue: to transmit a signature with 128 bits of security, ECDSA must send a public key of 256 bits and a signature of around 576 bits. The most compact PQC digital-signature strategy remaining in the NIST analysis uses an 896-byte public key and a 690-byte signature. In other words, the PQC implementation of a digital signature needs about 15 times more bandwidth than ECDSA, as well as more computation and more memory to store cryptographic keys.

Other PQC digital-signature schemes may emerge that use the same bandwidth as ECDSA. If not, IoT devices will have to rely on other ways to authenticate with servers, such as greater use of key-encapsulation mechanisms and pre-shared keys. NIST is also looking for PQC algorithms that are inherently less subject to physical attacks than those used today. IoT security practitioners need to be aware of standardization processes and work out which PQC strategies will work within the constrained resources of IoT devices. NIST is exploring how robust the new algorithms will be to side-channel attacks and the issues relating to the bandwidth requirements of digital signatures in a post-quantum computing world. Today's signature schemes are too unwieldy, but ongoing work should help secure the IoT in the upcoming age of quantum computing.
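The "about 15 times" figure quoted above can be checked directly from the numbers given:

```python
# ECDSA at the 128-bit security level: 256-bit public key + ~576-bit signature.
ecdsa_bytes = 256 // 8 + 576 // 8          # 32 + 72 = 104 bytes

# The compact PQC scheme cited above: 896-byte public key + 690-byte signature.
pqc_bytes = 896 + 690                      # 1586 bytes

assert ecdsa_bytes == 104
assert round(pqc_bytes / ecdsa_bytes) == 15   # ~15x more bandwidth, as stated
```

For a battery-powered sensor that radios a signed reading every few seconds, that 15x multiplier lands directly on airtime and energy budgets, which is why the bandwidth question dominates the IoT discussion here.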
<urn:uuid:1f7b6518-af6a-4a70-b6d1-d68fb883cdb2>
CC-MAIN-2022-40
https://www.darkreading.com/iot/securing-the-internet-of-things-in-the-age-of-quantum-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00487.warc.gz
en
0.926825
1,046
3.28125
3
Defining the Scope of Smart Manufacturing and Industry 4.0

Smart manufacturing technologies are changing the game in the industry. New innovations and Industry 4.0 are creating more efficient and productive factories with better data reporting, giving manufacturers an easier time using real data to make informed decisions. Read on to learn more about the technologies and scope of smart manufacturing and how it can be used to grow a business, stay ahead of competitors, and streamline processes.

What is Smart Manufacturing?

Smart manufacturing is a term used to describe the digitization of manufacturing processes, including supply chains, production, logistics and distribution, sales, and stocking. It is made up of many unique technologies and solutions that each play a role in the production process. Related: What is Smart Manufacturing? Smart manufacturing uses the Internet of Things and Industry 4.0 to connect these technologies, equipment, and software. These connected technologies then work together to record, report, and analyze streams of data, using AI, machine learning, and automation to make more accurate predictions and decisions. This connectivity and collaboration between machines helps to streamline supply chains, inventory management, and distribution, connecting operations and helping manufacturers stay on the cutting edge of technology while becoming more efficient, productive, and competitive in their industry.

The Technologies that Make Up Smart Manufacturing and Industry 4.0

As mentioned above, smart manufacturing is made up of many different types of modern technologies. Here are a few of the most common smart manufacturing technologies and how they help manufacturers optimize and streamline their operations.

AI and Machine Learning

AI and machine learning work together in smart manufacturing to gather more data and make data analysis faster and more accurate.
Advanced AI and machine learning are now also able to make simple decisions based on the analyzed data, such as scheduling predictive maintenance to anticipate upcoming needs.

Automation

Automation in manufacturing streamlines repetitive processes by handing them to robots that can perform them faster, more efficiently, and more accurately. This also allows humans to focus on tasks that require more critical thinking.

Cloud

Cloud storage in manufacturing means not having to store critical data on-site, saving space and money and improving the overall security of the data. Cloud usage in manufacturing can also link multiple locations, combine datasets, and help you get a better overview of your business.

Smart Manufacturing Can Reach into Every Part of a Business

When we say that smart manufacturing technology makes your business more transparent, we mean it. With these innovations implemented, you'll get a better idea of how each part of your operation is performing, down to the smallest details, because everything is captured in the data tracked by an array of sensors set up across your entire operation. Is a machine slowing down and lowering production? If so, the data collected by the sensors inside will help you see the slowdown, identify the cause, order parts, and schedule maintenance before it gets bad enough to truly affect output. In this example, not only does this maintain steady production, it saves money by reducing potential downtime before it happens. With the right technology, this becomes standard across your entire business, including the factory floor, shipping, supply chains, production, logistics and distribution, sales, stocking, and supply. With a clearer view of your business, smart technology helps you adjust on the fly based on real data from all corners of your operation.

How Does Smart Manufacturing Help Manufacturers Grow?
Smart manufacturing helps manufacturers grow by giving them the tools and technology needed to outpace their competition and to understand where within their operation they can improve, streamline, and optimize. With the data collected by the connected sensors across a factory floor, businesses that use smart technology have a better understanding of what's working, what's not, and what can be improved. This advantage over those who aren't gathering this data is immense and can be used to stay multiple steps ahead of the competition.

What Comes Next in Smart Manufacturing?

The future of smart manufacturing is being written every day as new technologies are created and digital innovation experts continue to find exciting ways to use them within supply chains, on factory floors, and everywhere else they can. If you want to learn more about smart manufacturing and what it can do for your business, take a look at one of our latest videos on the benefits of smart technology or read our new eBook on creating a more modern and efficient supply chain.
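Returning to the machine-slowdown example earlier: at its core, that kind of predictive-maintenance check is a threshold test over a stream of sensor readings. A deliberately simplified Python sketch, with all thresholds, window sizes, and readings invented for illustration:

```python
def maintenance_alert(throughput_history, baseline, tolerance=0.10):
    """Flag a machine when its recent moving average drops more than
    `tolerance` below its baseline throughput -- the cue to order parts
    and schedule maintenance before output is truly affected."""
    window = throughput_history[-5:]            # last 5 sensor readings
    moving_avg = sum(window) / len(window)
    return moving_avg < baseline * (1 - tolerance)

healthy = [100, 101, 99, 100, 102, 100, 101]    # units/hour, steady
slowing = [100, 99, 92, 88, 85, 82, 80]         # gradual degradation

assert not maintenance_alert(healthy, baseline=100)
assert maintenance_alert(slowing, baseline=100)
```

Production systems layer far more on top (trend models, anomaly detection, work-order integration), but the flow is the same: sensors feed data, a rule or model watches it, and maintenance is triggered before downtime occurs.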
<urn:uuid:e7c8a6af-7f18-4c2c-b8f4-412779312b1c>
CC-MAIN-2022-40
https://www.impactmybiz.com/blog/defining-the-scope-of-smart-manufacturing-and-industry-4-0/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00487.warc.gz
en
0.934045
906
3.03125
3