From home life to the workplace, our lives are becoming increasingly reliant on our digital network access. 5G technology will revolutionise the way connectivity is delivered, bringing many existing network services together and making what once seemed impossible, possible.

For the education sector, 5G will bring significant changes in the way lessons and lectures are delivered in schools and universities, as well as enabling off-site access to resources and data to make research and learning more collaborative. With these advancements come increased expectations from students, teachers and parents for improved services and a seamless user experience, driving all institutions to make 5G-powered education a success.

Much of the hype around the rollout of 5G is about big data and increased capabilities for analysis. With high volumes of data now generated, collected and stored across campus, 5G connectivity gives higher education institutions the opportunity to improve ways of working to increase efficiency and productivity in all operational areas, from office administration through to teaching. A simple example of this is the ability to monitor students taking online modules and exams by the criterion ‘time to complete’. Teachers can use this data to quickly identify where students across the course are taking longer than average to complete certain questions, or getting lower than average marks on certain sections. This insight enables teachers to review teaching methods for those particular areas, or to evaluate performance against previous years to generate a more in-depth analysis of individual students, teaching staff and the institution as a whole.

The full impact that digital transformation will have on education is really only limited by imagination. However, staff capabilities across the education sector will need to keep pace with evolving processes. The government’s Commission on Assessment without Levels report recommended schools should collect as much data as possible, but a key finding was that the majority of teaching staff reported data entry as the biggest burden of their role. If education is to flourish in the way that 5G will allow, there needs to be sufficient support and training for staff and students to ensure the benefits of new technologies can be fully realised.

The push towards digital transformation has already had an unprecedented impact on the education sector, and these changes are not only ongoing but accelerating. Education providers have been well placed to take advantage of new technologies, and in many cases have been early adopters of technologies that will truly transform learning experiences.

Adoption of the cloud has levelled the playing field for access to education. Centralised information repositories like cloud storage and online modules allow students to access information securely from any location. This gives both students and teachers much greater flexibility regarding when and where they can work, enabling them to balance other aspects of life such as work and family. Furthermore, storing digital copies of student work in the cloud means tracking progress between classes, year groups and key stages is much more efficient for teachers and parents, allowing them to more easily assess areas that are improving or need attention.
Cloud-enabled interactive teaching materials have driven a more collaborative working environment and ensure that students are getting the most out of teaching time, ultimately resulting in the educational institution offering a better quality experience.

However, the timeframe for the 5G rollout is currently the cause of some debate - widespread implementation may not be possible until at least 2020, according to recent government schedules. Moreover, as it currently stands, 5G will not be available in all areas across the UK until at least 2025, by which time the GSM Association (GSMA) forecasts that 5G will still only have reached 14 percent of the global market.

The UK infrastructure is not yet ready for the industry-wide implementation of 5G and the significantly larger amounts of data it will carry. There are still communities in rural areas struggling to access the 4G network that came into play in 2013 - a clear testament to the lack of consistency in the UK when it comes to network connectivity. Many institutions find gaining access to robust networks a significant challenge, and this lack of consistency has a significant impact on how the curriculum is taught in schools with varying network availability. Beyond the school gates, lack of connectivity affects not just schools but entire communities - the connectivity students have available at home can often be worse than they experience in school. Until reliable and fast connectivity is widely available to every school and home, the true impact of 5G on education will not be tangible to all.

It’s also important to remember that as 5G paves the way for increased connectivity, vulnerability to cyberattacks will also increase. One of the concerns raised most frequently by institutions is the feeling of a lack of control when using service providers rather than implementing the services themselves. The shift to a managed services setup rather than hardware maintenance requires a different skill set among in-house IT teams. Changes in technology usage require IT departments to change their perspective on the buying process - instead of questioning hardware failure rates and fix times, it’s now about ensuring that service level agreements and contracts are fit for purpose. Service providers have a duty to inform their customers and to ensure they feel comfortable with all parts of the service, including alleviating cybersecurity concerns by ensuring robust measures are tailored to each institution's individual needs. A provider that does this successfully will relieve pressure on the IT department and be a reliable point of contact for any queries or concerns.

Security and compliance is a prevalent risk, and it’s important that only authorised people are able to access the relevant data. GDPR gives everyone, including students, the right to access their own personal data through ‘subject access requests’. Many institutions are still choosing to store pupil and staff data in paper form, which not only makes requests for access extremely time-consuming to fulfil, but also leaves data at risk of falling into the wrong hands. Schools are not exempt from GDPR, so storing all personal data within a secure cloud system is a way to ensure data is safe whilst also being easily accessible on request.
And whilst cloud services have become an important aspect of many institutions’ IT infrastructure, the education sector should not treat them as the ‘silver bullet’ to end all of their data storage woes. Public clouds have huge scalability and resilience, but it is critical that decision makers use them in conjunction with other essential parts of the IT mix, and adopt robust disaster recovery processes to mitigate the risk of huge amounts of confidential personal data being lost as a result of outages or other issues.

There is no doubt that digital transformation as a result of growing 5G network coverage will have an impact on everyone, as gradual adoption of new technologies takes place within schools, colleges and universities. By laying the right foundations, the introduction of these new ways of engaging, working and monitoring students, staff and communities will transform how institutions deliver learning and training for all. Despite 5G not being ubiquitous just yet, it is clear that it’s set to transform the education sector for the better.

Iain Shearman is MD at KCOM NNS. Shearman leads KCOM’s National Network Services (KCOM NNS), bringing more than 20 years’ telecommunications experience to the team. Shearman is committed to working with businesses to help them adopt innovative technologies that support their organisation’s vision and growth aspirations, and to helping them deliver more for their customers. Shearman believes that customer experience is at the heart of the KCOM offer and that technology has the power to unlock hidden capability in any business.
Surveillance and facial recognition technologies have become a common fixture in today’s interconnected world over the past few years. Whether monitoring people in airports, searching for wanted criminals, allowing users to unlock their phones or creating targeted marketing campaigns, adoption of this technology has become widespread and resulted in many useful applications. But it has also generated legitimate concerns around privacy and security.

In December 2018, for example, civil liberties group Big Brother Watch described citizens as “walking ID cards”, citing research that found police use of facial recognition tools identified the wrong person nine times out of 10. As worries grow that these systems threaten basic privacy rights, there is increasing pressure on governments and organisations to introduce stricter rules to keep them in check. In May, the city of San Francisco voted to ban police from using facial recognition applications, and California is considering similar moves. Technologists have responded to these concerns.

Although the use of surveillance and facial recognition technology is widespread and still growing, these systems are in their infancy and can often be inaccurate. Michael Drury, partner at BCL Solicitors, says the biggest problem is that they are “hit and miss” at best.

“Most people would applaud facial recognition technology if it prevents the commission of terrorist atrocities and other serious crimes. But that remains a very big ‘if’, and the potential benefits need to be weighed against the cost to us all of having our very beings recorded and those details held by the police,” he says.

“We know it is better at recognising men than women, and Caucasians rather than other ethnic groups. The potential for misidentification of suspects and miscarriages of justice is one which should not be underestimated.

“Police and other law enforcement agencies should be wary of seeing new technologies as a panacea and worthy of use simply because the word ‘digital’ can be used to describe them.”

Miju Han, director of product management at HackerOne, echoes similar concerns about the reliability of these systems.

“Facial recognition technology has two major problems,” says Han. “The first is that its accuracy is nowhere near 100%, meaning that employing the technology will result in many false positives. Few algorithms were trained on a truly representative sample, so facial recognition will disproportionately negatively affect different demographics.

“But even if we take the highest demographic accuracy today – which is around 99% for white males – and used the technology in a high-traffic area to attempt to identify known terrorists, for example, 1% of all passers-by would be incorrectly tracked, which would quickly add up to hundreds or thousands of errors a day.

“The second major problem is that users have not consented to being scanned. Going out in public cannot be a consent to be tracked. It is possible for an entity to use facial tracking to construct full location histories and profiles, albeit with a lot of errors. Your face is not a license plate.”
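Han’s arithmetic is easy to reproduce. The short sketch below works through the base-rate problem with an assumed, purely illustrative daily footfall; only the 99% accuracy figure comes from the quote above.

```python
# Back-of-the-envelope illustration of the base-rate problem Han describes.
# The footfall and target counts are illustrative assumptions.

daily_passers_by = 100_000   # assumed traffic at a busy transport hub
accuracy = 0.99              # the "highest demographic accuracy" quoted above
false_positive_rate = 1 - accuracy

false_alerts_per_day = daily_passers_by * false_positive_rate
print(f"False alerts per day: {false_alerts_per_day:,.0f}")  # 1,000

# Even if 10 genuine targets pass through and every one is caught,
# only about 1% of all alerts would point at the right person.
true_alerts = 10
precision = true_alerts / (true_alerts + false_alerts_per_day)
print(f"Share of alerts that are correct: {precision:.1%}")  # ~1.0%
```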
When it comes to building a case for surveillance and facial recognition applications, it is often argued that they aid security. Steven Furnell, senior IEEE member and professor of information security at Plymouth University, believes that this can be problematic.

“Security and privacy are often mentioned in the same breath and even treated as somehow synonymous, but this is a very good example of how they can actually work in contradiction,” he says. “The use of face recognition in a surveillance context is presented as an attempt to improve security – which, of course, it can do – but is clearly occurring at a potential cost in terms of privacy.

“It has been claimed that face recognition technology is now societally acceptable because people are used to having it on their smartphones. However, the significant difference is that this is by their choice, and to protect their own assets. Plus, depending on the implementation, neither the biometric template nor the later facial samples are leaving their device.”

Furnell argues that, in the implementations deployed in public spaces, the subjects are not consenting to the creation of templates nor the capture of samples. “The data is being used for identification rather than authentication. As such, the nature of the usage is significantly dissimilar to the more readily accepted uses on a personal device,” he says.

Comparitech privacy advocate Paul Bischoff says people need a way to opt out and that, without checks and regulation in place, face recognition could quickly get out of hand.

“Face recognition surveillance will probably grow more commonplace, especially in public venues such as transportation hubs and large events,” he says. “It brings up an interesting discussion about the right to privacy and consent: do you have any right to privacy in a public place? Up to now, most people would answer no.

“But consider that even in a public place, you have a certain degree of anonymity in that the vast majority of people do not know who you are – assuming you’re not a public figure. You can get on the subway without anyone recognising you. But if a security camera armed with face recognition identifies you, it can tie your physical identity to your digital identity, and it can do so without obtaining your consent first.

“People act differently when they know they’re being watched, and they act even more differently if the person watching them knows who they are. This can have a chilling effect on free speech and expression,” says Bischoff.

With these threats in mind, some governments and organisations worldwide have begun implementing stricter rules and regulations to ensure such technologies are not abused. More recently, city officials in San Francisco voted eight-to-one to ban law enforcement from using facial recognition tools. Joe Baguley, vice-president and chief technology officer of Europe, Middle East and Africa (EMEA) at VMware, believes that the city of San Francisco is right to show caution in this case.

As Drury and Han pointed out earlier, Baguley says the problem is that facial recognition software is not yet sophisticated enough to positively benefit police and other public services. “Its previous use in the UK by South Wales Police, for example, saw 91% of matches subsequently labelled as false positives, which showed the technology does not possess enough intelligence to guarantee accurate results, or to overcome any unconscious bias which may have affected its development,” he says.

Before these technologies can effectively assist with policing and surveillance, Baguley says the underpinning artificial intelligence (AI) algorithms need to be altered to ensure the software recognises people without any discriminatory bias. “Ultimately, we’re at a nascent stage of the technology.
These issues can be ironed out by enriching it with enough intelligence to make it a sufficiently safe and powerful tool to deploy in the future. It’s just not ready yet,” he adds.

However, it is not just in the US where legal and regulatory challenges surrounding facial recognition technology are taking place. In March, the UK’s Science and Technology Committee warned that live facial recognition technology should not be deployed by British law enforcement until concerns around its effectiveness were resolved. In June, a court in Cardiff concluded that rules governing facial recognition systems need to be tightened after a Welsh shopper launched a legal challenge when a South Wales Police camera took a picture of him.

Robert Brown, associate vice-president at Cognizant’s Centre for the Future of Work, says these challenges herald a sea change in how much privacy we are willing to give up in the name of safety. “On the one hand, it is a natural reaction to think, ‘Good, facial recognition helped catch a bad guy’,” he says. “But then you wonder, ‘How often can that ‘eye in the sky’ see me – and can I trust it?’

“The health of our democracy – and the future of work – demands trust. But the tech trust deficit isn’t closing. Too often, there are retroactive mea culpas that ‘we didn’t do enough to prevent X from happening’. Meanwhile, there is near-unanimous agreement that facial recognition software error rates – especially among persons of colour – are unacceptably high.”

Brown says that as we continue to see the rise of facial recognition as a means to keep us safe, we are going to have to navigate a thorny mix of standards, laws, regulations and ethics. “This means that we need consent of citizens – not digital-political overlords – to ultimately control who watches the watchers,” he adds.

While many organisations and experts are opposed to mass surveillance, others are more positive. Matthew Aldridge, senior solutions architect at security firm Webroot, sees legitimate applications from law enforcement and other similar agencies where face recognition technology could greatly reduce policing costs and increase the chances of successful prosecutions in certain cases. But he agrees that use of these systems should be limited and regulated.

“In these situations, it should only be perpetrators of crime who have their biometrics stored in this way. There is a temptation in mass surveillance to build a profile on every unique person detected, track their movements and categorise them into behaviour groups,” says Aldridge. “This type of approach is being taken in China, for example, where the state is able to not only do this, but to map the profiles to the identities of the individual citizens concerned, raising questions about how and why this data is being used.

“Current facial recognition technology can work well, but it is far from perfect. Despite its shortcomings, it demonstrates its value by reducing the workload of investigators, effectively augmenting their role.”

Mitigating the risks

With facial recognition constantly gaining traction for a range of purposes, it is clear that more needs to be done to keep this technology in check. “As a society, we are not clear around how we handle the limitations of the technology that has the potential to affect emotions and morality – for example, the exceptions and false positives around race and implicit bias, as well as the possibility to entrench biases that already exist in society,” says Chris Lloyd-Jones of Avanade.
“The legislation in this area does not clarify generally how the technology should be used, and it’s a difficult tool to challenge. If you are walking down the street and you notice a police CCTV operation, by the time you see the notice, your face is already recorded.

“Change management is important, focusing on handling the exceptions, and empathetically dealing with any false positives that are raised – and feeding this back to the creators of the technology to avoid entrenching existing biases within the technology, such as racial discrimination.”

On a regulatory level, the use of facial recognition tools is already governed by laws such as the General Data Protection Regulation (GDPR). “Principles such as storage limitation, data minimisation, and lawfulness, fairness and transparency apply to the processing of this data,” says data privacy expert Tim Jackson. “Additionally, as facial recognition data is a type of biometric data, it can only be processed in restricted circumstances, such as if an individual has given their explicit consent or if a specific scenario under EU member state law has been met.”

However, Jackson adds that there is no code of practice that applies to the processing of biometric data. “The Information Commissioner’s Office [ICO], as required by the DPA [Data Protection Act], has previously produced codes of practice on other types of processing such as direct marketing and subject access, and has recently published a draft code of practice regarding children’s online privacy,” he says. “It has also closed a ‘call for views’ consultation period with the intention of producing a code of practice on how to balance data protection in journalism. I would therefore expect the ICO – when formally instructed by the home secretary – to lead the way with a biometric data code of practice in the near future.”

Despite the lack of a specific code of practice, there are a number of steps that need to be taken by companies that are developing or using this software. “As we are talking about a high-risk processing activity, a data protection impact assessment must be carried out before the processing begins,” says Jackson, adding that the ICO provides detailed guidance on what form this assessment should take. “This assessment will help the company identify the various risks that need to be mitigated, risks such as discrimination, unconscious bias, lack of transparency, and lack of proportionality, for example.”

Jackson recommends that companies developing or deploying facial recognition software:

- ensure the development stage includes representation from different ethnicities and genders;
- ensure individuals whose data are to be processed are made fully aware of this activity, what rights they have in relation to their data, and how to invoke these rights;
- apply strict retention periods to the collection of any data;
- apply confidentiality measures to the collection of any data;
- consider whether the scope of the facial recognition software could be reduced;
- seek the views of individuals before the processing begins to understand their concerns and adopt further measures if necessary;
- consider whether there are alternative, less intrusive ways to achieve the same outcome;
- and create policies to govern the processing activity and to provide direction to staff.

It seems the use of facial recognition technology is a double-edged sword.
Although some people argue that these systems play an important role in fighting crime and assisting with other important tasks, they also present privacy challenges and risks. As adoption increases, safeguards, laws and regulations will certainly need to be reviewed and revised to ensure consumers are protected and that this technology is not abused.

It is therefore no surprise that the UK surveillance camera commissioner Tony Porter is calling for a strengthening of the code of practice for the surveillance camera industry in the face of new privacy regulation and surveillance technologies such as facial recognition.
NIST has conducted a survey to gain more insight into the password habits of children. The organization noted that while passwords are a major cybersecurity concern amongst adults, there is comparatively little research into the password habits of the younger generation. The problem is that today’s children spend just as much time online as their parents, with two-thirds of parents reporting that their child has access to a tablet, and one-third reporting that their child was under the age of five when they first used a smartphone. Cybercriminals, of course, do not differentiate between adults and children, which means that kids are facing the same cyberthreats as adult consumers. Thankfully, there is some evidence to suggest that children are learning how to protect themselves.

The survey itself polled more than 1,500 children between the ages of eight and 18. Those between third and fifth grade received the same questions as those between sixth and 12th grade, but with slightly different wording to account for age and language comprehension. In their responses, the children indicated that they are aware of many cybersecurity best practices. For example, they know that they should memorize passwords instead of writing them down, and that they should log out after completing an online session. The passwords they used also got more sophisticated as they got older, with teenagers showing a preference for longer passwords with more symbols and numbers.

However, the children still displayed many of the same bad password habits as their adult counterparts. Most notably, kids admitted that they reuse passwords, and were even more likely to do so as they got older and opened more accounts. High school students also liked to share their passwords with their friends, with the researchers noting that sharing a secret like a password is one of the ways in which adolescents learn to build trust and make friends. As a result, they view a shared password as a privacy concern rather than a security one, and do not consider password sharing to be an inherently risky behavior.

While repeated passwords are a concern, children sign up for far fewer services than their parents, and consequently need to keep track of fewer passwords. As it stands, most kids only have two passwords at school and another two to four at home. Those passwords tended to feature references to sports, video games, movies, animals, and other aspects of their everyday lives, though their first password was usually set up by a parent. In that regard, parents could help set the precedent for good password behavior as their kids get older.

August 13, 2021 – by Eric Weiss
The HPC 4 Energy Blog brings us this infographic on the #1-ranked Sequoia supercomputer. Lawrence Livermore National Laboratory’s Sequoia supercomputer, an IBM BlueGene/Q system, was ranked as the world’s fastest supercomputer on June 18, 2012. Sequoia boasts 16.32 petaflops using 1,572,864 cores, but how fast can it complete calculations? This infographic puts its speed into perspective, demonstrating the potential of American HPC resources to save organizations time and money. While the comparison of supercomputer flops to handheld calculators is pretty tired by now, it does help the layman grasp how immense 16.3 petaflops is compared to the machine capabilities of just a few years ago. Download the infographic.
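For readers without the infographic to hand, the comparison is easy to reproduce. This short sketch uses the Sequoia figures above; the one-operation-per-second calculator rate is the usual illustrative assumption.

```python
# Putting 16.32 petaflops into perspective. The calculator rate is an
# illustrative assumption; the Sequoia figures come from the article.

sequoia_flops = 16.32e15          # peak operations per second
cores = 1_572_864

# Roughly 10 gigaflops per core.
print(f"Per-core rate: {sequoia_flops / cores / 1e9:.1f} GFLOPS")

# A person with a calculator doing one operation per second would need
# roughly half a billion years to match one second of Sequoia time.
calc_ops_per_second = 1.0
seconds_per_year = 60 * 60 * 24 * 365
years = sequoia_flops / calc_ops_per_second / seconds_per_year
print(f"Calculator-years per Sequoia-second: {years:.2e}")  # ~5.2e8
```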
IG is often deployed to replay credentials or other security information. In a real-world deployment, that information must be communicated over a secure connection using HTTPS, meaning in effect HTTP over encrypted Transport Layer Security (TLS). Never send real credentials, bearer tokens, or other security information unprotected over HTTP. When IG is acting as a server, the TLS connection is configured in admin.json.

TLS depends on the use of digital certificates (public keys). In typical use of TLS, the client authenticates the server by its X.509 digital certificate as the first step to establishing communication. Once trust is established, the client and server can set up a symmetric key to encrypt communications.

In order for the client to trust the server certificate, the client first needs to trust the certificate of the party who signed the server’s certificate. This means that either the client has a trusted copy of the signer’s certificate, or the client has a trusted copy of the certificate of the party who signed the signer’s certificate.

Certificate Authorities (CAs) are trusted signers with well-known certificates. Browsers generally ship with many well-known CA certificates, and Java distributions also ship with many well-known CA certificates.

Getting a certificate signed by a well-known CA is often expensive. It is also possible for you to self-sign certificates. The trade-off is that although there is no monetary expense, the certificate is not trusted by any clients until they have a copy. Whereas it is often enough to install a certificate signed by a well-known CA in the server keystore as the basis of trust for HTTPS connections, self-signed certificates must also be installed in all clients. Like self-signed certificates, the signing certificates of less well-known CAs are also unlikely to be found in the default truststore. You might therefore need to install those signing certificates on the client side as well.

This guide describes how to install self-signed certificates, which are suitable for trying out the software, or for deployments where you manage all clients that access IG. For information about how to use well-known CA-signed certificates, refer to the documentation for the Java Virtual Machine (JVM).

After certificates are properly installed to allow client-server trust, consider the cipher suites configured for use. The cipher suite determines the security settings for the communication. Initial TLS negotiations bring the client and server to agreement on which cipher suite to use: the client and server share their preferred cipher suites to compare and to choose. If you have a preference concerning the cipher suites to use, you must set up your deployment to use only your preferred cipher suites.

IG inherits the list of cipher suites from the underlying Java environment. The Java Secure Socket Extension (JSSE), part of the Java environment, provides security services that IG uses to secure connections. You can set security and system properties to configure the JSSE. For a list of properties you can use to customize the JSSE in Oracle Java, refer to the Customization section of the JSSE Reference guide.
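IG’s own configuration schema is documented elsewhere, but the underlying trust model is easy to demonstrate in any TLS client. The sketch below is a generic, illustrative Python example (not an IG API): the client explicitly trusts a self-signed server certificate and restricts the negotiation to preferred protocol versions and cipher suites. The host name and certificate file name are assumptions for the example.

```python
# Illustrative client-side sketch (not IG-specific) of the two ideas above:
# rooting trust in an explicit copy of a self-signed certificate, and
# restricting the cipher suites offered during the TLS handshake.
import socket
import ssl

# The client holds its own trusted copy of the self-signed certificate,
# exactly as the text describes for clients of less well-known signers.
context = ssl.create_default_context(cafile="self-signed-server-cert.pem")

# Constrain the negotiation to preferred versions and cipher suites.
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE+AESGCM")  # OpenSSL cipher-list syntax

with socket.create_connection(("ig.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="ig.example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())
```

If the server presents a certificate that does not chain to the supplied file, the handshake fails, which is the default-distrust behavior the section describes.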
Are you ready to meet the Emergency Preparedness Plan deadline for your rural health clinic (RHC) or federally qualified health center (FQHC)? The purpose of an emergency preparedness plan is to safeguard human resources, maintain business continuity, and protect physical resources. This plan will help maintain access to health care during an emergency or natural disaster.

What’s in an Emergency Preparedness Plan?

Your emergency preparedness plan will include:

- An emergency plan (including a risk assessment),
- Policies and procedures,
- A communication plan,
- And a training and testing program

The first step is to conduct a facility-based risk assessment. A risk analysis and risk management plan (required by HIPAA) will put you on the right track. Nevertheless, the Centers for Medicare and Medicaid Services (CMS) wants your risk assessment to take an all-hazards approach, which looks at all possible emergencies and disasters and spells out the response procedures for each. Emergencies can take many forms, including natural disasters and man-made emergencies. Therefore, you can’t rely on the same response for every emergency. For example, how you respond to an active shooter is much different than how you respond to a ransomware attack on your systems.

In your risk assessment, answer the following:

- How might emergencies limit or stop our operations?
- Which functions do we need to carry out our operations? What must continue in an emergency?
- What risks or emergencies could we expect to face? Does our geographic location add any new risks or challenges?
- What arrangements with other health care facilities do we need to make?

Facility-based Risk Assessment

A “facility-based” risk assessment is specific to your building(s). This allows you to identify, as well as eliminate, natural disasters based on your facility and geographic area. For instance, an RHC in Florida should prepare for an approaching hurricane, whereas an RHC in South Dakota might face a three-day blizzard. However, both RHCs should prepare for a power outage or other facility-based emergency that requires an immediate response.

Community-based Risk Assessment

A community-based risk assessment is one that other organizations develop, such as public health agencies, emergency management agencies, and regional health care coalitions. You can use a community-based risk assessment while conducting your own facility-based assessment. However, if you use a community-based assessment or plan, you must have a copy of it and work with the organization that developed it to make sure it meets your facility’s needs.

You may have already addressed many emergencies and hazards in your HIPAA security risk analysis and risk management plan. Nevertheless, you should review your current security plan for areas that overlap with the CMS Emergency Preparedness Plan and include an all-hazards approach. We at HIPAAtrek believe that the HIPAA security rule already covers many of the CMS requirements. We have created an Emergency Preparedness Plan and HIPAA security rule crosswalk, so you don’t have to reinvent the wheel. For more information, contact us at firstname.lastname@example.org.
When using cellular data on your 3G, 4G or 5G mobile phone, your mobile network operator can reduce your data speed for various reasons. Data throttling is a technique used in mobile networks to do precisely that. Mobile data throttling in 3G, 4G and 5G phones is when your mobile operator puts a temporary or long-term speed cap on your data connection to reduce your data speed. A mobile operator may decide to do that in various situations to minimise your data usage, because lower speeds consume less data.

Mobile operators have a range of tools at their disposal to control data usage on their network so that they can ensure fair use of their cellular services by all subscribers. Since a mobile network is a shared resource, a mobile operator ensures quality of service by accommodating the cellular needs of each customer in a network-efficient way. Data throttling is one of these tools, and operators employ it to control how their customers use mobile data.

Mobile data throttling in cellular networks

Mobile data throttling in cellular networks is a capability that allows the mobile operator to set a maximum limit on your mobile data speed, e.g. 2 Mbps. This limitation can be placed at an account level on your tariff, which is linked to the SIM card inside your phone. It is technically possible for a network operator to set a maximum limit of, for example, 5 Mbps on your SIM card, which would limit your speed whether you are on 4G or even 5G. It is also possible for a mobile operator to restrict you to a certain technology, e.g. 3G, which can limit you to data speeds that can be facilitated by 3G data technologies like HSPA and HSPA+ on a 3G UMTS network.

Throttling can happen in various situations. For example, if you exceed your monthly data allowance, instead of charging you for data overage, your operator may decide to slow down your data speed with a temporary or long-term limit (e.g. 100 kbps) until your next billing cycle. If you want to know the expected data speeds in normal circumstances, please check out this dedicated post on maximum and average data speeds with 2G, 3G, 4G and 5G technologies.

How can you tell if your phone is being throttled?

If you notice that your mobile data (cellular data) speeds suddenly decrease and are always below the average data speeds, there is a possibility that your mobile data is being throttled. However, there can also be other reasons for a drop in data speeds, including network and phone issues. If you are connected to your mobile phone network as normal and notice a sudden decrease in data speeds, you may want to do some checks. The table below shows the average data download speeds with 3G, 4G and 5G networks, which can help you compare your data rates.

| Mobile network generation | Cellular technology | Average data speed (download) |
| --- | --- | --- |
| 3G | UMTS | 384 kbps |
| 3G | HSPA | 3-5 Mbps |
| 3G | HSPA+ | 5-8 Mbps |
| 4G | LTE | 15-20 Mbps |
| 4G | LTE-Advanced | 50-80 Mbps |
| 4G | LTE-Advanced Pro | 60-100 Mbps |
| 5G | NR | 150-200 Mbps |
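To make the comparison concrete, here is a minimal sketch that flags a measured speed sitting well below the low end of the averages in the table. The 50% threshold and the sample measurements are arbitrary illustrative choices, and a single low reading proves nothing on its own; it is only a prompt to do the checks described next.

```python
# A rough self-check, not a diagnosis: compare a measured download speed
# against the low end of the average ranges in the table above.

expected_mbps = {            # low end of each average range, in Mbps
    "UMTS": 0.384, "HSPA": 3.0, "HSPA+": 5.0,
    "LTE": 15.0, "LTE-Advanced": 50.0, "LTE-Advanced Pro": 60.0, "NR": 150.0,
}

def may_be_throttled(technology: str, measured_mbps: float) -> bool:
    """Flag speeds that stay well below the low end of the expected range."""
    return measured_mbps < 0.5 * expected_mbps[technology]

print(may_be_throttled("LTE", measured_mbps=2.0))   # True  -> worth checking
print(may_be_throttled("LTE", measured_mbps=18.0))  # False -> looks normal
```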
Data throttling should not happen without a good reason if you are with a decent mobile operator. A good practice is first to do the basic checks: restart the phone, and check in the phone settings whether you have somehow locked yourself to a lower network technology than what you are allowed. For example, if the network mode is set to 4G, then 4G and lower technologies, e.g. 3G and 2G, are available, because mobile phones are backwards compatible. However, if the phone is set to 3G, it will only be able to access 3G and 2G but not 4G, which can slow down the data speeds.

The other thing you can check is whether something is temporarily going on with your mobile operator’s network that can impact data speeds, e.g. engineering work. If everything is normal and yet your data speed is down, then it may be that your mobile phone is being throttled. But before jumping to a conclusion like this, check with your mobile operator. If you are struggling with slow mobile internet and you are not sure why, then have a look at my detailed post on what to do if your mobile data is slow.

Is data throttling the same as service prioritisation?

Data throttling and service prioritisation are two separate things. Data throttling is about capping data speeds at a certain upper limit, e.g. 2 Mbps, whereas service prioritisation is when a certain service, e.g. VoIP, gets priority over other services irrespective of the data speed. With service prioritisation, certain mobile services, e.g. voice calls, data services etc., are given priority levels so that, in case of network congestion, the essential services remain readily available to customers.

Data throttling and service prioritisation are not the same; however, the symptoms may seem similar. For example, if a network is fully loaded (busy), customers on 4G LTE may not get the speeds they are used to, which can look like throttling. However, that is a time-limited issue, and as soon as the network congestion is reduced, the speeds can go back to normal. Have a look at this post to find out the average 4G LTE speeds.

How do I stop my phone from being throttled?

As long as your cellular service is from a mobile network operator with decent coverage and capacity in your location, the best way to avoid mobile data throttling is to contact your network operator about your plan. Your operator is in the best position to remove any network-level restrictions. The first step to avoid data throttling is to buy your mobile tariffs from reliable mobile operators that provide full transparency on pricing. Some mobile tariffs are sold at cheaper rates because they have maximum speed limits. For example, in the UK, you can buy mobile tariffs with speed limits of, e.g. 2 Mbps, which means no matter how great your network coverage, you will never exceed 2 Mbps. However, if your mobile operator has decided to throttle you, the only way forward is to speak to the operator to address the issue. In some cases, with unlimited data packages, mobile phone tariffs have a fair use policy, which gives mobile operators the control to stop any misuse of their service.
Microsegmentation has quickly become a popular data security practice for cloud environments and data centers. Organizations apply microsegmentation to keep potential breaches contained, prevent surface attacks, achieve compliance, and more. This in-depth guide explains everything you need to know about microsegmentation, including how it works, examples, and how to get started.

What is Microsegmentation Anyway?

Microsegmentation establishes multiple secure zones across a cloud environment or data center. This ultimately isolates the various application workloads from each other—securing each one as an individual asset. System admins can use microsegmentation to limit network traffic between workloads, using a zero-trust approach. "Host-based segmentation" and "security segmentation" are commonly used synonyms for microsegmentation.

Microsegmentation detaches workloads from the network using a host workload firewall that enforces east-west communication policies, not just north-south ones. This approach is crucial in a world where organizations across every industry are using more cloud services and adopting new deployments. Microsegmentation adds granularity to data security, treating each element in the infrastructure like its own container.

How Microsegmentation Works

In simple terms, microsegmentation secures applications by allowing permissible traffic and denying all other traffic by default. It's the foundation for zero-trust security models for application workloads in cloud infrastructures and data centers. Microsegmentation uses host workloads rather than firewalls or subnets: each individual workload operating system within the cloud or data center has a native firewall. By using host-based segmentation, this security practice creates a map that visualizes what must be protected based on human-readable policies, as opposed to firewall rules or IP addresses.

Let's look at an analogy that explains how microsegmentation works, using a cruise ship instead of a data center. Below the surface of the water, the ship has many separate watertight compartments to keep flooding contained. If there's a breach in the hull, the watertight doors seal and the water is contained in a single compartment—keeping the ship afloat. Each watertight compartment is like a microsegment. If there were no compartments built into the ship, a single crack could effectively sink the entire vessel. This is, infamously, why the "unsinkable" Titanic ultimately sank.

Now apply this same concept to your IT environment. If a hacker or malicious software breaches one container of your data center, microsegmentation keeps the threat from moving laterally, so the threat is contained and doesn't spread. Microsegmentation could be the difference between a small security incident and an organization-wide breach. No IT security system is airtight: regardless of the steps you take, one of your employees might still accidentally click a phishing link and leak their credentials. But with microsegmentation, you can keep the damage contained to a much smaller part of your infrastructure—limiting the blast radius for when incidents eventually occur.

Microsegmentation vs. Traditional Segmentation

Microsegmentation has been adopted in response to security attacks that breached the network perimeter. Once inside, a threat was able to move freely throughout the infrastructure. Traditional segmentation doesn't solve this problem.
It's designed for traffic between the client and server. But with modern cloud architectures, IT network traffic flows from server to server and from one application to another. This type of network traffic makes traditional segmentation somewhat irrelevant, as it cannot prevent a breach from spreading from server to server. Microsegmentation takes this one step further with the ability to provide granular security for each workload within a single server: IT admins can apply this security approach at the individual workload level.

Traditional segmentation also relies on IP-based rules and other network constraints. These policies are cumbersome, error-prone, and require lots of manual effort. Microsegmentation is built for modern, hybrid IT environments: the policies can be applied to both on-site data centers and cloud workloads simultaneously.

Features of Microsegmentation

To help you understand how microsegmentation works at a higher level, let's take a look at some features of this data security practice:

- Visibility — Microsegmentation policies begin with a visual application dependency map. This shows all communications between workloads, data centers, clouds, processes, and applications. The visibility helps establish a baseline for each application's connectivity and serves as a starting point for implementing a microsegmentation policy.
- Simple Labeling — Rather than using firewall rules or IP address constraints (like traditional segmentation), microsegmentation relies on labeling. These human-readable labels are meant to simplify things for real people. Labels could be something like the name of an application or its role, rather than a string of numbers that are indecipherable to the naked eye. (A minimal sketch of a label-based policy follows this list.)
- Automation — Microsegmentation takes the visual dependency map and uses the labels to automatically generate whitelist segmentation policies for IT environment traffic at the application and role levels. It uses historical connections and labels to automatically create a policy that controls the traffic. Admins simply need to select the level of restrictiveness for each policy they need to create.
- Risk Mitigation — As previously mentioned, microsegmentation can help reduce the risk of different software or application vulnerabilities. Breaches would be contained to a specific application or workload to prevent massive attacks from affecting your entire IT infrastructure.
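To make the labeling idea concrete, here is a minimal, illustrative sketch of a default-deny, label-based policy check. The label names, rules, and ports are invented for the example; real products express the same idea through their own policy engines.

```python
# A minimal sketch of a label-based, default-deny policy check.
# Labels, rules, and ports are invented for illustration.

Rule = tuple  # (source selector, destination selector, allowed port)

rules: list[Rule] = [
    # "web role of the billing app may reach its processing tier on 8443"
    ({"app": "billing", "role": "web"},
     {"app": "billing", "role": "processing"}, 8443),
    # "processing tier may reach the database tier on 3306"
    ({"app": "billing", "role": "processing"},
     {"app": "billing", "role": "db"}, 3306),
]

def matches(selector: dict, labels: dict) -> bool:
    """A workload matches a selector when all selector labels are present."""
    return all(labels.get(k) == v for k, v in selector.items())

def is_allowed(src: dict, dst: dict, port: int) -> bool:
    """Default deny: traffic passes only if some rule explicitly allows it."""
    return any(matches(s, src) and matches(d, dst) and port == p
               for s, d, p in rules)

web = {"app": "billing", "role": "web", "env": "prod"}
proc = {"app": "billing", "role": "processing", "env": "prod"}
db = {"app": "billing", "role": "db", "env": "prod"}
print(is_allowed(web, proc, 8443))  # True: explicitly whitelisted
print(is_allowed(web, db, 3306))    # False: web may not skip a tier
```

The point of the sketch is the selector model: rules reason about what a workload is (its labels), not where it happens to live (its IP address).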
Here are some real-life examples of microsegmentation:

Example #1: Application Segmentation

Application segmentation is designed to protect high-value apps. It uses "ringfencing" to control communications between various applications across public clouds, hybrid clouds, or data centers. With application segmentation, the ringfencing method gets applied to container workflows or hypervisors. Organizations use this microsegmentation method to secure applications that contain sensitive data or deliver crucial services. This method can also be used to maintain compliance with mandates such as SOX, HIPAA, or PCI DSS.

Example #2: Environment Segmentation

As the name implies, environment segmentation is a way to separate environments used for different purposes. This is commonly used in software development. For example, you can separate development environments from testing and staging environments. If you were trying to accomplish this with a traditional network solution, it would be difficult to apply this type of segmentation. That's because the assets in each environment typically connect dynamically between hybrid clouds and data centers. But microsegmentation makes this possible.

Example #3: User Segmentation

User segmentation is applied at an individual user level based on factors like groups or user identity. It restricts the visibility into applications for each user without changing your infrastructure. For example, let's say you have two users on your network on the same VLAN. They can each have different policies, meaning one of those users might have the authority to access certain applications, and the other may not. This type of microsegmentation helps limit vulnerability to brute force attacks, stolen user credentials, and weak passwords. If there is a breach at the user level, it will be contained only to the applications that the user can access.

Example #4: Nano-Segmentation (Process-Based)

Nano-segmentation is a bit more granular than the other examples. It's a process-based type of microsegmentation that extends beyond application segmentation, all the way down to the service or process being run on individual workloads. Each workload is restricted by tier or level, and only a specific process or service is allowed to communicate between workloads. For example, your database tier may only be able to communicate with the processing tier, or MySQL can only communicate using port 3306 between two workloads. Every other workload or process gets blocked.

Example #5: Application Tier Segmentation

Application tier segmentation separates each workload by its role. The purpose here is to prevent lateral movement between workloads unless otherwise authorized. This is commonly applied to database tiers, web applications, and other high-value applications that companies want to separate from each other to improve security. Here's an example. Let's say you apply this policy to the processing tier of an application. You can set it up so the processing tier can only connect with a database tier, but prevent the processing tier from communicating with web tiers or load balancers. This type of microsegmentation helps limit your risk of surface attacks.

How to Get Started With Microsegmentation

Before you apply microsegmentation to your data security policies and IT infrastructure, it's important to map out your plan. It's unrealistic to apply this type of policy overnight; it's something you'll need to implement over time to ensure everything goes smoothly. If your existing network has a flat topology, you can first create small security zones with traditional segmentation methods. Make sure your security policy is clearly defined. This will make it much easier for you to determine which zones or containers should be segmented from each other. Once you have the groundwork in place, you can follow these steps to implement microsegmentation policies in your organization:

Step 1 — Identify High-Priority Applications and Assets

Rather than rolling out microsegmentation across everything simultaneously, start with your most important assets. What applications, databases, or systems contain the most sensitive information? Which systems would have the most significant impact on your organization if there were a breach? Not only will this step give you some direction on where to begin, but it will also help you define granular security policies within those high-value assets. The higher the value, the more granular the policies should be when you're implementing microsegmentation. For example, let's say you identify a specific database as your highest-priority asset. You can start applying microsegmentation to each individual workload within that database. So you're not only isolating the database from other applications or systems in your network, but you're also isolating containers within the database itself. This approach helps you apply the maximum level of security to each asset.

Step 2 — Map Out Connections

We discussed this concept earlier when we covered the different features of microsegmentation. This is the visibility aspect of the process. You need to map out your entire infrastructure. Depending on the size of your organization and network, this can be a time-consuming step, but it's crucial to the success of your microsegmentation implementation. What does your IT infrastructure look like visually? How does one remote connection communicate with another? What applications connect to other servers, applications, or databases? Define how one workload connects with another. Aside from helping you segment assets from one another, this step is also extremely useful for identifying weak points within your infrastructure. Once everything is mapped out visually, you can see which connections are the most susceptible to a breach. This makes it easier to define your microsegmentation policy around those vulnerabilities and high-value assets.
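As a toy illustration of this mapping step, the sketch below aggregates observed flow records into a dependency map that could seed candidate whitelist rules. The flow records and workload names are invented for the example; in practice this data would come from flow logs or host agents.

```python
# Toy illustration of Step 2: aggregating observed flows into a dependency
# map that can seed whitelist rules for review. Records are invented.
from collections import defaultdict

observed_flows = [
    ("billing-web", "billing-processing", 8443),
    ("billing-web", "billing-processing", 8443),
    ("billing-processing", "billing-db", 3306),
]

dependency_map: dict = defaultdict(int)
for src, dst, port in observed_flows:
    dependency_map[(src, dst, port)] += 1

# Each observed edge becomes a candidate whitelist rule for human review;
# anything never observed stays denied by default.
for (src, dst, port), count in sorted(dependency_map.items()):
    print(f"allow {src} -> {dst} on {port}  (seen {count}x)")
```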
Step 3 — Define Your Policies and Trust Zones

Once you've done the mapping, you can start to label your assets and applications in the infrastructure. From here, you can begin to set your boundaries between containers and set policies to govern each boundary. In simple terms, you've already identified what's on your network and mapped out how everything is connected; now you need to ask yourself, "What should each thing be allowed to do?" There is a bit of a balancing act here. You obviously want the microsegmentation policy to improve your security standards, but you don't want to cause operational problems or workflow issues. You'll need to apply some risk assessment to this step as well to see what's worth segmenting and what's not.

Step 4 — Clearly Establish Roles for Continuous Improvement

Microsegmentation policies are not static. You need to look at them as living, breathing things that change over time. Essentially, this is not a "set it and forget it" type of policy. Your organization must get on the same page and define who is responsible for what in terms of continuous management. Whose job is it to manage the day-to-day tasks associated with managing, monitoring, and enforcing these policies? Who will adjust the policies as a new application, system, or database is deployed in your infrastructure? Oftentimes, there is a disconnect between IT security teams and networking organizations, so ownership of specific tasks and responsibilities must be defined in the early stages, so everyone knows what's expected of them.

Step 5 — Remove Segmentation Policies From Your Network Topology

As previously mentioned, microsegmentation that relies on VLANs, firewalls, and IP addresses is largely ineffective. Address-based policies don't really work for enterprise cloud environments, and they're extremely cumbersome for administrators to manage at scale. Look for ways to apply the different types of microsegmentation examples that we discussed earlier, and put those in place of security policies relying on addresses.
I’m referring to application segmentation, environment segmentation, user segmentation, process-based nano-segmentation, and application tier segmentation. This helps ensure that your microsegmentation policy is resistant to attacks and breaches without burdening your administrative resources.
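To illustrate the address-free policies that Step 5 calls for, here is a minimal sketch of a label-based rule check. The labels, addresses, and the single rule are invented for illustration: workloads carry descriptive labels (app, env, tier), and the policy references those labels rather than IP addresses.

# Workloads are identified by labels, never by IP address.
INVENTORY = {
    "10.0.1.17": {"app": "billing", "env": "prod", "tier": "web"},
    "10.0.2.44": {"app": "billing", "env": "prod", "tier": "db"},
    "10.0.9.80": {"app": "analytics", "env": "dev", "tier": "web"},
}

# One rule: the prod billing web tier may talk to the prod billing db tier.
POLICY = [
    {"src": {"app": "billing", "env": "prod", "tier": "web"},
     "dst": {"app": "billing", "env": "prod", "tier": "db"}},
]

def matches(labels: dict, selector: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(src_ip: str, dst_ip: str) -> bool:
    src, dst = INVENTORY.get(src_ip, {}), INVENTORY.get(dst_ip, {})
    return any(matches(src, r["src"]) and matches(dst, r["dst"]) for r in POLICY)

print(allowed("10.0.1.17", "10.0.2.44"))  # True
print(allowed("10.0.9.80", "10.0.2.44"))  # False: dev analytics may not reach prod db

Because the rule references labels, the same policy keeps working when workloads move or receive new addresses; only the inventory mapping changes.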
Artificial Intelligence (AI) has emerged as a transformative force in various industries, revolutionizing the way businesses operate and empowering them to achieve new levels of efficiency and innovation. With the rapid advancements in AI technology, organizations are discovering the immense potential of AI solutions to solve complex problems and drive growth. In this article, we will explore the computational capabilities of AI that are reshaping the world and discuss how businesses can leverage these solutions to gain a competitive edge. Understanding AI Solutions AI solutions encompass a wide range of technologies and algorithms that enable machines to perform tasks that typically require human intelligence. These solutions utilize computational power to analyze vast amounts of data, learn from patterns, and make intelligent decisions. By leveraging machine learning, natural language processing, computer vision, and other AI techniques, businesses can automate processes, extract valuable insights, and deliver personalized experiences to their customers. AI solutions have the potential to transform various industries, revolutionizing the way businesses operate and delivering substantial benefits. Let’s explore some prominent sectors where AI is making a significant impact: AI is revolutionizing healthcare by enabling accurate diagnosis, personalized treatment plans, and efficient patient care. AI-powered algorithms can analyze medical records, images, and genetic data to identify patterns and provide early detection of diseases. Furthermore, AI solutions can assist in drug discovery, robotic surgery, and remote patient monitoring, enhancing healthcare outcomes and improving patient experiences. In the finance industry, AI solutions are streamlining processes, reducing costs, and enhancing risk management. AI-powered chatbots and virtual assistants are improving customer service by providing personalized recommendations and resolving queries in real-time. Additionally, AI algorithms can analyze vast amounts of financial data to detect fraudulent activities and make data-driven investment decisions. AI is revolutionizing the manufacturing industry by enabling predictive maintenance, optimizing production processes, and ensuring quality control. AI-powered systems can analyze real-time data from sensors and machines to predict equipment failures, reducing downtime and maximizing productivity. Furthermore, AI solutions can automate repetitive tasks, improving operational efficiency and enabling manufacturers to meet customer demands effectively. In the retail sector, AI solutions are enhancing customer experiences, enabling personalized recommendations, and optimizing inventory management. AI-powered chatbots and virtual assistants can interact with customers, provide product information, and assist in purchasing decisions. Moreover, AI algorithms can analyze customer data to predict buying patterns, allowing retailers to offer targeted promotions and optimize their product offerings. Leveraging AI Solutions To harness the full potential of AI solutions, businesses need to adopt a strategic approach. Here are some key considerations for successfully leveraging AI in your organization: - Identify Pain Points. Identify the areas in your business where AI solutions can add significant value. Whether it’s automating manual processes, improving customer experiences, or enhancing decision-making, understanding your pain points will help you prioritize AI initiatives effectively. 
- Data Collection and Preparation. AI solutions heavily rely on data, making data collection and preparation a crucial step. Ensure that you have a robust data infrastructure in place to collect, store, and process relevant data. Additionally, invest in data cleaning and preprocessing techniques to ensure the quality and accuracy of the data used for training AI models. - Choose the Right AI Solution. There are various AI solutions available, each tailored to specific use cases. Conduct thorough research and select the AI solution that aligns with your business objectives and requirements. Whether it’s machine learning algorithms, natural language processing, or computer vision, choose the solution that best addresses your needs. - Collaborate with Experts. Implementing AI solutions can be complex, requiring expertise in AI algorithms, data science, and software development. Collaborate with AI experts or partner with AI solution providers who can guide you through the implementation process and ensure the successful integration of AI into your business. - Monitor and Evaluate. Once AI solutions are implemented, it’s crucial to monitor their performance and evaluate their impact on your business. Continuously monitor the AI models, collect feedback, and refine them to improve accuracy and efficiency. Regularly evaluate the outcomes and make necessary adjustments to maximize the benefits of AI solutions. While AI solutions offer immense potential, there are challenges that businesses may encounter during their implementation. Some common challenges include: - Data Privacy and Security. As AI relies on vast amounts of data, organizations need to ensure that privacy and security measures are in place to protect sensitive information. - Ethical Considerations. AI solutions raise ethical concerns, such as bias in algorithms and the impact on employment. Businesses must address these concerns and ensure responsible and ethical use of AI. - Change Management. Implementing AI solutions requires a cultural shift and change management strategies. Employees need to be trained and educated to embrace AI and understand how it can enhance their roles and responsibilities. - Integration Complexity. Integrating AI solutions with existing systems and infrastructure can be complex. Businesses need to carefully plan the integration process and ensure compatibility to avoid disruptions. The Future of AI Solutions As AI technology continues to advance, the future holds immense possibilities for AI solutions. Here are some key trends shaping the future of AI: - Explainable AI. Explainable AI focuses on developing AI systems that can provide understandable explanations for their decisions and actions. This transparency is crucial for building trust in AI and ensuring ethical and accountable use. - Edge Computing. Edge computing places AI capabilities in close proximity to the data source, reducing latency and facilitating instant decision-making. This trend is particularly significant for applications that require low latency. - AI and The Internet of Things (IoT). The integration of AI with IoT devices allows for intelligent automation and decision-making at the edge. AI algorithms can analyze data from interconnected devices, enabling predictive maintenance, smart home automation, and efficient resource management. - Reinforcement Learning. Reinforcement learning is an AI technique that enables machines to learn through trial and error. 
This approach can revolutionize areas such as robotics, gaming, and autonomous systems by enabling machines to learn and improve from their experiences. AI solutions are transforming the world, empowering businesses to achieve new heights of efficiency, automation, and innovation. By leveraging AI technologies, organizations can streamline processes, deliver personalized experiences, and gain a competitive edge in today’s rapidly evolving business landscape. As AI continues to advance, businesses must embrace these solutions strategically, overcome challenges, and stay at the forefront of AI innovation to unlock the full potential of this transformative technology.
Organizations today often require many servers in different physical locations, each operating at their highest capacity to drive efficiency and ROI. This has been made possible with the use of virtualization technologies that allow a single physical server to run multiple virtual machines, each with its own guest operating system. Virtualization technology has its origins in the 1960s, with work that was done at IBM on time-sharing of mainframe computers. VMware didn't arrive on the scene until the late 1990s. In 2001, VMware introduced ESX Server, a Type-1 hypervisor. This technology doesn't require a host operating system to run virtual machines. Instead, the hypervisor runs on the "bare metal." Today, it's standard practice to use virtualization to increase the utilization of the computing resources in the computer. There are several different types of virtualization, including server, network and desktop virtualization.

Server Virtualization – This allows for the creation of multiple virtual server instances on a single physical server. This means companies can run multiple business applications, each in its own virtual machine (VM), on these servers. The increased server utilization allows businesses to buy fewer physical machines, and it also reduces the power and cooling costs associated with running datacenter servers.

Network Virtualization – This involves separating network resources from hardware and recreating them on a single, software-based administrative unit. Applications can run on a virtual network as if they were running on a physical network. The physical hardware, though still required, need not be reconfigured when a new virtual machine is added to the network or moved to a different part of the network. Networks can be cloned and recreated in seconds with network virtualization.

Desktop Virtualization – Virtualization of desktops is done to create a virtual version of the workstation, along with its operating system, that can be accessed remotely.

What Is Hyper-V and How Does It Work?
Microsoft's hardware virtualization product, Hyper-V, enables you to create and run a software version of a computer, called a virtual machine (VM). A Hyper-V host can run multiple virtual machines, each with its own operating system (OS), allowing multiple OSes to run alongside each other on one computer. This eliminates the need to dedicate a single machine to a specific OS. Microsoft Hyper-V is also a Type-1 hypervisor. In Hyper-V, there is a parent partition and any number of child partitions. The host OS runs in the parent partition. Each child partition is a VM that is a complete virtual computer, with a guest OS (which need not be a Microsoft OS) and programs running on it. The VMs use the same hardware resources as the host. A single Hyper-V host can have many VMs created on it.

Why Is Hyper-V Used?
Hyper-V allows you to use your physical hardware more effectively by running multiple workloads on a single machine. It lets you use fewer physical servers, thereby reducing hardware costs and saving space, power and cooling costs. With Hyper-V, you can set up and scale your own private cloud environment. Many organizations use Hyper-V to centralize the management of server farms. This allows them to control their VMs efficiently and reduce the time spent on IT infrastructure management.

What Does Hyper-V Consist of?
Hyper-V includes multiple components that make up the Microsoft virtualization platform.
These include:
- Windows Hypervisor
- Hyper-V Virtual Machine Management Service
- Virtualization WMI provider
- Virtual machine bus (VMbus)
- Virtualization service provider (VSP)
- Virtual infrastructure driver (VID)

Additional Hyper-V tools that need to be installed include:
- Hyper-V Manager
- Hyper-V module for Windows PowerShell
- Virtual Machine Connection (VMConnect)
- Windows PowerShell Direct

Hyper-V is available in three versions:
- Hyper-V on Windows 10
- Hyper-V Server
- Hyper-V on Windows Server

What Are the Benefits of Hyper-V?
There are many benefits of Hyper-V, a few of them being:

High Scalability and Flexibility
With the installation of Hyper-V on a private cloud environment, organizations can be more flexible with their on-demand IT services and expand when required. Hyper-V puts existing hardware to the maximum use, ultimately reducing costs and increasing efficiency.

Having multiple instances of virtual servers minimizes the impact of sudden downtime, which means system availability increases and companies can improve business continuity.

Hyper-V safeguards VMs from malware and unauthorized access, making your IT environment and your data more secure.

What Is VMware?
VMware is also a hypervisor-based virtualization technology that allows you to run multiple virtual machines on the same physical hardware. Each VM can run its own OS and applications. As a leader in virtualization software, VMware allows multiple copies of the same operating system, or several different operating systems, to run on the same x86-based machine.

Hyper-V vs. VMware: What Are the Differences?
Hyper-V and VMware each have their own advantages and disadvantages, and choosing between the two depends on your specific business requirements. Let's take a look at a few of the noticeable differences when it comes to their product maturity, complexity and pricing.

Hyper-V | VMware
Hyper-V supports Windows, Linux and FreeBSD guest operating systems. | VMware supports Windows, Linux, Unix and macOS guest operating systems.
Hyper-V's pricing depends on the number of cores on the host and may be preferred by smaller companies. | VMware charges per processor, and its pricing structure might appeal to larger organizations.
Hyper-V's Cluster Shared Volume is somewhat more complex and more difficult to use than VMware's storage deployment system. | VMware's Virtual Machine File System (VMFS) holds a slight edge when it comes to clustering.
Hyper-V uses a single memory technique called "Dynamic Memory." Using the dynamic memory settings, Hyper-V virtual machine memory can be added or released from the virtual machine back to the Hyper-V host. | VMware implements a variety of techniques, such as memory compression and transparent page sharing, to ensure that RAM use in the VM is optimized. It is a more complex system than Hyper-V's memory technique.

Discovery, Mapping and Monitoring of VMware and Hyper-V Environments
Your endpoint management tool should discover and include VMware and Hyper-V hosts and VMs on its network topology map. This provides the visibility you need to effectively manage your entire IT infrastructure. In addition, the ability to monitor hosts and VMs allows IT teams to use the endpoint management solution to maintain high uptime and performance of VMs. Kaseya VSA monitors and manages both VMware and Hyper-V infrastructure efficiently.
With endpoint management and network monitoring in one platform, Kaseya VSA monitors everything including traditional endpoints (servers, desktops, laptops), SNMP network devices and printers. Learn more about Kaseya VSA by requesting a demo.
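Before adopting a full monitoring platform, administrators often script quick one-off checks themselves. The sketch below shells out to the Get-VM cmdlet from the Hyper-V PowerShell module to list VMs and their state; it assumes a Windows host with that module installed and sufficient privileges, and is only a rough stand-in for what a managed platform does continuously.

import json
import subprocess

def list_hyperv_vms():
    # Get-VM is part of the Hyper-V PowerShell module; ConvertTo-Json gives
    # structured output instead of formatted text.
    cmd = ["powershell", "-NoProfile", "-Command",
           "Get-VM | Select-Object Name, State | ConvertTo-Json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    vms = json.loads(out)
    return [vms] if isinstance(vms, dict) else vms  # a single VM comes back as one object

for vm in list_hyperv_vms():
    # Note: State may serialize as an enum integer depending on PowerShell version.
    print(vm["Name"], "->", vm["State"])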
Take a step back and think about how many products and services we use multiple times every day of our lives that rely on cloud computing. It's staggering. While many of the underlying frameworks and concepts of cloud computing existed as far back as the 1960s, it wasn't until the early to mid-2000s that "the cloud" as we know it today really took shape. Now, whether it's putting work files on Dropbox, streaming the latest shows and movies, or telecommuting and working remotely from home -- it's hard to imagine a world without the cloud, which has only become more mainstream, efficient, and powerful over the past few years. But while the cloud may have been the seminal driving technology force of the 2000s, 5G is poised to be the next powerful tech driver through the 2020s and beyond. What effects will the rollout and general mainstream adoption of 5G have on cloud computing and the ever-growing remote workforce?

Initial tangible effects of 5G on the cloud
For the unaware, 5G is the freshly unveiled mobile telecommunications standard that provides significant speed increases over existing 4G LTE networks. The speed increases are substantial, and that's putting it mildly. Current 4G LTE connections typically deliver around 5-12 megabits per second, whereas the 5G standard targets peak rates on the order of 20 gigabits per second. To put that into a framework most can better understand, that is the difference between downloading an HD movie from Netflix to watch later in a matter of seconds, not minutes. What does this massive increase mean? During these initial roll-out stages, it simply enables our connected devices to communicate at speeds that are orders of magnitude faster than before -- but they'll still be connecting to a centralized cloud network. As 5G grows, the networks become more stable, and the devices on the networks themselves become faster and faster. Will they still even need to connect to the cloud?

Worst-case scenario: 5G kills the cloud
As might be gleaned from the title of this subsection, there are some in the tech industry who believe the answer to the question in the previous sentence is no and that the cloud as we know it will become obsolete. One such voice in that camp is Jon Markman, who earlier this year in a Forbes piece predicted, "Longer term, blazing-fast wireless networks have the potential to eliminate the cloud as a computing platform. A post-cloud world would involve billions of autonomous smart devices. They would process and act on the data they collect at the edge of the network, in real time." Markman argues that one of the primary advantages of the cloud -- killing latency with a decentralized infrastructure -- is rendered obsolete by the speeds 5G brings to end users and their devices. As might be imagined, there are some who disagree with this.

Best-case scenario: 5G takes the cloud to new heights
As mentioned previously, rural areas in the US and all over the globe are still -- in 2019 -- having trouble getting access to reliable high-speed Internet such as DSL, cable, or T1. Due to the physical work involved with establishing new lines, there are still nearly 20 million Americans with no high-speed Internet options, with nearly 75% of those living in rural areas. While those areas might be among the last to get 5G, it may very well come before hardwired options. Imagine the jump from high-latency connections of a few megabits per second to multi-gigabit 5G speeds; the quick arithmetic below shows how large that jump is.
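A back-of-the-envelope calculation makes the gap concrete. The figures below reuse the hedged rates from the paragraphs above (12 Mbps as a typical 4G LTE rate, 20 Gbps as the 5G peak target) and assume a 5 GB HD movie; all three numbers are illustrative, not measurements.

FILE_GB = 5                       # assumed HD movie size
bits = FILE_GB * 8 * 10**9        # gigabytes -> bits

for label, mbps in [("4G LTE, typical", 12), ("5G, peak target", 20_000)]:
    seconds = bits / (mbps * 10**6)
    print(f"{label}: {seconds/60:.0f} min" if seconds >= 60 else f"{label}: {seconds:.0f} s")

# Output: roughly 56 minutes at 12 Mbps versus about 2 seconds at 20 Gbps,
# which is exactly the "seconds, not minutes" difference described above.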
That jump will be a boon not only for individuals and their personal uses, but also for business owners who haven't been able to take advantage of cloud computing due to infrastructure barriers. To summarize an article from Dave Linthicum, it's not too much of a stretch to say it's like going from candlelight to electricity. Warehouses can now be filled with IoT-enabled devices to get an instant update on inventory. Cutting-edge business analytics will now be available. Employers can expand the training services offered to new recruits from on-site only to online, ensuring their operation keeps up with the latest trends.

5G and the remote workforce
Instead of diving deeper into 5G and the cloud, let's instead think about how 5G will impact a segment of the workforce that is growing by the day -- remote employees who telecommute. Since 2005, telecommuting has increased by approximately 115%, and as we all know, a big part of that rise has been due to advances in technology and connectivity. The Internet has gotten faster. Video conferencing software has improved dramatically. The jobs more and more people are trying to get are very telecommuting-friendly. Many top companies are increasingly hiring remote staff to ensure they cast a wide net to attract the best talent. Working remotely requires the employee to be in an area with reliable, high-speed Internet. Typically, that means the home office or a shared public workspace. But with 5G and its incredible speeds, workers may soon be able to have connections speedy enough to truly work remotely -- on the road, in a hotel room, at a park. Wherever you can get 5G, you can do the things telecommuters need to do in today's workforce (collaborate in real time, share and transfer large files, have seamless video conferencing, etc.).

As the 5G rollout makes steady progress around the world, it's exciting to think where this breakthrough will take us. While the effect on cloud computing remains somewhat of a debate, what's not up for debate is that having multi-gigabit speeds in the palm of your hand is going to be a boon for business owners, remote workers, and everyone else who enjoys the Internet.

Marty Puranik is the founder, president, and CEO of Atlantic.Net, a profitable and growing hosting solutions provider in Orlando. His strengths as a leader and visionary have helped him lead a successful business for over two decades. Atlantic.Net thrives thanks to Marty's strategic acumen, technical prowess, and his valuable, old-fashioned habits of thrift, modesty, and discipline. Under his leadership, the company has grown from a local Florida business to a global cloud services company serving more than 15,000 business customers in over 100 countries, backed by seven global data centers.
If there was a single conceptual buzz at the just-concluded IOT Solutions World Congress in Barcelona, Spain, it was how artificial intelligence (AI) might allow connected devices, aka the internet of things (IoT), to analyze incoming data and react to it autonomously, an approach that is being dubbed AIoT. While AIoT is still an evolving notion, it aims to build a closer link between information technology (IT) and operational technology (OT), a marriage that the OT side often has been reluctant to consummate. Real-life circumstances are driving the need, however.

A somewhat extreme example that might be described as AI at the edge involves locomotives that are increasingly being operated remotely without the benefit of an onboard crew. A remote control operator (RCO) costs significantly less than a trained engineer and may even handle a workload that formerly required two or three people onboard. In certain regions of the world, the multiple locomotives used for a mile-long train may be out of touch for days with datalinks like cellular networks. The problem, relates Mark Kraeling, solutions architect for GE Transportation, is that the locomotives are sometimes the target of thieves, as each one may carry as much as 17,000 liters of fuel. AI can autonomously monitor fuel sensors, and if it detects a sudden fuel drop, dispatch a drone from the locomotive to investigate – a move that often is enough to scare off would-be bandits in pickup trucks. (Can a weaponized locomotive with AI riding shotgun be far behind?) This is where the cyberworld meets real life.

Meanwhile, creating a direct link between a virtual asset and a physical one is the goal of a large-scale industrial IoT transformation at Volvo. According to Julien Bertolini, the marque's IoT expert, Volvo has switched automation gears to an approach that grants autonomy to software users so that they can solve their own problems, as opposed to having it done by a centralized solution center. Any shop-floor user is able to connect to any piece of industrial equipment and visualize the data on a modern interface. The project is still ongoing as Volvo creates a digital twin for plant operations that will include the increased use of AI. Ultimately, the goal is to allow AI to react quickly and autonomously to the mountains of data collected by sensors during the manufacturing process, for example. Applications also are foreseen in other fields like building security, well represented by a myriad of companies on the exhibit floor. Generative AI is seen as the operating language for communication between humans and AIoT.

The IOT Solutions World Congress (May 21-22, 2024) also recognized a number of key innovations with Industry Solutions Awards. These included:
- Loctite Pulse for its use of sensors and an app that constantly monitors plant operations
- Barbara for its application of adaptive AI for the monitoring of chemical supplies at Acciona's 89 desalination plants
- Calfer for an all-in-one automotive platform that addresses driver needs including insurance, parking, maintenance and refueling
- Topazium for its SkinGuard app for early detection of malignant skin lesions
- MClimate for a smart radiator thermostat that improves energy efficiency
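Returning to the locomotive example: stripped to its core, the autonomous response described above is an edge rule over a fuel-level time series. The sketch below is illustrative only; the threshold, the reading cadence, and the dispatch_drone hook are invented names standing in for whatever the onboard system actually exposes.

from collections import deque

WINDOW = 60           # keep the last 60 readings (roughly one per second)
DROP_L_PER_MIN = 5.0  # hypothetical theft threshold; normal burn is far slower

readings = deque(maxlen=WINDOW)  # (timestamp_seconds, liters) pairs

def on_fuel_reading(ts: float, liters: float, dispatch_drone) -> None:
    readings.append((ts, liters))
    if len(readings) < 2:
        return
    (t0, l0), (t1, l1) = readings[0], readings[-1]
    minutes = (t1 - t0) / 60
    rate = (l0 - l1) / minutes if minutes > 0 else 0.0
    if rate > DROP_L_PER_MIN:
        # Sudden drop: act locally, with no backhaul link required.
        dispatch_drone(reason="fuel-drop", rate_l_per_min=rate)

The decisive property is that the rule runs and acts on the locomotive itself, which is what lets it work through the multi-day connectivity gaps the article describes.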
The DynECT REST API allows for simple HTTPS interaction, with a variety of data encoding formats available. REST, short for Representational State Transfer, is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The general approach that Dyn's DNS API uses for REST maps each of the common CRUD (Create, Read, Update, Delete) operations to one of the HTTP functions. The help topics are named by function key words. For example: REST\Contact POST = Create a new Contact.

CRUD Operations, their equivalent HTTP Function, and Help Topic Key Word
CRUD Operation | HTTP Function | API Help Topic Key Word
Create | POST | Create
Read | GET | Get
Update | PUT | Update, Activate, Deactivate, Reset, or Recover
Delete | DELETE | Delete or Remove

For more information, see the Wikipedia article on REST.

There are a few things that you will need to know about connecting to the DynECT REST API.

1. Content Type
Supported content types for the REST API:
- JSON
- XML
- YAML
- HTML/Query String (limited functionality, see note below)

1. Select your content type from the list.
2. Make sure that the appropriate MIME type for your chosen format is passed with each request as the HTTP header "Content-Type".

NOTE: The HTML/Query String content-type has limited functionality, as there is not a consistent way to represent nested data structures.

2. Establishing a Session
While this may be a slight departure from true REST philosophy, for security purposes the DNS API requires that a session be established. This is accomplished using the Session URI: /REST/Session/. Establishing a new session is as easy as:
- Send a POST request with the Content-Type header as described above
- To the Session URI (/REST/Session/)
- Pass in your credential fields as arguments

If you forget one of these fields, the response will let you know. If successful, you will receive a "token" under the "data" element of the response. You have now established a session.

NOTE: It is very important that any subsequent requests to the DynECT API during the session include the following in the HTTP header:
1. "Auth-Token" = the token value received in the data element of the response.
2. "Content-Type" = the content type chosen in the Content Type section of this document.

3. Using a Query String
REST will accept basic arguments, both scalar and array, using a query string instead of or in addition to the arguments specified in the body of the call. The syntax for the query string is as follows:
- ? – marker at the end of the URI to identify the start of the query.
- Use %20 to signify a space in the URI query. Example: argument%201=value%201&argument%202=value%202
- Nested values may not be placed within the query – use the body of the call.

NOTE #1: If you enter the same argument in both the query string AND the body of the call, the body argument value will override the query string value.
NOTE #2: Nested argument structure is too complicated to use the URI query string format. Enter any nested arguments in the body of the call.

Examples of Query String arguments:
/REST/Object?foo=bar
/REST/Object/identifier?foo=bar
/REST/Object/identifier?foo=bar&love=cheese
/REST/Object/identifier?foo=bar&foo=bar2

With a very few exceptions (outlined below), the top level of the response data structure for all requests will look the same.
They will contain the following items:
job_id: The ID of the job that was created in response to a request.
status: One of 'success', 'failure', or 'incomplete'.
msgs: A list of zero or more messages (structure described below) regarding the results.
data: The structure containing the actual results of the request. Format depends on the request made.

The messages contained within the msgs field will contain:
SOURCE: A debugging field. If reporting an error to your Dynect Concierge, be sure to include this.
LEVEL: The severity of the message. One of: 'FATAL', 'ERROR', 'WARN', or 'INFO'.
INFO: The actual message itself.
ERR_CD: An error code (if appropriate) regarding the message. Valid values:
- DEPRECATED_REQUEST: The requested command is deprecated
- ILLEGAL_OPERATION: The operation is not allowed with this data set
- INTERNAL_ERROR: An error occurred that cannot be classified
- INVALID_DATA: A field contained data that was invalid
- INVALID_REQUEST: The request was not recognized as a valid command
- INVALID_VERSION: The version number passed in was invalid
- MISSING_DATA: A required field was not provided
- NOT_FOUND: No results were found
- OPERATION_FAILED: The operation failed to complete successfully
- PERMISSION_DENIED: This user does not have permission to perform this action
- SERVICE_UNAVAILABLE: The requested service is currently unavailable
- TARGET_EXISTS: Attempted to add a duplicate resource
- UNKNOWN_ERROR: An error occurred that cannot be classified

Each request to the DynECT API will cause a "job" to be created. The Job ID will be one of the fields in a response. You can use the /REST/Job/ resource to retrieve the results of a job so long as your session is active.

Long Running Jobs: If a request takes longer than 5 seconds to generate data, the API will instead respond with a Temporary Redirect (HTTP 307) and the URI of the Job. Your program can then periodically check the job results until they become available. It is recommended that a GetJob command be run no more than once every 5 seconds, as running it more frequently will lead to the maximum number of redirects and produce an error. The job's status will be 'incomplete' until it finishes. This is designed to keep your programs from hanging while awaiting their response.

NOTE: The API will not permit another request to begin until the previous one has finished.

In an attempt to keep the API from having unexpected changes, the API protocol supports specifying a version. Calls made against a particular version of the API should stay reasonably consistent over time. If a requester does not specify a version in a request, the most recent version will be used. The current version will also be returned as part of the response data on a successful POST to /REST/Session/. This does not limit a specific session to a particular version – each request can specify the version of the API to use. In order to specify a particular version of the API to use with the REST interface, specify it in each request as the HTTP header "API-Version". Specifying a version that is unknown to the API service will result in an error.

Nearly all functionality that is available via the Dynect web interface is also available via the API.
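Putting the pieces together, here is a minimal sketch of the session workflow in Python using the requests library. The endpoint URL, the credential field names (customer_name, user_name, password), and the zone resource are assumptions drawn from Dyn's published documentation rather than stated in the text above.

import time
import requests

BASE = "https://api.dynect.net"  # assumed endpoint
HEADERS = {"Content-Type": "application/json"}

# 1. Establish a session: POST to /REST/Session/ returns a token under "data".
creds = {"customer_name": "acme", "user_name": "apiuser", "password": "secret"}
resp = requests.post(f"{BASE}/REST/Session/", json=creds, headers=HEADERS)
resp.raise_for_status()
HEADERS["Auth-Token"] = resp.json()["data"]["token"]  # 2. sent on every later call

# 3. Call a resource; disable auto-redirects so a 307 (long-running job) is visible.
r = requests.get(f"{BASE}/REST/Zone/", headers=HEADERS, allow_redirects=False)
if r.status_code == 307:
    job_uri = r.headers["Location"]  # assuming a relative Job URI
    while True:
        time.sleep(5)                # poll at most once every 5 seconds, as advised
        result = requests.get(f"{BASE}{job_uri}", headers=HEADERS).json()
        if result["status"] != "incomplete":
            break
else:
    result = r.json()
print(result["status"], result["data"])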
Imagine a world without electricity, where the gas that warms our homes suddenly stops flowing. It’s not just an inconvenience; it’s a potential catastrophe. As we navigate the digital age, the energy sector is a critical infrastructure that has become the backbone of our modern existence. However, this very dependence on essential services makes it a prime target for cyberthreats emanating from a variety of sources, ranging from nation-state actors to hacktivists and cybercriminals. Today, we’re spotlighting a critical battleground that silently powers our daily lives— the energy sector. The energy sector, a cornerstone of modern society, faces a rising tide of cyberthreats from nation-state actors, cybercriminals, and hacktivists. As we increasingly intertwine our lives with digital conveniences, the vulnerabilities within the energy sector become more pronounced. The looming specter of cyberthreats threatens not just convenience, but the very essence of our existence—electricity, gas, and the essentials we often take for granted. Join us as we navigate the complex landscape of securing critical infrastructure in the energy sector and unveil the importance of fortifying these digital bastions against an array of adversaries. Challenges in the Energy Sector As we delve into the cyber battleground of the energy sector, we encounter a complex terrain riddled with unique challenges that demand our attention. One of the foremost hurdles lies in the sector’s intricate geographic and organizational complexity. The expansive networks that span cities, regions, and sometimes entire nations create a vast attack surface, making it challenging to monitor and fortify every digital nook and cranny. Beyond the geographical complexity, the energy sector grapples with the profound interdependencies between physical and cyber infrastructure. This interconnectedness, while boosting efficiency, simultaneously opens the door to vulnerabilities that can be exploited by malicious actors with potentially catastrophic consequences. Electric Power and Gas Sector Take, for instance, the electric-power and gas sector, where the symbiotic relationship between physical and cyber elements becomes glaringly apparent. The advent of wireless “smart meters” has ushered in new possibilities for efficiency but has also given rise to insidious threats. These devices, designed to streamline billing processes, have become potential vectors for exploitation, leading to instances of billing fraud that can wreak financial havoc on energy companies and consumers alike. Moving beyond financial threats, the commandeering of operational-technology (OT) systems poses a tangible risk. Imagine a scenario where malicious actors infiltrate these systems to halt multiple wind turbines. The resulting disruption not only impacts the energy supply but also threatens the delicate balance of power grids, potentially plunging entire regions into darkness. In more alarming cases, the specter of physical destruction looms. The convergence of cyber and physical realms means that a well-executed cyberattack could extend beyond the digital realm, causing real-world damage. The potential for disrupting critical infrastructure and, by extension, the lives of those dependent on it, highlights the gravity of the challenges faced by the energy sector. In the face of these intricate challenges, it becomes imperative to not only understand the vulnerabilities but also to develop robust strategies that safeguard against exploitation. 
Forward-Looking Approach to Security In cybersecurity, a reactive stance is no longer sufficient. Companies within the energy sector must adopt a forward-looking approach that anticipates and mitigates potential threats before they manifest. This begins with recognizing that cybersecurity is not merely a standalone function but an integral aspect that should be interwoven into the fabric of critical business decisions. To achieve this integration, companies can start by embedding the security function into the very core of their strategic planning, especially when contemplating corporate expansion. While essential for growth, the expansion of infrastructure and geographic reach often introduces new avenues for cyberthreats. Companies can proactively identify and address potential vulnerabilities by treating cybersecurity as an indispensable aspect of expansion plans. For instance, security should be a foundational consideration rather than an afterthought when designing new infrastructure and systems. Companies can employ a secure-by-design philosophy, where architects and engineers work hand-in-hand with cybersecurity experts from the inception of a project. This collaborative approach ensures that potential threats are recognized and mitigated during the development phase, reducing the risk of exploitable vulnerabilities in the final product. Moreover, a forward-looking approach necessitates making security an integral component of all business decisions. Whether it’s a change in operational processes, the adoption of new technologies, or even partnerships and collaborations, cybersecurity considerations should be woven into the decision-making fabric. By fostering a security-conscious culture across all levels of the organization, companies can create a proactive defense mechanism that adapts to the dynamic threat landscape. Critical Infrastructure Scenario Consider a scenario where an energy company is evaluating a partnership with a third-party technology provider. Instead of viewing this decision solely through the lens of potential benefits, a forward-looking approach would involve a comprehensive cybersecurity assessment of the partner’s systems and practices. This ensures that the alliance strengthens rather than compromises the overall security posture. In essence, a forward-looking approach to security transcends the traditional boundaries of cybersecurity, embedding it into the DNA of corporate strategy. By doing so, companies not only fortify their defenses against emerging threats but also pave the way for sustainable growth in an era where cyber resilience is as crucial as physical infrastructure. Empowering Cyber Resilience in the Energy Sector As we conclude our journey through the intricate landscape of cybersecurity challenges in the energy sector, the imperative for fortifying against cyberthreats looms larger than ever. The vulnerabilities within the energy sector, integral to our daily lives, demand a united front against the rising tide of nation-state actors, cybercriminals, and hacktivists. In recognizing the gravity of these challenges, Blue Team Alpha stands ready to be your strategic ally in safeguarding critical infrastructure. With decades of experience in breach investigations spanning all 16 critical infrastructure sectors, our experts bring a profound understanding of diverse industry-specific challenges and potential threats. 
What sets us apart is not just our expertise but the unique insight derived from a significant portion of our team being former nation-state-level employees. This background ensures a strategic and sophisticated approach to cybersecurity practices at the highest levels. As you navigate the evolving threat landscape in the energy sector, contact Blue Team Alpha’s cybersecurity experts to fortify your defenses, leveraging our unparalleled experience and nation-state-level insights. Together, let’s empower cyber resilience and secure the very essence of our existence—electricity, gas, and the essentials we often take for granted. Reach out today and embark on a journey towards a more secure and resilient digital future.
Up to nine miles of coastline in Santa Barbara have been transformed in recent days after 105,000 gallons of oil spilled from a ruptured pipeline over a period of three hours. Officials estimate that about 20,000 gallons of the gooey black oil made it to the sea, while the rest stayed on land. About 300 local and federal responders have been on the scene, working to haul as much oil as possible from the ocean in buckets. But so far, their efforts have mopped up only about 7,000 gallons of the spilled oil in the ocean. The company responsible for the spill has been identified as Plains All American Pipeline. The company's CEO apologized for the accident and promised to stay on the scene until the cleanup is complete, but there's been no word on how long that might take. Ironically, a similar spill from a blowout on an oil platform happened on the same stretch of beach in 1969. Because of the devastation to sea life from that incident — it unleashed millions of gallons of oil and killed thousands of seabirds and marine animals — it is credited with launching the environmental movement in the U.S. As environmentalists now work to assess the damage from this latest accident, the ill effects on marine life are already becoming clear. Reuters and The Associated Press have been on the scene capturing the stunning images shown here.
AI solutions hold enormous potential. Here's a news flash: if you ask Siri to read your texts and emails, find the nearest department store, or call a friend for you, then you have already made AI a part of your everyday life. It is a fact that AI seems to be becoming our future. Some live examples are current weather forecasting systems, spam filtering programs, and Google's search engine, all of which are AI-powered.

Know a little about artificial intelligence
Call it a product of human intelligence: artificial intelligence can replicate the cognitive functions of a human being. Artificial intelligence is a thing that can learn and solve problems in no time. It is a broad term that is commonly split into three categories:

Assisted intelligence. This is the automation of regular or necessary tasks. The best example of assisted intelligence is machines in assembly lines.

Augmented intelligence. This type of intelligence cannot yield direct results without human intervention; it helps us make better decisions based on the information AI provides.

Autonomous intelligence. This keeps humans out of the loop. Examples of autonomous intelligence are self-driving cars and autonomous robots.

One sector that attracts users and most of the benefits of AI is the business sector. AI solutions have the potential to improve business decision-making processes significantly. AI has started becoming a crucial element of enterprise applications and is working toward strengthening successful business strategies. You might have seen AI-based enterprise apps in computerized billing systems, salesforce automation, office productivity suites, resource planning, process management, and IT compliance. So, it is very likely that the revenue of AI-based enterprise applications will grow exponentially by 2025. Let's see how enterprise AI-based applications support better business decisions.

The term data mining is already popular in the industry, but did you know about another similar term, opinion mining? Opinion mining involves analyzing sentiments and emotions, relying on Natural Language Processing (NLP), computational linguistics, and biometrics to extract quality information. AI has helped analyze and collect opinions and customer feedback as quantifiable information in almost no time.

Customer Relationship Management (CRM)
CRM and artificial intelligence are anticipated to be a prosperous combination. The use of artificial intelligence helps in understanding different customer needs and also supports building the right solution to the problem. Large amounts of data can be processed very quickly to help answer customer questions. The best CRM and AI results are indicated in areas such as forecasting, predictive lead scoring, natural language search, and recommendations. In CRM, an AI-powered chatbot can surface a preliminary analysis and give smart answers about customers based on available data. Organizations with AI-equipped CRM have been shown to generate valuable results.

It is essential to understand what your customers are looking for; the next step is to adapt marketing decisions accordingly. Artificial intelligence helps businesses understand those customer needs and make proper marketing decisions from the available data. Artificial intelligence records everything that can help convert customers, from the time spent in apps to the choices they make.
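To ground the opinion-mining idea discussed above, here is a deliberately tiny lexicon-based sentiment scorer. Real opinion mining uses trained NLP models; this toy version, with an invented five-word lexicon, only shows how free-text feedback becomes a quantifiable signal.

LEXICON = {"great": 1, "love": 1, "fast": 1, "slow": -1, "broken": -2}

def score(feedback: str) -> int:
    # Sum word-level sentiment; unknown words contribute nothing.
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in feedback.split())

reviews = ["Love the app, support was great!", "Checkout is slow and broken."]
for r in reviews:
    print(score(r), r)   # prints 2 for the first review, -3 for the second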
AI, or artificial intelligence, can quickly become a friend to a user or buyer. It shapes how websites communicate with users to maximize ROI based on each customer's purchases and preferences. This technology is used to recommend products and other items to users. A relevant example is Netflix: it recommends a limited set of options that you are likely to enjoy. Such features save time and help deliver better user experiences, and they have made recommendation engines a favorite and reliable source of revenue. In the coming years, this capability will not only prevail in the market but also spread further into eCommerce, finance, and other industries.

So it is clear that when decision-makers and business executives have reliable data analysis, recommendations, and follow-up through artificial intelligence, they can make better choices for employees and businesses. It helps in overall development and improves the competitiveness of the company. But there is an underlying gap in the full development and use of artificial intelligence: roughly 80% of available data is unstructured, and neither humans nor artificial intelligence can currently process that amount of data in a form that is usable for the business.

To expand your horizons, don't forget to check and download our latest whitepapers on artificial intelligence.
Dynamic Data Masking in Greenplum
Greenplum is a powerful database management system used by many organizations to handle large volumes of data. As companies store more sensitive information, protecting this data becomes crucial. Dynamic data masking in Greenplum protects sensitive information while letting authorized users access it. This article explores the concept of dynamic data masking in Greenplum, its benefits, and how to implement it effectively.

What Does Dynamic Data Masking Mean?
Dynamic data masking is a security measure that conceals confidential information on the fly. It works by replacing original values with masked versions when unauthorized users query the database. The actual data remains unchanged in the database, but users without proper permissions see only the masked information. This approach differs from static data masking, which permanently alters the data.

Greenplum dynamic data masking provides several advantages for organizations. It enhances security by protecting sensitive information from unauthorized access, reducing the risk of data breaches. It helps meet regulatory requirements like GDPR, HIPAA, and CCPA. Administrators can easily adjust masking rules without modifying the underlying data. It doesn't require changes to existing applications or database structures, and it has minimal impact on query performance.

Greenplum dynamic data masking operates at the query level. When a user sends a query, the database engine checks their permissions. If the user lacks the necessary rights, the engine applies masking rules to sensitive columns before returning the results. This process happens transparently, without the user's knowledge.

Implementing Dynamic Data Masking in Greenplum
To set up dynamic data masking in Greenplum, follow these steps:

First, identify the columns containing sensitive information. Common examples include Social Security numbers, credit card numbers, email addresses, phone numbers, and addresses.

Next, create custom functions to mask different types of data. Here's an example of a function to mask email addresses:

CREATE OR REPLACE FUNCTION mask_email(email text)
RETURNS text AS $$
BEGIN
    RETURN LEFT(email, 1) || '***@' || SPLIT_PART(email, '@', 2);
END;
$$ LANGUAGE plpgsql;

This function keeps the first character of the email, replaces the rest with asterisks, and preserves the domain.

After creating masking functions, apply them to the relevant columns. Use views or security policies to implement the masking (here, mask_phone is assumed to be a phone-number function defined analogously to mask_email):

CREATE VIEW masked_customers AS
SELECT id,
       name,
       mask_email(email) AS email,
       mask_phone(phone) AS phone
FROM customers;

Grant appropriate permissions to users and roles. Ensure that only authorized users can access the original data:

GRANT SELECT ON masked_customers TO analyst_role;
GRANT SELECT ON customers TO admin_role;

Finally, test the masking implementation to ensure it works as expected:

-- As an analyst
SELECT * FROM masked_customers LIMIT 5;

-- As an admin
SELECT * FROM customers LIMIT 5;

Verify that analysts see masked data while admins can view the original information.

Implementation via DataSunrise
Greenplum offers dynamic masking, but some users find it too complex for large databases. In these cases, experts advise using third-party solutions. To perform this in DataSunrise, you must take several steps. First, you need to create an instance of the target database. Through the instance, a user is able to interact with the target database via security rules and masking tasks.
After creating the instance, all that's left is to create a masking rule and turn it on. Select the database, schema, table and columns, and the methods of masking. In this example we mask the 'city' table of the 'test2' database. Unauthorized users querying the table will then see masked values in place of the original data.

Best Practices and Challenges
To maximize the effectiveness of dynamic data masking in Greenplum, consider these best practices:

Apply consistent masking rules across all instances of sensitive data. This approach maintains data integrity and prevents confusion. Conduct regular audits of your masking policies to ensure they align with current security requirements and regulations. Monitor the performance impact of dynamic masking, and optimize masking functions and policies if needed to minimize query overhead. Educate users about dynamic data masking: help them understand why they might see masked data and how to request access if necessary.

Although Greenplum's dynamic data masking provides substantial advantages, it's crucial to recognize possible obstacles. Masking can complicate certain types of queries, especially those involving complex joins or aggregations. Maintaining data relationships across masked and unmasked tables requires careful planning. Dynamic masking shouldn't be the only security measure; it works best as part of a comprehensive data protection strategy.

Future of Dynamic Data Masking in Greenplum
As data privacy concerns grow, we can expect further advancements in Greenplum dynamic data masking. Future versions may offer even more efficient masking techniques. We might see more sophisticated masking options, such as format-preserving encryption. Better integration with other Greenplum security features and third-party tools is likely, and tools that automatically adjust masking rules to changing regulations may emerge.

Dynamic data masking in Greenplum provides a powerful way to protect sensitive information without sacrificing database functionality. By implementing this feature, organizations can enhance their data security, comply with regulations, and maintain user trust. As you explore Greenplum dynamic data masking, remember that it's just one part of a comprehensive data protection strategy. Combine it with other security measures to create a robust defense against data breaches and unauthorized access.
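As a closing illustration, the view-based setup described earlier can be verified programmatically. This sketch uses psycopg2 (Greenplum speaks the PostgreSQL wire protocol) to run the analyst and admin queries side by side; the connection strings are placeholders, not real credentials.

import psycopg2

# Placeholder DSNs: in practice these are two differently privileged accounts.
ANALYST_DSN = "dbname=test2 user=analyst password=changeme host=gp-master"
ADMIN_DSN = "dbname=test2 user=admin password=changeme host=gp-master"

def sample(dsn: str, table: str):
    # Table name is interpolated only because both values are hard-coded here;
    # never build SQL from untrusted input.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT email, phone FROM {table} LIMIT 3")
        return cur.fetchall()

# Analysts only have SELECT on the masking view; admins read the base table.
print("analyst sees:", sample(ANALYST_DSN, "masked_customers"))
print("admin sees:  ", sample(ADMIN_DSN, "customers"))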
The ongoing conflict in Ukraine, which began with Russia's full-scale invasion in February 2022, continues to exert a devastating toll on both nations. The scale of losses in troops and equipment for Ukraine has been significant, a reality that remains underreported in Western media. Recent events highlight the intensity and impact of the conflict, shedding light on the dire situation on the ground.

Recent Attacks and Losses

On July 8, 2024, a massive missile attack targeted several Ukrainian cities, including Kyiv, Kryvyi Rih, Dnipro, Kropyvnytskyi, and Pokrovsk. This attack resulted in at least 47 fatalities and around 170 injuries. Notable among the damaged infrastructure was the Okhmatdyt children's hospital in Kyiv, the largest in the country, where two adults lost their lives, and several others, including children, were injured.

In the same vein, the destruction of military equipment continues unabated. On July 14, the Russian Ministry of Defense released footage showing the annihilation of a German-made IRIS-T anti-aircraft missile system in the Dnepropetrovsk region. This system, comprising a launcher and a TRML-4D radar station, was completely destroyed by Russian Iskander missiles.

Military Aid and Expenditure

The financial and military aid extended to Ukraine by the European Union and the United States is staggering. Since February 2022, Ukraine has received nearly €108 billion ($115.9 billion) from the EU, including €39 billion ($41.8 billion) specifically for military aid. The United States' contributions are even more substantial, with total expenditures reaching $175 billion, of which $107 billion directly supported the Kyiv regime. This includes $34.2 billion for budget needs and $69.8 billion for arms and military assistance.

Equipment and Personnel Losses

The scale of equipment and personnel losses for Russia is equally significant. As of July 6, 2024, Russia's military casualties have reached approximately 560,290 troops, including recent daily losses of around 1,260 soldiers. Additionally, Russia has lost 8,153 tanks, 15,629 armored personnel vehicles, and 14,897 artillery systems since the invasion began. Ukraine's forces have been effective in defending against these assaults, managing to shoot down a significant number of Russian drones and missiles. For instance, on the night of July 6, Ukrainian defenders intercepted and destroyed 24 out of 27 Shahed drones launched by Russian forces.

Geopolitical and Technical Implications

The geopolitical implications of the conflict extend beyond the immediate battlefield. The significant losses and ongoing destruction have strained Russia's military capabilities and resources. According to analyses, Russia is experiencing critical shortages in tanks and other key military equipment, potentially impacting its long-term operational capacity. Technological advancements and innovations in military equipment are also playing a crucial role. The deployment and subsequent destruction of advanced systems like the IRIS-T anti-aircraft missile system underscore the evolving nature of warfare and the critical need for continuous updates and support from international allies.

Historical Context and Comparisons

The current conflict can be contextualized within the broader history of military engagements in the region. Comparisons can be drawn with previous conflicts where sustained losses in personnel and equipment eventually led to significant strategic and political shifts. The attrition rates seen in the Russo-Ukrainian conflict are reminiscent of prolonged engagements in the past, where technological superiority and international support played decisive roles in the outcome.

The Russo-Ukrainian conflict continues to be a devastating and complex confrontation with significant human, military, and geopolitical costs. The extensive losses in troops and equipment, coupled with substantial international aid, highlight the ongoing struggle and the critical need for a resolution. As the conflict persists, the global community must remain vigilant and informed, understanding the full scope of the situation to support efforts towards peace and stability in the region.

NATO's Position on Military Involvement in Ukraine

The ongoing Russo-Ukrainian conflict has led to significant geopolitical tension, with NATO playing a crucial but carefully measured role. Despite substantial aid to Ukraine, NATO remains steadfast in its decision not to engage directly in the conflict. This stance is critical in maintaining a delicate balance between supporting Ukraine and avoiding an escalation that could lead to a broader war.

NATO's Stance on Direct Involvement

NATO Secretary General Jens Stoltenberg has repeatedly emphasized that the alliance will not be directly involved in the Ukrainian conflict. During a televised marathon, Stoltenberg reiterated that while NATO will assist Ukraine in taking down Russian warplanes, it will refrain from direct military engagement. This position is intended to prevent further escalation and maintain regional stability.

Discussions at the NATO Summit

At the NATO summit held from July 9-11, 2024, the topic of NATO's involvement in Ukraine was a significant point of discussion. Polish Deputy Defense Minister Cezary Tomczyk had previously expressed hopes that the summit would address whether Poland could be permitted to shoot down Russian missiles over Ukraine. This proposal was part of a broader strategy to enhance Poland's security in the face of increasing threats from the conflict. However, NATO's position remained unchanged. Stoltenberg and other leaders, including German Defense Minister Boris Pistorius, emphasized that direct actions such as shooting down Russian missiles or drones over Ukraine would equate to direct participation in the conflict, a scenario they are keen to avoid. This sentiment is echoed by White House National Security Communications Advisor John Kirby, who stressed that the Biden administration seeks to prevent the conflict from escalating through such measures.

Poland-Ukraine Security Cooperation Pact

Amidst these discussions, Poland and Ukraine have taken significant steps to bolster their mutual security. On the sidelines of the NATO summit, Polish Prime Minister Donald Tusk and Ukrainian President Volodymyr Zelensky signed a 10-year security cooperation pact. This agreement includes a mechanism allowing Poland to intercept Russian missiles and drones in Ukrainian airspace, thereby enhancing Poland's defensive capabilities without breaching NATO's non-engagement policy. This pact underscores the close military and political ties between Poland and Ukraine and represents a strategic move to safeguard Polish airspace while supporting Ukraine's defensive efforts. The agreement also highlights the complexity of regional security dynamics and the various ways NATO member states are navigating their roles within the broader alliance framework.
NATO’s carefully calibrated stance on military involvement in Ukraine has far-reaching geopolitical implications. By supporting Ukraine with defensive measures and military aid, NATO aims to strengthen Ukraine’s position without provoking direct confrontation with Russia. This strategy helps to maintain a united front among NATO members while adhering to international norms and avoiding actions that could lead to a larger, potentially uncontrollable conflict. Moreover, the security cooperation pact between Poland and Ukraine serves as a model for bilateral agreements that enhance regional security without contravening NATO’s overarching policies. Such agreements are crucial in providing immediate and practical support to frontline states like Poland, which face direct threats from the ongoing conflict. NATO’s position on the Ukrainian conflict is a testament to the alliance’s strategic balancing act. By offering substantial support to Ukraine and forging critical security pacts like the one between Poland and Ukraine, NATO continues to play a vital role in regional stability. However, its commitment to avoiding direct military engagement is crucial in preventing the conflict from escalating into a broader war. As the situation evolves, NATO’s careful and measured approach will remain essential in navigating the complex geopolitical landscape of Eastern Europe.
Over the course of history, developing systems of human organization have resulted in an increased need for energy inputs. Agriculture came with the added need for animal labor, the iron age brought with it tools that magnified the impact of human energy, and industrialization led to the discovery of new fuels and ways in which to extract their power. As these periods have passed, the economic and social benefits of community have become more apparent, and humanity has organized in communities of ever increasing size. Today, over 50 percent of the world population lives in densely packed urban areas which make up just five percent of Earth's total land surface: cities.

Increased global urbanization is known to track economic development; one only has to look at China and India to see hundreds of examples, and the United Nations expects around 65 percent of the world's population to be living in cities by 2050. While some cities are slowly being built from scratch, we are also seeing a growing trend of mega-cities, those with populations over 10 million, forming out of pre-existing urban clusters. In 1990, there were only ten mega-cities. Today there are 28, which house around 12 percent of the global population and are primarily in Asia. By 2030, experts expect the number of mega-cities to increase to 41.

Energy Use in Cities

Due to the extreme concentration of human activity within them, cities account for around 75 percent of the world's resource use and 80 percent of global CO2 emissions. As the global population continues to grow, energy demand will intensify, specifically around cities, which the International Energy Agency (IEA) expects to account for 90 percent of global energy growth. While this growth comes with a multitude of problems, such as an intensified urban heat island effect, higher levels of pollutants, and more traffic, human organization is actually more efficient and productive in cities than it is in suburbs and rural areas.

At a high level, per-household energy use in cities is significantly lower than in all other living arrangements, at 85.3 MBTUs compared to 95 MBTUs, 102 MBTUs, and 109 MBTUs for rural areas, towns, and suburbs respectively. And while the total urban mass spends an average of $30 billion more on energy per year, this actually translates to between $200 and $400 less per household per year. A 2008 Brookings Institution report also found that urban lifestyles result in an average carbon footprint that is 14 percent smaller than that of non-urban residents. Additionally, a study of 366 metropolitan areas and 576 micropolitan areas found that as cities get larger, CO2 emissions fall, with a 1 percent increase in population resulting in a 0.17 percent decrease in emissions, a significant number, but not necessarily indicative of economies of scale.

There are supplementary gains in energy as floor area goes down on a per capita basis with greater density, resulting in better insulation, more efficient heating and cooling, and less CO2 emitted per person. Within the service sector, which is rapidly growing as our cities move into the experience economy and rely more heavily on tech-enabled grocery shopping (Instacart), transportation (Uber), and housing (Airbnb), a doubling of urban population corresponds with a 12 percent increase in industry-wide energy efficiency. These gains are partially attributed to differences in spatial uses, as parking and office space requirements decrease, resulting in higher efficiencies for building systems.

The general field of transportation, and cars specifically, accounts for more urban energy gains, as research has shown a negative correlation between city size and gas usage for transportation. Additionally, average household car ownership in rural areas is 2.2 vehicles compared to 1.8 in cities, and urban dwellers drive 7,000 fewer miles a year, saving an average of 400 gallons of gasoline.

Why Are Cities Causing Such a Strain?

Although cities are comparatively better than suburban or rural areas in terms of energy efficiency per capita, their rapid growth in the coming years will place monumental strain on our ecological systems, and more thoughtful solutions are needed in urban infrastructure, transportation, and building use. Energy efficiency should be ubiquitous in urban life, spanning everything from street lighting and HVAC systems to electricity production and solid waste treatment. Energy efficiency not only saves money by lowering energy bills and operating costs but also creates local jobs, improves the quality of municipal services, strengthens energy security, and reduces air pollution to create an all-around better place to live.

Despite its apparent benefits, the process of implementing energy-efficient measures is difficult, and there are numerous barriers to progress from a multitude of perspectives. On a political and economic level, there are limitations due to lower energy prices for less green fuels, lack of public financing, limited autonomy of local governments, and general election cycles; from a service provider perspective, high project development and transaction costs as well as concerns about public repayment make these projects less attractive. For end users, the lack of incentives, unclear ownership, limited awareness, and behavioral biases make it hard to adopt energy efficiency programs, while financiers tend to shy away from projects leveraging new technologies with higher risks.

That being said, the growing strain on our ecological and economic systems will lead to a confluence of environmental, technological, economic, and social drivers that will make it increasingly attractive to develop energy-efficient systems in urban areas. As humanity begins the process of negotiating the growing needs of an urban population and increasing climate change, it will become more important to analyze and develop energy-efficient solutions for all aspects of city life. Despite higher per capita efficiencies, 80 percent of all greenhouse gas emissions come from cities, with 25 percent attributed to urban transportation and 32 percent attributed to the built environment. By leveraging a mix of passive and active solutions, including housing retrofits, urban densification, development of smart grids, and electrification of a shared autonomous vehicle pool, we can begin to meet the needs of our generation without sacrificing the ability of future generations to meet their own. The following articles in this series will discuss building-level solutions; infrastructure, microgrids, and the Internet of Things; and transportation systems as they pertain to energy efficiency in our mostly urban future.
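To see what an elasticity like the reported 0.17 percent figure implies in practice, consider a minimal Python sketch. The constant-elasticity functional form is an assumption (the study summary above does not specify one, nor whether it refers to total or per-capita emissions), and the growth scenarios are invented for illustration:

# Illustrative constant-elasticity model: emissions scale as population^(-0.17),
# holding everything else fixed. The exponent is the elasticity quoted above;
# the functional form and scenarios are assumptions for this example only.

ELASTICITY = -0.17

def emissions_multiplier(pop_ratio: float) -> float:
    """Emissions multiplier when population scales by pop_ratio."""
    return pop_ratio ** ELASTICITY

for growth in (1.01, 1.10, 2.00):
    change = (emissions_multiplier(growth) - 1) * 100
    print(f"population x{growth:.2f} -> emissions change {change:+.2f}%")

# Output: a 1% population increase gives roughly -0.17%, while even a
# doubling of population yields only about an 11% decrease, which is why
# the article cautions against reading this as strong economies of scale.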
In the first half of 2022 alone, there were 236.1 million ransomware attacks reported around the world. Ransomware is a type of cyberthreat that can deny you access to your files, causing individuals or businesses to lose money, access to personal data, and important security. Everyone who uses a computer or device must be aware of what ransomware attacks are, which types of ransomware attacks to watch out for, and how to recognize and prevent them. It is also important to stay up to date on evolving ransomware threats and best practices for avoiding them.

What Is Ransomware?

Ransomware is a type of cyberattack in which a bad actor restricts your access to files on your computer or device and demands a ransom in exchange for renewed access. Typically, these bad actors restrict access to your files by gaining access to them and encrypting them. They will also typically ask for payment in the form of cryptocurrency, as these assets are more difficult to trace. In the process, they may also steal valuable information, and they may not restore your access even after you pay as asked. You should never comply with a ransom demand if you experience a ransomware attack; instead, turn off the computer or device, unplug it, and contact the appropriate FBI division. This will prevent the bad actor from gaining further access to your device while the FBI investigates the issue. The entities most likely to experience a ransomware attack are those that store valuable information (such as government offices), those that face high stakes if access to information is interrupted (such as healthcare facilities), or those that are not well prepared for such an attack (such as schools).

Types of Ransomware Attacks and Names of Major Examples

While all ransomware attacks utilize a similar base strategy, they are not all identical. There are many different types of ransomware attacks, and they are evolving all the time. It is important to be aware of common types of ransomware attacks and how they operate.

Crypto ransomware is one of the most common types of ransomware. This type of attack restricts your access to your files through the use of encryption. As a highly prevalent type of ransomware, this threat is common in ransomware attacks across the board.

Double extortion is essentially a subcategory of crypto ransomware. A double extortion attack not only encrypts data but also extracts it for further use. As such, the bad actor can then use the information as leverage for future ransom demands or otherwise further use it for their benefit.

Locker ransomware attacks utilize viruses to infiltrate computer systems and block various functions and access points. As such, you may not only lose access to the targeted files themselves but also to the device or its peripherals altogether. The virus may infect the system through various means, but common culprits are interactions with spam and email attachments.

Ransomware as a Service (RaaS)

Some bad actors, such as the group DarkSide, take their nefarious activities to the next level and sell access to pre-developed ransomware tools. This allows prospective buyers to gain the ability to launch a ransomware attack without sophisticated knowledge or tools at their disposal. While this allows ransomware to proliferate more widely as it becomes more accessible to use, it also means that many bad actors are using the same software and techniques.
These attacks may be less successful overall as information about their specific hallmarks spreads.

Scareware

Scareware makes it appear that there has already been an attack on your device. Typically, this strategy takes the form of a popup notifying you that a cybersecurity threat has been detected and that you must take immediate action to address it. You will then be encouraged to click an unsafe link, navigate to an unsafe site, or pay to download unnecessary software to address the problem. As a result, you will either actually download malware or be extorted into paying unnecessary fees.

Identifying Ransomware Attacks

To effectively detect and address the various types of ransomware attacks, you must be aware of how you can identify them. The following are some of the most important strategies to utilize (a minimal detection sketch appears at the end of this article):
- Utilizing cybersecurity software;
- Researching known ransomware file extensions;
- Being watchful for an unusual uptick in the renaming of files;
- Being watchful for unusual changes in the location of files;
- Noting any slowing of network activity;
- Noting any unauthorized access or extraction of files;
- Noting any unusual encryption activity.
High-risk targets may benefit from employing more advanced techniques and tools, such as NDR (network detection and response) and AI technology, to identify threats early. Meanwhile, government entities should utilize signals intelligence to identify nefarious behaviors indicative of active attacker tactics.

Preventing Ransomware Attacks

In addition to being able to identify ransomware attacks early, it is important to strategically employ techniques and tools to prevent attacks and preemptively minimize their impact. Effective options to achieve this include the following:
- Maintain backups for your important data.
- Develop detailed cybersecurity policies and procedures.
- Regularly update your cybersecurity software.
- Only use secure networks to access or share data.
- Keep devices in secure locations and limit access to them.
- Notify an IT professional if you note any unusual activity on your network.
It is also vital that organizations develop a response plan to respond to attacks if they do happen.
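Here is the detection sketch promised above: a toy Python monitor that applies two of the listed heuristics, a burst of file modifications or renames and high-entropy (encrypted-looking) writes. The directory path and thresholds are placeholder assumptions, and a real deployment would rely on proper endpoint security tooling rather than a polling script:

import math
import os
import time

WATCH_DIR = "/home/user/documents"   # placeholder: directory to monitor
BURST_THRESHOLD = 20                 # files changed per interval considered suspicious
ENTROPY_THRESHOLD = 7.5              # bits per byte; encrypted data approaches 8.0
INTERVAL_SECONDS = 10

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; high values suggest encrypted content."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def snapshot(root: str) -> dict:
    """Map each file path under root to its last-modified time."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished mid-scan
    return state

def monitor() -> None:
    previous = snapshot(WATCH_DIR)
    while True:
        time.sleep(INTERVAL_SECONDS)
        current = snapshot(WATCH_DIR)
        # New paths (renames) and modified files both register as changes.
        changed = [p for p, mtime in current.items() if previous.get(p) != mtime]
        if len(changed) >= BURST_THRESHOLD:
            print(f"ALERT: {len(changed)} files changed in {INTERVAL_SECONDS}s")
        for path in changed:
            try:
                with open(path, "rb") as f:
                    head = f.read(4096)
            except OSError:
                continue
            if shannon_entropy(head) > ENTROPY_THRESHOLD:
                print(f"ALERT: high-entropy write: {path}")
        previous = current

if __name__ == "__main__":
    monitor()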
We live in a truly digital age. The Internet of Things (IoT) is at the forefront of almost everything we touch as consumers. From the Amazon Echo to smart door locks, we are integrating IoT devices into daily life. However, users often don't realize how much data these devices collect and store. The IoT is also fuelling a frenetic pace for new and changing technology. The evolution of the cell phone from the first models in the 1990s to today's iPhone X took 20 years. IoT technology, on the other hand, is changing much more quickly. With more devices connected than ever (according to Gartner, we will hit 20 billion IoT devices by 2020), the digital forensic community faces several unique questions:
1. What kind of data can be collected from IoT devices?
2. Can new types of data and traces from these devices be utilized for digital forensic purposes?
3. If so, how can we collect and analyze them efficiently?
4. How can IoT devices aid forensic investigators with their cases?
To answer these questions, digital investigators must uncover evidence to determine when a particular event happened, how it happened, and who was involved. The standard forensic workflow consists of five specific steps: identification, interpretation, preservation, analysis, and presentation of relevant data. Investigations involving IoT devices are aligning with a similar workflow as the devices become more common. With more sources of information, investigators can paint a better picture of events.

When analyzing evidence, investigators attempt to construct a timeline of events. Timeline analysis, as this is known, is key in digital forensics with IoT devices. This timeline tells a story: for example, if smart locks are in use, it may be possible for investigators to determine when anyone entered or left a building. There are countless other examples; everything from our smart watches to our smart home security is recording data on our location, environment, and wellbeing. More cases involving IoT devices will help the digital forensic community put all of the electronic pieces together.

IoT forensics is an emerging field. As investigators encounter more devices, they need the tools to access evidence and complete analysis of that information. Guidance Software (now OpenText) developed EnCase Mobile Investigator to help investigators work with the latest in IoT and mobile devices. In the 1.01 release, EnCase Mobile Investigator supports Amazon Alexa cloud data, as well as data from drones, Fitbit smartwatches, Google Wear devices, and many more.

We recently hosted a webinar on mobile apps and IoT forensics: "Uncovering Mobile App Evidence with EnCase Mobile Investigator". In this webinar, Jonathan Arias, one of our expert solution consultants, walks through capturing and viewing evidence from a DJI drone. These devices are just the beginning of the IoT revolution. We will continue to deliver innovative solutions to meet your investigative needs, as we have for the last 20 years. To learn more about EnCase Mobile Investigator, visit: https://www.guidancesoftware.com/encase-mobile-investigator. For our latest on forensic security, check out our blog.
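To make the idea of timeline analysis concrete, here is a minimal Python sketch. The devices, timestamps, and events are invented; real extractions would come from vendor-specific parsers (or tools such as EnCase) and would need clock-drift and time-zone normalization before merging:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    timestamp: datetime   # normalized to UTC before merging
    source: str
    description: str

# Hypothetical records recovered from three different IoT devices.
events = [
    Event(datetime(2018, 3, 1, 22, 14, tzinfo=timezone.utc), "smart lock", "front door unlocked"),
    Event(datetime(2018, 3, 1, 22, 9, tzinfo=timezone.utc), "fitness watch", "wearer activity: walking"),
    Event(datetime(2018, 3, 1, 22, 17, tzinfo=timezone.utc), "voice assistant", "wake word detected"),
]

# The core of timeline analysis: merge events from every source and
# order them chronologically so the sequence tells a single story.
for event in sorted(events, key=lambda e: e.timestamp):
    print(f"{event.timestamp:%Y-%m-%d %H:%M}  [{event.source}]  {event.description}")

Raj Udeshi is a Product Marketing Manager at Guidance Software (acquired by OpenText)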
MARCH 17, 2008 -- IBM says its scientists have built the world's smallest nanophotonic switch, with a footprint about 100 times smaller than the cross section of a human hair. The switch could be used to transfer information inside a computer chip by using photons instead of electrons. The switch could significantly speed up chip performance while using much less energy, IBM says.

The switch is a continuation of a series of IBM developments towards an on-chip optical network:
- In November 2005, IBM scientists demonstrated a silicon nanophotonic device that can significantly slow down and actively control the speed of light.
- In December 2006, an analogous tiny silicon device was used to demonstrate buffering of over a byte of information encoded in optical pulses, a requirement for building optical buffers for on-chip optical networks.
- In December 2007, IBM scientists announced the development of an ultra-compact silicon electro-optic modulator, which converts electrical signals into light pulses, a prerequisite for enabling on-chip optical communications.

The switch was unveiled in a paper published in the journal Nature Photonics. IBM says its team demonstrated that the switch has several characteristics that make it suited for on-chip applications. First, the switch is extremely compact: as many as 2,000 would fit side-by-side in an area of 1 sq mm, easily meeting integration requirements for future multi-core processors. Second, the device is able to route a large amount of data, since many different wavelengths can be switched simultaneously. With each wavelength carrying data at up to 40 Gbits/sec, it is possible to switch an aggregate bandwidth exceeding 1 Tbit/sec, a requirement for routing large messages between distant cores. Third, IBM scientists showed for the first time that their optical switch is capable of operating within a realistic on-chip environment, where the temperature of the chip itself can change dramatically in the vicinity of "hot spots," which move around depending upon the way the processors are functioning at any given moment. The IBM scientists believe this temperature-drift-tolerant operation to be one of the most critical requirements for on-chip optical networks.

An important trend in the microelectronics industry is to increase the parallelism in computation by multi-threading, by building large-scale multi-chip systems and, more recently, by increasing the number of cores on a single chip. For example, the IBM Cell processor that powers Sony's PlayStation 3 gaming console consists of nine cores on a single chip. As users continue to demand greater computing performance, chip designers plan to increase this number to tens or even hundreds of cores. This approach, however, only makes sense if each core can receive and transmit large messages from all other cores on the chip simultaneously. The individual cores located on today's multi-core microprocessors communicate with one another over millions of tiny copper wires. However, this copper wiring would simply use up too much power and be incapable of transmitting the enormous amount of information required to enable massively multi-core processors. IBM researchers are exploring an alternative solution to this problem by connecting cores using photons in an on-chip optical network based on silicon nanophotonic integrated circuits.

Like a long-haul fiber-optic network, such an extremely miniature on-chip network will transmit, receive, and route messages between individual cores that are encoded as pulses of light. It is envisioned that by using light instead of wires, as much as 100 times more information can be sent between cores while using 10 times less power and consequently generating less heat. The work was partially supported by the Defense Advanced Research Projects Agency (DARPA) through the Defense Sciences Office program "Slowing, Storing and Processing Light." Additional information on this development, as well as on IBM's nanophotonics project, can be found on the Web.
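As a back-of-envelope check on the aggregate-bandwidth claim: the 40 Gbits/sec per-wavelength rate comes from the article, while the channel counts below are assumptions chosen to show where the 1 Tbit/sec mark is crossed, not figures from IBM:

# Aggregate bandwidth = per-wavelength rate x number of wavelength channels.
per_wavelength_gbps = 40
for wavelengths in (8, 16, 25, 32):
    aggregate_tbps = per_wavelength_gbps * wavelengths / 1000
    print(f"{wavelengths:2d} wavelengths -> {aggregate_tbps:.2f} Tbit/s")
# 25 or more simultaneous wavelengths are enough to exceed 1 Tbit/s.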
Academic and scientific research thrives on originality. Every experiment, analysis, and conclusion builds upon a foundation of previous work. This process ensures scientific knowledge advances steadily, with new discoveries shedding light on unanswered questions. Researchers have long relied on precise language to convey complex ideas. Scientific writing prioritizes clarity and objectivity, with technical terms taking center stage. But a recent trend in academic writing has raised eyebrows: a surge in the use of specific, often 'flowery', adjectives.

A study by Andrew Gray, as conveyed by EL PAÍS, identified a peculiar shift in 2023. Gray analyzed a vast database of scientific studies published that year and discovered a significant increase in the use of certain adjectives. Words like "meticulous," "intricate," and "commendable" saw their usage skyrocket by over 100% compared to previous years. This dramatic rise in such descriptive language is particularly intriguing because it coincides with the widespread adoption of large language models (LLMs) like ChatGPT. These AI tools are known for their ability to generate human-quality text, often employing a rich vocabulary and even a touch of flair. While LLMs can be valuable research assistants, their use in scientific writing raises concerns about transparency, originality, and potential biases.

To convey the magnitude of the issue, consider a peer-reviewed research article that made it into print. The introduction of an article titled "The three-dimensional porous mesh structure of Cu-based metal-organic-framework – aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries", published in March 2024, begins as follows:

"Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities…" – Zhang et al.

Yes, artificial intelligence makes our lives easier, but this does not mean we should trust it blindly. Researchers should approach the use of AI in the scientific literature the same way they would approach AI at work: take inspiration from it instead of having it do everything. Although Andrew Gray said in his statement, "I think extreme cases of someone writing an entire study with ChatGPT are rare," a little research shows that this is not that rare.

The originality imperative in scientific research

Originality lies at the heart of scientific progress. Every new finding builds upon the existing body of knowledge and takes us one step closer to understanding life. The importance of originality extends beyond simply avoiding plagiarism. Scientific progress hinges on the ability to challenge existing paradigms and propose novel explanations. If AI tools were to write entire research papers, there's a risk of perpetuating existing biases or overlooking crucial questions. Science thrives on critical thinking and the ability to ask "what if". These are qualities that, for now at least, remain firmly in the human domain, as it has been argued that generative AI is not genuinely creative.

The need for transparency

The potential infiltration of AI into scientific writing underscores the need for transparency and robust peer review. Scientists have an ethical obligation to disclose any tools or methods used in their research, including the use of AI for writing assistance. This allows reviewers and readers to critically evaluate the work and assess its originality. Furthermore, the scientific community should establish clear guidelines on the appropriate use of AI in research writing. While AI can be a valuable tool for generating drafts or summarizing complex data, it should not, and probably never will, replace human expertise and critical thinking.

Ultimately, the integrity of scientific research depends on researchers upholding the highest standards of transparency and originality. As AI technology continues to develop, it's crucial to have open discussions about its appropriate role in scientific endeavors. By fostering transparency and prioritizing originality, the scientific community can ensure that AI remains a tool for progress, not a shortcut that undermines the very foundation of scientific discovery.
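As an illustration of the kind of corpus analysis behind findings like Gray's, the sketch below compares adjective frequencies across two years of abstracts. The file names are placeholders, and a real replication would need a large, consistently sampled corpus; Gray's exact methodology is not reproduced here:

import re

TRACKED_ADJECTIVES = ["meticulous", "intricate", "commendable"]

def tokens(path: str) -> list:
    """Lowercase word tokens from a plain-text corpus file (placeholder path)."""
    with open(path, encoding="utf-8") as f:
        return re.findall(r"[a-z]+", f.read().lower())

def per_million(words: list, target: str) -> float:
    """Occurrences of target per million tokens."""
    return 1_000_000 * words.count(target) / max(len(words), 1)

corpus_2022 = tokens("abstracts_2022.txt")  # hypothetical pre-LLM sample
corpus_2023 = tokens("abstracts_2023.txt")  # hypothetical post-LLM sample

for adjective in TRACKED_ADJECTIVES:
    before = per_million(corpus_2022, adjective)
    after = per_million(corpus_2023, adjective)
    growth = 100 * (after - before) / before if before else float("inf")
    print(f"{adjective}: {before:.1f} -> {after:.1f} per million words ({growth:+.0f}%)")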
Top 10 Open Government Websites

Federal agencies face White House orders to become more transparent. These websites expose new data sets, support public petitions, and reveal where taxpayer money goes.

Dozens of websites have been created under the White House's Open Government Directive, announced in December 2009, with the goals of increasing transparency and fostering greater public participation and collaboration in government processes and policy. Some sites, such as Data.gov, are far-reaching. Data.gov is an online repository of government data that's available to the public in a variety of formats. As of the middle of January, more than 390,000 data sets from 172 agencies and sub-agencies were available there. Among the most popular are U.S. Geological Survey statistics on earthquakes and the U.S. Agency for International Development's data on economic aid and military assistance to foreign governments.

Data.gov, more than a big database, serves as a platform for communities of interest, where researchers, application developers, and others brainstorm about public data. One recent example is the National Ocean Council's Ocean portal, with information on offshore seabirds and critical habitats for endangered species, among other data sets. There are also discussion forums and interactive maps.

Some specialty sites focus on a particular area of government activity. The Environmental Protection Agency recently released a cache of data on greenhouse gas emissions, along with an interactive map and tools for analysis. Other sites built around narrow data sets are AirNow (focused on air quality), the Department of Energy's Green Energy site, the National Archives and Records Administration's virtual community for educators, and the National Library of Medicine's Pillbox site, for identifying tablet and capsule medications.

The White House has established a Web page where the public can track the progress of open government. A dashboard shows how agencies are faring in 10 areas, such as releasing data and publishing their plans. InformationWeek has selected 10 federal websites that are the best examples of open government in all of its forms. (State and metropolitan open government sites aren't included in this overview.) Some put an emphasis on data transparency, while others encourage public participation or collaboration. Even so, there's room for improvement. The data on these sites isn't always timely or accurate, and public participation sometimes wanes, underscoring that open government needs constant attention to be effective.
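Data.gov has since grown a programmatic face as well. The sketch below queries the catalog's search API for earthquake data sets; the endpoint path assumes the standard CKAN action API that the catalog has exposed in recent years, so verify it against current Data.gov developer documentation before relying on it:

import json
import urllib.request

# Search the Data.gov catalog (CKAN action API; path is an assumption).
URL = "https://catalog.data.gov/api/3/action/package_search?q=earthquake&rows=5"

with urllib.request.urlopen(URL) as response:
    payload = json.load(response)

result = payload["result"]
print(f"{result['count']} matching data sets; showing {len(result['results'])}:")
for package in result["results"]:
    print("-", package["title"])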
The Internet of Things (IoT) has emerged as a transformative force within the educational sector, redefining the way teachers teach and students learn. By weaving a network of interconnected devices and systems into the very fabric of educational institutions, IoT is not only driving the evolution of learning environments but also revolutionizing them. The vast troves of data it generates and the real-time insights it offers into student performance are leading to more personalized and effective educational experiences. Furthermore, IoT is considerably enhancing operational efficiency, reducing costs, and paving the way for a future in which education is more accessible, flexible, and tailored to individual needs. This innovative technology is changing education in ways we are just beginning to understand, promising to touch every aspect of the learning process as it continues to evolve.

The Meteoric Rise of IoT in Education

In recent years, there has been an undeniable surge in the implementation of IoT technologies within the educational sector. Reflecting on this trend, Allied Market Research's thorough report suggests an exponential growth trajectory, with the market ballooning from $8.7 billion in 2022 to a towering $46.4 billion by 2032. This rapid expansion is fueled by the advancement of wireless networking technologies, the widespread adoption of cloud computing, and significant strides in data analytics. The impact of heightened connectivity is clear as educational institutions harness IoT to unlock new potentials in teaching and learning. Nevertheless, even with these promising prospects, the integration of IoT in education is not without its challenges. Concerns over data security and privacy loom large, as does the significant investment required for the deployment and maintenance of IoT infrastructure. Despite these challenges, the anticipation of reduced costs for IoT devices and the expanding recognition of their potential in educational settings shine a light on a future ripe with opportunities.

IoT Devices and Software: Pioneering Educational Transformation

In the educational sector, the uptake of hardware tailored to IoT applications has been striking, yet it's the software that offers a glimpse into the transformative potential of this technology. Sophisticated software solutions are enabling institutions to streamline operations and make informed decisions based on real-time data. This segment of the IoT market is showing signs of significant growth as it helps cut operational costs and enhances educational delivery. As impactful as it's been, classroom management stands as just one success story in the wider narrative of IoT in education. The future, however, looks to be even more heavily influenced by learning management systems (LMS), which are increasingly becoming the cornerstone of both in-person and distance education. With these platforms, educators have the power to deploy a variety of teaching methods and resources, track student progress, and manage educational content, significantly bolstering the effectiveness of education in the digital age.

The Role of IoT in Different Educational Sectors

IoT's integration into education isn't confined to a single segment; it's reshaping the whole spectrum of learning. Higher education institutions are leading this technological revolution, using IoT to push the boundaries of research and innovation. Simultaneously, the K-12 segment is undergoing a rapid transformation, with immersive technologies capturing the imagination of students and creating more engaging learning experiences. Through IoT, real-time data on student performance and engagement provide teachers with unprecedented insights, allowing them to tailor instructional methods to meet individual student needs. The potential for enhanced personalized learning, enabled by technology, is seemingly limitless, positioning IoT as a crucial player in the future of education.

IoT's Geographical Influence on Education

The influence of IoT on education extends beyond any single educational system or border. In 2022, North America held the largest market share, with substantial investments driving innovation and growth. Yet it's Asia-Pacific that is poised for the most explosive growth, with its embrace of digital transformation and technology adoption. This disparate growth reflects a broader trend of universal IoT appeal, coupled with unique regional approaches to technology in education. The understanding of IoT's potential varies globally, shaped by cultural, economic, and technological factors intrinsic to each region. Its impact on education is thus multifaceted, encompassing a rich tapestry of global interconnectivity and local customization, bridging gaps and shaping the educational experiences of future generations across the world.

Leading Players in the IoT Educational Ecosystem

The burgeoning realm of IoT in education owes much to the innovative contributions of tech giants that dominate this space. Companies such as Google, IBM, and Microsoft are at the forefront of this transformation, providing solutions that are reshaping the educational landscape. These organizations are not merely purveyors of new technologies; they are active contributors to the vision of a connected educational ecosystem. Through their forward-thinking strategies, investments in research and development, and an unwavering commitment to empowering educators and learners with the best tools, these leaders are paving the way for a seamless integration of IoT in education. Their work ensures that IoT is more than a technological trend; it's a beacon guiding the way to an interconnected, efficient, and innovative future for education.

Navigating the Post-Pandemic Educational Landscape with IoT

The COVID-19 crisis marked a watershed moment for IoT technology in the realm of education. The unexpected shift to remote learning due to the pandemic necessitated rapid implementation of digital strategies, with IoT at the forefront to maintain the flow of education. This technology played a crucial role not just in delivering lessons to students from a distance but also in monitoring and ensuring the safety of educational spaces. Post-pandemic, IoT continues to underscore its importance by forming the backbone of resilient and adaptable education systems. It helps to create safer learning environments, streamlines administrative processes, and can even enhance the quality of education through personalized learning experiences. The experience of the pandemic demonstrated IoT's capacity to transform the education sector, cementing it as an essential element in the future of teaching and learning. With its proven flexibility and robustness, IoT-based solutions are set to redefine the educational landscape, extending far beyond the circumstances that precipitated its rise.
First of all, what is the difference between a surge protector and a UPS (uninterruptible power supply)? Surge protectors provide protection by dissipating any excess power (a surge) and preventing it from reaching your connected devices. Electronic equipment, especially sensitive equipment such as a computer or a hard drive, is designed to operate within a certain power range; if too much voltage reaches such a device, it can be permanently damaged or destroyed. Given that, know that not every power strip is a surge protector. It may sound basic, but it's a fundamental piece of knowledge you'll need. While a power strip just splits your outlet into multiple ports, a surge protector is designed to protect your computer, TV, and other electronics.

A UPS provides protection against surges as well, but also includes a built-in battery. Many people recognize the benefit of a UPS as providing extra power in the event of a blackout. This gives a user time to save their work and shut down, or keeps a computer running throughout the blackout if the UPS battery is large enough or the blackout lasts only a short time. The use of a battery also means that the devices connected to a properly functioning UPS will receive steady voltage and current in the event of a brownout or blackout. In simple terms, the UPS receives electricity from the wall, passes it through the battery, and then feeds it to connected devices. Because the battery is technically powering the devices at all times, power is delivered in proper and steady amounts regardless of what happens to the power coming from the wall. (Strictly speaking, this describes an online, or double-conversion, UPS; many consumer models are standby designs that switch over to the battery only when wall power falters.) A surge protector can therefore be thought of as providing protection from excessive levels of electricity only, while a UPS provides protection from both excessive and insufficient levels of electricity.

Now, to make the choice…

One factor to consider is financial compensation. Many surge protectors and UPS systems come with limited insurance policies offered by the manufacturer. If a surge protector or UPS is properly configured and connected to appropriate devices, the manufacturer will compensate you for any equipment damaged or destroyed in the event of a surge protector or UPS failure, or in the event of more serious power surges that overwhelm the capacity of the unit. The manufacturer's liability is limited to a set dollar amount and only applies in certain situations, but the added value of an insurance policy is an important factor to consider when making a purchasing decision.

Other points to consider:
- Buy the right number of ports. Surge protectors are not only available in 6- or 8-outlet models; you can get more ports if that is what is needed to avoid daisy-chaining your devices, which is something you should never do.
- Check for the UL seal, and make sure it's a "transient voltage surge suppressor."
- Check the surge protector's energy absorption rating and its "clamping voltage." You'll want an absorption rating of at least 600 to 700 joules or higher (higher is better) and a clamping voltage (the voltage at which suppression kicks in) around 400 V or less (lower is better).

Those interested in purchasing a UPS should note that not all outlets on the unit provide battery backup power. Many UPS devices, especially those targeted at consumers, only provide battery backup (and thus the benefits of both high and low power regulation) via one or two outlets. The rest of the outlets provide only surge protection. Therefore, read the technical specifications of a UPS carefully before purchasing, to ensure that it will meet your needs. Take care in setting it up, so that your most critical devices are attached to battery-backup outlets.

In summary, surge protectors are a minimum requirement for all important electronic equipment. Those with the ability to add a UPS to their computer setup should strongly consider doing so, especially if they live in an area prone to electrical issues. While neither a surge protector nor a UPS will provide complete protection against devastating events, such as a direct lightning strike, using one will give your electronics and computers the best chance at a long, trouble-free life. As always, call us here at Frankenstein Computers if you have any further questions.

Frankenstein Computers has been taking care of our happy clients since 1999. We specialize in affordable IT Support, Cybersecurity Services, IT Services, IT Security, Office 365, Cloud, VOIP Services, SPAM, Wireless, Network Monitoring, Custom Gaming PC, MAC repair, PC Repair in Austin, Virus Removal, and much more. Check out what our clients have to say about us on Yelp!

What Is the Difference Between a Surge Protector and a UPS?
A surge protector shields devices from voltage spikes, while a UPS (uninterruptible power supply) provides backup power during outages and protects against power surges.

Why Should I Choose a Surge Protector for My Computer?
Surge protectors prevent damage from voltage spikes, safeguarding your computer's components. They are essential for protecting sensitive electronics from electrical surges.

When Should I Consider Using a UPS for My Computer Setup?
Consider a UPS if you need to maintain power during outages, prevent data loss, and ensure continuous operation. It's ideal for critical setups that require uninterrupted power.

What Factors Should I Consider When Purchasing a Surge Protector or UPS?
Consider the joule rating for surge protection and the VA rating for UPS capacity. Also evaluate the number of outlets, the warranty, and additional features like USB ports and monitoring software.

How Do Surge Protectors and UPS Systems Contribute to the Longevity of Electronic Devices?
They prevent damage from power surges and interruptions, ensuring a stable power supply. This reduces wear on electronic components, prolonging their lifespan and maintaining performance.
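To put rough numbers on the VA-rating question above, here is a small sizing sketch in Python. The 0.6 power factor, the 80 percent loading ceiling, and the inverter efficiency are common rule-of-thumb assumptions, not universal constants; always defer to the UPS vendor's own sizing and runtime tables:

def required_va(load_watts: float, power_factor: float = 0.6,
                max_load_fraction: float = 0.8) -> float:
    """Minimum UPS VA rating for a given real load in watts (rule of thumb)."""
    return load_watts / power_factor / max_load_fraction

def runtime_minutes(battery_wh: float, load_watts: float,
                    inverter_efficiency: float = 0.85) -> float:
    """Very rough runtime estimate from battery energy and load."""
    return 60 * battery_wh * inverter_efficiency / load_watts

load = 300  # watts: e.g., a desktop PC, monitor, and modem/router
print(f"Suggested minimum UPS rating: {required_va(load):.0f} VA")
print(f"Rough runtime on a 120 Wh battery: {runtime_minutes(120, load):.0f} minutes")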
Researchers at Hewlett-Packard Co. (HP) believe they have developed a manufacturing technique that will allow chip makers to push the performance envelope after conventional transistors reach the atomic level. The technique, based on a mathematical principle called coding theory, will let future generations of microprocessor circuits be reliably manufactured in high volumes, according to HP. Developed as part of HP Labs’ “crossbar latch” research project, which aims to replace the microprocessor transistor within the decade, the technique will be described in the June 6 issue of the Institute of Physics’ Nanotechnology publication. Rather than using transistors, crossbar latch circuits will use microscopic wires known as nanowires that are placed at a right angle to a separate set of parallel nanowires. This creates an electrical field that can be switched between two states, creating the “1” and “0” bits necessary for computing. A nanowire is a solid tube that can be as thin as the width of a single carbon atom. Basically, HP has discovered a method of ensuring that these silicon nanowires can continue to function even if manufacturing defects partially sever the connection between the crossbar and the rest of the circuit, said Stan Williams, a senior fellow and director of HP’s Quantum Research Lab. This is an important advance, as the crossbar design will only catch on with chip makers if it can be reliably manufactured in high volumes, said Phil Kuekes, a senior computer architect at HP. And in order to do that, HP needs to make sure it can work around manufacturing defects that inevitably crop up in the manufacturing process, he said. Enter coding theory, a mathematical technique that helps a digital signal travel down a noisy wire without losing its clarity. Developed by Claude Shannon, a researcher with Bell Labs and the Massachusetts Institute of Technology in the 1940s and 1950s, coding theory involves adding small pieces of information to a transmission in order to ensure the data is accurately received, Kuekes said. Mobile phones already transmit information using coding theory, Williams said. A mobile phone turns a user’s voice into digital bits of information, which are then sent through the air to a receiver. To make sure those digital bits of information are properly reassembled into speech, the phone adds small pieces of information to the packets of digital bits that help identify each bit and prevent calls from garbling the message, he said. Much in the way coding theory is used to clear up mobile phone calls, it could also be used to keep processors running, even if they are marred with tiny manufacturing flaws. HP’s crossbar latch is a relatively straightforward design, but it will be implemented in extremely complex circuits that can be difficult to manufacture. The connections between the crossbar and the rest of the circuit can sometimes break in the manufacturing process, leaving the circuit with only a partial connection to individual nanowires through a device HP calls the demultiplexer. That could cause bits of information to disappear or become scrambled in transmission, and render the chip useless. However, the application of coding theory and a few extra wires to the demultiplexer will help HP circumvent the problem, Williams said. “This is about having an architecture that will work in the presence of defects,” Williams said. 
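HP has not published its code construction at this level of detail, but a classic error-correcting code illustrates the principle Kuekes describes: add a few redundant bits so that a single corrupted bit can be located and repaired. A minimal Hamming(7,4) sketch in Python:

# Hamming(7,4): 4 data bits plus 3 parity bits; corrects any single-bit error.
# This illustrates the coding-theory principle only; HP's actual scheme for
# the demultiplexer wiring is not public in this level of detail.

def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                       # c = 7 received bits, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # repair it
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                         # simulate a defect flipping one bit
assert decode(sent) == word
print("single-bit defect corrected:", word)

Three parity bits protect four data bits, and the syndrome computed on receipt points directly at the flipped position: the same trade, a little redundancy in exchange for defect tolerance, that HP is applying to its demultiplexer wiring.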
HP has demonstrated its new crossbar design in a computer simulation, but it has also built prototype devices that work using the technology. The crossbar architecture is expected to first appear in chips built using the 22 nanometer process technology, which is expected to be adopted by the microprocessor industry around 2016. The size of the manufacturing process refers to the average size of a feature on the chip. Leading-edge chip makers are currently using a 90 nm manufacturing process.
Some of the most dangerous cybersecurity threats originate with email, costing businesses millions of dollars annually. How can you increase your email security and prevent email fraud? In this series, we will look at how to use DNS records to prevent email forgery. The first and simplest method is SPF (Sender Policy Framework). Below, we shall look at what it does, how it works, how to set it up, and what some of its deficiencies are. In future articles, we will look at the other techniques.

Sender Policy Framework: A Simple Explanation

Simply put, SPF is a way for a domain owner to publish information indicating which servers (internet addresses) are authorized to send email from that domain. Recipients can check the email source against this authorization list. If the server is on the list, the message is likely legitimate. If it is not on the list, the message could be forged.

Setting up SPF

With SPF (as with all anti-fraud solutions for email), it is up to the domain owner to set up the SPF authorization list. Identifying who manages the domain is often the most significant barrier to implementing SPF. Without access to the DNS settings, creating the SPF authorization list is impossible. To set up SPF, the domain administrator adds a special entry to the published DNS settings. If you want to set up SPF for your domains, use the SPF Wizard. You can also ask your email provider for assistance. We will not spend time on the details of the configuration or setup here. Instead, we will look at the actual utility of SPF, where it falls on its face, and how attackers can get around it.

The Benefits of Sender Policy Framework

Once SPF has been set up, it does an excellent job of helping identify forged emails by verifying that the sending server is authorized to send. The use of SPF is highly recommended for every domain owner. However, as we shall see next, SPF is insufficient to prevent all email fraud. Sender Policy Framework has some significant limitations in stamping out email forgery. Below, we discuss some of the ways it falls short.

Identifying Authorized Sending Server Addresses

Identifying all your email-sending servers may not be an issue if you are a small or well-controlled organization. However, it can be more difficult for larger organizations because of their size and their use of partners and vendors to send emails on their behalf. In that case, making a complete SPF authorization list is practically impossible. A related issue is that sometimes we cannot specify all the authorized servers in SPF. For example, you can only have 10 DNS lookups in an SPF check. If your SPF record must be more complicated than that due to all of the possible organizations that send emails for you, then you must either refrain from using SPF or leave some legitimate sending servers off the list. In cases where you cannot make a complete list, you can configure the SPF record as "weak." This means that if SPF matches, the message is legitimate; but if the weak SPF check fails because the message comes from an unauthorized server, it might or might not be legitimate.

Forwarded Messages Appear Illegitimate

When a message is forwarded, the from address does not change, but the sending server does. For example, if you receive a message from Bank of America and then forward it to your friend, it is now your email server sending a message that purports to be from bankofamerica.com.
If bankofamerica.com's SPF records were set as strict, then your friend's email server would identify the forwarded message as forged and mark it as spam or fraud. In most cases, that is not desirable. While there is a technology that allows forwarding to get around this (SRS – Sender Rewriting Scheme), it has yet to be widely adopted. For this reason, most domain owners set up their SPF records as weak (indicating that if the SPF check fails, the message could still be legitimate).

Intra-Domain Email Forgery

Because SPF checks only the domain name and the server, two different people in the same organization, Fred and Jane, can both legitimately send email from their @domain.com addresses using the same authorized servers for domain email. However, if Fred uses his account to send a message forged to appear to come from Jane's address, the SPF will check out as okay, even if the SPF is set as strict. SPF does not protect against intra-domain forgery at all.

Same Email Provider: Shared Email Servers Forgery

If two people using different domain names have the same email provider, they may also have the same SPF records. Email providers usually have their customers use a standard SPF record indicating that messages from any of the provider's servers are okay. In this case, it may be possible for any user of that email provider to send a forged message purporting to be from another user in an unrelated domain and have the SPF check pass. One way to avoid this issue is to use dedicated servers to send email from your domain. If your email provider allows it, you can accurately update your SPF list to indicate only the servers assigned to your account. Then, the SPF record would only reflect your sending and could not be corrupted by other customers of the email provider.

Sender Policy Framework Does Not Protect Against Spam

This is not a limitation of SPF, but it's worth mentioning in the context of email security. All SPF does is help you identify whether a message is forged. Most spammers are savvy. They use their own domain names and create valid SPF (and DKIM and DMARC) records so that their email messages look more legitimate. In truth, this does not make them look less spammy; it just says that the messages are not forged. Of course, if the spammer is trying to evade your filters by forging the sender address so that the sender is you or someone you know, then SPF can absolutely help.

How Attackers Subvert SPF

So, in the war of escalation where an attacker is trying to get a forged email message into your inbox, what tricks do they use to get around sender identity validation by SPF? As we have seen, most domains set up SPF weakly, so that messages that fail SPF are not automatically flagged as invalid. From an attacker's perspective, it all comes down to which sender's email address (and domain) they are forging. Can they pick an address to construct an email that will make it past SPF?
- If the sender's address does not have SPF configured, it's easy for the attacker to impersonate.
- The message will look legitimate if the attacker can send a message from a server authorized by the SPF for the domain. If the attacker signs up with the same email provider used by the forged domain, they may be able to send an email from the authorized servers for the forged domain. This makes the attacker's email look legitimate even if the forged domain's SPF records are strict.
- If the forged domain's SPF records are weak and the attacker can't use an authorized server to make the message look valid, it doesn't matter, as SPF failure won't make their message look forged.

If the attacker has a choice of addresses to forge to achieve their ends, it is likely that they can pick one that meets one of these three options.

How to Fix Sender Policy Framework?

SPF is helpful, but it is not the sole solution for email fraud prevention. What else can you do? A responsible domain owner who wants to protect their domain from forgeries and identify forged inbound emails takes additional steps, which we shall discuss in future articles. These include DKIM, DMARC, and other message signature and isolation techniques. A concrete sketch of the weak and strict SPF records discussed above follows below.

Read next: Preventing Email Forgery Part 2: DKIM
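To make the weak/strict distinction concrete, here is a sketch of what a published SPF record looks like. An SPF record is simply a TXT entry in the domain's DNS zone; the domain, IP address, and included provider below are hypothetical examples, not a recommendation for any particular setup.

```
; Strict policy: mail from servers not on the list should be rejected ("-all")
example.com.  IN  TXT  "v=spf1 ip4:192.0.2.10 include:_spf.mailprovider.example -all"

; Weak (soft-fail) policy: a failed check is suspicious but not conclusive ("~all")
example.com.  IN  TXT  "v=spf1 ip4:192.0.2.10 include:_spf.mailprovider.example ~all"
```

Each `include:` mechanism consumes one of the ten DNS lookups permitted per check, which is why large organizations with many sending partners run into the limit described above.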
ICMP stands for Internet Control Message Protocol. It uses protocol number 1 and is used mainly to report errors encountered while routing packets across a network path back to the source. An ICMP redirect tells a host on the local layer 2 network that a better path exists to a particular destination.

Let us take an example to understand how an ICMP redirect works. Suppose an end-host is configured with IP 10.1.1.100 and default gateway 10.1.1.1, which is router R1 (all addresses in this example are illustrative). Now suppose the host needs to reach the destination network 192.168.2.0/24. First the packet is sent to router R1 on port Fa0/1. R1, which has a static route for 192.168.2.0/24 with next hop R2 (10.1.1.2), realizes the packet was received on Fa0/1 and that Fa0/1 is also the interface through which the packet must be sent back out to reach 192.168.2.0/24. Hence R1 sends an ICMP redirect message to the end-host telling it to use R2 (10.1.1.2) as the gateway for that destination, as that is the better path. From then on, all packets from the host to the destination network 192.168.2.0/24 are sent to router R2 instead of R1.

Conditions that need to be met for an ICMP redirect to be generated are:
- The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
- The subnet or network of the source IP address is the same subnet or network as that of the next-hop IP address of the routed packet.
- The datagram is not source-routed.

If any of these conditions is not met, the ICMP redirect message is not sent. By default, Cisco routers have ICMP redirects enabled; they can be disabled using the 'no ip redirects' command at the interface level. An interface enabled with HSRP automatically disables ICMP redirects, but from Cisco IOS version 12.1(3)T onwards, ICMP redirects are supported with HSRP as well.
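The three conditions above are mechanical enough to capture in a few lines. The sketch below is an illustrative model of the decision a router makes, not actual router code; the interface names and addresses are the hypothetical ones from the example.

```python
from ipaddress import ip_address, ip_network

def should_send_redirect(ingress_if: str, egress_if: str,
                         src_ip: str, next_hop_ip: str,
                         connected_subnet: str, source_routed: bool) -> bool:
    """Model of the ICMP-redirect conditions described above."""
    same_interface = ingress_if == egress_if            # routed out the interface it arrived on
    subnet = ip_network(connected_subnet)
    same_subnet = (ip_address(src_ip) in subnet and     # source and next hop share the
                   ip_address(next_hop_ip) in subnet)   # directly connected subnet
    return same_interface and same_subnet and not source_routed

# R1's view of the example: host 10.1.1.100 sending toward the network behind R2 (10.1.1.2)
print(should_send_redirect("Fa0/1", "Fa0/1", "10.1.1.100", "10.1.1.2",
                           "10.1.1.0/24", source_routed=False))   # True -> redirect sent
```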
In any software-driven configuration, system or process, testing plays an indispensable role. It is on the strength of software testing that we can draw inferences about a system's loopholes, its efficiency and its scope. The basic testing approaches are categorized into:
- Black Box testing
- White Box testing

Though both forms of testing are used to validate a system's design and implementation, there are distinctions between the two. The comparison of white box testing vs black box testing can start right from defining both forms.

In the case of black box testing, the tester does not have details about the internal workings of the software system. The testing emphasizes the behaviour of the software, and it can be applied at every level of the software: unit, integration, system and acceptance. Meanwhile, white box testing denotes the form of testing where the tester is familiar with the internal workings of the system; it generally covers code statements, paths, branches and conditions. It is also termed code-based, transparent box, glass box or clear box testing.

White Box Testing vs Black Box Testing: Distinctive Features

- Internal knowledge: not required for black box testing; essential for white box testing.
- Focus: black box testing examines external behaviour against requirements; white box testing examines code statements, paths, branches and conditions.
- Levels applied: black box testing suits all levels (unit, integration, system, acceptance); white box testing is applied mainly at the unit and integration levels.
- Typical examples: black box – functional, system and acceptance testing; white box – unit testing and code coverage analysis.
- Skill required: white box testing demands programming knowledge and more advanced techniques; black box testing does not require knowledge of the implementation.

From the comparison above, covering a wide range of parameters, it is easy to see the components that draw a clear picture of distinction between the two testing methods. It is pretty evident that white box testing requires more advanced skills and techniques and renders more in-depth analysis. The examples provided should help you classify the test more easily the next time you perform one. Both forms of testing have their advantages but also carry their fair share of disadvantages, so it is advised to weigh them and act accordingly.
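To ground the distinction, here is a small hypothetical illustration in Python: one function, tests written purely from its specification (black box), and tests written from its code structure (white box). The function and values are made up for the example.

```python
def shipping_fee(weight_kg: float) -> float:
    """Flat fee under 1 kg; per-kg surcharge above that."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 1.0:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1.0)

# Black box: derived from the spec alone -- inputs and expected outputs,
# with no knowledge of how the function is written.
assert shipping_fee(0.5) == 5.0
assert shipping_fee(3.0) == 9.0

# White box: derived from reading the code -- one test per branch,
# including the error path and the boundary between branches.
assert shipping_fee(1.0) == 5.0        # boundary: first weight on the surcharge branch
try:
    shipping_fee(0)                    # error branch
except ValueError:
    pass
```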
Point-to-Point Tunneling Protocol (PPTP) is a networking standard that is used when connecting to a Virtual Private Network (VPN). VPNs are a way to create online privacy and anonymity by creating a private network from a public internet connection. They are often used by people who work remotely and need to connect to an office network securely. Nowadays, PPTP is considered obsolete for use in VPNs because of its many known security deficiencies. Nevertheless, PPTP is still in use in some networks.

The phrase "point to point" refers to the specific type of connection created by the protocol. It allows one point (the user's device) to access another specific point (the user's office network) over the Internet. The "tunneling" part of the term refers to the way one protocol is encapsulated within another protocol. In PPTP, Point-to-Point Protocol (PPP) frames are wrapped inside TCP/IP, which carries them across the Internet. Even though the connection is created over the Internet, the PPTP connection creates a direct link between the two locations, allowing for a secure connection.

Related Readings: What Is A VPN Protocol And Which One Should You Use?

What does this mean for an SMB?
- Remote Desktop services, which place employees on their workstation at work, with or without (the latter is recommended) the ability to download files to their home workstation. This solution largely eliminates the potential for unsecured remote workstations to compromise your internal network.
- Purpose-built IPsec VPN tunnels. These are for companies that require direct remote connections from laptop users (company owned and secured) into internal networks for the files and applications that are ONLY available on the LAN of the company. These are becoming less common as so many companies migrate to cloud services and solutions.

Finally, in all cases it is critically important (this cannot be overstated) to pair all remote access into your environment and your data with two-factor authentication. Do not enable remote access without it.
Malware is a type of malicious software that can cause all sorts of problems for businesses and consumers alike. It comes in many different forms, including viruses, spyware, worms, and Trojan horses. In 2020 alone, 5.6 billion malware attacks were recorded, which makes you wonder: what causes malware infections, and how can we stop them? There are many sources of malware, like file sharing, opening spam email attachments and opening office documents from unknown sources. There are also different ways to stop it, such as using antivirus software, keeping your devices updated with the latest patches, and only downloading from trusted sources. Now, let's look at the top three causes of malware and how you can stop each.

1. Peer-to-Peer File Sharing Services

Peer-to-peer file-sharing services are one of the most common ways for malware to spread. Once a cybercriminal has infected one machine or device, they can often control it remotely and use it to send out more viruses and snare even more machines. If you have downloaded files from peer-to-peer networks without making sure that what you're getting is safe, there's a high chance that you will be infected. To stop this, it's wise to scan files with antivirus software before you open them. Lastly, most businesses are wise to prohibit the use of peer-to-peer sharing software in their organizations.

2. Macros in Office Documents

Another way malware can spread is through office documents shared via email, Slack, or what have you. Cybercriminals will hide malicious macros in seemingly innocent Office documents that, once opened, deploy malware. To stop this, you should be very careful when opening office documents from unknown sources. Be sure you know the person who sent you the document and verify that their email address is correct. Understanding how the document was sent or received can help you avoid malware infection. It's also wise to use antivirus software on all devices, so if an infection does occur, it's easy to quarantine and erase the malicious code.

3. Email Attachments

Perhaps the most popular way to spread malware is through spam email attachments. When opening emails, be careful when clicking links in the body of the email or opening attachments, as both can lead to accidentally downloading malware. To counter this, many email clients have built-in anti-virus scanners for attachments. When it comes to links in the body of an email, always hover over the link and look at the URL preview at the bottom of your browser. You should know right away if the website is legitimate.

Stay Safe from Malware Attacks With DataLocker

Malware is lurking everywhere, and it can spread easily, even via USB drives. Rather than risk accidentally storing malicious software on your drives, consider using DataLocker's line of secure drives. Not only do they encrypt data, but they also have the option to add anti-malware software directly on the device. This ensures that nothing bad ever gets stored on a portable drive. Visit our products page to learn more.
ARP and the CAM Table

Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol address (IP address) to a physical machine address that is recognized on the local network. A table is used to maintain a correlation between each MAC address and its corresponding IP address. ARP provides the protocol rules for making this correlation and providing address conversion in both directions.

The Content Addressable Memory (CAM) table is a system memory construct used by Ethernet switch logic which stores information such as the MAC addresses available on physical ports, along with their associated VLAN parameters. The CAM table is present in all switches for layer 2 switching. It allows switches to facilitate communications between connected stations at high speed and in full duplex, regardless of how many devices are connected to the switch. Switches learn MAC addresses from the source addresses of Ethernet frames arriving on their ports, such as Address Resolution Protocol (ARP) response packets.

Protocols vulnerable to sniffing:
- Telnet and Rlogin: keystrokes, including usernames and passwords.
- HTTP: data sent in clear text.
- SMTP: passwords and data sent in clear text.
- NNTP: passwords and data sent in clear text.
- POP: passwords and data sent in clear text.
- FTP: passwords and data sent in clear text.
- IMAP: passwords and data sent in clear text.
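To illustrate the MAC-learning behaviour described above, here is a minimal toy model of a CAM table as a dictionary keyed by VLAN and MAC address. It is a sketch for intuition, not switch firmware; the port names and addresses are hypothetical.

```python
# Toy CAM table: (vlan, source MAC) -> ingress port, learned from observed frames.
cam_table: dict[tuple[int, str], str] = {}

def learn(vlan: int, src_mac: str, port: str) -> None:
    """Record which port a source MAC was last seen on (MAC learning)."""
    cam_table[(vlan, src_mac)] = port

def forward(vlan: int, dst_mac: str) -> str:
    """Known destination -> its port; unknown -> flood to all ports in the VLAN."""
    return cam_table.get((vlan, dst_mac), "flood")

learn(10, "aa:bb:cc:dd:ee:01", "Gi0/1")    # frame from host A arrives on Gi0/1
learn(10, "aa:bb:cc:dd:ee:02", "Gi0/2")    # frame from host B arrives on Gi0/2
print(forward(10, "aa:bb:cc:dd:ee:02"))    # "Gi0/2" -- switched directly
print(forward(10, "aa:bb:cc:dd:ee:99"))    # "flood" -- address never learned
```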
The march towards sustainability is not just a trend but a vital transition across industries, and the construction sector is no exception. The global construction landscape is steadily being redefined through a lens of sustainability, setting new standards and creating a blueprint for the future. From the sweat of laborers to the aspirations of architects, construction is bending to the will of the green revolution. With a Compound Annual Growth Rate (CAGR) of 8.91% on the horizon from 2024 to 2031, it’s clear that sustainability will not only influence but fundamentally reconstruct the industry’s MO. This transformative process is a confluence of technology, legislation, and evolving market ideologies. But what exactly are the forces at play, and how will these changes manifest by 2031? The Driving Forces Behind Sustainable Construction Environmental concerns have ushered in an era where sustainability is no longer optional in construction. Driven by the imperatives of climate change and resource conservation, there’s been an ascendance of ecological consciousness shaping industry standards. Governments and international agencies have rolled out regulations emphasizing sustainable practices, resulting in an unprecedented propulsion towards green initiatives. Financial incentives, both carrots and sticks, are spurring companies to innovate, adapt, and adopt methods that align with the Earth’s finite resources. Technological innovation stands shoulder-to-shoulder with environmental stewardship, propelling the construction industry into the sustainable domain. Tools like Building Information Modeling (BIM) and modular construction techniques underscore a future where accuracy, efficiency, and sustainability are intertwined. These technologies underpin the execution of complex projects with a precision that minimizes waste, optimizes resource use, and reduces environmental impact, driving a construction philosophy that’s as smart as it is sustainable. Impacts of Government Regulation and Incentives On the complex chessboard of the construction industry, government policies are the game-changing moves. By implementing building codes and standards geared towards sustainability, governments across the globe have catalyzed the adoption of green construction methods. These regulations are coupled with fiscal incentives that make sustainable construction not just a moral choice but an economically sensible one. Grants, tax breaks, and subsidies are transforming once-niche green techniques into industry mainstays. The lure of green certifications has become a beacon for sustainability in construction. As LEED, BREEAM, and other certifications gain prominence, they serve as benchmarks for sustainability and increase a building’s market value. Developers and investors are driven to exceed standard practices, lured by the tangible benefits of certification. Essentially, what began as a set of guidelines is increasingly becoming the industry’s gold standard, coalescing government intention with market behavior. The Economic Effect of Going Green Contrary to the outdated belief that sustainability comes at a prohibitive cost, green construction is proving to be economically advantageous. The initial investment in sustainable technology and materials may indeed be higher, but this is rapidly offset by long-term savings in energy and maintenance costs. 
Moreover, the psychological impact of occupying a space that’s both innovative and responsible engenders a strong public appeal, possibly translating into higher resale values. Financial foresight is becoming increasingly important as stakeholders recognize the potential profitability of green construction. In the long run, sustainable buildings signify reduced operational costs, attracting savvy investors and owners. The market is adjusting, and as knowledge spreads, the financial viability of green construction becomes more apparent. Overcoming the psychological barrier of upfront costs is a challenge, but it is one that the construction industry, buoyed by real economic benefits, is prepared to meet head-on. The Role of Consumer and Investor Awareness Modern consumers are increasingly cognizant of the environmental footprint of their habitats. The surge in demand for eco-friendly living spaces fortifies the market for sustainability-focused construction companies. This consumer enlightenment isn’t a whisper but a clarion call for change; one that’s heeded by investors too. Sustainability is now considered an asset, not just in terms of environmental benefit, but as a harbinger of return on investment. Awareness has converted into action — investors are steering their capital towards construction projects that promise sustainability. This infusion of funds is breeding a new generation of construction ventures where ecological responsibility is as foundational as the concrete in the buildings they erect. This shift in investor sentiment is fueling the industry’s growth and inspiring confidence among business leaders that sustainability and profitability are not just compatible, but synergistic. Technological Advancements Steering the Change Emerging technologies have become the compass by which sustainable construction navigates. The adoption of renewable energy sources, for example, is no longer a novelty but a necessity, driven by both ethics and economics. Smart, adaptive building systems reduce resource consumption, optimize energy use, and extend the life of the structures. The construction industry is embracing these technological leaps, recognizing that sustainability can align with the digital revolution. The progressive machinations of construction technology are creating structures that are not just built to last but built to evolve. Modular construction, 3D printing, and smart automation are a few examples of how the construction industry is innovating with sustainability at the helm. These advancements not only promote environmental responsibility but also reflect an industry aiming to be resilient, adaptable, and ahead of the curve in addressing global challenges. The Hurdles on the Path to Sustainability The journey towards sustainable construction is not without its obstacles. The perception of high initial costs for green technology and materials can deter investment, despite the promise of long-term savings. Moreover, inconsistencies and the fragmented nature of regulations across regions create a complex landscape for industry participants to navigate. These challenges are formidable but not insurmountable. Collaboration and education are key to overcoming these barriers. By sharing knowledge, resources, and best practices, the construction industry can harmonize standards and costs. Increased transparency and dialogue will encourage all stakeholders, from governments to contractors, to play a part in the solution. 
The path may be steep, but the construction industry’s commitment to sustainability is unwavering, consolidating growth that’s both ethical and economical. Sustainability in Construction: A Global Outlook Behind the growing urgency for sustainable construction lies the variable landscape of global regions, each contributing to the narrative in its own distinctive way. North America has emerged as a leader, its stringent environmental regulations acting as catalysts for the development of green building practices. By fostering an encouraging legislative environment, it sets a precedent for the rest of the world. The Asia-Pacific region, with its unstoppable urban development and infrastructural growth, presents enormous opportunities and challenges for sustainable construction. It is a fertile ground for innovation as the demand for housing and commercial spaces burgeons. Here, sustainability is not just a possibility; it’s an imperative, helping to reframe the narrative of development with sustainability as a central tenet. Leading Innovators and Market Players Sustainable construction thrives with major companies and innovative startups contributing actively to eco-friendly building practices. Firms like Bauder Ltd., Clark Group, and The Turner Corp. are not only adapting to the green revolution but are also shaping it. Their efforts are intertwined with their core operations, making sustainability a lasting aspect of their identities. As the industry heads towards 2031, it’s clear that enduring changes are underway, with sustainability becoming an essential component of construction methods. Regulatory changes and technological advancements are key drivers, ensuring that our environmental responsibility is reflected in the structures we create for future generations. This time marks a transformative era in construction, establishing sustainable foundations for years to come.
Motorola's new way of bonding light-emitting compounds onto a silicon chip may do more to deliver smaller, cheaper and faster semiconductors than anything announced by chip makers this year, analysts said. "This could go down in history as a major turning point for the semiconductor industry," said Steve Cullen, Cahners In-Stat Group's principal semiconductor analyst.

Physicists at Motorola's research lab in Tempe, Ariz., attacked a problem that had confounded chip makers for 30 years: how to grow light-emitting semiconductors on a silicon wafer to combine the best qualities of each on a single chip. Having to use separate silicon integrated circuits and photon chips in a computer's microprocessor slows the speed to about one-tenth of its potential capacity, said Jim Prendergast, director of Motorola's physical sciences research lab. "A problem with any silicon technology is getting data in and out of a chip. Once it leaves the chip, the speed of a typical computer is less than 200 megahertz, even though it might be working with a 2-gigahertz microprocessor," he said.

Until now, scientists had been unable to bond semiconductors that have light-emitting properties to silicon integrated circuits. But Jamal Ramdani, a Motorola scientist, thought of creating an intermediate layer with properties that bonded well with both silicon and light emitters. A large team of colleagues helped him perfect the recipe. The team used strontium titanate to bond silicon with light-emitting gallium arsenide, Prendergast said. Now Motorola is perfecting a different intermediate layer to bond indium phosphide, a compound that has more potential uses for fiber-optic communications.

The technology will accelerate the development of optical fiber to the home, streaming video on cell phones and systems that help automobiles avoid collisions, the company said. But the crowd-friendly breakthroughs are a couple of years away. Computing and communications have always been separated because silicon is in the guts of a PC, while light-emitting compounds are at the core of optical networks. "This says that they're not so separate after all, if you can get everything on one chip," Cullen said. "The history of anything in electronics teaches that the more stuff you can cram on one chip, the better the performance and the cheaper and more pervasive it becomes."
With the increase of online applications there is, naturally, an increase of valuable and interesting data. Hackers and other malicious actors have a real interest in that data, either to use it for their own nefarious purposes or to sell it to other interested parties. As data, and its value, grows, attackers are making increasingly refined attempts to obtain it. Recent news of breaches, such as those at iCloud and eBay, has brought the matter of user authentication for online services into focus. Unfortunately, the majority of hacking attempts target authentication schemes, as username-and-password pairs are used by most Internet resources. Therefore, effective, durable solutions, like two-factor authentication (2FA), are required.

Essentially, 2FA systems use two independent forms of identification to authenticate users. Most use a knowledge factor (what the user knows) and a possession factor (what the user has). For an authentication system to be classed as "two-factor", proof of each form must be presented. This makes it inherently more robust than simply using passwords as a means of protection.

The price of passwords

Password vulnerabilities can be exploited by all manner of threats. Firstly, passwords can (still) be found – written down or by "shoulder-surfing". Secondly, attackers can use social engineering to fool users into revealing passwords. Thirdly, via large-scale dictionary attacks, hackers can simply guess. Finally, attackers can obtain passwords by breaching poorly protected databases, sniffing end-user traffic at public Wi-Fi stations, or using Trojan-horse malware on user devices.

CyberVor's recent billion-password heist suggests a mix of various manual and automated methods. It seems the attackers bought a list of compromised e-mail addresses to which they then sent malware, as well as to the addresses in the address books of the compromised machines and accounts. Whenever users of the compromised devices went online, the malware activated, testing visited sites for password management vulnerabilities. Upon discovering vulnerabilities it could exploit, the malware sent back details of the site. This occurred on a large scale, tracking users across some 420,000 sites over several months. The attackers subsequently harvested the password databases from those sites. With such a precise strategy, it is perhaps surprising that only 1.2 billion username-password pairs were obtained. Even if not all the claims about the attack are true, this is at the very least a wake-up call for companies to act quickly. Evidently, sole reliance upon the humble password is no longer suitable. In computing systems, we identify via a single piece of evidence – a password. But as online resources have become increasingly valuable, it has become essential to protect them from mounting risks by demanding more than one form of identification.

What makes true "two-factor" authentication?

2FA systems hold great promise for preventing compromise of systems because 2FA embodies the defend-in-depth security principle at both the micro level – in that the two factors present more than one hurdle for an attacker – and at the macro level – in that 2FA can be used in conjunction with, say, encryption or other defensive measures. It is key that the factors are independent of one another. To implement knowledge, 2FA systems need the user to present something they know, like a password. To implement possession, they must present something they have, like a token.
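The possession factor is often realised today as a one-time code generated on a device the user holds. As a hedged illustration (a minimal sketch in the style of RFC 6238, not any vendor's implementation), here is a time-based one-time password (TOTP) generator; the shared secret is a made-up example.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period               # moving factor: current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Server and token were given the same secret once, at enrolment.
print(totp("JBSWY3DPEHPK3PXP"))   # e.g. "492039" -- changes every 30 seconds
```

The server, which knows the same secret and the time, computes the same code; a matching code demonstrates possession of the secret-holding device without the secret itself ever crossing the wire.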
The most common misconception concerns the knowledge factor: some online administrators and service providers believe that demanding two e-mail addresses amounts to 2FA – it does not. Similarly, asking for a password and then a PIN does not amount to 2FA, because the two pieces of information both represent knowledge factors. These examples do amount to what is commonly called "strong authentication" – as distinguished from 2FA by the likes of the United States' FFIEC and FDIC. Unfortunately, the European Central Bank insists on referring to 2FA as "strong customer authentication", refusing last year to amend its terminology. This has the potential to fuel further confusion. In the operational risk context, dual controls are common, but dual controls differ from 2FA, as they ask for two things of the same type; for example, two signatures from two individuals.

However, there is good news: the best 2FA is seamless and almost invisible. And yes, it exists – we carry it about in our wallets and handbags: the humble bank card. When you present yourself to the ATM and demand cash, you identify yourself by presenting something you have, the card, and something you know, the PIN. The mobile phone presents yet another example of 2FA in action. In attempting to gain access to the mobile phone carrier's network, you present not just the handset and the SIM within it, but also a PIN ("something that you know"). Worse, and ironic, is the fact that we then use our devices to access services that themselves are not protected with 2FA. That inverts the security principle of defend-in-depth, so it has to be hoped that the recent scandals focus the minds of executives and administrators to the point where true 2FA systems become the norm.

Properly implemented, 2FA systems hold great promise for preventing compromise of online systems. However, it is essential that multiple factors are indeed used for authentication, before handing users on to the authorisation systems that enforce policy and grant or deny access to valuable resources.

Sourced from Toyin Adelakun, VP of Products, Sestus
Data backup and recovery—the process of making copies of critical data, storing it securely so that it remains accessible, and restoring it in the event of a disaster or drive failure—are fundamental to data protection and cybersecurity. Though simple at its core, the concept requires forethought and strategy around scheduling, media, and storage, with high stakes for enterprises with operational dependencies on data. As a result, dozens of vendors serve this space with a wide range of products and services. This article is a comprehensive guide to data backup and recovery, how it works, and the different types and techniques involved.
- What is Backup and Recovery?
- How Does Backup Work?
- How Does Recovery Work?
- Types of Data Backup
- Types of Data Recovery
- Bottom Line

What is Backup and Recovery?

Backup and recovery is the process of copying enterprise data, storing it securely, and being able to restore it in the event of a disaster or service interruption. Any number of events can cause data loss, from drive failures to accidental deletions to malicious attacks or theft to natural disasters. Backing up the data is simply creating a copy that can be used to replace the original if something happens to it. The concept becomes more complex when you consider that the cause of the initial data loss might also affect the secondary copy. For example, copying a computer's hard drive to an external hard drive gives a measure of protection—but what if both drives are damaged in a fire, or a piece of malware that infected the original computer also corrupts the backup drive? A true backup strategy involves keeping multiple copies in different locations on a variety of storage media to cover all possibilities.

While backing up data and recovering data are separate, individual processes, they're linked so inherently that it's instructive to think of backup and recovery as a single concept. Having a secondary copy of data is useless if it cannot be restored to a computer or network, or if it takes so long to restore that business continuity suffers. Ensuring that data can be fully restored is just as important as the act of backing it up, and organizations are devoting more attention to backup testing to verify that their backups can be recovered speedily in the event of an incident.

How Does Backup Work?

At a high level, data backup is accomplished using specialized software that copies data to a backup appliance, a backup server, or the cloud, where it is stored either on hard disk drives (HDDs), solid state drives (SSDs), or tape. Even data sent to the cloud is stored on one or more of these media. The data is encrypted as a safeguard—the most thorough backup systems encrypt the data both at rest and in transit. It's the details that can make or break a backup. Here's a look at the most critical components of a successful backup strategy.

3-2-1 Backup Rule

The 3-2-1 rule is often employed to make sure there are sufficient copies of data to cover all eventualities. It establishes a best practice of making a total of three copies of critical data, using at least two different storage media, and storing at least one copy offsite. There are many possible configurations, from saving backups to an on-premises server and the cloud to a server and tape, with the tape stored offsite. However, not all configurations are equal—if malware infects data that is backed up to the cloud, the malware can also infect the backup copy over the internet. When choosing media, the goal is to balance speed and cost.
Almost everyone wants a recovery point objective (RPO)—which measures the greatest amount of data the organization is willing to lose after recovery from a failure—of zero. A zero RPO means no data is lost, but a zero RPO can be expensive. Prioritizing data and choosing the right media can help reduce cost. For example, a small subset of data can be categorized as high priority and backed up to high-speed flash so that it can be recovered almost immediately. The remainder of the data can be assigned lower classifications and stored on disk, with the lowest priority data—cold data—stored on tape.

Depending on the type and volume of files, backups can be enormous. This is less of an issue when saving the backup initially, but for a business that needs to restore data in a hurry, large files mean slow recovery. In many cases, compression and deduplication are used to reduce file sizes and network traffic while improving transmission speeds. Compression algorithms reduce capacity requirements by as much as three times in some cases. Deduplication—removing repeated files or data to reduce size—is as effective as compression or more so, depending on the nature of the data.

Backup Timing and Frequency

Because backups can take a long time—and slow down network traffic—they should be scheduled for times when traffic is low; they are generally done at night and on weekends. Automation features built into backup applications can schedule and run them without intervention. The best software also verifies that all data was successfully backed up, with no failures or corruption. Additional choices need to be made about how often to back up data, which we'll cover in more detail in the Types of Data Backup section below.

In-House or Outsourced

Backups can be done internally, by an IT team, or externally by a service provider that takes care of all the underlying infrastructure, keeps the software updated, and provides the storage. Increasingly, organizations are relying on backup as a service (BaaS) or consumption-based storage as a service (STaaS) to manage their backup and restore functions. Learn more about the top backup software on the market today.

How Does Recovery Work?

When a data loss incident occurs, the backed-up copy must be recovered from wherever it is stored. Depending on the nature of the incident, recovery targets may vary—in many cases, the copies are restored to the original systems from which they came, but in the event of a physical disaster they may be sent to other systems at a colocation provider or a secondary data center. Many businesses store backups for years only to find that the data is unrecoverable or only partially recoverable due to a number of factors:
- Malware entered backup applications and infected the backups
- Incomplete backups due to poor administration or insufficient backup windows
- Corrupt or unusable files

Recoverability testing should be done regularly to ensure that the backup upon which the business is relying is, in fact, reliable. As with backups, recovering a huge amount of data over an internet connection is going to take a long time. A good backup and recovery plan will prepare for this by dictating a process to recover high-priority data first. The amount of data and the storage media can both play a role in recovery time. For example, enterprises relying on cloud backups may have to wait days for all their data to be retrieved—as an alternative, some cloud providers offer to ship a physical box filled with your data within 24 hours to speed up the process.
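To give a feel for the arithmetic behind those recovery windows, here is a back-of-the-envelope sketch; the data size, link speed, and efficiency figure are made-up examples.

```python
def transfer_hours(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_tb terabytes over a link_mbps link at a given efficiency."""
    bits = data_tb * 1e12 * 8                        # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # effective throughput in bits/sec
    return seconds / 3600

# Restoring 10 TB over a 500 Mbps connection at 80% effective throughput:
print(f"{transfer_hours(10, 500):.1f} hours")        # ~55.6 hours -- well over two days
```

Numbers like these are why recovery plans prioritize which data comes back first.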
Types of Data Backup

Data can be backed up in several different ways depending upon the amount of data and the organization's requirements. Here are the two most common:
- Full data backups. As the name implies, a full backup makes a copy of every single piece of data. Full backups are always done the first time that an organization initiates a backup, to create an initial, complete copy, and some businesses perform a full backup on a regular schedule. The advantage is that a recent copy of all data is always available. The downside is that it takes up a lot of space, and purging old backups must be done regularly.
- Incremental data backups. An incremental backup captures only files that were created or changed since the most recent backup. Incrementals can minimize backup size and network traffic. Some organizations store a single full backup and consolidate incrementals into that copy.

Types of Data Recovery

Data can also be recovered in several ways. A complete recovery restores all enterprise data, which may be needed following a disaster or a ransomware attack, for example. But more often, a business may opt for a more granular approach. Perhaps one system suffered data loss, or someone accidentally deleted a single data set—in such cases, specific files, objects, partitions, or systems can be recovered as needed.

Bottom Line: Safeguard Enterprise Data With Backup and Recovery

Businesses of all sizes and across all industries are increasingly reliant upon data for everything from informed decision-making and customer engagement to sales, marketing, and customer service. They're investing in ways to collect, store, and analyze that data to make it actionable and accessible and to mine it for insights that give them a competitive advantage. They should also be investing in a well-considered backup and recovery strategy that protects that data and ensures that it is available for rapid restoration in the event of a failure, disaster, or cyberattack. A partial data loss incident can be an inconvenience; a complete loss can be devastating to business continuity, operations, and customer reputation. Whether they handle it in-house or outsource it to a provider, all enterprises should be making backup and recovery—including planning and testing—an essential part of their daily operations and IT workflow. Read 11 Data Backup Best Practices to learn more about the standards you should follow to ensure a successful backup and recovery strategy.
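As a concrete footnote to the full-versus-incremental distinction above, here is a minimal sketch of the selection step at the heart of an incremental pass. It is illustrative only: real backup products track changes with journals, archive bits, or block-level snapshots rather than scanning modification times, and the path shown is hypothetical.

```python
import os, time

def files_changed_since(root: str, last_backup_ts: float) -> list[str]:
    """Select files modified after the previous backup -- an incremental pass."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_ts:
                changed.append(path)    # only these need to be copied this run
    return changed

# Example: everything under /data modified since a backup 24 hours ago.
print(files_changed_since("/data", time.time() - 24 * 3600))
```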
Researchers Measure Accuracy of Two-Qubit Operations in Silicon for First Time (ScienceAlert) Scientists have measured the accuracy of two-qubit operations for the first time in silicon, bringing the world a big step closer to reliable quantum computing. The team was able to refine the fidelity of their two-qubit logic gate to a threshold of up to 98 percent. That the researchers were able to achieve this in silicon – the material commonly used for conventional computer chips – demonstrates that the same substance could also be used in future quantum computers one day, in far more powerful logic systems comprised of millions of qubits, the team suggests. “Two-qubit fidelities reaching the required limits for fault-tolerance are therefore within reach and underpin silicon as a technology platform with good prospects for scalability to the large numbers of qubits needed for universal quantum computing,” the team led by nanoelectronics researcher Andrew Dzurak from the University of New South Wales (UNSW) in Australia, explained.
There is no doubt that Ethernet fiber-optic communications provide many advantages over copper-based Ethernet communications, including immunity to electrical noise and longer transmission distances. Systems that require fiber-optic communication can use switches that contain built-in fiber-optic ports. However, if your switch does not have built-in fiber-optic ports, or does not have enough of them, then a media converter will be needed to convert copper-based communications to fiber-optic communications. This article reviews the different types of media converters and the wide variety of applications for them.

What is a Media Converter?

Media converters are flexible and cost-effective devices for implementing and optimizing fiber links in all types of networks. Media converters enable you to connect different types of media, such as twisted pair, fiber, and coax, within a network. The most widely used converters are probably the ones that convert a computer's UTP Ethernet ports to fiber, which lets you extend your Ethernet network beyond the 100-meter limit imposed by copper cable. Other converters convert multi-mode to single-mode fiber, convert analog signals to digital, multiplex several signals over one fiber pair, or perform other signal processing. In short, media converters do what their name suggests: they convert one medium to another.

Types of Media Converter

There are a wide variety of media converters available that support different network protocols, data rates, cabling and connector types. The two main kinds are copper-to-fiber media converters and fiber-to-fiber media converters.

Copper-to-Fiber Media Converters

The most common type of media converter is a device that functions as a transceiver, converting the electrical signal used in copper UTP network cabling into the light waves used in fiber-optic cabling. Fiber-optic connectivity is necessary when the distance between two network devices exceeds the transmission distance of copper cabling. Copper-to-fiber conversion using media converters enables two network devices with copper ports to be connected over extended distances via fiber-optic cabling.

- Ethernet Copper-to-Fiber Media Converters. Supporting the IEEE 802.3 standard, Ethernet copper-to-fiber media converters provide connectivity for Ethernet, Fast Ethernet, Gigabit and 10 Gigabit Ethernet devices. Accordingly, these converters are usually divided into Fast Ethernet, Gigabit and 10 Gigabit media converters. In a typical application, Ethernet media converters connect Ethernet switches by way of multimode fiber and UTP copper cabling.
- TDM Copper-to-Fiber Media Converters. The most common TDM copper-to-fiber converters are T1/E1 and T3/E3 converters, which provide a reliable and cost-effective method to extend traditional TDM (Time Division Multiplexing) telecom protocol copper connections using fiber-optic cabling. T3/E3 and T1/E1 converters usually operate in pairs, extending the distances of TDM circuits over fiber and improving noise immunity, quality of service, intrusion protection and network security.
- Serial-to-Fiber Media Converters. Serial-to-fiber converters provide fiber extension for serial-protocol copper connections. They can automatically detect the signal baud rate of the connected full-duplex serial device, and they support point-to-point and multi-point configurations.
Fiber-to-Fiber Media Converters

Fiber-to-fiber media converters can provide connectivity between multi-mode (MM) and single-mode (SM) fiber, between fiber sources of different power, and between dual fiber and single fiber. In addition, they support conversion from one wavelength to another. Fiber-to-fiber media converters are normally protocol independent and are available for Ethernet and TDM applications.

- Multi-mode to Single-mode Converters. Enterprise networks often require conversion from MM to SM fiber, which supports longer distances than MM fiber. Mode conversion is typically required when lower-cost legacy equipment uses MM ports but connectivity to SM equipment is required; when a building has MM equipment while the connection to the service provider is SM; or when MM equipment is in a campus building but SM fiber is used between buildings.
- Dual Fiber to Single-Fiber Converters. Enterprise networks may also require conversion between dual and single fiber, depending on the type of equipment and the fiber installed in the facility. Single fiber is single-mode and operates with bi-directional wavelengths, often referred to as BIDI. Typically, BIDI single fiber uses 1310nm and 1550nm wavelengths over the same fiber strand in opposite directions. The development of bi-directional wavelengths over the same fiber strand was the precursor to Wavelength Division Multiplexing.

Applications of Media Converter

Media converters do more than convert copper to fiber and convert between different fiber types. Media converters for Ethernet networks can support integrated switch technology and provide the ability to perform 10/100 and 10/100/1000 rate switching. Additionally, media converters can support advanced bridge features, including VLANs, Quality of Service (QoS) prioritization, port access control and bandwidth control, and so facilitate the deployment of new data, voice and video services to end users. Media converters can provide all these sophisticated switch capabilities in a small, cost-effective device.
Storing video requires a lot of storage space. To make this manageable, video compression technologies, known as codecs, were developed. In 2003, the h.264 codec (commercial name AVC) was released. It is now the industry standard for video compression, as it offers good compression and can be played on virtually any device and platform. All modern browsers, operating systems and mobile platforms provide support for h.264.

H.265 / HEVC

Ten years is a long time in the IT industry, so in 2012 the successor to h.264 was released, known as h.265 (commercial name HEVC). HEVC promises a massive 50% bandwidth reduction for the same video quality. That is a significant saving: with cloud recording, it can mean the difference between installing 5 or 10 IP security cameras at your location. Although h.265 was released almost 5 years ago, adoption has been slow. The primary reason is that, unlike h.264, which has one patent pool, h.265 has three patent pools with different pricing structures and terms and conditions. The second patent pool (HEVC Advance) was introduced in 2015, 3 years after the launch. This lack of clarity about the royalty situation around h.265 has hindered adoption, and as a result major browsers have either no support at all (e.g. Chrome, Firefox) or only partial support (Edge). Because of this, many content providers have stuck with h.264, because at least they know it will always play.

In 2015, the Alliance for Open Media was launched by several large IT companies like Google, Microsoft, Mozilla and Cisco to create an open and royalty-free codec called AV1. Based on Google's open VP10 codec, with technology from Cisco's Thor and Mozilla's Daala codecs, the consortium set out to create an alternative to h.265. This codec is a strong competitor to h.265 for 5 reasons:
- Royalty free. AV1 is completely royalty free, meaning no royalties need to be paid to any patent pool. Google has already proved it is possible to release a royalty-free codec with VP9. In addition, the codec will be open source under the very permissive BSD license, speeding up adoption.
- Better compression. Depending on which benchmark you use, AV1 will be around 30% better than h.265/HEVC. Again, a significant improvement at a reasonable CPU cost. In contrast to HEVC, AV1 will also show these improvements on low-bitrate cameras, which is especially useful for IP security cameras.
- Will play everywhere. With Apple, Google, Microsoft and Mozilla supporting AV1, we can expect support in all major browsers and operating systems in the near future. With this new codec, support for h.265 in major browsers is very unlikely to happen. Unlike h.265, AV1 support is already available in development versions of Firefox.
- Hardware accelerated. Decoding video requires a lot of CPU. Many devices, like your laptop, phone or tablet, have hardware acceleration to prevent overheating and battery drain. AV1 is backed by major chip vendors like Intel, AMD, Broadcom and ARM. It will take at least a year before we see the first devices shipped with AV1 support, but hardware support looks promising.
- Content providers support AV1. Major content suppliers like Netflix, Facebook, YouTube, the BBC, Amazon and Hulu will provide content in AV1. YouTube already boycotts HEVC by providing 4K video only via its own VP9 codec; as a consequence, it currently does not play on Apple devices.

How will AV1 affect the video surveillance industry?
Manufacturers of IP security cameras are now slowly adopting h.265 in their cameras, but playing that video back remains problematic on mobile devices and in browsers. The solution is to transcode video from h.265 to another format, but this has major downsides:
- Transcoding means decompressing the video and compressing it again in a different format. This generally takes time and comes with "encoding delay." As a consequence, your real-time video is not as real-time as you would expect.
- Transcoding requires a lot of CPU, and you can only transcode a few video streams at the same time. This adds cost to any video surveillance platform, whether it's cloud or a traditional recorder.
- Transcoding reduces the video quality. Codecs compress video by throwing away video data, and once it's removed, it cannot be recovered. Every time you compress video, the codec throws more video data away, resulting in lower quality.

To prevent transcoding, it is important that security cameras and the devices people use support the same codecs. Cameras are starting to support HEVC, but popular consumer devices still lack support. With AV1, the majority of platforms can quickly add support (Firefox already has it in a development version). The question is: when will we see the first IP security cameras that support it? Whether the next industry standard becomes h.265 or AV1 is still unclear. What is clear is that Eagle Eye Networks is closely following the market, and when one of the two becomes dominant we will support it.

Written by Erik Jan Philippo – Product Owner, Eagle Eye Networks
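To make the transcoding penalty concrete, this is roughly what a software transcode from h.265 to h.264 looks like with the open-source ffmpeg tool. The filenames and quality setting are arbitrary examples, and the encoders available depend on how your ffmpeg build was compiled.

```
# Fully decode the h.265 stream and re-encode it as h.264; audio is copied untouched.
ffmpeg -i camera_h265.mp4 -c:v libx264 -crf 23 -c:a copy camera_h264.mp4
```

Every stream put through a pass like this costs a full decode plus a full encode, which is exactly the CPU, delay, and quality penalty described in the list above.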
What is a Risk Assessment?

A cyber security risk assessment identifies the information assets that could be affected by a cyber attack (such as hardware, systems, laptops, customer data and intellectual property). It then identifies the risks that could affect those assets. Risk estimation and evaluation are usually performed next, followed by the selection of controls to treat the identified risks. It is essential to continually monitor and review the risk environment to detect any changes in the context of the organization, and to maintain an overview of the complete risk management process.
With technology leading the way in how businesses operate, server virtualization is a cutting-edge solution for using and managing your server resources. Curious about what it is and how it can enhance your business? This article dives deep into the world of server virtualization, breaking down everything you need to know about its types and uses.

What is server virtualization?

Server virtualization involves dividing a single physical server into multiple isolated virtual servers, commonly known as virtual machines or VMs. These VMs operate independently, as if they were separate physical servers. By adopting this approach, businesses can optimize their use of resources and minimize costs while gaining more flexibility and scalability for their operations. A hypervisor is the software that enables server virtualization. Placed between the hardware and the operating systems, it allows multiple virtual machines to be created on the same physical server, running different applications and OSs.

Kinds of server virtualization

There are primarily three types of server virtualization:

In full virtualization, the hypervisor communicates directly with the server's CPU and storage, without interference from the host system. Each virtual machine runs its own operating system and has secure access to independent resources. While this method offers strong isolation and security, it can be resource-intensive at times.

Para-virtualization is a method of creating virtual machines where the operating systems on the VMs are altered to achieve maximum efficiency. Unlike full virtualization, para-virtualized environments require each VM to be aware that it operates within a virtualized ecosystem. While this approach results in higher efficiency, it is less versatile when it comes to supporting various operating systems.

OS-level virtualization stands out because it doesn't use a hypervisor at all. Instead, it relies on the operating system to create containers that act as isolated spaces. These containers share the OS kernel but behave like separate systems. It is an efficient and lightweight approach suitable for running the same OS across all containers, but it has limitations.

Uses of server virtualization

Server virtualization offers a remarkable advantage: resource optimization. Compared to traditional setups, where physical servers often sit underutilized, server virtualization allows a single physical server to be divided into multiple virtual servers capable of running their own operating systems and applications. By maximizing the usage of hardware, companies can extract the full potential of their technology investments. As a result, not only do they save immensely on hardware costs, power consumption, and cooling requirements, but they also reduce their carbon footprint.

In modern business, time is ever more valuable. Server virtualization has emerged as a powerful tool for accelerating the deployment of computing resources. By ditching physical hardware and opting for quick-to-create virtual machines instead, organizations gain the agility and flexibility they need to keep up with changing market demands and rapidly launch new services. Scaling operations becomes a breeze too.

A business's digital assets are priceless, and it is vital to keep them protected from unforeseen disasters. Server virtualization can significantly enhance disaster recovery efforts.
A business's digital assets are priceless, and it's vital to keep them protected from any unforeseen disasters. Server virtualization can significantly enhance disaster recovery efforts. Virtual machines make it possible to encapsulate an entire server environment, including the operating system, applications, data, and other vital components, into a single file. Backing up virtual machines in this manner enables quick restoration on compatible hardware after a calamity strikes. Such an approach dramatically reduces recovery time objectives and helps ensure business continuity despite disasters.

Testing and development: for businesses to stay ahead of the curve, innovation is necessary, and server virtualization is an indispensable testing and development tool. It makes it possible to create an isolated environment for testing new applications, or changes to existing ones, without disrupting production. Developers can safely test and develop in a virtualized replica of the production environment without compromising business operations. Virtualization also allows for parallel development and testing environments, which accelerate the development lifecycle while ensuring higher quality and stability in the final product.

How to get started with server virtualization

- Assess Your Infrastructure: Evaluate your current infrastructure and identify the servers and applications suitable for virtualization, weighing performance, compatibility, and business requirements.
- Select a Virtualization Type: After assessing your needs, select the most suitable type of server virtualization among full, para- and OS-level virtualization.
- Choose a Hypervisor: To ensure smooth virtualization, select the right hypervisor for your preferred type. Popular options include Microsoft Hyper-V, VMware, and Citrix.
- Develop a Migration Plan: Create a comprehensive plan for moving physical servers to virtual machines: schedule the migration, allocate appropriate resources, and design a fallback plan in case of unforeseen issues.
- Implement and Test: Initiate the migration while closely monitoring system performance, and conduct a comprehensive evaluation of the new virtual environment's stability and performance.
- Train and Support: Give the IT team proper training to manage the virtualized environment, and establish a reliable support structure for troubleshooting and maintenance.
- Review and Optimize: Consistently observe the virtual environment and implement any changes needed to improve performance and make resource usage more efficient.

Server virtualization is a powerful tool that can help modern businesses achieve optimal resource usage, enhanced scalability, and streamlined operations. It is no surprise that it has become the de facto standard for IT practices across organizations of all sizes. At Cynergy Technology, we specialize in guiding enterprises toward adopting highly efficient and flexible server infrastructures through effective server virtualization strategies.
Even though SSL technology has been in use for over two decades, it still pops up in the news every few months, usually when a security breach occurs and private information is leaked to unauthorized parties. High-profile security incidents often make headlines around the world and have caused misinformed debates over the security of SSL. Understanding how security affects you is key to ensuring that your data remains safe online.

The cryptography behind SSL, however, continues to be very robust and has changed over the years to match advances in computing technology. In most instances, an attacker intercepting a communication and trying to use brute force to decrypt it would simply be wasting their time. As detailed in "The Math behind Estimations to Break a 2048-bit Certificate", it would take a normal computer over 6.4 quadrillion years to break the encryption that secures DigiCert's 2048-bit SSL certificates. Even with (almost certainly impossible) access to millions of the world's most powerful computers, the time it would take to decrypt the data remains impractical.

Recent exploits were not flaws in the encryption; rather, they allowed attackers to bypass the encryption through faults in either the security implementation or the personnel practices of those using the technology.

In May, eBay announced that hackers breached some of their staff accounts. The breach allowed direct access to a database with names, addresses, phone numbers, dates of birth, and encrypted passwords. While eBay encouraged everyone to change their passwords, the personal information that was exposed opens their customers to the possibility of not only phishing scams but also potential identity theft. Luckily, there is no evidence that hackers were able to decrypt the passwords found in the database, and customer financial information is stored on a separate server and was not compromised.

The Heartbleed bug, which has also recently been in the news, is another example of an exploit that bypassed encryption. Heartbleed is a bug in some versions of OpenSSL (used by roughly 17% of SSL web servers to encrypt information) that affected half a million widely trusted websites. The Heartbleed bug allowed attackers to read information directly from the server's memory, compromise secret keys used to encrypt data, and steal various types of content from the server, such as unencrypted usernames and passwords. As soon as the bug was discovered, OpenSSL was patched to remove the vulnerability; however, by the time most affected web servers had updated their OpenSSL software, some high-profile breaches may already have occurred.

If a website or server is compromised and user information is leaked, consumer confidence in the business is undermined: existing customers may cease doing business with the site, and potential customers may avoid signing up for it altogether. Even if only a small number of user accounts are affected, the resulting bad press for a business could mean millions of dollars in lost potential revenue. Additional costs could include hiring outside IT staff to secure the breach on short notice, as well as PR experts to counter the resulting negativity about the company. Further, the breaches could necessitate the purchase of identity protection and fraud-monitoring services. In the end, bad security can be extremely costly for any business.

Both consumers and business owners need to understand that SSL is a proven technology that has secured millions of websites over the years.
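Part of that confidence is just arithmetic: a rough back-of-the-envelope calculation shows why brute force is a dead end. The sketch below assumes the roughly 112 bits of symmetric-equivalent strength that NIST SP 800-57 assigns to RSA-2048, and a deliberately generous guessing rate of one trillion keys per second; both numbers are assumptions chosen only to illustrate the scale:

```python
# Illustrative scale of a brute-force attack on a 2048-bit certificate.
EQUIVALENT_BITS = 112        # NIST's rough symmetric equivalent for RSA-2048
GUESSES_PER_SECOND = 1e12    # an extremely generous assumed attack rate

keyspace = 2 ** EQUIVALENT_BITS
seconds = keyspace / GUESSES_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years to exhaust the keyspace")  # on the order of 1e14 years
```

Even at that fantasy rate, the search would outlast the age of the universe many times over, which is why real attacks target implementations and people instead.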
As with any technology, however, it can be compromised by incorrect implementation or improper security practices. Businesses should prevent breaches before they occur by working with an experienced security vendor (like DigiCert) to make sure they are implementing security best practices and actively monitoring their current security measures.
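One small, concrete form of that monitoring is watching certificate expiry, since an expired certificate breaks the encrypted connection users rely on. A minimal sketch using only Python's standard library follows; the host name is a placeholder:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before a site's TLS certificate expires."""
    ctx = ssl.create_default_context()  # verifies the chain and host name
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

print(days_until_expiry("example.com"))  # placeholder host
```

Run on a schedule, a check like this turns a silent expiry into an early warning.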
Rigetti Charged with Leading Britain’s £10m, Three-Year Project to Build the Quantum Computer in Abingdon, Oxford (Telegraph.uk)

After 40 years of trying to make practical reality live up to quantum theory, Britain is getting its first such machine. And the man tasked with delivering it is Chad Rigetti.

The UK's Telegraph has provided an extensive discussion of the UK's decision to develop a quantum computer and the challenges associated with that effort. The discussion covers both the peril and the promise of the quantum computing effort and is built around Chad Rigetti's leadership role. Rigetti is the founder of Rigetti Computing, which has just been announced as the leader of the £10m, three-year project, co-funded by government and industry, to build the quantum computer in Abingdon, Oxfordshire. "The countries that really have leading edge efforts in making quantum computing practical give themselves the seeds of an innovation hub akin to Silicon Valley," he says. "It's critical to nations that view technology-driven economic development as a core part of their national strategy."

When announcing the Rigetti deal, Science minister Amanda Solloway declared the government's ambition "to be the world's first quantum-ready economy, which could provide UK businesses and industries with billions of pounds worth of opportunities."

The trouble is that controlling qubits is dauntingly hard. It is not just about the flow of electrons, as in silicon transistors, but their spin. And quantum states are fragile, prone to be knocked off true by environmental "noise", delivering error-laden results. Stabilising qubits, and correcting for errors, are two major challenges.

The UK consortium which Rigetti leads shows how far there is to go. Its partner Oxford Instruments will supply the state-of-the-art refrigeration needed; a start-up called Phasecraft will develop the entirely new kind of algorithms required to put qubits to work; and the University of Edinburgh will develop ways to test and verify both this new hardware and software.

Chad Rigetti concedes that, frankly, only seeing will be believing. "Quantum computing thoroughly tests the laws of quantum physics every time you send an instruction to it. To build computers, you have to completely master that underlying physical theory. And that's what's happening. As we do that, that will lead to a demystification. In the next generation or two, quantum computing will feel much more natural and straightforward."
What's to blame for slower growth and rising inequality? Regulatory capture. And who practices regulatory capture? EVERYONE.

"Regulatory capture is a form of government failure which occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating. When regulatory capture occurs, the interests of firms or political groups are prioritized over the interests of the public, leading to a net loss for society. Government agencies suffering regulatory capture are called 'captured agencies'."

This is a "bipartisan blind spot": it affects politicians of every political persuasion. "…[I]t's ultimately the duty of the governors to make sure that the rules are in the public interest, rather than in the narrow interests of the various clamoring claimants who come before them."

The first step to eliminating regulatory capture is to recognize that it happens. One of the reasons the public tolerates regulatory capture is that special interest groups cultivate a positive policy image, which creates a natural blind spot in the public's eyes. An example used by Planet Money is teeth whitening and North Carolina's dental board. The public feels that dentists help people, so when the NC dental board lobbied for a regulation to stop non-dentists from offering simple tooth-whitening services, the dentists initially won. Yet teeth whitening is more like a pedicure for your teeth than a dental procedure, and anyone can perform one; these days, people can buy teeth-whitening kits at the grocery store. It took the Supreme Court to get the regulation removed. While the legal battle dragged on, the regulation stifled competition and blunted natural market forces: teeth-whitening services were artificially expensive (bad for consumers) and non-dentists were put out of business.

Planet Money explains a few more examples, including one where homeowners practice regulatory capture. Planet Money's story is called "Rigging The Economy": https://www.npr.org/templates/transcript/transcript.php?storyId=592376568
Intel has announced the release of its "Tunnel Falls" quantum research chip, a 12-qubit silicon chip that the company is making available to the quantum research community. Intel also said it is collaborating with the Laboratory for Physical Sciences (LPS) at the University of Maryland, College Park's Qubit Collaboratory (LQC), a national-level Quantum Information Sciences (QIS) Research Center, to advance quantum computing research.

Tunnel Falls is fabricated on 300-millimeter wafers in Intel's D1 fabrication facility. The 12-qubit device leverages Intel's industrial transistor fabrication capabilities, such as extreme ultraviolet lithography (EUV) and gate and contact processing techniques, according to the company.

In silicon spin qubits, information (the 0/1) is encoded in the spin (up/down) of a single electron. Each qubit device is essentially a single-electron transistor, which allows Intel to fabricate it using a flow similar to that used in a standard complementary metal oxide semiconductor (CMOS) logic processing line. Intel believes silicon spin qubits are superior to other qubit technologies because of their synergy with leading-edge transistors. Being the size of a transistor, at approximately 50 nanometers square, they are up to 1 million times smaller than other qubit types, potentially allowing for efficient scaling. According to Nature Electronics, "Silicon may be the platform with the greatest potential to deliver scaled-up quantum computing."

At the same time, utilizing advanced CMOS fabrication lines allows Intel to apply process control techniques that improve yield and performance, according to the company. For example, the Tunnel Falls 12-qubit device has a 95 percent yield rate across the wafer and voltage uniformity similar to a CMOS logic process, and each wafer provides more than 24,000 quantum dot devices. These 12-dot chips can form four to 12 qubits that can be isolated and used in operations simultaneously, depending on how the university or lab operates its systems.

Intel said it intends to work continuously to improve the performance of Tunnel Falls and integrate it into its quantum stack with the Intel Quantum Software Development Kit (SDK). In addition, Intel is developing its next-generation quantum chip based on Tunnel Falls, expected to be released next year. The company also plans to partner with additional research institutions globally to build the quantum ecosystem.

"Tunnel Falls is Intel's most advanced silicon spin qubit chip to date and draws upon the company's decades of transistor design and manufacturing expertise," said Jim Clarke, director of Quantum Hardware, Intel. "The release of the new chip is the next step in Intel's long-term strategy to build a full-stack commercial quantum computing system. While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development."

Intel said the chip's availability allows researchers to begin such experiments as learning about the fundamentals of qubits and quantum dots, and developing techniques for working with devices with multiple qubits.

Intel said it is collaborating with LQC as part of the Qubits for Computing Foundry (QCF) program through the U.S. Army Research Office to provide Intel's new quantum chip to research laboratories.
The collaboration is designed to help democratize silicon spin qubits by enabling researchers to gain hands-on experience working with scaled arrays of these qubits, according to Intel. "The initiative aims to strengthen workforce development, open the doors to new quantum research and grow the overall quantum ecosystem," the company said. The first quantum labs to participate in the program include LPS, Sandia National Laboratories, the University of Rochester and the University of Wisconsin-Madison. LQC will work alongside Intel to make Tunnel Falls available to additional universities and research labs. The information gathered from these experiments will be shared with the community to advance quantum research and to help Intel improve qubit performance and scalability.

"Sandia National Laboratories is excited to be a recipient of the Tunnel Falls chip," said Dr. Dwight Luhman, distinguished member of technical staff at Sandia. "The device is a flexible platform enabling quantum researchers at Sandia to directly compare different qubit encodings and develop new qubit operation modes, which was not possible for us previously. This level of sophistication allows us to innovate novel quantum operations and algorithms in the multi-qubit regime and accelerate our learning rate in silicon-based quantum systems. The anticipated reliability of Tunnel Falls will also allow Sandia to rapidly onboard and train new staff working in silicon qubit technologies."

Mark A. Eriksson, department chair and John Bardeen Professor of Physics, Department of Physics, University of Wisconsin-Madison, said, "UW-Madison researchers, with two decades of investment in the development of silicon qubits, are very excited to partner in the launch of the LQC. The opportunity for students to work with industrial devices, which benefit from Intel's microelectronics expertise and infrastructure, opens important opportunities both for technical advances and for education and workforce development."
In this video from PASC18, Constantia Alexandrou from the University of Cyprus discusses her domain of expertise: quantum chromodynamics. "Many – if not most – fields in physics employ high performance computing, yet quantum chromodynamics (QCD) might be the premiere example of an area very difficult to understand outside of the field."

"Chromodynamics helps us understand our universe. The strong interaction is one of the four forces describing complex phenomena in the evolution of the universe, from the quark-gluon plasma formed just after the Big Bang at the birth of the cosmos to the formation of neutron stars. The bulk of visible matter in the universe is due to the strong interaction, and understanding its properties requires the solution of quantum chromodynamics (QCD)."

As nuclear physicists delve ever deeper into the heart of matter, they require the tools to reveal the next layer of nature's secrets. Nowhere is that more true than in computational nuclear physics. A new research effort led by theorists at DOE's Jefferson Lab is now preparing for the next big leap forward in their studies thanks to funding under the 2017 SciDAC Awards for Computational Nuclear Physics.

"Many optimizations can be performed on a QCD-based application, which can also take advantage of the Intel Xeon Phi coprocessor. With pre-fetching, SMT threading and other optimizations, as well as the Intel Xeon Phi coprocessor, the performance gains were quite significant. In an initial test using single precision on the base Sandy Bridge systems, the test case showed about 128 Gflops. However, using the Intel Xeon Phi coprocessor, the performance jumped to over 320 Gflops."
By: Megan Rosser

The time of year is upon us when individuals and families alike come together to clean every nook and cranny of their homes and go through their belongings with a fine-tooth comb – otherwise known as spring cleaning.

Spring cleaning is a phenomenon that has been passed down through the generations, although its origin is unclear. There are theories circulating on the internet that attribute it to religion and culture, or simply to human nature. For instance, according to the Washington Post, "Because homes used to be lit with whale oil or kerosene and heated with wood or coal, the winter months left a layer of soot and grime in every room. With the arrival of spring, women would throw open windows and doors, and take rugs and bedding outside and beat dust out of them and start scrubbing floors and windows until sparkling."

Or, according to HowStuffWorks, "Ultimately, spring cleaning may have more to do with simple biology. During winter, we're exposed to less sunlight due to shorter, often dreary days. With a lack of exposure to light, the pineal gland produces melatonin – a hormone that produces sleepiness in humans. Conversely, when we're exposed to sunlight, our bodies produce much less melatonin. It's possible that we spring clean simply because we wake up from a winter long melatonin-induced stupor and find more energy as the days grow longer when spring arrives."

Whichever theory you believe to be true, spring cleaning can apply to more than just your home – it can be applied to your devices as well. After all, over time we accumulate an overwhelming number of emails and files. So, to save the time it would take to read through all of your old emails and look through all of your saved files, iWorks has a few recommendations to help you avoid clutter in the future.

To combat a disorganized inbox, we recommend the following:
- Create folders or labels to manage incoming emails or archive old emails, such as a To-Do folder, Follow-Up folder, or Future folder. For top-notch efficiency, use rules or filters so you do not have to manually move each email to its folder destination – it will happen automatically.
- Create categories or labels to color-code emails to rank email priority, distinguish meeting invites, or highlight specific senders. To take it one step further, use flags or stars to set an extra reminder for high-priority items.

To combat a disorganized file manager, we recommend the following (a small automation sketch follows this list):
- Create a folder for all employment-related materials, such as your job application and/or resume, offer letter, signed acknowledgments of company policies (e.g., employee handbook, employee benefits, code of conduct and ethics, etc.), and certificates of completion for required trainings.
- Create a folder for all project-specific materials. Depending on the scope of the project, you may want to create sub-folders if there is more than one initiative that will occur over the course of the contract. It may also be beneficial to create a sub-folder for important paperwork related to your onboarding (e.g., non-disclosure agreement).
- Create a folder for all voluntary activities, such as mentoring an intern. This enables you to store documentation for future use, so you will not have to start from scratch (e.g., intern welcome guide, project overview slide decks, list of assigned tasks, etc.).
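Folder schemes like these can even be partially automated. The short Python sketch below files documents into sub-folders based on keywords in their names; the folder names and keyword mapping are purely illustrative, so adjust them to match your own scheme:

```python
from pathlib import Path
import shutil

# Hypothetical mapping from filename keywords to destination folders.
RULES = {
    "resume": "Employment",
    "offer": "Employment",
    "handbook": "Employment",
    "intern": "Voluntary Activities",
}

inbox = Path.home() / "Downloads"  # the folder to tidy up
for item in inbox.iterdir():
    if not item.is_file():
        continue
    for keyword, folder in RULES.items():
        if keyword in item.name.lower():
            dest = inbox / folder
            dest.mkdir(exist_ok=True)  # create the folder on first use
            shutil.move(str(item), str(dest / item.name))
            break
```

Like an email rule, it runs the same triage every time so you don't have to.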
Taking the time now to properly organize your devices will save you the stress and energy of tracking down a communication buried by the daily influx of emails, or of hunting for a file saved to an unknown location. We encourage you to test out the options listed above and find what works best for you – or do your own research and explore what else is out there.

Clark, Josh. "Why Do We Traditionally Clean Our Homes At the Beginning of Spring?" HowStuffWorks, HowStuffWorks, 27 Feb. 2009, home.howstuffworks.com/home-improvement/household-hints-tips/cleaning-organizing/spring-clean-in-spring.htm.

K., J. "Spring Cleaning Is Based on Practices from Generations Ago." The Washington Post, WP Company, 25 Mar. 2010, www.washingtonpost.com/wp-dyn/content/article/2010/03/23/AR2010032303492.html.
Researchers are using HPC resources at the Ohio Supercomputer Center to better understand current weather patterns and potential climate change in the future. In a surprise revelation, their data analysis uncovered hundreds of small cyclones that had previously escaped detection.

How could anyone miss a storm as big as a cyclone? You might think they are easy to detect, but as it turns out, many of the cyclones that were missed were small in size and short in duration, or occurred in unpopulated areas. Yet researchers need to know about all the storms that have occurred if they are to get a complete picture of storm trends in the region.

"We can't yet tell if the number of cyclones is increasing or decreasing, because that would take a multidecade view," said David Bromwich. "We do know that, since 2000, there have been a lot of rapid changes in the Arctic—Greenland ice melting, tundra thawing—so we can say that we're capturing a good view of what's happening in the Arctic during the current time of rapid changes."

Read the Full Story.
Improper Restriction of Operations within the Bounds of a Memory Buffer

A remote code execution vulnerability exists in the way the scripting engine in Microsoft Edge handles objects in memory. The vulnerability could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user, aka "Scripting Engine Memory Corruption Vulnerability." This CVE ID is unique from CVE-2017-0201.

CWE-119 - Buffer Overflow

Buffer overflow attacks involve writing or reading data beyond the bounds of a fixed-size memory buffer, thereby corrupting or overwriting data in adjacent memory locations. Such an overflow allows the attacker to run arbitrary code or manipulate the existing code to cause privilege escalation, data breaches, denial of service, system crashes and even complete system compromise. Because languages such as C and C++ provide no default safeguards against overwriting or accessing out-of-bounds memory, applications written in these languages are the most susceptible to buffer overflow attacks.
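To make the mechanism concrete, the sketch below uses Python's ctypes module to reproduce the unchecked-copy pattern at the heart of CWE-119. ctypes.memmove, like C's memcpy, performs no bounds checking, so the copy silently writes past the end of the buffer; running it may corrupt the interpreter's heap or crash it, and it is shown purely as an illustration:

```python
import ctypes

# An 8-byte buffer, analogous to `char buf[8]` in C.
buf = ctypes.create_string_buffer(8)

payload = b"A" * 64  # far larger than the destination buffer

# No bounds check: the extra 56 bytes land in adjacent memory,
# which is exactly the corruption CWE-119 describes.
ctypes.memmove(buf, payload, len(payload))
```

Memory-safe languages refuse this operation outright, which is why the weakness clusters in C and C++ code.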
Absolute Date Input Formats

Dates are specified as quoted character strings. A date can be entered by itself or together with a time value. For more information about date and time display, see Date and Time Display Formats in this chapter.

The legal formats for absolute date values are determined by the II_DATE_FORMAT setting. If II_DATE_FORMAT is not set, the US formats are the default input formats. II_DATE_FORMAT can be set on a session basis. For information on setting II_DATE_FORMAT, see the System Administrator Guide.

Year defaults to the current year. In formats that include delimiters (such as forward slashes or dashes), specify the last two digits of the year. The first two digits default to the current century (2000). For example, if you enter the date '3/21/00' using the format mm/dd/yyyy, OpenSQL assumes that you are referring to March 21, 2000.

In three-character month formats, for example, dd-mmm-yy, OpenSQL requires three-letter abbreviations (for example, mar, apr, may).

To specify the current system date, use the constant today. To specify the current system time, use the constant now.

Absolute Time Input Formats

The legal format for entering an absolute time is 'hh:mm[:ss] [am|pm] [timezone]'. Input formats for absolute times are assumed to be on a 24-hour clock. If a time is entered with an am or pm designation, OpenSQL automatically converts the time to a 24-hour internal representation.

If timezone is omitted, OpenSQL assumes the local time zone designation. Times are displayed using the time zone adjustment specified by II_TIMEZONE_NAME. For details about time zone settings and valid time zones, see the Getting Started guide. If an absolute time is entered without a date, OpenSQL assumes the current system date.

Combined Date and Time Input

Any valid absolute date input format can be paired with a valid absolute time input format to form a valid date and time entry. For example, using the US formats, '11/15/06 10:30 am' and '15-nov-06 22:30' are both valid date and time entries.

Date and Time Display Formats

OpenSQL outputs date values as strings of 25 characters with trailing blanks inserted. To specify the output format of an absolute date and time, II_DATE_FORMAT must be set. For a list of II_DATE_FORMAT settings and associated formats, see Absolute Date Input Formats.

The display format for absolute time is hh:mm:ss. OpenSQL displays 24-hour times for the current time zone, which is determined when OpenSQL is installed. Dates are stored in Greenwich Mean Time (GMT) and adjusted for your time zone when they are displayed. If seconds are omitted when entering a time, OpenSQL displays zeros in the seconds' place.
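A few illustrative statements may help; they assume II_DATE_FORMAT is set to US, and the table and column names are invented for the example:

```sql
CREATE TABLE meetings (starts_at DATE);

INSERT INTO meetings (starts_at) VALUES ('11/15/06 10:30 am');  -- date plus 12-hour time
INSERT INTO meetings (starts_at) VALUES ('15-nov-06 22:30');    -- dd-mmm-yy plus 24-hour time

SELECT * FROM meetings WHERE starts_at >= date('today');        -- constant for the current date
```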
Matlab - MATLAB programming and data analysis: AI-powered MATLAB Solutions

Introduction to MATLAB

MATLAB, short for MATrix LABoratory, is a high-level programming language and interactive environment used for numerical computation, visualization, and programming. It is designed to facilitate easy manipulation of matrix and array mathematics. The primary purpose of MATLAB is to enable engineers and scientists to analyze data, develop algorithms, and create models and applications. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. For example, a user can solve a system of linear equations, plot the results, and then create a function to automate the process for future use. Here's a simple example to illustrate the power of MATLAB:

Scenario: Solving and Plotting Linear Equations

```matlab
% Define the coefficients of the equations
A = [2, -1; -3, 4];
B = [1; 7];

% Solve for the variables
X = A\B;

% Plot the result
figure;
plot(X, '-o');
title('Solution of Linear Equations');
xlabel('Variable Index');
ylabel('Value');
```

This script defines a system of linear equations, solves it, and plots the results. This simplicity and integration of various tasks is a hallmark of MATLAB.

Main Functions of MATLAB

- Numerical computation: performing complex mathematical calculations, such as solving differential equations or matrix operations. Engineers can use MATLAB to solve a complex differential equation to model the dynamics of a mechanical system - for instance, a second-order differential equation that describes the motion of a damped harmonic oscillator (sketched below).
- Data analysis and visualization: creating plots, graphs, and charts to visualize data trends and patterns. A data scientist can use MATLAB to process large datasets, perform statistical analysis, and generate visual representations of the data - for example, plotting a 3D surface to visualize the relationship between three variables in an experimental study.
- Algorithm development: creating, testing, and implementing algorithms for various applications. A financial analyst can develop an algorithm to predict stock prices based on historical data and various market indicators. MATLAB allows them to prototype the algorithm, backtest it with historical data, and refine it for better accuracy.
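As a companion to the linear-equations script above, the damped harmonic oscillator mentioned under numerical computation might look like the following sketch; the frequency and damping values are arbitrary choices for illustration:

```matlab
% Damped harmonic oscillator: x'' + 2*zeta*wn*x' + wn^2*x = 0
wn = 2*pi;     % natural frequency (rad/s), illustrative value
zeta = 0.1;    % damping ratio, illustrative value

% Rewrite as a first-order system y = [x; x'] and integrate with ode45
f = @(t, y) [y(2); -2*zeta*wn*y(2) - wn^2*y(1)];
[t, y] = ode45(f, [0 10], [1; 0]);   % 10 s, starting at x=1 with zero velocity

figure;
plot(t, y(:,1));
title('Damped Harmonic Oscillator');
xlabel('t (s)'); ylabel('x(t)');
```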
Ideal Users of MATLAB

Engineers and Scientists: MATLAB is extensively used by engineers and scientists across various disciplines, including mechanical, electrical, civil, and aerospace engineering. It provides the tools needed for data analysis, algorithm development, and modeling, making it ideal for users who need to perform complex calculations and visualize their results. These users benefit from MATLAB's ability to handle large datasets and its extensive library of built-in functions for mathematical and engineering applications.

Academics and Researchers: MATLAB is a powerful tool for academics and researchers who require a flexible and efficient environment for conducting experiments, analyzing data, and developing new algorithms. It is particularly useful in fields such as physics, chemistry, biology, and economics. These users benefit from MATLAB's extensive documentation, user community, and the ability to share and collaborate on code and data easily.

How to Use MATLAB

- Visit aichatonline.org for a free trial without login, no need for ChatGPT Plus.
- Download and install the MATLAB software from the official MathWorks website.
- Familiarize yourself with the MATLAB interface, including the Command Window, Workspace, and Editor.
- Start with the basic tutorials and documentation provided by MathWorks to understand MATLAB syntax and basic functions.
- Experiment with common use cases such as data analysis, visualization, and algorithm development to optimize your experience.

Detailed Q&A about MATLAB

What is MATLAB primarily used for? MATLAB is a high-level programming language and environment used for numerical computing, data analysis, algorithm development, and visualization.

How can I learn MATLAB effectively? Start with the official documentation and tutorials, practice with examples, and participate in MATLAB communities for additional support and insights.

Can MATLAB be used for machine learning? Yes, MATLAB provides a range of tools and functions specifically designed for machine learning, including the Statistics and Machine Learning Toolbox.

What are the system requirements for MATLAB? MATLAB requires a 64-bit processor, a minimum of 4 GB of RAM, and a few gigabytes of disk space for installation. Specific requirements may vary based on the toolboxes and features you use.

How do I debug MATLAB code? Use the debugging tools in the MATLAB Editor, such as setting breakpoints, stepping through code, and inspecting variables to identify and fix errors in your scripts.
A Man-in-the-Middle (MITM) attack intercepts and alters communication with the aim of gathering sensitive data. Attackers deceive both parties using techniques like ARP spoofing, and pose particular risks on public Wi-Fi and compromised websites. To prevent MITM attacks, use secure communication channels, keep software updated, and employ caution on public Wi-Fi networks.

A man-in-the-middle (MITM) attack is a malicious technique where an unauthorized third party intercepts and potentially alters communication between two parties who believe they are communicating directly. In this attack, the attacker positions themselves as a seemingly legitimate intermediary between the sender and receiver.

The primary goal of a man-in-the-middle attack is to eavesdrop on the communication, collect sensitive information, or manipulate the data being transmitted. This can happen in different situations, such as when two people are sharing sensitive data on a public Wi-Fi network, or when a person unintentionally visits a hacked website. The stolen data can include personal details, login information, financial data, or any other confidential information being sent between the two parties. The perpetrator can then exploit this data for malicious ends, such as identity theft, financial fraud, or unauthorized access to accounts.

To carry out a man-in-the-middle attack, the perpetrator often uses a range of methods, including ARP spoofing, DNS spoofing, and session hijacking. These techniques enable the attacker to trick both parties into thinking they are communicating with each other directly, when in reality all communication is being redirected through the attacker's system.

To safeguard against man-in-the-middle attacks, it is crucial to use secure communication channels, such as HTTPS (ideally with HSTS enforced), and to exercise caution when connecting to public Wi-Fi networks. To reduce the risk of falling victim to such attacks, it is essential to keep software and devices updated with the latest security patches. Understanding the risks associated with this type of attack and implementing appropriate security measures is vital for safeguarding sensitive information and protecting against potential breaches.

The impact and consequences of such attacks are extensive and can have severe repercussions for individuals, organizations, and even entire nations. The consequences can be felt on various levels, including economic, social, and political.

Economically, these attacks can result in significant financial losses for businesses and individuals. Cybercriminals often target financial institutions, stealing sensitive information like credit card details and banking credentials. This can lead to fraudulent transactions, causing financial devastation for the victims. Furthermore, the costs associated with resolving the attack and enhancing cybersecurity measures can be substantial.

Socially, cyberattacks can erode trust and confidence in digital platforms. As technology becomes increasingly integral to our lives, from communication and commerce to social interactions, the threat of cyberattacks looms larger. This can lead to a pervasive sense of vulnerability and unease among individuals, affecting their willingness to engage in online activities and share personal information.
Politically, cyberattacks can greatly impact a country's security. If a government-backed attack is aimed at crucial infrastructure, official networks, or confidential data, it can interrupt vital services and put sensitive information at risk. Such attacks can weaken political stability and independence, resulting in strained diplomatic ties and international conflicts.

It is essential to comprehend the mechanics of a man-in-the-middle attack to safeguard your personal information. Attackers use a range of tactics, but certain prevalent methods are often applied to breach the security of systems and networks, and these methods keep evolving as attackers become more advanced and innovative.

The first step in a man-in-the-middle attack involves the attacker positioning themselves between two targeted parties. This can be achieved by exploiting vulnerabilities in the network, the application, or even APIs. Once the attacker has successfully positioned themselves in the middle, they can start intercepting the communication.

The attacker's goal is to go undetected while intercepting and manipulating the data exchanged between the two parties. This can be achieved through techniques like ARP spoofing, DNS spoofing, or session hijacking. These techniques let the attacker redirect communication through their system, allowing them to view and change the data as it passes.

Once the attacker has access to the communication stream, they can manipulate the data in real time. This could involve modifying message content, inserting malicious code or links, or even impersonating one of the parties involved. An attacker may also choose simply to eavesdrop on a conversation, gathering sensitive information like passwords, credit card numbers, or other confidential data.

To the unsuspecting parties involved, everything may seem normal, as the attacker carefully relays the intercepted messages without arousing suspicion. This can make it extremely difficult to detect a man-in-the-middle attack, especially if the attacker is skilled and takes steps to cover their tracks.

To defend against man-in-the-middle attacks, there are several steps one must take. The most important is to make sure that the networks you are using are safe and reliable. Avoid connecting to public Wi-Fi networks or any other networks that are not properly secured, as these are often targeted by attackers. To ensure secure and private online communication, opt for an encrypted connection such as a Virtual Private Network (VPN); VPNs add an extra layer of security by encrypting your internet traffic. Additionally, employing encryption protocols like HTTPS for web browsing helps protect against man-in-the-middle attacks, since these protocols encrypt transmitted data and make it challenging for attackers to intercept and manipulate sensitive information. Another way to safeguard against known vulnerabilities that attackers could exploit is to regularly update your software and devices with the latest security patches; software developers often release updates specifically to address these vulnerabilities, and keeping your devices and applications current greatly reduces the chances of becoming a victim. Finally, be vigilant and cautious when sharing sensitive information online, as doing so reduces the likelihood of falling victim to a man-in-the-middle attack.
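Some of that vigilance can be automated. The sketch below uses the scapy packet library to watch for ARP spoofing, the first technique named above: it remembers which MAC address last claimed each IP and flags any change. It assumes scapy is installed and that the script runs with the root privileges packet capture requires:

```python
from scapy.all import ARP, sniff  # pip install scapy; needs root to sniff

ip_to_mac = {}  # last MAC address seen claiming each IP

def watch(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in ip_to_mac and ip_to_mac[ip] != mac:
            print(f"[!] {ip} moved from {ip_to_mac[ip]} to {mac}: possible ARP spoofing")
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=watch, store=False)
```

A sudden MAC change is not proof of an attack (DHCP churn can cause it too), but it is exactly the signal an ARP-spoofing MITM leaves behind.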
Exercising caution when clicking on links or downloading files is also essential. Phishing emails and malicious websites are common tools used by attackers to launch MITM attacks. Always double-check the source of any links or attachments before clicking on them, and remain wary of any suspicious or unexpected requests for personal information.

Current application security tools are not enough to fully protect your APIs. While web application firewalls and API gateways offer some protection, they are limited to only the APIs they are aware of. This means that any APIs not routed through these tools are left vulnerable and undetected. It is therefore advisable to invest in a specialized API security solution to effectively prevent and block potential attacks.

Finally, it is crucial to establish strong, distinct passwords for every online account. Avoid commonly used passwords, and use a password manager to create and store intricate passwords for each account. Enabling two-factor authentication provides an extra level of protection by requiring a secondary form of verification, such as a code sent to a mobile device, before granting access to accounts.

Recognizing the common signs of a MiTM attack is just as important as grasping the definition of one. Common signs include unexpected logout prompts, altered URLs leading to phishing sites, and suspicious certificate errors indicating potential tampering. In these attacks, a black hat hacker intercepts and manipulates communication between two parties, exploiting vulnerabilities to eavesdrop on or modify data exchanges. Users should remain vigilant for these signs to detect and mitigate potential MiTM attacks, protect sensitive information, and uphold the integrity of their online interactions. Regular security awareness and updated anti-MiTM measures are crucial for thwarting these malicious activities.

Tools effective in detecting a MiTM attack include Wireshark, which analyzes network traffic for irregularities, and SSL/TLS scanners like SSL Labs, which identify vulnerabilities in cryptographic protocols. Additionally, a grey hat hacker can help alert organizations to potential security threats before a malicious actor takes action.

Noname Security's API security platform extends protection beyond APIs, incorporating advanced features for detecting and preventing a man-in-the-middle attack. Request a demo to explore how Noname's API security testing tools actively identify and address vulnerabilities, ensuring a robust defense against potential MitM threats in your digital ecosystem. Integrating such comprehensive solutions enhances overall cybersecurity, safeguarding against evolving attack vectors and maintaining the integrity of your communication channels.

Encryption serves as a crucial defense against MitM attacks by rendering intercepted data unreadable to unauthorized entities. Through complex algorithms, encryption transforms data into ciphertext, making it virtually impossible for attackers to decipher without the corresponding decryption key. In the context of MitM attacks, even if adversaries intercept the communication, they cannot make sense of the encrypted information without the proper credentials. This ensures the confidentiality and integrity of the exchanged data, forming a robust barrier against potential manipulation or eavesdropping.
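The point is easy to demonstrate. In the sketch below, using the third-party cryptography package, the token printed first is all an eavesdropper would ever capture; without the key, it is just opaque bytes:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # shared secret the attacker never sees
f = Fernet(key)

token = f.encrypt(b"transfer $500 to account 1234")
print(token)             # what an interceptor actually captures: ciphertext
print(f.decrypt(token))  # only a key holder recovers the plaintext
```

Fernet also authenticates each token, so an intermediary who tampers with the ciphertext causes decryption to fail loudly instead of silently altering the message.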
Regular security testing ensures encryption’s effectiveness, maintaining a solid defense against evolving MitM threats. Your business may be at risk of a man-in-the-middle (MitM) attack if it operates on unsecured networks, lacks robust encryption measures, or neglects network monitoring. Unsecured networks provide opportunities for hackers to intercept and manipulate data exchanges, and the absence of encryption exposes sensitive information to potential eavesdropping. Insufficient network monitoring makes it challenging to detect suspicious activities indicative of a MitM attack. Identifying and mitigating these risks is essential for your business to fortify its defenses, ensure a secure digital environment, and safeguard against potential threats to data integrity and confidentiality. Experience the speed, scale, and security that only Noname can provide. You’ll never look at APIs the same way again.
The open source software movement played a huge role in the growth of the internet. Advancements in hardware went hand in hand with advancements in software, and it was open source that bridged the gap between the two worlds. Today, after a period of growing increasingly apart, the worlds of software and hardware are starting to pull closer together. There's renewed interest in co-designing hardware and software, driven by the slowdown in periodic CPU performance gains but also by the emergence of new technologies, and open source once again plays an important role, on both the software and hardware sides.

"I think it's very interesting to see what's happening in the hardware community, in a sense, mirroring what happened in the early days of open source software," says Jon Masters, who now works on custom compute at Google after spending many years working on computer architecture at Red Hat.

In Episode 5 of Traceroute, we look deeper at the history of open source and free software, its role in the internet's development, and the link between software and hardware innovation.

Open source software can trace its origins back to the 1980s and the Free Software Foundation. In the 1990s, Linus Torvalds released Linux. It progressed and spread globally, awakening the idea that high-quality, fully functioning systems could be free. It is Linux that runs on most of the computers that comprise the internet today.

"Software was a way to express ideas the same way writing down a mathematical formula was a way to express an idea, and there were people from many walks of life who wanted to express their ideas symbolically," says Brian Fox, a long-time open source advocate and technologist, who was employee number one at the Free Software Foundation. "The Free Software Foundation is the grandfather of what people like to call 'open source' and what we like to call 'free software,'" he says. "We mean free as in speech, not as in beer."

Everyone wanted to share code in the early days of the internet. But as companies grew, they increasingly felt the need to be proprietary, which could hurt growth and innovation, Fox warns. "The meaning in software is what people put into it and what we're able to get out of it," he says. "I want to be able to consume your … profound thoughts on the way the world works. And some of that is delivered to me in software."

Listen and subscribe to Traceroute, a podcast that tells the backstory of the internet and digital infrastructure at large, as told by those who built it and those who documented its history: Apple, Spotify, RSS
If true, it's one of the most shocking things you will ever read about the internet. In a nutshell, documents obtained by The Guardian have revealed that the NSA works with technology companies to "covertly influence" product designs, helping them to collect "vast amounts" of encrypted data.

According to the documents provided by whistleblower Edward Snowden, encryption tools that the NSA has had some success in hacking include:

- VPNs (Virtual Private Networks), commonly used by businesses to allow workers to access their office networks remotely.
- Encrypted chat programs which provide end-to-end encryption. Such systems should not allow data to be decrypted at any point during the message transfer.
- HTTPS – the long-standing method of encrypting data (such as passwords and financial information) between a web browser and a secure website, such as your online bank, your webmail or social networking site. If you've ever visited a website whose URL begins https:// and displays a small padlock, you're using HTTPS to secure your communications.
- TLS/SSL (Transport Layer Security / Secure Sockets Layer), used by HTTPS.
- Encrypted VoIP (Internet telephony) services. Skype and Apple FaceTime are examples of free services that offer encrypted phone and video calls. The leaked documents suggest that the NSA is working with some VoIP services to obtain access to messages before they are encrypted.

The leaked documents reveal the NSA was covertly working with technology firms not only to weaken encryption but also to introduce exploitable backdoors that could be used for surveillance.

Perhaps most shockingly of all, the NSA appears to have been promoting deliberately weakened or vulnerable cryptography, and influencing standards. The Guardian specifically highlights the National Institute of Standards and Technology (NIST) for a standard it published in 2006. Among the goals described in the leaked documents:

- Insert vulnerabilities into commercial encryption systems, IT systems, network, and endpoint communications devices used by targets.
- Influence policies, standards and specifications for commercial public key technologies.

The story is very important, and many will find it highly disturbing. If any weakness is inserted into the technologies used to protect the privacy of many millions of internet users, it could be exploited not just by law enforcement and surveillance agencies in the USA or United Kingdom, but also by foreign nations and online criminals. A backdoor in a security system fundamentally compromises the security of all users.

We're used to hearing stories of abusive surveillance by the likes of China, Russia, and others… but it appears that the United States and UK have betrayed all users of the internet as well, by undermining technologies that are trusted billions of times every day online.

NSA and GCHQ unlock privacy and security on the internet – The Guardian.
NSA Able to Foil Basic Safeguards of Privacy on Web – New York Times.
Revealed: The NSA's Secret Campaign to Crack, Undermine Internet Security – ProPublica.
How to remain secure against NSA surveillance – Bruce Schneier, The Guardian.
As Birbal had once pointed out to Akbar, "There is only one pretty child in this world and every mother has it." Never has there been a stronger urge to prove this than in the digital age. Parents are making full use of their social media platforms to keep friends and family updated with the latest happenings in their precious ones' lives. However, are they compromising their children's privacy and security to satisfy their pride?

The Age of Consent survey commissioned by McAfee brings to light some interesting facts about parental habits of sharing their children's photos online in India.

- Parents are aware of the risks of posting images of their children on social media, but the majority are doing it anyway, often without their children's consent
- 76% of parents say they have considered that the images of their children they post online could end up in the wrong hands
- 61.6% of parents believe they have the right to share an image of their child on social media without their consent

Parents Ignoring the Risks?

The McAfee survey reveals that parents are not giving enough consideration to what they post online and how it could affect their children. There are two kinds of risks involved:

- Physical risks: pictures can be misused to create fake identities, groom victims, or be morphed and used inappropriately by paedophiles
- Emotional risks: photos may cause children worry and anxiety if they fear the images could be used to shame or cyberbully them

While parents are aware of and concerned about the physical risks associated with posting pictures online, they are less concerned about the emotional risks. The survey reveals that moms weigh the risk of embarrassment more than dads, with 45% of dads assuming their children will get over any embarrassing content compared to just 14% of moms. But it is important to consider the emotional effects on kids, as they will tend to shape a child's character and future.

How do Men and Women Compare When It Comes to Sharing Pictures?

Most men and women post photos of their children only on private social media accounts, indicating they are aware of the risks. While identity theft worries men more, women are more worried about image morphing. In addition, women are more restrained than men about sharing pictures of kids under 2 without clothes on social media. But, unfortunately, neither is much concerned about paedophilia. This needs to change! You have to put the 'stranger-danger' policy into action well before you start teaching it to your kids.

Mumbai Parents Lead in Sharing

Mumbai parents, ahoy! You are tech-savvy no doubt, and you leave Delhi and Bengaluru way behind when it comes to sharing kids' photos online. Though metro-city parents are aware that children may be embarrassed by some of the photos posted, and consider photoshopping or morphing of pics a major potential risk, they still go ahead and share. Whoa! Go easy and THINK well before hitting the 'SHARE' button, parents; it's your children we are talking about.
Some Salient India-Specific Findings from the Survey
- 5% of parents post an image of their child on social media at least once a day
- 79% share images on public social media accounts
- 39% of parents don’t consult with their children before posting images of them on social media
- 98% of parents have considered that the images they post of their child on social media may be embarrassing, or something the child wouldn’t want posted, but do it anyway
- 6% of parents have shared, or would share, an image of their child in their school uniform on social media
- 6% of parents believe they have the right to share an image of their child on social media without their consent
- The average age at which parents believe they should begin asking their child for consent to post a photo of them on social media is 10; interestingly, the age of responsibility in India is 7

Tips for Safe Sharing of Children’s Pics
- THINK. POST: Always think twice before uploading pictures of your child. Could a picture prove risky or embarrassing for the child later in life? If yes, or if in doubt, postpone sharing.
- Disable geo-tagging: Many social networks tag a user’s location when a photo is uploaded. Parents should ensure this feature is turned off to avoid disclosing their location. This is especially important when posting photos away from home.
- Maximise privacy settings on social media: Parents should share photos and other social media posts only with their intended audience.
- Let family and friends know your views on posting images and tagging: This will help prevent future embarrassments. Return the favour.
- Use an identity theft protection service: The amount of personal data shared online, and the rise in data breaches, together escalate the possibility of identity theft. It’s recommended that you use an identity theft protection solution like McAfee Identity Theft Protection to proactively protect your identity and keep your personal data safe from misuse.

Parents, put aside your pride in your child and review the future implications of posting their pictures online. As parents, it’s your responsibility to understand the effects of your social media actions on your child. A few general photographs shared privately may be OK, but it is advisable NOT to turn your social media page into your child’s digital record page. Let your child start their digital journey with a clean slate.
What Is a Honeypot in Cybersecurity?
In the cybersecurity field, a ‘honeypot’ is a trap set to detect, deflect or study attempts at unauthorized use of information systems. Honeypots are designed to mimic likely targets of cyberattacks in order to detect, log, and warn about attack attempts.

Honeypots can be a powerful tool for understanding the tactics employed by cybercriminals. They are essentially decoy servers or systems set up to attract potential hackers. By closely monitoring the activities on the honeypot, it is possible to gain insights into the level of threat and the types of attacks being executed. More importantly, it allows these attacks to be studied without the risk of an actual target being compromised.

Honeypots can vary from a simple system that logs basic information, such as the attacker’s IP address and the time of the intrusion, to more complex systems. Sophisticated honeypots can mimic the behavior of an actual network and contain several different applications and services. When properly configured, a honeypot can collect a vast amount of information about the attacker, the tools used, and the methods of intrusion. This information can then be used to strengthen the security of the actual system or network. Honeypots are one of the key techniques used by Aqua’s security research team, Nautilus.

How Do Honeypots Work?
At their core, honeypots are decoy systems designed to look and behave like real targets. They might mimic a company’s internal network, a database containing sensitive information, or even an individual’s personal computer. The goal is to make them as enticing as possible to hackers, luring them away from actual targets.

To achieve this, honeypots often contain fake but seemingly valuable data, known as ‘honeytokens.’ These could be anything from fabricated credit card numbers to fictitious employee records. When a hacker interacts with these honeytokens, it triggers an alert, notifying security personnel of the intrusion.

Monitoring and Alerting
One of the primary functions of a honeypot is to monitor malicious activity. By providing a controlled environment for hackers to operate in, honeypots give cybersecurity professionals unprecedented insight into their tactics. This includes everything from the tools and scripts they use to the vulnerabilities they exploit.

Furthermore, honeypots are designed to alert security teams when they detect any suspicious activity. These alerts can take various forms, from email notifications to dashboard warnings and even automated responses. The key is to provide real-time information, allowing for prompt and effective action.

High Interaction vs. Low Interaction
Honeypots can be broadly categorized into two types: high interaction and low interaction. High interaction honeypots are complex systems that closely mimic real-world environments. They allow intruders to interact with a variety of services and applications, giving security professionals a detailed look at their actions.

On the other hand, low interaction honeypots are simpler and less resource-intensive. They typically offer a limited range of services to interact with, focusing more on detecting and logging malicious activity. While they might not provide the same level of detail as their high interaction counterparts, they are easier to manage and can be just as effective in detecting threats.
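To make the low-interaction category concrete, here is a minimal, illustrative sketch of such a honeypot in Python. This is not Aqua’s implementation and it is not production-ready: the port, banner, log file name, and print-based alert are all assumptions chosen for the example, and a real deployment would add network isolation, rate limiting, and integration with a proper alerting pipeline.

```python
from datetime import datetime, timezone
import socket

# Assumption: we pose as an SSH service on port 2222. No real SSH daemon
# runs here, so every connection is suspicious by definition.
LISTEN_PORT = 2222
LOG_FILE = "honeypot.log"                 # assumed log destination
FAKE_BANNER = b"SSH-2.0-OpenSSH_8.9\r\n"  # decoy banner so the service looks plausible

def log_event(message: str) -> None:
    """Append a timestamped event to the log and emit a simple alert."""
    line = f"{datetime.now(timezone.utc).isoformat()} {message}"
    with open(LOG_FILE, "a") as fh:
        fh.write(line + "\n")
    print(f"[ALERT] {line}")              # stand-in for an email or dashboard alert

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        print(f"Low-interaction honeypot listening on port {LISTEN_PORT}")
        while True:
            conn, (ip, port) = server.accept()
            with conn:
                conn.sendall(FAKE_BANNER)          # present the decoy service
                conn.settimeout(5)
                try:
                    first_bytes = conn.recv(1024)  # capture the attacker's opening move
                except socket.timeout:
                    first_bytes = b""
                log_event(f"connection from {ip}:{port}, sent {first_bytes!r}")

if __name__ == "__main__":
    main()
```

Because no legitimate service runs on this port, every connection the script records (source IP, timestamp, and the attacker’s first bytes) is exactly the kind of low-interaction telemetry described above.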
Deterrence and Delay
Another key aspect of honeypots in cybersecurity is their role in deterring and delaying cyber attackers. By presenting an attractive target, honeypots can divert hackers from their intended targets, buying valuable time for security teams. Furthermore, the mere presence of a honeypot can act as a deterrent. Hackers are well aware of the risks associated with interacting with honeypots, and the possibility of being monitored or traced can make them think twice about their actions.

Forensics and Attribution
Last but not least, honeypots play an essential role in cyber forensics and attribution. By capturing detailed logs of a hacker’s actions, they provide a wealth of information that can aid in identification and prosecution. This can include IP addresses, timestamps, keystrokes, and even screenshots.

Use Cases and Benefits of Honeypots in Cybersecurity

Improving Threat Detection and Analyzing Attacks
Honeypots are highly effective at identifying both known and unknown threats, enhancing threat detection. By creating an environment that is attractive to attackers, honeypots can lure in potential threats and monitor their actions. This can provide invaluable insight into the behavior of attackers and the methods they use to infiltrate systems.

The analysis of attacks is another important aspect of honeypots. By observing the actions of an attacker in a controlled environment, it is possible to understand the tactics and techniques they use. This can provide valuable information that can be used to strengthen the security of a network or system.

Collecting Intelligence about Attacker Tactics, Techniques, and Procedures (TTPs)
By observing the actions of an attacker in a controlled environment, it is possible to gain insights into their TTPs. The intelligence gathered from honeypots can be used in various ways. For example, it can be used to identify trends in cyberattacks, such as the types of attacks being used, the targets being aimed at, and the times at which attacks are most likely to occur. This can provide valuable information for planning and implementing effective security strategies. Furthermore, the information collected from honeypots can be used to educate and train security personnel. By providing real-world examples of attacks, it is possible to better understand the threats and how to respond to them.

Creation of Advanced Defense Mechanisms
The insights gleaned from honeypots can also lead to the creation of advanced defense mechanisms. By understanding hackers’ tactics and strategies, security teams can develop innovative solutions that effectively counter these threats. These defense mechanisms can range from improved firewall configurations to sophisticated machine learning algorithms that detect and neutralize threats in real time. Furthermore, the information gathered from honeypots can assist in the training of AI and machine learning models. These models can be trained to detect and respond to cyber threats, making the network more resilient and robust against attacks.

Distracting Attackers from Real Targets
By presenting an attractive target to attackers, honeypots can divert their attention away from valuable assets. This can provide additional time for security teams to detect and respond to the attack. Honeypots can also be used to confuse and mislead attackers. By providing false information or creating a misleading environment, it is possible to waste an attacker’s time and resources.
This can make it more difficult for them to carry out a successful attack and can potentially deter them from attempting future attacks.

Implementing a Honeypot in Your Environment: Key Considerations

Deciding on the Type of Honeypot
The first step is deciding on the type of honeypot to use. There are two main types: low-interaction and high-interaction. Low-interaction honeypots are relatively simple and are designed to mimic the services and systems that a hacker might target. They require minimal resources and are relatively easy to deploy and manage.

On the other hand, high-interaction honeypots are more complex and are designed to mimic an entire network or system. They require more resources and management, but they can provide more valuable insights into the hackers’ activities. The choice of honeypot will depend on the specific needs and resources of the organization.

Deployment Considerations
Another critical aspect of implementing a honeypot is deployment. The placement of the honeypot within the network is crucial to its effectiveness. Ideally, the honeypot should be positioned in a location where it is likely to attract hackers but not disrupt the normal operations of the network.

Additionally, the honeypot should be designed to blend in with the rest of the network. It should not be easily identifiable as a decoy, as this would defeat its purpose. This requires careful planning and execution, and may require the expertise of a seasoned cybersecurity professional.

Maintenance and Monitoring of Honeypots
The maintenance and monitoring of honeypots is also a critical aspect of their implementation. Honeypots need to be regularly updated and patched to ensure that they remain effective. They also need to be closely monitored to detect any potential threats and gather valuable information.

The data collected from the honeypot should be carefully analyzed to understand the hackers’ tactics and strategies. This requires a considerable amount of time and resources, but the insights gained can be invaluable in strengthening the organization’s cybersecurity posture.

Analysis of Data Collected from Honeypots
The data collected from honeypots can provide a wealth of information on potential threats and the tactics used by hackers. This data should be carefully analyzed to identify patterns and trends that can aid in the development of more effective defense mechanisms.

The analysis of this data can also assist in the identification of potential vulnerabilities within the network. By understanding the tactics used by hackers, security teams can proactively address these vulnerabilities and strengthen the network’s defenses.

Legal and Ethical Considerations
Finally, it’s important to consider the legal and ethical implications of using honeypots. While honeypots are an effective tool for detecting and thwarting cyber threats, they can also raise legal and ethical questions. For instance, is it ethical to deceive hackers into thinking they are attacking a legitimate system? What happens if the honeypot inadvertently harms an innocent third party? These are important considerations that need to be carefully addressed when implementing a honeypot. It’s important to consult with legal and ethical experts to ensure that the use of honeypots adheres to all relevant laws and ethical guidelines.
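Returning to the practical side, the analysis step described above can start very simply. The sketch below summarizes the log file produced by the earlier listener example; the log format is the one assumed there, not a standard, and real deployments typically stream structured events into a SIEM instead.

```python
from collections import Counter

LOG_FILE = "honeypot.log"  # same assumed format as the listener sketch

def summarize(log_path: str) -> None:
    """Count connection attempts per source IP to surface repeat offenders."""
    attempts = Counter()
    with open(log_path) as fh:
        for line in fh:
            # Expected shape: "<timestamp> connection from <ip>:<port>, sent ..."
            parts = line.split()
            if len(parts) >= 4 and parts[1] == "connection":
                ip = parts[3].rsplit(":", 1)[0]  # strip the ":<port>," suffix
                attempts[ip] += 1
    for ip, count in attempts.most_common(10):
        print(f"{ip}: {count} attempts")

if __name__ == "__main__":
    summarize(LOG_FILE)
```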
What is a subsidiary company? Definition, examples and FAQs

Leading international companies have created some 370,000 subsidiaries between them, many of which operate in the U.S. Before you follow in their footsteps, you must understand not only what a subsidiary company is but also how to manage one effectively.

A subsidiary company is owned or controlled by a parent or holding company. Usually, the parent company will own more than 50% of the subsidiary company, giving the parent organization the controlling share. Sometimes, control is achieved simply by being the majority shareholder. When a parent organization owns all of a company’s common stock, the company is known as a ‘wholly owned’ subsidiary.

This guide will explain:
- What a subsidiary company is
- How a subsidiary company works
- Examples of subsidiary companies
- How entity management software can aid with corporation management

What is a subsidiary company?
Subsidiary definition: a company that is at least 50% owned by a parent or holding company.

A subsidiary company is either partially or wholly owned by another company. That company can be either a parent company, which is its own functioning company, or a holding company, which solely controls other companies and investments. To be a subsidiary, a company has to be at least 50% owned by the parent or holding company. Subsidiaries that are 100% owned are considered wholly owned subsidiaries.

How a subsidiary company works
A subsidiary and its parent company are legally separate entities. This means the individual organizations pay their own tax and debt, limiting shared liabilities between the companies. Subsidiary companies have independence from the parent company and, in many cases, are individual brands. However, the parent company will naturally influence the subsidiary’s operations, including governance. As the major shareholder, the parent company can elect the board of directors and drive the overall business strategy.

Subsidiaries are a commonly used structure for both national and international corporations. Tiers of subsidiaries can group a range of industries within a multinational conglomerate. The structure can also bring together companies from within one sector in a corporate group.

How does a parent company control its subsidiary?
A parent company controls its subsidiary by owning all or most of its stock. If that’s the case, the parent company can control most subsidiary operations, including assigning the board members. The parent company can also include clauses in the subsidiary’s Articles of Incorporation to assign certain powers, such as requiring the parent company’s approval to pass bylaw changes or take certain actions.

That said, parent companies reap the benefits of subsidiaries when the subsidiary can operate more independently. This allows the subsidiary to set its own corporate strategy and objectives, which often differ significantly from those of the parent company.

Can a subsidiary be liable for a parent company?
Typically, a subsidiary cannot be liable for a parent company. Because they’re legally separate entities, they retain their own liability — meaning the parent company usually isn’t liable for the subsidiary’s actions either.

Can a subsidiary leave a parent company?
A subsidiary can leave a parent company, but the structure of the subsidiary/parent company relationship makes this uncommon. To leave the parent company, the subsidiary’s shareholders and board of directors would have to approve it.
However, the parent company is among the shareholders — and, sometimes, the only shareholder — so it’s unlikely that a vote to leave the parent company would pass.

Can a subsidiary sue a parent company?
A subsidiary can sue a parent company, but it’s exceedingly rare and depends on the subsidiary’s Articles of Incorporation. To sue, the subsidiary has to prove damages caused by the parent company — a difficult thing to do because the parent company owns the subsidiary. However, it’s still possible to sue if the parent company’s actions directly interfered with the subsidiary’s contractual obligations and the subsidiary suffered damages as a result.

Who owns a subsidiary company?
Subsidiary companies are owned by either a parent company or a holding corporation. A wholly owned subsidiary company is entirely owned by the parent or holding corporation. In other cases, parent companies hold the controlling share of a subsidiary company. In practice, this means owning more than half of the company’s common stock. So, by definition, parent companies have majority ownership or control of a subsidiary.

As the major shareholder, the parent company has the deciding vote when electing the directors in the boardroom. In many cases, a member sits on the board of both the parent and subsidiary company. Because of this, parent companies significantly influence the strategic direction of subsidiaries, including any steering committee groups.

Types of subsidiary companies
There are two types of subsidiary companies: wholly owned and partially owned. The primary difference lies in the ownership stake of the parent or holding company.

Wholly-owned subsidiary: A wholly-owned subsidiary is 100% owned by the parent corporation. The parent company holds all common stock, which means it has sole influence over the subsidiary’s operations. This is generally achieved through a parent company acquiring full control of a company or by founding the subsidiary company itself. The wholly-owned subsidiary company is still legally recognized as its own entity.

Normal/partially-owned subsidiary: A normal subsidiary is one in which a parent or holding corporation owns more than half of the common stock. This means the subsidiary will have multiple shareholders who can influence the corporation’s ongoing operations.

What is the purpose of a subsidiary company?
The main benefit of subsidiary companies is that they are different legal entities from their parent company. This means the two companies can limit shared liabilities or obligations and will be separate in terms of regulation or tax. This legally recognized separation is a key difference between a branch and a subsidiary company. Parent companies also commonly use subsidiaries for the following purposes:
- Legal and financial liability: In practice, having two distinct legal entities limits the liability of both the parent and subsidiary company. Keeping companies separate can help to insulate the holding company from potential financial or legal issues faced by a subsidiary company.
- Regulatory compliance: In the case of multinational corporations, subsidiary companies align themselves with local regulations or laws.
- Tax advantages: As an incorporated company in its own right, a subsidiary company can take advantage of more favorable corporate tax rates compared to where the parent company is based. This is a key part of good governance between parent companies and subsidiaries.
- Expand into new markets: Subsidiary companies are a common way for corporations to expand into international markets. As independent entities, the risk for the wider corporation is minimized.
- Utilize more flexible operations: Subsidiary companies are often distinct brands positioned under an overall holding company. These brands can benefit from the synergy between different parts of the larger corporate group but also retain the benefit of independence. Subsidiaries can be experimental brands or products, as financial liabilities are contained. As separate legal entities, subsidiary companies are more straightforward to manage or sell.
- Leverage diverse expertise: Instead of investing heavily in internal research and development, parent companies often acquire companies with specific area expertise. An example would be a larger company purchasing a small firm that produces a specific technology or digital tool. Subsidiary companies allow parent corporations to diversify their business but isolate the potential risks.

In summary, the key benefits of subsidiaries include:
- Tax advantages: Subsidiaries have the potential for favorable tax rates due to their separate setting from the parent company.
- Limited liability: Financial liability for the wider holding corporation is limited, containing potential losses within the subsidiary company.
- Greater independence for brands: A specific brand or product can be kept as its own legal entity to maintain independence and make selling straightforward.
- Develop niches: Subsidiaries focusing on specific product or technology development can strengthen the corporation as a whole.

Pros and cons of a subsidiary company
A subsidiary company may sound like a win-win. While it’s true that subsidiaries shelter the parent from liabilities, offer tax benefits and facilitate growth, they also greatly complicate the corporate structure. Corporations considering a subsidiary should weigh both the pros and cons of a subsidiary structure before moving forward.

| Pros | Cons |
| --- | --- |
| Tax benefits: Subsidiaries may only have to pay taxes in their state or country and not on all of their profits. | Complex finances: Accounting can be complicated, especially when a parent company owns multiple subsidiaries. |
| Shield for losses: Any losses are contained within the subsidiary and do not directly affect the parent/holding company. | Exposure to liability: Parent companies are legally responsible for the actions of the subsidiary. |
| Greater efficiencies: The subsidiaries of a parent company can work together to streamline processes or share their expertise. | More legal ground to cover: While subsidiaries can shield losses, they can also be subject to different laws and regulations from their parent company depending on where they operate. |
| Simple structure: Subsidiaries are easy to establish, manage and sell. | Complex power dynamic: Subsidiaries are beholden to their parent company, but they have their own executive structure that can create conflicts. |

Examples of subsidiary companies
Many modern businesses have subsidiaries; some offer liability protection, while others allow the parent company to reach new industries or territories. For example, PepsiCo isn’t just a company; it’s a conglomerate that owns more than one subsidiary company, including Mountain Dew, Frito-Lay and even Quaker Foods. More common examples of subsidiary companies include:

| Company name | List of subsidiaries |
| --- | --- |
| TJX Companies | T.J. Maxx, Marshalls, HomeGoods, Sierra, HomeSense, Winners, T.K. Maxx |
| Coca-Cola Company | AdeS soy-based beverages, AHA sparkling waters, Aquarius, Ayataka green tea, Chivita, Ciel water, Coca-Cola brands, Costa Coffee, Dasani waters, Del Valle juices and nectars, Fairlife, Fanta, Fresca and Fresca Mixed Cocktails, Fuze Tea, Georgia coffee, Gold Peak teas and coffees, Honest Kids, ILOHAS, innocent smoothies and juices, Jack Daniel’s and Coca-Cola, Minute Maid juices, Peace Tea, Powerade sports drinks, Simply juices and Simply Spiked adult beverages, Schweppes, smartwater, Sprite, Topo Chico waters and hard seltzers, vitaminwater |
| Estée Lauder Companies | Aerin, Aramis, Aveda, Bobbi Brown, Bumble and bumble, Clinique, Darphin Paris, Dr. Jart, Frederic Malle Editions de Parfums, Estée Lauder, GlamGlow, Jo Malone London, Kilian Paris, La Mer, Lab Series, Le Labo, MAC, Origins, Smashbox, Tom Ford Beauty, Too Faced |
| Hyundai Motor Company | Hyundai, Kia, GENESIS, Hyundai Mobis, Hyundai KEFICO Corporation, Hyundai MSEAT, Hyundai WIA, Hyundai TRANSYS, Hyundai PARTECS, Hyundai IHL, Hyundai Steel Company, Hyundai BNG Steel, Hyundai Special Steel, Hyundai Engineering & Construction, Hyundai Engineering Co., Ltd, Hyundai Engineering & Steel Industries Co., Ltd, Hyundai City Corporation, Hyundai GLOVIS, Hyundai Rotem, Hyundai Card, Hyundai Capital, Hyundai Commercial, Hyundai Motor Securities, Innocean Worldwide, Haevichi Hotel & Resort, Hyundai AutoEver, Hyundai NGV, GIT, Hyundai Farm Land & Development Company |

How to find subsidiaries of a company
The Securities and Exchange Commission (SEC) requires publicly listed companies to disclose their subsidiaries, including when they acquire or dispose of a subsidiary. As a result, this information is publicly available, and you can find the subsidiaries of a company if you know where to look. A company’s subsidiaries are usually listed on:
- The company website: Annual reports are a hotspot for lists of subsidiaries. You can also check press releases, as a company will likely mention subsidiary updates there.
- The SEC website: You can search and download documents from the EDGAR database, including registration statements and other forms. These forms usually include subsidiary information.

How to create a subsidiary company
A parent company can either create a subsidiary company or purchase the majority of shares in an existing company. If founding a new subsidiary, parent companies will need to:
- Authorize the subsidiary: The parent company’s board of directors will vote to form the subsidiary company. This should include a resolution about the agreement, signed by the board chair.
- Determine the structure: A subsidiary company can follow a corporate or LLC structure, since both limit liability. The two are taxed differently, so the parent company should choose a structure that benefits its finances.
- Complete the process of incorporation: As with the creation of any new company, the parent company needs to register the subsidiary within the state or country where it intends to establish it. The parent company will be recorded as the owner of the subsidiary during the incorporation process.
- Fund the subsidiary: Like any new business, subsidiaries need capital. A parent company can transfer assets to the subsidiary, making the parent company the owner.
- Define operations: Parent companies need to organize how the subsidiary company will operate. This includes everything from appointing the board of directors to establishing a governance framework to deciding how the subsidiary will make critical decisions.
- Elect a board of directors: As the majority owner, a parent corporation will elect the subsidiary’s board of directors, including the chairman of the board. In many cases, certain members will sit on the boards of both the parent and subsidiary companies. They can help to represent the wider group’s interests when making strategic decisions.
- Produce ongoing documentation: As a legally separate entity, a subsidiary functions as a normal independent company and produces its own financial statements. Parent companies must record all transactions between themselves and the subsidiary, and are also required to include financial statements from their subsidiary companies within a consolidated financial document. Corporate documentation is vital across the whole business life cycle, from initial incorporation until the potential closure of a subsidiary.

Effectively create and manage your subsidiaries
Managing the boardroom and operations of one subsidiary company can be complex, let alone multiple subsidiaries. Simplify the process with entity management software. With the right solution, you can improve your efficiency, streamline your business intelligence and ultimately create a centralized corporate record to help you make more strategic decisions about your subsidiaries.

Diligent Entity Management tracks governance decisions, regulatory compliance and financial records in one easy-to-access dashboard. Analyze subsidiary information from your entire corporate group in real time. Find out how Diligent Entity Management can help your corporation. Request a demo from the team today.

Frequently asked questions (FAQs)

Is a subsidiary an LLC or a corporation?
A subsidiary company can be either an LLC or a corporation. The parent company decides which structure the subsidiary will take. This is usually a financial and legal decision: while both LLCs and corporations limit liability, they are taxed differently.

Are subsidiaries 100% owned?
Subsidiaries can be either wholly owned (100% owned) or not wholly owned. A parent company only needs to own more than 50% of another company’s stock for that company to be considered a subsidiary.

Is a subsidiary its own company?
Yes. Subsidiaries typically operate on their own and follow their own structure, but they benefit from the resources of, and connection to, their parent company.

What are tiered subsidiaries?
There can be multiple layers or tiers of subsidiary companies within a wider corporate group. A company owned by a parent corporation is a first-tier subsidiary. But this subsidiary may hold majority shares in a subsidiary of its own, which can then be described as a second-tier subsidiary of the overall parent corporation. The tiers can continue depending on the complexity of the corporate group.

The parent company will have a degree of control over both tiers of subsidiary companies. As the major shareholder, it holds direct control of a first-tier subsidiary: it can elect the board of directors and influence strategic business decisions when required. Using this influence, the parent company can exercise indirect control of the second-tier subsidiary.

What is a wholly-owned subsidiary company?
If the entire subsidiary company is owned by the parent corporation, it is known as a wholly owned subsidiary. A wholly owned subsidiary company has no other shareholders, which gives the parent corporation a major influence on the company’s ongoing operations.
Direct control of who sits on the board of directors helps define the aims and strategic decisions made by the subsidiary company.

What is an associate company?
When a corporation owns a minor share of another business, that company is known as an associate or affiliated company. In this case, a corporation owns a portion of a company but not enough to have full ownership. Usually, this means the corporation owns less than half of the company’s common shares; to be considered a subsidiary, the majority of the company would need to be owned by the parent corporation. With a minority share of common stock, a parent corporation has no direct control over strategic decisions. In most cases, a larger company will invest in a smaller associate company and will usually record the value of this investment on its financial statements.

What are sister companies?
Sister companies are subsidiary companies that share a parent or holding company. Most sister companies operate independently and have no relationship other than the owning corporation.

Does a subsidiary have its own CEO?
Subsidiaries do have their own CEO, management team and board of directors. The parent company does have sway over who holds those positions, though.
Unraveling the Art of Phishing: Understanding the Threat to Companies

Phishing, a cunning form of cyberattack, has emerged as a prominent threat to individuals and organizations alike. This deceptive practice, characterized by the manipulation of human psychology, aims to steal sensitive information such as login credentials and financial data. In this comprehensive guide, we will delve into how phishing works, its potential impact on companies, and the strategies for effective prevention. We will also highlight the importance of the Cybersecurity Awareness for Users (CS8525) training offered by ECCENTRIX in building a robust defense against phishing attacks.

Phishing is a social engineering technique in which cybercriminals impersonate legitimate entities to trick individuals into revealing confidential information or performing malicious actions. Phishing attacks take various forms, but they all share a common goal: exploiting human trust and curiosity.
- Deceptive techniques: Phishing employs emails, websites, and messages that appear genuine in order to manipulate recipients.
- Spoofing: Attackers often spoof email addresses, URLs, or phone numbers to mimic trusted sources.
- Social engineering: Phishing relies on psychological manipulation, such as urgency or curiosity, to deceive recipients.
- Payload delivery: Some phishing emails contain malicious attachments or links, while others redirect to fake login pages.

How Phishing Works
Phishing attacks typically follow a set of steps to succeed in their deception:

Step 1: Reconnaissance
Phishers gather information about their targets. This phase may involve researching potential victims on social media or company websites.
Example: An attacker wanting to target a specific company’s employees may search LinkedIn for staff members, noting their roles and hierarchies.

Step 2: Email Design
In this phase, attackers craft persuasive emails designed to trick the recipient. These emails often mimic trusted sources, such as banks, service providers, or colleagues.
Example: A phishing email might impersonate a well-known bank, warning recipients of a security breach and urging them to click a link to update their account information.

Step 3: Delivery
Phishing emails are sent to a list of potential victims. Attackers may use botnets or compromised servers to avoid detection.
Example: The attacker sends the crafted bank impersonation email to hundreds of email addresses, including employees of the targeted company.

Step 4: Deception
Once a recipient opens the email, the attacker leverages social engineering tactics to manipulate their emotions or curiosity. Urgency, fear, and a call to action are common strategies.
Example: The phishing email might warn that the recipient’s bank account will be locked if they don’t click the link and update their details immediately.

Step 5: Payload Delivery
Phishing emails often include links that redirect recipients to fake websites. These sites are crafted to look identical to legitimate login pages.
Example: Clicking the link in the email takes the recipient to a fake bank login page, where they unwittingly enter their login credentials.

Step 6: Data Collection
As victims enter their login credentials on the fake page, the attacker captures and stores the information for later use.
Example: The attacker now has the victim’s bank login credentials, which they can use to access the account and potentially steal funds.
Step 7: Exit
After collecting the desired information, attackers may redirect victims to legitimate websites to avoid raising suspicion.
Example: The victim, unaware of the phishing attack, is redirected to the bank’s official website, making it appear as if nothing unusual happened.

Impact on Companies
Phishing attacks can have severe consequences for companies, including:
- Data breaches: Phishing can lead to unauthorized access to sensitive company data, such as customer information or intellectual property.
- Financial loss: Stolen login credentials can result in financial fraud, potentially costing the company a substantial amount.
- Reputation damage: Falling victim to phishing can damage a company’s reputation, eroding customer trust.
- Regulatory violations: Data breaches may lead to legal repercussions if the company fails to comply with data protection regulations.
- Operational disruption: Dealing with the aftermath of a phishing attack, including breach investigations and security enhancements, can disrupt normal business operations.

Preventing Phishing Attacks
Companies can take several steps to prevent phishing attacks:
- Employee training: Educate employees on how to recognize phishing attempts and report them promptly.
- Email filtering: Implement robust email filtering solutions that can identify and block phishing emails.
- Multi-factor authentication (MFA): Require MFA for accessing sensitive systems to add an extra layer of security.
- Regular updates: Keep software, operating systems, and antivirus programs up to date to patch vulnerabilities.
- Incident response plan: Develop a comprehensive incident response plan to quickly contain and mitigate the impact of phishing attacks.

Benefits of Employee Training
Employee training is a crucial component of phishing prevention. A well-trained workforce can:
- Recognize and report phishing attempts promptly.
- Understand the risks and consequences of falling victim to phishing.
- Take proactive steps to protect sensitive information and data.
- Contribute to the company’s overall security posture.

ECCENTRIX Cybersecurity Awareness for Users (CS8525) Training
ECCENTRIX offers the Cybersecurity Awareness for Users (CS8525) training, designed to empower employees with the knowledge and skills to recognize, report, and prevent phishing attacks. This training covers various aspects of cybersecurity, making employees an integral part of the organization’s defense against cyber threats.

Phishing remains a significant threat to individuals and companies alike, exploiting human psychology and trust. Understanding how phishing works is the first step in preventing these attacks. With effective prevention strategies, including employee training, robust email filtering, and up-to-date systems, companies can significantly reduce their vulnerability to phishing attacks.

Common Questions About Phishing (FAQ)

Can phishing hack your phone?
Yes, phishing can potentially compromise your phone. Phishing attempts often employ deceptive techniques to trick users into clicking malicious links or downloading harmful attachments, which can lead to the installation of malware or theft of sensitive information stored on your device.

Do banks refund scammed money?
Banks often have policies in place to refund money lost due to scams, but the process and eligibility criteria vary.
Many banks have fraud protection measures and investigation procedures to determine whether the customer is entitled to reimbursement, usually based on the circumstances of the scam and the promptness of reporting the incident.

Can phishing steal your identity?
Yes, phishing attempts frequently aim to steal personal information such as usernames, passwords, financial details, and more. With this information, cybercriminals can impersonate individuals, gain unauthorized access to accounts, and potentially perpetrate identity theft or financial fraud.

Are phishing emails illegal?
Phishing, in the sense of attempting to deceive individuals or gain unauthorized access to sensitive information, is illegal and a form of cybercrime. It violates various laws related to fraud, identity theft, and unauthorized access to computer systems. Perpetrators of phishing attacks can face legal consequences if caught and prosecuted.
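To make the email-filtering idea above a little more tangible, the following sketch shows the kind of crude heuristic a filter might apply to URLs found in incoming mail. The signals, weights, and example URLs are illustrative assumptions, not part of the CS8525 material or of any production filter; commercial solutions combine far richer signals with reputation feeds and machine learning.

```python
from urllib.parse import urlparse
import re

# Illustrative signals only; real filters use many more, plus reputation data.
SUSPICIOUS_KEYWORDS = ["login", "verify", "update", "secure", "account"]

def phishing_score(url: str) -> int:
    """Return a rough suspicion score for a URL; higher means more suspicious."""
    score = 0
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 3   # raw IP address instead of a domain name
    if "@" in url:
        score += 3   # an '@' in a URL can hide the real destination
    if host.count(".") >= 3:
        score += 1   # deeply nested subdomains often mimic real brands
    if "-" in host:
        score += 1   # hyphenated lookalike domains are common in phishing
    if parsed.scheme != "https":
        score += 1   # no transport encryption
    score += sum(1 for word in SUSPICIOUS_KEYWORDS if word in url.lower())
    return score

# Example usage with made-up URLs:
for candidate in [
    "https://www.example.com/",
    "http://198.51.100.7/secure-login/verify-account",
]:
    print(candidate, "->", phishing_score(candidate))
```

A filter built on this pattern would quarantine or flag messages whose links exceed some threshold; the value of such heuristics is that they catch the structural tricks described in Steps 2 and 5 above even when the specific domain has never been seen before.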
Text Annotation - A Comprehensive Overview

AI models interpret the vast data that systems ingest through machine learning. But how do we make machines understand raw data, including images, videos, and text? While humans interpret the meaning of a phrase based on cultural and proverbial context, a machine may not do so. For example, in the phrase ‘You cracked the puzzle’, a human knows that it indicates solving a puzzle, but a machine may misinterpret the word ‘cracked’. This is what makes text annotation challenging.

An accurate AI model is the outcome of text annotation for machine learning. With the adoption of AI models, the demand for annotation has increased, and the data annotation market is expected to grow at a CAGR of 12.1% between 2023 and 2030. Let’s explore the types of text annotation services and their use cases.

Types of text annotation services
Without human intervention, AI models will lack the depth and native fluency with which humans use a language. Human annotators train the AI model with high-quality training data using the following text annotation methods.

Entity annotation
This process assigns entities within a text predefined labels based on semantic meaning. Machine learning uses this annotated text to retrieve the underlying meaning of the text. This method of text annotation locates, extracts, and tags entities within the text in one of these ways:
- Named entity recognition (NER) – This method labels key information within a text, such as people, locations, characters, dates, and objects, using distinct categories. For example, 2024 and 2030 are labelled as ‘Dates’, text annotation as a ‘Service’, and Infosys BPM as an ‘Organisation’.
- Coreference resolution (relationship annotation) – This method links two or more different words that refer to the same entity. For example, in the sentences below, ‘Infosys BPM’ and ‘The organisation’ refer to the same entity.
“Infosys BPM builds high-quality training data for AI models. The organisation leverages humanware and automation at scale for training and evaluation.”
- Part-of-speech tagging – This method parses sentences and identifies parts of speech, such as nouns, adjectives, subjects, objects, verbs, pronouns, and prepositions. For example, in the sentences below, the word ‘train’ appears as both an object and a verb.
“He took several train journeys through India over the years. The conversations with locals helped him train himself in the local language.”
- Key phrase tagging – This method locates and labels keywords and key phrases in the text. It is useful when the machine ingests a large chunk of data and needs to understand the main topics without parsing the whole text.

Entity linking
Entity linking, also called named entity linking (NEL), maps words in a text to entities in a knowledge base: open-domain text derived from sources such as Wikipedia. While entity annotation locates exact entities within a text, entity linking connects these labelled entities to a larger body of knowledge. For example, in the sentence below, the link helps the AI model understand that ‘Paris’ refers to a city and not the celebrity ‘Paris Hilton’.
“Last week, we went for a holiday to Paris, the capital of France.”

Text classification
Text classification annotates a chunk of text or sentences with a single label. Some specialised forms of text classification include document classification, sentiment annotation, product categorisation, email classification, and toxicity classification. For example, the text chunk below can be labelled ‘news’.
> “According to Ted Decker, CEO of Home Depot, the quarter showed subdued results. Compared to the time during the pandemic, customers wanted fewer do-it-yourself projects. The spring season started late, and there was softness in certain larger discretionary projects.”

Sentiment annotation
This method determines and labels the emotion or opinion behind a body of text. This can be difficult, especially when the writer has used sarcasm or rhetoric. In such cases, the annotator picks the label that best represents the emotion, and computers conclude positive, negative, or neutral views based on analogous data. For example, the sentence below carries a mix of joy, nostalgia, and sadness.
“Helping the old lady with groceries filled me with joy. She reminded me of my mother, whom I miss a lot.”

Text annotation use cases
While text annotation services impact almost all industries, here are some of the prominent use cases:
- Healthcare: Text annotation services help extract data from clinical trial records, classify medical documents, detect medical conditions, analyse patient records, and process claims.
- Insurance: Insurance companies leverage text annotation services to extract contextual data from contracts, evaluate risk, recognise the parties involved, and identify dubious or fraudulent claims.
- Banking and finance: Text annotation services help identify fraud and money laundering, extract and manage contract data, and determine loan rates and credit scores.
- Logistics: Text annotation services in logistics can help extract names, amounts, order numbers, and more from invoices.

How can Infosys BPM help in text annotation services?
Infosys BPM uses a human-in-the-loop model for text annotation services. This agile model combines human intelligence and automation to produce high-quality training datasets at scale, refining and improving AI models while saving time and resources. Learn more about text annotation services at Infosys BPM.
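For readers who want to experiment with the entity-annotation concepts described above, here is a small sketch using the open-source spaCy library. spaCy is an assumption chosen for illustration (this article does not describe Infosys BPM’s own tooling), and the snippet assumes the small English model en_core_web_sm has been downloaded.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = ("Infosys BPM builds high-quality training data for AI models. "
        "The data annotation market is expected to grow between 2023 and 2030.")

doc = nlp(text)

# Named entity recognition: each entity gets a text span and a semantic label.
for ent in doc.ents:
    print(f"{ent.text!r:35} -> {ent.label_}")

# Part-of-speech tagging: the grammatical role of each token.
for token in doc[:8]:
    print(f"{token.text!r:15} -> {token.pos_}")
```

The printed labels, such as ORG for an organisation or DATE for a year, are the machine-readable annotations discussed above; in a production pipeline, human annotators review and correct the model’s output before it is used as training data.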
Digital transformation is helping to deliver more impactful and effective learning experiences than ever before, especially in the education sector. From K-12 to some of the largest universities, digital technology has introduced devices such as iPads and Chromebooks, the internet, and even more immersive means of learning, such as augmented reality, into the classroom.

Most recently, the transformation in education has come in the form of home elearning. Many parents are being tasked with homeschooling their children while balancing the work of their own jobs. This rapid digital transformation in education came whether schools were prepared or not, and we’re positive that, like other industries, education institutions look forward to getting things back to the way they were prior to the COVID-19 pandemic.

Educause notes that “… digital transformation means transforming an organization’s core business to better meet customer (students, parents, faculty, staff, alumni, etc.) needs by leveraging technology and data.” Educational institutions must also use digital transformation to shape their administration and regulatory compliance efforts so they more effectively support the core business of education. Two important ways educational entities can improve administration and compliance efforts are by:
- Effectively managing the large amount of data and information already in the system, as well as that which is being created daily.
- Enabling security processes and features on devices like MFDs and printers that will help education administrators better meet the requirements of regulations like the Family Educational Rights and Privacy Act (FERPA), the General Data Protection Regulation (GDPR), or the California Consumer Privacy Act (CCPA).

In a study done by Canon (the Canon Lean Information Management and Governance Study), a high percentage of organizations reported positive or significantly positive business impact in updating of information (73%), regulatory compliance (77%), and security of information (79%) as a result of more effectively managing their information and implementing information governance practices.

Managing information and data starts with digitizing as much of it as possible – especially information that continues to be created or distributed on paper. Using a multifunction device (MFD) or scanner to capture student materials (such as creative work, handwriting or paper-based tests), invoices or parental permission slips opens up multiple ways to better manage and secure those types of information. Using optical character recognition (OCR) software and an information management software solution can help extract important details and create rules governing who has access to the information, how it is stored, and how long it is retained. Once digitized, information is much more easily searched for and found when needed.

Security, one of the top five digital transformation trends in education, helps keep student and other data private, again helping administrators meet privacy regulations. Additionally, implementing good information, data, and records security practices can lessen the risk of a costly breach that could cripple an institution’s ability to carry out its mission. Ponemon reported in its “Cost of a Data Breach Report 2019” that the average cost of a data breach in the education sector was $4.8 million, or $142 per record.
This total consists of direct and indirect costs, including:
- Breach discovery costs
- Remediation costs
- Outsourcing costs to notify affected parties
- Future costs due to loss of business
- Other costs associated with the breach

Surprisingly, in the same report, almost 30% of organizations said they expect a data breach in the next two years, and only 51% of education sector organizations have implemented security automation. Even without security automation using artificial intelligence, machine learning, or automated incident reporting and response, there are many actions an education sector administrator can take to help mitigate risk and make an institution’s digital transformation more secure. These include:
- Considering the replacement of any MFD, printer, scanner or copier that is more than five years old. Major security features are available in newer models.
- Upgrading to equipment that, at startup, verifies that the boot code, OS, and applications running on the device have not been compromised.
- Ensuring new equipment has anti-virus/anti-malware software installed and activated on the device.
- Updating software and firmware on devices, including MFDs and printers, making sure updates are digitally signed by the manufacturer.
- Enforcing authentication on MFDs, printers, laptops and PCs.
- Encrypting files in a digital information system, as well as data traveling between a computer and any printing device.
- Shredding files that are at the end of their life and no longer needed.
- Finally, engaging a professional reseller to help conduct a security audit to identify security gaps and determine solutions to fill those gaps.

Digital transformation is a journey, not a destination. Managing information and data security practices along the way makes the journey safer and more efficient.
As TDM transmission networks continue to be replaced by packet-based transport solutions, this course seeks to explain the key technologies involved. Initially, the course explores the concepts of IPv4 and IPv6 before focusing on the operation of MPLS, including its key capabilities and features. Carrier Ethernet is also considered, exploring its standardized services and transport options. Finally, real-life use cases for PTN are examined, in conjunction with a consideration of the new challenges that packet networks bring.

The estimated completion time for this course is twelve hours. The maximum allotted time is twelve months from enrollment.

Looking for more 5G training? Our 5G and Essential Technologies package gives you exclusive access to this course plus 64 more full courses, 60 Telecoms Bytes micro-courses, and the NetX network visualization tool, so you can choose what you want to study and learn at a pace that suits you. Best of all, the 5G and Essential Technologies package gives you a year to explore the catalog and take as many courses and micro-lessons as you need, for one incredibly low price.
It’s no secret that data demands across the globe are on the rise. 5G, IoT, edge and cloud architectures, content streaming, and more are driving massive compute, storage, and distribution requirements. In fact, worldwide data creation is expected to remain on its exponential upswing to reach a forecasted 175 zettabytes by 2025. To put this in perspective, analysts have observed that if each terabyte in a zettabyte represented a kilometer, a single zettabyte would be equivalent to 1,300 trips to the moon and back (768,800 kilometers per round trip).

As a foundational element of this constantly expanding datasphere, data centers have been proliferating, and demand for these facilities has been growing in parallel to cater to the businesses and end users that want to expand their digital capabilities. Accordingly, reports have stated that the global data center market will grow at an estimated compound annual growth rate of 17% between 2020 and 2023. That might be good news for the data center industry, but not so much for the environment.

THE CHALLENGE AT HAND
Not only do the IT equipment turnovers associated with data center innovation generate huge amounts of electronic waste (around 50 million metric tons in 2018 alone — a number that is growing), but sources say the industry’s carbon emissions rival those of the airline industry. Additionally, the Natural Resources Defense Council reports that annual data center electricity consumption in the U.S. was projected to grow from 91 billion kWh in 2013 to around 140 billion kWh in 2020.

As the global dependence on data and associated infrastructure expands, combatting the threat of climate change is paramount. It’s becoming clear that sustainability must be considered a core metric of ongoing success and a vital consideration when designing, building, and managing a data center. This means data centers must now grapple with new decisions when it comes to items such as the cost, consumption, and availability of power — 40% of which can frequently be used for cooling alone. It also means that servers and other equipment must be improved to work more efficiently. The industry must find unique ways of making this new breed of sustainable facilities a reality on a short timeline.

CREATING A SUSTAINABLE DATA CENTER
To combat the large amount of power consumption dedicated to cooling, strategic facility siting has become a primary consideration when developing data centers. Naturally cold climates or proximity to large bodies of water make for ideal data center locations. They deliver great operational benefits, enabling facilities to capitalize on their surroundings to streamline and maximize their cooling potential.

Renewables, such as wind, solar, and hydro, are also gaining popularity. One survey of over 800 industry professionals predicted that in 2025, 13% of global data center power will come from solar and 8% from wind — but these numbers need to grow. Google is a leader in this space — the company recently celebrated its third consecutive year of purchasing enough renewable energy to match the entirety of its annual global electricity consumption. Innovations in AI and machine learning are also opening up new opportunities for energy conservation, thanks to their ability to analyze temperature, humidity, output, and other indicators in order to find ways of optimizing and driving down costs and consumption.
On the equipment side, server virtualization and consolidation are promising options for reducing power demands and achieving new efficiencies, while data management optimization has also made the use of server capacity more effective. Low-power chips and solid-state drives are driving similar efficiencies, and adiabatic systems (which cool through evaporation) use less water for cooling and require less maintenance — even a simple analysis and reorganization of airflow can bring down cooling inefficiencies substantially. Furthermore, these optimization opportunities don’t just cut down on negative environmental effects; they also cut operating costs for providers. Data center owners see the benefits in their bottom lines, which gives them a competitive edge when attracting new tenants.

TODAY’S DEDICATION, TOMORROW’S SALVATION
As the industry evolves to cater to the future of 5G and the IoT, reducing or even neutralizing the impact that data storage, processing, and management have on the environment is absolutely vital. The number of connected devices remains on the rise, data volumes are rapidly expanding, and power needs are growing alongside them, but with a plethora of tools at hand — and new solutions being continually developed — providers are better able to pivot now than ever before. Data centers are accustomed to provisioning the utmost security, scalability, and reliability, but it’s time for all data center companies to ensure that sustainability has a dedicated space alongside these priorities. Systemic change may not occur overnight, but once environmental impact becomes a primary consideration for the data center sphere, the world will be better positioned for a brighter future.
What you'll learn

After completing the course, you will be able to:

- Explain how to use Visual Studio to create and run an application
- Describe the features and syntax of the C# programming language
- Define the monitoring needs of large-scale applications
- Create and call methods, and capture and manage exceptions
- Understand the .NET development platform and libraries
- Understand the .NET framework classes
- Create well-structured and easily maintainable C# code
- Define and implement interfaces
- Create a class hierarchy using inheritance
- Understand object-oriented programming concepts
- Implement the fundamental architecture and core components of a desktop application
- Acquire a working knowledge of how to build a graphical UI using XAML
- Use file I/O and streams, and serialise/deserialise data in various formats
- Understand web communications and protocols
- Create an entity data model for database access
- Use Language-Integrated Query (LINQ)
- Use asynchronous operations to create performant applications
- Add dynamic components and unmanaged libraries to a C# program
- Understand the use of generics and generic collections
- Retrieve metadata from types using .NET reflection

Microsoft at Lumify Work

Lumify Work has been delivering effective training across all Microsoft products for over 30 years. We are proud to be both Australia's and New Zealand's first Microsoft Gold Learning Solutions Partner and the winner of the Microsoft MCT Superstars Award for FY24, which formally recognises us as having the highest quality Microsoft Certified Trainers (MCTs) in ANZ. All Lumify Work Microsoft technical courses follow the Microsoft Official Curriculum (MOC) and are led by MCTs.

Who is the course for?

This course is not designed for students who are new to programming; it is targeted at professional developers with at least one month of experience programming in an object-oriented environment. If you want to learn to take full advantage of the C# language, then this is the course for you.

Terms & Conditions

The supply of this course by Lumify Work is governed by the booking terms and conditions. Please read the terms and conditions carefully before enrolling in this course, as enrolment in the course is conditional on acceptance of these terms and conditions.
Bridging the gap: innovations in AI safety and public understanding

The rise of artificial intelligence (AI) has brought immense opportunities and significant challenges. While AI promises to revolutionize various sectors, from healthcare to finance, the erosion of public trust in AI safety is becoming a critical concern. This growing skepticism is fueled by factors such as a lack of transparency, ethical concerns, and high-profile failures. The importance of addressing these issues cannot be overstated, as public trust is essential for the widespread acceptance and successful integration of AI technologies.

It's essential to understand the key factors contributing to the decline in public trust regarding AI safety and explore the critical role of transparency and ethical considerations. Additionally, it's crucial to understand the implications of public distrust for the future of AI across various sectors, including workplaces. By confronting these concerns head-on, we can pave the way for a future in which AI serves as a reliable and trusted force for good.

Key factors contributing to the decline in public trust regarding AI safety

The erosion of public trust in AI safety is driven by several key factors. Foremost among these is the lack of transparency: many AI systems operate as 'black boxes' whose opaque decision-making processes leave the public wary and skeptical. Ethical concerns also play a significant role. Issues such as algorithmic bias, data privacy violations, and unethical applications of AI exacerbate fears about the technology's fairness and integrity.

High-profile failures and incidents further amplify these concerns. Accidents involving autonomous vehicles, discriminatory recruitment algorithms, and errors in AI-driven medical diagnoses highlight AI's potential dangers and limitations. Such incidents receive widespread, often sensationalized media attention, which magnifies public apprehension. The media's influence on the rapid adoption of AI cannot be overlooked either: sensationalized coverage of AI mishaps contributes to a negative public perception, painting AI as inherently risky and untrustworthy.

The combination of these factors creates a challenging environment for AI developers, who must work diligently to address these issues and rebuild public confidence. Without addressing these fundamental concerns, the promise of AI's benefits will remain overshadowed by public distrust and apprehension.

The critical role of transparency and ethical considerations in AI development

Transparency and ethical considerations are paramount in fostering public trust in AI safety. Attending to both can significantly mitigate concerns and build confidence among users and stakeholders. Transparency involves clear communication about how AI systems operate, their limitations, and the measures taken to ensure their safety and fairness. Developers should openly share information about data sources, algorithmic processes, and decision-making criteria to help demystify AI technologies and demonstrate a commitment to accountability.

Ethical considerations in AI development are equally important. Integrating ethical principles, such as fairness, accountability, and non-discrimination, into the design and deployment of AI systems is crucial. Developers must actively work to eliminate biases in algorithms and ensure that AI applications do not reinforce existing societal inequalities.
Independent audits and certifications can further enhance public trust by validating AI systems' safety and ethical standards, while regular reviews by third-party organizations provide an external check, assuring the public that rigorous standards are being met.

Practical strategies for AI companies to rebuild trust and ensure safety

AI companies can adopt several practical strategies to rebuild public trust and ensure the safety of their technologies. Robust regulatory compliance is foundational: adhering to, and exceeding, established standards for AI safety and ethics demonstrates a proactive approach to addressing potential risks and aligning with best practices.

Inclusive stakeholder engagement is another critical strategy. By involving a diverse range of stakeholders, including ethicists, technologists, policymakers, and the public, AI companies can gather comprehensive input and address a wide spectrum of concerns. This collaborative approach ensures that diverse perspectives are considered, fostering greater public confidence.

Publishing transparency reports is also vital. Regular reports detailing the AI development process, safety measures, and ethical considerations provide clear, accessible information to the public. These reports should highlight the steps taken to mitigate risks, protect user data, and ensure fairness. Additionally, implementing independent audits and certifications can enhance credibility: regular reviews by third-party organizations verify that AI systems meet rigorous safety and ethical standards, providing external validation.

The broader implications of public distrust for the future of AI in various sectors

Public distrust in AI has far-reaching implications across various sectors, potentially stalling the technology's transformative potential. In the workplace, skepticism towards AI-driven automation can slow its integration, affecting productivity and innovation. Employees may resist AI tools, fearing job displacement or unfair performance evaluations, which hinders the collaborative potential between humans and machines.

In the healthcare sector, distrust can significantly impact the adoption of AI in diagnostics and treatment planning. Patients and healthcare providers may be reluctant to rely on AI systems, fearing inaccuracies or biases in medical decision-making. This reluctance can stifle advancements in AI-driven medical technologies that have the potential to improve patient outcomes and streamline healthcare delivery.

The financial industry also faces challenges. Concerns about algorithmic bias and data security can affect the acceptance of AI in areas such as credit scoring, fraud detection, and personalized financial services. Public distrust can lead to increased regulatory scrutiny and slower adoption of AI innovations. Moreover, in public services and government, AI implementation could be hindered by fears of surveillance and privacy violations, which could limit the use of AI in areas like law enforcement and social services.

The broader implications of public distrust, from slowing workplace automation to hindering healthcare advancements, highlight the urgency of this issue. Addressing these concerns is not merely about mitigating risks but also about unlocking the full potential of AI to transform industries and improve lives. As we navigate the complexities of AI adoption, fostering a culture of trust and responsibility will be pivotal.
By strategically addressing these challenges and ensuring a harmonious collaboration between humans and intelligent machines, we can pave the way for a future where AI serves as a reliable and trusted force for good, driving innovation and prosperity.

Dev Nag is the Founder/CEO at QueryPal. He was previously CTO/Founder at Wavefront (acquired by VMware) and a Senior Engineer at Google, where he helped develop the back-end for all financial processing of Google ad revenue. He previously served as the Manager of Business Operations Strategy at PayPal, where he defined requirements and helped select the financial vendors for tens of billions of dollars in annual transactions. He also launched eBay's private-label credit line in association with GE Financial.
Among the most common components of modern data architecture is the data lake, a location where data flows in to serve as a central repository. The concept of the data lake has evolved from being just a location for data collection to a more organized approach known as a data lakehouse. Whether it's called a data lake or a data lakehouse, there is a need for certain skills and IT professionals to effectively manage the technology.

A data lake is a large open storage location that typically uses object storage as a unified repository for unstructured data coming from multiple sources. Those sources can include event streaming data, operational and transactional data, and databases. While data lakes can live in on-premises environments, they are more commonly created with cloud object storage services that enable large, scalable data capacity, such as Amazon Simple Storage Service (S3), Google Cloud Storage or Microsoft Azure Data Lake Storage. Data lakes first emerged to help enable big data workloads with the Apache Hadoop big data platform.

A data lake architecture differs from a data warehouse in that warehouse data is transformed into a format that provides structured data and organization. A data warehouse enables users to more easily query the data and use it for data analytics and business intelligence use cases. Data warehouses also provide data governance and data management capabilities.

The concept of the data lakehouse, a term first coined by Databricks, is an attempt to bring together the best of data lake and data warehouse technologies. A data lakehouse aims to combine the ease of use and open nature of a data lake with the data warehouse's ability to easily execute queries against data. A data lakehouse provides additional structure on top of a data lake, often with the use of a data lake table format technology such as Delta Lake, Apache Iceberg or Apache Hudi. It also uses a query engine technology, such as Apache Spark, Presto or Trino.

Managing data within an organization can be a multi-stakeholder effort. It can involve different job roles depending on the particular use case. Data warehouses are often managed by data warehouse managers and data warehouse analysts. Those two roles involve data management and data analytics skills, which are typically tied to a specific data warehouse vendor technology. Data lake management is often the domain of data engineers, who help design, build and maintain the data pipelines that bring data into data lakes. With data lakehouses, there can often be multiple stakeholders for management in addition to data engineers, including data scientists. Business analysts also fit into the management mix; they take responsibility for ensuring that data quality and metadata are properly managed to support business objectives.

As organizations begin to shift from data warehouse to data lake architectures, there is some overlap between the people who manage data warehouses and those who manage data lakes. Data still needs to come from multiple sources, it still needs to be governed, and there is the same need for analytics so the data can be used effectively. At the executive level, whether for a data warehouse, data lake or data lakehouse, a chief data officer is often the job role tasked with the top level of responsibility for all data use.

There are a variety of paths toward certifications for those looking to verify their skills for managing data lakes.
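Before turning to those specific certifications, it helps to see how small the core lakehouse workflow can be in practice. The following is a minimal, hypothetical PySpark sketch of the pattern described above: raw objects land in cloud storage, gain table structure through a format such as Delta Lake, and then become queryable through a SQL engine. The bucket paths are placeholders, and the snippet assumes a Spark installation with the Delta Lake package available (for example, supplied to spark-submit via --packages with the io.delta artifact matching your Spark version).

from pyspark.sql import SparkSession

# Standard Delta Lake session configuration (assumes the Delta package is on the classpath).
spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Raw landing zone: JSON events in object storage (placeholder path).
raw = spark.read.json("s3a://example-bucket/landing/events/")

# Add table structure by rewriting the same data as a Delta table.
raw.write.format("delta").mode("overwrite").save("s3a://example-bucket/curated/events")

# Query it like a warehouse table.
spark.read.format("delta").load("s3a://example-bucket/curated/events").createOrReplaceTempView("events")
spark.sql("SELECT count(*) AS event_count FROM events").show()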
A modern data lake or lakehouse deployment often uses cloud resources and tools from a specific vendor to enable data management and data queries. Managing a data lake is not an abstract idea; it's a hands-on effort that can benefit from specific certifications. The leading cloud and data lake vendors all offer some form of training and certification:

- AWS. Amazon Web Services and its S3 cloud object storage service are commonly used to enable data lakes. The AWS certification is geared toward people with experience working with AWS services and validates skills in using AWS data lake and analytics services.
- Google Cloud. This Google certification provides an examination that verifies skills to build, deploy and use data models that benefit from data lake and analytics services running in Google Cloud.
- Microsoft Azure. Microsoft's Azure Data Lake Storage Gen2 is a popular option for building data lakes. With this certification, Microsoft provides validation for those looking to use Microsoft services for data lakes.
- Databricks. This certification helps users learn and validate data lakehouse skills on the Databricks platform, a tool that integrates multiple open source technologies, including Apache Spark and Delta Lake.
- Cloudera. Cloudera is often associated with the open source Hadoop big data technology, one of the originators of the data lake concept. This certification validates the skills required to ingest, transform, store and analyze data in the Cloudera environment.
- Dremio. This certification is designed to help organizations that are updating from a data warehouse to a cloud data lake or data lakehouse model. Dremio, a company that builds data lakehouse technology, has expanded its training options with Dremio University, which provides certificates of completion.
There are a lot of different programming languages in use today. When it comes to the cloud, thanks in part to the strong position of OpenStack, the open source Python language has emerged as one of the most important. OpenStack is written in Python and is used by many leading IT vendors including IBM, HP, Dell and Cisco. But how and why did Python become the language of choice for OpenStack?

To answer that question, Datamation sat down in a video interview with Joshua McKenty, co-founder and CTO of Piston Cloud and one of the key figures at NASA in the original Nebula cloud compute project. McKenty wrote a study for NASA examining what language it should adopt for web applications in general.

"At the time, NASA had three thousand different web apps, written in 18 different languages and frameworks; just from a security standpoint it was a nightmare to maintain," McKenty said. "So I prepared a trade study and did a quantitative and qualitative analysis of six languages and nine frameworks and came up with using Django and Python as the things that were most appropriate for NASA."

Django is a popular open source Python web framework. McKenty noted that the study wasn't about identifying Python as the best language in general. The analysis took into account the regional availability of resources, compatibility with NASA partners, security, and personnel expertise. In building its cloud compute platform, NASA also made the decision to use the most stable version of Python, which at the time was Python 2.6. The Python 2.6 release was announced back in October of 2008.

During the same timeframe, Rackspace had come to the same decision about using Python as the language for its cloud infrastructure technologies. Rackspace's legacy cloud was built on Ruby. OpenStack came together in July of 2010, when NASA and Rackspace joined their respective cloud projects.

Python 2 to 3

While Python 2.x was the most stable version of Python in 2008, by 2013 the Python community had moved on, making the Python 3.x branch its core focus. Moving from Python 2.x to Python 3.x for OpenStack is a non-trivial task. In addition to being on the OpenStack Foundation Board, McKenty is also a member of the Python Software Foundation, as are a number of his fellow OpenStack board members.

"We have insight into where the Python community is trying to go in the transition to Python 3.x, and yet we're keenly aware of the challenges of moving the OpenStack codebase in that direction," McKenty said.

OpenStack has over 1.25 million lines of code, according to McKenty. Transforming existing code to Python 3.x will be a tricky exercise, and one that will take time, which isn't always possible when projects are racing forward with new feature releases.

In McKenty's opinion, new projects should be written in Python 3. When it comes to existing projects, he is a bit cautious. "The work to port existing projects to Python 3 should be undertaken," McKenty said. "But I think it should be timed to happen at the point at which each of the projects is mature enough that the majority of the community can focus on the effort and it's not in conflict with on-going development."

Sean Michael Kerner is a senior editor at Datamation and InternetNews.com. Follow him on Twitter @TechJournalist.
Ben Goertzel: Humanity is "quite close" to developing human-level AI The next step beyond today's generative AI products like OpenAI's ChatGPT and Google's 'Bard' is to create systems that can reason, learn, think, and invent like a human. "We are quite close to the transition to AGI," says Ben Goertzel, a cognitive scientist and developer of "Sophia the Robot," an anthropomorphized robot AI. In an interview with ZDNet, Goertzel said that these new AI systems would be able to handle the same intellectual challenges thrown at a human university student. This system could learn recursively, and analyze its own code and hardware, effectively solving the engineering problem of self-improvement. This, in Goertzel's view, could lead to a rapid escalation of capability from the human-level to superintelligence. With the ability to re-engineer itself, introspect, and experiment with countless copies of itself, this AI could achieve a kind of self-enhancement unattainable for the human brain. For some, this prospect is thrilling, a heralding of a new age in intelligence and innovation. For others, it's a cause for concern, raising ethical and safety questions that are harder to grapple with than the more immediate worries of job obsolescence in fields like journalism or accounting. Nevertheless, this is the goal that many in the AI field are diligently working towards, with the advent of generative AI offering tantalizing hints that we are on the cusp of this transition. Goertzel estimates we may be a mere 3 to 10 years away from this remarkable juncture. Whether this prediction holds true or not, one thing is clear: the journey toward AGI is laden with complex challenges and profound implications for our future.
Quick definition: M2M SIM stands for Machine-to-Machine Subscriber Identity Module. It's the technology that enables devices and sensors (the "things" within the Internet of Things) to communicate with other internet-enabled devices and systems.

M2M SIMs connect IoT applications to cellular networks, where they can use an interchangeable protocol to send and receive data. IoT manufacturers can either insert or embed M2M SIMs in their devices, and companies can remotely collect and analyze usage data from the SIM. M2M SIMs come in the same sizes as traditional SIMs, but unlike traditional SIMs, they have specialized features for IoT applications.

There are a few key concepts to understand about M2M SIMs, including:

- How they compare to traditional SIMs
- Industrial and automotive M2M SIMs
- SIM application toolkits
- M2M SIM form factors
- The benefits of using one

For starters, let's look at how they compare to the SIMs you'll find in cell phones.

Classic SIMs vs. M2M SIMs

M2M SIMs come in more durable forms than you'll find in classic consumer SIMs, which are designed for cell phones. Although these traditional SIMs work in M2M or IoT devices, they're designed for normal, everyday use and may not hold up in harsher operating environments. More importantly, they're built to accommodate how we use cell phones, which isn't how most M2M devices operate. Traditional SIMs come with rigid contract terms, fixed data plans, and high roaming charges. They're also difficult to access and manage remotely.

Both classic consumer and M2M SIMs come in several form factors, including 2FF, 3FF, and 4FF, as well as MFF2 (also known as embedded SIMs). Automotive M2M SIMs come only in the MFF2 "embedded" form factor. At the end of the day, M2M SIMs are designed for use in a broad range of IoT applications, and classic SIMs are designed for use in cell phones.

Industrial and automotive M2M SIMs

Industrial IoT and automotive M2M applications are a lot more demanding than consumer cell phone applications. M2M SIMs have to be more durable, so they're built differently. M2M SIMs are made of materials that enable them to operate in harsh environments with extreme temperatures and vibrations, such as deserts, arctic climates, and moving vehicles. The typical operating temperature range of M2M SIMs is -25°C to 85°C, while more durable variants designed for industrial and automotive use can withstand temperatures ranging from -40°C to 105°C. They also typically have greater storage capacity and longer lifetimes than classic SIMs. These stronger SIMs can be used for automotive applications, remote sensors, and other devices that need to remain operational and long-lasting in harsh environments.

Use cases that require SIM durability beyond that of standard M2M SIMs may have operational needs such as:

- Extended temperature range from -40°C to 105°C (extreme heat or cold scenarios)
- High resistance against corrosion, vibration, and shock, thanks to embedded SIMs
- Longer life expectancy and data retention (10+ years)

The ability to survive harsh environmental conditions, retain data for longer, and last longer overall makes industrial and automotive M2M SIMs a perfect match for their respective IoT use cases. On top of that, the longer lifespan compared to standard SIMs means you can save on the cost of replacement over the long term. You can find a more detailed explanation of SIM form factors, and which SIM works best for your business needs, in this blog post.
What is a SIM application toolkit?

The SIM application toolkit (STK) consists of a set of commands programmed into the SIM which allow you to run applications on the SIM. The STK defines how the SIM interacts with the outside world and initiates commands independently of the device and the network operator. It enables you to control the device's interface and creates an interaction between the network application and the end user.

Why is STK important?

An M2M SIM is only as useful as the STK it is programmed with. EMnify's SIMs have a specially installed applet that enables you to enjoy the best selection of available mobile networks through the SIM's lifespan. EMnify's STK will continuously update the SIM over-the-air to assign the best mobile network operator.

M2M SIM form factors

"Form factors" refer to the SIM card's physical size. M2M SIMs come in all the same form factors as traditional SIMs. For IoT manufacturers, it's important to choose a SIM form factor with the right features for your use case. SIM cards come in three standard forms: 2FF, 3FF, and 4FF, plus an embedded SIM option, MFF2. (MFF2 stands for "M2M form factor.") The type and size you select depends on the size and location of your device. To make the best decision for your business, it's crucial to understand your options. For reference, the standard dimensions are approximately 25 x 15 mm for 2FF (mini-SIM), 15 x 12 mm for 3FF (micro-SIM), 12.3 x 8.8 mm for 4FF (nano-SIM), and 6 x 5 mm for the solder-down MFF2.

But if you're trying to choose the right SIM card for your application, the form factor isn't the only thing you need to think about. The company behind your M2M SIM cards can significantly impact what your devices are ultimately capable of, and how much end users can rely on them.

Benefits of the emnify M2M SIM

emnify's M2M SIMs are at the heart of our IoT connectivity platform. Talk to one of our IoT experts: we'll help you explore the benefits and differences of each SIM option, so you can choose the best one for your business. And for more on this topic, check out our article about the difference between M2M and IoT.
In the age of cyber warfare, being paranoid is the only reasonable attitude, and that means, among other things, being paranoid about software updates. Take for example OpenSSL. This open source cryptography library, which implements the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols, is designed to "secure communications over computer networks against eavesdropping", but guess what: it has been riddled with bugs since its inception. This may be unavoidable, to a certain degree; after all, we are speaking about software. Even so, the inherent flaws of OpenSSL should not be an excuse for not keeping the version you use as bullet-proof as possible. Let's not forget that your car is most likely hackable by a 15 year old, and yet you still (presumably) lock the doors.

While you can't do much about the yet-to-be-identified bugs, you can at least protect your systems from those bugs that have already been patched and widely documented. Too bad the official OpenSSL website offers Linux sources only. While Linux distributions routinely come loaded with OpenSSL, this is not the case for Windows... or shall we say "Windows distributions". (Didn't Microsoft want to "Linuxify" its flagship OS? Never mind.) If you want to run it, you need a Windows binary, and unless you are willing to compile it yourself, you have to rely on someone else. Here is how you can set up OpenSSL on Windows without having to deal with the code.

Step 1. Get hold of the binaries

Finding Windows binaries of OpenSSL is not an easy task, but don't get discouraged. They do exist. To download them, navigate to:

Don't be fooled by the Win32 string in the URL nor by the navigation pointing you to a seemingly ancient download page from way back in 2004 (from the "Products" tab through the "Win32 OpenSSL" link). Scroll down the page to the section "Download Win32 OpenSSL", ignoring the confusing string. Now you need to pick the right binary from the list. For each version, there are two basic types: the full installer and the light installer. Download the one named "Win64 OpenSSL v1.1.0f" (or a higher-numbered version once it becomes available) to get the full installer. The current version as of this writing (OpenSSL 1.1.0h) is very different from previous releases. It is not the same thing at all, so pay attention to the release numbers! The worst thing you can do is use an old version that has documented bugs that anyone could exploit following a howto!

Step 2. Run the installer

We recommend installing OpenSSL outside of your Windows system directory.

Step 3. Start the OpenSSL binary

To invoke OpenSSL, you can simply right-click on it in the Windows Explorer at its install location, then choose "Run as Administrator". It will open a cmd window with the OpenSSL command prompt. Now you are ready to start creating your OpenSSL keys. (Speaking of which: users of the remote access utility PuTTY can export an OpenSSH key from PuTTYgen.)

When using OpenSSL on Windows in this way, you simply omit the openssl command you see at the prompt. For example, to generate your key pair using OpenSSL on Windows, you may enter:

openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem

and follow the onscreen instructions as usual. To review the certificate:

openssl x509 -text -noout -in certificate.pem

and so forth.
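Two more standard OpenSSL invocations often prove useful at this point; they are shown here as a supplementary sketch in the same command style, not as steps from the original walkthrough. To confirm which build you are running:

openssl version

And to check that a private key and a certificate belong together, print the modulus of each and compare the two outputs; if they match, the certificate was issued for that key:

openssl rsa -noout -modulus -in key.pem
openssl x509 -noout -modulus -in certificate.pem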
5 Different Types of ML Techniques, AI, and Technology

While supervised, unsupervised, semi-supervised, and reinforcement learning are the four most common methods that software engineers use to create machine learning algorithms for a wide range of uses and applications, these methods can be implemented using numerous different techniques. Just as issues that arise in real life require different approaches, a software engineer looking to solve a problem with machine learning will reach for different techniques depending on the requirements of the task at hand. To this end, 5 common techniques that can be used to create machine learning models include regression analysis, classification, clustering, decision trees, and neural networks.

Regression analysis in the context of machine learning refers to a supervised learning method that software developers use to generate predictions over a set of numbers or values. Regression analysis functions to establish a relationship between two different variables, effectively estimating how one variable will affect the other. To this end, one common application of regression analysis in machine learning is weather prediction, as fluctuations in temperature that occur within a particular climate over the course of a year represent a continuous value. The simplest form of regression analysis is linear analysis, although logistic, multivariate, lasso, and multiple regression algorithms can also be used.

As the name suggests, classification approaches in machine learning are predicated on categorizing a dataset into different classes. Much like regression analysis, classification models are another common supervised learning approach. When implementing a classification algorithm, a model will be trained on data that has been labeled in accordance with different classes. To illustrate this point further, a classification algorithm could be used to train a machine to recognize the difference between varying forms of documentation, such as training documents, medical forms, and time sheets, just to name a few. This being said, some common applications of classification models in machine learning include facial detection, handwriting recognition, and speech recognition software programs.

On the other hand, clustering refers to an unsupervised machine learning method that works to automatically discover natural patterns or groups that occur within a set of data. As opposed to supervised machine learning tasks that work to predict future outcomes after being fed a certain input of data, clustering models only interpret input data in connection with a particular group of clusters, hence the name. Some common applications of clustering algorithms include market and customer segmentation, medical data imaging analysis, and email spam filtering.

A decision tree algorithm is a supervised machine learning approach that works to categorize a group of objects according to questions regarding their qualities at corresponding nodal points. More specifically, the leaves within a decision tree represent the class labels for a particular dataset, while the branches of the tree represent the logical conjunctions of features that have informed these class labels.
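To make the decision tree idea concrete, here is a minimal Python sketch using the scikit-learn library and its bundled iris dataset; the shallow tree learns question-like splits on the flower measurements, and its leaves carry the class labels, just as described above. The dataset and the depth limit are illustrative choices, not a prescription from the original article.

# Minimal decision tree sketch with scikit-learn's bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth=3 keeps the tree small enough to read; each leaf holds a class label.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(export_text(tree))                        # the branch "questions", printed as text
print("Accuracy:", tree.score(X_test, y_test))  # accuracy on held-out samples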
With all this being said, some common applications of decision tree algorithms in the business world include fraud detection, customer relationship management, marketing and advertising, predictive pricing, and medical diagnosis, among a host of others.

An artificial neural network (ANN) is a computational model that mimics the structure and functions of the human brain. Just as the human brain is composed of millions of individual neurons that work together to enable human beings to speak, dream, reason, and so on, the individual processors within an ANN are arranged into sets of layers or tiers that comprise a single cognitive structure. Unlike other machine and deep learning algorithms, ANNs are adaptive, and will gradually modify themselves after being trained on an initial dataset. Subsequently, some common applications of artificial neural networks in the world of business include stock market prediction, drug development and discovery, translation and language generation, and natural language processing (NLP) tasks.

As machine learning is a broad field that can be approached in numerous different ways, the techniques that a software engineer or developer can utilize when looking to create a new model are expansive. Nevertheless, 5 techniques that are extremely prevalent at the current moment include regression analysis, classification, clustering, decision trees, and neural networks. Using these techniques and approaches, software engineers have been able to create everything from customer service chatbots to medical imaging software that can be used to save the lives of people all over the globe.
Do you have issues with your VoIP or SIP calls? If so, the culprit could be SIP ALG. SIP ALG, or Session Initiation Protocol Application Layer Gateway, is a feature on many routers and firewalls that is intended to allow VoIP traffic to pass through. However, SIP ALG often causes more problems than it solves when it comes to SIP-based calling.

In this article, we will cover what exactly SIP ALG is, how it interacts with SIP, the reasons why you should disable SIP ALG, how to disable it on your router, and other ways to improve SIP calling quality. The goal is to provide a comprehensive understanding of SIP ALG, why it can cause issues, and how to disable it for a better VoIP experience. With the right troubleshooting, SIP ALG can be disabled and SIP call quality can be dramatically improved.

What is SIP ALG?

Session Initiation Protocol (SIP) is a signaling protocol used for initiating, managing, and terminating real-time communication sessions like voice and video calls over the Internet. Some examples of applications that use SIP include Voice over IP (VoIP) phone systems, video conferencing solutions, and instant messaging.

ALG stands for Application Layer Gateway. ALG is a feature that exists on many routers and firewalls to allow specialized applications to pass through the network security devices. An ALG inspects application layer data in packets, performs address and port mappings if necessary, and allows the packets to flow through.

SIP ALG, or Session Initiation Protocol Application Layer Gateway, is a specific type of ALG designed to allow VoIP traffic using SIP to pass through routers and firewalls. The goal of SIP ALG is to identify SIP packets, understand the SIP protocol, and dynamically configure firewall pinholes and NAT mappings so that SIP traffic flows correctly. In theory, SIP ALG is intended to make VoIP applications work seamlessly across NAT and firewall devices. However, in practice, SIP ALG often causes more problems than it solves. The reasons for this will be covered next.

How do SIP and ALG Interact?

To understand the issues with SIP ALG, it helps to know how SIP and ALG interact. Here is a brief overview:

- SIP devices communicate using IP addresses in the SIP signaling messages. These could be private LAN IP addresses.
- The ALG inspects the SIP packets and modifies the IP addresses to match the public NAT-mapped addresses.
- The ALG also opens up dynamic pinholes in the firewall to allow the VoIP RTP media traffic to flow through.
- Ideally, this allows the SIP devices to communicate correctly across NAT and firewalls.

However, there are a few problems:

- The SIP protocol has a lot of variations and custom implementations across different hardware and software. The ALG logic makes assumptions that are not always true.
- The dynamic pinholes opened up cannot match all the possible RTP media port combinations, blocking some media.
- Encrypted SIP traffic cannot be inspected and modified by the ALG, breaking such calls.
- A large amount of SIP traffic can overload the ALG, causing delays or drops.

In essence, the SIP ALG tries to make SIP work seamlessly across NAT, but it lacks a full understanding of the complex SIP protocol. The next section covers why you should disable SIP ALG to avoid these issues.

Why Should ALG be Disabled?

There are many reasons why disabling SIP ALG leads to better performance and compatibility with SIP-based voice and video calls:
1. Incorrect NAT IP modification. ALG tries to modify the SIP packets with the public NAT IP address and ports. However, the ALG does not always have enough context to perform the right IP address translations. This can break SIP communication.

2. Incomplete media port access. ALG opens up RTP media ports dynamically, but it cannot match all the RTP port combinations used by the endpoints. This leads to one-way audio or blocked media streams.

3. Encrypted traffic breakage. Encrypted SIP and media packets cannot be inspected or modified by the SIP ALG, causing the failure of encrypted calls.

4. High availability issues. If a high availability failover occurs, the existing ALG session state is lost and calls will drop. The ALG cannot synchronize state between firewalls.

5. Overloaded ALG performance issues. A large volume of SIP calls can overload the SIP ALG, leading to delayed or dropped packets, which degrades call quality.

6. SIP variation and customization incompatibility. The hundreds of SIP implementations and customizations cannot all be accounted for and handled properly by the SIP ALG logic.

7. Advanced SIP feature incompatibility. Advanced SIP features and headers are not understood by the ALG, breaking SIP calls that use custom SIP extensions.

8. Security vulnerabilities. ALG code contains vulnerabilities that can be exploited for security attacks. Keeping ALG disabled reduces the attack surface area.

For these reasons, SIP ALG does more harm than good in most deployments. The best practice is to disable SIP ALG and use alternatives like static pinholes or SIP-aware firewalls for better VoIP performance and compatibility.

How to Disable SIP ALG on Your Router?

The way to disable SIP ALG depends on the specific router's make and model. Here are some general guidelines and methods to disable SIP ALG on popular home and office routers:

1. Access the router admin interface. First, access your router's admin interface using its IP address (usually 192.168.1.1 or 192.168.0.1). You will need the admin login username and password.

2. Locate the SIP ALG settings. For popular routers like Netgear, Linksys, TP-Link, Cisco, etc., there should be a setting under Advanced Settings, Firewall Settings, VoIP Settings, or similar to disable SIP ALG.

3. Disable SIP ALG. Set the SIP ALG toggle option to disabled, or uncheck the enable option. The setting names vary across brands: SIP ALG, SIP Fixup, VoIP ALG, NAT Helper, etc.

4. Reboot the router. For the changes to fully take effect, reboot your router after disabling SIP ALG.

5. Check for a SIP ALG option. If you cannot find the SIP ALG toggle in your router admin interface, your router model may not have SIP ALG functionality in the first place.

Disabling SIP ALG will allow SIP traffic to flow naturally without any interference from the router and resolve many SIP-related issues.

Other Ways to Enhance SIP Calling

Apart from disabling SIP ALG, here are some other tips to enhance the quality and compatibility of your SIP-based calling:

1. Use a business-class SIP-aware firewall. Enterprise-grade firewalls can handle SIP traffic properly using SIP awareness instead of SIP ALG. This maintains security without breaking SIP.

2. Open static SIP and RTP ports. Open the required ports in your firewall statically for SIP and RTP traffic, without relying on SIP ALG for dynamic openings.

3. Enable STUN for NAT traversal. Use STUN servers to allow SIP clients to discover their public NAT IP addresses and ports for traffic punching.
4. Deploy a local SIP server. Have your SIP server, such as Asterisk, located on the LAN instead of the public Internet to avoid NAT issues.

5. Use SIP over TLS encryption. Encrypt your SIP signaling to avoid any inspection and breakage of calls due to ALG or firewall restrictions.

6. Isolate voice and data traffic. Separating voice and data on different physical interfaces and VLANs prevents unwanted interference.

With the right network design and security policies, SIP calls can work reliably without needing the problematic SIP ALG functionality.

SIP ALG is a common feature on many routers and firewalls that often causes problems with SIP-based voice and video calls. Due to incompatible NAT handling, incomplete media access, encryption breakage, performance issues, and lack of SIP protocol support, it is recommended to disable SIP ALG. Use SIP-aware firewalls, static pinholes, STUN, local SIP servers, and other techniques instead for the best quality. With SIP ALG disabled and proper network design, your SIP calling experience will be smooth, reliable, and high quality.

Frequently Asked Questions (FAQ)

Ques 1. Why is SIP ALG bad?
Ans. SIP ALG often breaks calls due to incorrect NAT translation, incomplete media access, encrypted call failures, overload issues, SIP incompatibility, and security vulnerabilities.

Ques 2. Should I disable SIP ALG?
Ans. Yes, disabling SIP ALG is recommended for better SIP call quality and compatibility. Use SIP-aware firewalls and other techniques instead.

Ques 3. Where is the SIP ALG setting on my router?
Ans. It is usually under Advanced Settings, Firewall, VoIP, or similar sections, named SIP ALG, SIP Fixup, VoIP ALG, NAT Helper, etc., depending on the router brand.

Ques 4. What if my router does not have SIP ALG?
Ans. If you cannot find a SIP ALG setting, your router model likely does not have SIP ALG functionality, so it is already disabled by default.
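For readers who want to check whether something in the network path is rewriting their SIP traffic, the following minimal Python sketch sends a bare SIP OPTIONS request over UDP and prints the raw reply. The server address and the local IP are hypothetical placeholders, and this is a rough diagnostic aid rather than a tool from the article: if the Via or Contact headers in the reply come back with addresses you did not send, a device along the path (often a SIP ALG) is modifying your packets.

import socket
import uuid

SERVER = ("sip.example.com", 5060)   # hypothetical SIP server or trunk
LOCAL_IP = "192.168.1.50"            # hypothetical private LAN address of this machine
LOCAL_PORT = 5070

call_id = uuid.uuid4().hex
branch = "z9hG4bK" + uuid.uuid4().hex[:12]   # RFC 3261 branch with its magic-cookie prefix

request = (
    f"OPTIONS sip:{SERVER[0]} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {LOCAL_IP}:{LOCAL_PORT};branch={branch}\r\n"
    "Max-Forwards: 70\r\n"
    f"From: <sip:test@{LOCAL_IP}>;tag={uuid.uuid4().hex[:8]}\r\n"
    f"To: <sip:{SERVER[0]}>\r\n"
    f"Call-ID: {call_id}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(5)
sock.sendto(request.encode("ascii"), SERVER)

try:
    reply, _ = sock.recvfrom(4096)
    print(reply.decode(errors="replace"))   # inspect Via/Contact for rewritten addresses
except socket.timeout:
    print("No reply: the request may have been dropped or mangled in transit.")
finally:
    sock.close()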
Researchers are increasingly using AI to assist people who have lost a limb to gain enhanced control over their prosthetic. AI's role in prosthetics is nothing short of revolutionary, says Neil Sahota, an IBM master inventor and United Nations artificial intelligence advisor. "By analyzing vast amounts of data, AI can tailor prosthetic devices to the unique needs of each individual, ensuring a perfect fit and unparalleled functionality," he observes in an email interview. "This precision engineering allows users to move with a natural grace, reclaiming the fluidity of movement that once seemed lost." AI-powered prosthetics can learn and adapt. "Through machine learning algorithms, these devices can anticipate a user's movements and adjust accordingly," Sahota says. Imagine a prosthetic hand, for instance, that can sense the delicate nature of holding a child's hand or the firm grip needed for a handshake. "This level of responsiveness doesn’t just restore function; it restores dignity and independence." Prosthetics encompass a wide range of needs and technologies, says Blair Lock, CEO and co-founder of hand and arm prosthetics developer Coapt, via email. He notes that some prostheses, such as prosthetic arms, cochlear implants, and nerve stimulators, are technically complex. Yet many other prosthetics, such as cosmetic ears and knee replacements, require little or no electro-mechanical complexity. "Going forward, however, all prosthetics may be influenced in a greater or lesser way by AI," Lock says. Perhaps most important is AI's role in creating and refining brain-machine interfaces, which connect a prosthesis directly to its user's nervous system. "These interfaces can interpret neural signals and translate them into prosthetic movements closely mimicking natural limb functions," says Jonas Torrang, co-founder of IsBrave.com, a website dedicated to empowering prosthetics users. "For example, optical connections in development could potentially allow prosthetic hands to function with the same dexterity as biological hands by directly interfacing with the brain," he notes in an online interview. A notable example of an advanced AI-supported prosthetic is the LUKE Arm, developed by DEKA Research and Development. "This prosthetic arm integrates AI to provide a range of complex movements, allowing users to perform tasks with a level of dexterity and precision previously unattainable," Sahota says. The LUKE Arm can sense and adapt to the user's intentions, making it a potentially groundbreaking innovation in upper limb prosthetics. Similarly, the Össur Proprio Foot uses AI to adjust its position and stiffness in real time, based on the user's walking patterns and terrain. "This smart adaptation provides users with a more natural gait, reducing the effort required to walk and improves overall mobility," Sahota says. "Such advancements exemplify how AI can create prosthetics that closely mimic natural limb function." AI prosthetics technology is advancing on several fronts. Researchers at the UK's University of Southampton and Switzerland's EPFL University have, for instance, developed a sensor that allows prosthetic limbs to sense wetness and temperature changes. "This capability helps users adjust their grip on slippery objects, such as wet glasses, enhancing manual dexterity and making the prosthetic feel more like a natural part of their body," Torrang says. Multi-texture surface recognition is another area of important research. 
Advanced AI algorithms, such as neural networks, can be used to process data from liquid metal sensors embedded in prosthetic hands. "These sensors can distinguish between different textures, enabling users to feel various surfaces," Torrang says. "For example, researchers have developed a system that can accurately detect and differentiate between ten different textures, helping users perform tasks that require precise touch."

Natural sensory feedback research is also attracting attention. AI can be used to provide natural sensory feedback through biomimetic stimulation, which mimics the natural signals of the nervous system. "This approach has been shown to improve the functionality of prosthetic limbs by allowing users to perform complex tasks more efficiently and with fewer mistakes," Torrang states. "By using AI to translate natural signals, prosthetics can offer a more intuitive and less mentally taxing experience."

On the downside, despite a great deal of promising research, AI-driven prosthetics researchers continue to face significant challenges in getting their systems to accurately interpret neural signals. "Interfaces must precisely decode complex neural inputs to ensure that the prosthetic responds correctly to user intentions," Torrang says. Signal noise and variability can hinder performance, making it difficult to achieve reliable control. "Inconsistencies in neural signals can lead to delays or errors in movement, impacting the prosthetic's usability."

Torrang believes that AI prosthetics research will advance rapidly over the next few years, with neural interfaces getting smarter, allowing prosthetics to interpret brain signals more accurately. The goal is creating prosthetics that function and feel close to natural limbs. "Advanced sensors will let users feel textures and temperatures, improving their interaction with the environment," he says. "Personalized AI will tailor prosthetics to individual needs, enhancing comfort and usability."
In the complex geopolitical landscape of the Balkans, Serbia’s history and current trajectory offer a profound lens through which to understand the dynamics of Western imperialism and the quest for national sovereignty. The country’s tumultuous relationship with the West, particularly the United States, has been marked by resistance, conflict, and a determination to maintain a degree of autonomy despite external pressures. The roots of Serbia’s strained relations with the West can be traced back to the 1990s, a decade that witnessed the disintegration of the Socialist Federal Republic of Yugoslavia (SFRY). As the Yugoslav state crumbled, Serbia, under the leadership of President Slobodan Milošević, found itself at the epicenter of the ensuing chaos. Milošević’s refusal to fully liberalize Serbia’s economy, coupled with his efforts to maintain a semblance of Yugoslav unity, attracted the ire of Western leaders who were eager to see the complete dissolution of the socialist state. The Western narrative quickly cast Milošević as a villain, responsible for the ethnic conflicts that erupted across the former Yugoslavia. This characterization was solidified by the controversial prosecution of Milošević on war crimes charges, a move that many in Serbia viewed as an extension of Western interference in the region. The most significant manifestation of this interference came in 1999 when NATO launched a bombing campaign against Serbia, ostensibly to stop the ethnic cleansing of Albanians in Kosovo. The bombing, which targeted the Serbian capital of Belgrade, resulted in the deaths of hundreds and left a lasting legacy of destruction, including the widespread contamination of the land with depleted uranium. The long-term health consequences of this bombing, including elevated rates of cancer and birth defects, continue to haunt the Serbian population. In the years following the NATO bombing, Serbia has remained wary of Western intentions, maintaining a healthy skepticism of what it perceives as a creeping Western hegemony. This skepticism has been reinforced by the country’s continued efforts to navigate a path independent of the influence of larger, more powerful neighbors. Chinese President Xi Jinping, during a diplomatic tour through Europe earlier this year, acknowledged Serbia’s relative autonomy, highlighting the country’s unique position in the region. Despite these efforts, Serbia’s sovereignty remains fragile, a reality that was underscored by the recent protests over the Serbian government’s signing of a deal with a German conglomerate to exploit the country’s lithium reserves. These protests, which many feared could be hijacked by foreign interests to destabilize the government, were met with caution by Serbian authorities. Russia, a key ally of Serbia, took the unprecedented step of warning Belgrade about the potential for foreign services to exploit the protests for regime change, a strategy that has been employed in several other countries in the region. The concept of “color revolutions,” a term used to describe Western-backed movements aimed at overthrowing governments deemed unfriendly to US interests, looms large in the minds of many Serbians. The 2000 “Bulldozer Revolution,” which resulted in the ousting of Milošević, is a particularly poignant example of how such movements can be used to effect regime change. 
While the revolution was ostensibly a popular uprising against a corrupt and authoritarian regime, many in Serbia believe that it was heavily influenced, if not outright orchestrated, by Western powers. This belief has fostered a deep suspicion of any mass protest movements, no matter how legitimate they may appear. The legacy of the 2000 revolution has had a lasting impact on Serbian politics, with the government remaining on high alert for any signs of foreign interference. This was evident during the recent protests over the lithium deal, where the government took a measured approach, balancing the need to address the legitimate concerns of the protesters with the desire to prevent the situation from being exploited by external forces. The fear of foreign-backed regime change is not unfounded. Central and Eastern Europe have been rocked by several so-called “color revolutions” since the 1990s, during which Western-backed political forces have succeeded in overthrowing governments deemed to be working against US interests. Among the most notable examples are Ukraine’s 2014 Euromaidan coup, Georgia’s 2003 “Rose Revolution,” and Ukraine’s 2004 “Orange Revolution.” These events have followed a similar playbook, utilizing non-violent protest tactics developed by the American political scientist Gene Sharp. Sharp’s strategies, which involve baiting security forces into overreacting to achieve a propaganda victory, have proven highly effective in creating the conditions for regime change. The role of the United States in these revolutions has been well documented, with Washington providing training and resources to a small cadre of dedicated activists and influencers. These individuals, often funded by wealthy donors like George Soros, work to destabilize governments by manipulating societal tensions and the media. The ultimate goal is to implement neoliberal policies that benefit Western interests, often at the expense of the local population. In Serbia, the memory of the 2000 revolution has made the population wary of any attempts to stage another color revolution. This skepticism was evident during the recent protests, where the government was quick to point out the involvement of pro-Western, Soros-funded activists. These activists, the government argued, were trying to hijack the protests to push a neoliberal agenda that would benefit foreign interests rather than the Serbian people. The fragmented nature of the Serbian opposition has also played a role in preventing the success of another color revolution. Unlike in 2000, when the opposition was united in its desire to oust Milošević, today’s opposition is divided and lacks a coherent strategy. Many opposition figures are seen as being too closely aligned with Western interests, which has eroded their support among the general population. As a result, the opposition has struggled to gain traction, with many Serbians viewing them as more interested in appeasing the West than in addressing the country’s domestic concerns. However, this does not mean that the threat of a color revolution has entirely dissipated. The political landscape in Serbia remains volatile, and there is always the possibility that external forces could exploit domestic unrest to push for regime change. The recent protests over the lithium deal are a case in point. While the government was able to manage the situation, the underlying tensions that sparked the protests have not been fully resolved. 
As long as these tensions persist, there is a risk that they could be exploited by those seeking to destabilize the country. The situation in Serbia is further complicated by its delicate position within the broader geopolitical landscape of Central and Eastern Europe. The country is surrounded by EU and NATO member states, many of whom view Serbia’s close ties with Russia and China with suspicion. This has placed Serbia in a difficult position, as it tries to balance its relationships with these global powers while maintaining its sovereignty. The relationship between Serbia and the West remains a complex and often contentious one. While the country has made some moves towards integration with the European Union, it has been careful not to fully align itself with Western policies. This cautious approach is partly due to the lingering resentment over the 1999 NATO bombing and the 2000 color revolution, as well as a desire to maintain a degree of autonomy in its foreign policy. At the same time, Serbia has sought to strengthen its ties with Russia and China, both of whom have provided significant economic and political support to the country. Russia, in particular, has been a key ally, offering diplomatic backing in international forums and providing military assistance. China, on the other hand, has invested heavily in Serbia’s infrastructure, as part of its Belt and Road Initiative. These relationships have allowed Serbia to maintain a degree of independence from Western influence, but they have also drawn criticism from some quarters, particularly within the EU. The EU’s relationship with Serbia has been marked by both cooperation and tension. On the one hand, the EU has provided substantial financial assistance to Serbia, helping to support its economic development and democratic reforms. On the other hand, the EU has been critical of Serbia’s close ties with Russia and China, and has pressured the country to align more closely with EU policies. This has created a difficult balancing act for Serbia, as it tries to navigate its relationships with both the EU and its traditional allies. One of the key areas of tension between Serbia and the EU has been the issue of Kosovo. The EU has been a strong advocate for the independence of Kosovo, which declared itself independent from Serbia in 2008. Serbia, however, has refused to recognize Kosovo’s independence, and this has been a major sticking point in its relationship with the EU. The issue of Kosovo remains a deeply emotional one in Serbia, and any attempt to force the country to recognize Kosovo’s independence is likely to be met with strong resistance. The issue of Kosovo has also been a major factor in Serbia’s relationship with Russia. Russia has been a staunch supporter of Serbia’s position on Kosovo, and has used its veto power in the United Nations Security Council to block any attempts to grant Kosovo full international recognition. This has further strengthened the ties between Serbia and Russia, and has made it difficult for Serbia to fully align itself with the EU. Despite these challenges, Serbia has continued to pursue its goal of joining the EU, while also maintaining its relationships with Russia and China. This has required a careful balancing act, as Serbia tries to navigate the often conflicting demands of its various partners. The recent protests over the lithium deal are a reminder of the delicate nature of this balancing act, and the potential for external forces to exploit domestic tensions to push for regime change. 
In conclusion, Serbia’s history and current geopolitical position offer a unique perspective on the challenges of maintaining national sovereignty in the face of external pressures. The country’s experience with Western intervention, from the NATO bombing in 1999 to the 2000 color revolution, has left a deep imprint on its political landscape. While Serbia has managed to maintain a degree of autonomy, it remains vulnerable to the influence of larger powers, both in the West and in the East. The recent protests over the lithium deal are a reminder of the ongoing challenges Serbia faces as it navigates its path through a complex and often hostile international environment. As Serbia continues to chart its course, it will need to remain vigilant against the threat of external interference, while also working to address the underlying domestic issues that make it vulnerable to such interference.

APPENDIX 1 – The Devastating Impact of the 1999 NATO Bombing on Belgrade: A Detailed Examination of Destruction and Long-Term Consequences

The NATO bombing of Serbia in 1999, often referred to as the Kosovo War air campaign, represents a significant chapter in modern European history, particularly for the Serbian capital, Belgrade. This military intervention, officially aimed at compelling Yugoslav forces to withdraw from Kosovo, led to a severe humanitarian crisis and inflicted extensive damage on civilian infrastructure. Among the most tragic aspects of this campaign was the extensive use of munitions containing depleted uranium (DU), which has left a lasting legacy of destruction, health issues, and environmental contamination.

The Immediate Impact: Death and Destruction

Between March 24 and June 10, 1999, NATO forces conducted a relentless bombing campaign over Serbia, with Belgrade bearing the brunt of these attacks. The capital, a city with a population of over 1.5 million people at the time, found itself under constant siege from the air. This 78-day operation resulted in the deaths of hundreds of civilians. Official reports and independent estimates vary, but it is widely accepted that the campaign caused approximately 500 civilian deaths in Belgrade alone, with thousands more injured across the country.

The bombings targeted key infrastructure, including bridges, government buildings, factories, and power plants. One of the most notorious incidents was the bombing of the Radio Television of Serbia (RTS) building on April 23, 1999. This attack, which NATO justified by claiming the broadcaster was a propaganda tool for the Yugoslav government, resulted in the deaths of 16 civilian employees. The destruction of civilian infrastructure not only caused immediate loss of life but also disrupted essential services, leading to long-term hardships for the city’s residents.

The widespread destruction of bridges over the Danube River, such as the Varadin Bridge in Novi Sad, had a ripple effect on the daily lives of civilians. The loss of these bridges severed key transportation links, leading to economic isolation and making it difficult for people to access essential goods and services. Power plants and electrical grids were also targeted, plunging the city into darkness and severely disrupting daily life. Hospitals, already overwhelmed by the casualties of war, struggled to function without reliable electricity, further exacerbating the humanitarian crisis.
Depleted Uranium: A Legacy of Contamination

One of the most controversial aspects of the NATO bombing campaign was the use of munitions containing depleted uranium. Depleted uranium, a dense byproduct of the uranium enrichment process, was used in armor-piercing projectiles due to its ability to penetrate heavy armor. However, its use in populated areas has had severe and long-lasting consequences. When a depleted uranium projectile strikes a target, it generates intense heat, which causes the uranium to combust and create fine particles of uranium oxide dust. These particles can be inhaled or ingested by humans, and they can also contaminate soil and water sources. The presence of DU in the environment poses significant health risks due to its chemical toxicity and low-level radioactivity.

Studies conducted in the years following the bombing have indicated a troubling increase in cancer rates in areas affected by DU contamination. In particular, there has been a marked rise in the incidence of leukemia and other cancers among both children and adults in Belgrade. For instance, research published in the European Journal of Oncology highlighted a significant spike in cancer cases in Serbia after 1999, with some studies suggesting a nearly threefold increase in certain types of cancers.

In addition to cancer, DU exposure is linked to a range of other health issues, including kidney damage, respiratory problems, and birth defects. The latter is particularly concerning, as children born in the years following the bombing have shown higher rates of congenital anomalies, such as neural tube defects and heart malformations. These health effects are not only tragic on an individual level but also place a substantial burden on the healthcare system, which must contend with the long-term consequences of the contamination.

Environmental and Ecological Impact

The environmental impact of DU contamination extends beyond human health, affecting the broader ecosystem. Soil and water samples collected in the aftermath of the bombing have shown elevated levels of uranium, which poses a threat to agriculture and water supplies. Contaminated soil can lead to the uptake of uranium by plants, which then enters the food chain, affecting livestock and, ultimately, humans. This bioaccumulation of uranium can have serious implications for food safety and public health.

The contamination of water sources is particularly concerning, as uranium can leach into groundwater, potentially affecting drinking water supplies for years to come. Studies have found elevated levels of uranium in groundwater samples from regions near Belgrade, raising concerns about the long-term safety of water supplies. The persistence of uranium in the environment means that the contamination is not easily mitigated, and the risks may continue for decades.

The ecological impact is also significant, with potential harm to wildlife and biodiversity. Animals that ingest contaminated soil or water are at risk of uranium poisoning, which can lead to reproductive issues, developmental abnormalities, and increased mortality rates. The broader impact on ecosystems is difficult to quantify, but the potential for long-term ecological damage is substantial.

NATO’s Responsibility and Legal Implications

The use of depleted uranium by NATO forces during the 1999 bombing campaign has sparked significant legal and ethical debates.
International humanitarian law, particularly the principles of distinction and proportionality, requires that parties to a conflict distinguish between combatants and civilians and that the use of force be proportionate to the military objective. The targeting of civilian infrastructure and the use of DU in populated areas have led to accusations that NATO violated these principles. Several legal challenges have been brought against NATO and its member states in the years since the bombing, with plaintiffs arguing that the use of DU constituted a war crime. While international courts have yet to rule definitively on these cases, the ongoing legal proceedings underscore the contentious nature of NATO’s actions and the lasting impact of the bombing on the people of Belgrade.

In addition to legal challenges, there has been a growing call for NATO and its member states to take responsibility for the environmental and health consequences of the bombing. Advocacy groups and environmental organizations have called for comprehensive cleanup efforts to remove DU contamination and for compensation to be provided to those affected by the bombing. However, progress on these fronts has been slow, with affected communities continuing to bear the burden of the contamination.

The Human Cost: Personal Stories and Long-Term Suffering

The bombing of Belgrade left deep psychological scars on its residents, many of whom continue to grapple with the trauma of those 78 days. Personal stories of loss and survival illustrate the profound human cost of the conflict. Families who lost loved ones in the bombing, individuals who have suffered from DU-related illnesses, and those who witnessed the destruction of their city all share a common narrative of enduring pain and resilience. For many, the bombing was not just a military campaign but a direct assault on their homes, their families, and their way of life. The long-term health effects, particularly the rise in cancer and birth defects, have only compounded the suffering, creating a legacy of pain that persists to this day.

Conclusion: A Lasting Legacy of Destruction

The 1999 NATO bombing of Belgrade was a catastrophic event that left a lasting legacy of destruction and suffering. The immediate impact of the bombing, in terms of death and physical destruction, was compounded by the long-term health and environmental consequences of depleted uranium contamination. The people of Belgrade continue to live with the aftermath of this conflict, as the effects of DU exposure manifest in rising cancer rates, birth defects, and ecological damage. This chapter in Serbian history serves as a stark reminder of the devastating impact of modern warfare on civilian populations and the environment. As the international community grapples with the legacy of the bombing, the need for accountability, remediation, and justice for the affected communities remains as pressing as ever. The bombing of Belgrade is not just a historical event but an ongoing crisis that continues to affect the lives of those who lived through it and the generations that follow.
Routers' Memory Types

Router memory comes in many forms, serving several storage purposes: the operating system, the configuration, the bootstrap code, packets in transit, and so forth. Below is a description of each memory type used in routers and the purpose of each.

BootROM provides permanent storage for the startup diagnostic code (the ROM Monitor). Its main tasks are to run hardware diagnostics during router bootup, the Power-On Self Test (POST), and to load the Cisco IOS software from flash into RAM.

Flash memory is non-volatile storage that does not lose its contents when the power is turned off. Flash appears in several forms in routers: internal flash, external flash cards, or even USB flash drives. Flash provides permanent storage for a full Cisco IOS software image in compressed form. In Juniper routers, flash stores the JUNOS image and the configuration files. Juniper routers also have an internal hard drive that stores a backup copy of the JUNOS image, log files, and any other required files.

RAM is volatile storage: its contents are lost when the power is turned off. RAM stands for random access memory, where "random" means that stored data can be accessed in any order. RAM can be SRAM or DRAM; DRAM is used in most applications because of its simpler structure and lower cost. At run time, RAM holds the executable operating system code and its subsystems, routing tables, caches, the running configuration, packets, and so forth.

NVRAM stands for non-volatile random access memory and describes any type of RAM whose contents are not lost when the power is turned off. In Cisco routers, NVRAM provides writable permanent storage for the startup configuration.

That's all for this post. Hope I have been informative.
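A quick practical postscript: on a Cisco IOS device you can inspect each of these memory regions from the CLI. The sketch below uses the netmiko library to run the relevant show commands; the address and credentials are placeholders for a lab device, and exact output varies by platform and IOS version.

```python
from netmiko import ConnectHandler

# Placeholder connection details for a lab router.
router = ConnectHandler(
    device_type="cisco_ios",
    host="192.0.2.1",
    username="admin",
    password="lab-password",
)

# "show version" reports DRAM and NVRAM sizes plus the config register;
# the dir commands list the contents of flash and NVRAM respectively.
for command in ("show version", "dir flash:", "dir nvram:"):
    print(f"### {command}")
    print(router.send_command(command))

router.disconnect()
```

Running the same three commands by hand at the router prompt works just as well; the script simply makes it repeatable across a fleet.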
Everything You Need to Know About Device Fingerprinting

Device fingerprinting is a powerful tool for preventing fraud and securing online transactions. But what exactly is device fingerprinting? Here, we will explain device fingerprinting and how it works to prevent fraud.

What is Device Fingerprinting?

Device fingerprinting is the process of collecting information about a device's hardware and software configuration in order to uniquely identify it. This information can include details such as the device's operating system, browser version, installed fonts and plugins, screen resolution, time zone, and other characteristics that can be used to differentiate it from other devices.

Device fingerprinting works by collecting this information through various methods, such as web cookies, browser fingerprinting scripts, and other tracking techniques. This data is then analyzed and compared to a database of known device fingerprints to determine whether the device has been seen before or is a new device.

What is Browser Fingerprinting?

Browser fingerprinting is a type of device fingerprinting that focuses specifically on collecting information about a web browser and its settings in order to create a unique identifier for a user's device. It works by collecting various pieces of data, such as the user agent string, screen size, installed fonts, and browser extensions, among other things. This data creates a unique "fingerprint" for the device, which can be used to identify it across the internet, even if the user clears their cookies or uses a different IP address.

Browser fingerprinting is a subset of the overall process of collecting device-specific data: while device fingerprinting can draw on a wide range of data points, including hardware and software settings, browser fingerprinting focuses specifically on the user's web browser and its configuration.

What is Mobile Device Fingerprinting?

Mobile device fingerprinting is a type of device fingerprinting that focuses on collecting data about mobile devices, such as smartphones and tablets, in order to create a unique identifier for each device. The process is similar to browser fingerprinting, but it targets the unique characteristics of mobile hardware and software.

Mobile device fingerprinting can collect data points such as the device's operating system, screen resolution, battery level, GPS location, installed apps, and other hardware and software details. When combined, these data points create a unique ID for each mobile device, which can be used to track the device across different apps and websites.

Mobile device fingerprinting is used for a variety of purposes, including marketing, analytics, and fraud prevention:

- Advertisers and app developers use this information to build profiles of users' interests and behaviors, and target them with more relevant ads and content.
- Analytics providers use mobile device fingerprints to track user behavior and improve the user experience.
- Fraud prevention companies use mobile device fingerprints to identify and prevent fraudulent activity, such as multiple accounts being created from the same device.
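To make the mechanics above concrete, here is a minimal, illustrative Python sketch of how a collector might derive a stable identifier from attributes like those described. The attribute names and the exact hashing scheme are assumptions for illustration, not any particular vendor's algorithm.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from collected device attributes.

    Serializing with sorted keys makes the hash deterministic, so the
    same attribute set always yields the same fingerprint.
    """
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes of the kind described above.
fp = device_fingerprint({
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "timezone": "UTC-5",
    "fonts": ["Arial", "Calibri", "Segoe UI"],
    "plugins": ["pdf-viewer"],
})
print(fp)  # compare against a database of known fingerprints
```

Real systems rarely exact-hash attributes this way; they weight and fuzzily match dozens of signals, since a single changed attribute (say, a browser update) would otherwise produce an entirely new ID.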
What is a Device Fingerprint Tracker?

A device fingerprint tracker is a software tool or service that collects and analyzes data from a device's hardware and software configuration in order to create a unique identifier, or "fingerprint," for the device. Advertisers, analytics companies, and other businesses that want to gather information about users' browsing habits and preferences frequently use device fingerprint trackers. By tracking a device's fingerprint across different websites and applications, these companies can build a profile of the user's interests and behaviors, and target them with more relevant ads and content.

Device fingerprint trackers can collect a wide range of data points, including the device's operating system, browser type and version, screen resolution, installed fonts and plugins, time zone, and other characteristics that can be used to differentiate it from other devices. This information is often collected through browser cookies, browser fingerprinting scripts, and other tracking techniques.

Who Uses Device Fingerprinting?

Device fingerprinting is used in a variety of industries and sectors, including:

- Online advertising: Device fingerprinting is often used by online advertisers to track users across different websites and create profiles that can be used to deliver targeted advertisements.
- E-commerce: E-commerce companies may use device fingerprinting to prevent fraud and detect suspicious activity on their platforms.
- Banking and finance: Banks and financial institutions may use device fingerprinting to prevent fraud and ensure the security of their online platforms and services.
- Cybersecurity: Cybersecurity platforms can use device fingerprinting to find and follow potential threats and intrusions.
- Government and law enforcement: Government agencies and law enforcement organizations may use device fingerprinting to track criminal activity and identify suspects.
- Healthcare: Healthcare providers may use device fingerprinting to ensure the security of patient data and prevent unauthorized access to medical records.

These are just a few examples of the industries and sectors that use device fingerprinting. As the use of digital technologies continues to grow, device fingerprinting is likely to become even more prevalent across a wide range of industries and applications.

What Tracking Methods are Used in Device Fingerprinting?

- Web beacon tracking: Web beacon tracking involves embedding a tiny, invisible image or iframe on a website or email, which can be used to track user behavior and collect data about the user's device and location.
- IP address tracking: IP address tracking involves collecting data about the user's IP address, which can be used to track their location and device information.

What is Device Fingerprinting Used For?

Device fingerprinting is used for a variety of purposes, including digital advertising and analytics. Advertisers can use device fingerprinting to track users across multiple websites and build a profile of their interests and behaviors in order to target them with more relevant ads, and website owners can use device fingerprinting to gather data about their visitors and improve their website's performance and user experience. But perhaps the most important use of device fingerprinting is in the prevention of fraud, including new account fraud and account takeovers.
Here are three examples of how device fingerprinting can be used to prevent fraud:

- Login authentication: Device fingerprints can be used as an additional factor for login authentication, along with traditional username and password combinations. If a login attempt is made from a device with a different fingerprint than the one associated with the user's account, this can trigger a security alert and prompt the user to take action to secure their account, such as changing their password or adding two-factor authentication, preventing an account takeover.
- Transaction monitoring: Device fingerprints can be used to monitor transactions made by the user, such as purchases or money transfers. If a transaction is made from a device with a different fingerprint than the one associated with the user's account, this can trigger a security alert and prompt the user to confirm the transaction or take other security measures.
- Fraud detection: Device fingerprints can be used to detect patterns of suspicious activity, such as multiple account creation attempts from the same device. By analyzing device fingerprints across different accounts and transactions, companies can identify potential fraudsters and take action to prevent further fraudulent activity.

Device fingerprinting can be a useful tool for preventing fraud and protecting user accounts. However, it is important to balance the need for security with the need for privacy and transparency, and to ensure that users are aware of how their data is being collected and used.

How Accurate is Device Fingerprinting?

By identifying unique device characteristics, device fingerprinting is an effective fraud prevention method: it can accurately identify a device even when bad actors use private browsing or VPNs, and advanced machine learning algorithms further improve its accuracy over time. Combining device fingerprinting with other fraud prevention methods provides a layered defense against fraudulent activities, and its accuracy in identifying devices makes it a valuable tool for maintaining secure and trustworthy transactions.

Risks Associated with Device Fingerprinting

Device fingerprinting can pose several risks to user privacy and security. Here are some of the main risks:

- Tracking and profiling: Device fingerprinting can be used to track users across different websites and applications, allowing companies to build detailed profiles of their interests and behaviors. This information can be used to serve targeted ads or make decisions about user eligibility for certain services or products.
- Identification and authentication: In some cases, device fingerprints may be used to identify and authenticate users. This can be problematic if the fingerprint is used as the sole method of identification or authentication, as it can potentially be stolen or spoofed.
- Security risks: Device fingerprints can be used to exploit security vulnerabilities in browsers or other software. For example, attackers may use fingerprinting to identify the specific version of a browser or plugin being used by a user and then target known vulnerabilities in that version to launch a malware attack.
- Misuse of data: Device fingerprinting may lead to the collection and use of sensitive personal information, which businesses or outside actors may abuse. This can include data such as a user's location, browsing history, or device information.
- Lack of transparency: Users may not be aware of the extent to which their devices are being tracked and the ways in which this information is being used. This lack of transparency can erode trust and lead to concerns about data ownership and control.

Device fingerprinting raises important questions about user privacy and data collection and highlights the need for increased transparency and user control over personal information. It is important for users to be aware of the potential risks associated with device fingerprinting and to take steps to protect their privacy, such as using privacy-enhancing browser extensions, disabling certain tracking features, or regularly clearing their browsing data.

Can Users Block Device Fingerprinting?

Users may attempt to prevent or block device fingerprinting. Here are some ways that users can protect their privacy and limit the effectiveness of device fingerprinting:

- Use privacy-enhancing browser extensions: Several browser extensions are available that can help to protect your privacy and block tracking technologies, including device fingerprinting.
- Use a virtual private network: A VPN can help protect your online privacy by encrypting your internet traffic and hiding your IP address. This can make it more difficult for websites and online services to track your activity and create a device fingerprint.
- Use multiple devices: Using different devices for online activities can make it more difficult for websites and online services to track your activity and create a comprehensive device ID.
- Limit online activity: Finally, users can limit their online activity and avoid providing unnecessary information to websites and online services. For example, users can use a disposable email address when signing up for online services or avoid providing personal information such as their phone number or home address.

While these steps can help limit the effectiveness of device fingerprinting, they may not be 100% effective and may also have unintended consequences, such as limiting website functionality or degrading the user experience.

Legal Regulations Around Device Fingerprinting

Several laws and regulations govern the collection, use, and sharing of data collected through device fingerprinting, including:

- General Data Protection Regulation: The GDPR is a European Union regulation that governs the collection, use, and processing of personal data. Under the GDPR, individuals have the right to know what data is being collected about them, the right to request that their data be deleted, and the right to object to the use of their data for certain purposes.
- California Consumer Privacy Act: The CCPA is a California state law that regulates the collection, use, and sharing of personal data by businesses that operate in California. Under the CCPA, consumers have the right to know what data is being collected about them, the right to request that their data be deleted, and the right to opt out of the sale of their data.
- Children's Online Privacy Protection Act: COPPA is a United States federal law that regulates the collection, use, and sharing of data collected from children under the age of 13. Under COPPA, websites and online services that collect data from children must obtain parental consent before collecting, using, or sharing that data.
- General Data Protection Law: The LGPD is a Brazilian federal law that regulates the collection, use, and sharing of personal data in Brazil. It is similar to the GDPR and provides similar rights to Brazilian individuals, such as the right to know what data is being collected about them, the right to request that their data be deleted, and the right to object to the use of their data for certain purposes.

These are just a few examples of the laws and regulations that apply to device fingerprinting. It is important for businesses and organizations that use device fingerprinting to be aware of these regulations and ensure that they are complying with them.

How Arkose Labs Uses Device Fingerprinting

Arkose Labs is the leading bot management platform, and our holistic approach combines device fingerprinting, IP reputation, and behavioral biometrics with our MatchKey Challenge solution that perceives and anticipates bad actors. We stop account takeovers, SMS toll fraud, phishing, fake account creation, and more. Our clients have a guaranteed outcome: either we stop the bot attacks, or we cover the loss. We're the only company to offer a $1 million warranty against credential stuffing, SMS toll fraud, and phishing.

If you want to learn more about how Arkose Labs incorporates device fingerprinting into our bot management solutions, book a demo today!
Threat actors have a new tool in their belts: ChatGPT. ChatGPT is not foolproof, but experts think its ability to customize and scale cyberattacks could threaten enterprises. Yet the hype around ChatGPT is obscuring security concerns.

More than half of IT professionals predict that ChatGPT will be used in a successful cyberattack within the year, according to a BlackBerry survey of 1,500 IT decision makers across North America, the U.K. and Australia. More than 7 in 10 IT professionals think ChatGPT is a potential threat to cybersecurity and are concerned, according to the survey. While IT departments begin to decipher potential use cases, teams must also be aware of how generative AI might be used to target their company.

"As with any new or emerging technology or application, there are pros and cons," Steve Grobman, SVP and CTO at McAfee, said in an email. "ChatGPT will be leveraged by both good and bad actors, and the cybersecurity community must remain vigilant in the ways these can be exploited."

Anyone with access to a web browser can use ChatGPT, lessening the knowledge gaps and lowering the barrier to entry, Grobman said. With ChatGPT's help, threat actors can generate well-written messages in bulk that are targeted to individual victims. This typically improves the odds of a successful attack, according to Grobman.

Following the rollout of ChatGPT, people have struggled to distinguish between AI-generated and human-written text. More than two-thirds of adults could not tell the difference between a love letter written by an AI tool versus a human, McAfee data published this month found. OpenAI released a text classifier to help organizations identify AI-generated text, though the tool is unreliable. Plagiarism detection company Turnitin and several other vendors have started testing detection capabilities for AI-assisted writing as well.

Phishing and prevention

The main goal for businesses regarding phishing attacks is to prevent employees from clicking random links or sending company information. More than half of IT decision makers believe ChatGPT's ability to help hackers craft more believable phishing emails is a top global concern, according to a BlackBerry survey.

Educating employees on how ChatGPT and other similar tools are used by threat actors can increase caution and empower employees to avoid these types of attacks, according to Paul Trulove, CEO at security software company SecureAuth. In addition to education, businesses can adopt a zero-trust model, which requires potential attackers to verify their identity and only provides successful intruders with access to limited resources, Trulove said in an email. While this strategy may be met with internal resistance, managing access through zero-trust makes it harder for attackers to compromise enterprise systems, Trulove said.

Lowering the barrier to entry and shadow IT

Phishing attacks aren't the only potential nightmare scenario. Cybercriminals could use AI-powered tools to offer malware Code-as-a-Service, according to Chad Skipper, global security technologist at VMware. "The nature of technologies like ChatGPT allows threat actors to gain access and move through an organization's network quicker and more aggressively than ever before," Skipper said in an email. Nearly half of IT decision makers believe ChatGPT's ability to enable less experienced hackers to improve their technical knowledge is a top global concern, according to a BlackBerry survey.
In the past, it used to take hours for cybercriminals to gain access to a network, but now code can be generated within seconds, Skipper said. Threats can also come from within an organization through the form of shadow IT. Businesses should remind employees about intellectual property protection, according to Caroline Wong, chief strategy officer leading the security, community and people teams at pentesting company Cobalt. “OpenAI does warn users not to share confidential information, so it’s important to reiterate that anything they use it for should be considered public,” Wong said in an email. More than two-thirds of workers said they were using AI tools without first informing their bosses, according to a Fishbowl survey of almost 11,800 Fishbowl users. “To mitigate risk and tackle the shadow IT issues, organizations must turn on the lights and gain visibility into every process and packet, limit the blast radius with segmentation, inspect in-band traffic for advanced threats and anomalies, integrate endpoint and network detection and response and conduct continuous threat hunting,” Skipper said. Correction: This article has been updated to correct the name of pentesting company Cobalt.
Artificial Intelligence (AI) has incredible potential to speed decision-making and unearth connections between data to inform government services and programs. AI is being implemented across government and private industry with very little policy or regulation as to its development or use. In many ways, this lack of oversight is driving exciting innovation, but as this innovation leads to new uses, the risks of infringing on citizen rights and privacy increase. Peter Parker (Spiderman) was warned, "with great power comes great responsibility." Similarly, AI developers need a voice providing gentle guidance as they figure out how best to use AI's power for good. In the fall of 2022, the White House released the AI Bill of Rights, designed to address concerns about how, without some oversight, AI could lead to discrimination against minority groups and further systemic inequality. This blueprint document has five key principles for the regulation of the technology: - Safe and effective systems - AI should be developed by experts with input from the communities who will use or benefit from it. - Algorithmic discrimination protections - implementing equity assessments to see how an AI system may impact members of exploited and marginalized communities. - Data privacy - giving people more say in how their data is used. - Notice and explanation - people should know when an AI system is being used as part of an online service. - Human alternatives consideration and fallback - giving the people the option to opt out of AI and still access services via human interaction. The National Institute of Standards and Technology (NIST) will issue an AI Risk Management Framework (AI RMF) in early 2023 to follow on this high-level guidance with actionable steps agencies can take. This next level of tactical guidance is important as the seemingly sensible principles in the bill of rights can be hard to implement. For example, how do you define a "fair algorithm" and, logistically, what does removal of data do to AI systems? Will companies need to rebuild their customer recommendation systems every time someone deletes their data? One proposed solution is the use of privacy-preserving machine learning (PPML). This approach uses synthetic data generation, differential privacy, federated learning, and edge processing and would address some of the core concerns raised in the bill of rights principles. To stay ahead of technology and policy innovations as they relate to AI and privacy, check out these resources on GovEvents and GovWhitePapers. - Implement the New NIST RMF Standards and Meet the 2023-2024 FISMA Metrics (March 8-9, 2023; virtual) - This 2-day seminar will identify the changes to Federal Cybersecurity requirements that will affect government and contractor information systems and enterprises and provide strategies for effectively and quickly implementing solutions for meeting the new requirements. This includes guidance around AI, automation, and privacy. - ATARC Artificial Intelligence and Data Analytics Breakfast Summit (April 6, 2023; Washington, DC) - Federal leaders discuss the pros and cons of Artificial Intelligence in security and future plans for using AI for defense, intelligence, and citizen service. - Emerging Technology and Innovation Conference 2023 (May 7-9, 2023; Cambridge, MD) - Listen and engage with a range of keynotes, panels, and executive sessions from influential thought leaders and innovators. 
Gain insight into leading edge technology, network with important decision makers, and be inspired by visionaries who are using emerging technology and innovation to make positive advancements. - The Ethics of Healthcare AI (white paper) - In this report, ICIT Executive Director Joyce Hunter explores the ethics of healthcare AI and the stakeholder responsibilities and considerations when developing, adopting, and implementing AI in healthcare. AI systems are only as reliable and accurate as the data we provide, and only as ethical as the constraints incorporated into the developed code. - AI Bias Is Correctable. Human Bias? Not So Much (white paper) - This "Defending Digital" Series, No. 5 focuses on how individual decisions are shaped by our values, beliefs, experiences, inclinations, prejudices, and blind spots. These "biases" can easily leak into information system design. But overall and over time, modern technology will prove a force for more fair and objective societal actions. - The Impact of Emerging Technology on AI Within the Federal Government (white paper) - In a recent ATARC roundtable discussion, government IT experts shared the various challenges and solutions they have encountered with the emergence of Artificial Intelligence (AI) technology and advanced data analytics (Data). The group also discussed where they see AI and Data heading in the Federal government, and what steps should be taken for these emerging technologies to be fully accepted and adopted.
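Returning to the privacy-preserving machine learning (PPML) techniques mentioned above, differential privacy is the easiest to show in a few lines. Below is a minimal sketch of the Laplace mechanism for releasing a noisy average; the dataset, bounds, and epsilon are hypothetical, and a production deployment would also track a cumulative privacy budget across queries.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so one individual's record
    can shift the mean by at most (upper - lower) / n; that bound is the
    sensitivity used to scale the noise.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Hypothetical example: releasing an average service-usage figure.
usage_minutes = np.array([12.0, 45.0, 30.0, 8.0, 60.0])
print(dp_mean(usage_minutes, lower=0.0, upper=120.0, epsilon=1.0))
```

Smaller epsilon values add more noise and give stronger privacy; the trade-off between accuracy and privacy is exactly the policy question the bill of rights principles leave open.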
Why All AI Isn't Created Equally in Cybersecurity

Preventing Cyber Threats Requires Deep Learning, Not Just AI

The rapid advancement of artificial intelligence (AI) is fundamentally reshaping the cybersecurity landscape. On one hand, AI-powered tools hold tremendous potential for automating threat detection and making security teams more efficient. On the other, these same technologies empower threat actors to create increasingly sophisticated and hard-to-detect attacks. Organizations now face a critical challenge: leveraging "good" AI while also defending against this new breed of "bad" AI.

On the positive side, AI can automate threat monitoring, reduce human error, and give time back to security operations center (SOC) teams who are underwater chasing alerts, making prevention and detection more effective and efficient. Recent reports found that false positives from antiquated cybersecurity tools significantly strain these team members' time, accounting for over two working days of lost productivity per week.

Conversely, AI makes it much easier for bad actors to create and manipulate malware that continuously changes to evade detection by traditional cyber tools, like legacy antivirus (AV), and steal sensitive data. According to the Verizon 2023 Data Breach Investigations Report, the success rate for attacks has escalated to more than 30% due to threat actors' use of AI. These advancements in attack methods and the harsh realities of AI are making existing security measures obsolete.

The potential for unlocking competitive advantage led 69% of organizations last year to adopt generative AI tools. However, with so much on the line, organizations need to understand the risks and limitations of AI and what's required to get ahead of adversaries and predict advanced cyber threats.

Evaluating the Challenges and Limitations of AI

With the widespread availability of generative AI, new cyber threats are emerging daily, and much faster than in previous years. Recent examples include hyper-realistic deepfakes and phishing attacks. Threat actors are using tools like ChatGPT to craft well-written and grammatically correct emails, videos, and images that are difficult for spam filters and readers to catch. We've seen popular reports of deepfakes using high-profile figures such as Kelly Clarkson, Mark Zuckerberg, and Taylor Swift. We also saw a hacker posing as a CFO trick a finance worker at a multinational firm into transferring $25 million.

Concerns with the use of large language models extend beyond phishing and social engineering techniques. There are widespread worries about the unintended consequences of companies using these generative AI tools in typical business scenarios. For example, after a recent breach, Samsung banned ChatGPT and other AI-based tools among its employees to cut back on potential security gaps if the technology, and the sensitive data shared on these platforms, were to fall into the wrong hands. The crackdown came after an engineer accidentally uploaded sensitive internal source code to ChatGPT.

We're seeing similar complications with threat actors using tools like FraudGPT, WormGPT, and EvilGPT, which open advanced attack techniques up to a broader audience of less sophisticated users. With these tools, a cybercriminal can write new malware code, reverse engineer existing code, and identify vulnerabilities to generate novel attack vectors.
As threat actors use AI to make their attacks smarter, stealthier, and quicker, more and more organizations are deploying new solutions that tout AI capabilities. However, not all AI is created equal, and many solutions that claim to be powered by AI carry real limitations. Advanced adversarial AI can't be defeated with basic machine learning (ML), the type of technology most vendors are leveraging and promoting (and generically calling AI). The cybersecurity community needs to shift away from a reactive, "assume breach" approach and turn to one focused on prevention.

Furthermore, we've reached the notable end of the Endpoint Detection and Response (EDR) honeymoon period. EDR is too outdated, reactive, and ineffective against today's sophisticated, adversarial AI-driven cyber threats. Most cybersecurity tools leverage ML models that present several shortcomings. For example, they are trained on limited subsets of available data (typically 2-5%), offer just 50-70% accuracy with unknown threats, and introduce an avalanche of false positives. ML solutions also require heavy human intervention and are trained on small data sets, exposing them to human bias and error. We've entered a pivotal time in the evolution of cybersecurity, one that can only be resolved by fighting AI with better, more advanced AI.

"In this new era of generative AI, the only way to combat emerging AI threats is by using advanced AI - one that can prevent and predict unknown threats. Relying on antiquated tools like EDR is the equivalent of fighting a five-alarm fire with a garden hose. Assuming breach has been an accepted stance but the belief that EDR can get out ahead of threats is simply not true. A shift toward predictive prevention for data security is required to remain ahead of vulnerabilities, limit false positives, and alleviate security team stress." -- Voice of SecOps, 4th Edition 2023

The Power of Deep Learning

Due to AI advancements, we can now prevent ransomware and other cyber attacks rather than detect and respond to them afterward. Deep learning (DL) is the most advanced form of AI and is the only way to deliver true prevention. When applied to cybersecurity, DL trains its neural networks to autonomously predict threats and to prevent known and unknown malware, ransomware, and zero-day attacks.

Traditional AI can't keep pace with the sheer volume of data entering an organization; it leverages only a portion of the data to make high-level business decisions. With DL, data is never discarded, making it much more accurate and enabling it to predict and prevent known and unknown threats quickly. There are fewer than a dozen major DL frameworks in the world, and only one is dedicated fully to cybersecurity: the Deep Instinct Prevention Platform. It continuously trains itself on millions of raw data points, becoming more accurate and intelligent over time.

Because DL models understand the building blocks of malicious files, it is possible to implement and deploy a predictive, prevention-based security solution. This allows savvy cybersecurity teams to shift from an outdated "assume breach" mentality to a predictive prevention approach. Only deep learning has the power to match the speed and agility of this next-gen adversarial AI.
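To make the ML-versus-DL contrast concrete, here is a toy, self-contained PyTorch sketch of the general idea of training a deep model on raw file-derived features (byte histograms) to score maliciousness. It is an illustration on synthetic data only, not Deep Instinct's framework or any production detector.

```python
import torch
import torch.nn as nn

# Toy stand-in for "learning from raw file data": each sample is a
# 256-bin byte histogram; labels are 1 = malicious, 0 = benign.
X = torch.rand(512, 256)
y = torch.randint(0, 2, (512, 1)).float()

model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),  # logit for "malicious"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Score a new sample: a real preventive system would block anything
# scoring above its threshold before the file ever executes.
prob = torch.sigmoid(model(torch.rand(1, 256)))
print(f"p(malicious) = {prob.item():.3f}")
```

The point of the sketch is architectural, not predictive: the model learns its own feature representations from raw inputs, whereas classic ML pipelines depend on hand-engineered features and correspondingly small slices of the available data.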
Deep Instinct takes a prevention-first approach to stopping ransomware and other malware using the world's first and only purpose-built deep learning cybersecurity framework.

AI and the End of the "Assume Breach" Mentality

2023 was the year when we saw AI burst onto the scene; 2024 will be the year AI alters the way threat actors create and deploy cyber attacks, forcing organizations to emphasize prevention. At Deep Instinct, we firmly believe that the "assume breach" mentality is ineffective and extremely destructive to organizations. Businesses can, and should, protect themselves from advanced forms of attacks by leveraging a more sophisticated form of AI, such as deep learning, to fight against AI-driven threats.

For more information about how deep learning is transforming cybersecurity, read our blog post.
The “Internet of automobiles” may hold promise, but it comes with risks, too. by Dan Goodin Just about everything these days ships with tiny embedded computers that are designed to make users’ lives easier. High-definition TVs, for instance, can run Skype and Pandora and connect directly to the Internet, while heating systems have networked interfaces that allow people to crank up the heat on their way home from work. But these newfangled features can often introduce opportunities for malicious hackers. Witness “Smart TVs” from Samsung or a popular brand of software for controlling heating systems in businesses. Now, security researchers are turning their attention to the computers in cars, which typically contain as many as 50 distinct ECUs—short for electronic control units—that are all networked together. Cars have relied on on-board computers for some three decades, but for most of that time, the circuits mostly managed low-level components. No more. Today, ECUs control or finely tune a wide array of critical functions, including steering, acceleration, braking, and dashboard displays. More importantly, as university researchers documented in papers published in 2010 and 2011, on-board components such as CD players, Bluetooth for hands-free calls, and “telematics” units for OnStar and similar road-side services make it possible for an attacker to remotely execute malicious code. The research is still in its infancy, but its implications are unsettling. Trick a driver into loading the wrong CD or connecting the Bluetooth to the wrong handset, and it’s theoretically possible to install malicious code on one of the ECUs. Since the ECUs communicate with one another using little or no authentication, there’s no telling how far the hack could extend. Later this week at the Defcon hacker conference, researchers plan to demonstrate an arsenal of attacks that can be performed on two popular automobiles: a Toyota Prius and a Ford Escape, both 2010 models. Starting with the premise that it’s possible to infect one or more of the ECUs remotely and cause them to send instructions to other nodes, Charlie Miller and Chris Valasek have developed a series of attacks that can carry out a range of scary scenarios. The researchers work for Twitter and security firm IOActive respectively. Among the attacks: suddenly engaging the brakes of the Prius, yanking its steering wheel, or causing it to accelerate. On the Escape, they can disable the brakes when the SUV is driving slowly. With an $80,000 grant from the DARPA Cyber Fast Track program, they have documented the cars’ inner workings and included all the code needed to make the attacks work in the hopes of coming up with new ways to make vehicles that are more resistant to hacking. “Currently, there is no easy way to write custom software to monitor and interact with the ECUs in modern automobiles,” a white paper documenting their work states. “The fact that a risk of attack exists but there is not a way for researchers to monitor or interact with the system is distressing. This paper is intended to provide a framework that will allow the construction of such tools for automotive systems and to demonstrate the use on two modern automobiles.” The hacking duo reverse-engineered the vehicles’ CAN, or controller area networks, to isolate the code one ECU sends to another when requesting it take some sort of action, such as turning the steering wheel or disengaging the brakes. 
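To give a rough sense of what such monitoring and injection involves in practice, here is a minimal, hypothetical sketch using the python-can library. It is not the researchers' actual tooling, and the arbitration ID and payload below are placeholders rather than real vehicle commands.

```python
import can

# Open a SocketCAN interface (e.g., a USB-to-CAN adapter exposed as can0).
bus = can.interface.Bus(channel="can0", interface="socketcan")

# Passive monitoring: every ECU broadcast is visible to every node.
for _ in range(10):
    frame = bus.recv(timeout=1.0)
    if frame is not None:
        print(f"id=0x{frame.arbitration_id:03x} data={frame.data.hex()}")

# Injection: nothing on the bus authenticates the sender, so a frame
# with a spoofed arbitration ID is indistinguishable from a real ECU's.
spoofed = can.Message(arbitration_id=0x123, data=[0x01, 0x02], is_extended_id=False)
bus.send(spoofed)

bus.shutdown()
```

The sketch makes the architectural weakness plain: the only "identity" a CAN frame carries is its arbitration ID, which any node on the bus can claim.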
They discovered that the network has no mechanism for positively identifying the ECU sending a request or using an authentication passcode to ensure a message sent to a controller is coming from a trusted source. These omissions make it easy for them to monitor all messages sent over the network and to inject phony messages that masquerade as official requests from a trusted ECU. “By examining the CAN on which the ECUs communicate, it is possible to send proprietary messages to the ECUs in order to cause them to take some action, or even completely reprogram the ECU,” the researchers wrote in their report. “ECUs are essentially embedded devices, networked together on the CAN bus. Each is powered and has a number of sensors and actuators attached to them.” Using a computer connected to the cars’ On-Board Diagnostic System, Miller and Valasek were able to cause the vehicles to do some scary things. For instance, by tampering with the so-called Intelligent Park Assist System of the Prius, which helps drivers parallel park, they were able to jerk the wheel of the vehicle, even when it’s moving at high speeds. The feat takes only seconds to perform, but it involved a lot of work to initially develop, since it required requests made in precisely the right sequence from multiple ECUs. By replaying the request in the same order, they were able to control the steering even when the Prius wasn’t in reverse, as is usually required when invoking the park assist system. They developed similar techniques to control acceleration, braking, and other critical functions, as well as ways to change readings displayed by speedometers, odometers, and other dashboard features. For a video demonstration of the hacks, see this segment from Monday’s The Today Show. In it, both Toyota and the Ford Motor company emphasize that the manipulations Miller and Valasek carry out require physical access to the car’s computer systems. That’s a fair point, but it’s also worth remembering the previous research showing that there are often more stealthy ways to commandeer a vehicle’s on-board computers. The aim behind this latest project wasn’t to develop new ways to take control but to show the range of things that are possible once that happens. When combined with the previous research into hacking cars’ Bluetooth and other interfaces, the proof-of-concept exploits should serve as a wake-up call not only to automobile manufacturers, but to anyone designing other so-called Internet-of-things devices. If Apple, Microsoft, and the rest of the computing behemoths have to invest heavily to ensure their products are hack-resistant, so too will those embedding tiny computers into their once-mundane wares. A car, TV, or even your washing machine that interacts with Internet-connected services is only nifty until someone gets owned. Via ARS Technica
Big data is now very much established across industries as a way to drive innovation and help businesses make important decisions. It also plays an important role in enhancing customer experiences while having a positive impact on revenue streams.

Big data and analytics also have an impact on how we work. For example, in healthcare, we can now quickly analyze patient records, insurance details, health plans, and other critical pieces of information, enabling healthcare providers to offer enhanced patient experiences with rapid, life-saving diagnoses.

When it comes to project management, data can be collected about the project’s business context, benefits, finance, regulatory environment, policies, and organizational project maturity. Data can also be collected about the related activities, events, and lessons learned during the course of the project. This creates an opportunity for data to be utilized in different ways. For one, it can be used to develop and improve the planning, control, and delivery protocols of the project. But the data can also be analyzed to shape the overall project management ecosystem.

Big data in project planning and delivery

At an enterprise level, planning and delivery activities are often heavily documented. This creates an opportunity to consolidate and analyze these activities, and if the company is already using a lot of technology to manage projects, this is even easier to achieve. The data can then be leveraged to derive valuable insights into how internal planning processes and parameters can be redefined to make the whole project more innovative and creative.

During the course of the undertaking, the project environment can also be analyzed and a significant amount of data can be collected about the project team members. This information can relate to past project experience, education, skills, training programs, and performance evaluations. Data can also be collected on conflicts, resolutions, attrition, team performance, and leadership. This information can be consolidated and analyzed to gain insights into how businesses can form more effective teams. It can also be used to determine the optimum size and configuration of each project team, including the mix of skills and scalable leadership.

Big data in risk, quality, and resource management

Project management is dynamic and can be affected by a variety of internal and external forces. As a result, project team members should always actively identify and manage risks on an ongoing basis. This means documenting everything from when a risk event happened to the troubleshooting activities undertaken to resolve the issue. This data can then be used to identify, analyze, prioritize, and monitor risks and to create risk response strategies.

Quality management, on the other hand, involves a lot of data that can be captured during the testing phase and at the time of delivery. This information will include policies, quality criteria, criteria thresholds, and the use of quality standards. Quality assurance data can be analyzed to ensure that quality processes are being implemented and that standards and requirements are being met. Through data analytics, companies can develop new quality management frameworks and standards, including new quality control techniques, procedures, and parameters for measuring quality against baseline standards.

Finally, resource data can also be collected and analyzed to enable improved procurement, allocation, and management.
These resources can include infrastructure, tacit and explicit knowledge, finances, technologies, human resources, and enterprise process assets.

Big data will essentially make project management more efficient by constantly adapting and improving processes to help teams work well together. It will also enable businesses to better manage their resources during the planning stage and throughout the course of the project.
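To make the risk-log analysis described above concrete, here is a minimal sketch in Python with pandas. The column names and figures are hypothetical examples rather than a real project dataset; the same grouping pattern applies to whatever fields a team actually records.

```python
# Minimal sketch of analyzing a project risk log with pandas.
# The columns and values below are hypothetical examples.
import pandas as pd

risk_log = pd.DataFrame({
    "category":        ["scope", "vendor", "scope", "staffing", "vendor"],
    "resolution_days": [4, 11, 6, 9, 14],
    "recurred":        [False, True, False, True, True],
})

# Average time to resolve each category of risk event.
mean_resolution = risk_log.groupby("category")["resolution_days"].mean()

# Share of "resolved" risks in each category that recurred later:
# a simple signal for where response strategies need rework.
recurrence_rate = risk_log.groupby("category")["recurred"].mean()

print(mean_resolution)
print(recurrence_rate)
```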
JBOD stands for Just a Bunch of Disks and is typically defined as a collection of disk drives contained in a single drive enclosure, though JBOD could just as easily refer to a stack of external single drives connected to a computer. Regardless of the configuration, each of the drives in a JBOD arrangement can be accessed from the host computer as a separate drive (in contrast to RAID, which treats a collection of drives as a single storage unit). Depending on the design and architecture of the JBOD storage enclosure, there may be a connector for each drive in the enclosure, unless the enclosure contains an internal hub.

Why Use JBOD?

For some, JBOD is simpler to understand and set up. It’s also relatively straightforward to add storage capacity in a JBOD system: simply add another drive. People who like flexibility in their workflows, want tight control of their data storage and backup strategies, or simply prefer reducing the clutter of multiple external disk drives on their desks tend to use JBOD enclosures. Whether using JBOD or RAID storage architectures, one should always ensure that data is being backed up to at least one other drive or location.
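Because the host sees each JBOD drive as an independent volume, per-drive monitoring is straightforward. Here is a minimal sketch in Python; it assumes the third-party psutil package for enumerating mounted volumes, and the devices and mount points it prints will of course vary by system.

```python
# Minimal sketch: report free space on each independently mounted drive,
# as you would in a JBOD setup. Requires the third-party psutil package.
import shutil
import psutil

GIB = 1024 ** 3

for part in psutil.disk_partitions(all=False):
    usage = shutil.disk_usage(part.mountpoint)  # total/used/free in bytes
    pct_free = 100 * usage.free / usage.total
    print(f"{part.device:<20} {part.mountpoint:<15} "
          f"{usage.free / GIB:8.1f} GiB free ({pct_free:.0f}%)")
```

With RAID, by contrast, the operating system would report only the single logical volume, which is exactly the distinction the definition above draws.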
Secure Data Protection

June 13, 2022

Contributed by Edward Schmitt, Cybersecurity Analyst, DOT Security.

Most people don’t realize that even with the protections websites provide for our usernames, passwords, and other sensitive information, those credentials still have to be typed. Cybercriminals know this, which is why keyloggers, which track your every keystroke, are such a frequent and dangerous threat.

A keylogger is a software application or tool that logs keyboard strokes on a digital device. Keyloggers can appear innocuous, but don’t be fooled: staying unnoticed is fundamental to their functionality. They are built not to draw attention to themselves because they are a powerful instrument for cybercriminals to pilfer very valuable and sensitive information.

Keyloggers are especially dangerous to victims who don’t appreciate the sensitivity of the data they input through keystrokes on their keyboards. This sensitive data can be logged and sent to malicious actors for further analysis. Keyloggers present a significant danger to anyone who uses a digital device, since passwords and other confidential details are routinely entered through keyboard input these days. Furthermore, today’s keyloggers are advanced enough to log data that does not require keyboard input at all, including clipboard contents and screenshots, control text capture (which allows a password to be captured even when it is protected by a password mask, such as the asterisks that replace password characters on a website), process tracking (which records the folders, applications, and windows that are opened), as well as a breadth of internet activity tracking capabilities.

Keyloggers are not bound to computers alone. Many keyloggers are known to have been distributed to mobile phones and mobile devices. Cybercriminals have been developing trojans for mobile devices in the form of games and applications that can end up on the Apple App Store and the Google Play Store.

When using a PIN to access accounts, passwords to access websites, email usernames and passwords, and other sensitive information, there is always the possibility of infringement on private data by cybercriminals looking to take advantage of new and old vulnerabilities within websites and web browsers. If cybercriminals have direct access to privately owned and confidential information, they will use the collected information to conduct illegal activity, such as making purchases online under a compromised user account. Keyloggers are often used to spy on data from private enterprises and compromised individuals.

When visiting a fraudulent or malicious website, keyloggers can be automatically downloaded and installed onto a computer. Cybercriminals can do this by executing a script from the compromised website that targets the web browser with unpatched exploits. Once the browser is exploited, an attacker can then download and install malware like keyloggers to monitor keystrokes and other activity on the device. There are several other ways that keyloggers can end up on a device, such as phishing or being bundled alongside other malware such as trojans, viruses, and worms.

Keeping your organization and yourself safe from keyloggers and other malware is crucial in today’s digital world. With identity theft, data theft, and cybercrime all at all-time highs, it is essential to arm yourself and others with the knowledge of how to protect against keyloggers.
Recommended strategies for increased protection include:

- Hardening public-facing systems and blocking unneeded USB ports to prevent hardware keyloggers from being exploited
- Deploying anti-virus products that can identify and distinguish software keyloggers, and maintaining updates to ensure those products operate at full capacity
- Maintaining software updates for operating systems and web browsers
- Avoiding the use of publicly available internet-enabled devices to access private websites or services that require a password
- Employing multi-factor authentication on all password-protected accounts wherever applicable, adding a layer of security in the event a password is compromised by a keylogger

Protecting yourself from malware like keyloggers means keeping it from entering your system in the first place and having strong security standards in place to secure all the entrances to your network. Is your business adequately protected from malware like keyloggers? Use this checklist to see just how covered your business is from modern cyberattacks and discover what you still need to be completely secure.
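The multi-factor authentication recommendation above is worth illustrating. The sketch below shows time-based one-time passwords (TOTP), the scheme behind most authenticator apps; it assumes the third-party pyotp package, and the secret is generated on the spot rather than taken from any real account. The key property against keyloggers: even if a code is captured as it is typed, it expires within about 30 seconds.

```python
# Minimal TOTP sketch using the third-party pyotp package.
# A keylogger that captures the six-digit code gains little,
# because the code is only valid for roughly 30 seconds.
import pyotp

# Generated per user at enrollment and stored server-side (and in the
# user's authenticator app); the shared secret itself is never typed.
secret = pyotp.random_base32()

totp = pyotp.TOTP(secret)
code = totp.now()                # what the user reads from their app
print("current code:", code)

# Server-side check; valid_window=1 tolerates slight clock drift.
print("accepted:", totp.verify(code, valid_window=1))
```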
You may have seen scooters whizzing by at reckless speeds lately, often with a helmetless young driver. The rise of ‘micro mobility’ is in a nascent stage. Driverless vehicles will not be on a road near you in the coming year, and car sharing is an issue for some, but scooters and e-bikes launched by Bird, Lime, and Revel are safer in COVID times since they are open air. The new mobility is changing the face of urban transportation, but it brings new scooter safety challenges.

Solving a major challenge

Like any new technology, vehicles such as scooters were developed to ease congestion, help workers get home on the ‘last mile’, and generally improve the standard of living for city residents. However, in classic Silicon Valley fashion, echoing Facebook’s motto of “move fast and break things” and Uber’s blatant violation of regulations, many of these companies started simply by dropping off scooters on street corners. It fell to city officials to create scooter policy, and governments had to learn to write rules for these vehicles. Complicating the matter were injuries and deaths. As an example, Revel’s shared electric scooter program recently restarted in New York City after three deaths in two months. In a high-profile case, local NYC news reporter Nina Kapur was killed riding as a passenger without a helmet on a Revel scooter.

Staying safe on the road

Riding a scooter is a fun and easy – but dangerous – way to get around. Granted, the media focuses on the few deaths, but many scooter injuries are never reported. Fresh air helps curb the spread of COVID, but just as you need to wear a mask, you must also wear a helmet. While 30 mph may not seem fast, the rider is still an unprotected human. Many people are not fully comfortable driving scooters and need practice to learn to balance before riding in a high-stress environment. And many accidents are not simply the result of scooters colliding with cars: scooter drivers must avoid not just cars, but other scooters, bikes, and pedestrians.

How cities deal with scooters

Tel Aviv issues fines for not wearing helmets, and that could be a good deterrent in other cities. This article highlights the challenges facing city officials: “Scooters are not being used as intended and need to be more tightly regulated. A lot of people are still riding these scooters, and they seem to be up to no good,” said Michael Rogers, Dallas’ transportation director. In general, something introduced to relieve strain on the transportation system has become a liability. This dilemma is typical of many new technologies – some segments of the population abuse these tools, but a level of normalcy returns, as it did with ride sharing.

If anything epitomizes the mindset of the scooter companies it is this: “Several companies initially gained the industry reputation of ‘begging for forgiveness rather than first asking for permission’ after launching electric scooter fleets without consulting city officials.” Cities will learn to set limits, but it is still the responsibility of riders to take proper safety precautions. When new technology bursts onto the scene, it can be seen as a godsend in helping solve problems, but other problems tend to arise. We cannot control how cities deal with scooters, but we can control the safety precautions we take. Wearing a helmet could be the difference between life and death on a scooter.
The fusion of synthetic biology (SynBio) with architecture is marking the beginning of a revolutionary shift towards eco-friendly building methods. At the forefront is Ginger Dosier with her pioneering company, Biomason; she represents the future of green construction. SynBio’s integration with architectural practices promises to drastically reduce carbon footprints, steering the industry towards sustainable development.

Innovations like those led by Dosier utilize living organisms to create building materials. For instance, Biomason uses bacteria to grow biocement in a process that sequesters carbon, radically cutting greenhouse gas emissions compared to traditional cement production. This technique exemplifies the kind of groundbreaking changes SynBio can bring to architecture, potentially transforming the way we construct buildings.

Dosier’s work is just one piece of a larger puzzle that marries scientific advancement with architectural design. As this trend progresses, we can expect to see a surge in constructions that are not just environmentally neutral but positively restorative, offering a way to counteract the industry’s historically heavy environmental toll. Such advancements signal an optimistic future for sustainable urban development, where buildings are crafted with the well-being of the planet firmly in mind. This is a change that goes beyond mere energy efficiency—it’s about reimagining our built environment to work with nature, rather than against it.

The Emergence of Bio-Influenced Construction Materials

The Role of Biomason and Biocement in Architecture

Biocement is one of SynBio’s most profound contributions to sustainable architecture, with Biomason at the forefront of its application. The company leverages living microorganisms to produce cement, a process that essentially grows the material without the carbon footprint characteristic of traditional cement manufacturing. This method redefines the future of construction materials, fostering an architectural mindset that prioritizes sustainability from inception to completion.

Biocement is not simply about reducing emissions; it challenges the established norms of material science. Dosier’s approach with Biomason demonstrates that biologically engineered materials can meet, if not surpass, the performance expectations of their conventional counterparts, offering architects and builders novel solutions that are in harmony with our planet’s ecosystem.

Building for Future Climates and Sustainability

The battle against climate change demands that architects and builders adopt materials that stand the test of time and the elements. The industry must anticipate future climates and seek out materials like those inspired by SynBio, which are designed to endure extreme weather conditions and environmental stressors. The focus is not only on durability but also on how these materials interact with the environment during their lifecycle.

As bio-informed materials find their way into the market, their capacity to merge longevity with ecological compatibility becomes paramount. With an eye on the long horizon, designers are recognizing the potential of SynBio to yield architectural components that contribute positively to sustainability objectives, marking a new age of thoughtful, long-lasting design.
Redefining Material Life Cycles and Environmental Impacts

The Construction Industry’s Ecological Footprint

The construction industry’s ecological footprint is a pressing concern for architects aware of the impact that buildings have on the planet. The land use, resource extraction, and biodiversity loss associated with traditional building practices demand a strategic reevaluation. This is where SynBio steps in, fostering the creation of materials that are not simply useful in the short term but also considerate of their environmental legacy.

The directed use of materials in architecture, alongside minimized impact on the land utilized, encapsulates a broader trend of sustainability in construction. These emerging materials come with a promise: to coexist with biodiversity rather than displacing it. They embody a profound shift towards a holistic appraisal of construction that includes the origins, life, and eventual return of building materials to the Earth.

Exploring the Long-Term Environmental Effects

Dosier is at the forefront of a movement that calls for a transformation in how we produce and dispose of construction materials. The environmental impact of these materials can’t be reduced to just the emissions from their creation; we must consider their entire lifecycle, including how they’re discarded. Materials engineered through synthetic biology offer a compelling alternative, designed to be environmentally benign from cradle to grave.

The weight of man-made materials on Earth has become a pressing environmental issue, and Dosier is promoting a shift in the building sector towards biodegradable materials. These biomaterials are engineered to work in harmony with natural processes, decomposing without damage to the ecosystem after their use in construction. Such innovations in SynBio are key; they propose a new generation of construction materials that are produced sustainably and can safely return to the earth, mirroring the regenerative cycles found in nature. This approach presents a sustainable pathway for the construction industry, minimizing the environmental footprint of buildings and infrastructure throughout their lifespan, including their eventual deconstruction. It’s a vision for a greener future where the materials we build with are as considerate of the environment at their end as they are in their use.

Decarbonizing Construction Materials

Combatting Greenhouse Gas Emissions from Buildings

Buildings, through their operations and the materials they consume, represent a substantial chunk of global greenhouse gas emissions. Decarbonizing building materials is, therefore, a critical step towards meeting global climate targets. Dosier’s work points to an array of methods that can reduce the carbon footprint of these materials, such as the use of bio-based alternatives that lock away carbon or revising supply chain logistics to lessen emissions.

Creating construction materials that actively sequester carbon is a revolutionary concept ushered in by SynBio. These materials don’t just passively exist; they function as carbon sinks, providing a dynamic solution to reducing atmospheric CO2. This focus on the materials’ lifecycle emissions is a vital component of the industry’s overhaul in favor of a greener future.

Pioneers in Sustainable Concrete Alternatives

In a groundbreaking effort to revolutionize the construction industry, companies like Prometheus Materials are pioneering sustainable alternatives to traditional concrete, known for its considerable carbon footprint.
This innovator is leading the charge by fabricating building blocks from bio-based inputs, effectively circumventing the hefty CO2 emissions associated with conventional cement production. Complementing this progress, Minus Materials is making strides with its development of eco-friendly, biodegradable concrete additives. These additives not only enhance the structural strength of concrete but also reduce overall material consumption. At the forefront of this innovation wave is Basilisk, which has developed a remarkable self-healing concrete. This technological marvel promises to extend the life of concrete structures significantly, reducing the need for frequent repairs and maintenance.

Collectively, these trailblazing companies are galvanizing the construction materials sphere, with a clear focus on functionality coupled with environmental stewardship. Through these advanced materials, the construction sector is poised to emerge as a champion of sustainability, reducing its ecological impact while maintaining the integrity and resilience of its structures.

Cross-Industry Material Innovations

Advancements in Sustainable Biofibers

The textile industry’s swift foray into sustainable fibers has implications for architecture. Developments in materials derived from organic sources like lignin and seaweed, or in lab-grown timber, suggest a future independent of environmentally destructive resources. These biofibers signal an impending shift in building practices, one that aims to mitigate deforestation and resource depletion while embracing the principles of the circular economy.

Bio-based materials pave the way for a new chapter in construction, one that couples innovation with conservation. By capitalizing on research within textiles and fashion, architecture benefits from a ripple effect of sustainable material usage, demonstrating the interconnectedness of industry advancements and the shared responsibility towards ecological stewardship.

Evolving Paints and Coatings

The realm of paints and coatings has evolved dramatically, with companies like Pneuma Bio at the forefront of this transformation. They’re redefining the industry by developing paints that not only protect surfaces but also purify the air. By infusing paint with living algae, these innovative coatings absorb pollutants, showcasing how materials can actively contribute to environmental health.

This fusion of biology with material science symbolizes a significant shift towards eco-consciousness in the industry. Advancements are not just about minimizing the harmful impacts of traditional ingredients such as solvents and pigments but are also about designing substances that actively benefit our planet. As a result, we’re witnessing the emergence of sustainable paints that offer both protective functionality and the capacity to generate energy, indicating a new era for coatings that are as beneficial for the environment as they are for the surfaces they cover.

Localized Solutions and Biodiversity

The Importance of Material Diversity and Adaptation

Understanding that one-size-fits-all is not a feasible approach to sustainable architecture, the industry is moving towards a more localized and diversified use of materials. This shift away from a universal standard ensures that each building project is responsive to the unique environmental and cultural context it occupies. Material diversity and specific adaptation are essential considerations for architecture that thrives both aesthetically and ecologically.
Embracing the variance of local environments encourages architecture that not only merges seamlessly with its surroundings but also supports native biodiversity. It requires a deep knowledge of regional resources and the willingness to adapt and innovate with materials that align with the geographical and cultural fabric of a place.

Collaboration in Material Science and Technology

In her call for a collaborative ethos, Dosier recognizes the importance of drawing from a breadth of disciplines and perspectives to conceive materials that address the specific challenges of different locales. By involving a diverse range of experts and stakeholders in the development of new technologies, the industry moves closer to achieving a balance between human needs and environmental preservation.

This inclusive approach, championed by Dosier, is not only about innovation; it’s about crafting a dialogue — one in which material science and technology engage with traditional knowledge and local wisdom to generate solutions that are both sustainable and effective. Synergy in these sectors is quintessential for cultivating an architectural future that is resilient, adaptable, and deeply integrated with the principles of sustainability.
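To put the decarbonization argument in rough numbers, here is a back-of-envelope sketch. The emission factors are assumptions for illustration only: ordinary Portland cement is commonly cited at roughly 0.8 to 0.9 tonnes of CO2 per tonne produced, while the biocement figure below is a hypothetical placeholder, not a published Biomason number.

```python
# Back-of-envelope CO2 comparison for a hypothetical mid-size project.
# All factors are illustrative assumptions, not vendor data.
CEMENT_T = 500                # tonnes of cement-equivalent material needed
CO2_PER_T_PORTLAND = 0.85     # t CO2 per t; a commonly cited rough figure
CO2_PER_T_BIOCEMENT = 0.10    # hypothetical placeholder for a grown material

conventional = CEMENT_T * CO2_PER_T_PORTLAND
bio = CEMENT_T * CO2_PER_T_BIOCEMENT

print(f"conventional: {conventional:7.1f} t CO2")
print(f"bio-based:    {bio:7.1f} t CO2")
print(f"avoided:      {conventional - bio:7.1f} t CO2 "
      f"({100 * (1 - bio / conventional):.0f}% reduction)")
```

Even under cautious assumptions, the arithmetic shows why material substitution, rather than operational efficiency alone, dominates the decarbonization conversation.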
Removal of marine debris and prevention of future accumulations are vitally important to the coastal environment and to humans. Making up approximately three-quarters of all marine debris, plastics persist in the environment for many years and are especially harmful to wildlife. Thousands of birds and marine animals die each year from entanglement in monofilament fishing line, strapping bands, and six-pack ring holders. Pollution from marine debris also compromises the productivity of wetlands, which act as nurseries for commercially and recreationally important fish and shellfish. Marine debris compresses and kills the salt marsh grass which supports the estuary’s food chain, filters pollution, and protects the coast from wave damage.
Why are AI assistants given human-like names?

The attribution of human characteristics to non-human entities is known as anthropomorphism. Attributing human intent to non-human entities, such as pets, robots, or other objects, is one way that people make sense of the behaviors and events that they encounter. We humans are a social species, with a brain that evolved to quickly process social information.[iv] There are numerous examples of anthropomorphism with which we are familiar and which, honestly, we don’t even think twice about. In Toy Story[v], the toys can talk. In Animal Farm[vi], the animals overthrow their masters and govern themselves. In Winnie-the-Pooh[vii], Christopher Robin interacts with Winnie, a talking bear.

Is this why AI-enabled chatbots and digital assistants are given human-like names? To make us want to talk to them? To make it easy to interact with them? To influence our thinking and behaviors? Without over-psychoanalyzing the situation (and I am far from qualified to do so), the answer to the above questions is “yes”.

The good – and not so good – of today’s AI capabilities

AI capabilities have been around for quite some time. While philosophers and mathematicians began laying the groundwork for understanding human thought long ago[viii], the advent of computers in the 1940s provided the technology needed to power AI. The Turing Test, introduced in 1950, provided a method for measuring a machine’s ability to exhibit behavior that is human-like. The term and field of “artificial intelligence”, coined by John McCarthy in 1956, soon followed.

The past few years have seen a dramatic expansion of AI capabilities, from machine learning to natural language processing to generative AI. That expansion has resulted in impactful and valuable capabilities for humans. AI is well-suited for managing tedious and repetitive tasks. AI can be used to initiate automated actions based on the detection of pre-defined conditions. AI can facilitate continual learning across an organization based on the data captured from interactions with and use of technology. And most recently, AI is developing a growing capability to respond to more complex queries, generating responses and prompts to aid humans in decision-making.

But despite all the progress with AI, there are some things that are not so good. Miscommunication can occur due to a chatbot’s or AI assistant’s limited grasp of user intent or context. A simple example is the number of ways we as humans describe a “computer”: terms such as “PC”, “laptop”, “monitor”, or “desktop” must be explicitly defined as equivalent before an AI model can recognize them as the same thing. AI is not able to exhibit empathy or the human touch, resulting in frustration, because humans feel that they are not being heard or understood.[ix] AI is not able to handle complex situations or queries that require nuanced understanding; as a result, AI may provide a generic or irrelevant response.[x] The quality of responses from AI is directly dependent upon the quality of the input data being used – and many organizations lack both the quality and quantity of data required by AI to provide the level of functionality expected by humans. Lastly, but perhaps most importantly, the expanding use and adoption of AI within organizations has resulted in fear and anxiety among employees regarding job loss.
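That vocabulary-equivalence problem is small enough to show in code. The sketch below is a deliberately naive illustration using a hand-built synonym table; the terms are hypothetical examples, and production assistants would use much larger dictionaries or learned embeddings for the same purpose.

```python
# A tiny illustration of the vocabulary-equivalence problem described above.
# The term list is a hypothetical example for demonstration only.
from typing import Optional

CANONICAL_TERMS = {
    "computer": {"computer", "pc", "laptop", "desktop", "workstation", "monitor"},
}

def normalize(user_term: str) -> Optional[str]:
    """Map a user's word to the canonical entity the model reasons about."""
    term = user_term.strip().lower()
    for canonical, variants in CANONICAL_TERMS.items():
        if term in variants:
            return canonical
    return None  # unknown term: the assistant must ask a clarifying question

for word in ["Laptop", "PC", "toaster"]:
    print(word, "->", normalize(word))
```

Techniques that will help humanize AI

Several techniques can help organizations better design human interactions with AI. Here are a few to consider that can help humanize AI.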
- Employing design thinking techniques – Design thinking is an approach for designing solutions with the user in mind. One design thinking technique for understanding human experience is the use of prototypes, or early models of solutions, to evaluate a concept or process. Involving the people who will be interacting with AI through prototypes can surface what they like and where they encounter friction in the use of AI technology.
- Mapping the customer journeys that (will) interact with AI – A customer journey map is a visual representation of a customer’s processes, needs, and perceptions throughout their interactions and relationship with an organization. It helps an organization understand the steps that customers take – both seen and unseen – when they interact with a business.[xi] Using customer journey maps helps with developing the needed empathy with the customer’s experience by identifying points of frustration and delight.
- Thinking in terms of the experience – What is the experience that end-users need to have when interacting with AI? Starting AI adoption from this perspective provides the overarching direction for making the experience of interacting with AI more “human”.

Start here to make AI use more human

AI adoption presents exciting opportunities for increasing productivity and improving decision-making. But as with any technology adoption, there is the risk of providing humans with suboptimal experiences with AI. Here are three suggestions for enabling good human experiences with the use of AI.

- Define AI strategy – Success with AI begins with a well-defined strategy that identifies how AI will enable achievement of business goals and objectives. But AI success is not just business success or technical success with AI models; it is also whether users are happy with AI and perceive it to be a valid solution.[xii]
- Map current customer journeys – Mapping current customer journeys may expose where user interactions are problematic and may benefit from the introduction of AI.
- Start and continually monitor the experience – Happy Signals, an experience management platform for IT, states that “humans are the best sensors”. Humans are working in technological environments that are in a constant state of change and evolution. Actively seeking out and acting upon feedback from humans regarding their experiences with technology raises awareness of the user experience and fosters a more human-centric approach to technology use and adoption.

The best way to ensure that AI-enabled technologies are more human is to design them with empathy. Design thinking, customer journey mapping, and experience management will help ensure that AI stays in touch with the “human” side.

Need help with customer journey mapping? Perhaps using design thinking techniques to develop solution-rich, human-centered solutions for addressing challenges with customer and employee experience? We can help – contact Tedder Consulting for more information.

[i] “Siri” is a trademark of Apple, Inc.
[ii] “Alexa” is a trademark of Amazon.com, Inc. or its affiliates.
[iii] “Bixby” is a trademark of Samsung Electronics Co., Ltd.
[iv] https://www.psychologytoday.com/us/basics/anthropomorphism, retrieved March 2024.
[v] Lasseter, John. Toy Story. Buena Vista Pictures, 1995.
[vi] Orwell, George. Animal Farm. Collins Classics, 2021.
[vii] Milne, A.A., 1882-1956. Winnie-the-Pooh. E.P. Dutton & Co., 1926.
[viii] Wikipedia. “History of artificial intelligence”. Retrieved March 2024.
[ix] https://www.contactfusion.co.uk/the-challenges-of-using-ai-chatbots-problems-and-solutions-explored, retrieved March 2024.
[xi] https://www.qualtrics.com/experience-management/customer/customer-journey-mapping, retrieved March 2024.
[xii] Ganesan, Kavita. “The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices, & Real-World Applications”. Opinosis Analytics Publishing, 2022.
Introduction to Bridging and Switching

Bridges and switches operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data-link layer devices.

Bridges became commercially available in the early 1980s. At the time of their introduction, bridges connected and enabled packet forwarding between homogeneous networks. More recently, bridging between different networks has also been defined and standardized. Switching and bridging technologies pass information by learning the addresses of connected devices, and then filtering and forwarding traffic based on the collected addresses. Networks that deploy bridging and switching normally see fewer collisions on their network segments.

Switching technology has emerged as the evolutionary heir to bridging-based internetworking solutions. Bridges of old performed this functionality in software; today’s switches perform the bridging in hardware, allowing for increases in performance. In addition, switches can implement this bridging functionality for every connected host, allowing full duplex operation by virtually eliminating collisions. Switching implementations now dominate applications in which bridging technologies were implemented in prior network designs. Superior throughput performance, higher port density, lower per-port cost, and greater flexibility have contributed to the emergence of switches as replacement technology for bridges and as complements to routing technology.

In order for bridges to begin passing information to and from devices and segments, they must first familiarize themselves with the addresses associated with those devices and segments. Initially, they must let all information pass through them, even if that information is not intended for a device on the opposite side of the bridge or switch. This is known as flooding. Once the devices have allowed the information from the connecting segments to pass through, they can log the address information into tables for further use in forwarding and filtering.

Forwarding / Filtering

Bridging and switching devices determine whether incoming frames are destined for a device on the segment where they were generated. If so, the devices do not forward the frames to the other ports. This is an example of filtering. If the MAC destination address is on another segment, the devices send the frames to the appropriate segment. This is known as forwarding. When the switched network includes loops for redundancy, an Ethernet switch can prevent duplicate frames from traveling over the redundant path if spanning tree protocol is configured.

Frame Transmission Modes

Cut-Through

In cut-through mode, the switch checks the destination address (DA) as soon as the header is received and immediately begins forwarding the frame.

Store and Forward

In store-and-forward mode, the switch must receive the complete frame before forwarding takes place. The destination and source addresses are read, the cyclic redundancy check (CRC) is performed, relevant filters are applied, and the frame is forwarded. If the CRC is bad, the frame is discarded. Latency through the switch varies with frame length.

Fragment Free

In fragment-free mode, the switch reads the first 64 bytes before forwarding the frame. Collisions usually happen within the first 64 bytes of a frame, so by reading 64 bytes the switch can filter out collision frames.

What is Redundant Topology?

Bridged networks, including switched networks, are commonly designed with redundant links and devices.
Such designs eliminate the possibility that a single point of failure will result in loss of function for the entire switched network. Redundant topology is the duplication of switches, other devices, or connections so that, in the event of a failure, the redundant devices, services, or connections can perform the work of those that failed. While redundant designs may eliminate the single-point-of-failure problem, they introduce several other problems that must be taken into account:

- Without some loop avoidance service in operation, each switch will flood broadcasts endlessly. This situation is commonly called a broadcast storm.
- Multiple copies of nonbroadcast frames may be delivered to destination stations. Many protocols expect to receive only a single copy of each transmission, and multiple copies of the same frame may cause unrecoverable errors.
- Database instability in the MAC address table results from copies of the same frame being received on different ports of the switch. Data forwarding may be impaired when the switch consumes resources coping with address thrashing in the MAC address table.

Spanning-Tree Protocol

Spanning-Tree Protocol is a link management protocol that provides path redundancy while preventing undesirable loops in the network. For an Ethernet network to function properly, only one active path can exist between two stations. To provide path redundancy, Spanning-Tree Protocol defines a tree that spans all switches in an extended network.

The purpose of the Spanning-Tree Protocol is to maintain a loop-free network. A loop-free path is accomplished when a device recognizes a loop in the topology and blocks one or more redundant ports. Spanning-Tree Protocol forces certain redundant data paths into a standby (blocked) state. If one network segment in the Spanning-Tree Protocol becomes unreachable, or if Spanning-Tree Protocol costs change, the spanning-tree algorithm reconfigures the spanning-tree topology and reestablishes the link by activating the standby path. Spanning-Tree Protocol operation is transparent to end stations, which are unaware whether they are connected to a single LAN segment or a switched LAN of multiple segments. Spanning-Tree Protocol continually explores the network so that a failure or addition of a link, switch, or bridge is discovered quickly. When the network topology changes, Spanning-Tree Protocol reconfigures switch or bridge ports to avoid loss of connectivity or the creation of new loops.

The Spanning-Tree Protocol provides a loop-free network topology by:

- Electing a root bridge
- Electing root ports for nonroot bridges
- Electing one designated port for each network segment

A loop-free path is accomplished when the switches and ports elected by this operation recognize a loop in the topology and block one or more redundant ports. Spanning-Tree Protocol operation requires that, for a network, a root bridge is elected, root ports for non-root bridges are determined, and a designated port is selected for each segment. Ports are placed in forwarding or blocking states; nondesignated ports are normally in the blocking state to break the loop topology.

BPDUs (bridge protocol data units) are exchanged every 2 seconds. One of the pieces of information exchanged is the bridge ID, which carries the MAC address. The root bridge on a network is determined as the bridge with the lowest bridge ID.

Propagation delays can occur when protocol information is passed through a switched LAN. As a result, topology changes can take place at different times and at different places in a switched network.
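Before looking at how ports transition between states, the election rule above is worth making concrete: the bridge advertising the lowest bridge ID wins, with priority compared first and the MAC address breaking ties. The priorities and MAC addresses in this sketch are hypothetical lab values.

```python
# Minimal sketch of STP root-bridge election: lowest (priority, MAC) wins.
# Priorities and MAC addresses below are hypothetical lab values.
bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:1a:2b:00:00:10"},
    {"name": "SW2", "priority": 32768, "mac": "00:1a:2b:00:00:02"},
    {"name": "SW3", "priority": 4096,  "mac": "00:1a:2b:00:00:99"},
]

def bridge_id(b):
    # A real bridge ID concatenates a 16-bit priority with the 48-bit MAC;
    # comparing the (priority, mac) tuple yields the same ordering.
    return (b["priority"], int(b["mac"].replace(":", ""), 16))

root = min(bridges, key=bridge_id)
print("root bridge:", root["name"])  # SW3: its lower priority wins outright
```

Had all three priorities been equal, SW2 would have won on its lower MAC address, which is why administrators lower the priority on the switch they actually want as root rather than leaving the election to chance.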
When a switch port transitions directly from non-participation in the stable topology to the forwarding state, it can create temporary data loops. Ports must wait for new topology information to propagate through the switched LAN before starting to forward frames. They must also allow the frame lifetime to expire for frames that have been forwarded using the old topology.

Each port on a switch using Spanning-Tree Protocol exists in one of the following states: blocking, listening, learning, forwarding, or disabled.

Movement of the Port States

- From initialization to blocking – When Spanning-Tree is initialized, all ports start in the blocking state to prevent bridge loops. A port stays in the blocked state if the spanning tree determines that there is another path to the root bridge with a better cost. Blocking ports can still receive BPDUs.
- From blocking to listening or to disabled – Ports transition from the blocked state to the listening state. When the port is in the transitional listening state, it is able to check for BPDUs. This state indicates that the port is getting ready to transmit but would like to listen a little longer to make sure it does not create a loop.
- From listening to learning or to disabled – When the port is in the learning state, it is able to populate its MAC address table with MAC addresses heard on its ports, but it does not forward frames.
- From learning to forwarding or to disabled – In the forwarding state, the port is capable of sending and receiving data.
- From forwarding to disabled – At any time, the port can become nonoperational.

The virtual LAN (VLAN) permits a group of users to share a common broadcast domain regardless of their physical location in the internetwork. Creating VLANs improves performance and security in the switched network by controlling broadcast propagation. Within the switched internetwork, VLANs provide segmentation and organizational flexibility. Using VLAN technology, you can group switch ports and their connected users into logically defined communities of interest, such as coworkers in the same department, a cross-functional product team, or diverse user groups sharing the same network application.

- A VLAN is a logical broadcast domain that can span multiple physical LAN segments.
- A VLAN can be designed to provide stations logically segmented by functions, project teams, or applications without regard to the physical location of users.
- Each switch port can be assigned to only one VLAN.
- Ports in a VLAN share broadcasts. Ports that do not belong to the same VLAN do not share broadcasts. This improves the overall performance of the network.
- A VLAN can exist on a single switch or span multiple switches.
- VLANs can include stations in a single building or multiple-building infrastructures, or they can even connect across wide-area networks (WANs).

Catalyst 1900 ports are configured with a VLAN membership mode that determines which VLAN they can belong to. Membership modes are assigned as either static or dynamic.

Static Assignment

Assignment of the VLAN to a port is statically configured by an administrator.

Dynamic Assignment

The Catalyst 1900 supports dynamic VLANs by using a VMPS (VLAN Membership Policy Server). The VMPS can be a Catalyst 5000 or an external server; the Catalyst 1900 cannot operate as the VMPS. The VMPS contains a database that maps MAC addresses to VLAN assignments. When a frame arrives on a dynamic port, the Catalyst 1900 queries the VMPS for the VLAN assignment based on the source MAC address of the arriving frame.
A dynamic port can only belong to one VLAN at a time. Multiple hosts can be active on a dynamic port only if they all belong to the same VLAN.

ISL, or Inter-Switch Link, is a Cisco proprietary protocol for interconnecting multiple switches and for maintaining VLAN information as traffic goes between switches. The ISL frame tagging used by the Catalyst series of switches is a low-latency mechanism for multiplexing traffic from multiple VLANs over a single physical path. It has been implemented for connections between switches and routers, and for network interface cards used in nodes such as servers. Ports configured as ISL trunks encapsulate each frame with a 26-byte ISL header and a 4-byte CRC before sending it out the trunk port.

VLAN Trunking Protocol (VTP)

VLAN Trunking Protocol (VTP) is a protocol used to distribute and synchronize identifying information about VLANs configured throughout a switched network. Configurations made to a single VTP server are propagated across trunk links to all connected switches in the network.

- VTP allows switched network solutions to scale to large sizes by reducing the manual configuration needed in the network.
- VTP is a Layer 2 messaging protocol that maintains VLAN configuration consistency by managing the additions, deletions, and name changes of VLANs across networks.
- VTP minimizes misconfigurations and configuration inconsistencies that can cause problems, such as duplicate VLAN names or incorrect VLAN-type specifications.
- A VTP domain is one switch or several interconnected switches sharing the same VTP environment. A switch is configured to be in only one VTP domain.

By default, a Catalyst switch is in the no-management-domain state until it receives an advertisement for a domain over a trunk link, or until you configure a management domain. VTP operates in one of three modes: server mode, client mode, or transparent mode. The default VTP mode is server mode, but VLANs are not propagated over the network until a management domain name is specified or learned.

- VTP pruning is a configuration option that restricts flooded traffic within a management domain to the trunk links that actually need to carry a given VLAN’s traffic.
Cyber Hygiene: Best Practices for Enhanced Cybersecurity

Nowadays, cybersecurity isn’t just a concern for tech giants or government agencies; it’s a necessity for everyone. Just as personal hygiene is crucial for maintaining physical health, cyber hygiene is essential for protecting your digital well-being. But what is it, exactly? This concept, similar to the daily habits you practice to stay healthy, involves a set of cyber hygiene practices designed to safeguard your digital presence from cyber threats. Understanding and implementing these practices can dramatically enhance your cybersecurity posture and mitigate potential risks.

Cyber hygiene: What does it mean?

Cyber hygiene refers to the routine practices and steps taken to maintain the health and security of your digital assets. It involves a combination of actions aimed at protecting sensitive data, systems, and networks from unauthorized access, malware, and other cyber threats. This concept is not just about having antivirus software or firewalls; it includes regular updates, strong passwords, and security awareness training. Essentially, cyber hygiene is about creating a robust defense mechanism through consistent and mindful cybersecurity practices.

Core components of cyber hygiene

Understanding the components of cyber hygiene is key to implementing effective security measures. Here are the core elements:

- Password management: Using strong, unique passwords for different accounts and changing them regularly.
- Software updates: Keeping all software, including operating systems and applications, up to date with the latest security patches.
- Antivirus and anti-malware tools: Employing reliable security software to detect and neutralize potential threats.
- Firewalls: Using firewalls to block unauthorized access to your network and devices.
- Data encryption: Encrypting sensitive data to protect it from being accessed by unauthorized individuals.
- Regular backups: Performing regular backups of important data to prevent loss in case of a cyber incident.
- Security awareness training: Educating yourself and others about potential cyber threats and safe online practices.
- Access controls: Implementing strong authentication methods and restricting access to sensitive information.

Benefits of cyber hygiene

Practicing good cyber hygiene can provide numerous benefits. Here are eight key advantages:

Reduced risk of cyberattacks

Effective cyber hygiene minimizes vulnerabilities, making it harder for attackers to breach your systems. Regularly updating software and using strong passwords significantly reduces the likelihood of a successful cyberattack. Employing these cyber hygiene practices creates multiple layers of defense against potential intrusions. This proactive approach helps identify and mitigate risks before they can be exploited.

Enhanced data protection

Ensuring that sensitive data is encrypted and regularly backed up protects it from unauthorized access and loss. Data encryption transforms readable information into an unreadable format, which can only be deciphered with the correct key. Regular backups ensure that even if data is compromised, you can restore it to its previous state. These measures collectively enhance the security of your sensitive information, safeguarding it from cyber threats.

Improved system performance

Keeping systems updated and free of malware can lead to smoother and faster performance. Outdated software can cause compatibility issues and slow down your systems.
By applying the latest updates and scanning for malware, you ensure that your systems operate efficiently. This results in improved performance and a better user experience.

Compliance with regulations

Adhering to cyber hygiene best practices can help meet legal and regulatory requirements for data protection. Many industries are governed by regulations that mandate specific security measures to protect data. Following cyber hygiene practices ensures that your own practices align with these requirements, avoiding potential fines and legal issues. Compliance also enhances your organization’s credibility and trustworthiness.

Cost savings

Preventing cyber incidents through proactive measures is often more cost-effective than dealing with the aftermath of a breach. Investing in cyber hygiene practices can prevent costly security breaches and data loss incidents. The financial impact of a cyberattack, including recovery costs and reputational damage, can far exceed the cost of preventive measures. This proactive approach ultimately leads to significant cost savings.

Increased trust and credibility

Demonstrating a commitment to cybersecurity enhances trust with clients, partners, and stakeholders. When you consistently practice good cyber hygiene, you build a reputation for being security-conscious. This trust is crucial for maintaining positive relationships with clients and partners. It also reinforces your organization’s commitment to safeguarding sensitive information.

Faster incident response

Well-maintained systems and regular training enable quicker detection of and response to security incidents. Having up-to-date systems and trained personnel allows for prompt identification and mitigation of security threats. This readiness minimizes potential damage and speeds up recovery processes. Efficient incident response is essential for maintaining the integrity of your systems and data.

Enhanced security awareness

Regular training and updates improve overall awareness of potential cyber threats and safe practices. Cyber hygiene involves ongoing education about emerging threats and best practices. This heightened awareness helps individuals recognize and respond to potential risks more effectively. A well-informed approach to cybersecurity contributes to a stronger overall defense.

Common cyber hygiene problems

Despite its importance, maintaining good cyber hygiene can be challenging. Here are six common cyber hygiene problems:

Weak or reused passwords

Using simple or reused passwords increases the risk of unauthorized access. Weak passwords can be easily guessed or cracked by attackers using brute-force methods. Reusing passwords across multiple accounts compounds the risk, as a breach of one account can lead to compromises in others. Implementing strong, unique passwords is essential for securing your digital assets.

Neglected software updates

Failing to update software regularly can leave systems vulnerable to exploits. Outdated software may have known vulnerabilities that attackers can exploit. Security patches and updates are released to address these weaknesses and improve protection. Regularly updating software is crucial for maintaining a secure environment.

Lack of encryption

Not encrypting sensitive data can expose it to unauthorized access. Unencrypted data is vulnerable to interception and theft, especially during transmission. Encryption converts data into a secure format, making it unreadable to unauthorized parties. This measure is essential for protecting sensitive information from potential breaches.
Inadequate backup procedures

Not performing regular backups can lead to data loss in the event of a cyber incident. Without regular backups, losing critical data to a cyberattack or hardware failure can be devastating. Backups ensure that data can be restored to its previous state, minimizing the impact of such incidents. Implementing a robust backup strategy is crucial for data resilience.

Insufficient security awareness

A lack of training and awareness about cyber threats can lead to risky behavior. Without proper security awareness training, individuals may fall victim to phishing scams or other cyberattacks. Educating users about potential threats and safe online practices is essential for reducing risk. Awareness training helps individuals make informed decisions and avoid common pitfalls.

Unmanaged access controls

Poorly managed access controls can lead to unauthorized access to sensitive information. Inadequate control over who can access specific data or systems increases the risk of breaches. Implementing strong authentication methods and restricting access based on necessity helps protect sensitive information. Properly managed access controls are crucial for maintaining security.

Cyber hygiene best practices

- Use strong passwords: Create complex passwords and change them regularly to enhance security.
- Enable two-factor authentication (2FA): Add an extra layer of security by requiring a second form of verification.
- Keep software updated: Regularly update operating systems, applications, and security software to protect against vulnerabilities.
- Install and maintain antivirus software: Use reputable antivirus programs to detect and prevent malware infections.
- Conduct regular backups: Frequently back up important data to safeguard against loss or corruption.
- Educate yourself and others: Participate in security awareness training to stay informed about emerging threats and safe practices.
- Encrypt sensitive data: Ensure that sensitive information is encrypted to protect it from unauthorized access.
- Implement strong access controls: Restrict access to sensitive data based on necessity and use strong authentication methods.

5-step cyber hygiene checklist

To ensure you’re following the best practices for cyber hygiene, consider this 5-step cyber hygiene checklist:

1. Update software regularly: Ensure that all software, including operating systems and applications, is up to date with the latest security patches.
2. Use unique passwords: Create complex passwords for each account and use a password manager to keep track of them.
3. Enable two-factor authentication: Activate 2FA on all accounts that offer it for an extra layer of security.
4. Perform regular backups: Back up important data frequently and store backups in a secure location.
5. Educate and train: Participate in security awareness training to stay informed about best practices and potential cyber threats.

Embracing cybersecurity hygiene: The key to a safe digital future

In conclusion, maintaining cyber hygiene is a critical aspect of cybersecurity that everyone should prioritize. By understanding the definition of cyber hygiene and implementing its core components, you can protect yourself and your organization from cyber threats. Following cyber hygiene best practices and utilizing a 5-step cyber hygiene checklist can significantly reduce the risk of security breaches and ensure that your digital life remains secure.
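As a tiny illustration of the "use unique passwords" step in the checklist above, here is a sketch using only Python's standard library; in practice a password manager does this work for you, and the account names below are placeholders.

```python
# Minimal sketch of generating strong, unique passwords with the
# standard library. A password manager automates this in practice.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    # secrets (not random) draws from the OS CSPRNG, suitable for credentials.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per account, so one leak never compromises the rest.
for account in ["email", "bank", "work-vpn"]:
    print(f"{account:10s} {generate_password()}")
```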
Whether through cyber hygiene services or free cybersecurity services, taking proactive steps to maintain good cyber hygiene is essential for a safe and secure digital environment.

Ready to enhance your digital security with good cyber hygiene practices? At InfoTank, we understand that good cyber hygiene strengthens your security posture and protects your information security. Cyber hygiene requires consistent effort and expert guidance to stay ahead of cyber threats. Reach out to InfoTank at email@example.com or call 770-924-7309 for personalized support and robust cybersecurity services. Let's work together to maintain optimal cyber hygiene and secure your digital assets.

What are the essential services and tools for maintaining cyber hygiene?
Maintaining cyber hygiene involves using a range of services and tools to protect your digital environment. Key tools include antivirus software, firewalls, and encryption services. Regular updates and patches are crucial for these tools to effectively guard against cybersecurity risks. Additionally, using password managers and two-factor authentication can enhance your cyber hygiene by ensuring secure access to your accounts.

How can individuals and organizations effectively manage and maintain cyber hygiene?
Maintaining cyber hygiene is vital for both individuals and organizations. For individuals, this involves setting strong passwords, enabling two-factor authentication, and regularly updating software. Organizations should implement a comprehensive cybersecurity strategy, including regular security awareness training and robust data encryption practices. Proper cyber hygiene helps mitigate cybersecurity risks and ensures a secure digital infrastructure.

What is the set of practices that constitutes good cyber hygiene?
The set of practices that defines good cyber hygiene includes several key actions: creating and maintaining strong passwords, using antivirus software, and performing regular backups. Cyber hygiene also requires keeping systems and software up to date with the latest security patches. By consistently applying these practices, you can protect your systems from vulnerabilities and attacks.

How does cyber hygiene compare to personal hygiene?
Cyber hygiene is often compared to personal hygiene because both involve routine practices to maintain health and security. Just as you wash your hands and brush your teeth to prevent illness, you apply cyber hygiene practices to protect your digital assets from cyber threats. Cyber hygiene focuses on securing your information and systems, while personal hygiene concerns physical health. Both are essential for overall well-being, with cyber hygiene aiming to safeguard your digital life.

What is the goal of cyber hygiene?
The goal of cyber hygiene is to ensure the security and integrity of your digital environment. This involves protecting sensitive data, preventing unauthorized access, and mitigating potential threats. Proper cyber hygiene helps you avoid the poor practices that lead to data breaches and security incidents. By following effective cyber hygiene practices, you can enhance your cybersecurity and reduce the risks associated with digital activities.

How does effective cyber hygiene help with cybersecurity risks?
Cyber hygiene is crucial for managing and mitigating cybersecurity risks. Effective cyber hygiene involves regular updates, strong passwords, and security awareness training to defend against potential threats.
Even with good cyber hygiene, staying vigilant and proactive is necessary, as cybersecurity risks constantly evolve. Implementing these practices ensures that cyber hygiene remains robust and capable of addressing new vulnerabilities.
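Two-factor authentication comes up repeatedly in these answers, so it may help to see how little machinery a time-based one-time password (TOTP, the mechanism behind most authenticator apps, defined in RFC 6238) actually involves. The sketch below uses only Python's standard library; the demo secret is hypothetical, since a real secret is issued by the service during enrollment.

```python
# Sketch of TOTP derivation (RFC 6238), the basis of authenticator-app 2FA.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // step                       # 30-second time window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical demo secret; both parties derive the same code, so no password
# needs to travel with the login request.
print(totp("JBSWY3DPEHPK3PXP"))
```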
Cybersecurity is the most important tool for any company operating today. It is the reason that we are not constantly losing data and suffering breaches. However, with the technology industry expanding and the rise of the Internet of Things (IoT), it is a fast-growing concern amongst tech experts. Many are concerned that there is not enough investment in cybersecurity at the present time. This is a crucial concern as the growth and development of internet-related technologies continues, and the reach of this expansion, as of now, knows no bounds: the Internet is well on its way to every inch of the globe. In this article, we will explain why cybersecurity needs more investment than ever before.

A Shifting Tech Landscape
One thing has become abundantly clear in the aftermath of the Target and Equifax hacks: we need more cybersecurity. In our personal and professional lives, cybersecurity is what keeps us from operating under constant pressure with our hair on fire. However, as the technology industry continues its long march toward global interconnectedness, the threat of hacking increases concurrently. This has a number of contributing factors, but the largest culprit is the unified goal of IoT. This goal, shared by tech giants around the world, is centered on creating a global network that expands accessibility and provides limitless information to everyone on Earth. It is a goal very much worth pursuing. The question is: at what cost?

As access to the internet expands and WiFi-capable devices reach more individuals, we must become more aware of this shifting landscape. To avoid being attacked, we must use our wits and make sure that we are buying safe products such as smartphones and routers. This will ensure that potential hackers are, at the very least, dissuaded from attacking your access points. As we integrate more and more technologies into our lives, we must also realize that we are creating access points on the fly. A note on access points: think of them as illicit doorways to your personal life that can come in the form of a smartphone, a smart TV, and even a smart refrigerator. Anything in your life that has access to the Internet must be observed at all times, because it could be a potential access point. As we become more connected through technologies like IoT and blockchain, we also become more vulnerable.

Education is the best investment
It may seem silly to say, but the best way to defend yourself against these hackers is to educate yourself on the principles of cybersecurity. Unfortunately, there isn't a magical access-point defense system you can buy that completely blocks any and all hackers. Instead, one must rely on skill and understanding of cybersecurity to defend effectively against these attacks. Doing that is much simpler than you might think. There are a number of resources online, created by actual developers, to help you understand the fundamentals of cybersecurity easily and quickly. They often boil down to a few simple rules that can dramatically reduce the chance of a successful hack, and they can provide basic peace of mind wherever you go online or access private data at home. In this digital age, data is money. The most important thing to know is that you are a target. Better yet, everyone you know is a target, and hackers try day and night to steal your data.
Unless you implement these skills, you and your family will be at the mercy of merciless hackers. Ultimately, cybersecurity needs investment because every company is now a tech company and every person is a data mine. Whether we like it or not, every person is valuable to hackers and digital miscreants. The only way to defend ourselves, and our global brethren, against these hackers is to invest what we can into cybersecurity technologies and our own education. Taking the time to understand the issue and buying the right safety products can turn your data hubs into lockboxes.

This issue is only going to grow as access to the internet continues its path toward total coverage. The growing audience deepens concerns amongst experts who say our cybersecurity infrastructure needs an overhaul to keep up with the expanding access. There is an effort to push cybersecurity forward, but at the end of the day, the industry needs our help. Educating yourself on the principles and being smart with purchases could give the cybersecurity industry the investment it needs to deal with this global crisis.

Michael Volkmann is an entrepreneur with a focus on business operations and finance.
Definition of PKI in the Network Encyclopedia.

What is PKI (public key infrastructure)?
Public Key Infrastructure, also known as PKI, is a set of services that support the use of public-key cryptography in a corporate or public setting. A public key infrastructure (PKI) enables key pairs to be generated, securely stored, and securely transmitted to users so that users can send encrypted transmissions and digital signatures over distrusted public networks such as the Internet.

How It Works
Generally, a public key infrastructure consists of the following coordinated services:
- A trusted certificate authority (CA) that can verify user identities and issue users digital certificates and public/private key pairs. Sometimes the verification of user identities is performed by a separate registration authority (RA), but this service can also be integrated with the CA.
- A certificate store in which users can access the public keys of other users for encrypting messages or validating digital signatures. This store is usually based on the X.500 directory recommendations.
- A digital certificate and key management system for generating, storing, and securely transmitting certificates and key pairs to users who request them.

Public key infrastructures can have different scopes. For example, a corporate enterprise can use Microsoft Certificate Server to establish a PKI for all its users and for those of partner companies such as suppliers and wholesalers. The PKI system can then be used to secure transactions between users that are sent over the Internet. PKIs can also be established on a national or global scale to support secure e-commerce transactions over the Internet involving users and vendors who are geographically and politically separated. PKIs on this scale consist of a hierarchy of CAs managed by different governments or companies and linked to a trusted root CA (such as the U.S. government). The current leader in worldwide PKI implementation is probably VeriSign, Inc., which is both a vendor of CA software and a CA.

In order for a public key infrastructure to work, it must be implemented in a hierarchical fashion with authorities, super-authorities, and root authorities, similar to the Internet's Domain Name System (DNS). Standards bodies and cryptography vendors, such as the PKIX working group of the Internet Engineering Task Force (IETF), Pretty Good Privacy (PGP), Simple Public Key Infrastructure (SPKI), and Public Key Cryptographic Standards (PKCS), have proposed global public key infrastructures, but no universal standard has been agreed upon, and implementations of the existing standards are often not interoperable.
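To ground the CA concept, here is a hedged sketch of the very first step a CA performs: generating its own key pair and a self-signed root certificate. It assumes the third-party Python `cryptography` package; a production CA also involves registration, revocation, hardware key protection, and audited processes that this sketch omits.

```python
# Illustrative sketch of the CA role in a PKI: a key pair plus a self-signed
# root certificate in which subject == issuer. Assumes "cryptography" package.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: the root vouches for itself
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())             # signing is how a CA vouches for a binding
)
print(cert.subject)
```

Certificates the CA issues to users are built the same way, except the subject is the user's name, the public key is the user's, and the signature is made with this CA key.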
- October 28, 2019
- Tags: Orlando, Tampa

FOR IMMEDIATE RELEASE
6,572 Elementary Students in Orlando Receive Cyber Safety Training for National Cybersecurity Awareness Month
57 Elementary Schools Participate in New City-Wide Initiative – Cyber Safety Day Orlando (10/31/19)

Orlando, FL – The nonprofit Center for Cyber Safety and Education (Center) teamed up with local businesses to provide 6,572 3rd graders with free cyber safety training materials during the first-ever Cyber Safety Day Orlando, as part of National Cybersecurity Awareness Month. Schools that registered for Cyber Safety Day Orlando received the award-winning Garfield's Cyber Safety Adventures: Privacy. The lesson, conducted Wednesday morning city-wide, taught children how to use the internet in a safe and secure way. Students learned that they should never share their full name, address, phone number, gender, age, plans, or password with anyone online. The takeaway message is that "online friends are not the same as real friends."

The Center's Children's Internet Usage Study of kids in grades 4-8 found that 90% have at least one device (phone, tablet, or computer) in their bedroom to access the internet, and 40% have connected or chatted with a stranger online. Of those youth, 53% shared their phone number, 30% texted the stranger, and 15% tried to meet the stranger.

"Children seem to learn to use the internet before they learn how to walk. Yet, they are not being taught how to safely navigate the internet. It is imperative that cyber safety education becomes a priority in our schools, especially since most are adopting 1:1 digital learning in the classroom," said Patrick Craven, the Center's director.

Introduced in the fall of 2016 by the Center and legendary cartoonist Jim Davis, Garfield's Cyber Safety Adventures include cartoons, comic books, posters, trading cards, and stickers that show Garfield and friends tackling cyber safety issues such as privacy, the dangers of posting online, online etiquette, cyberbullying, and more. The program has already made a great impact: a study of over 1,000 students who participated in Garfield's Cyber Safety Adventures found an increase in their cyber safety knowledge of 36 percent.

The event was made possible thanks to generous contributions from individuals and Orlando companies that are committed to making cyber safety education a priority in the classroom, including Amazon Web Services as district sponsor, Microsoft, SAIC and GoldSky Security.

"Teaching our youth, at an early age, to practice good cyber hygiene is a critical step towards making our future world safer on-line as we move further into the Digital Age. We are extremely honored to be partnered with the Center for Cyber Safety and Education, ISC 2, our local schools, and Garfield for this fun day of teaching 3rd grade students how to stay safe in cyberspace," said Ron Frechette, Founder & CEO at GoldSky Security.

Garfield's Cyber Safety Adventures Series won the national Learning® Magazine 2019 Teachers' Choice Award for the Classroom and the 2019 Academics' Choice Smart Media Award. Teachers chose the Garfield materials for their ability to engage elementary children and foster retention of core cyber safety lessons.

About Center for Cyber Safety and Education
The Center for Cyber Safety and Education (Center) is a non-profit charitable trust committed to making the cyber world a safer place for everyone.
The Center works to ensure that people across the globe have a positive and safe experience online through its award-winning educational programs, scholarships, and research. Visit www.IAmCyberSafe.org to learn more.

About GoldSky Security, LLC
GoldSky Security, LLC is a full-service cybersecurity firm dedicated to helping small-to-midsize businesses (SMBs) with their IT security and compliance needs. GoldSky's services are custom designed for each client, and the company's goal for every client it serves is to reduce the risk of cyberattacks and assist in achieving regulatory compliance mandates. For more information about GoldSky Security services, visit www.goldskysecurity.com
AI has been rapidly evolving in recent years, with the IT industry placing growing demand on this innovative technology. It began life with humble origins compared to the immense potential that modern AI software can offer for business networks. This blog will look at what AI is, how it originated and developed, and how your business can take advantage of this tool.

A Brief Look at AI for Business Networks – What It Is (And Isn't)
It's important that we know what AI actually is. Indeed, many businesses don't entirely understand how AI can be beneficial for their tech problems and solutions. Let's define what AI is and what it isn't.

When someone mentions the term "Artificial Intelligence," you probably think of automatic computers or machines, like robots. This belief was widely popularized by Alan Turing's statement claiming that "one will be able to speak of machines thinking without expecting to be contradicted." However, artificial intelligence IT solutions are actually a bit more complicated than this; the modern realm of AI certainly isn't capable of self-awareness just yet, but it is still capable of offering a huge amount of potential for businesses. So, AI isn't just an android; rather, it's a program that is capable of completing tasks to a similar standard as a human or better.

AI systems can be categorized as Narrow, General, or Super-Intelligent. Narrow and general AI systems are already available for business networks to implement as IT solutions; these systems serve to do a small number of specialized tasks. Super-intelligent AI is still in development, though, and will likely not exist for a long time.

AI hasn't always been at its current standard. In fact, it's taken a long time for AI to evolve to its current stage! Of course, the earliest origins of artificial intelligence could be traced to Alan Turing's original Turing machine – the first-ever variant of a computer, although certainly nothing like the modern computers we rely on for business nowadays. Many don't know that AI has had a notable prominence in our society and culture for easily a century. The first machine capable of automatically playing chess without human intervention was Leonardo Torres y Quevedo's creation in 1914. Following this creation, in 1929, Japan's Makoto Nishimura developed the first-ever "robot". The model was able to use air mechanisms to move its hand and head and control its expression! It wasn't until 1955 that "artificial intelligence" became a term.

Despite this early start, tales surrounding artificial intelligence were commonplace: stories of IT solutions that would serve to better the world and provide an alternative to normal tech solutions, science fiction tales, and seemingly unbelievable feats of business computer systems. Among the first artificial intelligence chat programs, in the 1990s, were Jabberwacky and Cleverbot, chatbots designed to "simulate natural human chat in an interesting, entertaining, and humorous manner." A new line of AI also came around the turn of the century. The Furby toy was created and went on to be the first "pet" robot for children, and in 1999, Sony released AIBO, a robotic dog that could recognize over one hundred voice commands and even communicate directly with its owner. This proved a big step forward in the development of artificial intelligence software and programs.

Modern AI Programs
Modern artificial intelligence has developed rapidly.
As a business owner, you should know how these IT solutions could apply to your organization. Some AI programs used in businesses include voice-command applications like Apple's Siri and Amazon's Alexa, which respond to user inquiries in natural language.

So, how can modern businesses use AI programs to boost their business networks? Artificial intelligence can be applied in many different ways, from helping to reduce the incidence of human error (which is inevitable in any business) through to automating production processes and supply chains based on demand and availability. Artificial intelligence can also have a big impact on your business's hiring practices, helping to streamline the recruitment process and lower the cost of hiring new talent for your firm! Indeed, the potential that artificial intelligence offers even today is massive – and as time passes, artificial intelligence systems will likely only get more complex and refined.

Learn More About AI for Businesses
For more information on how AI could benefit your business networks, contact us. Our team provides quality IT services so that you can get back to doing what you do best – managing your business. Don't compromise – let us help you find the perfect AI and IT solutions today!
Billionaire tycoon Elon Musk has announced that he is rescinding his $44bn offer to buy Twitter, alleging that the social media platform is refusing to disclose how many of its users are fake. Twitter claims that fewer than 5% of the accounts on its network belong to bots – automated software agents posing as users – but Musk disputes this figure, alleging the platform's management is hiding the true number.

Musk's filing to terminate the deal is unlikely to go smoothly and there are questions as to whether it will be successful. Twitter appears to be preparing for legal action to force proceedings, with its legal team stating in a letter that it "has breached none of its obligations under the Agreement". The dispute reflects the difficulty of reliably identifying bots on Twitter, researchers told Tech Monitor, not least because there is no agreed definition of what counts as a bot. However, they added, Twitter could almost certainly do a better job of addressing bots on its platform.

How to spot a bot on Twitter
The term "bot" is used to describe various kinds of "inauthentic accounts", according to the creators of Botometer, a tool which claims to be able to tell whether a Twitter account is run by a human or not. These include accounts that post automated updates, groups posting misinformation, and other fake profiles. This variety makes it difficult to identify bots reliably. To show how difficult it can be to spot an inauthentic account, Botometer recently gave Musk's own Twitter account a score of four out of five for "bot-like behaviour", although on another day it only scored 0.5 out of 5.

Properly assessing the authenticity of a Twitter account currently requires a human reviewer, argues Mirco Musolesi, professor of computer science at University College London (UCL). Using AI to identify bots would require a large sample of properly labelled training data, he explains: around 50,000 randomly selected tweets reviewed by an expert. That would be expensive, although not impossible for a company of Twitter's size, Musolesi says. "Even if you are talking £20m, that is negligible compared to the value of Twitter."

Simply identifying Twitter bots is less of a challenge than assessing their objective, Musolesi adds. "The goal behind [a bot's] posting is difficult to define," he told Tech Monitor. That means addressing bot-powered misinformation on the site is not as simple as spotting which accounts are automated. "These accounts are trying to influence the discussions that are happening on Twitter, but if you think about it the same thing might be said by 'real' people involved in the discussion on Twitter."

Shi Zhou, associate professor at the Department of Computer Science at UCL, suggests that spotting a bot account can be as simple as looking at whether it interacts with other accounts. If an account comments on or replies to other posts, then it is likely real, or at least controlled by a human, Zhou says. "Most bots aren't designed to interact as they are designed to be one in a large number."

How many Twitter users are bots?
In a company filing last month, Twitter said that fewer than 5% of its active users are bots. This would mean that there are no more than 11.5m bots on the social network. Research by Zhou and his team has identified "millions" of Twitter bots. Many of them are very simple automated accounts, he explains. "People think bots use AI and are complicated, but most are primitive software," he explains.
"They can still do a lot of harm, including spam attacks. Not all bots are malicious, Zhou says, but the majority are. “The good and useful bots are a very small portion of the overall number," he says. "Most are malicious in some way." Even seemingly innocuous bots could pose a threat, Zhou says. One Twitter botnet, known as the Star Wars botnet, has 300,000 bots hidden among the public accounts each tweeting quotes from Star Wars, he explains. "What are they waiting for? I think they are waiting for a day when they can make a one off hit." Zhou says that when his team published its research identifying millions of bots, Twitter took no action. "I would have expected Twitter to contact us, but we heard nothing," he says. "A year after we published our paper around half of the bots we discovered were still there on Twitter." "I don’t think Twitter is very worried about it," he says. "When they remove one million bots it is a loss of a million users, so there is no real demand." This is also true of other social platforms, Zhou added. "You can detect bots and those hosting them don’t care really. So we’ve moved on to other topics.” So how many Twitter users are bots? “My honest answer is I don’t know as it depends on the definition," says Musolesi. Zhou agrees that any figure would be a guess but he is sceptical of Twitter's 5% estimate. "I don’t think they have any real evidence," he says. "They get it with a small sample size to find out how many are bots. But they sample active users and most bots won’t be currently active, or only tweet a small number." Even so, 5% is cause enough for alarm, Zhou says. “If they find 5% from such a small sample of active users, that is very scary."
Is the era of the passwordless web near?

We live in a connected world. In this world, we are constantly interacting with a myriad of services, including work-related services, financial institutions, government agencies, social sites, entertainment services, health providers, and more. With each one of these services, more often than not we're required to generate, remember, and use usernames and passwords to gain access to the services' websites and apps.

The username and password method used to authenticate people so that they can access websites and apps has served us well for decades, and for most services will continue to do so for years to come; however, there is a problem with this method. Usernames and passwords really don't keep people's accounts safe. They can be shared, guessed, stolen, and breached, which leaves accounts and data vulnerable to theft and misuse.

To protect an account, industry experts recommend that people create unique usernames and passwords for each and every account they have. The reason for this is that breached credentials on one service cannot then be used to hack into another service. Most people do not heed this recommendation, though, since remembering a cacophony of credentials proves too difficult, if not impossible. To prove the point, according to the 2018 MEF consumer trust study, the average person reuses just three username and password combinations across all their accounts.

Using a password manager to overcome username and password chaos is an important step that people should take to protect their accounts. There are numerous password managers to choose from; some are very good and others are not. Industry experts, however, don't see a long-term future in password managers. They are looking beyond the password to an era where people securely and conveniently log in to any web service with passwordless access.

Earlier this month (March 4, 2019), with the release of the Web Authentication (WebAuthn) specification, the World Wide Web Consortium and the FIDO Alliance took a step towards making a passwordless world a reality. The WebAuthn specification details interoperable technical standards that companies can build into their sites and apps so that people can use their biometrics, connected devices, and related authenticators to gain access to their services rather than having to rely on usernames and passwords.

Publishing the specification is a huge first step toward achieving a passwordless world. And there is every reason to believe that one day the passwordless world will take hold, as the specification already has support from Google, Microsoft, Mozilla, Nok Nok Labs, Yubico, PayPal, Qualcomm, and other industry leaders. However, if history holds true, organizations have time to evaluate and implement this new standard. Not only will it take time to refine the standard and for its deployment to achieve critical mass throughout the millions of sites and apps within the datasphere, but it will also take time for people to understand, trust, and adopt the new behaviors that will be required of them to use WebAuthn-powered services.
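At its core, WebAuthn replaces the shared password with public-key cryptography: the server stores only a public key and checks that the user's device can sign a fresh challenge. The sketch below illustrates that principle only; it is not the actual WebAuthn/CTAP message format, and it assumes the third-party Python `cryptography` package. In a real deployment the private key never leaves the authenticator hardware.

```python
# Minimal sketch of the challenge-response idea underlying WebAuthn.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the authenticator creates a key pair; the server stores the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a fresh random challenge ...
challenge = os.urandom(32)
# ... the authenticator signs it (after a biometric/PIN check, in real life) ...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
# ... and the server verifies the signature. No password ever crosses the wire,
# and a breach of the server leaks nothing that lets an attacker log in.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified: user authenticated")
```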
What 8-bit field exists in an IP packet for QoS?

In IP (Internet Protocol) packets, an 8-bit field called the Type of Service (ToS) field carries Quality of Service (QoS) information. The ToS field is used to prioritize traffic in the network based on the specific requirements of an application or service. It enables network administrators to assign different levels of importance to different types of traffic, which helps ensure that mission-critical data is given higher priority than less important data.

The classic ToS field is divided into parts: the first three bits specify the Precedence of the packet, the next four bits specify the ToS subfield, and the final bit is reserved. In more recent IP packet formats, however, this byte has been redefined as the Differentiated Services (DS) field, which provides a more granular approach to QoS. Within this 8-bit field, the six-bit Differentiated Services Code Point (DSCP) allows 64 different values to be assigned, providing much finer granularity than the old three Precedence bits; the remaining two bits carry Explicit Congestion Notification (ECN). Each DSCP value represents a specific QoS class, and network administrators can use these values to assign different levels of priority to specific types of traffic.

In summary, both the ToS and DSCP interpretations of this byte are used in IP packets to provide QoS information. The ToS interpretation is the older, less granular approach, while the DSCP interpretation provides a more sophisticated, fine-grained approach.
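As a quick illustration of the bit layout, the sketch below decodes the byte in plain Python; the field boundaries follow the DiffServ definition described above.

```python
# Sketch: decoding the 8-bit ToS/DS byte from an IPv4 header. The high six
# bits are the DSCP (the top three of which coincide with the legacy
# Precedence field); the low two bits carry ECN.
def decode_tos_byte(tos: int) -> dict:
    return {
        "dscp": tos >> 2,          # 6-bit Differentiated Services Code Point
        "precedence": tos >> 5,    # legacy 3-bit IP Precedence
        "ecn": tos & 0b11,         # 2-bit Explicit Congestion Notification
    }

print(decode_tos_byte(0xB8))  # 0xB8 -> DSCP 46 ("EF", expedited forwarding)
```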
What Is Cyber Asset Attack Surface Management (CAASM)?
Cyber Asset Attack Surface Management, or CAASM, is a security solution that focuses on identifying and managing the attack surface of an organization's digital assets. The aim of CAASM is to minimize the potential entry points for cyber threats and reduce the risk of a successful cyber attack.

CAASM creates a detailed map of all the computers, servers, networks, and other digital assets your organization uses. It identifies potential weak spots in these assets that might be exploited by cyber attackers and provides actionable measures you can take to remediate vulnerabilities and prevent possible attacks. CAASM involves regular monitoring, assessment, and management of your digital assets to ensure they remain secure as new threats emerge and technology evolves. This proactive approach to cybersecurity can significantly improve an organization's cybersecurity posture.

This is part of a series of articles about attack surface.

How CAASM Works: 8 Key Components
CAASM is a layered approach to cybersecurity involving several key components. Let's look at each of these components in detail.

1. Asset Discovery
You can't protect what you don't know exists. Asset discovery is the process of identifying all the digital assets within an organization's network. This includes everything from computers and servers to mobile devices and cloud-based assets. As new assets are added and old ones are retired, the asset inventory needs to be updated accordingly. This ensures that no asset is left unprotected and vulnerable to cyber threats.

2. Vulnerability Assessment
Once all assets have been identified, the next step is to assess them for vulnerabilities. This involves scanning the assets for known vulnerabilities that could be exploited by cyber attackers. Vulnerability assessment also involves evaluating the potential impact of each vulnerability. This helps prioritize which vulnerabilities need to be addressed first, based on the potential damage they could cause.

3. Threat Prioritization
Not all threats are created equal. Some pose a higher risk to an organization than others. Threat prioritization is the process of ranking identified vulnerabilities based on their potential impact. This prioritization enables organizations to allocate their resources more effectively, addressing the most critical threats first.

4. Integration with Existing Security Tools
CAASM needs to be integrated with an organization's existing security tools in order to be effective. This includes everything from firewalls and antivirus software to intrusion detection systems and security information and event management (SIEM) systems. By integrating CAASM with these tools, organizations can leverage their existing security infrastructure to further enhance their cybersecurity posture.

5. Continuous Monitoring
Cyber threats are constantly evolving. New vulnerabilities are discovered every day, and cyber attackers are always finding new ways to exploit them. This is why continuous monitoring is a key component of CAASM. Continuous monitoring involves regularly scanning the network for changes that could indicate a potential threat, including new assets being added to the network, changes in user behavior, and signs of unauthorized access.

6. Remediation and Mitigation
CAASM systems can provide remediation guidance, allowing IT and security teams to fix security issues before they escalate.
Some solutions provide automated patch deployment or configuration changes to improve security posture. Mitigation involves implementing measures to reduce the impact of potential attacks. This could involve segregating networks, implementing access controls, or deploying intrusion prevention systems.

7. Reporting and Analysis
Reporting involves documenting the identified threats, the actions taken to address them, and the results of these actions. This provides a clear record of the organization's cybersecurity efforts and helps identify areas for improvement. CAASM systems can evaluate the collected data to identify trends and patterns, which can provide valuable insights into the evolving threat landscape and help shape future cybersecurity strategies.

8. Incident Investigation
Despite your best efforts, breaches can and do occur. When they do, incident investigation becomes critical. CAASM supports incident investigation by providing rich information about the cyber assets affected by a breach. This can help determine how the breach occurred, what assets were affected, and what measures need to be taken to prevent similar incidents in the future.

Tips from the Expert
Dima Potekhin, CTO and Co-Founder of CyCognito, is an expert in mass-scale data analysis and security. He is an autodidact who has been coding since the age of nine and holds four patents that include processes for large content delivery networks (CDNs) and internet-scale infrastructure.

In my experience, here are tips that can help you better implement Cyber Asset Attack Surface Management (CAASM):
- Leverage active asset discovery in addition to passive methods: While passive discovery helps avoid network disruptions, active probing can uncover shadow IT assets and rogue devices that are often missed, providing a more comprehensive view of your attack surface.
- Prioritize attack paths over individual vulnerabilities: Instead of focusing solely on the severity of individual vulnerabilities, assess their exploitability within attack chains. Address vulnerabilities that could serve as gateways in multi-step attacks.
- Automate asset classification and tagging: Implement machine learning models that can automatically classify and tag assets based on their function, risk level, and criticality to ensure that the most crucial assets receive appropriate security measures.
- Implement dynamic risk scoring for assets: Instead of static risk assessments, adopt dynamic risk scoring models that update based on real-time threat intelligence, asset exposure, and business impact to help prioritize remediation efforts effectively.
- Enforce continuous compliance checks: Continuously monitor for compliance with security policies and regulatory standards, using CAASM to automatically identify and remediate non-compliant assets before they become a liability.

What Are the Benefits and Use Cases for CAASM?
Let's look at some reasons to implement cyber asset attack surface management.

Automating Cyber Asset Inventory and Maintenance
One of the primary benefits of CAASM is its ability to automate cyber asset inventory and maintenance. Organizations often have thousands of cyber assets spread across multiple locations and platforms, and manually tracking and maintaining them is often infeasible. CAASM automates this process, ensuring that all assets are accurately inventoried and maintained. This saves time and resources while reducing the risk of human error.
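The core of automated inventory is consolidation: pulling asset records from many tools and deduplicating them on a stable identifier. The sketch below shows the idea with three hypothetical source exports; the source names and fields are illustrative, not any particular product's schema.

```python
# Hedged sketch of inventory consolidation: merge asset records from several
# hypothetical sources (EDR export, cloud API, CMDB dump) into one inventory
# keyed by a stable asset identifier, so each asset appears exactly once.
from collections import defaultdict

edr_export  = [{"id": "srv-01", "os": "Ubuntu 22.04"}]
cloud_api   = [{"id": "srv-01", "region": "us-east-1"},
               {"id": "vm-17",  "region": "eu-west-1"}]
cmdb_export = [{"id": "vm-17",  "owner": "finance"}]

inventory = defaultdict(dict)
for source in (edr_export, cloud_api, cmdb_export):
    for record in source:
        inventory[record["id"]].update(record)   # later sources enrich, not duplicate

for asset_id, attributes in inventory.items():
    print(asset_id, attributes)
# srv-01 {'id': 'srv-01', 'os': 'Ubuntu 22.04', 'region': 'us-east-1'}
# vm-17  {'id': 'vm-17', 'region': 'eu-west-1', 'owner': 'finance'}
```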
Reducing the Attack Surface
The attack surface is the total number of points where an unauthorized user can try to enter data into, or extract data from, an environment. The larger the attack surface, the more opportunities there are for cybercriminals to exploit vulnerabilities and breach an organization's defenses. CAASM helps to reduce the attack surface by identifying and eliminating unnecessary or redundant cyber assets, closing unused ports, and implementing other security measures. This minimizes the number of potential entry points for cybercriminals, reducing the risk of a successful attack.

Expediting Incident Response
In the event of a cyber incident, time is a critical factor. The faster an organization can identify, contain, and remediate a breach, the less damage it is likely to cause. CAASM can significantly expedite the incident response process. By providing a real-time, comprehensive view of the attack surface, CAASM allows organizations to quickly identify and isolate affected assets, reducing the potential for further damage. Additionally, CAASM provides valuable insights that can inform the incident response process, helping organizations to mitigate the impact of a breach and recover more quickly.

Mastering Compliance Assessments
In many industries, organizations are required to comply with cybersecurity regulations and standards. Non-compliance can result in fines, reputational damage, and other serious consequences. CAASM simplifies the compliance process by providing a comprehensive, up-to-date inventory of all cyber assets, as well as detailed information about their security status. This makes it easier for organizations to demonstrate compliance with regulations and standards and to generate the reports needed for compliance audits.

How Does CAASM Compare to Similar Technologies?
There are several related technologies that can be used to improve an organization's cybersecurity posture. Let's compare these to CAASM.

CAASM vs. CSPM
Cloud Security Posture Management (CSPM) focuses on identifying misconfigurations and compliance risks within cloud infrastructures, primarily through continuous monitoring to detect gaps in security policy enforcement. While CSPM excels at spotting common misconfigurations and ensuring compliance in cloud environments, it often lacks the depth, visibility, and flexibility needed for monitoring custom configurations.

In contrast, CAASM offers a broader and more extensible approach. It not only encompasses the capabilities of CSPM but also provides comprehensive visibility across the entire attack surface, including both private and public clouds. CAASM goes beyond basic cloud configuration checks to monitor custom configurations and uncover complex relationships and misconfigurations across all cyber assets. This enables organizations to have a more nuanced understanding and control of their security posture.

CAASM vs. CWPP
Cloud Workload Protection Platform (CWPP) is designed to protect workloads across various environments, including physical servers, virtual machines, containers, and serverless workloads. Its main goal is to scan and protect these workloads from misconfigurations, compliance violations, and security threats. CWPP is workload-centric, focusing on the security of the runtime environment and ensuring compliance with security policies and regulations.

CAASM offers a wider lens, focusing on the overall cyber asset attack surface, which includes not just the workloads but all associated assets within an organization's IT ecosystem.
While CWPP provides essential protections for specific workloads, CAASM delivers a holistic view and continuous monitoring of all cyber assets. This includes identifying unknown risks, monitoring for compliance misconfigurations, and preventing drift in security and compliance postures across the entire cloud infrastructure.

CAASM vs. EASM
External Attack Surface Management (EASM) solutions focus on identifying external threats and vulnerabilities by scanning an organization's external digital footprint. EASM tools are adept at finding environment-based vulnerabilities and unknown external threats, helping organizations to secure their perimeter from potential attacks.

CAASM, however, provides a more integrated and comprehensive approach to cybersecurity. While EASM is crucial for understanding and mitigating external threats, CAASM extends this capability by offering complete visibility and management of all cyber assets, both internal and external. Through API integrations, CAASM solutions merge data from various sources to provide a unified view of an organization's entire cyber asset landscape.

CAASM vs. DRPS
Digital Risk Protection Services (DRPS) offer visibility into external threats by monitoring open-source intelligence, the dark web, the deep web, and social media. Their main goal is to conduct risk assessments and protect an organization's brand by providing insights into potential threats, their strategies, and malicious activities. This information is crucial for threat intelligence analysis and helps organizations understand and mitigate external risks. However, DRPS does not provide an inventory or comprehensive view of an organization's managed cyber assets.

By contrast, CAASM offers a holistic view of an organization's entire cyber asset infrastructure. CAASM aggregates data from various sources to overcome visibility and vulnerability challenges related to cyber assets. It not only helps in identifying what assets are present but also analyzes the attack surface to secure them against potential threats.

CAASM vs. CMDB
A Configuration Management Database (CMDB) is a centralized repository that stores detailed information about an organization's IT assets and their relationships. It supports various IT operations and change management processes, including incident, problem, and release management, by offering a clear view of the IT environment. This information is essential for effective IT service management and operational efficiency.

CAASM focuses on identifying, analyzing, and securing the organization's attack surface. Its primary goal is to minimize the likelihood of successful cyber attacks by identifying and mitigating vulnerabilities across the IT infrastructure, applications, and data. While a CMDB is pivotal for IT operations and managing changes within the IT environment, CAASM is geared towards cybersecurity and attack surface management. However, these two systems can complement each other: information stored in a CMDB can inform CAASM processes by identifying assets and their configurations, which can then be analyzed for vulnerabilities and secured.

Implementing CAASM in Your Organization
Let's look at some of the measures you can take to successfully implement CAASM in your organization.

Identify the Tools to Connect to CAASM
Start by identifying the tools that may hold asset information, which can be integrated with CAASM. These range from network security systems and IT service management tools to vulnerability management solutions and more.
Your network security systems provide vital information about the network's architecture, the devices connected to it, and their respective configurations. With the data from these systems, you can map out your organization's cyber asset attack surface and identify the potential weak points that could be exploited by attackers.

IT service management tools give you visibility into your IT infrastructure, allowing you to track and manage the life cycle of your IT assets. By integrating your IT service management tools with CAASM, you can ensure that all your IT assets are accounted for and protected.

Vulnerability management solutions identify, classify, and help mitigate vulnerabilities in your IT infrastructure. By integrating these solutions with CAASM, you can ensure that vulnerabilities are promptly addressed, reducing your organization's cyber asset attack surface.

Map Your Departments and Organizational Structure
Identify the departments within your organization, the roles they play, and how they interact with one another. First, determine the key departments that interact with IT assets. These may include your IT department, operations, finance, human resources, and more. Understanding these interactions will help you identify the assets that each department uses, and how they contribute to your organization's cyber asset attack surface.

Next, you need to understand your organizational structure. This involves identifying the hierarchy of your organization, the decision-making processes, and how information flows within your organization. This understanding helps you identify the key stakeholders in your CAASM implementation, their roles, and how they can contribute to its success.

Identify Asset Owners
Asset owners are individuals or departments within your organization that are responsible for the operation and security of specific IT assets. These individuals have the necessary knowledge and authority to make decisions about their assets. They can provide valuable input into your CAASM strategy, helping you understand the risks associated with their assets and the measures needed to mitigate them. Moreover, by involving asset owners in your CAASM implementation, you can ensure their buy-in and support, increasing the likelihood of your CAASM strategy's acceptance and effective implementation.

Build and Update Workflows for Remediation and Incident Response
Structured workflows are crucial in ensuring that vulnerabilities and incidents are promptly addressed, reducing your organization's cyber asset attack surface. Your remediation workflows should clearly outline the steps to be taken in addressing identified vulnerabilities. These steps may include assessing the severity of the vulnerability, identifying the affected assets, implementing the necessary fixes, and verifying the success of the remediation.

Next, establish incident response workflows that detail how your organization should respond to security incidents. This includes the initial detection and analysis of the incident; the containment, eradication, and recovery measures; and the post-incident activities aimed at preventing recurrence.
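A remediation workflow needs an ordering rule for the "assess severity" step. One common approach, sketched below with made-up numbers, weights a finding's CVSS score by the criticality of the affected asset, so limited resources go to the vulnerabilities that matter most; the records and weights are assumptions for illustration, not a standard.

```python
# Illustrative remediation prioritization: rank findings by CVSS severity
# weighted by asset criticality (0..1). All values here are hypothetical.
findings = [
    {"asset": "payroll-db",  "cvss": 7.5, "criticality": 1.0},
    {"asset": "test-vm",     "cvss": 9.8, "criticality": 0.2},
    {"asset": "web-portal",  "cvss": 8.1, "criticality": 0.8},
]

for f in sorted(findings, key=lambda f: f["cvss"] * f["criticality"], reverse=True):
    print(f"{f['asset']}: priority score {f['cvss'] * f['criticality']:.2f}")
# payroll-db ranks above test-vm despite its lower raw CVSS score, because
# the asset it sits on matters more to the business.
```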
CAASM for External Assets with CyCognito
Today's attack surfaces are growing exponentially, leaving organizations in a constant state of catch-up. Managing this ever-shifting landscape is a complex challenge, fueled by diverse assets scattered across remote workforces and a network of applications in constant flux. The biggest risk lurks in the shadows: the "unknown unknowns" we can't see. Working in isolation, traditional security tools are blind to these hidden threats and leave externally exposed assets vulnerable. These challenges, combined with an ever-evolving threat landscape (what was secure yesterday may be vulnerable today), create a perfect storm of opportunity for threat actors and a nightmare of risk for security and operations teams.

CyCognito excels at identifying previously unknown and hidden assets and solves one of the most fundamental business problems in cybersecurity: seeing how attackers view your organization, where they are most likely to break in, what systems and assets are at risk, and how to eliminate the exposure. CAASM streamlines internal asset inventory and enforces security policy and prioritization. Working together, these tools provide a comprehensive view of an organization's digital environment, enabling it to effectively protect and manage its assets against a wide range of cyberattacks.

Learn more about CyCognito.
What is TLS? Secure Email 101

Transport Layer Security (TLS) is one of two widely used protocols in email security, the other being Secure Sockets Layer (SSL). Both are used to encrypt a communication channel between two computers over the internet. An email client uses the Transmission Control Protocol (TCP) – which enables two hosts to establish a connection and exchange data – via the transport layer to initiate a handshake with the email server before actual communication begins. The client tells the server the version of SSL or TLS it is running, as well as the cipher suite (a set of algorithms that help secure a network connection using SSL or TLS) it wants to use. After this initial exchange, the email server verifies its identity to the client by sending a certificate the email client trusts. Once this trust is established, the client and server exchange a key, allowing messages exchanged between the two to be encrypted.

What parts of a message does TLS encrypt?
The protocol encrypts the entire email message, including the headers, body, attachments, and the sender and receiver addresses. TLS does not encrypt your IP address, the server's IP address, the domain you are connecting to, or the server port. This visible metadata reveals where you are coming from, where you are connecting to, and the service you're connecting with, such as sending email or accessing a website. This article explains what is really protected by TLS and SSL.

What is the purpose of SSL and TLS?
The purpose of the two protocols is to provide privacy, integrity and identification.
- TLS encrypts communication between the sender and recipient. The idea is to ensure that no third party can read or modify the data being exchanged. Without encryption, a middleman could access the contents of emails, such as personally identifiable information, medical billing information and other sensitive data. This information would be available for the middleman to see in plaintext, that is, a human-readable format: "This is an email text message. We're writing this email to let you know that our representative is available 9 to 5 Monday through Friday to assist you with any billing issues." TLS makes this information unreadable on its journey to the server.
- As mentioned earlier, the protocol offers identification between corresponding entities: one or both parties know who they are communicating with. After a secure connection is established, the server sends its TLS certificate to the client, who refers to a Certificate Authority (a trusted third party) to validate the server's identity.

How different is TLS from SSL?
The two terms are often used interchangeably, although they are actually distinct. TLS is an updated and more secure version of SSL. TLS v1.1, v1.2 and v1.3 are significantly more secure than SSL and address vulnerabilities in SSL v3.0 and TLS v1.0. Fallback to SSL v3.0 is disabled by Microsoft, Mozilla and Google in their Internet Explorer, Firefox and Chrome browsers, which blocks the many vulnerabilities present in SSL, such as the POODLE man-in-the-middle attack. If you are configuring an email program, you can choose either TLS or SSL so long as it is supported by your server (because in this context the term "SSL" does not refer to the old SSL v3 protocol, exactly, but to how the TLS protocol will be initiated).
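To see the handshake results described above, the negotiated protocol version, cipher suite and server certificate can be inspected with Python's standard ssl module, as in the sketch below. The host name is a placeholder, and port 465 assumes an implicit-TLS (SMTPS) mail server.

```python
# Sketch: inspecting the negotiated TLS parameters for a mail-server connection.
# "smtp.example.com" is a placeholder; substitute a real SMTPS host to run it.
import socket, ssl

context = ssl.create_default_context()        # modern defaults; SSLv2/v3 disabled

with socket.create_connection(("smtp.example.com", 465)) as raw:
    with context.wrap_socket(raw, server_hostname="smtp.example.com") as tls:
        print(tls.version())                  # e.g. "TLSv1.3"
        print(tls.cipher())                   # negotiated cipher suite
        print(tls.getpeercert()["subject"])   # identity the CA vouched for
```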
What level of TLS security is needed for HIPAA compliance?
Health and Human Services specifies that SSL and TLS usage should adhere to the details described in the National Institute of Standards and Technology (NIST) 800-52 recommendations. Encryption processes weaker than those this publication recommends are non-compliant. The key points to note from the NIST documents are: (a) you must never use SSL v2 and v3; (b) when interoperability with non-government systems is needed, TLS v1.0+ may be considered acceptable; and (c) only certain ciphers are acceptable to use. For more information, please refer to this article.

What doesn't TLS secure?
A message sent using TLS is not entirely secure. The risk starts when your messages travel back and forth between your email provider's servers and your correspondents' email servers. One risk is that your message could be sent insecurely (in plain text) from your email provider to your recipient. Another is that your recipient may insecurely access your message at their email provider. A third risk stems from potential changes to your message at your provider, in transit, at your recipient's provider, or anywhere else not protected by TLS or some other encryption technology.

For optimal email security, you need end-to-end email encryption. S/MIME and PGP are the most secure protocols for authentication and privacy of messages over the internet. PGP does assure Pretty Good Privacy. You have a pair of keys, private and public; the former decrypts messages, the latter encrypts them. Encrypted messages are safe as long as you keep your private key safe. Still, PGP (and S/MIME) are Pretty User-Unfriendly, as you have to exchange security keys ahead of time, and everyone involved has to be configured and trained to use these technologies. A reliable escrow system is another option. Although in some ways it is not as secure as S/MIME and PGP can be, it does allow messages to be retracted after transmission. For a better understanding of enhanced email security for HIPAA compliance, check out this article.

Want to discuss how LuxSci's HIPAA-Compliant Email Solutions can help your organization? Interested in more information about "smart hosting" your email from Microsoft to LuxSci for HIPAA compliance? Contact Us
Cybercrime has become the world's third-largest economy after the US and China, driven by the availability of ransomware-as-a-service and malware sold on the dark web. Even those without technical skills can launch sophisticated attacks simply by buying these tools online. The rapid adoption of the Internet of Things (IoT) has also created security gaps that cybercriminals are exploiting. The rise in ransom demands, fuelled by high-profile attacks such as the Colonial Pipeline hack, has encouraged more people to enter the market, making cybercrime a growing global issue. According to recent research, the average ransom payout is now over $800,000, and the cybercrime industry is set to cost up to $10.5 trillion per year by 2025.
For centuries, electricity was thought to be the domain of sorcerers – magicians who left audiences puzzled about where it came from and how it was generated. And although Benjamin Franklin and his contemporaries were well aware of the phenomenon when he proved the connection between electricity and lightning, he had difficulty envisioning a practical use for it in 1752. In fact, his most prized invention had more to do with avoiding electricity – the lightning rod. All new innovations go through a similar evolution: dismissal, avoidance, fear, and perhaps finally acceptance.

Almost two hundred years after Franklin's lightning experiment, man was routinely harnessing electricity, even though we still lacked a deep understanding of its origins. The Lineman's Handbook of 1928 begins with the line: "What is electricity? – No one knows." But according to this field guide for early electrical linemen, understanding the make-up of electricity wasn't important. The more significant aspect was knowing how electricity could be generated and safely used for light, heat and power.

Today, too many people view artificial intelligence (AI) as another magical technology that's being put to work with little understanding of how it works. They view AI as special and relegated to experts who have mastered and dazzled us with it. In this environment, AI has taken on an air of mysticism with promises of grandeur, out of the reach of mere mortals.

The truth, of course, is there is no magic to AI. The term Artificial Intelligence was first coined in 1956, and since then the technology has progressed, disappointed, and re-emerged. As it was with electricity, the path to AI breakthroughs will come through mass experimentation. While many of those experiments will fail, the successful ones will have substantial impact. That's where we find ourselves today.

As others, like Andrew Ng, have suggested, AI is the new electricity. In addition to becoming ubiquitous and increasingly accessible, AI is enhancing and altering the way business is conducted around the world. It is enabling predictions with supreme accuracy and automating business processes and decision-making. The impact is vast, ranging from greater customer experiences to intelligent products and more efficient services. And in the end, the result will be economic impact for companies, countries, and society. To be sure, organizations that drive mass experimentation in AI will win the next decade of market opportunity.

To break down and help demystify AI, one needs to consider two key elements of the category: the componentry and the process. In other words, identifying what's behind it and how it can be adopted. Much as electricity was driven by basic components such as resistors, capacitors and diodes, AI is being driven by modern software componentry:

- A unified, modern data fabric. AI feeds on data, and therefore data must be prepared for AI. A data fabric acts as a logical representation of all data assets, on any cloud. It pre-organizes and labels data across the enterprise. Seamless access to all data is available through virtualization from the firewall to the edge.

- A development environment and engine. A place to build, train, and run AI models. This enables end-to-end deep learning, from input to output. Machine learning models help find patterns and structures in data that are inferred, rather than explicit; a toy illustration follows this list. This is when it starts to feel like magic.

- Human features. A mechanism to bring models to life by connecting models and applications to human features like voice, language, vision, and reasoning.

- AI management and exploitation. This enables you to insert AI into any application or business process, while understanding versions, how to improve impact, what has changed, bias, and variance. This is where your models live for exploitation, and it enables lifecycle management of all AI. Lastly, it offers proof and explainability for decisions made by AI.
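To ground the point about inferred structure, here is a toy sketch. It assumes the scikit-learn library, and the data is invented purely for illustration: a clustering model is given unlabeled points and discovers the two groups on its own.

import numpy as np
from sklearn.cluster import KMeans

# Six unlabeled observations with two obvious groupings the model must infer.
X = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1],
              [8.0, 9.0], [8.2, 8.7], [7.9, 9.2]])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)  # e.g. [0 0 0 1 1 1]: structure found without labels

Nothing in the input says which rows belong together; the grouping is inferred from the data itself, which is exactly the behavior the componentry above is designed to industrialize.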
With these components in hand, more organizations are unlocking the value of data. But to fully leverage AI, we must also understand how to adopt and implement the technology. For those planning the move, consider these fundamental steps first:

- Identify the Right Business Opportunities for AI. The potential areas for adoption are vast: customer service, employee/company productivity, manufacturing defects, supply chain spending, and many more. Anything that can be easily described can be programmed. Once it's programmed, AI will make it better. The opportunities are endless.

- Prepare the Organization for AI. Organizations will require greater capacity and expertise in data science. Many of today's repetitive and manual tasks will be automated, which will evolve the role of many employees. It's rare that an entire role can be done by AI. But it's also rare that none of the role could be enhanced by AI. All technology is useless without the talent to put it to use, so build a team of experts that will inspire and train others.

- Select Technology & Partners. While it's unlikely that the CEO will personally select the technology, the implication here is more of a cultural one. An organization should adopt many technologies, comparing, contrasting, and learning through that process. An organization should also choose a handful of partners that have both the skills and technology to deliver AI.

- Accept Failures. If you try 100 AI projects, 50 will probably fail. But the 50 that work will more than compensate for the failures. The culture you create must be ready and willing to accept failures, learn from them, and move on to the next. Fail fast, as they say.

AI is becoming as fundamental as electricity, the internet, and mobile were as they entered the mainstream. Not having an AI strategy in 2019 will be like not having a mobile strategy in 2010, or an internet strategy in 2000. Let's hope that when you look back at this moment in history, you can do so fondly, as someone who embraced data as the new resource and AI as the utility to harness it.

A version of this story first appeared on Informationweek.