In recent years, the integration of artificial intelligence (AI) into the education sector has gained significant momentum. The current generation of AI-powered tools offers the promise of personalized learning experiences and streamlined administrative processes, benefiting both students and educators. PowerSchool’s new AI tools, PowerBuddy for Learning and PowerBuddy for Data Analysis, represent a pivotal advancement in this field. But are AI-powered tools truly the future of personalized learning in schools? The potential of such tools to transform the educational landscape, both in enhancing student engagement and optimizing administrative tasks, raises this pertinent question.

The Rise of AI in Education

The educational landscape has witnessed considerable technological advancements, with AI emerging as a critical driver of change. The ability of AI to process vast amounts of data and deliver insights in real time has made it an attractive solution for personalized education. This technology provides tailored learning experiences, accommodating individual student needs, learning styles, and levels of understanding. The incorporation of AI can potentially address the diverse needs of modern classrooms, creating more inclusive and effective educational environments. Additionally, AI’s capacity to handle enormous datasets allows it to offer individualized feedback and suggestions for students, teachers, and administrators alike.

PowerBuddy, PowerSchool’s innovative suite of AI-powered tools, exemplifies this trend. By integrating into the Schoology Learning Management System (LMS), it leverages existing educational data to inform its responses and recommendations. This enables a level of personalization that traditional teaching methods often struggle to achieve. With real-time data analytics, these tools can adapt to the evolving needs of both students and administrators, making education more responsive and dynamic. The rise of AI in education is not merely about adopting new technology—it’s about fundamentally redefining how teaching and learning processes are approached.

PowerBuddy for Learning: Enhancing Student Engagement

PowerBuddy for Learning, designed to boost student engagement, offers a natural-language AI chatbot that assists with homework, projects, and lesson reviews. This tool leverages natural language processing (NLP) to understand and respond to student queries in an interactive, intuitive manner. Its integration with the Schoology LMS means it can access each student’s assignments, grade levels, and learning capabilities, allowing for a tailored, contextually relevant response. By doing so, it provides personalized academic assistance that aligns with district standards and individual learning needs.

By employing the Socratic method, PowerBuddy for Learning transcends simple Q&A functionalities. It engages students in cooperative argumentative dialogue, encouraging them to delve deeper into subjects, thereby fostering critical thinking and a thorough understanding of the material. This method supports immediate academic requirements while simultaneously cultivating essential skills for long-term success. Through this interactive and engaging approach, PowerBuddy helps students become active participants in their learning journey, enhancing both engagement and comprehension.
PowerBuddy for Data Analysis: Streamlining Administrative Tasks

On the administrative side, PowerBuddy for Data Analysis serves as a natural-language AI assistant that aids education data managers by generating easy-to-understand visualizations from complex datasets. This tool facilitates quicker and more informed decision-making by allowing administrators to query data on metrics like absentee rates and staffing levels. It reduces the workload on technical teams and makes data more accessible to non-technical users, ensuring that important information is available to all stakeholders swiftly and clearly.

The AI’s ability to transform ad-hoc data requests into visual narratives simplifies data processing and increases operational efficiency. Its user-friendly interface ensures that even those with limited data expertise can leverage the tool to make data-driven decisions, improving overall school management and supporting academic achievement. By turning complex data into understandable insights, PowerBuddy for Data Analysis empowers educators and administrators to address issues proactively, contributing to a more organized and effective educational environment.

The Benefits of Personalized Learning

Personalized learning has become a focal point in contemporary education, driven by the recognition that traditional one-size-fits-all approaches are often inadequate. AI-powered tools like PowerBuddy for Learning enable a more customized educational experience, adapting to each student’s unique needs and pace. This can lead to improved academic performance, greater student engagement, and higher levels of student satisfaction. Furthermore, the ability of AI to continuously assess and adjust to a student’s progress ensures that learning remains dynamic and responsive.

Moreover, personalized learning helps address learning gaps by providing additional support where needed. AI can identify areas where a student is struggling and offer targeted interventions, ensuring that no student is left behind. It can also continuously monitor and adjust to a student’s progress, providing tailored recommendations and challenges that align with their current understanding. This dynamic adaptability fosters a more effective and enjoyable learning experience, ultimately leading to better educational outcomes and more confident learners.

Challenges and Ethical Considerations

While the advantages of AI in education are substantial, there are also significant challenges and ethical considerations that must be addressed. Ensuring that AI tools align with district standards and educational objectives is crucial to their successful integration. Schools must carefully manage AI adoption to avoid exacerbating existing inequities or creating new ones. Additionally, there is the challenge of ensuring that these tools do not overly standardize education, thereby stifling creativity and individual expression.

Ongoing training and support for educators and administrators are essential to maximize the potential of AI tools. Teachers and staff must be equipped with the knowledge and skills to effectively utilize these technologies, necessitating comprehensive training programs. Additionally, concerns surrounding data privacy and security must be addressed. Clear policies and guidelines are necessary to protect student and staff information, maintaining trust in AI applications within educational settings. Addressing these challenges proactively can help ensure the ethical and effective use of AI in education.
The Role of AI in Future Education

In recent years, the incorporation of artificial intelligence (AI) into the education sector has gathered substantial momentum. Modern AI-powered tools are heralding a new era with the potential to offer personalized learning experiences and efficient administrative processes, creating advantages for both students and educators. PowerSchool’s newly introduced AI tools, PowerBuddy for Learning and PowerBuddy for Data Analysis, signify a crucial advancement in this domain. However, the question remains: are AI-driven tools genuinely the future of personalized learning in schools? The potential these tools hold to revolutionize the educational landscape—by enhancing student engagement and optimizing administrative tasks—makes this a significant inquiry.

The implications are extensive; if AI tools can successfully cater to individual learning needs while making administrative work more efficient, they could greatly transform the way education is delivered and managed. This evolving technology could mark a pivotal shift in educational methodologies, leading to more tailored and effective learning environments.
Nuclear power represents perhaps the highest-potential source of clean energy for meeting our future energy needs. In addition, the industry has a near peerless record in reducing CO2 emissions, minimizing workplace injuries, and avoiding fatalities on site. Despite this record, the major nuclear accidents that have occurred over the decades remain vivid in people's minds, and the costs of compliance resulting from increased regulations have continued to grow. These costs make nuclear energy less competitive due to cost constraints, the time to roll out new plants (permits and construction can take up to 20 years), and risks around regulatory approvals for plant startup. According to the American Action Forum (AAF), total regulatory liabilities were $15.7 billion, or roughly $219 million per plant. In addition, annual regulatory compliance costs range from $7.4M to $15.5M per plant just to shuffle the paperwork, with an average of $22M in fees to the Nuclear Regulatory Commission (NRC) (Source: American Action Forum).

Given the urgent challenges of climate change, one might hope for regulatory reform to make clean nuclear energy more cost competitive. However, history has shown that the fear of nuclear accidents has made regulatory reform an unachievable goal, with advocates and critics both firmly locked into their positions. At the end of the day, the industry can't rely on regulatory reform to make nuclear more cost effective. As we used to say during my time at Bechtel, "nobody is coming, it is up to us!"

Instead of complaining about the costs of regulations and compliance, we should take advantage of the immense opportunities created by the Age of Digital Transformation. Over the coming years, nearly every manual task performed by people can be automated to some degree. This trend offers enormous opportunities to reduce the manual labor costs associated with compliance, allowing nuclear to become more cost competitive and improving the bottom line for the industry. So what are these trends, and how should plants think about adopting them? There are a number of important technologies that offer real and tangible cost savings for the industry:

- Internet of Things - leveraging automated sensors to take readings, support predictive maintenance, automate material tracking/monitoring, and improve environmental compliance with low cost wireless sensors
- Cloud Computing - reducing capital expenditures and allowing capacity to scale up and down based on demand
- Drones - to automate security related patrols, take measurements, and leverage machine vision
- Robotic Process Automation - to eliminate manual, paper driven processes
- Continuous Compliance - integrating sensor platforms to automate the updates of compliance documentation and reporting
- Artificial Intelligence (AI) / Machine Learning (ML) - to learn and make autonomous decisions that reduce the need for manual labor

At C2 Labs, we believe the future of nuclear centers around automated compliance. This belief has driven our significant investments in product development as we brought Atlasity to market to solve critical nuclear compliance needs in:

- Cyber Security
- Environmental Compliance
- Physical Security
- Nuclear Safety
- Quality Assurance
- Human Resources

For many years, the industry has been driven by continuous improvement and proven methodologies such as Lean/Six-Sigma.
We have taken these best practices and combined them with our DevOps automation experience to deliver the most advanced continuous compliance automation platform in the industry. Contact us today to learn more about how our Atlasity product and our compliance concierge services can reduce the cost of compliance for your nuclear plant and unlock unprecedented savings. With over 50 years of executive experience in the US Nuclear Weapons program, no other company better understands the compliance challenges in nuclear, or has the industry insights and automation best practices needed to overcome them.
Every company needs a backup – this is a fact no one would deny. As the saying goes, there are two types of people in the IT world – those who back up and those who are yet to back up. In the event of a system malfunction, hacker attack, computer virus infection, malicious user activity, or simply human error (accidental deletion of data), a backup copy of our data is the only way to get back on track. Backup copies are simply an essential part of an organization's IT infrastructure.

Simply put, a backup is a full copy of all relevant computer data in the company. However, nowadays the amount of that data is so large that creating backups may take a considerable amount of time. That is why new methods were invented – the differential backup and the incremental backup.

When a company operates on a large amount of data that is constantly growing or changing on a daily basis, performing a full backup every day is simply impractical. The process is time-consuming and usually takes a lot of storage space. To address this problem, the differential backup method was invented. A differential backup takes a copy of all items that were changed since the last full backup. For example, a full backup was performed on Sunday. Next, on Monday, a differential backup job takes a copy of items that were changed or added since Sunday. On Tuesday, the job again takes a copy of only the data changed since Sunday, and so on. The cycle is repeated until the next full backup is performed.

Advantages:
- The process is much quicker than a full backup since it only takes a copy of what was changed.
- The backup copy itself takes far less storage space than when a full copy is created each day.

Drawbacks:
- The size of the differences grows with each cycle. If the cycle is long (e.g. the full backup is performed once a month and the differential is taken every day), at the end of it the size of the archive might be quite big and the process itself pretty lengthy.

The main difference is that an incremental backup takes a copy of items changed or added since the last incremental backup job. For example, the full backup was performed, as before, on Sunday. On Monday, the incremental job kicks in and takes a snapshot of all data that was changed since Sunday. On Tuesday, the job takes a copy of all changes since Monday, on Wednesday it backs up everything changed since Tuesday, and so on. In other words, the process works in a chain, creating a copy of the data modified or added since the last backup job.

Advantages:
- The backup process is even faster than the differential job, not to mention the full backup. It is, in fact, so fast that it can be performed every hour or even every minute.
- Each iteration of the backup job copies just the data that was changed. Therefore, only a small amount of storage is required each time.

Drawbacks:
- In some cases, the backup software requires all iterations of the incremental backup for data restoration. If one of the pieces is missing, the restore is impossible.
- The restore process might take some time as the software needs to rebuild data from the separate incremental pieces and the last full backup piece too.

In the age of the Cloud, it seems that the incremental backup is the best choice – it is fast, pulls down a small amount of data and can be performed even in real time. If you add advanced data versioning and sophisticated recovery processes that rebuild lost information much quicker even without all the incremental pieces, you'll get a robust tool that has you covered all the time.
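To make the selection logic concrete, here is a small illustrative sketch, not tied to any particular backup product, of how a job might decide which files to copy under each scheme using file modification times; the directory path, function name, and timestamps are hypothetical.

```python
import os
import time

def files_to_back_up(root, last_full_time, last_backup_time, mode):
    """Return files that a differential or incremental job would copy.

    mode="differential": copy everything modified since the last FULL backup.
    mode="incremental":  copy everything modified since the last backup of ANY kind.
    """
    reference = last_full_time if mode == "differential" else last_backup_time
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > reference:
                selected.append(path)
    return selected

# Example: full backup ran Sunday, another backup ran Monday; it is now Tuesday.
sunday = time.time() - 2 * 86400
monday = time.time() - 1 * 86400

diff_set = files_to_back_up("/data", sunday, monday, mode="differential")  # changes since Sunday
incr_set = files_to_back_up("/data", sunday, monday, mode="incremental")   # changes since Monday
print(len(diff_set), "files in differential set;", len(incr_set), "files in incremental set")
```

Real backup software tracks changes with change journals, archive bits or snapshots rather than raw modification times, but the selection rule follows the same pattern shown here.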
CodeTwo Backup for Office 365 is an example of such software – not only does it back up entire mailboxes (all types of items) from Microsoft Office 365, but it also gives you all the benefits of the incremental backup with a pinch of what’s best in the differential type of the process. You can also back up SharePoint Online. Learn more…
The Fast-Slow Problem

We've become so obsessed with fast pleasures that we can't enjoy slow ones anymore.

I’ve been obsessed lately with the concept of slow versus fast. I’m calling it the Fast-Slow Problem. It refers to the speed and amount of dopamine that you get from a thing. So in the distant past, an apple would have been a treat, and especially an apple pie because it took so long to prepare and enhanced those apples, which were themselves not easy to come by. That’s slow. From the difficulty of finding the thing, and from the long process of creating an apple pie. A box of donuts on the other hand is fast. First of all, you can just walk in and say, “give me a dozen donuts”, and second, a donut has been engineered to taste better than 100 crates of apples. So if you eat six donuts, and then bite into an apple, it’ll taste like 3-D printed styrofoam.

There are many other examples of this, including porn being the fast, extracted version of sex, and TikTok being the fast, extracted version of reading a book or watching a movie. I worry that this is what’s happening with dating and relationships. They say young people don’t want to get into relationships or have sex with each other anymore. Evidently they would rather do other things. This feels very much like the Fast-Slow Problem. What if it is as simple as teenagers and young adults having more Fast options on their phones and TVs, and they are so good that the Slow options of companionship and relationships simply can’t compete? It’s one thing when a doughnut competes with an apple, but in this case, the fast-versus-slow problem might literally be affecting the future of the species.

I’m not sure what the complete answer is here, because I’m sure there are places where the fast version is simply better and not just faster. But what I do know is that I think we should start being very cautious about replacing slow versions of things with fast versions of things. There is meaning in doing things slowly. Slow walks while holding hands. Reading physical books in natural light. And making apple pies from scratch. I think we should do our best to figure out what those slow, meaningful things are, and ensure that we keep as many of them in our lives as possible.

This essay is dedicated to Tim Leonard.

Being born and raised in the San Francisco Bay Area, I also find it interesting how conservative this idea is. It’s like I’m literally saying that the old way is better than the new way, and we need to hold onto it. Which is weird for me. But yeah—reading physical books in natural light is probably better than watching TikTok on your phone. I’m willing to take that non-progressive stance.
Networking and Communications Training Courses

Explore our Networking and Communications training courses for Apache, Brocade, Cisco, Citrix, CompTIA, EPI Data Center Training and Certification, IBM Netezza, Microsoft Dynamics, Microsoft Exchange Server, and Microsoft System Center.

What is Networking and Communications?

Networking in Information Technology (IT) involves linking computers and other hardware devices to share resources and information. Ethernet connections, Wi-Fi, and even the internet can do this. A network might be basic or complicated, from a home network with a single router to a global company network with several sites. Networking covers the design, implementation, and management of network infrastructure, including routers, switches, and servers. It facilitates data sharing and cooperation across enterprises.

Communications, on the other hand, involve devices sharing information through a network. Protocols and standards are used in IT to send and receive data accurately and effectively. Data transmission, VoIP, video conferencing, and more are all modes of communication. It is about data integrity and security as much as data transfer. Communication technologies enable organizations to run more effectively and people to communicate and exchange information smoothly in current IT systems.

Discover the Benefits of Networking and Communications

Networking and communication technologies share resources. Hardware, software, and data are shared. Centralizing and sharing resources saves costs. Communication and networking technologies also allow remote collaboration between people and teams. Email, video conferencing, and collaborative software enable remote work and real-time communication.

Data Backup and Recovery

Data backup and recovery are simpler with networked data storage. If a device breaks, a network backup can retrieve its data. Communication technologies make this data available quickly and securely.

Scalability and Flexibility

Networking and communications facilitate growth. A network may add new devices and technologies as a firm grows. Communication technology can adapt to greater data transfer speeds and more secure protocols.

Boosting Business Efficiency with Networking and Communications Training

Networking and communications training may improve corporate productivity for organizations of all sizes. This training targets IT professionals, business managers, and workers who use the company's IT infrastructure. Organizations can optimize operations, decrease downtime, and boost productivity by training employees to manage and use their network and communication technologies.

The curriculum covers network configuration, communication protocols, network security, and cloud computing. It helps IT experts create and maintain networks, business managers use technology to achieve corporate objectives, and people utilize communication technologies effectively. The ultimate aim is to enable the workforce to use networking and communications technology to drive company success. Want to boost your business efficiency with Networking and Communications training? Reach out to us today!

Free Networking and Communications Training Resources

Learn more about Networking and Communications by exploring our extensive library of free articles, webinars, white papers, and case studies.
- Guide to CompTIA Network+ certification with exam details, prep tips, and LearnQuest training for network management career advancement.
- Exploring the core principles, types, components, benefits, and training for computer networking, the infrastructure enabling digital communication.
- Explaining the client-server network model – how it works, benefits and challenges, case study, and why it remains relevant today.
- Journey through the intricacies of network communication and data travel through interconnected devices.

Frequently Asked Questions

What networking and communications courses does LearnQuest offer?
LearnQuest provides courses from vendors like Cisco, CompTIA, Microsoft, and more. Topics include routing and switching, cloud, security, VoIP, wireless networking, Microsoft Exchange, and gaining certifications like CCNA, Network+, and MCSA.

What is the CCNA certification?
CCNA is the Cisco Certified Network Associate certification. It validates skills like configuring, managing, and troubleshooting Cisco routers and switches. LearnQuest offers authorized Cisco CCNA training and exam prep.

How do I get CompTIA Network+ certified?
The CompTIA Network+ exam tests skills in networking configuration, troubleshooting, security, and more. LearnQuest provides CompTIA Network+ training and exam vouchers to prepare for the certification.

Why get certified in networking?
Networking certifications like CCNA, Network+, and MCSA validate your skills to employers. They can improve your job prospects and salary. LearnQuest's training prepares you to pass the exams.

Where can I take networking courses online?
LearnQuest offers authorized online self-paced and live virtual networking courses. Study anytime with 24/7 access, or attend scheduled online classes led by an instructor.

How long is networking training?
Self-paced online courses allow you to learn at your own pace. Instructor-led training typically ranges from 2 to 5 days. Exam prep involves 20-40 hours of additional study.
In the world of religious attire, the garments worn by clergy members carry deep significance and symbolism. Among these garments is the female bishop chimere, a unique and important piece of clergy women’s wear. But what exactly is a female bishop chimere, and what role does it play in the lives of women who serve in religious leadership? This article will explore the history, meaning, and variations of the female bishop chimere, along with other important aspects of clergy women’s wear. The chimere is a garment traditionally worn by bishops in the Anglican Communion, among other denominations. It is a sleeveless outer vestment, often made of silk or satin, and is worn over a rochet, which is a white vestment with sleeves. The chimere is typically open at the front, resembling a long cloak or mantle, and it is often secured with a button or clasp at the neck. The color of the chimere may vary, but it is commonly red, black, or purple, depending on the occasion and the specific traditions of the denomination. For women serving as bishops, the female bishop chimere is a counterpart to the traditional male chimere, tailored to fit the female form while still maintaining the garment’s symbolic significance. The female bishop chimere serves as a visible sign of the bishop’s office and authority, and it is often worn during important religious ceremonies, such as ordinations, consecrations, and other liturgical functions. The female bishop chimere is not just a piece of clothing; it carries a deep and rich symbolism that reflects the role of the bishop within the church. The chimere represents the bishop’s authority and responsibility to teach, guide, and lead the faithful. It also symbolizes the bishop’s connection to the historical traditions of the church, as the garment has been worn by bishops for centuries. In many denominations, the color of the chimere is significant. For example, a red chimere is often associated with the martyrdom of saints and the passion of Christ, symbolizing the bishop’s willingness to sacrifice for the faith. A purple chimere, on the other hand, is often worn during Advent and Lent, reflecting the themes of penance and preparation. Black chimeres are typically reserved for funerals and other solemn occasions, representing mourning and reflection. For women in the clergy, the female bishop chimere serves as a powerful statement of their leadership and authority within the church. It is a visible sign that women can and do hold important roles in religious leadership, challenging traditional gender roles and paving the way for greater inclusivity within the church. Clergy women’s wear has evolved significantly over the years, reflecting broader changes in society and the church’s approach to gender roles. In the past, religious garments were often designed with men in mind, with little consideration given to the needs and preferences of women. However, as more women have entered the clergy, there has been a growing recognition of the importance of providing appropriate and comfortable attire for female clergy members. The female bishop chimere is just one example of how clergy women’s wear has adapted to meet the needs of women in religious leadership. Today, there are many options available for women in the clergy, ranging from tailored robes and albs to skirts and blouses that offer both style and functionality. 
These garments are designed to be both respectful of religious traditions and mindful of the practical needs of women, allowing them to perform their duties with dignity and grace. In addition to the female bishop chimere, many clergy women choose to wear skirts and blouses as part of their everyday attire. These garments offer a versatile and comfortable alternative to more traditional robes and vestments, allowing women to move freely and perform their duties without feeling restricted by their clothing. Skirts and blouses designed for clergy women are typically modest in design, with simple lines and neutral colors that reflect the solemnity of the religious office. However, they also offer opportunities for personal expression, with variations in fabric, cut, and embellishment that allow women to create a look that is both professional and reflective of their personal style. Many clergy women appreciate the flexibility that skirts and blouses offer, as these garments can be easily mixed and matched to create different outfits for various occasions. For example, a simple black skirt paired with a white blouse can be appropriate for a variety of settings, from a Sunday service to a church meeting. By adding a stole or other accessory, clergy women can further personalize their attire while still adhering to the traditions and expectations of their denomination. While clergy women’s wear has become more diverse and adaptable over time, tradition still plays a central role in determining what is appropriate attire for women in the clergy. Many denominations have specific guidelines for what clergy members should wear during different types of services and ceremonies, and these guidelines often reflect centuries-old traditions that have been passed down through generations. For example, in the Anglican Communion, bishops are expected to wear the chimere and rochet during formal liturgical functions, regardless of gender. This adherence to tradition serves to maintain a sense of continuity and reverence within the church, linking today’s clergy with the historical practices of the faith. At the same time, there is also room for adaptation and change within these traditions. The development of the female bishop chimere is a prime example of how traditional garments can be modified to better suit the needs of women, without losing their symbolic meaning or importance. As more women enter the clergy, there may be further evolution in clergy women’s wear, as the church continues to balance the demands of tradition with the practical realities of modern religious leadership. For women in the clergy, choosing the right attire is an important decision that requires careful consideration of both tradition and personal preference. The female bishop chimere, along with other clergy garments, must be selected with an eye toward both comfort and appropriateness, ensuring that the clothing enhances rather than distracts from the sacred duties of the clergy member. When selecting a female bishop chimere, women should consider factors such as the fit, fabric, and color of the garment. The chimere should be tailored to fit the individual’s body shape, providing a comfortable and flattering fit that allows for ease of movement. The fabric should be durable and easy to care for, as clergy garments often require frequent use and cleaning. Finally, the color of the chimere should be chosen based on the specific traditions of the denomination, as well as the personal preferences of the wearer. 
In addition to the chimere, clergy women may also want to consider other pieces of attire that complement their overall look. Skirts, blouses, and other garments should be selected with the same attention to detail, ensuring that the entire ensemble is both functional and respectful of the religious office. As more women enter the clergy, it is important that clergy women’s wear is accessible to all, regardless of budget or location. In the past, finding appropriate clergy attire could be a challenge, particularly for women in smaller communities or those with limited financial resources. However, the rise of online retailers specializing in clergy attire has made it easier than ever for women to find high-quality, affordable garments that meet their needs. One such online store is eClergys, which offers a wide range of clergy attire for women, including female bishop chimeres, skirts, blouses, and more. eClergys is committed to providing accessible, high-quality clothing for women in the clergy, ensuring that every woman has the opportunity to find garments that reflect her role and responsibilities within the church. In addition to offering a diverse selection of clergy attire, eClergys also provides helpful resources and customer support to assist women in choosing the right garments for their needs. Whether you are looking for a traditional female bishop chimere or a more modern option like a skirt and blouse combination, eClergys has something to suit every taste and budget. The female bishop chimere is more than just a piece of clothing; it is a symbol of the important role that women play in religious leadership. As more women continue to break barriers and take on leadership roles within the church, it is essential that their clothing reflects the dignity and significance of their position. Clergy women’s wear, including the female bishop chimere, has evolved to meet the needs of modern women in the clergy, offering a range of options that are both respectful of tradition and mindful of practical considerations. Whether you are a woman entering the clergy for the first time or a seasoned leader looking to update your wardrobe, there are now more options than ever before to help you find the perfect attire for your needs.
TCP is important stuff for network engineers to know. Today's problems aren't so cut-and-dry as they used to be. When a problem strikes, we can't just say "it's not the network" and go along with our day. A core understanding of TCP and how it carries and acknowledges data goes a long way in finding the root cause of performance problems today. One key aspect of TCP that is important to learn is the Sequence and Acknowledgement process. To put it simply, these numbers in the TCP headers indicate how much data has been sent and received. They allow each endpoint to determine if there was packet loss, what needs to be retransmitted, and help to determine how much data is in flight. For a six-minute crash-course on how TCP Sequence numbers work, check out this video: Thanks for checking it out and hopefully it helps all packet-heads out there! Author Profile - Chris Greer is a Network Analyst for Packet Pioneer LLC and a Certified Wireshark Network Analyst. Chris regularly assists companies in tracking down the source of network and application performance problems using a variety of protocol analysis and monitoring tools including Wireshark. Chris also delivers training and develops technical content for several analysis vendors. Got network problems? Let's get in touch.
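As a quick complement to the overview above, the toy sketch below (illustration only; it ignores the handshake, windows, SACK, and retransmission timers) shows how a receiver's cumulative acknowledgement advances as segments arrive, and how an out-of-order arrival shows up as a repeated ACK value; all sequence numbers are made up.

```python
# Toy model of TCP cumulative acknowledgements (illustration only; real TCP also
# handles windows, retransmission, SACK, and sequence-number wraparound).

def receiver_ack(initial_seq, segments):
    """Given segments as (seq, length) pairs, return the cumulative ACK after each arrival.

    The ACK is the next byte the receiver expects, so data received after a gap
    does not advance it until the gap is filled.
    """
    expected = initial_seq
    received = set()
    acks = []
    for seq, length in segments:
        received.update(range(seq, seq + length))
        while expected in received:          # slide forward over contiguous bytes
            expected += 1
        acks.append(expected)
    return acks

# Sender transmits three 100-byte segments starting at sequence number 1000,
# but the second segment is delayed and only arrives at the end.
arrivals = [(1000, 100), (1200, 100), (1100, 100)]
print(receiver_ack(1000, arrivals))   # [1100, 1100, 1300] -> the repeated ACK hints at a gap
```

This is the same accounting an analyst uses to spot loss in a capture: repeated ACKs for the same value mean a gap in the byte stream, and the jump after the missing segment arrives shows the gap being filled.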
In an increasingly diverse and inclusive work environment, the IT and tech industries are gradually opening doors to a wide range of talent. Among these is the sizable population of deaf and hard of hearing (DHH) individuals. According to data from the National Deaf Center cited in a Harvard Business Review article, only about 53% of deaf people were employed in 2017, stressing the need for more inclusive hiring practices. Furthermore, roughly 11.5% of the US population has hearing loss, a figure that is projected to rise globally to one in four people, or approximately 2.5 billion individuals, by 2050. Despite these significant numbers, DHH individuals often face barriers to employment, particularly in specialized fields like tech and IT. However, with the right support and opportunities, DHH people can not only enter but also thrive in tech and IT careers. Here’s how.

Legal protections for DHH employees

Various laws and regulations have been established to protect DHH individuals from employment discrimination. The Americans with Disabilities Act (ADA) is a crucial piece of legislation that prohibits discrimination against individuals with disabilities in all areas of public life, including jobs. Under ADA Title I, employers are required to provide reasonable accommodations to qualified employees with disabilities, ensuring they have equal opportunities to perform their job duties. Per Healthy Hearing, these accommodations might include providing a sign language interpreter for conferences or other meetings. It may also include providing assistive listening devices like captioned phones, computer software, or strobe light emergency alerting systems. Despite these protections, challenges persist. The ADA, while powerful, needs to be fully embraced by employers to create truly inclusive environments.

Companies leading the way

Several organizations are making strides in supporting DHH individuals in their career aspirations, particularly within the tech industry. The National Technical Institute for the Deaf (NTID) at Rochester Institute of Technology is a prime example. NTID’s Center on Employment works diligently to connect DHH students with potential employers. Their efforts help DHH individuals find jobs through training that reflects real-life situations. At the same time, the organization educates prospective employers about the benefits of hiring from this talent pool and how to create a supportive environment.

Additionally, tech giants like Amazon and Microsoft have launched initiatives to recruit and support DHH employees. In 2019, Amazon became the first major tech company to hire full-time American Sign Language (ASL) interpreters for deaf employees. According to Forbes, since the launch of the ASL Interpreter Program, more DHH people have been seeking employment at Amazon, and more of them are finding placement at the company.

Tools enhancing DHH inclusion in tech

Advancements in technology are playing a crucial role in leveling the playing field for DHH individuals in the tech industry. Various tools and programs are being developed that have the potential to aid DHH tech workers, making it easier for them to succeed in their roles. One such innovation is Nuance’s hearing glasses, designed for people with mild to moderate hearing loss. These glasses use beamforming technology positioned on the front of the frames to isolate audio in front of the listener while also reducing background noise.
This effectively amplifies sounds in the wearer’s line of sight to make communication easier, enabling hard-of-hearing users to participate in meetings and collaborative work environments. Another groundbreaking tool is the AI-powered app developed by Lenovo in partnership with Brazil’s Recife Center for Advanced Studies and Systems (CESAR). This app translates sign language into spoken language, bridging the communication gap between DHH individuals and their hearing colleagues. The app uses advanced AI algorithms to recognize and translate sign language gestures in real time, making interactions seamless and more inclusive. At the moment, the app is only available in Latin America, but Lenovo has plans to extend the system’s use to other sign languages worldwide by 2025. DHH individuals have immense potential to thrive in IT and tech careers, provided that the right accommodations and opportunities are in place. Employers who actively engage with and support the DHH community stand to benefit from a diverse pool of talent who bring unique perspectives and skills to the table. To stay updated on the latest IT and cybersecurity news, visit the Latest Hacking News website.
A recent, potential breach of protected health information (PHI) – including social security numbers, financial information, and medical data – was reported by a major health system in West Virginia. The cause? A stolen laptop, taken from an employee’s car. Despite equipping the laptop with security tools (including password protection), the health system failed to encrypt the laptop’s hard drive, allowing unauthorized users potential access to the sensitive PHI data of over 40,000 patients.

Far from being overly restrictive, the HIPAA Security Rule was intended for just such situations; namely, to help organizations protect patients from having their personal information divulged or held hostage for illicit gain. The rule defines safeguards to include “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.” Clearly, the physical encryption of a computer hard drive should be a mandatory practice if PHI data is being stored.

Perhaps a related question is, ‘How could an unprotected laptop containing PHI leave a facility in the first place?’ Chances are, the employee intended to work remotely, and the health system believed they had enough protections in place. A more thorough consideration of the Security Rule, however, would have addressed both hard drive encryption and the organization’s broader responsibilities under the Facility Access Controls standard.

A closer look reveals that the Facility Access Controls standard has four implementation specifications, including:

3. Access Control and Validation Procedures (Addressable) – “Implement procedures to control and validate a person’s access to facilities based on their role or function, including visitor control, and control of access to software programs for testing and revision.”

4. Maintenance Records (Addressable) – “Implement policies and procedures to document repairs and modifications to the physical components of a facility which are related to security (for example, hardware, walls, doors, and locks).”

While each of these four facility specifications is “addressable” (meaning that the specifics of the implementation may vary depending on the entity and precise requirements), protecting electronic protected health information (EPHI) means, at minimum, implementing “reasonable and appropriate physical safeguards related to equipment and facilities.”

In Part 2 of this series, we’ll take a closer look at specific procedures related to workstation use.
Google has trained a large language model to provide natural language explanations of its hidden representations.

The use of large language models (LLMs) has been skyrocketing. However, despite advancements in the technology, concerns remain about transparency and reliability. AI models have been known to make factual mistakes, a failure mode known in the industry as hallucination.

Now, Google researchers have unveiled a framework called "Patchscopes," which centers on employing LLMs to offer natural language explanations of their internal hidden representations. Internal representations are the way an AI model represents what it has learned. According to Google scientists, exploring hidden representations could “unlock a deeper scientific understanding” of how these models work and provide control over their behavior.

“We are excited about its applications to detection and correction of model hallucinations, the exploration of image and text representations, and the investigation of how models build their predictions in more complex scenarios,” write Avi Caciularu and Asma Ghandeharioun, Research Scientists at Google Research.

Patchscopes injects hidden LLM representations into designated prompts and analyzes the supplemented input to generate comprehensible explanations of the model's internal processing. The procedure involves four steps, Setup, Target, Patch, and Reveal, which together let the model articulate in its own words what a hidden representation encodes.

According to a blog post by Google, the research unifies and extends a broad range of existing interpretability techniques and unlocks new insights into how an LLM's hidden representations capture nuances of meaning in the model's input, making it easier to fix certain types of reasoning errors.

“The Patchscopes framework is a breakthrough in understanding how language models work,” say the scientists. “This has intriguing implications for improving the reliability and transparency of the powerful language models we use every day.”
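The core mechanic described here, copying a hidden state out of one forward pass and patching it into a different prompt so the model can put it into words, can be sketched with off-the-shelf tools. The snippet below is a rough, simplified illustration using GPT-2 and PyTorch forward hooks; it is not Google's implementation, and the layer index, prompts, and patching position are arbitrary choices made for the demo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6                 # arbitrary source/target layer for this demo
captured = {}

def capture_hook(module, inputs, output):
    # A GPT-2 block returns a tuple whose first element is the hidden states.
    captured["h"] = output[0].detach()

def patch_hook(module, inputs, output):
    hidden = output[0].clone()
    # Overwrite the representation at the last position of the target prompt
    # with the one captured from the source prompt (the "patch" step).
    hidden[:, -1, :] = captured["h"][:, -1, :]
    return (hidden,) + output[1:]

# Setup: run the source prompt and capture a hidden representation.
source = tok("Diana, Princess of Wales", return_tensors="pt")
handle = model.transformer.h[LAYER].register_forward_hook(capture_hook)
with torch.no_grad():
    model(**source)
handle.remove()

# Target + Patch + Reveal: inject the representation into an inspection prompt
# and read off what the model predicts next, i.e. its description of the state.
target = tok("France: country. Apple: fruit. X:", return_tensors="pt")
handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**target).logits
handle.remove()

print(tok.decode(logits[0, -1].argmax().item()))
```

In the published framework the target prompt is a carefully chosen few-shot template so that the continuation verbalizes what the patched representation encodes; the toy prompt above only gestures at that idea.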
On July 19, 2024, a seemingly routine CrowdStrike update caused a global IT meltdown. Millions of systems running Windows 10 and later experienced critical failures, bringing banks, airports, and critical infrastructure to a halt. The cause? A configuration error in the CrowdStrike Falcon sensor update.

The CrowdStrike outage was more than just a temporary inconvenience; it demonstrated how a single flaw can lead to widespread chaos. When a security solution trusted by millions fails, the consequences can cripple organizations and disrupt daily life. This event clearly showed the urgent need for a fundamental change in the way some organizations approach their cybersecurity, moving away from dependence on single solutions and towards a more dependable, multilayered approach.

The CrowdStrike Outage: The Dangers of Single Points of Failure (SPOF)

A single point of failure (SPOF) refers to any component whose failure can compromise the entire security infrastructure. This could be an important server, a specific software application, or even a single individual with unique access privileges. SPOFs are fundamentally at odds with the principles of high availability and reliability in computing systems, posing significant risks:

- Operational Downtime: A malfunctioning firewall, a compromised authentication server, or even a failed database server can bring operations to a grinding halt. This downtime translates to lost productivity, missed opportunities, and financial repercussions.
- Security Vulnerabilities: SPOFs are easy targets for cybercriminals. A single vulnerability in a central component can provide an entry point for attackers to infiltrate the entire network, potentially leading to data breaches, system hijacking, and financial losses.
- Financial Implications: The costs associated with downtime, data recovery, regulatory fines (in case of data breaches), and reputational damage can be substantial. Investing in redundancy and failover mechanisms is not merely a security best practice; it’s a strategic and smart financial decision.

SPOF Scenario: A Database Server

To give you a practical example, consider an e-commerce company whose operations depend entirely on a single, high-performance database server. This server manages all aspects of the business, including customer transactions, inventory data, order processing, and financial records. While this centralized approach might appear efficient, it presents a significant single point of failure.

If this server were to experience a hardware malfunction, fall victim to software corruption, or become compromised by a cyberattack, the consequences could be disastrous. The company could face a complete shutdown of its e-commerce operations, halting order placements, customer service interactions, and order fulfillment. The loss of real-time inventory tracking could lead to inaccurate stock information, resulting in shipping delays and frustrated customers. There is also a risk of permanent data loss or corruption, further amplifying the potential damage. Such an event would severely impact the company’s reputation, erode customer trust, and potentially lead to long-term revenue loss.

To mitigate this risk, the company could transition from this single-server setup to a more resilient architecture. This could involve implementing a database cluster that distributes the workload across multiple servers, ensuring redundancy and minimizing the impact of a single server failure.
Incorporating load balancers could optimize performance and prevent any one server from being overwhelmed. Finally, real-time data replication across all nodes in the cluster would guarantee data consistency and availability, even during a node failure.

The Layered Cybersecurity Stack

A layered cybersecurity stack, often referred to as “defense in depth,” is not about relying on a single, impenetrable wall but rather creating a series of security measures at different levels. This approach acknowledges that no single solution is foolproof. If one layer fails, others are there to provide backup and prevent a complete breach. The National Institute of Standards and Technology (NIST) Cybersecurity Framework outlines six core functions that every cybersecurity stack should encompass:

- Identify: Before implementing any security measures, organizations need to understand their assets, potential vulnerabilities, and the threats they face. This involves conducting thorough risk assessments and identifying potential SPOFs.
- Protect: This function involves implementing preventative measures such as firewalls, intrusion detection systems, antivirus software, email filtering, and strong password policies to prevent unauthorized access and malicious activities.
- Detect: Continuous monitoring of networks and systems is needed to identify breaches or suspicious activities as they occur. This involves using tools like endpoint detection and response (EDR), network detection and response (NDR), and security information and event management (SIEM) systems.
- Respond: Having a well-defined incident response plan is important. This plan should outline steps to quickly contain and mitigate threats, minimize damage, and ensure business continuity in the event of a security incident.
- Recover: This function focuses on restoring systems and data to their normal operating state after a security incident. This includes having reliable backups, disaster recovery plans, and procedures for testing and validating the recovery process.
- Govern: Establishing policies, procedures, and processes to manage and oversee the cybersecurity program is necessary. This includes defining roles and responsibilities, setting performance metrics, and ensuring compliance with legal and regulatory requirements.

How BlackFog Helps Address SPOF

BlackFog offers a proactive approach to cybersecurity by specializing in anti data exfiltration (ADX). Unlike traditional solutions that primarily focus on keeping attackers out, BlackFog prioritizes preventing data from leaving the device, effectively neutralizing threats before they can cause significant damage. BlackFog operates at multiple points in the attack lifecycle, providing a multi-layered defense strategy:

- Blocking Malicious Traffic: BlackFog blocks known malicious IP addresses and domains, preventing communication with command-and-control servers often used by attackers.
- Preventing Data Collection: The technology identifies and blocks attempts to collect sensitive data, such as keystrokes, screenshots, and webcam access, commonly used in espionage and data theft.
- Geofencing: This feature allows organizations to restrict data flow to specific geographic locations, preventing data exfiltration to unauthorized countries.

By constantly monitoring data flow and identifying anomalous outbound traffic patterns, BlackFog can detect and block data exfiltration attempts in real time, even from unknown or zero-day threats.
To get started, click here to schedule a demo as soon as possible.
Sea levels are rising faster than ever. The level and rate of change vary by geography, driven by a combination of global, regional and local factors. However one thing is clear – the business-as-usual trajectory on carbon emissions could significantly increase sea levels and have devastating social, economic and environmental impact. Businesses and policymakers alike will need to radically change their approaches to combat climate change and the impact of climate change on sea level. On 28 November 2018, Temasek organised a discussion by Professor Benjamin Horton, Chair at the Nanyang Technological University’s Asian School of the Environment in Singapore and Principal Investigator at the Earth Observatory Singapore, as part of the Ecosperity Conversations series. The session discussed the drivers of sea level change, the impact of carbon emissions and climate change on sea levels, and the potentially devastating effects of this globally and particularly here in Southeast Asia. This summary report covers the key topics discussed during the session and includes additional insights to complement the discussion on the potential implications for businesses and policymaking. Read more here or follow the link below. This report has received coverage in the media, including: - Climate research centre to study how sea level rise could impact Singapore, Channel NewsAsia, March 2019
Generate checksum for the file
Determines whether to generate a checksum of the file. When using classic mode this requires that a network connection be made to the remote machine using a UNC path and the administrative shares; this is not a requirement when using PowerShell remoting. The checksum is used by the item comparer to detect when changes are made to the file.

Maximum File Size (KB)
The maximum size of the file for which the checksum should be read, in kilobytes.

The hash algorithm to use to generate the checksum.
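As a rough illustration of what such a checksum comparison involves (not this product's actual implementation), the sketch below computes a file's hash only when the file falls under a configured maximum size, using Python's standard hashlib; the parameter names mirror the settings described above but are otherwise hypothetical, as is the example file name.

```python
import hashlib
import os

def file_checksum(path, max_file_size_kb=1024, algorithm="sha256"):
    """Return the file's checksum, or None if it exceeds the configured maximum size."""
    if os.path.getsize(path) > max_file_size_kb * 1024:
        return None                      # skip oversized files, as the size limit intends
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)         # stream in chunks to avoid loading the whole file
    return digest.hexdigest()

# A change-detection pass: re-hash and compare against the previously stored value.
previous = file_checksum("example.cfg")
# ... the file may be modified here ...
if file_checksum("example.cfg") != previous:
    print("File has changed since the last scan.")
```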
Cybercriminals are making use of every trick in the book, old and new, to launch damaging cyberattacks against businesses across the world, and spoofing has become a key weapon in their arsenal. So, understanding the nuances of spoofing is crucial for every IT professional and business owner. Spoofing, a deceptive tactic used by cybercriminals to disguise communication from an unknown source as being from a known, trusted source, can lead to significant security breaches. In this comprehensive blog, we delve into the world of spoofing to shed light on why it's a critical topic for everyone to grasp. We will unravel how spoofing operates, explore the various forms it takes and offer insights into detecting these cunning attacks. As we navigate the complexities of spoofing, we will also introduce robust security awareness training and email security solutions, like BullPhish ID and Graphus, designed to fortify your defenses against these deceptive attacks. We encourage you to learn more about ID Agent's lineup of world-class security tools that protect against a wide range of cyber threats (including spoofing) by booking a demo to see them in action. These resources will help you better understand and combat spoofing attacks.

What is spoofing?

Spoofing, in the realm of cybersecurity, refers to the art of deception where a malicious party disguises itself as a known or trusted entity to gain unauthorized access, spread malware, steal data or launch attacks. It's a technique that leverages social engineering at its core, manipulating human psychology to breach security protocols. This threat is particularly pertinent in today's digital age, where the integrity of communication channels and data authenticity are foundational to cybersecurity.

At the heart of spoofing lies the exploitation of trust. Cybercriminals often use spoofing to masquerade as legitimate sources, such as reputable companies or familiar contacts, thereby tricking individuals or systems into revealing sensitive information, clicking on malicious links or unknowingly granting access to secure networks. The harm caused by spoofing extends beyond individual attacks, as it can undermine the overall trust in digital communications and transactions.

Understanding and recognizing spoofing attacks is crucial. These can manifest in various forms, including email spoofing, caller ID spoofing, website spoofing and IP address spoofing, among others. Each type has its unique methods and targets, but they all share the common goal of deceiving the victim into believing they are interacting with a trusted entity.

What is the difference between spoofing and phishing?

Understanding the nuances between spoofing and phishing comes in handy when training users to improve your organization's cyber resilience. Here's a brief comparison of the two.

Aspect | Spoofing | Phishing
Definition | Spoofing involves disguising communication from an unknown source as being from a known, trusted source. | Phishing is a broader strategy used to trick individuals into divulging sensitive information or clicking malicious links.
Primary objective | To gain trust by impersonating a legitimate source. | To deceive individuals into compromising their security by exploiting the established trust, often through spoofing.
Technique | Often involves changing sender information in emails, manipulating caller IDs or creating fake websites. | Typically includes sending deceptive emails that appear legitimate, often made credible through spoofing.
Target | Systems and individuals, to make them believe they are interacting with a trusted entity. | Individuals, aiming to extract sensitive information like login credentials or financial data.
Methodology | More technical, focusing on the alteration of communication or data. | More psychological, leveraging social engineering to manipulate human behavior.

Check out our on-demand webinar to learn about the top phishing scams employees fall for. The webinar highlights real-life phishing scams that have cost businesses dearly and reveals phishing simulations that employees frequently fall victim to.

How does spoofing work?

Spoofing attacks manipulate the identity of a communication source to masquerade as a trusted entity. This deception is not just technical but psychological, exploiting the human tendency to trust familiar sources. The effectiveness of spoofing lies in its ability to bypass initial skepticism, leading victims to reveal sensitive information or grant access to secure networks. These attacks have proliferated across various media, including email, text and phone calls, adapting to different communication channels to maximize their deceptive potential. In email spoofing, for example, attackers alter email headers to make messages appear to come from legitimate sources. Text message spoofing can manipulate sender information to deliver deceptive SMS. Similarly, phone or caller ID spoofing can trick victims into believing they are speaking with trusted individuals or organizations.

What is an example of spoofing?

A typical email spoofing attack might unfold as follows:
- Preparation: The attacker chooses a target individual or organization and conducts thorough research to identify a trusted contact or entity familiar to the target.
- Email header manipulation: Using specialized software, the attacker alters the email header, changing the "From" address to mimic the trusted contact's email address.
- Crafting the message: The email is crafted to appear legitimate, often including urgent or enticing content to prompt immediate action from the recipient.
- Sending the email: The spoofed email is sent to the target, who, believing it to be from a known source, is more likely to trust its contents.
- Action by target: The target interacts with the email, such as clicking a link, downloading an attachment or replying with confidential information, leading to potential security breaches or data theft.

By understanding the mechanics behind spoofing attacks, individuals and organizations can better prepare and protect themselves from these deceptive practices.

Types of spoofing

Spoofing attacks, taking on many forms, can infiltrate various communication mediums. Each type has unique characteristics, making them distinct yet equally dangerous.

Email spoofing
Email spoofing involves forging an email header so that the message appears to come from a known, legitimate source.
It tricks recipients into thinking the email is from a trusted sender, increasing the likelihood of the recipient revealing sensitive information or clicking on malicious links.

IP spoofing
IP spoofing masks a hacker's IP address with a trusted, known IP address. This technique is used to hijack browsers or gain unauthorized access to networks, masking the attacker's true location and identity.

Website spoofing
Website spoofing is the creation of a replica of a trusted website to deceive individuals into entering personal, financial or login details. This often involves duplicating the design of legitimate sites to collect user data.

Caller ID spoofing
Caller ID spoofing disguises a caller's identity by falsifying the information transmitted to the recipient's caller ID display. It's often used in scam calls to make them appear legitimate or to mask telemarketing calls.

Facial spoofing
Facial spoofing involves using photos, videos or masks to trick facial recognition systems. This type of spoofing is a concern in security systems that use facial recognition technology for authentication.

GPS spoofing
GPS spoofing tricks a GPS receiver by broadcasting counterfeit GPS signals that are structurally similar to authentic signals. This can mislead GPS receivers on location and time and is often used in hacking drones or misleading tracking systems.

Text spoofing
Text spoofing, similar to email spoofing, involves sending text messages from a falsified number or name. It tricks recipients into believing the message is from a known contact or entity and is often used in scams and phishing attacks.

Address Resolution Protocol (ARP) spoofing
ARP spoofing, or ARP cache poisoning, involves sending falsified ARP messages over a local area network. This exploits the process of linking an IP address to a physical machine address and is used in so-called man-in-the-middle attacks.

Domain Name System (DNS) spoofing
DNS spoofing, or DNS cache poisoning, involves corrupting the DNS server's address resolution cache, leading users to a fraudulent website instead of the legitimate one, often to steal credentials or distribute malware.

How to detect spoofing

Detecting spoofing involves being vigilant for certain red flags that indicate an attempt to deceive or mislead. Here are common signs that should raise your suspicions and warrant further scrutiny:
- Urgent language or calls to action: Spoofers often create a sense of urgency to provoke immediate action. Phrases like "urgent action required" or "immediate response needed" are used to rush you into acting without thinking.
- Errors in spelling, grammar or syntax: Legitimate organizations typically ensure their communications are free of errors. Misplaced commas, misspelled words and awkward phrasing can all be indicators of a spoofing attempt.
- Unusual link addresses: Before clicking on any link, hover over it to preview the URL. If the address looks suspicious or doesn't match the expected destination, it's likely a sign of spoofing.
- Altered web URLs or email addresses: Spoofers may use addresses that look similar to the real ones but with slight alterations. Look for subtle changes, such as the inclusion of unexpected characters or misspellings.
- Missing encryption or certification: Secure and legitimate websites use encryption to protect your information. A lack of HTTPS in the URL or a missing padlock icon in the address bar indicates a lack of security, which is a common characteristic of spoofed sites.
- Unknown or suspicious numbers: In voice spoofing, calls from unknown or suspicious numbers, especially those that closely resemble familiar numbers, can indicate a spoofing attempt designed to make you think the call is from a trusted source.

How to prevent email spoofing

As we transition into discussing solutions that help prevent email spoofing, it's important to understand how security awareness training and anti-phishing software can play a pivotal role in combatting this cyber threat.

Security awareness training
Security awareness training is vital in preventing email spoofing. It educates employees about the nature of spoofing attacks and how to recognize them. This training typically includes identifying suspicious email elements, such as mismatched URLs and unexpected email requests. By raising awareness and promoting vigilance, this training empowers individuals to become an important line of defense against spoofing attempts.

Anti-phishing software
Anti-phishing software provides an essential layer of defense against email spoofing. These tools use advanced algorithms to scan emails for signs of spoofing, such as header anomalies and suspicious sender addresses. They can also verify the authenticity of links and attachments, providing real-time alerts to prevent users from falling victim to spoofed emails. By automating the detection process, anti-phishing software significantly reduces the chances of successful spoofing attacks.

Both of these solutions can effectively protect against email spoofing, offering educational and technological means to identify and neutralize threats before they cause harm.

Stop email spoofing attacks with ID Agent

Tackling email spoofing effectively requires robust solutions like BullPhish ID and Graphus. These tools are specifically designed to foster a security-aware culture and provide strong anti-phishing mechanisms to prevent spoofing.

BullPhish ID is a comprehensive security awareness training and phishing simulation solution. It delivers educational video content and simulates phishing attacks to train users to recognize and report spoofing and other phishing tactics. Its key features include ready-made and customizable phishing kits that resemble real-world attacks, making it an effective tool for enhancing vigilance against email spoofing.

Graphus employs powerful anti-phishing technology to provide an automated defense against email threats. It uses AI to analyze communication patterns and detect anomalies that indicate spoofing attempts. Graphus' TrustGraph, EmployeeShield and Phish911 features work together to identify and quarantine malicious emails and help users spot and report suspicious messages, ensuring immediate and efficient protection against email spoofing.

Explore our advanced security solutions and elevate your cybersecurity strategy. Don't just stop at learning; experience the difference firsthand by booking a demo today! Our Partners typically realize ROI in 30 days or less. Contact us today to learn why 3,850 MSPs in 30+ countries choose to Partner with ID Agent!
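To make header-based detection more tangible, here is a minimal Python sketch of one check an anti-phishing tool or a trained reviewer might apply: comparing the visible "From" domain against the Return-Path and Reply-To domains and flagging urgent language. It is a teaching example with a made-up message, not a description of how BullPhish ID or Graphus actually work.

```python
# Toy example: flag simple spoofing indicators in a raw email.
from email import message_from_string
from email.utils import parseaddr

raw = """From: "Acme Support" <support@acme-corp.com>
Return-Path: <bounce@mail-blast.example.net>
Reply-To: billing-update@acme-c0rp.com
Subject: Urgent: verify your account

Please verify your account immediately.
"""

msg = message_from_string(raw)

def domain(header_value):
    """Extract the lowercase domain from an address header, or '' if absent."""
    address = parseaddr(header_value or "")[1]
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

from_domain = domain(msg["From"])
findings = []
if domain(msg["Return-Path"]) and domain(msg["Return-Path"]) != from_domain:
    findings.append("Return-Path domain differs from the From domain")
if domain(msg["Reply-To"]) and domain(msg["Reply-To"]) != from_domain:
    findings.append("Reply-To domain differs from the From domain")
if "urgent" in (msg["Subject"] or "").lower():
    findings.append("urgent language in the subject line")

print(findings or ["no obvious spoofing indicators"])
```

Real products combine checks like these with SPF, DKIM and DMARC validation and with learned communication patterns, so treat this purely as an illustration of the concept.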
SiCortex opens up its press release from the show last week with an interesting perspective on our business – not the HPC bit, but the things we have HPC for:

One hundred years ago, Lewis Fry Richardson completed the first true scientific simulation, a two-dimensional model to predict the behavior of a dam under stress. This project involved a team of human "computers" performing slide-rule calculations for a mind-and-finger-numbing period of three years.

SiCortex masterfully tied Richardson's advances to their own product and what they were showing at the show.

This week, a team of humans performed a similar simulation, this time by pedaling bicycles to power a supercomputer. The demonstration was held at SC08 in Austin, Texas, by students from Purdue University and seven cyclists from Mellow Johnny's Bike Shop, a venture partially owned by Lance Armstrong. The Purdue students used a SiCortex SC1458 high-productivity computer to complete a three-dimensional dam-break simulation in less than 20 minutes. "Supercomputing has come a long way over the last hundred years," said SiCortex president and CEO Chris Stone.

This is one of the best examples of "press release as readable story" that I've run across in two years of reading press releases in HPC. By the way, this is the same Richardson who lent his name to the Richardson number in CFD. He also did a bit with weather modeling and fractals — the guy got around. From the Wikipedia entry:

One of Richardson's most celebrated achievements is his retrospective attempt to forecast the weather on a single day — 20 May 1910 — by direct computation. At the time, meteorologists carried out forecasts principally by looking for similar weather patterns from past records, and then extrapolating forward. Richardson attempted to use a mathematical model of the principal features of the atmosphere, and calculate the next day's weather ab initio from data taken at a specific time (7 am). As Lynch makes clear, Richardson's forecast failed dramatically, predicting a huge 145 mbar rise in pressure over 6 hours when the pressure actually stayed more or less static. However, detailed analysis by Lynch has shown that the cause was a failure to apply smoothing techniques to the data, which rule out unphysical surges in pressure. When these are applied, Richardson's forecast turns out to be essentially accurate — a remarkable achievement considering the calculations were done by hand, and while Richardson was serving with the Quaker ambulance unit in northern France.
Maintaining a healthy body weight is no simple matter. A better understanding of how the body regulates appetite could help tip the scale toward the healthy side. Contributing toward this goal, a team led by researchers at Baylor College of Medicine and the University of Cambridge reports in the journal Nature Communications that the gene SRC-1 affects body weight control by regulating the function of neurons in the hypothalamus – the appetite center of the brain. Mice lacking the SRC-1 gene eat more and become obese. SRC-1 also seems to be involved in regulating human body weight. The researchers identified in severely obese children 15 rare SRC-1 genetic variants that disrupt its function. When mice were genetically engineered to express one of these variants, the animals ate more and gained weight. “The protein called steroid receptor coactivator-1 (SRC-1) is known to participate in the regulation of body weight, but its precise role is not clear,” said co-corresponding author Dr. Yong Xu, associate professor of pediatrics and of molecular and cellular biology and a researcher at the USDA/ARS Children’s Nutrition Research Center at Baylor College of Medicine and Texas Children’s Hospital. “Here we explored the role of SRC-1 in the hypothalamus, a brain area that regulates appetite.” The researchers discovered that SRC-1 is highly expressed in the hypothalamus of mice, specifically in neurons that express the Pomc gene. Pomc neurons are known to regulate appetite and body weight. Further experiments showed that SRC-1 is involved in regulating the expression of Pomc gene in these cells. When Xu and his colleagues deleted the SRC-1 gene in Pomc neurons, the cells expressed less Pomc and the mice ate more and became obese. The researchers also explored whether SRC-1 also would play a role in regulating human body weight. “We had identified a group of severely obese children carrying rare genetic variants in the SRC-1 gene,” said co-corresponding author Dr. I. Sadaf Farooqi, professor of metabolism and medicine in the Department of Clinical Biochemistry at the University of Cambridge and Wellcome Trust Principal Research Fellow. Working together, Xu, Sadaf Farooqi and their colleagues found that many of the SRC-1 variants in the obese children produced dysfunctional proteins that disrupted the normal function of SRC-1. On the other hand, SRC-1 variants in healthy individuals did not disrupt SRC-1 function. Furthermore, mice genetically engineered to express one of the human SRC-1 genetic variants found in obese children ate more and gained weight. This is the first report of SRC-1 playing a role in the hypothalamus in the context of body weight control. “By providing evidence that bridges basic and genetic animal studies and human genetic data, we have made the case that SRC-1 is an important regulator of body weight,” Xu said. More information:Nature Communications (2019). DOI: 10.1038/s41467-019-08737-6 Journal information: Nature Communications Provided by Baylor College of Medicine
In the face of escalating global energy consumption and the resultant climate crisis, the need to optimize data center efficiency has become a concern. The carbon footprint of data centers has reached unprecedented levels, currently contributing to 3.5 percent of global greenhouse gas emissions, according to the International Energy Agency (IEA). Without decisive action, this figure is only set to rise. Addressing data center efficiency, particularly in the context of cooling systems, is paramount. Such cooling systems, essential for managing the heat generated by continuously operational servers and electronic components, can easily account for over 30 percent of a data center's total power usage. The choice between different cooling technologies and components in such systems becomes crucial. In this article, we’ll specifically address speed control solutions for cooling – electronically commutated (EC) fans, that integrate a motor, drive and speed controller as a single assembly, versus systems that combine a discrete drive, motor and fan, to show the effect on cooling system performance. The challenge of sustainable data center cooling Given the energy demands of densely packed data centers, selecting the right components and configuration for cooling systems is essential for energy savings. When it comes to air movement solutions, the question then arises: which is the greener choice, EC fans, or discrete drive and motor systems? While both technologies aim to enhance energy efficiency, they operate differently, with their suitability depending on specific data center needs and operational characteristics. What’s more, a combination of technologies may be necessary for the most efficient and sustainable cooling solution. The benefits of EC fans EC fans, known for their energy efficiency through brushless direct current (DC) motor technology and integrated control electronics, offer a considerable reduction in energy consumption. Their integrated design for easier installation, precise speed control and reduced noise levels make them favored in various industries, not least data centers. EC fans contribute to optimized airflow and temperature management, aligning with the need for energy efficiency. But EC fans have limitations at part loads – the realm in which data center cooling systems tend to operate – where their efficiency drops significantly. Maintenance and replacement also pose challenges, as the entire EC fan unit often needs replacement, impacting cost, operational continuity, and sustainability. Optimizing systems with drives Variable speed drives are an alternative, usually standalone speed control solution for fan, pump and compressor motors. These drives prove a great option for achieving optimal energy efficiency and performance in data center cooling systems, especially when paired with highly efficient fan motors. Ultra-low harmonic (ULH) drives specifically offer advanced design that sets them apart from common nowadays EC fans, offering unique features that make them a sound choice. 1. Partial load efficiency Data centers operate at partial loads most of the time. This is why it is crucial to select equipment based on its partial load efficiency. Fans equipped with highly efficient IE5 SynRM motors and variable speed drives typically achieve greater energy efficiency than EC fans, especially at partial loads—often showing at least a 3% to 5% improvement in efficiency under these conditions. 2. 
Power quality It’s important to know that variable speed solutions, including EC fans and variable speed drives, can generate electromagnetic disturbances in the power network called harmonics which affect the network efficiency and reliability of connected equipment. Furthermore, due to increased current, they impose an extra load on transformers and other power network components, resulting in their overload, unless they are oversized. Thanks to their advanced design, ULH drives produce minimal harmonics under various load conditions, keeping the Total Harmonic Distortion of the input current (THDi) below 3%, thereby meeting the strictest power quality standards. The ULH design not only safeguards data center equipment from the adverse effects of harmonics, but also contributes to a more stable, reliable, and efficient power supply for critical cooling systems. And there is no need to oversize power network equipment to manage harmonic currents, which means direct improvements to the facility’s capital expenditure and carbon footprint. 3. Operational reliability Reliability is key to successful data center operations. Atlassian reports that the estimated cost of data center downtime ranges from $100,000 per hour to over $540,000 per hour. And the potential reputational damage can have even more severe financial implications. Let's evaluate the resilience of the speed control solution to externally-generated power quality issues. It's essential for these systems to be immune to potential power disturbances, which can unexpectedly occur even in mission-critical facilities like data centers. These disturbances can take the form of voltage sags, brownouts, and other forms of undervoltage conditions. The ability of a speed control solution to maintain operation during power loss (ride-through) and automatically restart is vital for managing cooling systems in data centers. If the supply voltage is off, variable speed drives can continue to operate using the kinetic energy of the rotating motor they control. The drive will stay operational as long as the motor rotates and generates energy that is then fed to the drive. This helps to in avoid the need to restart cooling equipment, which can take several minutes. This feature is particularly crucial in the energy-dense environment of data centers, where the temperature in computer rooms can rise by approximately 2-3 degrees Celsius per minute without active cooling. Such rapid temperature increases can lead to the quick shutdown of IT servers. Therefore, variable speed solutions that are resilient to power quality issues are highly recommended to ensure continuous and reliable cooling. 4. Running multiple fan motors An important advantage of using standalone drive technology is that a single drive can synchronize the operation of multiple drive motors. This configuration reduces the need for individual drives for each motor, as one drive can efficiently control, for example, every three fans. This streamlining lowers system complexity, cost, and power losses. Additionally, synchronizing multiple fans under a single drive ensures even airflow and temperature distribution across the data center, enhancing the cooling efficiency. For redundancy, a standby drive can be on hand to take over immediately should the primary drive fail. 5. 
System integration and control HVACR drives commonly include the popular building automation protocol BACnet as a standard feature, streamlining the integration of fan installations into most data center control systems. In contrast, BACnet is only sometimes provided with EC fans, which more frequently rely on Modbus and require an intermediary controller or gateway to enable communication with the data center's BACnet-based system. Even in cases where EC fans are equipped with BACnet, there is often still a need for an intermediary controller to facilitate general control or to provide manual (hand) control functionality, which is a standard expectation for air handlers and fan walls. However, this additional controller introduces a potential single point of failure risk that could take an entire fan array offline—ironically counteracting the very purpose of having a manual mode. Manual control is designed to be a fallback when the automated controls fail, and it is typically included as a standard feature with most variable speed drives. 6. Ease of maintenance and replacement When selecting technologies for cooling systems, it’s also important to consider how easy they are to repair or replace. Although packaged EC fan solutions might appear appealing in terms of installation convenience, given that most components come pre-assembled, they have a drawback in the event of a malfunction. Typically, if failure occurs, it's necessary to replace the entire unit, even if only a single component such as a fan bearing or control electronics is defective. From a sustainability standpoint, this approach is not ideal because it results in excessive material use and waste when smaller-scale repairs could be more resource-efficient. With fans that are operated by standalone drives, maintenance becomes more manageable as you can simply replace a faulty motor or drive, rather than the entire unit. This approach offers a strategic benefit as well, since it prevents data center owners from being tied to a single supplier. Different manufacturers' EC fans lack standardized dimensions, making it challenging to fit a replacement into an array built with fans from another provider. Adopting a long-term perspective In the context of the escalating climate crisis and increasing energy consumption by data centers, the choice of cooling technology becomes a strategic decision. Balancing immediate savings with long-term sustainability objectives is paramount. While EC fans may seem initially cost-effective, drives, especially ULH ones, present a more economical long-term solution. Besides mitigating harmonic distortion, they enable running multiple fan motors from a single drive, and offer easier system integration, control and maintenance. The comprehensive part load efficiency and minimized failure risk make ULH drives not only a technological leap, but a strategic investment in the sustainable evolution of data center operations, offering unmatched energy savings and reliability in cooling systems.
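To see why partial-load behaviour dominates the economics, here is a rough, illustrative calculation using the standard fan affinity law (shaft power scales roughly with the cube of speed). The duty profile and efficiency figures below are assumptions chosen for illustration, not measured values for any particular EC fan or drive-plus-SynRM package.

```python
# Back-of-the-envelope comparison of annual fan energy under a partial-load duty profile.
RATED_SHAFT_KW = 10.0                                  # assumed fan shaft power at 100% speed
DUTY_PROFILE = {0.5: 0.60, 0.7: 0.30, 1.0: 0.10}       # share of hours spent at each speed

def annual_kwh(efficiency_at):
    """Energy drawn from the grid per year for a given speed-to-efficiency curve."""
    hours = 8760
    total = 0.0
    for speed, share in DUTY_PROFILE.items():
        shaft_kw = RATED_SHAFT_KW * speed ** 3          # fan affinity law
        total += shaft_kw / efficiency_at(speed) * hours * share
    return total

# Hypothetical combined motor-and-electronics efficiencies at partial speed.
ec_fan    = lambda s: 0.82 if s >= 0.9 else 0.74
vsd_synrm = lambda s: 0.85 if s >= 0.9 else 0.79

print(f"EC fan:        {annual_kwh(ec_fan):,.0f} kWh/year")
print(f"Drive + SynRM: {annual_kwh(vsd_synrm):,.0f} kWh/year")
```

Because the cooling plant spends most of its hours at reduced speed, even a few percentage points of extra efficiency at partial load compound into a meaningful annual saving.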
Recently we've been comparing using Telnet with the Secure Shell protocol to allow remote access to a device such as a router or switch. Now, we're going to compare File Transfer Protocol (FTP) and Trivial File Transfer Protocol (TFTP) for a Cisco router or switch. These protocols can be used for managing the files that exist for backup purposes. To begin, the Cisco Internetwork Operating System (IOS) has many options for saving files. When you look at the options with the copy command, you can see many different locations where your files can be saved. As shown in example 1, you can see all the copying options that are available to use. Note: This display shows some of the most common options. These options may vary and are derived from the IOS version running on the device. You can see that it's possible to copy to and from the device using TFTP and FTP. You also may have learned that these protocols differ in how they deliver files to their destinations. FTP uses the Transmission Control Protocol, which provides reliability and flow control and can guarantee that the file will reach its destination while the connection is established. TFTP uses the User Datagram Protocol, which doesn't establish a connection and therefore cannot guarantee that files get to their destinations. When you compare these protocols on that basic difference, you may conclude that FTP will always be the better option due to its reliability. However, TFTP is a simpler protocol, its server uses less memory when supporting clients, and it can be a scalable solution for applications such as IP telephony. So with the basic differences established, let's make a comparison between TFTP and FTP with our copy options. I will be using freeware applications that will serve the function of an FTP server and a TFTP server. Example 2 displays the main log page of the SolarWinds TFTP server. Example 3 displays the FileZilla FTP server. If you don't already know, FTP requires usernames and passwords for access to the device. We will see how this is important later. These applications are necessary for the routers to act as client devices for the session. Additionally, you will see in the next example a test bed for comparison. To copy the running configuration file to the TFTP server in the example, you type the privileged EXEC command copy run tftp. This command is interactive, prompting you to enter the IP address of the TFTP server and the file name you want it to have when it arrives at the destination. If the command copy tftp run is entered, then a file can be merged with the running-config. There you will be prompted to specify the IP address of the TFTP server, the file name on that server, and the file name you want it to have on this device (if copying to flash). Example 5 displays the file being sent to the TFTP server and how it can be viewed when it's received from the router. Here you can see that the file (RTR1.TXT) was successfully received on this device. Using this is important for backing up or upgrading files such as the startup-config, vlan.dat, SDM files, and even the IOS itself. Additionally, you may find that it is necessary to have a router act as a TFTP server for backup purposes. This is accomplished with the global configuration command tftp-server <file system> <file name>. The following examples display how a file can be made available for delivery via TFTP. This is useful in scenarios when redundant routers need to back up every option or function.
By having routers back up one another's IOS images, or having a dedicated TFTP server, multiple layers of redundancy can be achieved. Copying files using FTP is similar but requires more setup. Example 7 displays the configuration commands necessary for FTP. As mentioned before, File Transfer Protocol uses usernames and passwords. Therefore, routers or switches are required to have a username and password configured for FTP. This is done with the global configuration commands ip ftp username <username> and ip ftp password <password>. If there is a requirement for the router to act in passive mode (meaning the FTP server will provide the client a dynamic port that it will use for its data connection, as opposed to active mode, where the client provides the server the dynamic port to be used for the data connection), it can be enabled with the global configuration command ip ftp passive. Lastly, for security purposes, the FTP server may have an IP filter or access list that specifies which IP addresses are allowed to connect. The command ip ftp source-interface <interface name> is used to specify which IP address will be used on the router; otherwise, the router will select the IP address of the interface that connects in the direction of the server. Examples 8 and 9 illustrate how to configure the commands and how to verify their configuration. With all of the setup in place, you can now copy a file to and from the FTP server as illustrated below. As you can see, it appears to take much longer to send this file than it took to send the earlier file via TFTP. But if you were sending an extremely large file across a WAN or to a distant end location, you will find that FTP is more useful because of its window sizing and the sequence numbers that are used for reassembly and reliability. In conclusion, you can see the differences between using FTP and TFTP. Backing up and upgrading files on IOS-based devices can be achieved when these commands are used. Using these protocols is a necessary skill that should be practiced by any technician or network engineer.
Author: Jason Wyatte
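As a closing illustration, the same kind of backup can also be driven from a management workstation rather than from the router CLI. The snippet below is a hedged sketch using Python's standard ftplib module; the server address, credentials and file names are placeholders, and a production script would add error handling and credential management.

```python
# Upload a saved configuration file to an FTP server from a management host.
from ftplib import FTP

def upload_backup(server, username, password, local_file, remote_name):
    with FTP(server) as ftp:
        ftp.login(user=username, passwd=password)
        ftp.set_pasv(True)                       # passive mode, as discussed above
        with open(local_file, "rb") as handle:
            ftp.storbinary(f"STOR {remote_name}", handle)
        print(ftp.nlst())                        # list the directory to confirm the upload

upload_backup("192.168.1.50", "backup", "backup-pass",
              "RTR1-running-config.txt", "RTR1.TXT")
```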
Hyperscale data environments are usually built on hyper-dense hardware infrastructure, which inevitably requires a hyper amount of energy consumption to maintain adequate power and cooling levels. But now that the public is keenly aware of the impact that worldwide data systems have on the energy grid – some estimates put the complete data center/Internet/all-things-digital power draw as high as 10 percent of total capacity – many of the leading hyperscale users have begun talking up the use of renewable sources like wind and solar. Through efficient design and innovative load management techniques, the hope is that data environments could one day become completely carbon neutral. But is that realistic? When you’re talking about a data center with perhaps millions of processing cores, is there a wind farm or solar array big enough to reliably meet all of its energy needs? The short answer is no, but that hasn’t stopped top data providers like Google and Facebook from trying. In Iowa, Facebook has been touting a 100 percent wind-powered data center, but, as ZDNet’s David Chernicoff notes, that is a goal, not a present reality. The idea is to locate the center near a large wind farm built in cooperation with the local utility that would feed into the existing power grid. Facebook would draw off that grid, pulling energy from both the farm and any number of traditional power plants in the area. To get to 100 percent renewable, Facebook plans to turn in Renewable Energy Certificates awarded by the EPA for its support of the wind farm, which will be used to offset its standard energy consumption. So the 100 percent figure exists on paper only, but in all fairness this is the only way a company like Facebook can employ renewable energy at all while retaining the reliability of service that its business depends upon. Google is taking a similar tack with solar. The company recently announced plans for six new solar energy plants across the U.S., bringing its total investment in renewable energy to about $1 billion. As you would expect, most of the solar facilities are in the southwest, with wind and water facilities located wherever they are environmentally appropriate. And as in Facebook’s case, they are also intended to supply local power grids where the energy will be used by everyone from homeowners to large corporations. At the moment, Google estimates that it receives about 20 percent of its energy supply from renewables, even though it, too, has set a 100 percent goal for itself. Part of the problem with utility-based electricity, whether renewable or not, is that power generation is so far removed from power consumption. The fact is that an enormous amount of energy, perhaps 80 percent, is lost on the journey from the source to the data component. This is why Microsoft, for one, is looking to place fuel cells directly into the server rack where they can power equipment without undergoing the multiple conversion processes that account for much of the loss. Even still, results so far have shown perhaps a doubling of efficiency to about 40 percent, which means more than half of what the enterprise is paying for in the fuel cells is wasted. But the company is working on a number of new designs that may produce a solution that is both scientifically sound and financially viable. Clearly, then, a lot of progress has been made in the use of renewables in support of data infrastructure, and it seems that future research can proceed down any number of avenues. 
And even if 100 percent renewable proves to be an elusive goal, the planet would still be better off with its data infrastructure reducing fossil fuel consumption even by a fraction.
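As a quick back-of-the-envelope illustration of the figures quoted above (roughly 80 percent of energy lost between source and data component, versus in-rack fuel cells at about 40 percent efficiency), the arithmetic looks like this. The numbers are the article's rough estimates, not measurements.

```python
# Useful energy delivered per 100 kWh of source energy under the article's rough figures.
source_energy_kwh = 100.0

grid_path_useful = source_energy_kwh * (1 - 0.80)   # ~20 kWh reaches the data component
fuel_cell_useful = source_energy_kwh * 0.40         # ~40 kWh reaches the data component

print(f"Grid path:  {grid_path_useful:.0f} kWh of useful energy per 100 kWh")
print(f"Fuel cells: {fuel_cell_useful:.0f} kWh of useful energy per 100 kWh")
```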
What is a robocall, and how to stop robocalls? Robocalls are automated phone calls, often associated with scams and unwanted solicitations, which can be a nuisance to individuals and businesses alike. According to Statista, around 87% of the US population owns a smartphone, which means nearly 9 in every 10 Americans is a potential target for robocalls. Robocalls often start with a prerecorded message but can switch to a live call with an operator. Tracing robocall scammers can be challenging as many use advanced tools to mask their locations and may also be calling from overseas. Read this comprehensive guide for more on: - Robocalls meaning: What is a robocall? - How do robocalls work? - Robocall examples - Stopping robocalls: steps to take to stop robocalls What is a robocall As the name suggests, Robocalls are automated phone calls that deliver pre-recorded messages to a large number of recipients simultaneously. Robocalls are typically made using computerized systems and can be used for various purposes, including political campaigns, telemarketing, and even public service announcements. However, they are primarily associated with fraudulent activity and scams, making them a significant security and privacy concern for anyone with a phone. How do robocalls work Robocalls work by using automated systems, which can quickly dial thousands of phone numbers. These autodialers are programmed to call a list of phone numbers and deliver pre-recorded messages once the call is connected. The autodialers can be set to call specific area codes or phone number prefixes, allowing robocallers to target specific regions or demographics. Robocalls can also employ deceptive tactics such as caller ID spoofing, which allows the caller to display a fake or misleading phone number on the recipient’s caller ID. The spoofing attack technique is often used by scammers to mimic legitimate organizations or government agencies in order to trick people into answering calls. Types of robocalls: Robocall examples Debt collection robocalls Debt collection robocalls are usually from creditors or debt collectors attempting to collect overdue payments. Such robocalls can be intimidating and relentless, employing aggressive tactics to pressure individuals into paying their debts. Please remember to verify the legitimacy of such calls to avoid falling prey to scams. Healthcare or Medicare robocalls Robocalls related to healthcare or Medicare can be scams. Such calls may claim to offer free medical services, insurance coverage, or prescription drugs. Avoid sharing any sensitive information with such callers. Charity robocalls tug at our heartstrings, appealing to our desire to make a difference. Unfortunately, many of these calls are fraudulent and aim to exploit our generosity. Scammers may pose as representatives of well-known charitable organizations. It’s best to donate directly to charities through their official channels to avoid being deceived by scammers. Car warranty robocalls Car warranty calls may claim that your car warranty is about to expire and offer extended coverage options. However, many of these calls are scams trying to obtain personal and financial information. During election seasons, political robocalls flood our phones, promoting candidates, parties, or agendas. While such calls may be legal in some locations, they can still be intrusive. Fortunately, you may be able to opt out of receiving political robocalls in some jurisdictions. 
Telemarketing robocalls are automated calls aiming to sell products or services. While legitimate telemarketing calls do exist, many robocallers pretend to be telemarketers in an attempt to defraud consumers. Free trial scams type of robocalls Free trial scams are designed to lure unsuspecting individuals into providing their credit card information for a supposedly free trial period. However, once the trial ends, charges are applied to your credit card without consent. Please carefully read the terms and conditions before signing up for any free trial. Please also avoid sharing your credit card information with untrustworthy parties. Loan scams robocalls Loan scams target people that need financial assistance, promising quick and easy loans. These predatory scams involve upfront fees or requests for personal information, resulting in financial loss or identity theft. Spoofing scams involve manipulating caller ID information to deceive recipients into thinking the call is from a trusted source. Scammers may impersonate government agencies, financial institutions, or even friends and family. Please be wary of such calls. In addition to manipulating caller ID, scammers may use AI-generated voices to trick you into thinking the call is from someone you know. Travel scams target people seeking discounted travel deals. Travel scam robocalls may use enticing offers for luxurious trips at low prices. However, these offers are usually fraudulent. As usual, the goal of such scams is to gain your sensitive information or money. Always book your travel through trusted agents or websites. Foreign robocalls are calls originating from outside your country and are a growing threat. Foreign robocalls can be problematic because overseas scammers exploit international regulations and law enforcement limitations. In addition, overseas scammers may use technology to manipulate caller ID in order to trick you into thinking the call is from a local number. Location verification robocalls Location verification robocalls are designed to confirm where you live. Such calls may ask you to enter or confirm your zip code or area code. Although the calls may seem harmless, location verification robocalls are often a ploy to collect data for malicious purposes. Voice verification robocalls In a voice verification robocall, a scammer may try to trick you into speaking to record your voice. For example, they may ask questions that lead you to say your name or the word “yes.” The goal of such calls is to collect enough data to impersonate you for fraud. What happens if you answer a robocall: Potential risks 1: Your phone bills can go up If your cellular provider charges you for every call that you engage with by the minute, your phone bill could spike due to robocalls. Blocking robocalls is the best solution to this problem. 2: You may become a target of identity theft It’s no secret that robocalls threaten user privacy. Scammers may use automated calls to gather personal information from unsuspecting victims by posing as authority figures. For example, they may pose as people from organizations like banks or government agencies and ask for sensitive details like social security numbers or credit card information. Any confidential information you share with a scammer may be used for identity theft. 3: You’re at risk of getting malware Some robocalls are designed to initiate a malware attack on your system. For example, a robocall tech support scam will start with an automated voice claiming your computer has a virus. 
If you follow the call’s instructions, you may actually download malware to your computer or device. Please ensure that you have security software installed on your computer and phone. In addition, avoid clicking any suspicious links texted or emailed to your device. 4: You may lose money as a result of a robocall Scammers can use robocalls to trick their targets into giving away their money. They may employ tactics such as offering fake investment opportunities or claiming that you owe money to a certain organization. They may even pretend to be the police, like with fake law enforcement arrest warrant calls. Please be skeptical of any unsolicited calls that request immediate payment or promise unrealistic returns. 5: You may get your voice recorded without your knowledge Some robocalls may record your voice without your knowledge for malicious goals such as impersonation or blackmail. That’s why you must avoid engaging in conversations with unknown callers. Protecting your voice is just as important as protecting your personal information in a world where AI can quickly and convincingly mimic your voice. What to do if you answer a robocall - Hang up as soon as you realize that it is an automated robocall - Don’t follow any instructions - Avoid giving any personal information - Do not engage with the call - Secure your accounts - Report the robocall As soon as you realize that you have answered a robocall, hang up immediately. Do not engage in any conversation or respond to any prompts to minimize the risk of fraud. Don’t follow any instructions Robocalls may carry instructions or requests, such as a demand for payment or information. Please remember that legitimate organizations usually don’t request sensitive information over the phone. Avoid giving any personal information Never provide any personal information, such as social security numbers, credit card details, or passwords over the phone to an unknown caller. If you’re concerned, contact the organization directly using a trusted phone number or website. Do not engage with the call Engaging in conversation with a scammer can lead to undesirable consequences. Remember, some experienced fraudsters are skilled at manipulating conversations to extract personal information or money. Secure your accounts More than a quarter of Americans fell for scam calls in 2021, with many suffering from identity theft or financial losses as a result. So, please take immediate action to secure your accounts if you think you may have been targeted. Change passwords for your bank accounts, email, and any other sensitive online platforms. Monitor your financial statements closely for any suspicious activity and report any strange transactions to your financial institution. Report the robocall Contact your phone service provider and provide them with details about the robocall. You can also report scam calls to the Federal Trade Commission (FTC) or the Federal Communications Commission (FCC). The FCC is coming down hard on robocallers, and consumers need to do their part. How to stop robocalls Tip 1: Use the Do Not Call registry The Do Not Call registry is a database maintained by the Federal Trade Commission (FTC). You can use it to opt out of receiving telemarketing calls. Simply visit their website or call their toll-free number to register your phone number. However, many scammers typically disregard this registry, so you must take other measures to stop robocalls. 
Tip 2: Don’t answer unfamiliar numbers Robocallers often use spoofing techniques to make their calls appear legitimate. By not answering these calls, you deny scammers the opportunity to engage with you or verify your number. If the call is important and authentic, the caller will leave a voice message. Tip 3: Do Not Disturb Most smartphones offer a Do Not Disturb feature that lets you silence calls and notifications from unknown or blocked numbers. Use this feature to filter your calls. Tip 4: Use third-party blockers Many third-party blockers utilize advanced algorithms to identify and filter out unwanted calls. But before installing any of these apps, research their features, user reviews, and compatibility with your device. Avoid downloading blockers that seek unnecessary permissions. Tip 5: Use spam filtering Most smartphones have built-in spam filtering capabilities that automatically detect and divert suspected spam calls. Activate this feature to stop spam calls. Tip 6: Block numbers You can manually block frequent robocall numbers. While scammers often use different numbers, blocking some of them can offer a little relief. Tip 7: Avoid giving out your phone number Minimize the exposure of your phone number to reduce the likelihood of receiving robocalls. Provide your number to businesses selectively and only share it with trusted individuals. In addition, consider using alternate contact methods, such as email or instant messaging. How to stop robocalls on iPhone Apple provides options such as enabling Silence Unknown Callers, which automatically sends calls from unknown numbers to voicemail. You can also utilize third-party apps from the App Store that specialize in blocking robocalls. How to stop robocalls on Android Use the Block Unknown Callers on Android to send unidentified calls to voicemail. Additionally, you can try a reputable call-blocking app on the Google Play Store to stop robocalls on Android. How to stop robocalls on landlines To stop robocalls on landlines, you can consider subscribing to a call-blocking service offered by your telephone service provider. Such services may allow you to block specific numbers, filter out unwanted calls, and customize call settings. What is the difference between a spam call and a robocall? While both are unwanted and intrusive, spam calls refer to any unsolicited calls, including those made by humans. On the other hand, robocalls are automated calls that use pre-recorded messages or artificial intelligence to communicate with recipients. Please keep in mind that some robocalls start with pre-recorded messages but can shift to a live caller. How to recognize a robocall Scammers employ different tactics to trick unsuspecting victims into giving away valuable personal information or parting with their money. That’s why it’s essential to learn how to spot a scam on the phone. Here are some signs of a fraudulent robocall: - A recorded message that attempts to sell you something. - A voice that sounds like a real person but has off-timing responses. - Calls about products or services you do not own. - Messages that convey a sense of urgency and often end with a request for payment. - Messages that threaten you with legal action and often end with a request for payment. - The robocall tries to trick you into saying “yes” with a question like “Can you hear me?” - The call asks you to share confidential information. - The call claims to be from an authority like a government agency, bank, etc. What does it mean when you get a robocall? 
When you receive a robocall, you receive an automated call with a pre-recorded message. Chances are that thousands of other people are receiving the same call as you. Can a robocall hack your phone? A robocall can’t hack your phone. However, scammers and other threat actors can use them to trick you into downloading malware on your device. For example, a robocall may trick you into visiting a malicious website designed to deliver Trojans or spyware. Can robocalls steal your identity? No, robocalls can’t steal your identity. However, they can be used as an attack vector for identity theft. For example, a scammer may use a robocall to trick you into sharing sensitive information. How do I get a robocall to stop calling me? - Register your phone number on the National Do Not Call Registry to reduce the number of legitimate telemarketing calls you receive. As mentioned, this is unlikely to prevent illegal robocalls though, because scammers often disregard the registry. - There are various call-blocking services and apps that can help filter out unwanted calls. Use them to reduce the number of robocalls you receive. - Avoid sharing your phone number with untrusted sources. Please also exercise caution when filling out online forms or signing up for services. - If you receive a robocall, hang up immediately. In addition, block the number. - File a complaint with the FCC or the FTC when you receive an illegal robocall.
In today’s digital era, there is a constant threat of unauthorized access to sensitive data. Though privacy and security are a must for all organizations, the most targeted industries are financial corporations and payment systems. A single data breach costs an organization around $3.7 million on average, and Cybersecurity Ventures estimated that cybercrime would cost $6 trillion in 2021. Encryption is therefore central to safeguarding sensitive information, as it considerably mitigates the associated risks, and symmetric encryption in particular is now present everywhere in the digital ecosystem. This article explains how symmetric encryption works and examines its advantages and challenges. Simply stated, symmetric encryption is the technique in which the same key encrypts and decrypts the data sets or messages exchanged between systems. Such cryptographic techniques were initially used by governments for military communications; today, algorithm-based symmetric encryption is used everywhere to improve data security within computer systems. In symmetric encryption, the sending and receiving parties hold the same key, which is kept secret. During encryption, the algorithm transforms the data into a format that cannot be understood by anyone without the key; that confidential key is required to decipher the data back into a readable format. When the intended recipient receives the ciphered data, the same key transforms it back into readable form. The three main steps involved in symmetric encryption are generating and exchanging the secret key, encrypting the data with it, and decrypting the data with that same key. The secret key used by the sender and recipient might be a specially chosen passcode or a sequence produced by a random number generator. There are two main types of symmetric encryption: A. Block encryption: a set number of bits is ciphered as a block of electronic data with the help of a secret key, and the system holds the data in memory while it is ciphered. B. Stream encryption: the data is encrypted as it streams, rather than being held in the system’s internal memory. The most well-known and most commonly used symmetric encryption algorithms are: DES (Data Encryption Standard): a block cipher that works on 64-bit blocks with an effective key length of 56 bits (64 bits including parity). DES is one of the earliest symmetric encryption algorithms, but it is now considered insecure and obsolete. 3DES (Triple DES): unlike DES, this approach uses a bundle of two or three keys, running multiple rounds of encryption and decryption. The Triple Data Encryption Standard is much more secure than its predecessor, DES. AES (Advanced Encryption Standard): today the Advanced Encryption Standard is found everywhere in the cyber world. With key options of 128, 192, or 256 bits, it is far more efficient and secure than its predecessors. Although it is also a block cipher, it operates as a substitution-permutation network, which sets it apart from algorithms built on Feistel ciphers. Apart from these three, other symmetric algorithms include IDEA (International Data Encryption Algorithm), Blowfish, RC4, RC5, and RC6; of these, RC4 is a stream cipher. The main advantage of symmetric encryption over other ciphering techniques is its speed and efficiency in safeguarding vast amounts of sensitive data.
Symmetric algorithms provide a strong degree of safety, and their sheer simplicity is also a practical advantage, since they need less processing power. Furthermore, the protection level can easily be raised by increasing the length of the key: each additional bit roughly doubles the effort required to brute-force it. The biggest challenge in symmetric encryption is the inherent problem of key transmission. Because the same key is used for both ciphering and deciphering, if the key reaches unauthorized malicious users, those third parties can read any intercepted data. Once a malicious user has the key, the security of the whole system is in question. So, for symmetric encryption to work, the sender and the intended recipient must both know the key and keep it secure; if anyone else obtains it, they can decrypt the data and use it however they like, and the encryption loses its purpose. A few considerations determine the strength of an encryption key, chiefly its length, the randomness with which it is generated, and how securely it is stored and distributed. Symmetric encryption has use cases across many industrial verticals, but the sectors with the greatest need for data safety are banking, payments, and other businesses that handle cardholder data. These industries have a specific security standard, PCI DSS. The Payment Card Industry Data Security Standard is a set of 12 basic requirements that businesses and organizations in the domain must adhere to. Within PCI compliance, symmetric encryption is a vital component and relates directly to protecting cardholder data at rest. Data at rest is simply data sitting idle on a server or device; being idle means it isn’t being transacted across a network over the internet. The following are some common products and services that incorporate symmetric encryption to secure data and backups: Microsoft Azure: the leading cloud computing platform uses symmetric encryption to cipher and decipher large data sets very quickly. Salesforce: the cloud-based customer relationship management provider uses the Advanced Encryption Standard with 256-bit keys (AES-256) to secure data at rest. G Suite: many of Google’s G Suite services use in-transit encryption via HTTPS to secure your data. CodeGuard: a website backup tool that helps recover data after a failure or complete collapse; it also uses AES-256 encryption to secure those backups. Encryption of any active session is likewise done with symmetric encryption and is an integral part of website security.
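To make the AES discussion above concrete, here is a minimal sketch of symmetric encryption in Python using the third-party cryptography package (assumed to be installed with pip install cryptography). The same key both encrypts and decrypts, which is exactly the property, and the key-distribution challenge, described in this article; the sample plaintext is purely illustrative.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a single secret key (AES-256). Both parties must share this key
# securely, which is the key-distribution problem discussed above.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# Encrypt: a fresh 96-bit nonce is required for every message.
nonce = os.urandom(12)
plaintext = b"cardholder data at rest"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decrypt with the *same* key (and nonce). Anyone holding the key can do this,
# which is why key management matters as much as the algorithm itself.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```

The nonce can be stored alongside the ciphertext; only the key itself must stay secret.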
We found a new piece of mobile malware, Android/Trojan.Pawost, that is using Google Talk to make malicious calls. As soon as the malicious app is opened, a blank Google Talk icon pops up in the notifications of the mobile device. Wait a couple of minutes, and all of a sudden your mobile device will make an unwarranted outgoing call to a number with an area code of 259. The area code 259 is unassigned to any region in the United States and considered invalid. It is also an unassigned area code in China, the country from which Pawost originates. According to computerhope.com, an incoming call from an unassigned area code means the phone number was likely caller ID spoofed, a trick often used by telemarketers and scammers to hide the originating phone number. An outgoing call to an unassigned phone number is a little more unusual. When the outgoing phone call is placed, Pawost puts the mobile device into a partial wake lock; the CPU will still be running, but the screen and keyboard backlight are turned off. This hides the presence of the outgoing call being made. As long as the malicious app is running, it will continue to make calls until you force the app to stop or uninstall it. The Google Talk notification won’t go away until this is done as well. On top of making malicious outgoing calls, Pawost also gathers personal information such as the IMSI, IMEI, phone number, CCID (used to operate USB-connected credit card readers), phone version, other apps installed on the device, and more. Once the information is gathered, Pawost encrypts it using its own special algorithm before sending it off to a remote site. Other capabilities of Pawost include sending SMS messages and blocking incoming SMS messages, although these behaviors were not observed during research. The whole time, Pawost masquerades as a simple stopwatch app. While researching Pawost, I used an Android emulator, which does not have the capability of making outgoing calls. To see if I could figure out who or what was on the other end, I used Google Voice to call the offending phone numbers. I used both the country code for the United States (+1) and the country code for China (+86); as mentioned earlier, China is the originating country of the malware. What I found was that many of the phone numbers were invalid using +1 but worked with +86. This leads me to believe the malware is specific to Chinese users. Even though the phone numbers worked with +86, I still only got a busy line with every number I tried. Although it is not clear who or what is being called, the thought of your mobile device calling anyone without your permission is pretty scary. Uninstalling the malicious app will fix the issue, but it may be a challenge to find the offending app on your device, especially if you have a long list of apps in your downloaded apps list. Using a free mobile malware scanner such as Malwarebytes Anti-Malware Mobile will make this process a lot easier. When installing any app, always be aware of the permissions being granted before accepting the install. In this case, a stopwatch app shouldn’t have a long list of permissions like calling, receiving/sending SMS messages, and other permissions way out of scope of its functionality.
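If you want to see what an installed app is actually allowed to do before trusting it, the permissions declared in its manifest can be listed over ADB. The sketch below is illustrative only: com.example.stopwatch is a made-up package name (not the real Pawost package), and the exact dumpsys output format can vary between Android versions.

```python
import subprocess

# Hypothetical package name; substitute the app you want to inspect.
PACKAGE = "com.example.stopwatch"

# 'adb shell dumpsys package <pkg>' prints, among other sections, the
# permissions the app requested in its manifest.
output = subprocess.run(
    ["adb", "shell", "dumpsys", "package", PACKAGE],
    capture_output=True, text=True, check=True,
).stdout

in_permissions = False
for line in output.splitlines():
    stripped = line.strip()
    if stripped.startswith("requested permissions:"):
        in_permissions = True
        continue
    if in_permissions:
        # Permission entries are indented names; stop at the next section header.
        if stripped.startswith(("android.permission.", "com.")):
            print(stripped)
        else:
            break

# A stopwatch requesting CALL_PHONE, SEND_SMS or RECEIVE_SMS would be a red flag.
```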
Frequently Asked Questions about Acronis True Image (formerly Acronis Cyber Protect Home Office)

What is cryptojacking?
With cryptojacking, online criminals use malware to secretly use the computing resources of your system to mine cryptocurrency, which requires tremendous processing power to calculate exceptionally complex digital equations, called hashes. While the malware does not steal your data, it robs you of considerable system resources, slowing your computer’s performance and significantly increasing your energy use. Sometimes, cryptocurrency mining malware is injected into your system, piggybacking on apps or running in the background hoping to go unnoticed. Other times, the malware attacks via your web browser when you go to an infected website and runs as long as you are connected to that site.

What is cryptomining malware?
Cryptomining malware secretly uses people’s devices, like computers, smartphones, tablets and even servers, to perform complex mathematical calculations to mine cryptocurrencies. It is basically stealing computing resources to mine a specific cryptocurrency.

How does cryptojacking malware work?
Hackers mine cryptocurrencies with your computer’s resources. This can happen after clicking on a malicious link you received through an email or on a malicious website.

How to detect cryptomining viruses on my computer?
Cryptomining demands a lot of processing power, and your CPU will be asked to work overtime. If you want to test a PC for mining malware, open your system’s resource monitor (Task Manager for Windows, Activity Monitor for Macs) to see if the CPU use is unusually high. If you’ve closed all the apps on your system and the CPU is still in overdrive, or if CPU use spikes when you visit a specific website, then you may have a cryptominer at work.

What are the risks of cryptomining malware?
Your device is being used to mine cryptocurrency. You can identify whether you are a victim of cryptomining malware by checking for the following:
• High usage of your computer’s CPU.
• Your device is slower than normal.
• You’re experiencing a slower network.
• Power drains from your battery more quickly.

How does Acronis True Image detect cryptojacking attacks?
It automatically protects you against cryptojacking with its built-in anti-malware solution that protects your computer and your personal data from harm.

Can Acronis True Image prevent the installation of cryptomining malware?
Yes, as it provides real-time protection against malware and protects your device and your personal data.

Can any cryptocurrency mining cause malware, or is it only Bitcoin-miner malware?
It can be any cryptocurrency; therefore, you need protection to keep your device and your personal data protected.
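As a rough illustration of the "check your resource monitor" advice above, the snippet below (using the third-party psutil package, assumed installed) flags processes that keep the CPU unusually busy. Sustained high usage is only a hint, not proof of cryptojacking, and the 80% threshold here is an arbitrary example value.

```python
import time
import psutil

# Prime the per-process CPU counters, then sample again after a short interval.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(5)

suspects = []
for proc in psutil.process_iter(["pid", "name"]):
    try:
        usage = proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    if usage > 80:  # arbitrary threshold for "unusually high"
        suspects.append((proc.info["pid"], proc.info["name"], usage))

for pid, name, usage in sorted(suspects, key=lambda s: -s[2]):
    print(f"PID {pid:>6}  {name:<25} {usage:5.1f}% CPU")
```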
How To Do A Virus Scan Whether you think you might have a virus on your computer or devices, or just want to keep them running smoothly, it’s easy to do a virus scan. How to check for viruses depends on the software and device you have, so we’ll go through everything you need to know to run a scan effectively and keep your computers, phones and tablets in tip-top shape. Do You Need a Virus Scan? First, let’s cover a few of the telltale signs your device might have a virus. Is your computer or device acting sluggish or having a hard time booting up? Have you noticed missing files or a lack of storage space? Have you noticed emails or messages sent from your account that you did not write? Perhaps you’ve noticed changes to your browser homepage or settings? Or maybe, you’re seeing unexpected pop-up windows, or experiencing crashes and other program errors. These are all examples of signs that you may have a virus, but don’t get too worried yet, because many of these issues can be resolved with a virus scan. What Does a Virus Scan Do? Each antivirus program works a little differently, but in general the software will look for known malware that meets a specific set of characteristics. It may also look for variants of these known threats that have a similar code base. Some antivirus software even checks for suspicious behavior. If the software comes across a dangerous program or piece of code, it removes it. In some cases, a dangerous program can be replaced with a clean one from the manufacturer. How to Check for Viruses The process of checking for viruses depends on the device type and its operating system. Check out these tips to help you scan your computers, phones and tablets. On a Windows computer If you use Windows 10, go into “Settings” and look for the “Updates & Security” tab. From there you can locate a “Scan Now” button. Of course, many people have invested in more robust antivirus software that has a high accuracy rate and causes less drain on their system resources, such as McAfee Total Protection. To learn how to run a virus scan using your particular antivirus software, search the software’s help menu or look online for instructions. On a Mac computer Mac computers don’t have a built-in antivirus program, so you will have to download security software to do a virus scan. There are some free antivirus applications available online, but we recommend investing in trusted software that can protect you from a variety of threats. Downloading free software and free online virus scans can be risky, since cybercriminals know that this is a good way to spread malware. Whichever program you choose, follow their step-by-step instructions on how to perform a virus scan, either by searching under “help” or looking it up on their website. On smartphones and tablets Yes, you can get a virus on your phone or tablet, although they are less common than on computers. However, the wider category of mobile malware is on the rise and your device can get infected if you download a risky app, click on an attachment in a text message, visit a dangerous webpage, or connect to another device that has malware on it. Fortunately, you can protect your devices with mobile security software. It doesn’t usually come installed, so you will have to download an application and follow the instructions. Because the Android platform is an open operating system, there are a number of antivirus products for Android devices, that allows you to do a virus scan. 
Apple devices are a little different because they have a closed operating system that doesn’t allow third parties to see their code. Although Apple has taken other security precautions to reduce malware risks, such as only allowing the installation of apps from Apple’s official app store, these measures aren’t the same as an antivirus program. For more robust protection on your Apple devices, you can install mobile security software to protect the private data you have stored on your phone or tablet, such as contacts, photos and messages. If safeguarding all your computers and devices individually sounds overwhelming, you can opt for a comprehensive security product that protects computers, smartphones and tablets from a central control center, making virus prevention a breeze. Why are virus scans so important? New online threats emerge every day, putting our personal information, money and devices at risk. In the first quarter of 2019 alone, McAfee detected 504 new threats per minute, as cybercriminals employed new tactics. That’s why it is essential to stay ahead of these threats by using security software that is constantly monitoring and checking for new known threats, while safeguarding all of your sensitive information. Virus scans are an essential part of this process when it comes to identifying and removing dangerous code. How often should you run a virus scan? Most antivirus products are regularly scanning your computer or device in the background, so you will only need to start a manual scan if you notice something suspicious, like crashes or excessive pop-ups. You can also program regular scans on your schedule. Of course, the best protection is to avoid getting infected in the first place. Here are a few smart tips to sidestep viruses and other malware: - Learn how to surf safely so you can avoid risky websites, links and messages. This will go a long way in keeping you virus-free. - Never click on spammy emails or text messages. These include unsolicited advertisements and messages from people or companies you don’t know. - Keep the software on your computers and devices up to date. This way you are protected from known threats, such as viruses and other types of malware. - Invest in comprehensive security software that can protect all of your devices, such as McAfee LiveSafe. - Stay informed on the latest threats, so you know what to look out for. The more you know about the latest scams, the easier they will be to spot and avoid.
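For Windows users who prefer a script to clicking through Settings, the built-in Defender scan described earlier can also be started from the command line. The path below is the usual install location, but it can differ between Windows versions, so treat this as a sketch rather than a guaranteed recipe.

```python
import subprocess
from pathlib import Path

# Common location of the Microsoft Defender command-line utility.
MPCMDRUN = Path(r"C:\Program Files\Windows Defender\MpCmdRun.exe")

if MPCMDRUN.exists():
    # -ScanType 1 = quick scan, 2 = full scan, 3 = custom path scan.
    result = subprocess.run([str(MPCMDRUN), "-Scan", "-ScanType", "1"])
    print("Scan finished with exit code", result.returncode)
else:
    print("MpCmdRun.exe not found; check your Defender installation path.")
```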
In the fast-paced world of technology, edge computing is emerging as a transformative force, reshaping how data is processed and managed. By bringing computation and data storage closer to the source of data generation, edge computing promises to redefine the boundaries of what’s possible in IT. This blog explores how edge computing is revolutionizing the industry, its profound impact, and the exciting future it holds. Edge computing involves processing data near its source rather than relying on centralized cloud data centers. This approach minimizes latency, reduces bandwidth use, and enhances data security. It’s particularly crucial for applications requiring real-time decision-making, such as autonomous vehicles, industrial IoT, and remote monitoring. Cutting Through Latency: Traditional cloud computing can introduce delays due to the physical distance between data centers and end-users. Edge computing addresses this by performing computations locally, ensuring near-instantaneous responses. This is vital for applications like autonomous vehicles and real-time analytics, where split-second decisions are critical. Streamlining Data Flow: By processing and filtering data locally, edge computing reduces the amount of data that needs to travel over the network. This optimization alleviates network congestion, lowers transmission costs, and enhances overall efficiency. It’s a boon for environments with high data demand, such as video streaming and extensive IoT networks. Securing Sensitive Information: With data processed at the edge, sensitive information remains closer to its source, reducing exposure during transmission. This localized data handling minimizes the risk of breaches and enables organizations to implement stringent data security policies effectively. Transforming Connectivity: The surge in IoT devices has led to massive data generation. Edge computing supports this ecosystem by offering scalable, local data processing solutions. From smart cities to industrial automation, it ensures that data is processed efficiently, leading to smarter and more responsive systems. Edge computing is not just a technological advancement; it’s a game-changer with far-reaching implications. Its ability to reduce latency, enhance bandwidth efficiency, and improve data security translates into more responsive applications and services. Industries ranging from healthcare to automotive are witnessing significant improvements in operational efficiency and service quality. For businesses, it means faster decision-making, reduced operational costs, and a competitive edge in the market. It’s transforming how we interact with technology, making processes more agile and data management more secure. Looking ahead, edge computing is set to play a pivotal role in the evolution of technology. As more devices become interconnected and data volumes continue to grow, edge computing will become increasingly integral. The technology’s ability to handle data locally will drive innovations in various sectors, leading to smarter cities, enhanced industrial automation, and more personalized user experiences. Future developments will likely focus on further integrating edge computing with AI and machine learning, enabling even more sophisticated data processing and analysis at the edge. This will unlock new possibilities for automation, real-time analytics, and intelligent decision-making. Edge computing is poised to revolutionize the IT landscape, offering transformative benefits across various industries. 
Its ability to provide real-time processing, optimize bandwidth, enhance security, and support the burgeoning IoT ecosystem makes it a critical component of future technological advancements. At Metaorange Digital, we are at the forefront of this technological revolution, leveraging edge computing to drive innovation and deliver cutting-edge solutions. Interested in exploring how edge computing can enhance your operations and unlock new possibilities? Connect with us today to discover how we can help you harness the power of edge computing for your business. Vishal Rustagi is the Cofounder of Metaorange Digital and EVP at SalesSmyth. A Cloud Advocate and App Modernization & SAAS Expert, Vishal is also an Azure Certified Architect and Blockchain Architect. With a deep passion for technology and innovation, he brings a wealth of knowledge and expertise to the forefront of digital transformation.
Among the many threats that exist in the digital world, one particular form of cyber attack, malvertising, has emerged as a significant concern. This blog post is an overview of malvertising, providing a comprehensive understanding of what it is, how it works, and its various types. What is malvertising? Malvertising, a portmanteau of ‘malicious advertising,’ involves the use of online advertising to spread malware. Online advertisements serve as a conduit for cybercriminals to distribute malicious software without the user’s knowledge. This nefarious activity takes advantage of the trust between website owners and their visitors, and exploits the widespread reach of advertising networks. How does malvertising work? The process of malvertising begins when an attacker purchases ad space on a legitimate website. The advertisement, which is embedded with malicious code, is then served to unsuspecting users. Upon clicking the ad or even just loading the webpage, the malicious code gets executed, leading to the installation of malware on the user’s device. This can result in a variety of harmful outcomes, from data breaches to system damage. Malvertising vs. ad malware While both malvertising and ad malware involve the use of advertisements as a medium for spreading malware, there are distinct differences between the two. Ad malware refers to any type of malware that comes bundled with a software or application and gets installed when the user installs that software or application. On the other hand, malvertising does not require any action from the user apart from visiting the infected webpage. Types of malvertising A drive-by download is a type of malvertising that involves embedding malicious code within an ad. When an unsuspecting user visits the webpage hosting the ad, the code gets automatically downloaded and executed, infecting the user’s device. In a second type, the malvertisement redirects the user to a malicious webpage upon clicking the ad. The malicious webpage can then exploit vulnerabilities on the user’s device to install malware. A third, stealthier form of malvertising opens a new browser window behind the current one. The user usually does not notice this until they close their browser, giving ample time for the malicious code to execute. In a fourth variant, certain keywords within the text of a webpage are hyperlinked to pop-up ads. When the mouse hovers over these keywords, the pop-up ad appears, and if clicked, it leads to a malicious website. Malvertising: A serious threat in the digital world Malvertising is a potent cybersecurity threat that capitalizes on the ubiquity and reach of online advertising. It poses significant risks to data security and system integrity, necessitating robust protective measures. By exploring its workings and different types, users and website owners can better safeguard themselves against this insidious form of cyber attack.
Is it true that a compass will initially indicate a turn to the east and then lag behind until the aircraft is headed west? a. True b. False The notion that a compass will initially indicate a turn to the east and then lag behind the actual heading until your airplane is headed west is false. A compass needle aligns itself and points towards the Earth's magnetic North Pole. If your airplane is heading west, the compass should indicate west, not east. Navigational errors could occur due to factors like magnetic deviation and variation but not in the way described in the question. Consider a situation where a pilot wants to compensate wind velocity. If the wind is pushing the airplane off its course, the airplane can seem as if it's lagging behind from an earth-bound observer's perspective, but the 'lag' doesn't come from the compass. The airplane, once it has established compensation for the wind, should still appear to be heading directly towards its aim point to an observer on the ground. The belief that a compass will indicate a turn to the east before eventually aligning with the west is a common misconception. In reality, a compass needle aligns itself with the Earth's magnetic field, specifically pointing towards the magnetic North Pole. Therefore, if an airplane is heading west, the compass should correctly indicate west as well. Errors in navigation can occur due to various factors such as magnetic deviation and variation. These factors can lead to inaccuracies in the compass reading, but they do not cause the compass to exhibit the behavior described in the question. It's important to understand the principles of navigational instruments and how they function to ensure accurate and safe navigation during flights. Pilots rely on these instruments to navigate through various weather conditions and ensure they reach their destination efficiently. For further insights on navigational instruments and their operation, you may refer to additional resources on the topic.
We’ve talked a lot about cloud computing in the past couple of years. It continues to become more prevalent in our daily lives. But there’s a new term being added to the mix when it comes to the cloud: “fog computing”. Here’s a quick rundown of just what that term means. Where the cloud refers to our data being stored elsewhere, fog represents the software technologies, processes, and apps created to operate via the cloud but deployed on hardware closer to the edge of the network. It’s bringing the cloud “down to earth”. TechTarget wrote a great piece on fog computing: Fog computing, also known as fog networking or fogging, is a decentralized computing infrastructure in which data, compute, storage and applications are distributed in the most logical, efficient place between the data source and the cloud. Fog computing essentially extends cloud computing and services to the edge of the network, bringing the advantages and power of the cloud closer to where data is created and acted upon. How fog computing works While edge devices and sensors are where data is generated and collected, they don’t have the compute and storage resources to perform advanced analytics and machine-learning tasks. Though cloud servers have the power to do these, they are often too far away to process the data and respond in a timely manner. In addition, having all endpoints connecting to and sending raw data to the cloud over the internet can have privacy, security and legal implications, especially when dealing with sensitive data subject to regulations in different countries. In a fog environment, the processing takes place in a data hub on a smart device, or in a smart router or gateway, thus reducing the amount of data sent to the cloud. It is important to note that fog networking complements, not replaces, cloud computing; fogging allows for short-term analytics at the edge, and the cloud performs resource-intensive, longer-term analytics. Fog computing is going to be used to power great innovations like autonomous cars and smart cities. It is the next level in cloud computing. If you have yet to make the transition to the cloud but are considering it, contact us. We can help you decide the ideal deployment of the cloud for your business.
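To make the idea concrete, imagine a fog gateway sitting next to a cluster of temperature sensors: it processes readings locally, reacts immediately to spikes, and forwards only a compact summary to the cloud. The sketch below is purely illustrative; the sensor feed and the cloud upload function are stand-ins, not real APIs.

```python
import random
import statistics
import time

def read_sensor() -> float:
    """Stand-in for a real local sensor read."""
    return 20.0 + random.random() * 5.0

def send_to_cloud(summary: dict) -> None:
    """Stand-in for an upload to a cloud service (e.g., an HTTPS POST)."""
    print("uploading summary:", summary)

readings = []
for _ in range(60):                 # one minute of one-second samples
    value = read_sensor()
    readings.append(value)
    if value > 24.5:                # act locally, with no cloud round trip
        print("local alert: temperature spike", round(value, 2))
    time.sleep(0.01)                # interval shortened for the example

# Only the aggregate leaves the edge: 60 raw samples become one record.
send_to_cloud({
    "count": len(readings),
    "mean": round(statistics.mean(readings), 2),
    "max": round(max(readings), 2),
})
```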
The Impact of AI Image Detection on Privacy and Security. With the advancements in AI image detection technology, it is now an important part of many applications such as security surveillance and social media. Artificial intelligence is employed to parse visual data and identify objects, faces or behaviors. AI image identification presents pros and cons as it aids in tightening security measures while streamlining ...

From Night Vision to Thermal Imaging – Next-Level Optics Technologies. It's pitch-black outside, the kind of darkness where you can barely see your hand in front of your face. But with a simple gadget, you can suddenly see everything around you as if it were daylight, or maybe even better. These technologies are not just for the elite either; they're becoming more accessible to everyone. From spotting wildlife in their natural habitat under the ...

How to use MidJourney – Text to Image generation using AI. MidJourney is an AI image generation tool that runs on Discord, allowing users to create images from text-based prompts. The AI engine behind MidJourney understands the prompts and uses the diffusion model to create images from scratch. To use MidJourney, you need to create an account on Discord and MidJourney and write text prompts in the chat box to generate images as per ...

How Image Labeling Services Can Empower Computer Vision. A digital world is reliant on image labeling services. There is no comparison between the ability of machine eyes and that of humans to distinguish objects. A machine learning application's visual perception can be improved by image labeling for machine learning. A functional computer vision application requires image labeling. Nearly every global industry has benefited ...

What is Generative AI, and How Will It Disrupt Society? The concept of generative artificial intelligence (GAI) poses a groundbreaking question that has until recently not been contemplated: at what stage does the relationship between humans and machines evolve from its present-day form into one that is so fundamentally changed that we can no longer regard one as being superior to the other when it comes to creative terms? Humanity ...
While often neglected, an organization’s own personnel pose one of the biggest threats to its security. Human error or negligence is usually the leading cause of data breaches in an organization; according to research, almost 60% of data breaches are related to human error. However, malicious insiders pose a much more serious threat, as they have both the access and the intent to do harm, and they are typically responsible for providing initial access or the means to exfiltrate data. Compared to external threats like hackers, internal threats can have far more catastrophic repercussions, with the annualized average cost to recover from an insider attack estimated at $15.4 million USD. In this article, we will introduce the terminology of insider threats, explain what types of insider threats exist, and show how you can address them to minimize the risk of a serious data breach.

What is an insider threat?
To understand what an insider threat is, we first need to know who or what an “insider” is. We borrowed the definition from CISA: an insider is any person who has or had authorized access to or knowledge of an organization’s resources, including personnel, facilities, information, equipment, networks, and systems. An insider threat is the potential for an insider to use their authorized access or understanding of an organization to harm that organization. However, it is important to understand that not all insider threats involve malicious intent. Although all insiders have the same capability to do harm, some act with intent while others cause harm through human error or negligence (we explain the types of insider threats below). Even though the latter are not malicious by intent, they statistically cause most data breaches. Note that different organizations may define the term insider threat differently; the way we describe it here is how we define it in Conscia Cyberdefense.

Types of insider threats
We classify insider threats based on their intent: insider threats are either intentional or unintentional, and that is the major difference between the two types. We also refer to intentional insider threats as malicious insiders.
Turncloaks – malicious insiders who act with the aim of harming an organization for personal or financial gain, or as revenge on the company.
Collaborators – malicious insiders who work with third parties (such as company competitors) or external threat actors to steal sensitive data. These insiders are the hardest to identify.
Unintentional insider threats
Negligence – this type of unintentional insider threat is usually familiar with security and/or IT policies but chooses to ignore them (for example, allowing someone to piggyback through a secure entrance point).
Accidental – more commonly referred to as human error. These incidents can be minimized, but we cannot completely prevent them. One example would be opening an e-mail attachment that contains a virus.

How to detect insider threats
There are two distinct ways to detect insider threats: one leverages people, and the other relies on technology.
People as sensors: since insider threats involve personnel who have some access to the organization’s assets, we can use other personnel to detect any suspicious behaviour. This type of detection is more common in identifying malicious insiders: people can hear or see someone plotting something that could lead to malicious action and report it to supervisors.
Technology can also prove helpful in detecting potential insider threats. Several different technologies can be used.
UEBA (User and Entity Behaviour Analytics): UEBA, and technologies with similar capabilities, collect user-related network events over time and use them to establish a baseline of normal behaviour for a given persona. UEBA is particularly good at catching malicious insiders while they are still unfolding their plan: any anomalous event for that persona will trigger an alarm for investigation. Such anomalies may include:
- Downloading and copying files containing sensitive information
- Accessing data that the user does not require for their duties
- Traversing a large number of files in a short period of time
SIEM or SIEM-like log management tools: such tools are also effective in identifying suspicious user behavior, and they can detect unintentional insider threats as well. They rely more on indicators than on anomalous behavior. Some example indicators would be:
- Multiple login attempts to systems the user should not have access to
- Attaching USB drives with suspicious content
- Interacting with phishing e-mails
PAM (Privileged Access Management): Privileged Access Management refers to the set of security policies and procedures that protect and monitor the use of privileged accounts within an organization. It gives security administrators a platform to monitor and control user activity related to these privileged accounts, which enables the detection of privilege escalation attempts or efforts to compromise credentials.

How to prevent insider threats
While it is not possible to completely prevent insider threats, combining preventive measures with detective controls will let your organization minimize the risk of them occurring. There are several ways to prevent insider threats; no list is comprehensive, but such measures significantly help in reducing the risk of an insider data breach.
Insider threats will remain the most prevalent threat to any organization, because they also include unintentional threats (such as human error, when people fall for a phishing attempt), and exactly this is the most common initial access for threat actors. Nowadays, it is much more costly for threat actors to breach an environment from the outside by brute forcing or exploiting vulnerabilities in internet-facing appliances; moreover, such attempts (with adequate security monitoring) are very noisy and easily detectable. Unfortunately, the cheapest and most effective way in is to exploit human vulnerabilities. Malicious insiders also exist, and collaborating with external threat actors makes them quite stealthy and hard to expose, while the damage is just as significant. Combining detective and preventive capabilities is the right way to mitigate these risks. That is why at Conscia Cyberdefense we employ both with our customers: we focus mostly on the detective part and provide recommendations on improvements customers can make on the preventive side. We also offer services to harden customers’ environments after assessing them. Our principle is not to just ‘plug in the tech and start monitoring’, but rather to cooperate with our customers to learn about them (processes, architecture, ...) and implement appropriate procedures tailored to that specific customer through our MDR (Managed Detection and Response) service.
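As a toy illustration of the UEBA-style baselining described above, the sketch below builds a per-user baseline of daily file accesses and flags days that deviate sharply from it. Real platforms model far richer behaviour across many signals, so treat this only as a conceptual example with made-up numbers.

```python
from statistics import mean, stdev

# Hypothetical history: files accessed per day by one user (the baseline).
history = [34, 41, 38, 29, 45, 40, 36, 31, 44, 37]

baseline_mean = mean(history)
baseline_std = stdev(history)

def is_anomalous(todays_count: int, threshold: float = 3.0) -> bool:
    """Flag a day whose activity is more than `threshold` standard
    deviations above this user's own baseline."""
    if baseline_std == 0:
        return todays_count > baseline_mean
    z_score = (todays_count - baseline_mean) / baseline_std
    return z_score > threshold

# Traversing 600 files in one day stands out against this baseline.
print(is_anomalous(38))   # False: a normal day
print(is_anomalous(600))  # True: worth investigating
```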
If you think you are not addressing the risks of insider threats enough, feel free to reach out to us and we will help you.
How often have you ended a call with your accountant feeling more confused than before it started? If your response is a variation of “pretty much every time,” have no fear. We’ve compiled this handy list of 42 basic accounting terms, along with their common abbreviations (where appropriate) and definitions. Balance Sheet Terms The Balance Sheet is one of the two most common financial statements produced by accountants. This section pertains to potentially confusing basic accounting terms that relate to the balance sheet. 1. Accounts Payable (AP) Accounts Payable include all of the expenses that a business has incurred but has not yet paid. This account is recorded as a liability on the Balance Sheet as it is a debt owed by the company. 2. Accounts Receivable (AR) Accounts Receivable include all of the revenue (sales) that a company has provided but has not yet collected payment on. This account is on the Balance Sheet, recorded as an asset that will likely convert to cash in the short term. 3. Accrued Expense An expense that has been incurred but hasn’t yet been paid is described by the term Accrued Expense. 4. Asset (A) Anything the company owns that has monetary value. These are listed in order of liquidity, from cash (the most liquid) to land (least liquid). 5. Balance Sheet (BS) A financial statement that reports on all of a company’s assets, liabilities, and equity. As suggested by its name, a balance sheet abides by the equation <Assets = Liabilities + Equity>. 6. Book Value (BV) As an asset is depreciated, it loses value. The Book Value shows the original value of an Asset, less any accumulated Depreciation. 7. Equity (E) Equity denotes the value left over after liabilities have been removed. Recall the equation Assets = Liabilities + Equity. If you take your Assets and subtract your Liabilities, you are left with Equity, which is the portion of the company that is owned by the investors and owners. 8. Inventory Inventory is the term used to classify the assets that a company has purchased to sell to its customers but that remain unsold. As these items are sold to customers, the inventory account will lower. 9. Liability (L) All debts that a company has yet to pay are referred to as Liabilities. Common liabilities include Accounts Payable, Payroll, and Loans. Income Statement Terms The Income Statement, a.k.a. Profit and Loss Statement or P&L, is the second of the two common financial statements. These are the most common basic accounting terms used in reference to this reporting tool. 10. Cost of Goods Sold (COGS) Cost of Goods Sold are the expenses that directly relate to the creation of a product or service. Not included in this category are those costs that are needed to run the business. An example of COGS would be the cost of Materials, or the Direct Labor to provide a service. 11. Depreciation (Dep) Depreciation is the term that accounts for the loss of value in an asset over time. Generally, an asset has to have substantial value in order to warrant depreciating it. Common assets to be depreciated are automobiles and equipment. Depreciation appears on the Income Statement as an expense and is often categorized as a “Non-Cash Expense” since it doesn’t have a direct impact on a company’s cash position. 12. Expense (Cost) An Expense is any cost incurred by the business. Learn more about personal vs. business expenses here. 13. Gross Margin (GM) Gross Margin is a percentage calculated by taking Gross Profit and dividing by Revenue for the same period.
It represents the profitability of a company after deducting the Cost of Goods Sold. 14. Gross Profit (GP) Gross Profit indicates the profitability of a company in dollars, without taking overhead expenses into account. It is calculated by subtracting the Cost of Goods Sold from Revenue for the same period. 15. Income Statement (Profit and Loss) (IS or P&L) The Income Statement (often referred to as a Profit and Loss, or P&L) is the financial statement that shows the revenues, expenses, and profits over a given time period. Revenue earned is shown at the top of the report and various costs (expenses) are subtracted from it until all costs are accounted for; the result being Net Income. 16. Net Income (NI) Net Income is the dollar amount that is earned in profits. It is calculated by taking Revenue and subtracting all of the Expenses in a given period, including COGS, Overhead, Depreciation, and Taxes. 17. Net Margin Net Margin is the percent amount that illustrates the profit of a company in relation to its Revenue. It is calculated by taking Net Income and dividing it by Revenue for a given period. 18. Revenue (Sales) (Rev) Revenue is any money earned by the business. Of course, there are those basic accounting terms that don’t pertain to a particular financial statement. For those, we’ve reserved the “general” category. 19. Accounting Period An Accounting Period is designated in all Financial Statements (Income Statement, Balance Sheet, and Statement of Cash Flows). The period communicates the span of time that is reported in the statements. 20. Allocation The term Allocation describes the procedure of assigning funds to various accounts or periods. For example, a cost can be Allocated over multiple months (like in the case of insurance) or Allocated over multiple departments (as is often done with administrative costs for companies with multiple divisions). 21. Business (or Legal) Entity This is the legal structure, or type, of a business. Common company formations include Sole Proprietor, Partnership, Limited Liability Corp (LLC), S-Corp and C-Corp. Each entity has a unique set of requirements, laws, and tax implications. 22. Cash Flow (CF) Cash Flow is the term that describes the inflow and outflow of cash in a business. The Net Cash Flow for a period of time is found by taking the Ending Cash Balance and subtracting the Beginning Cash Balance. A positive number indicates that more cash flowed into the business than out, where a negative number indicates the opposite. 23. Certified Public Accountant (CPA) CPA is a professional designation that an accountant can earn by passing the CPA exam and fulfilling the requirements for both education and work experience, which vary by state. An accountant is different from a bookkeeper. 24. Credit A credit is an increase in a liability or equity account, or a decrease in an asset or expense account. 25. Debit A debit is an increase in an asset or expense account, or a decrease in a liability or equity account. 26. Diversification Diversification is a method of reducing risk. The goal is to allocate capital across a multitude of assets so that the performance of any one asset doesn’t dictate the performance of the total. 27. Enrolled Agent (EA) An Enrolled Agent is a professional accounting designation assigned to professionals who have successfully passed tests showcasing expertise in business and personal taxes. Enrolled Agents are generally sought out to complete business tax filings to ensure compliance with the IRS. 28.
Fixed Cost (FC) A Fixed Cost is one that does not change with the volume of sales. For example, rent and salaries won’t change if a company sells more. The opposite of a Fixed Cost is a Variable Cost. 29. General Ledger (GL) A General Ledger is the complete record of a company’s financial transactions. The GL is used in order to prepare all of the Financial Statements. 30. Generally Accepted Accounting Principles (GAAP) These are the rules that all accountants abide by when performing the act of accounting. These general rules were established so that it is easier to compare ‘apples to apples’ when looking at a business’s financial reports. 31. Interest Interest is the amount paid on a loan or line of credit that exceeds the repayment of the principal balance. 32. Journal Entry (JE) Journal Entries are how updates and changes are made to a company’s books. Every Journal Entry must consist of a unique identifier (to record the entry), a date, a debit/credit, an amount, and an account code (that determines which account is altered). 33. Liquidity A term referencing how quickly something can be converted into cash. For example, stocks are more liquid than a house since you can sell stocks (turning them into cash) more quickly than real estate. 34. Material Material is the term that refers to whether information influences decisions. For example, if a company has revenue in the millions of dollars, an amount of $0.50 is hardly material. GAAP requires that all Material considerations must be disclosed. 35. On Credit/On Account A purchase that happens On Credit or On Account is a purchase that will be paid at a future time, but the buyer gets to enjoy the benefit of that purchase immediately. “Bartender, put it on my tab…” 36. Overhead Overhead consists of those Expenses that relate to running the business. It does not include Expenses that make the product or deliver the service. For example, Overhead often includes Rent and Executive Salaries. 37. Payroll Payroll is the account that shows payments of employee salaries, wages, bonuses, and deductions. Often this will appear on the Balance Sheet as a Liability that the company owes if there is accrued vacation pay or any unpaid wages. 38. Present Value (PV) Present Value is a term that refers to the value of an Asset today, as opposed to a different point in time. It is based on the theory that cash today is more valuable than cash tomorrow, due to the concept of inflation. 39. Receipt A Receipt is a document that proves payment was made. A business produces receipts when it provides its product or service and it receives receipts when it pays for goods and services from other businesses. Received receipts should be saved according to IRS receipts requirements, and catalogued so that a company can prove that its incurred expenses are accurate. 40. Return on Investment (ROI) Originally, this term referred to the profit that a company was making (Return), divided by the Investment required. Today, the term is used more loosely to include returns on various projects and objectives. For example, if a company spent $1,000 on marketing, which produced $2,000 in profit, the company could state that its ROI on marketing spend is 200%. 41. Trial Balance (TB) Trial Balance is a listing of all accounts in the General Ledger with their balance amount (either debit or credit). The total debits must equal the total credits, hence the balance. 42. Variable Cost (VC) These are costs that change with the volume of sales and are the opposite of Fixed Costs.
Variable costs increase with more sales because they are an expense that is incurred in order to deliver the sale. For example, if a company produces a product and sells more of that product, they will require more raw materials in order to meet the increase in demand.
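The margin and return formulas defined above are simple enough to script. The sketch below uses made-up figures purely to show the arithmetic behind Gross Margin, Net Margin, and ROI as they are defined in this list.

```python
# Illustrative figures only (in dollars).
revenue = 250_000
cogs = 150_000
overhead = 60_000

gross_profit = revenue - cogs           # term 14
gross_margin = gross_profit / revenue   # term 13
net_income = gross_profit - overhead    # simplified term 16 (ignores depreciation and taxes)
net_margin = net_income / revenue       # term 17

print(f"Gross Profit: ${gross_profit:,}")
print(f"Gross Margin: {gross_margin:.1%}")
print(f"Net Income:   ${net_income:,}")
print(f"Net Margin:   {net_margin:.1%}")

# Return on Investment (term 40): return divided by the investment required.
marketing_spend = 1_000
profit_from_marketing = 2_000
roi = profit_from_marketing / marketing_spend
print(f"Marketing ROI: {roi:.0%}")  # 200% under this definition
```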
Fuzzing is an automated black-box testing method used to uncover bugs and implementation flaws in software. It is not simply a tool for exploiting a specific application; rather, you can use it to learn about a program and to uncover potential crashes caused by poor coding practices. Fuzzing is most effective for identifying vulnerabilities such as buffer overflows, denial of service (DoS), cross-site scripting (XSS), and SQL injection. Brief History of Fuzzing. Fuzz testing was developed at the University of Wisconsin-Madison in 1989 by Professor Barton Miller and his students. Miller noticed considerable signal interference that led to a system crash while he was logging into a UNIX system over a dial-up network during a storm. In response, he instructed his students to mimic his experience by flooding UNIX systems with noise from a fuzz generator to determine whether they would also crash. You can still find their continued work online; it was basically focused on command-line and UI fuzzing, and it shows that modern systems are vulnerable to even simple fuzzing. Two Types of Fuzzing There are two primary types of fuzzing: coverage-guided and behavioral. Coverage-Guided Fuzzing: Coverage-guided fuzzing focuses on probing the source code with random input while the application is running to reveal any bugs. The primary objective is to get the application to crash, as this indicates a possible problem. The coverage-guided fuzz testing process continually generates new tests, and the data obtained enables a tester to reproduce the crash, making it easier to identify potentially problematic code. Behavioral Fuzzing: Behavioral fuzzing functions differently. It uses specifications to demonstrate how an application should work and uses random inputs to evaluate how the application actually functions. Typically, any variance between what is expected and what is observed represents potential bugs or security risks. Advantages of fuzzing over other security testing methods Fuzzing is a testing methodology that differs from traditional software testing approaches such as SAST, DAST, or IAST. Rather than systematically analyzing the code or the environment in which the code runs, fuzzing involves repeatedly “pinging” the code with random or unexpected inputs to identify faults or vulnerabilities that may not be immediately apparent. The goal of fuzz testing is to intentionally trigger errors or crashes in the code, in order to identify and resolve weaknesses that might be exploitable by attackers. Benefits of Fuzzing Fuzzing offers several benefits for improving software quality and security. Finding bugs: Fuzzing can give a comprehensive view of the quality of the targeted system and software. It can help identify hard-to-find bugs, including those that are not detectable by traditional testing methods. By generating random inputs and testing the application with unexpected and invalid data, fuzzing can uncover errors that might otherwise go unnoticed. Improving security: Fuzzing can identify security vulnerabilities that could be exploited by attackers. By detecting buffer overflows, DoS, XSS, SQL injection, and other vulnerabilities, fuzzing can help developers identify and address these security issues before they can be exploited. Cost-effective testing: Fuzzing is an automated testing technique that can be performed with minimal human intervention.
This makes it a cost-effective way to find bugs and security issues that might be difficult or time-consuming to detect using manual testing methods. Improved testing coverage: Fuzzing can provide better testing coverage than traditional testing methods, which often rely on predetermined test cases. By generating a wide range of inputs, fuzzing can help test the application under different conditions and identify issues that might not be found using traditional testing methods. Identifying performance issues: Fuzzing can help identify performance issues that could impact the usability and reliability of an application. By testing the application under different conditions, fuzzing can identify areas where the application may be slow or unresponsive, allowing developers to optimize its performance. Notable vulnerabilities uncovered by fuzzing include the following. 1. Exposure of Sensitive Information in Microsoft Windows Michal Zalewski, the director of Information Security Engineering at Google, reported a vulnerability where memory initialization for TIFF images was not correctly performed in Windows Server and Windows 7/8. This flaw enabled remote attackers to obtain sensitive information from process memory. 2. Out-of-Bounds Read in the Intrusion Detection System Suricata Sirko Höer, an IT Security Consultant at Code Intelligence GmbH, discovered a bug in Suricata where a function attempted to access an unallocated memory area when multiple IPv4 packets with invalid IPv4Options were sent. This led to the software attempting to read data beyond the intended buffer, which could result in a crash or unauthorized access to sensitive information in other memory locations. 3. Memory Corruption in Adobe Reader This issue was exposed by Trend Micro's Zero Day Initiative. Memory corruption is a situation where an application performs operations on a memory buffer but is able to read from or write to a memory location beyond the intended boundary of the buffer. This can lead to unexpected behavior, crashes, or security vulnerabilities. Challenges and Limitations of Fuzzing Practitioners who implement fuzz testing may face two main challenges: setup and data analysis. Setting up fuzz testing can be difficult and complex, as it requires the creation of testing "harnesses," which can be even more challenging to develop if the fuzz testing is not integrated into an existing toolchain. Additionally, fuzz testing can produce a large amount of data, including potential false positives, so it is essential for testing teams to be prepared to handle the high volume of information. Another significant challenge is that fuzz testing is harder to document, and negative attitudes toward its "vague" nature still persist in the QA community. How Does the Fuzzing Workflow Work? Fuzzing is an old mechanism, developed at the University of Wisconsin in 1989, for detecting potential implementation weaknesses. To do this, a fuzzer injects semi-random data into a program to detect bugs or potential crashes. The process involves several steps: - Identify target systems - Identify inputs - Generate fuzzed data - Execute fuzzed data - Monitor system behavior - Log defects First, the target system is identified in order to select a suitable fuzzer, the target input, and the right character set for generating the final test payload. Once the target input is identified, the payload is generated. Inputs of various sizes may contain data of different sorts, including strings, digits, characters, and combinations of them.
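To make the generate-and-execute steps concrete, here is a minimal sketch of a mutation-style fuzzer in Python. It only illustrates the workflow described above and is not a production fuzzer; the target program path and the seed input are hypothetical placeholders you would replace with your own.

```python
import random
import subprocess

SEED = b"A plausible, well-formed sample input goes here"  # hypothetical seed input
TARGET = ["./target_app", "--parse"]                       # hypothetical target program

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Generate a fuzzed payload by overwriting a few random bytes of the seed."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run_case(payload: bytes) -> int:
    """Execute the target with the fuzzed payload on stdin and return its exit code."""
    proc = subprocess.run(TARGET, input=payload, capture_output=True, timeout=5)
    return proc.returncode

if __name__ == "__main__":
    for i in range(1000):                      # number of test cases to generate
        case = mutate(SEED)
        try:
            code = run_case(case)
        except subprocess.TimeoutExpired:      # a hang may indicate a DoS issue
            code = -1
        if code < 0:                           # negative code: killed by a signal (crash)
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(case)                  # log the defect so the crash can be reproduced
```

Saving each crashing payload mirrors the "monitor system behavior" and "log defects" steps: the stored file lets a tester reproduce the crash later.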
The payloads are then executed by launching the fuzzer under the appropriate conditions. Another important part of the process is monitoring the system's behavior and logging defects. Here, using an offline approach, we can assess the test's outcomes, and during this stage you can find potential flaws, bugs, or crashes. Finally, the results can be used to craft a specially designed payload. How can InfoSec4TC help? Do you want to know how vulnerable your systems are AND how to simply remove the risks? Let InfoSec4TC guide you to find the vulnerabilities with our advanced cybersecurity courses on fuzzing, and learn how to find bugs and allocate your security resources more efficiently. We help individuals kickstart their cybersecurity careers with the best courses on the market.
<urn:uuid:046c3e58-b004-4bfc-809e-fc15f9e4fe13>
CC-MAIN-2024-38
https://www.infosec4tc.com/fuzzing-the-indispensable-tool-for-improving-cybersecurity/
2024-09-16T16:06:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00240.warc.gz
en
0.927221
1,526
3.59375
4
Who Made Me Regret It — As the author of the humorous children’s novel The Annoying Ghost Kid, I am occasionally invited to make an author visit to an elementary school. I’ve developed a 3-part program that includes an animated reading from my book; an interactive lesson in how the creative process of writing a story works; and, since the book is about a ghost who bullies 2 living kids, I share some techniques for dealing with a bully. The anti-bullying techniques I share are simple and designed to be easy to remember even in the unexpected moment of a bully attack. There is one for verbal attacks and one for physical attacks, both of which are peaceful and positive, yet still enable the victims to stand up for themselves without escalating the aggression. Today, I’m writing about my technique for physical attacks. I begin by explaining that most bullies want an easy victim, someone who is going to cry and run away. They don’t want to bully someone who will fight back. Bullies are cowards at heart because they themselves are afraid. They don’t believe anyone likes them; they feel worthless; and think that if they bully someone it will make them feel important. I go on to say: if someone hits you, shoves you, trips you, or tries to hurt you in any other way, I want you to immediately face them, throw out the palm of your hand like a traffic cop signaling "Stop," and yell as loud as you can, "BACK OFF!" All while giving them a squinty-eyed stare which you hold for a long moment. Then continue on your way as if nothing happened. What this will do is make your bully think twice about attacking you again. It will make them think that you are tougher than they thought. You don’t have to believe it, you only have to make them believe it. Now I’d like to illustrate how well this works by sharing a story with you of a time that I bullied someone. You see, I was once guilty of bullying a snake, but it was an accident and I didn’t realize I was being a bully until he told me. It happened several years ago when I went with some friends to visit the Civil War battlefield at Kennesaw Mountain State Park near Atlanta, Georgia. We arrived at the park and started walking across a large grass covered field. Nearby was a bronze historical marker, so we headed toward it to read about the events that took place on the battleground spreading out before us. As we approached, we saw what looked like a coil of black cable, glistening in a spot of sunshine on the ground in front of the sign. When we got closer, the cable uncoiled itself and slithered off about 20 feet away. It was a beautiful black snake that was very thick and nearly 8 feet long. I recognized it as being either a black racer or a black rat snake, both of which are non-poisonous and like open grassy areas. It was such a pretty snake that I just wanted to get a better look. I followed it over to where it stopped, but when I got within 10 feet, it slithered off again another 20 feet. I paused and figured that if I kept following, it would simply keep slithering away, so I thought, I’m smarter than a snake, I’ll just circle around and approach from the opposite direction. Crouching low as I walked, so it wouldn’t see me, I got within 3 feet of it. I completely startled it. It reared up about 2 feet in the air, flattened its neck out making it look like a cobra, then it opened its mouth and hissed very loudly. That in turn startled me. I leapt backwards farther than I’ve ever jumped before and nearly fell on my backside. 
Now, I don’t know if you speak snake or not, but I can understand, "BACK OFF" in any language. So, I decided it was time to leave that snake alone, and rejoin my friends. As I said, I knew that snake wasn’t poisonous, and if it bit me that it wouldn’t hurt more than giving me a small cut, but even so, when it hissed at me, it sure did make me think twice. That’s all you have to do with a bully, make him have doubts. Make him think you will be more trouble than it’s worth. This technique isn’t just for kids; it also works for adults. I have a friend who has successfully used this technique while riding the subways in New York City. If you know of someone who is bullied, please share this article with them.
<urn:uuid:f8807b45-e50a-45d3-a7e1-c9f715bffba7>
CC-MAIN-2024-38
https://www.isemag.com/columnist/article/14268592/human-network-i-bullied-an-unlikely-victim
2024-09-16T14:56:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00240.warc.gz
en
0.980959
994
2.671875
3
Why Encrypting Data At Rest On Mainframes Is Important Mainframes have long been used for business processing, where large amounts of data and transactions are handled, which makes them a preferred target for attackers. Confidential data such as accounting records, customer databases, and business secrets that are valuable to any organization is processed on these high-powered systems. Leaving this data unencrypted at rest is dangerous, as it exposes the data to being tampered with, stolen, or misused. Applied to data at rest, encryption helps protect your organization's critical data and adhere to legal requirements and industry standards. Encrypting Data In Transit On Mainframes To Enhance Security It is equally important to safeguard data as it is transmitted between your mainframe systems. Since information passes through many components, applications, and systems, both internal and external, it is vulnerable to malicious interception, modification, and eavesdropping. For data transmitted over the network, the best defense against these risks is to adopt strong encryption mechanisms, such as SSL/TLS or IPsec, that protect mainframe communication. Data Encryption: At Rest And In Transit Data encryption can be broadly categorized into two main types: Encryption at Rest: - Symmetric-key encryption: Employs a single key for both encryption and decryption; modern symmetric algorithms such as AES offer fast encryption for data stored on the mainframe (older ciphers like DES are no longer considered secure). - Asymmetric-key encryption (public-key encryption): Uses a pair of keys (public and private) to encrypt information, providing a higher level of protection for data that is considered private. Encryption in Transit: - SSL/TLS (Secure Sockets Layer/Transport Layer Security): Establishes a secure connection between mainframe computers and other devices or systems, safeguarding data during transmission. - IPsec (Internet Protocol Security): Encrypts network traffic, guaranteeing the confidentiality and integrity of the data exchanged between mainframe systems and remote sites. Mainframe Data Encryption: Techniques And Approaches Protecting data on mainframe systems involves using appropriate encryption techniques and complying with the right measures. Some key approaches include: Disk-level Encryption: Encrypting the whole disk or volume on which the data resides, offering an all-encompassing and fully integrated means of protection. File-level Encryption: Encrypting individual files or directories so that security can be applied at a more granular level. Database Encryption: Implementing encryption as part of the DBMS, thereby securing data held in mainframe databases. Key Management: Establishing processes for generating, distributing, and protecting the keys used to secure your data. Access Controls: Restricting access to stored data so that only authorized users and processes can read or modify it. Advantages Of Data Encryption At Rest For Mainframes Implementing data encryption at rest on your mainframe systems can provide numerous benefits, including: - Enhanced Data Security: Ensuring that information cannot be accessed, stolen, or misused by those who should not have access to it, even in the event of a breach. - Regulatory Compliance: Adhering to industry standards and data protection regulations such as GDPR, HIPAA, and PCI DSS.
- Improved Risk Management: Reducing potential financial and reputational losses from data breaches and cyber-attacks. - Increased Customer Trust: Demonstrating your organization's commitment to protecting customers' data, thereby building trust and loyalty. - Streamlined Incident Response: Supporting faster and more effective handling and resolution of any security incident that does occur. In a world where technology is advancing rapidly, the importance of securing mainframe systems and the data they hold cannot be overemphasized. By applying the appropriate encryption technologies to data at rest and data in motion, your organization's most valuable assets can be protected, legal requirements met, and customer trust earned.
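As a concrete illustration of symmetric encryption at rest, the sketch below uses Python's cryptography package (an assumption: the article does not prescribe a specific library or platform) to encrypt a record before writing it to disk and decrypt it on read. Key management is deliberately simplified here; in practice the key would live in a dedicated key-management system, not next to the data.

```python
from cryptography.fernet import Fernet  # AES-based authenticated symmetric encryption

# Key generation and distribution would normally be handled by a key-management system.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"account=4711;balance=1032.55"        # example of sensitive data at rest

# Encrypt before persisting to storage.
with open("record.enc", "wb") as f:
    f.write(cipher.encrypt(record))

# Decrypt only when an authorized process needs the plaintext again.
with open("record.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())

assert plaintext == record
```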
<urn:uuid:6e702f4a-9f00-4699-9ac0-52c65a07a008>
CC-MAIN-2024-38
https://blog.avatier.com/the-essential-guide-to-data-encryption-at-rest-and-in-transit/
2024-09-17T22:54:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00140.warc.gz
en
0.903591
894
2.921875
3
Creator: University of Colorado Boulder Category: Software > Computer Software > Educational Software Topic: Arts and Humanities, History Tag: assessment, blended learning, Discuss, language, learning Availability: In stock Price: USD 49.00 This course is for language educators who wish to learn how to build and teach a blended language course. You may receive 1.5 Graduate Teacher Education (GRTE) credits for completing this course; see below for more information. This course is divided into four modules. In the first module, we discuss the origins and effectiveness of the blended learning model. In the second module, we look at course-level considerations, such as how to choose a blended format or how to build a blended course syllabus. In the third module, we focus on the unit level: how to plan a blended unit, how to present content, and how to design blended activities and assessment strategies. In the fourth module, we turn our attention to the teaching aspect of the blended learning experience. At the end of the course, you have the option to put what you have learned into practice through an optional peer-reviewed assignment (Honors lesson). This assignment is not required to complete the course unless you wish to obtain Graduate Teacher Education credits (GRTE). This course includes short quizzes, discussion questions, and an optional scaffolded peer-reviewed assignment. If you complete all the assignments, by the end of this course you will have a largely fleshed-out blended unit and you will be well on your way to building your own blended language course. Upon completion of this course, students will be able to: Explain blended learning as a teaching modality and discuss its effectiveness List and discuss the main instructional design steps involved in building a blended language course Select tools and technologies to support blended language learning Discuss effective blended teaching strategies GRTE credits: Credits earned for GRTE courses are not applicable toward a degree program offered at the University of Colorado Boulder but may be used for teacher professional enhancement, including relicensure and school district salary advancement. You may receive 1.5 GRTE credits for completing this course ($225). The form is provided at the end of module 5.
<urn:uuid:d32f7497-4448-4509-ac05-692f2216e8f9>
CC-MAIN-2024-38
https://datafloq.com/course/blended-language-learning-design-and-practice-for-teachers/
2024-09-20T07:15:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00840.warc.gz
en
0.931251
476
2.65625
3
This Emerging Cybercrime Hurts Everyone It Touches Synthetic identity fraud is a crime. It is either a felony or misdemeanor depending on its scale and financial impact. In this emerging form of fraud, a cybercriminal combines stolen information, such as an actual Social Security number, with other data that may be a mix of real and invented information, such as name, date of birth, address, and social media handles. The result of this forged alliance is a fake or synthetic identity that can then be used to commit acts of financial fraud. Synthetic identities are effective because they appear to be real and legitimate. According to Experian, a global leader in credit information, synthetic identity fraud is a rapidly accelerating threat, and its victims can experience serious credit and financial consequences. Individuals, businesses, and government entities can be victims. Once a fake or synthetic identity has been created around an individual’s Social Security number, criminals will use it to defraud banks, financial services providers, credit card companies, and other lenders. Businesses who sell goods and services become victims of synthetic identity fraud as fake credit is used to make purchases that leave the seller holding the bag. These particular criminals also steal from federal government programs such as Medicare, Medicaid, and similar benefit programs, and from local governments as well. How a Synthetic Identity is Built Every new hack or data breach we read about exposes personal information to cybercriminals. Once they have access to Social Security numbers, the rest of the fake profiles can be created. The best synthetic identities are those that combine some real and some false information and use it to build a believable online profile of a fake person. These are commonly known as Frankenstein identities for being cobbled together from a host of disparate parts. To add credibility to the synthetic identity, criminals may create fake social media accounts, obtain easy credit cards, and slowly build a credit history. They may incubate a synthetic identity for months or even years before finally using it to “borrow” a large amount of money or commit other financial crimes. Once they’ve enriched themselves, they generally abandon the used identity and move on to begin creating a new one. The Serious Financial Impact of Synthetic Identity Fraud In the past several years, synthetic identity fraud has outpaced credit card fraud and identity theft as the fastest growing form of fraud in the world, according to Thomson Reuters. One reason is that organized cybercrime is active in synthetic identity fraud, thus exponentially increasing the impact. Another reason is that using a synthetic identity to steal money is easy and inexpensive, with a low risk of detection. In fact, it is estimated that 95% of synthetic identities go undetected when new customer accounts are created at financial institutions. Similarly, among retail and e-tail businesses, losses due to synthetic identity fraud either go undetected or are written off as a cost of doing business. The estimated cost of synthetic identity fraud ranges from $20 billion to $40 billion globally, according to various sources. The Deloitte Center for Financial Services estimates that synthetic identity fraud will cause $23 billion or more in losses in the U.S. by 2030. Data compiled by Statista illustrates that synthetic identity fraud was the second greatest source of fraud in the U.S. in terms of merchant losses in 2022. 
Specifically, 28% of merchant losses due to the creation of new accounts and purchase of goods and services were the result of synthetic identify fraud, as were 29% of losses due to account logins. Only first party fraud generated more financial losses, at 30% to 31% overall. For cybercriminals, and for cybercriminal gangs, the greatest attraction of synthetic identity fraud is that so many organizations are vulnerable to it. Because it is still an emerging threat and difficult to detect, governments, financial institutions, insurers, retailers, and e-commerce sites have not yet effectively upgraded their customer authentication tools to detect synthetic identity thieves before they strike. Unfortunately, synthetic identity scams have been so successful in so many ways that they have expanded beyond their original parameters. Today, this type of fraud enables other criminal activities that range from romance scams and money laundering to illegal arms sales, human trafficking, and terrorism. Clearly, this is more than a financial crime and is far from victimless. The Real Human Effects of Synthetic Identity Fraud Social Security numbers are the Holy Grail of personal data due to their unique and closely guarded nature. And they are the most common and preferred basis upon which to build a synthetic identity. When your Social Security number is compromised in a data breach, for example, and obtained by a cybercriminal, their fraudulent activities can lead to a “split or fragmented credit file,” as Equifax explains. Fragmented credit files occur when information from another person (in this case, a synthetic identity created using your SSN) becomes attached to your very real and personal credit history. This parasite can negatively affect your credit score, your ability to obtain credit in the future, your financial profile, and your credibility in the eyes of bankers and other lenders. What’s more, this parasite can be extremely difficult to remove. Cybercriminals prefer easy marks, and may target individuals who don't actively or frequently use credit, such as children or the elderly. A child whose SSN has been compromised and used to create a synthetic identity may not realize they’ve been victimized until they reach adulthood, by which time the fraudulent activity tied to their Social Security number can be an obstacle to employment or to obtaining credit. An elderly individual who collects government benefits may not learn that their Social Security number has been compromised until they miss a few benefit payments. Thus, the personal toll may be significant. How Businesses Can Protect Data A recent article in Forbes observes that this emerging cybercrime has created a two-fold responsibility for businesses to (1) safeguard their customer information and (2) protect their own organizations. Synthetic identity fraud relies on cybercriminals obtaining personal information, which is often sourced from compromised websites. To better protect online data—which may range from personally identifiable information and protected health information to payment card data and other forms of sensitive information—cybersecurity experts recommend installing protections to block cyberattacks on websites. Safeguards include: - Data encryption - Network security safeguards - Threat monitoring and detection tools - Timely software updates and security patches - Multifactor authentication tools for user logins - Security audits to gauge the effectiveness of these measures. 
Conducting a website risk assessment and web application testing are sensible first steps to identify vulnerabilities and security gaps that can lead to exploitation by cybercriminals. It’s also important to regularly train employees to be aware of cybercrime, how to recognize phishing and other social engineering schemes, and what to do if they become suspicious of a request for information. How Individuals Can Protect Data Taking action to reduce synthetic identity fraud is not solely the responsibility of businesses and government entities. Individuals can take effective actions, too. Experian notes that the key to avoiding synthetic identity fraud is to keep your personal information safe and routinely monitor for any signs that it has been compromised. Following are four practical actions consumers can take immediately: - Treat your Social Security number like gold and keep it private. When a business, healthcare provider, or any other organization requests your SSN, ask specifically why they need it, and what alternative form of identification would suffice, such as a driver’s license. If filling out a form that requests your SSN, don’t enter it until you have confirmed that it is, in fact, required in order to obtain the service or merchandise you are seeking. More often than not this won’t be the case. - Be alert for phishing schemes. Many cybercrimes begin with phishing schemes designed to trick individuals into sharing information they shouldn’t. Phishing can occur via email or phone call. The request for information usually sounds legitimate or harmless. But ask pointed questions about why the information is needed and for what actual purpose. Ask who you can call to verify the request before providing the information. Being sharp and vigilant will scare away most schemers and send them angling for easier fish. - Be careful what you share online. Too many people share too many personal details and place too much trust in social platforms. Sharing your complete date of birth, your maiden name, an old address, or other personal information on social media may seem innocent enough. However, these details can be used by cybercriminals to pose as you for fraudulent purposes. They can also be used together to create a synthetic identity without you even knowing it. Think twice before you overshare. - Monitor your credit reports. Regularly checking your credit reports at all three credit bureaus (Experian, TransUnion, and Equifax) can be an effective and early way to detect suspicious activity and signs of identity theft. And locking your credit reports can make it more difficult for criminals to open up new credit using your name. Reports can be unlocked temporarily when you need to apply for credit. The rapidly emerging crime known as synthetic identity fraud costs the U.S. billions of dollars each year and is projected to top $23 billion in the next six years. It affects businesses in banking, credit and finance, insurance, retail sales and ecommerce, as well as government programs that dispense financial benefits. It also victimizes individuals, from the theft of Social Security numbers to the wreckage of fragmented credit files. To help reduce this growing cybercrime, actions can be taken immediately by businesses, individuals, and government organizations. 
Network and website safeguards, heightened cybersecurity awareness, user training, and multifactor authentication can be effective tactics in the war on synthetic identity fraud, along with the implementation of universally accepted cybersecurity frameworks from NIST, HITRUST, and other proven sources. The experienced cybersecurity experts at 24By7Security can help you get started.
<urn:uuid:fd0a9f37-ff10-419a-bd16-b79ea97db598>
CC-MAIN-2024-38
https://blog.24by7security.com/synthetic-identity-fraud-and-its-very-real-impact-on-business
2024-09-19T07:32:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00140.warc.gz
en
0.942855
2,038
2.609375
3
An Enterprise Guide on Service-Oriented Systems Integration Service-oriented architecture and later microservices have become highly beneficial providing organizations with the level of flexibility they need. It is especially significant today when the number of applications adopted by companies grows each year, demanding more efficient governance approaches. Number of Applications per Company What Is SOA and Its Role in Software Design? As Red Hat defines, “service-oriented architecture (SOA) is a type of software design that makes software components reusable employing service interfaces based on a common communication language over a network”. With 35% of organizations choosing service-oriented architecture for their system design, it has become a proven vendor- and technology-agnostic approach to integrate autonomously deployed and maintained software components. SOA enables efficient communication between services by passing data or by coordinating the activity of two or more services. Each service in the SOA performs as an independent piece of software with its responsibilities, user interface, databases, supported protocol, etc. A good example of a SOA-based system is a set of customer services, like CRM, ERP, Product Information Management System (PIM), etc. These services can be implemented using different technologies and support diverse protocols of communication, data models, etc. However, to enable effective service-to-service communication and align their workflows, the system needs an additional integration mechanism in between. Most Common Service Integration Approaches Integration is a backbone of SOA-based system and can be implemented in different ways depending on business demands, platforms, and SOA best practices. Here we will take a closer look at some of the most common SOA integration approaches. If you are looking at how to gather all your IT systems, services, and applications within a unified IT landscape, have a look at Infopulse’s advanced integration operations services to sync your data and optimize operations across different sites. In architectures, powered by direct connection, services can communicate with one another using HTTP requests. Sometimes services build a chain of requests in which the final response depends on the responses of all the services involved in the chain. Direct Connection Sample Benefits: It is the easiest and fastest way of implementing connections between services. Most modern services have advanced API layers, mainly RESTful API endpoints. Using HTTP protocol makes it easy to connect services that are built and being hosted on different and incompatible platforms. Drawbacks: Since the services are tightly coupled and rely on synchronous communication, they become completely dependent on one another and even a temporary unavailability of one service can block all other services. Message-Driven Design Pattern To address the challenges of tight coupling, you can opt for the Message-Driven Design Pattern that allows applications, services, and systems (producers) to send messages to a delivery service rather than to other services directly. Then, the delivery service brings messages to consumers (services that consume messages sent by applications). The most common types of delivery services are Message Broker and Enterprise Service Bus. Benefits: There are no direct connections and request chains between services and thus, the entire system works in an asynchronous way. 
With loosely coupled systems, each service acts and scales independently, and if one service fails, other components of the system can still function. Along with high scalability and resilience, this approach also helps you overcome integration complexities. Message Delivery Services within SOA Message Broker Integration A message broker is a lightweight service that simply delivers messages from producers to consumers, without any possibility of extending its business logic. This middleware allows services, applications, and systems to communicate freely with each other by translating messages between different protocols. So, whatever languages the platforms or apps use, they will still be able to "talk" with each other. Benefits: A message broker is a reliable message delivery service that can support hybrid-cloud, cloud-native, serverless, and microservices architectures. It serves as a single endpoint for all producers and consumers and allows for flexible communication between apps, services, and systems. Consumers can either pull messages from the message broker or subscribe to topics and react to message-arrival events, which results in higher information availability. Message Broker Architecture Challenges: To work with a message broker directly, you may have to significantly rewrite the code of your services in order to implement the broker-specific message-driven business logic and protocols, especially if Direct Connection integration has already been implemented. Another drawback is that changing business logic in one service may require rewriting code in other services too. Moreover, the development team must have strong expertise in the architecture, infrastructure, and other aspects of the particular message broker. It is also essential to ensure that the platforms the services are built on provide rich facilities for interacting with the broker, such as an SDK. On top of that, integration through a message broker assumes that producers and consumers are themselves responsible for establishing connections with the broker and can only interact with it using its specific protocol. The business logic of the integration also rests solely on the producers, which must know which queues or topics to send messages to, while consumers must use the same queues and topics and take the message format into account. Finally, the business logic of integration, in particular the handling of messages that cannot be delivered, has to be implemented at the application level. What Is Enterprise Service Bus? Another common, and better, option for integrating services and handling information exchange within service-oriented architectures is the Enterprise Service Bus. Functioning primarily as a centralized software component, it lets you integrate applications and implement the necessary transformations through a service interface that can be reused by new applications. The ESB is also responsible for enabling connectivity, converting communication protocols, transforming data models, routing messages, and composing multiple requests. Benefits: While a message broker is just the transport layer of a distributed application, an ESB is a comprehensive integration solution that includes a message broker along with other services covering all aspects of integration. An Enterprise Service Bus helps set up integration flows, validate inbound data models, and transform data formats and models.
It is also in charge of routing messages, guaranteeing their delivery, and handling any delivery failures. Challenges: An ESB is a complex solution involving many cloud services, which may complicate the infrastructure of the whole application and demand extra monitoring effort. Enterprise Service Bus Architecture On-premises vs. Cloud-based Enterprise Service Bus Architecture There are two ways to implement an Enterprise Service Bus – on-premises or in the cloud. On-premises, the customer has full control over the ESB infrastructure, which is preferable if the application must meet strict security requirements. However, in this case, the customer takes full responsibility for infrastructure setup and support. That is a complex undertaking that requires broad expertise and considerable effort from in-house developers to maintain highly available and scalable services. In addition, you will need to purchase enough equipment to meet availability and scalability requirements during peak-load periods, and support effort must be spent regardless of the system's actual load. To address these on-premises limitations, many organizations lean toward cloud-based ESB solutions, which keep functioning without degrading in environments with unpredictable throughput and load. A cloud provider shoulders responsibility for infrastructure creation and support, automatic scalability, failover, and so on. The provider also guarantees the customer a particular level of quality, availability, and reliability for its services. Plus, the customer does not need to purchase any equipment and pays only for the consumed resources. The architecture itself is very flexible: the ESB consists of logical blocks with distinct functionalities, and each block can be implemented using different services or a combination of them. The choice of services depends on the ESB's requirements for throughput, availability (the desired SLA – service level agreement), the price of the system, and so on. ESB Architectural Components Backend systems – the customer's services involved in the integration. The same service can both send messages to the ESB and receive them from it. - Producers – services that send messages to the ESB - Consumers – services that receive messages from the ESB. Adapters – ESB components responsible for interaction with backend systems. Adapters hide the details of processes inside the ESB and give backend systems a unified way to interact with it, regardless of the platforms and technologies the services are built on. - Inbound adapters – RESTful API endpoints that represent the inbound ESB interface and support the HTTP POST method. Inbound adapters perform initial validation of inbound request data models, transform messages into a format compatible with the internal message broker (if necessary), and send messages to message queues or publish them to message topics. - Outbound adapters – represent the outbound ESB interface. After taking a message from a message broker queue, the adapter transforms the message format (if necessary) and sends the message in the request body to the target backend system (consumer) using a POST method. Outbound adapters can also support a poll model, allowing backend systems to request messages from the ESB themselves. Transport – a message broker that accepts, buffers, and delivers messages to the various components of the ESB.
Transformations – ESB components that transform the message format/model during transport and redirect messages within the business logic workflow. How Does the ESB Work? Producers interact with the ESB through RESTful API endpoints (inbound adapters), while the ESB reaches consumers through Direct Connection integration. Each inbound adapter represents the beginning of a predefined business logic pipeline. The business logic can include validation of inbound data models, various message data transformations, and message persistence until delivery. It also redirects messages to one or more outbound adapters, which sit at the end of a business logic pipeline. Outbound adapters are responsible for delivering messages to the corresponding RESTful or SOAP consumers. A Typical Sequence of Integration Tasks in the ESB A pair of inbound/outbound adapters is placed at the beginning and the end of an integration flow to ensure the delivery of a message from one backend system (producer) to one or more backend systems (consumers). If, after a defined number of attempts, the outbound adapter fails to deliver the message to the consumer(s) because the consumer service(s) are unavailable, the message goes to the dead-messages queue for further analysis. If the message is not delivered because of a Bad Request response from the backend system, the message goes to the poison queue for further analysis. Enterprise Service Bus Use Cases The core ESB functionality lays the groundwork for a range of use cases in SOA implementation. Below we also mention some Enterprise Service Bus best practices. 1. Routing and messaging/data exchange (a set of adapters, message broker, transformations) The routing components in the ESB environment enable: - Connection of services to the transport environment - Data transformation and routing - Exchange of messages - Information updates - Subscription-based message delivery - Organization of asynchronous information transfer - FIFO message order (if required) - Dead- and poison-message handling The routing components use dedicated software tools, adapters, and brokers (gateways) that provide connection to the integration service, transport (data/message transmission) with guaranteed delivery, data transformation, and processing. 2. Enterprise Service Bus security and access control The protection and governance components at the ESB level must provide: - Protection and control of access to the services - Encryption of transmitted data - Connection protection - Prevention of unauthorized access to information Built-in authentication and authorization tools, as well as automation tools, are needed to track and block unauthorized actions by integrated services and systems and so prevent the leakage of sensitive data. Security measures must provide segregated access to ESB facilities, an industrial level of endpoint protection against DDoS and other malicious attacks, compliance with OWASP recommendations, and so on. 3. Monitoring, control, diagnostics, and tracking of integration processes (logging) The mechanisms for monitoring, diagnosing, and tracking the state of process execution must provide: - Identification of problems related both to data transmission and to the status of operations performed on the service bus as a whole. - Identification of potential problems at an early stage, before a problem manifests itself in full. - Timely implementation of a set of preventive measures.
- Monitoring and control of key indicators and metrics of business processes at different stages. 4. Guaranteed delivery of messages The ESB must be able to store messages until they are delivered to their addressees, buffering them when consumers cannot process them in real time during peak-load periods. A reserve delivery channel should also be in place in case the main one fails. - Failover message broker cluster - Autoscaling of all ESB components - Implementation of Retry and Circuit Breaker patterns in message delivery to ensure high resilience 5. A single environment for ESB integration services Benefit from a unified user interface for integration scenario development and for centralized management, control, monitoring, and administration of the ESB integration scheme. For this, the user interface should contain a set of tools for: - Configuring adapters, transforming messages, and queuing and routing messages and data - Managing and configuring data transport between ESB services - Managing and administering the control, monitoring, and diagnostic services Benefits: Enterprise Service Bus technology simplifies message-driven integration of diverse services from heterogeneous environments. Backend development teams can focus on their business logic and rely on the existing ESB integration facilities, which they would otherwise have to implement themselves. Thanks to the cloud and an open architecture, an ESB within SOA-based systems can bring the highest resilience and scalability of integration and help optimize costs, since you pay only for the consumed resources. Comparing the benefits and drawbacks of the Message Broker and Enterprise Service Bus approaches, we can conclude that the most effective integration approach for enterprise-level SOA systems is a cloud-based Enterprise Service Bus. Not only does it optimize development and support processes, it also brings flexible pricing, high availability, reliability, efficiency, and predictability of costs.
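To make the retry and dead-letter behavior described above more tangible, here is a minimal, self-contained sketch of the pattern in Python. It simulates a broker with in-process queues rather than using a real message broker or ESB product, so the queue names, the consumer endpoint, and the retry limit are illustrative assumptions rather than part of any specific platform.

```python
import json
import queue

transport = queue.Queue()        # stands in for the message broker's main queue
dead_letter = queue.Queue()      # messages that could not be delivered (consumer unavailable)
poison = queue.Queue()           # messages rejected by the consumer (bad request)
MAX_ATTEMPTS = 3                 # illustrative retry limit

def inbound_adapter(payload: dict) -> None:
    """Validate the inbound data model and publish the message to the transport."""
    if "order_id" not in payload:                       # minimal data-model validation
        raise ValueError("invalid inbound message")
    transport.put(json.dumps(payload))                  # transform to the broker's format

def consumer_endpoint(message: str) -> int:
    """Stand-in for the consumer's REST endpoint; returns an HTTP-like status code."""
    return 200 if json.loads(message).get("order_id") else 400

def outbound_adapter() -> None:
    """Deliver buffered messages, retrying on failure and routing undeliverable ones."""
    while not transport.empty():
        message = transport.get()
        for attempt in range(1, MAX_ATTEMPTS + 1):
            status = consumer_endpoint(message)
            if status == 200:                           # delivered successfully
                break
            if status == 400:                           # consumer rejected the message
                poison.put(message)
                break
        else:                                           # all attempts failed
            dead_letter.put(message)

inbound_adapter({"order_id": 42, "amount": 99.5})
outbound_adapter()
```

In a real deployment the transport would be an actual broker queue or topic and the consumer call would be an HTTP POST, but the routing decisions (retry, dead-letter, poison) are the same ones the ESB workflow above describes.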
<urn:uuid:dcd4b4ca-f19e-4329-98d2-eb518e79aa30>
CC-MAIN-2024-38
https://www.infopulse.com/blog/integration-approaches-service-oriented-systems
2024-09-19T08:09:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00140.warc.gz
en
0.909603
2,972
2.671875
3
Sitting in my son's bedroom, I am reminded of when we used to play World of Warcraft together. I have Steam (the popular video game distribution platform) on my laptop currently and have actively fought in quite a few virtual, online wars. When I think about such things, I always experience a sense of wonder: How do they make these games? Who is behind the scenes and what does it take? Who makes up the look and feel? I have always wanted to explore what a video game designer does, so let's dive into this subject together. For those of us who play video games, being a video game designer might sound like a dream job. It could be seen as the ultimate pinnacle of your career as a technology professional. In actuality, however, the life of a video game designer isn't too far off from that of a regular technology professional. With video games being a $30 billion industry, most games are the product of coordinated effort among a large team. Most individual game designers function as part of a comprehensive unit of designers and developers who all pull together to create a fun playing experience for you and I. All video games have to start with a concept, of course. What is the story of the game? What are players trying to accomplish? If you choose to work as a self-employed designer, you will almost certainly come up with some concepts on your own. If you work at a studio, concept work will be done with a team of other designers. Concept work, of course, consists of more than just formulating an idea for a game. In addition to the basic scenario, concept designers flesh out gameplay, visual layouts, storylines, characters, maps, difficulty levels, and other details. These concepts need to be very detailed. For example, concept designers have to determine small details like how fast characters can move, or how high they can jump. Designers must be detail-oriented to come up with concepts on this level. Graphic designers at small studios may be involved with many or all aspects of the game's concept. At larger studios, graphic designers may have a more specialized role. For example, some graphic designers might work on a team whose only focus is the game's map, or its character set. We've come a long way since the earliest game systems, and most games today involve sophisticated artwork. Traditional graphic designers are needed to deliver the artwork, animation, and mockups required for the user interface (UI) aspect of the game. The trolls that dance across the screen during the World of Warcraft opening sequence take shape in the minds of one of these people. Creative types will be drawn toward this role on the team and, at between $100,000 and $150,000 per year, they are paid handsomely for their ability to create. I am of the school that believes an individual can grow or sharpen his or her creative ability, but he or she must be born with that seed to ultimately succeed. I can't even draw a good stick figure, though I can run a project to create a game as well as anyone. Which brings me to my next role that falls under the video game designer umbrella. The project manager role, usually called lead game designer, is the perhaps the most pivotal role. This person coordinates everything from start to finish. The lead designer make the production schedules, herds the cats, and works with both teams and individuals to help them meet deadlines. Project management for a lead designer on a game design project can be tricky. 
A lead designer can't just give orders but must successfully work with people who are accustomed to self-supervision. Rather than tell people what to do, a lead designer draws attention to choke points in the production chain and encourages problem solving. A good lead designer uses his or her own behavior to guide others' performance — by starting meetings on time, for example, and following through on between-meeting assignments. Lead designers often rely heavily on such tactics, since they typically cannot use promotions, compensation, or threats of dismissal to influence team members. As a certified life coach (among many other certifications), I am drawn toward coaching opportunities. Coaching opportunities are abundant within creative teams because the skills members eventually need are often ones they don't already have. Everyone on the project needs the skills of everyone else involved. Success requires a team effort. In addition to providing direction, a lead designer must often do a share of the work, particularly in areas where he or she has special competence. Ideally, a lead designer should also take on one or two of the unpleasant or unexciting jobs that no one else wants to do. If you're interested in this role, then getting a PMP (Project Management Professional) certification is a good place to start. Project management skills are easily translatable to many other IT sectors; you will never be out of work as a project manager. The role that everyone probably thinks of first when they hear the words "video game designer" is the programmer or developer. These are the folks who write the code that makes everything come together. The characters move, the shapes render, a virtual gun shoots (or a sword swings), all as a result of computer code. A skilled developer can expect a six-figure starting salary. Beginners will most often move up to this role from a more traditional background in software development or web development. They will morph into hard-core back-end developers who make the game's engine run smoothly. Great developers are often highly creative and may even think of their own game ideas and/or create their own companies. If you have your eye on this role, then you will want to pursue IT certifications, training, and education that align with both software development and game design. There are other roles on the massive teams that design games. You could become a tester, a game flow designer, a video editor, a copywriter, and so forth. No matter which role you end up filling on a game design team, you are part of the finished product. You can tell all your friends and family which parts of the game wouldn't have come to life without you. And you'll have the satisfaction of having worked on something that delights and entertains people for hours on end — that is arguably the best part of being a game designer.
<urn:uuid:60aa619f-08cb-4cce-89e7-f0caa6cf0e4f>
CC-MAIN-2024-38
https://www.certmag.com/articles/job-profile-become-video-game-designer
2024-09-20T13:47:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00040.warc.gz
en
0.969834
1,294
3.015625
3
Today’s internet users are very conscious about their digital privacy. They want to be sure that their data is safe and not accessible to just anyone. But users need to log in to dozens of websites, applications, and services on a daily basis. How can we have so many different accounts with so many different login requirements? The answer is: authentication methods. Authentication is the process by which we prove our identity or verify ourself as an authorized user of a service or resource. There are several ways you can authenticate yourself when accessing different services and resources online. Here’s a detailed overview of the most common authentication methods: If you’re reading this article, chances are you’re looking for the best authentication method for your website or application. While there are several authentication methods that are commonly used, only a few of them can be considered the “best”. The best authentication method for your business depends on a number of factors, including the resources and data that need to be protected, the type of users your site attracts, and the technology used on your website. Some authentication methods are more secure than others. While some are more convenient for the user, others force the user to go through an unnecessary and arduous process. Each authentication method has its upsides and downsides. It’s up to you to decide which authentication method is best for your business. SMS authentication is one of the most common authentication methods out there. It’s usually used to add an additional layer of security to your account. When you log in to your account, you’re asked to enter your password. Once you’ve done that, you’ll receive a 6-digit code via SMS on your phone. You have to enter that code in order to log in successfully. The code expires after a few minutes, so you’ll have to enter it again after a few seconds. SMS authentication is a good choice if you want to add an extra layer of security to your account. It’s especially useful if you’re worried about someone trying to access your account from a public computer. SMS authentication is easy and convenient to use. The only downside is that users have to have access to their phone in order to log in successfully. Email authentication is used to control access to a particular piece of information. Let’s say you have a mailing list. You can create a rule that only allows people who have authenticated with your email address to be able to access your mailing list. Email authentication is convenient and easy to use. The only disadvantage is that anyone can create an email address and try to authenticate with your account. If you’re worried about malicious users trying to access your account, you should use a different authentication method. Email authentication is not the best choice if you want to add an extra layer of security to your account. However, it’s a great way to limit access to specific information. Single sign-on (SSO) authentication allows you to log into different websites or applications with a single login. Instead of creating an account on every single website or application you use, you can use one account to access all of them. That way, you don’t have to remember dozens of passwords. You only have to remember one. Single sign-on is convenient, efficient, and easy to use. It’s a great choice if you want to reduce the hassle of logging into different accounts. It doesn’t add an extra layer of security to your account, though. 
If someone manages to hack into your account, they’ll have access to all the websites and applications associated with it. Single sign-on is the best authentication method if you want to save time logging into different websites. Biometric authentication uses your physical traits (like your fingerprint, face, or iris) to prove your identity. It’s one of the most secure and reliable authentication methods. Biometric authentication is a good choice if you want to add an extra layer of security to your account. Biometric authentication is easy and convenient to use. The only disadvantage is that many users find it intrusive. Not everyone wants to use their fingerprint or face to log in to their account. Biometric authentication is the best authentication method if you want to add an extra layer of security to your account, without having to go through a tedious process. Google Authentication uses your Google account to log in to other websites and applications. It’s a good authentication method if you already have a Google account. Google Authentication is easy and convenient to use. The only disadvantage is that you have to use a Google account. If you don’t have one, you’ll have to create one. Google Authentication is the best authentication method if you want to log in to another website using your Google account. In this article, we’ve discussed the most common authentication methods used today. We’ve examined each of them, their benefits and drawbacks, and how they compare to one another. You can use this information to select the best authentication method for your business. Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp Training and build your cybersecurity knowledge and skills. Become a certified ethical hacker! Our 5-day CEH Bootcamp is unlike other strictly theoretical training, you will be immersed in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more
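To ground the SMS authentication flow described earlier, here is a minimal sketch of issuing and verifying a short-lived one-time code in Python. It is a simplified illustration rather than a production design: the delivery step is only simulated, the six-digit format and five-minute lifetime are assumptions, and a real system would also rate-limit attempts and store hashed codes server-side.

```python
import hmac
import secrets
import time

# In-memory store of issued codes: user -> (code, expiry timestamp).
# A real service would keep this in a database or cache and hash the codes.
issued: dict[str, tuple[str, float]] = {}
CODE_LIFETIME = 300  # seconds (an assumed five-minute validity window)

def send_code(user: str) -> None:
    """Generate a random 6-digit code and 'send' it to the user's phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    issued[user] = (code, time.time() + CODE_LIFETIME)
    print(f"SMS to {user}: your login code is {code}")   # placeholder for a real SMS gateway

def verify_code(user: str, attempt: str) -> bool:
    """Check the submitted code against the stored one, rejecting expired codes."""
    code, expires = issued.get(user, ("", 0.0))
    if time.time() > expires:
        return False
    return hmac.compare_digest(code, attempt)            # constant-time comparison

send_code("alice")
print(verify_code("alice", input("Enter the code you received: ")))
```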
<urn:uuid:1fa73d91-020d-46fa-9a68-6360281d4fe6>
CC-MAIN-2024-38
https://asmed.com/the-best-authentication-methods-which-one-is-right-for-you/
2024-09-08T08:46:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00240.warc.gz
en
0.92937
1,147
2.96875
3
Privilege escalation, in simple words, means gaining privileges to access something that should not be accessible. Attackers use various privilege escalation techniques to reach unauthorized resources. For web application security, privilege escalation is an important concern because web intrusions are usually only the first stage of a complex attack. Malicious parties often use web attacks to gain basic access to certain resources and then continue with privilege escalation attacks to gain more control. The ultimate goal might be accessing sensitive data, installing malware, introducing malicious code, or even hijacking a single computer system or multiple systems. There are two types of privilege escalation: horizontal privilege escalation and vertical privilege escalation. Horizontal Privilege Escalation in Web Security The term horizontal privilege escalation applies to situations in which an attacker acts as a specific user and gains access to resources belonging to another user with a similar level of access. For example, if an attacker impersonates a user and gains unauthorized access to their bank account, this is horizontal privilege escalation. Many web vulnerabilities may lead to horizontal privilege escalation. For example, Cross-site Scripting (XSS) attacks may allow the attacker to steal the user's session cookies and access their account, and CSRF attacks are also examples of horizontal privilege escalation. Vertical Privilege Escalation in Web Security Vertical privilege escalation is often referred to as privilege elevation. It applies to situations in which the attacker gains higher privileges, most often root (administrative) privileges. Privilege elevation is most often the second step of an attack. If the attack is aimed directly at the web server, the malicious user often aims first to get any kind of file system and/or console access. After they gain shell access to the web server, they may attempt to use other techniques to gain access to a privileged account, most often the system administrator. Local privilege escalation in this case often relies on misconfigurations or unpatched operating systems. Both Microsoft Windows and Linux/UNIX have been known to have vulnerabilities that allow attackers to gain administrator privileges by executing arbitrary code. Avoiding Privilege Escalation Vulnerabilities Privilege escalation vulnerabilities may arise for different reasons: - Programming errors: This includes vulnerabilities that lead to web attacks but also other vulnerabilities such as buffer overflows. - Misconfigurations: Especially risky when the principle of least privilege is not followed and normal users have too many privileges. - Lack of security hygiene: For example, delayed patches and updates for the operating system and other software. - Weak access control: For example, weak passwords. - Social engineering: Attackers may gain access to accounts by exploiting gullible users. Therefore, to avoid privilege escalation vulnerabilities, you must consider your entire security stance. Installing software such as a vulnerability scanner will certainly help eliminate some potential causes, but not all of them.
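As a small illustration of the access-control point above, the sketch below shows a resource-ownership check of the kind that prevents horizontal privilege escalation in a web application. The data model and the framework-free style are assumptions made for the example; the idea is simply that the server verifies the authenticated user owns the requested object instead of trusting the identifier supplied in the request.

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: int
    owner_id: int
    balance: float

ACCOUNTS = {1: Account(1, owner_id=100, balance=250.0),
            2: Account(2, owner_id=200, balance=900.0)}

class Forbidden(Exception):
    """Raised when a user requests a resource they do not own."""

def get_account(requested_id: int, authenticated_user_id: int) -> Account:
    """Return the account only if it belongs to the authenticated user."""
    account = ACCOUNTS[requested_id]
    # The ownership check is the defense: without it, any logged-in user could
    # read another user's account simply by changing the identifier in the request.
    if account.owner_id != authenticated_user_id:
        raise Forbidden("access to another user's account denied")
    return account

print(get_account(1, authenticated_user_id=100))   # allowed: user 100 owns account 1
try:
    get_account(2, authenticated_user_id=100)      # horizontal escalation attempt
except Forbidden as exc:
    print("blocked:", exc)
```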
<urn:uuid:7bbdd8f0-de5c-4051-a9b7-c88da128c0f4>
CC-MAIN-2024-38
https://www.acunetix.com/blog/web-security-zone/what-is-privilege-escalation/
2024-09-08T09:05:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00240.warc.gz
en
0.891288
596
3.359375
3
By Ryan Miller According to a 2023 ITU report, approximately 67% of the global population—roughly 5.4 billion people—have access to the internet. With billions of people and businesses relying on the internet every day, it’s worth stopping to consider the vital infrastructure—largely cloud infrastructure—that makes all of this global connectivity possible. Currently, cloud infrastructure is dominated by large, powerful corporations who control vast centralized data centers that store and process the world’s data. However, with the explosive growth of data volumes and demand for internet services, the centralized approach to cloud computing is showing its vulnerabilities. The world needs a better, more resilient internet to cope with projected demand. A modern paradigm in cloud connectivity is emerging called Decentralized Physical Infrastructure Networks (DePINs), offering a more flexible and inclusive approach for scalable internet infrastructure. While the term DePIN might initially seem confusing, it can be easily understood by comparing it to a more familiar concept: the gig economy. Just as ridesharing and homesharing platforms have transformed the service industry by creating marketplaces for individual providers, DePINs are set to revolutionize physical internet infrastructure—enabling a rapidly scalable, distributed, and democratized cloud computing network similar to the networks the gig economy birthed for services. The gig economy revolutionized how we think about work by allowing individuals to participate on their own terms. Whether it’s driving for a ride-share service a few hours a week or renting out a spare room, the gig economy is built on the premise of flexible, decentralized participation. DePINs bring this same flexibility to the world of hardware and infrastructure. In a DePIN, individuals can contribute their own hardware—be it a small IoT device or rack in a certified tier 3 data center—to a larger network. Just like gig workers, these participants decide how much they want to contribute and when, enabling a more dynamic and responsive approach to infrastructure management. This flexibility is particularly valuable in an era where rapid technological change often makes traditional, rigid infrastructure models obsolete. One of the defining characteristics of the gig economy is distributed services. In contrast to traditional businesses where infrastructure is centrally owned and managed, gig platforms distribute ownership across their participants. Ridesharing platforms don’t own cars; Homesharing platforms don’t own their properties. Instead, individuals contribute their resources to the platform and receive compensation in return. DePINs apply this model to physical infrastructure. Rather than relying on a single entity to build and maintain hardware, DePINs distribute this responsibility across a network of hardware providers. This decentralization reduces barriers to entry, allowing more people to get involved and reducing the cost and complexity of infrastructure development. By doing so, DePINs democratize access to the infrastructure market in much the same way the gig economy democratized access to the service economy. The success of the gig economy is largely due to its clear and immediate incentivization structure—gig workers are paid for the services they provide, creating a direct link between contribution and compensation. DePINs operate on a similar principle but use tokens or cryptocurrencies as the medium of exchange. 
Participants in a DePIN are rewarded with tokens for their contributions, which can be used within the network or exchanged for regular currency like Euros or Dollars. This tokenized reward system not only incentivizes participation but also aligns the interests of all network participants. Just as gig workers are motivated to provide better service for higher pay or ratings, DePIN participants are incentivized to maintain and enhance their hardware contributions to maximize their rewards. This creates a self-sustaining ecosystem where infrastructure is continuously improved and maintained without the need for a central authority. One of the strengths of the gig economy is its resilience. Because the workforce is decentralized, there’s no single point of failure. If one driver or host drops out, another can step in to fill the gap. This resilience is a key advantage of DePINs as well. By distributing infrastructure across many hardware providers, DePINs create a network that is inherently more robust and less vulnerable to failures or attacks. In traditional centralized models, a single point of failure can bring down an entire system. In contrast, the decentralized nature of DePINs ensures that even if some parts of the network go offline, the rest can continue to function smoothly. This resilience is critical for maintaining the reliability of essential infrastructure in a rapidly changing and increasingly unpredictable world. The gig economy has proven its ability to adapt to new demands and technologies, evolving rapidly in response to changing consumer needs. DePINs are similarly future-proof, offering a flexible and inclusive approach to infrastructure development. Traditional infrastructure projects are often slow to adapt, hampered by high costs, long lead times, and centralized decision-making. DePINs, on the other hand, can scale up or down quickly in response to demand, integrate new technologies seamlessly, and adapt to changing conditions with agility. This makes them particularly well-suited to a world where the pace of technological change is accelerating. At first glance, DePINs might seem like just another complex concept in the crypto world. But when you strip away the jargon and look at their underlying principles, it becomes clear that they are simply the next logical step in the evolution of our increasingly decentralized, on-demand economy. Just as the gig economy transformed service industries by leveraging the power of individual contributors, DePINs are set to revolutionize physical infrastructure by harnessing the power of decentralized participation. For those who may still be wary, it’s important to recognize that DePINs aren’t as exotic as they might seem. They represent a familiar and proven model applied to a new domain, offering flexibility, resilience, and innovation in infrastructure development. By embracing DePINs, we can unlock new opportunities and create a more inclusive, robust, and adaptive infrastructure for the future. Learn more about Impossible Cloud’s DePIN project: Impossible Cloud Network by visiting their website.
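The tokenized reward model described above can be illustrated with a toy payout calculation. The sketch below is purely hypothetical (no real DePIN protocol, token, or formula is implied): it splits a reward pool among hardware providers in proportion to the capacity they contributed and the uptime they maintained, mirroring the contribution-equals-compensation principle borrowed from the gig economy.

# Toy reward split for a hypothetical DePIN epoch. The weighting formula
# is an assumption for illustration, not a description of any real network.
def distribute_rewards(providers, reward_pool_tokens):
    # Weight each provider by contributed capacity scaled by uptime,
    # so unreliable hardware earns proportionally less.
    weights = {
        name: p["capacity_tb"] * p["uptime"]
        for name, p in providers.items()
    }
    total = sum(weights.values()) or 1.0
    return {name: reward_pool_tokens * w / total for name, w in weights.items()}

providers = {
    "small_iot_node": {"capacity_tb": 0.5, "uptime": 0.99},
    "tier3_rack":     {"capacity_tb": 40.0, "uptime": 0.999},
}
print(distribute_rewards(providers, reward_pool_tokens=1_000))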
<urn:uuid:763692a2-fb1a-4556-813f-fd04e111c9c4>
CC-MAIN-2024-38
https://www.impossiblecloud.com/blog/depins-the-gig-economy-for-hardware-providers
2024-09-08T08:42:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00240.warc.gz
en
0.932578
1,228
2.609375
3
The new generation of cars is being equipped with advanced technologies for networking and autonomous driving, both of which require greater computing capabilities inside the car as well as at the data centre. In the past, when cars were entirely mechanical, they generated no computer data at all. Even as they integrated more electronics, they could deal with all the data processing onboard, mostly using application-specific integrated circuits. But in the near future, new cars will have higher levels of connectivity and autonomous features, and will generate as much as one gigabyte of data per second. “Processing such huge quantities of data calls for more than classic control units,” says Dirk Hoheisel, Bosch board member responsible for autonomous vehicles. Bosch is one of the leading suppliers of automotive electronics, or technologies collectively called advanced driver assistance systems, or ADAS for short. Bosch has built a new computer – which is about the size of a car radio and loaded with artificial intelligence – in partnership with Nvidia. This “carputer” is scheduled for launch in 2020. But even the “carputer” – which could be considered an edge device – will not be able to deal with all the data generated by newer cars. Edge computing and devices – computing that takes place at the outer edge of the network, away from the data centre – will likely deal with the most time-critical functions of a car which need instant reactions, but for less urgent processing requirements a connection to a data centre would be useful, or perhaps necessary. In anticipation of this huge increase in both the volume of data and the processing required, some companies are developing networking and computing infrastructure especially for cars and driving, much of which will be reliant on next-generation mobile telecommunications networks, or 5G. For example, Denso, the main supplier of automotive components to Toyota, is part of a new consortium which will develop infrastructure for driving data. As well as Denso, other companies in the consortium are Ericsson, Intel, NTT, NTT Docomo, Toyota InfoTechnology, and Toyota Motor. Together, the companies have initiated the formation of the Automotive Edge Computing Consortium. The objective of the consortium is to develop an ecosystem for connected cars to support emerging services such as intelligent driving, the creation of maps with real-time data and driving assistance based on cloud computing. It is estimated that the data volume between vehicles and the cloud will reach 10 exabytes per month around 2025, approximately 10,000 times larger than the present volume. This expected increase will trigger the need for new architectures of network and computing infrastructure to support distributed resources and topology-aware storage capacity. The architectures will be compliant with applicable standards, which requires collaboration on a local and global scale. The consortium will focus on increasing network capacity to accommodate automotive big data between vehicles and the cloud by means of edge computing and more efficient network design. It will define requirements and develop use cases for emerging mobile devices with a particular focus on the automotive industry, bringing them to standards bodies, industry consortiums and solution providers. The consortium will also encourage the development of best practices for the distributed and layered computing approach recommended by the members. 
The companies will invite other technology companies to join the consortium. Ericsson, one of the consortium partners, has already expanded the scope of its ambition with a partnership of its own. Ericsson is partnering with Zenuity, an automotive software development joint venture between Autoliv and Volvo, to develop a connectivity platform for cars. Ericsson says the “end-to-end” platform will be for connected safety, advanced driver assistance systems, and autonomous driving software and functions. In the first phase of the collaboration, Ericsson and Zenuity will work together to develop the Zenuity Connected Cloud – powered by Ericsson internet of things accelerator. The offering will accelerate speed to market with cloud-based solutions for original equipment manufacturers, using the companies’ existing assets to create a base for innovation and new services. The service will consist of in-vehicle software integrated with other vehicle functions, onboard sensors and cloud support functions that will provide external data from other vehicles and cloud infrastructure. This will enable real-time visibility to guide vehicles with information and events in transit, says Ericsson. Dennis Nobelius, CEO, Zenuity, says: “Zenuity was formed to develop the software and solutions the industry requires to create a truly global connected automotive ecosystem. “With a strong focus on increasing safety through ADAS and AS software and functions, our unique expertise in ADAS and autonomous technologies combined with Ericsson’s technology leadership in complex connectivity solutions is a win-win for the entire automotive industry. “Together, we will provide OEMs with unique, robust and scalable cloud-based solutions with much faster time to market, allowing them to capture the seemingly endless possibilities in the connected automotive ecosystem of the future.” As part of the collaboration, Ericsson will provide its IoT platform, IoT Accelerator, which orchestrates the cloud required to run Zenuity’s support functions. The platform, a scalable IoT solution, will be tailored to the specific needs of ADAS and autonomous vehicles delivering the robustness, responsiveness and security needed. Niklas Heuveldop, senior vice president, chief strategy officer, head of technology and emerging business, Ericsson, says: “Connectivity is reshaping the future of transport, and Ericsson is right at the heart of this with our IoT Accelerator platform that underpins the connected automotive ecosystem. “We will work closely with Zenuity to develop a truly end-to-end offering that enables higher levels of safe driving through ADAS and AD software and functions.” Autoliv, which specialises in automotive safety systems, is the exclusive supplier and distribution channel for all Zenuity products. The offering created by Ericsson and Zenuity will be sold and promoted by Autoliv to car manufacturers worldwide.
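The figures quoted in this piece (roughly one gigabyte of vehicle data per second, and a projected 10 exabytes per month between vehicles and the cloud around 2025) can be sanity-checked with simple arithmetic. The sketch below uses assumed driving hours per vehicle, which are not from the consortium; it illustrates why most raw vehicle data has to be filtered at the edge rather than shipped to a data centre.

# Back-of-envelope estimate of connected-car data volumes.
# The 1 GB/s rate comes from the article; hours driven per day is an assumption.
GB_PER_SECOND = 1.0
HOURS_DRIVEN_PER_DAY = 1.5   # assumed average per vehicle
DAYS_PER_MONTH = 30

per_car_month_gb = GB_PER_SECOND * 3600 * HOURS_DRIVEN_PER_DAY * DAYS_PER_MONTH
print(f"One car generates roughly {per_car_month_gb / 1e3:.0f} TB per month")

# How many such cars would account for the projected 10 exabytes per month
# if every byte travelled to the cloud?
EXABYTE_IN_GB = 1e9
cars_for_10_eb = 10 * EXABYTE_IN_GB / per_car_month_gb
print(f"About {cars_for_10_eb:,.0f} vehicles would saturate 10 EB/month")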
<urn:uuid:62ca9c42-db7c-4c42-9aa3-97f8adf63c7a>
CC-MAIN-2024-38
https://em360tech.com/tech-news/networks-emerge-connected-autonomous-cars
2024-09-09T15:57:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00140.warc.gz
en
0.94024
1,202
2.796875
3
Which two options are fields in an Ethernet frame? (Choose two.)

The correct choices are the header and the frame check sequence (FCS). The source IP address and destination IP address, by contrast, are fields of the Internet Protocol (IP) layer, which operates above Ethernet; Ethernet frames do not contain IP addresses. In summary, an Ethernet frame consists of a header, which carries information about the source and destination devices, the type of data being transmitted, and other control information, and a frame check sequence, which is used to verify the integrity of the data in the frame.
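The header-plus-FCS structure described above can be made concrete with a short parser. The sketch below is an illustrative example, not part of the original Q&A: it unpacks the destination MAC, source MAC, and EtherType from the 14-byte Ethernet II header and treats the trailing four bytes as the frame check sequence when present.

import struct

def parse_ethernet_frame(frame: bytes, has_fcs: bool = True):
    """Split a raw Ethernet II frame into header fields, payload, and FCS."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])  # 14-byte header
    body = frame[14:]
    fcs = None
    if has_fcs and len(body) >= 4:
        body, fcs = body[:-4], body[-4:]  # last 4 bytes: frame check sequence
    mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return {
        "destination_mac": mac(dst),
        "source_mac": mac(src),
        "ethertype": hex(ethertype),      # e.g. 0x0800 indicates an IPv4 payload
        "payload_length": len(body),
        "fcs": fcs.hex() if fcs else None,
    }

sample = bytes.fromhex("ffffffffffff001122334455" "0800") + b"payload-bytes" + b"\x00\x00\x00\x00"
print(parse_ethernet_frame(sample))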
<urn:uuid:2c5adb89-9d43-42d4-96bb-236bbb14a24d>
CC-MAIN-2024-38
https://www.exam-answer.com/fields-ethernet-frame
2024-09-09T15:55:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00140.warc.gz
en
0.876388
151
2.5625
3
To explain auto-generated virtual disks, a brief explanation of logical disks is required. Logical disks are used internally by the software and are not revealed in the DataCore Management Console. A logical disk is a representation of a virtual disk on a server. Logical disks are created by the software when virtual disks are created from disk pools or pass-through disks. A single virtual disk is comprised of one logical disk from one server. A mirrored or dual virtual disk is comprised of two logical disks—one logical disk from each server that was used as a storage source for the virtual disk. Therefore, logical disks are most closely related to "storage sources" that are selected when a virtual disk is created in the console. When a logical disk is detected by the software and a corresponding virtual disk (including rollbacks and snapshots) cannot be found, a message that the system will be creating the auto-generated virtual disks will appear on the screen. The user must click OK to proceed and the virtual disks will be created after a brief delay. When possible, the auto-generated virtual disk will have the same name and be the same size as the original virtual disk, however virtual disk SCSI Information is not retained. (This means the SCSI device Id on the auto-generated virtual disk will not be the same as it was on the original virtual disk, thus any host identifying a virtual disk by NAA / EUID may see the auto-generated virtual disk as a new device not previously discovered.) The description in the Virtual Disk Details page will read "This is an automatically generated virtual disk." When the logical disk properties cannot be found, auto-generated virtual disks are assigned default names beginning with "Auto Generated Virtual Disk 1" and have a default size of 2 TB. Auto-generated virtual disks may also include snapshots, rollbacks, history logs, logstores, and mapstores. Auto-generated virtual disks that are differential snapshots, rollbacks, logstores, mapstores, and history logs are invalid and should be deleted from the configuration to regain space in the pool. The validity of full snapshots depends on the state and circumstances of the storage sources. Auto-generated virtual disks can be created under several different scenarios. These auto-generated virtual disks can either be used or deleted by the user based on the circumstances. The servers must comply with the licensing for auto-generated disks to appear. This is especially applicable when importing a foreign pool that may place the server out of compliance. For example, rebuilding a server without a configuration backup when the trial license will be in effect. Some example scenarios: - A foreign pool is imported from a different server. The software detects the logical disks created from the foreign pool, but cannot find corresponding virtual disks on that server, so auto-generated virtual disks are created. In this case, the auto-generated virtual disks could be served to a host and used as storage. (See Importing Foreign Pools.) - A back-end physical disk fails in a disk pool resulting in failed virtual disks. The storage sources in the failed disk pool are in the process of being replaced with new storage sources from a healthy pool. During this operation, new logical disks are created. 
If the failed pool goes healthy during this process at a certain stage, the software will detect logical disks without the corresponding virtual disks on the server and auto-generated virtual disks will be created for the virtual disks that were replaced. In this case, the auto-generated virtual disks can be deleted (after verifying that the mirrors are up-to-date and healthy after the repair) in order to reclaim the space used in the disk pool. This scenario could also occur with the Split and Unserve operation or when repairing virtual disks in a failed pool by running the Windows PowerShell script file named "Repair-DcsVirtualDisk.ps1" which is included with the DataCore SANsymphony installation. (See Repairing Virtual Disks.) - Virtual disks are deleted while a disk pool is unavailable or not healthy. If the unhealthy disk pool comes back online, the software will detect the logical disks from the formerly unhealthy pool and not find the corresponding virtual disks which were deleted. Auto-generated virtual disks will be created. In this case, these virtual disks can be deleted since they are no longer wanted and use disk pool resources.
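The relationship the documentation describes (each virtual disk backed by one logical disk per server, with an auto-generated virtual disk created whenever a logical disk has no corresponding virtual disk) can be summarised in a small model. The sketch below is an illustrative reconstruction in Python, not DataCore SANsymphony code; the name and size fallbacks simply mirror the defaults stated above.

# Illustrative model of the auto-generation rule described in this documentation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogicalDisk:
    id: str
    server: str
    virtual_disk_id: Optional[str]   # link to the virtual disk it backs, if known
    name: Optional[str] = None
    size_tb: Optional[float] = None

def auto_generate_virtual_disks(logical_disks, known_virtual_disk_ids):
    """Create placeholder virtual disks for logical disks with no known parent."""
    generated, counter = [], 0
    for ld in logical_disks:
        if ld.virtual_disk_id in known_virtual_disk_ids:
            continue  # healthy case: the corresponding virtual disk exists
        counter += 1
        generated.append({
            # Reuse the original name and size when the properties survive;
            # otherwise fall back to the documented defaults.
            "name": ld.name or f"Auto Generated Virtual Disk {counter}",
            "size_tb": ld.size_tb or 2.0,
            "description": "This is an automatically generated virtual disk.",
            "source_logical_disk": ld.id,
        })
    return generated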
<urn:uuid:f71c60ec-69ed-4b7b-9ca3-3cd42ea8da75>
CC-MAIN-2024-38
https://docs.datacore.com/SSV-WebHelp/SSV-WebHelp/Autogenerated_Virtual_Disks.htm
2024-09-10T21:13:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00040.warc.gz
en
0.91912
884
3.078125
3
Quality of Service (QoS) measures the consistency with which certain standards of IP packet and data services are met. Dependable, high-quality service is a key concern of network administrators for critical applications and protocols. As applications and devices use more and more bandwidth, QoS becomes more important as a tool to guarantee a known level of application availability. One of the key concerns in maintaining QoS is ensuring that critical or time-sensitive applications receive priority over other traffic. For instance, Voice-over-IP (VoIP) packets should receive higher priority than typical data transfers so that users do not experience dropped phrases or delays. The F5 BIG-IP L7 Rate-Shaping Module affords highly sophisticated and flexible QoS controls.
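Prioritizing time-sensitive traffic such as VoIP over bulk transfers is typically done with some form of priority queuing. The sketch below is a simplified, generic illustration of that idea, not a representation of the BIG-IP rate-shaping module or any vendor implementation: packets tagged with a higher-priority class are always dequeued before lower-priority data.

import heapq
import itertools

# Simplified strict-priority scheduler: lower class number = higher priority.
# Real QoS implementations add rate limits and fairness; this only shows ordering.
PRIORITY = {"voip": 0, "video": 1, "bulk_data": 2}

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = itertools.count()   # preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = PriorityScheduler()
sched.enqueue("bulk_data", "backup chunk 1")
sched.enqueue("voip", "rtp frame 17")
print(sched.dequeue())   # "rtp frame 17" leaves first despite arriving later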
<urn:uuid:a3247877-4fb8-4845-8bac-40ef27d96549>
CC-MAIN-2024-38
https://www.f5.com/glossary/quality-of-service-qos
2024-09-11T22:29:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00840.warc.gz
en
0.919464
150
2.609375
3
In today’s world, where global connectivity is stronger than ever before, ensuring the health and safety of animals is of paramount importance. Animal health biosecurity plays a critical role in protecting our livestock from disease outbreaks and maintaining the integrity of our farming systems. As new challenges emerge, it is essential to embrace innovative solutions that can enhance the effectiveness of biosecurity protocols. One such solution that is revolutionizing the field of agricultural management is the use of spatial workflows in Gruntify. The Importance of Animal Health Biosecurity Animal health biosecurity encompasses a set of practices and measures aimed at preventing the introduction and spread of infectious diseases among animals. It is crucial for maintaining the viability and sustainability of our food production systems. By implementing robust biosecurity protocols, farmers can safeguard the health and well-being of their animals, avoid devastating disease outbreaks, and ensure the supply of safe and high-quality products to consumers. However, traditional approaches to biosecurity often rely on manual processes, making it challenging to monitor and address potential risks effectively. This is where spatial workflows come into play. Spatial Workflows in Agricultural Management Spatial workflows in agricultural management refer to the coordinated use of location-based data and technology to optimize farming practices. By utilizing geographical information systems (GIS), farmers can gather, analyze, and visualize data to gain valuable insights into their operations. Gruntify, a powerful digital solution, leverages this technology to enhance animal health biosecurity. With Gruntify, farmers can create customized spatial workflows tailored to their specific biosecurity needs. These workflows streamline the collection of data, track disease patterns, and identify potential risk areas. By incorporating real-time information, such as weather data, animal movement, and disease prevalence, farmers can make informed decisions to prevent disease outbreaks and minimize their impact. Integrating Technology for Biosecurity Enhancement Integrating technology into animal health biosecurity processes brings numerous benefits. One of the key advantages is the ability to automate data collection and processing. With Gruntify, farmers can capture and record data directly from the field, eliminating the need for manual data entry. This not only saves time but also reduces the risk of errors that can compromise the accuracy of information. Furthermore, the integration of technology enables real-time monitoring and early detection of potential disease outbreaks. By setting up alerts and notifications, farmers can promptly respond to any signs of disease, preventing its spread and mitigating its impact. The power of spatial workflows lies in the ability to identify high-risk areas and implement targeted actions based on accurate and up-to-date information. Benefits of Utilizing Gruntify in Animal Health Incorporating Gruntify into animal health biosecurity practices offers a multitude of benefits. Firstly, it allows for comprehensive data collection and management, providing farmers with a holistic view of their operations. By storing and analyzing spatial data, farmers can identify trends, make data-driven decisions, and establish long-term strategies for disease prevention. Secondly, Gruntify enhances collaboration and information sharing among stakeholders. 
With its cloud-based platform, relevant parties, such as veterinarians, government agencies, and farmers, can access and contribute to the same dataset. This facilitates effective coordination and cooperation, making it easier to implement biosecurity measures at a broader scale. Lastly, Gruntify supports compliance with industry regulations and standards. By maintaining transparent records of biosecurity activities, farmers can demonstrate adherence to guidelines and specifications. This fosters trust among consumers and regulators, ensuring the continued viability of our agricultural industry. Enhancing Data Collection for Animal Health Monitoring Data collection is a fundamental aspect of animal health monitoring. Traditional methods often involve manual surveys and forms, which can be time-consuming and prone to errors. Gruntify simplifies this process by enabling digital data collection through mobile devices and sensors. For example, farmers can use the Gruntify app to report animal health status, vaccination records, and disease symptoms directly from the field. This real-time data is then automatically integrated into the spatial workflows, providing a holistic view of the health situation and enabling timely interventions. By digitizing data collection, Gruntify reduces the burden of paperwork and ensures the accuracy and accessibility of critical information. Improving Efficiency in Biosecurity Protocols Biosecurity protocols are only effective if they can be implemented efficiently and consistently. Gruntify optimizes the execution of biosecurity measures by providing step-by-step guidance and checklists. Farmers no longer have to rely solely on memory or guesswork, as all the necessary information and actions are readily available at their fingertips. Moreover, Gruntify enables the automation of routine tasks, such as animal tracking and movement permits. By reducing the administrative burden, farmers can focus more on the actual implementation of biosecurity practices, ensuring their effectiveness and adherence to protocols. Best Practices for Implementing Gruntify in Animal Health Biosecurity When implementing Gruntify in animal health biosecurity, it is essential to follow best practices to maximize its potential. Firstly, it is crucial to establish clear goals and objectives for utilizing spatial workflows. By identifying specific challenges and desired outcomes, farmers can tailor the workflows to address their unique needs. Secondly, training and education are vital for successful implementation. Farmers and their teams should receive thorough training on how to use Gruntify effectively and interpret the insights derived from the spatial workflows. This empowers them to make informed decisions and take appropriate actions to mitigate disease risks. Lastly, ongoing monitoring and evaluation are essential to measure the impact of Gruntify on animal health biosecurity. This involves regularly reviewing the collected data, analyzing trends, and making adjustments to the spatial workflows as needed. By continually improving the workflows, farmers can enhance the efficiency and effectiveness of their biosecurity protocols. Future Trends in Spatial Workflows for Biosecurity As technology advances and data availability increases, the future of spatial workflows in biosecurity holds great promise. One emerging trend is the integration of artificial intelligence (AI) and machine learning algorithms. 
These technologies can analyze vast amounts of data and provide predictive analytics, enabling proactive biosecurity measures. Additionally, the Internet of Things (IoT) is expected to play a significant role in biosecurity management. IoT devices, such as sensors and wearables, can continuously monitor animal health parameters and alert farmers in real-time to any abnormalities. This level of connectivity and automation will further strengthen biosecurity practices and contribute to the overall health and well-being of our livestock. In conclusion, enhancing animal health biosecurity is essential for the sustainability of our agricultural systems. The integration of spatial workflows through Gruntify revolutionizes the way we approach biosecurity, enabling efficient data collection, effective disease monitoring, and informed decision-making. By embracing these innovative solutions, farmers can safeguard the health of their animals, protect their livelihoods, and continue to provide safe and nutritious products for generations to come.
<urn:uuid:a9cd41cd-b896-4064-906c-b9e8897a1706>
CC-MAIN-2024-38
https://www.gruntify.com/enhancing-animal-health-biosecurity-with-spatial-workflows-in-gruntify/
2024-09-14T11:34:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00640.warc.gz
en
0.913033
1,474
2.96875
3
The goal of education is to create the best learning opportunities for as many students as possible. Unfortunately, not all educational institutions have the ability or resources to offer the enriched programs that they wish they could offer, and students may be left hungry to learn. With video conferencing, you can bring the world into the classroom. Lifesize is leading the way to a more interconnected, inspiring learning experience, and our background working with school districts and universities all over the world has helped us tailor our technology to meet the particular needs of educators. In this guide, we will look at some of the best uses of video conferencing technology in education.

"As an educator, I find that the most successful technology solutions we have are those that are simple to use, manage and deploy. At Educational Service Unit (ESU) 10, we've deeply integrated video in our curriculum through programs such as virtual field trips, guest lecturers, and cross-campus collaboration. This is a critical tool as we continue to offer enriching, video-enabled programs to our students across the district." — John Stritt, Distance Learning Coordinator, ESU 10

Imagine a class enriched through the addition of virtual field trips, collaboration exercises with remote classrooms and access to subject-matter experts all around the globe. From visiting world-renowned museums to learning about the stars from astronauts who have seen them from space, the power of HD video in the classroom can facilitate incredible, interactive learning experiences. All educators want to make coursework more engaging, and video conferencing enhances curricula without straining resources. At Barrow County Schools, a high school science class collaborated with a Georgia Tech professor on a nanotechnology project. They viewed examples of carbon nanotubes in such crystal-clear quality that the students felt as if they were in the professor's lab with him.

"Video conferencing allows our students to see things that they would have never experienced any other way. For this county in particular, it is critical to show these students the options, opportunities and potential they all have." — Wanda Creel, Superintendent, Barrow County Schools

Enable Distance Learning

In the past, students in sparsely populated rural areas have missed out on many of the opportunities that their urban and suburban peers have taken for granted. Connecting these rural students via video conferencing can dramatically improve the quality of their learning experience. This enables them to attend school from home and gives them access to field trips and experts they would otherwise never have the opportunity to experience.

"Suddenly we can offer our students a much wider variety of subjects, which will give them a vital advantage before heading off to university. It is a major boost to rural schools who previously could only offer a limited choice of course options." — Edward McEvoy, CEO, Co Offaly VEC (Vocational Education Committee)

In our increasingly global society, it's more important than ever for students to have an appreciation for and understanding of the world around them. Instantly connecting to other classrooms in countries all over the globe puts a face on geographically distant communities and cultures, preparing students to be citizens and stewards of the more open, more tolerant and more interconnected world of tomorrow.
With video conferencing, Alief Independent School District (ISD) connects its schools with other schools around the globe — from Houston to South Carolina and even all the way to Germany. One school, Chancellor Elementary, uses this technology to cross traditional classroom boundaries and celebrate literacy and reading around the world. Students present book reports over video and share a brief overview of their location, the time of day it is and current weather conditions. Because of the program, the children are able to meet new students and even be introduced to new cultures, all while sharing their experiences with various books to a broader audience. “Students don’t want to watch movies anymore — they want to be able to see other people and what they’re doing and talk and have actual conversations. If you’re not using video conferencing in your library, you’re not reaching your potential or your kids’ potential.” — Stephanie Bennett, Library Information Specialist, Chancellor Elementary Record and Share in the Classroom Video sessions can be recorded for students to watch later, whether they were absent or just wanted to go over the content during study time. Video recording and sharing enables the delivery of recorded course material to be displayed in the classroom, at home on a laptop or via mobile device while on the go. School counseling and art therapy students at George Washington University conduct mock therapy and counseling sessions with patients and have them recorded for professors to review later. Video conferencing technology in education enables students and educators alike to learn on a schedule that suits them best. “We don’t want our students to have to worry about the technology; it should be an enabler, not a hindrance. The students are excited to use it, and, most importantly, they can focus their attention on what matters most — learning how to become the best counselors and therapists they can be.” — Sean Connolly, Director of Information Technology for Columbian College of Arts and Sciences, George Washington University Use Video Beyond the Classroom The in-classroom video benefits may be the leading reasons for investment, but there are enormous benefits of using video conferencing as a tool for internal and administrative practices too. - Broadcast administrative updates and school news straight to the classroom over video rather than holding assemblies. - Conduct remote parent-teacher conferences to help minimize scheduling conflicts using just an Internet connection and a webcam. - Host video staff meetings and provide a recorded session for those who are unable to attend the live meeting. - Use video to train teachers and teaching assistants wherever they are, on any device. - Record onboarding sessions and stream HR updates for personnel throughout the district. “Technology is not an issue with Lifesize. The quality is superb. A student assistant can easily set up a video meeting and manage the conference. The only thing to worry about is agreeing on the right schedule, the right time zone, the number of participants and the booking of the room.” — Arjan Sas, ICT Coordinator at the Faculty of Social and Behavioral Sciences at the University of Amsterdam Lifesize in the Classroom Video conferencing solutions for education can bring the world to your classroom more simply and cost-effectively than you’d imagine. 
We invite you to explore how video conferencing from Lifesize will help you optimize learning opportunities and extend the reach of your classroom. Contact us to learn about special discounts for K–12 and higher education solutions. Built for Classrooms and Auditoriums The Lifesize® Icon 800™ delivers the ultimate communication experience to auditoriums and other large meeting spaces. Additional audio and video inputs let you connect multiple devices — from cameras and laptops to DVRs and microphones — to drive an immersive large-room meeting experience. The rack-mountable design enables system integrators to install the Lifesize Icon 800 into any industry-standard server, podium or other rack-mount space. Unlimited Guest Invites Send a meeting or calendar invite to anyone inside or outside of the classroom and connect over video right from your browser with the Lifesize Web App. Connect with students, teachers and professionals all over the world without long dial strings or hold music. All both parties need are a camera internet access and the link to the meeting, and you’re ready to connect. Share Your Screen with the Classroom We make it easy to share your screen with others during class. You don’t have to sacrifice your video feed for your PowerPoint® or graph — with Lifesize, you can narrow in on a single app or share your whole desktop with your audience with a single click. Supports BYOD and Browser-based Calling Whether you log in from a Lifesize® Icon™ video conferencing system in the classroom or a laptop or mobile phone from your couch, it’s easy to connect face to face with Lifesize. We support a wide range of devices to support BYOD (Bring Your Own Device) policies and user preferences with apps for PC and Mac® computers, Android™ and iOS phones and tablets and even a browser-based web app for Chromebooks™ and anything else that can’t download applications. The experience is the same — it’s just the size of the screen that changes.
<urn:uuid:98958059-0b05-46e1-8100-864ed737c79b>
CC-MAIN-2024-38
https://www.lifesize.com/resources/video-conferencing-in-the-classroom/
2024-09-15T17:08:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00540.warc.gz
en
0.939436
1,775
3.015625
3
In this edge computing vs cloud computing comparison, we will explain what both terms mean, look at their best examples, and more. Is edge computing just a rebranded form of cloud computing, or is it something genuinely new? While cloud computing use has been on the rise, advances in IoT and 5G have given birth to technological breakthroughs, edge computing being one of them. The hybrid cloud enables IT administrators to leverage the strengths of both the edge and the cloud, but they must understand the benefits and drawbacks of each technology to integrate them into business operations properly. Edge computing draws computation closer to the data source, whereas cloud computing makes sophisticated technology available over the internet.

Edge computing vs cloud computing: What do they mean?

Businesses and organizations have already taken their computing activities to the cloud, which has proven to be a successful method for data storage and processing. However, cloud computing is not efficient enough to handle the fast stream of data produced by the Internet of Things (IoT). So, given the limitations of current cloud-centric architectures, what else can be done? Edge computing is the answer. Today's workloads are moving from on-premises servers to cloud servers and then, increasingly, to edge servers, where data is collected in the first place. Edge computing and cloud computing are two important elements of today's IT environment, but before comparing them, we should understand what these technologies entail.

What is cloud computing?

Cloud computing is the provision of computing resources, such as servers, storage, databases, and software, for on-demand delivery over the internet rather than from a local server or personal computer. It is a distributed software platform that employs cutting-edge technology to create highly scalable environments that businesses or organizations can use remotely in a variety of ways. Any cloud service provider will offer three major characteristics:
- Flexible services.
- The user is responsible for the costs of the various memory, processing, and bandwidth services used.
- The cloud service provider handles and administers the software's entire backend.

What is edge computing?

One of the most significant features of edge computing is decentralization. Edge computing allows resources and communication technologies to be used via a single computing infrastructure and transmission channel. It optimizes computational needs by processing at the edge of the cloud: when data is gathered or a particular action is performed, real-time execution is possible wherever it is needed. The two most significant advantages of edge computing are increased performance and lower operational expenses. Fog computing is a closely related approach, discussed toward the end of this comparison.

Edge computing vs cloud computing: The differences

The first thing to realize is that cloud computing and edge computing are not rival technologies.
They aren’t different solutions to the same problem; rather, they’re two distinct ways of addressing particular problems. Cloud computing is ideal for scalable applications that must be ramped up or down depending on demand. Extra resources can be requested by web servers, for example, to ensure smooth service without incurring any long-term hardware expenses during periods of heavy server usage. Edge computing is also well suited for real-time applications that produce a lot of data. IoT, for example, is the networked use of smart devices. The internet of things (IoT) is a type of data collection that involves connecting various physical devices that exist today to the internet. These devices lack powerful computers and rely on an edge computer for computational demands. Doing the same thing with the cloud would be too slow and infeasible because of the amount of data involved. In a nutshell, both cloud and edge computing have applications that can be effective, but they must be utilized depending on the application. So, how do we choose? What are the differences between edge computing and cloud computing? Edge computing vs cloud computing: Architecture The term cloud computing architecture refers to the many loosely coupled elements and sub-components needed for cloud computing. It describes the components and their connections. Cloud computing provides IT infrastructure and applications as a service over internet platforms on a pay-as-you-go basis to individuals and businesses. Edge computing is a more advanced version of cloud computing, combining distributed computing and on-premises servers to solve latency, data security, and power consumption by bringing apps and data closer to the network edge. Edge computing vs cloud computing: Benefits Things not only consume data, but they also produce it in edge computing. It allows compute, storage, and networking services running on end devices to communicate with cloud computing data centers. Because the cloud demands a lot of bandwidth, and wireless networks have restrictions. However, edge computing enables you to use less bandwidth. Because devices in close proximity are employed as servers, most concerns such as power consumption, security, and latency are alleviated effectively and efficiently. Edge computing is used to enhance the IoT’s overall performance. Edge computing vs cloud computing: Programming Several application programs may be utilized for development, each with a distinct runtime. On the other hand, cloud development is best when developed for a development environment and uses only one programming language. Edge computing vs cloud computing: Security Because edge computing systems are decentralized, the cybersecurity paradigm associated with cloud computing is changing. This is because edge computers may send data directly between nodes without first communicating with the cloud. An edge system that utilizes cloud-independent encryption techniques that work on even the most resource-constrained edge devices is required. However, this may have a detrimental impact on the security of edge computers vis-à-vis cloud networks. A chain is only as strong as its weakest link, after all. On the other hand, Edge computing improves privacy by making data less likely to be intercepted while in transit since it restricts the transmission of sensitive information to the cloud. 
Because cloud computing platforms are inherently more secure due to vendors’ and organizations’ centralized deployment of cutting-edge cybersecurity measures, they are often more secure. Cloud providers frequently employ sophisticated technologies, rules, and controls to boost their overall cybersecurity posture. In the case of cloud technologies, data security is simpler due to the widespread adoption of end-to-end encryption protocols. Finally, cybersecurity professionals implement tactics to protect cloud-based infrastructure and applications against potential hazards and advise clients on how to do the same. Edge computing vs cloud computing: Relevant organizations Edge Computing may be better for applications with bandwidth difficulties. Edge computing is especially beneficial for medium-scale firms on a tight budget who wish to optimize their money. Given that large data processing is a typical issue in development programs, cloud computing is more appropriate. Edge computing vs cloud computing: Operations Edge computing is when a system rather than an application handles data processing. Edge computing vs cloud computing: Speed & Agility Edge technologies take their data-driven counterparts’ analytical and computational capabilities as close to the data source as feasible. This improves responsiveness and throughput for applications running on edge hardware. A well-designed and sufficiently powerful edge platform could outperform cloud-based systems for certain applications. Edge computing is superior for apps that require little reaction time to ensure secure and efficient operations. Edge computing may emulate a human’s perception speed, which is useful for applications such as augmented reality (AR) and autonomous vehicles. Traditional cloud computing configurations are unlikely to match the agility of a well-designed edge computing network, yet cloud computers have their way of oozing with speed. For the most part, cloud computing services are available on-demand and can be obtained through self-service. This implies that an organization can immediately deploy even huge quantities of computing power after a few clicks. Second, cloud platforms make it easy for businesses to access a wide range of tools, allowing them to develop new applications rapidly. Any business may obtain cutting-edge infrastructure services, massive computing power, and almost limitless storage on demand. The cloud allows businesses to conduct test marketing campaigns without investing in costly hardware or long-term contracts. It also allows enterprises to differentiate user experiences through testing new ideas and experimenting with data. Edge computing vs cloud computing: Scalability Edge computing demands scalability according to the heterogeneity of devices. This is because different items have varying levels of performance and energy efficiency. Furthermore, when compared to cloud computers, edge networks operate in a more dynamic environment. This implies that an edge network would require solid infrastructure for smooth connections to scale resources rapidly. Finally, security measures on the network might cause latency in node-to-node communication, slowing downscaling operations. One of the primary advantages of cloud computing services is scalability. Businesses may quickly expand data storage, network, and processing capabilities by using an existing cloud computing subscription or in-house infrastructure. Scaling is usually rapid and convenient, with no downtime or interruption associated. 
All of the infrastructures are already in place for third-party cloud services, so scaling up is as easy as adding a few extra permissions from the client. Edge computing vs cloud computing: Productivity & Performance In an edge network, computing resources are located close to end-users. This implies that client data is analyzed with analytical tools and AI-powered solutions within milliseconds. As a result, operational efficiency—one of the system’s major advantages—is improved. Clients who meet the specified use case will benefit from increased productivity and performance. Cloud computing eliminates the need for “racking and stacking,” such as setting up hardware and correcting software related to on-site data centers. This increases IT personnel’s productivity, allowing them to concentrate on more important activities. Cloud computing providers also help organizations improve their performance and achieve economies of scale by constantly adopting the newest computing hardware and software. Finally, companies don’t have to worry about running out of resources because changing demand levels cause fluctuations in supply. Cloud platforms ensure near-perfect productivity and performance by ensuring that there is always the right amount of resources available. Edge computing vs cloud computing: Reliability Edge computing services require smart failover management. Users will be able to access a service entirely effectively even if a few nodes go down in an adequately set up edge network. Edge computing vendors also ensure business continuity and system recovery by using the redundant infrastructure. Edge computing can also improve performance by limiting or eliminating duplicate application data and packaging processes that are not directly related to one another. Edge computing systems may provide real-time detection of component failure, allowing IT staff to act promptly. On the other hand, Edge computing networks are less dependable because of their decentralized nature. Finally, because edge computers can function without access to the internet, they have several benefits over cloud platforms. Edge computing is not as reliable as cloud computing. Data backup, business continuity, and disaster recovery are all simpler and less costly in the case of cloud computing because it is centralized. If the closest site becomes unavailable, copies of critical data are kept at various locations that may be accessed automatically. Even if the entire data center goes down, large cloud platforms are frequently capable of continuing operations without difficulty. On the other hand, Cloud computing requires a solid internet connection to function properly on both the server and client sides. Unless continuity procedures are in place, the cloud server will be unable to communicate with connected endpoints, bringing operations to a halt unless continuity mechanisms are in place. The hybrid approach As previously stated, cloud computing and edge computing are not rivals; instead, they address distinct difficulties. That raises the question: can they both be utilized in tandem? Yes, this is possible. Many applications use a mixed approach that combines both technologies for maximum effectiveness. An on-site embedded computer, for example, is often linked to industrial automation equipment. The main computer operates the device and handles complicated computations quickly. However, this computer also transmits limited data to the cloud, which manages the digital framework for the entire process. 
By combining the power of both technologies, the app draws on the advantages of both paradigms, relying on edge computing for real-time processing while leveraging cloud computing for all other tasks. Foggy edges of the cloud As new ingredients are added to the tech term salad, we like to compare them, and the same goes for edge computing vs cloud computing comparison. However, this comparison only gives some of the answers we are after. The real question is how edge and cloud computing change the modern IT infrastructure. Cloud and edge computing complement each other with several advantages and applications. Edge computing was developed to address cloud technology’s centralized data collection and analysis challenges. However, the cloud is still a great choice with its flexible resource management and higher overall utilization rates equate to cost savings. The puzzle is completed for the nonce Edge comes into the picture when there is no time to wait for data to be sent to and analyzed on the cloud. Edge computing completes the contemporary real-time data processing puzzle with the cloud and IoT. These three can work connected for real-time data processing. Cloud and edge computing can do great things together for an organization, but we will remember what these technologies bring to the table separately before we delve into that. But first thing first, let’s clarify why we can’t wait for data to make its journey to central cloud platforms for analysis anymore. Need for speed Cloud computing platforms allow organizations to extend their infrastructure across multiple locations and scale computational resources up or down. Hybrid clouds provide businesses with unprecedented flexibility, value, and security for IT applications. However, things have changed. Real-time AI apps need a lot of computer power, and they’re frequently located far from central cloud servers. Some workloads must remain on-premises or on a certain site due to security, latency, or residency regulations. With the introduction of GPU-based AI solutions, organizations looked to augment networks with edge computing, a method of processing that takes place where data is generated. Edge computing refers to handling and storing data on-site in an edge device rather than processing it remotely on the cloud. The rapid expansion of IoT is one of the cloud’s most challenging problems. Devices are strewn about an organization’s physical IT environment, performing various activities from simple readings to complex operations in response to the production line or smart building requirements. IoT devices are data-rich, but they’re also “noisy,” which means a lot of that data is useless. This information is chatty in nature, as it isn’t a continuous flow rather than a series of incidents over time. Such data does not need to travel across the network; however, many IoT components lack the inherent intelligence to recognize this. Tackling the complexities of an IoT environment with a cloud-only platform is not the ideal approach. The issue is that to utilize all of this data from these IoT devices, it must travel through the network and reach where that cloud capability exists. The latency caused by these processes also slows down the data itself and imposes a serious bandwidth restriction on the cloud. And this is exactly where edge computing steps in. Taking the edge off workloads IoT devices are generating ever more data, but we haven’t seen the peak yet. 
As 5G networks expand to more mobile devices, the amount of data generated will increase further. The promise of cloud computing and AI has long been to automate and accelerate innovation by promoting actionable knowledge from data. However, the enormous scale and complexity of data produced by networked devices have outpaced network and infrastructure capacity. That device-generated data would have to go to a centralized data center or the cloud, creating bandwidth and latency problems. Edge computing is more efficient than this approach since data is processed and interpreted at or closer to its source. Thus, latency is considerably reduced because data does not travel over a network to be processed. Edge computing allows faster and more comprehensive data analysis, detailed insights, quicker responses, and better customer experiences. If a network’s endpoints are connected by edge devices that can provide storage and processing capabilities, those devices’ storage and computing resources will be abstracted, pooled, and shared across a network—essentially becoming part of larger cloud infrastructure. Edge computing is not always connected to the cloud. Actually, edge computing’s usefulness stems from the fact that it is intentionally disconnected from clouds and cloud technology. And now some fog The emergence of edge computing also paved the way for developing new computing approaches that are very efficient in some scenarios. And fog computing is one of them. Some consider fog computing as Cisco’s ideal interpretation of edge computing, consequently their latest contribution to the terms bonanza we enjoy (!) today. However, this new “approach” has its differences and a few aces up on its sleeves, including the repeatable structure and scalable performance. Fog computing also brings computing to the network’s edge Cisco-way. Moving storage and computing systems near the applications, components, and devices that require them reduces processing latency. This is especially important for connected IoT devices that create massive data. Because they are closer to the data source, these devices have less latency in fog computing. The fog metaphor derives from the meteorological term for a cloud close to the ground, just as fog computing focuses on the network’s edge. Cloud-optimized fog computing uses standard procedures to guarantee repeatable, organized, and scalable performance within the edge computing framework. But it differentiates by utilizing both edge processing and the infrastructure and networks for data transfer. Fog computing eliminates the gap between the processing location and the data source by utilizing edge computing methods in an IoT gateway or fog node with LAN-connected processors or within the LAN hardware itself. This approach results in a greater physical distance between the computations and sensors, yet no additional latency.
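The hybrid pattern described in this comparison, in which an edge device handles time-critical processing locally while forwarding only a compact summary to the cloud, can be sketched in a few lines. The example below is a generic illustration; the sensor names, thresholds, and upload function are assumptions, not taken from the article.

# Generic edge-side preprocessing: react locally, send only summaries upstream.
# Thresholds, sensor names, and the cloud endpoint are illustrative assumptions.
import statistics
import time

ALERT_THRESHOLD = 90.0   # react immediately at the edge above this value

def send_to_cloud(summary: dict):
    # Placeholder for an HTTPS or MQTT upload to a central cloud platform.
    print("uploading aggregate:", summary)

def process_window(readings, sensor_id):
    # Time-critical path: handled entirely at the edge, no network round trip.
    if any(r > ALERT_THRESHOLD for r in readings):
        print(f"{sensor_id}: local alert triggered")

    # Non-urgent path: ship a compact aggregate instead of every raw sample.
    send_to_cloud({
        "sensor": sensor_id,
        "window_end": time.time(),
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    })

process_window([71.2, 74.8, 95.3, 73.1], sensor_id="press-line-4")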
<urn:uuid:0ddea0c1-de05-4b8a-8501-e1288ace7752>
CC-MAIN-2024-38
https://dataconomy.com/2022/05/18/edge-computing-vs-cloud-computing/
2024-09-16T21:42:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00440.warc.gz
en
0.926656
3,725
2.96875
3
The burgeoning realm of artificial intelligence (AI) seems to be hurtling towards a paradigm shift, one that could see it transition from an isolated, singular AI framework to a vast, interconnected web, i.e. an Internet of AIs or IoA. Imagine a global brain trust, constantly collaborating, sharing knowledge, and tackling problems on a scale humanity has never witnessed. However, as one can imagine when dealing with a power structure of such magnitude, it is imperative to consider the risks it poses. Therefore, to ensure the safe and beneficial development of this technology, projects working in this space need to carefully consider its structure and implement robust security measures so as to keep it safe for mass consumption.

A scalable, global AI network and the advantages it presents

The Internet of AI (IoA), in its present and admittedly limited iteration, seems to be headed in a decentralized direction where its resources are not controlled by a single entity. Such an architecture offers several key advantages. First, it boasts near-infinite scalability. For example, a recent study by PwC predicts the AI industry will contribute $15.7 trillion to the global economy by 2030. A distributed IoA can complement this mammoth growth: as more nodes (i.e. individual computers or clusters spread across the globe) join the network, the IoA's processing power can increase in step. This has the potential to make the IoA even more capable than the most advanced supercomputers of today.

Secondly, the IoA offers an unparalleled level of reach. Unlike geographically restricted computing resources, the technology can allow virtually anyone with an internet connection to access a massive information pool, thereby fostering a new level of innovation and problem-solving.

Finally, distributed networks are inherently more resilient than centralized ones. That is, if one node goes offline, others can take over its tasks, ensuring the network's continued operation. This redundancy minimizes the risk of single points of failure, making the IoA less susceptible to disruptions and outages. This characteristic is crucial for ensuring the network's reliability and uptime as it grows and evolves.

Building a safe IoA

Securing the IoA and its vast potential demands a multi-pronged approach. First, robust encryption throughout the network is crucial. For instance, algorithms like AES-256 can safeguard sensitive data as it travels, protecting it from unauthorized access. Second, stringent access controls are essential so that only authorized users and AI agents are able to access and manipulate data. Implementing multi-factor authentication can help achieve this. Third, fostering transparency is key. The network's operations need to be clear enough to allow for monitoring and auditing of activities. Finally, establishing standardized protocols for communication and data exchange within the IoA is crucial. These common protocols ensure different AI agents can understand each other and collaborate effectively, minimizing miscommunication and errors that could compromise the network's security.

Ensuring safe and secure data flow

While specific structures pertaining to the still-nascent IoA ecosystem are emerging, several firms are currently working on creating secure and efficient models to help achieve this vision. One project at the helm of this mission is HyperCycle.
The project utilizes a system of encrypted data movement where data owners don’t share raw info directly with the network; instead, they submit encrypted queries. The network’s AI agents then process these queries and return encrypted results. This approach mitigates the risk of sensitive information being exposed within the network. To ensure user trust and long-term network sustainability, the platform’s “transact” module facilitates all internal financial transactions with a high degree of transparency. Finally, the “VM” component plays a critical role in safeguarding the network’s integrity, primarily via the creation of a secure execution environment for different AI models within the broader IoA ecosystem. This compartmentalization acts as a shield, preventing malicious code from infiltrating the network and compromising its functionality. By combining data encryption with these specialized components, HyperCycle aims to create a secure environment for collaboration within the IoA. IoA is primed for big things From the outside looking in, IoA has the potential to revolutionize various fields, from scientific discovery with faster drug development cycles to medical research with personalized medicine approaches. However, as with any development of this magnitude, there needs to be a strong accompanying focus on security. And, with projects like HyperCycle continuing to prioritize data privacy and advance this niche technological front at a rapid rate, it will be interesting to see how things pan out from the IoA sector in the near to mid-term. Featured image credit: Freepik.
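HyperCycle's actual implementation is not published here, so the following Python sketch only illustrates the general pattern the article describes: a query is encrypted before it leaves the data owner, and only an encrypted result comes back. It uses AES-256-GCM from the widely used cryptography package; the sample query and the assumption of a pre-shared key are illustrative simplifications, since a real network would negotiate keys with each agent rather than share them.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_query(key: bytes, query: bytes) -> tuple[bytes, bytes]:
    """Encrypt a query payload with AES-256-GCM before it enters the network."""
    nonce = os.urandom(12)                       # 96-bit nonce, unique per message
    ciphertext = AESGCM(key).encrypt(nonce, query, None)
    return nonce, ciphertext

def decrypt_result(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt an encrypted result returned by an AI agent."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Illustrative round trip only: in practice the key would come from an
# authenticated key exchange with the specific agent, not a shared constant.
key = AESGCM.generate_key(bit_length=256)
nonce, blob = encrypt_query(key, b"predict demand for region 7")
# ... the opaque blob travels through the network, an agent processes it ...
print(decrypt_result(key, nonce, blob))
```

The point of the pattern is that raw data never appears in the clear anywhere along the route, only at the two authorized endpoints.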
<urn:uuid:60af9d40-9d98-4033-9394-b2710422d6ec>
CC-MAIN-2024-38
https://dataconomy.com/2024/03/12/the-internet-of-ai-will-be-more-powerful-than-any-single-ai-how-do-we-keep-it-safe/
2024-09-16T23:12:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00440.warc.gz
en
0.932781
977
3.0625
3
Johns Hopkins Researchers Announce New Approach to Device Control for Measuring Qubits Low-Temp Environments (Newswire) Scientists at the Johns Hopkins University Applied Physics Laboratory (APL), in Laurel, Maryland, have developed a new device for controlling and measuring qubits inside the low-temperature environment of quantum computers; the new device can be manipulated at lower frequency, without the need for microwave lines, thus reducing cost and complexity. Their paper shows the design and modeling of a new type of tunable, microwave cavity tailored for quantum computing and quantum information experiments. The device consists of a metamaterial — a material made up of large artificial atoms — composed of an array of superconducting quantum interference devices (SQUIDs) that allow users to tune the properties of the cavity by applying a small magnetic field to the artificial atoms. All of this can be done inside of a dilution refrigerator that is 20 thousandths of a degree above absolute zero, where traditional ways of doing this aren’t possible. “It’s an entirely new approach to device control that will be an important piece of scaling quantum computer systems to the larger sizes needed for more complex applications,” Shrekenhamer added.
<urn:uuid:c781e306-67bf-4d72-8b68-c4b24440f187>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/johns-hopkins-researchers-announce-new-approach-device-control-measuring-qubits-low-temp-environments/
2024-09-16T22:07:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00440.warc.gz
en
0.886672
252
2.625
3
Hurricane Ian: A Brutal Reminder About Critical Communications in Disasters Hurricane Ian slammed Florida as an “absolute beast” of a storm, with sustained winds of 155mph at landfall. Some lost their lives, some lost their homes, and many residents face an immense cleanup that will take significant time. Recovering from a storm of this magnitude requires coordination among government agencies, power crews, and emergency services to begin repairing the damage from both winds and water. And coordination requires fast and accurate communication. In some parts of Florida, open lines of communication did not exist following the storm. The day after the storm moved through, the Collier County Sheriff’s department turned to Facebook to share this message: “For those...worried about family and loved ones: we have limited power, virtually no cell service and no internet. This is especially the case in the coastal area where the surge came in. Portable towers are on the way for cell service. Chances are your loved ones do not have the ability to contact you.” That message is heartbreaking. And as a former police officer, I have seen a lack of communications get in the way of too many response and recovery efforts. In my experience, it slows everything down at times where responders need to be moving quickly. Lessons From Previous Weather Emergencies When it comes to emergency preparedness, history is our best teacher. With recovery from Ian underway in Florida, let’s turn our attention to a recent storm of even greater magnitude: Hurricane Maria. In the busy hurricane season of 2017, Maria hit Puerto Rico with sustained winds of 175 mph (280 km/h). As a result of the storm, residents of Puerto Rico suffered months-long blackouts when the island’s electrical grid collapsed. In addition to the destruction of the region’s power supply, homes, bridges, and roads were also destroyed. Residents and recovery personnel simultaneously lost both vital utilities and infrastructure. And 95.6% of all cellular communication sites in Puerto Rico were out of service in the days following the hurricane. The storm also knocked down the Puerto Rico Electric Power Authority’s radio network of repeaters across the island. The lack of communication magnified the impact of the devastating natural disaster. Critical Communications Preparedness Mitigates Loss and Chaos Even though each disaster brings unique challenges, response teams always benefit when they can reliably communicate, coordinate and plan. In an after-action report from Hurricane Maria, the experts agreed that preparation and training across state and federal agencies could result in better outcomes in the future. But those involved in implementing rescue and recovery plans must be able to share information with each other, something that was extremely difficult following Hurricane Maria. Communications are so critical to recovery that the report spells out this specific recommendation: “Communication infrastructure should be treated as critical infrastructure that should be restored to enhance broader restoration planning efforts.” In other words, communication is key to restoring everything else after an emergency. And it takes planning to make that possible. This is something to consider for any natural disaster or other emergency that could impact normal operations, or lead to an incident response in your region or at your organization. 
Critical Event Management Communication Strategy at Disasters Coordination and collaboration are essential in preparing for — and recovering from — hurricanes and other natural disasters. As part of your planning, I suggest adopting an effective critical event management communication (CEM) strategy and platform. This frees first responders and agencies from siloed and bottlenecked communications, or from having to rely on social media or other inherently unreliable and unsuited communications channels to support decision-making. I know this because I’ve seen professional-grade CEM systems and strategies work in practice, and I’ve experienced the powerful difference it makes. This type of approach also eliminates the risk of using consumer-grade apps or messaging platforms, which some agencies attempt to rely on. CEM Approach Also Supports Long-Term Goals The need for a multi-agency crisis comms platform does not end once the storm has passed. In cases like Maria and Ian, the initial impact is followed by massive restoration efforts to normalize operations, which will be worked on for months afterward, if not years. Displaced residents will need to be rehoused, and it is vital that they are kept regularly informed. When it comes to coordinating the huge number of resources involved from central government through to the local tradespeople, the ability to communicate effectively should not be underestimated. With the right tools, cloud-based resources can now make it possible to communicate with all agencies, responders, utilities, and the public — regardless of the devastation after something like Hurricane Ian rolls through. One option to consider for your organization is BlackBerry® AtHoc®, which solves multiple critical communications challenges at one time, including: - Able to communicate across multiple agencies, departments, and responders, regardless of the networks they use - Cloud-enabled to work even when your local power is out by running on “an out of band” network-based elsewhere - Includes templates that can be prepared ahead of a disaster, so when a critical event like a hurricane brings destruction, emergency managers can execute and communicate prepared plans and make changes in real time, as needed Responders and emergency agencies with the proper communication tools can focus on helping residents instead of spending time trying to communicate with traditional and typically outdated systems. AtHoc is designed to make it easy for your organization to prepare a response plan, train staff, and refine emergency workflows. Resilience Following an Emergency I’ve written quite a bit here about the resilience of communications systems but there is most certainly a human side to this. As a former first responder, one thing that often surprises me is the way communities pull together and reveal how resilient they are following an emergency situation. Residents of Puerto Rico showed this determination following Hurricane Maria and now the people of Florida are showing it in the wake of Hurricane Ian. Recovering from disasters of this scale can take years but when communities pull together, it can transform areas once devastated by a disaster.
<urn:uuid:ed6e3e49-77b7-4f76-801c-93f3d6f8193e>
CC-MAIN-2024-38
https://blogs.blackberry.com/en/2022/10/hurricane-ian-a-brutal-reminder-about-critical-communications-in-disasters
2024-09-20T18:09:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00140.warc.gz
en
0.944071
1,271
2.578125
3
The Federal Aviation Administration (FAA) has granted CNN approval to operate drones over people in real-world conditions, which means that, for the first time, drones will be allowed to fly over wide ranges of urban and suburban environments. CNN requested a Part 107 waiver, which the FAA granted, allowing the network to fly drones up to an altitude of 150 feet above ground level. When applying for the waiver, CNN developed a "Reasonableness Approach" and concentrated on demonstrating its ability to fly the drones safely. The FAA then evaluated factors such as the operator's history of safe operations, internal company safety procedures, and design features of the system that increase safety. Prior to this approval, the process of obtaining a waiver from the FAA to fly over people seemed extraordinarily difficult, as just seven of the 1,317 approved Part 107 waivers allowed for this. However, CNN's victory may lead to other approvals in the future, provided the requests are reasonable and similar in nature to CNN's.
<urn:uuid:264cdc64-a7a1-41b8-a906-7ee475551b5b>
CC-MAIN-2024-38
https://www.dataprivacyandsecurityinsider.com/2017/10/faa-grants-cnn-approval-to-operate-drones-over-people-in-the-real-world/
2024-09-09T17:12:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651133.92/warc/CC-MAIN-20240909170505-20240909200505-00240.warc.gz
en
0.962095
210
2.625
3
The electromagnetic waves you pick up with a cellphone or a radio are radio waves, which are fundamentally the same as wi-fi waves. The main difference between the two is their length: AM radio waves are hundreds of meters long, while conventional wi-fi waves are only about 12 cm long. This means that as you get farther from the router, your signal gets weaker. Generally, these waves can't travel more than about 150 feet from the router. They also get absorbed and blocked by walls and metal surfaces, so the positioning of your devices and your router makes a significant difference as well.

WiFi Network Consulting in Vancouver

Here are five tips to get the high-speed internet you desire, at even higher speeds:

- Situate your router in the center of the office. By being in the center of all devices, as well as out in the open, your router will broadcast signal of equal strength in all directions. An easy way to find the perfect spot is simply by using your line of sight: wherever you can see the farthest in your office, or into the most rooms, is likely a good spot.
- Elevate your router off the floor. Routers can't penetrate certain materials that make up floors (like metal, concrete, or cement). Also, most routers are designed to broadcast signals slightly downward, so avoid sending your wi-fi into the basement or the ground for no reason.
- Keep your router away from other electronics. Pretty much any other electronic device (televisions, microwaves, computers, etc.) can interfere with your signal.
- Point your router's antennas in different directions. The majority of routers have two antennas. Ideally, you want one facing vertically and the other horizontally. A device's internal antenna works best when it is parallel to the router's. As most laptops have horizontal antennas, and mobile devices depend on which way you hold them, keeping one antenna in each direction ensures the best possible signal for any device.
- Measure signal strength. If you find your internet isn't working quite as well as it should, it's worth taking the time to measure your signal strength. There are many apps to help you do this, and from there you can figure out what needs to be done to improve it.

These are all fairly quick and simple tricks that are sure to increase your internet speeds, and that's sure to save you time in the long run. Contact us at (604) 986-8170 or send us an email at email@example.com for more information.
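As a rough sanity check on the figures at the top of this piece, wavelength is just the speed of light divided by frequency, and free-space path loss grows with both distance and frequency. The short Python sketch below is purely illustrative; it is no substitute for a signal-strength app or a proper site survey.

```python
import math

C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_ghz: float) -> float:
    """Wavelength in centimeters for a carrier frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 100

def free_space_path_loss_db(distance_m: float, freq_ghz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_ghz * 1000) + 32.44

print(round(wavelength_cm(2.4), 1))                 # ~12.5 cm for 2.4 GHz wi-fi
print(round(wavelength_cm(5.0), 1))                 # ~6.0 cm for 5 GHz wi-fi
print(round(free_space_path_loss_db(3, 2.4), 1))    # loss a few meters from the router
print(round(free_space_path_loss_db(45, 2.4), 1))   # loss at roughly 150 feet
```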
<urn:uuid:54ae345b-2a8e-40b1-84d0-042e15369555>
CC-MAIN-2024-38
https://www.compunet.ca/blog/how-to-make-your-wi-fi-move-at-the-speed-of-light/
2024-09-12T04:58:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00040.warc.gz
en
0.947715
555
2.75
3
Big Data as it pertains to health care has emerged at the center of the revolution in personalized medicine. Simply put, the proliferation of data offers great possibilities for more precise diagnosis, as researchers are able to drill down to see what’s happening and create more targeted therapies, specifically at the molecular and tissue levels. In this slideshow, Thomas Heydler, CEO of Definiens, a leading provider of image analysis and data mining solutions for quantitative digital pathology in the life sciences, diagnostic biomarkers and health care industries, explores the top five reasons Big Data is advancing personalized medicine as we know it. Click through for five reasons Big Data is advancing personalized medicine as we know it, as identified by Thomas Heydler, CEO of Definiens. Exposing the unknown Technologies that can now extract large amounts of data out of samples or biopsies are allowing for previously unknown factors involved in disease to be discovered and utilized as drug targets or disease biomarkers. Data is also able to expose the complexity of a disease, especially cancer, and that there will never be one drug or treatment option that works for every patient. Correlating multiple sources for diagnosis and therapy decisions Big Data analysis of clinical outcomes, genetic profiles and tissue morphology will be a big driver of personalized medicine. As we are able to align and compare multiple data points from various sources, tailoring individualized treatment plans for each patient will be possible. Decisions based on hard facts, and less on subjective interpretation Historically, much diagnosis has been based on subjective visual analysis of a biopsy sample viewed through a microscope. Depending on the clinician’s experience or background, the diagnosis could be different from clinician to clinician. Hence, the oft requested “second opinion.” The datafication of patient samples, where discrete data points are extracted from qualitative samples, yields a vast quantity of knowledge that can be statistically analyzed and quickly reviewed by multiple clinicians for solid diagnosis and therapy recommendations. Systematic analysis of tissue and genomic information Through the datafication of patient tissue samples and genomic fingerprints, clinicians can systematically extract more information from each patient without requiring multiple rounds of testing. By having all available information at the same time while determining diagnosis and the patient prognosis, the best treatment decisions can be made on an individual basis at a faster rate. Reproducibility within and between clinicians Reproducing testing results in the clinic is very important. Each clinician should be able to produce the same diagnosis from day to day and diagnoses should be the same between clinicians. By using Big Data generated from clinical samples and testing, consistently reproducible test results are possible between clinicians and doctors for more accurate diagnosis and appropriate spending on therapy options.
<urn:uuid:29cfb1b9-cb4b-4885-960d-acfeaab2921a>
CC-MAIN-2024-38
https://www.itbusinessedge.com/business-intelligence/top-five-reasons-big-data-is-advancing-personalized-medicine/
2024-09-17T01:11:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00540.warc.gz
en
0.932021
558
2.515625
3
IoT (Internet of Things) has the greatest potential to advance society since the Industrial Revolution. It points to a world in which all kinds of things are interconnected, smart, communicating, and improving the quality of life. Device and hardware functionality is exposed as APIs that developers can build on.

API Role in IoT

An Application Programming Interface (API) is a set of routines, protocols, and tools for building software applications. It specifies how software components must interact. APIs are tightly linked with IoT because they make it possible to securely expose connected devices to customers, go-to-market channels, and other applications. Because APIs connect important "things" like sensors, cars, medical devices, energy grids, and thermostats to the IoT ecosystem, it's important to deploy API management that is flexible, scalable, and secure. APIs allow developers to build context-aware applications (proximity- and location-aware) that can interact with the physical world instead of purely through a UI. However, to truly achieve IoT, we need a REST API for every device. REST allows data to flow over internet protocols and makes it possible to delegate and manage authorization. With the help of APIs, a single app can utilize software written in multiple programming languages, thanks to a unified architectural style called REST.

Developing IoT Systems

IoT means no shortage of apps, so no matter what, you're bound to need RESTful services. Unstructured data goes to object storage, semi-structured data goes to stores like MongoDB or Cassandra, traditional and transactional data goes to SQL databases such as MySQL, and so on. As a developer, it's challenging to deal with the proprietary APIs exposed by these data sources. For example, to integrate one unstructured, one semi-structured, and one structured data store in an app, we have to deal with at least three proprietary APIs. When plotting connections within an IoT system, nodes are devices, and arcs are APIs.

Indeed, to fully realize the benefits IoT has to offer, OT assets will need to be designed with web technologies built directly into them, such as HTTP for interaction, SSL/TLS encryption and authentication for data security, and JSON as the data format. This approach is available today through RESTful architecture. REST APIs typically use methods of the HTTP specification to perform different actions. For example, POST, GET, PUT, and DELETE map logically onto the create (SQL INSERT), read (SELECT), update (UPDATE), and delete (DELETE) operations. This is known as CRUD, and it means that everything you might want to do to a piece of data stored on a remote server can be done predictably.

The real trick in an Internet of Things product is moving data in an efficient and fast way, so at the heart of any IoT implementation rests the API (or APIs). Device people and software people rarely understand each other. For the device folks, the API is the product, and the app developers are the primary consumers. When building APIs for devices, we need to understand the needs of those consumers in terms of design and the preferred protocols that mimic dominant web architectures. REST and JSON APIs generally enable software engineers to avoid reinventing the wheel when building new apps. We are witnessing the growth of businesses and solutions thanks to high-quality software with robust and user-friendly APIs. There is a lot of potential to leverage data, and there are numerous development opportunities on both the device side and the software side.
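To make the REST/CRUD mapping above concrete, here is a minimal, hypothetical sketch in Python using Flask. The /sensors resource, the in-memory dictionary standing in for a datastore, and the port number are illustrative assumptions; a real device API would add authentication, TLS, and persistent storage.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
sensors = {}      # in-memory stand-in for a real datastore
next_id = 1

@app.route("/sensors", methods=["POST"])                    # Create
def create_sensor():
    global next_id
    sensors[next_id] = request.get_json()
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.route("/sensors/<int:sensor_id>", methods=["GET"])     # Read
def read_sensor(sensor_id):
    if sensor_id not in sensors:
        return jsonify(error="not found"), 404
    return jsonify(sensors[sensor_id])

@app.route("/sensors/<int:sensor_id>", methods=["PUT"])     # Update
def update_sensor(sensor_id):
    sensors[sensor_id] = request.get_json()
    return jsonify(sensors[sensor_id])

@app.route("/sensors/<int:sensor_id>", methods=["DELETE"])  # Delete
def delete_sensor(sensor_id):
    sensors.pop(sensor_id, None)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

Each HTTP method lines up with exactly one CRUD operation, which is the predictability the paragraph above describes.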
<urn:uuid:6eaf6667-2060-46f8-ad69-926833fa4e41>
CC-MAIN-2024-38
https://www.iotforall.com/iot-algorithms-and-apis
2024-09-20T21:14:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00240.warc.gz
en
0.925811
711
3.40625
3
Data centers are at the heart of our digital economy. Everything we do online passes through these facilities, from the shows we stream to the transactions we make. As the amount of data we create grows exponentially, data centers are just getting more important. Global data creation is projected to reach up to 181 zettabytes (about a trillion gigabytes) by 2025. Processing that data on such a large scale just isn’t feasible with a traditional data center. To keep pace with ever-growing data processing and storage demands, more companies are building their own enterprise data centers. But what are enterprise data centers? And does building them make sense for your company? This article will cover what an enterprise data center is, including its advantages and disadvantages and the components that make up these facilities. We’ll also look at the tools you can use to manage your compute infrastructure. What Is an Enterprise Data Center? An enterprise data center is a facility that an organization operates to support its data processing and storage needs. It houses physical computing equipment like servers, network systems, and storage devices, as well as supporting infrastructure like power, cooling, and environmental monitoring systems. Organizations can build enterprise data centers on-premises or off-premises. Each option has distinct advantages and disadvantages. Building a data center on-premises gives you direct access to your servers and more control overall. But without the right physical infrastructure in place, you won’t be able to efficiently and securely operate a data center. Many companies build off-premise data centers in locations based on factors like connectivity, reliability, and accessibility to end-users. For example, some companies build data centers in colder climates to save on energy costs. However, the costs to build these facilities make them financially unviable for most companies. Other types of data centers include: - Colocation: A colocation data center is a data center that allows companies to rent space for servers and other equipment. These servers are highly scalable, meaning they can scale up or down based on your application traffic. - Hybrid: A hybrid cloud setup is a mix of on-premise infrastructure and private cloud services. For example, you might store sensitive data on local servers, but outsource your processing needs to a cloud provider like Amazon Web Services (AWS). - Edge: An edge data center is a smaller data center that’s placed closer to the end-users it serves. With more companies deploying Internet of Things (IoT) devices, edge computing helps reduce latency and increase overall speed. - Hyperscale: A hyperscale data center is a data center that major companies like Apple, Amazon, and Google operate to deliver services worldwide. What sets these facilities apart is their massive size and scalability to meet high computing demands. What Are the Core Components of an Enterprise Data Center? Enterprise data centers consist of three main components: compute, storage, and network. These work together to deliver the processing power that organizations need to run their applications and deliver digital services. Compute is the hardware that provides the memory and processing power that companies need to run their applications. Some organizations deploy purpose-built servers for data-intensive workloads like Artificial Intelligence and Machine Learning. Enterprise data centers produce a large volume of data. 
Storage refers to the devices and software technologies that data centers deploy to store this data. - Hard disk drives and solid-state drives - Backup and recovery software - External storage facilities - Storage Area Networks (SAN) and Network Attached Storage (NAS) Enterprise data centers rarely exist in a vacuum. They rely on networking infrastructure to bring the servers online. These include routers, switches, cables, and firewalls. These three primary components allow companies to manage their IT operations and store important data. There’s also critical infrastructure that support these components. - Uninterruptible power sources (UPS) - Environmental monitoring sensors - Cooling and ventilation systems - Building security - Backup generators Monitoring each of these systems is key to keeping costs down and ensuring uptime. 25% of organizations report that the average hourly cost of unexpected downtime is between $301,000 and $400,000. Given these figures, organizations build multiple layers of redundant systems into their data centers. One of the most widely adopted standards is ANSI/TIA-942 — a rating system that grades the reliability of a data center from Tier 1 to Tier 4. Tier 1 data centers are adequate for most small businesses, as they cost less to implement and have an annual uptime of 99.671%. But enterprises with higher processing demands opt for Tier 4 data centers to ensure minimal downtime and prevent data loss. These facilities have an uptime of 99.995%. What Are the Advantages of an Enterprise Data Center? While building an enterprise data center comes with significant costs, it offers a number of advantages over cloud computing. An on-premise environment gives organizations direct access to their servers and more control over their infrastructure. You can install proprietary hardware and deploy applications that are specific to your organization. An enterprise data center also allows companies to retain their data in-house and hold their own encryption keys. It’s for this reason that many organizations hesitate to move to the cloud. Another benefit of a dedicated enterprise data center is the visibility it provides. Companies can install extensive monitoring tools and track metrics in real-time with data center infrastructure management (DCIM) software. Tracking the usage of resources over time can help companies estimate future needs and know when to purchase new equipment to scale their operations. Organizations that handle sensitive information have to adhere to certain standards. For example, companies that store patient health information must have multiple safeguards in place according to the Health Insurance Portability and Accountability Act (HIPAA). By keeping everything in-house, companies can ensure their servers are compliant and provide the documentation to prove it. Companies that store sensitive information can still use cloud providers, but they must comply with all regulatory mandates. Even as more of the world adopts modern technology, moving to the cloud could prove problematic for companies that still rely on legacy software. Enterprise data centers allow companies to continue running legacy systems that may not be supported elsewhere. What Are the Disadvantages of an Enterprise Data Center? A dedicated enterprise data center offers many advantages, but there are also some downsides to consider with this computing model. Building an enterprise data center can cost between $10 million to $12 million. 
That figure doesn’t even take into account ongoing costs for maintenance and labor. These high expenditures are why companies opt for cloud providers, as many simply don’t have the capital to build their own IT infrastructure. Facilities that were built in the early years of the internet are likely to be inefficient in terms of their Power Usage Effectiveness or PUE. An inefficient infrastructure with obsolete hardware means you’re using more resources with minimal performance improvements. Common causes of data center outages include equipment failure, cyber attacks, and human error. Natural disasters like floods and earthquakes can also cause outages. Unexpected downtime can result in significant revenue loss and cause customers to lose trust in your business. Managing an enterprise data center means planning for these scenarios and taking steps to mitigate their impact. Managing Your Enterprise Data Center Operating a dedicated enterprise data center is a complex undertaking. Here are some tips to manage your facility and increase its efficiency. - Use DCIM software: DCIM software provides complete visibility across your entire IT infrastructure. It helps you maintain a centralized database, perform capacity planning, and automate your workflows. You can also track key resources in real-time and analyze historical trends. - Monitor environmental factors: Unplanned outages are often tied to environmental factors. High temperatures can cause hardware to overheat and shut down. Deploy monitoring sensors in your facilities and set thresholds. - Have a power management strategy: A power management strategy allows you to optimize power distribution across your facility. It also helps you identify stranded servers that are wasting energy. - Increase efficiency: A more efficient data center translates to more cost savings in the long-term. Monitor metrics like PUE and aim to get this score as low as possible. Examples include improving airflow and replacing inefficient equipment. - Plan for hardware lifecycles: Computing equipment advances quickly. Replacing hardware allows you to take advantage of more efficient designs. 42% of respondents in one survey said they refreshed their servers every two to three years. Have a plan when upgrading and replacing hardware in your facility. Enterprise data centers are ideal for companies that operate in heavily regulated industries and have intense data processing needs. But given the capital needed to build these facilities, an organization needs to be certain that such an investment will be viable in the long-term. Whether you deploy applications in an on-premise facility or rely on colocation services, you’ll need the right tools to manage your infrastructure and optimize your workloads. View a demo on how Nlyte’s DCIM software can help optimize your workloads and automate the management of your assets.
<urn:uuid:a95a3bfc-7301-4507-b6b5-823e300803ff>
CC-MAIN-2024-38
https://www.nlyte.com/faqs/what-is-an-enterprise-data-center/
2024-09-07T12:16:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00604.warc.gz
en
0.931655
1,898
2.796875
3
This chapter contains the following sections:

- Dynamic Routing Review
- EIGRP Features and Function
- Troubleshooting EIGRP
- Implementing EIGRP for IPv6
- Chapter Summary
- Review Questions

EIGRP, the Enhanced Interior Gateway Routing Protocol, is an advanced distance vector routing protocol that was developed by Cisco over 20 years ago. It is suited for many different topologies and media. EIGRP scales well and provides extremely quick convergence times with minimal overhead. EIGRP performs well in both well-designed networks and poorly designed networks. It is a popular choice for a routing protocol on Cisco devices. EIGRP did have a predecessor, the Interior Gateway Routing Protocol (IGRP), which is now obsolete and is not included in Cisco IOS 15. EIGRP was historically a Cisco-proprietary and closed protocol. However, as of this writing, Cisco is in the process of releasing the basic functions to the IETF as an RFC (Request For Comments, a standards document; see http://tools.ietf.org/html/draft-savage-eigrp-00). This chapter begins with a review of dynamic routing. It then examines the operation, configuration, and troubleshooting of EIGRP for IPv4 and IPv6.

- Review key concepts for dynamic routing protocols
- Understand how a Cisco router populates its routing table
- Understand the features, operation, theory, and functions of EIGRP
- Configure and troubleshoot EIGRP for IPv6 and IPv4

Dynamic Routing Review

A dynamic routing protocol is a set of processes, algorithms, and messages that is used to exchange routing and reachability information within the internetwork. Without a dynamic routing protocol, all networks, except those connected directly to the router, must be statically defined. Dynamic routing protocols can react to changes in conditions in the network, such as failed links. All routing protocols have the same purpose: to learn about remote networks and to quickly adapt whenever there is a change in the topology. The method that a routing protocol uses to accomplish this purpose depends upon the algorithm that it uses and the operational characteristics of the protocol. The performance of a dynamic routing protocol varies depending on the type of routing protocol.

Although routing protocols provide routers with up-to-date routing tables, there are costs that put additional demands on the memory and processing power of the router. First, the exchange of route information adds overhead that consumes network bandwidth. This overhead can be a problem, particularly for low-bandwidth links between routers. Second, after the router receives the route information, the routing protocol needs to process the information received. Therefore, routers that employ these protocols must have sufficient resources to implement the algorithms of the protocol and to perform timely packet routing and forwarding.

An autonomous system (AS), otherwise known as a routing domain, is a collection of routers under a common administration. A typical example is an internal network of a company and its interconnection to the network of an ISP. The ISP and the company's internal network are under different control; therefore, they need a way to interconnect. Static routes are often used in this type of scenario. However, what if there are multiple links between the company and the ISP? What if the company uses more than one ISP? Static routes would not be suitable. To connect the entities, it is necessary to establish communication between bodies under different administration.
Another example would be a merger, acquisition, or development of a subsidiary that maintains its own IT resources. The networks may need to be connected, but they may also need to be maintained as separate entities. There must be a way to communicate between the two. The third example, which is intimated by the first, is the public Internet. Many different entities are interconnected here as well. Figure 3-1 is a representation of three autonomous systems, one for a private company and two for ISPs.

Figure 3-1 Connection of Three Distinct Autonomous Systems (AS)

To accommodate these types of scenarios, two categories of routing protocols exist:

- Interior Gateway Protocols (IGP): These routing protocols are used to exchange routing information within an autonomous system. EIGRP, IS-IS (Intermediate System-to-Intermediate System) Protocol, RIP (Routing Information Protocol), and OSPF (Open Shortest Path First) Protocol are examples of IGPs.
- Exterior Gateway Protocols (EGP): These routing protocols are used to route between autonomous systems. BGP (Border Gateway Protocol) is the EGP of choice in networks today. The Exterior Gateway Protocol, designed in 1982, was the first EGP. It has since been deprecated in favor of BGP and is considered obsolete. BGP is the routing protocol used on the public Internet.

Classification of Routing Protocols

EGPs and IGPs are further classified depending on how they are designed and operate. There are two main categories of routing protocols:

- Distance vector protocols: The distance vector routing approach determines the direction (vector) and distance (hops) to any point in the internetwork. Some distance vector protocols periodically send complete routing tables to all of the connected neighbors. In large networks, these routing updates can become enormous, causing significant traffic on the links. This can also cause slow convergence, as the whole routing table could become inconsistent between updates due to network changes, such as a link going down. RIP is an example of a protocol that sends out periodic updates. Distance vector protocols use routers as signposts along the path to the final destination. The only information that a router knows about a remote network is the distance or metric to reach that network and which path or interface to use to get there. Distance vector routing protocols do not have an actual map of the network topology. EIGRP is another example of a distance vector routing protocol. However, unlike RIP, EIGRP does not send out full copies of the routing table once the initial exchange between two neighboring routers has occurred; EIGRP only sends updates when there is a change.
- Link-state protocols: The link-state approach, which uses the shortest path first (SPF) algorithm, creates an abstract of the exact topology of the entire internetwork, or at least of the partition in which the router is situated. Using a link-state routing protocol is like having a complete map of the network topology. Signposts along the way from the source to the destination are not necessary because all link-state routers are using an identical "map" of the network. A link-state router uses the link-state information to create a topology map and select the best path to all destination networks in the topology. Link-state protocols only send updates when there is a change in the network. OSPF and IS-IS are examples of link-state routing protocols. (BGP, by contrast, is best described as a path vector protocol.)

Classful Routing Versus Classless Routing

IP addresses are categorized in classes: A, B, and C.
Classful routing protocols only recognize networks as directly connected by class. So, if a network is subnetted, there cannot be a classful boundary in between. In Figure 3-2, Network A cannot reach Network B using a classful routing protocol because they are separated by a network from a different class. The term for this scenario is discontiguous subnets.

Figure 3-2 Sample Classful Routing Domain

Classful routing is a consequence of subnet masks not being disclosed in the routing advertisements that most distance vector routing protocols generate. When a classful routing protocol is used, all subnetworks of the same major network (Class A, B, or C) must use the same subnet mask, which is not necessarily a default major-class subnet mask. Routers that are running a classful routing protocol perform automatic route summarization across network boundaries. Classful routing has become somewhat obsolete because the classful model is rarely used on the Internet. Because of IP address depletion problems on the Internet, most Internet blocks are subdivided using classless routing and variable-length subnet masks. You will most likely see classful address allocation inside private organizations that use private IP addresses as defined in RFC 1918 in conjunction with Network Address Translation (NAT) at AS borders.

Classless routing protocols can be considered second-generation protocols because they are designed to address some of the limitations of the earlier classful routing protocols. A serious limitation in a classful network environment is that the subnet mask is not exchanged during the routing update process, thus requiring the same subnet mask to be used on all subnetworks within the same major network. Another limitation of the classful approach is the need to automatically summarize to the classful network number at all major network boundaries. In the classless environment, the summarization process is controlled manually and can usually be invoked at any bit position within the address. Because subnet routes are propagated throughout the routing domain, manual summarization may be required to keep the size of the routing tables manageable. Classless routing protocols include BGP, RIPv2, EIGRP, OSPF, and IS-IS. Classful routing protocols include Cisco IGRP and RIPv1.

Multiple routing protocols and static routes may be used at the same time. If there are several sources for routing information, including specific routing protocols, static routes, and even directly connected networks, an administrative distance value is used to rate the trustworthiness of each routing information source. Cisco IOS Software uses the administrative distance feature to select the best path when it learns about the exact same destination network from two or more routing sources. An administrative distance is an integer from 0 to 255. A routing protocol with a lower administrative distance is more trustworthy than one with a higher administrative distance. Table 3-1 displays the default administrative distances.

Table 3-1 Default Administrative Distances

| Route Source | Default Administrative Distance |
| Directly connected interface | 0 |
| Static route | 1 |
| eBGP (external BGP; between two different AS) | 20 |
| EIGRP (internal) | 90 |
| OSPF | 110 |
| RIP (both v1 and v2) | 120 |
| EIGRP External | 170 |
| iBGP (internal BGP, inside AS) | 200 |
| Unknown/untrusted source | 255 |

As shown in the example in Figure 3-3, the router must deliver a packet from Network A to Network B. The router must choose between two routes. One is routed by EIGRP, and the other is routed by OSPF.
Although the OSPF route appears to be the logical choice, given that it includes fewer hops to the destination network, the EIGRP route is identified as more trustworthy and is added to the routing table of the router.

Figure 3-3 Administrative Distance

A good way to detect which routing protocols are configured on the router is to execute show ip protocols. Example 3-1 gives output from a sample router running OSPF, EIGRP, and BGP. The command provides details regarding each routing protocol, including the administrative distance (Distance) values the routing protocol is using, and other features such as route filtering.

Example 3-1 show ip protocols Command Output

Branch# show ip protocols
Routing Protocol is "eigrp 1"
  Outgoing update filter list for all interfaces is not set
  Incoming update filter list for all interfaces is not set
  Default networks flagged in outgoing updates
  Default networks accepted from incoming updates
  EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
  EIGRP maximum hopcount 100
  EIGRP maximum metric variance 1
  Redistributing: eigrp 1
  EIGRP NSF-aware route hold timer is 240s
  Automatic network summarization is in effect
  Automatic address summarization:
    126.96.36.199/24 for Loopback0, Loopback100
    192.168.1.0/24 for Loopback0, Vlan1
    172.16.0.0/16 for Loopback100, Vlan1
      Summarizing with metric 128256
  Maximum path: 4
  Routing for Networks:
    0.0.0.0
  Routing Information Sources:
    Gateway         Distance      Last Update
    (this router)         90      00:00:18
  Distance: internal 90 external 170

Routing Protocol is "eigrp 100"
  Outgoing update filter list for all interfaces is not set
  Incoming update filter list for all interfaces is not set
  Default networks flagged in outgoing updates
  Default networks accepted from incoming updates
  EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
  EIGRP maximum hopcount 100
  EIGRP maximum metric variance 1
  Redistributing: eigrp 100
  EIGRP NSF-aware route hold timer is 240s
  Automatic network summarization is in effect
  Automatic address summarization:
    192.168.1.0/24 for Loopback0
    172.16.0.0/16 for Loopback100
      Summarizing with metric 128256
  Maximum path: 4
  Routing for Networks:
    172.16.1.0/24
    192.168.1.0
  Routing Information Sources:
    Gateway         Distance      Last Update
    (this router)         90      00:00:19
  Distance: internal 90 external 170

Routing Protocol is "ospf 100"
  Outgoing update filter list for all interfaces is not set
  Incoming update filter list for all interfaces is not set
  Router ID 172.16.1.100
  Number of areas in this router is 1. 1 normal 0 stub 0 nssa
  Maximum path: 4
  Routing for Networks:
    255.255.255.255 0.0.0.0 area 0
  Reference bandwidth unit is 100 mbps
  Routing Information Sources:
    Gateway         Distance      Last Update
  Distance: (default is 110)

Routing Protocol is "bgp 100"
  Outgoing update filter list for all interfaces is not set
  Incoming update filter list for all interfaces is not set
  IGP synchronization is disabled
  Automatic route summarization is disabled
  Maximum path: 1
  Routing Information Sources:
    Gateway         Distance      Last Update
  Distance: external 20 internal 200 local 200
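The selection logic that Table 3-1 and Figure 3-3 describe (prefer the source with the lowest administrative distance when several sources offer the same destination network) can be sketched in a few lines. This is an illustrative Python model rather than Cisco IOS code; the next-hop addresses are invented for the example, and metric comparison within a single protocol is deliberately left out.

```python
# Default administrative distances (lower = more trustworthy), per Table 3-1.
ADMIN_DISTANCE = {
    "connected": 0,
    "static": 1,
    "ebgp": 20,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
    "eigrp-external": 170,
    "ibgp": 200,
    "unknown": 255,
}

def best_route(candidates):
    """Pick the candidate with the lowest administrative distance.

    `candidates` is a list of (source, next_hop) tuples for the SAME prefix.
    Metric comparison only happens within a single protocol, so it is not
    modelled here.
    """
    return min(candidates, key=lambda c: ADMIN_DISTANCE.get(c[0], 255))

# Figure 3-3 scenario: the same destination is learned from OSPF and EIGRP.
offers = [("ospf", "10.0.0.2"), ("eigrp", "10.0.0.6")]
print(best_route(offers))   # ('eigrp', '10.0.0.6') -- EIGRP (90) beats OSPF (110)
```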
<urn:uuid:1251d2e7-6ccf-465a-8f3a-556a1d06731f>
CC-MAIN-2024-38
https://www.ciscopress.com/articles/article.asp?p=2137516&amp;seqNum=3
2024-09-09T21:39:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00404.warc.gz
en
0.890979
2,883
2.515625
3
There was a recent dispute between OneWeb and SpaceX regarding the possibility of a collision between two of their low-Earth orbit (LEO) satellites. OneWeb’s satellite (OneWeb-1078) was launched on March 25 and headed for its orbit at an altitude of 1,200 km when, in early April, it passed near a SpaceX satellite (Starlink-1546) in orbit at about 450 km. There was no collision, but subsequently, OneWeb’s government affairs chief Chris McLaughlin said SpaceX had turned off their autonomous collision-avoidance system so OneWeb could maneuver around their satellite and SpaceX denied that they had switched the system off and said, “there was never a risk of collision.” This sounds a little like a PR battle, but both sides may have been sincere because satellite tracking is imprecise. The satellites were being tracked by SpaceX, OneWeb, and two independent organizations, the Air Force 18th Space Control Squadron (18 SPCS) and LeoLabs which offers a commercial satellite tracking service. As we see in the plot shown below (source), their estimates of the probability of collision vary. In addition to helping explain the dispute between SpaceX and OneWeb, the variance in these estimates illustrates the difficulty of accurately tracking and predicting satellite orbits. The incident also highlights the difficulty of communicating and cooperating in deciding on maneuvers to avoid collisions. SpaceX and OneWeb began deploying their broadband Internet service constellations recently, but as of early 2020, LeoLabs was tracking about 14,000 objects in LEO that were 10 centimeters across and larger. About 1,700 were functional satellites working on other applications, and the rest were debris. We also had a debris-collision warning Last week when the astronauts en route to the International Space Station were instructed to put on their spacesuits due to the possibility of a collision with a piece of space junk. The SpaceX-OneWeb and astronaut-debris events provide an early warning. There are around 3,000 satellites in LEO today, but how about five years from now? As shown below, the five would-be broadband constellation operators have the authorization to launch over 30,000 satellites and, if we add in pending requests for approval, the total increases to around 100,000. Furthermore, Russia and the European Union are working on broadband satellite plans, as are the US and Chinese militaries. And don’t forget the forthcoming large non-broadband constellations, like Geely’s Geespace constellation. GuoWang | 12,992 | SpaceX | 12,000 | Kuiper | 3,236 | OneWeb | 2,000 | Telesat | 298 | Total | 30,526 | We’ve had a few scares recently, and the number of close calls requiring an automated or manual maneuver to avoid a collision will increase dramatically in the future. Runaway debris in LEO would cost us more than just having to give up the goal of global broadband service. It would disrupt critical applications we already depend upon in climate science, emergency response, agriculture, etc. I am confident that LEO broadband providers can find solutions to the technical problems they face, like low-cost antennas, inter-satellite laser links, and spectrum sharing. Still, collision avoidance is tougher because, in addition to technical innovation like improving the accuracy and resolution of terrestrial and space-based orbit tracking, it will take political action. 
The map below shows that 72 nations own and/or operate satellites, but last week, English-speaking engineers at SpaceX and OneWeb were unable to agree on the orbit data and communicate and cooperate when it appeared that a collision might be forthcoming. If SpaceX and OneWeb cannot agree, what will happen when, for example, military satellites from China and the US are involved? Space, like the oceans, is a global commons, and it is not in anyone’s interest to spoil it. We need global regulations, standards, procedures, and a means of enforcing compliance if we are to avoid collisions and mitigate debris. We are running out of time. We’ve been working on maritime law for centuries, and the UN International Maritime Council has 174 member states—have we begun talking about this issue with the Chinese? The European Space Agency (ESA) has produced a 12-minute video (with cool animations) that describes the debris problem and the current debris mitigation guidelines, which are neither universally followed nor adequate for the future. ESA projects for automating collision avoidance, refueling and repairing satellites in space, active debris removal, and technology to hasten the deorbiting of defunct satellites are mentioned in this call for action. Indeed the title of the video is “Time to Act.” Update May 2, 2021: Several readers pointed out that the Chinese ratified the UN Outer Space Treaty and Registration Convention in the 1980s. If one takes my question “have we begun talking about this issue with the Chinese?” literally, the answer is “yes.” But, clearly, the situation has changed since the 1980s so a more relevant question would have been “are SpaceX and Guowang engineers talking with each other?” I don’t know if we can wait for Outer Space Treaty-2. Update June 1, 2021: Ground segment market researcher Alexandre Corral predicts that “over 2,100 Earth observation satellites will be launched as part of about 70 constellations during the next decade, driven by the operators’ ambition to provide higher revisit with global coverage.” That is nothing like what SpaceX and the other broadband constellations anticipate, but it will add to congestion in LEO.
<urn:uuid:53a51ab7-b971-43c0-a65d-9b22dadbb962>
CC-MAIN-2024-38
https://circleid.com/posts/20210427-avoiding-low-earth-orbit-collisions-the-clock-is-ticking
2024-09-11T03:17:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00304.warc.gz
en
0.951063
1,182
2.953125
3
What is a recursion limit? Recursion limits are set to protect your users from seeing long page load times, keep your page from getting dinged by Google for poor performance, and to allow us to protect our own parser processing performance. This usually appears when reusing content between two articles. Below are some common circumstances along with options for resolving the issues. Resolve recursion limit error - Discover the details of the error. Recursive DekiScript errors will display the following error: "SecurityError: Recursion limit exceeded 2 pages." Click the link inside to get more details. You will see something like this: - Determine the page where the error starts. The detailed error message will tell you where the DekiScript error starts. If it is a loop between two pages it will always point to the page you are not on as the start of the recursive loop. Our parser will run through it twice to confirm that there is indeed a recursive loop which is why you will see it display the pages twice. - Diagnose the type of recursive loop error. Once you know which page the error is on, you can start fixing the problem. The way you are reusing content, either through "wiki.page" or "template" calls, is causing a recursive loop which can cause long page loads and cause parsing errors. There are two common reasons why this occurs. Bi-directional reused content This error is caused by reusing content between two pages going in both directions. To solve the issue you will need to create a single source page to share your content from. This will also make managing content easier. Reusing content from the same page This error is caused by reusing content from a section of the same page it already exists on. You cannot reuse a content block within the same page as itself. You will either need to copy/paste the content within that page or if you find you or your peers reusing a block across multiple pages you can create a snippet template to reuse that content. Resolve the recursive loop error After you diagnose what type of error it is and which page it exists on, make the appropriate changes to fix the problem.
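One way to reason about these errors is as a graph problem: every page is a node, every wiki.page or template call is an edge, and a recursion error means that graph contains a cycle. The Python sketch below is purely illustrative (it is not part of the product) and shows how a reuse loop between pages can be detected before a parser ever hits its limit.

```python
def find_reuse_cycle(reuse_graph, start, path=None):
    """Return the first reuse cycle reachable from `start`, or None.

    `reuse_graph` maps a page to the pages whose content it reuses,
    e.g. {"Page A": ["Page B"], "Page B": ["Page A"]}.
    """
    path = path or [start]
    for target in reuse_graph.get(start, []):
        if target in path:
            return path + [target]          # loop found
        cycle = find_reuse_cycle(reuse_graph, target, path + [target])
        if cycle:
            return cycle
    return None

# Bi-directional reuse between two pages -- the first case described above.
graph = {"Article 1": ["Article 2"], "Article 2": ["Article 1"]}
print(find_reuse_cycle(graph, "Article 1"))
# ['Article 1', 'Article 2', 'Article 1'] -> move the shared block to one source page
```

The same function also flags a page that reuses a block from itself (a one-node cycle), which is the second scenario described above.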
<urn:uuid:d4fe8334-d8f8-4c35-8890-60e845e1f02d>
CC-MAIN-2024-38
https://expert-help.nice.com/Manage/Author/DekiScript/Resolve_DekiScript_recursive_limit_errors
2024-09-11T03:49:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00304.warc.gz
en
0.896073
451
2.59375
3
Most cyber threats, whether designed to steal data or extract money from a user, have malicious code at their core. Once this malicious code finds its way onto a device, it can be devastating: particularly if it first infects one machine before spreading through an organisation’s network. Worse, the more sophisticated the malicious code, the harder it is to remove. So, it’s wise to be proactive about avoiding downloading malicious code in the first place. It’s far better to ask, “how can you avoid downloading malicious code?” than to be forced to ask, “how can I get rid of this malicious code?!”. With that in mind, this post explores 7 top tactics for protecting yourself from malicious code.

How can you avoid downloading malicious code? – 7 effective tips

Install a robust antivirus solution

Antivirus software is often the first line of defence for protecting yourself from malicious code. Antivirus software scans your device for malicious code in the form of malware, adware, spyware, viruses, etc., before removing them. Antivirus software also lets you know if you’re trying to connect to an unsecured site, which is more likely to contain malicious code. Additionally, many antivirus applications include a firewall, which analyses incoming web traffic for extra security.

Consistently apply patches and updates

Unpatched software is responsible for 1/3 of all security breaches, so applying all available patches and updates is crucial. Although you should get into the habit of doing this for any application you use at least semi-regularly, it’s especially important to consistently update your operating system (OS) and antivirus software. Updating your OS regularly is vital because hackers frequently exploit its vulnerabilities. Similarly, the developers of antivirus software release patches and updates to account for new and emerging cyber threats, so consistently applying them gives you the most secure tools for keeping malicious code off your device.

Avoid unsecured websites

A secure website is recognisable through a small padlock next to its URL or the “https://” prefix within its URL. This signifies that the site has a Secure Socket Layer (SSL) certificate, which creates an encrypted connection. Subsequently, an SSL certificate makes it incredibly difficult for bad actors to infect a site with malicious code. Conversely, unsecured sites don’t have an SSL certificate and have an “http://” prefix in their URL. Such websites are less secure, so hackers can infect them with malware, ransomware, Trojan horses and other types of malicious code. Luckily, as mentioned above, antivirus software alerts you when you’re about to access an unsecured site, making it less likely you’ll stumble onto one by accident.

Use DNS filtering

A potent way to avoid unsecured sites is through DNS filtering. By blocking unsecured sites at a DNS level, you’ll avoid downloading malicious code by preventing certain domains from even loading in the first place, and you’ll help avoid DNS hijacking. As all requests to access a website go through a DNS resolver, it can be configured to filter out queries for domains on a blocklist. A URL might end up on a blocklist because it is unsecured or has contained malicious code in the past. Alternatively, instead of a blocklist, DNS filtering can use an allowlist that only resolves a determined set of domains while preventing all others from loading.
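To make the blocklist and allowlist mechanics concrete, here is a minimal Python sketch of the decision a DNS filter makes before resolving a domain. The domain names and list contents are invented for illustration; a real DNS filtering service maintains these lists centrally and applies them inside the resolver itself.

```python
# Illustrative only: the blocklist/allowlist contents are made up for this example.
BLOCKLIST = {"malicious-downloads.example", "fake-bank-login.example"}
ALLOWLIST = {"intranet.example", "payroll.example"}  # used only in allowlist mode

def should_resolve(domain: str, mode: str = "blocklist") -> bool:
    """Decide whether the resolver should answer a query for this domain."""
    domain = domain.lower().rstrip(".")
    if mode == "blocklist":
        return domain not in BLOCKLIST   # resolve everything except known-bad domains
    if mode == "allowlist":
        return domain in ALLOWLIST       # resolve only explicitly approved domains
    raise ValueError("mode must be 'blocklist' or 'allowlist'")

print(should_resolve("malicious-downloads.example"))        # False: blocked at the DNS level
print(should_resolve("payroll.example", mode="allowlist"))  # True: explicitly allowed
```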
Don’t download free software

Downloading free software can sometimes lead to downloading malicious code for two reasons. Firstly, bad actors may insert malicious code into the website from which you download the software. By luring visitors to their site with the promise of a great, highly-functional application – at no cost – they can infect the visitor’s machine with malicious code. Secondly, cybercriminals can infect your device with malicious code through the software itself. Very simply, the software masquerades as secure and legitimate – while really being malware, adware, ransomware, or spyware. Once the malicious code is on your device, it’s far easier for bad actors to install more and get a firmer grip on your machine.

However, with all that in mind, not all free software is dangerous. Chances are you’ve downloaded a free application before and didn’t fall victim to a cyber threat. So, how can you avoid downloading malicious code from free software? First and foremost, make sure it comes from a reputable company: if you’ve heard of them, that’s a good sign. If you haven’t heard of the developer, do a quick search for reviews of the application you’re looking to download. Reddit is usually a place for objective reviews, and you’ll quickly find out if there have been issues with malicious code.

Be extra vigilant when clicking links

Phishing is the most common type of cyber attack and centres around tricking people into clicking on malicious links on websites, social media posts and, especially, emails. When an unsuspecting victim clicks on a phishing link, it often results in the installation of malicious code in the form of malware or ransomware. Because phishing links frequently look like they’re from authoritative domains, like banks and payment processors, you can avoid them by looking at the domain carefully before clicking it. The cybercriminals behind phishing attacks usually slyly replace one character with another that looks just like it – such as an ‘O’ with ‘0’ – which is sometimes hard to notice at a glance. Fortunately, many phishing attempts are easily foiled if you pause, consider the message’s origin, and, crucially, ask whether it seems too good to be true. So, if you want to avoid downloading malicious code, learn to spot and avoid phishing emails.

Use advanced email filters

While on the subject of emails, you can avoid lots of phishing emails altogether with advanced spam filters (ASF). ASFs protect you from malicious code by conducting deep scans on emails, including attachments. ASFs add a layer of security by highlighting messages as potentially malicious based on specific message properties commonly found in spam.

Want to know more about how you can avoid downloading malicious code? Contact RiskXchange for a free attack surface risk score today.
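As a closing illustration of the character-substitution trick described above (an 'O' swapped for a '0', and similar), the sketch below flags lookalike domains against a short list of domains you actually trust. The trusted list and the substitution table are simplified assumptions; real phishing detection relies on far richer signals, but the core idea of normalising easily confused characters is the same.

```python
# Simplified lookalike-domain check; the trusted list and substitution map are
# illustrative assumptions, not a complete homoglyph table.
TRUSTED_DOMAINS = {"paypal.com", "mybank.com"}
CONFUSABLES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_like_phishing(domain: str) -> bool:
    domain = domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False                          # exact match with a trusted domain
    normalised = domain.translate(CONFUSABLES)
    # Flag domains that only match a trusted name after undoing the substitutions.
    return normalised in TRUSTED_DOMAINS

print(looks_like_phishing("paypal.com"))    # False: the genuine domain
print(looks_like_phishing("paypa1.com"))    # True: '1' standing in for 'l'
print(looks_like_phishing("example.com"))   # False: unrelated domain
```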
<urn:uuid:5053b643-f5be-46f4-8940-2bcefc290af9>
CC-MAIN-2024-38
https://riskxchange.co/6193/how-can-you-avoid-downloading-malicious-code/
2024-09-13T13:49:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00104.warc.gz
en
0.917598
1,317
2.609375
3
Secure generation and transmission of OTP tokens

In the above code, an OTP (One-Time Password) is generated within a client-side method. This OTP is a random number between 100000 and 999999. The generated OTP is then sent to the client as a JSON response. The vulnerability here is that the OTP is generated on the client side and sent to the server. This means that if an attacker intercepts the request, they can access the OTP directly. This makes it possible for an attacker to continue the application flow without needing access to the phone number used.

In a secure system, the OTP should be generated server-side and sent directly to the user's phone number. The server should then wait for the user to submit the OTP for verification. This way, even if an attacker intercepts the request, they won't have access to the OTP because it's sent directly to the user's phone and not included in the server's response.

The original code was generating the OTP (One-Time Password) on the client side and sending it to the server. This is a security vulnerability because an attacker could intercept the request and gain access to the OTP, allowing them to continue the application flow without needing access to the phone number used.

The fixed code generates the OTP on the server side using a module from Ruby's standard library that generates truly random numbers in a secure manner. This OTP is then sent to the client via a secure communication channel (e.g., SMS or email). The specifics of this step are not shown in the code and would depend on the specifics of your application and infrastructure.

The response to the client no longer includes the OTP. Instead, it simply confirms that the OTP has been generated and sent. This prevents the OTP from being exposed in the client-server communication, adding an additional layer of security.

Finally, it's important to ensure that proper authentication and authorization mechanisms are in place so that only authorized users can generate and use the OTP. This is not shown in the code, but it's a crucial part of securing the OTP generation and usage process.
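The fixed code discussed above is Ruby and is not reproduced here, so the following Python sketch illustrates the same server-side pattern using the standard-library secrets module. The endpoint shape, the in-memory store, and the SMS helper are placeholders invented for this example; a real application would use its own persistence, expiry, and delivery mechanisms.

```python
# Conceptual sketch of server-side OTP generation (a Python analogue of the fix described above).
import secrets

_pending = {}  # illustration only; a real app would use a datastore with an expiry

def send_sms(phone_number: str, message: str) -> None:
    # Placeholder: a real application would call an SMS gateway here.
    print(f"(pretend SMS to {phone_number}): {message}")

def generate_otp() -> str:
    # Cryptographically secure 6-digit code, generated on the server only.
    return f"{secrets.randbelow(1_000_000):06d}"

def request_otp(phone_number: str) -> dict:
    otp = generate_otp()
    _pending[phone_number] = otp                    # kept server-side for later verification
    send_sms(phone_number, f"Your code is {otp}")   # delivered out of band, never in the HTTP response
    return {"status": "otp_sent"}                   # the response confirms sending but omits the OTP

def verify_otp(phone_number: str, submitted: str) -> bool:
    return secrets.compare_digest(_pending.get(phone_number, ""), submitted)
```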
<urn:uuid:750ad8e8-632f-41f7-b45f-5c6ef5f77329>
CC-MAIN-2024-38
https://help.fluidattacks.com/portal/en/kb/articles/criteria-fixes-ruby-383
2024-09-14T20:48:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00004.warc.gz
en
0.912609
453
3.078125
3
What is a Zero-day Exploit?

A zero-day attack is one of the most unpredictable cybercrimes you can experience. The term “zero-day” comes from the fact that software developers have zero days to fix the vulnerability before hackers exploit it for nefarious purposes. Most times, the developers are unaware of the vulnerabilities being exploited. As a result, they can’t create a security patch or a mitigation strategy to prevent the invasion either! Before we get to what a zero-day exploit is, let’s start by defining zero-day vulnerabilities.

What are zero-day vulnerabilities?

Zero-day vulnerabilities are newly discovered security flaws that hackers can target to infect software, computers, or networks. The cause of the vulnerability itself varies. It might be something to do with the settings, or unsafe code might slip past the developer’s detection. A recent example of a zero-day vulnerability occurred on Apple iOS. Hackers used a zero-day vulnerability that allowed them to control an iPhone remotely. Fortunately, this vulnerability has been fixed through Apple’s iOS 14.4 update.

Moving on to zero-day exploits: A zero-day exploit is a method or technique that takes advantage of zero-day vulnerabilities. Hackers commonly create malware to target these zero-day vulnerabilities, otherwise known as zero-day malware.

Unfortunately, exploiting these zero-day vulnerabilities is easier than fixing them. Criminals can create exploit code faster than developers can discover the vulnerability, create a security patch, and ship that patch to every network using the system. Patches aren’t the end of the story either; developers assume every organization will apply the patches right away, which is unlikely for most.

Hackers might even create the exploit code and not use it right away. They might sell it on the dark web or keep it for a future attack. In that case, since the public and the developers remain unaware of the vulnerabilities, it’s still a zero-day exploit, but hackers can now wait for the “right time” (for them!) to launch an attack.

A zero-day attack begins when hackers deploy the code necessary to exploit the vulnerability for their benefit, such as by damaging the infected system or stealing proprietary data. Unfortunately, most zero-day vulnerabilities are only discovered when they’ve already been exploited, obviously causing problems for users.

Who and What Are Often Targeted for Zero-day Exploits?

Anything that requires code to operate can contain zero-day vulnerabilities, including:
- Operating systems
- Web browsers
- IoT (internet of things) devices like connected cars, refrigerators, and other ‘smart’ appliances
- Open-source components
- And other hardware and firmware

Just like any other cyberattack, the victim in a zero-day exploit might be any organization or individual, such as:
- Individuals with sensitive or personal information stored in their network
- Businesses or corporations
- Government agencies
- Political agencies

Reasons for a zero-day attack might vary. For cyberattacks against your business, it might be a former employee looking for revenge, a hacker looking to make a quick buck, or even a competing company looking for confidential data.

How Does a Zero-day Attack Work?

Threat actors involved in zero-day attacks exploit a software vulnerability stealthily, without the developers knowing about it. Hackers analyze the code in software or systems they want to target to find zero-day vulnerabilities.
After spotting a vulnerability, malicious actors will seek to take advantage of it. They might develop the exploit code by themselves or buy one from the dark web, making the process even faster. Attackers often use various social engineering tactics to deliver the exploit, such as phishing, vishing, and smishing. They might also inject malicious code into websites and advertisements, redirecting users to an exploit kit server and infecting the system.

Since zero-day attacks are usually launched stealthily, it might take a while before developers notice the leak. Once they notice the problem, developers can launch security patches to cover the vulnerabilities and mitigate further attacks. But until then, hackers are free to exploit these zero-day vulnerabilities.

A well-known example of a zero-day attack is the Stuxnet worm discovered in 2010. Surprisingly enough, its development is reported to have started as early as 2005. It was one of the most famous zero-day attack incidents, and it has been chronicled in the documentary Zero Days.

How to Prevent a Zero-day Attack

It’s hard to prevent zero-day attacks since you’re trying to protect unknown security vulnerabilities from unknown exploits. However, you can use various threat intelligence solutions to mitigate zero-day attacks. In addition to upgrading your security, here are a few things you can do to prevent zero-day attacks:
- Only use essential applications and software
- Keep all operating systems, apps, and software up to date
- Utilize anti-virus software and firewalls
- Educate all users on ways to avoid phishing, smishing, and vishing attempts. They need to know how and when to alert the IT department if things seem suspicious (if in doubt, unplug the computer immediately!)

Only use essential apps and software

More applications and software means more potentially vulnerable systems to protect. Since zero-day attacks exploit unknown vulnerabilities, using less software will reduce the number of “doors” you need to guard. Besides, the more software you’re using, the harder it is to keep track of security patches and keep them updated.

Keep all operating systems, apps, and software up to date

Security patches include fixes to known vulnerabilities. We recommend installing security patches as soon as they’re available to ensure you’re not presenting an opportunity to hackers. A well-known incident related to this is the WannaCry ransomware attacks in 2017. The attackers targeted a known security vulnerability in the Microsoft Windows operating system. Even though the security patch was released a few months before the attacks, some organizations hadn’t yet downloaded and installed their security patches. As a result, these organizations became victims of a costly ransomware attack.

If you have an IT team, make sure that there is a regular schedule to update all systems you use in your organization. Better yet, get a patch management program to automate the whole process and ensure that your software is always up to date.

Utilize anti-virus software and firewalls

Although zero-day threats are hard to handle because the exploit used is unknown, any malware used likely has similar characteristics to known types. Anti-virus software may be able to detect and isolate corrupted files before they can do any harm to your network. Even if your anti-virus software can’t detect and isolate the exploit just yet, anti-virus companies are constantly updating their libraries and pushing out updates.
Antivirus software can protect the network as soon as the exploit’s signature is identified and released. Educate all users in your organization Often, hackers deliver exploits into your computer through social engineering tactics. In cases where your security system failed to detect a social engineering attempt, your users are your last line of defense. Protect your system from exploitation by educating your users on spotting and handling possible social engineering attacks. Even if security vulnerabilities haven’t been patched yet, it would be harder for hackers to exploit these vulnerabilities if all of your users are cautious and don’t fall for elaborate phishing schemes commonly used to deliver malware. However, phishing attempts aren’t as obvious as they used to be. Just like security researchers keep upgrading their defenses, cybercriminals are generating more creative and believable scenarios to break through your defense. Keeping your users updated on what they should be cautious about in their inbox with regular cybersecurity awareness training and phishing simulations is a good way to inform your users of the threats they face. Protect Against Zero-day Exploits with Inspired eLearning Zero-day exploits are a pain to handle, especially once they enter your system. The best solution against zero-day exploits is never to let them enter your network in the first place. Defending against phishing attempts and malicious websites commonly used to deliver these exploits is an important step in your prevention plan. Training your employees to defend against social engineering attempts against your company is essential. A regular training program will help you teach your employees how to spot malicious links and attachments in their inboxes. It will also help them understand the tangible next steps they need to take if they encounter a social engineering attempt. For now, here are some social engineering prevention tips to share around the office. When you’re ready to start a comprehensive and accessible security awareness training program, let’s talk, and we’ll see how we can help.
<urn:uuid:e0179f85-13b3-4b2d-8ebd-db142b5d2031>
CC-MAIN-2024-38
https://inspiredelearning.com/blog/zero-day-exploit/
2024-09-15T23:05:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00804.warc.gz
en
0.939915
1,864
3.234375
3
What is a Keylogger and How Does it Threaten Cybersecurity? Keyloggers are a type of malware designed to record keystrokes on a victim's device, capturing sensitive information such as passwords, credit card numbers, and personal communications. This data is then sent back to the attacker, who can use it for malicious purposes, including identity theft or unauthorized access to systems. Keyloggers can be hardware-based, plugged directly into the device, or software-based, running invisibly in the background. The stealthy nature of keyloggers makes them particularly dangerous, as they often go undetected by the victim until significant damage has been done. How Do Botnets Compromise Network Security? Botnets are networks of infected computers, or "bots," controlled remotely by an attacker. These bots can be used for a variety of malicious activities, such as launching distributed denial-of-service (DDoS) attacks, sending spam, or spreading other types of malware. The vast scale at which botnets operate makes them a powerful tool for cybercriminals, capable of overwhelming even large networks and causing significant disruptions. Botnets often spread through phishing emails or vulnerabilities in software, and once a device is compromised, it becomes part of the botnet without the owner's knowledge. What are Rootkits and Why are They Difficult to Detect? Rootkits are a type of malware designed to gain unauthorized root-level access to a computer or network while hiding its presence. They can modify system files, access sensitive information, and allow attackers to control the device remotely. What makes rootkits particularly dangerous is their ability to conceal themselves from antivirus software and system administrators, making detection extremely difficult. Rootkits often enter a system through malicious software downloads or exploited vulnerabilities in the operating system. Because they operate at the core of the system, removing rootkits can be a complex and risky process. How Does Adware Impact User Experience and Privacy? Adware is a type of malware that automatically displays or downloads advertisements on a victim's device. While not always malicious, adware can be intrusive, slowing down devices and causing unwanted pop-ups or redirects to potentially harmful websites. In some cases, adware may also track user behavior, collecting data on browsing habits and personal information without the user's consent. This data can be sold to advertisers or used for targeted phishing attacks. Although adware is often bundled with legitimate software, it can be difficult to remove, leading to a degraded user experience and potential privacy concerns. What is Spyware and How Does it Infiltrate Personal Data? Spyware is a type of malware designed to gather information about a person or organization without their knowledge and send it to the attacker. This can include anything from tracking browsing history to recording passwords and capturing screenshots. Spyware often infiltrates devices through phishing emails, malicious websites, or software vulnerabilities. Once installed, it can be difficult to detect, as it operates silently in the background. The stolen data is typically used for identity theft, financial fraud, or corporate espionage, making spyware a serious threat to both individual privacy and organizational security. What Makes Ransomware One of the Most Dangerous Types of Malware? 
Ransomware is a type of malware that encrypts the victim's files, rendering them inaccessible until a ransom is paid to the attacker. Ransomware attacks have become increasingly common and sophisticated, targeting individuals, businesses, and even critical infrastructure. The encrypted files are often crucial to the victim's operations, creating a sense of urgency to pay the ransom, which is usually demanded in cryptocurrency to maintain anonymity. Ransomware can spread through phishing emails, infected websites, or vulnerabilities in software, and once it takes hold, it can be challenging to remove without paying the ransom or restoring from a backup—if one exists. What are Trojans and How Do They Deceive Users? Trojans, or Trojan horses, are a type of malware that disguises itself as a legitimate program or file to trick users into installing it on their devices. Once activated, Trojans can perform a variety of malicious actions, such as stealing data, creating backdoors for other malware, or taking control of the system. Unlike viruses or worms, Trojans do not replicate themselves but rely on the user to spread them by sharing the infected file. The deceptive nature of Trojans makes them a popular tool for cybercriminals, often used in phishing attacks or bundled with pirated software. How Do Worms Spread Across Networks and What Damage Can They Cause? Worms are a type of self-replicating malware that spreads across networks without the need for user interaction. Once a worm infects a device, it automatically searches for other vulnerable systems to infect, often spreading rapidly across an entire network. Worms can cause significant damage by consuming bandwidth, overloading servers, or delivering payloads that delete or corrupt files. Unlike viruses, worms do not require a host file to spread, making them particularly dangerous in networked environments. The ability of worms to spread autonomously and quickly makes them a significant threat to both individual systems and large-scale networks. What is a Computer Virus and How Does it Differ from Other Types of Malware? A computer virus is a type of malware that attaches itself to a legitimate program or file and spreads to other devices when the infected program or file is shared. Viruses can cause a wide range of damage, from corrupting data and stealing information to crashing entire systems. Unlike worms, which spread independently, viruses require some form of user interaction, such as opening an infected email attachment or running a compromised program. Viruses are one of the oldest forms of malware, but they continue to evolve, making them a persistent threat in the digital landscape.
<urn:uuid:9fd06842-3b4b-4a8a-be83-2848022ea7c5>
CC-MAIN-2024-38
https://www.commandlink.com/what-is-a-keylogger-and-how-does-it-threaten-cybersecurity-2/
2024-09-17T06:31:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00704.warc.gz
en
0.938303
1,190
3.078125
3
This feature first appeared in the Spring 2021 issue of Certification Magazine. Click here to get your own print or digital copy.

Cybersecurity is one of the hottest areas of the tech job market right now. According to the 2020 (ISC)2 Cybersecurity Workforce Study, there are more than 3 million open cybersecurity jobs globally — with 64 percent of organizations reporting a shortage of cybersecurity staff. Since the onset of the COVID-19 pandemic, the FBI has tracked a 300 percent rise in reported cybercrimes. Companies are accelerating their digital transformation strategies, and remote work is becoming the new normal.

The recent SolarWinds breach drove home the point that the overall cybersecurity threat only gets worse with time, the stakes get higher, and the tools used by hackers get more sophisticated. Cybercrime Magazine estimates that the cost of cybercrime will reach $10.5 trillion globally by 2025. Naturally, there is a growing demand for skilled cybersecurity professionals to educate workplace personnel, help companies manage and improve their cyber defenses, and combat threats.

Career paths in cybersecurity

There are many different roles and responsibilities in cybersecurity. Some of the most common include:
- Application Security
- Data Loss Prevention
- Incident Response
- Network Security
- Security Architecture
- Threat Intelligence
- Vulnerability Management
- Penetration Testing
- Cloud Security

So there's no shortage of options for choosing a career path in cybersecurity. As you can see from looking at that list, there are many different types of skills and experience that anyone with a career interest in cybersecurity will need in order to be successful. There's a lot of ground to cover in that list, and a successful cybersecurity professional doesn't necessarily need to have expertise in everything. Having a variety of skills, however, will certainly help open up more opportunities to any individual pursuing a cybersecurity career path.

Keys to success

Whether you are just starting to think about a career in cybersecurity, or you are already well on your way, there are a number of steps you can take to help prepare for success. Let's talk about some of the best things that any aspiring cybersecurity professional can do while in high school or college to prepare for a successful cybersecurity career.

Build a solid foundation

Before you start working on gaining cybersecurity-specific skills, make sure that you have a solid foundation of both technical and non-technical skills. Doing security effectively at scale requires a holistic understanding of the threat landscape and how people, processes, and tools all play a role in combatting threats and securing an organization.

From a technical perspective, you will want to get exposure to and experience with a wide variety of information technology (IT) domains. This includes, but is not limited to, operating systems, networking, application development, databases, identity and access management, logging and monitoring, cryptography, and cloud-based services. You don't need to become an expert in every technical area. On the other hand, having a wide-ranging IT foundation will greatly help you when you are ready to focus more specifically on security-related technologies and techniques. From a non-technical perspective, having effective communication, analysis, and critical thinking skills is also essential.
Cybersecurity isn't just a technical issue — effectively managing and addressing cybersecurity challenges requires collaboration from everyone involved in a given business or organization. Education and training Get as much education and training as you can so that you can learn from the experts and not have to figure out everything on your own. If you can, get a degree in Computer Science (CS). Having a CS degree will provide a solid foundational understanding of a wide range of IT disciplines. It will also open up opportunities to pursue some of the more advanced cybersecurity careers. Getting a degree, however, isn't the only educational path to consider. Industry certifications can also help you gain and demonstrate critical skills and knowledge. As you start down this road, don't make cybersecurity certifications your first point of emphasis — start with the basics. Begin with certifications that cover fundamental IT skills, like the A+ and Network+ credentials offered by tech industry association CompTIA. From there, you can expand to areas like server and cloud administration, and programming. Once you have a strong IT foundation, you'll be in a much better position to take on security-specific training and certifications. As part of charting your education and training course, you should set up your own lab environment where you can practice and test your skills. Your personal lab doesn't need to be expensive, or expansive. It can be put together with used equipment you can buy online, or get cheaply from a local computer recycling company. You can also check with local companies or schools in your area to see if you can take older equipment they no longer use off their hands. Once you have some 'live' equipment to play around with, sign up for one of the free or low-cost subscriptions offered by Amazon Web Services (AWS), Azure, or Google Cloud. Cloud computing is taking over most IT functions, and this will help you gain hands-on cloud experience. Learn to use ethical hacking tools So-called 'ethical' or 'white hat' computer hacking involves testing your skills against live targets, always with the permission and full awareness of whatever entity you are attempting to breach. Before you get to that level, however, it's critical to become familiar with the skills and techniques you will be using. There are a number of free ethical hacking tools available to help any prospective security professional learn effective cybersecurity skills and techniques — indeed, many of them are the same tools that you will use in your professional career. Purpose-built exploitation toolkits like Metasploit and the Kali Linux distribution are a good place to start. You should also learn to use vulnerability assessment tools like Wireshark, Nmap, Nessus, and OpenVAS. There are numerous tutorials and training videos available online to help you learn and master these and other tools. If you aren't sure where to look, ask a teacher or security professional for assistance. Bug bounties and capture the flag Once you have gained some solid cybersecurity skills, test what you've learned by participating in bug bounty programs and capture the flag (CTF) tournaments. Helping to uncover bugs in real-world programs and applications, and engaging in supervised competition against others, are both excellent ways to gain practical experience and also demonstrate your skills and abilities to potential employers. 
Some bug bounty programs and CTF tournaments even offer financial incentives to the strongest participants. Don't focus on making money as your primary incentive, however — this is mostly an opportunity to learn and demonstrate your skills. Most importantly, always play by the rules in both settings. Never hack a system without getting permission first, and be ethical in everything you do.

Stay connected with other cybersecurity professionals

One of the best ways to keep your cybersecurity skills sharp, while also connecting with others in the field, is to attend local or virtual cybersecurity conferences and participate in security clubs. At these conferences and club events, you will learn about the latest threats and the tools and techniques used to fight against them. You will also have opportunities to meet and network with a wide variety of people in the cybersecurity field. These connections can be key to helping get your foot in the door at a future employer. Security clubs can provide great opportunities to meet local cybersecurity pros in your area, and even find mentors who may be able to help you navigate your career options.

Look into summer internships

If you are planning to pursue a degree in Computer Science, or are already doing so, then be sure to look into cybersecurity-focused summer internships. More and more companies are offering these as a way to not only augment their full-time security team during summer months, but also get on the radar of upcoming CS graduates who will consider them as a future employer. These internships are a great opportunity to get real-world, paid cybersecurity experience while you are in college. While cybersecurity internships are generally for junior- and senior-level students, some companies will consider freshmen and sophomore students who have exceptional skills and abilities. Start checking job postings early in the year, as such opportunities can fill up quickly.

Even if you aren't pursuing a degree in Computer Science, if you have several certifications and some demonstrated experience with things like bug bounty programs, then it may still be worth it to see whether employers would consider you for an internship program. It doesn't hurt to ask — with demand for cybersecurity specialists outstripping supply, you may surprise yourself and others.

Pursuing cybersecurity will set you on a path to one of the most exciting, challenging, and rewarding career opportunities available. As a trained cybersecurity professional, you will find that there's never a dull moment, and your opportunities for continuous learning and advancement are almost limitless. Not only that, but all the while you will be helping to protect your employer's most critical assets, data, and infrastructure — ensuring that they can continue to grow and prosper into the future.
<urn:uuid:691d6893-cef6-41c6-a676-55aa62dcf051>
CC-MAIN-2024-38
https://www.certmag.com/articles/get-started-cybersecurity-key-steps-beginners
2024-09-07T14:33:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00704.warc.gz
en
0.955821
1,846
2.640625
3
In the continued fight against drug abuse worldwide, authorities are getting additional help from technology. Thanks to computer programs and applications for cell phones and other mobile devices, the campaign to prevent and treat substance abuse is moving forward and gaining teeth.

The 2015 World Drug Report of the United Nations Office on Drugs and Crime (UNODC) noted that some 246 million people used an illicit drug in 2013, with 27 million considered to be problem drug users. It also found that men are three times more likely than women to use marijuana, cocaine and amphetamines. Women, on the other hand, are more likely to misuse opioids and tranquilizers. In the U.S. alone, abuse of illicit drugs as well as tobacco and alcohol is costing the country more than $700 billion in terms of lost work productivity, health care costs and crime.

Role of Technology

Thankfully, parents and authorities can now get more help from the various types of advanced technology available to them in addition to the existing drug laws. Health experts confirm that technology-based interventions are helpful as an adjunct to traditional treatment and can even be used as a stand-alone therapy.

Research led by Lisa A. Marsch, PhD, Director of the Center for Technology and Behavioral Health at the Dartmouth Psychiatric Research Center, has found that the use of technology in substance abuse prevention and treatment is not only cost-effective but also reaches new target audiences, including those not undergoing treatment. Marsch added that interventions using technology are as effective as the science-based interventions led by highly trained clinicians. Technology can also be used as a tool to identify risky behavior and promote resilience among communities.

One advantage of technology-based intervention is that it can be made available in areas that have limited resources for drug abuse treatment. These locations include schools and prisons. Mobile devices, for instance, can also deliver interventions to people undergoing drug treatment. They can even help patients stay longer in treatment. Online self-help programs for various health conditions and risk factors are also emerging in many countries, according to the World Health Organization (WHO). Apart from being free of charge, they are user-friendly, available round the clock and eliminate the need to wait or travel to a specific location.

In terms of information dissemination, social media plays a significant role these days. Popular social networking sites such as Facebook, Twitter and Google Plus, as well as blogs, can be used to inform people, particularly parents and authorities, about the real situation involving drug use and abuse, and to encourage users to do what's right and seek treatment as early as possible.
<urn:uuid:a0018414-069f-41f2-a28a-14da1fece865>
CC-MAIN-2024-38
https://www.it-security-blog.com/uncategorized/how-authorities-are-using-it-in-their-fight-against-drug-abuse/
2024-09-10T00:18:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00504.warc.gz
en
0.965612
520
3.296875
3
How Machine Learning Can Advance the Cybersecurity Landscape

Businesses today are gathering huge amounts of data. Data is at the heart of just about any business-critical system you can think of. This also includes infrastructure systems. Today’s high-tech infrastructure, including network and cybersecurity systems, is gathering tremendous amounts of data and analytics on most key aspects of mission-critical systems. While human beings still provide the key operational oversight and intelligent insights into today’s infrastructure, machine learning and artificial intelligence are gaining huge momentum in most areas of today’s systems, whether positioned on-premise or in the cloud. Due to the sheer enormity of the data being parsed and analyzed, it is simply impossible for human beings to filter through, analyze, and make operational decisions based on all of the information taken in by these infrastructure systems.

Today, Machine Learning (ML) and Artificial Intelligence (AI) are making it possible for computers to analyze this massive amount of data using advanced algorithms and make intelligent real-time decisions based on this data analysis. An ever-growing number of systems and infrastructure today are benefiting from advanced machine learning and artificial intelligence. In particular, network and cybersecurity systems benefit from advanced machine learning and AI. Using both ML and AI in these realms allows organizations to analyze data and provide real-time intelligence for securing today’s infrastructure.

Let’s take a closer look at machine learning and artificial intelligence. What are they exactly? How are machine learning and AI enabling organizations today to bolster their overall network and cybersecurity stance, both on-premise and in the cloud?

What is Machine Learning and Artificial Intelligence?

Machine learning and artificial intelligence may seem like something out of a science fiction movie or futuristic novel. However, with today’s advancements in both machine learning and artificial intelligence, they are very real technologies that are delivering tangible benefits in today’s complex, data-fueled infrastructure systems. In fact, if you have heard of or utilized any of the following technologies, you have benefited from machine learning and artificial intelligence:
- Apple’s Siri and Microsoft’s Cortana
- Autonomous vehicles

To get a better understanding of how ML and AI are creating more powerful IT infrastructure, especially in the world of network and cybersecurity, let’s look at what machine learning and artificial intelligence actually are.

Machine learning utilizes advanced algorithms to “learn” from data automatically. Performing data analysis with advanced algorithms allows computers to intelligently filter through data and find patterns, points of interest, and anomalies that are worth noting. This allows computers to “learn” by analyzing data and make decisions, based on this data analysis, for which they were not explicitly programmed. Machine learning allows computer systems to do all of these tasks with little or no human intervention. Learning in this context allows computers to gain “experience” in a sense: by being exposed to more and more data, they are able to make more educated decisions.

Supervised and Unsupervised Machine Learning

Machine learning algorithms generally fall into a couple of categories – supervised and unsupervised. Let’s look at the difference between these types of algorithms and how they go about “learning”.
With supervised machine learning algorithms, we may think of a student and teacher analogy. If working on a math problem, the teacher often will give the student the correct answer to the problem and then train the student on how the correct answer can be achieved. The student keeps working on the problem, perhaps needing to be corrected by the teacher several times, until the student is proficient at arriving at the correct answer. This is similar to supervised machine learning. The end result or answer is known. The trainer may have to correct the learning algorithm’s predictions many times until it is able to arrive at the right predictions in an acceptable amount of time. Most machine learning mechanisms today use supervised machine learning: the machines need humans to provide guidance along the way to achieve the desired end result and applied logic.

Unsupervised machine learning is where machine learning technology ultimately is trying to get to. This type of machine learning is more closely aligned with true artificial intelligence. In unsupervised machine learning, a computer is able to “teach” itself to identify complex processes and patterns all on its own, without the need for humans to provide the training to the machine to perform those tasks.

What is the difference between machine learning (ML) and artificial intelligence (AI)? Both of these terms are extremely hot buzzwords among the tech community and are used quite interchangeably in marketing materials and feature descriptions for products and software. Both concepts are actually parallel domains whose paths intersect but are not quite the same.

Artificial intelligence, or AI, is the broader of the two terms and has been around the longest as well. Artificial intelligence represents the broad concept of computers/machines being “intelligent” and having the ability to solve problems on their own without any guidance. AI concepts have certainly morphed over the past few decades. Machine learning and artificial intelligence have come many steps forward in recent years, with researchers and programmers developing code that facilitates computers/machines being able to extrapolate ideas from the massive amount of data that is now accessible via the Internet. By having access to the Internet and massive databases of information that companies today are keeping on any number of disciplines, computers/machines have access to all the information they need to be able to learn and become much more intelligent.

Machine Learning (ML) is a specific concept within AI that puts into practice the previously described ability for machines to be given access to data and allowed to learn on their own. This newly branded concept that is part of the overall AI landscape is driving tremendous innovation in products that perform critical core business functions. Machine learning is allowing businesses today to operate with better efficiency, accuracy, agility, and intelligence than before machine learning was utilized. Machine learning today is powered by a very human-like “neural network” of countless compute resources that have access to untold amounts of data. Similar to the way the human brain searches for, retrieves, stores, and learns, these very complex neural networks are allowing machine learning to accomplish things never before thought possible. A very interesting and extremely powerful use case for machine learning abilities is found in the realm of network and cybersecurity and cyber risk management.
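Before turning to those security use cases, here is a toy-sized illustration of the supervised approach described above, in contrast to the unsupervised anomaly detection sketched in the next section. The features and labels are invented, and scikit-learn is assumed to be available; the point is only that the "teacher" supplies the correct answers during training.

```python
# Toy supervised learning example: the labels play the role of the teacher's answers.
from sklearn.linear_model import LogisticRegression

# Invented features per login event: [failed_attempts, new_device (0/1), foreign_ip (0/1)]
X_train = [
    [0, 0, 0], [1, 0, 0], [0, 1, 0],    # labelled benign (0)
    [8, 1, 1], [12, 1, 1], [9, 0, 1],   # labelled malicious (1)
]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)             # the model is corrected until it fits the known answers

print(model.predict([[10, 1, 1]]))      # likely [1]: resembles the labelled malicious examples
print(model.predict([[0, 0, 0]]))       # likely [0]: resembles the labelled benign examples
```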
How Machine Learning is Used in Network and Cybersecurity

Machine learning predictive analytics provides a powerful use case for network and cybersecurity applications. Organizations today are inundated with myriads of network connections and traffic flows, as well as cybersecurity events that require analysis and, potentially, remediation. The sheer volume of traffic and events, as well as the complexity of today’s hybrid cloud networks, makes it impractical to have human beings attempting to analyze all the network and cybersecurity data being collected and making decisions based on this data.

Machine learning in the realm of network and cybersecurity allows network and cybersecurity systems to do some pretty amazing things. Machine learning today is able, for the most part, to accurately determine and pick up on anomalies in traffic patterns, connections, user activity, and many other aspects of the network. Powerful machine learning algorithms are able to filter through traffic patterns, learn what the normal fingerprint of network activity looks like, and then flag deviations from that learned baseline.

Traditional firewalls for several years now have been able to form a “baseline” of what normal activities are across the network and then use certain traffic pattern “rules” to decide whether or not a certain traffic pattern or network flow fits that rule or heuristic analysis. Today’s powerful cybersecurity platforms have moved far beyond what traditional on-premise firewall devices have been able to accomplish. A great example of this is a robust CASB (Cloud Access Security Broker) that is transparently integrated with public cloud environments and is able to use machine learning to do much more than apply a list of “rules” to reach a true or false conclusion. These new CASB security solutions are able to identify attack patterns that may exist without being explicitly programmed to do so, and do this much more efficiently than humans are able to do.
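A very small sketch of the baselining idea just described: learn what "normal" looks like for one measure of network activity, then flag observations that deviate sharply from it. The numbers are invented and real systems use far richer features and models, but the shape of the logic is similar.

```python
# Baseline-and-deviation anomaly check using only the Python standard library.
from statistics import mean, stdev

# Invented history: megabytes transferred per hour for one host during normal operation.
baseline = [120, 135, 128, 140, 122, 131, 138, 126, 133, 129]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline mean."""
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(134))   # False: within the normal fingerprint
print(is_anomalous(900))   # True: a sharp deviation worth investigating
```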
Spinbackup – Cybersecurity based on Machine Learning

Organizations today are housing more and more data inside cloud environments, and specifically the public cloud. Traditional firewalls that protect a perimeter network are no longer effective in today’s hybrid cloud environments where users may be connecting from any number of locations or devices on the Internet. Businesses today need an inherently intelligent solution that can leverage machine learning to protect valuable business-critical data.

Spinbackup is a robust, API-driven CASB that integrates transparently with Google G Suite environments. It incorporates the latest in machine learning to proactively protect organizations not only from data loss but also from data leak and cybersecurity concerns. Spinbackup utilizes machine learning for detecting cybersecurity events such as the following:
- Malicious Third-party Apps
- Sensitive Data Leak
- Ransomware Detection
- Insider Threats Detection
- Brute Force Login Attacks

By proactively “learning” through its machine learning algorithms, Spinbackup allows organizations to remove much of the human element from recognizing attack vectors and attack patterns in the massive amount of security data being captured. The complexity of today’s hybrid cloud networks and usage patterns requires organizations to leverage computers to recognize these patterns in the data. The sheer amount of data would be impractical, if not impossible, for humans to properly filter through and recognize.

This allows businesses without proper cybersecurity funding or dedicated security teams to successfully defend themselves against today’s sophisticated cyber attacks. Spinbackup levels the playing field by leveraging its own sophisticated machine learning algorithms against these threats.

Machine learning (ML) and artificial intelligence (AI) are exciting technologies that are rapidly advancing the landscape of data analysis and intelligent computing. Harnessing the power of machine learning and artificial intelligence allows businesses today to make effective use of the massive amounts of data at their fingertips. Faced with the task of combatting sophisticated cybersecurity threats, organizations have no choice but to leverage machine learning in the fight to protect their own data. The sheer amount of collected security data, coupled with the complexity of today’s hybrid networks, requires businesses to leverage the power of computers and the “intelligence” of machine learning algorithms. It would simply be impossible for humans to effectively filter through security data and correctly identify potential cybersecurity anomalies with the speed and efficiency of computers.

Spinbackup’s data protection, data leak protection, and cybersecurity solutions provide a great example of the power of machine learning presented in a powerful API-driven Cloud Access Security Broker. It helps organizations without a full-time cybersecurity staff to effectively protect cloud resources and apply on-premise security policies at the cloud level. Its abilities to identify cybersecurity threats such as malicious third-party apps, ransomware, and sensitive data leaks are powered by its machine learning algorithms. Next, we will take a closer look at these specific capabilities and how Spinbackup utilizes machine learning with each of these specific cybersecurity modules.
<urn:uuid:214f2def-8292-4fb9-81ee-9dcf9dc7a664>
CC-MAIN-2024-38
https://spin.ai/blog/how-machine-learning-can-advance-cybersecurity/
2024-09-12T13:34:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00304.warc.gz
en
0.940698
2,310
2.609375
3
Both Encryption and Hashing are fundamental building blocks of cryptosystems. When it comes to storing credentials in your application, however, best practice is largely driven by what you’re trying to do. There are a lot of well-meaning security professionals who adopt an extremely dogmatic stance: “Encrypting passwords is bad! You must hash them”. This is usually true, except when it’s not. We do not need a generation of security practitioners inflexibly making demands from the ivory tower. We need professionals that understand these concepts and are able to make prudent recommendations.

- What is Encryption?
- What is Hashing?
- What about client-side hashing?
- Salting and Peppering
- Other Scenarios

What is Encryption?

Encryption is the act of transforming “plaintext” (your original message) into “ciphertext” (a transformation of your original message which can only be decoded by the intended recipient). There are two primary kinds of encryption: symmetric and asymmetric. Symmetric encryption is wicked fast, but utilizes a shared “key” which must be exchanged beforehand by the parties involved. Additionally, symmetric encryption can’t uniquely distinguish between the parties conversing, since they all share the same key! Asymmetric encryption, on the other hand, relies upon a fundamental relationship between two keys per party: a “private key” which is known only to the person who generated it, and a “public key” which is intrinsically and mathematically linked to the particular private key. What is encrypted with the public key can be decrypted only by the private key, and vice versa! (It is this inversion of asymmetric cryptography which is used together with hashing in order to create digital signatures.)

What is Hashing?

Hashing is the act of running a message through a “one-way function”. A one-way function simply describes any process where it is easy to compute an output, but computationally difficult to reverse-engineer the original input based on the output alone. We bend this fundamental property to our will in order to build complex cryptosystems. Note that one-way functions do not necessarily produce a unique output. (Two inputs with the same output are referred to as a collision, and the ability to reliably generate them based on arbitrary input makes a particular hashing algorithm unsuitable for cryptographic purposes. This is the fundamental reason why SHA-1 and MD5 are considered “broken” today.)

When authenticating users to your application, the old wisdom is undeniably correct. Storing users’ passwords in your database is a recipe for disaster. Should your database be compromised, a user’s password could be used to authenticate in their context to any service where their password is re-used. While at first blush it may seem that the answer is to encrypt the user’s password and then decrypt it prior to comparing it against what the user sends, this too is fraught with danger. Your web application has access to some decryption key, so while attacks based on, say, SQL injection would be thwarted by this approach, an attacker who has performed a buffer overflow and can execute arbitrary code in the context of your application could simply exfiltrate both the database records from your database server and the decryption key from your server’s file system! Hashing a password allows your user’s plaintext password to only exist in main memory (RAM) and never make its way to persistent storage.
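To make the one-way-function idea from the hashing section concrete, here is a small Python example using the standard hashlib library. Hashing is deterministic, the digest reveals nothing obvious about the input, and even a one-character change produces a completely different digest; it is the ability to manufacture two inputs with the same digest (a collision) that breaks an algorithm for cryptographic use.

```python
import hashlib

def sha256_hex(message: str) -> str:
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

print(sha256_hex("correct horse battery staple"))   # same input always yields the same digest
print(sha256_hex("correct horse battery staple"))
print(sha256_hex("correct horse battery staplE"))   # one changed character: an entirely different digest
```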
What about client-side hashing?

“Wait a second”, astute readers will ask themselves, likely scratching their heads. “Why send the password at all? Couldn’t the client be responsible for hashing the password instead? Then it wouldn’t even need to live in main memory on the server!” At this point though, the hashed value of the plaintext password becomes the plaintext password from the view of the server! For it to store that value unaltered is no better than storing plaintext passwords. For it to hash it again serves no value.

Salting and Peppering

If web applications only had a single user, the conversation would be over. However, that is not the world in which we live. On a multi-user system which hashes passwords, it turns out that even password hashes can provide value to an attacker who has exfiltrated a copy of the database table containing those hashes. If two values match, it is almost a certitude (except in the case of a collision!) that the users have the same password. This would make this hash a high-value target for comparing against “rainbow tables”. “Rainbow tables” are an example of the classical “time-space” tradeoff playing out in the real world. Rainbow tables consist of “pre-computation”, making reversing a computed hash a much easier affair, but only in the cases where the rainbow table considered the original input!

A password salt works by concatenating a random string (usually stored along with the password) to the end of the user-supplied password before hashing. This ensures that no two hash inputs are the same! “Peppering” your hash works much the same way, except the value is treated as a separate secret, and ideally stored in hardware such as an HSM.

Other Scenarios

What happens though if you’re NOT authenticating a user against your service? Many applications must handle application secrets in some manner. Some of these may be user-generated, some of them may be used for the application itself to authenticate to other resources, etc. Take the case of a system acting as a password manager. It would not be suitable to hash passwords, because hashing is not a reversible operation! The system could not accomplish its sole purpose. In other cases, a system may need to authenticate to a third party in the context of a user. While an argument could be made that ideally all systems would participate in federated authentication through a secure protocol like SAML, WS-FED or OAUTH, which are made to accommodate this use case, this is not always feasible.

A popular example of a service which requires a user’s password to another service is Mint.com, a popular personal finance management tool. Mint does take advantage of federated walkthroughs with financial institutions which support it, but for those that don’t, there is no other option to provide the service than storing users’ passwords in a reversibly encrypted manner. A particularly dogmatic practitioner might insist that this is unacceptable, and that the best approach would be for Mint.com to not exist as a service at all, or to only participate with banking institutions supporting this kind of federated access. They would be missing the forest for the trees and likely not succeed in meeting the needs of a business.

Both hashing and encryption are incredibly useful tools. Both have their place when manipulating user passwords. These concepts are so incredibly relevant and fundamental that they should be well understood by everyone in your technical organization, from the CIO to the project manager.
Being able to openly discuss and critically examine why your application is doing what it’s doing is one of the most difficult yet most valuable competencies a technical person can possess.
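By contrast, the password-manager scenario above needs reversible encryption rather than hashing. As a minimal sketch, assuming the third-party cryptography package is available (any authenticated symmetric cipher would illustrate the same point):

```python
from cryptography.fernet import Fernet

# The key must be kept secret (ideally in a KMS or HSM), never stored next to the ciphertext.
key = Fernet.generate_key()
vault = Fernet(key)

# A password manager must give the original secret back to the user, so it stores
# ciphertext it can later decrypt -- something a one-way hash can never provide.
token = vault.encrypt(b"third-party banking password")
assert vault.decrypt(token) == b"third-party banking password"
```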
<urn:uuid:afc72a14-fe51-4eae-b713-8349d188ecb8>
CC-MAIN-2024-38
https://www.ssltrust.com.au/blog/encryption-vs-hashing
2024-09-12T11:24:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00304.warc.gz
en
0.943719
1,535
3.46875
3
Understanding Security Vulnerabilities: Vulnerabilities, Threats & Exploits
Australian companies face an average loss of $30,000 per cyber attack. In cybersecurity, a vulnerability refers to a weakness that exposes a computer system to potential exploitation by cybercriminals, granting them unauthorised access. Once a vulnerability is exploited, attackers can implant malware, execute malicious code, and pilfer sensitive data. Vulnerabilities can be exploited through various means, such as SQL injection, buffer overflows, cross-site scripting (XSS), and open-source exploit kits, which search for known vulnerabilities and security flaws in web applications. Numerous vulnerabilities affect widely used software, significantly increasing the risk of data breaches and supply chain attacks for the many customers relying on such software. Once disclosed, these vulnerabilities are registered by MITRE as Common Vulnerabilities and Exposures (CVE) entries.

BE PREPARED: Understanding the Potential Threat to Your Business
With an ever-growing number of devices connected to the internet, hackers find ample opportunities to exploit hardware like printers, cameras, and televisions that were never designed to withstand sophisticated invasions. Consequently, both companies and individuals must reassess the security of their networks. As cyber incidents rise, it becomes crucial to accurately assess the dangers posed to businesses and consumers alike. Two common terms often discussed in the context of cyber risks are vulnerabilities and exploits.

Distinguishing Between Vulnerabilities and Risks
It is essential to recognise that vulnerabilities and risks are not interchangeable, even though cyber security risks are sometimes referred to as vulnerabilities. Risk can be understood as the likelihood and impact of a vulnerability being exploited. When the probability and impact of a vulnerability being exploited are low, the risk is also low. Conversely, when the probability and impact of a vulnerability being exploited are high, the risk is likewise high.

Identifying an Exploitable Vulnerability
A vulnerability that has at least one known, functioning attack vector is classified as an exploitable vulnerability. The window of vulnerability is the period between when the vulnerability is introduced and when it is patched. Implementing robust security practices can make many vulnerabilities non-exploitable for an organisation. For instance, appropriately configuring S3 security can significantly reduce the probability of data leakage. Regularly checking S3 permissions is crucial to avoid potential data breaches. Similarly, employing third-party risk management and vendor risk management strategies can help mitigate third-party and fourth-party risks.
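As a toy illustration of the likelihood-and-impact framing above, the sketch below scores risk as the product of the two factors. The scales and thresholds are invented for the example and are not part of any standard.

```python
def risk_score(likelihood: int, impact: int) -> str:
    """Rate a vulnerability on 1-5 scales for likelihood and impact (illustrative only)."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# An easily exploited flaw on an internet-facing system holding sensitive data:
print(risk_score(likelihood=5, impact=4))   # high
# The same flaw behind strict S3 permissions and network controls:
print(risk_score(likelihood=1, impact=4))   # low
```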
<urn:uuid:e3f7ee5e-4ded-4671-8f3e-0a8beef3e957>
CC-MAIN-2024-38
https://siegecyber.com.au/blog/understanding-security-vulnerabilities-vulnerabilities-threats-and-exploits/
2024-09-14T23:32:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00104.warc.gz
en
0.942162
505
3.28125
3
September 21, 2020 — In the United States, widespread public use of mobile phone cameras and social media has thrust the longstanding issue of police brutality against Black Americans into the national spotlight like never before. Delving deeply into the subject of how digital tools have contributed to the goals of anti-brutality activists, panelists at a Brookings Institution event on September 14 detailed the rise of the Black Lives Matter movement and whether the explosive growth of the hashtag #BLM might result in any institutional change. In the summer of 2014, videos, images, and text narratives of violent encounters between police officers and unarmed Black people circulated widely through news and social media, spurring public outrage. “A large digital archive of Tweets started in 2014, when Michael Brown was killed,” said Rashawn Ray, professor of sociology and executive director of the Lab for Applied Social Science Research at the University of Maryland. Media activism fueled by the deaths of Michael Brown and Eric Garner gave rise to Black Lives Matter, or #BLM, a loosely-coordinated, nationwide movement dedicated to ending police brutality, which uses online media extensively. The panelists referenced the “Beyond the Hashtag” report authored by Meredith Clark, assistant professor at the University of Virginia, analyzes the movement’s rise on Twitter. “Mobile technology became an agent of change,” said Mignon Clyburn, former commissioner at the Federal Communications Commission, referring to the 2007 introduction of the iPhone as a turning point in the way individuals utilize devices. “Devices became smaller, less expensive, and more ubiquitous,” said Clyburn, “we are now seeing a global, mobile revolution.” Increased accessibility to mobile devices and social media cracked open doors previously kept tightly shut by pro-corporate, pro-government gatekeepers of the media, which spread anti-Black ideologies. Mobile devices initiated a leveling of the media playing field, allowing for marginalized groups to intervene in dialogues. “Black Americans have the opportunity to share distinctively what is happening to us,” said Nicol Turner Lee, senior fellow in governance studies and the director of the Center for Technology Innovation at the Brookings Institute. “These videos show our humanity, and how it is destroyed and undermined,” added Kristen Clarke, president and executive director of the National Lawyers’ Committee for Civil Rights Under Law. While videos taken to report instances of police brutality are critical resources, they come with significant consequences for those filming and viewing them. In order to record an instance of police, an individual has to be courageous, as many citizen journalists attempting to capture an act of police brutality, end up a subject of cruelty. “You have the right to record protected under First Amendment,” Clarke informed, urging that officers be trained on respecting citizens First Amendment rights to film. While recording instances of police brutality is distressing in itself, sharing the video online, although necessary, amplifies the video’s power to traumatize indefinitely. “There will no doubt be a generation of children that will be traumatized,” by repeatedly seeing images of Black Americans brutalized by the police, said Lee. Clarke urged individuals who decide to share content, to do so with a trigger warning. 
While digital tools have enabled video evidence of brutality to be captured, amass widespread attention, and cause public outrage, this has not yet translated into real-life justice for Black individuals. It remains difficult to bring prosecutions against excessively violent officers. Clarke noted that police union contracts are barriers to reform. “The terms of collective bargaining agreements allow officers to see video evidence before reporting on how the events transpired,” detailed Clarke. Ray called for the passage of the George Floyd Justice in Policing Act, H.R. 7120, introduced by Rep. Karen Bass, D-California, which he said was currently ‘collecting dust’ in the Senate. The bill would establish new requirements for law enforcement officers and agencies, requiring them to report data on use-of-force incidents, obtain training on implicit bias and racial profiling, and wear body cameras.
<urn:uuid:b225d7cd-5844-4f88-86ee-c82b00772b90>
CC-MAIN-2024-38
https://broadbandbreakfast.com/mobile-technology-aided-the-growth-of-black-lives-matter-but-will-hashtag-outrage-lead-to-change/
2024-09-17T09:57:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00804.warc.gz
en
0.944084
859
2.96875
3
The advancement of computer technology by integrating optical, optoelectronic, and electronic components is the objective of the U.S.-Japan Joint Optoelectronic Project (JOP), which began in 1992 as part of a global partnership agreement between the United States and Japan. At a recent seminar, the performance of the JOP since 1996 was characterized as "a working model for international research collaborations in competitive, high-technology areas." The project's success, however, is not limited to merely fostering cooperation between two countries interested in furthering optoelectronic technology. The JOP also promotes the supply of prototype optoelectronic devices to speed the development of optics in computing applications. As a "go-between" of sorts, the JOP makes available wider access to prototype optoelectronic devices by system researchers and provides an expeditious funding mechanism to support the cost of new prototypes. Consequently, developers can reduce the time between a research idea and a commercial application. According to Judson French, director emeritus of the electronics and electrical engineering laboratory at the National Institute of Standards and Technology (NIST), the JOP was formed in response to Japan's desire for U.S. participation in a 10-year, $500-million program on advanced computing. The Japan program is now referred to as the Real World Computing Partnership (RWCP). Along the way, the United States proposed a way of cooperating in the RWCP, whereby both countries could contribute and benefit from their combined technological know-how. The idea of provisioning advanced experimental devices from either country to computer researchers and designers in either country was the basis for the JOP. It would enable researchers on both sides to develop and verify their advanced computer designs. "A mechanism was set up for that, called the broker system," says French. "It consists of a broker in each country to perform a number of functions. Simply put, the broker's purpose is to make known the sources of advanced optoelectronic devices and determine the willingness of developers to make them available. It then looks for potential users of those devices and provides the mechanism for bringing the two together and arranging for the devices to be transferred to the users." The Ministry of International Trade and Industry (MITI), a part of the RWCP, funds the brokers in both countries. The U.S. broker function is fulfilled by a team headed by the Optoelectronic Industry Development Association (OIDA), with representatives from NIST, the Defense Advanced Research Projects Agency (DARPA), Departments of Energy and State, and the National Science Foundation. The Japanese broker is the Optoelectronic Industry and Technology Development Association (OITDA). "The JOP has turned out to be a very unique activity and a very successful one," says French. "First, it's considered a model for U.S.-Japan cooperative research-in particular, exploring new ways of cooperating in a high-tech competitive field. It was run on a trial basis for three years and evaluated against some formal criteria established as evidence of its success. It passed the trials and our recent symposium of users provided a lot of very positive feedback through many successful applications." The JOP offers a catalog of specific devices or assemblies that are still in the research and development stage and not commercially available. 
Developers of the devices are willing to make them available and will sometimes make modest modifications if necessary. Functional descriptions of the devices are included in the catalog, and the information is made available to users in the United States and Japan. The brokers provide assistance in import and export activities to ease the country-to-country transfers. The only cost involved for the devices is the incremental cost of making the device-minus the overhead charges for R&D or facilities. This arrangement makes the cost to users relatively modest. "Most of the users have been universities, while most suppliers are within the industry itself, although there are exceptions," says French. "It's as though the JOP is providing a virtual laboratory. For instance, if a researcher at a university is working on a computer design concept and wants to verify his design, he needs a device that will perform the way his theoretical computer operates. He doesn't have the device he requires, nor does he have the laboratory facility in which to develop it. So he's at a loss, with no way to prove his idea is right and will work. "Now he has the option of going to the JOP in hopes they know somebody in the United States or Japan who makes the device that does just what he needs it to do. The JOP may be able to provide the device from an R&D laboratory, enabling the researcher to incorporate the device into his own and advance his research or design." At press time, more than 80 transactions were completed during the last three years, with costs ranging from $8000 to $51,000. The University of Colorado was developing new interconnect devices and, at the time, the only source of the 980-nm vertical-cavity surface-emitting lasers (VCSELs) needed for the research to continue was Japan's NEC Corp. The VCSELs were obtained through the JOP, allowing the university to carry its research to another level. Stanford University secured a superstructure grating tunable laser and a 32x32 waveguide filter for a testbed to optimize the scalability of computer network architectures. Neither item was available in the United States at the time. According to NIST, users in universities, government agencies, and throughout the industry have successfully attained results from the JOP. These users have stated that without JOP-brokered, state-of-the-art devices, many R&D experiments would have been impossible.
<urn:uuid:1b2e4408-2222-4b59-bb3a-6bc9b39d2b2b>
CC-MAIN-2024-38
https://www.lightwaveonline.com/optical-tech/article/16648626/joint-us-japan-project-stimulates-growth-of-optoelectronic-technologies
2024-09-18T13:46:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00704.warc.gz
en
0.96446
1,174
3.5
4
It’s not uncommon nowadays to see both Macs and PCs together in the same office. Technology has progressed to a point where both types of systems can now get along smoothly with each other. It’s easy to share files between the two systems, share printers, have them communicate with each other on the same network, and even run the same applications on both systems! Read on to find out how.
Unlike a few years ago, when Microsoft’s Windows operating system virtually dominated office desktops everywhere, today we are increasingly seeing the use of other operating systems in the office. Typically these other systems are some model of Apple’s Macintosh running its own operating system, OS X. OS X, known for its sleek graphics, great multimedia handling capabilities, and easy-to-learn user interface, has gained favor among many users and businesses. Sometimes, however, problems arise when having to use different systems in the same office or network environment. Here are some tips to eliminate common issues your users might face when working with others on a different system:
- File Sharing. There was a time when transferring files between a Mac and a PC was a painful process requiring an understanding of different file system structures, resource forks, file name limits, and other such nonsense. Thankfully those days are over. Many Mac applications today can open files created on a PC and vice versa—such as office documents, images, video, and more. Getting files from one system to another is also easy, as you can transfer via a removable drive. Both systems should recognize the file system on the drive—especially if it was formatted using Windows’ file system (doing it the other way around might be a bit more difficult). OS X “Leopard” Macs can also read drives that have been formatted with Microsoft’s NTFS file system, and freely downloadable utilities can add write support. If this sounds like too much work to understand, you can also simply burn a CD or email files from one system to another—or better yet, set up a network for file sharing.
- Making Macs and PCs talk on the same network. If you’re a little more tech savvy, you can connect your Macs to your PCs directly or via a network. Typically this requires a network cable connected to both devices and having network sharing turned on. Enabling network sharing is outside the scope of this tip, but many online resources are available to help you connect a PC to a Mac or a Mac to a PC.
- Running the same desktop applications on both a Mac and a PC. For really advanced users, did you know that you can run Windows on a Mac or OS X on a PC? The former is a bit easier and more common, thanks to techniques such as dual booting or virtualization. In dual booting (what Apple calls “Boot Camp”), you essentially install both operating systems on a Mac and, on power up, you can choose which operating system to boot. Virtualization, on the other hand, is way slicker, as you can run both operating systems at the same time. In virtualization, you boot Windows in a window within OS X, allowing you to effectively run Windows applications on a Mac. There are also many commercial applications that can help with this.
- The future: Cloud Applications. As we all start to access more cloud-based applications, the operating system you use is no longer as critical. As long as your systems have an Internet connection and a browser, you can use different systems, and it doesn’t matter what operating system or hardware is being used.
So running both Macs and PCs in the same office is not necessarily a bad thing, as it has been in the past. Dozens of options exist today to make the situation manageable, if not downright easy. If you need help, don’t worry — we’re here to assist. Call us today to find out how you can get Macs and PCs to work together for your business today.
<urn:uuid:4eb6e094-e618-405a-b509-4d0a251dc09f>
CC-MAIN-2024-38
https://www.manhattantechsupport.com/blog/macs-and-pcs-why-cant-we-just-get-along/
2024-09-08T21:45:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00704.warc.gz
en
0.954585
818
2.59375
3
"RISC" was an important architecture from the 1980s when CPUs had fewer than 100,000 transistors. By simplifying the instruction set, they free up transistors for more registers and better pipelining. It meant executing more instructions, but more than making up for this by executing them faster. But once CPUs exceed a million transistors around 1995, they moved to Out-of-Order, superscalar architectures. OoO replaces RISC by decoupling the front-end instruction-set with the back-end execution. A "reduced instruction set" no longer matters, the backend architecture differs little between Intel and competing RISC chips like ARM. Yet people have remained fixated on instruction set. The reason is simply politics. Intel has been the dominant instruction set for the computers we use on servers, desktops, and laptops. Many instinctively resist whoever dominates. In addition, colleges indoctrinate students on the superiority of RISC. Much college computer science instruction is decades out of date. For 10 years, the ignorant press has been championing the cause of ARM's RISC processors in servers. The refrain has always been that RISC has some inherent power efficiency advantage, and that ARM processors with natural power efficiency from the mobile world will be more power efficient for the data center. None of this is true. There are plenty of RISC alternatives to Intel, like SPARC, POWER, and MIPS, and none of them ended up having a power efficiency advantage. Mobile chips aren't actually power efficient. Yes, they consume less power, but because they are slower. ARM's mobile chips have roughly the same computations-per-watt as Intel chips. When you scale them up to the same amount of computations as Intel's server chips, they end up consuming just as much power. People are essentially innumerate. They can't do this math. The only factor they know is that ARM chips consume less power. They can't factor into the equation that they are also doing fewer computations. There have been three attempts by chip makers to produce server chips to complete against Intel. The first attempt was the "flock of chickens" approach. Instead of one beefy OoO core, you make a chip with a bunch of wimpy traditional RISC cores. That's not a bad design for highly-parallel, large-memory workloads. Such workloads spread themselves efficiently across many CPUs, and spend a lot of time halted, waiting for data to be returned from memory. But such chips didn't succeed in the market. The basic reason was that interconnecting all the cores introduced so much complexity and power consumption that it wasn't worth the effort. The second attempt was multi-threaded chips. Intel's chips support two threads per core, so that when one thread halts waiting for memory, the other thread can continue processing what's already stored in cache and in registers. It's a cheap way for processors to increase effective speed while adding few additional transistors to the chip. But it has decreasing marginal returns, which is why Intel only supports two threads. Vendors created chips with as many as 8 threads per core. Again, they were chasing the highly parallel workloads that waited on memory. Only with multithreaded chips, they could avoid all that interconnect nastiness. This still didn't work. The chips were quite good, but it turns out that these workloads are only a small portion of the market. 
Finally, chip makers decided to compete head-to-head with Intel by creating server chips optimized for the same workloads as Intel, with fast single-threaded performance. A good example was Qualcomm, who created a server chip that CloudFlare promised to use. They announced this to much fanfare, then abandoned it a few months later as nobody adopted it. The reason was simply that when you scaled to Intel-like performance, you have Intel-like liabilities. Your only customers are the innumerate who can't do math, who believe like emperors that their clothes are made from the finest of fabrics. Techies who do the math won't buy the chip, because any advantage is marginal. Moreover, it's a risk. If they invest heavily in the platform, how do they know that it'll continue to exist and keep up with Intel a year from now, two years, ten years? Even if for their workloads they can eke out 10% benefit today, it's just not worth the trouble when it gets abandoned two years later. Thus, ARM server processors can be summarized by this: the performance and power efficiencies aren't there, and without them, there's no way the market will accept them as competing chips to Intel. This brings us to chips like Graviton2, and similar efforts at other companies like Apple and Microsoft. I'm pretty sure it is going to succeed. The reason is the market, rather than the technology. The old market was this: chip makers (Intel, AMD, etc.) sold to box makers (Dell, HP, etc.) who sold to Internet companies (Amazon, Rackspace, etc.). However, this market has been obsolete for a while. The leading Internet companies long ago abandoned the box vendors and started making their own boxes, based on Intel chips. Making their own chips, making the entire computer from the ground up to their specs, is the next logical evolution. This has been going on for some time, we just didn't notice. Most all the largest tech companies have their own custom CPUs. Apple has a custom ARM chip in their iPhone. Samsung makes custom ARM chips for their phones. IBM has POWER and mainframe chips. Oracle has (or had) SPARC. Qualcomm makes custom ARM chips. And so on. In the past, having your own CPU meant having your own design, your own instruction set, your own support infrastructure (like compilers), and your own fabs for making such chips. This is no longer true. You get CPU designs from ARM, then have a fab like TSMC manufacture the chip. Since it's ARM, you get for free all the rest of the support infrastructure. Amazon's Graviton1 chip was the same CPU core (ARM Cortex A72) as found in the Raspberry Pi 4. Their second generation Graviton2 chip has the same CPU core (ARM Cortex A76) as found in Microsoft's latest Windows Surface notebook computer. Amazon doesn't care about instruction set, or whether a chip is RISC. It cares about the rest of the feature of the chip. For example, their chips support encrypted memory, a feature that you might want in a cloud environment that hosts content from many different customers. Recently, Sony and Microsoft announced their next-gen consoles. Like their previous generation, these are based on custom AMD designs. Gaming consoles have long been the forerunners of this new market: shipping in high enough volumes that they can get a custom design for their chip. It's just that Amazon, through its cloud instances, is now of sufficient scale, that they can sell as many instances as game consoles. 
The upshot is that custom chips are becoming less and less a barrier, just like custom boxes became less of a barrier a decade ago. More and more often, the world's top tech companies will have their own chip. Sometimes, this will be in partnership with AMD with an x86 chip. Most of the time, it'll be the latest ARM design, manufactured on TSMC or Samsung fabs. IBM will still have POWER and mainframe chips for their legacy markets. Sometimes you'll have small microcontroller designs, like Western Digital's RISC-V chips. Intel's chips are still very good, so their market isn't disappearing. However, the market for companies like Dell and HP is clearly a legacy market, to be thought of in the same class as IBM's still big mainframe market.
<urn:uuid:bc03e629-dd12-4e74-9f44-05a528949d1f>
CC-MAIN-2024-38
https://blog.erratasec.com/2019/12/this-is-finally-year-of-arm-server.html
2024-09-10T04:24:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00604.warc.gz
en
0.971632
1,629
3.25
3
Scrum is a framework used to apply the Agile Software Development Lifecycle (SDLC) methodology. Scrum provides a guideline for groups of people working collectively toward solving complex problems. These problems may constitute a project in software development, hardware development and many other forms of development assignments. The framework is focused on using creativity, communication and collaboration to reach project goals with higher productivity and lower waste. The Scrum framework is simple, lightweight and applicable to a variety of project types. Mastering the framework can transform your organization’s product development capabilities. However, Scrum is difficult to master and requires close adherence to the guidelines defining the Scrum framework. Scrum was found on the basis of process control theory and asserts that decisions should be made based on knowledge that comes from experience of performing the tasks and controlling the inherent risks through an iterative process. Each repetition of the workflow incrementally improves the predictability of the outcome, reduces the risks and optimizes collective productivity of the team. Organizations adopting Scrum should therefore understand the three fundamental pillars of the process control theory that was used in devising the Scrum methodology: Transparency: Clear visibility of the processes involved to yield a specific outcome. Every responsible individual should have adequate understanding of significant aspects of the processes. This understanding should be based on a standardized system employed to perform every process. For instance, team members communicating in the same language, same definition of the timeline and intended outcome. Inspection: Evaluate the progress of the iterative processes to identify undesirable variance from pre-defined or intended goals. The inspections can be carried out by skilled individuals who may not be a part of the project team, at a frequency that doesn’t interrupt regular project progress. Adaptation: If the inspection identifies changes that are necessary to reduce the undesirable variances, then the process should be adaptable to accommodate the required changes. The adaptation should be streamlined, performed soon after identification of the requirement and yet not disrupt the entire project management plan. With this strategy into consideration, the following four tips can help you get started with Scrum framework implementation for your project management goals: 1. Know Your Scrum Scrum is not just about having a daily meeting over the project progress. The individuals involved in the team must develop adequate knowledge of Scrum, understand the roles, events and artifacts constituting the framework. If only a few individuals from the team have actually read Scrum guidelines, the rest of the team should be trained on the methodologies and best practices employed for the unique project management application. To help develop a general understanding, the Scrum framework is condensed as follows: - A prioritized wish list of the task is created known as Product Backlog. - A piece of this task list is selected and executed as part of a Sprint Backlog. - The team aims to complete the task in a specific time window while accounting for the progress in regular daily meetings. - The goal of this Sprint is to ensure that a tangible, shippable development feature or product is to be released. 
- Once completed, the team reviews the progress, identifies the variances, adapts the process to accommodate the necessary changes and applies the sprint to the next Product Backlog item. - This process is repeated until the larger project goal is achieved, such as developing the entire product within the intended deadline and budgetary conditions. 2. Understand Roles and Team Development There are several ways to develop a team and the reasons for team member selection should align with the goals and requirements of the Scrum framework. Small organizations may have limited options for team selection whereas large enterprises may have several competencies and skills available to contribute toward those goals. Known team building models such as the Tuckman model can be used to start a successful new Scrum Team. The model involves the following stages to help achieve this goal: - Forming: Understanding the purpose, opportunities and challenges associated with the group development. - Storming: Learning about each other, expressing opinions and possibly leading to conflicts and differences. - Norming: resolving the co-operation issues and gaining the ambition of working toward the collective goals. - Performing: Translating the motivation for collaboration into collective works. Competence, autonomy and decision-making be come critical at this point. It’s important for the Scrum Teams to identify the stage at which they begin the Scrum Sprints and make appropriate decisions accordingly. It may be suitable to reach a certain level of maturity as per the Tuckman’s model prior to working on the tasks autonomously. 3. Welcome Experimentation The Scrum process inherently evolves across every Sprint as necessary. There are no hard and fast rules for many aspects of Scrum. For instance, a Sprint can run for two weeks, or more, or less. The priority for choosing work items from the Product and Sprint Backlog can vary as the project progresses. Less successful Sprints can be revisited and marked as an attempt instead of a failure. Since communication is a critical element of the Scrum framework, the choice of words and mindset used in communication can impact team motivation and the ability of individuals to satisfactorily work with their team members. Adaptability of the Scrum project should reflect in the flexibility of the team members in handling diverse situations, some of which may not always be perceived as the most suitable path by every member of the team. Only by experimentation can every team member truly gain the knowledge necessary to evaluate different pathways and pursue the best approach for future Sprints. 4. Help Execute a Collective Vision Despite the flexibility and adaptability of Scrum teams and projects, members should consistently follow the vision of the Scrum Team. The task for leadership, or Scrum Master, is to ensure the communication and behavior of the team facilitate the execution of the project goals productively. Particularly during the Norming phase of the group development, team leaders may need to involve themselves in resolving team related issues and ensure that every team decision is directed toward productively fulfilling the goals of the Scrum Sprint. At the same time, the input of Scrum Master should not be entirely authoritative but provide a global direction toward a collective vision that every team member can own and use to guide their individual and collective decisions.
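As a toy illustration of the backlog-to-sprint loop summarised in tip 1, the sketch below models a Product Backlog as a priority-ordered list and pulls the top items into a Sprint Backlog until the team's capacity is reached. The item names, priorities, and capacity figure are invented for the example and are not part of the Scrum Guide itself.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    priority: int       # lower number = higher priority
    estimate: int       # effort points

product_backlog = [
    BacklogItem("User login", 1, 5),
    BacklogItem("Password reset", 2, 3),
    BacklogItem("Profile page", 3, 8),
    BacklogItem("Dark mode", 4, 5),
]

def plan_sprint(backlog: list[BacklogItem], capacity: int) -> list[BacklogItem]:
    """Pull the highest-priority items that still fit within the team's capacity."""
    sprint, used = [], 0
    for item in sorted(backlog, key=lambda i: i.priority):
        if used + item.estimate <= capacity:
            sprint.append(item)
            used += item.estimate
    return sprint

sprint_backlog = plan_sprint(product_backlog, capacity=10)
# -> "User login" (5) and "Password reset" (3); the rest waits for a later Sprint,
#    mirroring the iterative loop described in tip 1.
```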
<urn:uuid:b03b37c7-0e96-4c68-93a7-24a55e61852d>
CC-MAIN-2024-38
https://blogs.bmc.com/agile-scrum-getting-started/
2024-09-11T08:47:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00504.warc.gz
en
0.935379
1,284
3.109375
3
Ransomware-as-a-service (RaaS) and Malware-as-a-service (MaaS) are common schemes that cybercriminals use to perform their nefarious activities. However, there is a new addition to the list: Dropper-as-a-Service (DaaS). Threat actors increasingly use DaaS to rapidly spread their malware across thousands of PCs and steal sensitive data. DaaS is used by threat actors to distribute their malware to unsuspecting users more easily. They mask their malware as real or pirated apps and trick their targets into downloading and installing them. In a research conducted by Sophos, a huge network of websites was revealed that provides a low-cost solution to help threat actors distribute their malware. “During our recent investigation into an ongoing Raccoon Stealer (an information stealing malware) campaign, we found that the malware was being distributed by a network of websites acting as a “dropper as a service,” serving up a variety of other malware packages,” wrote Sophos. “Multiple front-end websites targeting individuals seeking “cracked” versions of popular consumer and enterprise software packages link into a network of domains used to redirect the victim to the payload designed for their platform.” Some of these websites charge only $2 for tricking a thousand targets into downloading the malware. The scheme includes dropping various types of malware based on location and time. Some droppers were also used as infostealers, researchers found. The evolution of the website networks has been largely based on the changes in the market dynamics. Currently, cybercriminals are leveraging the stolen credential market and cryptocurrency scams to identify their targets. The only silver lining here is that most of the droppers are easily identified. However, there’s a caveat: Since they are encrypted in archives, they cannot be detected by security tools unless they are unwrapped. The recent years have seen a growth in the trend of X-as-a-service. The easiest way to avoid falling into their trap is to avoid cost-cutting measures, such as using cracked software. Researchers say offering cracked versions of expensive software is a common strategy cybercriminals adopt to break into an organization’s internal network and system.
<urn:uuid:e5f2e0b9-4b7c-4c19-bfa4-7382d9cbfcac>
CC-MAIN-2024-38
https://cyberintelmag.com/malware-viruses/cybercriminals-are-adopting-dropper-as-a-service-to-spread-malware/
2024-09-12T14:41:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00404.warc.gz
en
0.962838
473
2.53125
3
The Nintendo Switch has become one of the most popular gaming consoles in recent years, with its unique hybrid design allowing users to play both at home and on the go. But have you ever wondered what powers this innovative device? In this article, we will dive into the world of CPUs and explore what exactly makes the Nintendo Switch tick. - 1. The History of Nintendo’s CPUs - 2. The Switch’s CPU - 3. Comparison with Other Consoles - 4. Performance of the Switch’s CPU 1. The History of Nintendo’s CPUs Before delving into the specifics of the Nintendo Switch’s CPU, it is important to understand the history of Nintendo’s CPUs and how they have evolved over the years. Nintendo has been in the gaming industry for decades, and as technology advanced, so did their processors. From NES to GameCube Nintendo’s first console, the Nintendo Entertainment System (NES), was released in 1983 and was powered by an 8-bit MOS Technology 6502 processor. This processor was also used in other popular consoles such as the Commodore 64 and the Atari 2600. As technology progressed, Nintendo continued to use variations of the 6502 processor in their following consoles, including the Super Nintendo Entertainment System and the Nintendo 64. In 2001, Nintendo released the GameCube, which was the company’s first console to feature a custom-built processor instead of using off-the-shelf components. The GameCube’s IBM PowerPC Gekko processor boasted a clock speed of 485 MHz and was the precursor to the CPU used in the Nintendo Switch. The Wii Era In 2006, the Wii was released, and although it was not as powerful as its competitors, Sony’s PlayStation 3 and Microsoft’s Xbox 360, it was a massive success for Nintendo. The Wii featured a more advanced version of the GameCube’s processor, known as the Broadway, with a clock speed of 729 MHz. Nintendo’s next console, the Wii U, was released in 2012 and featured a custom-built IBM PowerPC processor with three cores and a clock speed of 1.24 GHz. However, due to its lackluster sales, the Wii U did not receive much attention from developers, and Nintendo needed to come up with something new and innovative to stay relevant in the gaming market. 2. The Switch’s CPU Released in 2017, the Nintendo Switch was unlike any other console on the market. It offered gamers the ability to play both at home and on the go, thanks to its unique hybrid design. To achieve this, Nintendo had to create a powerful yet energy-efficient processor, and they turned to NVIDIA for help. The NVIDIA Tegra X1 The Nintendo Switch is powered by the NVIDIA Tegra X1 processor, which is also used in NVIDIA’s SHIELD TV. This processor features an octa-core ARM Cortex-A57 CPU and a 256-core NVIDIA Maxwell GPU. The CPU has a clock speed of 1.02 GHz, while the GPU runs at 768 MHz. The Tegra X1 was chosen by Nintendo due to its low power consumption and its ability to deliver high-quality graphics. Its ARM-based architecture also made it easier for developers to port their games to the Switch. Customizations for the Switch Nintendo did not use the Tegra X1 as it was; they made some customizations to fit their needs. One significant change was the addition of two more ARM Cortex-A57 cores, bringing the total to eight. This was done to improve the Switch’s multitasking capabilities and allow for smoother gameplay while running multiple applications. Another customization was the clock speed, which was lowered from 1.9 GHz to 1.02 GHz for better power efficiency. 
This decision was likely made to extend the battery life of the console when being used in handheld mode. 3. Comparison with Other Consoles The Xbox One and PlayStation 4 While the Nintendo Switch has a unique design and features, it is not as powerful as its competitors, the Xbox One and PlayStation 4. Both consoles feature custom-built x86 processors, which are more powerful than ARM-based processors like the Tegra X1. The Xbox One’s processor, the AMD Jaguar, has eight cores with a clock speed of 1.75 GHz, while the PlayStation 4’s processor, the AMD Jaguar 2.0, has eight cores with a clock speed of 1.6 GHz. Both consoles also have more powerful GPUs than the Switch, making them able to handle high-end graphics and performance-demanding games. The Nintendo Switch and Mobile Devices As mentioned earlier, the Tegra X1 is also used in NVIDIA’s SHIELD TV, which is an Android-powered device. This raises the question of how the Nintendo Switch compares to other mobile devices. Compared to flagship smartphones, the Nintendo Switch’s CPU is superior, with most smartphones using quad-core processors with lower clock speeds. However, when compared to high-end tablets, the Switch’s CPU falls behind, with some featuring octa-core processors with clock speeds reaching 2.5 GHz. 4. Performance of the Switch’s CPU The Nintendo Switch’s CPU may not be as powerful as its competitors, but it is still capable of delivering an enjoyable gaming experience. Let’s take a look at how it performs in different scenarios. When the Switch is docked, it runs at a higher clock speed of 1.02 GHz compared to its handheld mode, where it runs at 768 MHz. This boost in clock speed allows for better graphics quality and smoother gameplay. Some games even offer a performance mode that increases the clock speed further for a more stable frame rate. As mentioned earlier, the Switch’s CPU runs at a lower clock speed in handheld mode to conserve battery life. While this may result in slightly downgraded graphics, most games still look and play great in this mode. The Switch’s CPU has eight cores, which allows for efficient multitasking. Users can quickly switch between games, applications, and even take screenshots without experiencing any significant lag or slowdown. Information Security Asia is the go-to website for the latest cybersecurity and tech news in various sectors. Our expert writers provide insights and analysis that you can trust, so you can stay ahead of the curve and protect your business. Whether you are a small business, an enterprise or even a government agency, we have the latest updates and advice for all aspects of cybersecurity.
<urn:uuid:c543c442-c3bb-42de-9868-d9f40aa0ac42>
CC-MAIN-2024-38
https://informationsecurityasia.com/what-cpu-is-in-the-nintendo-switch/
2024-09-13T21:33:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00304.warc.gz
en
0.947263
1,357
2.78125
3
IBM Research Unveils Silicon Photonics Chip Capable of 100Gbps
Making a big step forward in silicon photonics, IBM Research designed and tested a fully integrated wavelength multiplexed silicon photonics chip, which fully enables the use of pulses of light instead of electrical signals over wires to move data.
May 15, 2015
Making a big step forward in silicon photonics, IBM Research said it has designed and tested a fully integrated wavelength multiplexed silicon photonics chip, which fully enables the use of pulses of light instead of electrical signals over wires to move data. This step will lead to the eventual manufacturing of 100Gbps optical transceivers for commercial use. With supercomputing and data center interconnects in mind for initial uses of this technology, IBM says it has demonstrated pushing 100Gbps over a range of up to two kilometers. Whether the new CMOS Integrated Nano-Photonics Technology IBM has developed will be built with existing manufacturing processes or not will determine exactly how quickly the product will come to market.
IBM says its chips use four distinct colors of light traveling within an optical fiber, each acting as an independent 25Gbps optical channel, for 100Gbps of bandwidth over a duplex single-mode fiber. The chip enables integration of different optical components side-by-side with electrical circuits on a single silicon chip using sub-100nm semiconductor technology. The essential parts of an optical transceiver, both electrical and optical, can be combined monolithically on one silicon chip, and are designed to work with standard silicon chip manufacturing processes, according to IBM.
Arvind Krishna, senior vice president and director of IBM Research, said that "just as fiber optics revolutionized the telecommunications industry by speeding up the flow of data -- bringing enormous benefits to consumers -- we’re excited about the potential of replacing electric signals with pulses of light. This technology is designed to make future computing systems faster and more energy efficient, while enabling customers to capture insights from Big Data in real time."
<urn:uuid:d1722909-52d5-4790-ba7e-d0b3717883be>
CC-MAIN-2024-38
https://www.datacenterknowledge.com/data-center-chips/ibm-research-unveils-silicon-photonics-chip-capable-of-100gbps
2024-09-16T09:37:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00104.warc.gz
en
0.895106
420
2.828125
3
Table of Contents
OSPF (Open Shortest Path First) is a Link-State Routing Protocol. It is also Classless and an Open, Standard Protocol used by all vendors. OSPF uses the Dijkstra algorithm, a Shortest Path First (SPF) algorithm, to determine the best path to a destination network. In this lesson, we will focus on Open Shortest Path First and we will learn the details of this Routing Protocol.
Open Shortest Path First establishes neighborship with other OSPF routers with the help of OSPF messages. With these messages, different tables are built, and the Open Shortest Path First routing mechanism works with these tables. Basically, there are three tables. These tables are:
- Neighbor Table
- Topology Table
- Routing Table
In Open Shortest Path First, every router builds its own Topology Table and has a full view of the network. It calculates the next hop independently of other routers. This is also the characteristic of a Link-State Protocol. As you remember, the Link-State protocols are “OSPF” and “IS-IS”. In Distance-Vector protocols, the situation is not like this. Distance-Vector protocols need a neighbor to learn about the network.
Hierarchical Design and Area Types
Open Shortest Path First has a hierarchical architecture. It uses “Areas” for hierarchy. There is a Backbone Area, Area 0, at the center of all OSPF networks. Around this Backbone Area, Area 0, there are some other Areas. All these areas are connected to this Area 0. If there is a discontiguous structure, Virtual Links can be used to connect an area to the Backbone Area (Area 0).
With this hierarchical architecture of OSPF, you can divide your network into different small areas and, by doing this, you can reduce the overhead of these small areas. You can also use specific area types in an OSPF network. These areas will be explained in detail later. Here, let’s only list the area types used in OSPF:
- Standard Area
- Backbone Area (Area 0)
- Stub Area
- Not So Stubby Area (NSSA)
- Totally Stub Area
- Totally NSSA
We will talk about the details of these areas in the following articles.
OSPF Router Types
In OSPF design there are different router types. These router types are named according to their roles and their place in the network. Let’s see these routers:
- Internal Routers : Routers in a single Area.
- Backbone Routers : Routers in the Backbone Area (at least one interface).
- Area Border Routers (ABRs) : Routers that have interfaces in at least two areas.
- Autonomous System Boundary Routers (ASBRs) : Routers connected to another AS that redistribute external routes.
OSPF Administrative Distance Values
For Best Path selection, Administrative Distance (Preference) values are very important. Every Routing Protocol has an AD value. The Administrative Distance (Preference) of Open Shortest Path First is 110 for Cisco devices. This is a little different for Alcatel-Lucent, Huawei and Juniper devices. They use Internal and External Preference values for Open Shortest Path First. The Preference value is 10 for Internal OSPF Routes and 150 for External OSPF Routes on the devices of these vendors.
Open Shortest Path First Cost
Open Shortest Path First uses path Cost as its metric. Generally the Bandwidth value is used to derive the path Cost. As a formula, the Cost is calculated like below in OSPF:
Cost = Reference Bandwidth (default 100,000,000 bps, i.e. 100 Mbps) / Interface Bandwidth (bps)
Protocols are developed and then enhanced with new features, and these enhancements arrive as new versions. Open Shortest Path First also has different versions. There are two OSPF versions.
These are: OSPFv2 is the first version of Open Shortest Path First and it is used with IPv4. OSPFv3 is the enhanced version of OSPFv2 that supports and is designed for IPv6. In the table below, you can check the differences between these two versions in summary. In this article, we have given an overview of OSPFv2. In the following articles, we will explain this lesson more deeply. If you want to learn more about OSPFv3, you can check the IPv6 Routing Protocols lesson.
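To make the cost formula above concrete, here is a small sketch that reproduces the classic calculation with the default 100 Mbps reference bandwidth. The interface list is just an example, and real routers floor any result below 1 at 1.

```python
REFERENCE_BW_BPS = 100_000_000   # default reference bandwidth: 100 Mbps

def ospf_cost(interface_bw_bps: int) -> int:
    """Classic OSPF interface cost: reference bandwidth / interface bandwidth, minimum 1."""
    return max(1, REFERENCE_BW_BPS // interface_bw_bps)

interfaces = {
    "Serial T1 (1.544 Mbps)":   1_544_000,
    "Ethernet (10 Mbps)":       10_000_000,
    "FastEthernet (100 Mbps)":  100_000_000,
    "GigabitEthernet (1 Gbps)": 1_000_000_000,
}

for name, bw in interfaces.items():
    print(f"{name}: cost {ospf_cost(bw)}")
# T1 -> 64, Ethernet -> 10, FastEthernet -> 1, GigabitEthernet -> 1 (floored at 1,
# which is why operators often raise the reference bandwidth on faster links).
```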
<urn:uuid:6347c676-257c-4735-824e-e7238ce33b91>
CC-MAIN-2024-38
https://ipcisco.com/ospf-open-shortest-path-first-part-1/
2024-09-18T18:39:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00804.warc.gz
en
0.892316
960
3.78125
4
The nature of cyber threats is evolving rapidly, and in some ways is a perfect synonym of the common cold as they change rapidly and are difficult to protect against. The types of attacks themselves can be varied. They could be targeting a business’s intellectual property, customer data, or aimed at causing maximum disruption, such as a distributed denial of service (DDOS) attack. The good news is that the awareness of the threats is evolving too, and businesses finally understand the consequences of suffering a data breach. Modern cyber threats have three unique challenges. One, they are intangible, so understanding the nature of the risk exposure is a real challenge for insurers, this is underpinned by a lack of trusted insight into the frequency and severity of attacks. >See also: How to communicate cyber risk to the board Secondly, cyber threats are systemic and a single attack can lead to a lot of consequences, so threats can quickly propagate throughout networks, even across geographical borders. The third component is that cyber threats are still generally human driven, whereby those planning an attack will study the defences in front of them and try to circumvent them – so it is always a battle to stay one step ahead of the threat. Another big problem for the industry is how to value any potential payout. It is no longer a physical entity – such as offices or stock – that are being insured but the data itself. So the nature of the causable loss is difficult to pinpoint. A question of time The nature of how insurance policies should be sold was also a point of concern for attendees. As cyber insurance policies have become more popular, brokers are finding it increasingly difficult to find the time to actually visit a customer’s site to provide a proper risk assessment. Much is having to be done via a checklist and taken as read by the insurers, but mistakes or misunderstandings can happen. Whilst a site visit would always be preferable, it is only really practical with the biggest clients. Yet, this is an issue that needs to be solved as most of the interest in cyber insurance is from medium-sized businesses. A topic of intense debate revolves around where the onus of education falls. Some feel that the insurers should be educating businesses, whilst others feel that it is businesses themselves that need to drive the insurance industry to provide such cyber policies. The industry needs to improve how it shares data and the government needs to enforce this, albeit on an anonymised basis. The industry needs leadership from the government, a legislative force, and access to detailed data and research so that premiums can be better set. The insurance industry is already talking to government regarding how the industry can drive resilience against cyber attacks. One thing they could do it target big multinational companies with a substantial supply chain (such as the large supermarkets) – by them implementing cyber insurance policies, it would filter down through the supply chain to their various suppliers and partners. Putting a price on failure The insurance industry needs to collectively set premiums that truly reflect the risk, but how do you put a price on a breach? Unfortunately, there is no quick fix. The challenge is to achieve an objective measurement of the true costs incurred following a breach. This is where by working the infosec industry, insurers can more accurately calculate a risk profile and what the potential impact cost would be for different events. 
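As a purely illustrative sketch of what risk-reflecting premiums could mean in practice, the toy model below prices an annual premium from an assumed breach probability, an expected loss, and a loading factor. All of the figures are invented, and real actuarial pricing is far more involved.

```python
def annual_premium(breach_probability: float, expected_loss: float,
                   loading_factor: float = 0.3) -> float:
    """Toy risk-based premium: expected annual loss plus a loading for costs and margin."""
    expected_annual_loss = breach_probability * expected_loss
    return expected_annual_loss * (1 + loading_factor)

# A mid-sized firm with a 5% annual breach likelihood and a 2,000,000 expected loss:
print(round(annual_premium(0.05, 2_000_000)))   # 130000
# Better controls that halve the likelihood halve the premium too, which is the
# mitigation incentive the insurers and infosec industry are after.
print(round(annual_premium(0.025, 2_000_000)))  # 65000
```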
This education would not only benefit the insurance industry but the companies themselves, as business will be encouraged to mitigate the risk by being given an incentive to do so. For example, a specific regulatory regime might force certain types of businesses to purchase cyber insurance. Recent worrying research highlighted the disparity between the perception of CEOs as to their cover, and the reality. Of those CEOs in large organisations surveyed, over half (52%) thought they had suitable insurance to be covered in the occurrence of a breach, whilst risk professionals said only 15% were in reality, and insurers themselves estimated the number with applicable cover to be only 2%. Uncertainty and ambiguity are the biggest issues with any type of insurance policy. The industry needs to appreciate that businesses want one single integrated insurance policy that covers everything needed to protect the business, and that cyber insurance is a component of that. If policies are split by occurrence, or even worse by provider, then you will get to a stage where each insurer is pointing the finger at the other, or even worse the businesses cover could simply fall between the cracks between the policies. There is no doubt as to the severity of business interruption caused by a cyber attack and how the ever expanding digital world and the Internet of Things means that the risk of exposure is growing exponentially. The insurance industry needs to act now to be able to cope with this coming wave. Sourced from Kirill Slavin, GM of UK&I at Kaspersky Lab
<urn:uuid:77394cdf-8324-473d-aa86-c66cd2deae41>
CC-MAIN-2024-38
https://www.information-age.com/digital-safety-net-need-modern-day-cyber-insurance-33241/
2024-09-19T23:47:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00704.warc.gz
en
0.969105
977
2.640625
3
You can spot a phishing website by checking the URL, looking at the website’s content, reading reviews of the website and using a password manager that Data breaches are on the rise and they can be both unexpected and costly. The average estimated cost of a data breach has peaked at an all-time high—an astounding $4.35 million, according to a 2022 report by IBM and the Ponemon Institute. Poor password hygiene and legacy software are two key culprits that will increase your chances of falling victim to a password breach. Keep reading to learn more about how to check if a password is breached and five ways to prevent a password breach from occurring. How to Check if a Password is Breached The most straightforward way to determine whether or not your password has been breached is by using a dark web monitoring tool such as BreachWatch®. BreachWatch immediately notifies you if your credentials have made their way onto the dark web so that you can update your credentials before a bad actor has the opportunity to take advantage of your compromised details. You can also scan your email, for free, to see if any of your passwords have been stolen in a data breach using our free BreachWatch scan. If you find that your credentials are compromised, take action immediately. Five Ways to Prevent a Data Breach It is important to take preventative action to ensure that your cybersecurity posture is strong enough to keep away cybercriminals seeking vulnerabilities. Use these five tips below to prevent a password breach from occurring in the future. 1. Eliminate Weak Passwords Stop using the same password you have been using for everything. Instead, use a password generator to create unique passwords for each account to help prevent compromised credentials. If your password simply uses short dictionary words, cyber attackers can easily crack these weak passwords through a brute force attack. Password generators create strong passwords by formulating a string of random characters, making it nearly impossible for cybercriminals to guess. In addition to generating robust passwords, it is essential to practice password hygiene to ensure that you are taking additional steps to protect your passwords from unauthorized users. 2. Use Multi-Factor Authentication (MFA/2FA) Many cloud solutions offer Multi-Factor Authentication (MFA), which can prevent 99.9% of password-related cyber attacks on your accounts, according to Microsoft. While Two-Factor Authentication (2FA) requires a second authentication method, MFA requires users to present two or more types of authentication, which can strengthen your security and further prevent cybercriminals from infiltrating your account. Once you log into an online account, MFA can protect you and your account since it requires users to provide an additional authentication factor, such as: - Answering a personal security question - Submitting a 6-digit code retrieved by mobile - Scanning your face or fingerprint Even if a cybercriminal manages to gain access to your login credentials, MFA makes it exponentially more difficult for unauthorized users to fully access your account, since they cannot pass the second authentication method. 3. Delete Inactive Accounts Since a cybercriminal could use inactive user accounts, keeping an account alive but inactive is a crucial security risk. If a cybercriminal gains access to one of your inactive accounts, the bad actor may be able to access your private information. 
To make matters worse, if you are guilty of password reuse, a compromised email address and password pairing give cybercriminals access to any account using the same login credentials. Suppose you find that you are no longer using a specific online account. In that case, you are encouraged to delete it to prevent unauthorized users from potentially gaining access to personal or financial information. 4. Update Software Outdated software is a massive security vulnerability as it can be filled with bugs that put you at risk if they aren’t resolved. Updating software to the latest version is crucial since updates can prevent security issues, while improving compatibility and program features. Legacy software can be riddled with software vulnerabilities, opening up opportunities for cybercriminals to take advantage. Outdated software is also more exposed to viruses. Not only do viruses impact the infected device, but they can also be passed to your colleagues’ devices. Updated software allows you to stay up to date with the latest fixes and improvements. These updates were made for a reason, as they offer improved security and a better end-user experience. 5. Stay Updated on the Latest in Cybersecurity Staying up-to-date on the latest cybersecurity news can prevent you from becoming a victim by understanding modern cyber attack strategies. Reading this information allows you to learn from the mistakes of other data breach victims and help prevent a breach from happening to you. How Keeper Strengthens Your Cybersecurity Posture Keeper Security provides cybersecurity solutions to protect your passwords from cybercriminals and prevent password-related data breaches. Built with a zero-trust, zero-knowledge security architecture with 256-bit AES encryption, our password management tool offers several additional features to strengthen your security posture – ensuring that your passwords are protected. And with our dark web monitoring tool BreachWatch, you‘ll always be notified if your passwords have been compromised. Register today for a free 30-day trial to see how our cybersecurity solutions can protect you from cyber threats.
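As a hedged illustration of the password-generation advice in tip 1 above, here is a minimal Python sketch using the standard library's secrets module; the length and character set are arbitrary choices for demonstration, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically secure random source, unlike random.choice().
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a different 20-character password on every run
```

A dedicated password manager's built-in generator does the same job and also stores the result, which is usually the more practical option.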
<urn:uuid:1b24c7ee-53b1-40d0-a1e3-a5e3c5d2cb49>
CC-MAIN-2024-38
https://www.keepersecurity.com/blog/2023/02/08/five-tips-for-data-breach-prevention/
2024-09-10T07:05:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00704.warc.gz
en
0.925804
1,085
2.6875
3
“Not all the data processing in our system is close to us” Didn’t understand what I meant? Let us just look at cloud computing and how data is accessed, used, and processed in it. Data is stored at a remote location and can be accessed from anywhere around the globe. But this feature of cloud computing has also led to some security threats. It makes the whole arrangement vulnerable, as all the data is stored and processed online. Data stored in the cloud is constantly at risk of being misused. It is prone to cyber-attacks and data leakages. Some way was needed to protect the data stored on the cloud from these cyber-attacks and threats. That is when “edge computing” came to light. What is Edge Computing? Edge computing is a networking paradigm that focuses on placing processing as close as feasible to the source of data to reduce latency and bandwidth usage. In layman's terms, edge computing means running fewer processes in the cloud and moving those processes to local locations, such as a user's PC, an IoT device, or an edge server. In simple words, Edge computing is a distributed IT architecture that brings computing resources from clouds and data centers as near to the source as possible. Its major purpose is to reduce latency while processing data and to lower network expenses. The most important thing about this network edge is that it should be geographically close to the device. (Related blog: Cloud computing guide) Now you must be thinking, how does this data come close to the device? How does this “Edge Computing” work? Let us understand how it works. Working of Edge Computing In traditional computing methods, the data is first displayed on the screen of a user and is then sent to the server using the internet, intranet, LAN, etc. Here, at the server, the data is stored and processed. But as this data kept increasing with the growth of the internet, and so did the number of devices connected to it, these data storage infrastructures started to seem incapable of storing that much data. According to a Gartner study, by 2025, 75% of company data will be generated outside of centralized data centers. This volume of data places tremendous pressure on the internet, causing congestion and interruption. To tackle this, the concept of “Edge Computing” was introduced. The idea behind edge computing is straightforward: rather than bringing data closer to the data center, the data center is moved closer to the data. The data center's storage and processing resources are installed as close as possible to where the data is generated (preferably in the same place). For example, an edge gateway (i.e., a device used to control the flow of data between two networks) can process data from an edge device and then send just the necessary data back to the cloud, minimizing bandwidth requirements (a minimal sketch of this appears at the end of this article). In the case of real-time application requirements, it can also send data back to the edge device. An IoT sensor, an employee's notebook computer, their latest smartphone, the security camera, or even the internet-connected microwave oven in the office break room are examples of edge devices. Within an edge-computing infrastructure, edge gateways are considered edge devices. (Must catch: What is Neuromorphic Computing? Working and Features) Let us try to compare the working of all the types of computing used to play with data. At first, only one isolated computer was used to store data. Later, data was stored either in the user's device or in data centers. With cloud computing, data is stored in data centers and is processed via the cloud.
With edge computing, data is brought close to the user, either to the user's device or to any nearby device. Now that we know about the working of Edge Computing, let us look at where it is used. Uses and Examples of Edge Computing Edge computing can be used in a wide range of products, services, and applications. Among the possibilities: to reduce latency and provide a completely responsive and immersive gaming experience, cloud gaming businesses are looking to install edge servers as close to gamers as feasible. (Related blog: What is virtualization in cloud computing?) All these uses suggest that there must be many benefits of edge computing over other ways of computing. Let us now look at the advantages and disadvantages of Edge Computing. Advantages and Disadvantages of Edge Computing Here are some of the most important advantages of Edge Computing: The time it takes to send data between two places on a network is referred to as latency. Delays can be caused by large physical distances between these two points, as well as network congestion. Latency difficulties are essentially non-existent thanks to edge computing, which brings the points closer together. The rate at which data is transported over a network is referred to as bandwidth. Because all networks have a finite amount of bandwidth, the amount of data that can be transferred and the number of devices that can process it are also constrained. Edge computing allows multiple devices to function over a much smaller and more efficient network by installing data servers at the places where data is created. Despite the fact that the Internet has changed through time, the sheer volume of data created every day by billions of devices can cause significant congestion. Local storage is available in edge computing, and local servers can execute critical edge analytics in the event of a network failure. (Also read: What is SaaS?) There are some drawbacks too. Here are some of the major drawbacks of Edge computing: Implementing an edge infrastructure in a company may be both complicated and costly. Before deployment, it requires a clear scope and goal, as well as extra equipment and resources. Incomplete data is another drawback: edge computing can only process subsets of data, which should be determined in advance of implementation. Companies may lose crucial data and information as a result of this. Security is a further concern: because edge computing is a distributed system, it might be difficult to ensure proper security. Processing data outside of the network's edge has several hazards. The adoption of edge computing has ushered in a new era of data analytics. This technology is being used by an increasing number of businesses for data-driven operations that require lightning-fast results. Although it's still developing, the future holds a lot for Edge Computing. If the drawbacks are put aside, Edge seems like a perfect replacement for data centers, which is no small thing. So, let us wait and watch what the future holds for Edge Computing.
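To make the edge-gateway idea above concrete, here is a hedged, minimal Python sketch of a gateway that aggregates raw sensor readings locally and forwards only a small summary upstream; the sensor values and the send_to_cloud stub are invented for illustration.

```python
from statistics import mean

def send_to_cloud(summary: dict) -> None:
    # Placeholder for an upload over HTTPS/MQTT; a real gateway would batch and retry.
    print("uploading summary:", summary)

def process_readings(readings: list[float], alert_threshold: float = 80.0) -> None:
    """Keep raw data at the edge; ship only an aggregate and any alerts upstream."""
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": [r for r in readings if r > alert_threshold],
    }
    send_to_cloud(summary)  # a few bytes instead of the full raw stream

# Simulated temperature readings collected locally by an edge device
process_readings([71.2, 69.8, 70.5, 85.3, 72.1])
```

Only the summary crosses the network, which is the bandwidth saving described above; the raw readings stay on the local device.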
<urn:uuid:470edebe-3853-48e6-a3ba-4ea446f7400a>
CC-MAIN-2024-38
https://www.analyticssteps.com/blogs/what-edge-computing-working-and-benefits
2024-09-12T20:29:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00504.warc.gz
en
0.944468
1,347
3.59375
4
A SQL Server deadlock is a special concurrency problem in which two transactions block the progress of each other. The first transaction has a lock on some database object that the other transaction wants to access, and vice versa. (In general, several transactions can cause a deadlock by building a circle of dependencies.) Example 1 below shows the deadlock situation between two transactions. The parallelism of processes cannot be achieved naturally using the small sample database, because every transaction in it is executed very quickly. Therefore, Example 1 below uses the WAITFOR statement to pause both transactions for ten seconds to simulate the deadlock. If both transactions in Example 1 are executed at the same time, the deadlock appears and the system returns the following output: As the output of Example 1 shows, the database system handles a deadlock by choosing one of the transactions as a “victim” (actually, the one that closed the loop in lock requests) and rolling it back. (The other transaction is executed after that.) A programmer can handle a deadlock by implementing the conditional statement that tests for the returned error number (1205) and then executes the rolled-back transaction again. You can affect which transaction the system chooses as the “victim” by using the DEADLOCK_PRIORITY option of the SET statement. There are 21 different priority levels, from -10 to 10. The value LOW corresponds to -5, NORMAL (the default value) corresponds to 0, and HIGH corresponds to 5. The “victim” session is chosen according to the session’s deadlock priority.
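The retry logic described above can be sketched roughly as follows. This is not the book's own example: it is a hedged Python illustration using the pyodbc driver, and the connection string, SQL statements, and retry limit are placeholders.

```python
import time
import pyodbc

def run_transaction_with_retry(conn_str: str, statements: list[str], max_retries: int = 3) -> None:
    """Run a transaction, retrying if SQL Server rolls it back as a deadlock victim (error 1205)."""
    for attempt in range(1, max_retries + 1):
        conn = pyodbc.connect(conn_str, autocommit=False)
        try:
            cur = conn.cursor()
            for stmt in statements:
                cur.execute(stmt)
            conn.commit()
            return
        except pyodbc.Error as exc:
            conn.rollback()
            # SQL Server reports deadlock victims with native error 1205 in the message.
            if "1205" in str(exc) and attempt < max_retries:
                time.sleep(attempt)  # brief back-off before re-running the victim transaction
                continue
            raise
        finally:
            conn.close()
```

Setting DEADLOCK_PRIORITY LOW on the less important session, as described above, is a complementary way to influence which transaction becomes the victim.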
<urn:uuid:3d70d19d-7ce5-413e-a81b-a727c9694c85>
CC-MAIN-2024-38
https://logicalread.com/understanding-sql-server-deadlocks-mc03/
2024-09-14T00:40:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00404.warc.gz
en
0.902108
338
3.171875
3
What do Jeffrey Dahmer, Ted Bundy, Wayne Gacy, Dennis Rader, and Frank Abagnale all have in common, aside from the obvious fact that they are all criminals? They are also all master manipulators who utilize the art of social engineering to outwit their unsuspecting victims into providing them with the object or objects that they desire. They appear as angels of light but are no more than ravenous wolves in sheep’s clothing. There are six components of an information system: Humans, Hardware, Software, Data, Network Communication, and Policies, with the human being the weakest link of the six. Phishing is an age-old process of scamming a victim out of something by utilizing bait that appears to be legitimate. Prior to the age of computing, phishing was conducted mainly through chain mail but has evolved over the years in cyberspace via electronic mail. One of the most popular phishing scams is the Nigerian 419 scam, which is named after the Nigerian criminal code that addresses the crime. Information security professionals normally eliminate the idea of social norms when investigating cybercrime. Otherwise, you will be led into morose mole tunnels going nowhere. They understand that the social engineering cybercriminal capitalizes on unsuspecting targets of opportunity. Implicit biases can lead to the demise of the possessor. Human behavior can work to your disadvantage if left unchecked. You profile one while unwittingly becoming a victim of the transgressions of another. These inherent and natural tendencies can lead to breaches of security. The most successful cybersecurity investigators have a thorough understanding of the sophisticated criminal mind. Victims of social engineering often feel sad and embarrassed. They are reluctant to report the crime depending on its magnitude. And then the CISO comes to the rescue! In order to get to the root cause of the incident and determine the damage caused to the enterprise, the CISO must put the victim at ease by letting them know that they are not alone in their unwitting entanglement. These are some tips that can assist you with an anti-social engineering strategy for your enterprise: Employ sociological education tools by developing a comprehensive Information Security Awareness and Training program addressing all six basic components that make up the information system. The majority of security threats that exist on the network are a direct result of insider threats caused by humans, whether they are unintentional or deliberate. The most effective way an organization can mitigate the damage caused by insider threats is to develop an effective security awareness and training program that is ongoing and mandatory. Deploy enterprise technological tools that protect your human capital against themselves. Digital Rights Management (DRM) and Data Loss Prevention (DLP) serve as effective defensive tools that protect against the exfiltration of enterprise data in the event that it falls into the wrong hands. On a macro level, local and universal government agencies must be seamlessly collaborative in addressing this cybercrime and bringing cybercriminals to justice. Security is everyone’s business. So, let’s together create a ubiquitous culture of informed secure consumers. Each one reaching one, each one teaching one. Lifting as we climb. We can change the world one head at a time! About the Author Zachery S. Mitcham, MSA, CCISO, CSIH is the VP and Chief Information Security Officer at SURGE Professional Services-Group.
He is a 20-year veteran of the United States Army, where he retired as a Major. He earned his BBA in Business Administration from Mercer University's Eugene W. Stetson School of Business and Economics. He also earned an MSA in Administration from Central Michigan University. Zachery graduated from the United States Army School of Information Technology, where he earned a diploma with a concentration in systems automation. He completed a graduate studies professional development program, earning a Strategic Management Graduate Certificate at the Harvard University Extension School. Mr. Mitcham holds several computer security certificates from various institutions of higher education, including Stanford, Villanova, and Carnegie Mellon Universities and the University of Central Florida. He is certified as a Chief Information Security Officer by the EC-Council and a Certified Computer Security Incident Handler from the Software Engineering Institute at Carnegie Mellon University. Zachery received his Information Systems Security Management credentials as an Information Systems Security Officer from the Department of Defense Intelligence Information Systems Accreditations Course in Kaiserslautern, Germany. Views expressed in this article are personal. The facts, opinions, and language in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.
<urn:uuid:c63def79-a490-4cf5-860f-dd7582f9b4a6>
CC-MAIN-2024-38
https://cisomag.com/social-engineering-phishing/
2024-09-15T06:46:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00304.warc.gz
en
0.939441
923
2.890625
3
Data from four NOAA-operated weather satellites located roughly 22,300 miles above the Earth could help federal agencies improve how they detect and track wildfires under a recently announced agreement. The collaborative agreement between the National Oceanic and Atmospheric Administration, the Department of the Interior, and the Department of Agriculture’s U.S. Forest Service was announced July 23, and serves as a step toward developing specific fire-tracking tools with the needs of those agencies in mind. It builds upon NOAA’s existing work to modernize and automate fire detection. “This agreement in particular enables very targeted co-development of capabilities that will maximize the exploitation of NOAA satellites for detecting fires as early as possible and just tracking them better over time so you can make better decisions,” Michael J. Pavolonis, program manager of the Wildland Fire Program in NOAA’s National Environmental Satellite, Data, and Information Service, told FedScoop. Pavolonis said that will be particularly helpful “when you have a very busy burning period like we have right now.” Wildfire season generally happens in the summer and fall, according to USDA. But over the past several decades that season has expanded. Since the 1970s, it’s grown from five months to over seven months, which has been a product of factors related to climate change. Just this summer, a fire in New Mexico killed two people, burned more than 17,000 acres, and destroyed roughly 1,400 structures, according to NOAA. The new agreement specifically takes advantage of NOAA’s Geostationary Operational Environmental Satellite R series satellites, commonly known as GOES-R. The fleet is the most current version of NOAA’s weather satellites that are “geostationary,” which means their orbit matches the rotation of the earth so they stay hovering over a fixed location. The first of the satellites launched in 2016 and the fourth and final installment launched in June, reaching its orbit in July. Those satellites can “not only capture the evolution of weather in great detail, but they can also sense the heat from fires,” Pavolonis said. While satellite imagery like what’s broadcast on the news during a fire typically shows heat and smoke that can be seen with the naked eye, Pavolonis said that to take advantage of that information, there’s a need for automation that can find and track the blaze and provide decision-makers with those details. Already NOAA’s satellite service has built a tool to provide some of those automated capabilities called the Next Generation Fire System, which it recently tested in its Fire Weather Testbed. That test, Pavolonis said, demonstrated that automated satellite-based fire detection from GOES-R can be used to identify fires more quickly. That testbed could also be used to eventually evaluate the tools that are created as part of the agreement. In an emailed statement, an Interior spokesman told FedScoop the agreement will mean “systematically incorporating” data from the NOAA satellites into wildfire response and expanding the use of that data for early detection and to provide “near-continuous monitoring of fire intensity.” “This initiative will leverage new technology like artificial intelligence to analyze, detect, and provide alerts regarding new wildfire ignitions and critical growth of existing fires, such as when the fire moves toward a community that should be evacuated,” the Interior spokesman said. 
“In conjunction with location-based services, it will also help to identify when a fire begins to shift toward firefighters on the ground to enable them to reposition.” The work under the agreement stems from actions under the 2021 Bipartisan Infrastructure Law, from which it draws its funding. It comes after a commission on fire mitigation established under that law made dozens of recommendations to Congress, including expanding the use of fire detection systems and more collaboration. The initiative is also supported by $20 million under the law, which is split evenly between Agriculture and Interior, according to a release from the Department of Commerce announcing the agreement. Notably, both the Next Generation Fire System and the Fire Weather Testbed at NOAA were also funded by the Bipartisan Infrastructure Law. In a statement at the time of the announcement, Commerce Secretary Gina Raimondo said the administration’s investments have allowed NOAA “to significantly improve fire weather forecasts to keep communities safe.” “This new partnership, a result of the most ambitious climate agenda in history, will help expand our ability to detect wildfires, improve response times, and keep families in western communities out of harm’s way,” Raimondo said.
<urn:uuid:437c3d22-79c7-4084-b427-125eea174d57>
CC-MAIN-2024-38
https://fedscoop.com/noaa-teams-up-with-agriculture-interior-on-satellite-powered-wildfire-detection/
2024-09-15T06:49:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00304.warc.gz
en
0.956421
961
3.34375
3
To err is human: the role of AI in cybersecurity By Wai Kit Cheah, senior director of APAC Products and Practices at Lumen Technologies Thursday, 26 October, 2023 Like many other nations, Australia has not been immune to a surge in data breaches and cybersecurity incidents, impacting both public and private sectors. Compounding financial and reputational losses, these incidents have exposed sensitive personal and financial information, leaving decision-makers to rethink their approach to cybersecurity measures. Amid evolving cyberthreats, organisations are increasingly exposed to risks like phishing, social engineering, DDoS attacks and ransomware. It’s easy to assume that the greatest threats to our digital security are lurking in the dark corners of the internet, carried out by sophisticated hackers and malicious software. However, the truth is that most cyber incidents are not the result of complex code or impenetrable firewalls being breached; rather, they are caused by human error. The proliferation of artificial intelligence (AI) in the workplace has also introduced new avenues for human error in safeguarding sensitive information, creating an even larger environment for cybersecurity. However, AI is also providing a remedy to complement cybersecurity measures, particularly when it comes to data privacy. Despite this, more than half of Australians (57%) believe AI creates more problems than it solves. As data-fuelled technologies like AI and Machine Learning (ML) enter the mainstream, stewards of customer data now have dual responsibilities. They must build privacy and security into products and processes, while keeping that same data accessible and useful. So how can businesses take a more holistic approach when it comes to transforming security while also adapting new AI processes? AI-driven proactive defence is better than a cure AI already permeates every facet of our digital lives, eliminating tedious manual tasks from email filtering to identifying anomalous behaviours through advanced threat detection capabilities. Fuelled by data, it has enabled a shift from reactive to proactive responses to cybersecurity. For example, by applying AI and ML, organisations can analyse data to predict system and application outages or anomalies even before they happen. Security teams can then use this knowledge to create an algorithm — or a set of instructions — to remedy the problem or start an alert process. When it comes to cybersecurity incidents, people are our Achilles heel and Australian business leaders cannot afford to overlook the profound impact of human error as the predominant catalyst behind major cybersecurity challenges. In security terms, human error is defined as unintended actions or lack of action by employees and users which cause or spread cybersecurity incidents. For example, a hacker pretending to be an employee may persuade a technical support member to reveal a password, or a seemingly friendly email may contain suspicious links. These incidents not only cause major headaches for under-resourced IT teams, but can lead to serious data breaches. Cybercriminals perpetually seek vulnerabilities or loopholes within an organisation’s networks and applications. Frequently, the weakest link can be attributed to human errors, fragile authentication systems, or inadequately safeguarded assets. Of course, no organisation can protect itself completely from cyberthreats — companies must all acknowledge and assume that their systems will be breached in some way. 
To transform security comprehensively and integrate AI processes seamlessly, businesses must recognise the fundamental role of human error in cybersecurity threats. This demands a holistic approach that combines cutting-edge AI technologies with robust cybersecurity training and awareness programs, ultimately ensuring a safer and more resilient digital future for individuals and organisations across Australia and beyond. Addressing the ‘three Ms’ of cybercrime The majority of cybercrimes stem from the ‘three Ms’: mistakes, misconfigurations and mismanagement. Often more destructive and challenging to control, human error can lead to unsuspecting employees falling victim to phishing attacks, resulting in data leaks. For instance, authentic-looking emails containing dangerous attachments or hyperlinks often do the trick. Misconfigurations that exist in software subsystems or components are yet another human factor that exposes organisations to cyberthreats. Examples include running outdated software or unnecessary features and services, and glitches or gaps in setting up cloud services or access controls. Finally, the industry grapples with mismanagement of data and security as cloud usage surges. Cyber attacks stemming from security mismanagement often emerge when IT security teams exclusively depend on passive or reactive measures, like firewalls and anti-malware systems, to address cybersecurity concerns. For most businesses today, the first step to navigate the complexities of managing cybersecurity in their business is to start with awareness and training. It is crucial for businesses to create a security culture that not only protects themselves, but also their customers’ data from both internal and external risks. Needless to say, security awareness training and simulated phishing security tests are crucial in helping enterprises today reinforce their ‘human firewall’ through regular employee education on risk awareness and vigilance. There has been a significant shift in data privacy dynamics, with technology now having the dual role of using and protecting data. AI-based automation plays a pivotal role in enhancing privacy protections, alongside emerging technologies like homomorphic encryption, differential privacy and federated learning. These technologies empower businesses to handle demanding tasks with unmatched speed and accuracy, placing less pressure on IT security teams. By embracing these innovations and adopting a comprehensive cybersecurity awareness strategy, Australian organisations are better placed to navigate an evolving digital landscape, ensuring data protection and sustaining trust among their customers.
<urn:uuid:740d55af-6b21-4575-b250-6cb3eb569c6e>
CC-MAIN-2024-38
https://www.technologydecisions.com.au/content/security/article/to-err-is-human-the-role-of-ai-in-cybersecurity-953897888
2024-09-17T20:06:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00104.warc.gz
en
0.928923
1,160
2.515625
3
Switches are among the most important pieces of network equipment in a computer network. A switch is the main device of a LAN (Local Area Network). In this lesson, we will look at switches. You can also check the wiki definition here. A switch is generally a Layer 2 (Data-Link Layer) device. But this is true only for Layer 2 switches. There are also Multilayer Switches, which act as a combined switch and router across multiple layers (Layers 2, 3, and 4). But in the basic definition, when we hear the keyword switch, we generally think of Layer 2 switches. In the hardware of switches, ASICs (Application-Specific Integrated Circuits) are used, so they are very fast devices. A switch has multiple ports (8, 16, 24, 48, etc.). These ports allow the network to be extended. Switches operate with MAC addresses. Each of the ports has a specific MAC address, because each of them is also a NIC (Network Interface Card) at the same time. If we look at the characteristics of a switch, the main ones are as follows: A switch itself is one Broadcast Domain by default. But if you use VLANs, then each VLAN becomes a separate Broadcast Domain, because each VLAN is a separate Logical Network. A switch has as many Collision Domains as it has ports; each port is one Collision Domain. To learn more about Collision Domain and Broadcast Domain, you can check the related lesson. Switches are smart devices. They save records in tables, and according to these tables they forward traffic. These tables are called MAC Tables. They store MAC addresses and the related ports. In other words, they record which device is connected through which port. You can see the example below for MAC Table operation.
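As a rough, hedged stand-in for that example, here is a minimal Python sketch of how a switch might populate its MAC Table and decide where to forward a frame; the MAC addresses and port numbers are made up for illustration.

```python
mac_table: dict[str, int] = {}  # learned MAC address -> switch port

def handle_frame(src_mac: str, dst_mac: str, in_port: int) -> None:
    """Learn the source address, then forward out one port or flood."""
    mac_table[src_mac] = in_port          # learning: remember which port this device is on
    out_port = mac_table.get(dst_mac)
    if out_port is None:
        print(f"{dst_mac}: unknown, flooding out all ports except {in_port}")
    elif out_port == in_port:
        print(f"{dst_mac}: same port as sender, frame filtered")
    else:
        print(f"{dst_mac}: forwarding out port {out_port}")

handle_frame("AA:AA:AA:AA:AA:01", "AA:AA:AA:AA:AA:02", in_port=1)  # unknown -> flood
handle_frame("AA:AA:AA:AA:AA:02", "AA:AA:AA:AA:AA:01", in_port=2)  # now both are learned
handle_frame("AA:AA:AA:AA:AA:01", "AA:AA:AA:AA:AA:02", in_port=1)  # forwarded out port 2
```

Real switches age these entries out and keep a separate table per VLAN, but the learn-then-forward-or-flood behaviour is the core of MAC Table operation.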
<urn:uuid:59982d40-b0fa-4b6e-a4cf-6ff8c7b4b891>
CC-MAIN-2024-38
https://ipcisco.com/lesson/switches/
2024-09-08T04:17:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00104.warc.gz
en
0.929562
382
4.46875
4
Spam can now enter your mobile devices from multiple directions. It could appear as annoying emails, texts, or phone calls. Previously, email was the digital equivalent of postal mail, but spam now reaches your phone number as well. This means that you may also get spam SMS and calls. The spam problem is harder to pin down than a chameleon constantly changing its color. The good news is that you can take precautions to guard against spammers and scammers who aim to steal your data. By reading this article, you can learn how to stop spam text from email. What are Spam Texts Through Email? Spam is unwanted and undesired communication. It originates from computers and is typically sent to your phone via email or messaging apps. Scammers can send it easily and at a low cost. It can be difficult to answer calls or get messages from unfamiliar numbers on your phone. You might be surprised to learn, though, that occasionally, these messages don’t even originate from a phone number. Did you know that a text or call can be sent from any email address? Yes, that is accurate. Additionally, they may appear to be ordinary messages or phone calls, except that the contact information has an email address rather than a phone number. However, the truth is that these emails can carry just as much risk as any other spam message. How to Spot Scam Text Messages from Emails One of the things that people dislike the most about email is spam. Sometimes, like with bulk advertising, it is simple but annoying. The spam emails with computer viruses or attempts to mislead you into schemes are much worse. You may help protect yourself by becoming aware of the most typical forms of spam. The process of spotting scam messages is given below. A Strange Sender Can Be a Sign of Spam There are instances when you can identify spam in emails without even opening them. You are aware of the kinds of emails you receive from your email provider and the newsletters you have signed up for. Spam may also come from the websites you visit and, of course, your friends and relatives. As a result, you should never reply to an email from someone you’ve never heard of. It’s important to remember that proficient spammers may fake the sender’s name to make users believe they are receiving an email from Amazon, LinkedIn, or another respectable business. Placing your mouse pointer over the sender’s name and comparing the email address is a way to know whether it’s fake. Be careful to check for little errors that could indicate the legitimacy of an address, such as paypol.com instead of paypal.com. Additionally, make sure the email domain name is correct. For instance, an email from Amazon will originate from an official Amazon.com address rather than a private one like @gmail.com (a small sketch of this kind of check appears at the end of this article). Grammatical and Spelling Errors Not every fraudster speaks English well. Check the text you receive for spelling and grammar mistakes. If the sender uses misspelled words or garbled sentences, you should block them. Usually, scammers send texts with attractive offers to trick you into clicking on a dangerous link or disclosing your credit card number or login credentials. Don’t be tempted the next time you receive a message offering a $99 Samsung Galaxy S24. You may receive an SMS offering you the chance to pay down your student debt or win money by completing an online form. Avoid responding to such texts.
Prompting Immediate Action Take caution when you receive text messages requesting that you pay a custom duty on goods you do not recall ordering immediately. Watch for notifications that demand that you pay your credit card or electricity account immediately. These kinds of emails are used by scammers to obtain your bank login credentials. How to Stop Spam Text from Email To stop spam emails, unsubscribe from undesired spam email campaigns, and report and block spam. However, you’ll also need to modify your internet habits and privacy settings to stop receiving spam emails permanently. Let’s look at some steps you should take to stop spam text from email. Report the Suspicious Email as Spam Even if you manually delete unwanted emails, spammers can still send you more in the future. Furthermore, it cannot shield you from viruses or other malware that may be concealed in spam emails. A malware removal program is required for this purpose. Reporting spam is essential if you want to stop it successfully, as it teaches your email client which email addresses to ban and how to filter spam in general. Once you’ve trained your email to detect and block phishing emails from Amazon and other sources, you can learn how to identify and block spam emails. Spammers might be able to avoid detection by spam email filters, but they might not be able to trick a skilled observer. You should ban dangerous, phony, or constant spam email addresses even while filtering some of them. Remember to report any online risks or scams, including phishing schemes targeting Apple IDs. Delete Spam Emails When dealing with spam emails, the golden rule is to delete them without clicking or downloading anything. These emails might include software that notifies the sender that you have opened it, verifying that your account is active. This could result in receiving additional spam emails. Some malicious software can steal your email address and send spam again, using it as a front for an actual address. For instance, frauds may pretend to be someone you know, such as a friend, family member, or coworker. Reach out to the person if the communication seems to be from someone you know, even if it’s not by email. Block Spam Email Addresses Blocking undesirable emails permanently stops spam from reaching that email address. However, be cautious because opening certain spam emails can result in an inbox full of further trash emails from numerous spam accounts. Keep Your Email Address Private If you disclose your email address, you may receive more spam emails. Thus, if sharing is unnecessary, keep it confidential and change your email privacy settings. Change Your Email Privacy Settings It’s nice to stop spammers from sending you many unwanted junk emails. Although, it’s preferable to adjust your email privacy settings and behaviors to minimize your exposure to spam in the first place. Remain as private as possible while sending emails. This entails exercising caution regarding the persons and institutions to whom you provide your address. Additionally, to better protect your mailbox, use two-factor authentication. Here’s how to change your email privacy settings to stop spam and prevent your address from being accessible to advertising and other outside parties. Unsubscribe from Unwanted Newsletters or Mailing Lists You may also be receiving spam because, whether you realize it or not, you decided to sign up to receive it. 
One of the best ways to reduce spam in your inbox is to remove your email address from spam subscription lists. Start by opening your email and typing unsubscribe into the search field; this surfaces the newsletters and mailing lists you are signed up to. You can then choose which ones you no longer want to receive, or never open, by clicking the unsubscribe link at the bottom of the email. Even from respectable services, many newsletters and advertising campaigns make it difficult to discover the unsubscribe button. However, if you carefully scroll, you will locate it. If not, you can automatically unsubscribe by marking the email as spam. How do Spammers Collect My Email Address? Spammers gather email addresses using various approaches. These techniques include buying lists and utilizing software to predict addresses based on common naming patterns. They can also exploit security flaws and scrape websites. Is it Okay to Open Spam Email? You won’t usually harm yourself or your device by accidentally opening a spam email that you didn’t recognize as spam. It takes only a single careless click, however, to initiate a download or be directed to a phony website. The risk arises when you open a malicious attachment, click on phishing links, or reply to a message with private information (such as your credit card number). Thus, if you read an email and discover it contains spam, transfer it immediately to your spam folder. Is it Safe to Unsubscribe from Spam Emails? Unsubscribing from spam emails is usually not safe. It signals to the sender that your account is still active, which motivates them to send more junk. Instead, file the communication as spam and set up filters to route any more correspondence from that address to your spam folder. Final Words from Us Knowing how to stop spam text from email is essential in today’s digital world to preserve online safety and mental peace. Crown Computers has a long history of providing high-quality information technology solutions. Through our collaborations with the biggest brands in the market, including Microsoft, Ubiquiti, Sophos, HP, Dell, Acronis, and Sentinel One, we guarantee that our clients have access to the best IT solutions available to fight spam emails. Embrace an inbox free of spam and enjoy the digital calm we all deserve. Let the professionals keep your digital space clean while you stay informed and safe.
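As a hedged illustration of the "check the sender's domain" advice above, here is a minimal Python sketch that flags a sender address whose domain does not exactly match the domain you expect; the addresses used are made up, and a real mail filter would do far more than this.

```python
from email.utils import parseaddr

def sender_looks_suspicious(from_header: str, expected_domain: str) -> bool:
    """Return True when the sender's domain is not exactly the expected one."""
    _, address = parseaddr(from_header)          # e.g. 'Amazon <no-reply@amaz0n-deals.com>'
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain != expected_domain.lower()

print(sender_looks_suspicious("Amazon <no-reply@amazon.com>", "amazon.com"))        # False
print(sender_looks_suspicious("Amazon <no-reply@amaz0n-deals.com>", "amazon.com"))  # True
```

A check like this only catches exact mismatches; lookalike domains such as paypol.com still require the careful reading recommended above.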
<urn:uuid:a70306ba-fc60-45c7-b06c-367c0f134451>
CC-MAIN-2024-38
https://www.crowncomputers.com/how-to-stop-spam-text-from-email/
2024-09-10T09:48:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00804.warc.gz
en
0.945967
1,889
2.828125
3
Multi-programming is a method used in computer operating systems to execute multiple programs simultaneously. This approach enhances the utilization of CPU resources by managing the execution of more than one program at a time, allowing for increased efficiency and performance. Overview of Multi-Programming Multi-programming is a fundamental concept in operating systems that allows for the concurrent execution of multiple programs. This technique maximizes the utilization of the CPU by ensuring that it is always busy performing tasks. When one program is waiting for I/O operations to complete, the CPU can switch to execute another program, thus optimizing system performance. Benefits of Multi-Programming - Increased CPU Utilization: Multi-programming ensures that the CPU is always engaged by switching between tasks, which reduces idle time. - Better Resource Management: By running multiple programs concurrently, system resources such as memory and I/O devices are used more efficiently. - Reduced Turnaround Time: The total time taken to execute a set of programs is reduced since multiple programs progress simultaneously. - Enhanced System Throughput: With multiple programs running at the same time, the system can process more work in a given period. - Improved System Responsiveness: By allowing multiple programs to share CPU time, the system can respond more quickly to user inputs and other events. How Multi-Programming Works In a multi-programming system, the operating system is responsible for managing the execution of multiple programs. Here’s a step-by-step overview of how it works: - Job Scheduling: The operating system maintains a job queue where multiple programs are stored. Job scheduling algorithms determine the order in which programs are loaded into memory and executed. - Memory Management: Programs are loaded into different parts of memory. The operating system uses techniques such as paging and segmentation to manage memory efficiently. - Context Switching: When a program requires I/O operations, the CPU switches to another program that is ready to execute. This switch is known as context switching, and it involves saving the state of the current program and loading the state of the next program. - I/O Management: While one program is waiting for I/O operations, the CPU can execute another program. This overlap of CPU and I/O operations enhances system performance. - Synchronization and Communication: The operating system ensures that programs can communicate and synchronize with each other, particularly when sharing resources. Types of Multi-Programming Systems - Batch Processing Systems: In batch processing systems, programs are collected into batches and executed sequentially. Multi-programming in such systems allows multiple batches to be processed simultaneously, enhancing throughput. - Time-Sharing Systems: Time-sharing systems allow multiple users to interact with the computer simultaneously. Multi-programming in time-sharing systems enables each user to run their programs concurrently, providing the illusion of a dedicated system. - Real-Time Systems: Real-time systems require immediate processing of input data. Multi-programming in real-time systems ensures that critical tasks are given priority and processed within strict time constraints. Features of Multi-Programming - Concurrency: Multiple programs run concurrently, with the CPU switching between them. 
- Efficient CPU Utilization: The CPU is kept busy by executing different programs while others are waiting for I/O operations. - Resource Sharing: Programs share system resources such as memory, I/O devices, and CPU time. - Inter-process Communication (IPC): Programs communicate with each other through IPC mechanisms to synchronize their operations. - Scheduling Algorithms: Various scheduling algorithms are employed to manage the execution order of programs and ensure efficient CPU utilization. Challenges in Multi-Programming - Deadlocks: Deadlocks occur when two or more programs are waiting indefinitely for resources held by each other. The operating system must implement strategies to detect and resolve deadlocks. - Resource Contention: When multiple programs compete for limited resources, performance can degrade. Effective resource management techniques are essential to mitigate this issue. - Context Switching Overhead: Frequent context switching can lead to overhead, reducing overall system performance. Optimizing context switching is crucial to maintain efficiency. - Complexity in Synchronization: Ensuring that programs synchronize correctly and share resources without conflicts adds complexity to system design. Use Cases of Multi-Programming - Scientific Computing: Multi-programming allows researchers to run multiple simulations and data analysis programs concurrently, speeding up scientific discovery. - Business Applications: Businesses can run multiple applications such as databases, accounting software, and customer management systems simultaneously, enhancing productivity. - Web Servers: Web servers handle multiple client requests concurrently, providing quick responses and high availability. - Embedded Systems: Multi-programming in embedded systems allows for the concurrent execution of control and monitoring tasks, improving system reliability. Multi-Programming vs. Multi-Tasking While multi-programming and multi-tasking are often used interchangeably, there are key differences between the two: - Multi-Programming: Focuses on maximizing CPU utilization by managing the execution of multiple programs. It primarily deals with batch processing and time-sharing systems. - Multi-Tasking: Extends the concept of multi-programming by allowing users to interact with multiple applications simultaneously. It is more user-centric and is prevalent in modern operating systems like Windows and macOS. Frequently Asked Questions Related to Multi-Programming What is multi-programming in operating systems? Multi-programming is a method used in operating systems to run multiple programs simultaneously. This approach enhances CPU utilization by managing the execution of more than one program at a time, allowing for increased efficiency and performance. How does multi-programming improve CPU utilization? Multi-programming improves CPU utilization by ensuring the CPU is always busy. When one program is waiting for I/O operations, the CPU can switch to execute another program, thus optimizing system performance and reducing idle time. What are the benefits of multi-programming? Benefits of multi-programming include increased CPU utilization, better resource management, reduced turnaround time, enhanced system throughput, and improved system responsiveness. What is the difference between multi-programming and multi-tasking? While both involve running multiple programs, multi-programming focuses on maximizing CPU utilization by managing the execution of multiple programs. 
In contrast, multi-tasking allows users to interact with multiple applications simultaneously, extending the concept of multi-programming to a user-centric approach. What are the challenges of multi-programming? Challenges of multi-programming include dealing with deadlocks, resource contention, context switching overhead, and the complexity of synchronization to ensure programs share resources without conflicts.
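To make the context-switching behaviour described above concrete, here is a hedged, minimal Python sketch that simulates a CPU switching to another ready job whenever the running job blocks on I/O; the job names and burst lengths are invented for illustration and ignore real scheduler details.

```python
from collections import deque

# Each job is a (name, bursts) pair; a burst is ("cpu", ticks) or ("io", ticks).
ready = deque([
    ("editor",   [("cpu", 2), ("io", 3), ("cpu", 1)]),
    ("compiler", [("cpu", 4), ("io", 1), ("cpu", 2)]),
])

def run(ready: deque) -> None:
    """Rough multi-programming loop: run a job until it blocks on I/O, then switch."""
    while ready:
        name, bursts = ready.popleft()
        kind, ticks = bursts.pop(0)
        if kind == "cpu":
            print(f"CPU runs {name} for {ticks} tick(s)")
        else:
            # The job starts its I/O and gives up the CPU; we assume the I/O finishes
            # by the time the job reaches the front of the ready queue again.
            print(f"{name} waits on I/O for {ticks} tick(s); CPU switches to another job")
        if bursts:
            ready.append((name, bursts))   # context switch: job re-joins the queue
        else:
            print(f"{name} finished")

run(ready)
```

In a real operating system the scheduler also tracks I/O completion interrupts, priorities, and time slices, but the basic pattern of parking a blocked job and running another is the same.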
<urn:uuid:e5a5bfe2-a12f-4975-8ce6-09701f51b5f2>
CC-MAIN-2024-38
https://www.ituonline.com/tech-definitions/what-is-multi-programming/
2024-09-12T21:31:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00604.warc.gz
en
0.923156
1,394
3.734375
4
Cybercrime is now considered a top risk to most enterprises. Many organizations seek to build security by adding tools and processes on top of their established operations. It’s important for these organizations to take different approaches and see what results in greater momentum and more effective investment. What is Cloud Computing? Cloud computing is defined as the use of a collection of services, applications, information and infrastructure composed of pools of computer, network, information and storage resources. These components can be rapidly orchestrated, provisioned, implemented, decommissioned and scaled up or down, providing for an on-demand, utility-like model of allocation and consumption. Corporations today are thinking about how to protect assets. A few of the white-collar crime problems include hacking/intrusions (cyber vulnerability), insider/outsider trading (convergence of cyber and financial crimes), Foreign Corrupt Practices Act (FCPA) violations, spear phishing (email compromise) and economic espionage. They must consider the possibility of internal corruption or external corruption, and environmental factors such as culture and competition contributing to these crimes. As protection, organizations can use cyber security, pen testing and data loss prevention tactics.
<urn:uuid:35367f82-829e-4290-969d-828cecdcc84d>
CC-MAIN-2024-38
https://info.knowledgeleader.com/topic/cybersecurity
2024-09-15T10:05:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00404.warc.gz
en
0.929572
244
2.625
3
Over the past year, there has been a significant surge in DDoS attacks, largely attributed to the global pandemic. With a vast number of individuals working remotely and relying heavily on online services through remote connections, the vulnerabilities in digital infrastructure became more apparent. According to reports, around 10 million DDoS attacks were launched against essential online services. The DDoS attacks wreaked havoc on e-commerce, healthcare, streaming services, and remote learning, disrupting business operations and enabling extortion by hackers. What is a DDoS attack? A Distributed Denial-of-Service (DDoS) attack is a malicious attack intended to disrupt the regular traffic of a targeted network, service, or server by flooding the target or its surrounding infrastructure with Internet traffic. It involves networks of connected devices, known as botnets, used to flood target websites with malicious traffic. How does a DDoS attack work? DDoS attacks are carried out with networks of Internet-connected machines. These networks include systems and other IoT devices infected with malware and controlled remotely by an attacker. The individual devices are known as bots, and a group of bots is called a botnet. Once a botnet has been established, the attacker can attack by sending remote instructions to each bot. When the botnet targets a victim’s network or server, each bot sends requests to the target’s IP address, which can overwhelm the server or network, resulting in a denial of service to regular traffic. As each bot is a legitimate Internet device, separating the attack traffic from regular traffic can be difficult. Best practices to prevent DDoS attacks: Following the best practices and measures can mitigate the potential impacts of a DDoS attack. 1. Create a DDoS response plan Develop an incident response plan that ensures staff members respond promptly and effectively. This plan should include the following: 2. Continuous monitoring of traffic Using continuous monitoring (CM) to analyze traffic in real time is the best way to identify DDoS attacks (a minimal monitoring sketch appears at the end of this article). The following are the benefits of continuous monitoring: 3. Limit network broadcasting A hacker behind a DDoS attack will send requests to every device on your network in order to amplify its impact. The security team can counter this tactic by restricting network broadcasting between devices. 4. Ensure high levels of network security High-level network security is essential for preventing any DDoS attack attempt. Identifying a DDoS early on is vital to controlling the blast radius. The following types of network security help protect your business from DDoS attempts: 5. Identify the signs of a DDoS attack If your security team can rapidly identify the signs of a DDoS attack, you will be able to take the necessary precautions and limit the damage. Common signs of a DDoS attack include: A low-volume, short-duration attack often goes unnoticed as a random occurrence, and such attacks can be a test or a distraction for more dangerous attacks (such as ransomware). As a result, detecting a low-volume attack is essential. 6. Make use of server redundancy Using multiple distributed servers makes it difficult for a hacker to attack all servers simultaneously. If an attacker successfully launches a DDoS attack on a single hosting device, other servers remain unaffected and accept additional traffic until the targeted system is restored. 7.
Leverage the cloud to prevent DDoS attacks Cloud-based protection can scale and handle even a major volumetric DDoS attack easily. Using a third-party vendor for DDoS prevention can also bring key benefits. DDoS attacks are increasing and becoming more difficult to mitigate. Experts predict that the number of DDoS attacks will grow to 15.4 million by 2023. Implementing the best measures to prevent DDoS attacks is essential to avoid data loss, network interruptions, and disturbances in business operations. InfosecTrain's various cybersecurity training courses provide comprehensive knowledge and practical skills to understand and implement effective DDoS prevention strategies, including network security, traffic monitoring, incident response, and mitigation techniques.
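As a hedged illustration of the continuous traffic monitoring mentioned in practice 2 above, here is a minimal Python sketch that counts requests per source IP inside a sliding window and flags sources that exceed a threshold; the threshold, window size, and addresses are invented, and real DDoS detection operates on far richer signals.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100   # arbitrary demo threshold

recent: dict[str, deque] = defaultdict(deque)   # source IP -> request timestamps

def record_request(src_ip: str, now: float | None = None) -> bool:
    """Record one request; return True if this source now looks abusive."""
    now = time.time() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:    # drop timestamps outside the window
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulated burst from a single source
for i in range(150):
    flagged = record_request("203.0.113.7", now=i * 0.01)
print("flagged:", flagged)   # True once the burst exceeds the threshold
```

Production systems typically push this kind of counting into routers, load balancers, or scrubbing services rather than application code.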
<urn:uuid:0ce54460-e4ec-4975-9d99-33203a11880e>
CC-MAIN-2024-38
https://www.infosectrain.com/blog/how-to-prevent-ddos-attacks/
2024-09-15T10:48:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00404.warc.gz
en
0.930212
829
3.1875
3
The compiler needs to give a name to each XFD file that is built. It attempts to build the name from your COBOL code, although there are some instances where the name in the code is nonspecific, and you must provide a name. Each XFD name is built from a starting name that is derived (if possible) from the SELECT statement in your COBOL code. The following table explains how that occurs. ASSIGN name is a variable | If the SELECT for the XFD file has a variable ASSIGN name (ASSIGN TO filename), you must specify a starting name for the .xfd file via a FILE XFD directive in your code. This process is described in Using XFD Directives | ASSIGN name is a constant | If the SELECT for the XFD file has a constant ASSIGN name (such as ASSIGN TO COMPFILE), that name is used as the starting name for the .xfd file name | ASSIGN name is generic | If the ASSIGN phrase refers to a generic device (such as ASSIGN TO DISK), the compiler uses the SELECT name as the starting name for the XFD |
<urn:uuid:662890a0-3d1d-4eee-8a32-5150adfb2402>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/visual-cobol/vc70/VS2017/BKGLGLDICTGL326.html
2024-09-15T11:15:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00404.warc.gz
en
0.937313
238
2.890625
3
Of course you know what a computer mouse is. Everyone does. In fact, you’ve probably got one in your hand right now. But if I told you what it was originally created for, you’d be shocked. When Doug Engelbart patented the computer mouse on this day in 1963, he intended it to be a tool to transform the human race. And it was a revolution left unfinished. The simple mouse was meant to be a simple tool as part of a grand plan to use computers to augment human intellect and increase our “collective IQ” – a visionary idea that was neglected and unfunded. A Dream of Building Better Humans There was a time when the computer science community thought the name Doug Engelbart was synonymous with “a crackpot”. He was a man with big, wild ideas – but he had not always been that way. At the start of his life, Engelbart had simple plans of getting a “steady job, getting married, and living happily ever after.” He had spent some time as a radar technician during WWII, and was studying for a PhD in Engineering at UC Berkeley. Shortly after getting married, Engelbart realized his career ambitions were nearly non-existent. And over several months, he came up with a guiding philosophy that would change the course of history. - Engelbart decided that he would focus on trying to make the world a better place. The world’s problems were getting more complex, and so we needed to rise to meet the challenge. - Any serious effort to meet this challenge and make the world better couldn’t be done alone. It would need some organized effort – one that harnessed the collective human intellect of all people to create an effective solution. - He believed that if you could dramatically improve the efficiency with which people work on problems (their ‘Collective IQ’), you’d be boosting every effort on the planet to solve important problems. - He believed that computers could be the vehicle for improving people’s Collective IQ, not just for crunching data. As a radar technician, he’d seen how information could be displayed on a screen. Decades before the personal computer, he began envisioning people sitting in front of cathode-ray-tube displays, “flying around” in an information space where they could formulate and portray thoughts. This would allow them to better harness their otherwise untapped sensory, perceptual and cognitive capabilities. It would also let them communicate and collectively organize their ideas with incredible speed and flexibility. In short, back in 1951, Engelbart was sitting around and imagining using PCs to browse the internet and use Microsoft Teams. And in the computer science industry, this quickly made the name “Doug Engelbart” synonymous with “a crackpot”. In fact, once he completed his PhD and became an acting assistant professor, he was tipped off by a colleague that if he kept talking about his “wild ideas”, he’d stay stuck in his role forever. In fact, through his entire life, he was warned by people in both academia and the computer industry that using computers for anything other than computations and business data was “crazy” and “science fiction.” Not one to go with the flow, Engelbart joined up with Stanford Research Institute (now SRI International), and earned a dozen patents working on things like magnetic computer components and miniaturization scaling potential. He had also worked for the precursor to NASA (NACA).
Of Mice and Men For most of his life, Engelbart had two struggles: getting people to take his ideas seriously, and getting out from under the shadow of his best-known accomplishment – the computer mouse. Only for one decade did he get significant backing. In 1963, the U.S. Defense Department gave him the funding to assemble a team, and he gathered computer engineers and programmers at his Augmentation Research Center (ARC). His idea was to free people’s minds from the notion that computing was only about calculations, and make it about augmenting the capabilities of the human mind (‘Augmented Intelligence’). Over six years, with funding from NASA and ARPA, he put together the elements that would make such a machine a reality. And in 1968, he performed what is considered, in retrospect, to be perhaps the most important computer demonstration of all time – an event now called “The Mother of all Demos”. The team demonstrated the results of their work: the NLS, which stood for “oN-Line System”. It was the first system to employ hypertext links, raster-scan video monitors, information organized by relevance, screen windowing, context-sensitive help, integrated hypermedia e-mail, and of course, the computer mouse. And the whole demo was done by shared-screen teleconference over modem – another historic first. As Engelbart moved his mouse across the screen – highlighting text and resizing windows – people stopped thinking of him as a crackpot. By the end, he was described as “dealing lightning with both hands.” When he finished, the audience gave him a standing ovation. The future of modern computing had been demonstrated for the first time. The demo had even introduced the concept of collaborative document creation (aka wikis). History had been made. The Ideological Divide It’s worth noting that Engelbart had spoken during the Mother of All Demos about how the computer was designed for augmenting intellect and increasing humanity’s IQ – a theory of technological coevolution. But this was not what people took away from it. And arguably, it would be the downfall of Engelbart and his ARC program. The NLS had been designed with a difficult learning curve. This was intentional – it was made for an educated user, and rewarded learning. In fact, Engelbart was dismissive of ease-of-use. And ease-of-use is everything in the mass market. “Engelbart, for better or for worse, was trying to make a violin… most people don’t want to learn the violin,” Alan Kay said of Engelbart’s failed vision. In fact, it was an ideological divide that split Engelbart from his team. Many of his researchers left his organization for Xerox PARC. Where Engelbart saw a future with collaborative, networked computers (in which time-sharing still had a place), younger programmers felt personal computers were the future. The argument was historically situated: the younger programmers came from an era where centralized power was suspect, and personal computing was just on the horizon. In many ways, history sides both with and against Engelbart’s crusade. The automation and deskilling of human operators – and the rise of Artificial Intelligence – mean that low-skilled users are the trend. From an economic point of view, it costs less and takes less time to have unskilled employees. On the other hand, these days people are hardly as skeptical of centralized power as they were in the 1960s. Collaborative, networked computers are the norm, and one could argue that servers are a far more evolved form of time-sharing.
The Mouse Becomes an Elephant The computer mouse was meant to be a small footnote to Engelbart’s goals. But instead, it became everything anyone wanted to talk about. “Along the way we needed to have a gadget that would be a better windshield wiper or something, but the whole rest of the expedition had so many other things, it just wasn’t a big deal,” Engelbart said. He also noted that the original mouse was designed with a tail-like cord coming out the back – hence the name ‘mouse’. Unfortunately, Engelbart didn’t get rich off his invention. In an interview, he said “SRI patented the mouse, but they really had no idea of its value. Some years later it was learned that they had licensed it to Apple Computer for something like $40,000.” After the exodus of his staff, he slipped into relative obscurity and was marginalized for a time. Engelbart expressed disappointment in interviews in the 1980s and 2000s that while computing power had increased by millions of times, his vision of human augmentation had gone largely unrealized. Towards the end of his life, he set up the Doug Engelbart Institute to promote his vision. In December 2000, Bill Clinton awarded him the National Medal of Technology, the U.S.’s highest technology award. His mouse prototype is now in the Smithsonian’s collection. Moore and Engelbart Now if you’re in IT, you’ve probably heard of Moore’s Law (which may be under threat). What you may not know is that Moore – the co-founder of Intel – was inspired to write this law after sitting in on a speech made by Engelbart. Engelbart had been talking about how, at the time, aerospace engineers made tiny models and then scaled them up – and how computing technology would involve taking chips and scaling them down. Engelbart’s papers and speeches on chip miniaturization directly influenced Moore, who wrote a paper on his famous law a decade later. If you’re feeling bad that Engelbart doesn’t have his own law, don’t. He actually does, though it’s nowhere near as famous. Unsurprisingly, it’s called Engelbart’s law.
<urn:uuid:c9a8008e-0b06-456c-8149-d8f34d44c15c>
CC-MAIN-2024-38
https://www.backupassist.com/blog/the-unfinished-revolution-of-the-mouse
2024-09-19T05:20:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00104.warc.gz
en
0.980009
2,002
3.390625
3
In light of the ongoing Internet shutdowns, such as the recent government-ordered Internet disruption in Algeria aimed at curbing cheating during national exams, the Internet Society (ISOC) has unveiled a tool, the NetLoss Calculator. This tool calculates the economic costs of internet shutdowns, a feat that has been a significant challenge until now. Economic Fallout: The move was triggered by the economic fallout from these shutdowns, as reported by Algerians themselves. Local journalist Mehdi Dahhak of Desert Foot recounted how the shutdown hampered their operations and caused a decline in their readership. Likewise, tourism businesses like Sarah Zahaf’s suffered from disrupted communication and missed ticket reservation deadlines, causing material losses. The NetLoss Calculator estimates the economic cost of internet shutdowns using a rigorous, reproducible, econometric framework. It uses a comprehensive set of data, including shutdown details from Internet Society Pulse, data on protests and civil unrest from The Armed Conflict Location & Event Data Project, elections data from Yale University’s Constituency-Level Elections Archive, and socioeconomic indicators from the World Bank. Quantifying Costs: These data, along with other factors like inflation rate and the percentage of the labor force with basic education, are used to calculate the economic implications of shutdowns. For example, the NetLoss calculator estimated that the recent shutdown in Pakistan cost the country over 17 million USD and resulted in increased unemployment. Policy Implications: The tool is intended to persuade governments to reconsider these measures and educate technologists, journalists, policymakers, and advocates about the harmful impacts of shutdowns. Governments often enact internet shutdowns to control situations like unrest or exam-related cheating. However, the reported costs are substantial—disruptions to e-commerce, increases in unemployment, and financial and reputational risks for companies are just a few of the negative consequences.
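The article describes the NetLoss Calculator's inputs but not its model, so the sketch below is only a hedged illustration of how a shutdown-cost estimate of this general kind might be assembled from public figures. The function, its parameters, and the example numbers are all invented for illustration; they are not ISOC's methodology or data.

```python
# Illustrative sketch only: a toy estimate of an internet shutdown's economic cost.
# The real NetLoss Calculator uses a richer econometric framework; the structure,
# the 'digital_share' weighting, and all numbers below are assumptions for this example.

def estimated_shutdown_cost(annual_gdp_usd: float,
                            digital_share: float,
                            shutdown_days: float,
                            intensity: float = 1.0) -> float:
    """Rough daily-GDP-based estimate of lost output during a shutdown.

    annual_gdp_usd : country GDP in USD
    digital_share  : fraction of GDP attributable to internet-enabled activity
    shutdown_days  : duration of the shutdown in days
    intensity      : 1.0 for a total blackout, less for partial throttling
    """
    daily_digital_output = annual_gdp_usd * digital_share / 365.0
    return daily_digital_output * shutdown_days * intensity


if __name__ == "__main__":
    # Hypothetical inputs, not taken from the article or from ISOC data.
    cost = estimated_shutdown_cost(annual_gdp_usd=350e9,
                                   digital_share=0.04,
                                   shutdown_days=2,
                                   intensity=0.8)
    print(f"Estimated cost: ${cost / 1e6:.1f} million")
```

A real estimate would also fold in the protest, election, and labor-force variables the article lists, typically via regression coefficients rather than a single share parameter.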
<urn:uuid:db95dc75-af77-48ed-9784-c7a31406759b>
CC-MAIN-2024-38
https://circleid.com/posts/20230628-quantifying-internet-shutdowns-isoc-introduces-the-netloss-calculator
2024-09-20T11:37:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00004.warc.gz
en
0.936605
380
2.65625
3
Hackers may be in your phone right now (or your tablet). Think it’s not possible because your connected device is performing well? Think again. These hacks are discreet, using your device’s computing power to commit crimes. In a study commissioned by Distil Networks entitled “Mobile Bots: The Next Evolution of Bad Bots,” it was determined that as many as 5.8 percent of all mobile devices worldwide have been infected with bots, a kind of malware that parasitically uses the computing power of its host device. While 5.8 percent may seem like a tame figure, it’s millions of phones. In the United States alone, where there are an estimated 300 million smartphones and tablets in use, the number of affected devices could be as high as 15 million. The reason the bot activity on these devices goes unnoticed is simple: They use very little data. Each bot only sends out about 50 “bad data requests” per day. What’s at Stake? The Cybersecurity Research Center at Ben-Gurion University estimates that a state-sponsored actor can launch a distributed denial of service attack capable of crippling the 911 emergency service system for an entire state with as few as 6,000 malware-infected cell phones.
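As a quick back-of-the-envelope check of the scale implied by the figures quoted above (the study's exact methodology is not given, and its more conservative "15 million" estimate presumably reflects rounding or methodological adjustments):

```python
# Back-of-the-envelope scale check using only the figures quoted in the article.
us_devices = 300_000_000        # estimated US smartphones and tablets in use
infection_rate = 0.058          # 5.8% of devices infected (Distil Networks study)
bad_requests_per_bot = 50       # "bad data requests" each bot sends per day

infected = us_devices * infection_rate
daily_bad_requests = infected * bad_requests_per_bot

print(f"Implied infected US devices: {infected / 1e6:.1f} million")
print(f"Implied bad requests per day: {daily_bad_requests / 1e6:.0f} million")
```

Even at the article's lower 15 million figure, that is on the order of three-quarters of a billion malicious requests per day from US devices alone.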
<urn:uuid:91af8328-9851-42f6-b58e-68a7f18455be>
CC-MAIN-2024-38
https://adamlevin.com/2018/06/27/millions-of-smartphones-hosting-hacker-bots/
2024-09-08T06:29:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00204.warc.gz
en
0.945219
267
2.828125
3
Preparing for Severe Weather and Other Hazards 7 Best practices for emergency managers By recognizing that hazards, including severe weather events, are unpredictable and cannot be completely prevented, emergency managers can instead focus their efforts on promoting a resilient organization. A community is resilient when it can recover from a disaster or other stressor and get back on its feet as quickly as possible. There are many best practices for emergency managers to prepare for severe weather events such as hurricanes, floods, tornadoes, wildfires, and other risks. Preparing for hazards can involve planning and training with departments, jurisdictions, agencies, and community members. To help you better understand how to promote resilience in your organization, Everbridge hosted a 4-part webinar series focused on the phases of emergency management: Mitigation, Preparedness, Response, and Recovery. For an overview, read on to learn the top seven best practices that emergency managers can implement to ensure severe weather preparedness. 1. Conduct a THIRA (threat and hazard identification and risk assessment) as part of your Hazard Mitigation Planning A threat and hazard identification and risk assessment (THIRA) is a critical component of any disaster planning process. It provides a foundation for additional planning efforts and is an essential first step in the process of creating an emergency management plan. Jenny Demaris of the Lincoln County Sheriff’s Office explains that “integrating THIRAs and mitigation planning into recovery is important to create greater resilience. Many of us emergency management professionals know the cycle is continuous. Being able to recognize and document what is needed to better mitigate disasters helps greatly so when something does happen, you have a plan.” By conducting a THIRA, you’ll be able to identify the types of hazards that might impact your community, including extreme weather events, natural disasters, and man-made hazards, and then plan accordingly. To learn more, make sure to check out the Preparedness webinar episode. 2. Identify and Work with Partners When preparing for severe weather and other hazards, it is important to identify and work with partners. A partner can be anyone in your community who is likely to be involved in or impacted by a disaster, as well as government agencies. Partners can include faith-based organizations, businesses, community organizations, schools, doctor’s offices, state and local agencies, and other jurisdictions. Paul Lupe from Fairfax County Emergency Management shares how he and his team develop relationships with partners: We look for “those opportunities to develop relationships with our external partners that aren’t necessarily emergency management specialists but work in coordination with our agency as a part of the county to help with a significant event. For example, the Red Cross is a big partner of ours, a lot of our utility companies, our gas companies, etc. and we’re in contact with them semi-regularly in the event that something happens.” Partners can play an important role in your emergency management efforts, including providing assistance during an actual emergency and helping distribute critical information to the public. For example, local governments, community organizations, schools, and workplaces can make sure that people are properly prepared and trained to deal with a severe weather event. 3.
Plan for Those with Accessibility or Functional Needs When planning for hazards, it is important to recognize and plan for people with accessibility or functional needs who may be at greater risk during or after an emergency or disaster. According to FEMA (Federal Emergency Management Agency), people with access or functional needs are defined as “individuals who need assistance due to any condition (temporary or permanent) that limits their ability to act. To have access and functional needs does not require that the individual have any kind of diagnosis or specific evaluation” (FEMA, 2021). For example, this may include individuals who are seniors, have disabilities, have limited English proficiency, lack access to transportation, or lack financial resources. In order to prepare effectively, these community members must be identified. Then, when a disaster does happen, it is crucial that these people are notified in a way that is accessible to them. For example, providing alerts in multiple languages can help with accessibility. In the case of Lincoln County, Jenny discusses how they run their “special needs registry through Everbridge. The health department helps verify the need people are registering for. When there is an incident, calls and messages can go out to poll people and understand how to prepare them. You’re able to also use the information from that to actually map out transportation routes, if they need transportation assistance.” With the right Critical Event Management (CEM) platform, it is possible to reach all those who need to be contacted in a disaster, including people with accessibility or functional needs. 4. Create a Communication Plan that Fits Your Community Communication is an important part of emergency management. It involves effectively relaying critical information to the public and helping people be prepared, respond to hazards, and recover after an event. To prepare for a severe weather event, you can begin by creating a communication plan that fits your community. This includes determining where you will get your information, who will disseminate it, and through what channels. It can also involve partnering with other organizations that can help you get critical information to the public. Make sure you have the necessary tools and are using channels that are accessible for everyone in your community, such as a mass notification or emergency alert system. 5. Use Communication Channels to Your Benefit Communicating effectively with your community is essential to preparedness efforts, but it is also important to recognize ways in which other entities are communicating with your community. Other entities, including the media, non-governmental organizations, and businesses, may play a crucial role in informing the public during a severe weather event or other disaster. James Podlucky, Industry solutions expert for SLED (State and Local Government Education) at Everbridge and former Emergency Manager of Sarasota County, discusses the importance of “having a streamlined way of communicating and using multimodal communications, because many people communicate or receive information through a variety of ways such as email, social media, a mobile phone, or a landline,” but not everyone may use all of these forms. To reach the most people, all forms of communication must be utilized. You can use these communication channels to your benefit to convey critical information. 
For example, during COVID, with so much being done virtually, organizations have been able to hold webinars and virtual meetings through which they provided updates on the pandemic. The agencies distributed information through media channels, which allowed them to reach many people who would otherwise not have seen the information. 6. Amplify Communications After your initial communication efforts, it can also be helpful to amplify communications with the public. Amplifying communications involves distributing information from other entities, such as the media, through your communication channels. For example, using social media channels such as Facebook and Twitter has been essential in reaching more people and alerting them when disaster strikes. It is also possible to use social media to encourage employees or citizens to sign up for alerts and messages through Critical Event Management (CEM) platforms. Spreading the word about opting in to these kinds of communications enables emergency managers to reach people with critical updates more immediately. It is important to make sure critical information about preparing for a disaster is available for as many people as possible. Paul Lupe from Fairfax County Emergency Management made sure they have hard copies as well: “We have created a community emergency response guide, where we share a written plan to the community at large, with the steps they can take, and resources that they could leverage to prepare for any hazard. We print copies of it to leave in our libraries, but mostly we leverage it through our website.” By making it accessible in print and on the web, more people will know what to do in case disaster strikes. 7. Understand What Funding is Available and Who Pays for Damages In your preparations for and during a severe weather event, it is important that you understand what funding is available to you, as well as who will pay for damages. Understanding what funding is available to you can help expedite response to and recovery from hazards and diminish impact. A crucial part of mitigation efforts includes obtaining funding for projects like infrastructure that will prevent future disasters from happening. For example, this could include making sure a building can withstand a hurricane or additional reinforcements in a landslide area. Ultimately, this will save lives and money. When an incident happens, the question of who pays for what when is important. This is where local jurisdictions can help communicate with the public about how to navigate the process of obtaining funding to pay for damages. Mark Slauter of Floodmapp emphasizes that “a key word with funding is navigate. Navigating through the process and helping others with funding and the recovery process. It’s critical for those of us who are on the recovery side of this, particularly in the public sector, to be able to walk people through that process.” How a Critical Event Management Platform Can Help Severe weather events, as well as other hazards, can impact any community at any time. By implementing the above seven practices, emergency managers can proactively build resilience across their departments, organizations, and communities. To put the importance of severe weather preparedness in perspective, “in 2021, the U.S. experienced 20 separate billion-dollar weather and climate disasters, putting 2021 in second place for the most disasters in a calendar year. Damages from the 2021 disasters totaled approximately $145 billion. 
(All cost estimates are adjusted based on the Consumer Price Index, 2021)” (Climate.gov, 2022). While it is impossible to completely avoid all impacts from threats, emergency managers can instead focus their efforts on promoting resilience to best protect their people and assets. A Critical Event Management (CEM) platform can help emergency managers better implement the above practices, protecting people and operations while reducing the financial impact of disasters. Furthermore, CEM platforms can help organizations manage and respond to all kinds of critical events, not just severe weather, and build a culture of preparedness throughout the organization.
<urn:uuid:63bea0d4-5bac-4601-83a2-3a81a7894bfc>
CC-MAIN-2024-38
https://www.everbridge.com/blog/7-best-practices-for-emergency-managers/
2024-09-08T06:17:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00204.warc.gz
en
0.953789
2,100
2.796875
3
In the realm of modern science, the pursuit of understanding the fundamental nature of reality has led to remarkable advancements in various disciplines. Among the most fascinating and challenging areas of study are the interactions between quantum computing and gravity measurements. While seemingly distinct, these fields are intertwined in ways that continue to captivate researchers and push the boundaries of human knowledge. Interweaving Threads: Quantum Physics Meets Gravity Measurements At the forefront of scientific exploration, researchers are beginning to explore the intriguing connections between quantum physics and gravity measurements. The marriage of these two realms offers insights into the fundamental nature of the universe and may lead to a more comprehensive theory that unifies quantum mechanics and gravity—something that has eluded scientists for decades. Quantum effects become more pronounced in extreme conditions, such as near black holes or during the universe’s early moments. Therefore, understanding quantum behavior in gravitational contexts can provide vital clues about the behavior of matter and energy in these extreme environments. These insights could also help physicists better understand the origins of many aspects of our universe. Furthermore, the marriage of quantum computing and gravity measurements holds the potential to revolutionize our approach to solving complex gravitational equations. Quantum computers could simulate the behavior of particles in strong gravitational fields more accurately and efficiently, contributing to the advancement of gravitational wave detection and the study of black holes. Additionally, recent research has shown that quantum entanglement can reduce environmental noise in quantum interferometers, a type of quantum sensor that could greatly impact the future of measuring gravitational waves at even smaller levels. This could help with natural disaster mitigation, such as in cases of earthquakes or landslides, saving hundreds, if not thousands, of lives. From government agencies like NASA or NIST to commercial companies like Q-CTRL, all of which are developing quantum sensors for applications like timekeeping, gravitational wave detection, or even telecom, research in quantum entanglement could significantly accelerate the creation of these next-generation sensors. The Quest Continues As our scientific understanding deepens, the synergies between quantum physics, quantum computing, and gravity measurements become increasingly evident. The ongoing research in these fields not only expands our knowledge but also paves the way for groundbreaking technologies and applications that were once considered the stuff of science fiction. While many questions remain unanswered, the intersections of these disciplines underscore the beauty and complexity of the universe we inhabit. As researchers delve into the intricate interplay between quantum phenomena and gravitational forces, we can anticipate a future where our understanding of reality is refined and expanded, reshaping how we view the cosmos and our place within it. Kenna Hughes-Castleberry is a staff writer at Inside Quantum Technology and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). Her writing beats include deep tech, quantum computing, and AI. Her work has been featured in Scientific American, New Scientist, Discover Magazine, Ars Technica, and more.
<urn:uuid:b9861116-e026-4184-bc71-3bbccdbe848a>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/inside-quantum-technologys-inside-scoop-quantum-and-gravity/amp/
2024-09-08T07:10:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00204.warc.gz
en
0.926704
597
3.515625
4
Through 2021 and beyond, the world will begin to recover from its acute crisis — Covid-19 — and will turn its attention to other matters. Few if any of these issues will be as important as climate change, a chronic condition that will become more pressing and acute as each year passes. In the critical digital infrastructure sector, as in all businesses, issues arising directly or indirectly from climate change will play a significant role in strategic decision-making and technical operations in the years ahead. And this is regardless of the attitude or beliefs of senior executives; stakeholders, governments, customers, lobbyists and watchdogs all want and expect to see more action. The year 2021 will be critical, with governments expected to act with greater focus and unity as the new US government rejoins the global effort. Impacts of climate change We can group the growing impact of climate change into four areas: - Extreme weather/climate impact - As discussed in our report The gathering storm: Climate change and data center resiliency, extreme weather and climate change present an array of direct and indirect threats to data centers. For example, extreme heatwaves — which will challenge many data center cooling systems — are projected to occur once every three or four years, not once in 20. - Legislation and scrutiny - Nearly 2,000 pieces of climate-related legislation have been passed globally to date (covering all areas). Many more, along with more standards and customer mandates, can be expected in the next several years. - Litigation and customer losses - Many big companies are demanding rigorous standards through their supply chains — or their contracts will be terminated. Meanwhile, climate activists, often well-resourced, are filing lawsuits against technology companies and digital infrastructure operators covering everything from battery choices to water consumption. - The need for new technologies - Management will be under pressure to invest more, partly to protect against weather events, and partly to migrate to cleaner technologies such as software-defined power or direct liquid cooling. In the IT sector generally — including in data centers — it has not all been bad news to date. Led by the biggest cloud and colo companies, and judged by several metrics, the data center sector has made good progress in curtailing carbon emissions and wasteful energy use. According to the Carbon Trust, a London-based body focused on reducing carbon emissions, the IT sector is on course to meet its science-based target for 2030 — a target that will help keep the world to 1.5°C (2.7°F) of warming (but still enough warming to create huge global problems). Its data shows IT sector carbon emissions from 2020 to 2030 are on a trajectory to fall significantly in five key areas: data centers, user devices, mobile networks, fixed networks and enterprise networks. Overall, the IT sector needs to cut carbon emissions by 50 percent from 2020 to 2030. Data centers are just a part of this, accounting for more carbon emissions than mobile, fixed or enterprise networks, but significantly less than all the billions of user devices. Data center energy efficiency has been greatly helped by facility efficiencies, such as economizer cooling, improvements in server energy use, and greater utilization through virtualization and other IT/software improvements.
Use of renewables has also helped: According to Uptime Institute data (our 2020 Climate Change Survey) over a third of operators now largely power their data centers using renewable energy sources or offset their carbon use (see figure). Increasing availability of renewable power in the grid will help to further reduce emissions. But there are some caveats to the data center sector’s fairly good performance. First, the reduction in carbon emissions achieved to date is contested by many who think the impact of overall industry growth on energy use and carbon emissions has been understated (i.e., energy use/carbon emissions are actually quite a lot higher than widely accepted models suggest — a debatable issue that Uptime Institute continues to review). Second, at an individual company or data center level, it may become harder to achieve carbon emissions reductions in the next decade than it has been in the past decade — just as the level of scrutiny and oversight, and the penalty for not doing enough, ratchets up. Why? - Many of the facilities-level improvements in energy use at data centers have been achieved already — indeed, average industry power usage effectiveness values show only marginal improvements over the last five years. Some of these efficiencies may even go into reverse if other priorities, such as water use or resiliency, take precedence (economizers may have to be supplemented with mechanical chillers to reduce water use, for example). - Improvements in IT energy efficiency have also slowed — partly due to the slowing or even ending of Moore’s Law (i.e., IT performance doubling every two years) — and because the easiest gains in IT utilization have already been achieved. - Some of the improvements in carbon emissions over the next decade require looking beyond immediate on-site emissions, or those from energy supplies. Increasingly, operators of critical digital infrastructure — very often under external pressure and executive mandate — must start to record the embedded carbon emissions (known as Scope 3 emissions) in the products and services they use. This requires skills, tools and considerable administrative effort. The biggest operators of digital infrastructure — among them Amazon, Digital Realty, Equinix, Facebook, Google and Microsoft — have made ambitious and specific commitments to achieve carbon neutrality in line with science-based targets within the next two decades. That means, first, they are setting standards that will be difficult for many others to match, giving them a competitive advantage; and second, these companies will put pressure on their supply chains — including data center partners — to minimize emissions. For more on sector dynamics, innovations, challenges and opportunities in 2021, see Uptime Institute’s report Five data center trends for 2021.
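Power usage effectiveness, mentioned above as showing only marginal improvement in recent years, is a simple ratio of total facility energy to IT equipment energy. The sketch below illustrates the calculation only; the figures are hypothetical examples, not Uptime Institute data.

```python
# Minimal PUE illustration. PUE = total facility energy / IT equipment energy;
# a value of 1.0 would mean every kilowatt-hour delivered goes to the IT load.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures for a single facility.
print(round(pue(total_facility_kwh=1_580_000, it_equipment_kwh=1_000_000), 2))  # -> 1.58
```

Because the metric ignores how efficiently the IT load itself is used, a facility can hold a flat PUE while server utilization, and therefore useful work per kilowatt-hour, still varies widely — one reason the article treats facility efficiency and IT efficiency as separate levers.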
<urn:uuid:dbb710d2-a0aa-41a4-a6ee-d7d16332955c>
CC-MAIN-2024-38
https://www.datacenterdynamics.com/en/opinions/sustainability-more-challenging-more-transparent/
2024-09-10T16:54:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00004.warc.gz
en
0.948894
1,194
2.5625
3
One of the problems with relying entirely on one security solution is that the cyber threat landscape changes rapidly. Antivirus software identifies threats by matching a particular piece of software’s code to programs it has identified as “malicious” in its database. But what happens when an unidentified virus infects a victim’s computer? Antivirus programs can only protect against threats they already know. In today’s world of evolving malware, there are likely a lot of threats antivirus doesn’t know about. The solution, according to a recent Data Center Journal article, is application control. Instead of compiling a list of threats, this technique looks at what is already on a computer and identifies programs as safe, blocking software that doesn’t match. This allows application control solutions to block unknown threats. Software developers can use code-signing certificates to digitally sign their software, which means any code with that signature becomes a trusted source. Utilizing multiple security solutions may become even more important as the level of sophistication in malware grows. “Targeted attacks can be engineered to seek out a very specific machine, infrastructure or geography,” the article stated. “They can target a single company, maybe with the intention of stealing trade secrets or discrediting that company. If you want a good example, just look at the infection map for Flame: it is tightly grouped around the Gulf States. The other development is the apparent involvement of the nation state.” How protected are you? Another problem is that organizations are likely already affected by malware. According to a recent V3.co.uk article, 95 percent of companies have already fallen victim to attacks from advanced malware and suffer from an average of 643 successful infections per week. However, perhaps even more troubling is the statistic that there has been a 400 percent increase in the number of infections since last year. The article highlighted the increasing popularity of targeted attacks. Imagine you got an email that looked like it was from a friend. Maybe the text in the email jokes about the trip you took last week and how you came back sunburnt. Would you click on the links in the email, even if it came from an address you didn’t recognize? If you have old vacation pictures on Facebook, a determined hacker could use them to write such an email, and cyber criminals are starting to use that kind of information to craft attacks tailored specifically to their victims. “The attacks reportedly use social engineering to create Trojan email campaigns custom-designed for their victims,” the article stated. “The campaigns contain malicious web links and attachments that infect users’ machines with malware when opened.” Have you ever been targeted by a social engineering attack? Was it through email or a social networking website?
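The article describes application control only at a high level. The snippet below is a minimal, hypothetical sketch of the allowlisting idea it outlines — checking a program's hash against a list of known-good entries and blocking everything else — and is not how any particular commercial product is implemented. The digest and helper names are invented for illustration.

```python
# Minimal sketch of hash-based application allowlisting (illustrative only).
# Real application-control products also verify publisher signatures, file paths,
# and code-signing certificates; this example checks only a SHA-256 digest.
import hashlib

# Hypothetical allowlist of SHA-256 digests of approved executables.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # example digest
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed_to_run(path: str) -> bool:
    """Block by default: only programs whose hash is on the allowlist may run."""
    return sha256_of(path) in ALLOWED_HASHES
```

The key design difference from antivirus is the default: a signature database blocks what it recognizes as bad, while an allowlist blocks anything it does not recognize as good, which is why unknown malware fails to run even without a matching signature.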
<urn:uuid:e0809317-c8c1-438b-a82d-6e28820a474d>
CC-MAIN-2024-38
https://www.faronics.com/news/blog/malware-becoming-more-sophisticated-majority-of-organizations-infected
2024-09-11T19:37:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00804.warc.gz
en
0.961
569
2.859375
3
In today’s digital age, technology surrounds us like never before. As our world becomes increasingly intertwined with machines and gadgets, it’s essential to introduce kids to the wonders of technology at an early age. One fascinating and rapidly growing field in the tech world is machine learning, and it’s not just for adults! In this article, we’ll explore the exciting world of machine learning for kids, breaking down complex concepts into bite-sized, kid-friendly pieces. What is Machine Learning? Machine learning is like teaching computers to think and make decisions on their own, just like how we learn from our experiences. It’s like magic, but with math and data! Imagine you have a robot friend who can watch you play soccer and then learn how to play by itself. That’s what machine learning is all about – computers learning from data to do cool stuff. Now, don’t let the word “algorithm” scare you. Think of it as a set of rules that help the computer make decisions. Picture it as a recipe for a yummy cake. With the right ingredients and steps, you can bake a delicious cake. Similarly, algorithms help computers solve problems, play games, and even recognize your drawings! Training Your Computer Friend To make a computer learn, we need to train it. Just like teaching your pet dog tricks, we show the computer lots of examples until it figures things out. Let’s say you want your computer to recognize cats. You’d show it lots of pictures of cats and tell it, “This is a cat.” After seeing many cat pictures, your computer friend will become a cat expert! Fun with Data Data is like a treasure chest for machine learning. It can be anything – pictures, numbers, words, or even music. For example, if you want your computer to guess if a fruit is an apple or a banana, you’d collect data about apples and bananas. You’d tell your computer, “When it’s red and round, it’s probably an apple.” The more data you have, the smarter your computer becomes! Teaching Computers to See One exciting part of machine learning is teaching computers to see. With the help of special cameras, computers can look at pictures and recognize objects just like you do. They can even tell the difference between your pet dog and a teddy bear! Let’s Play Games! Machine learning can make games even more fun. Ever played a game where you compete against a computer opponent? That’s machine learning at work! The computer learns from your moves and gets better over time. So, the more you play, the more challenging it becomes! Making Art Together Machine learning can be an artist’s best friend too. There are computers that can create beautiful paintings, compose music, and even write stories. You can team up with a computer to make your own masterpieces! Keep Learning and Exploring Machine learning for kids is a fantastic world of possibilities waiting to be explored. It’s like having a new friend who can do amazing things with your help. So, don’t be afraid to dive into the world of machine learning. Who knows, you might become the next young genius to create something incredible with your computer friend by your side!
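For older kids (or grown-ups) curious about what "training" looks like in practice, here is a tiny, hypothetical example in Python using the scikit-learn library. It follows the apple-versus-banana idea described above with made-up "roundness" and "yellowness" numbers; it is only a sketch of the concept, not a real production model.

```python
# A tiny "fruit guesser": it learns from examples, then predicts new fruit.
# Features are invented for illustration: [roundness (0-1), yellowness (0-1)].
from sklearn.tree import DecisionTreeClassifier

examples = [
    [0.9, 0.2],  # round, not very yellow -> apple
    [0.8, 0.3],  # apple
    [0.2, 0.9],  # long and yellow        -> banana
    [0.3, 0.8],  # banana
]
labels = ["apple", "apple", "banana", "banana"]

model = DecisionTreeClassifier()
model.fit(examples, labels)           # "training": show the computer the examples

print(model.predict([[0.85, 0.25]]))  # likely ['apple']
print(model.predict([[0.25, 0.95]]))  # likely ['banana']
```

Just as the article says, more and better examples generally make the guesses better — that is the whole trick.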
<urn:uuid:08cbd119-ed51-4e2d-8803-5c602047756e>
CC-MAIN-2024-38
https://generaltonytoy.com/unleashing-the-magic-of-machine-learning-for-kids/
2024-09-13T01:31:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00704.warc.gz
en
0.941276
702
3.84375
4
To protect ourselves against hackers, it is not enough to be careful with the apps we install; we also have to worry about what we connect to our devices. Just when you think you’ve protected your system as well as you can, a new attack vector appears, a new technique that makes all your work obsolete. And that’s fine. Not all hackers have malicious interests, and many want to find security holes precisely so they can be closed before others exploit them, even if that means exploiting a process as common and simple as connecting a cable to our mobile. The cable that hacks the mobile Oddly enough, the O.MG Cable presented at the last Def Con, a conference for hackers and cybersecurity experts, is capable of doing just that. Although at first glance it looks like a conventional cable, inside it has the necessary hardware and software to steal our data and send it to an external server using Wi-Fi, as explained in The Verge. In effect, the cable itself has a Wi-Fi connection, which speaks volumes about the complexity of this project; its creator, known simply by the initials MG, has managed to integrate all the necessary chips into the USB connector itself, so that the attacker only has to get the victim to use the cable — for example, to charge the mobile — and the device will do all the work. The cable is able to fool the system because it passes itself off as a USB keyboard, and in this way it can inject commands and execute them. It is also capable of recording everything we type on the phone while the cable is connected, saving up to 650,000 keystrokes in its internal memory. That includes data that is not normally obtainable because the connection is encrypted, such as our passwords or bank details. The attacker doesn’t even have to retrieve the cable to collect the stolen data; the integrated Wi-Fi connection allows it to send the information to an external server. In this way, the usual security measures, such as antivirus software or firewalls, are bypassed, since the connection is not made through the phone. That also means the cable is capable of getting data from ‘air gapped’ systems, which are not connected to the internet for security reasons. The cable can be configured with different connectors, such as USB-A, USB-C or Lightning for iPhones; likewise, it is compatible with all kinds of operating systems, such as Android, iOS, Windows and macOS. The O.MG Cable has been designed as a tool for professionals to test the security of their systems, and the price of $179.99 reflects this. Although it is unlikely that you will be the victim of an attack with this cable, this is a good reminder that you should never connect unknown devices to your mobile or computer, be they cables, USB sticks or anything else, since physical access is what hackers covet most. Cyber Security Researcher. Information security specialist, currently working as risk infrastructure specialist & investigator. He is a cyber-security researcher with over 18 years of experience. He has served with the Intelligence Agency as a Senior Intelligence Officer. He has also worked on projects for Citrix and Google deploying cyber security solutions. He has aided the government and many federal agencies in thwarting cyber crimes. He has been writing for us in his free time for the last 5 years.
<urn:uuid:753dcbfe-0e11-4a4f-9642-cbc233ac326a>
CC-MAIN-2024-38
https://www.exploitone.com/mobile-security/o-mg-cable-can-hack-any-smartphone-has-keylogger-and-wifi-to-steal-data-from-iphone-or-android/
2024-09-13T01:50:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00704.warc.gz
en
0.970929
709
2.609375
3
What Is a Fiber Optic System and Why Do You Need It? Fiber optics is a technology that internet services use to transmit information over long distances using light pulses. It uses strands of glass or plastic fiber. Fiber-optic cables are made up of optical fibers with a diameter similar to a human hair strand. They transmit more data over longer distances than other media, and many fibers can be bundled together into a single cable. This technology provides homes and businesses with fiber-optic internet, telephone, and TV services. This blog will discuss the benefits of fiber optic services. Fiber-optic cables contain a variety of optical fibers in a plastic case. They are also known as optic cables or optical fiber cables. They transmit data signals in the form of light and support data rates hundreds of times higher than traditional electrical cables. Fiber-optic cables do not suffer from electromagnetic interference (EMI), which can slow down transmission in copper cables. Fiber cables are also safer because they don’t carry any current and cannot produce sparks. Characteristics of fiber optics These are the main properties of fiber optic communication. This communication uses a light signal transmitted through the optical cable. A single laser can carry many signals per second, resulting in high bandwidth (BW) over long distances. The optical fiber itself is only a few hundred micrometers in diameter. Fiber optic cables are lighter than copper types. Long-distance Signal Transmission Laser light can be transmitted over long distances because it degrades very little along the way. Fiber optic cables are made of glass, and laser light travels through them. The signal loss while transmitting is just 0.2 dB/km. Security of Transmission Data over optical fiber cables can be protected using optical encryption. IoT: The Impact of Optical Fiber These are the top reasons fiber optics has a great impact on IoT. - Quick Transmission Media - Data Security - Interference does not cause data loss Why Do You Need a Fiber Optic System? ● Fiber Supports Very High Bandwidth Levels In terms of bandwidth, currently no technology is more efficient than fiber – particularly single-mode fiber. Fiber optic cables have greater bandwidth and can carry more data than copper cables of a similar diameter. Whatever new fiber-optic technology comes to market, whether transceivers or other electronics, the cables themselves don’t limit performance; the limit lies in the electronics that make up the system. Replace those components and your existing cabling is good to go. Fiber also has lower latency, allowing faster upload and download speeds and quicker access to resources. Because fiber has such low loss, it lets you transfer data over greater distances without delays or interruptions. ● Fiber is Inherently Secure Fiber cables do not emit signals. Connecting taps to a fiber cable to capture data transmission is extremely difficult. Since the signal travels entirely within the fiber, the strand has to be accessed by physically cutting into the cable. In most cases, this would take the entire network offline, and everyone would be aware of the problem. ● Fiber is Intrinsically Safe Since electricity doesn’t play a role in the transfer of information (data is transferred using light instead), fiber is inherently safe to use.
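To make the 0.2 dB/km attenuation figure quoted in the characteristics above concrete, the short calculation below estimates how much optical power survives a long fiber run. The launch power and distance are hypothetical examples, not values from the article.

```python
# Illustrative attenuation calculation for an optical fiber link.
# Power remaining after L km at a loss of 'alpha' dB/km:
#   P_out = P_in * 10 ** (-(alpha * L) / 10)
def remaining_power_mw(p_in_mw: float, alpha_db_per_km: float, length_km: float) -> float:
    total_loss_db = alpha_db_per_km * length_km
    return p_in_mw * 10 ** (-total_loss_db / 10)

# Hypothetical example: 1 mW launched into 80 km of fiber at 0.2 dB/km.
p_out = remaining_power_mw(p_in_mw=1.0, alpha_db_per_km=0.2, length_km=80)
print(f"{p_out:.3f} mW remaining after 80 km")  # about 0.025 mW (16 dB of total loss)
```

Even after 80 km, roughly 2.5% of the launched power remains — enough for a sensitive receiver — which is why fiber can span such distances without repeaters, unlike copper.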
● Fiber Withstands Water and Temperature Fluctuations Cables made of fiber aren’t affected by temperature fluctuations, extreme weather, or moisture. For instance, when a cable comes into contact with rainwater, communications continue as normal. If lightning hits a fiber cable, the electrical surge isn’t propagated, since the fiber cable does not contain metal components. Fiber can withstand extreme conditions without its performance being affected, which makes it perfect for harsh environments such as industrial, long-distance, and outdoor applications. ● Fiber is Immune to EMI If you place a lot of electrical cables (which carry current) in a crowded area, crosstalk can occur between the cables. This can cause performance issues as well as data-transmission interruptions. Fiber cables, on the other hand, do not create electromagnetic interference (EMI), and they don’t suffer from it either. They can be placed close to industrial equipment without worry. ● Generate Constructive Phase Shifts An important feature of fiber optics is its ability to generate constructive phase shifts, which help reinforce the transmission. These phase shifts occur at the reflective boundary between the fiber’s core and cladding. The refractive index of the cladding is lower than that of the core, so light that strays toward the cladding is reflected back into the core and travels farther, which reduces the signal’s attenuation. The size of the core also has a great effect on the bandwidth of the fiber. The Bottom Line In conclusion, a fiber optic network is ideal for long-distance data networking because it is efficient and reliable. By using a fiber optic system, businesses can save on costs and reduce the risk of reliability issues. Let your Arizona-based company benefit from fiber optic networks. AZ CCTV will enhance your current network or install brand-new cabling.
<urn:uuid:a6c7a458-1597-4313-a0ea-903fdc132c1b>
CC-MAIN-2024-38
https://az-cctv.com/what-is-fiber-optic-systems-and-why-do-you-need-it/
2024-09-14T06:31:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00604.warc.gz
en
0.90474
1,131
3.484375
3