Global Warming of Data

The general consensus is that there will be 40-50 zettabytes of data by the year 2020, and 80-90 percent of that will be unstructured.

May 18, 2016
Eric Bassier is Senior Director of Datacenter Solutions at Quantum.

It already reached 90 degrees in Seattle this year. In April. I’m not complaining – yet – but I’m definitely a believer that global warming is happening and that we need to make some changes to address it. But this article isn’t about climate change – it’s about data. Specifically, it’s about the growth of unstructured data and the gloomy fate ahead if we continue to deny the problem and ignore the warning signs. Sound familiar? It’s hard to argue with the evidence of unstructured data growth. Estimates and studies vary, but the general consensus is that there will be 40-50 zettabytes of data by the year 2020, and 80-90 percent of that will be unstructured.

What’s Driving Unstructured Data Growth?

Data growth comes from many places. Of course there are sources like 4K movies and TV shows, and the pictures, videos, and images that all of us take on our smartphones every day, but unstructured data growth is much broader than that. Vast amounts of data are also generated every day by machines and sensors across a wide variety of data-driven industries: research, engineering and design, financial services, geospatial exploration, healthcare, and more. Video surveillance alone is creating almost an exabyte of unstructured data every day as camera resolutions and retention times have increased. These diverse datasets share some common characteristics. Typically, they are: composed of large files; un-compressible – i.e., techniques like deduplication are not effective at reducing the data; valuable to the company, department, or users that created the data; and stored for years.

The Parallels with Global Warming

So how is unstructured data growth like global warming?

People behave like this problem doesn’t exist: Every day companies are spewing out more and more unstructured data into their IT environments, but when it comes to managing this growth, it is business as usual. Despite all evidence to the contrary, many businesses are still attempting to manage and store unstructured datasets using the same approaches to data storage they’ve always used – they put it all on disk. This approach is starting to break down in the face of both the size and scale of this data. Beyond growing costs, the ability to ingest content into a storage system quickly enough degrades over time, and traditional backup approaches are no longer sufficient to protect the data. For these massive machine- and sensor-generated datasets, a different approach to storing and managing data is clearly required.

Data that has been thought of as “cold” is starting to “warm up”: A really interesting dynamic is appearing across multiple industries. With all of these datasets, the data is generated, processed and then archived. But now more and more examples are surfacing where companies can get additional value out of this “cold” data: video content generated for movie or TV studios can be repurposed and redistributed – think “behind the scenes” episodes of your favorite reality TV show. Retail companies are analyzing video surveillance footage to track shopping patterns, and using the insights to increase sales.
Scientists are able to run analyses on datasets generated years ago to gain new insights and advance innovation in their fields. Autonomous car developers are using video and sensor data generated during early test drives to make autonomous cars safer and more efficient. The list goes on, but the point is that for these types of datasets, as cold data becomes more valuable or “warms up,” the storage approach for that data needs to change. Even archived data needs to remain accessible to users.

There’s a need to act now: Before you place that next large order for more disk storage, stop and consider the alternatives. Sticking with the status quo is the easiest approach, but also one that leads to excess storage costs and inefficiencies.

What’s the Solution?

To tackle this problem, let’s first introduce what might be a new term: data workflow. In some industries this is a common term, but for many industries it might be a new concept, albeit an intuitive one. All of these unstructured datasets I’ve mentioned thus far have a workflow associated with them. It looks something like this: data is generated or captured, ingested into a storage system, and stored and processed to reach some result (often requiring collaboration between many users); then data is archived for long-term preservation and re-use. This process is more efficient using a storage system that is customized from the outset for specific dataset workflows. Workflow storage must handle high-performance ingest when needed. Also key is the ability to share across the network to enable collaboration – as well as the ability to tier data to lower-cost tiers of storage such as tape while preserving access on the network for the users and applications that need the data. This last piece is what really unlocks the ability to get more value out of archived data in a way that doesn’t break the bank. This workflow-based approach to storage results in significant cost reductions compared to keeping all data on flash or spinning disk, and it enables organizations to do more with their data.

And, One More Parallel…

By using tiered storage and keeping most of this data on low-cost, low-power storage like tape, you’re actually doing your part to help the environment, and fight global warming.
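To make the tiering idea above concrete, here is a minimal sketch of an age-based policy that moves files untouched for 180 days from a "hot" disk tier to a cheaper archive tier. The paths, the threshold, and the use of a local directory as the archive are illustrative assumptions, not anything described in the article; real workflow storage would typically tier to tape or object storage while keeping files visible to users at their original paths.

```python
import shutil
import time
from pathlib import Path

# Hypothetical tier locations and age threshold, for illustration only.
HOT_TIER = Path("/data/hot")
ARCHIVE_TIER = Path("/data/archive")
MAX_AGE_DAYS = 180

def tier_cold_files() -> None:
    """Move files not accessed within MAX_AGE_DAYS from the hot tier to the archive tier."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for path in HOT_TIER.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = ARCHIVE_TIER / path.relative_to(HOT_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), target)  # preserve the relative layout in the archive

if __name__ == "__main__":
    tier_cold_files()
```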
World Wildlife Fund for Nature Indonesia (WWF-Indonesia) is working with Amazon Web Services to rev up efforts to save critically endangered orangutans in Indonesia. Using AWS machine learning services, WWF-Indonesia can better understand the size and health of orangutan populations in their native habitat, enabling the nonprofit to survey more territories with fewer resources, reduce operating expenses, and channel more of the conservation funding to protect the biodiversity of Indonesia. Human activities including poaching, destruction of habitat, and the illegal pet trade have caused severe declines in the orangutan population, which is comprised of three species of great apes native to Indonesia and Malaysia. According to WWF, Bornean orangutan populations have declined by more than 50% over the past 60 years and the species’ habitat has been reduced by at least 55% over the past 20 years. Orangutans are largely solitary and spend much of their lives in trees, complicating conservationist efforts to accurately measure remaining populations. Using AWS, WWF-Indonesia now automatically gathers images from mobile phones and motion-activated cameras at its basecamp and uploads these to Amazon Simple Storage Service (Amazon S3) where they are analysed. Using technologies including Amazon SageMaker, a fully-managed machine learning service that allows data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale, WWF Indonesia has reduced its analysis time from up to three days to less than ten minutes. By adopting machine learning, WWF-Indonesia has reduced its reliance on a limited pool of conservationist experts and improved the accuracy and breadth of its data about orangutan populations. In the future, WWF-Indonesia plans to explore the use of additional machine learning services, such as Amazon Rekognition, an image and video analysis service, to further improve the speed and accuracy of its population identification and tracking efforts. “With careful use of technology, this innovation will help the biologists and conservationists to effectively and cost-efficiently monitor the wildlife behaviour through time and thus we can allocate our resources to scale up the monitoring efforts and invest more in conservation actions,” said Aria Nagasastra, finance and technology director of WWF-Indonesia. “The collaboration…can lead to the opportunity to elevate the biodiversity conservation practices in Indonesia to the next level.”
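As a rough illustration of the kind of pipeline described above, the sketch below uploads a camera-trap image to Amazon S3 and then asks a deployed Amazon SageMaker endpoint to classify it. This is a hedged sketch using standard boto3 calls; the bucket name, endpoint name, and response format are hypothetical, and WWF-Indonesia's actual implementation is not public.

```python
import json
import boto3

# Hypothetical resource names, for illustration only.
BUCKET = "orangutan-camera-traps"
ENDPOINT = "orangutan-classifier"

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

def classify_image(local_path: str, key: str) -> dict:
    """Archive a camera-trap image in S3, then classify it with a SageMaker endpoint."""
    s3.upload_file(local_path, BUCKET, key)  # store the raw image for later review
    with open(local_path, "rb") as image:
        response = runtime.invoke_endpoint(
            EndpointName=ENDPOINT,
            ContentType="application/x-image",
            Body=image.read(),
        )
    # Assumed response shape, e.g. {"label": "orangutan", "score": 0.97}
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(classify_image("IMG_0001.jpg", "basecamp/IMG_0001.jpg"))
```

In a setup like this, the slow step, waiting days for an expert to review each batch, is replaced by an automated call that returns in seconds, which matches the reduction in analysis time the article describes.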
Since the beginning of 2020, many organizations have been forced to transition to remote working. In fact, 66 percent of US employees are now working remotely as a direct result of COVID-19, according to the Clutch 2020 Remote Work Survey. For many, this is their first experience with working from home and their first exposure to phishing scams without the security of the office network. Cybercriminals see the rapid shift to remote work as a great opportunity to obtain access to sensitive information, extort money from victims, and cause chaos. Organizations that want to thrive in this new environment must quickly learn how to defend remote employees against all types of phishing scams.

Types of Phishing Scams

Phishing may seem like an old threat, and it is. But even though the term “phishing” was first used and recorded on January 2, 1996, the attacks it describes are still the number one attack method behind data breaches, according to the 2019 Verizon Data Breach Investigations Report. What some organizations don’t realize is that cybercriminals have come a long way since the days of Nigerian princes asking for bank account details in broken English. Here are the most common phishing attacks employees are likely to encounter when working from home:
- Email phishing: Email is still the tool of choice of phishers because it allows them to quickly and cost-effectively target many potential victims. Regardless of how convincing they are, the goal of phishing emails is always the same: to trick the victim into taking an action that’s against their best interest, typically resulting in the disclosure of sensitive information or the spread of malware.
- Spear phishing: The main thing that separates this type of phishing from regular phishing is the amount of research it involves. Spear phishers spend days and even weeks gathering information about their victims to craft extremely convincing email messages that seem to have originated from a trustworthy source.
- Whaling: When a spear phishing attack targets upper management, cybersecurity experts describe it as whaling. Because such attacks have the potential to yield enormous results, they are planned well in advance and coordinated with other attacks.
- Vishing and smishing: Phishing doesn’t always happen over email. Vishing attacks attempt to lure victims into disclosing sensitive information, such as a password or someone’s birthday, over the phone. Smishing attacks do the same but use SMS messages instead.
- Angler phishing: This relatively recent type of phishing takes advantage of social media by creating bogus customer service accounts on sites like Twitter and Facebook. In many cases, all that attackers have to do to obtain sensitive information is wait and let unsuspecting social media users come to them.
There has been a steady increase in the number of COVID-19-related spear-phishing attacks since January 2020, according to data from Barracuda Networks. Attackers are unlikely to slow down in the foreseeable future now that the much-feared second wave of coronavirus is hitting countries around the world.

How to Protect Your Business Against Phishing Scams

Ensuring effective protection against all common types of phishing scams requires a multi-layered approach to security.

Educate Remote Employees About Phishing Scams

Employee education is always the most effective protection against phishing scams.
Remote employees need to be trained to recognize phishing attempts by keeping an eye out for common phishing signs, such as spelling and grammar mistakes, suspicious sender addresses, urgent calls to action, attachments, and links to third-party websites, to give a few examples. It’s important to encourage remote employees to verify all suspicious requests over the phone or using some other communication channel besides email. To reinforce what employees have learned, it’s a good idea to create mock phishing drills that simulate real-world attacks and give employees a valuable opportunity to realize their own mistakes.

Use a Reliable Email Spam Filter

Even the most security-aware employees can make mistakes, especially when working from home, with kids, pets, and other distractions in ample supply. A reliable spam filter can detect and catch phishing emails before employees have a chance to open them. Modern business email spam filters are highly configurable and offer many useful features that increase their effectiveness, such as logging and reporting, auto-whitelisting, or the ability to set independent policies for incoming and outgoing mail. When choosing a spam filter for your organization, you should take into consideration the deployment options it offers, ease of use, spam detection rate, and affordability. The installation and management of the spam filter can be outsourced to a managed services provider.

Strengthen Employees’ Cybersecurity Posture

Remote workers don’t have the luxury of working behind a company firewall on a highly secure network that automatically blocks all potentially dangerous communication. But just because they have to rely on their home internet connection or a public WiFi network doesn’t mean they can’t take certain steps to strengthen their cybersecurity posture. For example, remote employees can use a VPN, or virtual private network, to protect private traffic from snooping. They can also enable multi-factor authentication to prevent a data breach in the event of a password leak. Last but not least, they can proactively update their operating system, applications, and antivirus software to fix critical vulnerabilities as soon as they are discovered.

Avoid Breaches from Remote Work

Cybercriminals don’t hesitate to take advantage of any opportunity they get to steal sensitive information and use it for their personal gain. The recent shift to remote working has resulted in a substantial increase in the number of phishing attacks against organizations of all sizes and their remote employees. To avoid costly data breaches, organizations must ramp up their email security efforts and update their existing defenses for the era of remote working. At Aligned Technology Solutions, we offer custom-made cybersecurity packages that include anti-malware software, usage approval, email protection, and identity recognition. We can monitor and maintain your data and IT infrastructure with a multi-layered cybersecurity strategy to protect you against all types of phishing scams. Contact us to protect your remote employees now.
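As a toy illustration of the phishing signs discussed under employee education above, the sketch below scores a raw email for a few of those indicators: a Reply-To domain that differs from the sender's domain, urgent-sounding phrases, and links that point at bare IP addresses. The phrase list, weights, and the idea of a single score are invented for illustration; real spam filters combine far richer signals, reputation data, and machine learning.

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Illustrative indicator list; a production filter would use much broader signals.
URGENT_PHRASES = ("verify your account", "act immediately", "password expired", "urgent")

def phishing_score(raw_email: str) -> int:
    """Return a rough score; higher means more phishing indicators were found."""
    msg = message_from_string(raw_email)
    score = 0

    sender_domain = parseaddr(msg.get("From", ""))[1].rsplit("@", 1)[-1].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rsplit("@", 1)[-1].lower()
    if reply_domain and reply_domain != sender_domain:
        score += 2  # replies silently rerouted to another domain

    body = msg.get_payload()
    text = body.lower() if isinstance(body, str) else ""
    score += sum(1 for phrase in URGENT_PHRASES if phrase in text)  # urgent calls to action
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2  # links to a bare IP address instead of a named site

    return score
```

A score above an agreed threshold could route the message to quarantine for review; in practice this kind of logic belongs inside the spam filter, with training drills teaching people to spot the same signals by eye.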
There are various ways to control network traffic. One of the common ways to do this job is using Access Control Lists. There are three types of access lists:
• Router Access Control Lists (RACLs)
• Port Access Control Lists (PACLs)
• VLAN Access Control Lists (VACLs)
RACL is the best-known Access Control List; generally, when the ACL abbreviation is used on its own, it means RACL. A RACL is used to control traffic at layer 3. A Port Access Control List is used to control inbound traffic on layer 2 ports. It is used only in the inbound direction because there is a hardware limitation for the outbound direction. The last one, the VLAN Access Control List, is used to control traffic within a VLAN. The topology below will help explain all of these ACL types.

RACLs (Router Access Control Lists)

As mentioned before, RACLs are used for controlling layer 3 traffic. These ACLs can be applied in both the inbound and outbound directions. Below, the links on which RACLs can be implemented are highlighted.

RACL for both directions

Assume that we have a GigabitEthernet 1/0/1 port on our router and we will add a RACL to this interface for both the inbound and outbound directions. First we must define the RACL, and then we apply it to the interface. Here are the configuration commands… (a hypothetical example is also sketched at the end of this article). To verify the configuration and the RACL assignment to the port, use the following show commands:
show ip interface gigabitethernet 1/0/1
show running-config interface gigabitethernet 1/0/1

PACLs (Port Access Control Lists)

On layer 2 interfaces, PACLs are used instead of RACLs. PACLs are implemented only in the inbound direction because of the switches’ hardware limitations. Below, the ports on which PACLs can be implemented are highlighted.
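As a hedged illustration of defining and applying a RACL like the one discussed above, the sketch below pushes a small extended ACL to GigabitEthernet 1/0/1 using the netmiko library. The device address, credentials, ACL name, and permitted subnet are all hypothetical and not taken from the article; an outbound RACL would be applied the same way with the "out" keyword.

```python
from netmiko import ConnectHandler  # pip install netmiko

# Hypothetical device details, for illustration only.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

# Define an extended ACL, then apply it inbound on the routed interface.
racl_commands = [
    "ip access-list extended BRANCH-IN",
    " permit tcp 10.1.1.0 0.0.0.255 any eq 443",
    " deny ip any any log",
    "interface GigabitEthernet1/0/1",
    " ip access-group BRANCH-IN in",   # use "out" here to filter the outbound direction
]

connection = ConnectHandler(**device)
connection.send_config_set(racl_commands)
print(connection.send_command("show ip interface GigabitEthernet1/0/1"))
connection.disconnect()
```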
Excel continues to evolve, introducing features like TRIMRANGE and dot notation to help users manage and automate data more efficiently. These tools are particularly useful in avoiding common spreadsheet issues such as unwanted zeros and errors that occur when a formula is extended beyond the current data range. TRIMRANGE specifically targets these issues by excluding empty cells that might otherwise disrupt the visual presentation and accuracy of reports. The inclusion of dot notation as a parallel shorthand helps simplify formulas, making it easier for users to maintain clean and efficient sheets. As these features develop, they provide significant improvements for data management, but they also come with a learning curve and some limitations, as highlighted in educational resources and demonstrations. Users are encouraged to use example files and step-by-step video guides to fully leverage these new capabilities in their Excel projects.

David Benaim's YouTube video delves into the utility of the new TRIMRANGE function and dot notation in Excel, a pivotal tool for users looking to enhance their spreadsheet skills. This functionality is critical for streamlining complex formulas and ensuring cleaner data presentation without manual adjustments. One notable introduction is the TRIMRANGE function. Benaim explains that it automatically excludes empty cells below a data table. This means that when users extend a formula downwards, it doesn't drag along the unwanted zeros or error messages that typically show up when it reaches non-existent data, maintaining neatness and accuracy. This feature has an immediate impact on how formulas are designed, making spreadsheet management more efficient. The dot notation, demonstrated as "=B5.:.B8," offers a shorthand version of TRIMRANGE, though it isn't without limitations, which Benaim discusses in his video.

The video begins with an introduction to the problem of excess rows and columns affecting data presentation. Benaim then dives into how TRIMRANGE can correct these issues by demonstrating examples on an actual data set. He also contrasts TRIMRANGE with dot notation, discussing pros and cons along with suitable use cases for each option. Ensuing portions of the video address complementary tools that pair well with TRIMRANGE, such as XLOOKUP, and how to handle #SPILL! errors. He explains that these errors usually occur when data spills beyond its intended range and shows how to prevent them using the newer functions. Notably, Benaim shows practical applications of these functions with Pivots and charts, where clean, concise data pulling becomes crucial. He provides viewers with resources and downloadable example files to practice on, easing the learning curve and application of the discussed functions. Lastly, issues pertaining to real-world limitations of these functions are discussed, giving a holistic view towards the end. This includes their behavior in dynamic usage scenarios such as updating auto-populated tables or generating financial models.

The YouTube presentation by David Benaim empowers both novice and seasoned Excel users to optimize their spreadsheets using advanced functions like TRIMRANGE. As spreadsheets often include forecasts and projections, many cells will likely remain blank until new data populates them. Traditionally, managing these potentially error-prone formulas involves manual adjustments, a time-consuming task that risks human error.
Technological advancements in spreadsheet software focus on reducing these manual elements, making spreadsheet management not only more error-proof but also easier for the user. By mastering these functions, users can automate more of their tasks, leading to efficient data management and interpretation. This streamlining directly improves how businesses analyze their data, thereby enhancing decision-making processes. Moreover, the inclusion of functions that preemptively handle errors and maintain data integrity is a massive boon for users who rely on meticulously maintained records. Healthcare, finance, and retail sectors, where data often drives critical decisions, particularly benefit from such efficient data management tools. An understanding of how and when to use modern functions like TRIMRANGE and dot notation can drastically change the landscape of data management within Excel. Benaim's tutorial bridges the gap between complex functionality and user-friendly guidance, allowing viewers to leverage Excel's full potential in their respective endeavors. The separate TRIM function (not to be confused with TRIMRANGE) is designed to eliminate extra spaces from text. This function is particularly useful for cleaning up data imported from various sources by removing all superfluous spaces, including leading and trailing ones, while maintaining single spaces between words. The formula to use this function is =TRIM(text). Note that it only targets ASCII space characters. In Excel, TRIM helps improve the cleanliness of your text data. It effectively removes all forms of space characters except for single spaces between words. This function is highly beneficial when dealing with text from other applications that might contain inconsistent spacing.
Hardly a day goes by that you don’t hear about another loss of confidential information. These events typically occur due to inadequate physical security, missing or improper implementation of technology, failure to adhere to security procedures, or lack of awareness of potential vulnerabilities. Over the last 10 years companies have invested millions of dollars in keeping the bad guys out of their organizations. Unfortunately, today you have to assume the bad guys are already in. Enterprise Digital Rights Management is the best way to protect your confidential data. There are three main phases of data lifecycle management that need to be considered when developing a viable security strategy:

Protection of data at rest – this phase is commonly addressed using technologies such as full disk encryption. Basically, this method encrypts every bit of data that goes on a disk or disk volume. The term “full disk encryption” is often used to signify that everything on a disk is encrypted, including the programs that can encrypt bootable operating system partitions. But such software must still leave the master boot record (MBR), and thus part of the disk, unencrypted. There are, however, hardware-based full disk encryption systems that can truly encrypt the entire boot disk, including the MBR. Another technology that provides protection of data at rest is Enterprise Content Management (ECM). ECM refers to the technologies, strategies, methods and tools used to capture, manage, store, preserve, and deliver content and documents related to an organization and its processes. ECM can also provide some level of protection for data in transit and in use, as long as the data stays within the ECM application.

Protection of data in transit – the secure transmission of data in transit relies on both encryption and authentication: on hiding or concealing the data itself, and on ensuring that the computers at each end are the computers they say they are. Applications such as public and private key encryption, Secure Sockets Layer encryption, secure HTTP, secure email, and PCI requirements for financial transactions are typically employed for secure data transmission. Data Loss Prevention, or DLP, is a computer security term referring to systems that identify, monitor, and protect data in use (e.g., endpoint actions), data in transit (e.g., network actions), and data at rest (e.g., data storage) through deep content inspection and a centralized management framework. These systems are designed to detect and prevent the unauthorized use and transmission of confidential information.

Protection of data in use – organizations that protect data in the rest and transit phases remain at risk if files are not protected during use. Persistent file-level protection ensures that the file remains in the control of the author or company. Use of the file can be controlled by a wide variety of criteria, including editing, printing, access date range, number of views, and the locations from which the file can be accessed. The file owner can even revoke access after the file has been received.

Internal threats, whether intentional or unintentional, represent significant risks for most organizations. Nearly 60 percent of employees who quit a job or are asked to leave are stealing company data, according to a report by the Ponemon Institute. Data from lost or stolen laptops continues to cost companies tens of millions of dollars and expose customers and employees to additional risk. Closing these gaps will become a priority for all companies in the coming years.
Enterprise Digital Rights Management is the only application that addresses all three critical phases of data lifecycle management.
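To make the “data at rest” phase concrete, here is a minimal sketch of file-level encryption using the cryptography library's Fernet recipe. It is illustrative only: real full disk encryption operates below the file system, EDRM layers usage policies and revocation on top, and the file name shown is hypothetical. In practice the key would live in a key management system, not beside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a symmetric key; in production this would come from a KMS or HSM.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive document so the stored copy is unreadable without the key.
with open("design.docx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("design.docx.enc", "wb") as f:
    f.write(ciphertext)

# An authorised reader holding the key can later recover the original bytes.
with open("design.docx.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```

The point of the sketch is the boundary it draws: whoever controls the key controls the data, and persistent, file-level rights management extends that control into the “data in use” phase.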
February 26, 2018 | Written by: Matthias Biniok
Categorized: IBM Watson | Space Exploration

In June, German astronaut Alexander Gerst will embark on his second six-month mission to the International Space Station (ISS), serving as station commander in the second half of his stay. On this mission, Gerst and his team will receive some unusual support: CIMON (Crew Interactive Mobile Companion) will be on board – a medicine ball-sized device weighing about 11 pounds.

Photo: Courtesy of Airbus

CIMON is currently being developed by Airbus on behalf of the German Aerospace Center (DLR) as an intelligent, mobile and interactive astronaut assistance system. This new technology will be tested on the ISS as part of the Horizons mission of the European Space Agency. CIMON, using IBM’s Watson technology, will help astronaut Gerst to perform three tasks: together they will experiment with crystals, solve the Rubik’s Cube based on videos, and conduct a complex medical experiment using CIMON as an ‘intelligent’ flying camera. CIMON’s digital face, voice and use of artificial intelligence make it a “colleague” to the crew members. This collegial “working relationship” facilitates how astronauts work through their prescribed checklists of experiments, now entering into a genuine dialogue with their interactive assistant. The developers responsible for CIMON predict that this will help reduce astronauts’ stress and at the same time improve efficiency. In addition, CIMON helps enhance safety, because it can also serve as an early warning system in case of technical problems in the future.

How CIMON learns

CIMON is currently being trained to identify its environment and its human interaction partners. AI gives the space assistant text, speech and image processing capabilities, as well as the ability to retrieve specific information and findings. These skills, which can be trained individually and deepened in the context of a given assignment, are developed based on the principle of understanding – reasoning – learning.

Photo: Courtesy of Airbus

Watson speech and vision technologies helped train CIMON to recognize Alexander Gerst, using voice samples and images of Gerst as well as “non-Gerst” images. It also used the Watson Visual Recognition service to learn the construction plans of the Columbus module on the International Space Station so it can easily move around. CIMON also learned all the procedures for helping to carry out the on-board experiments. Experiments sometimes consist of more than 100 different steps; CIMON knows them all.

AI from the Cloud – proprietary data in a protected space

IBM Watson services run on the IBM Cloud, which provides a further advantage for users in general, and for use on the ISS in particular: sensitive, proprietary data can remain where it is created, such as in the protected area of your own server or database. You don’t need to upload it to an external cloud for it to be enriched with appropriate AI capabilities. The IBM model for data and privacy allows you to train your own AI models with Watson technology without having to integrate proprietary or sensitive data into a public model. No other company, no other organization – not even IBM – can use this data for the further development of AI applications. This ensures that users can keep their critical information private and proprietary. What’s more, a company’s intellectual property and data serve to enhance only its own competitive advantage.
This was one of the main reasons why Airbus chose IBM as its partner to develop CIMON. In the mid-term, the CIMON project will also be devoted to the psychological group effects that can develop in small teams over long periods of time and occur during long-term space missions. CIMON’s creators are confident that social interactions between humans and machines, in this case between astronauts and a space assistant equipped with emotional intelligence, could make an important contribution to mission success. We predict that assistance systems of this kind also have a bright future right here on Earth, such as in hospitals or to support nursing care.
Domain Hijacking or Domain Spoofing is an attack where an organization’s web address is stolen by another party. The other party changes the registration of another’s domain name without the consent of its legitimate owner. This denies the true owner administrative access. Scammers then use the legitimate web address for any purpose they choose. Domain loss can occur under mundane circumstances, such as when a domain name expires and another person quickly registers it. A true hijack of a domain happens when a domain’s legitimate owner unwittingly loses it. This occurs when they give up their Domain Name System (DNS) credentials as a result of a phishing or other social engineering scam. Other DNS hijackings occur when a partnership between people who share access to the DNS registration dissolves, and one party hurries to reset the access credentials, locking out the other party. Domain spoofing is a related but separate action. Here, the illegitimate party mimics the website at the true domain, doing whatever they like: destroying the reputation of the true business, collecting credentials and payment card data, sending spam, and generally abusing all domain-related privileges (including email control). "I responded to an urgent message about the expiration of our domain, but it wound up being a domain hijacking. Our website now shows really embarrassing content and I'm hearing of emails pretending to be me...saying inappropriate things."
In recent years, cyberattacks on world governments have been rife, and here in the United Kingdom, local authorities have also suffered assaults on their systems. Back in 2018, UK government agencies and local councils made the headlines when their dedicated websites were knocked offline in a hack that impacted thousands of sites around the world. In the next sections, we’ll explore this global event, taking a closer look at how it occurred and what the effects were.

Thousands of websites infected across the globe

Over 5,000 websites were hacked in order to force site visitors’ devices to run malicious software that mined a cryptocurrency called Monero, which is similar in nature to the popular Bitcoin. Here in the UK, users who loaded up the official government websites for the Information Commissioner’s Office (ICO) and the Student Loans Company (SLC), along with the local council sites for Croydon, Camden and Manchester City, had their devices’ processing power exploited by hackers. Across the Atlantic, the homepage for the United States Courts was also hijacked by the threat operators. Malevolent code for specific software called Coinhive was used in the widespread attack. Coinhive is a program advertised with the slogan “A Crypto Miner for your Website”. The hack involved the code running discreetly in the background, up until the point a webpage was closed. Scott Helme, a security researcher, was informed of the cyber strike by an associate who sent over antivirus software alerts he had received after accessing a website run by the UK Government. The associate commented to Helme that while the attack type was not new, it was the largest instance of it being deployed he had ever seen. By hacking a single organisation, the threat operators had impacted thousands of websites in the UK, the United States and Ireland. He later added that an Australian local government site using the software had been hacked as well. Helme explained that, unlike with Bitcoin, where client wallet addresses are all stored on a publicly accessible database, with this attack it was impossible to identify the location of the user account that was profiting from the malevolent code. The security researcher added that a simple method existed to defend against this type of attack: “Every single website I run has an ‘Integrity Attribute’, which is a tiny change in how the script is loaded but is there because I’m worried about exactly this type of thing happening.”

Malevolent code hidden in an accessibility plugin

The malicious Coinhive script was cleverly inserted into a commonly used third-party plugin designed for accessibility. Entitled BrowseAloud, it is employed to help both partially sighted and blind people access the internet more easily. Software developer Texthelp, which operates the exploited BrowseAloud plugin, announced to news services that its software product had been hacked and that the malicious code was active for a period of up to four hours. Data Security Officer at Texthelp, Martin McKay, commented at the time on its security measures and the action taken: “Texthelp has in place continuous automated security tests for Browsealoud, and these detected the modified file and as a result the product was taken offline. This removed Browsealoud from all our customer sites immediately, addressing the security risk without our customers having to take any action.
Texthelp can report that no customer data has been accessed or lost.” McKay also confirmed that Texthelp had enlisted the assistance of an independent security firm to act as consultant on the incident and carry out a comprehensive review of all the software developer’s internal systems.

Impact of the attack

The result of the far-reaching hack was that many official government websites were forced to be taken offline, including that of the Information Commissioner’s Office, the regulator that businesses must report data breaches and other cyberattacks to. In-house IT teams worked to resolve the problem, and the UK’s National Cyber Security Centre’s (NCSC) dedicated Incidents team was called in to investigate the case. A spokesperson from the NCSC commented: “Technical experts are examining data involving incidents of malware being used to illegally mine cryptocurrency. The affected service has been taken offline, largely mitigating the issue.” While no private data was stolen during the hack, with government websites non-functioning, many people were unable to access important services when required, until the sites were safely restored.

IT security lessons for local and national governments to learn

For governments to ensure they stay resilient and can keep official websites operational, it is essential that any third-party software or services pass stringent security checks. It is vital that any third party worked with shares the same cybersecurity protocols as the government department or local authority if sites are to remain protected. During the pandemic, online services and help centres have been key to keeping people safe and informed. This has made ensuring that government websites have 100 per cent uptime even more critical than back in 2018. To avoid members of the public being unable to access critical information and services, all sites should be safeguarded by a dedicated cybersecurity solution that will raise an alarm if an incident arises. Sites must be protected against infiltration, or personal data can be stolen or held to ransom, which can result in a lack of confidence in governments and put members of the community at risk.

Strong support against cyberattacks

At Galaxkey, we have created a secure system that can be employed by governments, enterprises and educational institutions to keep data safe and sites online. Our solution has zero backdoors and stores no passwords, blocking common penetration paths used by hackers. It also features powerful encryption software that allows emails and important documents to remain unreadable by hackers, while any alteration to documents will raise alerts with users, to identify issues. Get in touch with our expert team today to test our system for yourself with a free, two-week trial term that allows you to explore its innovative and dependable features.
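The “Integrity Attribute” Helme refers to is Subresource Integrity (SRI): a page declares a cryptographic hash of the third-party script it expects, and the browser refuses to run the file if its contents no longer match. As a small illustration (the file name here is hypothetical), the value that goes into that attribute can be computed like this:

```python
import base64
import hashlib

# Compute an SRI value ("sha384-<base64 digest>") for a third-party script.
with open("browsealoud.js", "rb") as f:
    digest = hashlib.sha384(f.read()).digest()

sri_value = "sha384-" + base64.b64encode(digest).decode()
print(sri_value)
# The value is placed in the script tag's integrity attribute, so a tampered
# copy of the script, like the one in this incident, would simply fail to load.
```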
The term user, within the context of the application domain, has always referred to the entity that interacts with an application. User acceptance testing was, at one time, one of the final stages of application development during which the people who would be interacting with the application sat down to determine whether it was acceptable and met their business requirements. As technology has evolved, so too has the definition of user. This evolution can be attributed to the maturity of the Internet and the rapid increases in computing power. The observation that our smartphones hold more computing power than was used to reach the moon in 1969 is only amusing because it’s true. Advances in hardware design have led to the ability to cram incredible amounts of compute power into very small components. Just as shifts in application architectures drive changes in the technologies that deliver and secure them, shifts in the definition of user have driven changes in where applications are deployed. In the Data Center era, users were almost exclusively human beings who accessed applications in the workplace. Hence, data centers were nearly always collocated with the business they supported. During the Cloud era, the Internet enabled a new operational model—public cloud—to better serve a new category of users who accessed applications from home. Applications needed to be more broadly and easily accessible from multiple locations because users were now accessing them from multiple locations. Today, new categories of users are being added that span software, machines, devices, and sensors in addition to human beings. These users access applications from anywhere and everywhere. As of November 2020, in the US alone, 45.38% of web traffic originated from mobile phones. The average American boasts more than ten different connected devices. Televisions, appliances, and even our light bulbs are using applications via the Internet. This change in definition and distribution of users is a significant driving force behind edge computing. Consider Cisco’s Annual Internet Report analysis and forecast that predicts that "by 2023, there will be more than three times more networked devices on Earth than humans. About half of the global connections will be machine-to-machine connections and the M2M space will be dominated by consumer-oriented 'things' in smart homes and automobiles." (RCRWireless News) Business has always sought to provide applications that meet users where they are. Business now needs to meet users of all kinds at the edges of the Internet. The applications that enable humans and machines to conduct business and execute designated tasks need to be closer to the users that interact with them. One of the top reasons for this is the universal need for speed. Whether it’s a need for application performance or for a rapid response to instruct a device, speed is something both humans and machines look to edge computing to provide. To wit, two of the top three use cases for edge according to respondents in our annual research are: From this, one can infer that existing speeds are not fast enough. One reason is the composition of applications, which today incurs a great deal of latency. With myriad components, each requiring time to look up and retrieve, it’s no surprise that application performance continues to plague brands and users alike even as network speed and capacity have steadily increased. 
And though compute power has dramatically increased over time, the network continues to determine just how fast we can move data across the network. With multiple human users and even more machine and system users per household, increasing available bandwidth isn’t solving the equation needed to achieve better performance. In many cases, it’s simply not possible to increase bandwidth and network speeds due to the laws of physics and economics. Moving applications—particularly those that process and analyze data—closer, then, is one solution business can leverage because application location is the most flexible variable in the performance equation. We are moving into an era in which applications must be as mobile as their users. An era in which data centers and public cloud have a role, but not as the "final destination" for deployment. Instead, enterprise and cloud data centers will serve as sources of compute, network, and storage that can be made part of a larger, more flexible mesh of resources that spans locations across which applications can move fluidly and on demand. That era is the edge era, and we believe the platform that will enable it is Edge 2.0.
Environmental effects of computer manufacturing and disposal will soon become part of the price of maintaining the enterprise IT portfolio, under legislative efforts gaining momentum throughout the world. Manufacturers face technical challenges, and buyers may need to reconsider accounting methods and timetables for equipment replacement as IT’s environmental costs come home to roost. In its annual “State of the World” analysis for 2004, the Worldwatch Institute, in Washington, calls every personal computer “a toxics trap.” CRT displays, the report observes, contain hexavalent chromium—the pollutant made famous by activist Erin Brockovich—in addition to their better-known payload of lead, which readily leaches into groundwater when monitors are discarded in landfills. The institute’s report further illuminates the toxic content of PCs. Resistors contribute cadmium; connectors add beryllium; plastic cases and circuit boards contain various plastics, including the difficult-to-recycle polyvinyl chloride, that are often laced with bromine-based flame retardants. And by next year, the institute predicts, one computer will be discarded for every new computer purchased in the United States. The good news, if one can call it that, is that more than two-thirds of discarded computers go into storage for lack of suitable disposal sites. The bad news is that even so, discarded computers and other electronic waste contribute more than two-thirds of the heavy metals input to U.S. landfills, as estimated by groups including the Silicon Valley Toxics Coalition and the National Safety Council.

Exporting the Problem

Exported electronic waste, meanwhile, is becoming a serious pollutant in developing countries, where component materials are reclaimed by crude methods that largely ignore workers’ health. In a report last month on the “Earth Files” program produced by the British Broadcasting Corp., a toxicologist with the International Solid Waste Association described the cottage industry of computer recycling in India, saying, “You’ve got lead being taken on to people’s clothing, you’ve got lead being taken on to people’s hands. Quite often in these small workshops, people have small smelters or ovens [with] no fume extraction. … Not only have people got this waste in solid form, they’re also breathing it in.” Lead is an accumulative poison, meaning it can build up in the body over periods of many years. Australian occupational safety and health guidelines estimate that 30 percent of swallowed lead is absorbed by the body, along with 70 percent of inhaled lead. Obvious symptoms include headaches and joint pain, but stealthier and more severe consequences include kidney damage, nervous system damage, and sterility or birth defects. Two major manufacturers of semiconductor chips recently announced measures to reduce or eliminate lead from their products. Intel Corp. will seek a 95 percent reduction by next quarter, and National Semiconductor plans to be lead-free by year’s end. Japan’s NEC Corp., including subsidiary NEC Electronics Corp., seeks lead-free production by March 2006. This deadline looks as if it’s aimed at compliance with the July 1, 2006, effective date of the European Union’s Restriction of Hazardous Substances Directive, which will limit the use of lead and other materials in new electrical and electronic equipment. Interpretation of that EU directive was muddied, however, by a committee meeting late last month.
At that meeting, specific limitations on lead were discussed, and it was not clear at that time whether limits on lead as a percentage of weight would apply to components or to fully assembled items. EU member states require clarification before they can write their own enabling laws. Eliminating lead from electronic components is no small task, as noted by Melissa Grupen-Shemansky, director of packaging and interconnect technology at Agere Systems Inc., in Allentown, Pa. Lead-free solders, she said, in comments on the company’s Web site, require higher temperatures—on the order of 500 F compared with roughly 420 F for conventional lead-containing solders. More troubling, Grupen-Shemansky said, is the tendency of lead-free solders to form crystalline “tin whiskers” that can grow long enough to create short circuits among components. In one test described by Grupen-Shemansky, whiskers bridged one-third of the way across a 200-micron gap between chip leads after only five weeks of storage at 140 F and 93 percent humidity. These are not typical indoor conditions but not unlike what might be found in a warehouse. If lead must be eliminated, she said, then other materials such as nickel may form an effective barrier against whisker formation. It seems likely to eWEEK Labs that this will become an area of competition among electronics manufacturers as regulators demand reductions in toxic material use. In addition to reducing toxic input to new equipment, and thus to the waste stream, regulators in the United States and Europe are exploring means to place accountability for downstream costs with builders and users of IT gear—rather than leaving them, as they are now, to be absorbed by municipalities and developing countries. The EU’s Waste Electrical and Electronic Equipment Directive will mandate, among other measures, the free return of old equipment when comparable new equipment is purchased after Aug. 13, 2005, with producers paying for subsequent “environmentally sound disposal.” In the United States, Rep. Mike Thompson, D-Napa Valley, Calif., proposes to avert proliferation of inconsistent state laws with a program administered by the EPA (Environmental Protection Agency) at the federal level. Issuing grants to governments and private organizations for computer recycling programs, Thompson’s federal plan would be supported by fees of up to $10 collected on sales of individual computers, monitors and laptops. Said Thompson of his proposed bill: “We can’t afford to continue endangering our health and our environment and packing our landfills by ignoring the problems created by computer waste.” Technology Editor Peter Coffee can be reached at firstname.lastname@example.org.
Yes, you too can create an interface between your brain and a robot. Jedi-style mind control may soon be in your grasp. BCI, or brain computer interface, technology reads the electrical signals in your brain and muscles and uses that to control and manipulate connected items in the real world. The technology is already approved by the Food and Drug Administration for use in artificial limbs. Anyone with an interest in this futuristic tech can explore it. Open BCI, a collective of engineers and artists, has created affordable open source hardware that allows everyone to experiment with creating an interface between their brain and a computer. "It's really awesome to see people witnessing their own brain activity for the first time," said Conor Russomanno, co-founder of Open BCI. While the technology can't allow you to convince others that "these are not the droids you're looking for," it can let you control and steer a toy robotic spider using only your mind. To learn more, check out the video below, from Wired:
Security operations centres (SOCs) have emerged as critical bastions in the fight against cyber threats. With online attacks becoming more sophisticated and frequent, their importance in protecting an organisation's digital assets has grown exponentially. These centres stand as the vanguard, ensuring robust security measures to counteract potential threats. This article will explore the depth of SOC infrastructure and its indispensable role in fortifying network and cloud security.

What Is a SOC?

A SOC is a centralised unit that monitors, assesses, and defends an organisation's information systems from cyber threats. Its primary functions encompass the continuous surveillance of security events, identification of malicious activities, and swift incident response.

SOC in Network Security

Network security is a subset of the broader cybersecurity framework, and a SOC helps ensure that your network remains resilient against various forms of web-based threats. Below are some of the ways SOCs contribute to network security:

| Aspect | Role in Network Security |
| --- | --- |
| Monitoring | Continuous real-time surveillance of network traffic for early detection of anomalies and potential threats. |
| Incident Response | Rapid containment and mitigation of network-based threats, which may include isolating affected systems. |
| Configuration Management | Ensures that network security tools like firewalls, IDS, and IPS are properly configured to maximise protection. |
| Threat Intelligence | Utilises up-to-date information on emerging threats to proactively adjust security measures. |
| Vulnerability Assessment | Regularly scans the network to identify and patch security vulnerabilities. |
| Logging and Reporting | Maintains detailed logs of all network events and incidents for forensic analysis and compliance purposes. |
| Compliance | Ensures the network's adherence to industry regulations such as GDPR, HIPAA, or PCI-DSS. |
| User and Entity Behaviour Analytics (UEBA) | Employs UEBA to detect abnormal behaviour patterns in the network that could indicate a security issue. |
| Automation and Orchestration | Employs Security Orchestration, Automation, and Response (SOAR) tools to handle common threats, allowing human operators to focus on more complex issues. |
| Training and Awareness | Educates staff on network security best practices, aiding in the human element of cybersecurity. |

Benefits of a Robust SOC Infrastructure

The implementation of a SOC brings with it a myriad of advantages that significantly fortify an organisation's cybersecurity framework. Here are some of the primary benefits:

| Benefit | Description |
| --- | --- |
| Real-Time Monitoring | Provides 24/7 oversight of networks, systems, and data for early detection of security threats. |
| Improved Compliance | Helps meet industry-specific compliance standards such as GDPR, HIPAA, or PCI-DSS. |
| Proactive Threat Hunting | Actively searches for indicators of compromise that might go unnoticed, providing a proactive security approach. |
| Enhanced Incident Response | Specialised teams follow well-defined protocols for each type of threat for quick and effective response. |
| Centralised Security | Consolidates data from multiple sources for easier correlation and pattern recognition. |
| Expertise and Specialisation | Staffed by experts in various cybersecurity domains, ensuring high-level skills in tackling security incidents. |
| Cost-Effectiveness | While initial setup costs are high, the long-term benefits in terms of reduced security incidents often outweigh the investment. Outsourced SOCs are also a viable option. |
| Data and Business Continuity | Helps in maintaining business operations by preventing and mitigating cyber-attacks. Also assists in data backup and recovery. |
| Improved Customer Trust | Demonstrates a commitment to security, thereby enhancing the trust and confidence of clients and stakeholders. |
| Strategic Decision-Making | Provides valuable insights into the web-based risk landscape, aiding senior management in resource allocation and strategic planning. |
| Reduced Alert Fatigue | Centralised monitoring and specialisation help filter out false positives, reducing the occurrence of 'alert fatigue' among IT staff. |

Key Components of SOC Infrastructure

A robust SOC infrastructure is a synergy of cutting-edge technology, skilled personnel, and streamlined processes, all working to safeguard an organisation's digital assets.

Outsource SOC Services to Microminder

Do you need to monitor your network traffic to prevent cyber-attacks? Do you want to set up a SOC without assembling an in-house team? Microminder has got the answer. We are a top-rated cybersecurity provider with a squad of security specialists with expertise in various industries.

Conclusion

The significance of a robust digital security infrastructure cannot be overstated. As the bedrock of cybersecurity, a well-structured SOC is imperative to fend off threats and keep your business secure. However, establishing and maintaining an optimal SOC can be daunting for many organisations. This is where we shine. At Microminder, we offer bespoke solutions tailored to your needs. Our unparalleled expertise and cutting-edge technology fortify security and empower businesses to channel their energies towards growth and innovation. Entrusting digital security responsibilities to us is a strategic move towards ensuring a fortified and secure operational environment.

What is SOC in network security?

SOC in network security is a dedicated hub that continuously monitors, detects, and responds to potential cyber threats within an organisation's network infrastructure.

What is SOC in cloud security?

In cloud security, SOC refers to a specialised centre that oversees and manages the safety of cloud-based assets, ensuring data protection and compliance in cloud environments.

What are the key components of a SOC?

The key components include advanced security tools, a team of cybersecurity experts, real-time threat intelligence, incident response protocols, and continuous monitoring systems.
<urn:uuid:76affbb8-ab46-4319-a70e-23cb47561184>
CC-MAIN-2024-38
https://www.micromindercs.com/blog/soc-infrastructure-and-security
2024-09-17T22:36:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00629.warc.gz
en
0.905318
1,308
2.5625
3
Quick tech specs
- Capacitive touchscreens respond to the stylus without the need for excessive pressure
- The stylus is an extremely helpful device intended for improved operation of the tablet

ONE CRAYON. WORLDS OF POSSIBILITY.
Unleash what's possible in your classroom. Crayon empowers students to write, draw, create, and learn however they do best.

CRAYON FOR EVERY STUDENT
Crayon opens up whole new learning avenues for all students, regardless of age, subject, or learning style. The kid-friendly design and pixel-perfect technology let students write, take visual notes, draw idea graphs, and craft illustrations. This allows plenty of opportunity for adapted learning, so students can solidify and show their knowledge their own way, and reach their full potential.

KEY APPLICATIONS

ALL DISCIPLINES. BETTER RESULTS.
Good note taking is a skill that crosses disciplines, and it's a fact that students learn when they write. Writing and annotating can help improve their critical thinking, writing, and listening skills, which is tremendously valuable for their learning and development.

COLLABORATE WITH CRAYON
Crayon can open doors for student-to-student collaboration, working alongside a growing list of digital apps. It makes student work more legible, and allows students to revisit their assignments without having to dig through papers. With classrooms becoming more and more paperless, it's a valuable tool.

CRAYON FOR STEM
The right tools can build confidence and excitement for students, even in STEM, empowering them to succeed. Crayon allows STEM students to effectively work through equations, illustrate complex scientific concepts, or make annotations.

CRAYON FOR DESIGN
Empower students to express themselves and learn intuitively with a tool that lets them unleash their creativity. Whether it's presentations, illustrations, or all manner of visual learning, students get a boost with Logitech Crayon.
<urn:uuid:fe404260-ebe8-4a56-9094-4fcc802bfcc6>
CC-MAIN-2024-38
https://www.cdwg.com/product/logitech-crayon-stylus/7387745
2024-09-19T04:24:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00529.warc.gz
en
0.921512
408
2.84375
3
For many years IT security was seen as just one small part of what IT teams did. Since the explosive growth in cyber attacks, this is no longer the case for many organizations. IT security roles are kept separate from the other roles in IT, recognizing that cybersecurity and IT security require specialist skills and dedicated roles. In this article, we will examine the roles and responsibilities of IT security and explore some typical cyber security roles.

Roles and responsibilities of security team
Within the field of IT, the roles and responsibilities of security teams tend to focus on the technical aspects of protecting against cyber threats. Other non-IT roles tend to worry about countering the information security threats that aren't technology-based, such as storing physical documents and securely sending information by post. Providing the functions of information security to an organization requires considering all aspects of information security responsibilities to maintain the confidentiality, integrity, and availability of all data, irrespective of how the data is stored and transmitted.

The individual activities included in the roles and responsibilities of IT security can vary from organization to organization, depending on factors including:
- The size of the organization.
- Its structure.
- The technologies in use.
- The type of business.

This is also true for the information security and cybersecurity roles and responsibilities of an organization. It is important that the individual IT security roles, cyber security roles, and information security roles are all clearly defined, communicated, and understood by all stakeholders. One individual can, of course, take on multiple roles. The roles must be defined so that there are no overlaps in responsibilities. They should be underpinned by an information security roles and responsibilities policy, setting out the rules and controls that are necessary for effective information security. It is a good idea to publish a well-defined and understandable organizational chart that shows the structure of your IT security team and how it fits within the wider organization.

Here are some examples of commonly seen IT security team roles and responsibilities:

Gaining and maintaining buy-in for IT security from the top of the organization is crucial to success. Not only will this help to secure the necessary funding, but it will also demonstrate to all employees that the organization is taking IT security seriously. The executive-level roles that are accountable for all aspects of IT security include:
- CISO (Chief Information Security Officer)
- CTO (Chief Technology Officer)
- CRO (Chief Risk Officer)
- CSO (Chief Security Officer)
These roles acting together have the responsibility for ensuring the development and use of an enterprise information security strategy that ensures the protection of all information assets.

IT Security Professionals: These roles are responsible for designing, implementing, managing, and maintaining the organization's security policies, standards, baselines, procedures, and guidelines. Example role titles include:
- IT security manager.
- IT risk manager.
- IT security analyst.

Users are responsible for adhering to the organization's IT security policy, including preserving the confidentiality, integrity, and availability of assets under their personal control. Users are often the most neglected role in IT security, even though they create the greatest vulnerability to the organization due to cyber-attacks delivered by email and social media.

IT security can't operate in a vacuum; it must be part of an overall information security provision. In addition, defining the roles and responsibilities of security team members is not enough to safeguard all data assets. A good information security roles and responsibilities policy will also take into account roles that are specifically concerned with the data. These roles should work with the IT security teams, not in isolation, and include data owners and data custodians.

Every element of data should have an owner. For some pieces of data, such as an individual user's name, the owner will be obvious. It will be less obvious for many other pieces of data that the organization relies on, particularly data used by many different people. Efforts must be made so that all data has clearly defined ownership. Data owners are responsible for:
- Ensuring that appropriate security is in place for the data.
- Deciding on the sensitivity level for the data.
- Determining appropriate data access privileges.

Data custodians are responsible for taking care of data on behalf of the data owners. An example from outside IT is the person who looks after the key to the safe. Within IT security roles, typical data custodians include database administrators and network administrators.

Documenting IT security roles
Documenting job descriptions helps ensure that all staff understand their roles and responsibilities and contribute to IT security. New staff should undergo induction activities that use the job descriptions to help the employee understand what these responsibilities are and how they fit into the overall information security model. This is true whether they are taking one of the IT security roles, cyber security roles, or any other role in the organization. IT security can only be as strong as the weakest link, and every employee has a role to play in providing protection against cyber threats. The job descriptions for IT security roles, in fact for all roles, should be kept up-to-date, with regular review to ensure that they are still relevant and appropriate. They should also be reviewed following any breach of security as part of a lessons learned activity.

Organizational charts are a useful way to show how IT security is organized and how it fits with the rest of the organization. Charts are usually depicted as a tree, with the highest-level roles at the top underpinned by the roles that report upwards. The purpose of these charts is to create an easily understood view of the organization's hierarchy, allowing all employees to understand the lines of authority, the relationships between other individuals and teams in the organization, and to whom they need to report any issues.

I hope that this article has given you insight into the roles and responsibilities of IT security teams. Whichever specific roles you choose for your organization, IT security has to be seen as a crucial part of your defense against the ever-growing threat of cyber attacks. Creating IT security roles can no longer be done as an afterthought in designing the structure of your IT department. Security has to be at the heart of everything that you do if you want to survive in today's business environment.

Frequently asked questions
Q: What is the role of an IT security professional?
A: The role of an IT security professional is to protect an organization's computer systems, networks, and data from potential threats and vulnerabilities. They implement security measures, monitor for incidents, and respond to security breaches.
Q: Why is access management important in IT security?
A: Access management is crucial in IT security as it ensures that only authorized individuals have access to sensitive systems, data, and resources. It involves implementing strong authentication methods, user provisioning, and regularly reviewing access rights to prevent unauthorized access.
Q: How does an IT security team keep up with emerging threats?
A: An IT security team stays updated with the latest security threats and trends. They continuously assess the organization's security posture, implement proactive security measures, and collaborate with industry peers to share information and best practices. They also monitor threat intelligence sources to detect and respond to emerging threats.
Q: What role does employee awareness play in IT security?
A: Employee awareness plays a vital role in IT security. It involves educating employees about potential security risks, best practices for secure behavior, and the importance of adhering to security policies and procedures. Well-informed employees act as the first line of defense against social engineering attacks and other security threats.
Q: How does IT security support regulatory compliance?
A: IT security plays a vital role in regulatory compliance by implementing measures to protect sensitive data and ensuring adherence to relevant industry regulations and standards. This includes implementing access controls, encryption, and security monitoring to meet compliance requirements, such as the General Data Protection Regulation (GDPR) or Payment Card Industry Data Security Standard (PCI DSS).
<urn:uuid:d5de2ce1-9e1a-430c-81cb-94f2af2b9d24>
CC-MAIN-2024-38
https://itchronicles.com/information-security/what-are-the-roles-and-responsibilities-of-it-security/
2024-09-20T10:34:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00429.warc.gz
en
0.950796
1,569
2.59375
3
Creator: University of Michigan
Category: Software > Computer Software > Educational Software
Topic: Arts and Humanities, Music and Art
Tag: application, Code, design, language, python
Availability: In stock
Price: USD 49.00

Why should a designer learn to code? As our world is increasingly impacted by the use of algorithms, designers must learn how to use and create design computing programs. Designers must go beyond the narrowly focused use of computers in the automation of simple drafting/modeling tasks and instead explore the extraordinary potential digitalization holds for design culture and practice. Structured around a series of fundamental design problems, this course will show you Python code in terms of its rules and syntax, and what we can do with it in design applications. So, by the end of this course, you will know the fundamentals of Python and RhinoScript, but importantly, through the lens of their application in geometrically focused design lessons and exercises.

Subjects covered in this course:
– An introduction to Design Computing as a subject and why designers should learn to code.
– The fundamentals of coding in the Python scripting language. By the end of the course students will be familiar with the basic structure and syntax of this language.
– The understanding and application of rhinoscriptsyntax, a native coding language in Rhinoceros that's imported into Python, which allows one to create and control geometries through authoring code.
– The application of procedural logics – the structuring of coding systems to produce variable geometric form (a short illustrative sketch follows below).
– The output of geometries in still and animate forms.
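To give a concrete flavour of this kind of exercise, here is a minimal sketch that is not taken from the course materials; it assumes Rhino's built-in Python editor, where the rhinoscriptsyntax library is available, and the function name and parameters are invented for illustration.

```python
# A simple procedural-logic exercise: a grid of circles whose radius follows a
# rule, so a small change to the rule produces a different geometric outcome.
import rhinoscriptsyntax as rs

def circle_field(rows=5, columns=5, spacing=10.0):
    """Add a rows x columns field of circles to the current Rhino document."""
    for i in range(rows):
        for j in range(columns):
            center = (i * spacing, j * spacing, 0)   # point on the XY plane
            radius = 1.0 + 0.5 * (i + j)             # rule-based variation
            rs.AddCircle(center, radius)             # creates the circle curve

circle_field()
```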
<urn:uuid:9740fac0-a936-496b-981e-d4fe97a11a30>
CC-MAIN-2024-38
https://datafloq.com/course/design-computing-3d-modeling-in-rhinoceros-with-python-rhinoscript/
2024-09-08T07:28:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00629.warc.gz
en
0.908202
346
3.28125
3
A Barrier to Terrorism or the Public? Terrorism takes many forms, but the most potent attacks come when everyday items are transformed into weapons of destruction. Cities the world over are faced with such attacks, and their responses will shape not only safety practices but the development of those cities as a whole. As more and more metropolitan areas add barricades and additional security measures, it comes at a cost. Pedestrian traffic and public enjoyment might be limited by barriers, but what is being sacrificed, and can those concerns be mitigated? Around the World in 80 Barricades Many European cities have been the focus of terrorist activities, and it has taken its toll on their citizens. Britain, France and Germany have all stepped up their security infrastructure by incorporating portable and permanent barriers throughout major municipalities. Much of the main focus has been establishing barriers to vehicular attacks while still allowing access by emergency vehicles and personnel. Some attempts are meant to also preserve the inherent beauty of these cities. For example, in Florence, Italy, security bollards are being installed in tandem with oversized flower pots to create a visually pleasing blockade. Such efforts both provide a sense of security without distraction alongside actual stopping power if the need arises. Even with an eye on aesthetics, it’s difficult to establish safety measures that don’t restrict foot-travel movement, even in France where significant steps have been made towards making Paris a pedestrian-friendly city. To safeguard the Eiffel tower, its base has been surrounded by a glass wall, significantly restricting access while only moving the area of attack a short distance away. Other cities have found that permanent barriers and even temporary barricades create flow problems both with vehicular and pedestrian traffic. Every metropolitan area must make that choice between public safety and freedom of movement. An Alternate Choice The best way to balance these ideas is to incorporate types of barricades that can be as flexible as each city needs them to be. For example, Delta Scientific offers a host of portable barricades and bollards that are stylish, strong and convenient. By incorporating temporary solutions, cities can work around traffic issues as they arise and allow their security plans to evolve depending on the situation. If one layout has unexpected consequences, a new design can be implemented with little work. Fluid security layouts have excellent advantages, especially in cities where the public needs the freedom to be themselves while they also require the safety of barricade protection. Contact Delta Scientific today for help planning your next security layout, and get the best defense available today! Share This Story, Choose Your Platform!
<urn:uuid:017a35e6-dd4f-4e0e-81fc-c106f27a1cb4>
CC-MAIN-2024-38
https://deltascientific.com/2020/02/12/a-barrier-to-terrorism-or-the-public/
2024-09-10T18:45:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00429.warc.gz
en
0.952268
525
2.578125
3
What is the GDPR?
The General Data Protection Regulation is a framework of legal guidelines for the collection and processing of personal info of individuals within the European Union. Or, in short, it's a set of rules that companies need to follow to collect and protect a user's data. As of May 25th, 2018, any company, group, or individual that handles European data, or resides in Europe, must comply with GDPR. This goes for massive players like Google down to bloggers who collect email addresses for their newsletter.

Four terms to understand:
Personal Data – any identifying information. This includes submitted information such as your name, email, SIN, address, phone number, biometrics, or account numbers. It also includes information that could be used to identify you indirectly, such as location data.
Data Subject – simply put, it is the user or the person who is identified by the information.
Data Controller – the person or company who determines the purpose and use of the collected data.
Data Processor – the person or company who processes the data. This includes analytics, marketing, as well as storage such as cloud services.

The controller and processor can be the same or separate. For example, my company collects email addresses for our newsletter. That makes us the controller. We can choose to store these addresses on our own machines; that makes us the processor. Alternatively we can choose to store them in Mailchimp, which is a newsletter app, or perhaps on a document in Google Drive, which is the cloud. Now we have chosen an outside processor. As the controller, we are responsible for GDPR compliance for both our company and our chosen processor.

Security and Privacy Features
The full regulation is over 100 printed pages long. It includes your rights as a data subject as well as regulations around how controllers and processors are required to protect your data. The basis of GDPR is the User Rights, second to that is the emphasis on consent, and finally privacy incident response. Here are 5 highlights from the regulation:

Right to be forgotten – This is the most talked about and least understood. It is the right for a user to retract their data from storage or processing, from any company, at any time. When the Cambridge Analytica scandal broke with Facebook in 2018, people wanted to delete their accounts, but there was no regulation to dictate that Facebook had to delete their data as well. This ruling would have forced both Facebook and Cambridge Analytica to delete the data they had on any qualifying individual that requested it (Facebook as the Controller and CA as the Processor). Note: This right is not an opportunity to have unflattering articles or reviews removed. The rule allows for personal mentions if they fall under freedom of expression, public interest, public health, or research.

Right of access – As a data subject, this is your right to ask about the purpose for the collected data, the processors involved, and even whether the data is being manipulated with artificial intelligence or machine learning. All these answers *should* be covered in the new consent request (see below).

Right of restriction of processing – You know those pesky ads that follow you from one website to another? That's called direct marketing. The restriction of processing means you can indicate specifically that you do not want your data used in direct marketing campaigns.

Consent – For all data collection, the data subject has to have the ability to both opt in AND withdraw consent. Controllers also have to present the information to support the right of access. I like to break these down as the 5 Ws:
- WHO – Details of the recipients of the data, including links to the controller and names of processors
- WHAT – List of the data being collected
- WHY – Reason for the collection (known as legal basis)
- WHEN – The duration for which the data will be retained
- WHERE – Clear links provided so the user knows where to go to enact requests

Privacy Incident Management and Breaches – In terms of protection of data, this is a big one. In the past there was NO regulation that a company had to report a privacy breach. Uber took 6 months to report their 2018 data breach. Now compliant companies have to report any breaches within 72 hours of their knowledge.

Technically the GDPR only applies to citizens in the European Union. In fact, the UK doesn't even fall under these rules, though they do support a Data Protection Bill, which is similar. Check the links below for the related terms and service agreements.

For Businesses and Corporations
Any organization that "processes or stores large amounts of personal data, whether for employees, individuals outside the organization, or both" is required to designate a Data Protection Officer. That person is responsible for ensuring compliance with GDPR. If a company is caught in non-compliance then they face a fine. Depending on the infraction, a tier 1 offence results in a fine of the higher of 2% of the company's worldwide gross revenue or 10 million euros. A tier 2 offence is the higher of 4% of global revenue or 20 million euros. As an example, when Equifax was breached in 2017 they suffered no penalties. Had GDPR been in place they would have owed 67 million dollars in fines.

Every applicable company needs to run a DPIA, or Data Protection Impact Assessment, that includes an explanation of why they are collecting the data requested, an assessment of risks to the rights and freedoms of data subjects, and documented proposed measures for the safety and security of the collection.

Global Privacy Regulation Compliance is largely what WE do as a company. From DIY checklists to templates to having us do it for you, contact us for more information on how we can help.

For Small Business, Charities, & Clubs
Unfortunately even small businesses, groups, not-for-profits, and charities fall under these regulations. If you run or are part of a group that collects information (newsletters, databases, list serves, forums, etc.) then this could apply to you. Fortunately most of the individual tools that small business uses, like cloud servers, newsletters, and CRMs, have updated their terms to comply. What you should do:
- Make a list of all of the software and services you use (good to have this anyway)
- Consider each one for data collection, storage, and processing
- For those that do, type the name of the service and 'GDPR' into Google for instructions

Loads of Links
For a full picture of what compliance looks like for GDPR, download our FREE GDPR overview page of all ten areas on which you need to focus. After having gone through multiple sites, here are several you may find useful:
- The official GDPR site
- The entire GDPR broken down nicely by links
- The GDPR broken down by chapter and in plain English
- Facebook group: GDPR for entrepreneurs – as written by a lawyer

Actions for Businesses that Collect Data
- Getting consent in Google For Business – as controllers. Includes ads, apps, sites
- Google's G-suite and Cloud Agreements – as processors
- AWS GDPR site
- Rules for Email Marketers
- Mailchimp's New Consent Collection
- Facebook for Business
- LinkedIn for Business – includes marketing, sales, developers

Individual Terms and Services for Networks
- Instagram (now combined with Facebook)
- LinkedIn – link for their new terms, coming soon.

If you want help from our consultants, have any questions, or find something we've missed, let us know!
<urn:uuid:38e34e8e-9c26-4473-b4a2-fdb613c2e6c0>
CC-MAIN-2024-38
https://www.binarytattoo.com/guide-gdpr-general-data-protection-regulation/
2024-09-10T17:09:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00429.warc.gz
en
0.9489
1,567
3.140625
3
What is a 25G CWDM Transceiver? The 25G CWDM transceiver plays a crucial role in modern telecommunications, particularly in 5G fronthaul networks. Operating at 25 Gbps, it utilizes CWDM technology to transmit multiple signals over one fibre, optimising bandwidth. Specifically designed for 5G fronthaul, it supports 25G Ethernet and CPRI/eCPRI, with impressive 10km link distances over single-mode fibre. Fully compliant with SFP28 MSA, CPRI, and eCPRI standards, it typically operates within the wavelengths of 1270nm-1370nm and 1470nm-1570nm. If you want to know more about the differences between the 25G CWDM module and other 25G SFP28 modules, you can check out this article: 25G SFP28 Transceiver Module Overview. What is the 5G Transport Network? The 5G transport network encompasses fronthaul, midhaul, and backhaul, connecting cell sites with one another, then with the core network, and ultimately with data centres. As 5G technology continues to evolve, the significance of “fronthaul” in the telecommunications industry is on the rise. This fiber-based link, integrated within the Radio Access Network (RAN) infrastructure, plays a pivotal role in achieving faster speeds and reduced latency. With the introduction of Distributed RAN (DRAN) and Centralized RAN (CRAN) approaches, base station components such as the Central Unit (CU), Distributed Unit (DU), and Active Antenna Unit (AAU) are undergoing substantial restructuring to meet evolving requirements. Fronthaul acts as the vital connection between the active antenna unit (AAU) and the distributed unit (DU), ensuring smooth communication and efficient data transmission. Innovations like the 25G CWDM SFP28 transceiver are essential for facilitating seamless communication and efficient data transfer across 5G fronthaul networks. Midhaul is a vital element of the telecommunications network, acting as the intermediary between the fronthaul and backhaul segments. It encompasses the transmission path from the Distributed Unit (DU) to the Centralised Unit (CU). In the context of 5G networks, base stations are structured into a distributed architecture. Here, the DU oversees the transmission and reception of wireless signals, while the CU manages communication with the core network. Acting as a pivotal link between these two units, midhaul facilitates the transfer of data from the DU to the CU for further processing and dissemination across the network. In addition to fronthaul and midhaul, the 5G transport network also includes the backhaul. This component consolidates access to traffic from the Radio Access Network (RAN) and utilises various technologies such as Ethernet, microwave, and optical fibre to transport it to the central office or data centre. The backhaul serves as a crucial link, connecting the fronthaul and midhaul to the core network, facilitating seamless data transmission across extensive distances. Utilisations of 25G CWDM Transceivers In the initial stages of setting up 5G networks, fronthaul predominantly relies on direct fibre links, along with extensive coverage of both high-frequency and low-frequency spectrums for additional access points. To optimise the utilization of existing fibre resources, CWDM optical modules play a crucial role. The 25G CWDM solution allows for the selection of 6 or 12 wavelengths from the 18 specified in the ITU-T G.694.2 standard, spanning from 1271nm to 1611nm. 
Adhering to this standard enables optical transmission equipment from various vendors to operate harmoniously within the same network, ensuring network stability and reliability while mitigating issues stemming from equipment mismatches. - 25G CWDM SFP28 6-Wavelength Solution The 6-wavelength 25G CWDM solution opts for the initial 6 shorter wavelengths (1271nm~1371nm) due to the maturity of the industry chain and the lesser impact of transmitter dispersion penalties (TDP). It’s widely agreed upon that the AAU side utilizes wavelengths of 1271nm, 1291nm, and 1311nm, while the DU side employs wavelengths of 1331nm, 1351nm, and 1371nm, as depicted in Fig.3. Additionally, the optical module on the AAU side requires cooled directly modulated lasers (DMLs) to meet industrial-grade standards. - 25G CWDM SFP28 12-Wavelength Solution The 12-wavelength 25G CWDM solution addresses a mixed transmission scenario involving both 4G and 5G networks. To enhance reliability and reduce component costs, the wavelengths ranging from 1271nm to 1371nm operate at a 25Gbit/s data rate for 5G fronthaul networks, while the wavelengths from 1471nm to 1571nm operate at a 10Gbit/s data rate for 4G fronthaul networks. This arrangement, illustrated in Fig. 4, facilitates the smooth transition from 4G to 5G base stations. However, in practice, the 25G SFP28 connector takes precedence due to its compatibility with both 4G and 5G networks, making the 12-wavelength solution less commonly used in real-world scenarios. Benefits of 25G CWDM Transceivers - Cost-effectiveness CWDM technology enables the transmission of multiple signal wavelengths over the same fibre optic cable, efficiently utilising fibre optic resources. With 25G CWDM optical modules, multiple data streams can be transmitted over a single fibre optic cable without the need for additional fibres, thus conserving fibre optic resources and reducing network construction costs. - Flexibility and Scalability Given the significant and ever-growing volumes of data typically associated with big data applications, networks must possess robust flexibility and scalability. By utilising 25G CWDM modules, users can dynamically select different wavelengths for data transmission, enhancing the adaptability and scalability of the network to meet the continuously expanding demands of big data processing. - Data Security In the realm of big data applications, the handling and processing of extensive volumes of sensitive data are routine, emphasising the critical importance of data security. 25G CWDM modules enhance data transmission security by segregating data streams of varying wavelengths into separate channels. This segregation reduces the risks of data leaks and interference, thereby enhancing the reliability and security of data transmission. In brief, the 25G CWDM SFP28 is a critical optical transceiver that efficiently sends multiple signals down a single fibre optic cable using CWDM technology. It plays a pivotal role in providing effective data transmission solutions for 5G fronthaul networks. This technology not only optimizes how bandwidth is used but also meets the high-speed and low-latency demands of 5G networks. Moreover, it enhances data transmission security by segregating data streams into separate channels based on different wavelengths. Overall, its use ensures comprehensive protection for network performance, flexibility, and security, laying a solid foundation for the future of 5G communication.
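As a quick way to visualise the wavelength plans described above, the following minimal sketch (an illustration written for this summary, not vendor data) enumerates the ITU-T G.694.2 CWDM grid and the 6-wavelength fronthaul split between the AAU and DU sides discussed earlier.

```python
# The CWDM grid runs from 1271 nm to 1611 nm in 20 nm steps (18 channels);
# the 6-wavelength fronthaul plan uses the first six channels, split 3/3
# between the AAU side and the DU side.
cwdm_grid = list(range(1271, 1612, 20))   # 1271, 1291, ..., 1611 nm
fronthaul_6 = cwdm_grid[:6]               # 1271-1371 nm

aau_side = fronthaul_6[:3]                # 1271, 1291, 1311 nm (AAU)
du_side = fronthaul_6[3:]                 # 1331, 1351, 1371 nm (DU)

print("Full CWDM grid:", cwdm_grid)
print("AAU wavelengths:", aau_side)
print("DU wavelengths:", du_side)
```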
<urn:uuid:8ee69ef6-64cc-4419-8803-37229209c986>
CC-MAIN-2024-38
https://www.fiber-optical-networking.com/category/fiber-optic-transceiver
2024-09-13T06:03:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00229.warc.gz
en
0.884014
1,476
3.09375
3
In the interconnected world we live in, networks play a pivotal role in facilitating communication and data exchange. Whether it’s the local area network (LAN) in your home or a wide area network (WAN) connecting global offices, understanding the basics of networking is essential. In this article, we’ll explore the fundamental components of a network, the distinction between LAN and WAN, and the various types of equipment that make these networks function. 1. Types of Networks: LAN vs. WAN - Local Area Network (LAN): LANs are confined to a limited geographic area, typically within a single building or campus. They facilitate fast communication between devices, making them ideal for home networks, small businesses, or educational institutions. - Wide Area Network (WAN): In contrast, WANs cover a broader geographical scope, connecting LANs over larger distances. The internet itself is a prime example of a WAN, linking networks across cities, countries, and continents. 2. Basic Components of a Network: - Routers: Responsible for directing data between different networks, routers act as traffic coordinators. They analyze the destination addresses of data packets, determining the most efficient path for transmission. Routers play a crucial role in connecting local networks to the broader internet. - Switches: Within a LAN, switches enable devices to communicate with each other efficiently. Switches are intelligent devices that direct data specifically to the device it is intended for, minimizing unnecessary data broadcasts. - Expansion Modules: Essential for facilitating communication within a network, networking modules perform the crucial task of transforming digital signals generated by devices into the necessary analog signals. This conversion ensures smooth communication over various network infrastructures, contributing to the seamless operation of interconnected systems. - Servers: Serve as robust resource storage and management centers, offering essential services such as file storage, email, and web hosting, ensuring efficient data distribution within the network. - Firewalls: Safeguard networks by actively monitoring and controlling both incoming and outgoing network traffic, providing a secure barrier against unauthorized access and potential threats. - Gateways: Act as indispensable bridges between different network protocols, facilitating seamless communication between disparate networks and ensuring compatibility. - Clients: Include devices like computers, laptops, and smartphones, serving as end-users that access and utilize resources from servers, contributing to the overall functionality of the network. - Cables and Connectors: Ethernet cables and connectors facilitate wired connections within a network. - Wireless Access Points (WAPs): For wireless connectivity, WAPs allow devices to connect to the network without physical cables. - TCP/IP (Transmission Control Protocol/Internet Protocol): The backbone of the internet, TCP/IP ensures seamless communication between devices. Understanding the basics of networking is crucial for anyone navigating the modern digital landscape. Whether you’re setting up a home network or managing a global enterprise, comprehending the distinctions between LAN and WAN, familiarizing yourself with network components, and recognizing the purposes of various equipment types will empower you to make informed decisions in the realm of networking. 
Dive into the world of networks, and unlock the potential for seamless connectivity and efficient data exchange. Now that you understand the components of networks, start browsing for your perfect equipment.
<urn:uuid:968d2d58-8024-4cc2-8fcf-b57ffb9a99dc>
CC-MAIN-2024-38
https://dedicatednetworksinc.com/demystifying-networking-understanding-the-basics-of-networks/
2024-09-15T16:16:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00029.warc.gz
en
0.891735
671
3.890625
4
Edge computing refers to a computing network topology where network functions that are typically located and grouped in central locations are moved and distributed closer to users (i.e., the network’s edge). An enterprise’s end users can, therefore, benefit from more versatile and capable cloud/data center applications as well as reduced latency. In other situations, an enterprise can achieve cost savings as end-user hardware complexity and processing are handed off to edge computing nodes. Mobile Edge Computing (MEC) looks into placing typical data center servers, including computing, storage, and networking components, within a network’s radio access network domain. Typical sites include macro basestation locations as well as traffic routing/switching offices. The MEC system consists of a number of hardware, software, and management elements required to create a virtualized environment. At the heart of the MEC architecture, there are three categories: • The mobile edge host, which refers to the actual physical and logical components at the edge and is essentially the combination of the virtualization infrastructure and the application platform. • The mobile edge system-level management, which manages a wider aspect of the MEC-based network. • The mobile edge host-level management, which manages a specific host. Mobile operators, MEC solution vendors, and application software developers have been developing and evaluating a number of edge computing application scenarios. Video Stream Analysis The main advantage of MEC, its local processing capability, is most prominent when used to analyze large quantities of video data. MEC enables video monitoring systems to do a more effective job at handling near-real-time video analysis near the camera location or by transmitting video to the core network, or even to the public cloud. By installing video processing applications at the edge, savings are achieved at both the content generation source and in the backhaul, guaranteeing low latency for video distribution. Augmented & Virtual Reality Augmented reality (AR) and virtual reality (VR) applications offer advanced methods for businesses and venues to advertise and promote information and new video experiences. AR applications rely on continuously analyzing a device's camera output, tracking the device’s movement and orientation in order to respond with the desired content in real-time. Combining local content caching and processing with device location tracking, the use of MEC is ideal for guaranteeing the delivery of the quality of service required for such services. Enterprise Deployment of MEC This use case mainly targets offices and corporate environments where workplaces are rapidly shifting from fixed desk structures to mobile cultures with the help of cloud-based software and various mobile devices like smartphones, tablets, and laptops as well as the now established culture of bring-your-own-device. Implementing an IP-PBX through a MEC platform enables the seamless integration of in-building cellular networks into an enterprise’s WLAN network. Connected car systems can sense vehicle behavior and communications on the road and offer valuable notification and alerting services to increase safety and reduce traffic congestion. Connectivity can also enable new entertainment services and other value-added services like location finder and parking assistance. Deploying MEC in base stations or small cells along the road provides the required low latency. 
The MEC application directly receives the information from the vehicles and other road sensors, analyzes them, and transmits real-time messages to the vehicles while propagating the information to other neighboring MEC servers and the central cloud service for larger scale reporting. IoT Gateway Service Scenario Internet of Things (IoT) systems depend on a collection of connected devices and sensors that are served by a set of underlying networks including 3G and LTE and unlicensed technologies such as Wi-Fi, Bluetooth, ZigBee, RPMA, Sigfox, and LoRa. For various IoT scenarios, a MEC-enabled gateway is necessary to ensure that requirements for low latency responses are met by providing real-time aggregation and distribution services for the IoT devices locally. It can also provide analytics and decision logic based on analytic results. At the edge, a considerable quantity of data is collected about users, network conditions, local context, and consumer behavior. Transmitting all this data to the core may be counterproductive due to high latency and wasted bandwidth. Hence, there is a solid case for deploying MEC in such scenarios. Analytics may include various activities, including but not limited to event correlation, big data applications, and machine learning. Retail Customer Behavior Analysis Traditional high street stores are looking for any competitive advantage they can attain over online retailers. Edge analytics—encompassing sales data, images, coupons used, traffic patterns, and videos generated and analyzed—provides unprecedented insight into consumer behavior patterns. The healthcare industry is changing with the rise of the digital era. Devices such as Fitbits, telehealth tools, and glucose monitors are reshaping the sector completely. The data stored on these devices can be used to update a patient’s digital medical records; however, the existing cloud infrastructure cannot manage the amount of data they produce. Edge computing connects these medical devices, providing doctors and physicians with reliable and up-to-date patient monitoring information during medical emergencies and routine follow ups alike. Mobile Edge Computing is still going through proof of concept (PoC) demonstrations and limited customer trials by mobile telcos. However, the business case rationales are starting to mature, and mobile operators such as Telefonica, Orange, AT&T, and Verizon are moving beyond these PoCs and trials. Huawei, Nokia, Samsung, and ZTE are members of the official ETSI MEC group and are developing relevant products. However, enterprise and MEC specialists such as SpiderCloud (Corning), Quortus, and Vasona Networks are also jockeying for positions in the MEC market. MEC can be deployed on 4G LTE architectures, but because MEC business models are relatively nascent it is likely MEC will develop alongside 5G. 5G will of course deliver additional enhancements for MEC, as it will further boost data throughput, reduce latency, and assure reliability.
<urn:uuid:c7cb33eb-7687-49d0-84a9-bed98cbd0722>
CC-MAIN-2024-38
https://sdn.cioadvisorapac.com/cxoinsights/can-edge-computing-stimulate-novel-applications-for-enterprise-customers-nwid-2431.html
2024-09-19T08:19:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00629.warc.gz
en
0.925851
1,250
2.515625
3
Understanding how to create and deliver realistic phishing emails is a topic that's shrouded in mystery. In this blog, we'll walk through the step-by-step process of creating phishing emails, explore delivery methods, analyze email filtering technologies, examine evasion techniques, demonstrate domain spoofing, and provide actionable measures to mitigate against phishing.

What You'll Learn In This Article
- What services cyber criminals commonly create phishing emails against.
- How to create and deliver phishing emails.
- How to use real-world phishing techniques such as sender address spoofing.
- How to overcome the challenge posed by email filters which commonly detect and block phishing emails.

Understanding The Basics
Before deep diving into the phishing email creation process, let's get a better understanding of the services commonly impersonated by attackers.

Cybercriminals frequently impersonate banks, credit card companies, and payment processors. Their goal is to trick victims into sharing sensitive data, such as account credentials, financial information, or credit card details. Example: A phishing email claiming to be from a renowned bank, requesting immediate verification of recent credit card activity. The email includes a link to a spoofed login page designed to capture user credentials.

Social Media Platforms
Popular social media platforms like Facebook, Twitter, and Instagram are often impersonated by cybercriminals. Their goal is to trick victims into divulging their login credentials or personal details. Example: A deceptive email appearing to be from Facebook, notifying users of a recent breach and urging them to reset their password. The email includes a link that leads to a malicious website designed to steal login credentials.

Cybercriminals frequently impersonate well-known online retailers, such as Amazon, eBay, and Groupon. Their goal is to trick victims into sharing credit card details, login credentials, or other personal information. Example: An email posing as an order confirmation from Amazon, stating that the recipient needs to click on a link to track their recent purchase. The link leads to a fake website designed to harvest user data.

Cybercriminals may impersonate professional networking platforms like LinkedIn or Yammer with the goal of tricking victims into clicking on malicious links, revealing their account credentials, or even tricking them into accepting fake job opportunities to steal money from them through complex trust scams. Example: An email impersonating LinkedIn, claiming that a password reset request has been initiated and the recipient needs to verify the code to set their new password. The link leads to a phishing page requesting login credentials.

Creating The Phishing Email
Now that we understand the basics, let's delve into the actual process of creating a phishing email. This can be broken into seven distinct steps.

Step 1. Defining Your Goals
Before attempting to create a phishing email, you need to clearly establish the objectives of your campaign. Is it to test employee awareness, assess vulnerabilities within your organization, or perhaps gather credentials or other sensitive information as part of a red-team exercise? Perhaps it's a mixture of everything.

Step 2. Conducting In-Depth Research
To create a convincing phishing email, you need to understand your intended audience. This could include researching the services they use on a day-to-day basis, knowledge of their geographic area, and even an understanding of the internal organizational structure to know whom certain employees report to and the business function they operate within. Once these factors are known, you can begin creating the phish. You should also sign up for the service you intend to impersonate and look at the transactional and marketing-related emails they send. Test out functionality such as initiating a forgotten password request, an account lockout scenario, and setting up multi-factor authentication, to learn about their designs, logos, and writing styles.

Step 3. Creating An Engaging Email
Based on the research performed, you should now have at least one email that you can alter for phishing purposes (Tip: See the CanIPhish Email Inbox Simulator for inspiration on some phishing emails.). This could involve slight alterations to make the email more convincing and urgent. You need to use language and formatting that closely resembles the legitimate email. Ensure that logos, branding, and visual elements in the email are unchanged or appear authentic. For example, suppose you're targeting an online banking service. You'll want to use a subject line that includes urgency, such as "Important Security Alert: Immediate Action Required." In the body, adopt a tone of concern, emphasizing the need for the victim to verify their account details to prevent unauthorized access.

Step 4. Embedding The Payload
Cybercriminals often use disguised hyperlinks to trick victims into clicking on malicious phishing websites. To do this, you'll need to create phishing links that closely resemble the legitimate service's domain. This could involve the use of lookalike domains, sub-domains, and even complex URL query string structures that obfuscate the actual domain in use. Additionally, attachments, such as PDFs or Office documents, can be used to exploit vulnerabilities or encourage victims to enable macros which can then facilitate code execution on their device.

Step 5. Selecting An Email Provider
You'll need to choose an email provider that suits your needs and allows you to send without interruption. If you're performing highly targeted spear-phishing against a small number of victims, you can likely use a provider like Gmail or Yahoo. However, if you're conducting larger campaigns, such as targeting an entire organization, you'll need more control over the email infrastructure to improve deliverability. Additional factors that may also be considered include ease of use, customization options, and the likelihood of the provider banning you from using their service.

Step 6. Purchasing A Phishing Domain
You'll need to register a domain name that closely resembles the impersonated service's domain. A variety of domain registrars can be used for this, such as AWS Route53, GoDaddy, and many more. These providers will outline domain availability and even provide recommendations on available domains which closely resemble the service provider's domain.

Step 7. Configuring The Phishing Infrastructure
Depending on the complexity of your campaign, you may need to set up additional infrastructure, such as mail transfer agents (MTAs) and email relay services. You'll then need to configure the DNS settings of your newly purchased domain to ensure email authentication protocols like Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting, and Conformance (DMARC) are configured. These measures help improve deliverability and phish click rates. CanIPhish has created a tool called CanIBeSpoofed which assists with setting up SPF records; you can also use a tool such as MXToolBox to assist with this.

Leveraging Domain Spoofing Techniques
Cybercriminals may use domain spoofing to make phishing emails appear more legitimate. These emails impersonate trusted domains by abusing a weakness in the way their email authentication records have been configured. This could involve forgery of the "From" field in the email header to display a fake sender address. SPF is designed to prevent this type of spoofing. Additionally, attackers could use an advanced spoofing technique known as spf-bypass, which involves a misalignment between the mail envelope From address "SMTP.MailFrom" and the email header "From" address. DMARC is designed to prevent this type of spoofing.

Cybercriminals will typically use open-source intelligence (OSINT) to find potential domains to abuse for spoofing. There are a variety of domain reputation tools that attackers can use to discover and automatically analyze domain SPF and DMARC records to discover vulnerabilities that can be abused. Another much simpler method of spoofing is the abuse of sender display names, which don't have any email authentication protections. Often this is used in conjunction with spoofed "From" addresses.

The Challenge Posed By Email Filters
Email filters play a crucial role in detecting and blocking phishing emails. They utilize various techniques, such as signature-based filtering, heuristics, and machine learning, to detect suspicious emails. However, these detection mechanisms aren't perfect, and cyber criminals are continuously refining their tactics to avoid detection. Some of these techniques include:
- Polymorphic Attacks: Generating multiple variants of the same email to avoid detection based on known signatures. By altering content, subject lines, and attachments, cybercriminals can bypass traditional detection methods.
- Image-based Text: Embedding text within images helps cybercriminals bypass content-based filters that analyze text in the email body for malicious keywords or URLs. By converting text into an image, cybercriminals can deceive filters and increase the chances of successful delivery.
- URL Obfuscation: Cybercriminals employ techniques like URL encoding, URL shortening services, or redirectors to make malicious URLs appear legitimate. These tactics make it challenging for filters to recognize and block malicious links.
- Website Redirection: Phishing campaigns may include unique URLs within each phishing email. If a unique URL is found to open a phishing link too fast, from a certain IP, or with a certain user agent string, the website may fail to load. This method prevents the detection of phishing websites where sandboxing is in use.
- Zero-day Exploits: Exploitation of vulnerabilities in email clients or servers that have not yet been patched by software providers. Cybercriminals can leverage these vulnerabilities to bypass email filters and deliver their malicious payloads.
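Deliverability also depends heavily on the email authentication records set up in Step 7. As a quick sanity check before sending, the sketch below queries a domain's published SPF and DMARC records; it is a minimal illustration rather than part of the CanIPhish tooling, and it assumes the third-party dnspython package and a hypothetical domain name.

```python
# Look up the SPF and DMARC TXT records for a sending domain so you can
# confirm they resolve before launching a simulated phishing campaign.
import dns.resolver

def get_txt_records(name):
    """Return all TXT strings published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"  # hypothetical sending domain

spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

# A typical SPF record might look like: "v=spf1 ip4:203.0.113.10 include:relay.example.net -all"
# A typical DMARC record might look like: "v=DMARC1; p=reject; rua=mailto:reports@example.com"
print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
```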
If you're looking to deliver phishing emails as part of a red-team engagement, you'll need to implement some of the techniques listed above to ensure your emails aren't blocked by email filters. This is often the hardest part of running a phishing campaign, and it's often easier to simply implement email allowlisting.

Managing The Outcome
Once you've created the phishing email, set up the phishing infrastructure, and configured any additional mechanisms such as domain spoofing or email filter evasion, you're ready to begin sending! During the sending process, it's crucial to monitor the campaign's progress and track email opens, link clicks, credentials harvested, emails responded to, and attachments opened. Depending on the outcome, you may also want to provide additional educational content to victims.

Defending Against Phishing Emails
To better defend against phishing emails, you can employ several different strategies, including:
- Regular Phishing Training: Conducting regular phishing simulations and security awareness training helps employees learn to better detect and report phishing attempts.
- Multi-Factor Authentication (MFA): Implementing MFA makes it more difficult for cybercriminals to gather credentials and access accounts.
- Strong Email Filtering: Deploying advanced email filtering solutions significantly reduces the number of phishing emails that land in employee inboxes. This protection is far from perfect, but it has a significant impact on the likelihood that an employee clicks on a phishing email.

Creating effective and realistic phishing emails is a multi-faceted process that needs to be thoroughly researched to ensure the right email is crafted for the right victim. By following the process outlined in this blog, you'll be able to significantly increase the phish click rates for your simulated phishing campaigns and ensure you're replicating real-world threats. By using the CanIPhish Cloud Platform you can easily create phishing emails, spoof domains, deliver phishing, and track statistics. To get started, all you need to do is sign up!

Note: Are you looking to create a phishing website that can be paired with your phishing email? Read our blog outlining the step-by-step process of cloning a website.
Should office spaces embrace openness or privacy? Thanks to constantly-improving technology in the field of audio engineering, it is now possible to have both. Introducing a sound masking system into an open environment offers that solution. What is white noise? You probably already have some idea of what the phrase “white noise” means. You may find devices available for purchase with claims of providing neutral sound into the environment. However, all that true white noise does is add the sound of radio static to the air. Ultimately, you end up adding a distraction rather than taking one away! Think of how we move about the room; the white noise machine sits in one place, becoming louder or quieter as we reposition ourselves. It would be worth it to consider an office sound masking system instead. Is there a better option? Sound masking is often confused with white noise, and on the surface, this seems to make sense. Both claim to use the addition of neutral noise to eliminate distractions and increase privacy. However, only sound masking is able to do this effectively. The advanced technology integrates seamlessly into any space. Sound masking vs. white noise Sound masking is a much more advanced technology than white noise. It is engineered with a specific purpose in mind: to literally mask and overlap naturally with human speech. Because it focuses on only these auditory frequencies, it does not register to human ears as a distraction, instead blending naturally into the environment and essentially disappearing. Additionally, the process of sound masking installation allows sound to be integrated into the entire office space. Because of this, there is no single source of the sound. As workers and clients move about the room, the noise does not change. This integrated system provides a consistency that further maximizes the ultimate goal of office sound masking: to forget that it’s even there. Why should more workplaces use sound masking? Sound masking increases the efficiency of work completed in a number of ways. First, it creates a distraction-free ambiance where individual conversations, phone calls, and other activity can take place with ease. Additionally, it increases the privacy of these conversations by keeping what is said between the interested parties. Clients and employees feel comfortable sharing sensitive information more freely, increasing the capacity of work that can be done. Sound masking is useful outside of traditional office environments as well. Anywhere where focus and privacy are valued within an open space, from bars to nursing homes, sound masking can help to create that ideal environment. Don’t waste your time on ineffective white noise machines. Contact BCS Consultants today so we can maximize the efficiency, privacy, and focus of your workplace.
As European companies navigate complex regulatory environments and growing concerns over data privacy and security, understanding data sovereignty and its implications has become a strategic imperative. Data sovereignty refers to the principle that data is subject to the laws and governance structures within the nation where it is collected and stored. This means that European data should be protected and managed in accordance with European laws, such as the General Data Protection Regulation (GDPR), which sets a high standard for data privacy and security. For European companies, particularly those in highly regulated sectors like finance, healthcare, and government, ensuring compliance with local laws is not just a matter of legality—it's a matter of trust and credibility. Data sovereignty assures clients and stakeholders that their data is handled with the utmost care and that their privacy is respected. Compliance with Local Regulations: European companies must ensure that their data handling processes comply with local regulations to avoid hefty fines and reputational damage. A truly sovereign cloud can offer peace of mind by aligning with local laws and standards. Data Privacy and Security: In an era of frequent data breaches and cyberattacks, maintaining robust data privacy and security is paramount. Sovereign clouds should provide the highest level of protection, ensuring that data is not vulnerable to unauthorized access. Control and Autonomy: A sovereign cloud should empower European companies to maintain control over their data, ensuring that it is managed by local entities and not subject to foreign government intervention or extraterritorial reach. With the rapid digitization of business processes, the demand for cloud services has skyrocketed. As a result, many global cloud providers claim to offer “sovereign clouds” that meet European standards, but how sovereign are these offerings? Major cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) have recently announced plans to establish "sovereign clouds" in Europe. While these initiatives seem promising, they raise critical questions about the true sovereignty of such offerings. U.S. Ownership and Influence: AWS, Azure, and GCP are all U.S.-based companies, which means they are subject to U.S. laws, such as the CLOUD Act. This law allows U.S. authorities to request access to data held by U.S. companies, regardless of where the data is stored. This creates a significant risk for European companies that require full sovereignty over their data. Jurisdictional Conflicts: The extraterritorial reach of U.S. laws creates potential conflicts with European regulations. Even if data is stored in Europe and managed by local staff, the ownership of the infrastructure by U.S. companies poses a risk of data access by foreign governments. Trust and Transparency Issues: For truly sovereign operations, trust and transparency are essential. European companies need assurance that their data will not be accessed by unauthorized entities. The ownership and control of infrastructure by foreign entities can undermine this trust. As European businesses, we must critically evaluate the claims of sovereignty made by large cloud providers. While their offerings may meet certain technical and regulatory standards, the inherent risks associated with foreign ownership cannot be ignored. 
Demand True Sovereignty: Advocate for cloud solutions that are genuinely sovereign, managed, and operated by European companies that are fully compliant with local laws and regulations. Assess Risks Carefully: Evaluate the potential risks associated with foreign-owned cloud services and consider alternatives that offer greater control and compliance with European standards. Promote Local Innovation: Support local cloud providers – like Impossible Cloud – and initiatives that aim to create truly sovereign cloud solutions, fostering innovation and economic growth within Europe. In conclusion, while the push for sovereignty by AWS, Azure, and GCP reflects the importance of data sovereignty, European companies must remain vigilant and demand cloud solutions that genuinely meet their sovereignty requirements. By doing so, we can protect our data, comply with regulations, and maintain the trust of our clients and stakeholders. Ready to get your business on a real sovereign cloud? Impossible Cloud is Europe's premier cloud storage, made, managed, and stored entirely within the EU. If your business is based in Europe and you’re looking for a truly sovereign cloud storage solution, contact us today to get a personalized consultation. The information provided in this blog is for informational purposes only and does not constitute legal advice. Readers should consult with a qualified legal professional for advice regarding specific legal matters or concerns. Impossible Cloud does not assume any liability for actions taken based on the information provided herein.
Firewalls do a good job of identifying suspicious activity on your network, and you’ve probably been using them for years. The question for many K-12 leaders is, “Is a firewall the right technology to protect against the rising number of cyberattacks against the education industry?” Today, the answer is, “No, not all by itself.” A recent K-12 Cybersecurity Cost Report by CoSN (the Consortium for School Networking) uncovered two top firewall concerns among K-12 IT leaders. While completely valid, these firewall concerns don’t even scratch the surface for what IT teams in districts should be thinking about in cloud computing and remote learning security. The good news is that CoSN’s report found that virtually all school districts are using some type of a firewall. The disturbing takeaway from this finding is that, while a majority of school districts are using cloud applications—mainly Google G Suite and/or Office 365 as their data hubs—only a small minority (3%) are using any kind of cloud application security. While virtually all districts are using a firewall of some type, almost none have a cloud computing security layer in place to secure the sensitive data stored in G Suite and/or Office 365 for Education. This is understandable because there are several common misconceptions that are driving this trend amongst K-12 cybersecurity managers. Let’s set the record on these cybersecurity myths straight. Just using content filtering and firewalls leaves a big hole in your district’s cybersecurity infrastructure. Why? Because firewall technology hasn’t kept up with the needs of K-12 schools who are using cloud computing with G Suite and Office 365. In addition to information about school districts’ use of firewalls, and how much they’re paying for them, the CoSN study also identified a couple of top firewall concerns. The two top firewall concerns for IT staff are: 1. Lack of Time and Staffing: A firewall can only alert administrators of problems that it finds. School employees must act on those alerts. What IT leaders indicated in the survey is that they often don’t have the staff with enough time and the expertise required to monitor and react to firewall alerts. Without that staff involvement, the usefulness of a firewall is greatly reduced. 2. The High Cost of Advanced Capabilities: Another concern expressed was the cost of using a firewall to its fullest. Respondents noted that a standard firewall provides some type of intrusion detection or prevention and rules-based access management. Capabilities such as data loss prevention and monitoring encrypted traffic are often add-on services that they usually can’t afford with their budgets. The CoSN survey results didn’t address two other top firewall concerns. Which is a concern in itself. It creates a situation where K-12 leaders seem to be unaware of the gaps in their cybersecurity tech stack that can hurt their ability to secure student, staff, and district data. The first missed firewall concern is one that wasn’t on most people’s radar until just a couple weeks ago. It’s that your district’s firewall becomes significantly less effective once the student leaves the school’s network. This is particularly true if your district doesn’t have a 1:1 program and students are using personal devices to access resources stored in school G Suite and/or Office 365 environments. 
In a remote learning environment, your IT team has significantly less visibility and control over data being accessed without a cloud security layer in your cybersecurity infrastructure. Another firewall concern is the effect it has on network speed. A firewall works by inspecting every transmission that enters or leaves the network. Taking time to do that inspection slows down network speeds. The result is that the end-user suffers. Delays and interruptions in the classroom make it difficult for teachers to do their jobs and for students to learn efficiently. This becomes an increased concern if your district is moving to remote learning, where different teachers and students will have varying degrees of internet connectivity strength. The most critical concern that K-12 leaders didn’t mention is the fact that firewalls in any form can’t secure data stored, accessed, and shared in G Suite and Office 365. Is this lack of concern hurting you and other leaders? Yes, it is. Despite 100% use of firewalls at school districts, K-12 cybersecurity incidents are on the rise. Between December 1, 2018 and December 1, 2019 districts experienced a 256% increase in data breaches. The task of keeping school districts safe is getting more complex all the time. Not all of the increase is due to the limitations of firewalls. Human error causes some of the problems and some result from misconfiguration of the software. Cybercriminals attacking schools more often also contributes; schools are prime targets because of their poor security history. You can’t control the cybercriminals or eliminate human error, but what you can control is protecting your cloud applications like G Suite and Office 365. Firewalls provide perimeter security for your network. However, there is no perimeter in the cloud. To protect data in G Suite and Office 365, you need API-based cloud security that adds a new layer to your cybersecurity tech stack. Whether K-12 leaders know it or not, here’s a summary of the top firewall concerns for your remote learning plans: The current crisis and the shift to remote learning is a huge issue that has blindsided the nation. The dedicated staff in K-12 school districts have taken on a Herculean task to transition to remote learning at a moment’s notice. Though there are always going to be bumps in the road with a transition like this, everyone who has taken part to make this happen are among the true heroes of this crisis. To help K-12 IT staff make sure that exposure of sensitive student, staff, and district data doesn’t become a self-inflicted wound in this critical time, we’re offering ManagedMethods K-12 cybersecurity & safety platform for free until May 31 for K-12 school districts that are new to ManagedMethods. If your district is using G Suite and/or Office 365 for Education, you need to think beyond the firewall. Sign up today to get the most value out of this offer.
Teachers are increasingly focused on bringing technology into the classroom, yet neither Democrat Hillary Clinton nor Republican Donald Trump, the presumptive presidential candidates, even mention the word technology in their K-12 education platforms. According to a recent study by Edgenuity, a provider of online and blended learning services, 91 percent of teachers agree with the statement: “Technology provides a greater ability for teachers to tailor lessons and homework assignments to the individual needs of each student.” However, almost half of teachers (48 percent) consider the technology they have to be outdated. While both candidates are discussing education, with Trump calling for the end of Common Core and Clinton going as far to say that every school district should offer computer science and coding programs for students, neither has discussed how technology can be integrated into the classroom. However, both candidates agree the United States needs a well-educated citizenry, and that there is significant room for improvement in our current education system. The ability to tailor lesson plans and homework assignments on an individual level, which teachers say is possible with increased ed tech, would help students grow academically and help the United States produce more college- and career-ready citizens. When teachers were asked what they wanted for their dream classroom, 47 percent said better classroom technology. Edgenuity also found that teachers are spending 10 hours a weekday on school-related tasks and activities, with 33 percent of that time spent on administrative tasks. Improving classroom technology could decrease the amount of time teachers spend on administrative paperwork, and increase the time spent lesson planning and teaching. However, one key road block in acquiring more and better technology is that Trump wants to cut Federal education spending and the U.S. Department of Education, potentially impeding school districts from acquiring new classroom technology. On the other side of the aisle, Clinton is arguing for increased Federal K-12 education spending, with increased focus on students with disabilities and schools that serve a large percentage of low-income students. With a larger budget, schools could increase technology spending to meet teachers’ needs for improved classroom technology. Moreover, Clinton’s goal to improve learning outcomes for students with disabilities is directly tied to increasing technology in the classroom. A large swath of teachers agree that technology provides a variety of learning tools and modalities (62 percent) and diversifies the learning experience (48 percent) for all students. Equipping teachers with more learning tools and enabling a more personalized education experience allows students with disabilities to achieve more academically. While Trump hasn’t specifically discussed his goals for students with disabilities, his desire to radically cut down on Federal education spending calls into question how learning outcomes for students with disabilities would improve. To learn more about how teachers view classroom technology and what their dream classroom entails check out Edgenuity’s infographic here.
Vaccinia virus (VACV) is a member of the poxvirus family, which consists of large, double-stranded DNA viruses that replicate in the cytoplasm of host cells. These viruses are widely distributed across the globe and can infect a wide variety of animals, including insects, reptiles, birds, and mammals. The most infamous member of this virus family is the smallpox virus, which caused approximately 500 million deaths in the 20th century before its eradication through an extensive vaccination campaign using VACV. Although smallpox has been eradicated, VACV and other poxviruses still pose significant threats to human health.

Here's a structured table that simplifies and explains key medical concepts related to Vaccinia Virus (VACV) and the immune system:

| Medical Concept | Simplified Explanation | Relevant Details | Examples/Analogies |
| --- | --- | --- | --- |
| Poxviruses | Poxviruses are a group of viruses with a large DNA structure that can infect various animals and humans. | They replicate inside the cells of their host and are responsible for diseases like smallpox. | Think of them as different types of invaders that can attack various species, causing illnesses. |
| Vaccinia Virus (VACV) | VACV is a type of poxvirus used to create the smallpox vaccine, helping to eradicate the disease. | It is a live virus, meaning it is active and can cause a mild infection to build immunity in the body. | Similar to how a weakened version of a problem can be used to prepare for a bigger issue. |
| Immune System Evasion | Some viruses, like VACV, can hide from the immune system, preventing the body from recognizing and fighting them effectively. | VACV uses special proteins to block the immune system's alarms, delaying the body's response. | It's like a burglar disabling an alarm system before breaking into a house. |
| Antigen Presentation | This is the process where the body's cells show pieces of viruses (antigens) to immune cells to trigger a defense. | VACV can interfere with this process, making it harder for the immune system to notice and respond to the infection. | Imagine trying to show someone a warning sign, but someone else keeps covering it up. |
| MHC Class II Molecules | These are molecules on the surface of certain cells that help show viral antigens to immune cells, helping the body recognize invaders. | VACV can reduce the number of these molecules, which weakens the immune response. | Like a teacher trying to show a class what to study, but the study materials are missing. |
| CD4+ T Cells | A type of immune cell that plays a key role in fighting infections by helping other immune cells respond to threats. | VACV can prevent these cells from being activated by blocking the antigen presentation process. | Think of them as the leaders of a defense team who can't do their job if they don't get the right signals. |
| Cytokines | Cytokines are chemical messengers that help immune cells communicate and coordinate a response to infections. | VACV can lower the production of these messengers, making it harder for the immune system to organize an effective response. | It's like cutting the communication lines in a military operation, leading to a disorganized response. |
| Apoptosis | Apoptosis is the process of programmed cell death, which the body uses to remove damaged or infected cells. | VACV can cause cells to die early (apoptosis), which might prevent the immune system from responding properly. | Like a self-destruct button that is pressed too early, destroying something before it can be fixed. |
| Immunocompromised Individuals | People whose immune systems are weakened or not functioning properly, making them more vulnerable to infections. | These individuals are at higher risk of severe reactions to live vaccines like VACV. | Like a damaged shield that doesn't protect well, making it easier for an enemy to cause harm. |
| Subunit Vaccines | A type of vaccine that uses only parts of a virus (like a protein) instead of the whole virus to train the immune system. | These vaccines are safer because they can't cause the disease but still help the body prepare to fight the real virus. | It's like training with just a part of the problem to get ready for the full challenge. |
| Adjuvants | Substances added to vaccines to enhance the body's immune response to the vaccine. | They help make the vaccine more effective by boosting the immune system's reaction. | Similar to adding extra fuel to a car to help it go further or faster. |
| Live Virus Vaccine | A vaccine that uses a live but weakened virus to build immunity without causing the disease itself. | This type of vaccine can sometimes cause mild symptoms similar to the disease it's protecting against. | Like using a sparring partner in training who can still land light punches but prepares you for the real fight. |
| Zoonotic Viruses | Viruses that can jump from animals to humans, sometimes leading to new and unexpected outbreaks. | VACV and other poxviruses can spread from animals like rodents to humans, which can lead to serious diseases. | Similar to how a fire can spread from one building to another, causing more damage. |
| Vaccine Safety Concerns | Issues related to the potential risks or side effects of vaccines, particularly in certain vulnerable populations. | For VACV, the main concerns are for people with weakened immune systems or certain skin conditions. | Like knowing that a medication might have side effects for people with allergies or pre-existing conditions. |
| Post-Vaccination Complications | Problems that can occur after receiving a vaccine, ranging from mild symptoms to severe health issues. | In rare cases, VACV can cause serious reactions, such as widespread rash or inflammation of the heart. | Like experiencing side effects from medicine, which can sometimes be more severe in certain people. |

The Role of Vaccinia Virus in Smallpox Eradication and Its Ongoing Threats

The eradication of smallpox was one of the greatest achievements in public health, largely due to the use of the VACV-based vaccine. This live virus vaccine was highly effective in preventing smallpox infection, leading to the eventual eradication of the disease. However, the use of VACV as a vaccine is not without risks. The virus can cause adverse reactions, especially in individuals with compromised immune systems or skin conditions like eczema. In a study involving nearly 39,000 volunteers who were vaccinated as first responders, it was found that approximately 1 in 450 individuals had to be hospitalized due to adverse reactions, and there was a mortality rate of about 1 in 13,000. This highlights the potential risks associated with VACV-based vaccines, particularly for certain vulnerable populations.

Moreover, the threat of poxviruses has not disappeared with the eradication of smallpox. New poxviruses are identified each year, particularly in animal populations, and some of these viruses have the potential to infect humans.
For example, Cantagalo virus has emerged in South America, Tanapox has been found in Africa, Europe, and the USA, and buffalopox has been identified in India. Additionally, molluscum contagiosum virus, which causes wart-like lesions, is becoming more common as a sexually transmitted disease, leading to an estimated 300,000 doctor visits each year in the USA. Perhaps the most dangerous poxvirus currently in circulation is the monkeypox virus, which causes a smallpox-like illness in humans and is endemic to Africa. In 2003, an outbreak of monkeypox occurred in the USA, underscoring the ongoing threat posed by poxviruses. Vaccinia Virus as a Vaccine Vector: Efficacy and Risks While VACV is highly effective as a vaccine vector, its use is not without risks. The virus can cause severe complications, particularly in immunocompromised individuals or those with skin conditions like eczema. Post-vaccination complications range from mild and self-limiting to severe and life-threatening, including progressive vaccinia, eczema vaccinatum, and postvaccinal encephalitis. The incidence of serious adverse reactions is relatively low but significant, with historical data indicating hospitalization rates of approximately 1 in 450 vaccinees and mortality rates of 1 in 13,000. Mechanisms of Immune Evasion and Suppression by Vaccinia Virus One of the key concerns with VACV is its ability to evade and manipulate the host immune system. The virus employs several strategies to avoid immune detection and clearance: - Inhibition of Antigen Presentation: VACV interferes with the antigen presentation capabilities of major histocompatibility complex (MHC) class II molecules on dendritic cells, macrophages, and B cells. This inhibition impairs the activation of CD4+ T lymphocytes, crucial for initiating adaptive immune responses. - Modulation of Cytokine Production: The virus can alter cytokine profiles, reducing the secretion of critical immune mediators such as IL-1, TNF-α, and IFN-γ. This modulation helps the virus to create an environment conducive to its survival and replication, while dampening the overall immune response. - Direct Infection of Immune Cells: VACV can infect a range of immune cells, including natural killer (NK) cells and monocytes, further disrupting the immune response. Immune Evasion by Vaccinia Virus: A Double-Edged Sword While VACV is a powerful tool for preventing poxvirus outbreaks, it also has the ability to evade and suppress the immune system. This immune evasion is a key factor in the virus’s success as a pathogen and poses significant challenges for the development of safer and more effective vaccines. Poxviruses, including VACV, produce a wide range of proteins that can interfere with various aspects of the immune response. These proteins can block processes such as apoptosis (programmed cell death), chemokine and cytokine binding and synthesis, and cell signaling. This allows the virus to replicate within the host without being detected or destroyed by the immune system. For instance, VACV can directly infect immune cells such as lymphocytes, natural killer (NK) cells, and monocytes/macrophages, leading to a reduction in antigen presentation—the process by which the immune system identifies and targets pathogens. Antigen-presenting cells (APCs) like macrophages and dendritic cells play a crucial role in initiating the immune response by presenting viral antigens to T cells. However, VACV has been shown to disrupt this process. 
For example, in studies using rat peritoneal macrophages, VACV was found to inhibit the presentation of antigens on major histocompatibility complex (MHC) class II molecules, which are essential for activating CD4+ T cells. These T cells are critical for clearing poxvirus infections, but when their activation is impaired, the immune response is weakened, allowing the virus to persist and potentially cause disease. The Complexity of VACV-Induced Immune Suppression The suppression of the immune response by VACV is not solely due to the induction of apoptosis in infected cells. While apoptosis does occur, VACV also actively interferes with the antigen presentation machinery within APCs. This results in decreased expression of MHC class II molecules on the surface of these cells and a reduced ability to stimulate T cells. This mechanism not only limits the immediate immune response but also impacts the development of long-term immunity, raising concerns about the efficacy and safety of VACV-based vaccines. Further research has shown that VACV infection leads to a broad decrease in cytokine production following antigen presentation. Cytokines are signaling molecules that play a crucial role in coordinating the immune response, and their reduction can have significant consequences for the body’s ability to fight off infection. However, not all cytokines are equally affected by VACV. For example, interleukin-18 (IL-18), which is important for inducing antiviral responses, is less inhibited by VACV. This suggests that the virus has evolved to selectively target certain aspects of the immune response while allowing others to proceed, which may help it evade detection while still benefiting from some host immune functions. Interestingly, the inhibition of antigen presentation by VACV appears to vary depending on the type of APC involved. In some cells, such as professional APCs, VACV reduces MHC class II expression and induces apoptosis. However, in other cells, such as certain B-cell lines, MHC class II expression is reduced without the induction of apoptosis. This indicates that VACV has multiple strategies for evading the immune system, and these strategies may be tailored to the specific type of cell it infects. Implications for Vaccine Development: Balancing Efficacy and Safety The dual role of VACV as both a vaccine vector and a pathogen poses significant challenges for vaccine development. To create safer and more effective VACV-based vaccines, it is crucial to understand and mitigate the virus’s ability to evade the immune system. One approach to improving the safety of VACV-based vaccines is to engineer strains of the virus that lack certain immunomodulatory genes. By removing or modifying these genes, it may be possible to create a vaccine that retains its immunogenicity—its ability to provoke an immune response—while reducing the risk of adverse effects. This could be particularly important for individuals with compromised immune systems or other health conditions that make them more susceptible to vaccine-related complications. Another promising avenue of research is the development of subunit vaccines, which use only specific parts of the virus, such as proteins or peptides, rather than the entire virus. These vaccines are less likely to cause adverse reactions because they do not contain live virus particles. However, they must be carefully designed to ensure that they still elicit a strong and effective immune response. 
Researchers are also exploring the use of adjuvants—substances that enhance the body’s immune response to an antigen—to boost the efficacy of subunit vaccines. The Need for Ongoing Research and Vigilance Despite the success of VACV in eradicating smallpox, the ongoing threats posed by other poxviruses and the potential risks associated with VACV-based vaccines underscore the need for continued research and vigilance. Understanding the mechanisms by which VACV and other poxviruses evade the immune system is essential for developing next-generation vaccines that are both safe and effective. This research is particularly important in light of the potential for poxviruses to be used as bioterrorism agents. The ability of VACV to suppress the immune system and evade detection makes it a potentially dangerous tool in the hands of those who would seek to use it for nefarious purposes. As such, public health officials and researchers must remain vigilant and proactive in their efforts to understand and counter the threats posed by poxviruses. In addition to the development of safer vaccines, there is also a need for antiviral drugs that can effectively treat poxvirus infections. While vaccination is the most effective means of preventing poxvirus-related diseases, having effective treatments available is crucial for managing outbreaks and protecting vulnerable populations. Research into the development of antiviral drugs that target specific aspects of poxvirus replication and immune evasion is ongoing and represents an important area of focus for the future. Vaccinia virus, as a member of the poxvirus family, has played a pivotal role in the eradication of smallpox and continues to be an important tool in the fight against emerging poxvirus threats. However, its ability to evade and suppress the immune system presents significant challenges for vaccine development and public health. By understanding the complex interactions between VACV and the immune system, researchers can develop safer and more effective vaccines, as well as antiviral treatments, to protect against poxvirus-related diseases. Ongoing research and vigilance are essential to ensure that the benefits of VACV-based vaccines continue to outweigh the risks and that public health is safeguarded against the evolving threats posed by poxviruses. reference link : https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2567.2009.03120.x
Space exploration has reached new heights with recent developments in both government and private sectors. The European Space Agency (ESA) and SpaceX have made headlines with two significant missions: the JUICE spacecraft’s groundbreaking flyby and Transporter 11’s massive satellite rideshare. These advancements highlight the intricate planning, technological innovation, and international cooperation driving humanity’s quest to explore the cosmos. ESA’s JUICE Mission: A Leap Towards Jupiter The European Space Agency’s JUICE (Jupiter Icy Moons Explorer) mission represents a monumental leap in our exploration of the outer solar system. Launched in April 2023, JUICE is tasked with investigating three of Jupiter’s largest moons—Ganymede, Europa, and Callisto—focusing on their potential for harboring life. Ganymede, with its subsurface ocean, is of particular interest. To navigate the vast distance to Jupiter, the JUICE spacecraft leverages a series of gravity assists. This method, which conserves onboard propellant, involves strategic flybys of Earth, Venus, and the Moon. The upcoming lunar-Earth flyby marks a significant milestone. By using the gravitational pull of these celestial bodies, the spacecraft gains the necessary momentum to journey further into space, setting the stage for a historic arrival at Jupiter in 2031. The journey is both a testament to meticulous planning and an exercise in precision. Flyby maneuvers must be executed with near-perfect accuracy. Any deviation could endanger the mission. This complex choreography also serves as an opportunity to test and calibrate JUICE’s suite of scientific instruments, particularly the Radar for Icy Moon Exploration (RIME), which has faced some challenges due to electronic noise. The Road to Jupiter: Science and Innovation The anticipation surrounding JUICE’s mission grows as it approaches its destination. For the scientific community, the potential discoveries about Ganymede and the other Jovian moons could revolutionize our understanding of extraterrestrial environments and their ability to support life. Ganymede’s subsurface ocean makes it a prime candidate for exploration. By studying its ice-covered surface and underlying saltwater ocean, scientists hope to gain insights into the moon’s geological activity, magnetic field, and potential habitability. These observations could provide a new perspective on where and how life could exist beyond Earth. The JUICE mission’s ambitious 11-year trajectory exemplifies the blend of scientific curiosity and technical prowess. Each flyby, each data collection, and each moment of the journey could yield groundbreaking results that expand our knowledge of the universe. With a sophisticated suite of instruments ready to probe the mysteries of these icy moons, JUICE is set to be a cornerstone in the annals of space exploration. SpaceX’s Transporter 11: Expanding the Frontier While ESA’s JUICE mission reaches for the outer planets, SpaceX continues to revolutionize satellite deployment closer to home. The Transporter 11 mission, launched from Vandenberg Space Force Base in California, successfully carried 116 payloads from nine different companies into orbit. This rideshare mission stands as a testament to the growing complexity and frequency of modern satellite launches. Among the various payloads were satellites from entities such as the European Space Agency, the United Kingdom’s Surrey Satellites, Japan’s iQPS, and the United States’ Planet Labs. 
This diverse collection highlights the international collaboration and multifaceted purposes driving satellite missions today. From weather forecasting to earth observation and global communications, the payloads on Transporter 11 are set to make significant contributions to diverse fields of research and industry. As SpaceX continues to refine its launch capabilities, the Transporter missions have become a crucial component of the company’s portfolio. Not only do they offer a cost-effective solution for satellite deployment, but they also demonstrate SpaceX’s reliability and innovation in managing complex logistics and mission execution. Setting New Standards in Space Logistics SpaceX’s Transporter 11 mission marks its 80th launch for the year, underscoring the company’s rapid growth and expanded role in global space operations. This mission, like those before it, reinforces SpaceX’s reputation as a dependable launch provider, adept at handling high-volume satellite deliveries with precision and efficiency. The success of these missions is pivotal not just for the private sector, but also for national and international space agencies. NASA, for instance, has increasingly relied on SpaceX for critical tasks, including ferrying astronauts to the International Space Station. This partnership highlights SpaceX’s evolution from a pioneering startup to a cornerstone of contemporary space logistics. The transport of such a diverse array of satellites aboard Transporter 11 also speaks to the versatility of SpaceX’s Falcon 9 rocket. Whether facilitating the expansion of the Starlink communications network or deploying instruments for scientific research, these missions are integral to advancing our capabilities in space and reinforcing the infrastructure necessary for future explorations. The Future of Space Exploration: Collaborative and Driven Space exploration has hit new strides recently thanks to exciting developments in both government and private sectors. The European Space Agency (ESA) and SpaceX are at the forefront of these advancements, capturing global attention with two major missions. ESA’s JUICE spacecraft embarked on a groundbreaking flyby, a mission aimed at unveiling the mysteries of Jupiter’s icy moons. This mission represents years of meticulous planning and cutting-edge technological innovation. Meanwhile, SpaceX’s Transporter 11 made waves with its massive satellite rideshare, showcasing the potential for cost-effective and efficient space transport. These missions underscore more than just technological prowess; they highlight the importance of international cooperation and strategic partnerships. Among the most striking aspects of these missions is the intricate planning that goes into every detail, reflecting human ingenuity and the relentless pursuit of knowledge. The collaboration between various nations and private entities shows a unified urge to explore and understand our universe. Both ESA and SpaceX are proving that the sky is not the limit—it’s just the beginning. These missions mark significant milestones in our ongoing journey into the cosmos, paving the way for future explorations and discoveries. As humanity continues to push the boundaries of what’s possible, these achievements serve as a testament to our shared aspiration to reach for the stars.
Forms are used in workflows to collect predefined information from users. Some common examples of forms in workflows include service requests and help desk tickets, among others. Forms are built on .NET, so you have full .NET capabilities, including server-side code, which enables more complex implementations. Markup is generated according to the XHTML standard, which supports multiple browsers. In this section, we'll cover some key terms and concepts regarding Cora SeQuence forms.

First Define View Method vs. First Define Data Method

When you create a form, you select whether to first define the view or the data. The best practice is to first define the data model, but the view-first method is a good option for quickly creating a form and is generally used by business users. Separation between model and view enables you to create an unlimited number of views for a single data model.

The data model consists of an entity model or queries that are connected to one of several data source types. The data model is not compiled to an object, but delivers the same performance as if it were compiled to an object. Data is cached in memory, which reduces the time and load involved in accessing the database. The server loads the most recent version in-memory. The supported data source types are:
- Lookup Table
- Stored Procedure
- Service (WCF, web service, external service)
- SQL queries

Data Model Wizard

In the data model wizard, you define the data sources for the form. Every Form Activity and Task Activity (which includes a message and a form) has a Form Definition. The Form Definition consists of a data model and a form template. The data model includes queries, which connect to data sources. The form template contains one or more form views for various platforms, for example, desktop, mobile, and so on.

Each entity model consists of one or more entity queries. There are two entity query types. Each entity query consists of the following components:
- Entity Data Source (always)
- Entity Type (the structure returned by the query; always)
- Entity Parameters (only if the query is a stored procedure or service)

Entity Query Reference

The entity query reference points to an entity query in another entity model (a reference to a user or system lookup table). There are several entity types:
- Primitive (such as standard form fields, textbox, and so on)
- Association (reference to another query)

The form template consists of one or more views. Each view is designed for a different technology. You can create a new form, or duplicate a form from the database. After you select the form creation method, you select how to define the data.

Data Definition Options
- First define view
- First define data

Note: The Fast Track option is useful for basic scenarios. There is no option to go back to a previous step of the wizard.

In this step, you define the structure of the table or lookup table query by adding fields and configuring the field properties.

| Property | Description |
| --- | --- |
| Can be blank (Nullable) | Null values are valid. |
| Data Source Expression | The expression of the projection. For service queries only. |
| Data Source Name | Field name in the database. |
| Default Value | Use the expression editor to define the field's default value. |
| Display Name | A friendly name for the field. |
| Is Generated | The field's value is generated in the database. |
| Is Primary Key | The field is a primary key. |
| Is System | The field is a system field, for example, fIdId. |
| Max Length | The field's maximum length. |
| Sync Mode | Determines when the field's value is synced from the database to Cora SeQuence. This is required if, for example, the field is generated, which means that upon insert, a value is created for the field in the database. The Sync Mode then determines if this value is copied back to Cora SeQuence. The options are: Always, Never, On Insert, and On Update. When the field is set to Default, Cora SeQuence determines the sync behavior based on the other field properties. |

Form options marked with the letter R indicate responsive forms.
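As an illustration of the field properties listed above, here is a small, hypothetical Python model of an entity field together with a simple validation check. This is not Cora SeQuence's actual API or schema; the class, property names, and validation rules are assumptions made purely for the sake of the example.

```python
# Illustrative sketch only - not Cora SeQuence's actual API. It mirrors the
# field properties described above to show how a data-model definition might be validated.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SyncMode(Enum):
    DEFAULT = "Default"
    ALWAYS = "Always"
    NEVER = "Never"
    ON_INSERT = "On Insert"
    ON_UPDATE = "On Update"

@dataclass
class EntityField:
    data_source_name: str            # Field name in the database
    display_name: str                # Friendly name shown on the form
    nullable: bool = True            # "Can be blank (Nullable)"
    is_primary_key: bool = False
    is_generated: bool = False       # Value generated by the database
    is_system: bool = False
    max_length: Optional[int] = None
    default_value: Optional[str] = None
    sync_mode: SyncMode = SyncMode.DEFAULT

def validate(field: EntityField) -> list[str]:
    """Return a list of human-readable problems with a field definition."""
    problems = []
    if field.is_primary_key and field.nullable:
        problems.append(f"{field.display_name}: a primary key cannot be nullable.")
    if field.max_length is not None and field.max_length <= 0:
        problems.append(f"{field.display_name}: max length must be positive.")
    if field.is_generated and field.sync_mode == SyncMode.NEVER:
        problems.append(f"{field.display_name}: generated values would never be synced back.")
    return problems

# Example: a database-generated primary key that was mistakenly left nullable.
print(validate(EntityField("RequestId", "Request ID", nullable=True,
                           is_primary_key=True, is_generated=True)))
```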
California University Electrification Project - A university and a giant electric company start a pilot project together. - Electrification projects boost economic growth and development. - The goal is to help the environment and give universities a new path. - Giving up old systems for new resources does face pushback. - Transitioning away from outdated hardware is done with this partner. What is more efficient, electrical or natural gas power? Getting the answer to that question may be sooner than we think. The East Campus of California State University Monterey Bay is about to become a unique pilot project as they partner with Pacific Gas & Electric (PG&E). With more significant concerns about carbon dioxide emissions and climate change, the university is undertaking this project to reach its goals. If successful, other universities can transition and help achieve carbon neutrality by 2030. Here’s more from Luis in today’s video. What Is An Electrification Project? Electrification projects are not new. The United States Congress passed the Rural Electrification Act, which became law on May 20, 1936. That law would help rural communities receive access to electricity. Back then, only cities had access to this resource. An electrification project transforms a property, community, college, or university from an existing resource, e.g., natural gas, propane, etc., to electrical only. Electrification has always played a key role in economic development and growth. The East Campus of California State University Monterey Bay wants to retire its natural gas pipelines built by the Army many years ago. The project could reduce an average of 5 million pounds of carbon dioxide emissions annually and cut the school’s utility costs. What Will The Electrification Goal Achieve? Electric systems continually prove more efficient and economical than natural gas systems. That allows the college to see its utility savings reduce over time. Additionally, any risk of a natural gas pipeline breaking or leaking under a building or structure gets permanently removed. In the case of the East Campus, their end goal is to achieve carbon neutrality by 2030. They ultimately want to replace their natural gas reliance with only having electricity in new and existing school buildings. Once this electrification project completes, the university will reduce its carbon dioxide emissions contributing to climate change and benefit its students and staff. In the end, they’re also hoping their electrification project will help other universities want to transition. Transitioning From Old Systems To New Resources Anytime a school, business, or organization is faced with giving up old systems for new, more efficient resources, it experiences pushback. The belief “if it’s not broke, why fix it” always rears its ugly head before a transition begins. Internal resistance always occurs even when a project saves money, benefits a community, and sends others a positive example. For instance, an in-house IT department does not realize its firewall isn’t closing off open ports. Just because there’s never been a breach or cyberattack doesn’t mean the organization’s security shouldn’t get updated annually. Not doing so, the cost to recover after a cyber theft becomes significant. Alvarez Technology Group Helps Companies Transition The thought of tearing out your old computer hardware and eliminating a server closet to make room for an efficient IT infrastructure scares many decision-makers. You’re used to the old way. 
However, with evolving technology, change needs to take place quickly. At Alvarez Technology Group, we partner with businesses and organizations to transition from outdated, end-of-life hardware to a more robust network. The assistance we provide opens more doors for your company. Contact us today or call Toll Free 1-866-78-iTeam about your pilot project.
Multi-Factor Authentication & 2-factor authentication

Abstract - Multi-factor authentication (MFA), often referred to in practice as two-factor authentication (2FA), provides secure access to a system through several (often two) independent factors.

Monday, 23 August 2021

How does the login work without two-factor authentication?

Most access restrictions to IT systems are still based on single-factor authentication: the user password. The password is then the one factor for authenticating the user to the system. This authentication factor is assigned to the "knowledge" category, since it is knowledge that the user must possess in order to successfully log on to the system.

What is two-factor authentication, why is it useful and what does it consist of?

Since knowledge alone is not a sufficient factor to reliably protect IT systems from third-party access, an additional factor should be used for legitimization. This is because knowledge, i.e. the password, can be stolen or guessed, so that access by unauthorized persons becomes possible. This is where the second factor comes in: using a cell phone, another factor for secure login can be added by sending an SMS or by retrieving a code in an app, which confirms the identity of the person logging in. This type of authentication factor is categorized as "ownership" because, in addition to the password, the person who wants to authenticate needs the device on which the second authentication takes place. Consequently, an attacker would have to steal both the password and the device that the user has set up for two-factor authentication.

2FA or MFA refers to the combination of at least two credentials for authentication/logon to a system. Typical examples include:
- Online banking or ATM
- Identity card
- Tax return with Elster
- Card payment
- Online services like PayPal, Facebook, etc.

What are the requirements for implementing multi-factor authentication?

Depending on the system, there may be different supported options for the second factor. Major vendors offer a variety of the following options. In principle, you should give your employees the opportunity to decide for themselves which method is most suitable for them. In any case, the company should offer hardware tokens (for the key ring, for example) for employees who do not have a company cell phone, do not want to use their own cell phone voluntarily, or simply do not have any of the other options available.

The most commonly used authentication methods:
- One-time secrets, i.e. one-time passwords (OTPs or TANs)
- Google Authenticator
- Microsoft Authenticator
- Yubico Authenticator
- SMS (also known as SMS TAN)
- Phone call
- Cryptographic keys
- Software certificate (token, key) as a file
- HBCI, signature cards, NFC or USB tokens

2-factor authentication sounds very sensible - so why is it not yet being used across the board?

To put it succinctly, it's all about usability, or convenience. Logging on to systems secured with multiple factors can be "annoying," to put it bluntly. Whereas it was previously sufficient to enter the password, with multi-factor authentication activated, the authentication device must now also be at hand for logging in. For many users, this is a reason not to activate the feature. Here, it is important to explain to users how important IT security is and to outline the concrete threat scenario. In any case, the BSI recommends at least two-factor authentication for all online accounts.
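To make the "ownership" factor concrete, the following is a minimal sketch of the time-based one-time password (TOTP) flow that authenticator apps such as those listed above implement. It assumes the pyotp Python package; the account name and issuer are placeholders.

```python
# Minimal sketch of the time-based one-time password (TOTP) flow used by
# authenticator apps. Assumes the pyotp package; names here are illustrative.
import pyotp

# Enrollment: the server generates a shared secret and shows it to the user,
# usually as a QR code that the authenticator app scans.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: after checking the password (factor 1, "knowledge"), the server asks
# for the current 6-digit code from the user's device (factor 2, "ownership").
code_from_user = totp.now()          # in reality, typed in by the user
print("Code accepted:", totp.verify(code_from_user, valid_window=1))
```

The code changes every 30 seconds and can only be produced by a device holding the shared secret, which is what makes it an ownership factor rather than a knowledge factor.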
On the topic of IT security, you might also be interested in: How to choose a secure password.
Network intrusion refers to unauthorized activity within an IT infrastructure network. The purpose of unauthorized network activities range from espionage and exploitation to data leaks and network downtime. According to the 2018 Verizon Data Breach Investigations Report that studied more than 53,000 security incidents around the world, most network infringements attempts successfully compromise the network within a few minutes. Two-thirds of the security incidents occurred months before they were discovered. As a consequence, organizations victimized by sophisticated cybercrime incidents lost mission-critical business information, incurred significant financial losses and faced costly lawsuits due to security non-compliance. It is therefore critical to both detect and prevent network intrusions proactively, before the impact escalates beyond control. Intrusion Detection and Intrusion Prevention both refer to a different set of tooling and practices applicable at different stages of the cyber security kill chain for network security and protection. Both technologies are designed to analyze and understand networking activities that have the potential to damage the security posture and health of a computer network. They may be designed to work together with a suite of technologies, protocols and mechanism to maintain optimal standards of security. What is an intrusion detection system? Intrusion Detection System (IDS) refers to the technology that passively monitors the network to identify anomalous activities and traffic patterns. The activities may encompass inbound and outbound network traffic posing threats from within and outside of the network. The IDS is configured to detect traffic anomalies in reference to organizational policies of user access and privileges. In response to unauthorized network activities and incidents, the IDS system can alert appropriate personnel or technologies to act against the detected threats. A simple open source IDS solution may detect intrusions by comparing the network traffic information to databases of known attack signatures. In this case, the effectiveness of the IDS solution is limited by the digital signatures of known network exploits available and updated at the time of network intrusion. Sophisticated commercial Intrusion Detection Systems rely on advanced technologies including machine learning algorithms designed to understand baseline network traffic operations and identify anomalous incidents in real-time. IDS capabilities embedded within an external networking hardware equipment can be used to detect suspicious traffic activity to and from a specific device in real-time, whereas hosted IDS solutions may be used to evaluate traffic at a holistic network level close to real-time depending upon the technology capability and configurations. Unlike a firewall, IDS solutions are not designed to block data packets once a suspicious activity is detected. Instead, IDS complement the overall security system of the organizations so appropriate response can be launched to reduce the risk of security infringement. The purpose of the IDS system typically involves gathering useful understanding on potential threats that are likely to impact network security. Understanding intrusion prevention systems Intrusion Prevention System (IPS) refers to the technology solution that actively responds to a potential threat by blocking the network traffic or unauthorized associated actions at various levels of the system. 
An IPS solution typically controls network access and acts as a sophisticated firewall-like technology with built-in IDS capabilities, aiming to prevent attacks from happening in the first place. The IPS offers advanced capabilities such as analyzing network incidents and identifying patterns of potential threats before taking preventive action. Unlike a firewall that only inspects packet headers and rejects the data from entering the network, the IPS analyzes the entire packet and correlates the information with known events of high network security risk. It then blocks the data based on specific organizational policies pertaining to user access and privileges. An IPS may also be configured to take no action against specific threats, making it behave much like an IDS.

IPS systems are available as hosted solutions that protect the organization at the holistic network level, as well as network-based IPS solutions designed to protect individual networking devices. The principle of operation of both technologies is similar, and each can be configured to meet the unique monitoring needs of the network architecture.

Which is better: IDS vs IPS?

The choice between IDS and IPS technologies comes down to the use cases, IT budget, compliance requirements, network architecture, and overall security strategy, among other factors. IDS solutions can help your organization evaluate internal user behavior as well as potential threats originating from the outside. They can be used to identify infections and viruses that lead to information leakage. IT security personnel can also use the technology to identify configuration errors, scan for shadow IT or unauthorized apps, and map the other clients and servers involved in traffic routing, information flows, and network access.

An IPS can also be used to address these problems by preventing unauthorized network activities on its own, but the role of the technology must align with the organization's security strategy for thwarting these risk vectors. For instance, an organization may have lined up a series of security layers that analyze potential network intrusions in order to strategically understand the risk and root causes and then take preventive action. It may also need to understand the risk behind apparently unauthorized traffic and information flows across the network. Perhaps there is a rogue employee attempting to gain access to sensitive business information, or a legitimate networking attempt by an app to serve a sudden spike in traffic that is critical to business growth. Either way, organizations may need more than a standalone IPS solution to determine the best course of action. If an organization does not need to actively block the network traffic identified as a potential intrusion and already has appropriate security measures in place, then the additional investment in an IPS over an IDS solution may not be justified.

Research suggests that cybersecurity risks are on the rise. The threats are coming from all directions – from disgruntled employees within organizations to underground cybercrime rings and state-sponsored actors. The attacks are not only financially motivated but are also used as symbolic expressions of political activism, among other reasons. It is imperative for organizations to understand the impact of the threat and take proactive action to mitigate the risk of network intrusions.
By detecting and preventing network intrusions proactively, organizations can achieve these goals and protect their businesses and users from the unforeseen consequences of cyber-attacks.
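To make the distinction above concrete, here is a minimal, illustrative Python sketch – not a production IDS or IPS – showing the two detection approaches the article describes: matching traffic against known attack signatures and flagging deviations from a learned traffic baseline. The signature patterns, event fields, and thresholds are hypothetical examples, not real detection rules.

```python
# Minimal sketch of the two detection styles described above: signature
# matching against known exploit patterns, plus a crude traffic baseline
# for anomaly detection. Signatures and the sample event are hypothetical.
import re
from statistics import mean, stdev

SIGNATURES = {
    "sql_injection": re.compile(r"(?i)union\s+select|or\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of any known attack signatures found in a payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

def is_traffic_anomalous(history_bytes: list[int], current_bytes: int, z_threshold: float = 3.0) -> bool:
    """Flag traffic volumes that deviate strongly from the learned baseline."""
    if len(history_bytes) < 10:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history_bytes), stdev(history_bytes)
    return sigma > 0 and abs(current_bytes - mu) / sigma > z_threshold

# Example event: an IDS would raise the alert; an IPS would additionally
# drop or block the offending traffic according to policy.
event = {"src": "10.0.0.5", "payload": "GET /?q=1 UNION SELECT password", "bytes": 48_000}
hits = match_signatures(event["payload"])
if hits or is_traffic_anomalous([1200, 1500, 1100, 1350] * 3, event["bytes"]):
    print(f"ALERT from {event['src']}: signatures={hits}")
```

The key behavioral difference is captured in the final comment: detection stops at the alert, while prevention adds an enforcement step.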
How AI Perceives the World

Neural Networks – AI's Brain

As human beings, we often don't give much thought to how we obtain information. After all, we've been doing it, adjusting, and soaking it in since birth. We use our five senses to explore our world and our surroundings, making judgments and decisions to classify pieces of information. So, how does Artificial Intelligence (AI) perform these very same tasks?

As humans, we recognize and process images such as handwritten digits with our eyes, and it's automatic. Our brains are super-computers that have developed over millennia of evolution. Science is advanced enough today, however, that we know more about the process. We now know that in each hemisphere of our brain, we have a primary visual cortex (V1) that contains more than 140 million neurons, with tens of billions of connections between them. The number is staggering to comprehend. And V1 is not alone: there is a V2, V3, V4, and so on, with each visual cortex progressively aggregating, categorizing, and deciphering more complex visual knowledge. We, as humans, are doing all of this on an unconscious level. Stunning, isn't it?

If we are simply considering visual perception, how then can a set of software algorithms perform the same function? How can AI 'learn' what it sees when images are fed in as a form of data? Getting AI to recognize the same handwritten digits that humans read without thinking isn't as easy as it would seem, because hand-coding rules for visual pattern recognition runs into an endless number of exceptions.

AI is built around the same concept as our own neural networks. The first time you were taught the alphabet or numbers, it didn't fully register. It took practice and time. Recognition developed as a child when you were given a myriad of examples of the ways that Aa Bb or 0 1 2 could be written. It works the same with AI. You have to use data inputs as training examples. To allow AI to learn to perceive and recognize written constructs, you would begin with multiple examples of Aa Bb… or 0 1 2… in a variety of handwriting styles. This is done so the algorithm can 'learn' that there are many slants, curves, and styles, but a 0 is a 0, and an 8 is an 8. This type of 'learning' is often referred to as deep learning, a more specialized and sophisticated technique than the generic term machine learning suggests.

Is Deep Learning Really Perception?

Deep learning is just one part of machine learning. AI needs to learn from experience, the same way that we as humans had to learn as infants and children. Deep learning algorithms are used to create artificial neural networks, similar in spirit to the human brain. Machines are then able to learn through experience, trial, and error, and even gain new skills without human interaction. The same way an infant can pick up a block, handle it, drop it, pick it up again, and 'learn' about the shape to eventually understand that it is a block – and then that it is a useful thing to build with – machines learn from similar experiences. Human minds absorb information through experience; likewise, a deep learning algorithm (an artificial neural network or complex program) can perform a task or receive input data repeatedly, each time learning a little more and tweaking the results until it achieves accuracy at or near 100%.
After seeing a lot of handwritten examples of numerical values, the system can 'decipher' nearly any pattern and tell the difference between a 6 and an 8. It has learned to 'perceive' the fine details of handwriting.

What is a Perceptron?

The term perceptron is not new. It was coined in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, and it is primarily based on our own neural networks and how they function. Like the visual neural network described above, with V1, V2, V3, and so on, an artificial neural network is created when a collection of neurons, or individual nodes, are interconnected through synaptic links. In every artificial neural network there are three kinds of layers – input, hidden, and output. Inputs are given values, passed to the hidden layer, which has its own set of connections, and then on to the output. During this process, a learning algorithm is used to determine what the required behavior should be. An example: does the input data for a handwritten number 8 match the specifications, drawn from all known sources, of what an 8 could look like? If yes, store it as an 8; if not, send it out as unknown. A learning algorithm is generally a closed feedback loop that works through all the values and corrections previously known to the network.

The perceptron algorithm uses a binary classification system for data. In some situations, running the perceptron algorithm will expose limitations in the model that you may have been completely unaware of. This can be problematic and is resolved with multi-layer perceptron networks. Today, the perceptron remains one of the most significant building blocks of the learning algorithms available.

AI is Artificial Perception

Intelligence in human beings comes from the ability to derive conclusions from patterns of 'data' that reach us through our senses from the world around us. Our inferences or results from that data are based on either structured or rational decision-making processes. For our super-computer brain, these processes can happen so quickly that we believe we just 'know' the results. Before we could ever reach a point in life where that was possible, there was a beginning, where we learned from patterns presented to us. We learned that it is not rational that the egg came before the chicken, as we know chickens lay the eggs – a form of rationalization. We may have learned that 1 + 1 = 2, and can compute this without even realizing that we have done so – a structured process. Both forms of coming to conclusions are distinct but can also complement each other.

Machine-based intelligence works in much the same fashion. It has two forms of classifying data and working through the decision-making process. To clarify this further, one form is similar to your calculator or your PC: a structured decision-making process, i.e., instruction by instruction, line by line of code, leading to a structured decision. AI, on the other hand, weighs the data against pre-existing data patterns to reach the conclusion that it 'perceives' to be the correct choice. AI is, more or less, a form of artificial perception.

Tomorrow, After a Nap

We now know that AI consists of machines programmed with artificial neural pathways to make decisions, much like the human brain. As humans, we begin collecting data before we even open our eyes in the morning – it stirs us from our sleep. Everything that your eyes can see throughout the day is 'live-streaming' data into your brain.
We are creatures who explore our world, invent, create, and build. We ingest data the way we breathe air. However, at the end of a long day, we require sleep. If we don't get enough sleep or we stay up for too long, our neurons can begin to 'misfire,' and we can make poor decisions. Sleep restores us so that we can start the following day anew. Recent studies have concluded that AI can also require something like sleep, a finding that fascinates researchers. Artificial neural networks are built to mimic the human brain so closely in their functionality that they, too, can become overloaded. Researchers from the Los Alamos National Laboratory have discovered that, in order to maintain top functionality, their AI systems require a period of rest – AI needs sleep much the same as the human brain.

What the researchers discovered in their project is that after long periods of unsupervised learning, the AI systems would begin to malfunction and stop working as they were designed. The AI was being used to learn much the way humans take in information, attempting to classify objects without prior examples for comparison. The discovery came when they realized that exposing the AI to an artificial analog of 'sleep,' or rest, corrected the problem. In order for the AI to remain stable over long periods of time, it required a standard set of sleep intervals. Exposing the artificial neural networks to an 'analog sleep time' was actually their last resort, and the results surprised the scientists. The next step in their development of AI is to include designated sleep times, or downtime, so that the artificial neurons have a chance to reset and recharge. This will bring about a new level of how AI perceives the world.
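The perceptron described earlier is simple enough to write out in a few lines. The sketch below is a minimal NumPy implementation of Rosenblatt-style perceptron learning on a toy binary classification task; the learning rate, epoch count, and toy data are arbitrary choices for illustration rather than anything prescribed by the article.

```python
# Minimal perceptron: a single neuron with weighted inputs, a bias, and a
# step activation, trained with the classic perceptron update rule.
import numpy as np

class Perceptron:
    def __init__(self, n_inputs: int, lr: float = 0.1):
        self.w = np.zeros(n_inputs)
        self.b = 0.0
        self.lr = lr

    def predict(self, x: np.ndarray) -> int:
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X: np.ndarray, y: np.ndarray, epochs: int = 20) -> None:
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)  # 0 if correct, +/-1 if wrong
                self.w += self.lr * error * xi     # nudge weights toward the target
                self.b += self.lr * error

# Toy, linearly separable data: the logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
p = Perceptron(n_inputs=2)
p.train(X, y)
print([p.predict(xi) for xi in X])  # expected: [0, 1, 1, 1]
```

A single perceptron can only separate classes with a straight line (or hyperplane), which is exactly the limitation that multi-layer perceptron networks were introduced to overcome.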
Top 10 Most Expensive Cars In The World

Updated · Oct 10, 2023

Cars are four-wheeled motor vehicles built to travel on roadways. From compact city cars to luxurious supercars, they come in an array of forms, sizes, and designs. Vehicles can be powered by petrol, diesel, electric, or hybrid motors, with varying levels of fuel economy and environmental impact. Cars have become an indispensable part of modern society because they provide a quick and efficient means of transportation. Vehicles give individuals luxury, speed, and the freedom to move around; however, they also contribute to traffic jams, fatalities, and the environmental degradation we witness around us.

In the late nineteenth century, fuel-powered cars began appearing, ushering in the era of the automobile. Karl Benz created the first gasoline-powered car in 1886, which spurred numerous developments by automakers throughout Europe and America. Henry Ford introduced his Model T in 1908 and brought it to assembly-line mass production, which made cars cheaper and lowered price tags enough to make them accessible to far more people.

Cars underwent an exciting transformation during the 1920s and 1930s, gaining features like radios, heaters, and improved gearboxes. Unfortunately, both the Great Depression and World War Two severely disrupted the industry, and sales fell off dramatically until those crises ended. After that, though, vehicles gained prominence as status symbols: many drivers sought out supercars for their power and speed, while wealthier individuals preferred luxury models for convenience and elegance.

In the 1970s, global fuel shortages and concerns about pollution and gas mileage led Japanese automakers such as Honda and Toyota to develop lighter, more fuel-efficient models. They challenged American and European automakers' dominance of the sector with cutting-edge technologies and layouts. With advances in hybrid and electric vehicle technology, automated driving capabilities, and other developments designed to make automobiles friendlier, more effective, and greener, cars have continued their journey into the 21st century. Countless individuals across industrial, marketing, and service industries depend on cars for transport and economic life, making them indispensable.

Types Of Cars:
- CONVERTIBLE CARS
- ELECTRIC CARS
- SPORTS CARS
- HYBRID VEHICLES

Popular Car Brands:

Environmental Benefits And Disadvantages of Cars:
- Cars emit CO2, a greenhouse gas with significant environmental consequences. According to the International Energy Agency, approximately 23% of the world's energy-related carbon dioxide emissions come from transportation (which includes automobiles).
- Automobiles emit volatile organic compounds (VOCs), particulate matter (PM), sulfur oxides (SOx), oxides of nitrogen (NOx), and hydrogen sulfide (H2S) into the atmosphere. These emissions may cause breathing issues as well as acid rain, endangering both people's health and the ecosystem.
- Oil and fuel are necessary for cars to run, yet their extraction, shipping, and refining can harm our planet through oil spills, habitat degradation, and marine pollution.
- Habitat destruction, biodiversity loss, and other adverse effects on natural environments and ecosystems can all be traced back to the construction and upkeep of highways for automobiles. - Electric vehicles (EVs) emit zero pollutants compared to their fossil-fuel counterparts, providing a viable means for a reduced transport network. Unfortunately, the energy required to recharge them compromises their environmental benefits. - Vehicles with improved fuel economy can reduce greenhouse gas emissions and save owners money on fuel expenditures. - Comparing automobiles alone to mass transportation like subways or buses, mass transit drastically reduces pollution levels. - Splitting a vehicle or sharing rides with others may reduce carbon emissions and save money on gasoline. - Physical transport options like cycling, jogging, and running do not emit pollutants and are healthy options that don't pollute. - Federal programs can help reduce car pollution by increasing fuel taxes, setting automobile requirements, and offering subsidies for lower latency drivers. Why Cars are Expensive? - Automobiles are complex machines that require exact precision and quality control during production. The cost of natural resources, manpower, and necessary technology in production may be high, which ultimately drives up the final price for customers. - The auto industry invests heavily in research and development to maintain and innovate designs, technology, and services. Unfortunately, these costs often get passed on to customers, increasing expenses. - Automakers must adhere to numerous regulations and standards that may increase the cost of production, thus increasing a vehicle's price tag. - Many buyers desire cars that can be personalized according to their tastes in colors, equipment, and options. As a result, the price of a vehicle may rise significantly with personalization. - Some car companies and producers' vehicles may be perceived to be of higher value and costlier. This perception can be influenced by factors like promotion, elegance, and corporate reputation. - Government-imposed taxes and duties on foreign vehicles or certain automobile types may increase the price of a vehicle. - The pricing of vehicles is heavily determined by economic principles such as demand and supply. A car's cost could be higher depending on a large market for it and limited supply. Mentioned below are the World’s top 10 Most Expensive Cars - 1955 MERCEDES-BENZ SLR – WORTH $146M - FERRARI 250GT0 – WORTH $55.8M - BUGATTI TYPE 567sc ATLANTIC – WORTH $40M - ROLLS-ROYCE BOAT TAIL – WORTH $28M - PAGANI ZONDA HP BARCHETTA – WORTH $17.8M - ROLLS-ROYCE SWEPTAIL – WORTH $12.8M - 1937 MERCEDES-BENZ 540K SPECIAL ROADSTER – WORTH $9.9M - 1957 FERRARI 500 TRC SPIDER – WORTH $7.8M - 1954 FERRARI 375 AMERICA VIGNALE CABRIOLET – WORTH $7.5M - 1955 PORSCHE 550RS – WORTH $3M #1. 1955 MERCEDES-BENZ SLR – WORTH $146M A remarkable amount was paid for the 1955 Mercedes-Benz SLR Uhlenhaut Coupe. Even if one adds up all of the costs associated with other cars in this ranking, they would still far fall short of matching what was spent on this vehicle – yet what an incredible investment it was! The SLR Uhlenhaut Coupe is one of Mercedes-Benz's most celebrated vehicles. Famed architect Rudolf Uhlenhaut designed one of only two cars modeled on the 300 SLR that can be raced. At 180 mph, the Uhlenhaut Coupe was among the fastest cars in history – not to mention its stunning aesthetic design. 
While its story is intriguing, and now being made available for purchase for the first time, its high price tag certainly makes this car worthwhile. #2. FERRARI 250GT0 – WORTH $55.8M The Ferrari 250 GTO is one of the world's most stunning and recognizable supercars. Its remarkable driving capabilities and acceleration earned it the attention of motorsport aficionados and celebrities around the world, making it one of the most expensive and sought-after automobiles available today. Hagerty's assessment tool values the 250 GTO at $55.8 million. A most expensive superstar vehicle owned by Ralph Lauren and Pink Floyd musician Nick Mason sold at secret bidding for $70 million in 2014. #3. BUGATTI TYPE 567sc ATLANTIC – WORTH $40M Bugatti is now renowned for its luxurious supercars with engines powerful enough to power a small town, but the manufacturer used a similar strategy back in the 1930s. This stunning Type 576SC Atalante can be customized with a root system supercharger upon purchase, making it one of the earliest 57SCs ever made. The compressor upped the car's capacity to 200, making it one of the fastest and most powerful automobiles available before World War Two began. This particular 57SC Atalante boasts several distinguishing features that set it apart from other models, adding to its allure. Bugatti's iconic Scintilla headlamps and scalloped back bumpers, which completely conceal the rear wheels, showcase their exquisite coachbuilding. As one of only 17 examples still in operation today, this car has a long and fascinating story that dates back to when it was first delivered in Paris in 1937. #4. ROLLS-ROYCE BOAT TAIL – WORTH $28M The Rolls-Royce Boat Tail, built on the Spectre chassis, is an extremely rare and customized sporty sedan that has undergone release features as well as personalization to meet the company's specifications. Its distinctive look was inspired by vintage sporting vessels and crafts. The Boat Tail stands out with its long, arching canopy, compact design, and distinctive “boat tail” back part. Musician Jay-Z is said to be its sole owner, and it reportedly costs around $28 million. #5. PAGANI ZONDA HP BARCHETTA – WORTH $17.8M Pagani Automobili's first automobile, the Zonda, was produced nearly fifty years ago. Given their Huayra model's success, production should have ended years ago; yet Pagani continues to craft numerous Zonda special editions today. Horatio Pagani imagined the Zonda HP Barchetta to look similar to an Italian Barchetta, or sailboat. Constructed entirely out of fiberglass, its compact dimensions and responsive feel bear testament to Pagani's inspiration. At its highest point, this 21-inch car measures just 21 inches (0.5 meters) tall with a tinted blue windscreen. Only four examples exist worldwide of this unique Zonda design – with the most recent sale fetching $17.6 million. Accelerating from zero to sixty mph (100 km/h) takes just 3.4 seconds, and its top speed is 220 miles per hour. #6. ROLLS-ROYCE SWEPTAIL – WORTH $12.8M The Rolls-Royce Sweptail wasn't created with one purpose in mind but rather out of necessity. It has captured the attention of auto enthusiasts worldwide; once considered one of the world's most expensive automobiles. This vehicle's greatest strength lies in the way it blends old and new: modern convenience combined with classic Rolls-Royce designs from the 1920s and 1930s. 
We're talking about an elegant blend of modernism and innovation that gives this remarkable car such a rare feel – although who owns it remains unknown! #7. 1937 MERCEDES-BENZ 540K SPECIAL ROADSTER – WORTH $9.9M One important consideration among customers is a rarity. Small-production run automobiles will always be difficult to locate, such as the three Mercedes-Benz 540 K Special Coupes believed to exist today – each estimated to have sold for $10 million or more. Furthermore, in 1937 King Mohammed Zahir Shah of Afghanistan owned this model; thus its potential appearance on modern Afghan roadways remains a mystery today. It remains only anecdotal how such a car might have looked back then on the newly constructed pavement. #8. 1957 FERRARI 500 TRC SPIDER – WORTH $7.8M The 1957 Ferrari 500 TRC Spider is the premier racer on this list, having competed in the 1957 24hrs of Le Mans and earning 12 class wins between 1958 and 1959. Additionally, this vehicle finished on top 18 times between 1957 and 1963 – an absolute must if you wish to compete at Le Mans Classic or Mille Miglia Storica events. The 500 TRC Spider featured one of Ferrari's last four-cylinder motors, rather than their traditional V8 or V12 options. With 190 horsepower, this engine could reach a top speed of 153 mph – an achievement only available to industrialist riders. #9. 1954 FERRARI 375 AMERICA VIGNALE CABRIOLET – WORTH $7.5M Ferrari wasn't the same company it is today in the 1950s. Back then, Italian designers placed much more focus on great riders – and this can be seen with their iconic Ferrari 375 America model. Ferrari only produced 10 375 Americas, replacing the 342 USA. A monophonic Lampredi motor from Ferrari's 375 MM racer had been used in its replacement; two additional cars that had originally been 250 Europas were later transformed at Ferrari's factory. This automobile is unique in that it was the only model with coachbuilder work done by Vignale and a cabriolet body type, although purchasers had several upgrade options from Pinin Farina or Vignale to choose from. Enzo Ferrari himself offered it directly to Bianca Colizzi – daughter of director Giuseppe Colizzi – with identical numbers on all its chassis components (rear wheel, motor, and gear), plus its genuine hard top that came with it). It also retains its original hard top which came with it originally. #10. 1955 PORSCHE 550RS – WORTH $3M Jerry Seinfeld is projected to reach a net worth of $950 million by 2022, making him the richest entertainer worldwide. He had invested much of his income into amassing an impressive collection of more than 150 unique vehicles, many of which were rare and valuable Porsches such as the $3 million 1955 Porsche 550 RS. The 550 RS was Porsche's inaugural race-specific vehicle and earned itself the moniker Giant Killer. As one of their finest and most sought-after models, only 90 examples were produced by Porsche – making it both iconic and highly sought after. For over a generation, cars have revolutionized movement and transit, becoming an indispensable part of modern civilization. Their effects can be felt across various aspects of human life, such as the economy, ecology, and society. Vehicles have made it simpler and more accessible for individuals to obtain jobs, education, and extracurricular pursuits. Unfortunately, they also come with drawbacks such as income disparity, transportation overcrowding, and environmental degradation. 
Due to evolving social needs, the automobile sector is also evolving quickly – with self-driving and electric cars at the top of its priority list. Future automobiles will be safer, more economical, and much greener – offering us the potential to continue revolutionizing transport. We must consider how cars affect society as we move toward this future and create regulations that promote justice and environmental sustainability. Automobiles boasting enhanced productivity capabilities; luxurious features and the most up-to-date security and technological innovations are classified as premium vehicles. Vehicles with longer wheelbases typically ride more comfortably than those with shorter spoked wheels; as there is more cushion between the front and rear tires when they encounter obstacles. As such; larger cars provide better control when cornering or stopping abruptly. A measure of a bank's cash reserves that takes into account its proportion of uncertainty debt instruments is known as the capital adequacy ratio (CAR). If you plan on keeping your automobile idle for a couple of weeks; experts recommend not running it too far each day. After those initial few weeks have elapsed; however; experts recommend taking steps to maintain it so that it starts up correctly and runs optimally. Aditi is an Industry Analyst at Enterprise Apps Today and specializes in statistical analysis, survey research and content writing services. She currently writes articles related to the "most expensive" category.
Cyber Secure with the Help of Artificial Intelligence Artificial Intelligence (AI) is one of the most widely misunderstood technologies in use today. While scenes from the Matrix make for good entertainment, the reality of AI is much more mundane and practical than Hollywood blockbusters. On its most fundamental level, artificial intelligence is simply a term that describes machines that can simulate the cognitive functions demonstrated by human intelligence – functions like language processing, planning for the future, and basic reasoning. Far from being self-aware entities driven by existential pursuits, AI is just another tool, like a hammer or a screwdriver. Not only are these tools much more pragmatic than commonly perceived, but they are also much less futuristic – indeed, they are already here. Widely used in everyday activities, AI enhancements can be found in search engines like Google, or video recommendations on Netflix and YouTube suggestions, and even voice-controlled assistants like Apple’s Siri or Amazon Alexa. Like any other tool, AI technologies are most effective when applied to the right kind of task. Under Siege: The Alarming Explosion of Cyber Crime It’s no secret that the threat of cyber-attacks has exponentially increased since the beginning of the COVID-19 pandemic. According to the FBI’s annual report on Internet Crime, reports of over 791,000 internet crime complaints during 2020 mean an increase of more than 69.4% from the previous year – with reported losses above $4.2 billion. Following high-profile attacks like the Colonial Pipeline hack, the ransomware assault on the Washington D.C. Police Department, and a severe breach at the U.S. State Department, it has become painfully evident that no organization, business, or institution is inherently safe. Indeed – some of the most favored targets for cybercriminals are companies that manage critical national infrastructure, like hospitals that save lives. These attacks have prompted the Department of Justice to launch an aggressive cybersecurity program to help resecure the nation’s digital profile. Read more about the DOJ’s new Cybersecurity Program Who Is Really at Risk? By processing or storing client data, your business faces a substantial threat from cybercrime. Contrary to popular belief, small businesses are in fact bearing the brunt of the recent explosion in ransomware attacks. In recent testimony to the U.S. Senate on the rise in cybercrime, Chair of the Senate Judiciary Committee Dick Durbin testified that “Though any person or entity can be targeted in a ransomware attack, it has been estimated that small businesses make up over half of the victims.” Not only are small businesses accounting for the majority of all ransomware victims, the damage to small businesses from ransomware attacks far exceeds that of their larger counterparts. Many small businesses are already operating on thin margins. With the average recovery time for these victims coming in at around 9 months, many small businesses are simply unable to overcome the financial devastation. Every business, whether it employs 10 people or 10,000, must take this threat seriously. It’s real, it’s becoming more frequent, and it’s becoming more destructive. Using AI to Solve Problems The task then becomes the challenge of keeping an organization secure amidst the commoditization and increased ease of digital crime. 
Cyber security experts are applying the incredible capabilities of AI integration to respond to a menace, optimizing proven defensive techniques to counter the escalating pressure effectively. Some of the most strategically practical applications of AI technologies are in these critical areas: - Threat Identification – Maintaining digital security requires constant work. Like a white blood cell in the body’s immune system, AI-enhanced security programs actively monitor a company’s digital anatomy to hunt for threats. These programs weaponize the Machine Learning (ML) faculties inherent to AI tech: not only to exhaustively search for the key signatures of known cyber threats, but also to improve its understanding of an organization’s operations over time – becoming more accurate and more sensitive to potentially harmful system process changes as it learns. - Vulnerability Management – In addition to the ability to quickly identify potential threats to system security, companies need to have the capacity to prioritize these threats effectively. The sheer amount of positive threat identifications that a company might face regularly is staggering – some companies could be facing hundreds of new attacks each day. Prioritizing these threats into manageable risk categories dramatically increases the effectiveness of cybersecurity defenses, ensuring that the most pressing risks are isolated and addressed before they cause issues. The extraordinarily rapid data process scrutiny powers needed to weigh volumes of digital pattern analyses against one another successfully is mind-boggling – and only achievable through the help of AI technology. - Network Security – Conventional network security models rely on the observation of two key network features: user/device traffic and operational topology. By integrating AI capabilities, organizations can more efficiently deploy their network traffic authentication defenses to determine whether specific sources are legitimate or if they pose a threat, isolating and removing unauthorized accessors before systems are breached. In addition, machine learning functions enable AI-powered network security applications to observe and learn the normal network shape over time, allowing it to recognize network patterns to improve its security abilities continuously. - User Authentication – Authenticating user and device access has become more challenging during the Covid-19 pandemic, as more and more businesses are implementing hybrid and remote working models. Companies need a security solution that operates on a framework of zero trust. Zero trust architectures do not grant automatic access to assets or user accounts based solely on their physical or network location. By relying on a principle of least privilege, these systems restrict user and device access by default, ensuring that multifaceted authentication procedures never get bypassed. AI augmentation provides the resources businesses need to implement and manage complex zero-trust frameworks successfully. - Systems Observability – Besides monitoring network activities and system processes, AI-enhanced cyber security applications can monitor an organization’s physical IT infrastructure for key operational metrics. These physical systems are crucial in maintaining digital security from cooling, power, internal temperature, and hardware backups. 
With AI assistance, companies gain valuable insights into the effectiveness of their IT infrastructure components and deploy critical hardware maintenance preemptively before device failures can weaken digital defense. AI is a complex tool that generates real benefit in the war against cybercrime and is quickly becoming an absolute necessity as criminal actors begin to deploy their own AI enhancements in their attacks on businesses. The imperative for companies to respond appropriately to protect their interests, and the interests of their clients, has never been greater. When working in tandem with a team of proven cybersecurity experts and AI applications, businesses can minimize their exposure to cyber threats, outmaneuver would-be hackers, and protect the future of their operations. Learn more about how AI can help protect your business and what you can do to minimize the chance of a successful cyber attack.
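As one illustration of the "threat identification" idea above – learning what normal activity looks like and flagging deviations – the sketch below uses scikit-learn's IsolationForest on synthetic network-session features. The algorithm choice, feature set, and numbers are my own illustrative assumptions; the article does not prescribe a specific technique, and real deployments would train on far richer telemetry.

```python
# Illustrative anomaly detection over synthetic "network session" features:
# bytes transferred, session duration, and failed-login count per session.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline ("normal") sessions the model learns from.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # bytes transferred
    rng.normal(30, 10, 500),         # session length in seconds
    rng.poisson(0.2, 500),           # failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New activity to score: one typical session and one suspicious spike.
new_sessions = np.array([
    [5_200, 28, 0],        # looks like baseline traffic
    [250_000, 400, 12],    # huge transfer, long session, many failed logins
])
print(model.predict(new_sessions))  # 1 = treated as normal, -1 = flagged as anomalous
```

In practice, an alert like the flagged session would feed the vulnerability-management and prioritization steps described above rather than trigger an automatic block on its own.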
It’s easy to take for granted the widespread availability of high-powered computers in today’s society. However, not every household has the means to invest in a state-of-the-art desktop. According to the U.S. Census Bureau, approximately 75 percent of American households had a computer in 2011. That leaves a quarter of the U.S. population without access to net-based tools in their home, significantly impairing their ability to do things as basic as applying for a job or doing homework online. Until there is a fully functioning computer in every North American household, many community members will continue to rely on the services provided by local public computer labs. Community centers, libraries and other public institutions remain pivotal in bringing computational and network resources to underserved areas. For example, the impoverished citizens of Mississippi’s Madison County have relied on a local organization’s computer lab to learn vital skills for navigating the modern world. Without these services, approximately 3,500 local families would be unable to gain access to computing resources, inhibiting their ability to finish school and find means of employment. Madison Countians Allied Against Poverty offer weekly classes to teach interested individuals how to operate a computer, including laying out the basics for effective network navigation. These services are vital for both adults looking to reenter the workforce and children trying to complete schoolwork and prepare for the digitized future. Providing shelter to crime-stricken areas In poverty-stricken areas, public computer labs provide an additional, all-important service beyond offering computing and network resources: providing a safe place for kids to hang out. For children living in neighborhoods with high crime rates, simply walking the streets could put them in the line of fire. A high-quality computer lab such as that offered by the Kansas City Public Library can provide these kids with a safe haven to go to after school away from the violence of the neighborhood. The downside to a publicly available computer lab is that they are prone to numerous malware breaches and system errors. Community users are unlikely to take proper precautions when using a workstation that is owned by another entity. In addition, they may make changes to the computer’s registry, causing performance issues if not rendering the system unusable entirely. To protect these networked environments from such problems, administrators can employ system restore utilities and provide coverage for each workstation. If one machine becomes infected, the rest can fall victim as well in short order. With system restore and recovery solutions in place, however, computers can be configured to revert back to optimized settings after each session. This way, any traces of malware or disruptive system changes can be alleviated with ease. Check out some more tips on how you can keep malware off your computers.
What are AI models?

AI models, or artificial intelligence models, are programs that detect specific patterns using a collection of data sets. An AI model is a representation of a system that can receive data inputs and draw conclusions – or take actions – based on those inputs. Once trained, an AI model can be used to make predictions or act on data that it has not previously observed. AI models can be used for a variety of activities, from image and video recognition to natural language processing (NLP), anomaly detection, recommender systems, predictive modeling and forecasting, and robotics and control systems.

What are ML or DL models?

ML (Machine Learning) and DL (Deep Learning) models describe the use of complex algorithms and techniques to process and analyze data in order to produce predictions or decisions in real time.

ML models: ML models employ learning algorithms that draw conclusions or predictions from past data. This comprises methods like decision trees, random forests, gradient boosting, and linear and logistic regression. HPE offers a variety of machine learning (ML) tools and technologies that may be used to build and deploy ML models widely.

Deep learning (DL) models: A subset of machine learning (ML) models that uses deep neural networks to learn from large amounts of data. DL models are frequently used for image and audio recognition, natural language processing, and predictive analytics, since they are built to handle complex and unstructured data. TensorFlow, PyTorch, and Caffe are just a few of the deep learning (DL) tools and technologies supported by HPE that can be used to create and deploy DL models.

Both ML and DL models are utilized to address a variety of business issues, including fraud detection, customer churn analysis, predictive maintenance, and recommendation systems. Organizations can use these models to gain fresh insights from their data.

Differences between AI, ML, and DL

AI (Artificial Intelligence)
- AI covers a wide range of tools and methods that replicate human intelligence in machines.
- Artificial intelligence can be applied to a wide range of data types, including structured, unstructured, and semi-structured data.
- Given that they can use a variety of different methodologies and algorithms, AI systems can be challenging to interpret and understand.
- As AI systems sometimes entail more sophisticated algorithms and processing, they can be slower and less efficient than ML and DL systems.
- AI can be applied to a wide range of applications, including natural language processing, computer vision, robotics, and decision-making systems.
- AI systems can be fully autonomous or require some level of human intervention.
- It can require a large team of professionals to create and manage AI systems, as they can be quite complicated.
- Given that they frequently include complicated algorithms and processing, AI systems can be challenging to scale.
- As AI systems frequently use fixed methods and processing, they might be less flexible than ML and DL systems.
- The need for substantial volumes of data to train properly is one drawback shared by AI, ML, and DL.

ML (Machine Learning)
- Machine learning is a subset of AI that involves teaching machines to learn from data and make predictions or judgments based on that data. ML techniques can be employed for applications like image identification, natural language processing, and anomaly detection.
- For ML to learn from and make predictions or judgments, it needs labeled training data. - As ML models rely on statistical models and algorithms, they can be easier to comprehend. - Due to their reliance on statistical models and algorithms, ML systems have the potential to be quicker and more effective than AI systems. - Many of the same applications as AI may be used for ML, but with a focus on data-driven learning. - ML systems are created to automatically learn from data with little assistance from humans. - ML systems can be less complex than AI systems since they rely on statistical models and algorithms. - As ML systems rely on statistical models and algorithms that can be taught on big datasets, they can be more scalable than AI systems. - As ML systems can learn from fresh data and modify their predictions or choices, they may be more flexible and adaptable than AI systems. - The quality of the data can also have an impact on the accuracy and robustness of the ML model and collecting and labeling data can be time-consuming and expensive. DL (Deep Learning) - DL is a specialized subset of ML that mimics how the human brain functions using artificial neural networks. Image and speech recognition are two examples of complex subjects that DL is exceptionally effective at solving. - To efficiently train deep neural networks, DL requires vast volumes of labeled data. - DL models are sometimes regarded as "black boxes" because they include several layers of neurons that might be difficult to read and comprehend. - As deep neural networks are trained using specialized hardware and parallel computing, DL systems have the potential to be the fastest and most effective out of the three methods. - DL is particularly well-suited for applications requiring complex pattern recognition, such as image and audio recognition, as well as natural language processing. - Some human interaction is required in DL systems, such as determining the design and hyperparameters of the neural network. - DL systems can be the most complex since they involve many layers of neurons and require specialized hardware and software to train deep neural networks. - DL systems can be the most scalable since they use specialized hardware and parallel processing to train deep neural networks. - Because of its capacity to learn from vast volumes of data and adjust to new circumstances and tasks, DL systems have the potential to be the most adaptive. - Deep neural network training in DL can be computationally complex and need specialized gear and software, which can be costly and restrict the technology's accessibility. How do AI models work? AI models operate by receiving large data inputs and by generating technical approaches to discover, trends and patterns that are pre-existing in the data set provided to the program. Since the model is developed on a program that runs on large data sets, it helps the algorithms to find and understand the correlation in patterns and trends that can be used to forecast or formulate strategies based on previously unknown data inputs. The intelligent and logical way of decision-making that mimics the inputs of the available data is called AI modeling. Simply described, AI modeling is the development of a decision-making process that consists of three fundamental steps: - Modeling: The first stage is to develop an artificial intelligence model, which employs a complicated algorithm or layers of algorithms to analyze data and make judgments based on that data. 
A good AI model can serve as a stand-in for human expertise. - AI model training: The AI model must be trained in the second stage. Training often entails running huge quantities of data through the AI model in recurrent test loops and inspecting the results to confirm the accuracy and that the model is performing as anticipated and required. To understand this method we must also understand the difference between supervised and unsupervised learning; 1. Supervised learning refers to classified data sets that are labeled into correct output, meaning the data provided have pre-existing relations between input data, the model then makes use of this labeled data to discover the connections and trends between the input data and the desired output. 2. Unsupervised learning is a sort of machine learning in which the model is not given access to labeled data. Instead, the model must independently identify the connections and trends in the data. - Inference: Inference is the third step. This stage involves deploying the AI model into its actual use case in real-life scenarios, where it regularly draws logical inferences from the information at hand. After being trained, an AI model can be utilized to make forecasts or perform actions based on fresh, unforeseen data inputs. In essence AI models operate by processing input data, mining it using algorithms and statistical techniques to uncover patterns and correlations, and then using what they have discovered to anticipate or act upon subsequent data inputs. How do you scale AI/ML models across GPU, compute, people, and data? Scaling AI/ML models across GPU, compute, people, and data requires a combination of technology, infrastructure, and expertise. GPU and Compute: High-performance computing solutions, including GPU-accelerated computing platforms and cloud-based services can be leveraged to scale AI/ML models. These solutions enable organizations to run complex and demanding AI/ML algorithms efficiently, without sacrificing performance. - People: The scaling process for AI and ML depends heavily on people. To design, develop, and implement AI/ML models at scale, organizations need to assemble a team of highly qualified AI/ML specialists. Additionally, it's critical to grasp the organization’s AI/ML priorities and goals, as well as the abilities and resources needed to carry them out. - Data: Organizations need to have a well-designed data architecture to support the scalability of AI/ML models because data is the lifeblood of these models. To do this, businesses need a solid data management strategy that enables them to store, handle, and analyze massive volumes of data in real-time. Organizations must also make sure that their data is reliable, accurate, and secure. By leveraging these capabilities, organizations can drive the growth and success of their AI/ML initiatives and stay ahead of the competition in the digital age. How do you build and train AI models? - To build and train AI models, we first need to define the purpose and choose the model's objectives. The remaining steps will be guided by the purpose a model is meant to serve. - Work with a subject-matter expert to assess the data's quality. With a thorough grasp of the data gathered, the data inputs must be accurate and devoid of errors. This information is going to be utilized to train the model. These data should be accurate and consistent, and they need to be pertinent to the purpose the AI is meant to serve. 
- Select the ideal AI algorithm or model design like Decision trees, support vector machines, and other popular techniques that are used to train AI models. - Utilize the cleaned and prepared data to train the model. This usually entails putting the input into the selected algorithm and employing a technique called backpropagation to tweak the model's settings and boost efficiency. - Check the correctness of the trained model and make any required corrections. This can entail putting the model to the test on a different set of data and assessing how well it predicts actual results. - Once the model is performing to the appropriate degree of accuracy, fine-tune it and repeat the training procedure. This may entail modifying the model's hyperparameters, such as the learning rate, or employing techniques such as regularization to prevent overfitting. - In general, creating and training an AI model involves a mix of expertise in the relevant field, familiarity with machine learning algorithms and techniques, and an intention to experiment and repeat to enhance the model's performance. What is data bias in AI models? The likelihood of systematic and unfair bias in the data used to train AI models is referred to as data bias in AI models. If the data used to train the model contains biased inputs or is not representative of the sample or audience to whom it will be applied, the predictions may become inaccurate or unjust. As a result, the model can treat certain persons unfavorably and discriminatorily. To eliminate data bias, it is vital to have a broad and representative dataset while training AI models and for the ability of the AI model to share learnings from different data sets to reduce bias and increase the accuracy of the model. How to maintain data privacy in AI/ML models In AI/ML models, maintaining data privacy is a crucial concern, and there are a variety of technologies and best practices to make sure of that. Data encryption: Encrypting data is a fundamental step in ensuring data privacy in AI/ML models. To safeguard sensitive data from unwanted access, businesses need encryption solutions for data both in transit and at rest. Data anonymization: The practice of eliminating personally identifiable information (PII) from data sets is known as data anonymization. Businesses need solutions that protect customer information while still giving AI/ML models access to the information they require to work. Access control: Businesses need access control solutions that enable enterprises to regulate accessibility to sensitive data, ensuring that only authorized people may access it. Compliance: Keeping data private in AI/ML models requires careful consideration of compliance. Businesses need products that follow compliance best practices to ensure that businesses adhere to data privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Auditing and logging solutions let businesses keep track of who has access to sensitive data, ensuring that any potential breaches are swiftly found and fixed. Organizations can safeguard the security of sensitive data and keep the confidence of their customers and stakeholders by leveraging data privacy compliant solutions and best practices. How to increase accuracy in AI/ML models? Increasing accuracy in AI/ML models is a critical concern, and there are several strategies and best practices that can be used to achieve this goal. 
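To illustrate the build-and-train steps listed above – prepare data, pick an algorithm, train, then check accuracy on held-out data – here is a minimal supervised-learning sketch using scikit-learn's built-in digits dataset. The choice of a random-forest classifier and an 80/20 split are arbitrary illustrative defaults, not a recommendation or HPE-specific tooling.

```python
# Minimal supervised training loop: split labeled data, fit a model,
# and validate it on data the model has not seen during training.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                   # labeled examples (supervised learning)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y  # hold out data for validation
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                           # training step

predictions = model.predict(X_test)                   # inference on unseen data
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```

Evaluating on a held-out split of representative data is also one simple guard against the data bias problem described above, since skew in the training set often shows up as a gap between training and held-out performance.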
Data Quality: Data quality is a critical factor in the accuracy of AI/ML models. Solutions for data quality management can ensure that data sets are complete, accurate, and consistent. This allows AI/ML models to learn from high-quality data and make more accurate predictions. Data quality management includes:
- Data cleansing: the process of removing inconsistencies, duplicates, and errors from data sets.
- Data standardization: the process of converting data into a common format.
- Data enrichment: the process of adding additional data to a data set.
- Data validation: the process of checking data for accuracy and completeness.
- Data governance: the process of managing data quality, security, and privacy.

Feature engineering: Feature engineering is the process of turning raw data into features that AI/ML models can use. Data visualization, feature selection, dimensionality reduction, feature scaling, and feature extraction are all effective feature engineering approaches that may dramatically increase model accuracy.

Model selection: Choosing the best AI/ML model for a specific task is essential for improving accuracy. There are several models to pick from, such as decision trees, logistic regression, linear regression, and deep learning models. It is crucial to pick a model that is suitable for the issue at hand and capable of high accuracy.

Hyperparameter tuning: Hyperparameters are settings chosen before an AI/ML model's training. The accuracy of the model can be significantly impacted by the selection of hyperparameters. Organizations can automatically tune hyperparameters using HPE's hyperparameter tuning solutions, improving model accuracy.

Model regularization: Model regularization is the process of reducing overfitting in AI/ML models. Overfitting is a condition in which a model fits the training data too closely and, because it is too complicated, performs poorly on fresh data. L1 and L2 regularization are two regularization methods that can help reduce overfitting and improve model accuracy.

Model validation: Organizations can evaluate the correctness of their models and spot any potential problems with the help of model validation tools and best practices.

How do you deploy AI models?

There are many ways to deploy AI models, and the specific approach will depend on the type of model you are working with and the goals you want to achieve. Some common strategies for deploying AI models include:
- Hosting the model on a dedicated server or cloud platform, where it can be accessed via an API or other interface. This approach is often used when the model needs to be available for real-time predictions or inferences.
- Embedding the model directly into a device or application, which allows it to make predictions or inferences on local data without the need for a network connection. This is a common approach for deploying models on edge devices or in applications where low latency is important.
- Packaging the model into a container, such as a Docker container, which allows it to be easily deployed and run in a variety of environments. This approach can be useful for deploying models in a consistent and reproducible way.

Regardless of the method, it is crucial to thoroughly test and verify the model before deploying it to make sure it is operating as intended.

HPE and AI Models

HPE understands artificial intelligence (AI) technology.
With a proven, practical strategy, verified solutions and partners, AI-optimized infrastructures, and ML Ops solutions, organizations can reduce complexity and realize the value of data faster, giving them a competitive advantage.
- The HPE Machine Learning Development System is a turnkey system that combines high-performance computers, accelerators, and model training and development software in an optimized AI infrastructure, supported by professional installation and support services. It is a scaled-up AI turnkey solution for model development.
- HPE Swarm Learning is a decentralized, privacy-preserving framework for performing machine learning model training at the data source. It addresses concerns about data privacy, data ownership, and efficiency by keeping the data local and sharing only the learnings, which leads to better models with less bias. HPE Swarm Learning also uses an applied blockchain to securely enroll members and elect the leader in a decentralized manner, giving the swarm network resiliency and security.
- Determined AI, an open-source machine learning training platform that HPE acquired in June 2021, serves as the basis for the HPE Machine Learning Development Environment. Model creators can begin training their models on the open-source version of Determined AI to execute, scale, and share experiments with ease.
- The HPE GreenLake platform offers an enterprise-grade ML cloud service that enables developers and data scientists to quickly design, train, and deploy ML models – from pilot to production, at any scale – bringing the benefits of ML and data science to your organization.
- HPE Ezmeral ML Ops gives enterprises DevOps-like speed and agility at every stage of the ML lifecycle by standardizing procedures and offering pre-packaged tools to design, train, deploy, and monitor machine learning workflows.
- HPE SmartSIM helps identify plagiarism in written content. The application employs machine learning and natural language processing to evaluate text and find similarities between it and other content already published online or stored in a database. It can be used to verify the authenticity of academic papers, research papers, and other written materials, serving as a tool to avoid plagiarism and encourage original material.
These offerings help with the following:
- Pre-configured, fully installed, and performant out of the box
- Seamless scalability - distributed training, hyperparameter optimization
- Manageability and observability
- Trusted vendor and enterprise-level support and services
- Flexible and heterogeneous architecture
- Component architecture
- Software and hardware support
- Service and support
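Returning to the deployment options described earlier, the "host the model behind an API" pattern can be sketched in a few lines. This is a hypothetical illustration only – Flask, the pickled model file, and the endpoint shape are assumptions for the example, not part of HPE's products.

```python
# Hypothetical sketch of serving a trained model behind a small HTTP API.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("model.pkl", "rb") as f:        # assumed artifact saved after training
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]          # e.g. [[0.2, 1.4, ...]]
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaging the same script and model file into a container image is what makes the containerized deployment option consistent and reproducible across environments.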
Information Technology (IT) is sometimes thought of as "the great leveller", but a recent Labour Market Report by the Canadian Software Human Resource Council (SHRC) contradicts that notion. Its analysis of Canadian IT worker employment over the past four years reveals a great imbalance. The April 2005 report depicts a predominantly male (78 per cent) IT workforce, with three-quarters hailing from Central Canada (Ontario and Quebec).
IBM Canada, in some small way, is trying to rectify this imbalance by reaching out to a segment of the population left behind by the IT juggernaut – Aboriginal communities. As part of this initiative, on Wednesday the company announced several strategies and programs. One of these is a series of three-day camps for Aboriginal youth appropriately named IGNITE – short for IGniting Interest in Technology and Engineering.
"The goals are to create a positive experience for Aboriginal boys and girls at a critical point in their educational evolution," said John Longbottom, the IBM Canada executive in charge of Aboriginal strategy. "We want to get them excited about maths and sciences so they [are] positioned for IT careers over the longer term."
Longbottom hopes these camps will not only encourage Aboriginal youth to pursue technology-related careers but – even more fundamentally – convince them to stay in school. IGNITE is patterned on another IBM camp dubbed EXITE (Exploring Interests in Technology and Engineering) that encourages girls to pursue IT careers. According to IBM, IGNITE builds on elements of EXITE but adds cultural components to make it more appropriate to an Aboriginal demographic – elements such as the presence of an elder, as well as mentors from Aboriginal communities. Camp events will be conducted by a team of IBM coaches and external Aboriginal groups.
Longbottom described some camp highlights. They include exposure to robotics, Web design workshops, and opportunities to take a PC apart and then put it back together. In addition, the camp will instruct participants on how to give presentations using tools like PowerPoint. They will do a final presentation on their career aspirations. The camps train Aboriginal youth on the use of technology to deal with real-world issues, said Longbottom.
Many in the Aboriginal community are happy with IBM Canada's initiatives. One of them is John Bernard, president and CEO of Ottawa-based Donna Cona Inc., Canada's largest Aboriginal-owned technology company. "Our company believes education is critical for Aboriginal youth to develop personally and professionally," said Bernard, who is also a member of the Madawaska First Nation community in New Brunswick. Donna Cona, he said, is always looking for talented Aboriginal men and women to fill various technical and business roles but finds it difficult to locate individuals with the required IT skills and experience. He said that in many cases the steep cost of education is an obstacle for Aboriginal youth wanting to pursue a career in technology, but added that Donna Cona offers several scholarships to help students overcome that barrier. Donna Cona and other partners are helping to select appropriate participants for the IBM camps – individuals who would most benefit from the training offered.
IGNITE camps will be held in Edmonton from August 10 to 12 and in Vancouver from August 23 to 25. The number of participants will be limited to 35 in order to make the camps more intimate.
Longbottom hopes to offer six more camps later this year with an initial focus on areas with a large Aboriginal population. Eventually camps will be held in the east in cities like Ottawa, Halifax and Toronto.
Universal Serial Bus (USB) is an industry standard developed in the mid-1990s that defines the cables, connectors and communications protocols used in a bus for connection, communication, and power supply between computers and electronic devices. USB was designed to standardize the connection of computer peripherals (including keyboards, pointing devices, digital cameras, printers, portable media players, disk drives and network adapters) to personal computers, both to communicate and to supply electric power. It has become commonplace on other devices, such as smartphones, PDAs and video game consoles. USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports, as well as separate power chargers for portable devices.
How Were the First Mice Launched?
In the late 1970s and early 1980s, several companies were building devices that allowed for better use of computers with graphical user interfaces. However, manufacturing costs were prohibitive. In 1983, Microsoft launched its first mouse. Its updated MS-DOS application Microsoft Word was now compatible with mouse technology. That version of Word, bundled with a mouse, tutorial and Notepad, cost $195. That same year, Apple released a relatively inexpensive mouse to go along with its Macintosh and Apple II desktop computer lines. Other top manufacturers, including Atari and Commodore, followed suit.
What Did the First Mice Look Like?
The first Microsoft mouse was rather clunky and had two green buttons, not on the top but on the front of the device. Microsoft was so concerned about people not understanding how to use the mouse that it came with an instruction manual 120 pages long. The Apple mouse had one button on the top and a steel rollerball, and could be used as either a pointer or a joystick depending on the application.
How Has the Mouse Changed?
If you are old enough to remember early generations of mice, you'll recall different button configurations, the migration from the metal ball to a rubber one (both of which often picked up lint from your mousepad and needed to be cleaned) and gradual improvements in ergonomic design. The number of buttons has stayed fairly constant, based on the type of computer you're using, though some specially designed mice have multiple buttons. In recent years, the wireless mouse has made it easier to take the technology with you, with one less cord to worry about.
What Is in Store for the Mouse in the Future?
Mouse design continues to evolve. Today, there are mouse models that feature a touchpad and remote control, a vertical conical design, features aimed at 3D designers, or wearable functionality. Some experts suggest that with advances in automation and virtual reality, the mouse and the keyboard could soon be a thing of the past.
Are We Ready for Changes in Technology?
Consider why that early mouse model came with a 120-page instruction manual. It was included because company officials feared that users would not be able to figure out how to use the new device. However, as design has improved, so too has the focus on user experience. New technologies are no longer to be feared. At Alvarez Technology Group, we help businesses leverage new technologies to solve problems and spur growth. For an initial, free consultation, contact us today.
一单词一卡片 – AI-powered vocabulary learning
An AI-powered tool for mastering English vocabulary.
Introduction to 一单词一卡片
一单词一卡片 is a language learning tool designed to help users learn and memorize English vocabulary effectively. It operates by breaking down complex words into their roots, prefixes, and suffixes, and then creating vivid, memorable images or stories based on these components. This method leverages mnemonic techniques to enhance retention and recall. For example, the word 'incredible' might be broken down into 'in-' (not), 'cred' (believe), and '-ible' (able to be), and then an imaginative story or image is created to link these components together.
Main Functions of 一单词一卡片
Word Root Breakdown: Breaking down the word 'unbelievable' into 'un-' (not), 'believe' (trust), and '-able' (able to be). A user encounters the word 'unbelievable' and wants to understand its components. The tool breaks the word down and explains each part, aiding comprehension and memory.
Mnemonic Image Creation: Creating an image of a flying pig for the word 'impossible' to signify something that cannot happen. A student is struggling to remember the meaning of 'impossible.' The tool generates a vivid image of a flying pig, making it easier for the student to recall the word's meaning.
Contextual Example Sentences: Using the word 'extraordinary' in a sentence like 'The magician's extraordinary tricks left the audience in awe.' A learner needs to see how 'extraordinary' is used in context. The tool provides a sentence that clearly demonstrates the word's meaning and usage.
Ideal Users of 一单词一卡片
- Individuals who are learning English as a second language. They benefit from the tool's ability to break down complex words and create memorable images or stories, making vocabulary acquisition easier and more enjoyable.
- Students preparing for exams, such as students studying for standardized tests like the TOEFL, IELTS, or SAT. These users benefit from the tool's mnemonic techniques, which help them remember the large amounts of vocabulary required for these exams.
How to Use 一单词一卡片
- Visit aichatonline.org for a free trial, with no login and no need for ChatGPT Plus.
- Enter the English word you want to learn more about into the input field.
- Review the detailed breakdown of the word, including its roots and meanings.
- Read the provided memory story to help reinforce the word's meaning and usage.
- Utilize the generated image and example sentence to further cement your understanding of the word.
Frequently Asked Questions about 一单词一卡片
What is 一单词一卡片?
一单词一卡片 is a tool designed to help users learn English vocabulary by breaking down words into their roots and creating memorable stories to aid in retention.
How does 一单词一卡片 help with vocabulary retention?
It uses a combination of visual imagery, word root analysis, and engaging stories to make learning new words more interactive and memorable.
Do I need to create an account to use 一单词一卡片?
No, you can start using 一单词一卡片 immediately by visiting aichatonline.org without needing to log in or subscribe to ChatGPT Plus.
Can 一单词一卡片 be used for exam preparation?
Yes, it is particularly useful for students preparing for exams as it helps in building a strong vocabulary foundation through detailed word analysis and retention techniques.
Is there a cost associated with using 一单词一卡片?
No, 一单词一卡片 offers a free trial, allowing users to experience its features without any initial cost.
Platform-as-a-service (PaaS) is a cloud-hosted service option in which a data center service provider hosts and manages the hardware and infrastructure on a customer's behalf and delivers that infrastructure – and sometimes applications – via the Internet.
Who uses platform-as-a-service?
PaaS is used by organizations that wish to conserve IT budget and minimize effort in situations where IT resources are constrained or where specific areas of IT expertise are lacking. It also allows organizations to deploy complex environments more quickly, as the service provider does the work of configuration, deployment and optimization.
What types of environments are hosted on PaaS?
A typical PaaS deployment is used for Web application development. The host provides developers with access to the development environment as well as staging and production servers, data storage and whatever else is needed for the upcoming development initiatives. Developers log in and do their development work in the host environment without any corresponding operational IT requirement.
The most widely used matrix-matrix multiplication routine is GEMM (GEneral Matrix Multiplication) from the BLAS (Basic Linear Algebra Subroutines) library. These days it can be found in machine learning, neural networks, and machine vision.
"The biggest performance challenge for matrix multiplication is size. To naïvely multiply two n by n matrices requires n³ floating point operations, and n² data movement. This becomes immediately unmanageable when n gets really large, which is typical in most big science HPC applications."
It is not surprising, then, that a great deal of effort has gone into optimizing GEMM. For really large matrices, data movement and cache issues will have a significant impact on performance. The standard approach with most HPC applications is to transform the input data into a packed format internally first and then perform the multiplication over smaller blocks, using highly optimized code and block sizes chosen to maximize cache and register usage. Packing in GEMM allows more data to fit into the caches, enabling contiguous, aligned, and predictable accesses.
Packing works well in most HPC situations, where the matrix sizes are generally large and the time spent packing the input matrices is small relative to the computational time. However, when matrix sizes tend to be smaller, as is common in some machine learning algorithms, the packing overhead starts to become significant. In fact, a GEMM implementation that does not pack the input matrices will outperform a conventional GEMM implementation that does. This is particularly true when multiple matrix operations involve the same input matrix.
Intel® Math Kernel Library 2017 (Intel® MKL 2017) includes new GEMM kernels that are optimized for various skewed matrix sizes. The new kernels take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and the capabilities of the latest generations of the Intel® Xeon Phi™ processors. GEMM optimally chooses at runtime whether to use the conventional kernels that pack data, or the new kernels without packing. The choice is based on characteristics of the matrices and the underlying processor's capabilities. As a result, deep learning applications that rely on GEMM will automatically benefit from these optimizations without needing any modification of the code.
The latest release of Intel® MKL 2017 also introduces another way to minimize the overhead for applications that rely on packed data. It provides new APIs that allow preserving the packed input matrices for use in multiple matrix multiplications that involve the same input matrix. This way the packing overhead can be amortized over multiple GEMM calls. These two approaches help to achieve high GEMM performance on multicore and many-core Intel® architectures, particularly for deep neural networks.
Intel® MKL 2017 includes all the standard BLAS, PBLAS, LAPACK and ScaLAPACK routines, along with specialized Deep Neural Network (DNN) functions, optimized for the latest Intel® architectures.
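To ground the n³/n² point in the quote above, here is an illustrative sketch of the textbook triple loop. It is not how production code should multiply matrices – real applications call an optimized GEMM, for example through NumPy, which delegates to a BLAS such as Intel MKL; the NumPy usage and matrix sizes here are assumptions for the example.

```python
# Illustrative only: the naive triple loop behind "n^3 operations, n^2 data movement".
import numpy as np

def naive_gemm(A, B):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(n):            # n * m * k multiply-adds in total
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

A = np.random.rand(64, 64)
B = np.random.rand(64, 64)
assert np.allclose(naive_gemm(A, B), A @ B)   # A @ B takes the optimized BLAS path
```

Packing, blocking, and vectorization are exactly what separate this naive loop from the optimized kernels discussed in the article.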
We often need to think about security in the context of both physical security and digital security. A cyber attack is typically not a singular event. Oftentimes we see hackers using multiple levels of probing before they execute their attack. We are going to walk through each action that happens during a typical cyber attack and relate it to the physical security of your home.
The Drive-by Scan
If you are connected to the internet, you are susceptible to all sorts of drive-by attacks and drive-by scans. These scans are an initial phase of data gathering and investigation that is typically executed by a bot. In the physical realm, this is where a criminal may be driving by in their car looking at your house. They may be looking in the windows or at the door and seeing what shape they are in. They are looking to see if you might have a window open that they might be interested in. On the digital side of the analogy, a threat runs a scan against your public IPs or other public assets and learns which ports you have open, which services are listening on them, and whether those services present SSL certificates. The threat then starts to understand a little bit about your footprint from a digital perspective.
The Deeper Scan
At this point, they have gathered enough information about your environment and move on to a deeper investigation. We call this a deeper scan. To relate this to the physical side, this would be where somebody pulls into your driveway. They might even walk up to your door and act like they are dropping something off. Once the bot has done all of its work, a deeper-level scan is started by an actual human being. This is the first time the attacker attempts to access your environment. In the physical realm, this is where we would see somebody walk up to the door, perhaps test the door, or walk around the back of your house and notice that you had a door that was unlocked.
The First Attempt
Everybody, at some point in time, becomes comfortable with their current security configuration. When people become complacent, they start leaving a door or window unlocked when they leave. In the digital realm, this happens a lot. Not everybody realizes that a criminal hacker can gain access to an environment without anyone even knowing it. A hacker may gain access to your environment and not touch anything: they look around and then get back out, which tends to lead into the attack.
The First Attack
Over the course of the last week or two, the attackers have noticed that you have made zero changes. They have figured out the configuration you are comfortable with. They then go in and start the attack. We have seen this with a prospective client. A digital forensics firm discovered that the attack happened over the course of multiple months. After the first attack, the attackers were able to ransom the environment. The prospective client then had two choices: pay the ransom, or restore their data and servers.
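Looking back at the drive-by scan phase, the information an attacker gathers is simply which of your ports answer. A defender can collect the same view of their own hosts with a few lines of code; this is a hypothetical sketch for checking systems you own, and the address and port list are placeholder assumptions.

```python
# Illustrative sketch: check which TCP ports on one of *your own* hosts are reachable,
# the same information a drive-by scan would gather.
import socket

host = "203.0.113.10"                 # documentation/example address, not a real target
for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
        print(f"port {port}: {state}")
```

Running this kind of check against your own public footprint, before an attacker's bot does, is the digital equivalent of walking around the house and testing the doors yourself.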
These days access to the internet is a business requirement. Most businesses are selling their products and services on the internet, which sometimes requires customers to have access to critical assets such as applications and databases. The global growth of the internet has increased complexity and the potential risks to these assets. In some cases, one breach may put the organization's very existence at risk. French bank Société Générale made a frightening announcement in Jan. 2008 that it had uncovered a $7.14 billion US fraud – one of history's biggest. A trader at the futures desk misled investors in 2007 and 2008 through a "scheme of elaborate fictitious transactions."
In a security review, the reviewer will first determine the criticality of an asset and focus on how that asset is accessed by employees, the risks that unauthorized access by insiders or outsiders could pose to the organization, and whether access control has sufficient countermeasures in place to mitigate those risks. In other words, the security review will determine the risk level of access control for a particular asset and what controls should be in place based on that level of risk. At the same time, the business's first priority is to make information available with effective access control in place. Based on criticality, assets subject to security review present different levels of risk associated with access control. In other words, "not all data breaches are created equal."
Authorization control is utilized to determine access to network resources. Authentication verifies the identity of the user attempting to gain access to the system, and can be implemented through PKI, smart cards, USB devices, tokens and biometrics. Accounting keeps records of user activity, including what was used, when and for how long. Most applications and operating systems have strong auditing features in place to track the activities of a user, and accounting records can be very useful forensic evidence in case of a security breach. Authenticity covers the validity of information and of system users, addressing all forms of information misrepresentation, such as someone claiming your information as his or her own.
In system profiling, the reviewer determines the criticality of access control and the risk posed to the organization, where the risk is directly proportional to the criticality of the asset. Higher risk will require stronger controls, or perhaps multiple controls. The security review should determine that the controls in place are sufficient to prevent unauthorized access and to support non-repudiation of information and people.
In many ways a password is the weakest link in the access control of a network defense. The best passwords are at least 60 random characters – letters, numbers, and punctuation – which can be stored on a portable flash drive and retrieved when needed. All the passwords for the critical infrastructure should have these characteristics. One weak password in the critical infrastructure can become a launching pad to access other resources in the network.
Security tools can be used to collect user permissions in a spreadsheet, which can then be analyzed to gauge the effectiveness of authentication, authorization, accounting, and authenticity. This analysis will determine whether users have appropriate access based on need, role and the security policy of the organization.
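As a hypothetical illustration of that permissions analysis, a reviewer might compare an exported user-permission list against the access each role legitimately needs. The file name, column names, and role policy below are assumptions, not a prescribed toolset.

```python
# Hypothetical sketch: flag permissions that fall outside each role's legitimate needs.
import csv

allowed = {                        # role -> resources that role should have
    "teller": {"core-banking-ui"},
    "trader": {"futures-desk", "market-data"},
    "dba":    {"core-banking-db", "market-data"},
}

with open("user_permissions.csv", newline="") as f:   # columns: user, role, resource
    for row in csv.DictReader(f):
        if row["resource"] not in allowed.get(row["role"], set()):
            print(f"review: {row['user']} ({row['role']}) has access to {row['resource']}")
```

Entries flagged by such a check are candidates for the stronger controls and regular monitoring discussed next.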
Non-repudiation is the cornerstone of access control, assuring the validity of a transaction and of the user behind it. Regular monitoring and non-repudiation of users in all facets of access control might be necessary to mitigate the identity fraud associated with high-profile assets. Compliance only addresses the bare minimum required to satisfy a control; to measure the strength of a control protecting high-profile assets, a security reviewer should use due care and regularly evaluate the effectiveness of access control at all levels. It might not be an example of due diligence when some regulations fail to require data encryption.
Related reading: Rogue Trader Crushes Bank Societe Generale
So, allow me to expand on my prior blog entry — Architecture Frameworks Don't Make Architects — and answer the question, what does make an architect? To help structure my query, I went in search of a concrete specification that defines the difference between an engineer and an architect, and found this: http://www.pels.ca.gov/pubs/building_design_auth.pdf
STRUCTURAL ENGINEERS may design any building of any type.
CIVIL ENGINEERS may design any building of any type EXCEPT public schools and hospitals.
ARCHITECTS may design any building of any type EXCEPT the structural portion of a hospital.
Whoa! Stop the presses! In the State of California the STRUCTURAL ENGINEER has no limitations on what they may design, but the architect cannot design the structural portion of a hospital. Interesting, but this didn't really fit what I was looking for in an explanation. I found the following on Google Answers, and I believe it does an excellent job of qualifying the term, and title, architect across all vocations.
So, based on his explanation, the engineer is responsible for the design of an entity, typically to the exclusion of the environment in which the entity will exist. The architect is responsible for ensuring that the entity also serves, and does not negatively impact, the environment in which it will exist. Hence, the architect needs to fully understand and be a master engineer, but also have experienced how past engineering projects have impacted an environment once they were introduced. I guess a good question in an interview for an architect might be, "In a past system in which you led the systems engineering design, how have you had to change the design once the system was put into production?" I would follow up this question with, "What factors led you to know what changes were required?" My final question might be, "On your next system design, what factors would you anticipate in order to limit the need for changes once the system is placed into production?"
Thus, architects need to think strategically about the use of the end product, whereas engineers tend to focus solely on the end product. This raises the question, "Do engineers need to fully understand the big picture, or just focus on building the building?" This is an interesting area unto itself. For example, what if the architect didn't think of everything? What if an engineer is familiar with problems with using a particular material on a job that the architect recommends? To fall back on a structural building analogy, it seems that it could be very disruptive to have an engineer focused on whether the front door is placed optimally for access from the parking lot. At some point, the engineer has to trust the architecture and the architect has to trust the engineer. However, this is the focus of a whole other blog entry. To complete the thought, I am on the side of separation of concerns, but an engineer who is bright enough to consider the optimal placement situation should be a target for apprenticeship to become an architect.
Dial Peers in CUBE
A dial peer is a static routing table, mapping phone numbers to interfaces or IP addresses. A call leg is a logical connection between two routers or between a router and a VoIP endpoint. A dial peer is associated or matched to each call leg according to attributes that define a packet-switched network, such as the destination address. Voice-network dial peers are matched to call legs based on configured parameters, after which an outbound dial peer is provisioned toward an external component using that component's IP address. For more information, refer to the Dial Peer Configuration Guide.
Dial-peer matching can also be done based on the VRF ID associated with a particular interface. For more information, see Inbound Dial-Peer Matching Based on Multi-VRF.
In CUBE, dial peers can also be classified as LAN dial peers and WAN dial peers, based on the connecting entity from which CUBE sends or receives calls. A LAN dial peer is used to send or receive calls between CUBE and the Private Branch Exchange (PBX) – the system of telephone extensions within an enterprise. A WAN dial peer is used to send or receive calls between CUBE and the SIP trunk provider.
Software Defined Radio (SDR) for Hackers, Part 2: Building Our First SDR Radio (FM)
Updated: Dec 30, 2022
Welcome back, my aspiring RF hackers! In Part 1 of this series, we set up the HDSDR software and the RTL-SDR hardware to work together to create our software-defined radio. Now that we have those elements functioning, let's use our radio for some simple, basic signal capture, such as your local FM radio station.
The first step is to set up our sampling rate. Radio signals are continuous and analog. To use them, we need to take discrete samples of this continuous process. In other words, we need to capture pieces of the analog signal at a fixed time interval and feed them to our system. The continuous audio wave is broken into samples at that fixed time interval, and these samples can then be used to retrieve the original signal by sending them through a reconstruction filter.
Click on the bandwidth button in HDSDR. This opens a window to set the sampling rate. We can set both the input sampling rate and the output sampling rate. You can set the sampling rate at the level of your choice, but most audio engineers believe that the human ear cannot distinguish differences in sampling rates above 48 kHz (48,000 samples per second). Since we will be sampling FM radio, a sampling rate above 48 kHz will not make a distinguishable difference to the quality of the signal.
To listen to your local FM radio, click on the FM mode icon near the top of the panel. Now, go down to the Tune section and set the tuner to the frequency of your favorite local radio station. You can also use the slider to adjust the frequency of your captured signal. For the best reception, place the frequency slider in line with the peak of the signal. Once you have done so, you should now be able to hear your radio through your speakers. To adjust the volume, use the volume slider.
Congratulations! You have just built your first software defined radio! Enjoy your local FM radio station, experiment with the various buttons and sliders in HDSDR, and watch what happens.
Software Defined Radio is the leading edge of cybersecurity research. Now that we have completed our first software defined radio, look for future tutorials as we capture satellite signals, aircraft signals and many more. As we develop our skills, we will advance to transmitting, replaying, and decoding signals from a multitude of sources.
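If the idea of sampling at a fixed interval is still abstract, a short sketch may help. This is purely illustrative (NumPy, a 440 Hz tone, and a 10 ms window are assumptions) and is not part of the HDSDR setup itself.

```python
# Illustrative sketch of sampling: turn a continuous 440 Hz tone into discrete
# samples at a 48 kHz rate, the same idea HDSDR applies to the captured signal.
import numpy as np

sample_rate = 48_000                            # samples per second (48 kHz)
duration = 0.01                                 # seconds of signal to capture
t = np.arange(0, duration, 1 / sample_rate)     # fixed time interval between samples
signal = np.sin(2 * np.pi * 440 * t)            # stand-in for the continuous waveform
print(f"{len(signal)} samples taken over {duration * 1000:.0f} ms")
```

Each element of `signal` is one discrete sample; feeding such samples through a reconstruction filter recovers the original waveform, which is why a 48 kHz rate is ample for FM audio.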
Definition: Homogeneous Network
A homogeneous network is a type of computer network in which all the hardware and software components share the same architecture and platform. This uniformity ensures compatibility and simplifies management, maintenance, and troubleshooting.
Understanding Homogeneous Networks
A homogeneous network comprises devices and systems that use the same operating systems, network protocols, and applications. This uniformity simplifies various aspects of network management, including security, performance optimization, and software updates.
Key Features of Homogeneous Networks
- Uniformity: All network components share the same platform and architecture, ensuring seamless interoperability.
- Simplified Management: Network administrators deal with a consistent set of tools and procedures for managing the network.
- Improved Compatibility: With uniform hardware and software, compatibility issues are minimized.
- Streamlined Maintenance: Updates, patches, and troubleshooting procedures are uniform across the network.
- Cost Efficiency: Standardizing on a single platform can reduce training and operational costs.
Benefits of Homogeneous Networks
- Ease of Administration: Managing a homogeneous network is simpler because administrators only need to be familiar with one type of system.
- Enhanced Security: Uniformity in software and hardware reduces the risk of vulnerabilities due to inconsistent updates or incompatible security protocols.
- Reduced Complexity: Fewer variations in systems and devices reduce the overall complexity of the network.
- Scalability: Adding new devices is more straightforward when they are compatible with the existing network infrastructure.
- Consistent Performance: Uniform hardware and software ensure predictable and reliable performance across the network.
Uses of Homogeneous Networks
Homogeneous networks are particularly beneficial in environments where consistency and reliability are paramount. Typical use cases include:
- Corporate Environments: Many businesses standardize on a single platform for their desktops and servers to streamline support and reduce costs.
- Educational Institutions: Schools and universities often use homogeneous networks to simplify the management of large numbers of computers.
- Healthcare Systems: Hospitals and clinics benefit from homogeneous networks for reliable performance and stringent security requirements.
- Government Agencies: Standardization helps in maintaining high security and efficiency in government operations.
- Data Centers: Homogeneous networks in data centers facilitate easier management and maintenance of servers.
Features of Homogeneous Networks
- Centralized Management: Centralized control over network configurations, security policies, and software updates.
- Consistent User Experience: Users experience uniform interfaces and functionality across devices.
- Unified Support: Simplified support processes due to uniformity in hardware and software.
Implementing a Homogeneous Network
To implement a homogeneous network, organizations typically follow these steps:
- Assess Requirements: Determine the specific needs and goals of the organization.
- Choose a Platform: Select a uniform hardware and software platform that meets the requirements.
- Standardize Configurations: Develop standardized configurations for all devices on the network.
- Deploy and Integrate: Roll out the standardized systems and integrate them into the existing infrastructure.
- Train Staff: Ensure that IT staff are trained to manage and support the homogeneous network.
- Monitor and Maintain: Continuously monitor the network for performance and security, applying updates and patches uniformly.
Challenges of Homogeneous Networks
While homogeneous networks offer numerous benefits, they also come with certain challenges:
- Vendor Lock-In: Reliance on a single vendor can lead to higher costs and limited flexibility.
- Lack of Diversity: Uniform systems may lack the diversity needed to handle specific tasks or unique requirements.
- Scalability Issues: Scaling up may require significant investment if the chosen platform becomes obsolete or inadequate.
- Single Point of Failure: A homogeneous network can be more vulnerable to widespread issues if a single component fails.
Enhancing Homogeneous Networks with Virtualization
Virtualization technologies can enhance the flexibility and efficiency of homogeneous networks. By creating virtual machines (VMs) that run on a uniform hardware platform, organizations can optimize resource usage and improve scalability. Virtualization allows for better isolation of applications, easier disaster recovery, and more efficient resource allocation.
Future Trends in Homogeneous Networks
As technology evolves, homogeneous networks are likely to incorporate more advanced features and capabilities, such as:
- Automation: Increased use of automation tools for network management and maintenance.
- AI Integration: Artificial intelligence to optimize network performance and enhance security.
- Cloud Services: Greater integration with cloud services for scalability and flexibility.
- IoT Compatibility: Enhanced support for Internet of Things (IoT) devices within a homogeneous framework.
- Enhanced Security: Continuous improvements in security measures to protect against emerging threats.
Frequently Asked Questions Related to Homogeneous Network
What is a homogeneous network?
A homogeneous network is a type of computer network where all hardware and software components share the same architecture and platform, ensuring compatibility and simplifying management, maintenance, and troubleshooting.
What are the key features of a homogeneous network?
Key features include uniformity in hardware and software, simplified management, improved compatibility, streamlined maintenance, and cost efficiency. All components share the same platform and architecture, making the network easier to manage and maintain.
What are the benefits of using a homogeneous network?
Benefits include ease of administration, enhanced security, reduced complexity, scalability, and consistent performance. These networks are easier to manage, have fewer compatibility issues, and provide predictable performance.
Where are homogeneous networks commonly used?
Homogeneous networks are commonly used in corporate environments, educational institutions, healthcare systems, government agencies, and data centers. These settings benefit from the simplicity and reliability of uniform networks.
What are the challenges associated with homogeneous networks?
Challenges include potential vendor lock-in, lack of diversity in systems, scalability issues, and vulnerability to widespread issues if a single component fails. These networks can become expensive and less flexible over time.
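As a small, hypothetical illustration of the "standardize configurations" and "monitor and maintain" steps above, an administrator might script a uniformity check against the device inventory. The inventory data, platform name, and version values are placeholder assumptions.

```python
# Hypothetical sketch: verify every device in the inventory matches the standard build.
standard = {"os": "ExampleOS", "version": "12.4"}

inventory = [
    {"host": "sw-01", "os": "ExampleOS", "version": "12.4"},
    {"host": "sw-02", "os": "ExampleOS", "version": "12.1"},
]

for device in inventory:
    drift = {k: device[k] for k in standard if device[k] != standard[k]}
    if drift:
        print(f"{device['host']} deviates from the standard build: {drift}")
```

Catching drift like this early is what keeps a homogeneous network from quietly becoming a heterogeneous one.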
This technique creates a new list of integers in a column within a Datameer worksheet. For example, if one wanted to calculate percentiles from 0 through 99, it could be used to create a column that contains row records of 0 through 99.
The formula uses two functions. The RANGE function takes two arguments and builds a list of integers between the starting and ending value (inclusive). The EXPAND function takes the generated list and creates a row for each element in the list.
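As a conceptual analogue only – this is Python, not Datameer formula syntax – the same two-step idea looks like this: build the inclusive integer list, then turn each element into its own row.

```python
# Conceptual analogue of RANGE + EXPAND (not Datameer syntax).
import pandas as pd

values = list(range(0, 100))                  # ~ RANGE(0, 99): integers 0..99 inclusive
df = pd.DataFrame({"percentile": values})     # ~ EXPAND: one row per element of the list
print(len(df))                                # 100 rows, 0 through 99
```

In the worksheet, the column produced this way can then feed the percentile calculation described above.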
Sometimes you don't know what you have until it's gone. Consider, for example, net neutrality – the guiding principle of the internet since its beginning. The intent of net neutrality is to keep the internet free and open for everyone. This means that in the U.S., we can share and access information without interference. In 2015, in response to public pressure, the Federal Communications Commission (FCC) formally adopted net neutrality rules. Less than three years later, following the election of the new administration, the FCC voted to dismantle the net neutrality rules. Under the net neutrality protections passed in 2015, Internet Service Providers (ISPs) are not permitted to block or otherwise hinder access to content. Specifically, they cannot speed up, slow down or block any content, applications or websites that you may want to use or visit. Without the protections that net neutrality affords, it's possible that companies like AT&T, Comcast and Verizon will decide to block or slow down online content. One way they may do this is by establishing "fast lanes." Fast lanes are essentially a system of paid prioritization in which an ISP charges certain companies an additional fee to carry their content. For example, Verizon or Comcast could charge sites and services like YouTube or Netflix more in exchange for faster loading and streaming times. Online platforms that don't (or can't) pay would be relegated to "slow lanes." Consumers also could be charged additional fees to access certain types of streaming content, such as sports or music, on the fast lane. Of course, consumers will ultimately be on the hook for the cost of additional fees charged to platforms such as Netflix to use the fast lane.
Why we support net neutrality
Future Link is fully in favor of net neutrality. We believe that consumers who pay to be connected to the internet should not have the performance or cost of their service determined by the content they consume. Net neutrality is crucial for small business owners, startups and entrepreneurs. They rely on the open internet to reach their customers. Without the protections of net neutrality, ISPs could charge businesses more for the fast lane. If they can't afford it, they'll be stuck in the slow lane. To quote an analogy from a recent article at theguardian.com:
- Imagine they (private road owners) were allowed to charge companies different amounts to use them (roads), so that companies with enough cash could pay for exclusive use of fast lanes, leaving their smaller competitors consigned to lag behind on slow, badly maintained roads. Sounds outrageously anti-competitive, doesn't it?
Want to learn more about net neutrality and what you can do to help restore its protections?
The recent wave of cyber attacks is a clear reminder of the importance of proper information security. The latest Equifax breach didn't just leave over 140 million customers exposed and vulnerable; it also emphasized the need for better security in today's digital age. The risk of cyber attacks is even more severe in the realm of the Internet of Things, or IoT. Hackers can do so much more than carry out digital attacks on unsecured IoT devices: they can actually gain access to physical things. Fortunately, a security standard for IoT devices may come sooner than experts predicted.
Demand from Businesses
It is interesting to note that IoT is entering different industries faster than anticipated. The convenience and the wealth of possibilities offered by IoT made the concept very interesting in the eyes of businesses. Instead of controlling the climate inside a factory manually, for example, a combination of smart sensors and IoT can now take over the same task and do it more efficiently. This type of implementation also brings the possibility of full automation and so much more. Even top university programs are discussing IoT regularly, giving perspectives on how the Internet of Things will impact the global economy and how it has started changing the way enterprises design their products and services. Those who want to be ready for a world where everything will be connected should consider learning the latest business practices in a digital society, for example by pursuing an online executive MBA degree addressing the business impacts of the Internet of Things.
Unfortunately, the unsecured nature of most IoT devices is a challenge for businesses. An internet-enabled door lock that can be hacked is a big risk because it means hackers can access the physical location easily. A better security standard is required, and the demand from businesses should help the IoT world get there faster.
A Joint Effort
Michela Menting, the director of Digital Security Research, confirmed a move in the right direction. Top players in the IoT industry are taking security more seriously and approaching it in a more unified fashion. A standard for implementing IoT security will help boost confidence and trust in the technology as a whole. "Without such trust, IoT adoption may prove disastrous. And not just financially. Failure of critical devices, such as connected cars or medical appliances, could have life-threatening implications," said Menting.
Getting to a unified security standard will be challenging. First, the industry needs to agree on a reference architecture and other technical frameworks to support it. Only then can proper IoT security standards be established.
The Big Push
More research institutions, governments, corporations and non-profit entities are approaching the table and joining the big push. The latest research by ABI Research predicted a jump to a whopping 48 billion IoT devices by 2021. Having even a small portion of that market occupied by vulnerable devices would be disastrous. "Standards can and will play a significant role in enabling this trust. Security standards specifically can provide a foundation for building robust and trusted IoT devices, both from a digital and a physical security perspective."
Whether we can reach an agreement and have an IoT security standard soon remains to be seen. The entire industry can learn one important message from the Equifax breach: never wait until it is too late.
Both machine and deep learning are very hot topics. However, at heart, neither is particularly revolutionary. Machine learning is essentially about self-training data mining. Data mining algorithms and the tools (SPSS, SAS, Statistica and so on) to build them have been around for thirty or forty years. What used to happen was that what we now call a data scientist had a problem to solve, such as making better recommendations or identifying fraud. The data scientist would deploy various data mining algorithms against the problem set, train them (that is, feed them with lots of relevant data), determine which algorithm best suited the problem at hand, and then deploy that model. Best practice meant that, because things like buying patterns change over time, the data scientist would revisit the problem set on a periodic basis to ensure that this algorithm remained the best fit, and either update the algorithm or replace it, as appropriate.
What machine learning does is automate the process of improving the existing deployed algorithm. Best practice now means periodically checking that this is still the best algorithm, but it no longer requires checking that the algorithm is performing optimally. From a business perspective this is very important. It means that recommendation engines gradually get better over time. It means that false positives and false negatives (whether in fraud detection or other environments such as name and address matching) incrementally improve.
Deep learning, in effect, goes one step further, by automating the process of creating the best algorithm for the task at hand. This doesn't do anything directly for the business that machine learning does not: for example, it does not reduce the rate of false positives any more than a well-designed machine learning algorithm might. What it does do is remove the need to develop and test multiple algorithms to see which is the best fit against the problem dataset. In other words, to a large extent it removes the data scientist from the equation. Taken to its logical conclusion, this means that deep learning will ultimately automate the role of the data scientist out of existence.
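The "try several algorithms, keep the best" workflow described above is easy to picture in code. This is an illustrative sketch under stated assumptions – scikit-learn, a synthetic dataset, and three arbitrary candidate models – not a depiction of any particular vendor's tooling.

```python
# Illustrative sketch: evaluate several candidate algorithms and keep the best fit.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=1),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=1),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> deploy:", best)   # re-run periodically as buying patterns drift
```

Machine learning automates the retraining of whichever model is deployed; deep learning, as argued above, goes further by automating the comparison itself.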
Pocket-size your OCR with MyQ Roger
Optical Character Recognition, or OCR, is a big deal in digitization. It brings the potential that printed, analog information can be easily transformed into a digitized, electronic form. It's a huge potential – and an essential bridge step in the digitization of daily life. But then there is the reality. Turning an image into text just seems like a hugely complicated and technical process that is not very user-friendly. Even some of the terms used with OCR, such as "inputting the document," sound overly bureaucratic.
Does OCR fit your needs and situation?
OCR tends to get lumped in with big, complicated technology, and its fit is usually judged in three basic categories – size, languages, and organization. With size, people normally associate OCR capability with the office MFP, equipment that is large and difficult to move from room to room. With language, OCR tends to be English-centric, simply because that is the globe's business language and there are more potential users. Finally, there is organization – OCR by itself can just generate more documents for the stressed user to save and file somewhere.
Can OCR put your notetaking on steroids?
Instead of thinking about the technical descriptions of how OCR transforms physical information into an electronic form, what if we just think of OCR as notetaking on steroids? It's the kind of useful assist that a student would appreciate. But for this activity to be really useful, OCR has to meet three mandatory requirements. It has to be compact – fit in the student's pocket; it has to be a linguist – fluent in a couple of languages; and it has to be organized – creating searchable notes that are easier to find than the usual sticky notes or index cards.
Without OCR, the low-tech research process is well known. The student takes notes on cards, including the source details and the precise quote for the citation. These cards can be coded with keywords or colors or even a language symbol to help the student find and identify the important details. Finally, after these cards have been sorted and categorized, the writing begins, and the student can – slowly and inaccurately – type the details into their computer.
Put MyQ's OCR to work in pursuit of academic excellence
Then there is the OCR approach with MyQ Roger. While perusing publications, our student flips on the MyQ Roger scan function on their smartphone. This transforms the device (either iOS or Android) camera into a scanner that automatically evens out the wrinkles and edges on the document. The new scan is stored as a searchable PDF on the phone and relayed to the cloud with a single click.
It gets even better when the student realizes that they have source documents in multiple languages and need these citations in languages such as Czech, French, German, Polish, and Spanish. No problem: they just need to switch the OCR language choice within the action settings of the MyQ Roger app.
Then comes the technical work of filing and finding their new OCR scans. MyQ Roger enables scans to be saved to a cloud destination of their choice – Google Drive, OneDrive, or Dropbox. In addition, the scan is a searchable PDF, making it simple to search for specific keywords when it's time to sort through these academic notes and start writing.
Size and abilities do matter – in and out of academia
Size, abilities, and convenience matter when it comes to OCR – and MyQ Roger, the Smart Digital Workplace Assistant. As a SaaS (software as a service), MyQ Roger is 100% in the public cloud.
It uses your mobile device as its platform – making it possible to put your OCR scanner in your pocket and take it everywhere. With MyQ Roger, it’s time to downsize your scanner and upsize your abilities – in and out of the study hall.
<urn:uuid:e56ccb88-1959-4381-adb6-752dc97d5f80>
CC-MAIN-2024-38
https://www.myq-solution.com/cs/pocket-size-your-ocr-with-myq-roger
2024-09-13T16:05:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00593.warc.gz
en
0.946206
910
2.5625
3
Steel casting involves pouring molten steel into a mold. The castings are shaped to meet the specific design needs of the product being manufactured. Unlike iron, steel can be difficult to melt. Today, steel casting serves a diverse set of industries—including the agriculture, construction, automobile, aviation, gas and oil, mining, and marine industries. What Is Steel Casting Used For? Casting steel allows foundries to form complex shapes in fewer steps. The castings created are used in a variety of products, but especially in heavy-duty components. The castings are used in construction and mining equipment, railroad cars, pumps and valves, heavy trucks, and more. What Are the Different Types of Steel Casting? Did you know casting is one of the oldest manufacturing techniques? While advances in casting technology allow foundries to create unique and specialized casting methods today, the origin of this process can be traced back more than 7,000 years. Sand casting, plaster casting, die-casting, and investment casting can each be used to meet the manufacturing specifications for a wide range of products. - Sand casting - Plaster casting - Die-casting - Investment casting What Is the Difference Between Steel Casting and Cast Iron? Without talking too much about chemical composition and sounding like your high school chemistry teacher, there are several noticeable differences between steel casting and cast iron. Iron has better corrosion resistance than steel, for one, and is often cheaper than cast steel. Because it pours easily and doesn't shrink as much as steel, cast iron can be easier to work with, as well. Why Is Steel Casting Used? The chemical composition of steel allows for greater flexibility in design. When coupled with steel's larger weight range—some products can weigh hundreds of tons—casting steel allows for increased structural strength and dependability, making steel casting the ideal choice for a variety of larger projects. What Is It Like Working with Steel Casting? The casting process requires a wide-ranging set of skills as our team moves each project from concept to completion. We need the deepest thinkers and the toughest workers who can deliver the best products to our customers while keeping everyone safe. If you take pride in a job well done and aren't afraid of a challenge, you might be a good fit here. We are a hardworking, dedicated family that produces and delivers high-quality products. Harrison Steel is a place to sharpen and develop lifelong skills, and we offer competitive wages, ongoing training, and excellent benefits.
<urn:uuid:35ecddd5-4893-4e64-9ecb-04eb5043fa9e>
CC-MAIN-2024-38
https://www.hscast.com/steel-casting/
2024-09-16T02:13:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00393.warc.gz
en
0.943914
514
2.9375
3
Data lakes are a powerful tool for storing and analysing large amounts of data, but they can quickly turn into data swamps if they’re not managed properly. A data swamp is a data lake that is full of disorganised, dirty and unused data. This can make it difficult and expensive to find and use the data you need. Why your data lake becomes a data swamp There are several challenges that can cause a data lake to become a data swamp. One common challenge is a lack of data governance. Data governance is the process of managing data throughout its lifecycle, from creation to destruction. Without proper data governance, it can be difficult to keep track of where data came from, what it means, and how it should be used. Another challenge is a lack of data quality. Data quality refers to the accuracy, completeness, and consistency of data. Dirty data can lead to inaccurate and misleading results from data analysis. The third challenge generally facing data lakes is too much data. Rather than identifying what data you need and storing that, organisations are putting everything they have into their data lake without considering if they would ever need or use that data. This pushes the storage costs up and fills your data lake with unused data. Finally, data lakes can also become data swamps if they’re not used regularly. When data is not used, it can become stale and irrelevant. This can make it difficult and expensive to clean and analyse the data when it is needed. Improving your data lake Here are a few tips for improving your data lake and preventing it from becoming a data swamp: - Implement data governance policies and procedures. This will help you to track and manage your data throughout its lifecycle. - Establish clear data quality standards. This will help you to ensure that your data is accurate, complete, and consistent. - Use a data catalogue to organise your data. A data catalogue is a repository of information about your data, such as its source, type, and format. This will make it easier to find and use the data you need. - Regularly clean and analyse your data. This will help you to remove stale and irrelevant data, and to identify and fix any data quality issues. By following these tips, you can prevent your data lake from becoming a data swamp and ensure that you’re getting the most out of your data. At Bridgeall we help organisations build data platforms and implement good data governance to help reduce the cost of data and save you worrying about data quality. To find out more check out our data governance services here.
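One concrete way to act on the data catalogue and "clean regularly" tips above is to keep a lightweight catalogue record per dataset and periodically flag anything nobody has touched. A minimal sketch in Python follows; the dataset names, owners, and the one-year threshold are illustrative assumptions, not details from the article.

```python
from datetime import datetime, timedelta

# One hypothetical catalogue record per dataset in the lake.
catalogue = [
    {
        "name": "web_clickstream_raw",        # illustrative dataset name
        "source": "marketing-site events",
        "owner": "analytics-team",
        "format": "parquet",
        "last_accessed": datetime(2023, 2, 1),
    },
    {
        "name": "supplier_invoices_2019",
        "source": "ERP export",
        "owner": "finance-team",
        "format": "csv",
        "last_accessed": datetime(2020, 6, 15),
    },
]

def swamp_candidates(records, max_idle_days=365):
    """Flag datasets nobody has touched recently - candidates for review or archival."""
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    return [r["name"] for r in records if r["last_accessed"] < cutoff]

print(swamp_candidates(catalogue))
```

A simple report like this, run on a schedule, gives the governance process something concrete to review instead of letting stale data accumulate unnoticed.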
<urn:uuid:17edc517-7d6d-4069-8425-253f9115dc1e>
CC-MAIN-2024-38
https://www.bridgeall.com/2023/10/13/is-your-data-lake-turning-into-a-data-swamp/
2024-09-17T09:30:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00293.warc.gz
en
0.94137
530
2.6875
3
AI-assisted development is one of the hottest new use cases for artificial intelligence. By using AI to streamline programming, AI-assisted development tools promise to help developers work faster while making fewer errors. But can AI really revolutionize the way developers write software? Should you expect AI-assisted development to become the norm, or just an approach used in certain niche cases? And is it really all that different from other code-automation technologies, like low-code programming? Read on to find out. What Is AI-Assisted Development? AI-assisted development is the use of AI to guide developers while they write code. AI-assisted development tools can perform a wide range of tasks to help developers. They can automatically generate code, potentially reducing the number of keystrokes that programmers make manually by half. AI-assisted development can also help coders catch bugs, identify security vulnerabilities and avoid sloppy coding practices that make it harder for others to understand their code. What Problems Does AI-Assisted Development Solve? In a nutshell, AI-assisted development helps programmers solve some of the most basic challenges associated with writing code: ensuring that it is clean, secure and bug-free. In most cases, there is a virtually infinite number of ways in which a developer could write the code required to achieve a certain task. AI-assisted development offers a means of automatically guiding developers as they write code to help them adhere consistently to best practices (as the AI-assisted development tool vendor defines them, at least). At the same time, there are obvious efficiency benefits from AI-assisted development. By auto-generating a healthy portion of the code that programmers write, AI-assisted development tools can help programmers write more code in less time. Or, they could empower companies with small development teams to create more apps than they could build using a fully manual approach. Both benefits are highly advantageous in a world where there is a persistent shortage of developers, combined with ever-increasing pressure on businesses to build apps. How New Is AI-Assisted Development? Although AI-assisted development has become a hot topic only in the past couple of years, many of the concepts and technologies behind it are not entirely novel. In many ways, you could argue that AI-assisted development is an extension of no-code and low-code programming, a methodology that also relies on auto-generated code to help developers work faster. Low code and no code are different from AI-assisted development in that the former are usually powered by prebuilt modules that developers string together. AI-assisted development is geared more toward automatically generating original code without relying on preconfigured modules. Still, the techniques are not wildly different. Along similar lines, the ability of AI-assisted development tools to spot bugs is not unlike the functionality long offered by Static Application Security Testing (SAST) and Source Composition Analysis (SCA) tools. These types of tools can also detect security problems or other potential issues within source code. AI-assisted development is a little different in that it can detect problems as the code is being written, whereas SAST and SCA scans are usually performed after code exists. But, again, there’s not a huge difference in functionality here. 
At a more basic level, you could even say that IDEs with code auto-complete features, which have been around for quite a long time, are a primitive form of AI-assisted development. So, it’s fair to say that AI-assisted development brings coding automation to a new level and makes it more interactive. But it’s not exactly a radically new type of solution, categorically speaking. Who Should Use AI-Assisted Development? Who needs AI-assisted development, then? One way to answer that question is to think about the extent to which different teams already rely on code-automation tools. As noted above, AI-assisted development builds on categories of coding tools that are already in widespread use today, like low-code and no-code solutions, or SAST scanners. If your organization uses tools like these, it’s likely that you’ll benefit from AI-assisted development, too. But if you write truly complex applications that are difficult for even the most sophisticated AI to understand, let alone help to write, it’s unlikely that today’s generation of AI-assisted development tools will be of much benefit. Getting Started with AI-Assisted Development The market for AI-assisted development tools remains small, and enterprises looking to deploy such solutions today have limited options. In general, the best place to look is at vendors in the low-code/no-code space, some of which are actively expanding their products to support AI-assisted development use cases. Some code security companies are also dipping their toes into the AI-assisted development market. For example, Snyk acquired DeepCode, an AI-assisted coding startup. There are also some independent startups in this space, such as Kite. So far, the open source community has produced very few tools for AI-assisted development. That may change as this category of technology continues to grow. But, for now, if you want an AI-assisted development tool, you’ll likely have to go with a commercial solution (although some tools in this niche are available free of charge). About the Author You May Also Like
<urn:uuid:45338b47-1f17-49e8-b1b4-47b6f4e80574>
CC-MAIN-2024-38
https://www.itprotoday.com/ai-machine-learning/who-does-and-doesn-t-need-ai-assisted-development-
2024-09-18T15:22:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00193.warc.gz
en
0.944966
1,112
2.953125
3
The modern world has advanced rapidly over the past few years. It’s become far more connected and integrated than ever before. The internet, in combination with modern technology and devices, has brought a new and unique way of communicating with other people. From video calls with people on the other side of the world to instant messaging from absolutely anywhere, our modern society has become almost unrecognizable compared to what it looked like 20 years ago. One of the most important things that our modern world thrives on is social media. Platforms like Facebook, Twitter, Instagram, and TikTok have taken over our world, and it’s almost impossible to find someone that doesn’t have a profile on at least one of these platforms! Social media has become an integral part of our daily lives. We use it to connect with friends and family, to stay informed, and to express ourselves. While it’s convenient and fun, it’s important to be aware of the potential risks associated with social media. Your accounts could be at risk from hacking, phishing, and other malicious activities coordinated by cyber criminals. In the article below, we will discuss the importance of using strong and unique passwords, and why you should consider using a premium password manager to keep your passwords safe. Use Strong and Unique Passwords One of the most significant risks associated with social media is hacking. Hackers are always on the lookout for vulnerabilities in popular websites and apps, and once they find one, they can exploit it to gain access to your account. From there, they can steal your personal information, spread malware on your devices, or even take control of your account and use it for malicious purposes. This is why it’s crucial to use strong and unique passwords for each of your social media accounts. A strong password is one that is long and complex and contains a combination of uppercase and lowercase letters, numbers, and special characters. The longer the password, the more secure it is. The most important part of creating strong passwords is that you don’t use your personal information in the password — many people do this because it’s much easier to remember. From their date of birth to use their names, people will usually use their personal information to create passwords because they think it’s safe and secure while being easy to remember. However, hackers can easily find this information on the internet, and it’s usually the first thing that they will try when attempting to break into your account. Studies have shown that it’s best to create passwords using random words, or even a random combination of letters, numbers, and symbols. You’re also going to need to use unique passwords. A unique password means not using the same password for multiple accounts. This is important because if one of your accounts is compromised, the hacker won’t be able to access your other accounts with the same password. Websites and companies often suffer data breaches, which is when hundreds (sometimes thousands) of users’ login credentials are uncovered and sold on the dark web. If you’re using the same password for multiple accounts, a hacker could steal your Facebook login credentials and try to use the same password for your online banking profile — which would be successful if you’re using the same login credentials! Other Risks to Your Login Credentials You should also be careful when using public Wi-Fi networks. 
Public Wi-Fi networks are often unsecured, which means that anyone can intercept your information as it travels over the web. Cybercriminals often use these networks to steal the login credentials of other users on the same network. When you’re using social media, it’s best to avoid using public Wi-Fi and instead use a private, encrypted network. You can also use a VPN to secure your connection if you need to use a public WiFi hotspot. Another risk associated with social media is phishing. Phishing is when hackers send you an email or message that appears to be from a trusted source, such as your bank, your social media account, or your email provider. The message will ask you to click on a link or provide personal information, and if you do, you’ll be taken to a fake website that looks like the real thing. The fake website will then steal your personal information or install malware on your device. To avoid falling for a phishing scam, be wary of unsolicited messages and always check the sender’s email address before clicking on any links. Hackers are becoming more and more clever by using social engineering. This is when they pose as a person or company that you might know and trust. On social media, the hackers can pose as a page that is offering a giveaway to something that you might be interested in. For example, they might be giving away free tickets to an upcoming sporting event, and to enter you simply need to log into your account, provide some personal information, and you’re in the fake prize draw. How to Safely Keep Track of Passwords It’s important to use strong and unique passwords, but it’s also difficult to remember them all. This is where a premium password manager comes in. A password manager is software that securely stores all of your passwords in one place — it’s like a secure virtual vault! You only have to remember one master password to access the password manager. The master password is randomly generated by the software, so you don’t have to worry about it being hacked by a cybercriminal. The software will automatically fill in your login information for you. This not only saves time but also ensures that you’re using strong and unique passwords for each of your accounts. Social media can put your accounts at risk, but you can protect yourself by using strong and unique passwords, being careful when using public Wi-Fi networks, and avoiding phishing scams. A premium password manager can also help keep your passwords secure, so you don’t have to remember them all, and you can rest assured that your personal information is protected. By taking these steps, you can enjoy the benefits of social media without putting your accounts at risk. ABOUT THE AUTHOR IPwithease is aimed at sharing knowledge across varied domains like Network, Security, Virtualization, Software, Wireless, etc.
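To make the password guidance above concrete, here is a minimal sketch of generating a long, random, unique password per account with Python's standard secrets module. The account labels and the 20-character length are illustrative choices, and in practice a password manager typically does this generation for you.

```python
import secrets
import string

def generate_password(length=20):
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password for each account so one breach can't unlock the rest.
for account in ("social-media", "email", "banking"):  # illustrative account labels
    print(account, generate_password())
```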
<urn:uuid:69912859-ddcf-40cc-8b4d-8a0bf4b6b383>
CC-MAIN-2024-38
https://ipwithease.com/social-media-could-put-your-accounts-at-risk/
2024-09-19T21:44:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00093.warc.gz
en
0.931491
1,307
3
3
Scratch Coder - AI-powered Scratch programming assistant. AI-powered Scratch projects made easy. Introduction to Scratch Coder Scratch Coder is a tool designed to assist users in developing, troubleshooting, and refining Scratch projects. It provides detailed guidance and advice on Scratch code, projects, and concepts. By asking specific details about the user's project or issue, Scratch Coder tailors its advice to fit the unique needs of each user. The main design purpose of Scratch Coder is to facilitate learning and creativity within the Scratch environment, encouraging users to experiment and enhance their coding skills. For example, if a user is struggling with making a sprite move in a particular way, Scratch Coder can provide step-by-step instructions and examples to solve the issue. Another scenario might involve a user wanting to create an interactive game; Scratch Coder can guide them through the process of using variables, loops, and events to achieve this. Main Functions of Scratch Coder Guidance on Scratch Programming A user wants to create a game where a sprite collects objects and keeps score. Scratch Coder can provide a detailed explanation on how to use variables to keep score, how to detect collisions between the sprite and the objects, and how to update the score each time an object is collected. Debugging and Troubleshooting A user encounters an issue where their sprite does not respond to keyboard inputs. Scratch Coder can help identify common mistakes such as not attaching the correct event blocks or not initializing variables properly, and offer solutions to fix the code. Project Enhancement Suggestions A user has created a basic animation and wants to add sound effects and more complex interactions. Scratch Coder can suggest ways to integrate sound blocks, create custom blocks for repetitive tasks, and use broadcast messages for better control of interactions between sprites. Ideal Users of Scratch Coder Services Beginners and Young Learners Young learners and beginners who are new to programming can greatly benefit from Scratch Coder. It provides a gentle introduction to programming concepts through the use of Scratch's visual programming language, which is easier to grasp for novices. The step-by-step guidance helps them build foundational skills in a fun and engaging way. Educators and Instructors Teachers and instructors who use Scratch as part of their curriculum can leverage Scratch Coder to enhance their teaching. It offers additional resources, tutorials, and troubleshooting tips that can be incorporated into lesson plans. This allows educators to better support their students and provide more comprehensive learning experiences. How to Use Scratch Coder Visit aichatonline.org for a free trial without login, also no need for ChatGPT Plus. Ensure you have an internet connection and a modern web browser (Chrome, Firefox, Safari, or Edge) to access the Scratch Coder online. Create a new project or upload an existing one. 
After accessing the site, click on 'Create' to start a new project or 'Upload' to continue working on an existing one. Sign in if you want to save your progress online. Familiarize yourself with the Scratch interface. Explore the Blocks Palette, Stage, Coding Area, Sprite List, and the Toolbar to understand where everything is located and how to use the different tools. Start coding with blocks. Drag blocks from the Blocks Palette to the Coding Area to create scripts. Snap blocks together to define the behavior of sprites (characters or objects). Experiment with different blocks to see their effects. Test and refine your project. Click the green flag to run your script and see how it works on the Stage. Make adjustments as needed, and use the 'Help' section or online tutorials for additional guidance. - Game Development - Interactive Stories - Educational Projects - Animation Creation - Visual Programming Detailed Q&A about Scratch Coder What is Scratch Coder? Scratch Coder is an interactive tool that allows users to create animations, games, and stories using block-based programming. It is designed to introduce beginners to programming concepts in a fun and engaging way. How do I start a new project in Scratch Coder? To start a new project, visit aichatonline.org, click on 'Create' to open the Scratch Editor, and begin by dragging blocks from the Blocks Palette to the Coding Area. You can also sign in to save your progress. Can I use Scratch Coder offline? Yes, you can download the Scratch app from scratch.mit.edu/download to use it offline. This is useful if your internet connection is unreliable or if you prefer to work without needing to be online. What are sprites in Scratch Coder? Sprites are characters or objects in Scratch that you can control using scripts. Each sprite can have its own set of scripts, costumes, and sounds to define its behavior and appearance. How do I share my Scratch Coder projects? You can share your projects by clicking the 'Share' button on the project page. This allows others to view and interact with your project. You can also turn commenting on or off based on your preference.
<urn:uuid:db60ce91-1750-4370-ac4f-503bee87814a>
CC-MAIN-2024-38
https://theee.ai/tools/scratch-coder-2OToElEM8m
2024-09-19T22:12:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00093.warc.gz
en
0.880497
1,385
2.515625
3
Creating A.I. That Can Build A.I. With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms. With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. “We are following the same path that computer science has followed with every new type of technology. We are eliminating a lot of the heavy lifting.” - Joseph Sirosh, VP at Microsoft Read the full article here.
<urn:uuid:5234dc13-9910-47b4-8046-bf11f06052bb>
CC-MAIN-2024-38
https://www.databahn.com/blogs/fortune-1000-sales-trigger-events/creating-a-i-that-can-build-a-i
2024-09-21T00:45:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00893.warc.gz
en
0.947464
183
3.09375
3
The new 5G network provides a wide range of benefits should enterprises choose to take advantage of it. Conversely, it creates security challenges for these organizations as well. One of the greatest challenges facing enterprise security teams is a growing lack of visibility into their enterprise network traffic. While these teams can monitor business traffic over their broadband Internet and multiprotocol label switching (MPLS) links, they are blind to traffic flowing directly to cloud resources over the public Internet or the use of mobile networks by company-owned devices. With 5G, this visibility problem will increase. Higher network speeds and bandwidth on mobile networks will encourage the use of these networks for corporate IoT and mobile devices, increasing the percentage of corporate traffic into which the security team lacks visibility. In addition to high speeds and increased bandwidth, 5G also offers a 90% reduction in energy consumption, making it an ideal choice for power-constrained IoT devices. As a result, these devices will increasingly be connected to and accessible from the public internet. IoT devices are notorious for their poor security, which includes the use of default passwords, insecure protocols, and built-in backdoors. Connecting these devices directly to mobile networks, where the company lacks visibility, will make them increasingly vulnerable to attack. In recent months, Huawei has frequently appeared in the news as countries consider banning the company’s systems from their 5G networks. These decisions are significant and newsworthy because there are few companies manufacturing the systems needed for 5G networks and Huawei is the largest. With 5G, mobile networking has moved to primarily software-defined networking, meaning that programming errors in 5G systems can have significant impacts on mobile network security. Huawei components are known to have security vulnerabilities, which could potentially enable cyber criminals to exploit the 5G network and connected devices. Other vendors’ products could have similar vulnerabilities or be targeted by supply chain attacks. The 5G network is designed to move most of the network’s functionality to the edge, within 5G base stations. With 5G, more base stations are required, and they cover a smaller geographic area. This shift to a greater number of more powerful cellular towers makes them a potential target of attack. Alternatively, a fake base station can be used to eavesdrop upon or attack devices using the 5G network.
<urn:uuid:d327cd07-1afa-4c91-a6c4-fe3914708282>
CC-MAIN-2024-38
https://www.morganfranklin.com/insights/5g-network-and-security/
2024-09-21T00:42:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00893.warc.gz
en
0.949563
473
2.53125
3
Machine learning is the science of getting computers to act without being explicitly programmed. Using an experimental interactive design, the new R2D3 Blog offers an instructive Visual Introduction to Machine Learning. - Machine learning uses statistical learning and computers to identify patterns by unearthing boundaries in data sets. You can use it to make predictions. - One method for making predictions is called a decision tree, which uses a series of if-then statements to identify boundaries and define patterns in the data. - Overfitting happens when some boundaries are based on distinctions that don't make a difference. You can see if a model overfits by having test data flow through the model (see the short sketch below). In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions. How pervasive is machine learning today? In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.
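A minimal sketch of the overfitting check described above, assuming scikit-learn is installed; the synthetic dataset and the depth-3 limit are illustrative choices, not part of the R2D3 visualization.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for the real examples used in the visualization.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree keeps splitting on distinctions that don't generalize.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# A depth-limited tree draws fewer, broader boundaries.
small_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, model in [("deep tree", deep_tree), ("depth-3 tree", small_tree)]:
    print(name,
          "train:", round(model.score(X_train, y_train), 3),
          "test:", round(model.score(X_test, y_test), 3))
# A large gap between training and test accuracy is the signature of overfitting.
```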
<urn:uuid:7fdb4cd2-dbfc-4c2d-befd-5f2efe70a7d4>
CC-MAIN-2024-38
https://insidehpc.com/2015/08/interactive-design-powers-visual-introduction-to-machine-learning/
2024-09-07T16:43:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00293.warc.gz
en
0.916779
245
3.8125
4
To understand the differences between the three types of SSL certificates—Domain Validated (DV), Organization Validated (OV), and Extended Validation (EV)—it is helpful to understand what certificates are and how certificates are issued by authorized Certificate Authorities (CAs) like DigiCert. CAs are trusted third parties that issue TLS/SSL certificates by verifying identity details of a website owner. The only way to see these details is to look beyond the lock in the address bar. TLS/SSL certificates do two things. First, they provide a secure connection to a website by encrypting the data that is passed between users and the domain. Secondly, certificates verify the ownership and identity of the business or person that owns the URL. Just as a certificate would in the physical world, a digital certificate essentially certifies your right to represent your business or organization online.
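One way to "look beyond the lock" is to fetch a site's certificate and read its subject and issuer fields programmatically. A minimal sketch using Python's standard library; the hostname is a placeholder, and the exact fields returned depend on the certificate presented.

```python
import socket
import ssl

hostname = "www.example.com"  # placeholder: any HTTPS site you want to inspect

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# Subject shows who the certificate was issued to; issuer shows which CA verified them.
print("subject:", cert["subject"])
print("issuer:", cert["issuer"])
print("valid until:", cert["notAfter"])
# A DV certificate typically lists only the domain in the subject,
# while OV and EV certificates also carry verified organization details.
```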
<urn:uuid:1abfe080-7ea0-406b-9867-6e6ca9427a2c>
CC-MAIN-2024-38
https://www.digicert.com/difference-between-dv-ov-and-ev-ssl-certificates.
2024-09-08T22:10:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00193.warc.gz
en
0.961901
176
3.046875
3
Many Americans refrain from shopping, stating opinions online Recently released results of a survey by the US Department of Commerce’s National Telecommunications and Information Administration (NTIA) have revealed that security and privacy fears stopped 45 percent of polled households from conducting financial transactions, buying goods or services, posting on social networks, or expressing opinions on controversial or political issues via the Internet. “Privacy and security concerns deterred each of these important activities in millions of households, and this chill on discourse and economic activity was even more common among online households that either had experienced an online security breach or expressed two or more major concerns about privacy and security risks,” says Rafi Goldberg, an NTIA policy analyst. “NTIA’s initial analysis only scratches the surface of this important area, but it is clear that policymakers need to develop a better understanding of mistrust in the privacy and security of the Internet and the resulting chilling effects. In addition to being a problem of great concern to many Americans, privacy and security issues may reduce economic activity and hamper the free exchange of ideas online,” Goldberg noted. The survey took into consideration answers from 41,000 households which have at least one Internet user. Key survey results - 19 percent of US Internet-using households have been affected by an online security breach, identity theft, or similar malicious activity in the year before the survey was carried out. - Online households are most concerned with identity theft (63%) and credit card or banking fraud (45%), and less with data collection or tracking by online services (23%), loss of control over personal data (22%), data collection or tracking by government (18%), and threats to personal safety (13%). The percentages (not wholly unexpectedly) run higher when taking into consideration just the answers of households that have been affected by a security breach in the year before the survey. It seems reasonable that identity theft and credit card fraud fears are much greater than, for example, fears about data collection by the government. Consider that the pollees, if not hit themselves, likely know at least one person that has had their identity stolen or credit card info pilfered and misused.
<urn:uuid:cad8a877-3bf6-4be9-9c7d-42363aad05e5>
CC-MAIN-2024-38
https://www.helpnetsecurity.com/2016/05/16/refrain-shopping-stating-opinions-online/
2024-09-08T22:26:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00193.warc.gz
en
0.956396
441
2.515625
3
The FSS has announced that it will be "wound up" by 2012 due to operational losses of £2 million per month. The Forensic Science Service (FSS) is the government-funded provider of forensic services to England and Wales' police forces. Despite the benefit this will bring to the commercial sector, the Forensic Science Service has been groundbreaking in the industry, as it pioneered the development and implementation of DNA technologies and also paved the way for the formation of the first DNA database, which was successfully launched in April 1995. The business became a government agency on April 1, 1991, while in December 2005 it turned into a government-owned company. During this period the FSS has been responsible for innovations such as the introduction of the National Firearms Forensic Database in 2003 and the UK's first online footwear coding and detection management system, Footwear Intelligence Technology (FIT), three years ago. However, despite the triumphs, there have also been well documented problems. The FSS failed to adapt quickly enough to the scientific demands of digital devices and therefore entered the computer forensics market very late. This gave commercial competitors "an edge" that has led to its downfall. Other more serious problems have also occurred. For example: "The FSS suffered damage to its reputation following the failure to recover blood stains from a shoe in the murder of Damilola Taylor. Further damage occurred when the FSS failed to use the most up-to-date techniques for extracting DNA samples in cases between 2000 and 2005. This led the Association of Chief Police Officers (ACPO) to advise all police forces in England and Wales to review cases where samples had failed to give a DNA profile." [Source: Wikipedia] The work of the FSS (both good and bad) has resulted in a better understanding of successfully working within computer forensics and mobile phone forensics, which the entire industry should be very grateful for.
<urn:uuid:e4e51d0d-9705-4b81-872c-e8fac29ccad6>
CC-MAIN-2024-38
https://www.intaforensics.com/forensic-science-service-to-close-in-2012/
2024-09-11T05:35:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00893.warc.gz
en
0.969499
400
2.765625
3
This article is the first in our series on the common Unix commands every Mac admin must know. In a world of endless possibilities where everyone seeks to work smarter not harder, IT and system admins cannot afford to be left behind. A great way for Apple Admins to become smart is to master as many commands as possible. Terminal.app is a utility that gives the admin direct access to the Unix underpinnings of the macOS operating system. It lets the admin perform tasks quickly and efficiently on the local computer (directly or remotely). All you need to do is to send a few text commands, and you can make your way through both simple or complex tasks easily. It is the magic that saves you time and makes you more efficient. Therefore, we have decided to explore some of the most important macOS commands in this series. In this article, you will learn how to enable SSH for accessing a remote Mac’s shell securely. What Is SSH? SSH — also known as Secure Socket Shell or Secure Shell — is a secure network protocol that allows users, especially system admins, to securely access remote devices. It encompasses a cryptographic network protocol and the suite of utilities that implement the protocol. SSH encrypts the communication with a remote system by utilizing a pair of SSH keys which are cryptographic in nature and made up of a public and private key pair. The keys work collaboratively to provide authentication between the client and the remote system. SSH keys can and should be used in any situation where there is an unsecured network. Aside from providing strong encryption and secure remote connections, SSH encrypts the data during file transfers or while securely managing network infrastructure components. In addition, it can be configured to allow port forwarding by mapping the default SSH port to an available port number on the destination. How SSH Works in Mac Secure Shell leverages a client-server model to connect an SSH client application (where the session is displayed) with an SSH server (where the session runs). SSH has three layers: - The transport layer, which establishes secure communication between the client and the SSH server. - The authentication layer, which sends the supported authentication methods to the client. - The connection layer, which manages the connection between the client and the server after a successful authentication. To establish a connection with an SSH server, the client needs to initiate a request with an SSH server. Once the server receives the connection request, encryption negotiation begins. The server sends a public cryptography key to the client and the key is used to verify the identity of the SSH server. Afterwards, the server negotiates parameters and creates a secure channel for the client. Finally, the client logs into the server. Enabling SSH to Securely Access a Remote Mac’s Shell SSH remote login to an Apple computer is disabled by default. In this section, we will take you through the process of enabling SSH. Open the Terminal App on Your MacBook You can do this by searching “terminal” using the Spotlight search option of your computer or navigating through Applications > Utilities > Terminal. Enter and Run the Command To enable SSH, enter and execute the -setremotelogin command as follows: sudo systemsetup -setremotelogin on It is necessary to add sudo because the command requires administrator privileges. You will be required to input your user password when you run the command. 
Provide the password and press enter (as shown in Figure 1 below). Note: In Mac, SSH is also known as Remote Login. Check if SSH is Enabled Once you complete step 2, you will not get any message to confirm that SSH has been enabled. However, you can use a command to know if SSH has been successfully enabled. Simply run and execute the following: sudo systemsetup -getremotelogin If SSH is on, you will get a message that reads “Remote Login: On” (refer to Figure 2). Want to Disable SSH? While you have now learned how to enable SSH, it’s equally important to know how to turn it off in case you wish to disable any remote login in future. The process of disabling SSH is similar to the process you followed to enable it. Simply open the terminal app and run the following command: sudo systemsetup -setremotelogin off After successfully executing the command, you will get a question: “Do you really want to turn remote login off? If you do, you will lose this connection and can only turn it back on locally at the server (yes/no)?” Refer to Figure 3. Type “yes” to confirm. This will disable SSH and disconnect any active SSH connections on your MacBook. Bypass the Yes/No Question Anytime You Disable SSH Meanwhile, if you want to bypass being asked a question of yes/no anytime you try to disable SSH, you can use the -f flag to force the command to execute immediately and without the prompt. sudo systemsetup -f -setremotelogin off To confirm if SSH is off, run the command: sudo systemsetup -getremotelogin You should get a message that reads “Remote Login: Off” (as shown in Figure 4). As stated earlier, SSH is a cryptographic network protocol used to establish a secure, encrypted connection between two computers. In this article, you learned how to enable or disable SSH by running a command in the terminal app. Enabling SSH will allow you to remotely connect your macOS device, transfer files, and perform admin tasks securely. There are two other ways you can enable SSH for macOS devices: - Turn on SSH in the GUI by going to System Preferences > Sharing > Remote Login. - Leverage the Commands tab in the JumpCloud Directory Platform to enable SSH across your fleet. Overall, SSH keys provide a more secure and convenient way to authenticate remote systems than the conventional username/password approach. To ensure the authorization each SSH key has is accurate, it’s important to deploy the right management tool and put sound policies in place. Simplified SSH key management is one of the many ways IT admins can make their lives easier with our cloud directory platform. Sign up for a trial of JumpCloud today to test out the possibilities in your own environment.
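Once Remote Login is enabled, administrative tasks can also be scripted against the Mac from another machine. Below is a minimal sketch using the third-party paramiko library; paramiko is just one option for scripting SSH from Python and is not part of the article's steps, the hostname and account name are placeholders, and it assumes SSH keys or a password are already set up.

```python
import paramiko

host = "192.168.1.25"   # placeholder: the Mac you enabled Remote Login on
user = "adminuser"      # placeholder account name

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a quick test; pin host keys in production
client.connect(host, username=user)  # uses keys from ssh-agent/~/.ssh by default, or pass password=...

# Run a simple remote command to confirm the connection works.
stdin, stdout, stderr = client.exec_command("sw_vers -productVersion")
print(stdout.read().decode().strip())
client.close()
```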
<urn:uuid:be95e9b2-6b4a-4faf-ae8d-8e811e4b18a7>
CC-MAIN-2024-38
https://jumpcloud.com/blog/how-to-enable-ssh-mac
2024-09-13T18:31:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00693.warc.gz
en
0.907294
1,311
2.796875
3
Battery charging for electric vehicles (EVs) remains difficult because of high electricity costs, and users of electric vehicles are frustrated by inconsistent pricing practices, frequently broken equipment, and a lack of battery chargers in strategic locations for all but Tesla drivers. Some drivers find charging for EVs to be a frustrating experience, especially those who are located in states with steadily rising electricity rates due to inflation and intermittent renewable energy sources (wind and solar power). According to a J.D. Power study of EV owners who use Level 2 charging stations, overall satisfaction with the charging experience has decreased 12 points since last year, primarily because rising electricity prices are starting to affect consumers directly. Because the permitting and construction of chargers can take 18 months or longer, the funding allocated for new chargers has not yet produced many of them. This issue also affects vehicle fleet operators and businesses with fleets, since each has a significant interest in "going green" with their fleets. Fortunately, advanced data and sophisticated AI-powered connected telematics are now in place to help fleet operators and users identify the optimal time to plug in. The Cost of Electricity Can be Problematic Aside from all the challenges with infrastructure, the energy costs alone can be severely problematic for fleets and users of EVs, further driving the need for this advanced connected vehicle data and insights. The cost of energy for an EV is determined by the cost of electricity per kilowatt-hour (kWh) and the energy efficiency of the vehicle. For example, to determine the energy cost per mile of an electric vehicle, select the location on the left axis (Electricity Cost per kWh) at 10 cents in the graph below. Draw a horizontal line to the right until you bisect the EV 3 mi/kWh line. Now draw a vertical line down until you bisect the bottom axis (Energy Cost per Mile). This tells you that the fuel for an electric vehicle with an energy efficiency of 3 miles per kWh costs about 3.3 cents per mile when electricity costs 10 cents per kWh. It is important to note that electricity in the U.S. costs roughly 10 cents per kWh, while the average residential rate is about 11.7 cents per kWh. Charge rates for EVs in select areas may vary by time of use, day, and season. In the past, these rates have ranged from 3 cents to as high as 50 cents per kWh. Electricity usage also varies with the age and brand of the vehicle. To determine the energy cost per mile of a gasoline vehicle, pick the location on the right axis (Gasoline Cost per gallon) at $3.50. Draw a horizontal line to the left until you bisect the Gas 22 mi/gal line. Now draw a vertical line down until you bisect the bottom axis (Energy Cost per Mile). This tells you that the fuel for a gasoline vehicle with an energy efficiency of 22 miles per gallon costs about 15.9 cents per mile when gasoline costs $3.50 per gallon. The mileage for commercial fleet vehicles such as light-duty pickups ranges from below 17 miles per gallon to generally about 22 miles per gallon. Understanding How AI and Data Can Help Control Costs Despite all of this, leading AI and data technologies are offering intelligent solutions that can reduce the headaches and costs associated with driving and charging an EV, or a fleet of EVs for a business. 
Today’s available EV charge data solutions for fleets and vehicles leverage an Augmented Deep Learning Platform (ADLP) that utilizes machine learning and data science with unique indicators that allow predictive real-time data insights to OEMs that enhance their vehicle’s performance and quality as well as the customer experience related to vehicle usage. See an example in the image below: This data connects and analyzes everything in real time from charge stations to optimized energy outputs at locations, time of day, cost savings, congestion reduction rates, and the technology can even predict failure cycles that holistically feed data into smart city data infrastructure platforms. This type of AI-driven connected vehicle data helps fleet customers and EV users make a more seamless, successful transition to a greener, cleaner, and more sustainable future. Users can charge EVs with accurate energy cost and rate plan selections. Intelligent energy consumption means consumers will lower the impact on their energy bill and get the most out of their solar panels by charging EVs at the most optimal time. Lastly, they can leverage the power of smart cities by receiving in-car notifications for the nearest charging station, reserving charging slots in the near future. With these AI and data strategies available, fleets and drivers of EVs will have a better experience in adopting a greener solution for transportation while better controlling the cost of charging.
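The cost-per-mile arithmetic walked through earlier is easy to reproduce. A minimal sketch using the article's example figures (10 cents per kWh at 3 mi/kWh versus $3.50 per gallon at 22 mpg); these are illustrative numbers, not measured fleet data.

```python
def cost_per_mile_ev(electricity_cost_per_kwh, miles_per_kwh):
    """Energy cost per mile for an EV."""
    return electricity_cost_per_kwh / miles_per_kwh

def cost_per_mile_gas(gas_cost_per_gallon, miles_per_gallon):
    """Energy cost per mile for a gasoline vehicle."""
    return gas_cost_per_gallon / miles_per_gallon

# Example figures from the article: 10 cents/kWh at 3 mi/kWh vs. $3.50/gal at 22 mpg.
ev = cost_per_mile_ev(0.10, 3)      # ~$0.033 per mile
gas = cost_per_mile_gas(3.50, 22)   # ~$0.159 per mile
print(f"EV: {ev * 100:.1f} cents/mile, gas: {gas * 100:.1f} cents/mile")
```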
<urn:uuid:27aeb8ed-cfbe-4cdc-91f2-97ce3f8c7aa7>
CC-MAIN-2024-38
https://techstrong.ai/articles/leveraging-ai-and-data-technologies-to-better-control-charge-costs-for-electric-vehicles/
2024-09-17T12:53:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00393.warc.gz
en
0.939273
960
2.625
3
With businesses relying more and more on portability and mobility, it should come as no surprise that businesses also have to devote more time to the proper management of their wireless network. A considerable portion of this management is reliant on the router the business uses, as without the router, the business simply couldn’t have a wireless connection. For this week’s tech term, we’ll discuss the router in a little more depth. What is a Router, Anyway? The router got its start almost 50 years ago, developed under the name ‘Interface Message Processor’ by BBN at the end of the 1960s. Since then, routers have increased in utility, now serving to enable the user to communicate through a variety of different means – including data, video, and voice. The router, or Interface Message Processor, was initially created to be used on the Internet’s predecessor, the ARPAnet. After years of development, Bill Yeager created the code that enabled the first multi-protocol router, which in turn led to the development of the first Local Area Network, or LAN, by Len Bosack and Sandy Lerner. This pair would go on to create Cisco Systems in 1984. Cisco has since grown to become the largest networking company anywhere in the world. What Does a Router Do? To greatly simplify the responsibilities of the router, it helps to imagine them as the exchange tubes that banks, and credit unions use in their drive-thru, and the funds they transport as the data that is exchanged through the router’s activity. The vacuum tube serves as the go-between between you in your car and the teller inside the bank, allowing you to communicate and exchange information. Your router serves a very similar purpose to your network as the vacuum tube does to the bank, as your router establishes a connection between you and the Internet. Routers provide the connection between the Internet (or more literally, your Internet modem) and your devices. While many routers are described as wireless, this isn’t completely accurate. Any router will typically require a pair of connections – one to a power source, and the other to the modem. How Does a Router Really Work? Assuming that the necessary wires are properly connected, your router will send a signal out to the rest of your devices, so they can connect to the Internet. These signals will usually reach anywhere between 90 to 300 meters away, depending on the power of the router. Any device with a Wi-Fi connection built into it will connect, assuming that it has the proper credentials to do so. This number of devices will only grow as more consumer goods, like fitness wearables and other ‘smart’ accessories, are granted the ability to access the Internet as a part of their function. At the very least, you will need to account for these connections when selecting a router. You should also do some research and identify any features that may be of particular use to you. Some Options and Features As is the case with any other piece of technology, a router gets better with every additional feature and capability it has. Routers are now able to leverage assorted features and capabilities that improve both their function, and their security. - Dual-band Wi-Fi – Since there are so many devices using the 2.4GHz frequency, now many wireless routers come with dual bands (2.4GHz and 5Ghz). - Wireless On/Off Toggle – For ease of use, having a dedicated on/off switch on the device is always practical. 
- Detachable Antennas – Today, a lot of the routers you’ll see don’t have external antennas, but if you can find a model with them, they will provide more coverage to your Wi-Fi connection – and can even be upgraded! - IPv6 Support – IPv4 addresses have been exhausted for some time, so every router you plan to have for a while has to support IPv6. Catharsis Managed IT Ltd has technicians on staff that can help you build a successful wireless network. For more information, call us today at (416) 865-3376.
<urn:uuid:b5428ca0-f49b-4c79-8309-cb209e72ad3b>
CC-MAIN-2024-38
https://www.catharsis-it.com/blog/tech-term-routers-defined/
2024-09-20T00:03:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00193.warc.gz
en
0.963966
856
3.0625
3
Even though you know about detailed financial reporting processes that have largely gotten into practice in many countries, you may not have fully realized their potential. Yet, financial analysis is an essential part of modern business. Ultimately, the new tool helps companies stay compliant and streamline revenues and expenditures across their entire portfolio. Using financial data in a web analysis helps internally collect valuable information and leverage data and insights to improve the areas where business operations can flow significantly. This article informs you about financial reporting and analysis’s importance, purpose, main objectives, benefits, users, and types of financial reporting. What is financial reporting and analysis? Financial reporting and analysis refer to the process of preparing, presenting, and interpreting financial information to stakeholders such as investors, creditors, and management and businesses. Stakeholders or businesses use this information to make informed decisions and comply with tax regulations. Financial reporting involves preparing and presenting financial statements such as the income statement, balance sheet, revenue statement, expenses, and cash flow statement. These statements provide a comprehensive view of a company’s financial performance. Financial analysis involves using financial data to evaluate a company’s performance and make informed decisions. That can include analyzing revenue growth or profitability trends, assessing liquidity and solvency ratios to determine a company’s capacity to meet its financial obligations, or comparing its performance with its competitors. Purpose of finance reporting and analysis Financial reporting and analysis provide stakeholders with relevant information that enables them to make informed decisions about investing in or lending money to a company. That includes generating reports summarizing the organization’s financial results and analyzing those results to determine trends, patterns, and areas for improvement. Financial reporting and analysis also help managers assess their performance and identify areas for improvement. The main objectives of the finance report and analysis are To manage the financial health of an organization Finance reporting and analysis help stakeholders track the organization’s performance over time and identify any changes or trends that may be cause for concern. To make informed decisions By collecting and analyzing real-time data, stakeholders can make informed decisions about allocating resources, investing in new projects, or adjusting their operations to improve profitability. To comply with regulatory requirements Many organizations are legally required to prepare and submit financial reports to government agencies or other regulatory bodies. To attract investors Investors often use financial reports to evaluate the strength and stability of a company before deciding whether to invest. To communicate with stakeholders Financial reports allow organizations to communicate important information about their finances to shareholders, employees, customers, and suppliers. Why is financial reporting important? In the latest report, McKinsey argues that using data to create more accurate marketing information can increase sales and marketing effectiveness by 15-20%. When we take the same logic to finance departments, it’s clear these reports can be of great value and give your business a better overview of your operations. 
Financial statements give insight into your business finances. Communicates essential data Key shareholders, executives, investors, and professional users rely on the latest financial data for decision-making, budgeting, and performance monitoring. Open communication and transparency are crucial to support funding, investments, and financial reviews. Most investors rely on company information to evaluate profitability, risk and return. Supports financial analysis and decision-making Financial reporting is important in the process of analyzing results and supporting decisions. Utilizing financial statements increases accountability and helps in analyzing important financial data. Documents such as the earnings statement make it easy to track your financial position and produce accurate forecasts. Compliance and law Financial reporting does more than satisfy regulatory requirements. Most corporations have stakeholders that require periodic financial reports to be filed. For public companies, that stakeholder is the SEC; private company loans may require periodic reporting against debt covenants. The Internal Revenue Service requires all U.S. businesses to report financial information to meet tax requirements. For raising capital and performing audits Financial reporting can help a business raise more money and manage it more efficiently. Reporting financial information supports continued commercial success in a competitive digital world and helps companies raise local and overseas capital. In addition, detailed financial records help facilitate the statutory audit, in which statutory auditors examine a company’s financial statements in order to form their opinion. For improved internal visibility Analyses are an effective way to communicate critical financial data across an organization. However, when financial insights and information are scattered, decision-making quickly breaks down. Financial analysis and reports help answer your company’s key financial questions and provide internal and external users with a detailed overview and analysis, along with the metrics needed in the business decision-making process. 6 Common types of financial reporting There are many financial reports; here are the most common types. 1. Balance Sheet A balance sheet is a financial statement that gives a snapshot of an organization’s financial position at a specific time. The balance sheet shows the organization’s assets, liabilities, and equity. Assets represent the company’s resources that have economic value and can be used to generate future revenue. Examples of assets include cash, accounts receivable, inventory, property, plant and equipment, and investments. Liabilities represent the company’s obligations to pay debts or other financial obligations. Examples of liabilities include accounts payable, loans payable, accrued expenses, and deferred revenue. Equity represents the residual interest in the company’s assets after all liabilities are paid off. Equity includes common stock, retained earnings (profits earned but not distributed to shareholders), and other comprehensive income. 2. Cash Flow Statement A cash flow statement shows the cash moving into and out of the business over the reporting period, typically grouped into operating, investing, and financing activities. It helps you track whether the business generates enough cash to meet obligations such as payroll, and it provides the information needed to understand your business’s liquidity and financial position. 
A related measure, the quick ratio, shows whether your company can cover current liabilities with its most liquid assets. 3. Income statement An income statement, also known as a profit and loss (P&L) statement, is one of the financial statements used to report a company’s financial performance over a specific period. The income statement presents a company’s revenues, expenses, gains, and losses during the reporting period. This report aims to confirm whether the business is making a profit: it subtracts production and other operating costs from revenue to calculate profit. 4. Shareholders’ equity A shareholders’ equity statement, also known as a statement of changes in equity, is a financial statement showing changes in a company’s equity over a specific period. Shareholders’ equity represents the residual interest in the company’s assets after all liabilities are paid off. A shareholders’ equity statement provides information about how a company’s equity has changed over time and what factors have contributed to those changes. Investors can use it to assess a company’s financial performance and make investment decisions. The shareholders’ equity statement typically includes beginning equity, net income, issuance of stock, dividends paid, and ending equity. 5. Retained earnings statements A statement of retained earnings shows the changes in a company’s retained earnings over a specific period. Retained earnings represent the portion of a company’s net income that the company keeps instead of distributing as dividends to shareholders. The statement typically includes beginning retained earnings, net income, dividends paid, other adjustments, and ending retained earnings. The purpose of a retained earnings statement is to provide information about how a company has used its profits over time. For example, investors can use it to assess whether a company is reinvesting profits back into its business or distributing them as dividends to shareholders. It can also indicate a company’s financial health and stability. 6. ESG reporting ESG reporting involves disclosing a company’s environmental, social, and governance (ESG) performance to stakeholders. ESG factors are used to evaluate a company’s sustainability and ethical impact on society and the environment. ESG reporting aims to inform investors and other stakeholders about a company’s non-financial performance so they can make informed decisions. It can also help companies identify areas to improve their sustainability practices and enhance their reputation as socially responsible businesses. ESG reporting typically includes information on a range of topics such as climate change, energy efficiency, labor practices, human rights, diversity and inclusion, executive compensation, board composition and structure, anti-corruption policies, supply chain management, community engagement, and other issues that affect a company’s social and ecological footprint. There is no single mandatory standard for ESG reporting, but several frameworks are available to guide companies’ disclosures. Raw financial figures are difficult to interpret on their own, and the complexity of spreadsheets makes extracting insight from them even harder. Interactive financial reporting software was designed to help businesses obtain accurate financial information, and automated dashboard technology gives business users real-time insight into their financial situation for better decision-making. We have already discussed many typical financial reports in this article. 
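The statements described above rest on simple arithmetic relationships — the balance sheet identity (assets equal liabilities plus equity) and the retained-earnings roll-forward (beginning balance plus net income minus dividends). The short sketch below illustrates both with invented figures; the numbers and variable names are illustrative only and are not tied to any particular reporting standard.

```python
# Illustrative only: toy figures showing the arithmetic behind two common statements.

balance_sheet = {
    "assets": 500_000,        # cash, receivables, inventory, PP&E, investments
    "liabilities": 320_000,   # payables, loans, accrued expenses, deferred revenue
    "equity": 180_000,        # common stock + retained earnings + other comprehensive income
}

# Balance sheet identity: assets = liabilities + equity
assert balance_sheet["assets"] == balance_sheet["liabilities"] + balance_sheet["equity"]

# Retained earnings roll-forward: beginning + net income - dividends (+/- adjustments) = ending
beginning_retained_earnings = 120_000
net_income = 45_000
dividends_paid = 15_000
other_adjustments = 0

ending_retained_earnings = (beginning_retained_earnings + net_income
                            - dividends_paid + other_adjustments)
print(f"Ending retained earnings: {ending_retained_earnings:,}")  # 150,000
```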
The following are the visual representations of those reports. Financial KPI dashboard A robust financial dashboard with detailed KPIs is designed to maintain financial health. This dynamic financial reporting system provides tools to reduce inefficiencies and forecast accuracy while efficiently ensuring cash flows through the organization. It can also be called the ‘CFO cockpit’ and provides an easily accessible view of key economic metrics. Here is what you need as a Senior Decision Maker to understand workplace trends and make informed business decisions. Accurate vs forecast dashboard An accurate vs forecast dashboard is a tool used to monitor and compare actual performance against forecasted performance. The dashboard typically displays key metrics and KPIs (Key Performance Indicators) in an easy-to-read format, allowing users to identify discrepancies between actual results and forecasts quickly. In addition, the dashboard provides insights into revenue, costs, and net profits. Finally, it aims to provide accurate information about the organization’s performance by comparing actual results against forecasts. As a result, knowing this organization can improve areas where they are underperforming. Benefits of financial reporting We will examine how financial reporting is beneficial to businesses. Builds strategies and ensures profitability Financial analysis and reporting are crucial for creating a successful strategy to maintain profitability for the business. According to a Deloitte survey, 75% of the respondents said using financial statements is key to finding effective strategies to reduce costs. However, this kind of report becomes crucial for determining financial strength. Manages financial ratios The ratio is critical to any business’ finances and must be considered. The ratio is an example of the fine jiggling act the business must carry out to ensure the operation runs efficiently. The financial ratio helps companies understand and analyze the colossal financial data they are getting. A ratio provides data form and direction, enabling accurate comparisons between reporting periods. Visualize today’s financial graphs, and dashboards provide invaluable performance information in one click. The financial reports help to identify current trends, which helps organizations to identify weak areas. In this way, they enhance the overall growth of the business. Enhances working capital management Real-Time Financial Reporting assists managers in planning and managing current assets and achieving current obligations without causing underutilization, overdrafts, and loss. The government is also responsible for managing unsecured debts, primarily revolving credit accounts and other short-term lending instruments such as credit cards. Liability management is a crucial part of the financial health of the business. Credit cards, credit lines, Business loans, and credit extended from clients are manageable liabilities. Before submitting a business expansion loan application, utilizing financial report templates can help you assess and evaluate your finances, identify any existing liabilities that need to be reduced, and ultimately determine how viable the request will be. Enhances cash flow Cash flow is very important for businesses financial health. With the help of key performance indicators (KPIs), businesses can easily analyze cash flow to predict profits and losses. Who uses financial reporting and analysis? 
A wide range of stakeholders uses financial reporting and analysis in an organization, including: - Management: The management team uses financial reporting and analysis to make strategic decisions, monitor performance, and identify areas for improvement. - Investors: Investors use financial reporting and analysis to evaluate a company’s financial health and stability before making investment decisions. - Lenders: Lenders use financial reporting and analysis to determine a company’s creditworthiness before granting loans or lines of credit. - Regulatory institutions: Regulators use financial reporting and analysis to ensure companies comply with relevant laws, regulations, and accounting standards. - Employees: Employees may use financial reporting and analysis to understand the company’s financial performance and how their roles contribute to overall success. - Customers: Customers may also be interested in a company’s financial performance as it can impact their perception of the brand’s stability and trustworthiness. Financial reporting and analysis is a critical component of business operations that helps stakeholders make informed decisions based on accurate data about the organization’s finances. What are the three types of financial analysis? There are three main types of financial analysis: Horizontal Analysis: This type of financial analysis compares an organization’s financial performance over multiple periods by analyzing changes in its financial statements. It involves comparing line items on financial statements from one period to another, such as revenue figures from the current year to the previous year. Vertical Analysis: Vertical analysis compares different line items on a single financial statement to each other, expressed as a percentage of a selected base figure. Ratio Analysis: Ratio analysis involves using ratios and metrics to analyze an organization’s financial performance and operational efficiency. What are the four types of financial ratio analysis? The four types of financial ratio analysis are: Liquidity Ratios: Liquidity ratios measure a company’s ability to meet its short-term obligations. Investors can get an idea of the operational efficiency of the company. Examples are the current ratio, quick ratio, and cash ratio. Profitability Ratios: Profitability ratios measure a company’s profit relative to its revenue, assets, or equity. Examples of profitability ratios include gross margin, net profit margin, return on assets (ROA), and return on equity (ROE). Solvency Ratios: Solvency ratios measure a company’s ability to meet its long-term obligations. These ratios include the debt-to-equity ratio, debt-to-total-assets ratio, and interest coverage ratio. Valuation Ratios: Valuation ratios are used to analyze the attractiveness of investment in an organization. Examples of valuation ratios include price-to-earnings, price-to-book, and price-to-sale. In conclusion, financial reporting and analysis is an indispensable way of obtaining, understanding, and utilizing a company’s financial information. It offers incredible insights into the condition of a company, helping investors and stakeholders make more informed decisions. In addition, financial reporting is essential in calculating certain business characteristics, such as profitability ratios, liabilities to capital ratios, and growth performance. This post covered the definition, importance, objectives, types, and benefits of financial reporting and analysis. We hope it is a useful guide for you.
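To make the analysis types and ratio families above concrete, here is a brief, hedged sketch that computes one example from each: a horizontal (year-over-year) comparison, a liquidity ratio, a profitability ratio, a solvency ratio and a valuation ratio. All figures are invented for illustration; in practice they would come from audited financial statements.

```python
# Illustrative figures only — in practice these come from the financial statements.
current_year = {"revenue": 1_200_000, "net_income": 96_000, "current_assets": 300_000,
                "current_liabilities": 150_000, "total_debt": 400_000, "equity": 500_000,
                "share_price": 24.0, "earnings_per_share": 1.92}
prior_year = {"revenue": 1_000_000}

# Horizontal analysis: change versus the prior period
revenue_growth = (current_year["revenue"] - prior_year["revenue"]) / prior_year["revenue"]

# Liquidity: current ratio
current_ratio = current_year["current_assets"] / current_year["current_liabilities"]

# Profitability: net profit margin
net_margin = current_year["net_income"] / current_year["revenue"]

# Solvency: debt-to-equity
debt_to_equity = current_year["total_debt"] / current_year["equity"]

# Valuation: price-to-earnings
price_to_earnings = current_year["share_price"] / current_year["earnings_per_share"]

print(f"Revenue growth: {revenue_growth:.1%}")    # 20.0%
print(f"Current ratio: {current_ratio:.2f}")      # 2.00
print(f"Net margin: {net_margin:.1%}")            # 8.0%
print(f"Debt-to-equity: {debt_to_equity:.2f}")    # 0.80
print(f"P/E: {price_to_earnings:.1f}")            # 12.5
```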
<urn:uuid:a0d238f1-313e-4fbe-9832-b29fd7e8d0a2>
CC-MAIN-2024-38
https://www.erp-information.com/finance-reporting-and-analysis
2024-09-20T01:27:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00193.warc.gz
en
0.935191
3,205
3.21875
3
From storing medical records online to the rise of surgical robotics and video diagnostics, digital transformation is both revolutionising patient care and enabling health organisations to reduce the amount of time staff spend on documentation and data entry. Other developments likely to benefit the healthcare industry are augmented reality and virtual reality (e.g. surgeons being able to practise a surgery in a VR environment, using it as a training tool), artificial intelligence (e.g. diagnosis tools designed both for patients and doctors) and personalised medicines (e.g. where drugs can be tailored to an individual’s genetic code). In terms of digitisation, the private sector has already come a long way: there are currently a range of online services offering GP consultations, pharmacy services and advice. However, implementing digital transformation in the NHS has proved a greater challenge, and uptake has been slower. Below are five of the biggest challenges that need to be addressed to enable uptake in both public and private sectors: 1. Changing human behaviour Changing a piece of equipment or even software is relatively easy compared with persuading people to change the way they work and to take the time to learn how to use new systems. This process needs to be carefully managed to ensure that the change is as easy as possible for everyone to adapt to and that people are incentivised to make the change. People will also only use a new system if they see the gap that it fills or the efficiency it creates – these messages need to be clearly communicated. End-user engagement from an early stage is critical to ensure that the technology answers users’ needs and that the user interface is logical to them. 2. Lack of interoperability standards The lack of any agreed standards for interoperability for digital health systems creates much uncertainty for NHS procurers and suppliers alike. Integrations and interfaces are then unnecessarily complex and risky. Without prescribed or accepted interfaces and open-access initiatives, the industry risks walking into the creation of companies with data monopolies and systems in silos. 3. Reinventing the wheel After the perceived failure of a centrally procured software system in NPfIT, IT strategy in the NHS turned to locally led procurements. This is right to ensure that local needs are met, as these differ substantially in different areas of the country, but it has also led to many Clinical Commissioning Groups and Trusts trying to solve the same questions. A more joined-up approach would save everyone effort and spread best practice across the healthcare industry. 4. Stringent data protection laws GDPR is a challenge for all organisations which process a lot of personal data. The challenge is even greater in the NHS and its ecosystem due to its lack of centralisation, which means that there is a complex structure of data controllers and processors with many different policies, privacy notices and consents. The sensitivity of the data, and the fact that much of it is still paper-based, makes it harder to audit, control and access. The fundamental standards of protection for data in GDPR are not materially different from the Data Protection Act – the difference is in the rigour required to demonstrate compliance. 
Given the different approaches across healthcare organisations to obtaining consent for processing under the current regime, it will be a challenge to harmonise the approach to make it easier for patients to understand what is happening with their data. 5. Ransomware attacks Earlier this year, the worldwide ransomware outbreak WannaCry was the biggest cyber attack to have hit the NHS to date. In October, a government report stated that the NHS could have prevented the attack with ‘basic IT security’. The increasing sophistication of ransomware attacks is a threat for all industries with increasing reliance on and use of technology. Historically, cyber security has not been at the top of many organisations’ agenda. That needs to change. Finding staff with the right skill set, educating employees on safety measures, updating known software vulnerabilities and ensuring that systems and processes have a secure design are all critical to implementing more rigorous security measures to address the risks of cyber threats. Keeping abreast of updated technology and changing regulations is crucial to improve processes and to fit the needs of patients in the changing digital world. Both are trends here to stay, and which need to be tackled to keep healthcare in the twenty-first century. Sourced by Jocelyn Paulley, director at Gowling WLG
<urn:uuid:c41ca8a3-9915-43bf-b708-2e8f5abe5d13>
CC-MAIN-2024-38
https://www.information-age.com/top-5-biggest-challenges-digital-transformation-nhs-8676/
2024-09-20T00:16:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00193.warc.gz
en
0.954108
894
2.59375
3
Managing any organization may be characterized as a perpetual stream of problems that must be controlled and resolved. However, individuals frequently choose to implement solutions rapidly without first taking the time and effort to fully comprehend and assess the nature of the situation at hand. Thus, organizations invest a great deal of time, effort, and money without knowing precisely how the exercise will benefit them. The demand to precisely identify the organization's most important problem requiring a remedy is a genuine one. A company that can precisely describe its issues and effectively execute change on a regular basis will be an industry leader. Decades of research indicate that the human mind has at least 2 distinct methods for attempted Problem Solving. Both the person's current position and the surrounding environment will determine which strategy prevails. These 2 methods of Problem Solving are: - Automatic Processing—occurs when humans have no control over the processing and are unaware that it is taking place. - Conscious Processing—represents the portion or function of the brain that a person has control over. These 2 methods tackle issues differently and at different paces. A growing body of studies indicates that it is advantageous to distinguish between the 2 modes of thought. Structured Problem Solving is associated with the 2nd process, namely Conscious Processing. Structured Problem Solving entails constructing a logical argument that links observed facts to underlying causes and, eventually, a solution. The formation of an effective chain of clarity begins with a coherent statement of the issue. A quality Problem Statement should have the following five elements: - Importance - Problem-Solution Gap - Measurability - Neutrality - Scope Developing a Problem Statement increases the likelihood of maximizing the benefits of Conscious Processing and may also set the stage for inducing and subsequently evaluating an "Aha!" moment. Let's examine these components in further depth. Importance refers to the Problem Statement's capacity to identify a characteristic that is crucial to an organization and connect that feature to a well-defined and unique objective. This is only achievable if there is a direct link between the Problem Statement and the organization's larger mission and objectives. The temptation to focus on unimportant topics from the beginning should be avoided, and attention should concentrate on the essentials. A solid Problem Statement should include a cogent explanation of the Gap between the current circumstance and the desired outcome. When people have clear and easily comprehensible objectives in front of them, they are more focused and exert greater effort. A proper Problem Statement facilitates this concentration by defining the Gap that must be filled. Effective Problem Statements should quantify key factors, such as the objective, the current circumstance, and the gap. Quantification of a characteristic simply indicates that it has a clear direction, i.e., that more of it is either beneficial or detrimental. A good Problem Statement should retain Neutrality with respect to probable diagnoses or remedies. During problem formulation, as few assumptions about the origin of an issue should be made as is practically possible. Finally, a Problem Statement's Scope should be contained enough that the problem can be addressed quickly. Interested in learning more about the 5 Elements of a Problem Statement? 
You can download an editable PowerPoint on 5 Elements of a Problem Statement here on the Flevy documents marketplace. Do You Find Value in This Framework? You can download in-depth presentations on this and hundreds of similar business frameworks from the FlevyPro Library. FlevyPro is trusted and utilized by 1000s of management consultants and corporate executives. Here’s what some have to say: “My FlevyPro subscription provides me with the most popular frameworks and decks in demand in today’s market. They not only augment my existing consulting and coaching offerings and delivery, but also keep me abreast of the latest trends, inspire new products and service offerings for my practice, and educate me in a fraction of the time and money of other solutions. I strongly recommend FlevyPro to any consultant serious about success.” – Bill Branson, Founder at Strategic Business Architects “As a niche strategic consulting firm, Flevy and FlevyPro frameworks and documents are an on-going reference to help us structure our findings and recommendations to our clients as well as improve their clarity, strength, and visual power. For us, it is an invaluable resource to increase our impact and value.” – David Coloma, Consulting Area Manager at Cynertia Consulting “FlevyPro has been a brilliant resource for me, as an independent growth consultant, to access a vast knowledge bank of presentations to support my work with clients. In terms of RoI, the value I received from the very first presentation I downloaded paid for my subscription many times over! The quality of the decks available allows me to punch way above my weight – it’s like having the resources of a Big 4 consultancy at your fingertips at a microscopic fraction of the overhead.” – Roderick Cameron, Founding Partner at SGFE Ltd
<urn:uuid:cd9e8c95-0426-47cd-9ee1-01a92470d9db>
CC-MAIN-2024-38
https://globalriskcommunity.com/profiles/blogs/5-elements-of-a-quality-problem-statement
2024-09-09T03:10:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00293.warc.gz
en
0.933101
1,011
2.625
3
A staggering 68 percent of business leaders feel their cybersecurity risks are increasing, and it’s believed that cybercrime damage costs hit somewhere in the region of $6 trillion (US) in 2021 - that’s up from $3 trillion in 2015. That’s why data protection is a crucial strategy for modern businesses. As organizations increase the amount of data they store, the risk of cyber attacks also increases. Data protection helps mitigate the risk of a company’s sensitive and personal information being stolen from fraudulent activities such as hacking, phishing and identity theft. Unfortunately, data breaches can cause devastating damage to an organization, resulting in hefty fines, reputational damage, a decrease in sales, a loss of trust and legal penalties from governing bodies. The United States, for instance, follows a data privacy approach that is guided by various state laws and sector-specific privacy laws, meaning you have to understand the various regulations that apply to the states and industries you operate (or collect data) in. Are you looking to improve your data protection strategy and reduce your cybersecurity risk? To help, we’ve answered some of the most common data protection FAQs to help get you started. Question 1: What is data protection and why does it matter? In its simplest definition, data protection is a strategy that focuses on protecting a company’s data from data breaches and fraudulent activities, such as hacking, phishing, identity theft and other threats from external forces. Data protection mitigates risks and strengthens vulnerabilities through a variety of best practices, from employee training, encryption, data management, data backup and recovery, data loss prevention, and firewalls. There are two reasons why this is important. Firstly, because protecting this data is crucial to the seamless operations of your business, and, secondly, because when handling personal data your organization must comply with the data privacy regulations that apply to your business. Question 2: What data protection regulations do I need to comply with? The data protection regulations that your business is required to comply with depends on where you operate. Typically, if you offer goods or services, or if you monitor the behaviour of residents within a specific location, then you are required to comply with that jurisdiction's data privacy laws. For example, a company based in the US will still need to comply with the European Union's General Data Protection Regulation (GDPR) if they offer goods or services to EU residents or collect consumer data within the union. Some of the most prominent regulations to look out for include: - General Data Protection Regulation (GDPR) in the European Union - California Consumer Privacy Act (CCPA) in the US - Lei Geral de Proteção de Dados (LGPD) in Brazil - Personal Data Protection Act (PDPA) in Thailand There are also industry specific regulations that may apply to your business, such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA). Question 3: How do I gain visibility into where my data lives? Ever heard the popular business saying, you can’t manage what you can’t measure? Well when it comes to data protection we’ve changed that slightly - you can’t protect what you can’t see. Gaining visibility into where your data lives, how it’s being used and who has access to it is crucial to understanding your company’s data risk and building a data protection program. 
It’s for this reason why data discovery should be an integral component of your data protection strategy. You can learn about data protection in our blog, What is Data Discovery and Classification, and Why is it Important? Question 4: What is considered personal data? Personal data is typically referred to as personally identifiable information (PII), and the various legislations we discussed above set the rules and standards for how your organization can use and handle this data. PII includes directly identifiable data, such as names, addresses, telephone numbers, bank details and social security numbers, as well as information that can be linked together to identify an individual, such as an employee record number. All personally identifiable information must be stored and handled based on the regulations that apply to your business, including consumer information, employee information and transaction details. Question 5: What is data processing? Data processing refers to any operation which is performed on personally identifiable information. Typically, this is any step that your organization takes to collect and manipulate that data into meaningful information. Data processing is likely to involve various stages, such as collection, validation, sorting, storage, classification and reporting. Question 6: Our business was hacked, will we be fined? Not necessarily. The consequences of non-compliance with data privacy regulations can be eye watering. For example, a breach of GDPR can see organizations fined up to 20 million (euros) or 4 percent of their annual turnover, whichever is greater. Yet, despite this, data breaches aren’t 100 percent avoidable, even with the most robust measures in place. A fine is for noncompliance with data privacy regulations, not for the actual act of being breached itself. As long as you are compliant with the regulations, then your business should avoid a fine - but hopefully your data protection strategy will do enough to mitigate the risks of a breach in the first place. Question 7: Who is responsible for data protection in my business? The responsibility of data protection compliance lies with what is known as the “data controller”, which is the ‘person’ that collects and processes the data. This ‘person’ includes individuals, organizations and companies. You can appoint a data protection officer to ensure compliance with data privacy laws and you can also outsource your data protection to a managed security services provider (MSSP), but ultimately it is your business that will be liable for noncompliance. That’s why, no matter which route you take, it’s crucial that you ensure you have a robust data protection strategy in place and the person, or company, you hire to protect your data is both experienced and skilled in data protection. Question 8: Are there technology solutions to manage our data protection strategy? There are a number of technology solutions that can improve your data protection strategy. For example, here at Cavelo we help enhance companies data protection programs through continuous and automated data discovery and classification. After all, the first step to securing data is to first understand where it’s stored and how it’s being used. Are you interested in learning more? Watch the Cavelo virtual demo today and find out how our innovative platform can help your business gain complete visibility into its sensitive data.
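As an illustration of what automated data discovery and classification involves at the simplest level, the sketch below scans text for patterns that look like common PII (an email address and a US Social Security number) and tags each finding with a classification label. It is a toy example with hypothetical patterns and labels — real discovery tools, including the vendor platform mentioned above, go far beyond regular expressions and cover structured stores, file shares and cloud services.

```python
import re

# Hypothetical classification rules: label -> pattern. Real tools use many more signals.
PII_RULES = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_value) pairs for every PII-like pattern found."""
    findings = []
    for label, pattern in PII_RULES.items():
        for match in pattern.findall(text):
            findings.append((label, match))
    return findings

sample = "Contact jane.doe@example.com; SSN on file: 123-45-6789."
for label, value in classify(sample):
    print(f"{label}: {value}")
# email_address: jane.doe@example.com
# us_ssn: 123-45-6789
```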
<urn:uuid:5ce5779a-6ba8-4ac6-bc61-d5403a1c8a4e>
CC-MAIN-2024-38
https://www.cavelo.com/blog/data-protection-faq
2024-09-12T14:55:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00893.warc.gz
en
0.935089
1,397
2.625
3
How Does Cybersecurity Affect Everyone? In today’s digital world, cybersecurity is a big deal for everyone. We all use technology a lot, so keeping our online data safe is super important. This article will talk about how cybersecurity affects us all. We’ll look at the dangers of cyber attacks and why keeping our data safe is key. - Cybersecurity is a crucial issue that affects all of us in the digital age. - Cyber threats, such as data breaches and identity theft, pose significant risks to individuals and businesses. - Protecting personal information and maintaining online privacy are essential for safeguarding digital lives. - Cybersecurity measures are crucial for securing corporate networks, customer data, and critical infrastructure. - Developing cybersecurity awareness and best practices is key to empowering users and staying ahead of evolving cyber threats. The Pervasive Threat of Cyber Attacks In today’s world, cyber threats are a big worry for everyone. They affect individuals, businesses, and governments. Cyber attacks have become more common, making us all more vulnerable. Criminals use digital tools to cause harm and make money. Read More: Top 10 Most Common Types of Cyber Attacks Understanding Cybercrime and Its Global Reach Cybercrime knows no borders. It uses the internet to reach across the globe. It includes stealing data, identity theft, and financial fraud. As cybercriminals get better, their threats affect people and countries everywhere. Identifying Potential Vulnerabilities in Daily Life We use digital devices and the internet every day, often without thinking about the risks. Our info and connected devices can be at risk of cyber attacks. It’s important to know these risks and take steps to protect ourselves. - Understand the growing threat of cyber attacks and their global reach. - Identify potential vulnerabilities in daily digital activities. - Stay informed about the latest cyber threats and best practices for personal cybersecurity. Read More: What is Identity and Access Management? Safeguarding Personal Information and Online Privacy In today’s digital world, keeping our data safe is key. We use technology a lot, so it’s vital to protect things like our bank info, social media, and personal details. Keeping our data private is a basic right we should fight for. By being proactive, we can lower the chances of data breaches and identity theft. This helps keep our online lives safe. Securing Your Data Here are some ways to keep your info safe: - Use strong, unique passwords for all accounts. Think about using a password manager for more security. - Turn on two-factor or multi-factor authentication to make your accounts even safer. - Think twice before sharing personal stuff online, especially on social media. - Check and update your online account privacy settings often to control who sees your data. Enhancing Online Privacy It’s not just about keeping your data safe. Here are ways to boost your online privacy: - Use a trusted VPN to encrypt your internet and hide what you’re doing online. - Use incognito or private mode to reduce what websites and search engines know about you. - Clear your browser’s cache, cookies, and history often to leave less digital trace. - Look into privacy-focused search engines and messaging apps that care about your data. By following these steps, you can better protect your info and improve your online safety. This helps keep your online life secure and reduces the risk of cyber threats. How does cybersecurity affect everyone? 
Cybersecurity Impacts on Individuals and Families Cybersecurity is now a big deal for everyone, not just big companies or government groups. In our digital world, individuals and families must worry about it. We face risks like identity theft and the need to protect our data. With threats like hacking, phishing, and malware on the rise, we all need to be careful online. Cybersecurity changes how we handle money, talk to family, and use important services. Protecting Personal Data from Malicious Actors Cybersecurity is key to keeping our data safe from bad guys. Things like social security numbers and bank info are targets for cybercriminals. This can lead to identity theft and financial fraud. To stay safe, we must use strong passwords, turn on two-factor authentication, and keep our software and devices updated. These steps help protect our info and lower the chance of falling victim to cybercrime. The effect of cybersecurity on us can’t be ignored. By knowing the risks and protecting ourselves, we all help make the internet safer. This keeps our personal lives secure against new cyber threats. Read More: What is cybersecurity service management? The Role of Cybersecurity in Business Operations Cybersecurity is key to a business’s success and ongoing operations. It’s not just for individuals. Companies must protect their networks and customer data to keep trust, avoid financial losses, and keep operations running smoothly. Read More: What is the Main Role of Cyber Security? Securing Corporate Networks and Customer Data In today’s digital world, all businesses face cyber threats. Hackers are always finding new ways to get into networks and steal sensitive info like customer data and financial details. To fight these threats, companies need strong cybersecurity in business to protect their corporate network security and customer data protection. Good cybersecurity for businesses means: - Keeping software and systems up to date to fix weaknesses - Using strong access controls and multi-factor authentication - Telling employees how to spot and report suspicious activities - Backing up data and testing disaster recovery plans - Working with cybersecurity experts to handle threats quickly By being proactive with cybersecurity in business, companies can keep their valuable assets safe. This helps maintain customer trust and ensures the business can keep going. Read More: Who needs cyber security? Cybersecurity and Critical Infrastructure Protection In today’s world, cybersecurity is key to protecting more than just our data or networks. It’s vital for keeping our society’s critical infrastructure safe. Things like power grids, transportation, and healthcare depend on complex tech that must be shielded from cyber threats. Keeping critical infrastructure safe is key to keeping our society stable and ensuring a secure digital future. If these systems get hacked, it can cause big problems. It can disrupt services, threaten public safety, and even put our national security at risk. We need strong cybersecurity to protect the infrastructure we all rely on. Securing critical infrastructure means tackling both physical and digital risks. 
This means: - Implementing advanced threat detection and response capabilities - Regularly updating and patching systems to address known vulnerabilities - Fostering strong collaboration between government agencies, private sector organizations, and cybersecurity experts - Educating and empowering employees and the public on cybersecurity best practices By focusing on protecting critical infrastructure, we can make our communities more resilient online. This is a vital step in keeping our tech-dependent society safe. Cybersecurity Awareness: Empowering Users In today’s digital world, we face many cybersecurity threats. It’s key to teach users about online safety. This way, we can help everyone stay safe online. Online Safety Best Practices for All Ages Everyone needs to know how to stay safe online. This means using strong passwords, spotting phishing scams, and keeping personal info safe. We all play a part in fighting cyber threats. - Use unique, complex passwords for each account. - Turn on two-factor or multi-factor authentication for extra security. - Watch out for suspicious emails or links that ask for personal info. Check if they’re real before you click. - Make sure your software and systems are updated with the latest security fixes. - Check and update your privacy settings on social media and online platforms to control what you share. By following these safe online habits, we all help make the internet safer. We empower ourselves and others to be confident and strong in the digital world. Emerging Cyber Threats and Future Challenges The digital world is always changing, and we must watch out for new cyber threats. Cybercriminals use new tech to break into systems, steal data, and harm our infrastructure. It’s important to know about these threats to keep our online world safe. Staying Ahead of Evolving Cybersecurity Risks Cybersecurity risks keep changing, so we need to be proactive. We face threats like ransomware, phishing, malware, and social engineering. To fight these threats, we need strong security steps. This includes using advanced malware protection, encrypting data, and training people on security. Keeping up with new threats means watching trends like IoT devices and supply chain attacks. By staying updated and using the latest security tips, we can protect ourselves better. This helps us deal with new cyber threats. Stopping cyber threats is a constant challenge. We need to work together as individuals, companies, and governments. Sharing information, creating new security tools, and teaching people about cybersecurity help us protect our digital world. This way, we can make the internet safer for everyone. Collaborative Efforts: Governments, Organizations, and Individuals Dealing with cybersecurity challenges needs a team effort from governments, groups, and people. Governments are key in making strong laws, enforcing cyber rules, and working with the private sector for better security. Groups must also put in place strong security steps to keep their digital stuff and customer info safe. But, each person also has a big part to play in keeping things secure online. By using strong passwords, turning on two-factor authentication, and being careful with emails, we all help make the internet safer. Together, we can make our digital world stronger, keep important info safe, and make the internet a safer place for everyone. Because cybersecurity is so connected, different groups are joining forces to spread the word, share tips, and invent new ways to fight threats. 
This team effort is key to tackling the dangers of cybercriminals, groups backed by governments, and other bad actors. By working together, we can build a strong shield against cyber threats. This protects important systems, our info, and the digital world we all use. How does cybersecurity affect everyone? Cybersecurity is vital for everyone in today’s digital world. It protects our data and keeps us safe online. It’s important for individuals, families, businesses, and critical systems. Understanding the threats and the need for cybersecurity helps us keep our digital lives safe. What are the potential vulnerabilities in daily life? Today, we face risks like cybercrime, data breaches, and identity theft. Knowing about these threats helps us see why cybersecurity is so important. How can I safeguard my personal information and online privacy? Keeping our personal info and online privacy safe is crucial. We can do this by securing our data, managing social media wisely, and using strong security steps. These actions help protect our online privacy and keep our digital identities safe. How does cybersecurity affect individuals and families? Cybersecurity affects our personal lives by protecting our data and preventing identity theft. Using strong passwords and malware protection is key. It keeps our digital lives safe for everyone at home. How does cybersecurity impact business operations? Businesses need strong cybersecurity to keep their networks and customer data safe. This builds trust, protects against financial losses, and keeps operations running smoothly. Good cybersecurity is a must for staying competitive online. How does cybersecurity affect critical infrastructure? Cybersecurity is vital for protecting things like power grids, transport systems, and healthcare facilities. Keeping these systems safe is key to our society’s stability. We need to use cybersecurity to make these systems more resilient and protect them from threats. How can I improve my cybersecurity awareness and online safety? Learning about cybersecurity is crucial for a safer digital world. We can improve our safety by using strong passwords, avoiding phishing scams, and keeping up with security news. A cyber-aware culture helps us all stay safer online. What are the emerging cyber threats and future challenges? New cyber threats come with new technology. We need to stay ahead by using strong security, building cyber resilience, and fighting malware. Facing these challenges head-on is key to a secure digital future. How can governments, organizations, and individuals collaborate to enhance cybersecurity? Improving cybersecurity needs teamwork from governments, groups, and people. We must work together to create strong security, spread awareness, and make the internet safer for everyone. This teamwork helps us protect our digital lives and build a strong cyber future.
<urn:uuid:a755e67f-eb6a-4107-92c5-b8ceea16c544>
CC-MAIN-2024-38
https://arksolvers.com/how-does-cybersecurity-affect-everyone/
2024-09-16T10:26:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00593.warc.gz
en
0.917796
2,596
3.25
3
As more people access the Internet in more diverse ways, the established measurement systems for advertising and other analytics are becoming less effective. A new study by GlobalWebIndex, a market research firm, indicates that this problem is getting worse: as more people from around the world come online and access the internet in more diverse ways, the established measurement systems overestimate the population of Americans and Europeans online and undercount users from the developing world, skewing ad budgets and causing content created to cater to advertisers to miss its mark. The fundamental flaw is that mechanisms developed for an earlier, desktop-based era of web measurement—cookies, IP addresses, and other signals sent by your device in the background—are not fit for today's purposes. The report from GlobalWebIndex says these methods are so flawed that they overestimate the number of people online from the developed world and miss an estimated billion people elsewhere. GlobalWebIndex estimates that more than 400 million people worldwide log on using virtual private networks (VPNs) that obscure their true location. Another 400 million or so remain uncounted because they share devices, and more than 150 million aren't counted because they access the web from mobile devices only, avoiding traditional tracking techniques such as cookies. As a result, online measurement reports tend to overstate the importance of developed, mature markets, even as their share of the global online population shrinks. The US is one example of an overcounted market. That means advertisers, lured by the internet's promise of precise targeting, are spending money on ads optimized for, say, a British audience, when many of those users may be sitting in Indonesia or Vietnam. But measurement in these parts of the world is even harder. Companies ranging from Facebook and Verizon to dozens of smaller ad-tech firms hawking tracking technology are working to get around this problem by installing ever more intrusive surveillance mechanisms. The new mantra is identification: companies are trying harder to keep users logged in. That way, it doesn't matter whether the user is on a mobile phone, a desktop, a tablet or all three, so long as she logs in, identifying herself. It also bypasses the need for cookies, which don't work with mobile apps. And it then doesn't matter whether the connection is via a VPN, because users who self-identify don't need to be tracked through their internet connection. More and more services require a log-in today, helped along by third-party companies that plug Google's and Facebook's identity credentials into apps, and it's easy to see why Facebook has launched an ad platform based on the promise of identification. This is also the subtext to GlobalWebIndex's study. It repeatedly cites the failures of "passive measurement techniques," implicitly suggesting that active approaches—user-reported, such as login data—are the way forward.
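As a rough illustration of why login-based identification is seen as more robust than cookie-based measurement, the sketch below counts the same hypothetical page views two ways. The events, IDs and field names are invented for illustration; this is not GlobalWebIndex's or any vendor's actual methodology.

```python
# Hypothetical page views: one person on several devices, plus two people sharing a device.
events = [
    {"cookie_id": "c1", "account_id": "alice"},  # Alice on her laptop
    {"cookie_id": "c2", "account_id": "alice"},  # Alice on her phone (different cookie)
    {"cookie_id": "c3", "account_id": "alice"},  # Alice on a tablet (another cookie)
    {"cookie_id": "c4", "account_id": "bob"},    # Bob and Carol share a family PC,
    {"cookie_id": "c4", "account_id": "carol"},  # so they share one cookie
]

# Passive, cookie-based counting: Alice is counted three times, Bob and Carol only once.
print("Cookie-based 'audience':", len({e["cookie_id"] for e in events}))  # 4

# Login-based counting: one entry per authenticated person, regardless of device,
# shared hardware, or whether the connection arrived through a VPN.
print("Login-based audience:", len({e["account_id"] for e in events}))    # 3
```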
<urn:uuid:969aa889-a259-4b74-90cc-f26530a8c841>
CC-MAIN-2024-38
https://www.nextgov.com/modernization/2014/11/americans-share-online-global-population-decline/98513/
2024-09-20T03:16:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00293.warc.gz
en
0.935806
615
2.578125
3
Electricity is fundamental to our society. As climate change becomes more severe and demand for clean energy increases, the future is the electrification of everything and along with it, the need for reliable energy. The U.S. infrastructure spans over a vast 200,000 miles and inspecting all of it is a time-consuming and high-risk process that often calls for hanging from helicopters or climbing tall towers. It is inefficient, costly, and dangerous. According to T&D World, utility line work is one of the top 10 most dangerous jobs in America. Around 30 to 50 workers in every 100,000 are killed on the job every year. That’s more than twice the fatality rate of police officers and firefighters. According to the American Society of Civil Engineers (ASCE), the majority of the nation’s grid is aging, with some components over a century old — far past their 50-year life expectancy — and others, including 70% of transmission and distribution (T&D) lines, are well into the second half of their lifespans. Facing extreme weather events caused by climate change, the U.S.’s aging electric infrastructure in many areas is not equipped to handle volatile conditions – this has played a significant role ranging from destructive summer wildfires in California to deadly winter storms in Texas. Grid outages like these are highly disruptive, expensive, and threaten lives. One way to address and prevent damage is through routine inspections and maintenance. Utilities have incorporated drones to photograph T&D lines and assets, but have struggled to inspect, analyze and process the images quickly enough to gain precise insights and make proactive maintenance decisions. Conventional inspection methods require manually and subjectively reviewing thousands of images, which can take weeks. Hitachi Vantara has launched Hitachi Image Based Inspections, an image analytics-based inspection software solution that identifies defects in T&D line assets quickly and accurately. The image-based inspection solution leverages pre-built or custom-built machine learning models to automate defect assessments by instantly processing and analyzing thousands of images in seconds. Hitachi Image Based Inspections replace dangerous, expensive, and time-consuming manual inspections by using the human-in-the-loop approach when required. This results in a significantly faster end-to-end process for image identification, cataloging and health evaluation. Here’s how it works: - High-resolution photos of T&D assets are captured by helicopter, ground-based and pole-based photography from multiple angles and are imported for ingestion and preprocessing. - Hitachi’s AI-based software identifies, inspects, and analyzes the images using machine-learning models and computer vision algorithms to determine defects and failure potential, including information about where the assets are located, for potential equipment defects, such as with insulators or dampers. - Based on a sample size of 5000 images, on average: - the model accuracy for assets like dampers, pins, polymer insulators, glass insulators, wooden poles, clamps and copper pins is about 81%. - the model accuracy for defect detection for items such as bent or damaged dampers, porcelain or ceramic disk damage (such as flashing), insulator or wood pole caps, etc. is approximately 79%. - Next, we leverage human expertise by bringing in subject matter experts into the analysis loop to validate the AI responses for various conditions including location, defect type, defect severity and time stamp. 
Over time, the human-in-the-loop inputs create feedback that further trains (or retrains) the AI models to more accurately identify defects. Hitachi Image Based Inspections generate insights closer to real-time, enabling businesses to make smarter decisions faster, prioritize the most pressing maintenance issues and prevent problems before they can become costly and dangerous. Asset managers can implement a priority-based maintenance strategy to improve reliability, providing operations and maintenance savings. This solution helps increase worker safety, grid reliability and resiliency through more frequent inspections and reduces threat of fire and risk to the public. Plus, Hitachi Image Based Inspections is a fully integrated and scalable solution because it works with existing performance management platforms. Dashboards calculate asset risk scores, predict asset failure, suggest mitigating action, and can create proactive maintenance plans. Looking beyond power lines, AI-driven automated inspections are a growing class of solutions applicable in any industry where inspections are expensive, high-risk and critical to business continuity such as on-shore or offshore oil rig platforms, wind turbines, hydro dams, utility poles, transformers, cellular towers and antennas, wayside transportation assets, mining and heavy manufacturing equipment, and more. Stay tuned to learn about more use cases in the near future. Be sure to check out Insights for perspectives on the data-driven world. Shamik leads industry solutions marketing for Hitachi Digital Services, with deep expertise in energy, transportation and manufacturing. As well as +25 years working in semiconductors, renewable energy, IoT and data management and analytics.
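The inspection workflow described above — batch inference over asset photos, a confidence threshold that routes uncertain cases to subject-matter experts, and expert corrections fed back as new training data — can be sketched generically as below. This is an illustration of the human-in-the-loop pattern, not Hitachi's implementation; the dummy model, labels and threshold are placeholders.

```python
import random

REVIEW_THRESHOLD = 0.80   # below this confidence, route the image to a human expert

class DummyDefectModel:
    """Stand-in for a trained defect classifier; returns a label and a confidence score."""
    LABELS = ["no_defect", "damaged_damper", "flashed_insulator", "wood_pole_decay"]
    def predict(self, image):
        return random.choice(self.LABELS), random.uniform(0.5, 1.0)

def expert_review(image, suggested_label):
    """Placeholder for a subject-matter expert confirming or correcting the model."""
    return suggested_label  # in reality, a person inspects the image and corrects the label

def inspect_batch(images, model):
    validated, feedback = [], []
    for image in images:
        label, confidence = model.predict(image)
        if confidence < REVIEW_THRESHOLD:       # low confidence -> human in the loop
            label = expert_review(image, label)
            feedback.append((image, label))     # later used to retrain the model
        validated.append((image, label))
    return validated, feedback

results, retraining_queue = inspect_batch([f"tower_{i}.jpg" for i in range(5)],
                                          DummyDefectModel())
print(len(results), "assets assessed;", len(retraining_queue), "images queued as retraining data")
```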
<urn:uuid:d9e6baf0-3fbe-4d5a-9f60-d1c6c0e154a2>
CC-MAIN-2024-38
https://www.hitachivantara.com/en-anz/blog/how-data-ai-can-help-make-utility-line-inspections-safer
2024-09-09T05:59:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00393.warc.gz
en
0.93763
1,027
2.578125
3
As the number of Internet users and the systems we rely on rises, we continue to see a corresponding increase in security breaches and other cybersecurity concerns. Many companies try to minimize these risks by implementing antivirus scanners and other tools. The problem is that they are still at risk – and the biggest risk comes from weak passwords. Many of the major security threats that harm a business have one factor in common: a hacker gaining access to systems by cracking a user's password. Hackers can get into systems again and again largely because users often don't pick strong enough passwords.

Even what we might perceive to be a strong password may not be as secure as we think. Sure, when you enter a new password, many websites show a bar that indicates how strong it is, but these so-called strong passwords are becoming easier to guess as more websites use the same requirements. Think about the last time you changed your password. You were likely told to key in a password longer than 6-8 characters, with at least one capital letter, one number, and a special character like '!' or '$'. Many major systems have these exact, or at least very similar, requirements for password setting. However, if this is the norm, and you use passwords like this too often, then they likely aren't as secure as you might believe them to be.

The reason is the way hackers usually capture passwords. The most common method is brute force – taking a username and then trying every password combination until one works. There are programs you can download from the Internet that try thousands of passwords a second or more, and many now include special characters, numbers, and capital letters, which makes finding passwords even easier.

How do I know if my password is secure?

In an effort to showcase how insecure some passwords are, Microsoft Research (MSR) and an intern from Carnegie Mellon University developed a password guesser called Telepathwords. The way it works is you enter the first few letters of your password and the system guesses the next. It uses common letters and combinations to help gauge the effectiveness of a password. For example, if your password begins with the letter 'v', it will tell you that 'I', 'S' and 'A' are the most common letters to follow. If the next letter of your password isn't one of these three, there is a good chance it is more secure. If the second letter is one of these three, then your password is less secure. This may sound a little complicated, but you should check out the system here. It is eerie how accurately it often predicts the next letters and characters, and it is a good tool for deciding whether to create a more robust password. You don't have to worry about testing your password out, either, as Microsoft has noted that it doesn't track keystrokes, so your password should remain secure.

How do I create a stronger password?

Ask 10 experts and you will likely get 10 different answers as to what makes a strong password. Here are three different ways to create secure passwords:

- Use an algorithm – The easiest way to do this is to take the first letter of a saying and add a number before or after. You can also create a saying and take the first letter of each word, then add the first letter of the website, followed by the last, and then a number. This method works best when you access a large number of websites regularly; it helps you remember a password for each site without having to write them down.
- Use a sentence or saying – For systems that allow spaces in your password, try using a random saying like 'Dogs like pudding cups'. Sayings like this are harder to crack, largely because they include spaces and are longer than usual.
- Use an acronym – Come up with a saying that describes you, e.g., 'I've worked at a gas station for 20 years', and take the first letter/number of each word to create 'Iwaagsf2y'. This gives you an easy-to-remember password that can be adapted for other sites.

Regardless of what type of password you develop, you should be aware that even strong passwords can still be cracked with enough persistence. So be sure to change passwords on a regular basis and never use the same one twice. This will limit the chances of hackers being able to access your other accounts. If you are looking for more ways to secure your systems, we can help, so get in touch with us today. Call 214-297-2100
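To put the brute-force numbers mentioned above in perspective, here is a minimal sketch that estimates worst-case search time for a few password policies. The guessing rate and character-set sizes are illustrative assumptions, not figures from this article.

```python
# Rough, worst-case estimate of an exhaustive brute-force search for different
# password policies, assuming a fixed guessing rate (an assumed figure).

def brute_force_seconds(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Time to try every combination of the given length at the given rate."""
    return (alphabet_size ** length) / guesses_per_second

GUESSES_PER_SECOND = 100_000  # assumed rate for an automated cracking tool

policies = {
    "8 chars, lowercase only (26 symbols)": (26, 8),
    "8 chars, mixed case + digits + symbols (~72 symbols)": (72, 8),
    "22-char passphrase, lowercase + spaces (27 symbols)": (27, 22),
}

for name, (alphabet, length) in policies.items():
    years = brute_force_seconds(alphabet, length, GUESSES_PER_SECOND) / (60 * 60 * 24 * 365)
    print(f"{name}: roughly {years:,.2f} years worst case")
```

Longer passphrases dominate simply because the search space grows exponentially with length, which is why the sentence and acronym methods above hold up well even against much faster guessing rates.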
<urn:uuid:c984133e-fdb7-476b-b139-98c699c1b645>
CC-MAIN-2024-38
https://www.axxys.com/blog/password-may-not-be-secure/
2024-09-10T11:21:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00293.warc.gz
en
0.951042
1,026
3.515625
4
Artificial intelligence, no longer confined to small research projects and blue-sky thinking, has established a solid and valuable presence in government IT portfolios. Federal AI teams are delivering improved detection of stock market misconduct, better intelligence interpretation and more accurate weather predictions. IT managers may think of new AI-based applications as just another app, but that could be dangerous. Using AI means using machine learning and neural networks, and these technologies can have a huge impact on both on-premises and cloud-based resources. Let's look at how AI can affect the main components in data centers: storage, network and compute.

Agencies Recognize the Need for Extra Storage Space

AI, machine learning and neural networks eat storage like crazy. Consider some of the big open-source data sets being used for ML: YouTube-8M, which has 350,000 hours of video; Google's Open Images, with 9 million images; and ImageNet, with 14 million images. ML tools will stress both the capacity and performance of storage systems. Data centers based on storage area networks with spinning disks have mostly given way to flash-based solid-state drive arrays, yet that may not be enough performance for demanding ML applications. IT managers looking for serious performance may wish to investigate the new Non-Volatile Memory Express–based storage arrays. Fortunately, NVMe is now becoming mainstream enough that most popular storage vendors are on top of it, including NetApp, HPE, Dell and IBM.

NVMe can be attached directly to systems and delivers performance by connecting to the PCIe bus. This allows every CPU core to talk directly to the storage system and take advantage of NUMA memory, eliminating the bottleneck of a controller and the single queue that comes with a traditional storage array. But attaching NVMe directly to a single server depends on the speed of that server, which may simply shift the bottleneck. IT managers also would be wise to investigate NVMe over Fabric SANs. These extend the speed of NVMe storage arrays across network fabrics, most commonly Ethernet and Fibre Channel. NVMe over Fabric delivers its best when paired with a high-speed backbone, which brings us to the next part of our data center equation: the network.

Why Agencies Are Switching to Spine-and-Leaf Architecture

High-speed data center networking functions are the basis for everything else: intersystem links, storage and reliable connectivity to customers. That means not only high-speed but also low-latency and low-loss networks. To deliver the performance needed for AI, IT managers should think about changes to both architecture and hardware. IT managers with traditional three-tier core/distribution/edge networks in their data centers should plan to replace all that gear, even without AI in the picture, with spine-and-leaf architecture. Changing to spine-and-leaf ensures every system in a computing pod is no more than two hops from every other system. Selecting 40-gigabit-per-second or 100Gbps links between leaf switches and the network spine helps reduce the impact of oversubscription when servers are commonly connected at 10Gbps to the network leaf switches. To really be on the cutting edge of performance, IT managers can aim for a 100Gbps fabric end to end, although some find that 10Gbps server connections occupy a price-performance sweet spot.
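To make the oversubscription point above concrete, here is a small illustrative calculation. The port counts and link speeds are assumed example values, not figures from the article.

```python
# Illustrative leaf-to-spine oversubscription for a spine-and-leaf design.
# Port counts and link speeds below are assumed example values.

def oversubscription_ratio(server_ports: int, server_speed_gbps: float,
                           uplinks: int, uplink_speed_gbps: float) -> float:
    """Ratio of server-facing bandwidth to spine-facing (uplink) bandwidth on one leaf."""
    return (server_ports * server_speed_gbps) / (uplinks * uplink_speed_gbps)

# 48 servers at 10 Gbps on a leaf switch, with 4 x 40 Gbps uplinks to the spine
print(oversubscription_ratio(48, 10, 4, 40))   # 3.0, i.e. 3:1 oversubscribed

# The same leaf with 4 x 100 Gbps uplinks instead
print(oversubscription_ratio(48, 10, 4, 100))  # 1.2, much closer to non-blocking
```

The lower the ratio, the less servers have to contend for uplink bandwidth during the heavy east-west traffic that distributed ML workloads tend to generate.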
When a network supports high-speed NVMe over Fabric storage, IT managers have another option for notching up speeds to match the demands being made by ML models: remote direct memory access (RDMA) combined with lossless Ethernet. NVMe over Fabric can run over standard Ethernet, using Transmission Control Protocol to encapsulate traffic. However, NVMe over Fabric storage delivers even lower latency when server network interface controllers (NICs) are replaced with RDMA NICs (RNICs). By offloading work from the CPU and bypassing the OS kernel, network stack and disk drivers, RNICs supercharge performance over traditional architectures. The lossless Ethernet side of the equation is provided by modern high-performance network switches that can compensate for oversubscription, prioritize RDMA traffic and manage congestion end to end within the data center.

IT Managers Must Consider GPUs Carefully

With high-speed networking in place and high-speed storage systems ready to roll, IT managers are poised for the last part of the equation: computing power. Start researching AI and ML, and you may discover that your old servers are not powerful enough; you may need to invest immediately in graphics processing units to handle the load. In truth, moving to GPUs will give the best results in many cases, but not all the time. For IT managers with extensive experience in traditional servers who have large server farms already deployed, adding GPUs can be an expensive choice. The key point here is parallelism: the requirement to run multiple streams at the same time, combined with memory use. GPUs are great at parallel operations, and mainstream ML tools are especially efficient and high-performing when they can run on these GPUs.

That said, all this performance comes at a cost, and GPU upgrades don't accomplish anything if your developers and operations teams aren't actually running the processor-intensive parts of their ML models on them. That's the big difference between GPUs and storage and network upgrades, which deliver better performance for everything running in the data center, all the time. IT managers should plan their investments carefully when it comes to GPUs and make sure that workloads are heavy enough to justify investing in this new technology. It's also worthwhile to look at the major cloud computing providers, including Amazon, Google and Microsoft. They already have the GPU hardware installed and ready to go, and will be happy to rent it to you through their cloud computing services.
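As a minimal sketch of why that parallelism matters, the following times a large matrix multiplication, a stand-in for the arithmetic inside ML models, on whichever device is present. It assumes PyTorch is installed and is an illustration rather than a benchmark methodology.

```python
import time
import torch  # assumes PyTorch is available

# Pick a GPU if one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Two large random matrices; multiplying them exercises massively parallel arithmetic.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
elapsed = time.perf_counter() - start

print(f"{device.type}: 4096 x 4096 matmul took {elapsed * 1000:.1f} ms")
```

Runs like this, repeated across millions of training steps, are where GPUs earn their cost; if a workload never exercises this kind of parallel math, the upgrade sits idle.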
<urn:uuid:350cc91c-320d-4580-ba55-927f1426a634>
CC-MAIN-2024-38
https://fedtechmagazine.com/article/2022/09/how-prepare-federal-network-ai
2024-09-11T16:01:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00193.warc.gz
en
0.933648
1,311
2.671875
3
In today's dynamic cybersecurity landscape, Peer-to-Peer (P2P) Virtual Private Networks (VPNs) have gained popularity for a decentralized approach that promises improved privacy and efficiency. Businesses need a fast, secure, reliable way to connect employees to their company's networks and applications from anywhere in the world and from any device. However, VPNs can be risky if not designed and implemented correctly, as they serve as a remote path into your network. With the wide variety of VPN products and types available, it's crucial to be aware of the legal and security risks associated with different types of VPNs and choose the right solution for your business.

Here's What You Need to Know About Decentralized VPNs

P2P VPNs, or peer-to-peer virtual private networks, are a type of VPN service that differs from standard VPNs. In a P2P VPN, a network of user-operated nodes is created, where each participant acts as both a client and a server. This means that users can access and provide resources to other users on the network. While this model may seem efficient, it also opens doors to significant cybersecurity concerns and legal issues. Since each user acts as both a client and a server, it can be challenging to ensure that all nodes on the network are secure and trustworthy. Additionally, using P2P VPNs can potentially violate laws related to intellectual property, privacy, and data protection. Overall, while P2P VPNs offer some unique benefits, they also come with significant risks and challenges that users should carefully consider before using them.

The Risks for Host Computers

When you participate in a peer-to-peer (P2P) virtual private network (VPN), your device becomes a node in a larger network of devices that communicate with each other directly. This decentralized network means there is no central server or authority, and each device is responsible for routing traffic to other devices in the network. While P2P VPNs offer several advantages, such as increased privacy, better performance, and reduced costs, they also have some inherent security risks.

One of the main security risks of P2P VPNs is that the decentralized nature of the network makes it difficult to implement strong security measures. Unlike traditional VPN services with dedicated security teams and centralized servers that can monitor and filter traffic, P2P VPNs rely on the collective efforts of all the devices in the network to provide security. This means that if one device is compromised, it can potentially compromise the security of the entire network. Another security risk is that because each device in the network is responsible for routing traffic, every device becomes more vulnerable to cyber threats like malware and hacking attempts. When your device becomes a node in a P2P VPN network, it becomes more visible to potential attackers, and they can potentially exploit vulnerabilities in the network to gain access to your device and steal your personal information.

To mitigate these security risks, it is important to choose a reputable P2P VPN provider with a good track record of implementing strong security measures. It is also essential to keep your device and software up to date with the latest security patches and to use strong passwords and encryption. By taking these precautions, you can enjoy the benefits of a P2P VPN while minimizing the security risks.
Securing Your VPN from Cyber Threats

- Deploy software patches and security configurations to VPNs and remote devices.
- Reboot workstations every 1-2 days to ensure updates are synced.
- Remember that cyber actors have ramped up complex social engineering and phishing attacks since the mass shift to remote work.
- Enforce device health checks such as updated endpoint protection, OS updates, etc.
- Alert employees to an expected increase in phishing attacks.
- Implement MFA on all VPN connections to increase device security.
- Only grant access to required files and systems.
- Note that limited VPN connections can impact IT security personnel's ability to perform routine cybersecurity tasks.
- Ensure IT personnel are prepared to handle remote access cybersecurity tasks.
- Select the strongest device encryption available.
- Test VPN limitations to prepare for mass-usage bandwidth requirements.
- Disable local VPN network usage to prevent split tunneling.

Legal Implications and Liability Issues

When you use a peer-to-peer (P2P) virtual private network (VPN), you essentially share your IP address with other users. Unfortunately, if one of these users conducts illegal activities while using your IP address, you could be held liable for their actions and face legal scrutiny. It is common for individuals to find themselves in such situations, making this a critical concern for P2P VPN users. Therefore, it is crucial that you take this into account before using a P2P VPN to ensure that you are not putting yourself at risk. To avoid some of these issues, you could outsource your VPN to ensure it is secure and implemented correctly.

Bandwidth and ISP Limitations

When it comes to P2P VPNs, it's essential to consider the potential bandwidth and ISP limitations they pose. P2P VPNs can significantly impact your internet speeds, leading to slower connections. They may also cause you to exceed your ISP data cap, which incurs extra charges and can lead to legal issues or service interruptions if it violates your ISP's terms of service. While P2P VPNs may seem appealing due to their ability to hide your internet activity and IP address, they come with risks and legal complexities that may not be worth it. The exposure to cyber threats, legal ramifications, and bandwidth issues are significant drawbacks that users must consider. Therefore, opting for traditional VPN services that offer robust security and legal compliance is often the safer route to safeguard your digital activities.

What Are the Next Steps?

Decentralized VPNs, or P2P VPNs, promise enhanced privacy but pose security and legal risks. Because users act as both client and server, they face cybersecurity concerns and potential legal scrutiny, and bandwidth limitations may lead to slower speeds and ISP issues. Reach out to iCorps to learn more about outsourcing your VPN to ensure it is secure and properly set up.
<urn:uuid:b6dca998-5763-4ad2-8ebd-11462e1dc0b7>
CC-MAIN-2024-38
https://blog.icorps.com/understanding-decentralized-vpns
2024-09-15T07:11:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00793.warc.gz
en
0.94517
1,290
2.71875
3
Diversity, equity and inclusion (DEI) initiatives should be a core value in every organization and can have a lasting impact on company culture. Organizations have the power to influence and promote a more open and inclusive society. To understand the importance of DEI initiatives, it helps to grasp what each of these terms means, beyond recognizing them as buzzwords.

The Racial Equity Tools Glossary defines diversity as "all the ways in which people differ," and it encompasses all the different characteristics that make one individual or group different from another. It is all-inclusive and recognizes everyone and every group as part of the diversity that should be valued. A broad definition includes not only race, ethnicity and gender (the groups that most often come to mind when the term "diversity" is used) but also age, national origin, religion, disability, sexual orientation, socioeconomic status, education, marital status, language, and physical appearance.

Diversity alone as a strategy is not enough for organizations to be successful. Equity is closely related and refers to "the fair treatment, access, opportunity, and advancement for all people, while at the same time striving to identify and eliminate barriers that have prevented the full participation of some groups," as defined by the Independent Sector. The last aspect of DEI is inclusion, and the Racial Equity Tools Glossary describes inclusion as "authentically bringing traditionally excluded individuals and/or groups into processes, activities, and decision/policy making in a way that shares power."

Understanding the value of DEI initiatives

The US population, and as a result the nation's workforce, is becoming increasingly diverse. According to the Bureau of Labor Statistics, the white labor force population is projected to decline from near 85% in 1994 to 77% in 2024, while the minority working population is projected to increase from 15% to 23%. DEI tools and programs give employees the ability to be themselves at work without fear, creating a sense of belonging that translates into positive outcomes in many areas of the organization. Below are some of the benefits of prioritizing diversity and inclusion in the workplace.

- Increased engagement - When employees feel represented, they are naturally more satisfied with their employer and become more engaged at all levels. While diversity pertains to characteristics inherent to the employee, such as race, age or gender, inclusion relates to the lived experiences of employees, how those experiences are valued, and amplifying the voices of everyone in the workplace. When employees feel heard and included, they are more likely to engage with the organization in a meaningful way. It is not enough for organizations to be diverse if they are not inclusive, understanding that each person experiences the workplace differently. This idea also extends to employee retention, because when employee belonging is prioritized through diversity and inclusion in the workplace, turnover is reduced.
- Increased innovation - For an organization to be successful, it needs talent from diverse backgrounds with different ideas to bring to the table. True creativity is fostered where different worldviews and skills comingle, and increased creativity, in turn, leads to greater innovation. Organizations that prioritize DEI initiatives will always be more effective and adaptable, outperforming organizations that do not invest in these initiatives. The bottom line is that in rapidly changing industries, being a thought leader is important, and it's impossible to be a thought leader if everyone's thoughts are the same.
- Positive company reputation - DEI impacts how an organization is perceived by more than potential hires. Potential and existing clients want to do business with organizations that understand how vital these initiatives are. It sends a powerful message when clients see an organization prioritizing DEI initiatives in a sincere way. Additionally, having a diverse workplace allows organizations to reach a wider client base and have an increased understanding of client needs.

How your organization can prioritize DEI

To benefit from DEI initiatives, organizations must have real, substantial commitment and empathetic leadership. For positive, lasting change to occur, DEI should be a continuous effort from leaders at all levels, not just a single, one-time initiative. Empathetic leaders who listen and take the time to understand employees build the foundation of inclusivity they want within their company. Additionally, diversity and inclusion best practices suggest leaders set actionable, measurable goals, making their expectations for the company clear. Tracking data on diversity, recruiting and retention is a good step toward compiling concrete information relating to DEI.

When embarking on DEI initiatives within an organization, leaders must be conscious and aware of implicit bias and the tricky nature of biases in the workplace. Sometimes recognizing that implicit bias exists is the hardest part, as our brains may quickly make judgements about people or situations without our realizing it. The first step for leaders is to acknowledge and overcome their own biases, fostering a culture where their employees can do the same. Everyone has biases, and the only way to stop them from negatively affecting a diverse workplace is to take strides to recognize and change those thoughts.

Utilize data in DEI

Having a data-driven approach is critical for successful DEI initiatives, with the data organized in a meaningful way to help organizational leaders understand their employees. DEI tools gather and analyze relevant, real-time DEI data and provide organizations with information about their employees such as ethnicity, gender, age and more. Utilizing these tools can aid in recruiting a diverse workforce, help gauge promotion metrics, measure diversity in leadership, and manage engagement with employees. Overall, leadership must realize that DEI can't be an afterthought and must be ingrained into the long-term goals and continuous efforts of the organization. Creating a culture of respect, openness and belonging so that all employees feel empowered to be themselves and contribute their ideas in the workplace will only serve to benefit the organization in the long run.
<urn:uuid:29140002-12e6-4554-9df0-47d23a2070f9>
CC-MAIN-2024-38
https://www.cognizant.com/us/en/insights/workday/the-value-of-dei-in-the-workplace-wf2288432
2024-09-15T06:20:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00793.warc.gz
en
0.957636
1,195
3.15625
3
Working At Full Power: Data Centers in The Era Of AI

If ever there was a moment in which technology defined the cultural zeitgeist, it's now. Artificial intelligence (AI) innovation has shaken every industry and has altered, or will alter, how we work and live in fundamental, profound, and likely irreversible ways. Notably, Bill Gates has asserted that AI is the most important and revolutionary innovation in 30 years, comparing today's AI tech race to the emergence of graphical user interfaces in the early 1980s, mobile phones, and the internet itself. It's exciting, disruptive, and a little bit scary. Of course, along with the increased adoption of AI tools, new challenges have emerged, particularly in the ways we store, transmit and process data, and in our capability to do so.

Data Centers Working Overtime

Unsurprisingly, AI applications are very power-intensive. In particular, deep learning models lead to higher processing requirements for data centers because training and executing AI models relies on substantial computational power. Running these applications demands advanced hardware such as GPUs (specialized electronic circuits that accelerate graphics and image rendering) and TPUs (circuits designed to accelerate AI and machine learning workloads). Traditional data centers are designed with 5 to 10 kilowatts per rack as an average density; the advent of AI now requires 60 or more kilowatts per rack. Moreover, AI applications generate far more data than other types of workloads, which requires significant amounts of data center capacity. New data centers must be built with a great deal more power density; that's one part of enabling AI. Current data centers are adapting to these changes, increasing their capacities by implementing optimized interconnection, compute and storage solutions, something some legacy and most on-premises data centers would have trouble accomplishing at the scale needed to keep up with the latest tools. Energy-intensive GPUs and TPUs give off so much heat that enhanced environmental controls, including liquid cooling solutions, can be required. This issue of heat is both a technical consideration and an environmental one.

Increased Benefits of Colocation

In the past, in the same way big banks have big vaults, big companies had gigantic data centers that were designed for their operations. The advent of cloud computing (AWS, Google Cloud, Microsoft Azure, etc.) created a new utility for enterprises to use services on demand, as opposed to hosting them on-premises or buying "seats." Some industries are still very traditional: insurance companies, banks, and healthcare companies often hold their servers very close and own them due to data security and privacy constraints. But even these companies have started relying more on SaaS and have become more comfortable using the cloud as well as third-party data centers for an increasing range of services. Companies had to solve for a self-serve digital economy when the pandemic took off, resulting in rapidly increased migration of workloads from private data centers to the cloud and, more recently, to a multi-cloud architecture. This transition has also led to hybrid models, wherein a company has some applications that reside in the data center and other applications in their private cloud. One role of modern colocation data centers, then, is providing the conduit between the private and the public clouds.
For companies using many AI tools, colocated data centers present a far more efficient option than the on-site data centers of yore: they provide robust connectivity options and low-latency access to the types of powerful computing resources on which these applications depend for real-time processing, reducing data transfer time and accelerating time to cloud. And let's not forget scalability. Colocation data centers offer the sheer space, power, cooling capability and infrastructure to allow companies to expand AI usage as their business needs change. Ultimately, the results for enterprises are increased AI performance, reduced costs, greater sustainability, smaller carbon footprints and greater flexibility on the whole as more of their workloads become AI-driven.

A Look Ahead

In my view, the optimal topology for an enterprise includes having your IT infrastructure adjacent to the cloud, so you have the capability to query that cloud (for storage, for analytics, for AI) at your fingertips, with near real-time latency and minimal data transfer costs. That's one of the reasons we're seeing more distributed or hybrid cloud architectures as well; companies can have instantaneous compute resources closer to their end users while using increasingly more data, due largely to AI dependence.

Fundamentally, however, companies should look at their data center infrastructure with an eye toward future-proofing and preparedness. We are entering a business world in which ever more processes must, by necessity, be AI-supported, data-driven, and operating as efficiently as possible in periods of economic uncertainty and pervasive climate change. Ultimately, every business is, at this point, in the process of hybridization, existing on a continuum between completely private cloud and completely public cloud, and enterprise leaders must ensure that their data and cloud infrastructure transforms with the times, placing them wherever they need to be within that continuum, lest they fall behind.

Note: This article was previously published by the Forbes Technology Council and can be found here.
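As a rough footnote to the rack-density figures cited above, the sketch below shows what the jump from traditional to AI-era densities means for how many racks a fixed power budget can feed. The 1 MW hall budget is an assumed example; only the per-rack densities come from the article.

```python
# How many racks a fixed IT power budget supports at traditional vs. AI-era densities.
HALL_BUDGET_KW = 1_000          # assumed usable IT power for one data hall (1 MW)
TRADITIONAL_KW_PER_RACK = 7.5   # midpoint of the 5-10 kW/rack range cited above
AI_KW_PER_RACK = 60             # AI-era density cited above

print(f"Traditional racks: {HALL_BUDGET_KW / TRADITIONAL_KW_PER_RACK:.0f}")  # ~133
print(f"AI-era racks:      {HALL_BUDGET_KW / AI_KW_PER_RACK:.0f}")           # ~17
```

The same hall that once held well over a hundred racks supports roughly an eighth as many AI racks, before even counting the extra cooling overhead that liquid-cooled GPU and TPU hardware brings with it.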
<urn:uuid:1c13db7c-e168-45fa-aae2-6dbb1207d7b9>
CC-MAIN-2024-38
https://www.coresite.com/blog/working-at-full-power-data-centers-in-the-era-of-ai
2024-09-15T05:53:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00793.warc.gz
en
0.955819
1,073
2.625
3
It's an unavoidable truth of information technology that the operators and users are sometimes at odds with each other. Countless stories, comics, and television shows have driven home two very unpleasant stereotypes: the angry, unhelpful system administrator who can't wait to say "no!" to a user request, and the clueless, clumsy user always a keystroke away from taking down the entire infrastructure. There is a kernel of truth to them. While both resource providers and resource users may want the same end result, the successful completion of computational tasks, they have conflicting priorities when it comes to achieving it.

System operators are tasked with keeping the resource available and performing for all users. This includes ensuring and enforcing proper resource allocation. It also means that any changes to the system have to be thoroughly vetted to ensure that they do not negatively impact the availability and performance of the resource. As a result, the operators have gained a monopoly on configuration and deployment. All of this is true for any IT resource, so what makes this relevant to readers of The Next Platform? Simply put, high-performance computing is a more sophisticated endeavor than general-purpose computing. HPC, by its very nature, is inclined to be experimental and push boundaries. Thus the users' need to try experimental software packages is directly at odds with the operators' need to prevent those packages from taking the cluster offline.

Traditionally, HPC systems and operations have been designed around monolithic applications that are compute-heavy and often latency-sensitive. Weather modeling and computational fluid dynamics are two of the classic cases that still embody this paradigm today. The approach in this model is to throw as much homogeneous hardware at the problem as the budget will allow in order to increase the simulation resolution or shorten the time-to-results. These traditional applications fit well with the resources that have been developed to support them. Being computationally bound, they are scheduled by the number of CPUs and the walltime requested.

Over the years, a new model of HPC has begun taking root. The new class of HPC applications is often smaller, with different resource requirements. Some jobs are data-intensive and require fast access to local storage in order to perform computation against the data. Other jobs may require access to remote network resources, making network bandwidth the constraining factor. A third type of job is dynamic in its resource needs, potentially changing the core count or walltime by large amounts depending on the input parameters. None of these applications are well served by scheduling systems that depend on the up-front request of fixed CPU and walltime. Furthermore, the varying nature of the "secondary" resource requirements (e.g., network bandwidth and local IOPS) leaves jobs susceptible to interference and competition from other jobs on the same machine.

In order to provide better support for this new class of HPC application, several projects are being developed. These projects make use of Linux container (LXC) features to allocate and enforce process-level resource utilization beyond CPU and memory, as well as to provide process isolation and application portability. Containers rely on the host kernel and thus are lighter weight than full-fledged virtual machines. Docker is the most well-known container platform, and it has seen wide adoption among hyperscalers.
Although its primary use is in powering scalable webservices, Docker offers some features that are compelling to HPC shops. Using the kernel cgroups feature to allocate and enforce resources means jobs can be scheduled to minimize or eliminate contention for non-CPU resources. Additionally, since the container is an isolated environment in which the user code runs, users can bundle the versions of applications and supporting libraries specific to the job in question. Thus, there's no need for the resource operators to worry about conflicting MPI libraries. The container format also lowers the barrier to using federated resources, which have historically suffered from a lack of application and library standardization.

The National Energy Research Scientific Computing Center's Shifter and the Berkeley Lab-developed Singularity project are two containerized HPC approaches currently in use. Researchers from the University of Edinburgh and the University of St. Andrews present a third project in the HPC Docker ecosystem: cHPC. Like Shifter and Singularity, cHPC provides a mechanism by which HPC resource providers can make use of container technology to support what the authors call "second generation" HPC applications. cHPC also provides a telemetry layer that combines physical status (process placement, memory and CPU consumption, I/O activity, network activity, et cetera) and logical status (e.g., whether the job is running, idle, checkpointing, restoring, or in an error state). Combining these statuses allows operators and users alike to see a holistic view of jobs and resources, solving what the authors describe as an asymmetry of information.

Containerized HPC projects are an attempt to eliminate the conflict between the concerns of HPC resource providers and HPC resource users by enabling awareness of more resources and eliminating the need for operators to monopolize application deployment. Because these containers can be used alongside "first generation" HPC jobs in traditional schedulers, we do not expect to see the traditional HPC jobs adopt containers with any haste. However, the use of containers does solve real problems that many HPC shops face, and we will watch their adoption with keen interest.
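As a minimal sketch of the cgroup-based enforcement described above, a job launcher might cap the CPU and memory available to one containerized task as follows. It assumes Docker is installed; the image name and command are placeholders rather than a real site's job environment.

```python
import subprocess

def run_containerized_job(image: str, command: list[str], cpus: float, memory_gb: int) -> int:
    """Launch one containerized task with explicit CPU and memory limits.

    Docker translates the --cpus and --memory flags into Linux cgroup limits
    on the host, which is what keeps one job from starving its neighbors.
    """
    docker_cmd = [
        "docker", "run", "--rm",
        f"--cpus={cpus}",          # cap CPU time via the cgroup CPU controller
        f"--memory={memory_gb}g",  # cap RAM via the cgroup memory controller
        image,
        *command,
    ]
    return subprocess.run(docker_cmd, check=False).returncode

# Example: a 4-core, 16 GB slice for a hypothetical solver image bundled by the user.
exit_code = run_containerized_job("example/solver:latest", ["./run_case", "input.dat"],
                                  cpus=4, memory_gb=16)
print("job exited with", exit_code)
```

Because the application and its libraries travel inside the image, the operators never have to install or vet the user's experimental stack on the host, which is exactly the conflict the article describes.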
<urn:uuid:d18867f3-fe80-4459-a225-977a176a2b69>
CC-MAIN-2024-38
https://www.nextplatform.com/2017/03/02/solving-hpc-conflicts-containers/
2024-09-16T12:21:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00693.warc.gz
en
0.935373
1,115
2.5625
3
MIT is the trusted choice for IT and website support in Toronto and GTA locations, helping shield your critical assets from being compromised. We have dedicated our Toronto IT support to keeping personal and business networks from falling victim to cyber threats, breach attempts and unauthorized access.

The invention of email has revolutionized the way we communicate. It's instant, efficient and simple. You can access your email at the touch of a button in the palm of your hand. There are numerous advantages to electronic mail, but what happens when cyber criminals try to use email to steal money and tarnish your reputation?

What Are Spam Emails?

'Spam' mail is the equivalent of unwanted flyers placed on your doorstep. Unwanted mail can be annoying, but electronic spam mail can be dangerous. An innocent misclick on an unwarranted email can flood your device with malware, viruses and phishing threats that can endanger your business' reputation. Here are some examples of spam mail:

➤ Unknown attachments
➤ Messages encouraging you to download unknown software
➤ Requests for your personal information
➤ Emails telling you that your computer has a virus and to click on the attached link to remove the threat
➤ Advertisements that encourage gambling, pyramid schemes or online shopping
➤ Emails urging you to donate to charity

Why Are Spam Emails Dangerous?

A sender might claim to be you in order to use your web platform to infiltrate the networks of unsuspecting cyber victims. Examples include: "Congratulations! You have won a free cruise in the Bahamas! Fill out the below form and claim your reward! Hurry because this prize expires in 24 hours!". The cyber attacker tries to get the reader to hand over important personal details. Unknown attached files embedded with viruses can easily be clicked on in a spam email. Once you download the file or click on the unsecured link, hackers can steal your personal data and use it to their advantage. Clicking on a malicious link can even result in a large financial deficit. Some of the most severe spam contains child pornography, offensive pictures and abusive content. Aside from potentially getting you into severe legal predicaments, some illegal emails will also request credit card information. Don't fall victim to credit card theft and a damaged reputation.

How Can You Protect Your Personal Data From Being Breached?

The good news is that you can take simple steps to ensure that your network does not get infiltrated by malicious hacking. MIT's cybersecurity experts can help keep your sensitive information only in your hands. Here is how you can prevent spam emails from damaging your network:

➤ Set up multiple email addresses: a private one, used solely for personal correspondence, that should never be shared on online forums (if you need to publish it to a website, attach it as a graphic file rather than a link); and a public one, used when you need to register on public online forums or subscribe to mailing lists. The public address is your temporary address and has to be changed frequently, as spammers are much more likely to find it than the private one.
➤ Keep your browser updated: only use the latest version of your browser with all the appropriate security components installed.
➤ Never respond to any spam: the more you respond, the more you will receive.
➤ Use anti-spam filters: click only on emails from verified providers. Choose our cyber services that will shield you from incoming spam emails.
Protect Your Money and Reputation – Hire Our Toronto IT Support

Don't let your business be a negative headline. Your peace of mind shouldn't be compromised by malicious viruses, malware and ransomware. MIT offers all-in-one cybersecurity solutions to protect your personal information from getting stolen. Don't let your business be a target for malicious hacking. Hire our online professional IT services today by contacting us here.
<urn:uuid:56b4b411-1b15-4038-a98a-a7f26a85cbf1>
CC-MAIN-2024-38
https://mitconsulting.ca/blog/dont-let-spam-mail-jeopardize-your-business-in-2021-protect-your-personal-data/
2024-09-17T17:48:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00593.warc.gz
en
0.925612
820
2.640625
3
The agency is exploring how thermal energy might one day enable 3D vision.

Even without drivers, most autonomous and semi-autonomous systems need light to navigate in the dark. But in dangerous, unlit environments, visible headlights can draw attention to those vehicles that would rather go unseen during missions. The Defense Advanced Research Projects Agency aims to "eliminate this vulnerability" through its newly unveiled Invisible Headlights program. Through it, the Pentagon's research arm aims to explore the potential for thermal energy to enable 3D vision and ultimately help autonomous and semi-autonomous systems more safely navigate at night and in foggy or underground environments.

"We're aiming to make completely passive navigation in pitch dark conditions possible," Joe Altepeter, program manager in DARPA's Defense Sciences Office, said in a recent announcement. Every animate and inanimate thing on the planet gives off thermal energy. With that in mind, DARPA said it wants to find out and quantify precisely what information can be captured from the tiniest bit of thermal radiation, and then work to "develop novel algorithms and passive sensors to transform that information into a 3D scene for navigation." The hope is that thermal energy might help the systems visualize their surroundings without emitting any light.

Altepeter said today's autonomous systems can't make sense of dark domains without radiating some sort of signal, including light beams and laser pulses, which the agency now seeks to avoid. "If it involves emitting a signal, it's not invisible for the sake of this program," Altepeter said. DARPA plans to release a broad agency announcement with more information on the effort at some point in March, and will also hold a proposer's day on March 16 in Arlington.
<urn:uuid:1b91f525-197c-4b55-bf87-c51860fbb3f4>
CC-MAIN-2024-38
https://www.nextgov.com/emerging-tech/2020/03/darpa-wants-produce-invisible-headlights/163474/
2024-09-17T17:56:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00593.warc.gz
en
0.904018
380
3.234375
3
The Avyu Virus: Impact on an Infected PC

What is the Avyu virus? It is data locker ransomware that interferes with computer systems in order to corrupt valuable data and extort ransom payments from victims. The Avyu virus is a new version of STOP ransomware. It drops an executable file on the system to start the infection process. Once the executable is running on the system, the Avyu virus can complete several attack stages. This kind of malware primarily aims to find files of particular types on the PC so it can encrypt them with a strong cipher algorithm. It is known that the Avyu virus uses a strong algorithm to encrypt targeted user data. That's why all corrupted files remain inaccessible. Additionally, they appear with the extension .avyu appended to their names. The Avyu ransomware then tries to extort a ransom fee for their decryption. For this purpose, it drops a ransom message and loads it on the screen.

Avyu Virus Distribution Methods

The malicious samples that trigger the Avyu ransomware virus may be delivered via a Word document attached to an email or sent via a message on any social media channel. It is also possible for the document to be added to a ZIP archive file. Once the file is started on the PC, it may ask you to enable macros, which will start the Avyu infection. The document may also be designed to display a system notification that misleads victims into clicking the "OK" button in order to open the content of the file. Another commonly used trick of ransomware and malware dissemination is a spoofed link that redirects to a crafted web page that can automatically download the payloads onto the PC. The sender may impersonate well-known companies and services. However, malicious traits can almost always be revealed. Online scanning services like ZipeZip (a free online archive extractor and malware scanner) and VirusTotal (a free service that analyzes suspicious files and URLs) can help detect potential malware infections.

Impact of the Avyu Ransomware Virus

The so-called Avyu virus has been detected in active attack campaigns. It is based on the code of the infamous ransomware family STOP. The code of this threat is designed to tamper with essential system settings in order to reach targeted file types and encode them with a sophisticated cipher algorithm. The beginning of the attack is marked by the execution of the Avyu ransomware payload file. Soon after this event occurs, the threat can pass through several stages. At first, it triggers the creation of additional malicious files that support all of the following infection operations. The ransomware could either create or drop them on the system. Typically, threats like Avyu ransomware are designed to place malicious files in the following system folders: %Roaming%, %Windows%, %AppData%, %Local%, %Temp%.

Afterwards, Avyu ransomware starts executing them in a predefined order. As a result, some essential system settings are heavily modified and misused by the cryptovirus. Registry keys stored in the Registry Editor, legitimate processes and other major components that control regular system performance could also be affected. Following system corruption, the Avyu virus uses a built-in encryption module to complete its main purpose – data encryption. Since this module is designed to transform the code of targeted files with a sophisticated cipher algorithm, the files remain unusable until their code is reverted back to its original state.
All files that are renamed with the extension .avyu have been encrypted by the ransomware. Unfortunately, these could be any files that store valuable data of yours. Following data corruption, the Avyu STOP virus drops a text file which contains a ransom message; this file is generated by the ransomware's engine and appears on the screen, as its purpose is to blackmail you into paying hackers a ransom fee.

For the sake of your security, it is advisable to refrain from contacting the hackers. They may attempt to trick you once again by sending an ineffective decryption tool or additional malware. Furthermore, you will only encourage them to continue their vicious operations if you pay the demanded ransom. Security experts advise victims to remove malicious Avyu ransomware files and wait patiently for a free decryption solution. The good news is that Emsisoft has released a free STOP ransomware decryption tool. It has not yet been updated to support the Avyu STOP virus strain, but chances are that it will be very soon. Don't lose faith: remove Avyu ransomware, back up your .avyu files and wait until the decryptor is updated.

The Avyu ransomware is an offensive threat that endangers overall PC performance. For the sake of your PC's security and your privacy, the threat should be removed completely. The removal guide below contains detailed steps for removing Avyu ransomware. There are also several alternative data recovery approaches that may restore important .avyu files.

WARNING! Manual removal of the Avyu ransomware virus requires familiarity with system files and registries. Removing important data accidentally can lead to permanent system damage. If you don't feel comfortable with manual instructions, download a powerful anti-malware tool that will scan your system for malware and clean it safely for you.

Avyu Ransomware Virus – Manual Removal Steps

Start the PC in Safe Mode with Networking

This will isolate all files and objects created by the ransomware so they can be removed efficiently. The steps below are applicable to all Windows versions.

1. Hit the WIN Key + R.
2. A Run window will appear. In it, write msconfig and then press Enter.
3. A Configuration box shall appear. In it, choose the tab named Boot.
4. Mark the Safe Boot option and then tick Network under it.
5. Apply -> OK.

Show Hidden Files

Some ransomware threats are designed to hide their malicious files in Windows, so all files stored on the system should be made visible.

1. Open My Computer/This PC.
2. Windows 7 – Click on the Organize button – Select Folder and search options – Select the View tab – Go under Hidden files and folders and mark the Show hidden files and folders option.
3. Windows 8/10 – Open the View tab – Mark the Hidden items option.
4. Click Apply and then the OK button.

Enter Windows Task Manager and Stop Malicious Processes

1. Hit the following key combination: CTRL+SHIFT+ESC.
2. Get over to Processes.
3. When you find a suspicious process, right-click on it and select Open File Location.
4. Go back to Task Manager and end the malicious process. Right-click on it again and choose End Process.
5. Next, go to the folder where the malicious file is located and delete it.

Repair the Windows Registry

1. Again, type the WIN Key + R key combination.
2. In the box, write regedit and hit Enter.
3. Press CTRL + F and then write the malicious name in the search field to locate the malicious executable.
4. In case you have discovered registry keys and values related to the name, you should delete them, but be careful not to delete legitimate keys.

WARNING! All files and objects associated with the Avyu ransomware virus should be removed from the infected PC before any data recovery attempts. Otherwise the virus may encrypt restored files. Furthermore, a backup of all encrypted files stored on external media is highly recommended.

SpyHunter is a Windows application designed to scan for, identify, remove and block malware, potentially unwanted programs (PUPs) and other objects. By purchasing the full version, you will be able to remove detected malware instantly.

Data Recovery Options

1. Use existing backups.
2. Use professional data recovery software: Stellar Phoenix Data Recovery – a specialist tool that can restore partitions, data, documents, photos, and 300 more file types lost during various types of incidents and corruption.
3. Use a System Restore Point – Hit the WIN Key – Select "Open System Restore" and follow the steps.
4. Restore your personal files using File History – Hit the WIN Key – Type restore your files in the search box – Select Restore your files with File History – Choose a folder or type the name of the file in the search bar – Hit the "Restore" button.
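In support of the "back up .avyu files" advice above, here is a minimal sketch that copies every encrypted file to external media before any removal or recovery attempt. Both paths are placeholders to adjust for the machine in question, and it assumes a Python interpreter is available.

```python
import shutil
from pathlib import Path

# Copy every encrypted (.avyu) file to a backup location before attempting recovery.
# Both paths are placeholders: point SOURCE_ROOT at the infected data and
# BACKUP_ROOT at external media.
SOURCE_ROOT = Path("C:/Users")
BACKUP_ROOT = Path("E:/avyu_backup")

for encrypted_file in SOURCE_ROOT.rglob("*.avyu"):
    # Mirror the original folder structure under the backup root.
    destination = BACKUP_ROOT / encrypted_file.relative_to(SOURCE_ROOT)
    destination.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(encrypted_file, destination)
    print("backed up", encrypted_file)
```

Keeping an untouched copy of the encrypted files means that if a future decryptor update supports this strain, you still have everything it needs, even if the cleanup steps above change or delete files on the infected system.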
<urn:uuid:0112cd20-4c39-43fc-b35b-301a5dcaf907>
CC-MAIN-2024-38
https://bestsecuritysearch.com/remove-avyu-ransomware-virus/
2024-09-19T00:11:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00493.warc.gz
en
0.89407
1,716
2.640625
3
What is Artificial Intelligence (AI)?

As a field of computer science, Artificial Intelligence (AI) is the science of making a computer, a computer-controlled robot, or software think intelligently, in a similar manner to intelligent humans. The aim of artificial intelligence (AI) technologies is to replicate, reproduce and even surpass those abilities and tasks that are generally thought to be intelligent and would usually be performed by a human. These tasks are typically classified into three categories, namely mundane, formal, and expert:

- Mundane or general tasks – Routine tasks which require common sense reasoning. For example, natural language, perception, robotics, vision, speech.
- Formal tasks – Tasks which require logic and constraints to function. For instance, mathematics, gaming (chess, backgammon, checkers), verification.
- Expert tasks – Tasks which require high analytical and thinking ability, tasks that only professionals can do, such as consulting, financial analysis, engineering, medical diagnostics, scientific analysis.

When conducting these tasks, AI systems will demonstrate some of the following behaviours associated with human intelligence: knowledge representation, learning, manipulation, motion, pattern recognition, planning, problem-solving, reasoning and visual perception, and to a lesser degree, creativity and social intelligence.

Types of artificial intelligence

Broadly, artificial intelligence can be split into two types:

- Narrow Artificial Intelligence – Also known as 'weak' AI, this is the more common type and refers to systems designed to carry out a single task intelligently. Narrow AI has a vast number of emerging applications, for example, organising personal and business calendars, responding to simple customer-service queries, helping radiologists to spot potential tumours, flagging inappropriate online content, and detecting wear and tear in elevators.
- Artificial General Intelligence – Also known as AGI or 'strong' AI, this is very different from weak AI. Strong AI includes systems or devices that can theoretically handle any task. These systems or devices have adequate intelligence to identify solutions to unfamiliar problems. Artificial general intelligence is the type of adaptable intellect found in humans. More commonly seen in the movies, AGI is mainly theoretical at this present time. Strong AI technologies are still in very early stages of development; valid examples of strong AI don't currently exist.

Many of the current applications of AI are made possible through 'machine learning'.

What is machine learning?

A subset or application of artificial intelligence, machine learning is the application of AI which provides systems with the ability to learn and improve from experience without being given a specific set of instructions. Machine learning emphasises the development of computer programs which can access data, using it to learn for themselves. The foundations of machine learning (ML) lie in statistics. The learning process starts with sample data, also known as "training data". Using learning algorithms and statistical models, machine learning looks for patterns in the data to produce a mathematical model that makes improved decisions in the future. The aim is to allow computers to learn, unsupervised, and adjust their actions accordingly.
For instance, ML applications can:

- Determine, after reading a piece of text, whether the author is making a complaint or a purchase
- Find other tunes to match the mood after listening to a piece of music
- Identify images and categorise them according to the elements they contain
- Recognise faces, speech and objects with a high degree of accuracy
- Translate significant volumes of text in real time

Machine learning itself is reliant upon an interlinked system of algorithms, referred to as neural networks.

What are neural networks and deep learning?

Neural networks are the key to the process of machine learning. Vaguely inspired by biological neural networks, neural networks are interconnected algorithms. These algorithms, or neurons, "learn" to perform tasks by considering examples, generally without being explicitly programmed. The algorithms that make up a neural network feed data into one another. They are trained to carry out specific activities and tasks by adjusting the importance attributed to the input data as it passes through the layers. Deep learning is a subset of machine learning which makes use of neural networks. Deep learning is where neural networks are expanded into multiple interconnected networks with a large number of layers, trained using large volumes of data. The ability of computers to carry out tasks such as speech recognition and computer vision has been enabled by developments in deep neural networks.

How is artificial intelligence (AI) used in business?

Today AI is ubiquitous. AI is used to make online purchase recommendations, by virtual assistants such as Google Assistant and Amazon's Alexa to understand what you say, to recognise people, places and objects in a photo, to identify spam, and to detect credit card fraud. At least 30% of companies globally will use AI in at least one portion of their sales processes.

In business, AI is already widely used in automation, data analytics, and natural language processing. Automation reduces repetitive or even dangerous tasks. With data analytics, businesses gain insights never before possible, while natural language processing enables intelligent search engines, chatbots, and better accessibility for customers. Globally, with the help of these three fields of AI, businesses today are leveraging AI technologies to optimise their operations, improve efficiencies and increase profits.

Artificial intelligence and machine learning have become the latest tech buzzwords everywhere. Since the first piece of AI, the artificial neuron, developed in 1943 by Warren McCulloch and Walter Pitts, research in artificial intelligence has driven many technological advances, such as:

- Autonomous driving
- Chatbots and virtual agents
- Facial recognition
- Machine translation
- Pattern recognition
- Predictive analytics
- Suggestive web searches
- Targeted advertising
- Voice and speech recognition

Many provide solutions to a significant number of business challenges and complex, real-world problems, and are now commonplace. In the next several years, supply chain management is also poised to make significant AI-based advances, providing companies with accurate and comprehensive insight to monitor and improve operations in real time. Other areas where significant AI-based advancements are expected include healthcare, data transparency, and the security industries. For more AI use cases, see also Practical applications of artificial intelligence in business. See also What is Internet of Things (IoT)? and discover the business benefits of digitalisation.
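To make the neural-network discussion above concrete, here is a minimal sketch of the kind of artificial neuron McCulloch and Pitts described: a unit that fires when the weighted sum of its binary inputs reaches a threshold. The weights and threshold are illustrative choices for this example, not values from any particular model.

```python
# A minimal McCulloch-Pitts-style artificial neuron: binary inputs, fixed weights,
# and a threshold that decides whether the unit "fires" (outputs 1) or not (outputs 0).

def neuron(inputs: list[int], weights: list[float], threshold: float) -> int:
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Illustrative weights and threshold that make the unit behave like a logical AND gate.
and_weights, and_threshold = [1.0, 1.0], 2.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], and_weights, and_threshold))
```

Modern neural networks stack huge numbers of similar units into layers and, rather than fixing the weights by hand, learn them from training data, which is the "adjusting the importance attributed to the input data" described above.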
<urn:uuid:0c76e315-1331-4b8d-9209-472a981a356b>
CC-MAIN-2024-38
https://www.businesstechweekly.com/operational-efficiency/artificial-intelligence/what-artificial-intelligence-ai/
2024-09-20T08:17:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00393.warc.gz
en
0.938931
1,337
3.640625
4
Active learning machine learning: What it is and how it works September 27, 2020 · 4 min read This article was originally published at Algorithimia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up-to-date or refer to products and offerings no longer in existence. Find out more about DataRobot MLOps here. Active learning is the subset of machine learning in which a learning algorithm can query a user interactively to label data with the desired outputs. A growing problem in machine learning is the large amount of unlabeled data, since data is continuously getting cheaper to collect and store. This leaves data scientists with more data than they are capable of analyzing. That’s where active learning comes in. Active learning is the subset of machine learning in which a learning algorithm can query a user interactively to label data with the desired outputs. In active learning, the algorithm proactively selects the subset of examples to be labeled next from the pool of unlabeled data. The fundamental belief behind the active learner algorithm concept is that an ML algorithm could potentially reach a higher level of accuracy while using a smaller number of training labels if it were allowed to choose the data it wants to learn from. Therefore, active learners are allowed to interactively pose queries during the training stage. These queries are usually in the form of unlabeled data instances and the request is to a human annotator to label the instance. This makes active learning part of the human-in-the-loop paradigm, where it is one of the most powerful examples of success. Active learning works in a few different situations. Basically, the decision of whether or not to query each specific label depends on whether the gain from querying the label is greater than the cost of obtaining that information. This decision making, in practice, can take a few different forms based on the data scientist’s budget limit and other factors. The three categories of active learning are: In this scenario, the algorithm determines if it would be beneficial enough to query for the label of a specific unlabeled entry in the dataset. While the model is being trained, it is presented with a data instance and immediately decides if it wants to query the label. This approach has a natural disadvantage that comes from the lack of guarantee that the data scientist will stay within budget. This is the most well known scenario for active learning. In this sampling method, the algorithm attempts to evaluate the entire dataset before it selects the best query or set of queries. The active learner algorithm is often initially trained on a fully labeled part of the data which is then used to determine which instances would be most beneficial to insert into the training set for the next active learning loop. The downside of this method is the amount of memory it can require. This scenario is not applicable to all cases, because it involves the generation of synthetic data. The active learner in this method is allowed to create its own examples for labeling. This method is compatible with problems where it is easy to generate a data instance. Reinforcement learning and active learning can both reduce the number of labels required for models, but they are different concepts. Reinforcement learning is a goal-oriented approach, inspired by behavioral psychology, that allows you to take inputs from the environment. 
This implies that the agent will get better and learn while it’s in use. This is similar to how we humans learn from our mistakes; we are basically functioning with a reinforcement learning approach. There is no training phase, because the agent learns through trial and error instead, using a predetermined reward system that provides inputs about how optimal a specific action was. This type of learning does not need to be fed data, because it generates its own as it goes. Active learning is closer to traditional supervised learning. It is a type of semi-supervised learning, meaning models are trained using both labeled and unlabeled data. The idea behind semi-supervised learning is that labeling just a small sample of data might result in the same accuracy as, or better than, fully labeled training data. The only challenge is determining what that sample is. Active learning is all about labeling data dynamically and incrementally during the training phase so that the algorithm can identify what label would be the most beneficial for it to learn from.
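To make the pool-based approach above concrete, here is a minimal sketch of an active learning loop using least-confidence uncertainty sampling. It is an illustration only, not DataRobot's implementation: the scikit-learn digits dataset, the logistic regression model, the seed set of 20 examples, the batch size of ten, and the five rounds are all arbitrary choices, and the known labels stand in for the human annotator.

```python
# Minimal sketch of pool-based active learning with least-confidence sampling.
# Assumes scikit-learn; the dataset, model, and batch sizes are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set of "annotated" examples
pool = [i for i in range(len(X)) if i not in set(labeled)]   # unlabeled pool

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confidence score per pool example
    query = np.argsort(uncertainty)[-10:]      # pick the 10 most uncertain examples
    # In a real workflow these would go to a human annotator; here y acts as the oracle.
    newly_labeled = [pool[i] for i in query]
    labeled.extend(newly_labeled)
    pool = [i for i in pool if i not in set(newly_labeled)]
    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy on remaining pool = {model.score(X[pool], y[pool]):.3f}")
```

The query strategy is the only piece that changes between the three categories described above; swapping the argsort step for an immediate per-instance decision would turn this into stream-based selective sampling.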
<urn:uuid:784a1968-2efe-45e3-9ca7-cb78b424cd58>
CC-MAIN-2024-38
https://www.datarobot.com/blog/active-learning-machine-learning/
2024-09-20T06:57:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00393.warc.gz
en
0.950424
891
3.4375
3
PIA or DPIA: What Are They and What’s the Difference? - By Steven - Published: Jul 18, 2024 - Last Updated: Aug 19, 2024 Today, personal data protection is critical: the amount of information shared by internet users keeps increasing daily, and with it the need to secure users' privacy. Hence the importance of the Privacy Impact Assessment (PIA) and the Data Protection Impact Assessment (DPIA). These two assessments give organizations a process for tracking the potential impacts of their projects on privacy and data protection, keeping those projects within the scope of compliance regulations, and earning the trust of the relevant stakeholders. This article will describe what these two terms mean and outline their differences. What is Privacy Impact Assessment (PIA)? A Privacy Impact Assessment (PIA) is a process used to evaluate a proposed project, information system, or existing system for its privacy risk and impact on individuals. It involves identifying privacy risks from the collection, use, and storage of personal information and suggesting mitigation measures. The overall objective of a PIA, therefore, is to protect individuals' personal information from potential privacy risks. Typically, a PIA helps organizations understand how personal data will be processed, assesses the risks posed by handling these data, and establishes appropriate safeguard measures to protect the data. Key Elements of a PIA PIA entails various key elements. These key elements are the main components involved in conducting a Privacy Impact Assessment: - Data Mapping: Data mapping involves identifying all personal data that will be collected, used, and stored during a project. It is instrumental in understanding the organization's data flow and how it will be handled at each stage. - Risk Assessment: After data mapping, the next stage is the assessment of possible privacy risks. This involves the analysis of potential consequences to a user when data gets lost, misused, or accessed without proper authorization. - Mitigation Measures: Once these risks are identified, an organization must implement mitigation practices against them through robust security measures, anonymizing the data obtained, and minimizing the amount of data the organization collects. When to Conduct a PIA Privacy impact assessments are conducted whenever a project involves collecting, using, and storing personal information. It is a crucial process for identifying, evaluating, and reducing the privacy risks that come from handling personal data. Projects that generally require a PIA include: - Development of New Projects or Systems: A PIA is typically carried out when the organization is planning a new project or system that may bring about the collection, storage, or processing of personal data. An example is introducing a new customer relationship management system for managing client data. The PIA is conducted early enough to have privacy embedded at the initial stage of the project or system and to point out potential exposure to threats. - Data Sharing and Transfers: PIAs are conducted when organizations plan to share personal data with third parties or transfer data across borders. For instance, if a health institution intends to share its patient data with a research institution, a PIA will help to gauge the third party's privacy risk.
Another example of data sharing and transfers that require a PIA could be an organization developing a new mobile app that gathers users' data. - Introduction of New Technologies: The emergence of new technologies that process personal data may raise significant privacy concerns. For instance, introducing a new surveillance system that recognizes faces will require a PIA to address potential impacts on privacy, including data retention, access controls, and sharing practices, to guarantee that policies comply with the law and public expectations. What Is a Data Protection Impact Assessment (DPIA)? A Data Protection Impact Assessment (DPIA) is an assessment process organizations use to minimize the risk of breaches and aid data protection compliance. It is a system of identifying data protection risks and the corresponding mitigation measures. A DPIA is a structured process for organizations subject to compliance obligations based on either large-volume data or high-risk processing activities. Several specific objectives and considerations exist for conducting a data protection impact assessment. Its main aim is to ensure compliance with the principles of the European Union General Data Protection Regulation (EU GDPR). Elements that Constitute a DPIA Just like the PIA, there are also key elements that constitute a DPIA. Some of the crucial aspects that relate to conducting a data protection impact assessment include: - Data Flow Analysis: This involves mapping out data flow within an organization. This is important because there is a need to understand the pathway that data uses to move between various systems and processes to identify potential risk exposure points. - Risk Identification: Once the flow of information is well understood, any potential risks to data protection must be identified. It includes an analysis of how one's data could be compromised, misused, or accessed without appropriate consent, together with the possible impacts on people that come with such risks. - Mitigation Strategies: When a risk is discovered, the organization must develop mitigation strategies. Security measures should be enhanced, access to data should be limited, and all data should be encrypted. - Engagement: One significant component of a DPIA is stakeholder engagement. It involves consulting with data subjects, engaging employees and other stakeholders to obtain their input, and addressing the issues they raise. - Documenting and Reporting: Finally, all the findings and decisions taken in the DPIA process should be documented and reported. Such documentation may serve as evidence of the data controller's commitment to data protection and be a means for proving conformity. When to Conduct a DPIA The data protection impact assessment is one of the tools mandated by the EU GDPR that can be used to estimate how processing activities will impact the privacy of individuals. There is a need for a DPIA in various situations, especially when data processing activities are likely to result in high risks. Some key scenarios where a DPIA is necessary include: - Large-Scale Processing of Sensitive Data: A DPIA is mandatory when an organization intends to conduct large-scale processing involving sensitive data. Such sensitive data includes health data, data relating to racial or ethnic origin, political opinions, religious beliefs, genetic and biometric data, sexual orientation, and criminal records.
For example, health providers intending to implement a new electronic health record system for handling patients' health data must conduct a DPIA to measure and mitigate probable privacy risks. - Use of New Technologies: Implementing new technologies can significantly impact users' data protection or generate new and unforeseen privacy risks. For instance, implementing IoT devices in smart homes or wearable fitness trackers that collect detailed personal data will require a DPIA. This ensures strong security measures are implemented to safeguard users' data and that possible privacy issues are identified. - Automated Decision-Making and Profiling: When an organization conducts automated decision-making processes, including profiling, with legal effects or similarly significant effects on individuals, a DPIA needs to be undertaken because issues like whether automated decisions are fair and transparent are at stake. An example of this kind of project is an online credit-lending platform that uses algorithms to evaluate its customers' creditworthiness and either approve or reject their credit requests. Key Differences Between PIA and DPIA Both the PIA and the DPIA are crucial exercises for protecting privacy and data. However, they differ in scope and focus, legal requirements, and implementation processes. Scope and Focus A PIA primarily looks at a project and its impact on privacy, mainly the collection, use, and storage of personal information. By contrast, a DPIA deals with safeguarding data processing activities by ensuring those activities align with data protection regulations and principles. Legal Requirements The legal requirements for PIAs and DPIAs differ from one jurisdiction to another. For instance, the EU GDPR specifies that any processing activity that poses a high risk to the rights and freedoms of individuals should be preceded by a DPIA. PIAs, on the other hand, are not directly required by the GDPR. They are typically conducted to ensure privacy compliance and ensure best practices are met. Although PIA and DPIA activities share similarities, PIAs are different from DPIAs because they assess and mitigate privacy risks. In contrast, DPIAs stress the importance of data protection, compliance, and managing associated risks. Both the PIA and the DPIA follow these processes: data mapping, risk assessment, mitigation strategies, stakeholder engagement, and documentation activities. Importance of Conducting PIAs and DPIAs Privacy and Data Protection Impact Assessments have numerous benefits for an organization. They are of vital importance in any organization dealing with users' data. Conducting PIAs and DPIAs will help to: - Identify and Mitigate Risks: Privacy and data protection risks can be identified and addressed early enough in a project so that data breaches or regulatory non-compliance are unlikely to occur. - Ensure Compliance: PIAs and DPIAs help organizations comply with all relevant requirements under privacy and data protection laws to avoid legal implications. - Build Stakeholder Trust: By conducting PIAs and DPIAs, organizations can build trust with their stakeholders that privacy is respected and data is protected against breaches. This includes their customers, employees, and other categories of stakeholders. In conclusion, PIAs and DPIAs are essential assessment tools for privacy protection and information security in this digital age.
A Privacy Impact Assessment should be carried out whenever substantial changes in how personal data is processed are brought about by new projects, technological advancements, data-sharing initiatives, or regulatory changes. A DPIA, meanwhile, should be carried out in cases where processing activities pose a high risk to users' privacy; it helps an organization identify and mitigate privacy risks upfront, keeps it compliant with the provisions of the GDPR, and adds to the trust of data subjects. Organizations handling data and privacy protection projects should conduct PIAs and DPIAs regularly. This will help them address their privacy risks proactively, improve the safeguarding of personally identifiable information, ensure compliance, and build trust with the users whose data they handle.
<urn:uuid:7731ec46-c75f-427b-9778-e3dd84b19e80>
CC-MAIN-2024-38
https://www.idstrong.com/sentinel/what-is-the-difference-between-a-pia-and-a-dpia/
2024-09-20T07:46:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00393.warc.gz
en
0.919089
2,068
3.078125
3
I’m writing this column on the day of the Virginia Tech national tragedy. Aside from the natural emotions from this overwhelming, random act of violence, this event struck even closer to home because one of my daughters was recently accepted into Virginia Tech, and she and a sister were planning a road trip to the campus. Our neighborhood is full of Virginia Tech flags and Hokie lovers. Several student witnesses testified that they didn’t hear the initial gunshots and warnings because they were listening to their iPods or music players. It brought to mind a thought I’ve had for the last five years, and I’m sure I’m far from alone with the concern. We need a pervasive EWS (early warning system) that can override any and all multimedia sources. Our internationally connected, multimedia, convergent world is quickly making our traditional EBS (Emergency Broadcast System) alerts less useful. Think about the sheer number of electronic devices that occupy our ears and eyes that aren’t connected to our traditional radio or television system: DVRs, iPods, MP3 players, media players, Internet radios, satellite radio, digital TV, and more. Many of these devices have absolutely no way of receiving an EBS alert. It will take an entire rethinking of our traditional emergency alert system, plus a coordinated open standard to be applied to all media devices. The good news is that the United States is headed in that direction, albeit not quickly enough. A little history first. The first national broadcast warning system was established by U.S. President Harry Truman in 1951 and was called Control of Electromagnetic Radiation (or CONELRAD). It was an invention of the Cold War. It involved radio, both AM and FM, and television stations, and was solely used for national defense purposes. CONELRAD was replaced by the Emergency Broadcast System in 1963. Its use was expanded to the National Weather Service, the FCC, and the national wire services, and to local and regional alerts. It was used in over 20,000 weather events until its retirement in 1996. (Remember your grandparent’s emergency weather radio?) EBS was replaced by the significantly more comprehensive Emergency Alert System, which the FCC adopted in 1994 and fully implemented in 1997. The Federal Emergency Management Agency (FEMA) joined the FCC, National Weather Service, and the President of the United States as overseers. EAS covers dozens of radio and television frequencies, including AM, FM, VHF, UHF, satellite radio and TV, digital radio, cable television, music sources, video broadcasters, and other media sources. Those sources are required to participate by the end of 2007. My question is whether these alert methods can override a TiVo, iPod, DVD player, Internet videocast, or other digital media device. I’m guessing not. I can’t count the number of times that I’ve been watching a TiVo’d program as it shows an emergency weather broadcast. I’ve jumped up from the couch to check the skies, only to remember that my show is recorded. It’s even more embarrassing because it’s happened more than once. Ironically, if I’m watching a prerecorded show, the real emergencies won’t get through to me. There are some partial solutions. The Emergency Email and Wireless Network Web site will send you e-mail or SMS messages. WeatherBug and Weather.com will send you weather warnings. Most of the major online news services will send you news alerts. AtHoc offers many enterprise-focused, network-based alert products; their product and customer list is impressive. Still, none of these are complete solutions.
I want international, national, regional, and local emergency warnings. I want perimeter-based emergency systems of the type that could warn a school campus about a deranged killer. I want a personal warning system: If a loved one of mine gets injured or needs my help, I want them to hit one button (a la “Help, I’ve fallen and I can’t get up!”). I want that message to reach me no matter what I’m doing. We need an international agreement among broadcasters, media sources, media devices, and information sources on a universal standard for emergency broadcasts. How long will it be before we have this service? Why isn’t it already mandated? How many people will be watching their DVR as a tornado or chlorine gas cloud bears down on their house? How many people have been killed already because we don’t have a universal, pervasive system? Every device sold without a mandated warning system is an alert device wasted. Would it be as simple as properly equipping cell phones? After all, so many of us have one. Both of my daughters weathered the Virginia Tech massacre news as well as anyone could. I’m strangely comforted that next year, my daughter will be attending what will probably be one of the most secure universities in the United States. I wouldn’t be surprised to see armed guards or police stationed near every building. Next time the warning system will work. Such is the guaranteed outcome from our shared national tragedy. As a civilization, we humans are terrible about being proactive, but we excel in our overreaction to past threats.
<urn:uuid:a8b98d65-cf33-4f6f-8838-9cc5f0ee0019>
CC-MAIN-2024-38
https://www.itworldcanada.com/article/virginia-tech-tragedy-signals-need-for-a-pervasive-emergency-alert-system/1450
2024-09-20T08:21:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00393.warc.gz
en
0.952252
1,103
2.765625
3
What is Image Segmentation? What are its Applications? While object detection and classification are two major factors that influence the development of video software programs that function on the basis of computer vision, a third major factor is image segmentation. As a machine learning model that has been trained on a particular dataset of images must be able to recognize specific categories or characteristics within these images, said images must first be segmented into different regions before objects can be detected within them. For example, a machine learning algorithm that is being trained to recognize animals within pictures that have been taken on farms must be able to differentiate between the animals in said photos and the trees, tools, food, and other objects that may be present within the images. How does image segmentation work? Put in the simplest of terms, the goal of image segmentation is to highlight the foreground elements within a particular image to make it easier for a machine learning algorithm to evaluate objects or information within said image. As image recognition and object detection software depend on accuracy, image segmentation plays an important role in ensuring that these algorithms are able to perform in a manner that is as consistent and efficient as possible. During the image segmentation process, all elements within the image that have the same category will be assigned a common label. In keeping with the example of photographs that have been taken on a farm, all of the animals within these images would be grouped together, while farm tools such as tractors and shovels would be assigned a different label. These individual categories can then be fed to a machine learning algorithm, as recognizing these specific image segments within a photograph is far easier than recognizing a single element within an image that contains various other objects. To this point, image segmentation can be accomplished using different techniques and approaches. Different approaches to image segmentation Two common approaches to image segmentation are the similarity approach and the discontinuity approach. The similarity approach involves detecting the similarity between the pixels within a particular image to form an individual segment, in accordance with a specific threshold that is established beforehand. Alternatively, the discontinuity approach instead focuses on identifying abrupt changes in the pixel intensity values that are present within the image. With all this being said, image segmentation can be implemented using a variety of techniques. Image segmentation techniques Image segmentation techniques utilize different machine learning algorithms to identify specific classes of objects and information that appear within images. For example, Mask R-CNN image segmentation algorithms produce three different outputs for each object within a given image: the object mask, the class label, and the bounding box coordinates. On the other hand, edge detection segmentation algorithms take advantage of discontinuous local features within images to detect the edges within said images and, ultimately, define the boundary of a particular object. In addition to this, many software developers will combine different image segmentation techniques to solve problems within a specific domain.
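As a concrete illustration of the similarity (threshold-based) approach described above, the short sketch below separates foreground objects from the background with a single global intensity threshold and then labels the resulting connected regions. It is only an example: the scikit-image sample photo, the choice of Otsu's method, and the 100-pixel minimum-area filter are illustrative assumptions, not part of the original article.

```python
# Minimal sketch of threshold-based (similarity) segmentation using scikit-image.
from skimage import data, filters, measure

image = data.coins()                          # grayscale sample image: bright coins on a dark background
threshold = filters.threshold_otsu(image)     # pick a global intensity threshold automatically
foreground = image > threshold                # pixels above the threshold form the foreground mask

labels = measure.label(foreground)            # group connected foreground pixels into segments
regions = [r for r in measure.regionprops(labels) if r.area > 100]  # drop tiny noise segments
print(f"threshold = {threshold}, segments found: {len(regions)}")
```

A discontinuity-based pipeline would replace the thresholding step with an edge detector and trace boundaries instead, which is the trade-off the two approaches represent.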
Applications of image segmentation As the process of image segmentation plays a crucial role in the development of software programs that use computer vision techniques, the applications of said process are widespread. To illustrate this point further, video surveillance systems use image segmentation to identify people, cars, street lights, and other miscellaneous objects within video recordings. Likewise, healthcare professionals rely on image segmentation when making use of medical imaging software, as these programs must be able to identify specific features within the human body. Furthermore, video redaction software programs that rely on facial recognition techniques also work in accordance with the process of image segmentation. Through the process of image segmentation, software engineers have been able to create machine learning algorithms that can detect everything from malignant tumors within an individual organ in the human body to a green traffic light on a busy street corner in a bustling metropolitan American city. To this end, digital technology has truly altered the way in which software programs are able to interact with the physical world, as photographs are no longer restricted to their physical forms. This said, despite the fact that the applications of image segmentation in the business world are already very common, further applications are sure to emerge in the near future.
<urn:uuid:b50237d1-658c-4517-8e06-bd4c3f1c6dc9>
CC-MAIN-2024-38
https://caseguard.com/articles/what-is-image-segmentation-what-are-its-applications/
2024-09-09T09:20:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00493.warc.gz
en
0.936526
843
3.578125
4
Which two benefits are provided by using a hierarchical network addressing scheme? (Choose two.) A. Reduces routing table entries B. Auto-negotiation of media rates C. Efficient utilization of MAC addresses D. Dedicated communications between devices E. Ease of management and troubleshooting Correct answers: A and E. A hierarchical network addressing scheme offers several benefits that make network management more efficient and easier to maintain. Here are the two benefits that are provided by using a hierarchical network addressing scheme: A. Reduces routing table entries: With a hierarchical addressing scheme, networks are organized into smaller, more manageable subnets. Each subnet can be assigned a unique network address that helps reduce the size of routing tables. By reducing the number of entries in routing tables, routers can process and forward packets more quickly, leading to better network performance. E. Ease of management and troubleshooting: A hierarchical addressing scheme makes it easier to manage and troubleshoot networks. Each subnet can be assigned to a specific department, location, or function, making it easier to identify the source of network problems. Network administrators can also use hierarchical addressing to control access to network resources, ensuring that only authorized users can access critical data and applications. B, C, and D are not benefits of using a hierarchical addressing scheme: B. Auto-negotiation of media rates: Auto-negotiation of media rates is a feature of Ethernet that allows devices to automatically negotiate the speed and duplex mode of a network connection. While it can help ensure that devices are communicating at the same rate, it has nothing to do with a hierarchical addressing scheme. C. Efficient utilization of MAC addresses: MAC addresses are used to identify network devices. While hierarchical addressing can help reduce the number of MAC addresses needed, it is not a primary benefit of a hierarchical addressing scheme. D. Dedicated communications between devices: Dedicated communications between devices refers to a point-to-point connection between two devices, such as a leased line. While hierarchical addressing can help identify the source and destination of traffic, it does not provide dedicated communications between devices.
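To see why benefit A holds in practice, the sketch below uses Python's standard ipaddress module to show how contiguous department subnets allocated from one hierarchical block can be advertised upstream as a single summary route instead of four separate entries. The 10.1.0.0/22 block and its /24 subnets are made-up example prefixes.

```python
# Minimal sketch of route summarization: hierarchical allocation lets four
# specific routes collapse into one advertised summary, shrinking routing tables.
import ipaddress

# Four department subnets carved from one hierarchical block (10.1.0.0/22)
subnets = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

summary = list(ipaddress.collapse_addresses(subnets))
print(f"{len(subnets)} specific routes -> {len(summary)} summary route: {summary[0]}")
# 4 specific routes -> 1 summary route: 10.1.0.0/22
```

If the same four subnets had been drawn from unrelated address blocks, collapse_addresses could not merge them, which is exactly the management cost a flat addressing plan imposes on upstream routers.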
<urn:uuid:ca8c6b8d-3597-4ad4-b041-50f786ea1e36>
CC-MAIN-2024-38
https://www.exam-answer.com/benefits-of-using-hierarchical-addressing-network-addressing-scheme
2024-09-09T07:58:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00493.warc.gz
en
0.910762
409
2.90625
3
OGNL Injection (OGNL): Mitigation Strategies for OGNL Injection Vulnerabilities and How to Prevent OGNL Injection Effectively. What is OGNL injection (OGNL)? Object-Graph Navigation Language is an open-source Expression Language (EL) for Java objects. Specifically, OGNL enables the evaluation of EL expressions in Apache Struts, which is a commonly used development framework for Java-based web applications in enterprise environments. The most critical vulnerabilities on the list of Apache Struts CVEs relate to OGNL expression injection attacks, which enable evaluation of unvalidated expressions against the value stack, allowing an attacker to modify system variables or execute arbitrary code. OGNL is infamous for related vulnerabilities found in the Struts 2 framework that relies on it. Because OGNL has the ability to create or change executable code, it is also capable of introducing critical security flaws to any framework that uses it. For example, it is possible for the attacker to inject OGNL expressions (which can execute arbitrary malicious Java code) when an OGNL expression injection vulnerability is present. Protections against these CVEs include security solutions that can detect the presence of vulnerable Struts 2 components in software so that attacks can be prevented. Contrast is the clear customers’ choice Contrast is named a Customers’ Choice in the 2021 Gartner Peer Insights “Voice of the Customer”: Application Security Testing report. With the highest percentage of 5-star ratings, this is the third consecutive year Contrast has received this powerful endorsement from customers.
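OGNL itself is a Java expression language, so the sketch below is not the Struts or OGNL API. It is a hedged Python analogy of the same class of flaw, in which attacker-controlled text is evaluated as an expression against live objects, alongside the usual mitigation of treating input as plain data checked against an allow-list. The Account class, the field names, and the allow-list contents are illustrative assumptions.

```python
# Illustrative contrast between evaluating untrusted input as an expression
# (the root cause of expression-injection bugs) and allow-listed data access.

ALLOWED_FIELDS = {"username", "email"}        # explicit allow-list of referenceable attributes

class Account:
    def __init__(self):
        self.username = "alice"
        self.email = "alice@example.com"
        self.is_admin = False                 # internal state that should stay unreachable

def unsafe_lookup(account, expression):
    # Dangerous pattern: attacker-controlled text is evaluated as code,
    # analogous to passing unvalidated input into an OGNL expression.
    return eval(expression, {}, {"account": account})

def safe_lookup(account, field):
    if field not in ALLOWED_FIELDS:           # reject anything outside the allow-list
        raise ValueError(f"field not permitted: {field!r}")
    return getattr(account, field)

acct = Account()
print(safe_lookup(acct, "email"))             # permitted: alice@example.com
print(unsafe_lookup(acct, "account.is_admin"))  # unvalidated input reaches internal state
```

The mitigation mirrors the Struts guidance: never hand user input to the expression evaluator, and keep anything dynamic constrained to a short, explicit list of permitted names.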
<urn:uuid:9847e45a-1d1e-4a05-ba95-4b06156b8e4f>
CC-MAIN-2024-38
https://www.contrastsecurity.com/glossary/ognl-injection-ognl
2024-09-10T14:36:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00393.warc.gz
en
0.882037
407
2.953125
3
The fate of U.S. climate policy stands at a critical juncture as the nation heads into the 2024 presidential election. Depending on the election’s outcome, Americans may witness a continuation of aggressive climate action or a significant rollback, each with profound implications for the environment, economy, and public health. Current Climate Policies and Achievements Biden Administration’s Climate Initiatives The Biden administration has implemented several key policies aimed at mitigating climate change. One of the most impactful is the Inflation Reduction Act (IRA), which has invigorated the renewable energy sector. With $270 billion in renewable energy projects and $8.4 billion claimed in clean energy tax credits by millions of Americans, the initiative has set a robust foundation for the future. The IRA’s influence extends beyond just numbers; it represents a commitment to cutting greenhouse gas emissions and transitioning to a more sustainable energy infrastructure. In addition to the IRA, the Bipartisan Infrastructure Law and CHIPS Act have played a substantial role in battling climate change. These laws contribute to job creation, fuel-efficiency standards for vehicles, and regulations on power plant emissions. The tangible results include more than 334,000 new jobs and strengthened vehicle and power plant standards, illustrating significant progress in the fight against climate change. These policies are making headway in reducing U.S. emissions, helping to meet international climate commitments, and highlighting the intersection between economic growth and environmental sustainability. Potential Benefits of Continued Climate Leadership Should the current climate policy trajectory continue, presumably under a Harris administration, the United States could achieve substantial emission reductions. By 2030, emissions could be cut by 50%, aligning with the Paris Agreement’s goals. Further down the line, a 70% reduction by 2035 and net-zero emissions by 2050 are feasible targets. These goals are not just aspirational but are backed by existing policies and technological innovations that pave the way for this green transformation. Continuation also means electrification of industries, stricter clean energy standards, and maintaining America’s role as a global leader in green technology. These efforts would not only tackle climate change but also propel economic growth and job creation, further cementing the U.S.’s leadership in the worldwide shift toward sustainability. It would position America as a beacon of climate responsibility, influencing global practices and setting a standard for others to follow. The commitment to clean energy and emissions reduction also fosters resiliency against climate-related disasters. Project 2025: A Conservative Agenda Goals and Proposals On the flip side, Project 2025, championed by the conservative Heritage Foundation, presents a stark contrast. This agenda aims to roll back substantial portions of current climate initiatives. It proposes repealing most of the IRA and Bipartisan Infrastructure Law, representing a pivot back to traditional energy policies. The agenda does not just shift the policy but fundamentally changes the direction towards increased fossil fuel dependency, which could jeopardize years of progress in combating climate change. Instead of focusing on renewable energy, Project 2025 emphasizes increasing natural gas exports and expanding oil and gas leasing. 
Fuel economy standards would be weakened, cutting back on energy-efficiency regulations and greenhouse gas emission standards enforced by the EPA. Such shifts would potentially undo the strides made in emission reductions, leading the country back to higher levels of pollutants and diminished regulatory power over environmental protections. The agenda underscores a strategic alignment with fossil fuel interests at the expense of long-term sustainability goals. If Project 2025 is implemented, its environmental impact would be significant. Emissions are projected to increase starkly, with an additional 76 billion tons of CO2 emitted between 2025 and 2050 compared to the current trajectory. This uptick in emissions poses severe risks to air quality and public health. Increased emissions will likely exacerbate the frequency and severity of climate-related events, altering ecosystems and public health conditions irreversibly. Economically, the consequences could be equally dire. There would be a marked increase in electricity prices and household energy expenditures, directly affecting consumers. Job losses in the burgeoning renewable sector and a reduction in GDP—estimated at $320 billion lower per year by 2030 and $130 billion lower each year by 2050—would further underscore the adverse impacts. The financial strain would not only burden the economy but could also stymie innovation and technological progress in green industries. This dual setback in environmental and economic factors could lower the U.S.’s global standing as a leader in climate initiatives. Comparative Analysis: Economic and Environmental Impacts Under the current climate policies, emission reductions are measurable and promising. By striving for a 50% reduction by 2030, extending to 70% by 2035, and ultimately reaching net-zero emissions by 2050, a continued climate leadership path aligns with global sustainability goals. This approach mitigates the effects of climate change, ensuring cleaner air and a more stable environment. The pathway also encourages international cooperation and compliance with the Paris Agreement, fostering a collective effort toward a more sustainable world. In contrast, Project 2025’s agenda to bolster fossil fuel industries would lead to higher emissions and environmental degradation. The additional 76 billion tons of CO2 emissions could exacerbate climate change, intensify weather-related disasters, and degrade ecosystems, severely affecting biodiversity and human health. This increased environmental burden may also result in long-term shifts in climate patterns, impacting agriculture and water resources, and contributing to global instability. The environmental toll would be both immediate and far-reaching, with consequences spanning generations. The economic benefits of continued climate leadership are substantial. Investments in renewable energy projects and green technologies stimulate job creation and spur economic growth. Stringent clean energy standards foster innovation and industry advancements, solidifying the U.S.’s competitive edge in the global market. This pathway not only ensures economic growth but also fosters a resilient and adaptive economy capable of weathering future climate challenges and shocks. Conversely, Project 2025’s focus on fossil fuels could lead to economic downturns. Increased electricity prices and energy costs would burden households, while job losses in the clean energy sector undermine local economies. 
The anticipated reduction in GDP further highlights the long-term economic challenges posed by this conservative agenda. The focus on short-term gains from fossil fuels could overshadow the necessity of long-term sustainable growth, leading to economic vulnerabilities that could destabilize markets and increase financial inequalities. Public Health and Energy Security Public Health Benefits of Climate Policies Current climate policies contribute significantly to public health. Improved air quality, resulting from reduced emissions, lowers the incidence of respiratory and cardiovascular diseases. The transition to clean energy minimizes pollution, leading to fewer health complications and reduced healthcare costs. These health benefits are a direct result of cleaner technologies and stricter emission standards, highlighting the intrinsic link between environmental policies and public health outcomes. The health advantages extend further by reducing healthcare burdens, which, in turn, can lead to economic benefits through lowered medical expenses and improved worker productivity. Additionally, sustainable practices promote healthier living environments, contribute to mental well-being, and enhance community health resilience against climate-induced diseases. The broader adoption of green technologies ensures a consistent reduction in pollutants, which translates into long-term health benefits for the population. Risks to Public Health Under Project 2025 The trajectory of U.S. climate policy is at a pivotal moment as the 2024 presidential election approaches. The results of this election will heavily influence whether the country continues to pursue ambitious climate initiatives or experiences a significant reduction in these efforts. Both directions come with far-reaching consequences, impacting not only the environment but also the economy and the health of the American public. If the election favors candidates who prioritize climate action, we can expect an expansion of policies aimed at reducing greenhouse gas emissions, increasing investment in renewable energy, and promoting sustainable practices across various sectors. Such measures would help combat climate change, potentially avert some of its most severe impacts, and create green jobs, contributing to economic growth. Conversely, if the election outcome favors those opposing aggressive climate measures, we may see a rollback of existing policies. This could result in higher carbon emissions, a slowdown in the transition to clean energy, and adverse effects on public health due to increased pollution. The decision voters make will be crucial in defining the future landscape of U.S. climate policy, resonating for decades to come.
<urn:uuid:0d4491e8-2e73-408a-ac38-0b94b8bc1d7e>
CC-MAIN-2024-38
https://energycurated.com/environmental-and-regulations/future-of-u-s-climate-policy-hangs-in-balance-with-2024-election-outcomes/
2024-09-13T01:34:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00193.warc.gz
en
0.926004
1,685
2.90625
3
The need for a reliable and expanded power grid in Oregon has never been more pressing. With climate change accelerating and the demand for electricity increasing, the state’s energy infrastructure must evolve to ensure a sustainable, clean energy future. The existing grid faces significant challenges, including increasing load pressures, potential power outages, and inefficiencies. By investing in infrastructural upgrades, leveraging renewable energy sources, and fostering inter-state cooperation, Oregon aims to meet its growing energy demands pragmatically and sustainably. This article delves into the historic challenges, current needs, and future opportunities for Oregon’s power grid, offering a comprehensive analysis and a rallying call to action. Urgency of Grid Expansion The demand for electricity is skyrocketing, driven by multiple factors like climate change, which brings extreme weather events, and the growing trend toward electrification in transportation and other major sectors. Past incidents like the deadly Texas winter blackouts of 2021 serve as stark reminders of the potential dire consequences if our electric grid remains unprepared. These extreme weather events underscore the need for a robust and resilient grid that can not only handle surges in demand but also withstand adverse weather conditions that are becoming increasingly frequent. As Oregon transitions toward a cleaner energy future, the existing grid’s inadequacies become glaringly apparent. The current infrastructure is not equipped to efficiently handle the increasing load, which can lead to potential power outages and significant inefficiencies. Upgrading and expanding the grid is critical to prevent such scenarios, ensuring a reliable power supply that matches the increased demands of an eco-conscious society. Immediate action is necessary to safeguard against the vulnerabilities exposed by climate change and rising energy consumption. Harnessing Renewable Energy Oregon, along with the broader Pacific Northwest, possesses significant potential for renewable energy, particularly in wind and solar power. However, the challenge lies in effectively connecting these often remote renewable energy sources to local demand centers. The existing transmission network is outdated, lacking the capacity to integrate new, renewable sources efficiently. This gap in infrastructure inhibits the full utilization of Oregon’s abundant renewable resources, delaying the transition to a cleaner energy mix and thereby impacting both environmental goals and economic growth. Building new transmission lines is essential to tap into these renewable resources fully. By creating a more extensive and robust network, Oregon can better harness wind and solar power, thereby contributing to a cleaner energy mix. This transition not only supports environmental objectives but also has the potential to boost local economies and create jobs in the renewable energy sector. A well-developed transmission network can facilitate energy independence and resilience, preparing the state to meet future demands sustainably. Historical Neglect and Current Needs Historically, there has been significant underinvestment in transmission infrastructure across the region. Since 1990, the Pacific Northwest’s largest utility network, Bonneville Power Administration (BPA), has added fewer than 400 line-miles to its 15,000-mile high-voltage system. 
This underinvestment has led to a grid that struggles to meet contemporary energy demands and integrate new renewable sources efficiently. The consequence is a power grid that is increasingly vulnerable to outages and unable to fully support the state’s renewable energy goals, thereby affecting overall grid reliability. Meeting the challenges of today and tomorrow necessitates both the construction of new lines and the strategic upgrades of existing ones. Enhancements can include modernizing aging infrastructure to increase capacity and bolster reliability. Without these critical upgrades, the grid remains perilously vulnerable to blackouts and inefficiencies, hampering Oregon’s progress toward a clean energy future. The necessity for new lines and strategic upgrades is not just an option but an urgent imperative to safeguard energy security and sustainability. Power Exchanges Across Regions Power trading within the Northwest, facilitated by BPA’s grid, has historically played a significant role in balancing electricity supply and demand across the region. These exchanges leverage diverse electricity needs, such as sharing Northwest hydropower with California and receiving surplus solar power in return. This practice not only boosts grid efficiency and reliability but also demonstrates the advantages of inter-regional cooperation in energy management. Maintaining and enhancing these inter-regional exchanges is crucial as Oregon transitions to more renewable energy sources. By continuing to collaborate and trade power with neighboring states, Oregon can ensure a more balanced and reliable energy supply, even during periods of peak demand or adverse weather conditions. Effective power exchanges benefit all parties involved, contributing to a more resilient and interconnected grid system that supports the collective energy goals of the Western United States. Balancing Environmental and Community Concerns Building new transmission lines often faces significant challenges due to environmental, cultural, and community considerations. These concerns are legitimate and must be carefully navigated to achieve grid expansion without compromising local values. Efficient siting of new transmission projects, mindful of local objections and impacts, is essential for balancing development with environmental protection. The harmonization between progress and preservation is critical to achieving sustainable development goals. Engaging with local communities and stakeholders early in the planning process can help address concerns and identify mutually beneficial solutions. By incorporating environmental and cultural considerations into project planning, Oregon can advance its clean energy goals while preserving the integrity of local ecosystems and communities. This approach ensures that the path to a cleaner energy future is both equitable and environmentally conscious, creating a foundation for long-term sustainability and community trust. Local Solutions and Technological Advances To complement the expansion of the transmission network, local solutions and technological advancements play a vital role in meeting energy demands. Maximizing local solar installations, implementing high-efficiency heat pumps, and utilizing electric vehicles for distributed energy storage are just a few examples of innovative approaches. By embedding advanced technologies into the existing infrastructure, Oregon can create a more resilient and responsive grid system that optimally meets the rising demand for clean energy. 
Modern transmission technology allows for doubling or tripling the power carried through existing lines, reducing the necessity for entirely new infrastructure. These advancements can significantly improve grid efficiency and resilience, making it possible to meet growing energy demands without extensive new construction. Technological solutions offer a pathway to bridging the gap between current capacity and future necessities, embedding flexibility and adaptability into Oregon’s energy infrastructure. Inter-State Cooperation and Market Integration The urgency for a dependable and enhanced power grid in Oregon is at an all-time high. With the rapid progression of climate change and rising electricity needs, the state’s energy infrastructure needs to transform to secure a sustainable and clean energy future. The current grid is grappling with significant hurdles such as escalating load pressures, potential blackouts, and inefficiencies. Tackling these issues demands substantial investments in infrastructural upgrades, the integration of renewable energy sources, and fostering cooperation between states. Strategic investments in technologies like smart grids and energy storage systems are crucial for managing load pressures and reducing the risk of power outages. Furthermore, inter-state collaborations can facilitate sharing resources and balancing loads, enhancing overall grid reliability and flexibility. This approach ensures that Oregon remains resilient and innovative in its energy solutions. This article explores the historical challenges, present requirements, and future prospects for Oregon’s power grid, providing an in-depth analysis and a clear call to action for policymakers, industry leaders, and citizens alike to engage in transformative energy strategies. Only through collective effort and forward-thinking can Oregon effectively address its energy needs and pave the way for a greener future.
<urn:uuid:96ba1d15-f54f-47e4-93bc-d22a4b53a273>
CC-MAIN-2024-38
https://energycurated.com/infrastructure-and-technology/expanding-oregons-grid-for-a-reliable-and-clean-energy-future/
2024-09-13T02:12:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00193.warc.gz
en
0.920558
1,510
2.921875
3
A team of researchers from the University of Pennsylvania’s School of Engineering and Applied Science, in partnership with scientists from Sandia National Laboratories and Brookhaven National Laboratory, has introduced a computing architecture specifically designed for use in artificial intelligence (AI). It is hoped the new chip will help usher in a new wave of hardware and software co-design. Until now, the AI industry has been dominated by software companies, due to the unique challenges presented by Big Data, artificial intelligence and machine learning. Co-led by Deep Jariwala, Assistant Professor in the Department of Electrical and Systems Engineering (ESE), Troy Olsson, Associate Professor in ESE, and Xiwen Liu, a PhD candidate in Jariwala’s Device Research and Engineering Laboratory, the research group has adapted an approach known as compute-in-memory (CIM) for the new chip architecture. AI presents a major challenge to conventional computing architecture, say the researchers. In standard models, memory storage and computing take place in different parts of the machine, and data must move from an area of storage to a CPU or GPU for processing. CIM architectures reduce transfer time and minimise energy consumption by processing and storing data in the same place. The team’s new CIM design - the subject of a recent study published in Nano Letters - is transistor-free and optimised for Big Data applications. As AI software continues to develop and the rise of the Internet of Things produces larger data sets, researchers have focused on hardware redesign to deliver improvements in speed and energy usage. “Even when used in a compute-in-memory architecture, transistors compromise the access time of data,” says Jariwala. “They require a lot of wiring in the overall circuitry of a chip and thus use time, space and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small and quick and it requires very little energy.” Mobile tech and wearable devices can benefit from new chip The advance is not only at the circuit-level design, say researchers, and the new computing architecture builds on the team’s earlier work in materials science focused on a semiconductor known as scandium-alloyed aluminium nitride (AlScN). “One of this material’s key attributes is that it can be deposited at temperatures low enough to be compatible with silicon foundries,” says Olsson. “Most ferroelectric materials require much higher temperatures. AlScN’s special properties mean our demonstrated memory devices can go on top of the silicon layer in a vertical hetero-integrated stack.” Olsson compares this to parking: a multistory parking lot with a hundred-car capacity occupies far less ground than a hundred individual parking spaces spread out over a wider area. “The same is the case for information and devices in a highly miniaturised chip like ours,” he explains. “This efficiency is as important for applications that require resource constraints, such as mobile or wearable devices, as it is for applications that are extremely energy intensive, such as data centres.” In 2021, the team established the viability of AlScN as a compute-in-memory powerhouse. In the most recent study debuting the transistor-free design, the team observed that their CIM ferrodiode may be able to perform up to 100 times faster than a conventional computing architecture.
“It is important to realise that all of the AI computing that is currently done is software-enabled on a silicon hardware architecture designed decades ago,” says Jariwala. “This is why artificial intelligence as a field has been dominated by computer and software engineers. Fundamentally redesigning hardware for AI is going to be the next big game changer in semiconductors and microelectronics. The direction we are going in now is that of hardware and software co-design.”
<urn:uuid:c47b6dbf-246d-488c-82df-2bccb730ef96>
CC-MAIN-2024-38
https://aimagazine.com/articles/new-chips-for-artificial-intelligence-could-be-game-changer
2024-09-14T06:58:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00093.warc.gz
en
0.94004
821
2.96875
3
A cookie is a small text file that is sent to your computer’s hard drive when you visit a website. A cookie typically contains the name of the website from which it has come, the lifespan of the cookie and a value. The value is usually a unique code that will only make sense to the website that has issued it. Cookies can also be used to measure how people use websites and what kind of browsers or devices they’re using. You can set your web browser to accept or reject cookies, or tell you when a cookie is being sent. You can also delete cookies from your computer. The AboutCookies.org website tells you how to control and delete cookies on most browsers. How to control cookies (AboutCookies.org, external website) How to delete cookies (AboutCookies.org, external website)
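For illustration, the small sketch below uses Python's standard http.cookies module to build the kind of Set-Cookie header described above, carrying a site name, a lifespan, and a value. The cookie name, the example.com domain, and the one-hour lifespan are made-up examples rather than anything from this policy.

```python
# Minimal sketch of what a cookie sent by a website looks like on the wire.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"                   # the value: a code meaningful only to the issuing site
cookie["session_id"]["domain"] = "example.com"    # the website the cookie has come from
cookie["session_id"]["max-age"] = 3600            # the lifespan of the cookie, in seconds
print(cookie.output())
# Roughly: Set-Cookie: session_id=abc123; Domain=example.com; Max-Age=3600
```

The browser stores this text and sends it back on later visits, which is all the "measuring how people use websites" described above amounts to at the protocol level.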
<urn:uuid:40ae4c64-2d63-4868-8ea9-959441e9f956>
CC-MAIN-2024-38
https://anyscam.com/anyscam-cookie-policy/
2024-09-15T10:16:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00893.warc.gz
en
0.958714
173
3.46875
3
The first consideration for information systems in any company is how to keep your data secure. Threats against company data are always present, and it is up to the Information Security team to minimize these risks. Social engineering is a great threat to a company. Most social engineering attacks are targeted at a single person. These attacks focus on a single person’s willingness to help, greed, need for connection, fear tactics, or sense of responsibility. These manipulative attacks can be used to get employees to release company information if the employee isn’t properly trained to be on the lookout for these threats. Is My Company Vulnerable to Cyberattacks? Every company has something hackers want. Whether it is employee information, account information, or company clients, it is valuable to someone. Social engineering is a way to obtain this information by getting employees to provide sensitive information for the purposes of fraud, system access, or intelligence gathering. There are several types of social engineering attacks, such as pretexting, phishing, baiting, tailgating, or quid pro quo. Phishing is probably one of the more recognized attacks. What’s the Difference Between Phishing & Spear-Phishing Attacks? Phishing is a more generalized attack, as opposed to spear phishing, which involves more time because it is a targeted attack (hence the term spear phishing). Phishing is a generalized attack through email or phone probing to get an employee to give out information about employees, accounts, clients, and other sensitive matters. Phishing emails are getting more sophisticated as hackers develop their skills to mimic other companies, invoices, email aliases, and web addresses. The differences are usually subtle but are visible if you know what to look for. Training Your Company on Spotting Phishing Emails Incoming phishing emails may appear more sophisticated. These attacks are meant to look official, with logos and even recognized names. However, there will be differences. A timestamp may be off, an email address may have transposed letters, or a questionable link may go to an unrecognized web page. It is important to always be suspicious of incoming attachments and websites, even if they appear to come from a trusted source. It is always acceptable to ask and confirm a request before acting on it. In most cases, the source didn’t know their information was compromised. If the source was not aware of the threat made in their name, encourage password changes and a malware or antivirus scan on the systems they use. In information security, it is always sec"U R IT"y. Proactively Protecting Your Company’s Data Information System and Security professionals are always looking for the newest ways to safeguard their company information. As a professional, you should. At a minimum, server patches should be up to date, security policies should be in place, and email filtering should be enabled on mail services and firewalls. However, focusing on the customer and working to serve them is another way to safeguard company information. An IT professional’s customer service skills will allow any user, no matter how simple the question, to seek guidance when they find a possibly malicious attack. If employees feel safe, it creates an environment where an employee is more likely to ask than to answer quickly out of fear of a social engineering attack.
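One of the subtle differences mentioned above, a sender address with transposed or substituted letters, can be checked automatically. The sketch below is an illustrative example only: the trusted-domain list, the 0.8 similarity threshold, and the response messages are assumptions, and a real mail-filtering pipeline would combine many more signals than string similarity.

```python
# Minimal sketch of flagging lookalike sender domains with standard-library tools.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "vendor-invoices.com"}   # illustrative trusted senders

def lookalike_score(domain: str) -> float:
    # Highest similarity between this domain and any trusted domain (0.0 to 1.0).
    return max(SequenceMatcher(None, domain.lower(), t).ratio() for t in TRUSTED_DOMAINS)

def check_sender(address: str) -> str:
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    score = lookalike_score(domain)
    if score > 0.8:    # close but not identical: likely transposed or substituted letters
        return f"suspicious lookalike (similarity {score:.2f}), verify before responding"
    return "unknown sender, treat attachments and links with caution"

print(check_sender("billing@exarnple.com"))   # 'rn' imitating 'm'
print(check_sender("billing@example.com"))
```

The point of a check like this is not to replace training but to give employees a safe prompt to pause and verify, which is exactly the behavior the article encourages.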
<urn:uuid:0df049e7-9e00-4259-a542-782eb87f428f>
CC-MAIN-2024-38
https://businessinformationgroup.com/articles/social-engineering-and-cyber-threats/
2024-09-16T15:39:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00793.warc.gz
en
0.946572
673
2.578125
3
Fighting Ransomware: Using Ivanti’s Platform to Build a Resilient Zero Trust Security Defense Ransomware is a strain of malware that blocks users (or a company) from accessing their personal data or apps on infected iOS, iPadOS, and Android mobile devices, macOS laptops, Windows personal computers and servers, and Linux servers. Then the exploit demands cryptocurrency as payment to unblock the locked or encrypted data and apps. This form of cyber extortion has been increasing in frequency and ferocity over the past several years. Seemingly, a week does not pass without hearing about the latest ransomware exploit attacking government agencies, healthcare providers (including COVID-19 researchers), schools and universities, critical infrastructure, and consumer product supply chains. The most common delivery mechanisms are email and text messages that contain a phishing link to a malicious website. By tapping on the link, the user is redirected to an infected website where they unknowingly download drive-by malware onto their device. The malware can contain an exploit kit that automatically executes malicious programmatic code that performs a privilege escalation to the system root device level, where it will grab credentials and attempt to discover unprotected network nodes to infect via lateral movement. Another common delivery mechanism is email attachments, which can also contain malware exploit kits that affix themselves to vulnerable apps, computer systems or networks to elevate their privileges in search of critical data to block. There are four main types of ransomware. First is locker ransomware; the earliest form on mobile devices was found on Android. It was detected in late 2013 and called LockDroid. It secretly changed the PIN or password to the user’s lock screen, preventing access to the home screen and to their data and apps. The second type is encryptor ransomware, which encrypts apps and files, making them inaccessible without a decryption key. The first exploit using this type of ransomware was found in 2014 and called SimpLocker. It encrypted the personal data contained within the internal Secure Digital (SD) storage of an Android device. Afterward, an official-looking message showing criminal violations based on scanned files found on the device is displayed to the victim. This is followed by a demand-for-payment message that would allow the victim to resolve the fake violations and receive the decryption key to unlock their blocked data and apps. Extortion payments are often made with Monero cryptocurrency because it is digital and often untraceable, ensuring anonymity for the cybercriminals. Bitcoin is still sometimes used, but lately, companies like CipherBlade have been able to track down ransomware gangs using Bitcoin and return the money to victims. Rarely, mobile payment methods like Apple Pay, Google Pay or Samsung Pay are also used, but cryptocurrency is still the preferred payment for ransomware. Just within the past several years, cybercriminal gangs have added several more types of ransomware exploits. One is Doxware, which involves threats to reveal and publish personal (or confidential company) information onto the public internet unless the ransom is paid. The other is Ransomware-as-a-Service (RaaS): cybercriminals leverage already developed and highly successful ransomware tools in a RaaS subscription model, selling them to lesser-skilled cybercriminals to extort cryptocurrency from their victims and then share the ransom money.
Android Exploits: Anatomy of the SimpLocker Attack

Installation: The victim unknowingly lands on a malware-compromised or Angler-hosted web server and wants to play a video or run an app. The video or app claims to require a new codec or an Adobe Flash Player update. The victim downloads and installs the malicious update, activating device administrator permissions. The mobile device is now infected, and the ransomware payload installs itself onto the device.

Communications: The malware scans the contents of the SD card, then establishes a secure communications channel with the command-and-control (C2) server using the anonymous Tor or I2P proxy networks within the darknet. These networks often evade security researchers, law enforcement, and government agencies, making them extremely difficult to shut down.

Encrypt data: The symmetric key used to encrypt the personal data on the attached SD card is kept hidden within the infected device’s file system so the encryption can persist after reboots.

Extortion: An official-looking message from the FBI, Department of Homeland Security, or another government agency is displayed, informing the victim that they are in violation of federal laws based on data supposedly found during a scan of their personal files.

Demand payment: A demand-for-payment screen with instructions on the method of payment is then displayed. The fine was normally $300 to $500 and commonly paid in cryptocurrency. If the ransom payment is made, the symmetric key is provided and used to decrypt the personal data. If the victim is fortunate, they can retrieve all their personal files intact, although there have been reports that some, if not all, of the data is corrupted and no longer usable after decryption.

Android devices are especially susceptible to ransomware for several reasons. First is the platform’s global adoption, with 72% of worldwide market share and 3 billion devices around the world. Next are the 1,300+ original equipment manufacturers (OEMs) and the fragmentation of the Android operating system: with devices running versions from 2.2 to 11.0, a very large number never receive critical security updates, leaving them vulnerable to malware. The last factor is that Android users routinely root their devices and install apps that are unverified by Google. There are now an estimated three million apps available from the Google Play Store alone, with potentially a million more downloadable from unknown and often malicious sources. Any one of these apps can be used to host malware that can lead to ransomware exploits.

Here are the remediation tasks to help fight ransomware on Android devices.

These settings are configured within the Android device:

1. By default, within the Google Settings and Security configuration, the Google Play Protect settings Scan apps with Play Protect and Improve harmful app detection are enabled. These settings are the equivalent of a resident antimalware agent on the device and should remain enabled.

2. Within the Apps & notifications and Special app access configuration is the Install unknown apps setting. Leave storage, email and browser apps as Not allowed, which is the default setting.

These settings are configured within Ivanti UEM for Mobile or MobileIron Core:

3. For Android Enterprise devices, the above settings can be configured using the Lockdown & Kiosk configuration. Select Enable Verify Apps and Disallow unknown sources on Device or Disallow Modify Accounts.
4. Create a System Update configuration to automatically update to the latest available Android OS version for the device. Ivanti Mobile Threat Defense (MTD) can also enforce that the latest OS version is running on the Android device and, if not, alert the user and the UEM administrator that the device is running a vulnerable OS version and apply compliance actions such as block or quarantine until the device is updated.

5. Enable Ivanti MTD on-device (using MTD Local Actions) and cloud-based to provide multiple layers of protection against phishing (Anti-phishing Protection) and device-, network- and app-level threats (using the Threat Response Matrix within the MTD management console).

6. Create a SafetyNet Attestation configuration that checks device integrity and health every 24 hours via Google APIs.

7. Create an Advanced Android Passcode and Lock Screen configuration to turn on multi-factor authentication (MFA) for the lock screen and work profile challenge using a biometric fingerprint, face unlock, or iris (eye) scan instead of a passcode or PIN.

8. Enable device encryption. This may sound counter-intuitive, but encrypting your personal and work data on the device can prevent the cybercriminals from threatening to publish your work or company information online.

9. Back up data automatically onto a cloud storage provider like Google Drive, OneDrive, Box or Dropbox. Make secondary and tertiary copies of backups using two or more of these personal storage providers, since some offer free storage. Also back up personal data onto a local hard drive that is encrypted, password-protected and disconnected from the device and network (a minimal scripted sketch of this appears after this list).

10. Enable Android Enterprise or Samsung KNOX on the device to containerize, encrypt, and isolate the work profile data from your personal data in BYOD or COPE deployments. Android Enterprise in its various deployment modes and Samsung KNOX can be provisioned by Ivanti UEM for Mobile or MobileIron Core.

11. For BYOD deployments, create a blacklist of disallowed apps on the device. For company-owned devices, create a whitelist of apps allowed to be installed on the device. Both settings can be configured within MobileIron Core’s App Control feature and applied to the security policy. For Android Enterprise devices, Restricted Apps and Allowed Apps can be applied to the Lockdown & Kiosk configuration, or an App Control configuration can be created to whitelist or blacklist apps within the personal profile side of the device. This can also be configured within Ivanti UEM for Mobile’s Allowed App settings and Policies & Compliance.

12. Configure a VPN client on the device, such as MobileIron Tunnel, Ivanti Secure Connect or Zero Trust Access, to protect sensitive data-in-motion between the mobile device and MobileIron Sentry, Connect Secure or ZTA gateways.

13. Enable Ivanti Zero Sign-On (ZSO) for conditional access rules such as trusted user, trusted device, and trusted app authentication to critical work resources on-premises, in the data center, or in the cloud. Also enable MFA using the stronger inherence (biometrics) and possession (device-as-identity or security key) authentication factors; passwords and PINs can be phished, guessed or brute-forced.

14. As a last resort, there are anti-malware vendors that provide software to detect and remove ransomware from an infected device. The user can also boot the device into Safe Mode, deactivate the Device Administrator for the malware, and then uninstall it.
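To make the backup guidance in item 9 concrete, here is a minimal sketch of an encrypted local backup, run from a companion laptop or desktop against exported data rather than on the Android device itself. It is not Ivanti or MobileIron functionality; the folder paths, the key handling, and the use of the third-party cryptography package are assumptions for illustration only, and the resulting drive should be disconnected after the copy completes.

```python
# Minimal sketch of remediation item 9: an encrypted, offline-capable backup copy.
# Assumptions: the 'cryptography' package is installed, SOURCE_DIR and BACKUP_DIR
# are hypothetical paths, and key management is simplified for illustration only.
import tarfile
import time
from pathlib import Path

from cryptography.fernet import Fernet

SOURCE_DIR = Path.home() / "personal-data"   # hypothetical exported data to protect
BACKUP_DIR = Path("/mnt/offline-backup")     # hypothetical mounted backup drive
KEY_FILE = BACKUP_DIR / "backup.key"         # in practice, store the key separately

def make_encrypted_backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)

    # Reuse an existing key if present, otherwise create one.
    if KEY_FILE.exists():
        key = KEY_FILE.read_bytes()
    else:
        key = Fernet.generate_key()
        KEY_FILE.write_bytes(key)

    # Bundle the source directory into a compressed tar archive.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname=SOURCE_DIR.name)

    # Encrypt the archive so a stolen backup is unreadable, then drop the plaintext copy.
    encrypted = archive.with_suffix(archive.suffix + ".enc")
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    archive.unlink()
    return encrypted

if __name__ == "__main__":
    print(f"Encrypted backup written to {make_encrypted_backup()}")
```

The sketch reads the whole archive into memory for simplicity; for large datasets a chunked approach, and storing the key somewhere other than the backup drive itself, would be more appropriate.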
In the next blog in this series, we will discuss ransomware attacks and remediation on iOS and iPadOS mobile devices, and macOS laptops and desktops.
<urn:uuid:f1d2395a-fe5c-4be5-84c1-7ac341adeaab>
CC-MAIN-2024-38
https://www.ivanti.com/blog/fighting-ransomware-using-ivanti-s-platform-to-build-a-resilient-zero-trust-security-defense
2024-09-16T15:00:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00793.warc.gz
en
0.900257
2,113
2.6875
3
The easiest way to access censored websites in China is to use a VPN. Short for virtual private network, VPNs are subscription services that encrypt your internet traffic and route it through an intermediary server outside of China. By installing a VPN app on your computer and connecting to a server, you can bypass the Great Firewall and freely access the web.

Websites are censored in China at the behest of the government and the ruling Communist Party. Any websites or apps that undermine Party rule, or have the potential to, are typically blocked. This consists largely of western news media, social networks, and sites built on user-generated content. Other content deemed vulgar, pornographic, paranormal, obscene, or violent is also blocked. Some western websites, apps, and services are blocked in order to prevent competition with domestic, homegrown alternatives.

Comparitech maintains an updated list of VPNs that work in China. Sometimes VPN servers get blocked, especially during times of social unrest and international conflict. That's part of the reality of living in China, but for the most part the VPNs in that list are the best options.

The "Great Firewall" is a colloquial term for mainland China's internet censorship system. It's part of the Golden Shield Project, also called the National Public Security Work Informational Project. Both legislative actions and enforcement technologies are used to regulate the country's internet.

The Great Firewall blocks foreign websites, apps, social media, VPNs, emails, instant messages and other online resources deemed inappropriate or offensive by the authorities. This ranges from vulgar content such as depictions of violence and pornography to more politically sensitive material that promotes democracy or depicts the ruling Communist Party in a poor light. Western social media (Facebook, Twitter), user-generated content sites (YouTube), and tools (Gmail) are blocked almost wholesale unless they agree to comply with Chinese laws and regulations.

Government-run internet service providers and domestic internet companies use a combination of technologies to censor content, including keyword filtering, IP address blacklists, DNS poisoning, packet inspection, and manual enforcement.
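As one hedged illustration of how a user might observe filtering techniques such as DNS poisoning from inside a network, the sketch below compares A-record answers from the local default resolver with those from an outside resolver. It assumes the third-party dnspython package and a reachable external resolver (1.1.1.1 is used here only as an example); a mismatch is a hint of possible tampering or blocking, not proof, since the external query itself may also be interfered with.

```python
# Rough sketch: compare local DNS answers with an external resolver to spot possible
# DNS tampering. Assumes the 'dnspython' package; results are a hint, not proof.
import dns.resolver

DOMAINS = ["twitter.com", "facebook.com", "example.com"]  # illustrative sample only
EXTERNAL_RESOLVER = "1.1.1.1"                             # assumed reachable

def lookup(domain: str, nameserver: str | None = None) -> set[str]:
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]
    try:
        return {rr.to_text() for rr in resolver.resolve(domain, "A")}
    except Exception:
        return set()  # timeouts and NXDOMAIN are treated as "no answer" here

for domain in DOMAINS:
    local = lookup(domain)
    external = lookup(domain, EXTERNAL_RESOLVER)
    status = "consistent" if local & external else "mismatch (possible tampering or blocking)"
    print(f"{domain}: local={local or '-'} external={external or '-'} -> {status}")
```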
<urn:uuid:b86125a2-09f8-42ad-9b67-8fb5201572df>
CC-MAIN-2024-38
https://www.comparitech.com/privacy-security-tools/blockedinchina/
2024-09-19T04:14:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00593.warc.gz
en
0.921538
426
2.515625
3
Cybersecurity experts expect to see threat actors increasingly make use of AI tools to craft convincing, highly targeted and sophisticated social engineering attacks, according to Eric Geller at the Messenger.

“One of AI’s biggest advantages is that it can write complete and coherent English sentences,” Geller writes. “Most hackers aren’t native English speakers, so their messages often contain awkward phrasing, grammatical errors and strange punctuation. These mistakes are the most obvious giveaways that a message is a scam. With generative AI platforms like ChatGPT, hackers can easily produce messages in perfect English, devoid of the basic mistakes that Americans are increasingly trained to spot.”

In addition to assisting in social engineering attacks, AI can be abused to write malware or help plan cyberattacks. “Programs like ChatGPT can already generate speeches designed to sound like they were written by William Shakespeare, Donald Trump and other famous figures whose verbal and written idiosyncrasies are widely documented. With enough sample material, like press statements or social media posts, an AI program can learn to mimic a corporate executive or politician — or their child or spouse. AI could even help hackers plan their attacks by analyzing organizational charts and recommending the best targets — the employees who serve as crucial gatekeepers of information but might not be senior enough to constantly be on guard for scams.”

It’s still too early to foresee all the ways in which AI can be used for malicious purposes, but organizations should anticipate evolving social engineering tactics in the coming years. “It’s hard to predict the exact consequences of the AI revolution for phishing campaigns,” Geller concludes. “Cybercriminals are unlikely to use AI’s advanced analytical features for run-of-the-mill scams. But sophisticated criminal gangs might lean on some of those tools for major ransomware attacks, and government-backed hacking teams will almost certainly adopt these capabilities for important intelligence-gathering missions against well-defended targets. ... And the easier it becomes to use AI for cyberattacks, the more likely it is that innovative attackers will come up with previously unimagined uses for the technology.”

KnowBe4 enables your workforce to make smarter security decisions every day. Over 65,000 organizations worldwide trust the KnowBe4 platform to strengthen their security culture and reduce human risk. The Messenger has the story.
<urn:uuid:4faf800a-08b5-4d3d-9122-63fa48384e73>
CC-MAIN-2024-38
https://blog.knowbe4.com/how-ai-lends-phishing-plausibility
2024-09-20T11:38:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00493.warc.gz
en
0.942886
493
2.640625
3
Cloud computing is an increasing contributor to carbon emissions because of the energy needs of data centers. With demand for digital services and cloud-based computing rising, industry efforts concentrated on energy efficiency will be required. This means organizations across all verticals must fold their cloud carbon footprint into their environmental, social, and governance (ESG) targets. This is especially true for organizations that have committed to net-zero, science-based, or other similar decarbonization targets, as cloud computing would need to be accounted for in those calculations.

Depending on an organization’s business model, and especially for companies focused on digital services, the energy consumed through cloud computing can be a material portion of overall emissions. In addition, shifting to the cloud can contribute to reducing the carbon footprint if it is approached with intent and explicitly built into the DNA of technology deployment and management.

Major Cloud Providers Offering Insight

Casey Herman, PwC US ESG leader, explained that the major cloud service providers -- Google, Amazon, Microsoft -- are already providing data on energy usage and emissions on a regular basis. “Smaller players are still playing catch-up either providing online calculations, which require customers to be responsible for securing these values, or there is no information provided at all,” he says. “CIOs should have their operational teams monitor these and preferentially select those service providers that provide real-time tools to optimize the energy usage.”

He notes that CIOs should also increasingly build or purchase tools that allow a holistic view across all cloud computing impacts; currently, they would need to look at each provider separately and then aggregate the results outside any tools the service providers may offer. “At PwC, we have been piloting an IT sustainability dashboard that collects data from public cloud providers and on-premises systems and then provides views on key sustainability metrics like energy reuse efficiency or carbon usage effectiveness,” he adds. (A simplified sketch of this kind of aggregation appears below.)

Herman says that ultimately, organizations are seeking greater use of data for more advanced analysis, which will consume increasingly more computing power, which translates to more energy. “Cloud service providers have been quick to reduce their carbon footprints, including public statements and investing money in renewables and carbon capture projects,” he says. “These organizations are putting in a carbon-neutral infrastructure that could then support the current and growing demand for data, analytics, and computing power.”

Using Migration to Install Tools

In fact, shifting to the cloud (provided it’s the right provider) could reduce a company’s carbon footprint through optimization and rationalization of on-premises and private data centers into more energy- and carbon-efficient cloud-based data centers. A company can also use its cloud migration program as a catalyst to transform its technology footprint and become environmentally conscious by design. Herman says this can include re-architecting applications and building into the enterprise architecture a strategy to use more discrete and reusable components (microservices, APIs), preventing wasteful use of energy in the cloud.
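To illustrate the kind of holistic view described above, here is a hedged sketch that rolls per-provider usage figures into one footprint and a simple intensity metric. The field names, emission factors, and numbers are invented placeholders for illustration; they do not come from any provider’s real reporting API, and real accounting would follow Greenhouse Gas Protocol guidance rather than this toy calculation.

```python
# Illustrative sketch only: aggregate cloud energy/emissions reports from several
# providers into one footprint. All numbers and field names below are placeholders.
from dataclasses import dataclass

@dataclass
class ProviderReport:
    provider: str
    kwh: float                    # energy consumed in the reporting period
    grid_kg_co2e_per_kwh: float   # location-based grid emission factor (assumed)
    renewable_fraction: float     # share of energy matched by renewables (assumed)

def market_based_emissions(report: ProviderReport) -> float:
    """Rough market-based estimate: only non-renewable-matched energy counts."""
    return report.kwh * (1.0 - report.renewable_fraction) * report.grid_kg_co2e_per_kwh

reports = [
    ProviderReport("cloud-a", kwh=120_000, grid_kg_co2e_per_kwh=0.38, renewable_fraction=0.90),
    ProviderReport("cloud-b", kwh=45_000, grid_kg_co2e_per_kwh=0.45, renewable_fraction=0.60),
    ProviderReport("on-prem", kwh=80_000, grid_kg_co2e_per_kwh=0.50, renewable_fraction=0.10),
]

total_kwh = sum(r.kwh for r in reports)
total_kg = sum(market_based_emissions(r) for r in reports)

print(f"Total energy: {total_kwh:,.0f} kWh")
print(f"Estimated emissions: {total_kg / 1000:.1f} tCO2e")
print(f"Average intensity: {total_kg / total_kwh:.3f} kgCO2e per kWh")
```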
The key to getting cloud carbon impact initiatives underway is aligning the ambition and strategy of the overall business with the IT and digital function around ESG, and being an active champion of the ESG agenda within the organization.

“Without the tools to measure the carbon footprint of their cloud footprint, companies will struggle to holistically aggregate relevant carbon impact for their IT department or manage to net zero, especially when these represent meaningful parts of their overall footprint,” Herman says. He explains that measurement tools and processes will also allow the organization to leverage that same data and insights to support decarbonization agendas and strategies in the business.

AI Provides Insight into Cloud Emissions

For Chris Noble, co-founder and CEO of Cirrus Nexus, the focus for his company has been on an artificial intelligence designed to help companies quantify and shrink the level of carbon their cloud operations produce. “By giving organizations the chance to impose a cost on that carbon, it allows them to make a better-informed business decision as to their impact on the environment, and then to drive that actual behavior,” he says.

By giving businesses a window into how much emissions their cloud computing demands are producing, those organizations are then able to form a roadmap that will support their ESG strategy. This is a part of transparency reporting, which Noble notes will increasingly be required through government regulations. “There’s a lot of people making claims about carbon neutrality, but there’s no way to verify that -- there’s no proof,” he says. “What we allow companies to do is to see what that activity is.”

He says that for IT departments to understand cloud-based carbon emissions as a business problem, they need parameters and metrics by which they can put a cost on the issue and work toward resolving it. “How do we educate, inform and drive that behavioral change across their environments?” Noble says. “We spend a lot of time doing that.”

Reliable Data Intelligence is Critical

Elisabeth Brinton, Microsoft’s corporate vice president of sustainability, says that accurate, reliable data intelligence is critical for the success of ESG initiatives. “For organizations to truly address the sustainability imperative, they need continuous visibility and transparency into the environmental footprint of their entire operations, their products, the activities of their people and their value chain,” she says.

Just as organizations rely on real-time financial reporting and forecasts to guide decisions that affect the fiscal bottom line, they need foundational intelligence to inform sustainability-related decisions. “Leveraging a cloud platform offers organizations comprehensive, integrated, and increasingly automated sustainability insights to help monitor and manage their sustainability performance,” Brinton says.

With cloud technology and a partner ecosystem, cloud providers like Microsoft are also bringing integrated solutions to connect organizations and their value chain, ultimately helping organizations integrate sustainability into their culture, activities, and processes to prioritize actions that minimize their environmental impact. Microsoft Cloud for Sustainability is the company’s first horizontal industry cloud designed to work across multiple industries, with solutions that can be customized to specific industry needs.
At its core is a data model that aligns with the Greenhouse Gas Protocol -- the standard for identifying and recording emissions information. Brinton explains that as the company operationalizes its own sustainability plan, Microsoft is sharing its expertise and developing tools and methods customers can replicate. “We’re also thinking about where we’re going, what we have to solve as a company to walk our own talk, and how we’re going to enable our customers to deal with that complexity so that at the end, they’re coming out on the other side as well,” Brinton says.

The Customer Demand for Clean Clouds

Kalliopi Chioti, chief ESG officer at financial services software firm Temenos, notes that banks are heavy users of data centers, so being part of this positive trend -- moving from legacy on-premises servers to modern cloud infrastructure -- will have a significant impact on emissions. Temenos Banking Cloud, the company’s next-generation SaaS, incorporates ESG-as-a-service to help banks reduce their energy use and emissions, gain carbon insights from using its products, and track their progress toward their sustainability targets. It also runs on public cloud infrastructure, and the hyperscalers Temenos partners with have all made commitments to sustainability goals, science-based targets and using 100% renewable energy.

“All these energy efficiencies are passed on to our clients,” Chioti says. “Let’s also remember that banks are in a unique position to influence the transition to a low-carbon economy.”

She points out that the move to the cloud also has commercial implications: Consumers are not passive bystanders to the climate agenda, and they are increasingly matching their money with their values and voting with their wallets. “If companies want to continue to thrive and grow in the new era, they need to listen to their customers,” she says. “That starts with using cloud banking solutions to transform their climate credentials and show their customers the work they are doing to transition to a low-carbon global economy.”
<urn:uuid:e2fd6680-5a5c-4143-bab8-3b2fcedb7d88>
CC-MAIN-2024-38
https://www.informationweek.com/it-infrastructure/cloud-monitoring-tools-help-cios-reduce-carbon-footprint
2024-09-20T10:58:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00493.warc.gz
en
0.956439
1,734
2.53125
3